Thomas J. Hardin, Ph.D.

Some decisions hinge on the gap between “the model ran” and “the model's right.”

I provide independent technical judgment on whether materials, mechanical, thermodynamic, or AI/ML models are strong enough to support an irreversible decision—before capital, strategy, or governance is locked in.

MIT-trained PhD. Independent. No implementation, no advocacy.

Set up a scoping call

What This Actually Looks Like

Every engagement follows a structured, transparent process designed for high-stakes decisions. Engagements typically run 2-3 weeks, culminating in a concise written assessment and follow-up discussion.

  1. Problem Framing & Scope

    We start with a short scoping call to define the decision at hand, the technical claims or models involved, and which domains (materials, mechanical, thermodynamic, AI/ML) are relevant. If the problem falls outside my expertise, I'll say so upfront.

  2. Independence & Conflict Check

    Before work begins, I confirm that no prior advisory, implementation, or financial ties could bias my judgment. Independence is built into every engagement — no execution, advocacy, or outcome-based incentives.

  3. Technical Review & Analysis

    I dig into the model and its assumptions, examining validation, uncertainty, failure modes, edge cases, and scaling risks. The focus is always: Is this model strong enough to carry the decision it is being used to justify?

  4. Written Judgment

    Deliverables are concise, clear, and defensible under scrutiny. They document:

    • What was reviewed (and what was not)
    • Key assumptions, limitations, and dependencies
    • Material risks, gaps, and failure modes
    • Plain-language guidance on whether the model supports the stated decision

    These reports are designed for use in board decks, investment memos, audit files, or governance discussions. They are not marketing materials, advocacy, or implementation plans.

  5. Follow-Up & Interpretation

    I walk decision-makers through the findings, clarify implications for risk or valuation, and answer questions.

Background

MIT PhD in materials science. Six years at a national nuclear weapons laboratory, where models aren't academic exercises: they justify irreversible decisions about hardware you can't test. That's where I learned what rigorous validation actually requires.

Fifteen peer-reviewed publications, including first-author work in Nature Communications on machine learning for materials. President Harry S. Truman Fellowship in National Security Science and Engineering. Technical experience spans atomic-scale simulation through continuum modeling, across materials mechanics, plasma physics, and biomedical systems.

I've built models that carried high-stakes decisions. I know what “validated” looks like under real scrutiny, and what “the model ran” looks like when it's dressed up as confidence. That's the judgment I bring to independent assessment work.

Examples of Model Failures I've Caught

Technical Due Diligence

Novel processing technique

An R&D group developing a new processing method showed experimental data with statistically significant performance improvements. Their model attributed the gains to unspecified electronic effects.

The model neglected a heat source. When I accounted for thermal effects, the claimed performance gain disappeared entirely: the results were indistinguishable from simple heating.
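For illustration, a back-of-envelope energy balance is often all it takes to test whether an unexplained gain is just heating. The numbers below are assumed for the sketch, not data from the actual engagement:

```python
# Hypothetical back-of-envelope check with illustrative (assumed) numbers,
# not data from the actual case. Question: can the electrical power the
# model ignored explain the observed effect through simple Joule heating?
current = 50.0          # A, applied during processing (assumed)
voltage = 2.0           # V across the sample (assumed)
duration = 10.0         # s of treatment (assumed)
mass = 0.05             # kg of sample (assumed)
specific_heat = 500.0   # J/(kg*K), typical order of magnitude for a metal

energy_in = current * voltage * duration          # Joule heating, in J
delta_T = energy_in / (mass * specific_heat)      # adiabatic upper bound, K
print(f"Adiabatic temperature rise: {delta_T:.0f} K")
# If a temperature rise of this order reproduces the measured improvement,
# "electronic effects" is not yet a supported claim: heating explains it.
```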

Recommended against investment, preventing a multi-year R&D program from chasing an artifact.

Model Credibility Assessment

ML surrogate model

A team developing machine learning surrogates for atomistic simulations reported unprecedented performance and requested significantly increased funding.

Their training pipeline subtly leaked information from the test data into the training process. Once I identified the leakage and the team corrected it, model performance collapsed to baseline.
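As a hypothetical illustration (not the team's actual pipeline), here is one classic way this kind of leakage arises: a target-aware preprocessing step fit on the full dataset before the train/test split lets test-set information reach training. The dataset below is pure noise, so any apparent skill is leakage:

```python
# Minimal sketch of one common leakage pattern, not the reviewed pipeline:
# selecting features against the target on ALL samples before splitting.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5000))  # many candidate features, few samples
y = rng.normal(size=200)          # target is pure noise: no learnable signal

# LEAKY: feature selection sees the test rows, then we split.
leaky = SelectKBest(f_regression, k=20).fit(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(leaky.transform(X), y, random_state=0)
print("leaky R^2:", Ridge().fit(X_tr, y_tr).score(X_te, y_te))  # looks like skill

# CORRECT: split first; fit the selector on training rows only.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clean = SelectKBest(f_regression, k=20).fit(X_tr, y_tr)
print("clean R^2:", Ridge().fit(clean.transform(X_tr), y_tr)
                           .score(clean.transform(X_te), y_te))  # ~zero or below
```

The only difference between the two runs is where the split happens; the "performance" in the first comes entirely from the test rows.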

The program was defunded, avoiding 18+ months of development on a fundamentally broken approach.

Model Credibility Assessment

Multiscale battery model

An R&D team sought increased funding for a scale-bridging machine learning model they'd developed for a candidate battery material.

Their model's simplifying assumptions neglected a primary physical effect, rendering the predictions trivial. I flagged this, and the funding decision was paused. The team revised their approach to account for the missing physics.

Funding proceeded with the corrected model.

Technical Due Diligence

Metamaterial optimization

A defense technology pitch claimed a novel metamaterial design would deliver step-change performance improvements.

I evaluated the theoretical maximum gain from their proposed optimization. Best case: marginal improvement, well below what would justify the development cost and timeline.

Funder declined.

Model Credibility Assessment

ML-based materials screening

A commercial ML vendor claimed their composition screening model achieved high accuracy across a novel material class.

The model's actual region of validity was far narrower than advertised. Outside that region, predictions were nonsensical. Sensitivity to model parameter perturbations was severe and undocumented.
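To make the failure mode concrete, here is a hypothetical sketch (not the vendor's model): a flexible surrogate fit on a narrow window, probed outside that window and under small parameter perturbations:

```python
# Hypothetical illustration, not the vendor's model: a flexible surrogate
# fit on a narrow window, checked for extrapolation and parameter sensitivity.
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0.0, 1.0, 40)           # narrow region of validity
y_train = np.sin(2 * np.pi * x_train) + 0.02 * rng.normal(size=40)
coeffs = np.polyfit(x_train, y_train, deg=9)  # overly flexible fit

for x in (0.5, 1.5, 3.0):                     # inside, then outside the window
    base = np.polyval(coeffs, x)
    # Spread of predictions under 200 random 1% coefficient perturbations.
    spread = np.ptp([np.polyval(coeffs * (1 + 0.01 * rng.normal(size=coeffs.size)), x)
                     for _ in range(200)])
    print(f"x={x}: prediction={base:+.3g}, spread under 1% perturbation={spread:.3g}")
```

Inside the training window the prediction is sensible and the perturbation spread modest; outside it, both the prediction and its sensitivity grow by orders of magnitude. That is exactly what an undocumented region of validity conceals.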

Buyer declined procurement, avoiding integration of an unreliable tool into their discovery pipeline.

Technical Due Diligence

Metallurgical prediction method

A company considered acquiring patent rights to a computational method from an academic lab. The method predicted a critical metallurgical property that would enable faster materials development.

I reviewed the validation data, tested the underlying assumptions, and confirmed the approach was sound: defensible and reproducible under operational conditions.

The company acquired the rights and found them useful.

How we work together

Technical Due Diligence

$25-40k | 2-3 week engagement

Independent assessment of technical claims, execution risk, and team capability for high-stakes decisions. Focused on materials, mechanical systems, thermodynamics, computational models, and AI/ML applications.

Common uses: Investment or acquisition decisions, strategic partnerships, founder/team/technology evaluation, pre-commitment risk assessment.

Model Credibility Assessment

$20-50k | Scoped to a specific model or system

Evaluation of whether a specific computational model, simulation, or AI/ML system is fit to support real decisions. Focus on validation, assumptions, failure modes, and operational readiness.

Common uses: Model-driven regulatory or strategy commitments, situations where you're dependent on third-party technical claims, reproducibility concerns, assessing model maturity before operational deployment.

Fractional Technical Advisor

$15-25k per month | Retainer

Ongoing access to independent technical judgment for organizations making repeated high-stakes decisions. Monthly working sessions covering architecture decisions, technical risk tradeoffs, and framing of complex problems. No implementation, no execution role.

Common uses: Sanity-checking assumptions before major commitments, translating technical risk for non-technical leadership, reducing avoidable technical blind spots.

Set up a scoping call

Do you have a problem that would benefit from independent technical judgment? Let's set up a scoping call:

hardin@noblebrook.com