Jason Stiltner

ML Engineer • Applied Research • Production AI Systems

I work at the intersection of frontier ML research and real-world AI systems, turning ideas like continual learning, safety-critical RAG, and agentic workflows into working software.

Currently at HCA Healthcare, serving 180+ hospitals with 44M+ patient encounters annually.

Recent Work

Featured
Multi-Agent Coordination • 36.8% improvement

Grounded Commitment Learning dissolves the interpretation problem in AI coordination. Instead of assuming shared understanding, agents coordinate through verifiable behavioral contracts with explicit consequences—connecting Hart-Moore economics to multi-agent systems.
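To make the contract idea concrete, here is a minimal sketch in Python. Every name in it (Commitment, settle, the latency example) is an illustrative assumption for this page, not the actual Grounded Commitment Learning implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Commitment:
    """A verifiable behavioral contract between two agents (illustrative only)."""
    promisor: str                      # agent making the commitment
    promisee: str                      # agent relying on it
    predicate: Callable[[dict], bool]  # checks observed behavior, not intent
    penalty: float                     # explicit consequence for violation

def settle(commitment: Commitment, action_log: dict) -> float:
    """Audit a commitment against logged behavior.

    Returns the penalty charged to the promisor: zero if the predicate
    holds over the log, the agreed penalty otherwise. Nothing here depends
    on agents sharing an interpretation; only observable actions are checked.
    """
    return 0.0 if commitment.predicate(action_log) else commitment.penalty

# Hypothetical example: agent_a commits to responding within 5 ticks.
c = Commitment(
    promisor="agent_a",
    promisee="agent_b",
    predicate=lambda log: log.get("response_latency", float("inf")) <= 5,
    penalty=1.0,
)
print(settle(c, {"response_latency": 3}))  # 0.0 -> contract honored
print(settle(c, {"response_latency": 9}))  # 1.0 -> penalty applied
```

The design choice that matters: the predicate inspects logged behavior, not internal state, which is what makes the contract auditable.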

Key discoveries: the punishment paradox (stronger consequences correlate with less cooperation, r = -0.951) and Dunbar-like scaling limits (~100 agents). Both findings are statistically robust (p < 0.001, large effect sizes).

This connects economic theory to AI safety—providing auditable, accountable coordination.

Research Implementation at Production Scale

I specialize in translating frontier research into production-ready systems. This capability comes from 20 years building integration layers across domains—from language instruction in Paris to voice-first field applications for beekeeping to cross-platform gaming APIs. When I encounter a new ML architecture, I recognize patterns I've implemented before in other contexts.

This pattern recognition enables rapid implementation without sacrificing production quality: comprehensive testing, proper error handling, and systems designed for reliability in safety-critical environments.

Learn More About My Background

What Drives Me

I'm driven by learning—both how machines learn and how I can keep learning myself. The most important technical challenge of our time is building AI systems that are genuinely helpful, harmless, and honest. I believe this requires both rigorous safety research and production engineering discipline—understanding how models behave in theory and ensuring they behave reliably in practice.

My work in clinical AI has taught me that safety isn't a constraint on capability—it's a design requirement. Systems that refuse to answer when uncertain are more trustworthy than systems that always produce output. This philosophy extends beyond healthcare to all frontier AI development.
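In code, that philosophy reduces to an explicit abstention path. A minimal sketch, with thresholds and names that are illustrative assumptions rather than anything from a production system:

```python
from dataclasses import dataclass

@dataclass
class RagResult:
    answer: str
    retrieval_score: float  # similarity of the best supporting passage
    n_supporting_docs: int  # retrieved passages that back the claim

# Illustrative thresholds; in practice these would be calibrated
# against labeled queries, not hard-coded.
MIN_SCORE = 0.80
MIN_SUPPORT = 2

def respond(result: RagResult) -> str:
    """Refuse explicitly when evidence is thin, rather than guess fluently."""
    if result.retrieval_score < MIN_SCORE or result.n_supporting_docs < MIN_SUPPORT:
        return "Insufficient evidence to answer safely; escalating for human review."
    return result.answer

print(respond(RagResult("Example grounded answer.", 0.91, 3)))  # answers
print(respond(RagResult("Example grounded answer.", 0.55, 1)))  # abstains
```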

Explore Further

See what I've built, learn my story, or get in touch