Jason Stiltner
ML Engineer • Applied Research • Production AI Systems
I work at the intersection of frontier ML research and real-world AI systems, turning ideas like continual learning, safety-critical RAG, and agentic workflows into working software.
Currently at HCA Healthcare, serving 180+ hospitals with 44M+ patient encounters annually.
Recent Work
Featured: Nested Learning is a meta-pattern that appears at every layer of the AI stack, from model internals to agentic orchestration to RAG systems. I recognized this while extending Google's NeurIPS 2025 paper with bidirectional and non-adjacent knowledge bridges, achieving a +89% improvement at high regularization.
Two key insights: (1) knowledge must flow bidirectionally between fast and slow learners, and (2) critical signals need non-adjacent bridges that skip intermediate layers. These principles apply universally: agentic systems, RAG pipelines, fine-tuning, even human organizations.
This is pattern recognition from 20 years building integration layers—applied to frontier ML research.
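For the curious, here's what those two insights can look like in code. This is a minimal illustrative sketch, not the paper's implementation or my extension: the module names, dimensions, and update schedule are all hypothetical stand-ins.

```python
# Minimal sketch: fast/slow learners with bidirectional and
# non-adjacent (skip-level) bridges. All names are illustrative.
import torch
import torch.nn as nn

class NestedStack(nn.Module):
    """Three levels that update at different frequencies, linked by
    bridges that carry signal both up (fast -> slow) and down
    (slow -> fast), plus a bridge that skips the middle level."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.levels = nn.ModuleList([nn.Linear(dim, dim) for _ in range(3)])
        self.update_every = [1, 4, 16]  # level 0 is fast, level 2 is slow
        # Bidirectional bridges between adjacent levels.
        self.up = nn.ModuleList([nn.Linear(dim, dim) for _ in range(2)])
        self.down = nn.ModuleList([nn.Linear(dim, dim) for _ in range(2)])
        # Non-adjacent bridges: level 0 <-> level 2, skipping level 1.
        self.skip_up = nn.Linear(dim, dim)
        self.skip_down = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h0 = torch.tanh(self.levels[0](x))
        # Upward flow: fast representations feed slower levels,
        # including a direct level-0 -> level-2 skip connection.
        h1 = torch.tanh(self.levels[1](self.up[0](h0)))
        h2 = torch.tanh(self.levels[2](self.up[1](h1) + self.skip_up(h0)))
        # Downward flow: slow context modulates the fast learner,
        # both through level 1 and directly from level 2.
        return h0 + self.down[0](h1) + self.skip_down(h2)

def step(model: NestedStack, opt, x, y, t: int) -> float:
    # Only levels scheduled for step t receive gradient updates;
    # slower levels consolidate and change less often. The bridges
    # themselves stay trainable at every step.
    for i, level in enumerate(model.levels):
        for p in level.parameters():
            p.requires_grad_(t % model.update_every[i] == 0)
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The schedule is the point: the fast level adapts every step while slower levels consolidate, and the bridges let signal cross in both directions and skip a level entirely.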
Research Implementation at Production Scale
I specialize in translating frontier research into production-ready systems. This capability comes from 20 years building integration layers across domains—from language instruction in Paris to voice-first field applications for beekeeping to cross-platform gaming APIs. When I encounter a new ML architecture, I recognize patterns I've implemented before in other contexts.
This pattern recognition enables rapid implementation without sacrificing production quality: comprehensive testing, proper error handling, and systems designed for reliability in safety-critical environments.
What Drives Me
I'm driven by learning—both how machines learn and how I can keep learning myself. The most important technical challenge of our time is building AI systems that are genuinely helpful, harmless, and honest. I believe this requires both rigorous safety research and production engineering discipline—understanding how models behave in theory and ensuring they behave reliably in practice.
My work in clinical AI has taught me that safety isn't a constraint on capability—it's a design requirement. Systems that refuse to answer when uncertain are more trustworthy than systems that always produce output. This philosophy extends beyond healthcare to all frontier AI development.
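As a concrete illustration of that philosophy, a safety-critical RAG pipeline can gate generation on retrieval support. The sketch below is hypothetical, assuming stand-in retrieve and generate components and an arbitrary threshold, but it shows the shape of refuse-when-uncertain:

```python
# Minimal sketch of refuse-when-uncertain. The Doc fields, the
# threshold value, and the retrieve/generate callables are all
# hypothetical stand-ins, not a real pipeline's API.
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    score: float  # retriever's relevance estimate, assumed calibrated

@dataclass
class Answer:
    text: str
    abstained: bool

CONFIDENCE_FLOOR = 0.75  # tuned per deployment; illustrative value

def answer_clinical_query(query: str, retrieve, generate) -> Answer:
    """Answer only when retrieval support is strong enough;
    otherwise abstain explicitly rather than guess."""
    docs = retrieve(query)
    support = max((d.score for d in docs), default=0.0)
    if support < CONFIDENCE_FLOOR:
        # Refusing is a feature: an explicit "I don't know" is safer
        # than a fluent but unsupported answer in a clinical setting.
        return Answer("Insufficient supporting evidence to answer.", abstained=True)
    return Answer(generate(query, docs), abstained=False)

# Hypothetical usage with stub components:
docs = [Doc("guideline excerpt", 0.62)]
result = answer_clinical_query("dosage query", lambda q: docs, lambda q, d: "...")
assert result.abstained  # support is below the floor, so the system declines
```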
Explore Further
See what I've built, learn my story, or get in touch