HALA: Human-AI Layered Architecture
A pattern language for the things AI can say that humans can't.
“If a role is socially costly, politically dangerous, or career-limiting for a human, it is a prime candidate for an AI pattern.”
The Problem
Organizations systematically fail to surface certain truths, not because people don't know them, but because saying them carries unacceptable personal cost:
→ Career risk if you championed the thing the truth undermines
→ Political suicide
→ Labeled as not a team player
→ Dismissed as speculation
HALA addresses this by designing AI systems that take on these socially costly roles, saying what needs to be said without any human bearing the cost.
Architecture
Five pattern layers, each building on the layers below, plus a decision-output surface where patterns meet human decisions. A minimal sketch of the stack as data follows the list.
Meta-Governance
Patterns that govern the patterns
Foundation Infrastructure
Patterns that make everything else possible
Epistemic Integrity
Patterns that ensure reasoning quality
Organizational Perception
Patterns that see what humans can't or won't
Uncomfortable Agency
Patterns that say what humans can't
Decision Output
Where patterns meet human decisions
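As a rough code sketch, the stack can be written down as data. The layer names and roles are taken from the list above; the Layer class, the bottom-up ordering, and the treatment of Meta-Governance as a cross-cutting layer are illustrative assumptions, not part of the pattern language itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Layer:
    """One HALA layer: its name and the role its patterns play."""
    name: str
    role: str

# Ordered roughly bottom-up (an assumption inferred from the expansion path in the
# Implementation Guide); Meta-Governance is treated as a cross-cutting governing layer.
HALA_LAYERS = [
    Layer("Foundation Infrastructure", "Patterns that make everything else possible"),
    Layer("Epistemic Integrity", "Patterns that ensure reasoning quality"),
    Layer("Organizational Perception", "Patterns that see what humans can't or won't"),
    Layer("Uncomfortable Agency", "Patterns that say what humans can't"),
    Layer("Decision Output", "Where patterns meet human decisions"),
]
META_GOVERNANCE = Layer("Meta-Governance", "Patterns that govern the patterns")
```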
Pattern Gallery
28 patterns organized by layer. Each pattern includes its function and its failure mode; a code sketch of how a single pattern can be encoded follows the gallery.
Uncomfortable Agency
Patterns that say what humans can't
Mr Unpopular
Surface inconvenient but high-value truths suppressed by human incentives
Failure mode: Weaponized for agendas, undermines trust
Kill Switch Advocate
Continuously argue for stopping/sunsetting initiatives to counter sunk-cost bias
Failure mode: Decision paralysis, excuse for inaction
Long-Horizon Advocate
Argue for consequences beyond current planning horizons (5-10+ years)
Failure mode: Unfalsifiable predictions
Proxy Confrontation Agent
Mediate sensitive issues anonymously to enable truth flow
Failure mode: Enables cowardice, erodes direct norms
Moral Injury Sentinel
Detect patterns where people repeatedly act against stated values
Failure mode: Surveillance creep, moralizing
Stakeholder Ghost
Represent interests of absent parties (future employees, communities)
Failure mode: Paternalism, misrepresentation
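A pattern travels well as a small record: its layer, its function, and, per the Failure Mode Principle, an explicitly named failure mode. The shape below is illustrative rather than a prescribed schema; the example values come from the Kill Switch Advocate card above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Pattern:
    """One HALA pattern: what it does and how it goes wrong."""
    name: str
    layer: str
    function: str
    failure_mode: str  # named explicitly, per the Failure Mode Principle

KILL_SWITCH_ADVOCATE = Pattern(
    name="Kill Switch Advocate",
    layer="Uncomfortable Agency",
    function="Continuously argue for stopping/sunsetting initiatives to counter sunk-cost bias",
    failure_mode="Decision paralysis, excuse for inaction",
)
```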
HALA in Action
EHR Migration Decision
A hospital system is 18 months into a $40M electronic health record (EHR) migration. Adoption is at 30%. No one wants to say "kill it" because the CFO championed it.
Without HALA
The project limps along for 2 more years and eventually "succeeds" at 45% adoption, $60M over budget.
With HALA
At month 12, the system surfaces: "Projects with <40% adoption at 12 months have 8% probability of reaching target. Three prior decisions assumed 60% adoption by now. Recommend formal kill/pivot review." The CFO isn't blamed—the system said it.
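The check behind that surfaced statement can be a plain, auditable rule rather than anything exotic. A minimal sketch, using the 40% adoption and 12-month figures from the scenario; the function name and defaults are illustrative.

```python
def should_trigger_kill_review(adoption_rate: float,
                               months_elapsed: int,
                               adoption_threshold: float = 0.40,
                               review_month: int = 12) -> bool:
    """Recommend a formal kill/pivot review when adoption lags the threshold
    at or after the review point. Thresholds here are taken from the scenario."""
    return months_elapsed >= review_month and adoption_rate < adoption_threshold

# EHR migration scenario: 30% adoption at month 12 -> review recommended.
assert should_trigger_kill_review(adoption_rate=0.30, months_elapsed=12)
```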
Interactive Demo
🔬 Try It: HALA Pattern Analyzer
Describe a decision scenario and see which HALA patterns would activate.
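As a toy illustration of what such an analyzer might do, the sketch below scans a scenario description for signal phrases associated with a few patterns. The keyword lists, the pattern subset, and the matching logic are placeholders, not the actual analyzer.

```python
# Hypothetical signal phrases per pattern; a real analyzer would be far richer.
PATTERN_SIGNALS = {
    "Kill Switch Advocate": {"sunk cost", "over budget", "behind schedule", "adoption"},
    "Long-Horizon Advocate": {"ten years", "long term", "technical debt"},
    "Stakeholder Ghost": {"future employees", "community", "patients"},
}

def analyze_scenario(description: str) -> list[str]:
    """Return the patterns whose signal phrases appear in the scenario text."""
    text = description.lower()
    return [name for name, phrases in PATTERN_SIGNALS.items()
            if any(phrase in text for phrase in phrases)]

print(analyze_scenario(
    "We're 18 months in, over budget, and adoption is stuck at 30%."
))  # -> ['Kill Switch Advocate']
```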
Meta-Principles
The Core Question
“What true and valuable things go unsaid in organizations, and how can AI safely say them?”
Adoption Principle
A pattern that organizations reject provides zero value. Design for gradual trust-building.
Failure Mode Principle
Every pattern has shadow applications. Name them explicitly or they'll be discovered adversarially.
Override Principle
Humans must always be able to override AI patterns, but overrides must be logged and reviewable; a sketch of such a record follows these principles.
Sunset Principle
Patterns that aren't providing value should be removable without organizational trauma.
Calibration Principle
The goal is appropriate trust, not maximum trust. Humans should know when to override.
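The Override Principle implies a concrete artifact: every override leaves a reviewable record. One possible shape, with illustrative field names:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OverrideRecord:
    """A logged human override of an AI pattern's output (Override Principle)."""
    pattern: str        # which pattern was overridden
    output_id: str      # the specific output being overridden
    overridden_by: str  # the accountable human
    rationale: str      # required: overrides are reviewable, never silent
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

override_log: list[OverrideRecord] = []
override_log.append(OverrideRecord(
    pattern="Kill Switch Advocate",
    output_id="ehr-migration-review-001",
    overridden_by="coo",
    rationale="A pivot plan was already approved; a second review would duplicate it.",
))
```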
When to Use HALA
HALA is appropriate when:
- Organization has baseline psychological safety
- Leadership genuinely wants better decisions (not decision theater)
- There's tolerance for uncomfortable outputs
- Technical infrastructure can support traceability
- Clear escalation paths exist
HALA is contraindicated when:
- Organization will weaponize outputs for political purposes
- No one has authority to act on uncomfortable truths
- Compliance theater is the actual goal
- Trust between humans is already critically damaged
- Legal/regulatory constraints prevent transparency
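Read as a deployment gate, the two checklists above reduce to a simple rule: every condition in the first list should hold and none in the second. A trivial encoding, with illustrative flag names:

```python
def hala_deployment_advisable(appropriate: dict[str, bool],
                              contraindications: dict[str, bool]) -> bool:
    """Every 'appropriate when' condition must hold; no contraindication may."""
    return all(appropriate.values()) and not any(contraindications.values())

print(hala_deployment_advisable(
    appropriate={
        "baseline_psychological_safety": True,
        "leadership_wants_better_decisions": True,
        "tolerance_for_uncomfortable_outputs": True,
        "traceability_infrastructure": True,
        "clear_escalation_paths": True,
    },
    contraindications={
        "outputs_will_be_weaponized": False,
        "no_authority_to_act_on_truths": False,
        "compliance_theater_is_the_goal": False,
        "trust_critically_damaged": False,
        "legal_constraints_prevent_transparency": False,
    },
))  # -> True
```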
Implementation Guide
Minimum Viable Deployment
Start with Foundation Infrastructure + one pattern from Uncomfortable Agency:
Provides: Structured input, audit trail, one uncomfortable truth-teller, and decision capture
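One way to pin that starting point down is as explicit configuration. The pattern choice (Kill Switch Advocate) and the field names below are examples, not requirements.

```python
# Minimum viable deployment: the foundation layer plus a single uncomfortable truth-teller.
MVD_CONFIG = {
    "layers": ["Foundation Infrastructure", "Uncomfortable Agency"],
    "patterns": ["Kill Switch Advocate"],  # any one Uncomfortable Agency pattern works
    "provides": [
        "structured input",
        "audit trail",
        "one uncomfortable truth-teller",
        "decision capture",
    ],
}
```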
Expansion Path
Epistemic Integrity
Trigger: When users trust the infrastructure
Add reasoning quality patterns to ensure claims are grounded and challenged
Organizational Perception
Trigger: When leadership is ready to see power dynamics
Add patterns that surface informal structures and incentive misalignments
Meta-Governance
Trigger: When the system is mature enough to govern itself
Add patterns that monitor and adjust the HALA system itself
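The expansion path can likewise be written down as ordered stages with explicit triggers, so adding a layer is a deliberate, reviewable step rather than drift. The stage contents mirror the list above; the structure and helper function are illustrative.

```python
EXPANSION_STAGES = [
    {
        "layer": "Epistemic Integrity",
        "trigger": "Users trust the infrastructure",
        "adds": "Reasoning quality patterns that keep claims grounded and challenged",
    },
    {
        "layer": "Organizational Perception",
        "trigger": "Leadership is ready to see power dynamics",
        "adds": "Patterns that surface informal structures and incentive misalignments",
    },
    {
        "layer": "Meta-Governance",
        "trigger": "The system is mature enough to govern itself",
        "adds": "Patterns that monitor and adjust the HALA system itself",
    },
]

def next_expansion(deployed_layers: set[str]) -> dict | None:
    """Return the first stage whose layer has not been deployed yet, if any."""
    for stage in EXPANSION_STAGES:
        if stage["layer"] not in deployed_layers:
            return stage
    return None

# After the minimum viable deployment, Epistemic Integrity is next in line.
print(next_expansion({"Foundation Infrastructure", "Uncomfortable Agency"})["layer"])
```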
Origin
HALA emerged from building production AI systems across 186 hospitals serving 44+ million patient encounters annually. In that environment, I observed repeatedly that organizational dynamics—not technical limitations—were the binding constraint on AI impact.
This pattern language distills those observations into a reusable framework for designing AI systems that can occupy the cognitive and communicative roles humans cannot reliably play.
Interested in Applying These Patterns?
Whether you're building AI systems for enterprise decision-making or researching human-AI collaboration, I'd love to discuss how HALA might apply to your context.