
HALA: Human-AI Layered Architecture

A pattern language for the things AI can say that humans can't.

If a role is socially costly, politically dangerous, or career-limiting for a human, it is a prime candidate for an AI pattern.
28 patterns · 5 layers · 6 new in v2

The Problem

Organizations systematically fail to surface certain truths—not because people don't know them, but because saying them carries unacceptable personal cost.

“This project should be killed”
→ Career risk if you championed it
“The executive's proposal has fatal flaws”
→ Political suicide
“We're slowly abandoning our stated values”
→ Labeled as not a team player
“This decision will hurt us in 5 years”
→ Dismissed as speculation

HALA addresses this by designing AI systems that can occupy these roles—saying what needs to be said without the human bearing the social cost.

Architecture

Five layers, each building on the layers below, plus a Decision Output stage where the patterns meet human decisions.


Meta-Governance (6 patterns)

Patterns that govern the patterns.

Adoption Gradient · Trust Calibrator · Capture Detector, +3 more

Foundation Infrastructure (4 patterns)

Patterns that make everything else possible.

Semantic Interface · Semantic Compiler · Event-Sourced Memory, +1 more

Epistemic Integrity (6 patterns)

Patterns that ensure reasoning quality.

Grounding Contract · Base-Rate Enforcer · Confidence Collar, +3 more

Organizational Perception (5 patterns)

Patterns that see what humans can't or won't.

Org-Shadow Modeler · Incentive Translator · Status-Blind Analyst, +2 more

Uncomfortable Agency (6 patterns)

Patterns that say what humans can't.

Mr Unpopular · Kill Switch Advocate · Long-Horizon Advocate, +3 more

Decision Output (1 pattern, new in v2)

Where patterns meet human decisions.

Decision Historian

In the layer diagram, data flows down and governance flows up.
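
As a minimal sketch, the stack and its two flow directions can be written down directly. The Python names below (LAYERS, Pattern, REGISTRY) are illustrative, not part of HALA itself:

```python
from dataclasses import dataclass

# Top-to-bottom stack as drawn in the diagram. Data flows down toward
# decisions; governance signals (override logs, trust metrics) flow back
# up to Meta-Governance.
LAYERS = [
    "Meta-Governance",            # patterns that govern the patterns
    "Foundation Infrastructure",  # patterns that make everything else possible
    "Epistemic Integrity",        # patterns that ensure reasoning quality
    "Organizational Perception",  # patterns that see what humans can't or won't
    "Uncomfortable Agency",       # patterns that say what humans can't
    "Decision Output",            # where patterns meet human decisions
]

@dataclass
class Pattern:
    name: str
    layer: str
    function: str      # what the pattern does
    failure_mode: str  # its named shadow application (Failure Mode Principle)

# One entry per pattern, 28 in total; a single example shown here.
REGISTRY = [
    Pattern("Mr Unpopular", "Uncomfortable Agency",
            "Surface inconvenient but high-value truths suppressed by human incentives",
            "Weaponized for agendas, undermines trust"),
    # ...the remaining 27 patterns
]
```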

Pattern Gallery

28 patterns organized by layer, each with its function and failure mode. The Uncomfortable Agency layer is shown in full below.

Uncomfortable Agency

Patterns that say what humans can't

Mr Unpopular

Surface inconvenient but high-value truths suppressed by human incentives.

Failure mode: weaponized for agendas; undermines trust.

Kill Switch Advocate

Continuously argue for stopping or sunsetting initiatives to counter sunk-cost bias.

Failure mode: decision paralysis; an excuse for inaction.

Long-Horizon Advocate (new in v2)

Argue for consequences beyond current planning horizons (5-10+ years).

Failure mode: unfalsifiable predictions.

Proxy Confrontation Agent

Mediate sensitive issues anonymously to enable truth flow.

Failure mode: enables cowardice; erodes norms of direct communication.

Moral Injury Sentinel

Detect patterns where people repeatedly act against stated values.

Failure mode: surveillance creep; moralizing.

Stakeholder Ghost (new in v2)

Represent the interests of absent parties (future employees, communities).

Failure mode: paternalism; misrepresentation.
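
Each of these patterns pairs a function with a named failure mode, which suggests building the mitigation into the pattern itself. Here is a hedged sketch of one way Mr Unpopular outputs could be gated against weaponization; the names and rules are hypothetical, not part of the pattern language:

```python
from dataclasses import dataclass

@dataclass
class UncomfortableClaim:
    statement: str
    evidence: list[str]          # references into the audit trail (Event-Sourced Memory)
    affected_parties: list[str]  # e.g. "decision:ehr-migration", "person:cfo"

def release(claim: UncomfortableClaim) -> bool:
    """Gate a Mr Unpopular output against its failure mode (weaponization).

    Claims with no traceable evidence, or aimed at individuals rather than
    decisions, are held for review instead of being surfaced.
    """
    if not claim.evidence:
        return False  # ungrounded truths are how the pattern gets weaponized
    if any(p.startswith("person:") for p in claim.affected_parties):
        return False  # critique decisions, not people
    return True
```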

HALA in Action

EHR Migration Decision

A hospital system is 18 months into a $40M EHR migration. Adoption is at 30%. No one wants to say "kill it" because the CFO championed it.

Without HALA

The project limps along for two more years and eventually "succeeds" at 45% adoption, $60M over budget.

With HALA

At month 12, the system surfaces: "Projects with <40% adoption at 12 months have 8% probability of reaching target. Three prior decisions assumed 60% adoption by now. Recommend formal kill/pivot review." The CFO isn't blamed—the system said it.

Patterns involved: Kill Switch Advocate, Decision Historian
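
The vignette's trigger maps onto a simple base-rate check. A minimal sketch: the 8% figure comes from the example above, while the table, bands, and threshold are invented for illustration:

```python
# Hypothetical base rates: adoption band at month 12 -> P(reaching target).
BASE_RATES = {
    (0.0, 0.4): 0.08,  # the 8% figure from the vignette
    (0.4, 0.6): 0.35,
    (0.6, 1.0): 0.80,
}

def kill_switch_review(adoption: float, months: int, floor: float = 0.15) -> str | None:
    """Kill Switch Advocate: recommend a formal kill/pivot review when the
    historical probability of success falls below the floor."""
    if months < 12:
        return None  # these base rates are conditioned on the 12-month mark
    for (lo, hi), p in BASE_RATES.items():
        if lo <= adoption < hi and p < floor:
            return (f"Projects with <{hi:.0%} adoption at {months} months have "
                    f"{p:.0%} probability of reaching target. "
                    f"Recommend formal kill/pivot review.")
    return None

print(kill_switch_review(adoption=0.30, months=12))
```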

Interactive Demo

🔬 Try It: HALA Pattern Analyzer

Describe a decision scenario and see which HALA patterns would activate.

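
Offline, the analyzer's behavior can be approximated with a toy matcher. A sketch only: the cue lists are invented, and a real analyzer would classify scenarios with a language model rather than keywords:

```python
# Toy cue table; invented for illustration.
PATTERN_CUES = {
    "Kill Switch Advocate":  ["sunk cost", "over budget", "behind schedule"],
    "Long-Horizon Advocate": ["in five years", "long term", "technical debt"],
    "Stakeholder Ghost":     ["community", "future employees", "patients"],
    "Moral Injury Sentinel": ["against our values", "pressured to"],
}

def analyze(scenario: str) -> list[str]:
    """Return the HALA patterns whose cues appear in the scenario text."""
    text = scenario.lower()
    return [name for name, cues in PATTERN_CUES.items()
            if any(cue in text for cue in cues)]

print(analyze("18 months in, $20M over budget, but stopping feels like waste."))
# -> ['Kill Switch Advocate']
```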

Meta-Principles

The Core Question

What true and valuable things go unsaid in organizations, and how can AI safely say them?

1. Adoption Principle

A pattern that organizations reject provides zero value. Design for gradual trust-building.

2. Failure Mode Principle

Every pattern has shadow applications. Name them explicitly or they'll be discovered adversarially.

3. Override Principle

Humans must always be able to override AI patterns, but overrides must be logged and reviewable (a minimal logging sketch follows this list).

4. Sunset Principle

Patterns that aren't providing value should be removable without organizational trauma.

5. Calibration Principle

The goal is appropriate trust, not maximum trust. Humans should know when to override.
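
As referenced under the Override Principle: overrides should succeed immediately but leave a reviewable record. A minimal logging sketch over an append-only file; the field names are illustrative, and a real deployment would use the Event-Sourced Memory pattern's store:

```python
import json, time

OVERRIDE_LOG = "overrides.jsonl"  # append-only, in the spirit of Event-Sourced Memory

def override(pattern: str, recommendation: str, overridden_by: str, reason: str) -> None:
    """Record a human override: the override always succeeds, but it is logged."""
    entry = {
        "ts": time.time(),
        "pattern": pattern,
        "recommendation": recommendation,
        "overridden_by": overridden_by,
        "reason": reason,  # required: an unexplained override is itself a signal
    }
    with open(OVERRIDE_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Governance flows up: frequent overrides of one pattern inform the Trust
# Calibrator; overrides clustered around one person inform the Capture Detector.
```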

When to Use HALA

HALA is appropriate when:

  • Organization has baseline psychological safety
  • Leadership genuinely wants better decisions (not decision theater)
  • There's tolerance for uncomfortable outputs
  • Technical infrastructure can support traceability
  • Clear escalation paths exist

HALA is contraindicated when:

  • Organization will weaponize outputs for political purposes
  • No one has authority to act on uncomfortable truths
  • Compliance theater is the actual goal
  • Trust between humans is already critically damaged
  • Legal/regulatory constraints prevent transparency

Implementation Guide

Minimum Viable Deployment

Start with Foundation Infrastructure + one pattern from Uncomfortable Agency:

Semantic Interface · Event-Sourced Memory · Mr Unpopular · Decision Historian

Provides structured input, an audit trail, one uncomfortable truth-teller, and decision capture.
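
A sketch of what that starting configuration might look like; the keys and the trust threshold are invented for illustration:

```python
# Illustrative minimum viable deployment: foundation plus one truth-teller.
MVD = {
    "foundation_infrastructure": ["Semantic Interface", "Event-Sourced Memory"],
    "uncomfortable_agency":      ["Mr Unpopular"],       # one truth-teller, not six
    "decision_output":           ["Decision Historian"], # capture decisions and rationale
    "meta_governance":           [],                     # added later (see Expansion Path)
}

def ready_for_epistemic_integrity(infrastructure_trust: float) -> bool:
    """Expansion trigger 1: add reasoning-quality patterns once users trust
    the infrastructure. The 0.7 threshold is purely illustrative."""
    return infrastructure_trust > 0.7
```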

Expansion Path

1. Epistemic Integrity

Trigger: When users trust the infrastructure

Add reasoning quality patterns to ensure claims are grounded and challenged

2. Organizational Perception

Trigger: When leadership is ready to see power dynamics

Add patterns that surface informal structures and incentive misalignments

3. Meta-Governance

Trigger: When the system is mature enough to govern itself

Add patterns that monitor and adjust the HALA system itself

Origin

HALA emerged from building production AI systems across 186 hospitals serving 44+ million patient encounters annually. In that environment, I observed repeatedly that organizational dynamics—not technical limitations—were the binding constraint on AI impact.

This pattern language distills those observations into a reusable framework for designing AI systems that can occupy the cognitive and communicative roles humans cannot reliably play.

Version History

v1.0: Initial pattern language with 18 patterns
v2.0: Added the layered taxonomy, explicit failure modes, 6 new patterns, the meta-governance layer, and boundary conditions

Download the full PDF (v2.0, 28 patterns across 5 layers)

Interested in Applying These Patterns?

Whether you're building AI systems for enterprise decision-making or researching human-AI collaboration, I'd love to discuss how HALA might apply to your context.