The loop
The six components are one system. The ELI platform generates the training signal. Stage verifiers encode it as learned decision boundaries. The Living Knowledge Bank provides the domain context. The Governance Layer makes every output traceable. The Domain Orchestrator runs the pipeline — sequencing stages, carrying state forward, and acting on the verifier signal at each gate. Belief Guided Perception ensures that what the system observes sharpens with every cycle of reasoning.
What makes this trustworthy isn’t any single component — it’s the loop. Every halt generates training data. Every correction sharpens the verifier. Every ingested guideline deepens the knowledge graph. The system compounds because the data that trains it is produced by the experts who understand the domain, as a by-product of using it.
Proprietary data accrues in two phases, training and production, with no separate annotation pipeline. The product gets more capable as a by-product of normal clinical governance.
The components
Uncertainty isn’t a failure mode. It’s the escalation and training signal.
The ELI platform is how the system learns to reason. An expert watches the AI think through a case — not just the answer, but every step of the chain. They correct it at the reasoning level. Those corrections become the training data for the stage verifiers: fine-tuned models that evaluate each stage once the system is in production. The reasoning loop follows an OODA structure — Observe, Orient, Decide, Act — each stage with its own verifier. When a verifier can't resolve a stage, it escalates to an expert. Their correction permanently retrains that specific verifier. No annotation pipeline. No extra work.
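The stage-gate-and-escalate pattern can be sketched in a few lines. This is an illustrative sketch only: names like `StageVerifier`, `run_stage`, and the confidence threshold are assumptions, not the platform's API, and the real verifier is a fine-tuned model rather than a threshold check.

```python
from dataclasses import dataclass, field

@dataclass
class StageVerifier:
    """One verifier per OODA stage; expert corrections accrue to it alone."""
    stage: str                      # "observe" | "orient" | "decide" | "act"
    corrections: list = field(default_factory=list)

    def evaluate(self, output: dict) -> bool:
        # Placeholder decision boundary; in the described system this is a
        # learned model, not a hand-coded threshold.
        return output.get("confidence", 0.0) >= 0.8

    def record_correction(self, output: dict, expert_fix: dict) -> None:
        # Every escalation becomes training data for this verifier only.
        self.corrections.append((output, expert_fix))

def run_stage(verifier: StageVerifier, output: dict, ask_expert):
    if verifier.evaluate(output):
        return output
    fix = ask_expert(verifier.stage, output)    # escalate to an expert
    verifier.record_correction(output, fix)     # retraining signal, no extra work
    return fix
```

The key property the sketch preserves is locality: the correction lands on the verifier for the stage that failed, and nowhere else.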
Not a document index. A versioned knowledge graph that compounds.
Experts can add to the knowledge bank, query it, and talk to it directly. A self-discovery mode treats the existing knowledge bank as a set of working beliefs, then actively scans new literature for anything that challenges them. Where new evidence conflicts with what the system currently holds, it flags the gap for a curator — so the knowledge base stays current without anyone having to manually monitor the field.
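The self-discovery pass reduces to a comparison between held beliefs and incoming evidence. A minimal sketch, assuming a flat claim-to-stance mapping on both sides (the real knowledge bank is a versioned graph, and `flag_conflicts` is an invented name):

```python
def flag_conflicts(beliefs: dict, new_findings: dict) -> list:
    """Compare new literature findings against currently held beliefs and
    return the contradictions for a curator to review."""
    flags = []
    for claim, stance in new_findings.items():
        held = beliefs.get(claim)
        if held is not None and held != stance:
            # New evidence contradicts a working belief: surface the gap.
            flags.append({"claim": claim, "held": held, "new": stance})
    return flags
```

Claims the bank has never seen produce no flag; only direct contradictions with what the system currently holds reach the curator.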
Learned thresholds, not hand-coded rules. Stage-local, not global.
Think of each verifier as a traffic light. Green passes. Amber sends the stage back with a confidence score, specific guidance on what was insufficient, and enriched retrieval — before it re-executes. Red halts and escalates to an expert. Each verifier is bound to a single OODA stage — a correction at Orient doesn't alter the Decide verifier, and a failure at Decide doesn't reset to Observe. The more difficult cases the system encounters, the more precisely calibrated its verifiers become.
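The three-state gate can be made concrete as follows. The thresholds here are hand-coded for illustration only; in the described system the boundaries are learned per stage, and `Verdict` and `gate` are illustrative names:

```python
from enum import Enum

class Verdict(Enum):
    GREEN = "pass"     # output passes downstream
    AMBER = "retry"    # sent back with guidance and enriched retrieval
    RED = "halt"       # halt and escalate to an expert

def gate(confidence: float, green: float = 0.85, red: float = 0.40) -> Verdict:
    # Illustrative fixed thresholds; the real boundaries are learned, stage-local.
    if confidence >= green:
        return Verdict.GREEN
    if confidence >= red:
        return Verdict.AMBER
    return Verdict.RED
```

Amber is the interesting state: it is not a failure but a bounded retry with more context, which is what keeps most cases from ever reaching red.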
Three independent layers. Each updateable without touching the others.
Experts can initiate updates when practice changes — a new guideline, a flagged drug interaction, a refined reasoning pattern. The update goes through the knowledge bank pipeline, generates a versioned change record, and propagates downstream. Every recommendation carries an immutable audit trail: knowledge bank version, active verifiers, confidence scores, and guideline citations.
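The audit trail attached to each recommendation can be sketched as an immutable record with a content hash. The field names below are assumptions, not the Governance Layer's actual schema:

```python
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass(frozen=True)
class AuditRecord:
    """Illustrative shape of the per-recommendation audit trail."""
    kb_version: str            # knowledge bank version in effect
    verifier_versions: dict    # active verifier per OODA stage
    confidence_scores: dict    # gate confidence at each stage
    citations: tuple           # guideline citations backing the output

    def fingerprint(self) -> str:
        # Frozen dataclass plus a content hash makes the trail tamper-evident:
        # any change to any field changes the fingerprint.
        payload = json.dumps(asdict(self), sort_keys=True, default=str)
        return hashlib.sha256(payload.encode()).hexdigest()
```

Because the record pins the knowledge bank version and the active verifiers, a recommendation made last year can be replayed against exactly the state of the system that produced it.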
What you observe depends on what you already know.
Belief Guided Perception eliminates flat intake — where every question is asked regardless of clinical relevance — and makes each OODA cycle progressively more precise. In practice, the system asks the right questions earlier, misses fewer high-risk signals, and produces a more complete picture with less friction for the clinician.
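Belief-conditioned intake can be sketched as a ranking problem: score each candidate question by its relevance to the beliefs currently held, instead of walking a fixed list. `next_questions` and the catalog shape are illustrative assumptions:

```python
def next_questions(beliefs: dict, catalog: dict, top_k: int = 2) -> list:
    """Rank candidate questions by relevance to current beliefs.

    beliefs: belief name -> current strength (0.0 to 1.0)
    catalog: question text -> {belief name: relevance weight}
    """
    def relevance(weights: dict) -> float:
        # A question matters to the degree it bears on active beliefs.
        return sum(beliefs.get(b, 0.0) * w for b, w in weights.items())

    ranked = sorted(catalog, key=lambda q: relevance(catalog[q]), reverse=True)
    return ranked[:top_k]
```

As beliefs sharpen across cycles, the same catalog yields a different, narrower question set — which is the mechanism behind "asks the right questions earlier."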
Sequences stages, carries state forward, acts on the verifier signal at each gate.
The Orchestrator manages the cross-stage state that makes reasoning coherent — the beliefs formed in Orient shape what Decide weighs; the output schema from Act is governed by what was confirmed in earlier stages. It sequences stages, carries context forward, and acts on the three-state verifier signal at each gate. The pipeline is not a series of independent calls. The Orchestrator holds it together.
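The Orchestrator's control flow can be sketched as a loop over the four stages, with shared state threaded through and a three-state signal checked at each gate. All names and the retry budget below are illustrative assumptions, not the product's implementation:

```python
STAGES = ["observe", "orient", "decide", "act"]

def run_pipeline(stages: dict, gates: dict, state: dict, max_retries: int = 2):
    """stages: stage name -> fn(state) -> state
    gates:  stage name -> fn(state) -> "green" | "amber" | "red"
    """
    for name in STAGES:
        for _ in range(max_retries + 1):
            state = stages[name](state)       # stage reads and extends shared state
            signal = gates[name](state)
            if signal == "green":
                break                         # advance; never reset to Observe
            if signal == "red":
                raise RuntimeError(f"halt at {name}: escalate to expert")
            # amber: the same stage re-executes against the enriched state
        else:
            raise RuntimeError(f"halt at {name}: retries exhausted")
    return state
```

Note that an amber signal loops within the current stage rather than restarting the pipeline, matching the stage-local failure semantics described above.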