The DNAi Cognitive Lifecycle

From Question to Answer — How a Fiduciary AI Brain Thinks

Think of DNAi like a human nervous system. When you touch a hot stove, your nerves carry the signal in (afference), your brain processes the danger (inference), and your muscles pull your hand back (efference).

When someone asks Asha a medical question, the same three phases happen — but instead of nerves and muscles, it’s vectors, evidence, and language. The inference layer beats like a heart: it contracts (compresses knowledge into the best evidence) and relaxes (expands to check for contradictions and gaps).
Legend: Afference (Sensory Input) · Inference (Cognitive Loop) · Efference (Motor Output) · Gate (stop / redirect)
👂
I. Afference
Sensory intake — hearing and understanding the question
A1
Query Arrives
The user types a question. It enters through the API as raw text — like sound waves hitting an eardrum.
A2
Safety Screening
Jailbreak detection and PHI (personal health info) redaction. Like the brain’s amygdala — a danger check before anything else processes.
⚡ GATE: Jailbreak → blocked instantly
A3
Language Detection & Translation
Detects 30+ languages. If the question is in Hindi, Spanish, or Urdu, it’s translated to English for processing, then the answer is translated back. Like a bilingual interpreter in your ear.
A4
Query Classification (VectorGates)
Converts the question into a 768-dimensional vector and classifies it: medical? social? greeting? math? Like the thalamus — the brain’s switchboard routing signals to the right cortical area.
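A minimal sketch of how a vector-based query router might work. The centroids, the `embed` stand-in, and the class names are illustrative assumptions; only the 768-dimensional embedding and the route labels come from the text above.

```python
import numpy as np

# Hypothetical per-route centroids; in a real system these would be
# learned from labeled queries embedded into the 768-dim space.
RNG = np.random.default_rng(0)
CENTROIDS = {route: RNG.normal(size=768)
             for route in ("medical", "social", "greeting", "math")}

def embed(text: str) -> np.ndarray:
    """Stand-in for the real 768-dim sentence encoder (deterministic toy)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=768)

def classify(query: str) -> str:
    """Route the query to the class whose centroid is most similar (cosine)."""
    v = embed(query)
    v = v / np.linalg.norm(v)
    def cos(c: np.ndarray) -> float:
        return float(v @ (c / np.linalg.norm(c)))
    return max(CENTROIDS, key=lambda r: cos(CENTROIDS[r]))
```

The nearest-centroid rule is the simplest possible "switchboard"; the production gate could equally be a trained classifier over the same embeddings.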
A5
Gravity & Effort Scoring
Gravity = clinical stakes (0–1). “What’s a headache?” → 0.3. “Chest pain with shortness of breath” → 0.95.
EAS = cognitive effort needed (0–10). Simple lookup → 2. Multi-drug interaction → 9.
Whichever is higher picks the smarter (and slower) AI model.
⚡ GATE: Gravity ≥ 0.95 → Emergency protocol
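The scoring-and-routing logic above can be sketched as follows. Only the 0–1 gravity range, the 0–10 EAS range, the "whichever is higher" rule, and the 0.95 emergency gate come from the text; the tier names and other thresholds are illustrative placeholders.

```python
def pick_tier(gravity: float, eas: float) -> str:
    """Map clinical stakes (0-1) and cognitive effort (0-10) to a model tier."""
    if gravity >= 0.95:
        return "EMERGENCY"          # gate: emergency protocol, no model call
    # Normalize effort to 0-1 so the two scores are comparable,
    # then let whichever is higher drive the choice.
    score = max(gravity, eas / 10)
    if score >= 0.7:
        return "large"              # smarter, slower model
    if score >= 0.4:
        return "medium"
    return "small"                  # fast lookup-grade model
```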
A6
Memory Recall (DNAid)
Retrieves your personal context — past conversations, health history, preferences — from your private vector collection. Like hippocampal recall: “I’ve talked to this person before.”
A7
Evidence Retrieval
Searches 591 collections with 121M+ vectors — PubMed abstracts, FDA drug labels, clinical guidelines, textbooks, OpenAlex papers. Like sending scouts to every library on Earth simultaneously. Returns the top 20 most relevant sources.
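Conceptually, the fan-out-and-merge retrieval looks like this. The in-memory collections are a toy stand-in for a real vector database; only the "search everything, return the top 20" shape comes from the text.

```python
import heapq
import numpy as np

def top_k(query_vec: np.ndarray, collections: dict, k: int = 20) -> list:
    """Fan out over every collection, keep the global k best by cosine score.
    `collections` maps name -> list of (doc_id, vector)."""
    q = query_vec / np.linalg.norm(query_vec)
    scored = []
    for name, docs in collections.items():
        for doc_id, vec in docs:
            score = float(q @ (vec / np.linalg.norm(vec)))
            scored.append((score, name, doc_id))
    return heapq.nlargest(k, scored)   # best-scoring sources first
```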
🧠
II. Inference
The cognitive loop — a beating heart of reasoning
— SYSTOLE (Contraction) — Compress & Filter
S1
Evidence Boost
Systematic reviews and meta-analyses get 2× priority. Title matches are boosted. Like blood pressure pushing the most oxygen-rich blood forward first.
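A sketch of the re-weighting step. The 2× multiplier for reviews and meta-analyses is from the text; the 1.2× title-match boost and the hit structure are illustrative assumptions.

```python
def boost(hit: dict) -> float:
    """Re-weight one retrieval hit.
    hit = {"score": float, "type": str, "title_match": bool}"""
    score = hit["score"]
    if hit["type"] in ("systematic_review", "meta_analysis"):
        score *= 2.0            # highest-quality evidence pushed forward first
    if hit["title_match"]:
        score *= 1.2            # query terms appearing in the title
    return score
```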
S2
Wernicke Layer
Non-LLM comprehension. A cross-encoder scores each piece of evidence: does it support, contradict, or say nothing about the question? No language model — pure math.
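The cross-encoder itself is a trained model; what can be sketched here is the post-processing that turns its three raw scores into a support/contradict/neutral verdict via softmax. The label set comes from the text; the logit values are assumptions.

```python
import math

LABELS = ("supports", "contradicts", "neutral")

def classify_stance(logits: list) -> tuple:
    """Softmax over cross-encoder logits -> (stance, probability)."""
    m = max(logits)                              # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    i = probs.index(max(probs))
    return LABELS[i], probs[i]
```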
S3
Domain Subspace Filter
Tensor Coherence Engine projects evidence into a domain subspace. Off-topic noise (energy ratio < 0.15) is removed. Like the heart’s valves — only relevant blood flows through.
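The subspace-projection filter can be written down directly. The 0.15 energy-ratio cutoff is from the text; the orthonormal-basis representation of the domain subspace is an assumption about how the Tensor Coherence Engine stores it.

```python
import numpy as np

def energy_ratio(vec: np.ndarray, basis: np.ndarray) -> float:
    """Fraction of the vector's energy inside the domain subspace.
    `basis` is an orthonormal (d, k) matrix spanning that subspace."""
    proj = basis @ (basis.T @ vec)          # orthogonal projection
    return float(proj @ proj) / float(vec @ vec)

def keep(vec: np.ndarray, basis: np.ndarray, threshold: float = 0.15) -> bool:
    """Drop evidence whose in-domain energy falls below the threshold."""
    return energy_ratio(vec, basis) >= threshold
```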
S4
Neural Darwinism
Each piece of evidence becomes a CIU (Competitive Informational Unit), and CIUs fight in the Epistemic Arena. Weak evidence dies. Strong evidence survives. Survival of the fittest — for facts.
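A minimal sketch of the selection step. A plain top-n cut on a fitness score stands in for whatever selection rule the Epistemic Arena actually applies; the `CIU` fields and the survivor count are assumptions.

```python
from dataclasses import dataclass

@dataclass
class CIU:
    claim: str
    fitness: float   # e.g. relevance x source quality

def arena(cius: list, survivors: int = 5) -> list:
    """Competitive selection: only the fittest CIUs survive."""
    return sorted(cius, key=lambda c: c.fitness, reverse=True)[:survivors]
```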
— DIASTOLE (Relaxation) — Expand & Verify
D1
Contradiction Detection
Inverse retrieval: actively searches for evidence that disagrees with the winners. Like the heart relaxing to refill — the system opens up to opposing viewpoints. Contradictions penalize confidence.
D2
CFΔ Calculation
Confidence-Falsifiability Delta — a single number (0–1) measuring how trustworthy the evidence is. Combines source quality, similarity, recency, and contradiction penalties.
⚡ GATE: CFΔ < 0.50 → “I don’t have enough evidence”
D3
CIVERA Gateway
Is the evidence complete enough to answer without the LLM? If yes → LLM is bypassed entirely (Repository Verified). If no → LLM gets to reason. Like triage: can the body heal itself, or does it need the brain?
⚡ GATE: Repository Verified → LLM bypassed
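The triage decision reduces to a small routing function. The two outcomes ("Repository Verified" bypass vs. LLM reasoning) are from the text; the coverage metric and both thresholds are placeholder assumptions, not CIVERA's real criteria.

```python
def route(coverage: float, cfd: float,
          coverage_floor: float = 0.9, cfd_floor: float = 0.55) -> str:
    """If the retrieved evidence alone covers the question well enough,
    skip the LLM and answer straight from the repository."""
    if coverage >= coverage_floor and cfd >= cfd_floor:
        return "repository_verified"   # LLM bypassed entirely
    return "llm_reasoning"             # LLM gets to reason
```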
D4
Active Inference (Free Energy)
Based on Karl Friston’s theory of how real brains work. The system has beliefs about the answer. It compares beliefs to evidence. The gap = “free energy.” High free energy → inject caution. Low → proceed confidently.
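A toy rendering of the belief-versus-evidence gap. KL divergence between the system's belief distribution and the evidence distribution is a common simplification of variational free energy; the caution threshold is an assumption.

```python
import math

def free_energy(belief: list, evidence: list) -> float:
    """KL(belief || evidence): the 'surprise' of the current beliefs
    given what the evidence says. Zero when they agree exactly."""
    return sum(b * math.log(b / e)
               for b, e in zip(belief, evidence) if b > 0)

def caution_level(fe: float, threshold: float = 0.5) -> str:
    """High free energy -> inject caution. Low -> proceed confidently."""
    return "inject_caution" if fe > threshold else "proceed"
```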
D5
Resolve(P) — Socratic Gate
If stakes are high, confidence is low, and evidence is thin: instead of guessing, Asha asks you a clarifying question. Like a good doctor who says “tell me more” instead of jumping to a diagnosis.
⚡ GATE: High stakes + low CFΔ + thin evidence → ask, don’t answer
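The Socratic gate is a three-way conjunction, which might be sketched like this. The three conditions (high stakes, low CFΔ, thin evidence) come from the text; every threshold value is an illustrative assumption.

```python
def resolve(gravity: float, cfd: float, n_sources: int,
            gravity_hi: float = 0.7, cfd_lo: float = 0.5,
            min_sources: int = 3) -> str:
    """When stakes are high, confidence is low, AND evidence is thin,
    ask a clarifying question instead of guessing."""
    if gravity >= gravity_hi and cfd < cfd_lo and n_sources < min_sources:
        return "ask_clarifying_question"
    return "answer"
```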
💬
III. Efference
Motor output — speaking the answer into existence
E1
System Prompt Assembly
All evidence, context, bias warnings, therapeutic guidance, and fiduciary constraints are woven into a single prompt — like the motor cortex assembling every muscle command needed for one coordinated movement.
E2
LLM Generation (Wernicke-Broca)
A model-agnostic LLM (swappable across vendors) receives everything and generates the response token-by-token. It is the Wernicke-Broca layer — language comprehension and production. It doesn’t think. It verbalizes what the cognitive loop already decided. The gravity and effort scores pick the model tier: heavier questions get the most powerful model.
E3
Citation Validation
Every [1], [2], [3] in the response is checked against real sources. Fake citations are stripped. Like a copy editor fact-checking before publication.
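The citation check is essentially a regex pass over the generated text. A minimal sketch, assuming citations look exactly like `[n]` and that any marker beyond the real source count is fake:

```python
import re

def validate_citations(text: str, n_sources: int) -> str:
    """Keep [n] markers that point at a real source; strip the rest."""
    def check(match: re.Match) -> str:
        n = int(match.group(1))
        return match.group(0) if 1 <= n <= n_sources else ""
    return re.sub(r"\[(\d+)\]", check, text)
```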
E4
Output Verification
A second CIVERA pass compares the generated text against the evidence. Did the LLM stay faithful? Did it hallucinate? Like a pharmacist double-checking a prescription before handing it to you.
E5
Abstention Calibration
If the evidence doesn’t mention the specific entity asked about, the system cleanly refuses instead of guessing. A confident “I don’t know” is better than a fluent wrong answer. Silence is epistemic honesty.
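The abstention check can be sketched as a coverage test. A plain case-insensitive substring match stands in for DNAi's real entity matching, which is presumably vector-based.

```python
def should_abstain(entity: str, evidence_texts: list) -> bool:
    """Refuse when the asked-about entity never appears in the
    retrieved evidence: a clean 'I don't know' beats a fluent guess."""
    needle = entity.lower()
    return not any(needle in text.lower() for text in evidence_texts)
```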
E6
Response Streaming
The answer streams to your screen word-by-word via SSE (Server-Sent Events). Sources appear as clickable “My Associations” links at the bottom. Like speech — continuous, not all-at-once.
E7
Memory Consolidation (OutputCIU)
If the answer was confident enough (CFΔ ≥ 0.55), it’s decomposed into individual claims and stored as OutputCIUs in your personal vector collection. Like sleep — the brain replays important moments and commits them to long-term memory. Next time you ask, Asha remembers what she told you.
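The consolidation rule above reduces to a gate plus a decomposition. The CFΔ ≥ 0.55 floor and the OutputCIU name are from the text; the claim-splitting is assumed to happen upstream, so this sketch takes the claims as given.

```python
def consolidate(answer_claims: list, cfd: float,
                floor: float = 0.55) -> list:
    """Store individual claims as OutputCIUs only when the answer
    cleared the CFΔ confidence floor; otherwise remember nothing."""
    if cfd < floor:
        return []
    return [{"type": "OutputCIU", "claim": claim, "cfd": cfd}
            for claim in answer_claims]
```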