March 9, 2026

How We Built AI That Can’t Lie

46 axioms. 100 million knowledge atoms. A patent-pending architecture where no single fact can vouch for itself.

The Problem Everyone Is Ignoring

When ChatGPT launched, the world celebrated how fluent AI had become. Very few people asked the harder question: how do you know it’s telling the truth?

Fluency is not intelligence. A confident wrong answer is worse than silence. And in the domains where AI matters most — health, fitness, finance — a wrong answer can hurt people.

We didn’t start as AI researchers. We started as physicians — a psychiatrist and a pediatrician, married, raising a family, seeing patients. We built DNAi because we needed AI that met the same standard we hold ourselves to: first, do no harm.

What Is a Knowledge Atom?

Most AI systems store information as raw text chunks thrown into a vector database. Retrieve the nearest chunk, feed it to a language model, hope for the best. This is called RAG — Retrieval Augmented Generation. It’s better than nothing. It’s nowhere near enough.

We store knowledge differently. Every piece of information in our system is a Cognitive Inference Unit (CIU) — a structured knowledge atom with six components:

Name — what the fact is about.
Form — how it is structured and represented.
Dharma — its purpose: when it applies, when it doesn’t, and what it’s for.
Cryptographic hash — a digital fingerprint chaining it to its parent. Tamper with one atom and every descendant breaks.
Confidence score — a mathematically computed measure of how trustworthy this atom is right now.
Temporal anchor — a monotonic timestamp proving when this atom was created, in unforgeable sequence.

This triad of Name, Form, and Dharma is the ontological backbone of the system — knowledge decomposed into structure that a machine can reason about.
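As a minimal sketch, the six components can be modeled as an immutable record whose fingerprint covers its content, its parent's hash, and its temporal anchor. The field names follow the list above, but the hashing scheme and schema details are illustrative assumptions, not the production CIU format:

```python
from dataclasses import dataclass, field
import hashlib
import time


@dataclass(frozen=True)
class CIU:
    """One knowledge atom: Name, Form, Dharma, parent link, confidence, time.
    Illustrative sketch only; the actual CIU schema is not public."""
    name: str             # what the fact is about
    form: str             # how it is structured and represented
    dharma: str           # purpose: when it applies, when it doesn't
    parent_hash: str      # cryptographic fingerprint of the parent atom
    confidence: float     # computed trustworthiness, 0.0 to 1.0
    temporal_anchor: int = field(default_factory=time.monotonic_ns)

    @property
    def hash(self) -> str:
        """Fingerprint chains this atom to its parent: altering any field
        changes this hash, which invalidates every descendant's parent link."""
        payload = (f"{self.name}|{self.form}|{self.dharma}|"
                   f"{self.parent_hash}|{self.temporal_anchor}")
        return hashlib.sha256(payload.encode()).hexdigest()


# A two-atom lineage: the child stores the parent's hash, not a copy of it.
root = CIU("squat depth", "guideline", "applies to healthy knees",
           parent_hash="0" * 64, confidence=0.92)
child = CIU("split squat substitution", "recommendation",
            "applies when knee pain is reported",
            parent_hash=root.hash, confidence=0.88)
```

Because the child stores only its parent's hash, rewriting the root atom silently is impossible: the recomputed root hash would no longer match what the child recorded.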

Scale

The system today holds over 100 million knowledge atoms across 346+ indexed collections — spanning medical literature, exercise physiology, pharmacology, nutrition science, and clinical guidelines. Every atom traces its cryptographic lineage back to its sources. You can follow the chain from any recommendation to the evidence it drew from — the entire reasoning path is auditable.

100M+ knowledge atoms · 346+ indexed collections · 3 patents granted/filed · 0 hallucination harms

The Buddy System: Why No Fact Can Vouch for Itself

Here is the architectural principle that changes everything: no single CIU can be used as evidence alone.

Every recommendation, every assertion, every piece of advice requires a second, independent CIU to validate it. Think of it like surgery: a surgeon cannot operate without an anesthesiologist confirming it’s safe to proceed. Two independent experts, two independent assessments, before anything happens.

This is an architectural constraint, compiled into the inference pipeline. The system physically cannot surface a single-source claim at high confidence. If the second atom disagrees, the confidence score drops. If no second atom exists, the system says “I don’t know.”
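The buddy-system gate can be sketched as a function that refuses any claim lacking a second independent atom. The `corroborate` function, its scoring rule, and the dissent penalty below are illustrative assumptions, not the system's actual arbitration logic:

```python
def corroborate(claim: str, atoms: dict, threshold: float = 0.7) -> str:
    """Require two independent CIUs before surfacing a claim.
    `atoms` maps a source id to (supports: bool, confidence: float).
    Hypothetical scoring: combined confidence is the weakest supporter,
    penalized for each dissenting atom."""
    supporting = [conf for (ok, conf) in atoms.values() if ok]
    if len(supporting) < 2:
        return "I don't know"          # no second atom: abstain, don't guess
    dissenting = [conf for (ok, conf) in atoms.values() if not ok]
    score = min(supporting) - 0.2 * len(dissenting)
    return claim if score >= threshold else "I don't know"


# Two agreeing sources surface the claim; one source alone never does.
two_sources = {"pubmed_rct": (True, 0.9), "clinical_guideline": (True, 0.85)}
one_source = {"pubmed_rct": (True, 0.9)}
```

With `two_sources` the claim surfaces; with `one_source` the gate returns "I don't know", and adding a dissenting atom drags the score below threshold, just as the buddy rule requires.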

46 Axioms That Cannot Be Overridden

The system runs on COQI — a framework of 46 governing axioms organized into eight domains, plus seven meta-axioms that govern how the axioms themselves interact.

Core Cognitive Integrity (Axioms 1–10): epistemic anchoring, symbolic coherence, bounded cognition
Goal Formation & Resolution (Axioms 11–15): predictive divergence, error correction, temporal anchoring
Fiduciary & Jurisdictional Governance (Axioms 16–21): trust boundaries, contractual coherence, legal traceability
Audit Trail & Symbolic Memory (Axioms 22–28): auditability, ledger causality, confidence dynamics
Recursive Access & Emergent Cognition (Axioms 29–34): memory access, cross-axiom interference, causal emergence
Cross-Agent Protocols (Axioms 35–40): multi-agent fidelity, failure absorption, role arbitration
Trust, Tokens & Guarantees (Axioms 41–45): proxy trust, activation control, deterministic deadlock resolution
Meta-Axioms, Z-Series (Z1–Z7): axiomatic arbitration, immutable versioning, trust-priority cascades

These are laws — compiled into the inference pipeline, enforced at runtime. An agent cannot override them any more than a calculator can decide that 2 + 2 = 5.
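One way to picture "compiled in, enforced at runtime" is a fixed set of guard functions that every response must pass before it leaves the pipeline, with no code path that skips them. The axiom names and checks below are hypothetical stand-ins, not the actual COQI axioms:

```python
# A frozen tuple of (name, check) pairs: nothing at runtime can append to,
# reorder, or remove these guards. The specific checks are invented examples.
AXIOMS = (
    ("epistemic_anchoring", lambda out: out["sources"] >= 2),
    ("bounded_cognition",   lambda out: 0.0 <= out["confidence"] <= 1.0),
    ("auditability",        lambda out: len(out["lineage"]) > 0),
)


def enforce(output: dict) -> dict:
    """Every response passes every axiom or is rejected outright;
    agents are given no override path."""
    for name, check in AXIOMS:
        if not check(output):
            raise ValueError(f"axiom violated: {name}")
    return output
```

A response with two sources, an in-range confidence, and a non-empty lineage passes; drop it to a single source and `enforce` raises instead of letting the answer through.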

The Confidence-Falsifiability Delta

Every CIU carries a CF Delta — a score between 0 and 1 that represents its trustworthiness right now. This score isn’t static. It changes in real time as new evidence arrives, as contradictions are detected, as the system learns.

When the CF Delta falls below threshold, the system does something most AI refuses to do: it stops talking. It doesn’t hedge. It doesn’t hallucinate a plausible-sounding answer. It says “I don’t have enough confidence to answer that” and explains what would need to be true for it to respond.

This is epistemic humility — not as a feature, but as a hard constraint.
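As a sketch, the CF Delta's real-time behavior resembles a score nudged toward 1 by corroborating evidence and toward 0 by contradictions, with a hard abstention threshold underneath. The update rule, weight, and threshold below are assumptions for illustration, not the system's actual dynamics:

```python
def update_cf_delta(cf: float, evidence_agrees: bool, weight: float = 0.1) -> float:
    """Move the score toward 1.0 on corroboration, toward 0.0 on contradiction.
    A simple exponential-moving-average sketch of 'changes in real time'."""
    target = 1.0 if evidence_agrees else 0.0
    return cf + weight * (target - cf)


def respond(claim: str, cf: float, threshold: float = 0.6) -> str:
    """Below threshold the system abstains and says why, rather than hedging."""
    if cf < threshold:
        return (f"I don't have enough confidence to answer that "
                f"(CF Delta {cf:.2f} is below the {threshold} threshold).")
    return claim


# Two corroborating pieces of evidence lift a borderline atom over threshold.
cf = 0.55
cf = update_cf_delta(cf, evidence_agrees=True)
cf = update_cf_delta(cf, evidence_agrees=True)
```

A contradiction pushes the score back down the same way, so an atom's right to be surfaced is continuously re-earned rather than granted once.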

Cryptographic Integrity: The Bonsai-Merkle Tree

Every CIU is stored in a Bonsai-Merkle Tree — a data structure that combines the cryptographic properties of a Merkle tree (the same structure that secures Bitcoin) with intelligent pruning for memory efficiency.

Each atom’s hash includes its content, its parent’s hash, and its temporal anchor. Change any atom and every descendant hash breaks. This means tampering anywhere in the lineage is immediately detectable, and every recommendation’s provenance stays auditable end to end.
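The tamper-evidence property can be demonstrated with a plain hash chain. This is a deliberately simplified sketch: it shows only the parent-hash linkage described above, omitting the Bonsai-Merkle tree's branching and pruning:

```python
import hashlib


def atom_hash(content: str, parent_hash: str, anchor: int) -> str:
    """Hash covers content, parent hash, and temporal anchor, as described above."""
    return hashlib.sha256(f"{content}|{parent_hash}|{anchor}".encode()).hexdigest()


def build_chain(contents: list) -> list:
    """Link atoms root-to-leaf; each atom stores its parent's hash."""
    chain, parent = [], "0" * 64
    for anchor, content in enumerate(contents):
        h = atom_hash(content, parent, anchor)
        chain.append({"content": content, "parent": parent,
                      "anchor": anchor, "hash": h})
        parent = h
    return chain


def verify(chain: list) -> bool:
    """Recompute every hash from the root down; a single altered atom
    fails its own check and breaks the parent link of every descendant."""
    parent = "0" * 64
    for atom in chain:
        if atom["parent"] != parent:
            return False
        if atom_hash(atom["content"], parent, atom["anchor"]) != atom["hash"]:
            return False
        parent = atom["hash"]
    return True


# Intact chain verifies; editing any atom's content flips verification to False.
chain = build_chain(["source paper", "extracted fact", "recommendation"])
```

Running `verify` on the fresh chain succeeds; mutate any atom's `content` and verification fails from that atom onward, which is exactly the "change any atom and every descendant hash breaks" guarantee.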

What This Means for You

If you’re a personal trainer using Harley, this architecture is working behind every workout it builds. When Harley suggests a Bulgarian split squat instead of a barbell squat for a client with knee pain, that recommendation traces through verified exercise science, anatomy, and rehabilitation literature — not a language model’s best guess.

If you’re using Asha for health guidance, every response is grounded in peer-reviewed medical literature, verified by the buddy system, scored for confidence, and auditable back to its sources.

The model is just the voice. The architecture does the thinking.

Patent Status: US 19/290,471 (Allowed, Oct 2025) covers the core Cognitive Inference system. Additional provisional applications filed for the CIU cryptographic architecture (1766-003USP1) and the COQI axiomatic framework (1766-002USP1).

What We’re Building Toward

The system grows every day — through structured ingestion, validation, and its own continuous contradiction detection and resolution processes.

We believe AI should be held to the same standard as a physician, a lawyer, or a financial advisor — fiduciary duty. The obligation to act in your interest, prove its work, and say when it doesn’t know.

46 axioms, seven meta-axioms, and a cryptographic ledger that makes it structurally impossible to do otherwise.

Deepan Singh, MD, FAPA & Paridhi Anand, MD
Co-Founders, DNAi Systems