General Reasoning · Before You Ask

We built the answer
before we built
the business case.

An honest account of why we built what we built, what each component does, what we are not pretending it does, and the one question worth asking us first.


Regulated enterprises are deploying AI.
The frameworks weren't written for this.

SOC 2. FedRAMP. HIPAA. CMMC. Every one of them was written before an agent could execute a thousand workflow steps in the time it takes a compliance officer to open a ticket. The existing frameworks do not cover this. Not because the authors were negligent — because the problem did not exist yet.

The gap between what compliance frameworks assume and what regulated AI systems actually do is widening every quarter. Nobody with a material stake in the current order is saying this plainly. They have too much invested in the frameworks as written.

The answer is not faster snapshots. The answer is a continuous, append-only, unforgeable record of every state transition — one that makes the audit record the authorization mechanism for what happens next, not a receipt for what already happened.

Most AI failures are not model failures.
They are question failures.

A reasoning system — human or machine — can only produce outputs as well-formed as the question it received. If the question is underspecified, ambiguous, or structurally flawed, even a powerful system produces misleading outputs with high confidence. In a regulated environment, that compounds: a wrong answer anchored to a malformed question generates an audit trail that records the wrong answer as authoritative.

The industry response has been better models, stronger guardrails, and faster audits. None of those address the actual failure point. Guardrails operating on bad inputs produce compliant nonsense. Audit logs recording malformed decisions produce evidentiary garbage. Better models answering flawed questions produce more articulate errors.

The correct intervention is upstream. Question formation must be treated as a first-class, enforceable, auditable system component — not a UX consideration or a model capability. A question that has not been validated has not been authorized. A decision that proceeds from an unvalidated question is not a governed decision.

The authorization chain must begin before execution — at the point where intent is formalized, constraints are made explicit, and ambiguity is resolved or rejected. The audit record is not downstream of the decision. It is the precondition for it. That is the architectural claim this stack makes. Each component enforces it at a different layer.
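That precondition can be made structural rather than procedural. The sketch below is illustrative only — the names (`Question`, `ValidatedQuestion`, `audit_ref`) are hypothetical and not part of any General Reasoning API. The point it demonstrates: if execution accepts only a type that cannot be constructed without writing the audit record first, an unvalidated question is not merely unadvised, it is unauthorizable.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Question:
    intent: str
    constraints: tuple[str, ...]


@dataclass(frozen=True)
class ValidatedQuestion:
    # Only validate() constructs this type; holding one implies an audit record exists.
    question: Question
    audit_ref: str


class UnvalidatedQuestionError(Exception):
    pass


def validate(question: Question, audit_log: list[dict]) -> ValidatedQuestion:
    """Resolve or reject ambiguity up front; record validation before anything runs."""
    if not question.intent.strip():
        raise UnvalidatedQuestionError("intent must be made explicit")
    if not question.constraints:
        raise UnvalidatedQuestionError("constraints must be made explicit")
    # The audit record is written here, as a precondition of execution —
    # not appended afterward as a receipt.
    audit_log.append({"event": "question_validated", "intent": question.intent})
    return ValidatedQuestion(question, audit_ref=f"audit:{len(audit_log) - 1}")


def execute(vq: ValidatedQuestion) -> str:
    # Execution takes only ValidatedQuestion: the type system enforces the gate.
    return f"executed under {vq.audit_ref}"
```

A raw `Question` cannot reach `execute` without passing through `validate`, so the "unvalidated question, ungoverned decision" failure mode is excluded by construction.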

Six components. Each load-bearing.
None optional in a fully governed deployment.

You do not need all six on day one. But you should know what they are and why they connect.

Chandra Protocol
chandraprotocol.com

The audit foundation. Open, append-only, hash-chained record of every artifact and state transition. MIT licensed. The Chandra CU is unforgeable — you cannot retroactively alter a chain without breaking every subsequent hash. This is not a logging system. It is an evidentiary substrate.
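The unforgeability claim rests on a simple construction: each entry's hash covers the previous entry's hash, so retroactively altering any record invalidates every hash after it. A minimal sketch — illustrative only, not the Chandra Protocol's actual record format:

```python
import hashlib
import json


def _entry_hash(prev_hash: str, payload: dict) -> str:
    # Canonical serialization, then hash over (previous hash + payload):
    # every entry commits to the entire chain before it.
    body = json.dumps(payload, sort_keys=True)
    return hashlib.sha256((prev_hash + body).encode()).hexdigest()


def append(chain: list[dict], payload: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"payload": payload, "hash": _entry_hash(prev, payload)})


def verify(chain: list[dict]) -> bool:
    # Recompute every link; any tampered entry breaks all subsequent hashes.
    prev = "0" * 64
    for entry in chain:
        if entry["hash"] != _entry_hash(prev, entry["payload"]):
            return False
        prev = entry["hash"]
    return True
```

Appending is cheap; rewriting history requires recomputing every subsequent hash, which is exactly what an externally anchored or witnessed chain head makes detectable.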

DXMachine
genreason.com/platform

The compliance-grade Value Stream Management platform. Pre-built regulated workflows, card-level work item tracking, audit-native process orchestration. Chandra runs underneath from day one. This is the entry point for most organizations — immediate operational value while the full compliance architecture assembles underneath you.

CRC Standard
crcstandard.com

Chain Responsibility Continuity — an open architectural standard defining what an unbroken chain of responsibility requires: no orphaned state transitions, no unattributed agent actions, no authorization gaps between human decision and machine execution. MIT licensed. General Reasoning publishes and governs the standard.
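Those three invariants can be sketched as a mechanical check. The field names below (`id`, `actor`, `parent`) are hypothetical illustrations, not taken from the CRC Standard itself:

```python
def continuity_violations(transitions: list[dict]) -> list[str]:
    """Flag CRC-style gaps: every transition must name an actor, and every
    transition after the first must chain to a parent that actually exists."""
    known_ids = {t["id"] for t in transitions}
    violations = []
    for t in transitions:
        if not t.get("actor"):
            violations.append(f"{t['id']}: unattributed agent action")
        parent = t.get("parent")
        if parent is None and t is not transitions[0]:
            violations.append(f"{t['id']}: orphaned state transition")
        elif parent is not None and parent not in known_ids:
            violations.append(f"{t['id']}: authorization gap (unknown parent {parent})")
    return violations
```

A conforming chain yields an empty list; any non-empty result names the exact transition where attribution or continuity broke.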

GABA Standard
gabastandard.com

Governed AI Boundary Attestation. A formal standard for documenting and attesting residual risks at AI inference boundaries that architecture alone cannot eliminate. Where CRC defines the posture, GABA governs the acknowledgment of what remains. A CISO can take a GABA certification to a board.

ANDM
genreason.com

Agent-Native Development Maturity. A maturity model for organizations deploying agents into regulated workflows. Three invariants: dark factory not dark code, auditability as authorization, reconstruction over recovery. ANDM gives procurement a scoring framework and engineering a defensible architectural posture.

Aegis Genera
aegisgenera.com

A purpose-built Linux image constructed with the Yocto Project containing exactly what is required to run Allegro Common Lisp — and nothing else. No shell binary. No package manager. No browser. No USB or Bluetooth support. No unnecessary kernel subsystems. Read-only root filesystem with a signed boot chain and TPM attestation. The application-layer attack surface is not hardened. It is eliminated — absent by construction, not reduced by configuration. A shell that is not present cannot be exploited. A package manager that does not exist cannot be abused for persistence. A Mythos-class model scanning this image finds no application-layer surface to chain across.

The honest answers to
the obvious objections.

Why Lisp in 2026?

Because Lisp is not a language choice — it is a thought amplification choice. Allegro Common Lisp has a paying customer base in regulated industries today: financial services, defense, intelligence. They pay for it because it is worth paying for. The ecosystem is small, coherent, and commercially serious — exactly the qualities a governed execution substrate requires. You can substitute SBCL. We chose Allegro because we are building toward paying customers, not toward the open source community.

Why build new standards instead of extending existing ones?

Because existing standards were not designed for agent execution speed and cannot be extended to cover it without breaking their own internal logic. SOC 2 Trust Service Criteria assume human-mediated controls. CRC, GABA, and ANDM do not extend SOC 2 — they cover the ground SOC 2 cannot reach. They are additive, not competitive. An organization pursuing SOC 2 Type II is a better candidate for CRC certification, not a worse one.

Why a small Birmingham company and not a known vendor?

Because the known vendors have too much invested in the current order. Oracle can add an AI feature and call it governance. Salesforce can ship an agent runtime and call it compliant. Neither of them can rebuild their audit architecture from the substrate up without breaking existing revenue. We can. We built from the audit record outward, not from the existing product inward. That is not a positioning claim. It is a structural fact about what is possible from each starting position.

Is this complexity justified, or would something simpler work?

Something simpler will work until it does not — and in a regulated context, "until it does not" means a failed audit, a breach incident, or an unauthorized agent action with no evidentiary trail. The complexity in this stack is not aesthetic. Each component exists because the problem it solves cannot be solved by the component below it. The stack is the minimum coherent answer to the full problem. Not the maximum.

Can a company this early be trusted with compliance infrastructure?

That is the right question. The honest answer: evaluate the architecture, not the company size. Chandra Protocol is MIT licensed and fully auditable. CRC and GABA are open standards. The core of DXMachine is built on Allegro Common Lisp and AllegroServe — production-proven infrastructure with a decades-long track record. We are small. The architecture is not fragile. Those are separable facts. We invite the scrutiny.

Start here. The rest becomes visible from inside.

Start with DXMachine and Chandra. Pre-built regulated workflows, card-level work item tracking, audit-native process design. Chandra runs underneath from day one — every state transition recorded, hash-chained, unforgeable. We handle the data transforms into DX. You get operational value immediately while the compliance architecture assembles underneath you.

We are not going to answer every question before you know what to ask. The architecture is coherent. The standards are open. The code is auditable. If something does not make sense, ask us. That conversation is more useful than five more pages of documentation.