Legal · Clinical · AI Disruption · Regulated Industries

The two domains everyone is talking about.
What they are not talking about.

Legal and clinical workflows are widely cited as prime AI disruption targets. The disruption is real and it is already underway. What the conversation consistently misses is the regulatory audit requirement — the layer that general-purpose AI tools cannot satisfy and that DXMachine is architecturally designed to address.

Replacing the work is the easy part. Producing work that survives a regulatory examination, an OCR audit, an accreditation review, or a bar complaint is the hard part. In regulated environments, the disruption is not just about speed and cost. It is about defensibility. That is the layer DXMachine owns.

The Shared Thesis

AI is replacing the work.
It is not replacing the accountability.

Every major law firm and health system is running AI pilots. Document review, prior authorization, clinical documentation, contract analysis — all being hit simultaneously. The productivity gains are real. The disruption to entry-level professional work is real. What is not being solved is the audit trail problem: when an AI system produces a legal determination or a clinical recommendation in a regulated environment, someone is still accountable for it, and that accountability requires a defensible record of what the AI did, why, and under what conditions.

What general-purpose AI delivers
Faster outputs with no audit architecture
ChatGPT, Copilot, and every general-purpose AI tool applied to legal and clinical work produces outputs faster and cheaper than the human equivalent. None of them produce an audit trail that answers the question a regulator, a bar examiner, or an accreditation body will ask: what model produced this output, what data did it reason over, what capability constraints were in place, and who was accountable for the result.
What DXMachine adds
Workflow-native attestation at the point of production
Every AI-assisted output produced inside a DXMachine workflow is hardware-attested at the moment of production — not assembled retrospectively. The audit trail is not a log file. It is a cryptographically signed execution record that documents the model, the inputs, the capability constraints, the human review step, and the disposition. The output is defensible because the process that produced it is documented, continuously, as a native property of the workflow itself.
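
DXMachine's record format is not reproduced here, but a minimal sketch can make the shape concrete. The following Python is illustrative only: the field names, the schema, and the use of an Ed25519 software key are assumptions, and a production system would sign with a hardware-backed key rather than one generated in process.

```python
import json
from datetime import datetime, timezone

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical execution record -- field names are illustrative,
# not DXMachine's published schema.
record = {
    "model": "clinical-coder-v2",                  # which model produced the output
    "input_digest": "sha256:...",                  # hash of the data it reasoned over
    "capability_constraints": ["no-external-calls", "phi-scoped"],
    "human_review": {"reviewer": "j.alvarez", "action": "approved"},
    "disposition": "prior-auth-approved",
    "produced_at": datetime.now(timezone.utc).isoformat(),
}

# Canonical serialization, then a detached signature over the exact bytes.
payload = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
signing_key = Ed25519PrivateKey.generate()         # stand-in for a hardware-backed key
signature = signing_key.sign(payload)

# (payload, signature) is the audit artifact: bound to the moment of
# production and unforgeable without the signing key.
```

The point is the binding: the record is signed when the output is produced, not reconstructed from log files when an examiner asks.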

"The question is not whether AI will disrupt legal and clinical work. It already is. The question is which organizations will be able to defend the AI-assisted work they are already doing — and which ones will discover the gap when an examiner asks."

Two Domains · One Architecture

Different disruption vectors.
Identical audit requirement.

Clinical Operations · Regulated Healthcare
Clinical
Prior Authorization · Clinical Documentation · Coding Compliance · HEDIS · Accreditation Evidence · AI Governance
Clinical AI disruption is arriving from multiple directions simultaneously — payer AI systems automating prior authorization decisions, ambient documentation tools capturing clinical encounters, coding AI classifying diagnoses and procedures, care gap identification tools surfacing HEDIS measure opportunities. Each of these produces outputs that CMS, OCR, or an accreditation body may examine. Almost none of them produce the attestation record that examination requires.
The specific exposure: prior authorization AI producing coverage determinations without a defensible audit trail. CMS has made clear that AI-assisted prior auth decisions are subject to the same review requirements as human decisions. The clinical organization that cannot produce a complete record of how a prior auth determination was reached — what clinical criteria were applied, what AI system evaluated them, what human review occurred — is carrying undocumented regulatory exposure on every AI-assisted determination it has made.
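
Continuing the hypothetical schema from the sketch above, producing that record on demand reduces to a verification step like the following. A real prior auth record would also carry the clinical criteria applied; the required fields and function name here are assumptions, not DXMachine's API.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# The elements an examiner asks for: which model, which inputs, which review.
REQUIRED_FIELDS = {"model", "input_digest", "capability_constraints",
                   "human_review", "disposition", "produced_at"}

def verify_determination(payload: bytes, signature: bytes,
                         public_key: Ed25519PublicKey) -> dict:
    """Return the determination record iff the signature holds and it is complete."""
    try:
        public_key.verify(signature, payload)  # raises InvalidSignature if tampered
    except InvalidSignature:
        raise ValueError("record was altered or never attested")
    record = json.loads(payload)
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"determination record incomplete: {sorted(missing)}")
    return record
```

An organization that can run this check over every AI-assisted determination it has made is in a different regulatory position than one reconstructing the chain of events after the fact.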
What clinicians are actually worried about — and where DXMachine stands
The black box problem. Clinicians don't trust AI recommendations they can't explain. DXMachine doesn't make the model transparent — that's a model problem, not a workflow problem. What it does is document exactly what model produced the output, what inputs it reasoned over, and what capability constraints were in place. That is not the same as explainability, but it is what a regulator asks for — and it is what no general-purpose AI tool currently provides.
Liability and accountability. When AI makes a mistake, who is responsible? Inside a DXMachine workflow, that question has a documented answer. The z-board chain of custody records every human review step, every hand-off, every sign-off. The accountability chain is not ambiguous — it is timestamped, attributed, and locked (see the sketch after this list). The clinician who reviewed and approved the AI output is identified. The AI system that produced it is identified. The record exists.
Automation bias and over-reliance. The danger of clinicians deferring to AI outputs without applying judgment is real and well-documented. DXMachine's Agent Examiner — a non-bypassable constitutional constraint on every agent dispatch — enforces human review steps at the workflow level. The architecture does not allow AI outputs to move through a workflow without the human checkpoints the workflow requires. Automation bias is not a culture problem you manage. It is an architecture problem you design out.
Bias and inequality. AI models trained on biased data produce biased outputs. DXMachine attests what model was used — it does not fix the model. What it does do is create the audit record that makes biased AI outputs discoverable: when a pattern of disparate recommendations surfaces in an OCR audit or an accreditation review, the DXMachine record identifies exactly which model produced which outputs under which conditions. That is the foundation any remediation or accountability process requires.
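
As a sketch of the two mechanisms named above, the z-board chain of custody and the Agent Examiner review gate, the pattern might look like the following. Every name and interface here is an assumption for illustration; DXMachine's internals are not published in this document.

```python
import hashlib
import json
import time

class CustodyChain:
    """Hash-chained hand-off log: each entry commits to every entry before it."""

    def __init__(self):
        self.entries = []
        self.head = "genesis"

    def record(self, actor: str, action: str) -> None:
        entry = {"actor": actor, "action": action,
                 "at": time.time(), "prev": self.head}
        # Altering any past entry changes its hash and breaks every later link.
        self.head = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

def dispatch_agent(task, chain: CustodyChain):
    """Hypothetical gate: output cannot advance without a recorded human review."""
    output = task.run()                          # the AI does the work
    chain.record(actor=task.model, action="produced-output")
    review = task.await_human_review(output)     # blocks until a named reviewer acts
    if review is None:
        raise PermissionError("no human review recorded; dispatch refused")
    chain.record(actor=review.reviewer, action=f"review:{review.action}")
    return output
```

The shape matters more than the code: review steps and hand-offs become data an examiner can replay, and the human checkpoint is a structural property of the dispatch path rather than a policy that can be skipped.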
The disruption window: clinical administrative AI is being adopted faster than governance frameworks are being built. The gap between "we use AI for prior auth" and "we can defend every prior auth decision AI touched" is where the regulatory exposure lives — and where DXMachine's attestation architecture is the specific answer.
Domain Advisory Board · Two New Seats

The people who have been accountable
for the outputs AI is now producing.

Both seats have the same underlying filter: operational accountability for workflows whose outputs had regulatory consequences — not thought leadership about AI disruption in their field. The person we want has already tried to deploy a general-purpose AI tool into a legally or clinically regulated workflow and hit the wall where the outputs weren't defensible. That failure experience is the credential.

Domain · Seat 06
Open
Clinical Operations · Regulated Healthcare
Prior Authorization · Clinical Documentation · Coding · HEDIS · CMS · Accreditation · AI Governance
The clinical disruption advisory seat calls for someone who has managed clinical administrative workflows at the intersection of care delivery and compliance obligation — not a physician who speaks about AI in medicine, but the person accountable for workflows whose outputs CMS, an accreditation body, or an OCR auditor has actually examined. They have watched general-purpose AI land in their department, have seen the trust and accountability gaps firsthand, and have been looking for a governance layer that actually closes them.
Looking for
  • VP of Clinical Operations, Director of Clinical Documentation Integrity, or equivalent at a health system or regional hospital network — someone accountable for workflow outputs that survived or failed regulatory examination
  • Direct experience with prior authorization operations at scale — specifically the governance gap between "AI assisted the determination" and "we can defend every determination AI touched"
  • Working knowledge of CMS prior auth requirements as they apply to AI-assisted decisions, and what an OCR audit of AI-assisted clinical documentation actually looks like
  • A perspective on the black box trust problem from the operator side — not the clinician who distrusts AI outputs, but the compliance officer who cannot produce the documentation that would make those outputs defensible
  • Exposure to the liability and accountability question in practice — when an AI-assisted clinical decision is challenged, who in your organization is named, and what documentation you reached for
  • A view on where automation bias is creating real exposure in organizations that have deployed clinical AI without human review enforcement at the workflow level

Not looking for
Physicians whose primary relationship to clinical AI is as end users or conference advocates. Health IT vendors or consultants whose business model depends on the current tooling landscape. People whose clinical compliance experience is policy-level rather than operational. AI ethicists whose concern is bias in the abstract rather than bias in the audit record.

If your workflows are already using AI
and you cannot yet defend every output,
we want to talk.

No pitch deck. No NDAs on first contact. A conversation about the audit trail gap, the architecture that closes it, and whether there is fit for an advisory seat or a design partner relationship.

Seats 05 and 06 are part of the Domain Advisory Board. View all ten open seats on the Advisory Board page.