Jack Clark cofounded Anthropic.
He is not a critic of the AI industry writing from the outside. He is not a researcher publishing findings that the labs will quietly dispute. He is one of the people who built one of the most consequential AI companies in the world, and he went to USA TODAY two weeks ago and said this:
The AI industry has an urgent problem of disclosure.
That is not a small statement. That is an insider, speaking in public, using the word urgent, attaching it to the word problem, and directing it at his own industry.
Read it again slowly.
The AI industry has an urgent problem of disclosure.
When someone with that credential says that in a national newspaper, the right response is not to nod along and move to the next article. The right response is to ask the question his statement requires.
Urgent problem. Identified. Acknowledged. Publicly stated.
Now what?
Clark’s piece is worth reading in full because it is genuinely thoughtful. The responses he documents from 81,000 people across 159 countries are real. The Chilean butcher who built a new business. The Japanese software engineer who now has time to cook dinner with his family. The health care worker freed from document processing to spend more time with patients.
The aspirations are real. Better work. More personal growth. More time.
The fears are just as real. Losing the ability to think for yourself. Becoming dependent on something you didn’t choose to need. Watching your livelihood shift in ways no one asked you about and no one warned you were coming.
Clark says those fears should shape what gets built. He means it. The survey wasn’t performative — 81,000 people in 70 languages is a genuine effort to hear from the people on the receiving end of this technology.
And then the piece arrives at its conclusion.
Make your voice heard. Write to your senator or member of Parliament. Tell them what matters to you. Show up like the British railway passengers who demanded access. Show up like the rural American communities that organized for electricity.
That’s where the piece lands.
Public engagement. Civic participation. The democratic process applied to a technical governance problem.
Here is where the argument runs into trouble.
Clark’s historical examples are instructive, but not in the way he intends.
The British railway passengers didn’t write letters and hope for the best. Parliament acted. It passed a law. The law defined a specific, enforceable standard — one affordable train per line, every day, with seats and shelter from the weather. Not a principle. Not an aspiration. A requirement with teeth.
The rural American communities didn’t organize and then wait for the power companies to come around. They formed cooperatives. They demanded power. They got legislation. The Rural Electrification Act of 1936 didn’t ask the utility companies to consider serving rural areas. It funded the infrastructure that made it happen regardless of whether the companies found it profitable.
In both cases, the public engagement mattered because it produced enforcement. Not conversation. Enforcement.
Clark is calling for the conversation. That’s the right first step. But a conversation without an enforcement mechanism that makes its outcome binding is just talk with better attendance.
The disclosure problem he’s identified doesn’t get solved by more disclosure. It gets solved by accountability structures that exist independent of whether the labs are having a good quarter and feeling generous with information.
He says it himself, almost in passing.
AI safeguards must grow in tandem with AI capabilities.
That sentence is the entire governance argument in nine words. It is exactly right. And it is exactly what isn’t happening.
The capabilities are growing. The benchmarks are improving. The parameter counts are expanding. The cost per token is dropping. The deployment volume is accelerating.
The safeguards are not keeping pace. The frameworks being published are principles documents — carefully written, genuinely intended, and structurally insufficient. They describe what should happen. They do not define what happens when it doesn’t.
That gap, between the stated safeguard and the enforceable standard, is where the drift lives. It’s where GPT-4 quietly degraded between March and June, a shift that surprised even the researchers measuring it. It’s where the open-source models get modified and deployed in jurisdictions with no governance frameworks at all. It’s where the cross-platform integrations create accountability gaps that none of the individual terms of service were written to address.
Clark knows this. His piece is evidence that he knows it. He just doesn’t have a solution that matches the scale of the problem he’s describing.
The Faust Baseline is not a response to Clark’s critics. It is a response to Clark’s own argument.
He said the industry has an urgent disclosure problem. He’s right.
He said safeguards must grow in tandem with capabilities. He’s right.
He said companies cannot do this alone and governments must engage. He’s right about that too.
But between the company’s stated commitment and the government’s eventual engagement sits the actual daily operation of these systems. The version updates that ship on Tuesday. The margin decisions that accumulate quietly. The behavior that changes between the model you trusted last month and the model you’re using today.
That space — the space between stated principle and enforced standard — is ungoverned. Not because no one cares. Because no one has built the enforcement architecture designed to operate in that space.
That is what The Faust Baseline is.
Not another principles document. Not a call for public engagement. Not an op-ed in a national newspaper asking people to write their senators.
An enforcement architecture. Built for the space Clark is describing. Operational now. Not waiting for the government to engage, not waiting for the labs to decide they have disclosed enough, not waiting for the next survey of 81,000 people to confirm what the last one already showed.
The conversation Clark is calling for is necessary. It isn’t sufficient.
The Faust Baseline is what sufficiency looks like.
Jack Clark said it plainly.
Urgent problem. His word. His industry. His newspaper.
The problem has been named. The enforcement architecture that addresses it exists.
The question now is whether the people who read that op-ed and nodded along are ready to move from nodding to expecting something that holds.
That’s the line between participation and accountability.
The Baseline is on the accountability side of that line.
AI Stewardship… The Faust Baseline 3.0 is available now
Purchasing Page – Intelligent People Assume Nothing
“Your Pathway to a Better AI Experience”
Unauthorized commercial use prohibited. © 2026 The Faust Baseline LLC