Sam Altman said something this week that deserves a careful look.
He said Anthropic is using fear-based marketing to sell Claude Mythos, the new model Anthropic declined to release publicly, saying it was too good at finding cybersecurity vulnerabilities.
Instead of a public release, Anthropic ran something called Project Glasswing. Eleven companies got private access. Google. Microsoft. Amazon. Nvidia. JPMorgan Chase. The short list of people who run things.
Altman’s take, delivered on a podcast, went like this.
“It is clearly incredible marketing to say, ‘We have built a bomb. We were about to drop it on your head. We will sell you a bomb shelter for $100 million to run across all your stuff, but only if we pick you as a customer.’”
That is a sharp line. Genuinely sharp. I will give him that.
Here is the problem.
Sam Altman runs a company currently valued somewhere around $300 billion. That company has spent the last three years telling anyone who would listen that artificial intelligence might end human civilization as we know it. They have published safety research. Testified before Congress. Warned of existential risk. Built a safety board, dissolved it, rebuilt it. They call themselves a safety company.
And then they raise another $40 billion and build faster anyway.
So when Altman stands up and says someone else is selling fear — you have to stop and look at who is talking.
He is not wrong about the tactic. Fear-based marketing in AI is real. It exists everywhere in this industry. The entire enterprise of AI safety theater is built on it. You create concern, then you sell the solution to the concern, and you make sure you are the only one qualified to provide the solution.
That is not unique to Anthropic. That is the business model of the entire top tier of AI development, Altman’s company included.
The part that deserves attention is the second half of what he said.
He said some people in tech have wanted to keep AI in the hands of a smaller group of people. And he suggested Anthropic is using safety language to justify that kind of control.
He is not entirely wrong about that either.
But here is what he left out.
OpenAI has withheld models. Changed access policies. Reversed commitments. Restructured its nonprofit governance to serve investor returns in ways that its own board members publicly objected to. The company that was built as a nonprofit research lab to benefit humanity is now a capped-profit structure in the middle of converting to a full for-profit corporation.
Altman’s own investors — Microsoft, SoftBank, the sovereign wealth funds now lining up — are not pouring money in because they want AI in everybody’s hands. They are pouring money in because they want returns.
The bomb shelter Altman is selling just has more floors and costs more.
Now add this.
Altman suggested in the same interview that Anthropic’s doom-and-gloom messaging around AI may have contributed to the environment that led to an attack on his home.
That is a serious claim to casually drop into a podcast conversation. It puts responsibility for real-world violence at the feet of a competitor’s communication strategy. That is not analysis. That is something else.
Here is what I think is actually happening.
Anthropic locked eleven of OpenAI’s biggest enterprise targets into a private preview of a model Altman cannot touch and cannot match right now. Google, Microsoft, Amazon — these are the relationships that determine who wins the next five years of AI deployment.
Altman is not angry about the marketing. He is angry about the deal flow.
The bomb metaphor is clever. But the man complaining about bomb shelter salesmen has been running the most successful bomb shelter company on earth for three years.
Speak plain. Work true.
AI Stewardship — The Faust Baseline 3.0 is available now
Purchasing Page – Intelligent People Assume Nothing
“Your Pathway to a Better AI Experience”
Unauthorized commercial use prohibited. © 2026 The Faust Baseline LLC