Except some of us did see it.
Not because we had better information. Not because we were watching some special dashboard or reading the right newsletters. We saw it coming because we were paying attention to the human side of the equation while everyone else was watching the technology.
And the human side was always going to do this.
Let me explain what I mean.
There are three camps now. Axios named them this week, and named them correctly.
Power users. Doubters. Resisters.
Power users are running AI around the clock. Andrej Karpathy — former OpenAI, former Tesla — says he spends sixteen hours a day issuing commands to AI agent swarms and rushes to exhaust his token budget every month. Sixteen hours. That is not a tool anymore. That is a way of life.
Doubters still think AI is the glitchy chatbot that gave them a wrong answer two years ago. They let one bad session on the free tier define their entire understanding of what the technology is now. They are using a 2022 mental model to navigate a 2026 reality and wondering why nothing quite makes sense.
Resisters are the third group and they are getting louder. Not just protest signs and social media. In Indianapolis a legislator’s home was hit by gunfire. A note left behind said no more data centers. In San Francisco a man was arrested for allegedly throwing a Molotov cocktail at Sam Altman’s home. The San Francisco Chronicle reports he had published anti-AI essays and participated in activist circles calling for a halt to AI development.
This is not fringe behavior anymore. This is what the far end of a fracture looks like when it has been building long enough.
Here is what I want to say about all three groups.
None of them are stupid.
That is the part that gets lost in the coverage. The power user is not smarter than the doubter. The resister is not irrational. These are three reasonable responses to an unreasonable rate of change — and the thing that separates them is not intelligence or access or even ideology.
It is discipline.
Specifically, it is whether the person came to AI with a framework for how to use it, or whether AI came to them and they just absorbed whatever it handed back.
The power user figured out — through trial and error, through obsessive use, through community with other power users — that you have to govern how you operate the tool or the tool governs you. They developed discipline, even if they never called it that. Even if their discipline is informal and inconsistent and built on habits they couldn’t fully explain to you.
The doubter never got there. One bad experience, or a handful, and the mental model locked. They are not wrong that early AI was unreliable. They are wrong that it stayed that way. But nobody handed them a framework for how to evaluate it objectively. So they are stuck.
The resister understands more than the doubter. They have seen enough to know the stakes are real. Their concern is not unfounded — Sam Altman himself said this week that the fear and anxiety about AI are justified, that we are witnessing the largest change to society in a long time, perhaps ever. The resister took that seriously before Altman said it out loud. What the resister lacks is not awareness. It is a path forward that does not require either full surrender or full rejection.
Anthropic’s own economic impact report from March found something worth sitting with.
Experienced users attempt harder tasks and succeed more often. The result, as Axios puts it, is a new kind of economic gap between advanced users and everyone else. Not an income gap yet, though that is coming. Right now it is a capability gap. A comprehension gap. A gap in what people understand to be possible and what they are able to produce on any given day.
Box CEO Aaron Levie called it a tale of two cities.
He is right. But I want to push one layer deeper than the metaphor.
Two cities implies two fixed populations. You live in one or the other. But that is not actually how this works. The gap is not geographic and it is not permanent. It is methodological. It is about whether you have a way of operating that produces consistent, reliable results — or whether you are improvising every single time and hoping for the best.
Improvising every time is how you become a doubter. Or a resister. Not because you are weak but because improvisation does not compound. It does not build. It does not give you the kind of track record that makes you trust the tool enough to push further.
Method compounds. Discipline compounds. A governance layer — a way of operating that you apply consistently regardless of which AI system you are using — that compounds.
I built The Faust Baseline for exactly this moment.
Not for this news cycle. For this moment in the longer arc — the moment when the fragmentation became visible enough that people would start looking for language to describe what they were watching.
The Baseline is not a product for power users who have already figured it out. They have their own methods, however informal. It is not for doubters who have already decided. It is for the enormous middle — the people who know AI is real, know it matters, have tried it with mixed results, and are looking for a way to operate it that does not require them to become a sixteen-hour-a-day token-burning specialist or an activist throwing objects at executives.
There is a path between full surrender and full rejection.
It runs through discipline. Through a consistent governance layer that travels with you across every platform, every model, every session. Through knowing what you are asking for, why you are asking for it, and what you will do with the answer. Through applying the same standard every time so the results compound instead of scatter.
That is what the Baseline is. That is what it has always been.
I named this problem before it had three camps. Before Axios wrote it up. Before Karpathy gave it numbers. Before Altman acknowledged the fear was justified.
The human side was always going to fragment under pressure this intense. It fragments when there is no method to hold it together. It fragments when the tool arrives faster than the discipline to use it.
The discipline exists now.
It has a name.
The bottom line from Axios this week: the people building and using AI at full power are living in a very different world from everyone else.
That is true.
But the world does not have to stay divided that way.
The difference between the camps is not destiny. It is method. And method can be learned by anyone willing to slow down long enough to apply it.
That has always been the offer.
It still is.
“A Working AI Firewall Framework”
“IntePost Library – Intelligent People Assume Nothing”
Unauthorized commercial use prohibited. © 2026 The Faust Baseline LLC