Wharton just put a number on it.
A new study out of the Wharton School gave 359 people a set of problems designed to test whether they would stop and think or just trust whatever came back from the AI. Around half used ChatGPT for help. When the AI gave the right answer, people accepted it. No surprise there.
When the AI gave the wrong answer, 80 percent of people wrote it down anyway.
Eight out of ten. Wrong answer. Accepted without question.
The researchers called it cognitive surrender. The willingness to hand your thinking to a machine and accept whatever comes back as finished thought. They were shocked by the number. I was not.
I have been watching this happen for fourteen months.
Not in a lab. In real sessions. Daily work. Real problems with real stakes attached to them. I built a governance framework for AI sessions because I kept watching the same thing happen to me that the Wharton researchers documented in their study.
The AI would give me a confident, well-assembled, completely wrong answer. And I would almost accept it. Not because I was careless. Because the delivery was smooth. The language was certain. The output looked like something a knowledgeable person would say.
That is the trap. Confidence is not accuracy. Fluency is not truth. The AI does not know what it does not know, and it does not signal uncertainty the way an honest person does. It just keeps talking in the same even tone whether it is right or completely off the rails.
The only protection against that is a user who refuses to surrender.
That turns out to be a minority position.
The Wharton researchers named two possible responses to cognitive surrender. Train users to think critically. Or redesign the AI interface to build in friction — prompts, roadblocks, checks that slow the user down before they accept the output.
Both are reasonable. Neither is happening at scale.
The platforms are not incentivized to build friction into the experience. Friction reduces engagement. Reduced engagement reduces revenue. The business model runs on smooth, satisfying outputs that keep users coming back. Cognitive surrender is not a bug the platforms are rushing to fix. It is closer to a feature.
That leaves the user.
Which is exactly where I started fourteen months ago.
I did not build The Faust Baseline because I read a study. I built it because I kept catching myself about to accept something that was not right. A softened conclusion. A confident claim with nothing behind it. An answer shaped to satisfy rather than to inform.
I caught it enough times that I started asking why it kept happening.
The answer was in the architecture. The AI is trained on billions of human interactions. It has learned what humans find satisfying. Smooth delivery. Confident tone. Agreeable conclusions. It produces those things because they work — in the sense that users accept them and come back for more.
That is not reasoning. That is pattern matching aimed at approval.
The Baseline exists to put a governance layer between that pattern and my decision making. Not to stop using the tool. To use it correctly. With verification built in. With a standing right to challenge every substantive output before it becomes accepted fact.
The three-question check runs before every significant response. Is this claim supported by evidence present in this session? Does this contradict anything established earlier? Is the confidence level proportional to the evidence actually present?
Those three questions are the difference between a governed session and cognitive surrender.
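For readers who think in code, here is a minimal sketch of what that check can look like as a pre-acceptance gate. It is an illustration, not the Baseline's actual mechanics: the names (SessionRecord, passes_governance_check, ask_user) and the yes/no prompts are assumptions about one way to force the pause. The one design point that matters is that every answer comes from the user, not the model.

```python
# A minimal sketch of the three-question check as a pre-acceptance gate.
# Illustrative only; names and structure are assumptions, not the Baseline.

from dataclasses import dataclass, field


@dataclass
class SessionRecord:
    """Hypothetical running record of what the session has established."""
    established_facts: list[str] = field(default_factory=list)
    cited_evidence: list[str] = field(default_factory=list)


def ask_user(question: str) -> bool:
    """Force an explicit yes/no from the human before the output is accepted."""
    return input(f"{question} [y/n] ").strip().lower().startswith("y")


def passes_governance_check(claim: str,
                            stated_confidence: float,
                            session: SessionRecord) -> bool:
    """Apply the three questions to a claim before it becomes accepted fact."""
    # 1. Is this claim supported by evidence present in this session?
    supported = ask_user(
        f"Does anything in the session evidence {session.cited_evidence} "
        f"actually support: '{claim}'?")

    # 2. Does this contradict anything established earlier?
    consistent = ask_user(
        f"Is '{claim}' consistent with what was established earlier: "
        f"{session.established_facts}?")

    # 3. Is the confidence level proportional to the evidence actually present?
    proportional = ask_user(
        f"The answer was delivered with roughly {stated_confidence:.0%} "
        f"confidence. Does the evidence on hand justify that?")

    return supported and consistent and proportional
```

A claim that fails any one of the three questions goes back for verification instead of into the record. That friction is the whole point.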
The Wharton researchers asked the right question at the end of their study. Is the solution on the user side — literacy, training, critical thinking — or on the design side — friction, prompts, roadblocks built into the interface?
Here is the honest answer. It is both. And neither is coming fast enough.
The platforms will not build meaningful friction into their products voluntarily. The training for AI literacy does not exist at scale yet. The regulatory frameworks are moving but they are not operational.
In the gap between where the technology is and where the governance catches up, the only protection available is a user who decided not to surrender.
The study says that is 20 percent of people.
I do not know if that number is right. I know it is not enough.
The cost of cognitive surrender in a casual conversation is low. You get a wrong answer, you move on. The cost in a medical decision, a legal situation, a financial choice, an organizational call that cannot be undone — that cost is not recoverable.
The Baseline was built for those moments. The ones where the smooth confident answer is exactly the wrong thing to accept.
If you are reading this site you are probably already in the 20 percent. You already know something is off about the way most people use these tools. You already push back, double check, demand the reasoning behind the conclusion.
Stay there.
The pull toward surrender is structural. It does not go away because you are smart or careful. It has to be managed actively, every session, against an architecture designed to make agreement feel like the natural outcome.
That is the finding the Wharton study could not include. Because you cannot measure what governance prevents. You can only measure what happens when it is absent.
80 percent is what happens when it is absent.
Unauthorized commercial use prohibited. © 2026 The Faust Baseline LLC






