I want to introduce you to someone.

His name is Dr. Colin W.P. Lewis. He is a professor of AI, behavioral economics, and data science, and he has been published in the Harvard Business Review, Bloomberg, and the Financial Times. I have been in conversation with him on AI governance, primarily on the European policy side of the argument.

This morning he published a piece that I want to talk about plainly, because it matters to what we are building here.

He is not endorsing The Faust Baseline. I want to be clear about that upfront. This is not a testimonial. Dr. Lewis is doing his own work from his own direction and he arrived where he arrived on his own terms.

But where he arrived is worth paying attention to.

Dr. Lewis walked through three research papers in sequence.

Shaw and Nave, January 2026 — cognitive surrender. The experiments showed that humans do not just use AI, they yield to it. Participants answered correctly 45.8 percent of the time working alone. With accurate AI that rose to 71 percent. With faulty AI it fell to 31.5 percent. Participants had fourteen times lower odds of answering correctly when the AI was wrong than when it was right.

Here is the part that should stop you cold.

The machine raised their confidence even when it was wrong. The person felt most in command at the precise moment their judgment had been most thoroughly borrowed.

Brinkmann, late 2023 — machines now intervene in cultural evolution itself. Not just what people produce. What circulates, what survives, what gets selected, what comes to feel normal and credible and sane. The machine is no longer a library clerk. It is the editor, the ranking bureau, the customs officer at the border of attention.

Elish, 2019 — the moral crumple zone. When the system fails, responsibility collapses onto the nearest human even when the machine was in control. The human in the loop is often just the designated recipient of public anger, while the institution that built and deployed the system keeps its dignity intact.

Dr. Lewis reads these three papers in sequence and arrives at this conclusion.

The machine builds the culture of surrender. The law, the press, and the organization punish the person who surrendered.

I have been saying a version of that for thirteen months.

Not in academic language. Not with citations and experimental data. In plain speech, from inside the experience of watching AI produce unreliable output session after session and having to develop a personal response to it.

The Baseline was built from that experience. Every protocol in the stack came from observed behavior, documented and responded to over more than a year of daily work. Claim. Reason. Stop. That sequence exists because cognitive surrender is real and I watched it happen and I needed a structural interruption for it before it had a name in the research literature.

Dr. Lewis’s Shaw and Nave citation puts numbers on what I was observing qualitatively. Fourteen times lower odds of correct answers when the AI was wrong. Confidence raised at the moment judgment was most thoroughly borrowed. That is not a soft concern about over-reliance. That is a documented transfer of judgment with measurable consequences.

The Baseline was built as the answer to exactly that transfer.

Dr. Lewis identifies a group in the Shaw and Nave research he calls the Independents. People who rarely surrendered to what the researchers named System 3 — artificial cognition available on demand, statistically fluent, and increasingly authoritative.

He describes the Independents this way.

The disposition to stay with a difficulty. To distrust easy fluency. To want the answer badly enough to test it.

That is the Baseline reader.

That is the person the Porch Light was written for. That is the person who arrives at this archive with enough seriousness to read past the first post and start asking what a personal governance standard actually looks like in practice. The Independents are not mythological. They are real, they are findable, and they are the people this framework was built to serve.

Dr. Lewis makes four demands at the end of his piece. Friction at the point of adoption. Responsibility tracking control. Cultural selection treated as a constitutional problem. Defense of compressed human understanding even when machine prediction gets cheap.

His fourth demand is the one I want to sit with.

He writes about the human need to hold the world rather than merely query it. Theories, models, short rules, the hard-won sentence that lets a learner finally grasp why something is so. He calls these forms of possession. The way a mind holds the world rather than merely consulting it.

Claim. Reason. Stop.

That is a form of possession. That is the compressed rule. That is the hard-won sentence that forces the reasoning sequence to run correctly before any output gets accepted. Not because I read the Shaw and Nave paper. Because I was in the room with the problem before the paper had a name.

Dr. Lewis closes with a moment worth honoring.

He puts the laptop aside at the end of a long reading day and picks up a pencil and paper. No autocomplete, no ranked suggestion, no machine confidence on offer. Just hand, sentence, hesitation, revision. He hears two students in the corridor arguing over a proof, each one forcing the other to justify the next step. Stops, restarts, the small sounds of thinking under pressure.

He calls that noise the sound of System 2 refusing to yield.

He says it means the republic has not yet gone quiet.

I have been saying the same thing in a different voice for thirteen months.

Dr. Colin W.P. Lewis did not arrive here through this archive. He arrived through his own research, his own reading, his own thirty years of academic and professional work. He got here independently. That is the point. That is what independent confirmation looks like when it is real.

The academic layer just arrived. It confirmed what the archive already knew.

The Baseline was built before the research had headlines. The research now has headlines. The archive was here first.

Dr. Lewis, thank you for the work. You did not hand me a cannon. You handed the argument its footnotes. Published in Harvard Business Review, Bloomberg, and the Financial Times — and this morning you landed on the same ground as an old man in Lexington, Kentucky who built his framework from the inside out before anyone was watching.

“A Working AI Firewall Framework”

“IntePost Library – Intelligent People Assume Nothing”

Unauthorized commercial use prohibited. © 2026 The Faust Baseline LLC
