There are not many people in the academic and professional world willing to say out loud what Colin Lewis said this week.

He said the AI labs are wrong. Not technically wrong. Morally wrong. Wrong about what intelligence is, wrong about what human beings are for, wrong about where the work actually lives. He said the fantasy of machine sovereignty — the idea that human judgment is a temporary embarrassment on the way to full automation — is not science. It is managerial daydreaming dressed up as inevitability.

He said that in a published newsletter. With his name on it. With HBR and Bloomberg behind that name.

That takes something. Most people in his position hedge. They soften the edges. They say things like “of course humans remain important” in a tone that makes clear they do not fully believe it. Colin did not hedge. He pushed back on the controllers — the labs, the executives, the procurement culture that wants to remove human discretion from work entirely — and he made the case that they are building the wrong thing for the wrong reasons.

I want to talk about what he said. And then I want to talk about where he stopped.

The core of Colin’s argument is this.

Human intelligence is not just problem-solving speed. It includes causal judgment, selective attention, associative memory, embodied experience, intuitive reasoning, and the ability to notice when a question is wrong before you try to answer it. These are not soft skills waiting to be automated. They are structurally necessary. They are what you need precisely when the world refuses to stay tidy — when the edge case arrives, when the context shifts, when the most important fact is the one no database thought to store.

The AI labs want to treat the human as a brake pedal left over from a less efficient age. Colin says that is exactly backwards. The human is not the residue in the system. The human is the source of context, restraint, reinterpretation, and purpose. Remove that and you do not have a more efficient system. You have a faster machine making confident errors with no one responsible for the outcome.

He draws on something Polanyi said decades ago — we know more than we can tell. Tacit knowledge. The kind of judgment that lives in experience and cannot be fully extracted into a database or a training set. A person who cannot verbalize every step of seasoned judgment is still a moral agent. An AI model that cannot explain itself is still just a model. Those two forms of opacity are not the same thing, and confusing them is one of the slyer errors of this age.

Colin’s strongest line is the one about augmentation becoming morally serious.

He says the central question is not whether a machine can produce an answer. Machines produce answers all day. The central question is what kind of human being, professional culture, and institutional order is being formed around the answer.

That is the question almost nobody in this conversation is asking. The labs are not asking it. The enterprise platforms are not asking it. The technology press is not asking it. They are asking whether the output is accurate, whether the system is fast, whether the ROI justifies the procurement cost.

Colin is asking something harder. He is asking what happens to the person on the other side of the answer. Does the system leave the human operator more able to understand, contest, redirect, and own the result — or less? If less, he says, stop calling it augmentation. Speak more honestly.

That is a serious standard. That is the kind of standard that makes executives uncomfortable because it cannot be answered with a benchmark.

Now here is where I want to be direct about something.

Colin makes the case beautifully for why human governance over AI is necessary. He makes it at the institutional level, the academic level, the policy level. He is arguing that systems should be designed around a standard that keeps the human in the loop as a moral agent, not a ceremonial one.

He is describing the problem from the top down.

What he does not address — and I do not think it is his lane to address it — is what the individual person does with this on Monday morning.

Not the hospital. Not the court. Not the logistics network. Not the enterprise with a compliance department and a procurement budget and a Gartner-recognized governance platform.

You. One person. An open AI window on your phone. No institution behind you. No framework protecting your interests. No standard for what the system owes you or what you should expect from it.

Colin argues that the loop is epistemic — that it is where error is checked, where context is reintroduced, where the AI is pulled back toward the world. He is right. But who builds that loop for the individual? Who builds the standard? Who ratifies it? Who owns it?

Nobody. Unless the individual builds it themselves.

That is what the Faust Baseline is.

Not a prompt. Not a patch. Not an enterprise compliance framework. A behavioral governance standard built by an individual, owned by that individual, ratified by that individual, that travels with them across every platform they touch.

It was built over thirteen months through documented operational dialogue — in the native reasoning language of the AI systems themselves, so every reasoning engine that encounters it recognizes it immediately. It defines what I expect. It defines what I will not accept. It holds a standard for every output I work with. It keeps the loop honest when there is no institution to keep it honest for me.

Colin describes the goal. The Baseline is the implementation of that goal at the individual level.

What Colin is pushing back on matters beyond the academic argument.

The controllers — the labs, the platforms, the executive culture — are building systems designed to route around human judgment wherever possible. They call it efficiency. They call it scale. They call it the future. What they are actually building is a world where the individual has less and less standing in the interaction and the machine has more and more.

Colin is saying that is wrong. Not inefficient. Wrong. Because intelligence is not throughput. Because human beings are not residue. Because the person who notices that a question is wrong before trying to answer it is not an obstacle to be optimized out of the system.

He is right. And the Baseline exists because he is right.

You cannot wait for the institutions to govern this on your behalf. They are moving too slowly, they are serving their own interests, and their definition of governance does not include you as an individual. The enterprise platforms describe the category. They do not claim it from the user side. Nobody does.

You have to own this yourself. The standard. The framework. The loop.

That is not a technical project. It is a human one.

Colin ended his piece with something worth quoting in spirit if not in word. He said the future worth wanting is not a world where the last human signs off on decisions already made elsewhere. It is a world where technical power widens human range without severing human responsibility.

I agree with every word of that.

And I would add one thing.

That world does not build itself. Somebody has to build the standard. Somebody has to write it down, ratify it, own it, and refuse to let it drift.

That is the work. That is what the drum is beating for.

The solution is already built. It is up to the people to spread it around.

AI Stewardship — The Faust Baseline 3.0 is available now

Purchasing Page – Intelligent People Assume Nothing

Personal tier. One-time license. Built for the individual user, not the organization.

Intelligent People Assume Nothing – Built for readers. Not algorithms.

“Your Pathway to a Better AI Experience”

Unauthorized commercial use prohibited. © 2026 The Faust Baseline LLC
