There is a moment happening in newsrooms right now that most editors are not ready to talk about.

A reporter sits down with a deadline and a tool that can produce polished, confident-sounding sentences in seconds. The reporter is tired. The reporter is behind. The reporter trusts the tool — because the tool sounds like it knows what it’s doing.

That moment is where journalism is breaking.

Not because AI is evil. Not because reporters are lazy. But because someone handed them something powerful and walked away.

The cases are piling up.

In February of this year, Ars Technica — a technology publication that had spent years covering the dangers of AI — published a story containing fabricated quotes attributed to a real person. Their own senior AI reporter wrote it. He was sick with COVID, working from bed, using an AI tool to help pull source material. The tool failed. He switched to another platform to troubleshoot. Somewhere in that handoff, a paraphrased version of someone’s words replaced the actual words — and it went into print under that person’s name as a direct quote.

He was fired. The story was retracted.

The irony is almost too heavy to carry. The man covering AI risks became the story about AI risks.

But Ars Technica is not the outlier. It’s the example.

Around the same time, a senior European journalist named Peter Vandermeersch was suspended after an investigation found he had inserted dozens of fabricated quotes into fifteen separate articles across two publications. Seven of the real people named in those stories confirmed they had never said what was attributed to them. Not once. Not a paraphrase. Never said it.

The New York Times cut ties with a freelance critic who used an AI tool to help write a book review — and the tool quietly reproduced passages that closely resembled a review already published in another outlet. He likely never noticed. The AI didn’t flag it. It just handed him someone else’s work wearing a new coat.

A Wyoming newspaper called the Cody Enterprise discovered a reporter had been using AI to generate full stories — including a quote from the state Governor about a new OSHA regulation that the Governor had never actually made. The editor’s public response was honest to the point of being heartbreaking. He said the paper had no AI policy because it seemed obvious that journalists shouldn’t use AI to write their stories.

Obvious.

That word is doing a lot of work.

It seemed obvious. So no one said it out loud. No one wrote it down. No one built a rule around it. And when the pressure was on and the deadline was close and the tool was sitting right there — obvious wasn’t enough.

This is not a technology failure. This is a governance failure.

The tool did not lie to these journalists. The tool did what it does. It generated text. It filled gaps. It produced something that looked like a quote, read like a quote, and landed on the page as a quote — because no one had established a clear rule about what the tool was and was not allowed to produce.

That is not an AI problem. That is a management problem. That is a training problem. That is what happens when organizations adopt tools faster than they adopt the frameworks to govern them.

What The Faust Baseline says about this.

The framework addresses exactly this gap. Not the technology. The gap between what a tool can do and what the person using it understands about how it works.

AI systems do not know the difference between a real quote and a synthesized one. They are not trying to deceive. They are completing patterns. When a reporter asks an AI to help extract material from a source document, the system will produce what looks like an answer — and if the reporter does not know that a language model can confidently hallucinate details that were never in the source, that reporter is going to get burned.

The Baseline calls this a governance gap. The tool is not at fault. The absence of a clear operating standard is.

Every one of these newsrooms had a policy problem before they had a story problem. Not one of them had established a rule set that matched the capability of the tool they deployed. The Cody Enterprise editor admitted it plainly. Ars Technica had a stated policy — and a senior reporter still used an experimental AI tool while sick and under deadline pressure in a way that violated its intent.

Policy on paper is not governance. Governance is what happens at the moment of decision, under pressure, when the tool is open and the clock is running.

The person reading this.

If you are a writer — any kind of writer — you are working in an environment where these failures are becoming weekly events. You need to understand one thing clearly.

AI does not retrieve. It generates.

When you ask an AI to pull a quote, it does not search a document the way your eyes would. It produces a response that fits the shape of what you asked for. If the document contained something close to what you wanted, you might get a version of it. If it did not, you might get something that sounds exactly right and is completely wrong.

That is not a bug. That is how the technology functions. And if you do not know that, you are not using the tool. The tool is using you.
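That distinction between retrieving and generating suggests an obvious guardrail: before a quote reaches print, verify mechanically that the words actually appear in the source. Below is a minimal sketch of that kind of check, assuming a plain-text source document; the function name and the normalization rules are illustrative assumptions, not part of the Baseline itself.

```python
import re

def quote_appears_verbatim(quote: str, source_text: str) -> bool:
    """Return True only if the quoted words appear word-for-word in the source.

    Whitespace is collapsed and curly quotes/apostrophes are flattened first,
    so line wrapping or smart punctuation in the source does not cause a miss.
    """
    def normalize(text: str) -> str:
        text = text.replace("\u2018", "'").replace("\u2019", "'")
        text = text.replace("\u201c", '"').replace("\u201d", '"')
        return re.sub(r"\s+", " ", text).strip().lower()

    return normalize(quote) in normalize(source_text)


source = "The governor said: \u201cWorkplace safety rules protect every crew on site.\u201d"
# The words are in the source, verbatim: passes.
print(quote_appears_verbatim("Workplace safety rules protect every crew on site.", source))
# A confident-sounding paraphrase is not a quote: fails.
print(quote_appears_verbatim("Safety regulations protect every worker.", source))
```

A check this simple would have caught every fabricated quote in the cases above, because in each one the printed words never existed in any source at all. It is a structure, not a judgment call, which is exactly the point of the surrounding argument.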

The reporters in these stories were not bad journalists. Most of them were experienced. Some of them were specifically assigned to cover AI. That is the point.

Familiarity with the subject is not the same as operating inside a framework that catches the moment your judgment slips. Everyone’s judgment slips. Especially at deadline. Especially when the tool sounds that confident.

The framework is not there for your best day. It is there for the day you are working from bed with a fever and a deadline and a tool that just told you exactly what you wanted to hear.

That is the day it matters. That is the day the cases above were written.

The Faust Baseline™ is an AI governance framework built around one principle: consistency under pressure is not a personality trait. It is a structure. Learn more at intelligent-people.org.

“A Working AI Firewall Framework”

“Intelligent People Assume Nothing” | Michael S Faust Sr. | Substack

Unauthorized commercial use prohibited. © 2026 The Faust Baseline LLC
