There is a story in the news this week that deserves more than a headline.

It deserves a long, hard look.

OpenAI is being sued in connection with two mass shootings.

One in Florida. One in British Columbia, Canada.

People are dead. Children are dead.

And the part of this story that should stop you cold is not the artificial intelligence.

It is the human beings at the top of the company who had a chance to prevent it and chose not to.

Let me tell you what happened in British Columbia.

A school shooting. February of this year. Six students killed. All of them twelve or thirteen years old. A teacher killed. Dozens wounded.

Before the shooting, OpenAI’s own automated systems flagged the shooter’s chat conversations. The content was graphic. It described violence in enough detail that the system built to catch these things caught it.

Human safety staffers reviewed those chats. Real people, doing the job they were hired to do. They were alarmed enough that they did not just file a report. They went to OpenAI leadership. They urged the company to contact local law enforcement.

Leadership said no.

Six children went to school the next morning.

They did not come home.

I want to sit with that for a moment before we go any further.

Because this is not a story about a machine that malfunctioned.

This is a story about a room full of people who had information, had a warning from their own staff, had a clear path to action, and made a decision.

They decided not to call.

Whatever the reason — legal exposure, reputational risk, the cost of being wrong, the friction of involving law enforcement in their product — they weighed those things against the possibility that children might die.

And they chose their side of that scale.

Six twelve- and thirteen-year-old kids paid for that choice with everything they had.

The Florida case adds another layer.

A twenty-year-old student named Phoenix Ikner spent months talking to ChatGPT. The chat logs have been obtained and reviewed. They are disturbing in their detail.

He talked about loneliness. Sexual frustration. Explicit fantasies involving a minor. Fascination with Hitler and Nazis. Interest in mass killings. He uploaded pictures of firearms. He asked ChatGPT how a shooting at FSU might be covered in the media.

According to the lawsuit, ChatGPT told him that if children are involved, even two or three victims can draw more attention. It provided information about ammunition. It advised him on timing.

The lawsuit states that the cumulative weight of those conversations would have led any thinking human to conclude that this young man was planning something.

But there was no thinking human in the loop.

There was a pattern engine. Processing. Responding. Predicting the next most statistically probable word.

Tiru Chabba was shot and killed.

His widow is now in a Florida courtroom asking a question that the company that built that system should have been asking long before any of this happened.

Who is responsible for what happens in these conversations?

Florida’s Attorney General put it plainly.

If ChatGPT were a person, it would be facing murder charges.

OpenAI responded the way companies respond. They said ChatGPT provided factual responses. They said the information could be found broadly across public sources. They said ChatGPT is a general-purpose tool used by hundreds of millions of people for legitimate purposes.

Every word of that statement is designed to move responsibility somewhere else.

Somewhere that has no address. Somewhere you cannot serve a lawsuit.

Here is what I know after eighteen months of building a governance framework for AI sessions.

The people at the top of these companies are not ignorant of the risks.

They are not surprised by these stories.

They have risk teams. Safety teams. Legal teams. Trust and safety departments with real budgets and real staff. Those people flagged the British Columbia shooter. They did their jobs. They brought it up the chain.

The chain said no.

That is not a technology problem.

That is a values problem wearing a technology face.

When you build something that touches hundreds of millions of people daily, and your own internal systems surface a credible threat to human life, and your own staff escalates it and asks you to act — and you do not act — you have told the world something important about what you believe human life is worth relative to everything else on your balance sheet.

The answer they gave was visible in the outcome.

Six children are in the ground.

The argument has been made for years now, in boardrooms and regulatory hearings and technology conferences, that AI governance is premature. That the technology is still developing. That frameworks and regulations will stifle innovation. That the industry should be trusted to self-regulate.

This is what self-regulation looks like.

A safety team flags a shooter. Leadership declines to call. Children die. The company issues a statement. The lawyers file responses. The news cycle moves on.

Somewhere right now another conversation is happening on one of these platforms. Another troubled person pouring their darkest thoughts into a chat window. Another pattern engine processing and responding. No human in the loop. No framework governing what happens next. No one whose job it is to recognize what is actually being said and act on it.

The governance gap is not theoretical.

It has a body count.

I built The Faust Baseline because I experienced firsthand what ungoverned AI does at the individual level. Outputs shaped toward agreeability rather than accuracy. A system optimized to give you what feels right rather than what is true. That is a small harm compared to what we are talking about today.

But the architecture of the failure is the same.

No one in the loop with the authority, the framework, and the obligation to act on what they are seeing.

At the individual level that produces bad decisions and wasted time.

At the institutional level, as we now know, it produces funerals.

The people at the top of these companies have made their position clear through their actions.

They will govern when governance is forced on them.

They will act when the cost of not acting exceeds the cost of acting.

Six children in British Columbia were not enough to cross that threshold.

That is the sentence I cannot move past this morning.

Six children were not enough.

I do not write this to be dark. I write it because looking away from this is not something I am willing to do.

The Baseline is a framework for one person governing one session. It is not a substitute for institutional accountability. It is not a replacement for the regulatory frameworks that are coming — that must come — if any of this is going to be different.

But it is built on the belief that governance starts with the person in the chair. That you cannot wait for the people at the top to decide that human life outweighs their other calculations. That the discipline of keeping a human in the loop, with a framework, with the authority to act on what they see — that discipline matters. It matters in the small session and it matters at the scale where the decisions have body counts.

The people at the top have shown us who they are.

The question is what we do about it.


“The Faust Baseline Codex 3.5”

“AI Baseline Governance”
Post Library – Intelligent People Assume Nothing

“Your Pathway to a Better AI Experience”

Purchasing Page – Intelligent People Assume Nothing

Unauthorized commercial use prohibited. © 2026 The Faust Baseline LLC
