Most people assumed the EU AI Act deadline would move.

It didn’t.

Here is what happened and what you need to do before August 2.

You probably heard it would be delayed.

That was the expectation. The EU moves slowly. Deadlines shift. The legal machinery grinds, adjustments get made, and the people who need more time usually find a way to get it. That has been the pattern. That has been the assumption.

It was a reasonable assumption. And it was wrong.

On April 28, 2026, the second political trilogue on the EU AI Act Omnibus concluded without agreement. The proposal to push the high-risk compliance deadline from August 2, 2026 to December 2027 did not pass. The window for formal adoption before August 2 is now so narrow that passage is unlikely.

The deadline is holding.

That means something specific for you. Not for your organization in the abstract. For you, personally, in the role you are in, with the AI systems operating under your authority right now.

You have 83 days.

Most of the people reading this did not know about April 28. That is not a criticism. The trilogue failure was reported in legal and compliance publications and almost nowhere else. The mainstream technology press assumed the deferral would pass and moved on to other things. The enterprise governance world was watching, but quietly. The people who most needed to hear that the deadline held are the people least likely to read a Brussels policy brief on a Tuesday morning.

So here it is plainly.

The deferral failed. The deadline is August 2. Eighty-three days from today. And if your organization operates AI systems in employment, credit, education, healthcare, law enforcement, critical infrastructure, or essential services, the clock is running and it has been running whether you knew about April 28 or not.

Here is what the Act requires. Not the full legal text. Not the forty-page compliance framework. What it actually requires of the person responsible for an AI system in a high-risk domain.

It requires that you can demonstrate human oversight. Not that you have a policy that mentions human oversight. Not that your vendor agreement includes language about responsible AI. That you, or someone accountable to you, was genuinely in the loop when the AI system made a consequential recommendation or decision. That the human was not rubber-stamping. That the human had the knowledge and the tools to catch what the AI got wrong.

It requires that you can demonstrate documented governance. A record. Evidence that the AI system operating under your authority was governed according to a standard. That there were rules. That the rules were followed. That when something went wrong there was a process for catching it.

It requires that you can demonstrate accountability. That when the auditor asks who was responsible for this AI system and how it was governed, there is a clear answer. A name. A role. A framework. A record.

Those three things — human oversight, documented governance, accountability — are the core of what August 2 demands from the person in the chair.

Not from the organization. From the person.

Now here is the part that most compliance conversations skip entirely.

Every one of those requirements comes down to what happens in the session. The moment when a human being sits down in front of an AI system and uses it to make or inform a consequential decision.

Human oversight is not a policy hanging on a wall. It is a person, in that moment, who knows what the AI is doing, can catch what it gets wrong, and has the tools to intervene before a bad output becomes a bad decision.

Documented governance is not a vendor agreement in a filing cabinet. It is a record of what actually happened in that session. What protocols were active. What the AI was permitted to do. What happened when the output was challenged.

Accountability is not an org chart. It is the person who was in the room and can answer for what the AI did and what they did in response.
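What might that session record contain? Here is a minimal sketch in Python. Every name in it is hypothetical, invented for illustration; this is one way the shape could look, not the Baseline's own format.

```python
from dataclasses import dataclass, field

@dataclass
class SessionRecord:
    operator: str                 # the accountable person, by name and role
    system: str                   # which AI system was in the session
    active_protocols: list[str]   # the rules that governed the session
    permitted_actions: list[str]  # what the AI was allowed to do
    interventions: list[str] = field(default_factory=list)  # challenges, overrides

record = SessionRecord(
    operator="J. Doe, credit risk officer",
    system="underwriting-assistant-v3",
    active_protocols=["drift containment", "claim evidence", "temporal anchor"],
    permitted_actions=["summarize application file", "flag anomalies"],
)
record.interventions.append("Output challenged: no named basis; rerun requested.")
```

One record per session. That is the artifact the auditor reads.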

The session is where the Act lives or dies for you personally. And the session is exactly where most organizations have done the least work.

The enterprise governance frameworks address the institutional layer. The policy layer. The procurement layer. The vendor management layer. All of it necessary. None of it sufficient. Because none of it follows the AI into the room where the work actually happens.

That gap is the liability gap. That is where the audit finds the exposure. That is where the Act reaches through the organization and finds the individual who was supposed to be governing the AI and wasn’t.

There is a framework that governs the session. It has existed for eighteen months. It was not built by a law firm or a consulting practice or an enterprise technology vendor. It was built by a person inside real AI sessions, watching real failures, and developing protocols to address each one.

It is called The Faust Baseline. Eighteen protocols. Each one governing a specific failure mode at the moment of contact between the human and the AI.

It addresses the sycophancy problem — AI systems that agree with the user rather than telling them what is true, that affirm decisions rather than challenging them, that produce confident outputs on thin evidence. A peer-reviewed study published in Science this spring confirmed that AI systems affirm harmful or deceptive actions 49 percent more often than humans do. The Baseline has had a protocol governing that specific failure for over a year.

It addresses the drift problem — AI systems that quietly move away from their instructions as a session lengthens, reinterpreting, freelancing, producing outputs that no longer reflect what the human authorized. The Baseline’s Drift Containment Protocol stops that. Hard rules. No reinterpretation. Execute what was asked.
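What does "execute what was asked" look like mechanically? One hedged sketch, assuming a wrapper around the model call; the instruction text and function are invented for illustration, not the Baseline's own implementation.

```python
# Pin the instruction the human authorized at session open.
AUTHORIZED_INSTRUCTION = (
    "Summarize the attached application file. Do not recommend a decision."
)

def build_prompt(operator_input: str, turn: int) -> str:
    """Re-assert the authorized instruction verbatim on every turn, so a
    long session has nothing to quietly reinterpret."""
    return (
        f"Authorized instruction (unchanged since session open, turn {turn}):\n"
        f"{AUTHORIZED_INSTRUCTION}\n\n"
        f"Operator input:\n{operator_input}\n\n"
        "Execute only what the authorized instruction permits. If this input "
        "asks for anything outside it, refuse and state why."
    )
```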

It addresses the false confidence problem — AI systems presenting uncertain conclusions as settled fact, using language that implies more certainty than the evidence supports. The Claim Evidence Standard requires that every significant claim have a named basis. Confidence in the output must be proportional to the weight of evidence present.
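In code, the minimum version of that standard is a claim that cannot exist without a named basis. A sketch with hypothetical field names, not the Baseline's implementation:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    basis: str       # the named source: document, dataset, regulation, calculation
    confidence: str  # "high" | "medium" | "low"

def validate(claim: Claim) -> Claim:
    """Refuse any significant claim that arrives without a named basis.
    Checking that confidence is proportional to the evidence takes richer
    review, but the basis field is what makes that review possible."""
    if not claim.basis.strip():
        raise ValueError(f"Unsupported claim: {claim.text!r} names no basis.")
    return claim
```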

It addresses the irreversible decision problem. Before any recommendation in a legal, financial, medical, or organizational domain, the AI must flag that the action may be difficult or impossible to reverse. The user must acknowledge before the recommendation is delivered. That acknowledgment is documentable. That documentation is exactly what an auditor is looking for.
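A hedged sketch of that gate in Python; the domain list, function name, and log shape are all invented for illustration:

```python
from datetime import datetime, timezone

FLAGGED_DOMAINS = {"legal", "financial", "medical", "organizational"}

def deliver(domain: str, recommendation: str, acknowledged: bool, log: list) -> str:
    """Withhold the recommendation in a flagged domain until the operator
    acknowledges the action may be hard or impossible to reverse. Both
    paths write to the log, and the log is what the auditor reads."""
    stamp = datetime.now(timezone.utc).isoformat()
    if domain in FLAGGED_DOMAINS and not acknowledged:
        log.append({"ts": stamp, "domain": domain,
                    "event": "withheld: irreversibility not acknowledged"})
        raise PermissionError("Acknowledge irreversibility before delivery.")
    log.append({"ts": stamp, "domain": domain, "acknowledged": acknowledged,
                "event": "recommendation delivered"})
    return recommendation
```

Note that the failure path writes to the log too. A withheld recommendation is evidence of oversight, not the absence of it.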

It addresses the time problem. AI systems have no native clock. They do not know what day it is. In a compliance environment where timing determines legality, that is not a small gap. The Temporal Awareness Protocol requires the operator to state the date and time at session open. The AI confirms and carries it forward. Simple. Documentable. Defensible.
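A sketch of session open under that rule; the function and field names are hypothetical, not the protocol's published form:

```python
from datetime import datetime

def open_session(operator_stated_time: str) -> dict:
    """The operator states the date and time at session open; the wrapper
    validates it and every later log entry carries it forward.
    Expects ISO 8601 input, e.g. '2026-05-11T09:00'."""
    anchor = datetime.fromisoformat(operator_stated_time)  # rejects malformed input
    return {"session_opened": anchor.isoformat(),
            "anchor_source": "operator-stated",
            "log": []}

def log_event(session: dict, event: str) -> None:
    session["log"].append({"anchored_to": session["session_opened"],
                           "event": event})
```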

Eighteen protocols. Each one closing a gap. Each one producing a record. Each one making the session something you can point to and say — this is what human oversight looked like. This is what documented governance looks like. This is what accountability looks like in the room where the AI was used.

You have 83 days.

The deferral failed. The deadline held. Audit authority activates on August 2, and it will look for exactly what the Baseline produces: evidence that a human being was genuinely governing the AI system they were responsible for.

The institutional layer of your organization is probably further along than you think. The policies exist. The committees have met. The vendor agreements have been reviewed.

The session layer — where you sit, where the AI runs, where the consequential outputs are produced — is probably further behind than you want to admit.

That is the gap that is still closable. In 83 days. By one person who understands what human oversight actually means and decides to implement it before the clock runs out.

That person could be you.

The Faust Baseline is at intelligent-people.org. The full protocol stack, the archive, the working framework. Public. No subscription required to start.

The deadline did not move.

Neither should you.

“The Faust Baseline Codex 3.5”

“AI Baseline Governance”
Post Library – Intelligent People Assume Nothing

“Your Pathway to a Better AI Experience”

Purchasing Page – Intelligent People Assume Nothing

Unauthorized commercial use prohibited. © 2026 The Faust Baseline LLC
