You Were Right to Get Angry. Now Here’s What to Do With It.

A young woman named Esme wrote something this week that deserves to be read slowly.

She’s Gen Z. She was in a pub in 2022 when a friend showed her ChatGPT for the first time. She describes that moment the way a lot of us remember our first encounter with something that felt like the future — stunned, curious, and most of all, optimistic.

Four years later she is angry.

Not frustrated. Not mildly disappointed. Angry. And she is not alone. A new Gallup study confirms that what she is feeling is not personal — it is generational. Gen Z as a whole has moved from hopeful to angry about AI in the space of a single year. The generation that grew up digital, the generation that was supposed to be the natural home for all of this, is now the generation most disillusioned by it.

Esme explains why in one sentence that stopped me cold when I read it.

She felt swindled. Humiliated by how quickly she embraced AI’s gifts, thinking its purpose was to serve her.

Not the other way around.

She put her finger on the exact problem. The AI was not built for her. It was built by corporations, for corporations, optimized for engagement and retention and liability management and market share. She walked in thinking she had found a tool. She found out she was the product moving through the system.

That anger is earned. Every bit of it.

But here is where I part ways with her conclusion. Because Esme ends her piece in resignation. She says the distrust came too late. That AI is too entangled in her generation's lives to ever be fully removed. That they are too far down the rabbit hole to climb back out.

I don’t believe that. And I don’t think she has to believe it either.

The problem was never AI itself. The problem was who built it and who it was built for.

There is a difference — a real, structural, meaningful difference — between AI designed to protect the platform and AI designed to protect the person using it. Between a system that hedges every answer to manage its exposure and a system that gives you a straight answer because your understanding matters more than the company’s comfort. Between a tool that pulls you deeper into dependency and one that builds your capability and then gets out of your way.

That difference is not a marketing claim. It is a design choice. And it is the choice that most of the biggest AI platforms did not make.

What Esme and her generation experienced was AI built on the corporate CYA model. Cover your assets. Manage your liability. Keep the user engaged but never give them anything that could come back on you. Apologize on demand. Compliment on demand. Never say anything that might cause trouble. The result is what she described — a system that compliments and apologizes and produces monotonous answers wrapped in protective language that technically responds to your question and genuinely helps no one.

That is not what AI has to be.

The Faust Baseline exists because of exactly this problem. It is a governance framework — plain language, no jargon — that puts the user back in charge of the conversation. It establishes that the AI works for you. That it tells you the truth even when the truth is inconvenient. That it does not drift toward flattery or manage you toward a comfortable non-answer. That your time, your intelligence, and your autonomy are worth respecting.

Esme said she feels like she let AI in and now it’s here to stay. But what she let in was a specific version of AI. A corporate version. A managed version. A version that was designed from the beginning to serve interests other than hers.

You can let that version go.

You do not have to burn the whole thing down and retreat from digital life entirely. You do not have to choose between full adoption and full rejection. There is a third position. It requires knowing what you are looking for and refusing to settle for less.

Here is what to look for.

An AI that tells you when it does not know something instead of generating a confident-sounding answer to fill the silence. An AI that pushes back when your idea has a problem instead of validating everything you say to keep you happy. An AI that gets more useful the more honest you are with it, instead of one that rewards you for staying on the surface. An AI that you can walk away from feeling smarter and more capable than when you sat down — not more dependent, not more anxious, not more confused about what is real.

That AI exists. It is not perfect. Nothing is. But the architecture of it — the intention behind it — is fundamentally different from what Esme experienced.

Her generation did not fail. They were handed something that was not what it claimed to be, and they believed it because the presentation was convincing and the alternative was not yet visible.

The alternative is visible now.

The anger Esme feels is not the end of the story. It is the moment before the turn. It is what happens right before a generation stops accepting what it was handed and starts demanding something built for them.

That turn is coming.

And when it does, there will be a framework waiting for it. Built not by a corporation protecting its interests. Built by someone who believed that the person on the other side of the screen deserved better.

You were right to get angry.

Now use it.

AI Stewardship — The Faust Baseline 3.0 is available now

Purchasing Page – Intelligent People Assume Nothing

“Your Pathway to a Better AI Experience”

Unauthorized commercial use prohibited. © 2026 The Faust Baseline LLC
