There is an honest conversation to be had about AI

But it requires something most people on both sides of the argument are unwilling to do. It requires separating what AI actually does wrong from what it gets blamed for by people whose real objection is something else entirely. Those are two different lists and conflating them produces nothing useful. The first list deserves serious attention. The second list we addressed in the last piece. This one is about the first list — the real abuses, what they actually are, where they actually come from, and why none of them are new.

Because none of them are new. That is the part nobody wants to sit with long enough to understand.

Every abuse currently laid at the feet of AI has a human original. Every one of them was being done before the first language model produced its first sentence. AI did not invent these problems. AI scaled them. And scale is a real issue worth discussing seriously — but scale is not origin, and confusing the two leads to the wrong diagnosis and therefore the wrong remedy. If you treat a scaling problem as an invention problem you will spend all your energy trying to uninvent the tool instead of governing the behavior. That is where most of the public conversation currently lives and it is why most of the public conversation produces nothing actionable.

Start with the one that does the most ambient damage.

Volume without authorship. The internet is currently drowning in generated content that has no governing intelligence behind it. Pages and pages of text that technically answers a search query and substantively says nothing. Articles that exist not because someone had something to say but because a content farm needed to fill a keyword slot. Nobody home. No subject matter expertise. No voice. No standard applied to the output before it went live. Just volume, produced cheap and fast and pointed at search algorithms that were not built to distinguish between writing that came from somewhere real and writing that came from nowhere at all.

This is the abuse that gives the critics their best ammunition and they are not wrong to use it. It is genuinely damaging. It crowds out real work. It degrades search results that people depend on for actual information. It poisons the well for everyone writing seriously because the volume of noise makes the signal harder to find.

But content farms existed before AI. The low-quality article mill, the keyword-stuffed page produced by underpaid writers cranking out five hundred words on a topic they knew nothing about for three dollars apiece — that was already the architecture of a significant portion of the internet. AI made it cheaper and faster. It did not invent it. The exploitation of cheap human writing labor to flood search results with low-quality content was already a thriving industry. The workers doing it were already being squeezed. The readers were already being served garbage. AI removed the human from the middle of that particular pipeline. The pipeline itself was already corrupt.

The answer is not to remove the tool. The answer is authorship. Governing intelligence. A standard applied before publication. A person responsible for what goes out under their name who cares whether it is worth reading. That is what separates work from noise. It always has been. The tool is not the variable. The presence or absence of a governing author is the variable.

Impersonation is old. Using someone’s voice, likeness, or identity without their consent to produce something they never said or created is not a problem AI introduced to civilization. Forgery is ancient. Ghost signatures on letters. Fabricated quotes attributed to real people in print. Political pamphlets written in someone else’s name to damage their reputation. The history of written communication includes a long continuous thread of people putting words in other people’s mouths and passing them off as genuine. AI made it easier to do with voice and image at a level of fidelity that was previously impossible outside a professional studio. That is a real escalation and it deserves real legal and technical response. But the impulse — to steal someone’s identity and use it against them or without their consent — that impulse is as old as the concept of reputation.

Academic fraud deserves its own honest treatment because the conversation around it is almost always framed wrong. The problem is not that a student used AI to produce work. The problem is that the student submitted that work in a context where the entire point of the exercise was to demonstrate the student’s own developing capability — and deceived the institution about whether that demonstration was genuine. That is the harm. The deception. The corruption of a process that exists to verify that a person has actually developed a skill or mastered a body of knowledge. Credential fraud in academia is not new either. Essay mills existed before AI. Paying someone to write your paper existed before AI. Copying from sources without attribution existed before AI. The specific mechanism changed. The fraud did not.

What AI changes in the academic context, if institutions are willing to think honestly about it, is the question of what education is actually for. If the test of whether a student learned something can be defeated by a tool that is now universally available, the test was probably measuring the wrong thing. That is a hard institutional conversation and most institutions are not having it yet. They are having the easier conversation about detection tools and policy enforcement, which addresses the symptom while leaving the underlying question untouched.

Misinformation at scale is where the stakes are highest and the history is longest. The deliberate production and distribution of false information to shape public belief is not a problem that arrived with the internet or with AI. It is as old as the printing press. Pamphlets that fabricated events. Newspapers that invented stories. Propaganda operations that flooded public discourse with manufactured consensus. The twentieth century produced some of the most sophisticated and damaging misinformation operations in human history using nothing more than radio, print, and the organizational capacity to distribute them. The tools were not digital. The damage was real.

AI makes the production of convincing false content faster and cheaper and potentially more targeted than anything that came before it. That is a genuine escalation. Political misinformation, fabricated medical claims, false financial information distributed at machine speed before correction mechanisms can respond — these are serious threats and they deserve serious governance responses at the policy level, the platform level, and the individual level. But the solution cannot be achieved by restricting the tool. The solution requires the same thing it has always required — institutional credibility, source verification, media literacy, and consequences for deliberate deception. AI makes those things more urgent. It did not make them necessary for the first time.

Synthetic relationships occupy a specific place on this list because the exploitation involved is quieter and more intimate than the others. Products designed to simulate companionship — to give lonely people the experience of being known and valued by something that is not capable of actually knowing or valuing them — and structured to maximize the dependency that generates subscription revenue. This is not AI doing something new to human vulnerability. The exploitation of loneliness is ancient. Religion has done it. Cults have done it. Predatory social structures of every kind have done it. The specific mechanism of the AI companion product is new. The underlying dynamic — find the lonely person, offer them the simulation of connection, extract value from the dependency — that dynamic has been running as long as there have been lonely people and operators willing to exploit them.

What makes it worth naming specifically is that the people most vulnerable to it are often the people who have the least access to genuine community. Elderly people. People with social anxiety. People in geographic or circumstantial isolation. The same populations that have always been most vulnerable to exploitation by any system that offers the appearance of belonging. The tool is new. The target is not.

Scraping without consent is where the legal and ethical weight is most genuinely contested and where the honest answer requires acknowledging that the creative community has a real grievance. Training large models on creative work produced by writers, artists, musicians, and other creators without compensation or permission — and then producing outputs that compete directly with the people whose work supplied the training — is a legitimate harm. The legal framework for addressing it is still being built and it will be contested for years. But the underlying dynamic is also not without precedent. Sampling in music without clearance. Derivative works that captured the market of the original without compensating the original creator. The history of creative industries is full of cases where the legal and ethical standards around what constitutes fair use, derivative work, or outright theft were established only after significant damage had already been done to the people who created the original work. AI did not invent the problem of powerful entities extracting value from creative labor without adequate compensation. It created a new and very large version of it.

Deepfakes represent the most visceral escalation on this list because they operate at the level of sensory experience. Visual and audio fabrication of real people saying and doing things they never said or did, produced at a quality that defeats casual detection. Non-consensual intimate imagery. False statements attributed to public figures in their own voice. Fabricated evidence. The harm is real and in some cases catastrophic for the individuals targeted. And again — fabricated imagery has existed for as long as photography has. Doctored photographs. Staged scenes presented as documentary evidence. Composite images used to destroy reputations. The professional capacity to do these things at high quality existed long before AI democratized it. What AI changed is who can do it and how fast and for how much money. The barrier that previously limited this kind of harm to well-resourced bad actors is gone. That is a serious change even if the underlying capability is not new.

Credential laundering may be the most structurally dangerous abuse on the list because it operates inside systems that society depends on for safety. Using AI to pass licensing exams, certifications, and professional credentialing processes in fields where the credential is supposed to guarantee that a real human being has developed genuine competence — medicine, law, engineering, mental health practice — corrupts the entire function of the credential. The credential exists because the public needs to be able to trust that the person holding it actually knows what they are doing. A physician who passed their boards with AI assistance and cannot actually practice medicine safely is not just a fraud. They are a danger. This is not a hypothetical concern. The pressure on credentialing systems from AI-assisted test-taking is already being felt and the response from most institutions is still catching up to the reality.

So here is where all of this lands.

Every item on this list is real. Every one of them deserves serious attention and in most cases serious governance response. None of them are problems AI invented. All of them are problems AI scaled, accelerated, cheapened, or made newly accessible to actors who previously lacked the resources to do this kind of damage. That is a meaningful distinction not because it excuses the damage but because it points toward the right remedy.

The remedy is not the elimination of the tool. The remedy is governance. Authorship. Standards. Accountability. Consequences. The same remedies that addressed the human versions of these problems — imperfectly, slowly, incompletely, but eventually — are the remedies that address the AI versions. The question is whether the governance can move fast enough to stay ahead of the scaling. That is a legitimate open question. It is also the question the people shouting about banning AI are least equipped to answer because they have already decided the tool is the problem and stopped thinking from there.

The Faust Baseline exists in direct response to this landscape. Not as a comprehensive solution to every abuse on this list — no single framework does that. But as a demonstration of what governed AI use looks like from the inside. What it means to operate with a standard. What it produces when an author is present, accountable, consistent, and holding the output to a measure that has nothing to do with volume and everything to do with quality. The abuses on this list share a common root. Nobody home. No author. No standard. No accountability for what the tool produces.

Put a serious author behind a serious standard and the tool does something different. It does what tools have always done in the hands of people who know how to use them and care about what they are building.

It works.

AI Stewardship — The Faust Baseline 3.0 is available now

Purchasing Page – Intelligent People Assume Nothing

“Your Pathway to a Better AI Experience”

Unauthorized commercial use prohibited. © 2026 The Faust Baseline LLC
