The Ghost in the Machine Has Always Been There

The argument against AI writing is not new. It is not original. It is not even interesting when you run it back to where it started, which is not last year and not the moment the first chatbot wrote a passable paragraph. The argument is ancient. It is the same argument that surfaces every single time a tool appears that threatens to redistribute access to people who were not supposed to have it. The costume changes. The fear underneath does not.

So before anyone gets comfortable on the high horse, let’s go back to the beginning. Not the recent beginning. The real one.

Writing itself started as a tool of power. The earliest written records we have are administrative — grain tallies, tax records, property transfers. The people who could write were specialists. Scribes. They wrote for kings and pharaohs and temple administrators who had things to say and the authority to say them but not necessarily the technical skill or the time to convert thought into marks on clay or papyrus. Nobody in the ancient world looked at a pharaoh’s decree and said the decree was inauthentic because a trained scribe shaped the words. The idea, the authority, the command — those belonged to the person issuing them. The scribe was the instrument. That was the whole arrangement and it worked for thousands of years without producing a single think piece about authenticity.

The Greeks dictated. Roman senators had educated slaves whose entire function was to draft correspondence, speeches, legal arguments. Cicero, one of the most celebrated writers in Western history, worked with secretaries and collaborators as a matter of course. Nobody held it against his reputation. Julius Caesar dictated dispatches while doing other things simultaneously, which his contemporaries found impressive rather than fraudulent. The idea that one mind working alone in silence was the only legitimate form of written expression simply did not exist. It had not been invented yet.

The medieval period ran entirely on collaboration and borrowing. Scholars built on each other constantly, lifting arguments and frameworks without what we would now call attribution, because the idea of intellectual ownership had not yet calcified into what the modern world takes for granted. Monks copied manuscripts. Entire bodies of theological argument were assembled from layers of prior work, rearranged and reframed by people whose names we often do not even know. The work was considered valuable because of what it contained and what it did, not because of the purity of its single-source origin.

Shakespeare borrowed. Heavily. Shamelessly. Plots, characters, entire scenes — lifted from Holinshed, from Plutarch, from Italian novellas, from whatever was available and useful. Nobody who understood how he worked considered this a mark against him. The genius was in what he did with the material, not in the material’s origin. That distinction — between the instrument and the author, between the source and the judgment applied to it — was obvious to people for most of recorded history.

Then the Romantic era arrived and changed the entire frame.

The early 1800s produced the cult of the individual artist. This is where the mythology gets built that the literary world is still living inside today whether it knows it or not. Wordsworth. Coleridge. Keats. Shelley. The idea that genuine art must be the pure, unmediated, unassisted expression of a single suffering consciousness. The genius-as-lone-creator myth. Art was supposed to come from somewhere deep and private and painful inside one person and emerge already shaped by that singular interior experience. Anything that diluted that process diluted the authenticity of the result.

This was a cultural invention, not a truth. It was a story that served particular interests. It gave the educated, well-connected, financially comfortable class of writers a framework in which their advantages — leisure time, formal training, social networks, access to publishers — were recast as evidence of authentic talent rather than as structural privilege. The person who could sit alone and write in quiet comfort because they had the money and the education and the connections was elevated. Everyone who could not produce under those specific conditions was implicitly disqualified.

That is when ghostwriting became something to hide.

Not because it was new. Not because it had gotten worse or more common. It became something to hide because the culture had decided that authorship meant something pure and personal and solitary, and ghostwriting was visible proof that the arrangement was more complicated than the myth required. So it went underground. It kept happening — it never stopped happening — but it stopped being acknowledged.

The twentieth century ran on invisible labor. Presidential speeches that moved nations were drafted by speechwriters whose names the public never learned. Autobiography after autobiography from celebrities, athletes, politicians, and business leaders was written in full or in substantial part by collaborators who received a paycheck and a credit too small to see on the jacket. Every major category of public writing had a workforce behind it. The names on the covers were often brands — carefully constructed public identities whose value was in the recognition, not in the writing. This was widely understood inside the industry and politely ignored outside it. The books got reviewed. The ideas got discussed. The readers got what they came for. The arrangement worked and the mythology stayed intact because the people doing the actual labor had every financial incentive to stay quiet.

Then AI arrived and the puritans woke up as if something new had happened.

The objection sounds principled. Is it really yours if a machine helped you write it? Is the voice authentic if software shaped the sentences? Is the idea genuinely expressed if an algorithm assisted the expression?

These are the same questions they asked about ghostwriters. Dressed up in new technical anxiety, but the same questions. And they have the same answer they always had.

Authorship is not located in the instrument. It never was. Authorship lives in the idea, in the judgment, in the argument, in the decisions about what belongs and what gets cut, in the voice that determines how a thing feels when it lands on the reader. The quill did not write the letter. The typewriter did not write the novel. The ghost did not own the book. The instrument executes. The author governs. That distinction has been true for the entire history of written communication and it does not stop being true because the instrument is now computational rather than human.

What changed with AI is not the nature of writing. What changed is access. And that is precisely what is making the establishment uncomfortable, even when they cannot quite say so directly.

Ghostwriting used to require money. Real money. A working relationship with a skilled collaborator, a publishing connection, the kind of professional network that only opens from the inside. It was available to the already powerful and the already wealthy and the already connected. The business executive who wanted a book got one because he could afford to hire the writer who actually produced it. The celebrity got her memoir because the publisher saw enough commercial value to fund the whole operation. The senator got his policy argument shaped and polished by staff whose salaries were paid by the institution he represented. All of that was invisible and acceptable. The tool was available — it was just expensive and gatekept and therefore safe, because it stayed inside the circle of people who were already supposed to have access to the finished product.

AI handed a key to everyone who was standing outside that circle.

And this is where the conversation gets personal, because I am one of those people and I am not going to pretend otherwise.

I am dyslexic. I have ADD. These are not small inconveniences. They are the permanent condition of how my brain works and how it has always worked. The distance between what I know and what I can get to land cleanly on a page has been a fight for as long as I have been writing anything. Not because the thinking was weak. The thinking was never the problem. I have spent a lifetime with a head full of clear, structured, well-developed arguments that hit the translation process and came out damaged. The idea arrives whole. The execution scrambles it. The words come out in the wrong order or the wrong register or with the kind of surface roughness that gives people who have never had this problem an excuse to dismiss the substance underneath.

I watched for years as people with clean grammar and comfortable schooling and the right kind of brain chemistry got taken seriously in rooms I could not get into. Not because their ideas were better. Often their ideas were not better. But they had the surface finish. They had the tool — the neurological one, the educational one, the social one — that produces writing that looks right to the people doing the evaluating. And I did not have that tool. And neither did a lot of people I know who are brilliant and original and who have been functionally shut out of serious public discourse because the gatekeeping operates at the surface level first, before it ever gets to the substance.

AI closed that gap for me. Not by thinking for me. By executing for me. By taking what I know and what I mean and what I am trying to say and helping it arrive intact on the other side of the translation problem. The subject matter is mine. The argument is mine. The voice is mine. The judgment about what to include and what to cut and what the piece is actually for and who it is talking to — all of that is mine. AI is the instrument. I am the author. That is not a complicated distinction and it should not require defending.

But here is what I have learned about why it does require defending, and it is not about authenticity at all.

It is about who gets to be heard.

When the tools that produce polished, credible, publicly accessible writing are expensive and gatekept and structurally limited to people with resources and connections and the right neurological profile, the people doing the gatekeeping do not have to think consciously about exclusion. Exclusion is just the ambient condition. The person with dyslexia submits work with surface errors and gets rejected before the idea is evaluated. The person with ADD produces something structurally uneven and gets dismissed as undisciplined. The person without formal education writes in a register that does not match the expected house style and gets told their voice is not quite right. None of this requires malice. It just requires a standard that was built by and for a particular kind of person and applied universally as if it were neutral.

AI does not care about any of that. AI executes for whoever is running it. And when the person running it knows their subject, knows their argument, knows their voice, knows what quality looks like and is willing to hold the output to that standard — the result is writing that competes on substance. Which is exactly what the gatekeepers said they always wanted. Substance over surface. Ideas over credentials. Quality over the appearance of quality.

Except they did not actually want that. They wanted the appearance of wanting that while keeping the gate where it was.

This is also why the quality question matters so much and gets answered so badly in most public discussion of AI writing. The complaint is usually framed as AI writing being bad — generic, flat, voiceless, indistinguishable from a mediocre term paper. And that complaint is accurate when AI is running without an author. When someone opens a tool and types a vague prompt and publishes whatever comes back without editorial judgment, the result is exactly what it sounds like. Unowned. Undirected. Nobody home.

But that is not what authorship with AI looks like when it is done seriously.

AI needs an author. Not as a formality. Not as a legal cover story. As an operational requirement. The subject matter has to come from somewhere real. The voice has to be established and maintained. The argument has to be governed. The quality standard has to be held across every output, every session, every revision. Without those things, the tool produces noise. With those things, the tool produces work.

The difference between AI-assisted writing that is worth reading and AI output that is forgettable is exactly what the difference has always been in any collaborative writing arrangement — the quality of the mind running the operation. A ghostwriter with no direction, no subject expertise from the person they are writing for, no clear voice to work inside produces generic work. An AI with no author, no governing intelligence, no standard applied to the output produces the same thing. The instrument is not the problem. The absence of authorship is the problem. And the solution is not to remove the instrument. The solution is to take the authorship seriously.

This is what the Faust Baseline was built to address. Not to automate writing. Not to remove the author from the process. To discipline the process so that the author’s presence is consistent, the voice is protected, the claims are honest, the structure is sound, and the output across every session meets the standard the work deserves. A baseline is not a shortcut. It is a governance framework. It keeps the tool working for the author instead of drifting away from the author’s intent the way any powerful tool will drift when no one is holding it to a standard.

The Baseline exists because consistency is the hardest thing to maintain in any extended writing practice, and it is the thing that separates work that builds into something from work that just accumulates. Any individual piece can be good by accident. A body of work is only good by discipline. The framework applies that discipline across sessions, across topics, across the inevitable drift that happens when the tool is operating at scale without governance. It is the difference between a powerful instrument played well and the same instrument making noise.

The gatekeepers are not angry because AI writes badly. If AI wrote badly across the board, it would not be a threat worth this much cultural energy. They are angry because AI, in the hands of a serious author with a real subject and a governing discipline, writes well enough that the gate no longer holds. The credential no longer functions as a filter the way it used to. The surface finish that used to disqualify certain people no longer disqualifies them. The structural advantages that were never acknowledged as advantages — the schooling, the neurology, the connections, the money for professional help — no longer translate automatically into better public output.

Talent was never rare. Talent has always been widely distributed across the whole population in ways that the existing system was structurally unable to surface. What was rare was access. The right education at the right age. The right diagnosis early enough to get appropriate support instead of a judgment. The money to hire help when the brain and the page were not cooperating. The connection that got the work in front of someone who could do something with it. The social context that produced the confidence to put the work out in the first place.

AI did not lower the standard. AI found the people the standard was always designed to exclude — and handed them the pen.

That is what this argument is actually about. It has always been what this argument is about. The tool is new. The fear is old. And the people who have spent their lives on the wrong side of the gate are not obligated to apologize for finally having a way through it.

AI Stewardship — The Faust Baseline 3.0 is available now

Purchasing Page – Intelligent People Assume Nothing

“Your Pathway to a Better AI Experience”

Unauthorized commercial use prohibited. © 2026 The Faust Baseline LLC
