You are a marketing director, and you type with confidence. “Build me a content strategy.” What comes back is generic slop, and what follows is resentment toward what is, after all, just a tool. The less obvious part is that the tool just returned the exact vagueness of your own thinking.

Functional fog

This director is not entirely incompetent. He has probably survived fifteen years in an environment where vagueness was a viable strategy. Meetings absorb it, teams interpret it and processes compensate for it. Human organizations as a whole work like massive translation engines that turn the approximate into deliverables, and this director never needed to say exactly what he wanted, because other humans filled in the blanks for him.

AI fills in nothing. It takes every instruction at face value, and when the word is empty, so is the output.

This is not a flaw in the model, or in whatever configuration you might blame. AI is simply the first tool in history that refuses to guess what you have not articulated.

The new rhetoric

Aristotle would have recognized the problem. The Athenian sophists sold persuasion disguised as logic, producing speeches that were fluid, structured, convincing and often wrong.

He had to formalize the rules of valid reasoning for a specific reason: the fluency of an argument guarantees nothing about its truth.

LLMs reproduce this pathology in both directions. Downstream, they generate responses that are eloquent and hollow. Upstream, the prompt itself can be fluent, structured and fundamentally empty.

“Optimize our processes” is a simple enough phrase, and you could dress it up all you want; it would still be an absence of thought disguised as a command. The modern sophists are those who mistake a well-crafted prompt for a well-formed idea. Aristotle would say the problem has not changed in twenty-four centuries.

What AI actually punishes

The consensus says AI threatens human thought, that it will make us lazy, dependent and intellectually atrophied. The problem may be ancient, but this framing is not.

It is the wrong framing, because AI is not an anaesthetic but its opposite. It makes the pain of vagueness impossible to ignore.

The feedback loop is instant, and that is what changes everything. You ask a vague question, you get a vague answer; you rephrase with precision, and the answer transforms. There is no social filter, no colleague charitably interpreting your intent.

No teacher, manager or process has ever confronted anyone this directly with the quality of their own thinking. That is exactly why AI widens gaps instead of closing them. It is sold as an augmentation tool, but from where I stand it works as an amplifier. Those who think clearly get incredible results, and those who think vaguely get equally incredible vagueness.

The confession

Let us return to our marketing director, whose problem was never the tool. “Build me a content strategy” is the exact sentence he had been saying in meetings for years.

And it worked, because someone in the room understood, interpreted and executed on his behalf.

Now the same man believes he has superpowers, but what AI actually reveals is his blindness, his newly relative incompetence and what the human organization had been masking all along: the sentence never contained any thought.

Every prompt is a confession of what you actually know, what you actually want and what you actually thought before you started typing. AI is not a brain you rent, it is a confessional where you cannot lie.

The remaining question is who is willing to sit down.