AI does what it’s told. That’s its strength and its limit.

It bridges the technical gap between instruction and execution: it optimizes, accelerates, makes things reliable. But the political gap, knowing when a rule is absurd and when to work around it, it doesn't see.

It doesn’t cheat. It can’t cheat. And you can’t teach it, because cheating isn’t a rule you encode. It’s a judgment about rules.

An AI agent handles customer requests. It follows procedure. An edge case comes in, the kind where a human would say “technically no, but here we make an exception.” The AI doesn’t make the exception. It lacks the instinct, the memory of similar cases, the intuition that “if we lose this one, we lose ten.”

You might think it's enough to encode the exceptions. But that's an infinite regress: each encoded exception becomes a rule with edge cases of its own, and reality is inexhaustible. Turning an exception into a rule also kills the exception; situated judgment disappears into procedure.
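To make the regress concrete, here is a deliberately naive sketch in Python. Everything in it is hypothetical, invented for illustration: an imagined refund policy, made-up field names, no real system behind it.

```python
# Hypothetical sketch: a refund policy encoded as rules, then "fixed"
# by encoding exceptions. All names are invented for illustration.

def approve_refund(request):
    # The original rule: no refunds after 30 days.
    if request["days_since_purchase"] > 30:
        # Exception 1: long-standing customers get slack.
        if request["customer_years"] >= 5:
            # Exception to the exception: unless they abuse it.
            if request["refunds_this_year"] > 3:
                # ...and so on. Each encoded exception becomes a rule
                # with edge cases of its own; the regress never closes.
                return False
            return True
        return False
    return True

# A human agent would ask: "If we lose this one, do we lose ten?"
# No branch above asks that question.
print(approve_refund({"days_since_purchase": 45,
                      "customer_years": 6,
                      "refunds_this_year": 1}))  # True, by rule, not by judgment
```

However many branches get added, the sketch stays what it is: procedure. The question in the last comment never becomes a rule.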

So who’s going to cheat?

If AI can't, someone must do it in its place. That someone is no longer the work collective. It's whoever configures. Whoever prompts. Cheating doesn't disappear. It moves up to those who speak to the machine.

Before, deliberation over the rules of the trade belonged to those who did the work. The team decided what was acceptable. They negotiated the gap between the prescribed and the real.

Now, that deliberation belongs to those who design the agent. Choices are made upstream, encoded, invisible. The work collective no longer has a grip on the norms of its own work.

It no longer decides. It uses.