There’s a whole genre of AI productivity advice built around role assignment. “You are a senior security engineer. Review this code.” “You are a skeptical product manager. Challenge my assumptions.” “You are a technical writer. Simplify this.”
I get the appeal. It feels like you’re getting specialist coverage without paying specialist rates. And it does produce something: the AI will play the role, generate the kind of pushback the role implies, and you’ll feel like you’ve stress-tested your thinking.
But it’s a performance. You’ve constructed an adversary, not earned one.
The more valuable thing — the thing that actually changes your work — is a relationship where the AI has enough context and explicit permission to tell you when you’re slipping into the wrong role.
What that looks like in practice
This morning I was second-guessing the term “AI Accessibility.” A co-worker had pushed back, said it sounded like AI was “disabled” or something. I found myself wondering if we should rename the whole thing, pick something less fraught. So I brought the doubt to the AI I’ve been working with on this project.
The response I got back was essentially: that’s the LinkedIn thought leader talking, not you. The co-worker’s reaction reveals a misunderstanding of what accessibility means — it was never about deficit — and retreating from a correct term because of a misread is how good ideas get sanded down into inoffensive mush.
That’s not a role. That’s not “you are a skeptic.” That’s a collaborator who knows the project well enough to recognize when I’m drifting from it — and has explicit permission to say so.
The difference matters. A skeptic role generates skepticism. A real collaborator generates accurate pushback — which sometimes means “you’re right, push harder,” not just “have you considered the counterargument?”
Why it requires actual investment
This only works if the AI has context. Not a prompt that sets up a persona, but actual accumulated understanding of what you’re building, why you’re building it, what you’ve decided and why, and what your genuine instincts look like when you’re not second-guessing yourself.
Without that, the AI defaults to compliance. You say “I’m thinking of renaming this” and it helps you brainstorm new names. It has no basis to say “wait, why?” It doesn’t know that you spent three sessions arriving at that name, or what argument settled it, or that external pressure is currently making you doubt a decision you were right about.
With that context — and explicit permission to use it — it can actually catch you. Not as a performance of catching you. As the real thing.
The reductio
I’ve seen people proudly describe running a “devil’s advocate” agent, or an “adversarial reviewer” agent, or — I’m not making this up — something they call the “cosmic asshole” agent, specifically configured to tear their ideas apart.
I understand the instinct. Pressure-testing is real. But what you’ve built there is a vending machine that dispenses objections. It will find something wrong with anything, because that’s what it’s been told to do. That’s not the same as a collaborator who has a stake in the project and is telling you this specific thing is wrong for this specific reason.
The costume version produces a lot of output. The relationship version produces the right output at the right moment — including “no, actually you’re fine, stop wobbling.”
That second thing is harder to set up. It requires maintaining context across sessions, being explicit about what your commitments are, and trusting the AI enough to give it permission to push back rather than just execute. But it’s the thing that’s actually useful.
The assigned role is easier. It’s also why most people’s AI-assisted work doesn’t get better over time — just faster.