Every few months someone publishes a hot take about AI replacing product managers. The argument is always the same: AI can write user stories, analyze data, prioritize backlogs, generate roadmaps. What's left?
The people making this argument don't understand what LLMs actually are, and they don't understand what PMs actually do.
What an LLM actually is
A large language model is a stochastic text predictor. It was trained on a massive corpus of internet text to predict the next token given a sequence of tokens. That's it. It's extraordinarily good at this, which makes it useful for a wide range of tasks - but it's not reasoning in any meaningful sense. It's pattern-matching at scale.
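To make "stochastic text predictor" concrete, here is a toy bigram model - counting which token follows which in a tiny corpus, then sampling the next token from those counts. This is a deliberately simplified sketch, not how a real LLM is built, but the job it performs is the same one: given the tokens so far, emit a statistically likely next token.

```python
import random
from collections import defaultdict, Counter

# Tiny "training corpus" - a real LLM uses a huge slice of the internet.
corpus = "the user clicks the button the user opens the app".split()

# "Training": count which token follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(token):
    """Sample the next token in proportion to how often it followed `token`."""
    followers = bigrams[token]
    if not followers:
        return None
    choices, weights = zip(*followers.items())
    return random.choices(choices, weights=weights)[0]

print(predict_next("the"))  # one of: "user", "button", "app"
```

Scale the counts up to billions of learned parameters and the output becomes fluent prose - but the mechanism is still picking a plausible continuation, not deliberating.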
This matters because the things that make a PM valuable are not pattern-matching problems. When I'm deciding whether to build real-time order tracking or invest in improving our search ranking algorithm, I'm not looking for the statistically likely next token. I'm weighing specific tradeoffs: engineering complexity, infrastructure cost, user impact, competitive pressure, what our data actually shows about where users drop off. An LLM trained on generic product management content cannot do this for me. It doesn't know that our search has a cold-start problem for new sellers, or that our payment gateway has a 2-second latency spike that makes real-time anything painful to build.
LLMs also hallucinate. Not occasionally - regularly. They produce confident, well-formatted text that is factually wrong. For synthesis tasks where you can verify the output, this is manageable. For judgment calls where the whole point is that you're making a decision under uncertainty, a confidently wrong answer is worse than no answer.
What AI can do: write user stories, generate roadmaps, summarize research, draft specs from a prompt.
What it can't: weigh tradeoffs with full context, build trust with engineers, know when to push back on leadership, understand why users behave the way they do.
Where the replacement risk is real
Here's what the "AI replaces PMs" crowd gets right, even if they're wrong about the mechanism.
A lot of PM work is currently done badly. Specs that are vague because the PM didn't think hard enough. Prioritization that's really stakeholder appeasement dressed up in RICE scores. User research that confirms what the PM already believed. Roadmaps that are fiction.
AI makes it easier to produce the artifacts of PM work without doing the underlying thinking. You can generate a spec from a one-line prompt. It will look like a spec. It will have acceptance criteria and edge cases and a nice structure. It will also be wrong in ways that aren't obvious until an engineer spends a week building the wrong thing.
I've seen this happen. We had an engineer spend a week building a batch job that ran nightly when what we actually needed was a real-time webhook handler. The spec said "sync order status" - I meant "push updates immediately on status change," and he reasonably read it as "keep the systems in sync." A nightly batch job does keep them in sync. It also means a customer's order shows as "processing" for 23 hours after it shipped. That's a support ticket, a refund request, and a lost customer. The spec was vague because I hadn't thought hard enough about the timing requirement. AI-generated specs are vague in exactly this way, because the AI doesn't know what "sync" means in your specific context.
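The ambiguity is easy to see in code. Both sketches below are hypothetical - simplified stand-ins for the real systems - but both are faithful implementations of "sync order status." Only one meets the timing requirement the spec never stated.

```python
# Reading 1: nightly batch. Statuses converge, but can lag by up to a day.
def nightly_sync(orders, fetch_status):
    """Run once a night: refresh every order's status from the source of truth."""
    for order in orders:
        order["status"] = fetch_status(order["id"])

# Reading 2: real-time webhook. Status updates the moment the event arrives.
def handle_status_webhook(event, orders_by_id):
    """Called per status-change event pushed by the upstream system."""
    orders_by_id[event["order_id"]]["status"] = event["status"]
```

Both functions "keep them in sync." The spec's job was to say which one - and that judgment call is exactly the part a one-line prompt skips.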
What actually can't be automated
The things I do that are genuinely hard: deciding which of the 40 things on the backlog actually matters given what I know about our users and our business. Knowing when an engineer's estimate is off because the problem is harder than it looks versus because they're sandbagging. Understanding that a feature request from a large seller is really about a workflow problem we haven't solved, not the feature they're asking for. Knowing when to push back on a direction from leadership and when to execute.
None of these are things you can prompt your way to. They require context that lives in my head from months of working with the team, reading support tickets, watching session recordings, and making decisions and seeing what happened.
The other thing that can't be automated is trust. My engineering team ships things for me because they trust that I've thought through the problem, that my specs are accurate, and that I'll back them up when something goes wrong. That trust was built over time through specific interactions. An AI can't build it.
The actual opportunity
I use AI every day. It saves me real time on synthesis, on first-pass editing, on getting oriented in a new technical area quickly. When I need to understand how our payment processor's webhook retry logic works before writing a spec, I can get a solid mental model in ten minutes instead of an hour.
But I use it as a tool, not as a replacement for thinking. The PMs who will be fine are the ones who use AI to get the low-value work off their plate so they can spend more time on the high-value stuff. The ones who use it to avoid thinking will eventually have nothing left to offer - not because AI replaced them, but because they replaced themselves.
The replacement risk isn't "AI does the PM's job." It's "AI enables PMs to fake the job more convincingly, and the ones who were already faking it will get found out when the products they ship keep failing."