I've watched three different teams try to use AI to generate their product roadmaps in the last year. Not as a brainstorming aid - as the actual output. Feed in user research, business goals, competitive analysis, and get back a prioritized feature list. The results were coherent, well-structured, and almost entirely wrong.
The problem isn't that AI is bad at generating lists. It's excellent at that. The problem is that a roadmap isn't a list. It's a set of bets, and the value of each bet depends on information that doesn't exist in any document you can feed a model.
What roadmaps actually encode
When I build a roadmap at Uzum, I'm not just prioritizing features by impact and effort. I'm encoding a dozen constraints that live nowhere except in my head and in conversations I've had over the past six months.
I know that our payments infrastructure team is mid-migration from a synchronous payment flow to an async event-driven architecture. That means any feature touching checkout is going to be 3x harder to build for the next two quarters than it looks on paper - because the engineers will be working against a codebase that's half-migrated, with two different patterns coexisting in the same service. An AI looking at our backlog sees "checkout optimization" as a medium-complexity item. I know it's a nightmare right now.
I know that one of our senior engineers has been wanting to rebuild the order status notification system for eight months. She's identified a real problem - we're doing polling where we should be using webhooks, which means our status updates lag by up to 30 seconds and we're hammering our own database with unnecessary reads. If I put that project on the roadmap, she'll execute it faster and better than anything else we could assign her. That's not in any document. It's a conversation we had.
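The polling-versus-push problem she identified is easy to see in miniature. This is a toy sketch, not our actual notification system: the `OrderStore` class and its method names are invented for illustration, standing in for a database being polled on one side and a webhook-style subscription on the other.

```python
# Hypothetical illustration of polling vs. push for order status.
# All names here (OrderStore, get_status, set_status) are invented.

class OrderStore:
    """Stands in for the database the notification system reads."""
    def __init__(self):
        self.status = "pending"
        self.reads = 0          # count every database read
        self.subscribers = []   # webhook-style callbacks

    def get_status(self):
        # Polling model: every check is a read, changed or not.
        self.reads += 1
        return self.status

    def set_status(self, status):
        # Push model: the write itself notifies subscribers,
        # so there is zero lag and zero extra reads.
        self.status = status
        for callback in self.subscribers:
            callback(status)

store = OrderStore()

# Polling: ten checks while nothing changes is ten wasted reads,
# and the worst-case update lag is one full polling interval.
for _ in range(10):
    store.get_status()

# Push: subscribe once, and the update arrives on the write.
received = []
store.subscribers.append(received.append)
store.set_status("shipped")

print(store.reads)   # 10 reads produced zero new information
print(received)      # ['shipped'], delivered immediately
```

The same asymmetry holds at scale: polling cost grows with the number of watchers times the polling frequency, while push cost grows only with the number of actual status changes.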
I know that a competitor is about to launch a feature that will make one of our planned Q2 items irrelevant. I know this because someone on our growth team has a contact there. That's not in any document either.
A roadmap that doesn't account for these things isn't a roadmap. It's a feature list. AI can generate feature lists. It cannot generate roadmaps.
The specific things AI can't see
Let me be concrete about the categories of context that matter and that AI systematically lacks.
What AI can see: user research docs, business goals, competitive analysis, backlog items.

What it can't: technical debt topology, team motivation, organizational politics, regulatory timing, informal intel.
Technical debt topology. Every codebase has areas that are clean and areas that are a mess. The mess isn't random - it accumulates in specific places for specific reasons, usually because those areas changed requirements frequently or were built under time pressure. A feature that touches a clean service might take two weeks. The same feature touching a legacy service with no test coverage and three different data models might take eight. AI doesn't know which is which.
Team capability and motivation. Not all engineers are interchangeable. Some people are exceptional at certain problem types. Some are burned out on certain domains. Some are learning a new skill and will be slower for the next quarter but faster after that. Roadmap sequencing that ignores this is roadmap sequencing that will miss its dates.
Organizational politics. Some initiatives have a champion in leadership who will clear blockers. Others are technically on the roadmap but nobody senior actually cares about them, which means they'll get deprioritized the moment something urgent comes up. AI cannot read the room.
Regulatory and market timing. In Uzbekistan, payment regulations change. A feature that's fine to ship today might require a compliance review if we wait three months. That's not in any document - it's in a conversation with our legal team.
Where AI actually helps
I use AI in roadmap work constantly. Just not for the prioritization itself.
Synthesizing support ticket themes across thousands of tickets - AI is faster and more consistent than I am. Generating a long list of possible directions before I start narrowing - AI gets me to 40 options in 10 minutes instead of 90 minutes of solo brainstorming. Stress-testing my reasoning by asking it to argue against my current priorities - it's a useful sparring partner.
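The ticket-synthesis step is worth making concrete. In practice you'd hand batches of tickets to a model and ask for recurring themes; the toy sketch below uses a plain keyword tally instead, just to show the shape of the output a PM actually wants. The tickets and theme keywords are invented for this example.

```python
# Toy stand-in for theme synthesis across support tickets.
# A real pass would use an LLM over batches of tickets; a keyword
# tally is used here only to show the shape of the result.
from collections import Counter

tickets = [
    "checkout page froze before payment confirmation",
    "payment failed twice, money deducted",
    "order status stuck on pending for an hour",
    "can't see order status after payment",
]

# Hypothetical theme -> keyword mapping; a ticket can hit several themes.
themes = {
    "payment": ("payment", "money", "checkout"),
    "order status": ("status", "pending"),
}

counts = Counter()
for text in tickets:
    for theme, keywords in themes.items():
        if any(word in text for word in keywords):
            counts[theme] += 1

print(counts.most_common())  # [('payment', 3), ('order status', 2)]
```

The ranked theme counts are the input to prioritization, not the prioritization itself - which is exactly the division of labor the paragraph above describes.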
The pattern is: AI helps me think, I make the decisions. The moment you flip that and let AI make the decisions while you review them, you've lost the thing that makes a roadmap valuable. You've traded judgment for the appearance of rigor.
The rigor problem
This is what actually worries me about AI-generated roadmaps. They look rigorous. They have structure, they reference data, they use the right vocabulary. A PM who presents an AI-generated roadmap to stakeholders is implicitly saying "this is what the analysis shows." But the analysis is only as good as the inputs, and the inputs are always incomplete.
A roadmap that looks systematic is harder to challenge than one that's clearly a judgment call. That's dangerous, because the judgment call - when it's grounded in real context - is usually more accurate than a systematic process applied to incomplete data. I'd rather have a stakeholder push back on my reasoning than accept a confident-looking output that's missing half the relevant information.
The hard parts of roadmapping are hard because they require judgment that can only come from being close to the work. AI can help you get to the starting line faster. It can't run the race for you.