Why Roadmaps Lie

@safarslife·January 15, 2024

Every roadmap I've ever shipped is a snapshot of what I believed on the day I wrote it. By the time it reaches the team, some of those beliefs are already wrong. By the time Q2 arrives, half the items have shifted - wrong priority, wrong scope, wrong assumptions about what the system could actually do. I used to scramble to explain the gap. Now I understand the gap is the point.

The problem isn't that PMs are bad at planning. It's that a roadmap is a confidence interval dressed up as a schedule. The moment you publish it, it starts decaying.

The roadmap that feels solid

Checkout redesign: August 14th. Payment reliability improvements: September 1st. Seller analytics dashboard: Q4.

The roadmap that's actually honest

Checkout redesign: Q3, assuming no major UX research findings that change scope. Payment reliability: Q3, dependent on payment provider API support for async callbacks (validating now). Seller analytics: Later, assumptions TBD.

What a roadmap actually is

When I was still writing code, I understood that a system design document was a hypothesis. You'd sketch out the architecture, make assumptions about load and latency, and then reality would correct you during implementation. Nobody expected the design doc to survive contact with the database. You updated it as you learned.

Roadmaps should work the same way. They don't, because stakeholders read them as contracts.

At Uzum, I spent my first year treating the roadmap like a delivery commitment. I'd present it, people would nod, and I'd feel like I'd accomplished something. Then we'd hit a dependency we hadn't modeled - a third-party payment provider API that didn't support the webhook pattern we'd designed around, or a database schema that made the new feature's query plan catastrophically slow at our transaction volume. The roadmap said we'd ship in Q2. The system had other opinions.

The gap between what I promised and what shipped wasn't a planning failure. It was the cost of not knowing what I didn't know yet. That cost is always real. The question is whether you pretend it doesn't exist.

The precision trap

The worst roadmaps I've seen have specific dates next to every item. "Checkout redesign: August 14th." That date came from somewhere - probably a planning session where someone asked "how long will this take?" and an engineer said "six weeks" and someone counted forward on a calendar. Six weeks of what, exactly? Six weeks assuming no incidents, no scope changes, no discovery that the proposed approach won't work at scale.

At Uzum's transaction volume, we've had features that worked fine in staging fall apart in production because the query pattern that's fast on 10,000 rows is a full table scan on 50 million. That's not an engineering failure. That's the kind of thing you find out when you're building at scale, and it adds time. The roadmap date doesn't care.
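The staging-versus-production cliff is easy to see with a database's own query planner. Here's a toy sketch using SQLite (table and column names are hypothetical, not a real schema): the same query is a full table scan without an index and an index search with one - the difference that's invisible at 10,000 rows and catastrophic at 50 million.

```python
import sqlite3

# Hypothetical schema for illustration - not any real production table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, seller_id INTEGER, total REAL)"
)

query = "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE seller_id = 42"

# Without an index on seller_id, the planner has no choice but a full scan.
plan = conn.execute(query).fetchall()
print(plan[0][3])  # e.g. "SCAN orders"

# Add the index and the planner switches to an index search.
conn.execute("CREATE INDEX idx_orders_seller ON orders (seller_id)")
plan = conn.execute(query).fetchall()
print(plan[0][3])  # e.g. "SEARCH orders USING INDEX idx_orders_seller (...)"
```

In staging both versions return instantly, so nobody looks at the plan. The point for the roadmap: that index (or schema change) is work you discover mid-build, and it wasn't on anyone's calendar.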

I stopped putting specific dates on roadmap items unless there's a hard external constraint - a regulatory deadline, a partner launch, something that actually can't move. For everything else, I use quarters. "We expect this in Q3." That's honest. "We'll ship this August 14th" is a number I'll have to explain in September.

What I actually track

The artifact I care about more than the roadmap is the assumption log. Every significant item on the roadmap has a set of beliefs underneath it: what we think the user problem is, what we think the technical approach will be, what we think the scope is. When those beliefs change - and they always change - I update the log and we revisit the item.

This sounds like more work. It's actually less, because it prevents the thing I hate most: building something for six weeks and then discovering in the retrospective that the core assumption was wrong in week two and nobody said anything because the roadmap was committed.

💡 The assumption log is the artifact that matters. Every roadmap item should have a one-line assumption statement. When the assumption breaks, the item moves or changes shape. That's not a planning failure - that's the system working correctly.

I've watched engineers implement features they privately suspected were wrong because the spec said so and the spec came from the roadmap and the roadmap was "committed." That's expensive. A batch job that runs nightly when you needed a real-time event handler. A caching layer built for read-heavy load when the actual pattern is write-heavy. The roadmap didn't cause those mistakes directly, but the culture of treating it as a contract made it harder to surface them early.

Holding it loosely

The roadmap should be the least stable artifact your team produces. If your roadmap hasn't changed in three months, either you're building something with perfect requirements - which doesn't exist - or you're not learning fast enough to update it.

I keep a simple three-column view: Now, Next, Later. No dates except where they're real. Each item has a one-line assumption statement. When the assumption breaks, the item moves or changes shape. Stakeholders who need dates get ranges and explicit dependencies: "Q3, assuming the payment provider API supports async callbacks, which we're validating now."
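The whole structure fits in a few lines of code. A minimal sketch in Python (item names and assumption text are illustrative, not the actual roadmap): each item carries its horizon and its one-line assumption, and a broken assumption demotes the item for re-scoping instead of letting it silently keep its slot.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RoadmapItem:
    name: str
    horizon: str                     # "Now", "Next", or "Later"
    assumption: str                  # the one-line belief this item rests on
    hard_date: Optional[str] = None  # only for real external constraints

# Illustrative items, loosely based on the examples above.
roadmap = [
    RoadmapItem("Checkout redesign", "Now",
                "No major UX research findings change scope"),
    RoadmapItem("Payment reliability", "Next",
                "Provider API supports async callbacks (validating now)"),
    RoadmapItem("Seller analytics", "Later", "TBD"),
]

def assumption_broke(item: RoadmapItem) -> None:
    """A broken assumption moves the item back to 'Later' for re-scoping."""
    item.horizon = "Later"
    item.assumption = "TBD - previous assumption invalidated"

# The payment provider turns out not to support async callbacks:
assumption_broke(roadmap[1])
print([(i.name, i.horizon) for i in roadmap])
```

The artifact isn't the code, of course - a spreadsheet works fine. What matters is that the assumption field exists at all, so there's a designated place for "this belief just broke" to land.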

That's not a weaker roadmap. It's an honest one. The fiction version feels more solid until it doesn't, and then you're explaining to a VP why August 14th became October.

Roadmaps are useful. I use them. But they're a communication tool, not a contract, and the moment you start defending items you privately know are wrong because the roadmap says so, you've crossed into theater. The team deserves better than that.