Prioritization Frameworks Are Overrated

@safarslife · October 20, 2024

RICE. MoSCoW. WSJF. Kano. ICE. The PM internet loves a prioritization framework. There are entire courses on how to score your backlog using weighted formulas. There are Notion templates with color-coded matrices. There are consultants who will teach your team the right way to prioritize.

I've tried most of them. They're fine. They're also not the thing that actually determines what gets built.

What frameworks are actually for

Prioritization frameworks are useful for one thing: making your reasoning legible to other people. When you score items using RICE and share the scores, stakeholders can see why you ranked things the way you did. They can argue with your inputs. They can point out that you underestimated the reach of a feature or overestimated the confidence. That's a useful conversation.

What frameworks are not useful for is generating the right answer. The right answer comes from understanding your users, your business, and your technical constraints well enough to have good judgment. The framework structures that judgment in a way that's easier to communicate. If you have bad judgment, a RICE score won't fix it. You'll just have a spreadsheet full of confidently wrong numbers.

I've seen this play out at Uzum. We had a backlog item for improving seller onboarding - the flow where new merchants set up their store, upload products, connect their bank account. The RICE score was mediocre: moderate reach, moderate impact, high effort. By the numbers, it ranked below a dozen other items. But anyone who'd spent time with the seller support queue knew that onboarding friction was causing us to lose sellers in the first week, and those sellers never came back. The metric we were using for "reach" counted active sellers, not churned ones. The framework was measuring the wrong thing, and it was measuring it confidently.

We built the onboarding improvement anyway. It was the right call. The framework didn't help us get there.

The false precision problem

The thing that bothers me most about scoring frameworks is the false precision. You assign a "reach" score of 50,000 users. Why 50,000? Because you estimated it. The estimate might be off by a factor of two in either direction. Then you multiply it by an "impact" score of 3 (on a scale of 1-5, because that's what the template says) and a confidence of 100% (because nobody writes down how unsure they actually are), divide by an "effort" score of 2 weeks, and get a RICE score of 75,000.

That number feels precise. It's not. It's a structured guess dressed up as a calculation.

I've watched teams spend an hour debating whether a feature's impact score should be 3 or 4, which changes the RICE score by a third, which changes the ranking. The debate is real but the precision is fake. You don't know if the impact is 3 or 4. You're guessing. The hour you spent debating it would have been better spent talking to users or looking at the actual data.
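To make the false-precision point concrete, here's a small sketch using the numbers from the example above. The error bars are my own illustrative assumptions (reach off by 2x in either direction, impact debated between 3 and 4), not measurements from any real backlog:

```python
def rice(reach, impact, confidence, effort):
    """Standard RICE formula: (reach * impact * confidence) / effort."""
    return reach * impact * confidence / effort

# Baseline from the example in the text: 50,000 reach, impact 3,
# 100% confidence, 2 weeks of effort.
baseline = rice(50_000, 3, 1.0, 2)    # 75,000

# The same feature with inputs nudged within plausible estimation error:
# reach halved or doubled, impact scored 3 by one person and 4 by another.
low = rice(25_000, 3, 1.0, 2)         # 37,500
high = rice(100_000, 4, 1.0, 2)       # 200,000

print(baseline, low, high)
```

A score that can plausibly land anywhere from 37,500 to 200,000 isn't a ranking, it's an interval. Sorting a backlog by the point estimate hides that interval entirely.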

The deeper problem is that scoring frameworks treat all uncertainty as equivalent. A feature where you're highly confident about reach and impact gets the same treatment as one where you're guessing at both. The framework doesn't capture that difference. Your judgment does.

⚠️ RICE scores feel precise but they're structured guesses. A feature scored at 75,000 vs 72,000 is not meaningfully different. The hour spent debating impact scores would be better spent talking to users.

What I actually do

I write down the three to five things I think we should build next and the reasoning behind each one. Then I share that with the team and stakeholders and we discuss it.

The discussion is the point. Not the framework, not the scores - the conversation about why we think these things matter, what we're betting on, what we're trading off. That conversation surfaces disagreements and assumptions that a spreadsheet would hide.

When there's genuine disagreement about priority, I try to make the disagreement concrete. "You think we should build the seller analytics dashboard first. I think we should fix the inventory sync reliability first. Here's my reasoning: the analytics dashboard is a nice-to-have for sellers who are already successful on the platform. The inventory sync failures are causing sellers to oversell, which creates order cancellations, which damages buyer trust. Buyer trust is harder to rebuild than seller satisfaction." That's a real conversation. Comparing RICE scores is not.

The other thing I do is make the cost of being wrong explicit. Some decisions are cheap to reverse - you can ship a feature, see that it's not working, and pull it back or iterate. Others are expensive to reverse - you've made an architectural commitment, or you've set a user expectation that's hard to walk back, or you've spent six months of engineering time on something that turns out to be wrong. I weight expensive-to-reverse decisions more heavily, and I'm more willing to do extra validation before committing to them. No framework I've seen captures this well.

When frameworks actually help

I'm not saying never use a framework. When you have a very large backlog and you need to do a rough sort to figure out what to even discuss, a scoring exercise can help. When you're onboarding a new stakeholder who wants to understand how you make decisions, walking through a framework can be useful. When you're trying to get a team that's been arguing about priority for weeks to converge on something, a structured exercise can break the deadlock.

But the framework should serve the conversation, not replace it. The moment you're treating the RICE score as the answer rather than a starting point for discussion, you've lost the plot. The score is a way to make your assumptions visible. The assumptions are what matter.

Good prioritization is mostly just good judgment applied consistently. Frameworks can help you communicate that judgment. They can't substitute for it, and they can't save you from the consequences of not understanding your users or your system well enough to know what actually matters.