I've run OKRs at two companies. At one of them, they were a genuine planning tool that changed what we built. At the other, they were an elaborate ritual that consumed two weeks of every quarter and changed nothing. The difference wasn't the framework. The framework is identical everywhere. The difference was whether the OKRs were written before or after the roadmap was decided.
At the company where OKRs worked, the process started with the objectives. We'd spend a week arguing about what actually mattered this quarter - not what we were planning to build, but what outcomes we were trying to achieve. Then we'd figure out what to build. When a new request came in mid-quarter, the first question was "which key result does this move?" If the answer was none, the request went to the bottom of the list. The OKRs were a filter.
At the company where they didn't work, the roadmap was decided in November for the following year. In January, the team would look at what they were planning to build and write key results that those things would presumably move. The OKRs were a post-hoc justification. Nobody was surprised when they didn't change anything, because they were never meant to.
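The filter in the first company's process can be made concrete. Here's a minimal sketch in Python, with invented names (`KeyResult`, `Request`, `triage`) that don't come from any real tool - the point is just that a request with no key result attached sinks to the bottom of the list.

```python
# Toy sketch of the mid-quarter filter: "which key result does this move?"
# All names here are illustrative, not from any real OKR tool.
from dataclasses import dataclass
from typing import Optional


@dataclass
class KeyResult:
    name: str


@dataclass
class Request:
    title: str
    moves: Optional[KeyResult]  # which key result this would move, if any


def triage(backlog: list) -> list:
    """Requests tied to a key result come first; everything else sinks."""
    # sorted() is stable, so order within each group is preserved
    return sorted(backlog, key=lambda r: r.moves is None)


kr = KeyResult("raise activation rate from 22% to 30%")
backlog = [
    Request("stakeholder pet feature", moves=None),
    Request("fix onboarding drop-off", moves=kr),
]
print([r.title for r in triage(backlog)])
# the request that moves a key result comes first
```

The detail that matters is that the filter is a sorting rule, not a veto: unaligned requests don't get rejected outright, they just never win a priority argument against aligned ones.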
The grading game
Most OKR implementations I've seen develop a grading problem within two or three quarters. The standard advice is to target 0.7 on a 0-1 scale - if you're consistently hitting 1.0, your goals weren't ambitious enough. In theory this encourages stretch goals. In practice it teaches teams to write key results they can hit at 0.7 without too much effort.
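For readers who haven't seen the grading convention, the arithmetic is simple. This is a sketch of the common scheme (score each key result as the fraction of target achieved, capped at 1.0, then average), not a prescription - the function names are mine.

```python
# Sketch of the common 0-1 OKR grading convention described above.
# Function names are illustrative.

def kr_score(actual: float, target: float) -> float:
    """Score one key result: fraction of target achieved, capped at 1.0."""
    return min(actual / target, 1.0)


def objective_grade(results: list) -> float:
    """Average the key result scores for one objective.

    `results` is a list of (actual, target) pairs.
    """
    return sum(kr_score(a, t) for a, t in results) / len(results)


# Three key results, each graded (actual, target):
grade = objective_grade([(28, 40), (7, 10), (0.7, 1.0)])
print(round(grade, 2))
```

The gaming described below happens inside `target`: write the target so that the `actual` you'd hit anyway lands the average near 0.7, and the grade looks like calibrated ambition.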
The game works like this: you write a key result that sounds ambitious but has enough slack that you can hit 0.7 even if things go sideways. You learn the vocabulary of ambition without the substance of it. After a few quarters, everyone knows the game and plays it, and the OKRs become a performance of rigor rather than an exercise in it.
The root cause is using OKRs for performance management. When missing a key result has career implications, people will write key results they can hit. You can't have it both ways - either OKRs are a planning and learning tool, or they're a performance measurement tool. Trying to make them both produces the worst of each.
The fix I've seen work is to explicitly decouple OKR grades from performance reviews and to change the question at the end of the quarter. Instead of "why did you miss this?" the question becomes "what did you learn about why this was harder than expected?" That's a different conversation. It requires leadership that's genuinely okay with missing targets, which is rarer than it sounds.
What the framework actually gets right
The part of OKRs I find genuinely valuable is the forcing function of writing the objective. Having to articulate, in one sentence, what you're trying to achieve this quarter - and having that sentence be ambitious enough to be worth caring about - is hard. It forces a conversation about what actually matters that often doesn't happen otherwise.
At Uzum, writing a good objective for a quarter means I have to answer: what does success look like for our users, not just for our metrics? What would we be proud of having shipped? What would make this quarter matter? Those are uncomfortable questions when you're deep in execution mode, and the OKR process forces you to answer them.
The key results are more variable. Sometimes they're the right measure of the thing you care about. Sometimes they're a proxy that drifts from the actual goal. I've had key results that we hit perfectly while the underlying objective got worse - we optimized for the metric and missed the point. The discipline of checking, every few weeks, whether your key results are still pointing at the right thing is worth doing regardless of what you call it.
The cadence problem
The quarterly cadence is probably too slow for most product teams. At Uzum, the market moves faster than that. A competitor launches something, a regulatory change comes through, a technical dependency shifts - and suddenly the key results you wrote in January are measuring the wrong things in March.
I've moved toward treating OKRs as a direction-setting tool rather than a commitment mechanism. We write them quarterly, but we review them monthly and explicitly ask whether they still make sense. If they don't, we update them. This makes some people uncomfortable - "you're just changing the goalposts" - but I'd rather have accurate goals than consistent ones.
The teams I've seen get the most value from OKRs are the ones that treat them as a conversation starter, not a contract. The conversation about what matters this quarter, why it matters, and how you'll know if you're making progress - that conversation is valuable. The specific format you use to have it is much less important than whether you actually have it.