There's a version of the PM job that looks great in job descriptions. You spend mornings doing user research, afternoons synthesizing insights, evenings writing crisp specs. You're always one sprint ahead of the team, always discovering the next problem before the current one ships.
I have never lived that version of the job. Not at Uzum, not anywhere.
What actually happens is you spend most of your time in delivery mode - unblocking engineers, reviewing designs, answering questions that should have been in the spec, managing stakeholder expectations about why the thing that was supposed to ship last week didn't. Discovery gets squeezed into the margins. A quick look at support tickets here, a data pull there, a conversation with the ops team you meant to follow up on but didn't.
Why delivery eats everything
Delivery is urgent. Discovery is important. You know how that plays out.
When a sprint is in flight, there are always things pulling you toward the current work. An engineer hits a blocker and needs a decision - do we handle this edge case now or defer it? A design review surfaces a problem with the flow that requires a product call. QA finds something weird in the payment confirmation step that might be a race condition or might be a test environment issue, and you need to figure out which before you can decide whether to hold the release.
None of these are bad things. They're the job. But they're all about what's shipping now, not what you should build next.
Discovery has no immediate deadline. Nobody is blocked waiting for your user research. The consequences of skipping it are real but delayed - you build the wrong thing, you solve a symptom instead of the cause, you miss that the feature you're about to build already exists in a form users don't know about. Those consequences show up weeks or months later, not today. So you skip it, or you do a thin version of it, or worst of all, you do it after the fact - research that confirms what you already built.
What I've actually tried
At Uzum, I tried blocking two mornings a week for discovery. No meetings, no Slack, just research. It worked for about a month before the meetings crept back in. The problem with time-blocking is that it only works if you're willing to defend it, and defending it means saying no to things that feel urgent. When an engineer is blocked on a decision that affects three other engineers, "I have discovery time blocked" is a hard position to hold.
The approach that's actually stuck is treating discovery as a constraint on delivery, not a separate track. Every sprint, I try to have one specific question I need to answer before we can confidently scope the next thing. Not a research project - a question. "Do users who abandon checkout at the payment step do so because of the UX or because of payment method availability?" That's answerable in a few hours with the right data pull and a look at support ticket patterns.
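As a rough illustration of what that data pull might look like, here's a minimal Python sketch. Every field name and category here is invented for the example; a real version would query your checkout-events table, and the classification logic would depend on what your analytics actually captures.

```python
# Hypothetical sketch: segment payment-step abandonments into
# "availability" vs. "UX" buckets. All field names are invented.
from collections import Counter

# Each record represents one checkout abandoned at the payment step.
abandoned = [
    {"user_id": 1, "preferred_method_available": False, "ux_error_seen": False},
    {"user_id": 2, "preferred_method_available": True,  "ux_error_seen": True},
    {"user_id": 3, "preferred_method_available": False, "ux_error_seen": False},
    {"user_id": 4, "preferred_method_available": True,  "ux_error_seen": False},
]

def classify(event):
    """Crude bucketing: availability gap first, then UX friction, else unknown."""
    if not event["preferred_method_available"]:
        return "payment_method_availability"
    if event["ux_error_seen"]:
        return "ux_friction"
    return "unknown"

counts = Counter(classify(e) for e in abandoned)
total = len(abandoned)
for reason, n in counts.most_common():
    print(f"{reason}: {n}/{total} ({n / total:.0%})")
```

The point isn't the code, it's the scope: a question narrow enough that a few dozen lines against existing data can answer it, instead of a research project that needs its own kickoff meeting.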
The question forces me to do the work because it's tied to something concrete. If I don't answer it, I'm going into the next sprint planning session with a gap I know about. That's uncomfortable enough to motivate action.
The thing I got wrong early
For a long time I thought discovery and delivery were in tension - that more of one meant less of the other. That's not quite right. The real tension is between learning and shipping, and you need both.
A shipped feature tells you things no amount of research can. When we launched a new order tracking flow at Uzum, we had done the research, we thought we understood what users wanted. Then we saw the actual usage patterns in production and realized users were checking order status at a frequency we hadn't anticipated - multiple times per hour during the delivery window. That changed how we thought about push notifications, about caching the status endpoint, about the whole feature. We couldn't have learned that from interviews.
But if you're only in delivery mode, you're building on assumptions that get staler every sprint. At some point the gap between what you think users need and what they actually need gets wide enough that you ship something that misses entirely. I've done that. It's expensive - not just in engineering time, but in the credibility you spend explaining why the thing you shipped isn't getting used.
The honest answer
If you're a PM and you feel like you're not doing enough discovery, you're probably right. Most of us aren't. The job as practiced is much more delivery-heavy than the job as described.
The goal isn't some perfect balance. The goal is to make sure you're learning something every sprint, even if it's small. One answered question. One assumption validated or invalidated. One thing you know now that you didn't know two weeks ago. That's enough to keep the assumptions from going completely stale and to keep you from building confidently in the wrong direction for six months before anyone notices.
The PMs I've seen get this right aren't the ones with the most structured discovery processes. They're the ones who are genuinely curious about whether what they're building is working, and who treat that curiosity as part of the job rather than a luxury for when things slow down. Things don't slow down. You have to make the space.