The Product Sense Interview Is Broken

@safarslife · January 8, 2026

I've been on both sides of the product sense interview enough times to have a clear opinion: it mostly tests how well you've practiced product sense interviews. Not how good you are at product.

The format is familiar. You get a prompt - "design a product for elderly users" or "how would you improve Spotify" - and you walk through a framework. Clarify the goal. Define the user. Identify pain points. Prioritize solutions. Define success metrics. The interviewer nods, maybe pushes back on one assumption, and at the end you both pretend this 45-minute exercise tells you something meaningful about how you'd perform in the actual job.

It doesn't. And I say that having used this format myself, which I'm not proud of.

What it actually tests

The product sense interview, as typically run, tests three things: whether you know the framework, whether you can think out loud without panicking, and whether you're good at generating plausible-sounding ideas quickly.

What the interview tests: framework fluency, thinking out loud under pressure, and generating ideas fast in a 45-minute prompt about a product you've never worked on.

What the job actually requires: identifying the right problem, killing bad ideas, writing coherent specs, understanding technical constraints, and learning from mistakes.

The first is pattern matching. Anyone who's done five mock interviews can walk through the framework fluently. It doesn't mean they can actually do product. I've interviewed candidates who gave textbook-perfect answers and then struggled to write a coherent spec for a real problem. The framework is a performance, not a skill.

The second is a real skill, but it's more about communication style than product thinking. Some excellent PMs are terrible at thinking out loud in a high-pressure interview setting. They need to write things down, talk to engineers, sit with a problem for a day before they have a useful perspective. The interview format systematically disadvantages them. Some mediocre PMs are very good at performing confidence in real time.

The third is actively misleading. The ability to generate a lot of ideas quickly is not the same as the ability to identify the right problem and build the right solution. In my experience, the PMs who generate the most ideas in interviews are sometimes the ones who struggle most with focus in the actual job. Generating ideas is easy. Killing bad ideas is hard. The interview doesn't test the hard part.

The worst interview I've seen

I was once in a debrief where a candidate was rejected because they "didn't think big enough." The prompt was something like "improve the checkout experience for a mid-size e-commerce company." The candidate gave a thoughtful, technically grounded answer about reducing payment failure rates, improving error messaging, and adding retry logic for failed transactions. Practical stuff that would actually move conversion.

The interviewers wanted them to propose a completely reimagined checkout experience with AR try-on and social sharing features. For a mid-size e-commerce company. The candidate who "thought big" and proposed a bunch of technically infeasible features got the offer.

That candidate would have been a nightmare to work with. The one who got rejected would have been excellent. The interview selected for the wrong thing.

What I ask instead

When I interview PM candidates now, I give them a real problem - something we're actually working on or recently shipped - and I ask them to spend 30 minutes thinking about it before we talk. Not a week, just 30 minutes. Enough time to think but not enough to research and prepare a polished answer.

Then I ask them to walk me through their thinking. Not their answer - their thinking. What did they notice first? What questions did they ask themselves? What did they decide not to worry about and why?

I'm looking for a few specific things. Do they ask about technical constraints, or do they assume anything is buildable? A PM who's never thought about what happens when a database write fails, or what the difference is between a synchronous API call and an async job, will design features that are technically naive. That creates rework. Do they think about edge cases, or do they only think about the happy path? The happy path is easy. The edge cases are where products break.
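To make that sync-versus-async distinction concrete, here's a minimal sketch of the two shapes a "save the order" operation can take. All names are hypothetical, and SQLite plus an in-process queue stand in for a real database and job system:

```python
import queue
import sqlite3

def save_order_sync(conn, order_id, amount):
    """Synchronous write: the caller blocks and sees the failure
    immediately, so the product can show an error and offer a retry."""
    try:
        conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, amount))
        conn.commit()
        return True
    except sqlite3.IntegrityError:
        # e.g. a duplicate order_id: the failure surfaces right here,
        # in the request path, where the user can be told about it
        return False

# In-process stand-in for a real job queue (Celery, SQS, etc.)
job_queue = queue.Queue()

def save_order_async(order_id, amount):
    """Async job: the caller returns immediately and a worker does the
    write later. If that write fails, the user is long gone -- the
    product needs a different answer (retries, alerts, reconciliation)
    to 'what happens when this fails?'"""
    job_queue.put(("save_order", order_id, amount))
    return "queued"
```

A PM doesn't need to write this code, but a feature spec that assumes every write succeeds instantly, or that treats these two shapes as interchangeable, is exactly the kind of technical naivety the question is probing for.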


I also ask them to tell me about a decision they made that turned out to be wrong. Not a failure that was someone else's fault - a decision they made with the information they had that they'd make differently now. What they learned from it. How it changed how they work.

The candidates who give good answers to that question are almost always good PMs. The ones who can't think of a decision they made that was wrong are almost always not.

Why the format persists

The product sense interview persists because it's easy to run and easy to score. You can train any PM to run it in an hour. You can compare candidates on a consistent rubric. It feels rigorous.

But easy to run and easy to score are not the same as predictive of job performance. The format optimizes for the interviewer's convenience, not for signal quality. That's a bad tradeoff, and the industry keeps making it because changing interview formats is uncomfortable and the cost of bad hires is diffuse enough that nobody feels it directly.

I'm not saying I have a perfect alternative. Work samples are more work to evaluate. Real problem discussions require interviewers who know the domain well. But "this is hard to do better" is not a reason to keep doing something that doesn't work.