We shipped a feature at Uzum that I was proud of. Two sprints of work, clean design, solid engineering. We announced it. We watched the adoption numbers.
Nobody used it. Not "low adoption" - effectively nobody. After three months, fewer than 0.5% of active users had touched it, and most of those were internal team members testing it. We had built something that solved a problem users didn't have, or didn't have badly enough to change their behavior.
The question was: what do we do with it?
The sunk cost trap
The instinct is to fix it. You've already invested the time, so you should invest more time to make it work. Maybe the onboarding is bad. Maybe users don't know it exists. Maybe we need to promote it more aggressively.
Sometimes that's right. Sometimes a feature fails because of discovery or onboarding problems, not because the feature itself is wrong. But you have to be honest about which situation you're in, and most teams aren't.
In our case, the feature was in the main navigation. Users who found it understood what it did. They just didn't want it. The problem wasn't execution - it was the premise. Investing more in a feature that solves the wrong problem is not fixing the feature. It's delaying the decision to kill it.
What keeping a dead feature actually costs
The obvious cost is the engineering time to eventually remove it. That's real but usually not the biggest cost.
The less obvious cost is what you're paying every sprint to keep it alive. Every feature in your codebase is a feature you have to maintain. It has to work with every other feature. It has to be included in your test matrix. When you make a change to a shared service or a shared database table, you have to verify it doesn't break the dead feature too. A feature with 0.5% adoption still gets included in every regression cycle. It still gets considered when you're planning schema migrations. It still shows up in your monitoring dashboards as noise.
At Uzum, we have features that touch the order state machine - a core piece of infrastructure that a lot of things depend on. Every time we need to add a new order state or change a state transition, we have to trace through every feature that reads or writes order state. A dead feature that touches the order state machine is not free to keep around. It's a code path that has to be understood, tested, and accounted for every time we touch that part of the system.
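To make that coupling concrete, here's a toy sketch of an order state machine. All state names and the dead-feature state are invented for illustration - this is not Uzum's actual model - but it shows why even one extra state expands the surface you have to re-verify on every change:

```python
from enum import Enum, auto

class OrderState(Enum):
    CREATED = auto()
    PAID = auto()
    SHIPPED = auto()
    DELIVERED = auto()
    HELD_FOR_PICKUP = auto()  # hypothetical state added only for the dead feature

# Every allowed transition is something a schema migration or state change
# has to account for - including the dead feature's paths.
TRANSITIONS = {
    OrderState.CREATED: {OrderState.PAID},
    OrderState.PAID: {OrderState.SHIPPED, OrderState.HELD_FOR_PICKUP},
    OrderState.SHIPPED: {OrderState.DELIVERED},
    OrderState.HELD_FOR_PICKUP: {OrderState.SHIPPED},  # still in every regression cycle
    OrderState.DELIVERED: set(),
}

def can_transition(src: OrderState, dst: OrderState) -> bool:
    """Check whether a state change is allowed by the transition table."""
    return dst in TRANSITIONS[src]

def transition_count(transitions) -> int:
    """Number of edges you must trace through when the machine changes."""
    return sum(len(dsts) for dsts in transitions.values())

# Dropping the dead feature's state shrinks the verification surface:
live = {s: {d for d in dsts if d is not OrderState.HELD_FOR_PICKUP}
        for s, dsts in TRANSITIONS.items() if s is not OrderState.HELD_FOR_PICKUP}
```

With the dead state, `transition_count(TRANSITIONS)` is 5; without it, `transition_count(live)` is 3. The point is not the numbers - it's that every edge in the table is a code path someone has to understand before touching the machine.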
There's also the cognitive cost. A cluttered product is harder to reason about. When a new engineer joins and tries to understand the codebase, every dead feature is something they have to learn exists, understand the purpose of, and then figure out whether it's actually used. Dead code paths are a form of technical debt that compounds over time.
The third cost is the product itself. Every feature that doesn't earn its place makes the product slightly worse for the features that do. It adds visual noise. It adds navigation complexity. It adds surface area for bugs. A product with 20 features that all get used is better than a product with 30 features where 10 of them are dead weight.
How I made the call
I gave the feature a deadline before we shipped it. Three months of data, then we evaluate. At the three-month mark, I wrote up the data, the original hypothesis, and what we'd learned. The hypothesis was wrong. The data was clear. The decision was to remove it.
I presented this to the team and to the stakeholders who'd originally requested the feature. The conversation was easier than I expected. When you have clear data and you've given the feature a fair chance, most reasonable people accept the conclusion. What makes these conversations hard is when there's no pre-agreed success metric, so the decision feels arbitrary. "We're killing it because adoption is low" is a judgment call. "We're killing it because it didn't hit the 5% adoption threshold we set before shipping" is a data-driven decision.
The harder conversation was with myself. I'd advocated for this feature. I'd been wrong. Killing it meant admitting that publicly, at least internally. That's uncomfortable.
But the alternative - keeping a dead feature alive because I didn't want to admit I was wrong - is worse. It's bad for the product, it's bad for the codebase, and it's bad for the team's trust in my judgment. If I can't kill my own bad ideas, why would anyone trust my judgment on the good ones?
The rule I use now
Every feature I ship now has a success metric and a timeline defined before we start building. "We'll consider this successful if X% of users do Y within Z months." If it doesn't hit that bar, we evaluate whether to invest more or remove it.
This makes the kill decision easier because it's not a judgment call in the moment - it's a commitment made upfront. The feature either earns its place or it doesn't. That clarity is worth more than the discomfort of setting a bar you might not hit.
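The check itself is trivial once the bar is set upfront. A minimal sketch - the threshold and numbers are illustrative, not the exact figures from this post's feature:

```python
def feature_verdict(adopters: int, active_users: int, threshold_pct: float) -> str:
    """Compare actual adoption against the pre-agreed success threshold."""
    adoption_pct = 100.0 * adopters / active_users
    return "keep" if adoption_pct >= threshold_pct else "evaluate for removal"

# A feature at ~0.5% adoption against a 5% bar set before shipping:
print(feature_verdict(500, 100_000, 5.0))  # -> "evaluate for removal"
```

The value isn't in the arithmetic - it's that the threshold argument was fixed before the feature shipped, so the verdict is a commitment being honored, not a judgment call being made.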
It also changes how you build. When you've committed to a specific adoption metric, you think harder about discoverability, onboarding, and whether the feature is actually solving a real problem. You're less likely to ship something you're not confident in, because you know you'll be held to the number in three months.
The features that survive this process are the ones worth keeping. The ones that don't are the ones you should have killed earlier.