Why Developers Keep Shipping Vulnerable Code

@safarslife · August 19, 2025

I've reviewed enough PRs to have a pattern in my head for what vulnerable code looks like. A missing ownership check on an API endpoint that takes a resource ID as a parameter. A query built with string concatenation instead of parameterized inputs. An auth middleware that's applied to 11 of 12 routes because someone added the 12th route in a hurry and forgot to include it in the middleware chain.
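The forgotten-12th-route problem has a structural fix: enforce auth in the dispatcher itself, with an explicit allow-list for the few public routes, so a new route fails closed instead of open. A minimal sketch, with no real framework (the router, routes, and `PUBLIC_ROUTES` names are all hypothetical):

```python
# Default-deny routing sketch: auth is checked once, centrally, so a
# hurried engineer adding a 12th route cannot forget it. Routes must
# opt OUT of auth by appearing on an explicit public allow-list.

PUBLIC_ROUTES = {"/health", "/login"}  # the only unauthenticated paths

def handle(path, user, routes):
    """Dispatch `path`; the auth check lives here, not in each handler."""
    if path not in routes:
        return 404, "not found"
    if path not in PUBLIC_ROUTES and user is None:
        return 401, "unauthorized"  # a forgotten route fails closed
    return 200, routes[path](user)

routes = {
    "/health": lambda user: "ok",
    "/orders": lambda user: f"orders for {user}",
}
```

Any route added to `routes` without also being added to `PUBLIC_ROUTES` is authenticated by default, which inverts the failure mode of a per-route middleware chain.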

When I flag these in review, the response is almost never "I didn't know that was a problem." It's usually "yeah, I was going to fix that" or "I knew that was sketchy but I wasn't sure how to handle it." They knew. So why did it ship?

The deadline problem is more specific than you think

Security takes time in ways that aren't evenly distributed. Writing a parameterized query instead of string concatenation takes maybe five extra minutes. Implementing proper input validation takes longer. But the real time cost is in the edge cases that security-conscious code forces you to think about.
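For concreteness, here is roughly what those five extra minutes buy, sketched with stdlib `sqlite3` (the table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# Attacker-controlled input:
name = "alice' OR '1'='1"

# Vulnerable: splicing `name` into the SQL string makes the attacker's
# quote characters part of the query, and the OR clause matches every row.
# query = f"SELECT id FROM users WHERE name = '{name}'"

# Parameterized: the driver sends the value separately from the SQL text,
# so it is matched as a literal string and never parsed as SQL.
rows = conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()
print(rows)  # the malicious string matches no user
```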

Take IDOR - insecure direct object reference. The classic version: your API endpoint is /api/orders/12345 and it returns order details. The vulnerable version just fetches the order by ID. The secure version fetches the order by ID and then checks that the authenticated user owns that order. That ownership check is one line of code. But writing it correctly means you have to think about: what's the data model for ownership? Is it a direct user_id on the order, or does it go through a merchant relationship? What if the user is an admin? What if the order is shared between users?
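The one-line difference looks like this, in a deliberately simplified sketch where `db` is a stand-in for the order store (the names are illustrative, not any real codebase's):

```python
# Hypothetical order store keyed by order ID.
db = {12345: {"user_id": 7, "total": 100}}

def get_order_vulnerable(order_id):
    # IDOR: anyone who can guess or enumerate an ID reads the order.
    return db.get(order_id)

def get_order(order_id, current_user_id):
    order = db.get(order_id)
    if order is None or order["user_id"] != current_user_id:
        # Return the same "not found" for both cases so the response
        # doesn't confirm that someone else's order ID exists.
        return None
    return order
```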

At Uzum, orders can be associated with multiple parties - the buyer, the merchant, the logistics partner. An "ownership check" isn't a simple order.user_id == current_user.id. It's a query that joins across multiple tables and has to account for different permission levels. An engineer who's under deadline pressure and hasn't thought through the full permission model will write the simple check, know it's incomplete, and ship it anyway because the alternative is stopping to design the full permission model, which is a week of work they don't have.
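A sketch of what the fuller check has to cover. The roles and relations here (buyer, merchant staff, logistics carrier staff) are assumptions for illustration, not Uzum's actual schema; in production this would be a join, not in-memory sets:

```python
# Hypothetical relations standing in for database tables.
orders = {500: {"buyer_id": 1, "merchant_id": 10, "carrier_id": 20}}
merchant_staff = {(2, 10)}  # (user_id, merchant_id) pairs
carrier_staff = {(3, 20)}   # (user_id, carrier_id) pairs

def can_view_order(user_id, order_id, is_admin=False):
    """Multi-party ownership check: buyer, merchant staff, carrier staff,
    or admin may view; everyone else is denied."""
    order = orders.get(order_id)
    if order is None:
        return False
    return (
        is_admin
        or user_id == order["buyer_id"]
        or (user_id, order["merchant_id"]) in merchant_staff
        or (user_id, order["carrier_id"]) in carrier_staff
    )
```

The naive `order.user_id == current_user.id` check is only the first disjunct; the other three branches are the week of permission-model design the deadline squeezes out.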

The deadline is concrete. The vulnerability is abstract - it might never be exploited. That's the calculation happening in the engineer's head, and it's not irrational given the incentives they're working under.

The incentive structure is backwards

Developers are rewarded for shipping features. They're not rewarded for preventing vulnerabilities. The feature that ships gets noticed in the sprint review. The SQL injection that didn't happen because someone wrote careful parameterized queries is invisible.

This isn't a developer problem - it's a system problem. When the incentives point toward shipping and away from security, you get insecure code. Not because developers don't care, but because caring about security has a cost and shipping has a reward.

Backwards incentives: the feature ships on Friday; the sprint review celebrates velocity; security review is optional; the vulnerability is discovered six months later.

Aligned incentives: security acceptance criteria are in every spec; the ownership check is part of the definition of done; finding a vulnerability in review is a win, not a failure.

The companies that do security well have changed this calculus. They make security part of the definition of done - a feature isn't shipped until it's passed a security review. They treat finding a vulnerability in code review as a win, not a failure. They make the cost of a security incident visible: not as a threat, but as context. "This endpoint processes payment data for 50,000 transactions a day. An IDOR here means any authenticated user can read any other user's payment history." That's not abstract. That's a concrete consequence that changes how an engineer thinks about the ownership check.

The knowledge gaps that actually matter

Most developers know about SQL injection and XSS. The knowledge gaps that cause real problems are more subtle.

IDOR is underappreciated because it doesn't look like a "security vulnerability" - it looks like a missing business logic check. An engineer who's thinking about security in terms of "don't concatenate SQL strings" and "escape HTML output" might not think of "verify the authenticated user owns this resource" as a security concern. It's just... correct behavior. But it's also the most common class of vulnerability I see in code review.

Race conditions in authentication flows are another one. If your "check permission, then perform action" logic isn't wrapped in a transaction or protected by a lock, you have a TOCTOU vulnerability - time of check to time of use. An attacker who sends two requests simultaneously can have both pass the check before either action commits, so each request acts on state the other has already invalidated. Preventing this requires understanding how your database handles concurrent transactions, which is not something every developer thinks about when they're writing an API endpoint.
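One common fix is to collapse check and use into a single atomic statement, so there is no gap for a second request to slip through. A minimal sketch with stdlib `sqlite3` and an invented wallet table (a production system would use its own database's equivalent, e.g. a conditional UPDATE or `SELECT ... FOR UPDATE`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wallets (user_id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO wallets VALUES (1, 100)")

def withdraw_racy(user_id, amount):
    # TOCTOU: between the SELECT (check) and the UPDATE (use), a
    # concurrent request can pass the same check and overdraw.
    (balance,) = conn.execute(
        "SELECT balance FROM wallets WHERE user_id = ?", (user_id,)
    ).fetchone()
    if balance < amount:
        return False
    conn.execute(
        "UPDATE wallets SET balance = balance - ? WHERE user_id = ?",
        (amount, user_id),
    )
    return True

def withdraw(user_id, amount):
    # Check and use in ONE statement: the WHERE clause is the permission
    # check, and it is evaluated at the same instant as the write, so two
    # concurrent withdrawals cannot both see the old balance.
    cur = conn.execute(
        "UPDATE wallets SET balance = balance - ? "
        "WHERE user_id = ? AND balance >= ?",
        (amount, user_id, amount),
    )
    return cur.rowcount == 1
```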

The knowledge transfer that actually works is code review, not training. When a senior engineer or security-focused PM looks at a PR and says "this endpoint is missing an ownership check, here's why that's a problem and here's how to fix it" - that's how knowledge transfers. A one-day security training once a year doesn't change how people write code. A comment in a PR that explains the vulnerability and the fix does.

What I can actually do as a PM

I'm not a security engineer. But I can make security part of the process in ways that matter.

I include security acceptance criteria in specs for sensitive features. Not generic "the feature should be secure" - specific criteria. "The endpoint must verify that the authenticated user owns the resource before returning data. The verification must happen server-side, not client-side." That gives engineers something concrete to implement and gives reviewers something concrete to check.

💡 Specific security acceptance criteria in the spec are worth more than a security training. "The endpoint must verify ownership server-side before returning data" is something an engineer can implement and a reviewer can check. "The feature should be secure" is not.

I push back when a feature that touches sensitive data ships without a security review. Not because I can do the review myself, but because I can make it a requirement before the ticket moves to done.

I also try to make the cost of vulnerabilities concrete when I'm talking to engineers. Not "this could be a security problem" but "if this endpoint is vulnerable to IDOR, any of our 2 million registered users can read any other user's order history, including their delivery address and payment method details." That's a different conversation than "we should be more careful about security."

The developers who ship vulnerable code aren't bad developers. They're developers working in a system that doesn't make security easy or rewarded. Changing the system is a product and organizational problem as much as a technical one. That's something a PM can actually influence.