Most PMs treat security as someone else's problem. The security team handles it. Engineers implement it. The PM writes the feature and moves on.
I used to think this way. Then I spent enough time reading CVEs and doing security work on the side to understand something uncomfortable: a lot of vulnerabilities aren't engineering failures. They're product failures. The PM made a decision that created the attack surface, usually without realizing it.
What you're actually doing when you add a feature
Every feature you ship expands your attack surface. That's not a reason not to ship - it's a reason to understand what you're adding.
An attack surface is everything an attacker can interact with: every input field, every API endpoint, every file upload, every authentication flow, every third-party integration. When you add a "seller can upload product images" feature, you're now accepting arbitrary binary data from untrusted users. When you add a "share cart via link" feature, you're creating a new unauthenticated endpoint that needs to be scoped correctly or you'll leak order data. When you add a webhook integration with a payment processor, you need to verify the webhook signature or anyone can POST to your endpoint and fake payment confirmations.
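The webhook case is the easiest to make concrete. Here's a minimal sketch of signature verification - the secret value and header format are hypothetical, since every payment processor defines its own, but the shape is always the same: recompute an HMAC over the raw request body and compare it to what the sender claims.

```python
import hashlib
import hmac

# Hypothetical shared secret, issued by the payment processor's dashboard.
WEBHOOK_SECRET = b"whsec_example_secret"

def verify_webhook(payload: bytes, signature_header: str) -> bool:
    """Recompute the HMAC over the raw body and compare it to the header.

    If this check is missing, anyone who knows the URL can POST a fake
    "payment succeeded" event to your endpoint.
    """
    expected = hmac.new(WEBHOOK_SECRET, payload, hashlib.sha256).hexdigest()
    # compare_digest does a constant-time comparison, so an attacker
    # can't learn the signature byte-by-byte from response timing.
    return hmac.compare_digest(expected, signature_header)
```

The point isn't that the PM writes this function - it's that "verify the webhook signature" is a line item the spec either contains or doesn't.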
These aren't edge cases. They're the predictable consequences of the features you're building.
The question I ask now when reviewing a spec: what new attack surface does this create, and what do we need to do about it? Not as a blocker - as a checklist item that gets addressed before the feature ships, not after.
The decisions that actually create vulnerabilities
CORS misconfiguration is a good example of a PM-level decision creating a security problem. CORS - Cross-Origin Resource Sharing - is the browser mechanism that controls which domains can make requests to your API. If you configure it too permissively (say, Access-Control-Allow-Origin: * on an authenticated endpoint), you've just allowed any website to make requests to your API on behalf of your logged-in users. That's not an engineering mistake in isolation. It's often the result of a PM saying "we need this API to work from our partner's domain" without specifying which partner, which endpoints, and what data they should have access to.
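The fix for "we need this API to work from our partner's domain" is an explicit allowlist rather than a wildcard. A minimal sketch, with hypothetical domain names - real frameworks have middleware for this, but the decision it encodes is the product decision:

```python
# Hypothetical allowlist: only the specific partner domains we agreed to.
# The spec names these; "Access-Control-Allow-Origin: *" names nobody.
ALLOWED_ORIGINS = {
    "https://partner.example.com",
    "https://app.ourproduct.example",
}

def cors_headers(request_origin: str) -> dict:
    """Echo the origin back only if it's on the allowlist.

    Returning no CORS header at all means the browser blocks the
    cross-origin response - the safe default for unknown origins.
    """
    if request_origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": request_origin,
            # Tell caches the response varies by origin, so one partner's
            # header is never served to a different site.
            "Vary": "Origin",
        }
    return {}
```

Which endpoints the partner can reach, and with what data, still has to be scoped separately - CORS only controls which sites the browser lets talk to you at all.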
Authentication vs authorization is another one. Authentication is "who are you?" Authorization is "what are you allowed to do?" They're different problems and they fail in different ways. JWTs - the tokens we use to authenticate API requests - can be misconfigured in ways that let users forge tokens or access other users' data. But the authorization layer is where product decisions live: which user roles can see which data, which actions require which permissions, what happens when a seller tries to access another seller's order data. I've seen authorization bugs that were directly traceable to a spec that said "sellers can view order details" without specifying "only their own orders."
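The gap between "sellers can view order details" and "only their own orders" is often a single comparison. A sketch, with hypothetical field names - authentication has already told us who the seller is; this is the authorization step the vague spec leaves out:

```python
class Forbidden(Exception):
    """Raised when an authenticated user requests data they don't own."""

def get_order(order: dict, requesting_seller_id: str) -> dict:
    """Ownership check: the request is authenticated, but is it authorized?

    Without this comparison, any logged-in seller who can guess or
    enumerate order IDs can read every other seller's order data.
    """
    if order["seller_id"] != requesting_seller_id:
        raise Forbidden("sellers may only access their own orders")
    return order
```

One line of code, but it only exists if someone decided it should - which is why "only their own orders" belongs in the spec, not in an engineer's head.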
Rate limiting is a product decision too. If you don't rate-limit your login endpoint, you're allowing brute-force attacks. If you don't rate-limit your OTP endpoint, you're allowing OTP enumeration. These aren't things engineers add automatically - they need to be in the spec, with specific limits, because the right limits depend on the product context. A login endpoint for a consumer app has different rate limit requirements than an API endpoint for a B2B integration.
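What "specific limits, in the spec" looks like in practice: a number and a window, per key. A minimal sliding-window sketch - the limits and keys here are hypothetical placeholders for the ones the product context dictates:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """At most `limit` attempts per `window` seconds, tracked per key
    (e.g. per IP or per account for a login endpoint)."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.attempts = defaultdict(deque)  # key -> timestamps of attempts

    def allow(self, key: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.attempts[key]
        # Drop attempts that have aged out of the window.
        while q and q[0] <= now - self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: reject without recording
        q.append(now)
        return True
```

The spec supplies the numbers: maybe 5 login attempts per minute per account for a consumer app, something far looser for a B2B API key. The mechanism is generic; the limits are product decisions.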
The third-party trust problem
Every third-party integration is a trust decision. You're saying "we trust this service with our users' data" or "we trust this service to be available when our users need it." At Uzum's scale, those commitments have real consequences.
When we integrate with a payment provider, we're trusting them with transaction data. When we integrate with a logistics partner's API, we're trusting their uptime for our order status updates. When we add an analytics SDK to our mobile app, we're trusting that SDK with everything the app can see - which, depending on permissions, might be a lot.
The questions aren't hard. What data does this service have access to? What happens if it's compromised? What's our fallback if it goes down? Is the data transfer encrypted in transit? Do we have a data processing agreement? Most of the time, nobody asks these questions because nobody thinks it's their job. It's the PM's job, because the PM is the one deciding to add the integration.
What I actually put in specs now
I include security acceptance criteria in every spec that touches authentication, user data, or external integrations. Not vague stuff like "the feature should be secure" - specific requirements: all user inputs validated server-side (not just client-side), API endpoints require authentication unless explicitly designed to be public, file uploads are type-checked and size-limited before storage, webhook endpoints verify signatures before processing.
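To show how concrete those criteria can get, here's a sketch of the upload check - the size limit and allowed types are hypothetical numbers a spec would pin down, and the key move is checking magic bytes server-side rather than trusting the filename or the client's Content-Type:

```python
# Hypothetical product decisions the spec should state explicitly:
MAX_UPLOAD_BYTES = 5 * 1024 * 1024  # 5 MB cap for product images

# File types identified by their leading bytes, never by extension.
MAGIC_BYTES = {
    b"\xff\xd8\xff": "image/jpeg",
    b"\x89PNG\r\n\x1a\n": "image/png",
}

def validate_image_upload(data: bytes) -> str:
    """Return the detected content type, or raise before anything is stored.

    A renamed executable with a .jpg extension fails this check because
    its first bytes don't match any allowed image format.
    """
    if len(data) > MAX_UPLOAD_BYTES:
        raise ValueError("upload exceeds size limit")
    for magic, content_type in MAGIC_BYTES.items():
        if data.startswith(magic):
            return content_type
    raise ValueError("not an allowed image type")
```

Acceptance criteria written at this level of specificity are testable, which is the whole point - "type-checked and size-limited" either passes or it doesn't.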
Before any significant feature ships, I do a quick threat model with the engineering lead. Not a formal exercise - a 30-minute conversation where we ask: who might try to abuse this? What's the worst realistic thing they could do? What would we need to prevent it? This catches obvious problems before they become expensive ones. It also signals to engineers that security is being taken seriously, which makes them more likely to raise concerns when they see them.
The PM isn't writing the security code. But the decisions that create security problems are product decisions. Owning those decisions means owning the security implications that come with them.