"As a user, I want to filter my order history by date so that I can find past orders more easily."
I've written hundreds of these. They follow the format, they're technically correct, and they communicate almost nothing useful to the people who need to build the thing.
The format is the problem. Not the idea behind it - the format itself.
What the format actually captures
"As a [user], I want [feature] so that [benefit]" sounds like it's capturing the why. It usually doesn't. "So that I can find past orders more easily" is not a why. It's a restatement of the feature in slightly different words. The real why - what are users actually trying to do when they look up an old order? Are they checking a delivery address to reuse it? Disputing a charge? Looking for a product they want to reorder? - is completely absent.
This matters because the implementation changes depending on the answer. If users are mostly looking up orders to reorder items, the right solution might not be date filtering at all - it might be a "reorder" button on the order detail page. If they're disputing charges, they need to see payment details prominently. If they're checking delivery addresses, they need address information surfaced quickly. A date filter solves all of these problems equally poorly.
Compare the two ways of writing the same requirement. The user story: "As a user, I want to filter my order history by date so that I can find past orders more easily."
What was actually going on: users who want to reorder a product scroll through 50+ orders on mobile for 30-60 seconds. Support tickets show they're looking for a specific product, not a specific order. The primary use case is product discovery, not order lookup.
The user story format gives you the feature request without the context that would tell you whether the feature request is even the right solution.
Why engineers end up filling in the gaps
When a ticket says "add date filtering to order history," an engineer has to make a dozen decisions that the ticket doesn't address. What's the default date range? Does filtering happen client-side or does it hit the database? If it hits the database, is there an index on the order date column, or are we about to do a full table scan on a table with 40 million rows? What's the empty state when no orders match? Does the filter persist across sessions, and if so, where does that state live?
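The index question isn't hypothetical - it's the difference between a seek and a full scan. Here's a minimal sketch using SQLite's query planner; the table and column names are illustrative assumptions, not a real schema:

```python
import sqlite3

# Hypothetical orders table - names are assumptions for illustration.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INT, created_at TEXT)"
)

query = "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE created_at >= '2024-01-01'"

# Without an index on the date column, the planner scans the whole table.
plan_before = conn.execute(query).fetchall()
print(plan_before[0][3])  # e.g. "SCAN orders" - a full table scan

# With an index, the planner can seek directly into the date range.
conn.execute("CREATE INDEX idx_orders_created_at ON orders (created_at)")
plan_after = conn.execute(query).fetchall()
print(plan_after[0][3])  # e.g. "SEARCH orders USING INDEX idx_orders_created_at"
```

On a 40-million-row table, the difference between those two plans is the difference between milliseconds and a timeout - and nothing in "add date filtering" tells the engineer which one they're about to ship.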
A good engineer will ask some of these questions. But they'll also make judgment calls on the ones they don't ask about, because asking about everything would mean the ticket never gets started. Those judgment calls are often wrong - not because the engineer is bad, but because they don't have the context to make them correctly.
We had an engineer spend a week building a batch job that ran nightly to sync order statuses when what we actually needed was a real-time webhook handler. The spec said "sync order status." I meant immediately on status change. He reasonably read it as "keep them in sync." A nightly batch job does keep them in sync. It also means a customer's order shows as "processing" for 23 hours after it shipped. The user story format would have captured "as a user, I want to see my current order status" and missed the entire question of latency requirements.
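The two readings diverge sharply in code. A minimal sketch of the webhook reading, with in-memory stand-ins for the order store (all names here are hypothetical, not our actual services):

```python
from datetime import datetime, timezone

# Hypothetical in-memory order store - a stand-in for illustration.
local_orders = {42: {"status": "processing", "updated_at": None}}

def handle_status_webhook(payload: dict) -> None:
    """The "sync immediately" reading: apply the change the moment
    the fulfilment system tells us about it."""
    order = local_orders[payload["order_id"]]
    order["status"] = payload["status"]
    order["updated_at"] = datetime.now(timezone.utc)

# The nightly-batch reading would instead poll on a schedule, so an
# order shipped at 00:01 stays "processing" until the next run - the
# latency requirement the spec never stated.
handle_status_webhook({"order_id": 42, "status": "shipped"})
print(local_orders[42]["status"])  # "shipped", immediately
```

Both implementations satisfy "sync order status." Only the problem statement - customers refreshing the page to see whether their package went out today - tells you which one is right.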
What I write instead
Problem statements and acceptance criteria. No template, no format - just a clear description of the problem and a specific list of what done looks like.
The problem statement is a paragraph or two. It describes the user, the situation they're in, what they're trying to accomplish, and why the current experience fails them. For the order history example: "Users who want to reorder a product they've bought before currently have to scroll through their entire order history to find it. On mobile, with 50+ orders, this takes 30-60 seconds of scrolling. Support tickets show this is the most common reason users contact us about order history - they're not looking for a specific order, they're looking for a specific product they want to buy again."
That tells an engineer something. It tells them the primary use case is product discovery, not order lookup. It tells them mobile is the critical path. It tells them the volume of orders matters. Now when they're making implementation decisions, they have context.
The acceptance criteria are specific and testable. Not "users can filter by date" but: "Users can search order history by product name. Search is case-insensitive and matches partial product names. Results update as the user types, with a 300ms debounce to avoid hammering the search endpoint on every keystroke. If no results match, the empty state shows the search term and a 'clear search' link. Search state does not persist across sessions."
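Criteria that specific translate almost directly into code and tests. A sketch of the matching behavior, assuming a simple order shape (the field names are illustrative; the 300ms debounce and empty-state UI live client-side and aren't shown):

```python
def search_orders(orders: list[dict], query: str) -> list[dict]:
    """Case-insensitive, partial-match product search over order history.
    A sketch of the acceptance criteria above; order shape is assumed."""
    q = query.strip().lower()
    if not q:
        return orders  # an empty query shows everything, not nothing
    return [o for o in orders if q in o["product_name"].lower()]

orders = [
    {"product_name": "Espresso Beans 1kg"},
    {"product_name": "Pour-Over Kettle"},
    {"product_name": "espresso cups (set of 4)"},
]

# Case-insensitive partial match: finds both espresso items.
results = search_orders(orders, "ESPresso")
print([o["product_name"] for o in results])
```

A QA person can turn each criterion into a test case like the one above without a single clarifying question - that's the bar the criteria should clear.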
That's something an engineer can build and a QA person can test without asking me five follow-up questions.
The pushback I get
The most common pushback is that user stories are a standard format and everyone knows how to read them. True. It's also not a good reason to use a format that doesn't work.
The second pushback is that problem statements take longer to write. Also true. But a user story that doesn't capture the real requirement takes longer to implement, because the engineer has to fill in the gaps themselves, often incorrectly. The time you save writing a thin user story you spend in clarification meetings, rework, and support tickets about the thing you built wrong.
The third pushback is that user stories keep the focus on the user. I agree that's important. My problem statements always start with a description of the user and their context - I just don't constrain it to a single sentence in a rigid format. A paragraph that describes a real person in a real situation does more to keep the team focused on the user than "as a user, I want" ever did.
What I kept from user stories is the discipline of asking "why does this matter to the user?" That question is important. The format asks it but rarely answers it well. A good problem statement answers it in enough detail that the answer actually shapes the implementation.