Andrew Ng recently pointed out that PMs are becoming a bottleneck1 because engineers are enjoying a massive productivity boost from AI. Y Combinator noticed the same problem, which is why their 2026 Request for Startups asked for a "Cursor for Product Managers."2
If we speed up building without speeding up discovery, then we'll be collectively shipping more stuff people don't want.
Discovery is inevitable, but it only helps you avoid risk if you do it before shipping
The planning process is still alive and well, but it's now better accomplished with an agent than by staring at a ticket screen. Planning with an agent is the only path fast enough to match the pace of AI delivery. You'd be crazy not to use this newfound power. I spend more time iterating and doing ad-hoc discovery research with an agent than I do actually implementing now. By a lot.
The problem is that discovery work done on the fly has none of the structure that helps us avoid bias. It isn't cited or recorded in a way that can withstand scrutiny, and it's fraught with the dangerous assumptions that we (both humans and agents) make when we approach it that way.
Every time we ship a new innovation, we inevitably discover whether it will be valuable, profitable, and usable. Product discovery is the art of doing that quickly, in advance. You might argue that AI delivery speed makes "test in production" a more practical option, and that's true to a point. But it ignores an important bias: the first idea is almost never the best one in terms of customer value, margin, or competitive differentiation. You can't ask your customers to use 17 variants of a feature you just shipped and tell you which one they'd buy!

So what actually happens is you ship an imperfect version of what could have been, and it becomes much harder to fix once v1 is in the hands of your users. What good is a hard-earned customer opportunity if your solution is dogshit? You might give up on the whole idea when you were actually on to something valuable. This is why we have product discovery in the first place: it lets us iterate and compare ideas in an evidence-driven way so we're not leaving money on the table.
The answer isn't to slow down delivery. It's to make discovery fast enough that you learn from experiments instead of from your customers and the market at large.
Your coding agent doesn't need to be persuaded
PRDs exist because human engineers need narrative context to understand intent. You write a persuasive document that argues for why something should be built and gives enough context to make good decisions along the way.
Your coding agent doesn't need to be convinced. It will enthusiastically build whatever you tell it to build. The PRD is wasted on it, and worse, the narrative format causes real problems. Ambiguity that a human engineer would resolve by asking a question, the agent resolves by guessing. Internal contradictions that a human would flag, the agent just picks one interpretation and runs with it.
What the agent needs is a declaration, not a set of instructions. If you declare the target customer, a crisp problem statement, a sharp set of ordered user stories with acceptance criteria, and the assumptions that need to be true for this all to work, the agent will surprise you with how creative it can be in the implementation.
This pattern is already familiar in software engineering: it's the difference between declarative and imperative. Declarative specs give agents room to be creative within constraints. Imperative specs turn them into a typing service and force them into mistakes they don't have the context to avoid.
Describe what needs to be true. Let the agent figure out how.
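To make the contrast concrete, here's a sketch of what a declarative spec might look like. The shape and field names are illustrative, not any particular tool's actual format:

```json
{
  "customer": "Solo founder validating a B2B SaaS idea",
  "problem": "Interview notes pile up faster than they can be synthesized",
  "stories": [
    {
      "as_a": "founder",
      "i_want": "to upload an interview transcript",
      "so_that": "key opportunities are surfaced without manual tagging",
      "acceptance": [
        "Transcripts up to an hour long are accepted",
        "Each surfaced opportunity cites the source quote"
      ]
    }
  ],
  "assumptions": [
    "Founders actually record their customer interviews",
    "Auto-surfaced opportunities are trusted enough to act on"
  ]
}
```

Notice there's no "first do X, then do Y" anywhere in there. The spec declares who it's for, what must be true, and how we'll know it works; the agent decides the how.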
Empathy and assumptions: the missing agent context
So what goes into a declaration? I went through a long stretch of trial and error3 before landing on something simple: treat the agent a bit more like a human colleague and answer the fundamental questions they'd need answered to empathize with the user.
Who is the customer? What specific problem are we solving for them? What constraints exist? And most importantly: where are the risks? Every feature has hidden assumptions that need to be true for it to succeed. Phrase them as testable hypotheses instead of leaving them unsaid in the margins, and you've looped the agent in on the risks. Current models take a smart approach to assumptions: they treat them like requirements. When that data is present from the start, the agent steers the implementation in a way that makes them all true. If it encounters one it's concerned about, it can even raise it and help you gather some evidence on the spot.
Assumptions communicate intent, the things that might be inferred by people on the team, but that agents miss. Agents are basically living in the movie Memento4: their medium-term memory is wiped every time they wake up in the morning! Assumptions are the tattoos that keep them oriented.
The hard part isn't testing assumptions. Product people know how to do that. The hard part is seeing them in the first place. They're buried in the solution, invisible until something breaks. Little things like assuming your customer has the technical depth to use an API product, or assuming a marketplace has enough supply to satisfy demand at launch. We don't see our own bias clearly. We bring optimism when neutrality or pessimism might be wiser.
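Taking the first of those examples, here's one hedged sketch of what surfacing an assumption as a testable hypothesis might look like; the wording and fields are illustrative:

```json
{
  "assumption": "Our customers have the technical depth to use an API product",
  "hypothesis": "Most trial signups make a successful API call without contacting support",
  "evidence_for": [],
  "evidence_against": ["Interview 7: 'We don't have any engineers on staff'"]
}
```

Once the assumption is written down in this shape, it stops being invisible: there's a claim to check, and empty evidence arrays are themselves a warning sign.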
AI models are surprisingly good at seeing around these corners. Give them the customer context and a solution, and they'll pressure test your idea in a pre-mortem style. We leaned on that and built it into the Revelica story map5 playbook. You'll almost certainly see at least one assumption you weren't aware of.
Once you give the agent a declaration with assumptions called out, it can check its own work against the spec mid-build. It can query for evidence, both for and against an assumption: "what did customers say about this?" and just as importantly, "is there anything that contradicts what we're betting on?" It can flag when it's about to make a decision that contradicts a constraint. And it can log what it decided so the next session starts warm instead of cold.
What we wanted the whole time
This is what speeding up discovery actually looks like. I talk to customers, feed the transcripts into Revelica, and it synthesizes opportunities. I use those to make focused story map specs with the assumptions called out. I compare multiple ideas against each other and kill the ones where the assumptions are too risky. The best decision is a comparison, not an evaluation of a single option in isolation.
Then I hand the winner to Claude Code. The agent builds with full context. When it hits a decision point, we work together to gather evidence. Discovery and building happen in the same flow instead of in separate phases that don't talk to each other.
Teams shipping with agents are moving faster than ever. The question nobody seems to be asking is whether they're shipping faster toward the right thing, or just producing more waste at higher velocity. If 80% of features6 were rarely used when shipping was slow, what's the number now that shipping is fast and discovery is being skipped entirely?
We made the story map generator easier to try: go ahead and make one without even onboarding! Describe a feature idea, lay out the key customer context, and get a set of user stories with assumptions anchored to the moments they'd happen for your customer. That's the context your coding agent is missing. Copy or download it as JSON, possibly the only format I'm aware of that agents like more than Markdown.
If you have feedback, it goes straight to our Slack. I can't help but drop what I'm doing to read them, even the hot takes.
Randall
