AI-assisted backlog refinement: using LLMs to write better user stories

Kelly Lewandowski
Last updated 10/04/2026 · 7 min read
## Where AI adds real value in refinement
1. Expanding acceptance criteria
2. Identifying risks and dependencies
3. Splitting oversized stories
4. Drafting stories from raw inputs

## A practical workflow for AI-assisted refinement
### Prep stories before the session (10 min)

The product owner writes draft stories with basic acceptance criteria. Use the User Story Generator if starting from a rough feature description. This shouldn't take long; rough is fine.

### Run AI expansion on each story

Feed each story to an LLM with this prompt: "Given this user story and acceptance criteria, list edge cases, implicit assumptions, and missing scenarios. Also flag any potential risks or dependencies." Attach relevant context (data model, related stories, etc.).

### Review AI output as a team

Go through the AI-flagged items in refinement. Discard the noise, keep the genuine catches. The conversation is what matters, not the AI output itself.

### Estimate with fuller context

Stories that have been through AI expansion tend to surface complexity earlier. Some teams report refinement sessions running 20-30% shorter because fewer "wait, what about..." interruptions happen during estimation. Use planning poker to estimate with the full picture.
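The expansion step above is easy to script if you refine many stories per sprint. The sketch below only assembles the prompt; the function name and input fields are illustrative, and the actual LLM call is left out because it depends on your provider and SDK.

```python
def build_expansion_prompt(story: str, criteria: list[str], context: str = "") -> str:
    """Assemble the refinement-expansion prompt for a single story.

    Uses the same prompt wording as the workflow above. Sending the
    result to an LLM (hosted API or local model) is up to you.
    """
    criteria_block = "\n".join(f"- {c}" for c in criteria)
    prompt = (
        "Given this user story and acceptance criteria, list edge cases, "
        "implicit assumptions, and missing scenarios. Also flag any "
        "potential risks or dependencies.\n\n"
        f"Story: {story}\n"
        f"Acceptance criteria:\n{criteria_block}\n"
    )
    if context:
        # Attach relevant context (data model, related stories, etc.)
        prompt += f"\nContext:\n{context}\n"
    return prompt


# Hypothetical example story, not from the article:
prompt = build_expansion_prompt(
    story="As a project manager, I can archive completed projects",
    criteria=["Archived projects are hidden from the default project list"],
    context="Projects table has a status enum: active, completed, archived",
)
```

Batching every story in the backlog through this before the session keeps the per-story prompt consistent, so the team reviews comparable output.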
## The pitfalls you need to watch for

## Prompting tips that actually work
| Instead of | Try |
|---|---|
| "Write a user story for search" | "Write a user story for full-text search across project names and descriptions, for a user managing 50+ projects" |
| "Generate acceptance criteria" | "Generate edge-case acceptance criteria assuming a multi-tenant system with role-based permissions" |
| "Split this epic" | "Split this epic by user workflow step, keeping each story independently deployable" |
| "What are the risks?" | "Given this data model [paste schema], what are the migration risks and cross-service dependencies?" |