I Built an AI Agent to Do My Pre-Refinement. It Turned Into a Mirror of How We Wrote Tickets.
A time-saver that accidentally became an audit of our team's process — and why the fix turned out to be a second agent, not a smarter first one.
Olexandr Uvarov · Posted on Apr 28
#agents #ai #llm #productivity

One hour. Seven hours. Same ticket, same prompt, two days apart. The agent wasn't broken. It was showing me what my team had been doing silently for years.

But I'm getting ahead of myself.

The setup

Most teams I've worked on have a stage between "product writes a ticket" and "the team estimates it" — pre-refinement, grooming, technical intake, different names for the same homework. A developer reads the ticket before the team meeting, opens the design tool, searches the repos for similar components, and leaves a technical comment: what already exists, what needs to be built, files involved, an hour estimate, open questions. Then the team meets and estimates. Without that homework, the meeting becomes the homework. Fifteen minutes stretches to forty-five.

That homework cost me about a day and a half per sprint. Not glamorous. Not particularly hard. Mostly reading, searching, context-switching, and a lot of "I swear we built something like this six months ago, where is it." A perfect thing to hand to an AI agent.

The first version

The setup was intentionally boring. Nothing exotic under the hood — an LLM with tool access to our ticket system, the design tool, and both repositories, plus a prompt that said read these sources, produce this output, post it back as a comment. That's it.

What the agent was supposed to produce, per ticket:

- What parts of the feature already exist in code
- What needs to be built from scratch
- A short plan with files involved
- An hour estimate
- Open questions

I ran it across the backlog.
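The shape of that setup can be sketched roughly as follows. This is an illustration, not the author's code: the `Ticket` dataclass, the section names, and `build_prompt` are all hypothetical stand-ins showing how the sources and the five required sections might be assembled into one instruction for the LLM.

```python
# Hypothetical sketch of the pre-refinement prompt. In the real setup the LLM
# also had tool access to the ticket system, design tool, and both repos;
# here we only model the "read these sources, produce this output" shape.
from dataclasses import dataclass

@dataclass
class Ticket:
    key: str
    title: str
    description: str

# The five sections the agent was asked to produce per ticket
SECTIONS = [
    "What parts of the feature already exist in code",
    "What needs to be built from scratch",
    "A short plan with files involved",
    "An hour estimate",
    "Open questions",
]

def build_prompt(ticket: Ticket, design_notes: str, code_hits: list[str]) -> str:
    """Assemble the single instruction sent to the LLM for one ticket."""
    hits = "\n".join(f"- {path}" for path in code_hits) or "- (no matches found)"
    return (
        f"Ticket {ticket.key}: {ticket.title}\n\n"
        f"{ticket.description}\n\n"
        f"Design notes:\n{design_notes}\n\n"
        f"Possibly related code:\n{hits}\n\n"
        "Read these sources and write a pre-refinement comment with sections:\n"
        + "\n".join(f"{i}. {s}" for i, s in enumerate(SECTIONS, 1))
    )
```

The output of `build_prompt` would then be posted back to the ticket as a comment once the LLM responds; the point is only that nothing in the pipeline is smarter than its inputs.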
Comments appeared. Plans looked reasonable. I moved on, happy to have my afternoons back. Then I started actually reading what it produced.

Discovery one: the agent was blind to what we already had

The agent kept confidently recommending we build things we'd already built. A typical shape: a ticket would describe a feature — say, a section that displays a list of items with specific formatting, calculations, and localization rules. The agent would read it, search the code, and recommend building it from scratch. Meanwhile the exact calculation lived in another component, built months ago for a different flow. The agent missed it because the component name didn't match the language of the ticket, and because we have hundreds of components with similar-sounding names.

This wasn't the agent being dumb. This was me asking it to navigate our codebase the way a new hire does on day one — by keyword search and guesswork.

Why this matters more than it sounds

Our system is split across two repositories — a frontend one, and a CMS-style one. The frontend holds the actual UI components. The CMS-style repo holds configurable blocks that reference those components, and it's where product writes tickets from — picking blocks from a list, configuring their content, wiring them into flows. The problem isn't that components are missing. They're there, both in the frontend and registered in the CMS. The problem is that there are a lot of them, built over years, and nobody remembers all of them. …
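The "keyword search and guesswork" failure mode is easy to reproduce in miniature. In this toy sketch, every component and ticket phrase is invented for illustration: the ticket describes, in product language, exactly what a component already does, but no word of the ticket appears in that component's name, so a naive search surfaces a similar-sounding component and misses the real one.

```python
# Toy model of day-one codebase navigation: match ticket words against
# component names. All names below are made up for illustration.
def keyword_search(query: str, component_names: list[str]) -> list[str]:
    """Return components whose name contains any (non-trivial) ticket word."""
    words = {w.lower() for w in query.split() if len(w) > 3}
    return [
        name for name in component_names
        if any(w in name.lower() for w in words)
    ]

components = ["OrderSummaryRows", "CartTotalsBlock", "InvoiceLineItems"]

# Suppose OrderSummaryRows already implements this ticket. The search still
# returns only the near-miss CartTotalsBlock (it shares the word "totals")
# and never finds the component that actually matters.
hits = keyword_search("itemized cost section with localized totals", components)
# hits == ["CartTotalsBlock"]
```

Nothing here is wrong with the search as code; it does exactly what it was asked. The gap is that ticket vocabulary and component vocabulary diverged years ago, which is a property of the codebase, not the agent.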
This excerpt is published under fair use for community discussion. Read the full article at DEV Community.