After Knowledge, Discipline

Anatomy of a Claude Code setup that pays for itself

Original article: DEV Community

Sinisa Kusic · Posted on Apr 28 · Originally published at ku5ic.substack.com

#claude #ai #productivity #devops

The most common reaction when I show people my Claude Code workflow is some version of: "isn't that a lot of tokens?"

It is. The flow front-loads context, plans before it implements, runs scripted checks after edits, and writes structured artifacts to disk for later steps to pick up. Compared to typing "build me a feature" into a fresh chat, it spends more. It also does the thing.

That second sentence is the one most takes on AI-assisted development skip over. The cost objection treats tokens as the only line item on the invoice. The bigger line item, by a wide margin, is the cost of correcting an agent that drifted out of scope, hallucinated an API, edited the wrong file, or confidently produced something that has to be thrown away. Once you account for that, the calculus inverts. Structure is cheaper than chaos.

This article is an anatomy of the structure I landed on. Everything described here lives in my dotfiles, public, at github.com/ku5ic/dotfiles/tree/main/claude. I will link the actual files as I go so you can read or steal whatever is useful.

The interface was already there

Before walking through the parts, it is worth saying out loud what the parts are made of, because nobody had this on their 2026 bingo card.

The interface that makes AI agents predictable and produces quality output is not a vector database. Not a fine-tuned model. Not a proprietary framework. Not an orchestration layer with a clever name. It is markdown files in a sensible folder structure.

CLAUDE.md at the repo root. A docs folder the agent reads before it touches code. Command files that encode a workflow. Rules files that encode standards. Plain text. Version controlled. Diffable. Greppable. The exact tooling we have had for three decades.

For a couple of years the industry poured capital into building new primitives for LLMs. New storage layers, new retrieval mechanisms, new agent protocols, new runtime abstractions. Most of it was solving a problem the models did not actually have.

The models were trained on open source. Open source runs on markdown and folders. READMEs, contributing guides, architecture docs, ADRs, issue templates. That is the native format of the corpus. Of course the models respond to it. Of course structure in a repo produces structure in the output. The unlock was not technical. It was noticing that the interface was already there.

Three things follow from that, and the rest of this article is mostly working out the implications.

First, the quality of your output is bounded by the quality of your written context. A thin CLAUDE.md produces thin work. A precise one produces precise work. Architecture documents, coding standards, and explicit constraints are no longer dead weight. They are executable.

Second, folder structure is a contract. When the agent can infer where things belong from the tree alone, it stops guessing. When it cannot, it invents. The same property that makes a codebase readable to a new hire makes it legible to a model.…
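To make the "markdown files in a sensible folder structure" idea concrete, here is a minimal scaffold sketch. It uses the components the article names (CLAUDE.md at the root, a docs folder, command files, rules files), but the exact paths and file contents are illustrative assumptions, not copied from the linked dotfiles repo:

```shell
# Sketch only: scaffold the kind of layout described above.
# Paths and contents are illustrative, not the actual ku5ic/dotfiles layout.
set -e

mkdir -p docs .claude/commands .claude/rules

# Root context file the agent reads first.
cat > CLAUDE.md <<'EOF'
# Project context for Claude Code
- Read docs/architecture.md before touching code.
- Coding standards live in .claude/rules/.
- Workflow commands live in .claude/commands/: plan, implement, verify.
EOF

# A docs file the agent is told to load before editing.
cat > docs/architecture.md <<'EOF'
# Architecture
Plain-text notes on module boundaries and where new code belongs.
EOF

# A rules file encoding standards as plain markdown.
cat > .claude/rules/style.md <<'EOF'
# Style rules
Prefer small, reviewed diffs. Never edit generated files.
EOF
```

Everything the scaffold produces is plain text: version controlled, diffable, greppable, exactly as the article argues.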

