Shipping 20,000 Lines in a Day: What Actually Makes That Possible
It's not about typing speed. It's about the infrastructure, the structure, and the discipline that let you move fast without breaking things.

13 commits. 176 files. 20,902 lines added. One day.
Not a hackathon. Not throwaway code. Production features: a complete chat system with streaming, a workflow builder with 11 node configuration panels, a cost analytics dashboard, execution streaming, and human-in-the-loop (HITL) approval flows.
People ask how. The answer isn't "AI writes code fast." AI is part of it. But code generation speed isn't the bottleneck. Everything else is.
The monorepo was set up. Packages had clear boundaries. Build tooling worked. CI was configured.
I didn't spend time on infrastructure that day. I spent time on features. All the boring setup work from previous weeks paid off.
Lesson: The fastest coding day is enabled by the slowest setup days. Invest in structure before you need speed.
Before writing any feature code, the data types existed:
```rust
pub struct Conversation { ... }
pub struct WorkflowConfig { ... }
pub struct CostAnalytics { ... }
pub struct ExecutionStream { ... }
```
These types were the spec. They defined what each feature needed to handle. When I (or an AI agent) started building, the question wasn't "what should this look like?" It was "implement this type."
Types as specs eliminate design decisions during implementation. You're not thinking; you're executing.
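To make that concrete, here's a sketch of what one of those spec types might look like once fleshed out. The fields are illustrative assumptions, not the project's actual schema:

```rust
// Hypothetical elaboration of the Conversation spec type.
// Field names are assumptions for illustration only.
use chrono::{DateTime, Utc};
use uuid::Uuid;

pub struct Conversation {
    pub id: Uuid,
    pub title: String,
    pub created_at: DateTime<Utc>,
    pub messages: Vec<Message>,
}

pub struct Message {
    pub id: Uuid,
    pub role: Role,      // who produced the message
    pub content: String, // streaming appends tokens here
    pub created_at: DateTime<Utc>,
}

pub enum Role {
    User,
    Assistant,
    System,
}
```

With a type like this in hand, "build the chat system" decomposes into mechanical steps: persist it, expose it over the API, render it.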
Each feature was built end-to-end:
Conversation system:
migration → DAL → handler → API hooks → React components
Workflow builder:
config types → node panels → properties panel → builder integration
Cost dashboard:
utilities → components → charts → page routes
I didn't build "all the database stuff" then "all the API stuff" then "all the frontend stuff." Each feature was complete before starting the next.
This meant each feature was testable immediately. No waiting for other layers.
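For illustration, here's a minimal sketch of the middle layers of one such slice, assuming a sqlx + axum stack. The function names, query, and schema are hypothetical:

```rust
// Hypothetical DAL function and HTTP handler for the conversation
// slice. Assumes sqlx + axum; names and schema are illustrative.
use axum::{extract::State, http::StatusCode, Json};
use sqlx::PgPool;
use uuid::Uuid;

// DAL: the only layer that touches SQL.
pub async fn list_conversations(pool: &PgPool) -> sqlx::Result<Vec<(Uuid, String)>> {
    sqlx::query_as("SELECT id, title FROM conversations ORDER BY created_at DESC")
        .fetch_all(pool)
        .await
}

// Handler: translates DAL results into an HTTP response.
pub async fn get_conversations(
    State(pool): State<PgPool>,
) -> Result<Json<Vec<(Uuid, String)>>, StatusCode> {
    list_conversations(&pool)
        .await
        .map(Json)
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)
}
```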
My /docs directory has architecture decisions, API specs, data models, and frontend architecture guides. These aren't for compliance. They're for agents.
When an AI agent started working on the streaming system, it could read:
docs/03_engine/chat_system.md — how the chat system works
docs/02_architecture/api_spec.md — API conventions
docs/07_frontend/architecture.md — frontend patterns
No guessing. No asking me. The agent had context.
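Laid out as a tree, that's the slice of /docs the agent needed:

```
docs/
├── 02_architecture/
│   └── api_spec.md        API conventions
├── 03_engine/
│   └── chat_system.md     how the chat system works
└── 07_frontend/
    └── architecture.md    frontend patterns
```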
13 commits, not one giant commit. Each commit was a logical unit.
Each commit was reviewable. Each was revertible. If commit #8 introduced a bug, I could identify and fix it without touching the other 12.
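The shape of that sequence, with commit messages invented here purely to illustrate the granularity:

```
feat(conversations): schema, DAL, streaming handler
feat(workflow): node configuration panels
feat(costs): analytics dashboard and charts
...
chore: standardize error handling across handlers
refactor: inline premature abstractions
```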
The last few commits weren't features. They were cleanup.
This is intentional. The feature commits prioritize correctness and completeness. The cleanup commits prioritize quality.
Mixing these concerns slows everything down. Writing clean code on the first pass is slower than writing working code and cleaning it up in a separate pass.
Multiple AI agents worked on independent slices simultaneously:
Agent A: Rust backend for conversations
Agent B: React components for cost dashboard
Agent C: Workflow node configuration panels
Me: Architecture decisions, reviews, wiring
These agents didn't conflict because they worked in different directories. The monorepo structure made this safe.
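A hypothetical sketch of how those boundaries might fall in a repo like this (the directory names are assumptions):

```
crates/
  conversations/          Agent A: Rust backend for conversations
apps/
  web/src/features/
    cost-dashboard/       Agent B: React components
    workflow-builder/     Agent C: node configuration panels
```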
My job wasn't writing code. It was directing traffic, reviewing output, and making decisions agents couldn't make.
Two agents adding different npm packages created merge conflicts in pnpm-lock.yaml. Resolved by batching dependency additions.
One agent used try/catch everywhere. Another used Result types. The cleanup pass standardized this, but it would have been better to specify the pattern upfront.
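For illustration, "specifying the pattern upfront" could be as simple as a shared error type that every fallible function returns. A minimal Rust sketch, with hypothetical names:

```rust
// Hypothetical convention example: fallible functions return a
// shared error type instead of each agent inventing its own style.
use std::fmt;

#[derive(Debug)]
pub enum AppError {
    NotFound(String),
    Invalid(String),
}

impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AppError::NotFound(what) => write!(f, "not found: {what}"),
            AppError::Invalid(why) => write!(f, "invalid input: {why}"),
        }
    }
}

// Every handler-facing function follows the same signature shape.
pub fn rename_conversation(title: &str) -> Result<String, AppError> {
    if title.trim().is_empty() {
        return Err(AppError::Invalid("title must not be empty".into()));
    }
    Ok(title.trim().to_string())
}
```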
The OpenAPI client was regenerated mid-day. Some agent-written code referenced old types. Quick fix, but it interrupted flow.
One agent created overly complex utility functions when simple inline code would've worked. Premature abstraction. I simplified in the cleanup pass.
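A hypothetical before-and-after of the kind of simplification that pass made:

```rust
// Before (agent-written): a generic helper with exactly one call site.
fn pluck<T, U>(items: &[T], f: impl Fn(&T) -> U) -> Vec<U> {
    items.iter().map(f).collect()
}

// After (cleanup): the same logic inlined at the call site.
// let titles: Vec<String> = panels.iter().map(|p| p.title.clone()).collect();
```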
Here's the actual process:
Morning: Plan
Morning-Afternoon: Build
Late Afternoon: Integrate
Evening: Clean
AI doesn't magically produce 20,000 lines of good code. Here's the actual breakdown:
What AI did well: implementing features against the typed specs, generating Rust handlers and React components, and following the patterns documented in /docs.
What AI didn't do: make architecture decisions, review its own output, wire features together, or choose conventions upfront. That work was mine.
AI was the multiplier. The structure, types, docs, and decisions were the base. Without the base, the multiplier is zero.
If you want to ship at this pace:
Non-negotiables:
A repo with clear package boundaries
Typed data models that act as specs
Docs that agents can read
Build tooling that works (turbo, cargo, whatever)
Helpful but not required:
Multiple agents working in parallel
What won't help:
Typing faster
A better AI model on top of a messy codebase
Longer hours
Not every day. This was a push. A concentrated effort to get a set of features across the line.
Sustainable pace is lower. But the infrastructure that enabled this day — the monorepo, the types, the docs, the cleanup discipline — those make every day more productive.
The goal isn't 20,000 lines every day. The goal is that when you need to move fast, the system supports it instead of fighting you.
High-output days aren't about working harder or having better AI tools. They're about the months of investment in structure, types, documentation, and tooling that make speed possible.
The 20,000-line day was earned by every boring day that came before it.