Feb 18, 2026 · 6 min read

Shipping 20,000 Lines in a Day: What Actually Makes That Possible

It's not about typing speed. It's about the infrastructure, the structure, and the discipline that let you move fast without breaking things.

Shipping at Scale

13 commits. 176 files. 20,902 lines added. One day.

Not a hackathon. Not throwaway code. Production features: a complete chat system with streaming, a workflow builder with 11 node configuration panels, a cost analytics dashboard, execution streaming, and human-in-the-loop (HITL) approval flows.

People ask how. The answer isn't "AI writes code fast." AI is part of it. But code generation speed isn't the bottleneck. Everything else is.

What Made It Possible

1. The Structure Was Already There

The monorepo was set up. Packages had clear boundaries. Build tooling worked. CI was configured.

I didn't spend time on infrastructure that day. I spent time on features. All the boring setup work from previous weeks paid off.

Lesson: The fastest coding day is enabled by the slowest setup days. Invest in structure before you need speed.

2. Types Were Defined First

Before writing any feature code, the data types existed:

pub struct Conversation { ... }
pub struct WorkflowConfig { ... }
pub struct CostAnalytics { ... }
pub struct ExecutionStream { ... }

These types were the spec. They defined what each feature needed to handle. When I (or an AI agent) started building, the question wasn't "what should this look like?" It was "implement this type."

Types as specs eliminate design decisions during implementation. You're not thinking; you're executing.
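
For illustration, here's roughly what one of those type-specs can look like fleshed out. The field names below are hypothetical (the post elides the real struct bodies), but the principle holds: the struct is the contract.

use serde::{Deserialize, Serialize};

// Hypothetical spec for the conversation feature: these fields are
// the complete list of what any implementation must handle.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Conversation {
    pub id: uuid::Uuid,
    pub title: String,
    pub created_at: chrono::DateTime<chrono::Utc>,
    pub messages: Vec<Message>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Message {
    pub id: uuid::Uuid,
    pub role: MessageRole,
    pub content: String,
}

// A closed set of roles: an implementer can't invent a fourth variant.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum MessageRole {
    User,
    Assistant,
    System,
}

An agent handed a struct like this plus the docs has almost no design latitude left, which is the point.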

3. Vertical Slices, Not Horizontal Layers

Each feature was built end-to-end:

Conversation system:
  migration → DAL → handler → API hooks → React components

Workflow builder:
  config types → node panels → properties panel → builder integration

Cost dashboard:
  utilities → components → charts → page routes

I didn't build "all the database stuff" then "all the API stuff" then "all the frontend stuff." Each feature was complete before starting the next.

This meant each feature was testable immediately. No waiting for other layers.
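
To make "end-to-end" concrete, here's a minimal sketch of the middle of one slice: the DAL function and the handler that wraps it. The names and framework choices (axum, sqlx) are illustrative assumptions, not necessarily what the project used.

use axum::{extract::State, http::StatusCode, Json};
use sqlx::PgPool;

// Minimal row type for this sketch (assumes sqlx's FromRow derive).
#[derive(serde::Serialize, sqlx::FromRow)]
pub struct ConversationRow {
    pub id: uuid::Uuid,
    pub title: String,
}

// DAL layer: one query, one function. The migration above it created
// the table; the handler below it never touches SQL.
pub async fn list_conversations(pool: &PgPool) -> sqlx::Result<Vec<ConversationRow>> {
    sqlx::query_as::<_, ConversationRow>(
        "SELECT id, title FROM conversations ORDER BY created_at DESC",
    )
    .fetch_all(pool)
    .await
}

// Handler layer: translate HTTP in and out, delegate to the DAL.
pub async fn get_conversations(
    State(pool): State<PgPool>,
) -> Result<Json<Vec<ConversationRow>>, StatusCode> {
    list_conversations(&pool)
        .await
        .map(Json)
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)
}

Because the slice is complete, you can hit the endpoint and see real data the moment the commit lands, instead of waiting on a separate frontend milestone.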

4. Docs as Context

My /docs directory has architecture decisions, API specs, data models, and frontend architecture guides. These aren't for compliance. They're for agents.

When an AI agent started working on the streaming system, it could read:

  • docs/03_engine/chat_system.md — how the chat system works
  • docs/02_architecture/api_spec.md — API conventions
  • docs/07_frontend/architecture.md — frontend patterns

No guessing. No asking me. The agent had context.

5. Incremental Commits

13 commits, not 1 giant commit. Each commit was a logical unit:

  1. Workspace & navigation overhaul
  2. Dashboard & cost analytics
  3. Workflow builder node configs
  4. Chat system (backend)
  5. Chat system (frontend)
  6. Execution streaming
  7. Lightweight status endpoint
  8. OTEL tracing + circuit breaker
  9. Bug fix: message threading
  10. Frontend cleanup pass
  11. Clippy cleanup
  12. Expensive operation refactor
  13. Dependency updates

Each commit was reviewable. Each was revertible. If commit #8 introduced a bug, I could identify and fix it without touching the other 12.

6. The Cleanup Discipline

The last few commits weren't features. They were cleanup:

  • Lint warnings fixed
  • Unused imports removed
  • Clippy suggestions applied
  • Expensive operations refactored into helpers
  • Dead code deleted

This is intentional. The feature commits prioritize correctness and completeness. The cleanup commits prioritize quality.

Mixing these concerns slows everything down. Writing clean code on the first pass is slower than writing working code and cleaning it up in a separate pass.
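
As a hypothetical example of the split: a feature commit might ship a validator that recompiles a regex on every call, and the cleanup commit hoists it into a helper.

use std::sync::OnceLock;

// Feature-pass version: correct, but compiles the regex on every call.
fn is_valid_node_id_naive(id: &str) -> bool {
    regex::Regex::new(r"^[a-z0-9_-]{1,64}$").unwrap().is_match(id)
}

// Cleanup-pass version: compile once, reuse from then on.
fn node_id_pattern() -> &'static regex::Regex {
    static PATTERN: OnceLock<regex::Regex> = OnceLock::new();
    PATTERN.get_or_init(|| regex::Regex::new(r"^[a-z0-9_-]{1,64}$").unwrap())
}

fn is_valid_node_id(id: &str) -> bool {
    node_id_pattern().is_match(id)
}

Both versions were the right code for their pass: the first proved the feature, the second made it cheap.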

7. Parallel Agent Work

Multiple AI agents worked on independent slices simultaneously:

Agent A: Rust backend for conversations
Agent B: React components for cost dashboard
Agent C: Workflow node configuration panels
Me: Architecture decisions, reviews, wiring

These agents didn't conflict because they worked in different directories. The monorepo structure made this safe.

My job wasn't writing code. It was directing traffic, reviewing output, and making decisions agents couldn't make.

What Almost Went Wrong

Lock File Conflicts

Two agents adding different npm packages created merge conflicts in pnpm-lock.yaml. Resolved by batching dependency additions.

Inconsistent Error Handling

One agent used try/catch everywhere. Another used Result types. The cleanup pass standardized this, but it would have been better to specify the pattern upfront.
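
Concretely, "specify the pattern upfront" can be as small as a shared error type and alias in the spec before any agent starts. A sketch of what that might look like on the Rust side, with hypothetical names:

use thiserror::Error;

// One error enum per package, defined before any feature code exists.
#[derive(Debug, Error)]
pub enum AppError {
    #[error("database error: {0}")]
    Database(#[from] sqlx::Error),
    #[error("not found: {0}")]
    NotFound(String),
}

// Every fallible function returns this; callers propagate with `?`.
pub type AppResult<T> = Result<T, AppError>;

With that in the docs, "which error style do I use?" stops being a question any agent answers on its own.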

Stale Types

The OpenAPI client was regenerated mid-day. Some agent-written code referenced old types. Quick fix, but it interrupted flow.

Over-Generated Code

One agent created overly complex utility functions when simple inline code would've worked. Premature abstraction. I simplified in the cleanup pass.
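
A hypothetical flavor of the problem: a generic wrapper with exactly one caller, versus the inline code it became after cleanup.

struct Node {
    label: String,
}

// What over-generation looks like: a single-use layer of indirection.
fn apply_transform<T, U>(items: &[T], f: impl Fn(&T) -> U) -> Vec<U> {
    items.iter().map(f).collect()
}

fn node_labels(nodes: &[Node]) -> Vec<String> {
    apply_transform(nodes, |n| n.label.clone())
}

// After cleanup: the same behavior, with no abstraction to maintain.
fn node_labels_inline(nodes: &[Node]) -> Vec<String> {
    nodes.iter().map(|n| n.label.clone()).collect()
}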

The Workflow

Here's the actual process:

Morning: Plan

  • Review what needs to ship
  • Define types and contracts for each feature
  • Write brief specs for each vertical slice

Morning-Afternoon: Build

  • Assign slices to agents (or myself)
  • Work in parallel where possible
  • Commit each feature as it's complete
  • Quick smoke test after each commit

Late Afternoon: Integrate

  • Wire features together
  • Test cross-feature interactions
  • Fix integration issues

Evening: Clean

  • Run linters, fix warnings
  • Apply clippy suggestions
  • Refactor duplicated patterns
  • Remove dead code
  • Update documentation

The Myth of "AI Writes Code"

AI doesn't magically produce 20,000 lines of good code. Here's the actual breakdown:

What AI did well:

  • Boilerplate: React components, form handling, API hooks
  • Pattern matching: "Build another node config panel like this one"
  • Repetitive tasks: 11 similar but different configuration panels
  • Cleanup: finding and fixing lint issues

What AI didn't do:

  • Architecture decisions (workflow engine design, streaming protocol choice)
  • Novel problem-solving (circuit breaker implementation specifics)
  • Cross-feature integration (wiring components together)
  • Quality judgment (deciding what's good enough vs. needs work)

AI was the multiplier. The structure, types, docs, and decisions were the base. Without the base, the multiplier is zero.

What You'd Need

If you want to ship at this pace:

Non-negotiables:

  1. A monorepo with clear package boundaries
  2. Type contracts defined before implementation
  3. Documentation agents can read
  4. A build system that works (turbo, cargo, whatever)
  5. A cleanup discipline (separate pass, not mixed with features)

Helpful but not required:

  1. Generated API clients from specs
  2. Component libraries for consistent UI
  3. Database migration tooling
  4. CI that catches problems fast

What won't help:

  1. A faster AI model (speed isn't the bottleneck)
  2. More agents (coordination cost increases)
  3. Skipping the cleanup pass (you'll pay for it tomorrow)
  4. Skipping types/docs (agents will produce garbage)

Is This Sustainable?

Not every day. This was a push. A concentrated effort to get a set of features across the line.

Sustainable pace is lower. But the infrastructure that enabled this day — the monorepo, the types, the docs, the cleanup discipline — those make every day more productive.

The goal isn't 20,000 lines every day. The goal is that when you need to move fast, the system supports it instead of fighting you.


High-output days aren't about working harder or having better AI tools. They're about the months of investment in structure, types, documentation, and tooling that make speed possible.

The 20,000-line day was earned by every boring day that came before it.
