Why a Monorepo When You're Building With AI Agents
AI agents work better when your codebase has clear boundaries. A monorepo gives them that. Here's how.

I didn't choose a monorepo because it's trendy. I chose it because I'm building with AI agents as development partners, and a monorepo is the only structure that doesn't fall apart when multiple agents work in parallel.
My stack is a polyglot monorepo: Rust backend, TypeScript frontend, shared packages, all in one repo. Turbo orchestrates builds. pnpm manages dependencies. Cargo workspaces handle the Rust side. SurrealDB is the database — and agents interact with it directly through an MCP server, so they can query schemas, inspect data, and write migrations without me acting as a middleman.
my-project/
├── core/ # Rust backend (Cargo workspace)
├── apps/web/ # TypeScript frontend (TanStack Start)
├── packages/api/ # Generated TypeScript API client
├── packages/ui/ # Component library
├── packages/store/ # State management
└── docs/ # Architecture docs
When I spin up multiple AI coding agents, each works in a different part of this tree. One handles Rust handlers. Another builds React components. A third generates API hooks from the OpenAPI spec.
This only works because the boundaries are explicit.
In a polyrepo world, each service lives in its own repository. That means:
No shared context. Agent A working on the API doesn't know what Agent B is building in the frontend. They can't see each other's types. They can't check contracts.
Integration happens later. You find mismatches during deployment, not during development. With AI agents, mismatches multiply because agents are fast and confident — even when wrong.
Coordination overhead. You need to tell each agent about changes in other repos. That's manual work. Manual work defeats the purpose of using agents.
Version drift. Each repo has its own dependency versions, its own conventions, its own truth. Agents need one truth, not five slightly different ones.
The most important thing in my monorepo: the packages/api package. It's auto-generated from the Rust backend's OpenAPI spec.
Rust backend → OpenAPI spec → Generated TypeScript client
When an agent adds a new endpoint in Rust, the types flow downstream. The frontend agent picks them up. No guessing. No stale interfaces.
This is the contract. Both agents can read it. Both agents can trust it.
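To make the contract concrete, here is a sketch of the kind of code an OpenAPI generator might emit into packages/api. The `Workflow` type, `workflowPath`, and `listWorkflows` are hypothetical stand-ins for illustration, not the project's real spec.

```typescript
// Hypothetical output of the OpenAPI generator in packages/api.
// The Workflow shape mirrors whatever the Rust handler serializes.
export interface Workflow {
  id: string;
  name: string;
  status: "draft" | "active" | "archived";
}

// Pure helper: builds the endpoint path for a single workflow.
export function workflowPath(id: string): string {
  return `/api/workflows/${encodeURIComponent(id)}`;
}

// One typed function per endpoint: the frontend can't call the backend
// with a shape the Rust side doesn't accept without a compile error.
export async function listWorkflows(baseUrl = ""): Promise<Workflow[]> {
  const res = await fetch(`${baseUrl}/api/workflows`);
  if (!res.ok) throw new Error(`GET /api/workflows failed: ${res.status}`);
  return (await res.json()) as Workflow[];
}
```

When the backend agent adds a field to the Rust struct, the regenerated interface changes, and the frontend agent gets a type error instead of a runtime surprise.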
A monorepo doesn't mean everything is tangled. It means everything is visible.
Each package has clear ownership:
core/crates/api/ — HTTP handlers
core/crates/workflow/ — Execution engine
core/crates/dal/ — Data access
packages/ui/ — Reusable components
apps/web/ — Application shell
An agent working in dal knows it shouldn't touch apps/web. The directory structure is the permission model.
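One way to make "the directory structure is the permission model" checkable rather than aspirational is a small boundary rule. The dependency map below is illustrative, inferred from the tree above, not generated from any real config.

```typescript
// Illustrative map of which packages may depend on which.
// Mirrors the repo layout; a real project would derive this from config.
const allowedDeps: Record<string, string[]> = {
  "apps/web": ["packages/api", "packages/ui", "packages/store"],
  "packages/ui": ["packages/store"],
  "packages/api": [], // generated code depends on nothing in-repo
  "packages/store": [],
};

// Returns true if `from` is allowed to import from `to`.
export function importAllowed(from: string, to: string): boolean {
  if (from === to) return true;
  return (allowedDeps[from] ?? []).includes(to);
}
```

A CI step could scan import statements against a rule like this and fail the build on violations, so an agent that wanders out of its slice is caught before review.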
On a productive day, several agents run in parallel across this tree.
These agents rarely conflict because they're working in different directories. When they do touch the same file — usually routes.rs or a shared type — the merge is trivial because the changes are additive.
turbo build builds everything. cargo build builds the backend. pnpm test runs frontend tests. One CI pipeline. One source of truth.
When an agent asks "does the build pass?" — there's one answer. Not "it passes in this repo but I haven't checked the other three."
My docs/ directory sits alongside the code:
docs/
├── 00_project/ # Decisions, status
├── 01_product/ # Vision, roadmap
├── 02_architecture/# System design, API spec
├── 03_engine/ # Workflow, intelligence
├── .../            # Other sections
└── 07_frontend/ # Frontend architecture
Agents can read these. They can understand the architecture before making changes. The docs are always up to date because they're in the same PR as the code.
Beyond docs, I configure skills for each agent — reusable instruction sets that encode project conventions, coding patterns, and domain-specific rules. A skill might tell the backend agent how DAL repositories are structured, or tell the frontend agent which component patterns to follow. Combined with the SurrealDB MCP server for direct database access, agents don't just read the codebase — they understand the full development environment.
On a typical feature, I run agents like this:
Phase 1: Backend first. One agent writes the Rust handler, DAL layer, and database migration. It produces an OpenAPI spec update.
Phase 2: Parallel frontend. Once the spec is generated, two agents can work simultaneously: one builds the React components, the other generates the typed API hooks from the spec.
Phase 3: Integration. I connect the pieces. Usually this is 10 minutes of wiring, not hours of debugging.
The monorepo makes Phase 2 possible. Without shared types, Phase 2 would be guesswork.
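As a sketch of why the integration phase stays small: the "wiring" is often just adapting the generated client's shape to the store's shape. Both interfaces below are hypothetical stand-ins for types in packages/api and packages/store.

```typescript
// Hypothetical shapes: Workflow from packages/api, StoreRow from packages/store.
interface Workflow {
  id: string;
  name: string;
}
interface StoreRow {
  key: string;
  label: string;
}

// The integration step: a pure adapter from API shape to store shape.
// If either side drifts, this stops compiling — which is the point.
export function toStoreRows(workflows: Workflow[]): StoreRow[] {
  return workflows.map((w) => ({ key: w.id, label: w.name }));
}
```

Because both sides are typed from the same repo, integration is a compile-time exercise rather than a debugging session.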
It's not perfect. Common issues:
Lock file conflicts. Multiple agents adding dependencies can conflict in pnpm-lock.yaml. Solution: let one agent handle dependency additions.
Build cache invalidation. Turbo's caching is aggressive. Sometimes an agent's change doesn't trigger the right rebuild. Solution: turbo build --force when things seem stale.
Context limits. In a monorepo, the full codebase doesn't fit in an agent's context window. Solution: good docs and clear package boundaries so agents only need to read their slice.
Merge conflicts in barrel files. Files like mod.rs or index.ts that re-export everything get touched by every agent. Solution: accept this friction; it's minor compared to the benefits.
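The context-limit mitigation above can be sketched as a slice selector: given the repo's file paths, keep only the agent's own package plus the docs. The heuristic is illustrative, not a real tool.

```typescript
// Sketch: select the "slice" one agent needs — its own package plus docs/.
// A real version would also pull in declared in-repo dependencies.
export function agentSlice(allPaths: string[], pkg: string): string[] {
  return allPaths.filter(
    (p) => p.startsWith(`${pkg}/`) || p.startsWith("docs/"),
  );
}
```

Feeding an agent its slice instead of the whole tree keeps its context focused on the boundaries it's allowed to cross.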
From a recent sprint working with AI agents:
| Metric | Monorepo | Previous Polyrepo |
|---|---|---|
| Integration bugs | ~2 per feature | ~8 per feature |
| Time to first working PR | Hours | Days |
| Agent rework rate | ~10% | ~35% |
| Cross-boundary type errors | Near zero | Constant |
The biggest win isn't speed. It's confidence. When agents work in a monorepo with shared types, the output is correct more often.
A monorepo isn't magic, and it isn't the right call for every project.
A messy monorepo is worse than clean polyrepos. The structure has to earn its place.
Monorepos aren't about convenience. They're about creating an environment where multiple workers — human or AI — can build in parallel without stepping on each other. The shared types are the contract. The package boundaries are the guardrails. The single build system is the truth.
If you're building with AI agents, your repo structure isn't just an engineering decision. It's an agent productivity decision.
In a future post, I'll walk through the concrete setup: how to structure a monorepo from scratch for AI-driven development — the directory layout, the skills, the MCP servers, the docs conventions, and all the configuration that makes agents productive from day one.