Polyglot Monorepos: When Your Backend and Frontend Speak Different Languages
Rust and TypeScript in one repo. Different languages, one source of truth. Here's how to make it work.

My backend is Rust. My frontend is TypeScript. They live in the same repository.
People ask: "Isn't that complicated?" Yes. But the alternative — keeping them in separate repos — is worse.
Why two languages? Because different problems need different tools.
Rust for the backend because I need predictable performance, true concurrency, and memory safety for long-running workflow executions. (I wrote about this in *Why I Chose Rust for AI Infrastructure*.)
TypeScript for the frontend because React's ecosystem is unmatched for building complex UIs. TanStack Start, React Flow for visual workflow builders, Framer Motion for interactions. No Rust framework comes close for web UIs.
The question isn't "should both be the same language?" The question is "how do you keep them in sync?"
The key to making a polyglot monorepo work is a contract between languages. Mine is OpenAPI.
Rust backend (utoipa) → OpenAPI spec → TypeScript client (generated)
The Rust backend uses utoipa to generate an OpenAPI specification from the actual handler types. Not hand-written. Generated from code.
The TypeScript client is generated from that spec. Types, request functions, React Query hooks — all auto-generated.
```bash
make gen-api
```
One command. Rust types become TypeScript types. The contract is always accurate.
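To show what the generated client feels like in practice, here's a minimal sketch of wrapping a generated request function in a React Query hook. The package name `@athena/api` and the `getAgent` function are assumptions for illustration; the exact names depend on the generator's output.

```typescript
// Hypothetical names: "@athena/api" stands in for the generated package in
// packages/api, `getAgent` for a generated request function, and `Agent`
// for the type generated from the Rust struct.
import { useQuery } from "@tanstack/react-query";
import { getAgent, type Agent } from "@athena/api";

export function useAgent(id: string) {
  return useQuery<Agent>({
    queryKey: ["agents", id],
    // The argument and return types are checked against the OpenAPI spec,
    // so a renamed path parameter fails at compile time, not at runtime.
    queryFn: () => getAgent(id),
  });
}
```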
The layout:

```
athena/
├── core/                     # Rust (Cargo workspace)
│   ├── Cargo.toml
│   ├── crates/
│   │   ├── athena-api/       # HTTP handlers (Axum)
│   │   ├── athena-workflow/  # Execution engine
│   │   ├── athena-dal/       # Data access
│   │   └── athena-common/    # Shared types
│   └── Makefile
│
├── apps/
│   └── web/                  # TypeScript (TanStack Start)
│       ├── package.json
│       └── src/
│
├── packages/
│   ├── api/                  # Generated TypeScript client
│   ├── ui/                   # Component library
│   ├── store/                # State management
│   └── tokens/               # Design tokens
│
├── turbo.json                # Turbo config
├── pnpm-workspace.yaml       # pnpm workspaces
└── Makefile                  # Orchestration
```
Two build systems coexist: Cargo for the Rust workspace, and Turborepo with pnpm workspaces for the TypeScript packages. A root Makefile orchestrates both.
The Rust types are the source of truth:
```rust
#[derive(Serialize, Deserialize, ToSchema)]
pub struct Agent {
    pub id: AgentId,
    pub name: String,
    pub workspace_id: WorkspaceId,
    pub capabilities: Vec<Capability>,
    pub status: AgentStatus,
}
```
This becomes part of the OpenAPI spec via utoipa. The generated TypeScript:
```typescript
interface Agent {
  id: string;
  name: string;
  workspace_id: string;
  capabilities: Capability[];
  status: AgentStatus;
}
```
If I add a field in Rust and forget to update the frontend, the TypeScript compiler catches it. If I remove a field, the compiler catches it. The types are the safety net.
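To make that concrete, here's a hypothetical helper (assuming `AgentStatus` is generated as a string union) that would break at compile time the moment the Rust struct drifts:

```typescript
import type { Agent } from "@athena/api"; // regenerated by `make gen-api`

// Hypothetical helper. If `status` were removed from the Rust struct,
// regeneration would drop it from the interface and this function would
// fail to compile: "Property 'status' does not exist on type 'Agent'."
function statusLabel(agent: Agent): string {
  return agent.status === "active" ? "Online" : "Offline";
}
```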
The root Makefile ties it all together:

```makefile
.PHONY: dev dev-api dev-web gen-api build test

dev:
	$(MAKE) -j2 dev-api dev-web

dev-api:
	cd core && cargo watch -x run

dev-web:
	cd apps/web && pnpm dev

gen-api:
	cd core && cargo run --bin gen-openapi
	cd packages/api && pnpm generate

build:
	cd core && cargo build --release
	pnpm turbo build

test:
	cd core && cargo test
	pnpm turbo test
```
One `make dev` starts both. One `make build` builds both. One `make test` tests both.
The Makefile is the lingua franca. It doesn't care about Rust or TypeScript. It just runs commands.
Even across languages, conventions align:
| Concern | Rust | TypeScript |
|---|---|---|
| Naming | `snake_case` | `camelCase` |
| Error handling | `Result<T, E>` | TanStack Query errors |
| Validation | `schemars` + JSON Schema | Zod schemas |
| Testing | `cargo test` | `vitest` |
| Linting | `clippy` (pedantic) | `biome` |
The patterns differ. The discipline doesn't.
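As a sketch of the TypeScript side of that validation row (the schema shape is illustrative, loosely mirroring the Rust `Agent` struct):

```typescript
import { z } from "zod";

// Illustrative form schema mirroring part of the Rust `Agent` struct.
// The backend re-validates via schemars; this just fails fast in the browser.
const agentFormSchema = z.object({
  name: z.string().min(1, "Name is required"),
  workspace_id: z.string(),
  capabilities: z.array(z.string()),
});

type AgentForm = z.infer<typeof agentFormSchema>;

const result = agentFormSchema.safeParse({
  name: "",
  workspace_id: "ws_123", // hypothetical ID
  capabilities: [],
});
if (!result.success) {
  console.log(result.error.issues[0]?.message); // "Name is required"
}
```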
SurrealDB migrations live in core/migrations/. Both languages interact with the same schema.
```sql
DEFINE TABLE conversations SCHEMAFULL;
DEFINE FIELD agent_id ON conversations TYPE record<agents>;
DEFINE FIELD messages ON conversations TYPE array<object>;
DEFINE FIELD status ON conversations TYPE string
    ASSERT $value IN ['active', 'archived', 'closed'];
```
The Rust DAL reads this schema. The TypeScript client consumes the API built on top of it. One database schema, two language ecosystems.
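For illustration, this is roughly the TypeScript shape that falls out of that table once it passes through the API and the generator. The type name and serialization details are assumptions, not generator output I'm quoting:

```typescript
// Assumed generated type for the `conversations` table. SurrealDB record
// links (`record<agents>`) typically serialize as record-ID strings.
interface Conversation {
  agent_id: string;
  messages: Array<Record<string, unknown>>;
  // Mirrors the ASSERT clause in the schema definition.
  status: "active" | "archived" | "closed";
}
```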
I won't pretend this is painless.
Rust compilation is slow. TypeScript builds are fast. Running both means waiting for the slow one.
Mitigation: Cargo caching, incremental builds, and cargo watch for development. The Rust rebuild only triggers when Rust code changes.
VS Code doesn't love both languages equally at once. rust-analyzer and the TypeScript language server compete for resources.
Mitigation: Separate workspace settings per language directory. Sometimes I run two editor windows.
CI needs Rust toolchain and Node.js. Caching strategies differ. Build matrices get complex.
Mitigation: Docker build images with both toolchains. Turbo's remote caching for TypeScript. Cargo's target directory caching for Rust.
When a bug spans both languages — frontend sends wrong data, backend rejects it — you're debugging in two ecosystems.
Mitigation: The OpenAPI spec is the reference. Check the spec first. Then check which side violates it.
Not everyone knows both languages. Rust has a learning curve. TypeScript has its own quirks.
Mitigation: Package boundaries mean most developers only work in one language. Full-stack work is for senior developers and AI agents.
The obvious question. Why not all TypeScript (Node.js backend) or all Rust (Leptos/Yew frontend)?
All TypeScript: I tried this. Node.js can't handle the concurrency model I need for workflow execution. Memory usage with thousands of concurrent workflows is untenable. And runtime type erasure means production bugs that Rust's compiler would catch.
All Rust: Rust's web UI ecosystem isn't mature enough. Leptos and Yew are impressive but can't match React's component ecosystem. No React Flow equivalent. No Framer Motion equivalent. The productivity gap is real.
The polyglot approach costs complexity. It buys the best tool for each problem.
A polyglot monorepo makes sense when:

- Each side has a hard requirement the other language can't meet: predictable performance and concurrency on the backend, React's UI ecosystem on the frontend.
- You can maintain a generated, type-checked contract between the languages.
- Package boundaries let most contributors stay in one language.

It doesn't make sense when:

- One language genuinely covers both problems.
- The team can't absorb two toolchains, two caching strategies, and two sets of quirks.
Here's what makes this work especially well with AI agents: agents don't context-switch.
A Rust agent works in Rust. A TypeScript agent works in TypeScript. They don't get confused by the other language. The OpenAPI spec is the handoff point — readable by both.
In a single-language repo, an agent might mix concerns. In a polyglot repo, the language boundary enforces separation.
A polyglot monorepo is more work than a single-language setup. More tooling. More configuration. More things that can break.
But when your problems genuinely need different tools, forcing everything into one language is the more expensive choice. The contract between languages — auto-generated, type-checked, always in sync — is what makes the complexity manageable.
One repo. Two languages. One truth.