The AI-Driven Monorepo: A Technical Specification
This document outlines the real, practical, senior-engineer approach to building a robust, AI-driven monorepo. It details the "why" of the stack, the "how" of the structure, and the critical "tricks" learned from experience.
This specification covers:
- The "Holy Trinity" for AI: Why TypeScript + Zod + RPC is the core.
- Monorepo Structure: How you actually build it (with Moon).
- The Core Benefits: What you gain from this stack.
- The "Tricks": The critical, non-obvious best practices.
- The End-to-End Stack: The full picture, from client (TanStack Query) to deployment (Deno Deploy).
- The AI Workflow: The optimal way to instruct an AI to make changes.
1. The "Holy Trinity" for AI-Integrated Software
The combination of TypeScript + Zod + RPC is the foundation for AI-integrated software for three practical reasons:
- AI Needs Structure (TypeScript):
Large AI pipelines fall apart when data types drift. TypeScript forces every piece of data to have a shape, giving the AI a clear, labelled blueprint.
- AI Outputs Can Be Wrong (Zod):
Zod is the "AI bouncer." You ask an AI to produce structured data (like JSON), and Zod validates it at runtime. This prevents "hallucinated" structures from breaking your system.
- AI Systems Are Distributed (RPC):
Your architecture is a cluster of cooperating mini-services. RPC turns them into one unified organism, allowing the AI to call a database worker or run an analysis with a simple, typed function call: await api.runAnalysis({ emailId, rules }).
Putting it together:
- TypeScript ensures everything is well-defined.
- Zod ensures everything is well-behaved.
- RPC ensures everything is connected.
2. How to Structure It: The Monorepo
2.1 The Monorepo "Conductor" (Build System)
You don't just put folders together; you use a high-performance build system like Moon, Turborepo, Nx, or Rush to manage the monorepo.
- Why? Because it understands the "dependency graph" of your system. It knows frontend-admin depends on core and builds core first.
- Remote Caching: It can save build/test artifacts to a remote cache. If your CI pipeline has already built that exact code, your local machine downloads the artifact instead of re-building, saving immense time.
- Parallel Execution: It runs tasks (like build, test, lint) in parallel across all your packages.
2.2 The Folder Structure (Managed by Build System)
The core idea is a shared "schema + contract" layer that every part of the system imports.
/apps
/frontend-web (Next.js/React App)
/frontend-admin (Next.js/React App)
/backend-api (Deno Server)
/workers-ai (Deno)
/packages
/core
/schemas (All Zod schemas)
/types (All TS types, inferred from Zod)
/rpc (oRPC router definitions and type signatures)
/db (DrizzleORM table definitions)
/domain (Shared business logic, enums, constants)
/ui (Shared React components)
3. The Core Benefits of This Stack
- A. Zero Drift:
When the backend changes a schema -> the front-end breaks at compile time. When the front-end sends wrong data -> Zod rejects it at runtime. Nothing drifts.
- B. A Perfect "Map" for Each Front-End:
Every front-end has the same callable RPC functions with the same types. No syncing docs.
- C. Superior Developer Experience (TanStack Query):
You don't just get a typed function (api.user.update). You get a fully typed hook (api.user.getDashboard.useQuery()) that automatically handles caching, loading states, error states, and optimistic updates. This is the "other half" of the RPC client.
- D. Safe Full-Stack Refactors:
Change a field name in /packages/core/schemas and TypeScript instantly shows you every place that breaks across the full stack, while Moon rebuilds only the affected packages.
4. The "Tricks": This is the Gold
These are the non-obvious rules for this specific stack.
- Trick 1: Infer ALL Types from Zod.
  Never write TypeScript types by hand. The Zod schema is the single source of truth.

  ```typescript
  // In /core/schemas
  const User = z.object({ id: z.string(), role: z.enum(["admin", "user"]) });

  // In /core/types
  type User = z.infer<typeof User>;
  ```

- Trick 2: Your Zod Schemas are the Universal Language.
  The /core/schemas package is imported by the front-end, back-end, AI agents, and even your tests.
- Trick 3: RPC Routers Live in the Shared Core.
  The front-end imports the definitions (the type signatures) from /core/rpc, but not the implementation (which lives in /apps/backend-api).
- Trick 4: Use Zod as the "AI Bouncer."
  The AI -> Zod -> TypeScript -> RPC flow is your guaranteed-safe pipeline.
- Trick 5: Use Coarse-Grained RPC Calls.
  - Bad: getUser(), then getUserPermissions(), then getUserNotifications()
  - Good: getUserDashboard()

  Give the front-end everything it needs for a view in one call.
- Trick 6: Use Zod for Permissions (The AuthZ Layer).
  Don't just validate shape; validate permission. Create a schema for user permissions in /core and use it as RPC middleware.

  ```typescript
  // In /core/schemas/auth.ts
  const UserPermissions = z.object({ canDeleteUsers: z.boolean() });

  // In your oRPC router (middleware)
  const adminProcedure = oc.use(isAuthed).use((input, context, meta) => {
    const perms = UserPermissions.parse(context.user.permissions);
    if (!perms.canDeleteUsers) {
      throw new Error("UNAUTHORIZED");
    }
    return meta.next(input, context);
  });
  ```
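Tricks 3 and 5 can be sketched together in a few lines. The names below (`UserApi`, the dashboard payload fields) are illustrative, not from a real codebase; the point is that the contract is importable on its own, and that one coarse-grained call returns everything the view needs:

```typescript
// --- /packages/core/rpc (shared contract; any app may import this) ---
export interface UserApi {
  getUserDashboard(input: { userId: string }): Promise<{
    user: { id: string; role: "admin" | "user" };
    permissions: string[];
    notifications: string[];
  }>;
}

// --- /apps/backend-api (implementation; never imported by the front-end) ---
export const userApi: UserApi = {
  async getUserDashboard({ userId }) {
    // Placeholder data; a real handler would hit the database once
    // and assemble the whole view payload server-side.
    return {
      user: { id: userId, role: "user" },
      permissions: ["read:self"],
      notifications: [],
    };
  },
};
```

The front-end would use `import type { UserApi }` so no server code leaks into the client bundle.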
5. The "Near-Perfect" End-to-End Stack
This is the full picture, from client to cloud.
- 5.1 The Monorepo Conductor: Build System (Moon)
- The task runner that understands the dependency graph, runs tasks in parallel, and caches artifacts (see section 2.1).
- 5.2 The Frontend (Client): React + TanStack Query
- This is the consumer. TanStack Query is the client-side partner to your RPC, giving you type-safe hooks for data fetching, caching, and mutation.
- 5.3 The Runtime & API: Deno
- First-class TypeScript, no node_modules, security by default. The perfect modern runtime.
- 5.4 The API Layer: RPC (oRPC)
- The type-safe communication channel using oRPC for schema-driven, contract-first API design.
- 5.5 The Validation Layer: Zod
- The runtime "bouncer" for all data, especially from AI.
- 5.6 The Database ORM: DrizzleORM
- TypeScript-first, fully typed SQL queries.
- 5.7 The Database: Neon Postgres
- Serverless, scalable Postgres that pairs perfectly with a serverless runtime.
- 5.8 The Testing Layer: Playwright
- For End-to-End (E2E) testing. Your Playwright tests can import schemas and types from /core to generate type-safe mock data and validate API responses.
- 5.9 The Deployment Platform: Deno Deploy
- The native, serverless, edge-computing platform for Deno. It's the most frictionless way to deploy this entire Deno-based backend.
Stack Diagram
[ MONOREPO (Managed by Build System) ]
+-------------------------------------------------------------------------+
|                                                                         |
|  [ E2E TESTS (Playwright) ]                                             |
|        | (imports types/schemas)                                        |
|        v                                                                |
|  [ FRONT-END (React + TanStack Query) ]                          T      |
|        |                                                         Y      |
|        v (calls type-safe RPC hooks)                             P      |
|  [ RPC LAYER (oRPC) ]                                            E      |
|        |                                                         S      |
|        v (Zod runtime validation)                                C      |
|        v (Zod PermissionSchema middleware)                       R      |
|  [ BACKEND-API (Deno) ]                                          I      |
|        |                                                         P      |
|        v (type-safe queries)                                     T      |
|  [ ORM (Drizzle) ]                                                      |
|        |                                                                |
|        v (SQL)                                                          |
|  [ DATABASE (Neon Postgres) ]                                           |
|                                                                         |
+-------------------------------------------------------------------------+
        |
        v
[ DEPLOYMENT (Deno Deploy) ]
6. The Optimal Workflow: Instructing an AI Editor
Given this stack, there is a correct way to instruct an AI to make changes. You must force it to be "schema-first."
1. Why Start with Schemas?
- It's the "Contract": The schemas define what the code expects. The AI doesn't have to guess.
- It Prevents Silent Errors: If the AI only changes the function but ignores Zod, you risk runtime validation failures.
- It Aligns the Full Stack: A Zod schema change will force a Drizzle table change, which will force an RPC signature change, which will force a TanStack Query hook update.
2. How to Instruct the AI (The Playbook)
Never say: "Just change code somewhere in the backend."
Always say: "Here is the new type/schema. Make all code consistent with this."
Practical Example: Adding emailVerified: boolean to the User schema.
- You: "Update the User schema in /packages/core/schemas to add emailVerified: z.boolean(). Then, propagate this change everywhere."
- AI (Step 1): Updates the Zod schema in /packages/core/schemas/user.ts.
- AI (Step 2): Updates the Drizzle table definition in /packages/core/db/user.ts and generates a new migration file.
- AI (Step 3): Updates any RPC function signatures in /packages/core/rpc that return a User type.
- AI (Step 4): Updates the backend implementation (e.g., the updateUser function) to handle the new field.
- AI (Step 5): Updates any front-end calls (e.g., api.user.get.useQuery) in /apps/frontend-admin that are now broken by the type change.
- AI (Step 6): Generates new Zod validation tests.