Context-Driven Development

Ship fast. Ship right.


AI coding agents guess at requirements, pick arbitrary approaches, and skip verification. Draft makes them execute pre-approved work — clear specifications, phased plans, TDD enforcement, and 3-stage code review. Free & open source. Works with Claude Code, Copilot & Gemini.


The Core Problem

AI coding tools ship fast but create chaos. Without structure, they make assumptions, choose arbitrary approaches, and produce code that doesn't fit your architecture. The context window fills up, hallucinations increase, chat history is unsearchable, and every new session starts from zero.

🎯

Assumption-Driven

Guesses at requirements instead of asking clarifying questions

🔀

Arbitrary Choices

Picks random technical approaches without considering your stack

🧩

Poor Fit

Produces code that doesn't match existing patterns or conventions

⏭️

No Checkpoints

Skips verification and claims completion without proof

The Solution: Context-Driven Development

Draft treats context as a managed artifact alongside code. File-based persistent memory replaces ephemeral chat. Your repository becomes the single source of truth.

Chat-Driven
Draft Approach
Context in ephemeral chat
File-based persistent memory
No version history
Git-tracked specs with diffs and blame
Loads entire project every session
Scoped context per track
Unsearchable conversations
Grep-able specs and plans
Invisible to teammates
PR-reviewable planning artifacts

When to Use

Good Fit
🏗️

Design Decisions

Features requiring architecture choices, API design, or data model decisions

👥

Team Review

Work that will be reviewed by others — specs are faster to review than code

📦

Multi-Step Work

Complex implementations spanning multiple files, modules, or phases

5-Step Quickstart
quickstart
# 1. Initialize project context
/draft:init

# 2. Create a feature track
/draft:new-track "Add user authentication"

# 3. Start implementing
/draft:implement

# 4. Measure test coverage
/draft:coverage

# 5. Run code review
/draft:review
Economics: Why Specs Win

Writing specs feels slower. It isn't. Overhead is constant (~20% for simple tasks), but savings scale with complexity, team size, and criticality.

Scenario
Time Comparison
Simple feature
1h → 1.2h (+20%)
Feature with ambiguity
3h + rework → 2h (33% savings)
Feature requiring team input
5h + meetings → 2.5h (50% savings)
Wrong feature entirely
Days wasted → Caught in review

For critical product development, Draft isn't overhead — it's risk mitigation.

Why Draft

Draft is a methodology-first plugin that layers onto tools you already use. No new IDE to adopt. No subscription. No vendor lock-in.

Free & Open Source
Tool
Cost
Draft
Free forever — open source, MIT licensed. Host tool costs apply (Claude Code, Copilot, etc.)
Kiro (AWS)
Free preview, then $19/mo (Pro) or $39/mo (Pro+)
Cursor
Free tier, then $20/mo (Pro) or $40/mo (Business)
Windsurf
Free tier, then $15/mo+
Works With Your Existing Tools

Draft is a plugin, not a replacement. Zero switching cost — install and start using it inside your current editor.

Claude Code

Native plugin with full slash command support. Install from marketplace in 30 seconds.

GitHub Copilot

Drop-in instructions file. Works in VS Code, JetBrains, Neovim — wherever Copilot runs.

Cursor

Add from GitHub as a Cursor rule. Works alongside your existing Cursor setup.

🚀

Antigravity IDE & Gemini

Uses a lightweight `.gemini.md` bootstrap file pointing to a global or local installation.

No Vendor Lock-in

Your specs, plans, and architecture docs are plain markdown files in your repo. Switch tools any time — your project knowledge stays with you.

Deeper Than Any AI IDE

Most AI coding tools focus on writing code faster. Draft focuses on writing correct code — with methodology depth no IDE provides.

Capability
Draft vs Alternatives
Architecture Discovery
30-45 page codebase analysis with Mermaid diagrams, data state machines, consistency boundaries. No other tool generates this.
12-Dimension Bug Hunt
Correctness, security, performance, concurrency, accessibility, and 7 more dimensions. Systematic, not ad-hoc.
ACID Production Patterns
Mandatory atomicity, isolation, durability, fail-closed, idempotency enforcement during code generation. Prevents entire categories of production bugs.
3-Stage Code Review
Automated validation + spec compliance + code quality with adversarial pass. More rigorous than any IDE-integrated review.
Architecture Decision Records
Full ADR lifecycle. Critical for teams and regulated industries. No AI IDE provides this.
Git-Aware Structured Revert
Task/phase/track-level rollback. Far more precise than manual git revert.
Jira Integration
Track-to-epic-to-story-to-subtask mapping. Bridges methodology to project management.
Monorepo Federation
Service discovery, cross-service dependency graphs. Essential for microservice architectures.
Best Fit For

Brownfield Projects

Existing codebases where understanding the architecture before changing it prevents production incidents.

Enterprise & Regulated

Teams needing ADRs, audit trails, change management, and traceable decision-making.

Backend & Infrastructure

Microservices, APIs, data pipelines — where ACID compliance and consistency boundaries matter.

Teams Over Solo

Specs and plans go through PR review before any code is written. The entire team aligns before implementation starts.

Install

Install Draft as a Claude Code plugin, or use the integrations for Cursor, GitHub Copilot, Antigravity IDE, and Gemini.

Claude Code (Marketplace)

claude code
# Install from marketplace
/plugin marketplace add mayurpise/draft
/plugin install draft

Prerequisites: Claude Code CLI, Git, and Node.js 18+.

Cursor

Cursor > Settings > Rules, Skills, Subagents > Rules > New > Add from GitHub:

cursor setup
https://github.com/mayurpise/draft.git

GitHub Copilot

Download a comprehensive `copilot-instructions.md` context file tailored to Copilot's specific requirements:

copilot setup
mkdir -p .github && curl -o .github/copilot-instructions.md https://raw.githubusercontent.com/mayurpise/draft/main/integrations/copilot/.github/copilot-instructions.md

Antigravity IDE

Install globally and set up your bootstrap configuration once:

antigravity setup
# Clone skills to Antigravity global directory
mkdir -p ~/.gemini/antigravity/skills
git clone https://github.com/mayurpise/draft.git ~/.gemini/antigravity/skills/draft
# Set up the bootstrap
curl -o ~/.gemini.md https://raw.githubusercontent.com/mayurpise/draft/main/integrations/gemini/.gemini.md

Gemini

Use the lightweight bootstrap file in your local repository:

gemini setup
curl -o .gemini.md https://raw.githubusercontent.com/mayurpise/draft/main/integrations/gemini/.gemini.md

Workflow

The Draft Workflow

By treating context as a managed artifact alongside code, your repository becomes the single source of truth.

📋
Context
Define the landscape
📝
Spec & Plan
Document the approach
🏗️
Decompose
Module architecture
Implement
Execute with confidence
Step 1: Context Files

/draft:init creates persistent context files that define your project landscape.

1
product.md
Who your users are, what problems you solve, key product goals
2
tech-stack.md
Languages, frameworks, libraries, patterns, conventions
3
workflow.md
TDD preferences, commit strategy, validation config, team processes
4
.ai-context.md
200-400 lines, machine-optimized, self-contained AI context. 15+ mandatory sections: architecture, invariants, interface contracts, data flows, concurrency rules, error handling, implementation catalogs, extension cookbooks, testing strategy, glossary (brownfield only). Source of truth for all AI agents.
5
architecture.md
30-45 page human-readable engineering reference (source of truth). 25 sections + 4 appendices with Mermaid diagrams, code snippets, and prose explanations. Generated from 6-phase codebase analysis.

These files live in draft/, are git-tracked, and load automatically. AI always starts with your ground truth instead of assumptions.

Step 2: Spec & Plan

When you ask AI to "add authentication," it immediately writes code. /draft:new-track conducts a collaborative intake — structured conversation where AI acts as expert collaborator. It asks clarifying questions, contributes expertise (patterns, risks, trade-offs), and updates the spec progressively.

Step 3: Decompose (optional)

For multi-module features, /draft:decompose maps your feature into discrete modules with defined responsibilities, API surfaces, and dependency graph. This prevents tangled code and circular dependencies.
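The cycle detection that decomposition relies on is a standard depth-first search over the module dependency graph. A minimal sketch for illustration only (the `find_cycle` helper and module names are hypothetical, not Draft's implementation):

```python
# Illustrative cycle check over a module dependency map.
# Draft's actual detection logic may differ; this only shows the idea.
def find_cycle(deps):
    """Return one dependency cycle as a list of modules, or None."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / in progress / done
    color = {m: WHITE for m in deps}
    stack = []

    def visit(m):
        color[m] = GRAY
        stack.append(m)
        for dep in deps.get(m, []):
            if color.get(dep, WHITE) == GRAY:        # back edge: cycle found
                return stack[stack.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                found = visit(dep)
                if found:
                    return found
        stack.pop()
        color[m] = BLACK
        return None

    for m in deps:
        if color[m] == WHITE:
            found = visit(m)
            if found:
                return found
    return None

modules = {"auth": ["db"], "api": ["auth", "db"], "db": []}
print(find_cycle(modules))        # None: implementation order is safe
modules["db"] = ["api"]           # introduce a circular dependency
print(find_cycle(modules))        # a cycle such as ['auth', 'db', 'api', 'auth']
```

A detected cycle means two proposed modules need their boundary redrawn before any code is written.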

Step 4: Implement

/draft:implement executes one task at a time from the plan, follows TDD cycle (test first, then code, then refactor), runs verification gates before marking completion, and triggers three-stage review at phase boundaries.

Step 5: Verify Quality

Passing tests doesn't guarantee good code. /draft:coverage measures test completeness (95%+ target). /draft:review runs context-aware checks: architecture conformance, security scans (hardcoded secrets, SQL injection, XSS), performance anti-patterns, and compliance with the spec. /draft:bughunt performs exhaustive bug hunting across 12 dimensions. /draft:deep-review audits entire modules for ACID compliance and architectural resilience.

The Constraint Hierarchy

Each document layer narrows the solution space. By the time AI writes code, most decisions are already made.

1
product.md
"Build a task manager for developers"
2
tech-stack.md
"Use React, TypeScript, Tailwind"
3
.ai-context.md
"Express API → Service layer → Prisma ORM → PostgreSQL" + data state machines, consistency boundaries
4
spec.md
"Add drag-and-drop reordering"
5
plan.md
"Phase 1: sortable list, Phase 2: persistence"
Key Insight

The AI becomes an executor of pre-approved work, not an autonomous decision-maker. Explicit specs, phased plans, verification steps, and status markers keep implementation focused and accountable.

Review Before Code

This is Draft's most important feature. In traditional AI coding, you discover the AI's design decisions during code review — after it's already built the wrong thing. With Draft, the AI writes a spec first. You review the approach in a document, not a diff. Disagreements are resolved by editing a paragraph, not rewriting a module.

Traditional AI Coding
Draft Approach
AI writes code immediately
AI writes spec first — you approve the approach
Review happens on code PR (too late)
Review happens on spec PR (cheap to change)
Disagreements require rewriting code
Disagreements resolved by editing a document
AI decisions are implicit and buried in code
AI decisions are documented and git-tracked
Collaborative Intake: AI as Expert Partner

Instead of dumping requirements at the AI and hoping for the best, /draft:new-track conducts a structured conversation where AI acts as an expert collaborator — asking the right questions, contributing knowledge, and building the spec progressively.

1
One question at a time
AI asks a single focused question, waits for your answer, then contributes expertise before moving to the next
2
AI contributes at each step
Pattern recognition, risk surfacing, trade-off analysis, fact-checking against your .ai-context.md
3
Grounded in vetted sources
Citations from Domain-Driven Design, Clean Architecture, DDIA, OWASP, 12-Factor App, GoF patterns
4
Checkpoints between phases
You control the pace with summaries and refinement opportunities
5
Progressive draft updates
spec-draft.md updates as conversation evolves. You see the spec take shape in real-time
Key Insight

This levels the playing field. Junior engineers get senior-level guidance. Senior engineers can't shortcut rigor. Both produce well-documented specs with traceable reasoning. The discipline scales across your entire team.

Architecture Mode (Optional)

Standard Draft gives you specs and plans. Architecture Mode goes deeper — it forces the AI to design before it codes. Every module gets a dependency analysis. Every algorithm gets documented in plain language. Every function signature gets approved before implementation begins.

When to use: Multi-module features, new projects, complex algorithms, teams wanting maximum review granularity.

Overkill for: Simple features touching 1-2 files, bug fixes with clear scope, configuration changes.

TDD Workflow

AI-generated code without tests is a liability. When TDD is enabled in workflow.md, Draft forces the AI to prove its code works at every step.

R
Red — Write failing test
Define what "correct" means before any code exists. Test must fail with assertion error, proving it actually tests the requirement
G
Green — Minimum code to pass
Write simplest implementation that makes the test pass. No extras, no "improvements" — every line justified by a failing test
R
Refactor — Clean with tests green
Improve code structure while tests stay green. Test suite acts as safety net
C
Commit — Following conventions
One task = one commit. Small, focused commits make reverts surgical and git blame useful
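The cycle above can be made concrete with a toy task. A minimal sketch shown in its final green state (the `slugify` requirement is a hypothetical example, not part of Draft):

```python
# RED: this test was written first and failed with a NameError before any
# implementation existed, proving it really exercises the requirement.
def test_slugify_lowercases_and_joins_words():
    assert slugify("Hello World") == "hello-world"

# GREEN: the simplest implementation that makes the test pass; the
# REFACTOR step kept it readable without changing behavior.
def slugify(title):
    return "-".join(title.lower().split())

test_slugify_lowercases_and_joins_words()
print("green")  # COMMIT: one task, one commit, once the suite is green
```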
Architecture Discovery (Brownfield)

For brownfield projects, /draft:init performs a deep six-phase codebase analysis that generates architecture.md (comprehensive engineering reference) and derives .ai-context.md (machine-optimized AI context). These documents become the persistent context every future track references.

Phase 1: Orientation — System map with mermaid diagrams, directory hierarchy, entry points, request/response flow, tech stack inventory

Phase 2: Logic — Data lifecycle (state machines, storage topology, transformation chains), design patterns, complexity hotspots, conventions, external dependencies, critical invariants, security architecture, concurrency model, error handling, observability

Phase 3: Module Discovery — Module dependencies, module inventory, dependency order

Phase 4: Critical Path Tracing — End-to-end write/read/async paths with consistency boundaries, failure recovery matrix, commit points

Phase 5: Schema & Contract Discovery — Protobuf, OpenAPI, GraphQL, database schemas, inter-service dependencies

Phase 6: Test, Config & Extension Points — Test mapping, config discovery, extension cookbooks

Key Insight

Pay the analysis cost once, benefit on every track. Architecture discovery turns your codebase into a documented system that any AI assistant can understand instantly.

Revert Workflow

AI makes mistakes. When it does, you need to undo cleanly. Draft's revert understands the logical structure of your work at three granularities: task (single task's commits), phase (all commits in a phase), or track (entire track's commits). Preview, confirm, execute — git revert + Draft state update together.

Quality Disciplines

AI's default failure mode is to guess at fixes, skip verification, and claim success. Draft embeds three quality agents directly into the workflow.

Systematic Debugging

When a task is blocked ([!]), the Debugger Agent enforces a four-phase process: Investigate → Analyze → Hypothesize → Implement. After three failed hypothesis cycles, it escalates to you with everything learned and eliminated. The root cause is documented in plan.md.

Three-Stage Review

At every phase boundary, the Reviewer Agent runs three sequential checks:

Stage 1: Automated Validation — Architecture conformance, dead code, circular dependencies, security anti-patterns, performance issues.

Stage 2: Spec Compliance — All functional requirements implemented? Acceptance criteria met? No scope creep or missing features?

Stage 3: Code Quality — Follows project patterns from tech-stack.md? Appropriate error handling? Tests cover real logic? Maintainability and complexity.

Critical issues must be fixed before proceeding. Important issues should be fixed. Minor issues noted but don't block.

Code Coverage (95%+ target)

/draft:coverage runs your project's coverage tool and classifies every uncovered line:

Testable — Should be covered. Suggests specific tests to write.

Defensive — Error handlers for impossible states. Acceptable to leave.

Infrastructure — Framework boilerplate and entry points. Acceptable.
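A rough sketch of how such a classifier might behave (these heuristics are illustrative assumptions, not Draft's actual rules):

```python
# Hypothetical gap classifier: maps an uncovered source line to one of
# the three categories above. Real tooling would use richer signals.
def classify_uncovered(line):
    stripped = line.strip()
    if stripped.startswith("if __name__"):
        return "infrastructure"       # entry-point boilerplate
    if "unreachable" in stripped or "NotImplementedError" in stripped:
        return "defensive"            # guards for impossible states
    return "testable"                 # should be covered: write a test

print(classify_uncovered('raise AssertionError("unreachable state")'))  # defensive
print(classify_uncovered("return total / max(count, 1)"))               # testable
```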

Module Lifecycle Audit

/draft:deep-review performs an exhaustive end-to-end lifecycle review of a service, component, or module. It evaluates ACID compliance, architectural resilience, and production-grade enterprise quality. Non-blocking by default. Results in deep-review-report.md.

Exhaustive Bug Hunt (12 dimensions)

/draft:bughunt performs exhaustive defect discovery: correctness, reliability, security, performance, UI responsiveness, concurrency, state management, API contracts, accessibility, configuration, tests, maintainability. Findings severity-ranked (Critical/High/Medium/Low) with file:line locations in bughunt-report.md.

Command Reference

Draft provides 17 slash commands for the full development lifecycle.

📋
/draft

Overview and intent mapping

What it does:

  • Shows available commands and guides you to the right workflow
  • Maps natural language intent to commands

Usage:

/draft
🚀
/draft:init

Initialize project context

What it does:

  1. Detects brownfield (existing) vs greenfield (new) project
  2. Architecture discovery (brownfield): 6-phase deep analysis with mermaid diagrams, data state machines, consistency boundaries
  3. Creates product.md, tech-stack.md, workflow.md, .ai-context.md, architecture.md
  4. Optionally enables Architecture Mode for module decomposition
  5. Creates tracks.md master registry

Usage:

/draft:init
/draft:init refresh
🗂️
/draft:index

Monorepo federation and service aggregation

What it does:

  1. Scans immediate child directories for service markers
  2. Reads each service's draft/ context
  3. Synthesizes root-level documents for system-of-systems view
  4. Generates service registry, dependency graph, tech matrix
  5. Bughunt mode: Runs /draft:bughunt across subdirectories, aggregates results

Usage:

/draft:index
/draft:index --init-missing
/draft:index bughunt
/draft:index bughunt dir1 dir2
📝
/draft:new-track

Collaborative intake for spec + plan creation

What it does:

  1. Creates spec-draft.md + plan-draft.md immediately
  2. Conducts collaborative intake — one question at a time
  3. AI contributes expertise: patterns, risks, trade-offs, citations
  4. Checkpoints between phases for refinement
  5. On confirmation: promotes drafts to spec.md + plan.md

Usage:

/draft:new-track "Add user authentication"
/draft:implement

Execute tasks with TDD workflow

What it does:

  1. Finds next uncompleted task in active track
  2. Executes TDD cycle: RED → GREEN → REFACTOR
  3. Updates plan status markers and metadata
  4. At phase boundaries: runs three-stage review

Usage:

/draft:implement
📊
/draft:status

Display progress overview

What it does:

  • Shows all active tracks with completion percentages
  • Displays current phase and task breakdown
  • Highlights blocked items

Usage:

/draft:status
/draft:revert

Git-aware rollback

What it does:

  1. Identifies commits by track pattern
  2. Shows preview with affected files and plan changes
  3. Requires explicit confirmation
  4. Executes git revert + updates plan markers

Revert levels:

  • Task: Single task's commits
  • Phase: All commits in a phase
  • Track: Entire track's commits
🏗️
/draft:decompose

Module decomposition + dependency mapping

What it does:

  1. Proposes modules with: name, responsibility, files, API, dependencies
  2. Maps dependencies, detects cycles, generates dependency diagram
  3. Creates architecture.md with module definitions (derives .ai-context.md)

Usage:

/draft:decompose
📈
/draft:coverage

Code coverage report (target 95%+)

What it does:

  1. Auto-detects coverage tool (jest, vitest, pytest-cov, go test)
  2. Runs coverage and captures output
  3. Reports per-file breakdown with uncovered line ranges
  4. Classifies gaps: testable, defensive, infrastructure

Usage:

/draft:coverage
/draft:deep-review

Module lifecycle audit

Evaluates:

  • ACID compliance
  • Architectural resilience
  • Production-grade enterprise quality
  • Structural analysis

Usage:

/draft:deep-review src/auth
🔍
/draft:bughunt

Exhaustive bug hunt across 12 dimensions

12 dimensions analyzed:

  • Correctness, reliability, security, performance
  • UI responsiveness, concurrency, state management
  • API contracts, accessibility, configuration, tests, maintainability

Usage:

/draft:bughunt
/draft:bughunt --track my-feature
📋
/draft:review

Code review orchestrator

Track-level review:

  1. Stage 1: Automated static validation
  2. Stage 2: Spec compliance verification
  3. Stage 3: Code quality checks
  4. Optional: Runs /draft:bughunt

Usage:

/draft:review
/draft:review --full
🔍
/draft:learn

Discover patterns, update guardrails

What it does:

  • Scans codebase for recurring coding patterns (3+ occurrences)
  • Learns conventions (skip in future analysis)
  • Learns anti-patterns (always flag in future)
  • Updates draft/guardrails.md

Usage:

/draft:learn
/draft:learn promote
/draft:learn migrate
📋
/draft:jira-preview

Generate Jira export for review

What it does:

  • Generates jira-export.md from track plan
  • Maps: Track → Epic, Phase → Story, Task → Sub-task
  • Auto-calculates story points from task count

Usage:

/draft:jira-preview
🎫
/draft:jira-create

Push issues to Jira via MCP

What it does:

  1. Creates issues from jira-export.md
  2. Creates Epic → Stories → Sub-tasks in order
  3. Updates plan and export with issue keys

Requirements:

  • MCP-Jira server configured in Claude Code settings

Usage:

/draft:jira-create
📐
/draft:adr

Architecture Decision Records

What it does:

  1. Documents significant technical decisions with context and rationale
  2. Creates structured ADR: Context, Decision, Alternatives, Consequences
  3. Stores at draft/adrs/NNNN-title.md

Usage:

/draft:adr
/draft:adr "Use PostgreSQL"
🔀
/draft:change

Handle mid-track requirement changes

What it does:

  1. Analyzes impact on all completed and pending tasks
  2. Flags any [x] tasks retroactively invalidated by the change
  3. Proposes exact amendments to spec.md and plan.md
  4. Applies changes only after explicit confirmation (CHECKPOINT: yes / no / edit)
  5. Appends a timestamped entry to ## Change Log in plan.md

Usage:

/draft:change the export format should also support JSON
/draft:change track add-export-feature add progress indicator

Team Collaboration

Draft's most powerful application is team-wide: every markdown file goes through commit → review → update → merge before a single line of code is written. By the time implementation starts, the entire team has already agreed on what to build, how to build it, and in what order.

The PR Cycle on Documents, Not Code
1
product.md + tech-stack.md + .ai-context.md + architecture.md + workflow.md
Tech lead runs /draft:init. For brownfield projects, Draft performs deep 6-phase architecture discovery — generating architecture.md (30-45 pages, 25 sections + appendices with Mermaid diagrams) and deriving .ai-context.md (200-400 lines, 15+ sections: invariants, interface contracts, data flows, cookbooks). Team reviews project vision, technical choices, system architecture, and workflow preferences via PR.
2
spec.md + plan.md
Lead runs /draft:new-track — a collaborative intake where AI asks structured questions one at a time, contributes expertise (patterns, risks, trade-offs), and builds the spec progressively. Team reviews requirements, acceptance criteria, and phased task breakdown via PR.
3
.ai-context.md + architecture.md
Lead runs /draft:decompose. Team reviews module boundaries, API surfaces, dependency graph, and implementation order via PR. architecture.md is the 30-45 page source of truth; .ai-context.md is the machine-optimized AI context derived from it. Senior engineers validate the architecture without touching the codebase.
4
jira-export.md → Jira stories
Lead runs /draft:jira-preview and /draft:jira-create. Epics, stories, and sub-tasks are created from the approved plan. Individual team members pick up Jira stories and implement.
5
Implementation + Verification
Only after all documents are merged does coding start. Every developer has full context: what to build (spec.md), in what order (plan.md), with what boundaries (.ai-context.md / architecture.md). After implementation, quality tools verify completeness.
Why This Changes How Teams Work
Traditional AI Development
Draft Team Workflow
Developer gets a Jira ticket and asks AI to build it
Developer gets a Jira ticket with linked spec, plan, and architecture already reviewed
Each developer makes independent design decisions
Design decisions are made once in documents, reviewed by the team
Integration problems surface during code review
Integration problems surface during architecture review — before any code exists
New team members read code to understand features
New team members read spec.md and plan.md to understand features
Key Insight

The CLI is single-user, but the artifacts it produces are the collaboration layer. Draft handles planning and decomposition. Git handles review. Jira handles distribution. Changing a sentence in spec.md takes seconds. Changing an architectural decision after 2,000 lines of code takes days.

Jira Integration

Sync tracks to Jira with a two-step workflow. Preview before pushing to catch issues early.

1
Preview
Generate jira-export.md with epic and stories using /draft:jira-preview
2
Review
Adjust story points, descriptions, acceptance criteria in the export file
3
Create
Push to Jira via MCP server using /draft:jira-create

Auto Story Points: 1-2 tasks = 1pt, 3-4 tasks = 2pts, 5-6 tasks = 3pts, 7+ tasks = 5pts
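That mapping can be written down as a plain function (a sketch of the stated rule, not Draft's code):

```python
# Auto story points per the rule above: 1-2 tasks = 1pt, 3-4 = 2pts,
# 5-6 = 3pts, 7+ = 5pts.
def story_points(task_count):
    if task_count <= 2:
        return 1
    if task_count <= 4:
        return 2
    if task_count <= 6:
        return 3
    return 5

for n in (2, 4, 6, 9):
    print(n, "tasks ->", story_points(n), "pts")
```

You can still adjust the generated points in jira-export.md before pushing to Jira.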

Videos

Short videos covering Draft's methodology, agents, and workflows. View full playlist

Draft - Ship fast. Ship right with AI Coding Assistant
The Reviewer Agent
The Bughunt Agent
Taming AI Chaos
Context Driven Development
Collaborative Intake
AI's Quality Gatekeeper

Codebase Research

Every AI interaction starts with understanding. /draft:init performs a deep 6-phase analysis of your codebase and produces architecture.md — a comprehensive engineering reference from which .ai-context.md is derived. Together they capture how your system actually works. Not how it was designed. How it works today.

Why This Exists

AI coding assistants face a fundamental problem: they don't know your system. Every session, they re-discover your architecture by reading files, guessing at patterns, and inferring relationships. This costs tokens, wastes time, and produces hallucinations when the AI fills knowledge gaps with assumptions.

Without Codebase Research
With .ai-context.md
AI reads 50+ files to understand structure
One file provides complete system understanding
Guesses at data flow and state transitions
State machines, consistency boundaries, and failure paths documented
Misses security invariants and concurrency constraints
Critical invariants catalogued with enforcement locations
Each session starts from scratch
Persistent, git-tracked, always current
Different AI tools build different mental models
Single source of truth consumed by any AI tool
What It Captures

.ai-context.md is organized around the question every engineer and AI agent needs answered: "Where is my data right now, what state is it in, and what happens if something fails here?"

1
System Map
Directory structure, entry points, request/response flows with actual file:line references. Not a diagram of intent — a map of reality.
2
Data Lifecycle
State machines per domain object (valid transitions, invariants, enforcement locations). Storage topology across tiers (cache, DB, event log, archive). Data transformation chains across boundaries (API payload to persistence model to event).
3
Critical Paths
End-to-end write and read paths with consistency boundaries (where strong consistency ends and eventual consistency begins). Async/event paths with delivery guarantees. Failure recovery matrix — what happens to in-flight data when each stage fails.
4
System Constraints
Critical invariants (data safety, security, concurrency, ordering, idempotency). Security architecture. Concurrency model. Error handling and retry policies.
5
Extension Cookbooks
Step-by-step guides for common changes: "Add a new endpoint", "Add a new model", "Add a new integration". File-by-file instructions an AI agent can follow mechanically.
Dual Output: Machine + Human

One analysis, two outputs. Each optimized for its audience.

architecture.md (Source)
.ai-context.md (Derived)
Prose paragraphs, annotated diagrams, onboarding framing
Dense tables, YAML frontmatter, flat H2 sections
Readable — explains the "why" behind decisions
Token-efficient — minimal prose, maximum signal
Source of truth — all mutations happen here
Condensed from architecture.md — auto-refreshed on mutations
Consumed by: engineers, team leads, new hires, PR reviewers
Consumed by: /draft:implement, /draft:deep-review, /draft:bughunt, /draft:review, any external AI tool
Key Insight

The same analysis serves both audiences without compromise. AI agents get token-efficient tables they can parse reliably. Engineers get prose they can read over coffee. Neither format sacrifices for the other because they're generated from the same source.

How It Helps AI Assistants
$
Reduces Token Cost
Instead of reading dozens of files to understand context, the AI reads one structured document. Fewer input tokens per session, compounding across every interaction.
H
Eliminates Hallucinations
When the AI doesn't know your architecture, it invents one. .ai-context.md fills the knowledge gap with facts — actual file paths, real state machines, documented invariants — leaving nothing to guess.
A
Improves Accuracy
Code generation respects existing patterns, module boundaries, naming conventions, and data contracts. The AI writes code that fits your system because it knows your system.
B
Catches More Bugs
Bug hunting with data state machines finds invalid transitions. Consistency boundaries reveal eventual-consistency bugs. Failure recovery matrices expose missing recovery paths. Without this context, these bugs are invisible.
How It Helps Engineers
1
New Engineer Onboarding
Day-one understanding of how the system works. architecture.md provides the human-readable guide: system overview, getting started, how data flows end-to-end. No more "read the code and figure it out."
2
Senior Engineer Reference
Experienced engineers need different things — invariant tables, dependency graphs, concurrency models, failure matrices. .ai-context.md surfaces system constraints that even veterans forget or never documented.
3
Architecture Review
PR reviewers can check changes against documented module boundaries, data state machines, and critical invariants. Review becomes "does this change violate any documented contract?" rather than relying on tribal knowledge.
4
Incident Response
When production breaks at 2am, the failure recovery matrix tells you exactly what data state each component is in and how to recover. The consistency boundary map shows where eventual-consistency lags could explain the symptoms.
Living Document, Not a Snapshot

Architecture documentation rots. .ai-context.md doesn't — because it's maintained by the same workflow that changes the code.

S
Synced with Code
YAML frontmatter tracks the exact git commit the analysis was run against. Compare current HEAD vs frontmatter commit to detect staleness. /draft:init refresh updates the document when the codebase evolves.
M
Mutated by Commands
/draft:implement updates module status after completing tasks. /draft:decompose adds new modules with dependency graphs. Each mutation auto-refreshes the derived .ai-context.md. The documents evolve with the code.
G
Git-Tracked & Reviewable
Every change to architecture.md goes through the same PR review process as code. Architecture decisions become visible, diffable, and blameable. History is preserved.
Tool-Agnostic
Not locked to any AI vendor. Claude Code, Cursor, GitHub Copilot, Gemini, or any future tool can consume .ai-context.md (or architecture.md). Your codebase understanding is an asset you own, not a feature of a subscription.
Key Insight

Pay the analysis cost once, benefit on every interaction. The 10-minute init analysis saves hours of repeated context-building across every AI session, every code review, every new engineer onboarding, and every incident response. The ROI compounds with every use.

Reference

Project Structure
project structure
draft/
├── product.md          # Product vision, goals, guidelines
├── tech-stack.md       # Technical choices, accepted patterns
├── architecture.md     # Source of truth: 30-45 pages, Mermaid diagrams, code snippets
├── .ai-context.md      # Derived: 200-400 lines, machine-optimized AI context
├── workflow.md         # TDD, commit, validation config
├── guardrails.md       # Hard guardrails, learned conventions, anti-patterns
├── validation-report.md # Project-level quality checks (generated)
├── jira.md             # Jira project config (optional)
├── tracks.md           # Master track list
└── tracks/
    └── <track-id>/
        ├── spec.md      # Requirements
        ├── plan.md      # Phased task breakdown
        ├── architecture.md # Track modules, data paths (optional)
        ├── .ai-context.md   # Token-optimized, derived (optional)
        ├── metadata.json
        ├── validation-report.md # Quality checks (generated)
        └── jira-export.md # Jira stories (optional)
Status Markers

Simple markers track progress throughout specs and plans. Progress is explicit, not assumed.

[ ] Pending
[~] In Progress
[x] Completed
[!] Blocked
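Because the markers are plain text, progress can be tallied mechanically rather than guessed at. A minimal sketch (the marker syntax is from the list above; the helper name and sample plan are illustrative):

```python
# Tally Draft status markers in a plan file.
# The marker syntax ([ ], [~], [x], [!]) comes from the Draft docs;
# the function name and sample plan text are illustrative.
import re
from collections import Counter

MARKERS = {" ": "pending", "~": "in progress", "x": "completed", "!": "blocked"}

def progress(plan_text: str) -> Counter:
    """Count each status marker at the start of a task line."""
    counts = Counter()
    for match in re.finditer(r"^\s*[-*]?\s*\[([ ~x!])\]", plan_text, re.MULTILINE):
        counts[MARKERS[match.group(1)]] += 1
    return counts

plan = """\
[x] Phase 1: write failing tests
[~] Phase 2: implement auth module
[ ] Phase 3: wire up routes
[!] Phase 4: deploy (blocked on infra)
"""

for status, n in progress(plan).items():
    print(f"{status}: {n}")
```

The same one-liner logic works with grep (`grep -c '^\[x\]' plan.md`), which is the point of keeping status in searchable files instead of chat history.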
Iron Law

Evidence before claims, always. Never mark [x] without running verification, confirming output shows success, and showing evidence in the response.

Core Principles
1. Plan before you build
Create specs and plans that guide development before writing code
2. Maintain context
Ensure agents follow style guides and product goals consistently
3. Iterate safely
Review plans before code is written, catch issues early
4. Work as a team
Share project context across team members through git-tracked specs
5. Verify before claiming
Evidence before assertions, always — run tests, show proof
Constraint Mechanisms

How Draft keeps AI focused and accountable:

| Mechanism | Effect |
| --- | --- |
| Explicit spec | AI only implements what's documented → prevents scope creep |
| Phased plans | AI works on one phase at a time → prevents over-engineering |
| Verification steps | Each phase requires proof of completion → prevents false claims |
| Status markers | Progress is tracked, not assumed → prevents lost context |
| Three-stage review | Spec compliance before code quality → prevents quality gaps |
| Validation checks | Context-aware quality validation → prevents pattern violations, vulnerabilities, and tech debt |

Industry Standards

Draft codifies the engineering culture of "Big Tech" (Google, Amazon, Stripe) into an AI-assisted workflow. The goal is to shift effort "left"—solving problems in writing before writing code.

Industry Practice Ranking

How Draft methods map to industry standards, ranked by impact on engineering quality (1 = Critical, 5 = Optimization).

| Rank | Practice | Draft Implementation | Industry Equivalent | Companies |
| --- | --- | --- | --- | --- |
| 1 | Design-First Engineering | `spec.md` & `plan.md`: a detailed spec and implementation plan written before coding | Design Docs / RFCs: Amazon "PR/FAQ", Google Design Docs | Google, Amazon, Stripe, Uber |
| 1 | Monorepo / Shared Context | `/draft:index`: federated knowledge index and system maps | Unified codebase: single source of truth, automated dependency graphing | Google, Meta, Twitter |
| 2 | Test-Driven Development | `/draft:implement`: enforced "Red-Green-Refactor" workflow | TDD / CI gates: tests written as part of the feature | Netflix, Pivotal |
| 3 | Structured Code Review | `/draft:review`: three-stage review of automated validation, spec compliance, and code quality | Readability / Owners: Google's "Critique" system | Google, Meta |
| 3 | Arch. Decision Records | `.ai-context.md` + `architecture.md`: documenting why the system is built this way, in machine-optimized and human-readable dual output | ADRs: immutable records of architectural choices | Spotify, AWS, GitHub |
| 4 | Bug Bashes | `/draft:bughunt`: systematic, categorized search for edge cases | Bug bashes: scheduled team-wide testing sessions | Microsoft, Game Studios |
| 5 | Service Catalog | `product.md`: standardized metadata for every project | IDP (Internal Dev Platform): portals to manage service metadata | Spotify, Lyft |
Detailed Comparison
| Draft Approach | Google/Amazon Approach |
| --- | --- |
| **`spec.md` & `plan.md`**: You cannot run `/draft:implement` without a spec. This forces the AI to "think" before "acting," preventing code that works but solves the wrong problem. | **Design Docs & PR/FAQ**: Google engineers write 5-20 page docs before coding ("code is expensive, docs are cheap"). Amazon works backwards from the press release (PR/FAQ) to ensure customer value. |
| **`/draft:index`**: Aggregates context from multiple services (`draft/` folders) into a federated knowledge base, solving the "context window" problem for AI. | **Monorepo Strategy**: Google and Meta store code in one giant repo with massive tooling (Blaze/Buck) to manage dependencies and allow atomic refactors. |
| **`/draft:review`**: Three-stage review: automated validation (architecture, security, performance), spec compliance, and code quality. Stage 1 automates checks that would require manual review elsewhere. | **Critique & Readability**: Google requires three approvals: LGTM (peer), Owners (gatekeeper), and Readability (language expert) to ensure code health. |
Maturity Assessment

Adopting Draft places your workflow at Maturity Level 4/5 (High). You operate with the "Staff Engineer" model of a FAANG company: writing specs before implementing, running systematic bug hunts, and maintaining living architecture documents.