The Problem Both Tools Solve
If you've used Cursor, Copilot, or Claude Code for anything beyond a toy project, you know the pattern:
- You ask the AI to "add user authentication"
- It immediately writes 400 lines of code
- It picked JWT when you wanted session-based auth
- It ignored your existing middleware pattern
- You spend an hour rewriting what took the AI 30 seconds to generate
The core problem isn't speed — it's direction. AI coding tools are fast but undirected. Without constraints, they guess at requirements, pick arbitrary approaches, and skip verification.
Kiro (from AWS) and Draft (my open-source plugin) both solve this with the same thesis: write specs before code. Both generate requirements, plans, and then execute against them.
The similarities end there.
What Kiro Gets Right
Credit where it's due. Kiro has genuine strengths:
- Autopilot mode — runs multi-step tasks without per-step confirmation
- Agent hooks — file save triggers auto-testing, auto-docs, credential scanning
- Visual IDE — it's a VS Code fork with integrated visual diffs and multimodal chat
- Bidirectional spec sync — edit code and specs update; edit specs and tasks regenerate
If you want a polished IDE experience with real-time automation, Kiro delivers. It's a good product.
But when I dug into what I actually need for production work — brownfield codebases, enterprise requirements, team workflows — the gaps became clear.
What Made Me Switch
1. No Bug Hunting
Kiro doesn't have a standalone bug hunting capability. You can ask the chat to "find bugs," but that's ad-hoc prompting — not systematic analysis.
Draft's /draft:bughunt analyzes code across 14 dimensions: correctness, reliability, security, performance, UI responsiveness, concurrency, state management, API contracts, accessibility, configuration, tests, dependency/supply chain security, algorithmic complexity, and i18n/l10n. Each finding has a severity level, confidence score, file:line location, and suggested fix. It generates regression tests.
This isn't a nice-to-have. On my last project, bughunt caught a race condition in a payment processing service that would have caused double-charges under concurrent requests. That single catch paid for the time I spent setting up Draft.
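To make that failure mode concrete, here is a minimal sketch of the check-then-act race that produces a double-charge. All names are illustrative (this is not Draft's output or the actual service); a `Barrier` is used only to make the race fire deterministically:

```python
import threading

# Hypothetical in-memory store; illustrative only.
orders = {"order-1": {"status": "pending"}}
charges = []                     # one entry per real card charge
barrier = threading.Barrier(2)   # forces both threads past the check

def charge_order(order_id):
    if orders[order_id]["status"] == "pending":   # check...
        barrier.wait()            # both threads pass the check first
        charges.append(order_id)  # ...then act: the card is charged twice
        orders[order_id]["status"] = "paid"

threads = [threading.Thread(target=charge_order, args=("order-1",))
           for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()

print(len(charges))   # 2: the customer was double-charged
```

The fix is to make the check and the charge a single atomic step (a lock, a database transaction, or an idempotency key), which is exactly the class of issue a concurrency-dimension scan is meant to flag.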
2. No ACID Enforcement
When Kiro generates code, it generates working code. But working code and production-safe code are different things.
Draft enforces ACID patterns during code generation:
- Atomicity — operations succeed completely or fail completely
- Isolation — concurrent operations don't corrupt shared state
- Durability — committed data survives crashes
- Fail-closed — failures default to the safe state
- Idempotency — retrying the same operation produces the same result
These aren't suggestions. They're mandatory patterns that Draft's implementation engine applies to every task. The difference between "works on my machine" and "works in production at 3am" is exactly these patterns.
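As a sketch of what idempotency, isolation, and fail-closed behavior look like in practice (hypothetical names; Draft's generated code will differ):

```python
import threading

_processed = {}              # idempotency key -> recorded result
_lock = threading.Lock()     # isolation: one writer at a time

def charge(idempotency_key, amount, gateway):
    """Charge at most once per key; unknown outcomes fail closed."""
    with _lock:
        if idempotency_key in _processed:     # idempotency: retry-safe
            return _processed[idempotency_key]
        try:
            result = gateway(amount)          # the side-effecting call
        except Exception:
            # Fail-closed: on any doubt, report failure and record
            # nothing; the caller retries with the same key.
            raise
        _processed[idempotency_key] = result  # commit the outcome
        return result

# Usage: retrying with the same key returns the cached result,
# so the gateway is invoked exactly once.
calls = []
def fake_gateway(amount):
    calls.append(amount)
    return {"ok": True, "amount": amount}

r1 = charge("key-1", 500, fake_gateway)
r2 = charge("key-1", 500, fake_gateway)   # retry: no second charge
print(len(calls))   # 1
```

Real payment APIs expose the same pattern: Stripe, for example, deduplicates requests that carry the same `Idempotency-Key` header.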
3. No Code Review System
Kiro has no standalone code review feature. Draft runs a three-stage review at every phase boundary:
- Automated validation — architecture conformance, dead code, circular dependencies, security anti-patterns
- Spec compliance — is every requirement in the spec implemented? Is there scope creep?
- Code quality — does the code follow project patterns? Is error handling appropriate? Do tests cover real logic?
Critical issues block the next phase. You can't ship code that doesn't match the spec, because the system won't let you proceed.
4. No Architecture Discovery
This is the biggest gap.
When you point Kiro at an existing codebase, it generates a design doc with data flows and interfaces. Useful, but shallow.
Draft's /draft:init performs a 5-phase deep analysis that produces a 30-45 page architecture.md with:
- Mermaid diagrams of actual system structure
- Data state machines per domain object
- Consistency boundaries (where strong consistency ends and eventual consistency begins)
- Failure recovery matrices
- Critical invariants with enforcement locations
- Extension cookbooks ("how to add a new endpoint" — file by file)
From this, it derives .ai-context.md — a 200-400 line, token-optimized file that any AI tool can consume. One file gives the AI complete system understanding. No more spending 50+ file reads per session re-discovering your architecture.
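To make the idea concrete, here is a hypothetical excerpt of what a token-optimized context file might contain. The content below is invented for illustration; Draft's actual output format will differ:

```markdown
## Services
- api-gateway (Go): routes /v1/* to auth, orders; stateless
- orders (Python): owns the orders table; strong consistency inside service

## Invariants
- order.status transitions: pending -> paid -> shipped (enforced in orders/models.py)

## How to add an endpoint
1. Define the handler in api/handlers/
2. Register the route in api/routes.py
3. Add a contract test in tests/contracts/
```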
5. IDE Lock-in
Kiro is a VS Code fork. To use it, you adopt a new IDE. Your existing VS Code extensions might work, but you're now dependent on Amazon maintaining compatibility.
Draft is a plugin. It runs inside:
- Claude Code — native plugin, 30-second install
- GitHub Copilot — drop-in instructions file
- Gemini — single file integration
- Cursor — add from GitHub as a rule
Switch AI tools anytime. Your specs, plans, and architecture docs are plain markdown in your repo. They go with you.
6. The Price
Kiro is in free preview now, but the pricing is published: $19/mo (Pro) or $39/mo (Pro+).
Draft is free. MIT licensed. Open source. The only cost is the host tool you're already paying for (Claude Code, Copilot, etc.).
What I Lost
Transparency matters. Here's what Draft genuinely doesn't have:
- No autopilot mode — Draft has checkpoints. You approve at phase boundaries. This is slower but deliberate for production work.
- No event-driven hooks — nothing happens without explicit invocation. You type /draft:implement, not save-and-forget.
- No visual IDE — Draft is terminal-based and chat-panel-based. No visual diffs, no drag-and-drop.
- No multimodal input — can't paste UI mockups as input. Text only.
- No bidirectional spec sync — specs are snapshots. Explicit /draft:change to update them (but you get an audit trail).
If you're building a quick prototype or a frontend-heavy app and you want the smoothest possible UX, Kiro might be the better choice. I'm not pretending otherwise.
The Real Comparison
Here's what it comes down to:
| What Matters to You | Better Choice |
|---|---|
| Rapid prototyping, greenfield | Kiro |
| Production codebase, brownfield | Draft |
| Visual IDE experience | Kiro |
| Multi-tool portability | Draft |
| Event-driven automation | Kiro |
| Bug hunting & code review depth | Draft |
| ACID compliance enforcement | Draft |
| Enterprise traceability (ADRs, audit trails) | Draft |
| Microservice architecture | Draft |
| Team collaboration via PR review | Draft |
| Free / no subscription | Draft |
The 5-Minute Test
Don't take my word for it. Try it yourself.
```shell
# In Claude Code:
/plugin marketplace add mayurpise/draft
/plugin install draft

# Point it at your existing codebase:
/draft:init
# Watch it generate architecture.md — a 30-45 page
# analysis of how your system actually works.

# Then run:
/draft:bughunt
# See what it finds.
```
The architecture discovery alone is worth the five minutes. Even if you never use another Draft command, having a machine-readable map of your codebase changes how every AI tool interacts with your code.
Why I Built This
I'm a solo developer competing with Amazon. I don't have a marketing team, a DevRel org, or a booth at re:Invent.
What I have is a methodology that catches production bugs before they ship, enforces ACID patterns that prevent 3am incidents, and generates architecture documentation that eliminates the "AI doesn't understand my codebase" problem.
Kiro is a good IDE. Draft is a deeper methodology. They solve different problems at different depths.
If your problem is "I want a nicer IDE for AI coding" — use Kiro.
If your problem is "AI keeps generating code that breaks in production" — try Draft.