To achieve production-grade results with Claude Code, adhere to one central rule: never let the AI write a single line of code until you have reviewed and approved a written architectural plan. This “Plan-First” strategy uses a four-phase workflow (Research, Planning, Annotation, and Boring Implementation) to prevent “expensive failures”: AI-generated code that works in isolation but breaks systemic integrity by ignoring existing cache layers, ORM conventions, or architectural boundaries.
## The Trap of “Isolated Success” in AI Coding
The most significant risk in AI-assisted development isn't a syntax error or a logic bug; it is the creation of code that is “correct but misplaced”. Boris Tane, who joined Cloudflare after his serverless observability platform, Baselime, was acquired, notes that AI often lacks a “global” understanding of a codebase.
Common “isolated failures” include:

- Implementing a new feature while completely ignoring an existing internal cache layer.
- Creating database migrations that violate existing ORM (Object-Relational Mapping) conventions.
- Duplicating logic that already exists elsewhere in the system because the AI didn't “see” it.
When these errors occur, the developer often spends more time rolling back and fixing the “perfect” but “wrong” code than they would have spent writing it manually.
## The 4-Phase Claude Code Workflow
To solve this, Tane suggests a rigorous pipeline that treats the AI like a highly efficient construction crew and the developer like the lead architect.
### 1. Research: Forcing Deep Comprehension
Before asking for a solution, force Claude to read the entire relevant directory. The goal is to produce a research.md file that serves as a “review surface” for the human architect.
- Instructional Keywords: Use strong modifiers like “deeply,” “in great detail,” and “intricacies”.
- The Goal: Claude should explain how the current system works, its dependencies, and its edge cases before proposing changes.
- Validation: If the research.md contains errors about how your system functions, the subsequent code will be fundamentally flawed.
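Putting those keywords together, a research instruction might read as follows (the directory and subject matter are placeholders, not a prompt Tane publishes verbatim):

```text
Read the src/billing directory deeply. Explain in great detail how
invoicing currently works: every dependency, every edge case, and the
intricacies of how it interacts with the cache layer. Write your
findings to research.md. Do not propose any changes yet.
```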
### 2. Planning: Defining the Specification
Once the research is validated, the AI creates a plan.md. This is not a vague vision statement; it is a technical blueprint.
A high-quality plan must include:
- Specific File Paths: Exactly which files will be modified.
- Code Snippets: Demonstrations of the proposed logic changes.
- Trade-offs: A discussion of why a specific approach was chosen over others.
- Reference Implementations: Tane suggests providing an example of a similar feature from an open-source project to give the AI a “concrete anchor”.
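A plan.md meeting these criteria could be skeletoned like this; the feature, file names, and section headings are illustrative, not a template Tane prescribes:

```markdown
## Plan: add rate limiting to the webhook endpoint

### Files to modify
- src/routes/webhook.ts: wrap the handler with the limiter
- src/lib/limiter.ts: new sliding-window helper

### Proposed logic (snippet)
Call limiter.check(clientId) before the handler body; return 429 on
rejection.

### Trade-offs
Sliding window chosen over fixed window: smoother burst handling at the
cost of slightly more memory per client.

### Reference implementation
Similar feature in <open-source project>, linked as a concrete anchor.
```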
### 3. Annotation: The Human-in-the-Loop Feedback
This is the most critical stage. Instead of arguing with the AI in a chat window, the developer opens the plan.md in their editor and adds inline annotations.
- Shared Mutable State: The markdown file acts as a shared state between the human and the AI.
- Specific Corrections: The developer can veto over-engineering, enforce project-specific naming conventions, or protect specific API signatures that must not change.
- Iteration: This cycle of annotation and plan updates may repeat 1 to 6 times until the plan is “perfect”.
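Annotations are simply edits made to plan.md itself. One possible convention (the blockquote style and the examples are hypothetical) looks like this:

```markdown
### Proposed change
Introduce a RedisStore class for caching user sessions.

> ANNOTATION: Over-engineered. We already have an internal KV helper
> for this; use it instead of adding a Redis dependency.

### API changes
Rename getUser(id) to fetchUser(id) for clarity.

> ANNOTATION: Veto. getUser is a public API surface; the signature
> must not change.
```

On the next pass, the AI reads the annotations, updates the plan, and the cycle repeats until no vetoes remain.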
### 4. Implementation: Making Execution “Boring”
When the plan is finally approved, the implementation should be mechanical and “boring”.
- The Instruction: “Implement everything. Do not stop until every task is marked complete in the plan”.
- The Guardrails: Instruct Claude to run type-checks (`typecheck`) continuously and avoid using `any` or `unknown` types to maintain code quality.
- The Rollback Rule: If the AI starts veering off-course, do not try to patch the mistake. Roll back the Git changes entirely and restart with a narrower scope.
## Comparison: Traditional Prompting vs. The Tane Workflow
| Feature | Traditional AI Prompting | The Tane Workflow |
| --- | --- | --- |
| Initial Action | “Write a function that…” | “Research this directory deeply.” |
| Success Metric | Does the code run? | Does it align with the approved plan? |
| Communication | Messy back-and-forth chat | Structured annotations in plan.md |
| Risk Level | High (Architectural drift) | Low (Boring, verified execution) |
| Developer Role | Code editor/debugger | System architect and pilot |
## How to Stay in the “Pilot's Seat”
A core tenet of this workflow is that the human retains all decision-making power. Tane identifies four specific ways to guide the AI during the planning phase:
- Pick and Choose: Select only the parts of an AI suggestion that add value while discarding over-engineered components.
- Trim Scope: Explicitly remove “nice-to-have” features that aren't necessary for the current task.
- Protect Interfaces: Set hard constraints on function signatures and public APIs to ensure backward compatibility.
- Override Technical Choices: Force the AI to use specific libraries or internal methods it might have overlooked.
## FAQ: Optimizing Claude Code for Production
**Q: Why shouldn't I use the built-in “Plan Mode” in Claude Code?**
A: Boris Tane prefers using a dedicated plan.md file because it can be edited, annotated, and saved as a permanent record within the project. Built-in modes often lack the persistence and “shared state” benefits of a local markdown file.

**Q: Does the AI lose context in long sessions?**
A: While many fear context degradation, Tane argues that keeping research, planning, and implementation in one long session is actually beneficial. The AI builds a deep understanding during the research phase that remains valuable during implementation.

**Q: What is the most common reason for an AI coding failure?**
A: It is the “expensive failure mode” of implementation in isolation. The AI creates code that is syntactically perfect but breaks the surrounding system's established patterns.

**Q: How do I handle bugs during the implementation phase?**
A: Use short, direct feedback like “You missed a specific function” or “This UI element needs a 2px gap”. If the error is systemic, roll back to the last clean Git state and refine the plan.



