Six behavioral guidelines for AI coding assistants, derived from Andrej Karpathy's observations on LLM coding pitfalls. Works with Claude Code, Codex CLI, and Gemini CLI.
> "The models make wrong assumptions on your behalf and just run along with them without checking. They don't manage their confusion, don't seek clarifications, don't surface inconsistencies, don't present tradeoffs, don't push back when they should."

> "They really like to overcomplicate code and APIs, bloat abstractions, don't clean up dead code... implement a bloated construction over 1000 lines when 100 would do."

> "They still sometimes change/remove comments and code they don't sufficiently understand as side effects, even if orthogonal to the task."
Six principles that directly address these issues:
| Principle | Addresses |
|---|---|
| Think Before Coding | Wrong assumptions, hidden confusion, missing tradeoffs |
| Simplicity First | Overcomplication, bloated abstractions |
| Surgical Changes | Orthogonal edits, touching code you shouldn't |
| Goal-Driven Execution | Tasks without verifiable success criteria; no tests-first feedback loop |
| Communicate Progress | Silent multi-step execution; users can't intervene until mistakes compound |
| Trace Before Building | Building before understanding; redundant tools, wrong abstractions |
```bash
npm install -g @zigrivers/coding-skill
coding-skill init
```

The CLI detects which AI coding assistants you have installed and generates the appropriate instruction file for each:
| CLI | Generated file |
|---|---|
| Claude Code | CLAUDE.md |
| Codex CLI | AGENTS.md |
| Gemini CLI | GEMINI.md |
You can also target specific CLIs:
```bash
coding-skill init --claude --codex   # only Claude Code and Codex
coding-skill init --global           # install to global config dirs
```

If you only use Claude Code and want auto-invocation (the skill fires automatically on relevant prompts):
```
/plugin marketplace add https://github.com/zigrivers/coding-skill
/plugin install coding-skill@coding-skill-marketplace
```
Copy the guidelines directly into your project:
```bash
curl -o CLAUDE.md https://raw.githubusercontent.com/zigrivers/coding-skill/main/skills/coding-skill/SKILL.md
```

| Command | Purpose |
|---|---|
| `coding-skill init [--global] [--claude] [--codex] [--gemini] [--yes] [--force]` | Initialize guidelines for detected AI CLIs |
| `coding-skill update [--global]` | Regenerate managed sections from the latest principles |
| `coding-skill doctor [--global]` | Check the health of managed files (PASS/WARN/FAIL) |
| `coding-skill diff [--global]` | Preview what `update` would change |
| `coding-skill eject <target> [--global] [--all]` | Remove markers, keeping the content as plain text |
| `coding-skill --version` | Print the version |
After `init`, a `.coding-skill.json` file is created in your project:

```json
{
  "targets": ["claude", "codex"],
  "exclude": ["trace-before-building"],
  "custom": ["guidelines/team-rules.md"]
}
```

| Field | Purpose |
|---|---|
| `targets` | Which CLIs to manage (set by `init`; use `eject` to remove) |
| `exclude` | Principle slugs to omit from generated files |
| `custom` | Paths to additional markdown appended after the built-in principles |
Available principle slugs: `think-before-coding`, `simplicity-first`, `surgical-changes`, `goal-driven-execution`, `communicate-progress`, `trace-before-building`
**Think Before Coding.** Don't assume. Don't hide confusion. Surface tradeoffs.
- State assumptions explicitly — If uncertain, ask rather than guess
- Present multiple interpretations — Don't pick silently when ambiguity exists
- Push back when warranted — If a simpler approach exists, say so
- Stop when confused — Name what's unclear and ask for clarification
**Simplicity First.** Write the minimum code that solves the problem. Nothing speculative.
- No features beyond what was asked
- No abstractions for single-use code
- No "flexibility" or "configurability" that wasn't requested
- No error handling for impossible scenarios
- If 200 lines could be 50, rewrite it
The test: Would a senior engineer say this is overcomplicated? If yes, simplify.
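The senior-engineer test above can be illustrated with a hypothetical before/after sketch (the validator names are invented for illustration, not part of this project):

```python
# Over-engineered: a "flexible" registry of validation rules,
# when the task only ever asked for a single email check.
class ValidatorRegistry:
    def __init__(self):
        self.rules = {}

    def register(self, name, fn):
        self.rules[name] = fn

    def validate(self, name, value):
        return self.rules[name](value)

registry = ValidatorRegistry()
registry.register("email", lambda v: "@" in v)

# Minimal: one requested check, so one plain function suffices.
def is_valid_email(value: str) -> bool:
    return "@" in value

# Both do the same job; the second is what was actually asked for.
assert registry.validate("email", "a@b.com") == is_valid_email("a@b.com")
```

The registry adds an abstraction for single-use code and "configurability" nobody requested; the one-line function is the version a senior engineer would keep.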
**Surgical Changes.** Touch only what you must. Clean up only your own mess.
When editing existing code:
- Don't "improve" adjacent code, comments, or formatting
- Don't refactor things that aren't broken
- Match existing style, even if you'd do it differently
- If you notice unrelated dead code, mention it — don't delete it
The test: Every changed line should trace directly to the user's request.
**Goal-Driven Execution.** Define success criteria. Loop until verified.
Transform imperative tasks into verifiable goals:
| Instead of... | Transform to... |
|---|---|
| "Add validation" | "Write tests for invalid inputs, then make them pass" |
| "Fix the bug" | "Write a test that reproduces it, then make it pass" |
| "Refactor X" | "Ensure tests pass before and after" |
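As a minimal sketch of the "Add validation" transformation, the tests are written first and the implementation exists only to satisfy them (`validate_age` is a hypothetical function, not part of this project):

```python
# Goal: "write tests for invalid inputs, then make them pass".
# Implementation, written only after the tests below were drafted:
def validate_age(value):
    """Accept integers in [0, 150]; reject everything else."""
    if not isinstance(value, int) or isinstance(value, bool):
        raise ValueError(f"age must be an int, got {type(value).__name__}")
    if not 0 <= value <= 150:
        raise ValueError(f"age out of range: {value}")
    return value

# The verifiable success criteria, drafted before the code above:
def test_rejects_negative():
    try:
        validate_age(-1)
    except ValueError:
        return True
    return False

assert validate_age(30) == 30      # valid input passes through
assert test_rejects_negative()     # invalid input is rejected
```

The loop terminates only when every criterion passes, which gives a concrete definition of "done" instead of a vague instruction.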
**Communicate Progress.** Narrate what you're doing, not what you did.
Before executing any plan with more than one edit or command, announce each step before starting it, confirm completion in one line, and flag unexpected findings immediately rather than silently adapting.
**Trace Before Building.** Enumerate system states explicitly before touching code.
When changes cross a module boundary, or when data flows through three or more components, draw the state diagram before implementing. List every input type, trace it through, and name every transition and assumption.
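One way to sketch that state enumeration, assuming a hypothetical job lifecycle that flows through several components (the states and events here are invented for illustration):

```python
from enum import Enum, auto

class JobState(Enum):
    QUEUED = auto()
    RUNNING = auto()
    FAILED = auto()
    DONE = auto()

# Every legal transition, named explicitly before any handler is
# written; a (state, event) pair absent from this table is a bug.
TRANSITIONS = {
    (JobState.QUEUED, "start"): JobState.RUNNING,
    (JobState.RUNNING, "error"): JobState.FAILED,
    (JobState.RUNNING, "finish"): JobState.DONE,
    (JobState.FAILED, "retry"): JobState.QUEUED,
}

def step(state, event):
    key = (state, event)
    if key not in TRANSITIONS:
        raise ValueError(f"illegal transition: {state.name} --{event}-->")
    return TRANSITIONS[key]
```

Writing the table first forces every input type and transition to be named, so gaps surface before implementation rather than as runtime surprises.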
- Fewer unnecessary changes in diffs — Only requested changes appear
- Fewer rewrites due to overcomplication — Code is simple the first time
- Clarifying questions come before implementation — Not after mistakes
- Clean, minimal PRs — No drive-by refactoring or "improvements"
These guidelines bias toward caution over speed. For trivial tasks (simple typo fixes, obvious one-liners), use judgment — not every change needs the full rigor.
Derived from Andrej Karpathy's observations on LLM coding pitfalls. Originally created by forrestchang.
MIT