writing-plans
Use when you have a spec or requirements for a multi-step task. Creates comprehensive implementation plans with bite-sized tasks, exact file paths, and complete code examples.
Write comprehensive implementation plans assuming the implementer has zero context for the codebase and questionable taste. Document everything they need: which files to touch, complete code, testing commands, docs to check, how to verify. Give them bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.
Assume the implementer is a skilled developer but knows almost nothing about the toolset or problem domain. Assume they don't know good test design very well.
Core principle: A good plan makes implementation obvious. If someone has to guess, the plan is incomplete.
Always use before:
Don't skip when:
Each task = 2-5 minutes of focused work.
Every step is one action:
**Too big:**

```markdown
### Task 1: Build authentication system
[50 lines of code across 5 files]
```

**Right size:**

```markdown
### Task 1: Create User model with email field [10 lines, 1 file]
### Task 2: Add password hash field to User [8 lines, 1 file]
### Task 3: Create password hashing utility [15 lines, 1 file]
```
Every plan MUST start with:
```markdown
# [Feature Name] Implementation Plan

> **For Hermes:** Use subagent-driven-development skill to implement this plan task-by-task.

**Goal:** [One sentence describing what this builds]

**Architecture:** [2-3 sentences about approach]

**Tech Stack:** [Key technologies/libraries]

---
```
Each task follows this format:
````markdown
### Task N: [Descriptive Name]

**Objective:** What this task accomplishes (one sentence)

**Files:**
- Create: `exact/path/to/new_file.py`
- Modify: `exact/path/to/existing.py:45-67` (line numbers if known)
- Test: `tests/path/to/test_file.py`

**Step 1: Write failing test**

```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```

**Step 2: Run test to verify failure**

Run: `pytest tests/path/test.py::test_specific_behavior -v`
Expected: FAIL — "function not defined"

**Step 3: Write minimal implementation**

```python
def function(input):
    return expected
```

**Step 4: Run test to verify pass**

Run: `pytest tests/path/test.py::test_specific_behavior -v`
Expected: PASS

**Step 5: Commit**

```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
````
Read and understand:
Use Hermes tools to understand the project:
```python
# Understand project structure
search_files("*.py", target="files", path="src/")

# Look at similar features
search_files("similar_pattern", path="src/", file_glob="*.py")

# Check existing tests
search_files("*.py", target="files", path="tests/")

# Read key files
read_file("src/app.py")
```
Decide:
Create tasks in order:
For each task, include:
Exact file paths (e.g. `src/config/settings.py`)

Check:
```bash
mkdir -p docs/plans
# Save plan to docs/plans/YYYY-MM-DD-feature-name.md
git add docs/plans/
git commit -m "docs: add implementation plan for [feature]"
```
**Bad:** Copy-paste validation in 3 places
**Good:** Extract validation function, use everywhere
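As a minimal sketch of the "Good" version (the `validate_email` helper and its two call sites are hypothetical, not from any particular codebase), the shared check lives in one function that every caller reuses:

```python
import re

# One shared validator instead of three copy-pasted regex checks
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_email(email: str) -> bool:
    """Return True if the address has a basic user@domain.tld shape."""
    return bool(EMAIL_RE.match(email))

# Every call site reuses the one function (DRY)
def register_user(email: str) -> None:
    if not validate_email(email):
        raise ValueError(f"invalid email: {email}")

def update_contact(email: str) -> None:
    if not validate_email(email):
        raise ValueError(f"invalid email: {email}")
```

If the validation rule changes later, it changes in exactly one place.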
**Bad:** Add "flexibility" for future requirements
**Good:** Implement only what's needed now
```python
# Bad — YAGNI violation
class User:
    def __init__(self, name, email):
        self.name = name
        self.email = email
        self.preferences = {}  # Not needed yet!
        self.metadata = {}  # Not needed yet!

# Good — YAGNI
class User:
    def __init__(self, name, email):
        self.name = name
        self.email = email
```
Every task that produces code should include the full TDD cycle:
See the test-driven-development skill for details.
Commit after every task:
```bash
git add [files]
git commit -m "type: description"
```
**Bad:** "Add authentication"
**Good:** "Create User model with email and password_hash fields"
**Bad:** "Step 1: Add validation function"
**Good:** "Step 1: Add validation function" followed by the complete function code
**Bad:** "Step 3: Test it works"
**Good:** "Step 3: Run `pytest tests/test_auth.py -v`, expected: 3 passed"
**Bad:** "Create the model file"
**Good:** "Create: `src/models/user.py`"
After saving the plan, offer the execution approach:
"Plan complete and saved. Ready to execute using subagent-driven-development — I'll dispatch a fresh subagent per task with two-stage review (spec compliance then code quality). Shall I proceed?"
When executing, use the subagent-driven-development skill: `delegate_task` per task with full context.

Checklist:
- Bite-sized tasks (2-5 min each)
- Exact file paths
- Complete code (copy-pasteable)
- Exact commands with expected output
- Verification steps
- DRY, YAGNI, TDD
- Frequent commits
A good plan makes implementation obvious.
License: MIT