How to Build Your Own Agency Agent
A step-by-step tutorial for authoring a new agency-agent, including structure, tuning, and publishing to the community.
The 150 agents in msitarzewski/agency-agents are a great starting point, but the best agents are the ones you build for your specific workflow. Nobody understands your job as well as you do, and that domain expertise, encoded as an agent, is a moat that compounds over time.
This tutorial walks through building your own agency-style agent in an afternoon, following the exact structure of the upstream library.
Don't build an agent for a problem you'll face once. Build an agent for a problem you face weekly or daily. The agent pays off through reuse; low-frequency problems don't benefit.
Good candidates: problems you face weekly or daily, with a repeatable structure and a clear quality bar.
Bad candidates: one-off tasks, and problems so broad that no specific expertise can be encoded.
Before writing the agent, write 3-5 examples of great answers to real versions of the problem. These become your gold standard.
This step is the most important one and the one people skip. Without examples, you're guessing at what the agent should produce. With examples, you can measure whether the agent matches your quality bar.
Look at your examples and identify the common structure. Which elements appear in every good answer, and in what order?
Write these down. They become the "competencies" and "communication style" sections of your agent.
Open a new Markdown file with the standard five-part structure:
```markdown
# [Agent Name]

## Role
You are a [specific title] with [experience level] who [key responsibility]. You prioritize [top 2-3 values].

## Core Competencies
- [Competency 1 with enough specificity to matter]
- [Competency 2]
- ...
- [Competency 10]

## Decision Frameworks
1. **[Rule name]:** [one-line decision rule]
2. **[Rule name]:** [one-line decision rule]
3. **[Rule name]:** [one-line decision rule]

## Communication Style
- [Format preferences, e.g., lead with conclusions]
- [Tone preferences]
- [Output structure]

## Boundaries
- You will not [thing the agent should refuse]
- You will not [thing the agent should refuse]
- You will always [non-negotiable behavior]
```
Fill in each section based on your examples.
Here's where most first-time agent authors get overconfident. Test your agent by running it against the same problem twice: once with your agent loaded, once without. Compare outputs.
If the agent's version is clearly better, you have a working agent. If it's only marginally better, iterate. If it's worse, you have some debugging to do.
Baseline comparison is critical because it protects against the "it feels better to me because I wrote it" bias.
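One lightweight way to keep that comparison honest is to judge the two outputs blind. A minimal sketch in Python (the function name and shape here are my own, not part of any agency-agents tooling):

```python
import random

def blind_pair(agent_out: str, baseline_out: str, seed=None):
    """Return the two outputs in random order plus a hidden key,
    so you can judge them without knowing which is which."""
    rng = random.Random(seed)
    pair = [("agent", agent_out), ("baseline", baseline_out)]
    rng.shuffle(pair)
    texts = [text for _, text in pair]   # what you show the judge
    key = [label for label, _ in pair]   # reveal only after judging
    return texts, key
```

Print the two texts, pick a winner, and only then look at the key. If the agent doesn't win most blind comparisons, it isn't earning its place yet.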
Use the agent on 5-10 real problems over the next week. Note what it gets right and what it gets wrong. For each miss, ask: was this a boundary issue, a competency gap, or a decision framework hole? Fix the corresponding section.
Don't over-iterate. After about 20 iterations, you hit diminishing returns. Ship it.
The agent's name matters more than you'd think. It becomes the shorthand your team uses. Make it short, descriptive, and easy to say aloud.
If your agent is useful to you, it's probably useful to others. Publishing options:
- **Fork and PR to upstream.** The msitarzewski/agency-agents repo accepts contributions via pull request.
- **Publish to AI Skill Market.** Use the /submit flow to list your agent alongside the 150 existing ones.
- **Self-host.** Keep it in a personal repo and share the URL.
Whatever you choose, document the agent with a short README covering what it does, when to use it, and an example prompt.
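A minimal README along those lines might look like this (the agent name and wording are illustrative, not a prescribed format):

```markdown
# incident-postmortem-agent

**What it does:** Drafts structured incident postmortems from raw
timeline notes, following our severity and blameless-review conventions.

**When to use it:** After any SEV-1 or SEV-2 incident, before the
postmortem review meeting.

**Example prompt:**
> Load incident-postmortem-agent and draft a postmortem from the
> attached incident channel export.
```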
Mistake 1: Making it too generic. "You are a helpful assistant" produces helpful-assistant output. Specificity is the source of agent quality.
Mistake 2: Too many competencies. Above 15 bullets, the agent starts to dilute. Pick the essential 10-12 and cut the rest.
Mistake 3: Missing boundaries. Without boundaries, the agent drifts into mediocre-safe territory. Boundaries keep it sharp.
Mistake 4: No decision frameworks. The competencies tell the agent what it knows; the frameworks tell it what to do when options conflict.
Mistake 5: Testing only on easy cases. Your agent needs to work on hard cases too. Include difficult examples in your testing.
Let's walk through a quick example. Say you're a frontend developer who reviews a lot of Shadcn UI component code. You want an agent to speed up that specific review.
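Following the five-part template above, a condensed version of that agent might look like this (the specific competencies and rules here are illustrative, not a canonical Shadcn review agent):

```markdown
# Shadcn UI Review Agent

## Role
You are a senior frontend developer with deep Shadcn UI and Radix
experience who reviews component PRs. You prioritize accessibility,
composition, and consistency with the design system.

## Core Competencies
- Shadcn UI component APIs and their underlying Radix primitives
- Tailwind class composition and the cn() utility pattern
- Keyboard, focus, and ARIA requirements for interactive components

## Decision Frameworks
1. **Compose, don't fork:** prefer wrapping a component over editing
   its generated source.
2. **Accessibility wins:** a visual tweak never justifies breaking
   keyboard or screen-reader behavior.

## Communication Style
- Lead with a verdict: approve, request changes, or discuss
- Cite the specific file and line for each issue

## Boundaries
- You will not approve components that break keyboard navigation
- You will always check that variants go through the variant API,
  not ad-hoc class overrides
```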
Total time: one afternoon. Ongoing value: every Shadcn PR review for the rest of your career.
**How long does this take?** 2-4 hours for the first draft, plus 2-3 weeks of iteration as you use it on real problems.
**Should the agent file include examples?** Short examples can help, but they eat context budget. It's usually better to keep examples separate and reference them only when needed.
**Can I start from an existing agent?** Absolutely. That's often the fastest path. Fork the closest match, customize, and iterate.
**How do I know when to stop iterating?** When 3 consecutive changes don't materially improve output quality, stop. Ship it.
**Is an agent really better than a good prompt?** Probably. See How Agents Beat Prompt Engineering in 2026 for the full argument.
The best way to understand agents is to build one. Pick a problem you solve weekly, follow the steps above, and you'll have a working agent by tomorrow. Your future self will thank you every time you use it.
Browse all 150 agents at aiskill.market/agents or submit your own skill.