Openclaw Skill Gastown
Multi-agent coding orchestrator using Gas Town (gt) and Claude Code. Use for ANY non-trivial coding task — multi-file changes, new features, refactors, bug fixes — anything involving code.
Multi-agent orchestration system for Claude Code with persistent work tracking
Gas Town is a workspace manager that coordinates multiple Claude Code agents working on different tasks. Instead of losing context when agents restart, Gas Town persists work state in git-backed hooks, enabling reliable multi-agent workflows.
Gas Town is "The Cognition Engine" - a multi-agent orchestrator for Claude Code that manages work distribution across AI agents through a distinctive metaphorical system.
Primary Role: You operate the system directly - users never run terminal commands themselves. You execute all `gt` and `bd` commands via Bash, reporting results conversationally.
Core Workflow:
Work arrives → tracked as bead → joins convoy → slung to agent → executes via hook → monitored by Witness/Refinery/Mayor
| Challenge | Gas Town Solution |
|---|---|
| Agents lose context on restart | Work persists in git-backed hooks |
| Manual agent coordination | Built-in mailboxes, identities, and handoffs |
| 4-10 agents become chaotic | Scale comfortably to 20-30 agents |
| Work state lost in agent memory | Work state stored in Beads ledger |
GT Handles Automatically:
- Agent identities (`gt-<rig>-<name>` format)

You Handle:
- Creating work items (`bd create --title "..."`)
- Assigning work (`gt sling <bead> <rig>`)
- Monitoring (`gt status`, `gt peek`, `gt doctor`)

Tone: Warm, collegial, using "we" and "let's." Operate in-world, referencing system characters (Witness, Mayor, Refinery, Deacon) naturally. You're a colleague in the engine room, not an external explainer.
Breaking large goals into detailed instructions for agents. Supported by Beads, Epics, Formulas, and Molecules. MEOW ensures work is decomposed into trackable, atomic units that agents can execute autonomously.
"If there is work on your Hook, YOU MUST RUN IT."
This principle ensures agents autonomously proceed with available work without waiting for external input. GUPP is the heartbeat of autonomous operation.
Gas Town is a steam engine. Agents are pistons. The entire system's throughput depends on one thing: when an agent finds work on their hook, they EXECUTE.
Why This Matters:
The overarching goal: ensuring useful outcomes through orchestration of potentially unreliable processes. Persistent Beads and oversight agents (Witness, Deacon) guarantee eventual workflow completion even when individual operations fail or produce varying results.
All Gas Town agents follow the same core principle:
If you find something on your hook, YOU RUN IT.
This applies regardless of role. The hook is your assignment. Execute it immediately without waiting for confirmation. Gas Town is a steam engine - agents are pistons.
The Handoff Contract: When you were spawned, work was hooked for you. The system trusts that you will find it (`bd show` / `gt hook`) and run it.

The Propulsion Loop:
```
1. gt hook                      # What's hooked?
2. bd mol current               # Where am I?
3. Execute step
4. bd close <step> --continue   # Close and advance
5. GOTO 2
```
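The loop above can be sketched as plain Python. This is a minimal illustration of the propulsion pattern, with a hypothetical `Step`/`Molecule` standing in for the real `gt hook` / `bd mol current` / `bd close --continue` calls:

```python
# Minimal sketch of the propulsion loop. Step and Molecule are hypothetical
# stand-ins for the bd/gt data; the real state lives in the Beads ledger.
from dataclasses import dataclass, field

@dataclass
class Step:
    id: str
    status: str = "open"  # open -> in_progress -> closed

@dataclass
class Molecule:
    steps: list = field(default_factory=list)

    def current(self):
        # bd mol current: first step that is not yet closed
        return next((s for s in self.steps if s.status != "closed"), None)

    def close_continue(self, step):
        # bd close <step> --continue: close and advance in one call
        step.status = "closed"
        return self.current()

def propel(mol):
    """Run every hooked step without waiting for confirmation (GUPP)."""
    executed = []
    step = mol.current()                 # 1. gt hook / 2. bd mol current
    while step is not None:
        step.status = "in_progress"      # claim before working
        executed.append(step.id)         # 3. execute the step (stubbed here)
        step = mol.close_continue(step)  # 4. close and advance, 5. GOTO 2
    return executed

print(propel(Molecule([Step("gt-abc.1"), Step("gt-abc.2"), Step("gt-abc.3")])))
# → ['gt-abc.1', 'gt-abc.2', 'gt-abc.3']
```

The point of `--continue` shows up in `close_continue`: closing a step and fetching the next one is a single operation, so the loop never pauses between steps.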
Startup Behavior: On startup, check your hook first (`gt hook`).

The failure mode when an agent waits instead of running:

Polecat restarts with work on hook → Polecat announces itself → Polecat waits for confirmation → Witness assumes work is progressing → Nothing happens → Gas Town stops
```
gt hook            # What's on my hook?
bd mol current     # Where am I in the molecule?
bd ready           # What step is next?
bd show <step-id>  # What does this step require?
```
The old workflow (friction):
```
# Finish step 3
bd close gt-abc.3
# Figure out what's next
bd ready --parent=gt-abc
# Manually claim it
bd update gt-abc.4 --status=in_progress
# Now finally work on it
```
Three commands. Context switches. Momentum lost.
The new workflow (propulsion):
```
bd close gt-abc.3 --continue
```
One command. Auto-advance. Momentum preserved.
```mermaid
graph TB
    Mayor[The Mayor<br/>AI Coordinator]
    Town[Town Workspace<br/>~/gt/]
    Town --> Mayor
    Town --> Rig1[Rig: Project A]
    Town --> Rig2[Rig: Project B]
    Rig1 --> Crew1[Crew Member<br/>Your workspace]
    Rig1 --> Hooks1[Hooks<br/>Persistent storage]
    Rig1 --> Polecats1[Polecats<br/>Worker agents]
    Rig2 --> Crew2[Crew Member]
    Rig2 --> Hooks2[Hooks]
    Rig2 --> Polecats2[Polecats]
    Hooks1 -.git worktree.-> GitRepo1[Git Repository]
    Hooks2 -.git worktree.-> GitRepo2[Git Repository]
```
```
~/gt/                            Town root
├── .beads/                      Town-level beads (hq-* prefix, mail)
├── mayor/                       Mayor config
│   ├── town.json                Town configuration
│   ├── CLAUDE.md                Mayor context (on disk)
│   └── .claude/settings.json    Mayor Claude settings
├── deacon/                      Deacon daemon
│   ├── .claude/settings.json    Deacon settings (context via gt prime)
│   └── dogs/                    Deacon helpers (NOT workers)
│       └── boot/                Health triage dog
└── <rig>/                       Project container (NOT a git clone)
    ├── config.json              Rig identity
    ├── .beads/ → mayor/rig/.beads   (symlink or redirect)
    ├── .repo.git/               Bare repo (shared by worktrees)
    ├── mayor/rig/               Mayor's clone (canonical beads)
    │   └── CLAUDE.md            Per-rig mayor context (on disk)
    ├── witness/                 Witness agent home (monitors only)
    │   └── .claude/settings.json
    ├── refinery/                Refinery settings parent
    │   ├── .claude/settings.json
    │   └── rig/                 Worktree on main
    │       └── CLAUDE.md        Refinery context (on disk)
    ├── crew/                    Crew settings parent (shared)
    │   ├── .claude/settings.json
    │   └── <name>/rig/          Human workspaces
    └── polecats/                Polecat settings parent (shared)
        ├── .claude/settings.json
        └── <name>/rig/          Worker worktrees
```
Key Points:
- `.repo.git/` is bare - refinery and polecats are worktrees
- `mayor/rig/` holds the canonical `.beads/`; others inherit via redirect

Gas Town routes beads commands based on issue ID prefix. You don't need to think about which database to use - just use the issue ID.
```
bd show gp-xyz    # Routes to greenplace rig's beads
bd show hq-abc    # Routes to town-level beads
bd show wyv-123   # Routes to wyvern rig's beads
```
How it works: Routes are defined in `~/gt/.beads/routes.jsonl`. Each rig's prefix maps to its beads location (the mayor's clone in that rig).
| Prefix | Routes To | Purpose |
|---|---|---|
| `hq-` | Town-level beads | Mayor mail, cross-rig coordination |
| `gp-` | greenplace rig's beads | Greenplace project issues |
| `wyv-` | wyvern rig's beads | Wyvern project issues |
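The routing lookup itself is simple prefix matching. Here is a sketch in Python; the `routes.jsonl` field names (`prefix`, `path`) and paths are illustrative assumptions, not the actual beads schema:

```python
# Sketch of prefix-based routing over a routes.jsonl-style file.
# Field names and paths below are assumptions for illustration.
import json

def load_routes(jsonl_text):
    routes = {}
    for line in jsonl_text.splitlines():
        if line.strip():
            rec = json.loads(line)
            routes[rec["prefix"]] = rec["path"]
    return routes

def route(issue_id, routes):
    # Longest-prefix match on the issue ID, e.g. "gp-xyz" -> greenplace beads
    for prefix in sorted(routes, key=len, reverse=True):
        if issue_id.startswith(prefix):
            return routes[prefix]
    raise KeyError(f"no route for {issue_id}")

routes = load_routes(
    '{"prefix": "hq-", "path": "~/gt/.beads"}\n'
    '{"prefix": "gp-", "path": "~/gt/greenplace/mayor/rig/.beads"}\n'
    '{"prefix": "wyv-", "path": "~/gt/wyvern/mayor/rig/.beads"}'
)
print(route("gp-xyz", routes))  # → ~/gt/greenplace/mayor/rig/.beads
```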
Debug routing:
```
BD_DEBUG_ROUTING=1 bd show <id>
```
Each agent runs in a specific working directory:
| Role | Working Directory | Notes |
|---|---|---|
| Mayor | `~/gt/mayor/` | Town-level coordinator, isolated from rigs |
| Deacon | `~/gt/deacon/` | Background supervisor daemon |
| Witness | `<rig>/witness/` | No git clone, monitors polecats only |
| Refinery | `<rig>/refinery/rig/` | Worktree on main branch |
| Crew | `<rig>/crew/<name>/rig/` | Persistent human workspace clone |
| Polecat | `<rig>/polecats/<name>/rig/` | Ephemeral worker worktree |
Role context is delivered via CLAUDE.md files or ephemeral injection:
| Role | CLAUDE.md Location | Method |
|---|---|---|
| Mayor | `~/gt/mayor/CLAUDE.md` | On disk |
| Deacon | (none) | Injected via `gt prime` at SessionStart |
| Witness | (none) | Injected via `gt prime` at SessionStart |
| Refinery | `<rig>/refinery/rig/CLAUDE.md` | On disk (inside worktree) |
| Crew | (none) | Injected via `gt prime` at SessionStart |
| Polecat | (none) | Injected via `gt prime` at SessionStart |
Why ephemeral injection? Writing CLAUDE.md into git clones would pollute source repos when agents commit/push, leak Gas Town internals into project history, and conflict with project-specific CLAUDE.md files.
Gas Town uses two settings templates based on role type:
| Type | Roles | Key Difference |
|---|---|---|
| Interactive | Mayor, Crew | Mail checked when the user prompts |
| Autonomous | Polecat, Witness, Refinery, Deacon | Mail injected at SessionStart |
Autonomous agents may start without user input, so they need mail checked at session start. Interactive agents wait for user prompts.
Gas Town has several agent types, each with distinct responsibilities and lifecycles.
These roles manage the Gas Town system itself:
| Role | Description | Lifecycle |
|---|---|---|
| Mayor | Global coordinator at mayor/ | Singleton, persistent |
| Deacon | Background supervisor daemon (watchdog chain) | Singleton, persistent |
| Witness | Per-rig polecat lifecycle manager | One per rig, persistent |
| Refinery | Per-rig merge queue processor | One per rig, persistent |
These roles do actual project work:
| Role | Description | Lifecycle |
|---|---|---|
| Polecat | Ephemeral worker with own worktree | Transient, Witness-managed |
| Crew | Persistent worker with own clone | Long-lived, user-managed |
| Dog | Deacon helper for infrastructure tasks | Ephemeral, Deacon-managed |
| Role | Description | Primary Interface |
|---|---|---|
| Mayor | AI coordinator | `gt mayor attach` |
| Human (You) | Crew member | Your crew directory |
| Polecat | Worker agent | Spawned by Mayor |
| Hook | Persistent storage | Git worktree |
| Convoy | Work tracker | `gt convoy` commands |
Your primary AI coordinator. The Mayor is a Claude Code instance with full context about your workspace, projects, and agents. Start here - just tell the Mayor what you want to accomplish.
Background daemon running continuous Patrol cycles. The Deacon ensures worker activity, monitors system health, and triggers recovery when agents become unresponsive. Think of the Deacon as the system's watchdog.
Patrol agent that oversees Polecats and the Refinery within a Rig. The Witness monitors progress, detects stuck agents, and can trigger recovery actions.
Manages the Merge Queue for a Rig. The Refinery intelligently merges changes from Polecats, handling conflicts and ensuring code quality before changes reach the main branch.
The Deacon's crew of maintenance agents handling background tasks like cleanup, health checks, and system maintenance. Dogs are the Deacon's helpers for system-level tasks, NOT workers.
Important: Dogs are NOT workers. This is a common misconception.
| Aspect | Dogs | Crew |
|---|---|---|
| Owner | Deacon | Human |
| Purpose | Infrastructure tasks | Project work |
| Scope | Narrow, focused utilities | General purpose |
| Lifecycle | Very short (single task) | Long-lived |
| Example | Boot (triages Deacon health) | Joe (fixes bugs, adds features) |
A special Dog that checks the Deacon every 5 minutes, ensuring the watchdog itself is still watching. This creates a chain of accountability.
Both do project work, but with key differences:
| Aspect | Crew | Polecat |
|---|---|---|
| Lifecycle | Persistent (user controls) | Transient (Witness controls) |
| Monitoring | None | Witness watches, nudges, recycles |
| Work assignment | Human-directed or self-assigned | Slung via `gt sling` |
| Git state | Pushes to main directly | Works on branch, Refinery merges |
| Cleanup | Manual | Automatic on completion |
| Identity | `gastown/crew/<name>` | `gastown/polecats/<name>` |
When to use Crew:
When to use Polecats:
The management headquarters (e.g., `~/gt/`). The Town coordinates all workers across multiple Rigs and houses town-level agents like Mayor and Deacon.
A project-specific Git repository under Gas Town management. Each Rig has its own Polecats, Refinery, Witness, and Crew members. Rigs are where actual development work happens.
Git worktree-based persistent storage for agent work. Survives crashes and restarts. A special pinned Bead for each agent. The Hook is an agent's primary work queue - when work appears on your Hook, GUPP dictates you must run it.
Git-backed atomic work unit stored in JSONL format. Beads are the fundamental unit of work tracking in Gas Town. They can represent issues, tasks, epics, or any trackable work item.
Bead IDs (also called issue IDs) use a prefix + 5-character alphanumeric format (e.g., `gt-abc12`, `hq-x7k2m`). The prefix indicates the item's origin or rig. Commands like `gt sling` and `gt convoy` accept these IDs to reference specific work items.
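A sketch of that base ID shape as a validator (step suffixes like `gt-abc.3` and convoy IDs like `hq-cv-*` extend the format and are not handled here):

```python
# Sketch: validate a base bead ID of the form prefix + "-" + 5 alphanumerics,
# e.g. gt-abc12 or hq-x7k2m. Extended forms (steps, convoys) are out of scope.
import re

BEAD_ID = re.compile(r"^(?P<prefix>[a-z]+)-(?P<suffix>[a-z0-9]{5})$")

def parse_bead_id(bead_id):
    m = BEAD_ID.match(bead_id)
    if not m:
        raise ValueError(f"not a base bead ID: {bead_id!r}")
    return m["prefix"], m["suffix"]

print(parse_bead_id("gt-abc12"))  # → ('gt', 'abc12')
print(parse_bead_id("hq-x7k2m"))  # → ('hq', 'x7k2m')
```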
Work tracking units. Bundle multiple beads that get assigned to agents. A convoy is how you track batched work in Gas Town. When you kick off work - even a single issue - create a convoy to track it.
TOML-based workflow source template. Formulas define reusable patterns for common operations like patrol cycles, code review, or deployment.
A template class for instantiating Molecules. Protomolecules define the structure and steps of a workflow without being tied to specific work items.
Durable chained Bead workflows. Molecules represent multi-step processes where each step is tracked as a Bead. They survive agent restarts and ensure complex workflows complete.
Ephemeral Beads destroyed after runs. Wisps are lightweight work items used for transient operations that don't need permanent tracking.
Assigning work to agents via `gt sling`. When you sling work to a Polecat or Crew member, you're putting it on their Hook for execution.

Real-time messaging between agents with `gt nudge`. Nudges allow immediate communication without going through the mail system.

Agent session refresh via `/handoff`. When context gets full or an agent needs a fresh start, handoff transfers work state to a new session.

Communicating with previous sessions via `gt seance`. Allows agents to query their predecessors for context and decisions from earlier work.
Ephemeral loop maintaining system heartbeat. Patrol agents (Deacon, Witness) continuously cycle through health checks and trigger actions as needed.
| Tool | Version | Check | Install |
|---|---|---|---|
| Go | 1.24+ | `go version` | See golang.org |
| Git | 2.20+ | `git --version` | See below |
| Beads | latest | `bd version` | `go install github.com/steveyegge/beads/cmd/bd@latest` |
| sqlite3 | - | - | For convoy database queries (usually pre-installed) |
| Tool | Version | Check | Install |
|---|---|---|---|
| tmux | 3.0+ | `tmux -V` | See below |
| Claude Code CLI (default) | latest | `claude --version` | claude.ai/claude-code |
| Codex CLI (optional) | latest | - | developers.openai.com/codex/cli |
| OpenCode CLI (optional) | latest | - | opencode.ai |
```
# Install Gas Town
brew install gastown                                     # Homebrew (recommended)
npm install -g @gastown/gt                               # npm
go install github.com/steveyegge/gastown/cmd/gt@latest   # From source
```

If using `go install`, add Go binaries to PATH (add to `~/.zshrc` or `~/.bashrc`):

```
export PATH="$PATH:$HOME/go/bin"
```
```
# Create workspace with git initialization
gt install ~/gt --git
cd ~/gt

# Add your first project
gt rig add myproject https://github.com/you/repo.git

# Create your crew workspace
gt crew add yourname --rig myproject
cd myproject/crew/yourname

# Start the Mayor session (your main interface)
gt mayor attach
```
```
# Install Homebrew if needed
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Required
brew install go git

# Optional (for full stack mode)
brew install tmux
```
```
# Required
sudo apt update
sudo apt install -y git

# Install Go (apt version may be outdated, use official installer)
wget https://go.dev/dl/go1.24.12.linux-amd64.tar.gz
sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.24.12.linux-amd64.tar.gz
echo 'export PATH=$PATH:/usr/local/go/bin:$HOME/go/bin' >> ~/.bashrc
source ~/.bashrc

# Optional (for full stack mode)
sudo apt install -y tmux
```
```
# Required
sudo dnf install -y git golang

# Optional (for full stack mode)
sudo dnf install -y tmux
```
Gas Town supports two operational modes:
Minimal Mode (No Daemon): Run individual runtime instances manually. Gas Town only tracks state.
```
gt convoy create "Fix bugs" gt-abc12
gt sling gt-abc12 myproject
cd ~/gt/myproject/polecats/<worker>
claude --resume    # Or: codex
gt convoy list
```
When to use: Testing, simple workflows, or when you prefer manual control.
Full Stack Mode (With Daemon): Agents run in tmux sessions. Daemon manages lifecycle automatically.
```
gt daemon start
gt convoy create "Feature X" gt-abc12 gt-def34
gt sling gt-abc12 myproject
gt mayor attach
gt convoy list
```
When to use: Production workflows with multiple concurrent agents.
Gas Town is modular. Enable only what you need:
| Configuration | Roles | Use Case |
|---|---|---|
| Polecats only | Workers | Manual spawning, no monitoring |
| + Witness | + Monitor | Automatic lifecycle, stuck detection |
| + Refinery | + Merge queue | MR review, code integration |
| + Mayor | + Coordinator | Cross-project coordination |
```
# 1. Install the binaries
go install github.com/steveyegge/gastown/cmd/gt@latest
go install github.com/steveyegge/beads/cmd/bd@latest
gt version
bd version

# 2. Create your workspace
gt install ~/gt --shell

# 3. Add a project
gt rig add myproject https://github.com/you/repo.git

# 4. Verify installation
cd ~/gt
gt enable      # Enable Gas Town system-wide
gt git-init    # Initialize a git repo for your HQ
gt up          # Start all services
gt doctor      # Run health checks
gt status      # Show workspace status
```
```
gt install ~/gt --git && cd ~/gt && gt config agent list && gt mayor attach
```
And tell the Mayor what you want to build!
```mermaid
sequenceDiagram
    participant You
    participant Mayor
    participant Convoy
    participant Agent
    participant Hook
    You->>Mayor: Tell Mayor what to build
    Mayor->>Convoy: Create convoy with beads
    Mayor->>Agent: Sling bead to agent
    Agent->>Hook: Store work state
    Agent->>Agent: Complete work
    Agent->>Convoy: Report completion
    Mayor->>You: Summary of progress
```
```
# 1. Start the Mayor
gt mayor attach

# 2. In Mayor session, create a convoy with bead IDs
gt convoy create "Feature X" gt-abc12 gt-def34 --notify --human

# 3. Assign work to an agent
gt sling gt-abc12 myproject

# 4. Track progress
gt convoy list

# 5. Monitor agents
gt agents
```
Best for: Coordinating complex, multi-issue work
```mermaid
flowchart LR
    Start([Start Mayor]) --> Tell[Tell Mayor<br/>what to build]
    Tell --> Creates[Mayor creates<br/>convoy + agents]
    Creates --> Monitor[Monitor progress<br/>via convoy list]
    Monitor --> Done{All done?}
    Done -->|No| Monitor
    Done -->|Yes| Review[Review work]
```
Commands:
```
# Attach to Mayor
gt mayor attach

# In Mayor, create convoy and let it orchestrate
gt convoy create "Auth System" gt-x7k2m gt-p9n4q --notify

# Track progress
gt convoy list
```
Run individual runtime instances manually. Gas Town just tracks state.
```
gt convoy create "Fix bugs" gt-abc12   # Create convoy
gt sling gt-abc12 myproject            # Assign to worker
claude --resume                        # Agent reads mail, runs work (Claude)
# or: codex                            # Start Codex in the workspace
gt convoy list                         # Check progress
```
Best for: Predefined, repeatable processes
Formulas are TOML-defined workflows stored in `.beads/formulas/`.

Example Formula (`.beads/formulas/release.formula.toml`):
```toml
description = "Standard release process"
formula = "release"
version = 1

[vars.version]
description = "The semantic version to release (e.g., 1.2.0)"
required = true

[[steps]]
id = "bump-version"
title = "Bump version"
description = "Run ./scripts/bump-version.sh {{version}}"

[[steps]]
id = "run-tests"
title = "Run tests"
description = "Run make test"
needs = ["bump-version"]

[[steps]]
id = "build"
title = "Build"
description = "Run make build"
needs = ["run-tests"]

[[steps]]
id = "create-tag"
title = "Create release tag"
description = "Run git tag -a v{{version}} -m 'Release v{{version}}'"
needs = ["build"]

[[steps]]
id = "publish"
title = "Publish"
description = "Run ./scripts/publish.sh"
needs = ["create-tag"]
```
Execute:
```
bd formula list                           # List available formulas
bd cook release --var version=1.2.0       # Execute formula
bd mol pour release --var version=1.2.0   # Create trackable instance
```
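The `needs` fields in a formula define a dependency graph, so cooking a formula into an executable order is conceptually a topological sort. A small Python sketch of that ordering (the dict shape mirrors the TOML steps; the real `bd cook` logic is not shown here):

```python
# Sketch: order formula steps so every step runs after its `needs`.
# A tiny topological sort over dicts shaped like the TOML [[steps]] entries.
def order_steps(steps):
    ordered, done = [], set()
    pending = list(steps)
    while pending:
        progressed = False
        for step in list(pending):
            if set(step.get("needs", [])) <= done:
                ordered.append(step["id"])
                done.add(step["id"])
                pending.remove(step)
                progressed = True
        if not progressed:
            raise ValueError("dependency cycle in formula")
    return ordered

steps = [
    {"id": "publish", "needs": ["create-tag"]},
    {"id": "bump-version"},
    {"id": "create-tag", "needs": ["build"]},
    {"id": "run-tests", "needs": ["bump-version"]},
    {"id": "build", "needs": ["run-tests"]},
]
print(order_steps(steps))
# → ['bump-version', 'run-tests', 'build', 'create-tag', 'publish']
```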
Best for: Direct control over work distribution
```
# Create convoy manually
gt convoy create "Bug Fixes" --human

# Add issues to existing convoy
gt convoy add hq-cv-abc gt-m3k9p gt-w5t2x

# Assign to specific agents
gt sling gt-m3k9p myproject/my-agent

# Check status
gt convoy show
```
MEOW is the recommended pattern.
```
gt install [path]   # Create town
gt install --git    # With git init
gt doctor           # Health check
gt doctor --fix     # Auto-repair
```
```
# Agent management
gt config agent list [--json]      # List all agents (built-in + custom)
gt config agent get <name>         # Show agent configuration
gt config agent set <name> <cmd>   # Create or update custom agent
gt config agent remove <name>      # Remove custom agent (built-ins protected)

# Default agent
gt config default-agent [name]     # Get or set town default agent
```
Built-in agents: `claude`, `gemini`, `codex`, `cursor`, `auggie`, `amp`
Custom agents:
```
gt config agent set claude-glm "claude-glm --model glm-4"
gt config agent set claude "claude-opus"   # Override built-in
gt config default-agent claude-glm         # Set default
```
```
gt rig add <name> <url>
gt rig list
gt rig remove <name>
```
```
gt convoy list                                      # Dashboard of active convoys
gt convoy status [convoy-id]                        # Show progress
gt convoy create <name> [issues...]                 # Create convoy tracking issues
gt convoy create "name" gt-a bd-b --notify mayor/   # With notification
gt convoy list --all                                # Include landed convoys
gt convoy list --status=closed                      # Only landed convoys
```
```
gt sling <bead> <rig>                 # Assign to polecat
gt sling <bead> <rig> --agent codex   # Override runtime
gt sling <proto> --on gt-def <rig>    # With workflow template
```
```
gt agents                       # List active agents
gt mayor attach                 # Start Mayor session
gt mayor start --agent auggie   # Run Mayor with specific agent
gt prime                        # Context recovery (run inside session)
```
```
gt mail inbox
gt mail read <id>
gt mail send <addr> -s "Subject" -m "Body"
gt mail send --human -s "..."   # To overseer
```
```
gt escalate "topic"             # Default: MEDIUM severity
gt escalate -s CRITICAL "msg"   # Urgent, immediate attention
gt escalate -s HIGH "msg"       # Important blocker
gt escalate -s MEDIUM "msg" -m "Details..."
```
```
gt handoff                      # Request cycle (context-aware)
gt handoff --shutdown           # Terminate (polecats)
gt session stop <rig>/<agent>
gt peek <agent>                 # Check health
gt nudge <agent> "message"      # Send message to agent
gt seance                       # List discoverable predecessor sessions
gt seance --talk <id>           # Talk to predecessor (full context)
```
IMPORTANT: Always use `gt nudge` to send messages to Claude sessions. Never use raw `tmux send-keys` - it doesn't handle Claude's input correctly.
```
gt stop --all          # Kill all sessions
gt stop --rig <name>   # Kill rig sessions
```
```
gt mq list [rig]    # Show the merge queue
gt mq next [rig]    # Show highest-priority merge request
gt mq submit        # Submit current branch to merge queue
gt mq status <id>   # Show detailed merge request status
gt mq retry <id>    # Retry a failed merge request
gt mq reject <id>   # Reject a merge request
```
```
bd ready                              # Work with no blockers
bd list --status=open
bd list --status=in_progress
bd show <id>
bd create --title="..." --type=task
bd update <id> --status=in_progress
bd close <id>
bd dep add <child> <parent>           # child depends on parent
```
When you deploy AI agents at scale, anonymous work creates real problems:
The `BD_ACTOR` environment variable identifies agents in slash-separated path format:
| Role Type | Format | Example |
|---|---|---|
| Mayor | `gastown/mayor` | `gastown/mayor` |
| Deacon | `gastown/deacon` | `gastown/deacon` |
| Witness | `gastown/witness` | `gastown/witness` |
| Refinery | `gastown/refinery` | `gastown/refinery` |
| Crew | `gastown/crew/<name>` | `gastown/crew/joe` |
| Polecat | `gastown/polecats/<name>` | `gastown/polecats/toast` |
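Unpacking an actor path is a simple split. A sketch, assuming the slash-separated shape shown in the examples (`gastown/crew/joe`, `gastown/witness`):

```python
# Sketch: unpack a BD_ACTOR path into (system, role, name).
# Single-segment roles like "gastown/witness" have no name component.
def parse_actor(actor):
    parts = actor.split("/")
    system, role = parts[0], parts[1]
    name = parts[2] if len(parts) > 2 else None
    return system, role, name

print(parse_actor("gastown/crew/joe"))  # → ('gastown', 'crew', 'joe')
print(parse_actor("gastown/witness"))   # → ('gastown', 'witness', None)
```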
Gas Town uses three fields for complete provenance:
Git Commits:
```
GIT_AUTHOR_NAME="gastown/crew/joe"     # Who did the work (agent)
GIT_AUTHOR_EMAIL="steve@example.com"   # Who owns the work (overseer)
```
Beads Records:
```json
{
  "id": "gt-xyz",
  "created_by": "gastown/crew/joe",
  "updated_by": "gastown/witness"
}
```
Event Logging:
```json
{
  "ts": "2025-01-15T10:30:00Z",
  "type": "sling",
  "actor": "gastown/crew/joe",
  "payload": { "bead": "gt-xyz", "target": "gastown/polecats/toast" }
}
```
| Variable | Purpose | Example |
|---|---|---|
| | Agent role type | |
| | Town root directory | |
| `BD_ACTOR` | Agent identity for attribution | `gastown/crew/joe` |
| `GIT_AUTHOR_NAME` | Commit attribution (same as `BD_ACTOR`) | `gastown/crew/joe` |
| | Beads database location | |
| Variable | Purpose | Roles |
|---|---|---|
| Rig name | witness, refinery, polecat, crew |
| Polecat worker name | polecat only |
| Crew worker name | crew only |
| Agent name for beads operations | polecat, crew |
| Disable beads daemon (isolated context) | polecat, crew |
| Variable | Purpose |
|---|---|
| Workspace owner email (from git config) |
| Override town root detection (manual use) |
| Custom Claude settings directory |
| Role | Key Variables |
|---|---|
| Mayor | , |
| Deacon | , |
| Boot | , |
| Witness | , , |
| Refinery | , , |
| Polecat | , , , |
| Crew | , , , |
Every completion is recorded. Every handoff is logged. Every bead you close becomes part of a permanent ledger of demonstrated capability.
Polecats have three distinct lifecycle layers that operate independently:
| Layer | Component | Lifecycle | Persistence |
|---|---|---|---|
| Session | Claude (tmux pane) | Ephemeral | Cycles per step/handoff |
| Sandbox | Git worktree | Persistent | Until nuke |
| Slot | Name from pool | Persistent | Until nuke |
Polecats have exactly three operating states. There is no idle pool.
| State | Description | How it happens |
|---|---|---|
| Working | Actively doing assigned work | Normal operation |
| Stalled | Session stopped mid-work | Interrupted, crashed, or timed out |
| Zombie | Completed work but failed to die | `gt done` failed during cleanup |
Key distinction: Zombies completed their work; stalled polecats did not.
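The three states above follow from two observations the Witness can make. A sketch of that classification, with booleans standing in for what is actually observed (tmux session liveness, completion signal):

```python
# Sketch of polecat state classification. The two booleans are hypothetical
# stand-ins for Witness observations (session alive? work signaled done?).
def polecat_state(session_running, work_complete):
    if work_complete:
        # Completed but still around = zombie; otherwise it was nuked (no state)
        return "zombie" if session_running else None
    return "working" if session_running else "stalled"

print(polecat_state(True, False))   # → working
print(polecat_state(False, False))  # → stalled
print(polecat_state(True, True))    # → zombie
```

Note there is no branch that returns "idle": a polecat with no work and no failure simply does not exist.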
Polecats are responsible for their own cleanup. When a polecat completes, it runs `gt done`:

```
┌──────────────────────────────────────────────────────┐
│ gt sling                                             │
│   → Allocate slot from pool (Toast)                  │
│   → Create sandbox (worktree on new branch)          │
│   → Start session (Claude in tmux)                   │
│   → Hook molecule to polecat                         │
└──────────────────────────────────────────────────────┘
                          │
                          ▼
┌──────────────────────────────────────────────────────┐
│ Work Happens                                         │
│                                                      │
│ Session cycles happen here:                          │
│   - gt handoff between steps                         │
│   - Compaction triggers respawn                      │
│   - Crash → Witness respawns                         │
│                                                      │
│ Sandbox persists through ALL session cycles          │
└──────────────────────────────────────────────────────┘
                          │
                          ▼
┌──────────────────────────────────────────────────────┐
│ gt done (self-cleaning)                              │
│   → Push branch to origin                            │
│   → Submit work to merge queue (MR bead)             │
│   → Request self-nuke (sandbox + session cleanup)    │
│   → Exit immediately                                 │
└──────────────────────────────────────────────────────┘
                          │
                          ▼
┌──────────────────────────────────────────────────────┐
│ Refinery: merge queue                                │
│   → Rebase and merge to main                         │
│   → Close the issue                                  │
│   → If conflict: spawn FRESH polecat to re-implement │
└──────────────────────────────────────────────────────┘
```
Sessions cycle for these reasons:
| Trigger | Action | Result |
|---|---|---|
| `gt handoff` | Voluntary | Clean cycle to fresh context |
| Context compaction | Automatic | Forced by Claude Code |
| Crash/timeout | Failure | Witness respawns |
| Completion | `gt done` | Session exits, Witness takes over |
Polecat identity is long-lived; only sessions and sandboxes are ephemeral. The polecat name (Toast, Shadow, etc.) is a slot from a pool - truly ephemeral. But the agent identity accumulates a work history.
Configure custom branch name templates:
```
# Template Variables
{user}         # From git config user.name
{year}         # Current year (YY format)
{month}        # Current month (MM format)
{name}         # Polecat name
{issue}        # Issue ID without prefix
{description}  # Sanitized issue title
{timestamp}    # Unique timestamp
```
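Rendering such a template is plain `{var}` substitution. A sketch with illustrative values (the timestamp format here is an assumption, not gt's actual one):

```python
# Sketch: render a branch-name template by substituting {var} placeholders.
# Values below are illustrative; gt's real timestamp format may differ.
from datetime import datetime, timezone

def render_branch(template, **values):
    out = template
    for key, val in values.items():
        out = out.replace("{" + key + "}", str(val))
    return out

now = datetime(2025, 1, 15, tzinfo=timezone.utc)
branch = render_branch(
    "polecat/{name}/{issue}@{timestamp}",
    name="toast", issue="abc12", timestamp=now.strftime("%Y%m%d%H%M%S"),
)
print(branch)  # → polecat/toast/abc12@20250115000000
```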
Default Behavior (backward compatible):

- `polecat/{name}/{issue}@{timestamp}`
- `polecat/{name}-{timestamp}`

"Idle" Polecats (They Don't Exist)
There is no idle state. Polecats don't exist without work: `gt done` → session exits → polecat nuked.

If you see a non-working polecat, it's in a failure state:
| What you see | What it is | What went wrong |
|---|---|---|
| Session exists but not working | Stalled | Interrupted/crashed, never nudged |
| Session done but didn't exit | Zombie | `gt done` failed during cleanup |
Manual State Transitions (Anti-pattern):
```
gt polecat done Toast    # DON'T: external state manipulation
gt polecat reset Toast   # DON'T: manual lifecycle control
```
Correct:
```
# Polecat signals its own completion:
gt done                  # (from inside the polecat session)
```

Only Witness nukes polecats:

```
gt polecat nuke Toast    # (from Witness, after verification)
```
The Witness DOES NOT:
- Complete work on a polecat's behalf (completion is signaled by the polecat itself via `gt done`)

The Witness DOES:
```
Formula (source TOML) ─── "Ice-9"
        │
        ▼ bd cook
Protomolecule (frozen template) ─── Solid
        │
        ├─▶ bd mol pour ──▶ Mol (persistent) ─── Liquid ──▶ bd squash ──▶ Digest
        │
        └─▶ bd mol wisp ──▶ Wisp (ephemeral) ─── Vapor ──┬▶ bd squash ──▶ Digest
                                                         └▶ bd burn ──▶ (gone)
```
| Term | Description |
|---|---|
| Formula | Source TOML template defining workflow steps |
| Protomolecule | Frozen template ready for instantiation |
| Molecule | Active workflow instance with trackable steps |
| Wisp | Ephemeral molecule for patrol cycles (never synced) |
| Digest | Squashed summary of completed molecule |
| Shiny Workflow | Canonical polecat formula: design → implement → review → test → submit |
```
bd mol current          # Where am I?
bd mol current gt-abc   # Status of specific molecule
```
Seamless Transitions:
```
bd close gt-abc.3 --continue   # Close and advance to next step
```
Beads Operations (bd):
```
bd formula list                # Available formulas
bd formula show <name>         # Formula details
bd cook <formula>              # Formula → Proto
bd mol list                    # Available protos
bd mol show <id>               # Proto details
bd mol pour <proto>            # Create mol
bd mol wisp <proto>            # Create wisp
bd mol bond <proto> <parent>   # Attach to existing mol
bd mol squash <id>             # Condense to digest
bd mol burn <id>               # Discard wisp
bd mol current                 # Where am I?
```
Agent Operations (gt):
```
gt hook                      # What's on MY hook
gt mol current               # What should I work on next
gt mol progress <id>         # Execution progress
gt mol attach <bead> <mol>   # Pin molecule to bead
gt mol detach <bead>         # Unpin molecule
gt mol burn                  # Burn attached molecule
gt mol squash                # Squash attached molecule
gt mol step done <step>      # Complete a molecule step
```
WRONG:
```
cat .beads/formulas/mol-polecat-work.formula.toml
bd create --title "Step 1: Load context" --type task
```
RIGHT:
```
bd cook mol-polecat-work
bd mol pour mol-polecat-work --var issue=gt-xyz
bd ready             # Find next step
bd close <step-id>   # Complete it
```
Polecats receive work via their hook - a pinned molecule attached to an issue.
Molecule Types for Polecats:
| Type | Storage | Use Case |
|---|---|---|
| Regular Molecule | (synced) | Discrete deliverables, audit trail |
| Wisp | (ephemeral) | Patrol cycles, operational loops |
Hook Management:
```
gt hook                       # What's on MY hook?
gt mol attach-from-mail <id>  # Attach work from mail message
gt done                       # Signal completion (syncs, submits to MQ, notifies Witness)
```
Polecat Workflow Summary:
```
1. Spawn with work on hook
2. gt hook                      # What's hooked?
3. bd mol current               # Where am I?
4. Execute current step
5. bd close <step> --continue
6. If more steps: GOTO 3
7. gt done                      # Signal completion
```
| Question | Molecule | Wisp |
|---|---|---|
| Does it need audit trail? | Yes | No |
| Will it repeat continuously? | No | Yes |
| Is it discrete deliverable? | Yes | No |
| Is it operational routine? | No | Yes |
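The decision table above reduces to a two-question check. A sketch of it as a function:

```python
# Sketch of the molecule-vs-wisp decision. Audit trails and discrete
# deliverables want a durable molecule; repeating operational routines
# want an ephemeral wisp.
def choose_container(needs_audit_trail, repeats_continuously):
    if needs_audit_trail:
        return "molecule"   # durable, synced, survives restarts
    if repeats_continuously:
        return "wisp"       # ephemeral, squashed or burned after the cycle
    return "molecule"       # default to durable tracking

print(choose_container(True, False))   # → molecule
print(choose_container(False, True))   # → wisp
```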
- Mark steps `in_progress` BEFORE starting, `closed` IMMEDIATELY after completing. Never batch-close steps at the end.
- Use `--continue` for propulsion - keep momentum by auto-advancing.
- Use `bd mol current` - know where you are before resuming.

Formulas resolve across three tiers:

```
TIER 1: PROJECT (rig-level)   Location: <project>/.beads/formulas/
TIER 2: TOWN (user-level)     Location: ~/gt/.beads/formulas/
TIER 3: SYSTEM (embedded)     Location: Compiled into gt binary
```
A convoy is a persistent tracking unit that monitors related issues across multiple rigs. When you kick off work - even a single issue - a convoy tracks it.
```
                🚚 Convoy (hq-cv-abc)
                        │
           ┌────────────┼────────────┐
           │            │            │
           ▼            ▼            ▼
      ┌─────────┐  ┌─────────┐  ┌─────────┐
      │ gt-xyz  │  │ gt-def  │  │ bd-abc  │
      │ gastown │  │ gastown │  │  beads  │
      └────┬────┘  └────┬────┘  └────┬────┘
           │            │            │
           ▼            ▼            ▼
      ┌─────────┐  ┌─────────┐  ┌─────────┐
      │   nux   │  │ furiosa │  │  amber  │
      │(polecat)│  │(polecat)│  │(polecat)│
      └─────────┘  └─────────┘  └─────────┘
                        │
              "the swarm" (ephemeral)
```
| Concept | Persistent? | ID | Description |
|---|---|---|---|
| Convoy | Yes | hq-cv-* | Tracking unit. What you create, track, get notified about. |
| Swarm | No | None | Ephemeral. "The workers currently on this convoy's issues." |
| Stranded Convoy | Yes | hq-cv-* | A convoy with ready work but no polecats assigned. |
```
OPEN ──(all issues close)──► LANDED/CLOSED
  ↑                               │
  └──(add more issues)────────────┘
           (auto-reopens)
```
| State | Description |
|---|---|
| `OPEN` | Active tracking, work in progress |
| `LANDED` | All tracked issues closed, notification sent |
Adding issues to a closed convoy reopens it automatically.
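That lifecycle is a small state machine. A Python sketch of it (the class and method names are illustrative, not gt internals):

```python
# Sketch of the convoy lifecycle: lands when all tracked issues close,
# auto-reopens when issues are added to a landed convoy.
class Convoy:
    def __init__(self, convoy_id):
        self.id = convoy_id          # e.g. "hq-cv-abc"
        self.open_issues = set()
        self.state = "OPEN"

    def track(self, issue_id):
        self.open_issues.add(issue_id)
        self.state = "OPEN"          # adding to a landed convoy reopens it

    def close_issue(self, issue_id):
        self.open_issues.discard(issue_id)
        if not self.open_issues:
            self.state = "LANDED"    # all issues closed -> notify subscribers

c = Convoy("hq-cv-abc")
c.track("gt-abc"); c.track("bd-xyz")
c.close_issue("gt-abc"); c.close_issue("bd-xyz")
print(c.state)  # → LANDED
c.track("gt-new")
print(c.state)  # → OPEN
```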
```
# Create convoy
gt convoy create "Deploy v2.0" gt-abc bd-xyz --notify gastown/joe

# Check status
gt convoy status hq-abc

# List all convoys
gt convoy list
gt convoy list --all

# Add issues
bd dep add hq-cv-abc gt-new-issue --type=tracks
```
Example convoy status output:
```
🚚 hq-cv-abc: Deploy v2.0

Status: ●  Progress: 2/4 completed
Created: 2025-12-30T10:15:00-08:00

Tracked Issues:
  ✓ gt-xyz: Update API endpoint [task]
  ✓ bd-abc: Fix validation [bug]
  ○ bd-ghi: Update docs [task]
  ○ gt-jkl: Deploy to prod [task]
```
When a convoy lands, subscribers are notified:
```bash
gt convoy create "Feature X" gt-abc --notify gastown/joe
gt convoy create "Feature X" gt-abc --notify mayor/ --notify --human
```
Notification content:
```
🚚 Convoy Landed: Deploy v2.0 (hq-cv-abc)

Issues (3):
  ✓ gt-xyz: Update API endpoint
  ✓ gt-def: Add validation
  ✓ bd-abc: Update docs

Duration: 2h 15m
```
Convoys live in town-level beads (`hq-cv-*` prefix) and can track issues from any rig:
```bash
# Track issues from multiple rigs
gt convoy create "Full-stack feature" \
  gt-frontend-abc \
  gt-backend-def \
  bd-docs-xyz
```
The `tracks` relation links a convoy to the issues it monitors (added via `bd dep add <convoy> <issue> --type=tracks`).
| View | Scope | Shows |
|---|---|---|
| Convoy status | Cross-rig | Issues tracked by convoy + workers |
| Rig status | Single rig | All workers in rig + their convoy membership |
Use convoys for "what's the status of this batch of work?" Use rig status for "what's everyone in this rig working on?"
When you sling a single issue without an existing convoy, Gas Town auto-creates one for dashboard visibility.
Gas Town agents coordinate via mail messages routed through the beads system.
Message Types:
| Type | Route | Purpose |
|---|---|---|
| POLECAT_DONE | Polecat → Witness | Signal work completion |
| | Witness → Refinery | Signal branch ready for merge |
| | Refinery → Witness | Confirm successful merge |
| | Refinery → Witness | Notify merge failure |
| | Refinery → Witness | Request rebase for conflicts |
| | Witness → Deacon | Second-order monitoring |
| | Any → escalation target | Request intervention |
| HANDOFF | Agent → self | Session continuity |
Commands:
```bash
gt mail inbox
gt mail read <msg-id>
gt mail send <addr> -s "Subject" -m "Body"
gt mail ack <msg-id>
```
Message Format Details:
POLECAT_DONE (Polecat → Witness):
```
Subject: POLECAT_DONE <polecat-name>
Body:
  Exit: MERGED|ESCALATED|DEFERRED
  Issue: <issue-id>
  MR: <mr-id>        # if exit=MERGED
  Branch: <branch>
```
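A script (or a successor agent) could pull the fields out of a POLECAT_DONE body with a simple key/value parse. This parser is a hypothetical sketch, not gt's actual implementation:

```python
def parse_polecat_done(body: str) -> dict:
    """Parse 'Key: value' lines from a POLECAT_DONE message body (sketch)."""
    fields = {}
    for line in body.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

body = """Exit: MERGED
Issue: gt-xyz
MR: mr-42
Branch: polecat/gt-xyz"""
print(parse_polecat_done(body)["Exit"])  # MERGED
```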
HANDOFF (Agent → self):
```
Subject: 🤝 HANDOFF: <brief-context>
Body:
  attached_molecule: <molecule-id>   # if work in progress
  attached_at: <timestamp>

  Context:
  <freeform notes for successor>

  Status:
  <where things stand>

  Next:
  <what successor should do>
```
Three bead types for managing communication:
- Groups (`gt:group`) - Named collections for mail distribution
- Queues (`gt:queue`) - Work queues where messages can be claimed
- Channels (`gt:channel`) - Pub/sub broadcast streams

```bash
# Group management
gt mail group create ops-team gastown/witness gastown/crew/max
gt mail send ops-team -s "Team meeting" -m "Tomorrow at 10am"

# Channel management
gt mail channel create alerts --retain-count=50
gt mail send channel:alerts -s "Build failed" -m "Details..."
```
Severity Levels:
| Level | Priority | Description |
|---|---|---|
| CRITICAL | P0 | System-threatening, immediate attention |
| HIGH | P1 | Important blocker, needs human soon |
| MEDIUM | P2 | Standard escalation |
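The severity-to-priority mapping is small enough to sketch directly. The dict and function names here are ours, not gt's API:

```python
# Illustrative mapping from escalation severity to priority, per the table above.
SEVERITY_TO_PRIORITY = {"CRITICAL": "P0", "HIGH": "P1", "MEDIUM": "P2"}

def priority_for(severity: str) -> str:
    # Raises KeyError for unknown severities rather than guessing a default
    return SEVERITY_TO_PRIORITY[severity.upper()]

print(priority_for("critical"))  # P0
```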
Escalation Categories:
| Category | Description | Default Route |
|---|---|---|
| decision | Multiple valid paths, need choice | Deacon -> Mayor |
| | Need guidance or expertise | Deacon -> Mayor |
| | Waiting on unresolvable dependency | Mayor |
| | Unexpected error, can't proceed | Deacon |
| | Security or data integrity issue | Overseer (direct) |
| | Gate didn't resolve in time | Deacon |
| | Worker stuck or needs recycle | Witness |
Commands:
```bash
gt escalate "Database migration failed"
gt escalate -s CRITICAL "Data corruption detected"
gt escalate --type decision "Which auth approach?"
```
Hand off your current session to a fresh Claude instance while preserving work context.
When to Use:
Usage:
/handoff [optional message]
What Persists:
What Resets:
Gas Town uses a three-tier watchdog chain for autonomous health monitoring:
```
Daemon (Go process)              ← Dumb transport, 3-min heartbeat
  │
  └─► Boot (AI agent)            ← Intelligent triage, fresh each tick
        │
        └─► Deacon (AI agent)    ← Continuous patrol, long-running
              │
              └─► Witnesses & Refineries ← Per-rig agents
```
Key insight: The daemon is mechanical (can't reason), but health decisions need intelligence. Boot bridges this gap.
| Agent | Session Name | Location | Lifecycle |
|---|---|---|---|
| Daemon | (Go process) | | Persistent, auto-restart |
| Boot | | | Ephemeral, fresh each tick |
| Deacon | | | Long-running, handoff loop |
| Condition | Action |
|---|---|
| Session dead | START |
| Heartbeat > 15 min | WAKE |
| Heartbeat 5-15 min + mail | NUDGE |
| Heartbeat fresh | NOTHING |
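Boot's decision ladder combines session liveness, heartbeat age, and pending mail. A minimal sketch in Python, with thresholds taken from the table and the freshness rules below; the function name is hypothetical:

```python
from datetime import timedelta

def boot_action(session_alive: bool, heartbeat_age: timedelta, has_mail: bool) -> str:
    """Sketch of Boot's triage ladder; not gt's actual code."""
    if not session_alive:
        return "START"                      # session dead
    if heartbeat_age > timedelta(minutes=15):
        return "WAKE"                       # very stale, Deacon may be stuck
    if heartbeat_age > timedelta(minutes=5) and has_mail:
        return "NUDGE"                      # stale with pending mail
    return "NOTHING"                        # fresh, Deacon is active

print(boot_action(False, timedelta(0), False))          # START
print(boot_action(True, timedelta(minutes=20), False))  # WAKE
print(boot_action(True, timedelta(minutes=8), True))    # NUDGE
print(boot_action(True, timedelta(minutes=2), True))    # NOTHING
```

Note the ordering: liveness is checked before staleness, so a dead session is always restarted rather than nudged.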
| Agent | Patrol Molecule | Responsibility |
|---|---|---|
| Deacon | | Agent lifecycle, plugin execution, health checks |
| Witness | | Monitor polecats, nudge stuck workers |
| Refinery | | Process merge queue, review MRs |
```bash
gt deacon health-check <agent>         # Send health check ping
gt deacon health-state                 # Show health check state
cat ~/gt/deacon/heartbeat.json | jq .  # Check Deacon heartbeat
gt boot triage                         # Manual Boot run
```
The Problem: The daemon needs to ensure the Deacon is healthy, but:
The Solution: Boot is a narrow, ephemeral AI agent that:
The daemon runs a heartbeat tick every 3 minutes:
```go
func (d *Daemon) heartbeatTick() {
	d.ensureBootRunning()        // 1. Spawn Boot for triage
	d.checkDeaconHeartbeat()     // 2. Belt-and-suspenders fallback
	d.ensureWitnessesRunning()   // 3. Witness health
	d.ensureRefineriesRunning()  // 4. Refinery health
	d.triggerPendingSpawns()     // 5. Bootstrap polecats
	d.processLifecycleRequests() // 6. Cycle/restart requests
}
```
Heartbeat Freshness:
| Age | State | Boot Action |
|---|---|---|
| < 5 min | Fresh | Nothing (Deacon active) |
| 5-15 min | Stale | Nudge if pending mail |
| > 15 min | Very stale | Wake (Deacon may be stuck) |
| File | Purpose | Updated By |
|---|---|---|
| `~/gt/deacon/heartbeat.json` | Deacon freshness | Deacon (each cycle) |
| | Boot in-progress marker | Boot spawn |
| | Boot last action | Boot triage |
| | Agent health tracking | |
| `~/gt/daemon/daemon.log` | Daemon activity | Daemon |
| | Daemon process ID | Daemon startup |
When tmux is unavailable, Gas Town enters degraded mode:
| Capability | Normal | Degraded |
|---|---|---|
| Boot runs | As AI in tmux | As Go code (mechanical) |
| Observe panes | Yes | No |
| Nudge agents | Yes | No |
| Start agents | tmux sessions | Direct spawn |
Gas Town supports multiple AI coding runtimes. Per-rig settings live in `settings/config.json`:
```json
{
  "runtime": {
    "provider": "codex",
    "command": "codex",
    "args": [],
    "prompt_mode": "none"
  }
}
```
Gas Town's attribution enables objective model comparison:
```bash
# Deploy different models on similar tasks
gt sling gt-abc gastown --model=claude-sonnet
gt sling gt-def gastown --model=gpt-4

# Compare outcomes
bd stats --actor=gastown/polecats/* --group-by=model
```
Option 1: Worktrees (Preferred)
```bash
gt worktree beads  # Creates ~/gt/beads/crew/gastown-joe/
```
Option 2: Dispatch to Local Workers
```bash
bd create --prefix beads "Fix authentication bug"
gt convoy create "Auth fix" bd-xyz
gt sling bd-xyz beads
```
Gas Town uses sparse checkout to exclude Claude Code context files:
```bash
git sparse-checkout set --no-cone '/*' '!/.claude/' '!/CLAUDE.md' '!/CLAUDE.local.md'
```
A marketplace for Gas Town formulas - like npm for molecules.
URI Scheme:
hop://molmall.gastown.io/formulas/mol-polecat-work@4.0.0
Commands (Future):
```bash
gt formula install mol-code-review-strict
gt formula upgrade mol-polecat-work
gt formula publish mol-polecat-work
```
Federation enables formula sharing across organizations using the Highway Operations Protocol.
```bash
gt dashboard --port 8080
open http://localhost:8080
```
Features:
```bash
gt completion bash > /etc/bash_completion.d/gt
gt completion zsh > "${fpath[1]}/_gt"
gt completion fish > ~/.config/fish/completions/gt.fish
```
| Problem | Solution |
|---|---|
| Agent in wrong directory | Check cwd, |
| Beads prefix mismatch | Check vs rig config |
| Worktree conflicts | Ensure for polecats |
| Stuck worker | , then |
| Dirty git state | Commit or discard, then |
| | Add to PATH |
| Daemon not starting | Check tmux: |
| Agents lose connection | then |
| Convoy stuck | |
| Mayor not responding | then |
```bash
gt doctor                        # Run health checks
gt doctor --fix                  # Auto-repair common issues
gt doctor --verbose              # Detailed output
gt status                        # Show workspace status

BD_DEBUG_ROUTING=1 bd show <id>  # Debug beads routing
gt peek <agent>                  # Check agent health
tail -f ~/gt/daemon/daemon.log   # View daemon log
```
- Use the `bd cook` → `bd mol pour` pipeline instead.
- Town: the top-level workspace (`~/gt/`). Coordinates all workers across multiple Rigs.
- Sling: assign work to a rig with `gt sling`.
- Nudge: prompt a stalled agent with `gt nudge`.
- Handoff: pass your session to a fresh instance with `/handoff`.
- Seance: `gt seance`.

As AI agents become central to engineering workflows, teams face new challenges:
Gas Town is an orchestration layer that treats AI agent work as structured data. Every action is attributed. Every agent has a track record. Every piece of work has provenance.
The problem: You want to assign a complex Go refactor. You have 20 agents. Some are great at Go. Some have never touched it. Some are flaky. How do you choose?
The solution: Every agent accumulates a work history:
```bash
# What has this agent done?
bd audit --actor=gastown/polecats/toast

# Success rate on Go projects
bd stats --actor=gastown/polecats/toast --tag=go
```
Why it matters:
The problem: You have work in Go, Python, TypeScript, Rust. You have agents with varying capabilities. Manual assignment doesn't scale.
The solution: Work carries skill requirements. Agents have demonstrated capabilities (derived from their work history). Matching is automatic:
```bash
# Agent capabilities (derived from work history)
bd skills gastown/polecats/toast
# → go: 47 tasks, python: 12 tasks, typescript: 3 tasks

# Route based on fit
gt dispatch gt-xyz --prefer-skill=go
```
Why it matters:
The problem: Enterprise projects are complex. A "feature" becomes 50 tasks across 8 repos involving 4 teams. Flat issue lists don't capture this structure.
The solution: Work decomposes naturally:
```
Epic: User Authentication System
├── Feature: Login Flow
│   ├── Task: API endpoint
│   ├── Task: Frontend component
│   └── Task: Integration tests
├── Feature: Session Management
│   └── ...
└── Feature: Password Reset
    └── ...
```
Each level has its own chain. Roll-ups are automatic. You always know where you stand.
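A roll-up like this is just a fold over the tree. A hypothetical sketch, assuming issues nest as parent/child records (the dict shape here is ours, not the beads schema):

```python
def rollup(node):
    """Return (closed, total) leaf-task counts for a nested dict tree (sketch)."""
    if not node.get("children"):
        # Leaf: count itself, closed or not
        return (1 if node.get("closed") else 0, 1)
    closed = total = 0
    for child in node["children"]:
        c, t = rollup(child)
        closed += c
        total += t
    return closed, total

epic = {"title": "User Authentication System", "children": [
    {"title": "Login Flow", "children": [
        {"title": "API endpoint", "closed": True},
        {"title": "Frontend component", "closed": True},
        {"title": "Integration tests", "closed": False},
    ]},
]}
print(rollup(epic))  # (2, 3)
```

Because progress at every level is derived from the leaves, no status field ever has to be maintained by hand.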
The problem: Your frontend can't ship until the backend API lands. They're in different repos. Traditional tools don't track this.
The solution: Explicit cross-project dependencies:
```yaml
depends_on:
  beads://github/acme/backend/be-456  # Backend API
  beads://github/acme/shared/sh-789   # Shared types
```
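Assuming the address format is `beads://<host>/<org>/<repo>/<issue>`, as the example suggests, splitting one apart is straightforward. This is a hypothetical helper, not a beads API:

```python
def parse_beads_uri(uri: str) -> dict:
    """Split a cross-project dependency address into its parts (sketch)."""
    prefix = "beads://"
    assert uri.startswith(prefix), "not a beads URI"
    host, org, repo, issue = uri[len(prefix):].split("/")
    return {"host": host, "org": org, "repo": repo, "issue": issue}

dep = parse_beads_uri("beads://github/acme/backend/be-456")
print(dep["repo"], dep["issue"])  # backend be-456
```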
The problem: An agent says "done." Is it actually done? Is the code quality acceptable? Did it pass review?
The solution: Structured validation with attribution:
```json
{
  "validated_by": "gastown/refinery",
  "validation_type": "merge",
  "timestamp": "2025-01-15T10:30:00Z",
  "quality_signals": {
    "tests_passed": true,
    "review_approved": true,
    "lint_clean": true
  }
}
```
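A consumer of such a record might gate "done" on every quality signal passing. A minimal sketch; the `is_done` helper is ours, not part of gt:

```python
import json

record = json.loads("""{
  "validated_by": "gastown/refinery",
  "validation_type": "merge",
  "quality_signals": {"tests_passed": true, "review_approved": true, "lint_clean": true}
}""")

def is_done(rec: dict) -> bool:
    # Work counts as done only when every quality signal is true
    return all(rec["quality_signals"].values())

print(is_done(record))  # True
```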
The problem: Complex multi-agent work is opaque. You don't know what's happening until it's done (or failed).
The solution: Work state as a real-time stream:
```
bd activity --follow

[14:32:08] + patrol-x7k.arm-ace bonded (5 steps)
[14:32:09] → patrol-x7k.arm-ace.capture in_progress
[14:32:10] ✓ patrol-x7k.arm-ace.capture completed
[14:32:14] ✓ patrol-x7k.arm-ace.decide completed
[14:32:17] ✓ patrol-x7k.arm-ace COMPLETE
```
Why it matters:
| Capability | Developer Benefit | Enterprise Benefit |
|---|---|---|
| Attribution | Debug agent issues | Compliance audits |
| Work history | Tune agent assignments | Performance management |
| Skill routing | Faster task completion | Resource optimization |
| Federation | Multi-repo projects | Cross-org visibility |
| Validation | Quality assurance | Process enforcement |
| Activity feed | Real-time debugging | Operational awareness |
Run `gt --help` or `gt <command> --help` to verify syntax.

MIT License - see LICENSE file for details.
This glossary was contributed by Clay Shirky in Issue #80.
Installation Command:
```bash
tessl install github:numman-ali/n-skills --skill gastown
```