Model Usage is a cost tracking skill that summarizes per-model API usage from CodexBar's local logs. It supports both Codex and Claude providers, giving developers clear visibility into which models are consuming the most resources and what they cost.
Whether you need a quick snapshot of your current model's spending or a full breakdown across all models, this skill delivers the data in seconds from locally stored usage logs.
Key Features
- Dual provider support that summarizes cost data from either Codex or Claude logs, with easy switching between providers
- Flexible output formats including human-readable text (default) and JSON with pretty-printing for programmatic consumption
- Multiple input methods that can run CodexBar CLI directly, accept JSON files, or process data from stdin
- Smart model selection that automatically identifies the highest-cost model in recent data with configurable fallback logic
- Per-model cost summaries that break down spending across all models used in a project
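To make the per-model summary concrete, here is a minimal Python sketch of how such a breakdown could be rendered. The JSON shape (`daily`, `date`, `models`, `cost` fields) is an assumption for illustration, not CodexBar's documented schema.

```python
import json

# Hypothetical CodexBar-style usage data; field names are assumptions.
sample = {
    "daily": [
        {
            "date": "2025-01-15",
            "models": {
                "gpt-4o": {"cost": 1.42},
                "o3-mini": {"cost": 0.18},
            },
        }
    ]
}

def summarize(day):
    """Render a per-model cost summary, highest cost first."""
    rows = sorted(day["models"].items(), key=lambda kv: kv[1]["cost"], reverse=True)
    return "\n".join(f"{name}: ${info['cost']:.2f}" for name, info in rows)

print(summarize(sample["daily"][0]))
# gpt-4o: $1.42
# o3-mini: $0.18
```

The same dictionary can be emitted as pretty-printed JSON via `json.dumps(sample, indent=2)` for the programmatic output path.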
Use Cases
- Track daily API spending across different models to stay within budget
- Identify which models consume the most resources in a multi-model project
- Generate automated cost analysis reports for team review or billing reconciliation
- Integrate usage data into development workflows for cost-aware model selection
How It Works
The skill executes a Python script that either invokes the `codexbar cost` command or parses supplied JSON data. It locates the most recent daily entry containing model breakdowns, extracts the highest-cost model, and formats the results. Users can override model selection with the `--model` flag for targeted analysis.
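The selection step described above can be sketched as follows. This is not the skill's actual implementation; the entry structure (`date`, `models`, `cost`) and the fallback behavior for an override that is absent from the latest entry are assumptions.

```python
def pick_model(daily_entries, override=None):
    """Pick the model to report: the override (--model) if given,
    otherwise the highest-cost model in the most recent daily
    entry that contains a per-model breakdown."""
    for day in sorted(daily_entries, key=lambda d: d["date"], reverse=True):
        models = day.get("models") or {}
        if not models:
            continue  # no breakdown this day; fall back to an older entry
        if override:
            if override in models:
                return override, models[override]
            continue  # override not present; keep searching older entries
        name = max(models, key=lambda m: models[m]["cost"])
        return name, models[name]
    return None, None

entries = [
    {"date": "2025-01-14", "models": {"gpt-4o": {"cost": 2.10}}},
    {"date": "2025-01-15", "models": {}},  # latest day lacks a breakdown
]
name, info = pick_model(entries)
print(name)  # gpt-4o
```

Sorting by date descending and skipping entries without breakdowns gives the "most recent usable entry" behavior with a natural fallback when the newest data is incomplete.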
Getting Started
Install CodexBar via Homebrew with `brew install steipete/tap/codexbar`. CodexBar, and therefore this skill, requires macOS. Once CodexBar is collecting usage data, invoke the skill to get instant per-model cost summaries in text or JSON format.