A.I. Smart Router
Expertise-aware model router with semantic domain scoring, context-overflow protection, and security redaction. Automatically selects the optimal AI model using weighted expertise scoring.
Intelligently route requests to the optimal AI model using tiered classification with automatic fallback handling and cost optimization.
The router operates transparently—users send messages normally and get responses from the best model for their task. No special commands needed.
Optional visibility: include [show routing] in any message to see the routing decision.
The router uses a three-tier decision process:
```
┌─────────────────────────────────────────────────────────────────┐
│                     TIER 1: INTENT DETECTION                     │
│            Classify the primary purpose of the request           │
├─────────────────────────────────────────────────────────────────┤
│ CODE        │ ANALYSIS │ CREATIVE   │ REALTIME  │ GENERAL        │
│ write/debug │ research │ writing    │ news/live │ Q&A/chat       │
│ refactor    │ explain  │ stories    │ X/Twitter │ translate      │
│ review      │ compare  │ brainstorm │ prices    │ summarize      │
└──────┬──────┴────┬─────┴─────┬──────┴─────┬─────┴─────┬──────────┘
       │           │           │            │           │
       ▼           ▼           ▼            ▼           ▼
┌─────────────────────────────────────────────────────────────────┐
│                  TIER 2: COMPLEXITY ESTIMATION                   │
├─────────────────────────────────────────────────────────────────┤
│ SIMPLE (Tier $)      │ MEDIUM (Tier $$)    │ COMPLEX (Tier $$$)  │
│ • One-step task      │ • Multi-step task   │ • Deep reasoning    │
│ • Short response OK  │ • Some nuance       │ • Extensive output  │
│ • Factual lookup     │ • Moderate context  │ • Critical task     │
│ → Haiku/Flash        │ → Sonnet/Grok/GPT   │ → Opus/GPT-5        │
└──────────────────────┴─────────────────────┴─────────────────────┘
                                 │
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│                 TIER 3: SPECIAL CASE OVERRIDES                   │
├─────────────────────────────────────────────────────────────────┤
│ CONDITION                             │ OVERRIDE TO              │
│ ──────────────────────────────────────┼───────────────────────── │
│ Context >100K tokens                  │ → Gemini Pro (1M ctx)    │
│ Context >500K tokens                  │ → Gemini Pro ONLY        │
│ Needs real-time data                  │ → Grok (regardless)      │
│ Image/vision input                    │ → Opus or Gemini Pro     │
│ User explicit override                │ → Requested model        │
└───────────────────────────────────────┴──────────────────────────┘
```
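The Tier 1 and Tier 2 classifiers can be implemented with lightweight heuristics. Below is a minimal sketch; the keyword lists, thresholds, and function bodies are illustrative assumptions, not the skill's actual classifier:

```python
import re

# Illustrative keyword heuristics; a real router might use an LLM or embedding classifier.
INTENT_KEYWORDS = {
    "CODE":     ["write code", "debug", "refactor", "review this function", "stack trace"],
    "ANALYSIS": ["research", "explain", "compare", "analyze", "pros and cons"],
    "CREATIVE": ["write a story", "brainstorm", "poem", "slogan", "rewrite this creatively"],
    "REALTIME": ["latest", "today", "current price", "breaking news", "trending"],
}

def classify_intent(request: str) -> str:
    """Return the first intent whose keywords appear in the request; default to GENERAL."""
    text = request.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "GENERAL"

def estimate_complexity(request: str) -> str:
    """Rough size/structure heuristic mapping a request to SIMPLE / MEDIUM / COMPLEX."""
    words = len(re.findall(r"\w+", request))
    if words < 30 and "\n" not in request:
        return "SIMPLE"
    if words < 200:
        return "MEDIUM"
    return "COMPLEX"
```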
When a request contains multiple clear intents (e.g., "Write code to analyze this data and explain it creatively"):
Examples:
Non-English requests are handled normally — all supported models have multilingual capabilities:
| Model | Non-English Support |
|---|---|
| Opus/Sonnet/Haiku | Excellent (100+ languages) |
| GPT-5 | Excellent (100+ languages) |
| Gemini Pro/Flash | Excellent (100+ languages) |
| Grok | Good (major languages) |
Intent detection still works because:
Edge case: If intent unclear due to language, default to GENERAL intent with MEDIUM complexity.
| Intent | Simple | Medium | Complex |
|---|---|---|---|
| CODE | Sonnet | Opus | Opus |
| ANALYSIS | Flash | GPT-5 | Opus |
| CREATIVE | Sonnet | Opus | Opus |
| REALTIME | Grok | Grok | Grok-3 |
| GENERAL | Flash | Sonnet | Opus |
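The same matrix can be expressed as the ROUTING_PREFERENCES lookup used by the routing code later in this document. A sketch transcribed from the table above; only the (GENERAL, SIMPLE) ordering beyond the first choice appears elsewhere in this document, so the other entries are kept to a single model:

```python
# Preferred models per (intent, complexity), transcribed from the routing matrix above.
ROUTING_PREFERENCES: dict[tuple[str, str], list[str]] = {
    ("CODE", "SIMPLE"):      ["sonnet"],
    ("CODE", "MEDIUM"):      ["opus"],
    ("CODE", "COMPLEX"):     ["opus"],
    ("ANALYSIS", "SIMPLE"):  ["flash"],
    ("ANALYSIS", "MEDIUM"):  ["gpt-5"],
    ("ANALYSIS", "COMPLEX"): ["opus"],
    ("CREATIVE", "SIMPLE"):  ["sonnet"],
    ("CREATIVE", "MEDIUM"):  ["opus"],
    ("CREATIVE", "COMPLEX"): ["opus"],
    ("REALTIME", "SIMPLE"):  ["grok-2"],
    ("REALTIME", "MEDIUM"):  ["grok-2"],
    ("REALTIME", "COMPLEX"): ["grok-3"],
    ("GENERAL", "SIMPLE"):   ["flash", "haiku", "grok-2", "sonnet"],
    ("GENERAL", "MEDIUM"):   ["sonnet"],
    ("GENERAL", "COMPLEX"):  ["opus"],
}
```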
When a model becomes unavailable mid-session (token quota exhausted, rate limit hit, API error), the router automatically switches to the next best available model and notifies the user.
When a model switch occurs due to exhaustion, the user receives a notification:
```
┌─────────────────────────────────────────────────────────────────┐
│  ⚠️ MODEL SWITCH NOTICE                                          │
│                                                                  │
│  Your request could not be completed on claude-opus-4-5         │
│  (reason: token quota exhausted).                                │
│                                                                  │
│  ✅ Request completed using: anthropic/claude-sonnet-4-5        │
│                                                                  │
│  The response below was generated by the fallback model.        │
└─────────────────────────────────────────────────────────────────┘
```
| Reason | Description |
|---|---|
| Token quota exhausted | Daily/monthly token limit reached |
| Rate limit exceeded | Too many requests per minute |
| Context window exceeded | Input too large for model |
| API timeout | Model took too long to respond |
| API error | Provider returned an error |
| Model unavailable | Model temporarily offline |
```python
def execute_with_fallback(primary_model: str, fallback_chain: list[str], request: str) -> Response:
    """
    Execute request with automatic fallback and user notification.
    """
    attempted_models = []
    switch_reason = None

    # Try primary model first, then walk the fallback chain
    models_to_try = [primary_model] + fallback_chain

    for model in models_to_try:
        try:
            response = call_model(model, request)

            # If we switched models, prepend notification
            if attempted_models:
                notification = build_switch_notification(
                    failed_model=attempted_models[0],
                    reason=switch_reason,
                    success_model=model
                )
                return Response(
                    content=notification + "\n\n---\n\n" + response.content,
                    model_used=model,
                    switched=True
                )
            return Response(content=response.content, model_used=model, switched=False)

        except TokenQuotaExhausted:
            attempted_models.append(model)
            switch_reason = "token quota exhausted"
            log_fallback(model, switch_reason)
            continue
        except RateLimitExceeded:
            attempted_models.append(model)
            switch_reason = "rate limit exceeded"
            log_fallback(model, switch_reason)
            continue
        except ContextWindowExceeded:
            attempted_models.append(model)
            switch_reason = "context window exceeded"
            log_fallback(model, switch_reason)
            continue
        except APITimeout:
            attempted_models.append(model)
            switch_reason = "API timeout"
            log_fallback(model, switch_reason)
            continue
        except APIError as e:
            attempted_models.append(model)
            switch_reason = f"API error: {e.code}"
            log_fallback(model, switch_reason)
            continue

    # All models exhausted
    return build_exhaustion_error(attempted_models)


def build_switch_notification(failed_model: str, reason: str, success_model: str) -> str:
    """Build user-facing notification when a model switch occurs."""
    return f"""⚠️ MODEL SWITCH NOTICE

Your request could not be completed on {failed_model}
(reason: {reason}).

✅ Request completed using: {success_model}

The response below was generated by the fallback model."""


def build_exhaustion_error(attempted_models: list[str]) -> Response:
    """Build error when all models are exhausted."""
    models_tried = ", ".join(attempted_models)
    return Response(
        content=f"""❌ REQUEST FAILED

Unable to complete your request. All available models have been exhausted.

Models attempted: {models_tried}

What you can do:
- Wait — Token quotas typically reset hourly or daily
- Simplify — Try a shorter or simpler request
- Check status — Run /router to see model availability

If this persists, your human may need to check API quotas or add additional providers.""",
        model_used=None,
        switched=False,
        failed=True
    )
```
When a model is exhausted, the router selects the next best model for the same task type:
| Original Model | Fallback Priority (same capability) |
|---|---|
| Opus | Sonnet → GPT-5 → Grok-3 → Gemini Pro |
| Sonnet | GPT-5 → Grok-3 → Opus → Haiku |
| GPT-5 | Sonnet → Opus → Grok-3 → Gemini Pro |
| Gemini Pro | Flash → GPT-5 → Opus → Sonnet |
| Grok-2/3 | (warn: no real-time fallback available) |
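For reference, the table above expressed as data. This is a sketch: the dictionary name is an assumption, and the orderings are transcribed directly from the table.

```python
# Fallback priority per original model, transcribed from the table above.
# The name FALLBACK_PRIORITY is illustrative; the routing code below keys
# MASTER_FALLBACK_CHAINS by task type rather than by original model.
FALLBACK_PRIORITY: dict[str, list[str]] = {
    "opus":       ["sonnet", "gpt-5", "grok-3", "gemini-pro"],
    "sonnet":     ["gpt-5", "grok-3", "opus", "haiku"],
    "gpt-5":      ["sonnet", "opus", "grok-3", "gemini-pro"],
    "gemini-pro": ["flash", "gpt-5", "opus", "sonnet"],
    "grok-2":     [],  # warn: no real-time fallback available
    "grok-3":     [],  # warn: no real-time fallback available
}
```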
After a model switch, the agent should note in the response that:
This ensures transparency and sets appropriate expectations.
When using streaming responses, fallback handling requires special consideration:
```python
async def execute_with_streaming_fallback(primary_model: str, fallback_chain: list[str], request: str):
    """
    Handle streaming responses with mid-stream fallback.

    If a model fails DURING streaming (not before), the partial response is lost.
    Strategy: don't start streaming until the first chunk is received successfully.
    """
    models_to_try = [primary_model] + fallback_chain

    for model in models_to_try:
        try:
            # Test with non-streaming ping first (optional, adds latency)
            # await test_model_availability(model)

            # Start streaming
            stream = await call_model_streaming(model, request)
            first_chunk = await stream.get_first_chunk(timeout=10_000)  # 10s timeout for first chunk

            # If we got here, model is responding — continue streaming
            yield first_chunk
            async for chunk in stream:
                yield chunk
            return  # Success

        except (FirstChunkTimeout, StreamError) as e:
            log_fallback(model, str(e))
            continue  # Try next model

    # All models failed
    yield build_exhaustion_error(models_to_try)
```
Key insight: Wait for the first chunk before committing to a model. If the first chunk times out, fall back before any partial response is shown to the user.
```python
RETRY_CONFIG = {
    "initial_timeout_ms": 30_000,         # 30s for first attempt
    "fallback_timeout_ms": 20_000,        # 20s for fallback attempts (faster fail)
    "max_retries_per_model": 1,           # Don't retry same model
    "backoff_multiplier": 1.5,            # Not used (no same-model retry)
    "circuit_breaker_threshold": 3,       # Failures before skipping model entirely
    "circuit_breaker_reset_ms": 300_000,  # 5 min before trying failed model again
}
```
Circuit breaker: If a model fails 3 times in 5 minutes, skip it entirely for the next 5 minutes. This prevents repeatedly hitting a down service.
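A minimal sketch of such a circuit breaker, wired to the thresholds in RETRY_CONFIG. The class and method names are illustrative assumptions, not part of the skill's API:

```python
import time
from collections import defaultdict

class CircuitBreaker:
    """Skip a model after repeated failures; allow it again after a cool-down."""

    def __init__(self, threshold: int = 3, reset_ms: int = 300_000):
        self.threshold = threshold          # circuit_breaker_threshold
        self.reset_ms = reset_ms            # circuit_breaker_reset_ms
        self.failures = defaultdict(list)   # model -> list of failure timestamps (ms)

    def record_failure(self, model: str) -> None:
        self.failures[model].append(time.monotonic() * 1000)

    def is_open(self, model: str) -> bool:
        """True if the model should be skipped for now."""
        now_ms = time.monotonic() * 1000
        # Keep only failures within the reset window
        recent = [t for t in self.failures[model] if now_ms - t < self.reset_ms]
        self.failures[model] = recent
        return len(recent) >= self.threshold
```

The fallback loop would then call `is_open(model)` before each attempt and `record_failure(model)` in every except branch.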
When the preferred model fails (rate limit, API down, error), cascade to the next option:
- Opus → Sonnet → GPT-5 → Gemini Pro
- Opus → GPT-5 → Gemini Pro → Sonnet
- Opus → GPT-5 → Sonnet → Gemini Pro
- Grok-2 → Grok-3 → (warn: no real-time fallback)
- Flash → Haiku → Sonnet → GPT-5
```
┌─────────────────────────────────────────────────────────────────┐
│                   LONG CONTEXT FALLBACK CHAIN                    │
├─────────────────────────────────────────────────────────────────┤
│ TOKEN COUNT         │ FALLBACK CHAIN                            │
│ ────────────────────┼────────────────────────────────────────── │
│ 128K - 200K         │ Opus (200K) → Sonnet (200K) → Gemini Pro  │
│ 200K - 1M           │ Gemini Pro → Flash (1M) → ERROR_MESSAGE   │
│ > 1M                │ ERROR_MESSAGE (no model supports this)    │
└─────────────────────┴───────────────────────────────────────────┘
```
Implementation:
```python
def handle_long_context(token_count: int, available_models: dict) -> str | ErrorMessage:
    """Route long-context requests with graceful degradation."""

    # Tier 1: 128K - 200K tokens (Opus/Sonnet can handle)
    if token_count <= 200_000:
        for model in ["opus", "sonnet", "haiku", "gemini-pro", "flash"]:
            if model in available_models and get_context_limit(model) >= token_count:
                return model

    # Tier 2: 200K - 1M tokens (only Gemini)
    elif token_count <= 1_000_000:
        for model in ["gemini-pro", "flash"]:
            if model in available_models:
                return model

    # Tier 3: > 1M tokens (nothing available)
    # Fall through to error

    # No suitable model found — return helpful error
    return build_context_error(token_count, available_models)


def build_context_error(token_count: int, available_models: dict) -> ErrorMessage:
    """Build a helpful error message when no model can handle the input."""

    # Find the largest available context window
    max_available = max(
        (get_context_limit(m) for m in available_models),
        default=0
    )

    # Determine what's missing
    missing_models = []
    if "gemini-pro" not in available_models and "flash" not in available_models:
        missing_models.append("Gemini Pro/Flash (1M context)")
    if token_count <= 200_000 and "opus" not in available_models:
        missing_models.append("Opus (200K context)")

    # Format token count for readability
    if token_count >= 1_000_000:
        token_display = f"{token_count / 1_000_000:.1f}M"
    else:
        token_display = f"{token_count // 1000}K"

    return ErrorMessage(
        title="Context Window Exceeded",
        message=f"""Your input is approximately **{token_display} tokens**, which exceeds the context window of all currently available models.

Required: Gemini Pro (1M context) {"— currently unavailable" if "gemini-pro" not in available_models else ""}
Your max available: {max_available // 1000}K tokens

Options:
- Wait and retry — Gemini may be temporarily down
- Reduce input size — Remove unnecessary content to fit within {max_available // 1000}K tokens
- Split into chunks — I can process your input sequentially in smaller pieces

Would you like me to help split this into manageable chunks?""",
        recoverable=True,
        suggested_action="split_chunks"
    )
```
Example Error Output:
⚠️ Context Window Exceeded

Your input is approximately 340K tokens, which exceeds the context window of all currently available models.

Required: Gemini Pro (1M context) — currently unavailable
Your max available: 200K tokens
Options:
- Wait and retry — Gemini may be temporarily down
- Reduce input size — Remove unnecessary content to fit within 200K tokens
- Split into chunks — I can process your input sequentially in smaller pieces
Would you like me to help split this into manageable chunks?
The router auto-detects available providers at runtime:
1. Check configured auth profiles
2. Build available model list from authenticated providers
3. Construct routing table using ONLY available models
4. If preferred model unavailable, use best available alternative
Example: If only Anthropic and Google are configured:
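A sketch of how runtime detection might play out for that example. The environment-variable names and provider-to-model mapping below are assumptions for illustration, not the skill's actual configuration keys:

```python
import os

# Illustrative mapping: which env var signals a provider is configured, and which models it unlocks.
PROVIDER_MODELS = {
    "anthropic": {"env": "ANTHROPIC_API_KEY", "models": ["opus", "sonnet", "haiku"]},
    "openai":    {"env": "OPENAI_API_KEY",    "models": ["gpt-5"]},
    "google":    {"env": "GOOGLE_API_KEY",    "models": ["gemini-pro", "flash"]},
    "xai":       {"env": "XAI_API_KEY",       "models": ["grok-2", "grok-3"]},
}

def discover_providers() -> dict[str, dict]:
    """Return only the models whose provider has credentials configured."""
    available = {}
    for provider, spec in PROVIDER_MODELS.items():
        if os.environ.get(spec["env"]):
            for model in spec["models"]:
                available[model] = {"provider": provider}
    return available

# With only Anthropic and Google configured, the routing table is built from
# opus/sonnet/haiku plus gemini-pro/flash; slots that would prefer GPT-5 or Grok
# fall back to the best available Claude/Gemini model, and real-time requests
# carry the "no real-time fallback" warning since only Grok provides live data.
```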
The router considers cost when complexity is LOW:
| Model | Cost Tier | Use When |
|---|---|---|
| Gemini Flash | $ | Simple tasks, high volume |
| Claude Haiku | $ | Simple tasks, quick responses |
| Claude Sonnet | $$ | Medium complexity |
| Grok 2 | $$ | Real-time needs only |
| GPT-5 | $$ | General fallback |
| Gemini Pro | $$$ | Long context needs |
| Claude Opus | $$$$ | Complex/critical tasks |
Rule: Never use Opus ($$$$) for tasks that Flash ($) can handle.
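The same table as data, matching the COST_TIERS lookup used by the routing code below (the model keys are illustrative short names):

```python
# Cost tier per model, transcribed from the table above.
COST_TIERS: dict[str, str] = {
    "flash":      "$",
    "haiku":      "$",
    "sonnet":     "$$",
    "grok-2":     "$$",
    "gpt-5":      "$$",
    "gemini-pro": "$$$",
    "opus":       "$$$$",
}
```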
Add [show routing] to any message:
[show routing] What's the weather in NYC?
Output includes:
[Routed → xai/grok-2-latest | Reason: REALTIME intent detected | Fallback: none available]
Explicit overrides:
Ask: "router status" or "/router" to see:
When processing a request:
1. DETECT available models (check auth profiles)
2. CLASSIFY intent (code/analysis/creative/realtime/general)
3. ESTIMATE complexity (simple/medium/complex)
4. CHECK special cases (context size, vision, explicit override)
5. FILTER by cost tier based on complexity ← BEFORE model selection
6. SELECT model from filtered pool using routing matrix
7. VERIFY model available, else use fallback chain (also cost-filtered)
8. EXECUTE request with selected model
9. IF failure, try next in fallback chain
10. LOG routing decision (for debugging)
```python
def route_with_fallback(request):
    """
    Main routing function with CORRECT execution order.
    Cost filtering MUST happen BEFORE routing table lookup.
    """

    # Step 1: Discover available models
    available_models = discover_providers()

    # Step 2: Classify intent
    intent = classify_intent(request)

    # Step 3: Estimate complexity
    complexity = estimate_complexity(request)

    # Estimate input size (token-count helper assumed to be defined alongside the router)
    token_count = estimate_tokens(request)

    # Step 4: Check special-case overrides (these bypass cost filtering)
    if user_override := get_user_model_override(request):
        return execute_with_fallback(user_override, [], request)  # No cost filter for explicit override

    if token_count > 128_000:
        return handle_long_context(token_count, available_models)  # Special handling

    if needs_realtime(request):
        return execute_with_fallback("grok-2", ["grok-3"], request)  # Realtime bypasses cost

    # ┌─────────────────────────────────────────────────────────────┐
    # │ STEP 5: FILTER BY COST TIER — THIS MUST COME FIRST!         │
    # │                                                             │
    # │ Cost filtering happens BEFORE the routing table lookup,     │
    # │ NOT after. This ensures "what's 2+2?" never considers       │
    # │ Opus even momentarily.                                      │
    # └─────────────────────────────────────────────────────────────┘
    allowed_tiers = get_allowed_tiers(complexity)
    # SIMPLE  → ["$"]
    # MEDIUM  → ["$", "$$"]
    # COMPLEX → ["$", "$$", "$$$", "$$$$"]

    cost_filtered_models = {
        model: meta for model, meta in available_models.items()
        if COST_TIERS.get(model) in allowed_tiers
    }

    # Step 6: NOW select from cost-filtered pool using routing preferences
    preferences = ROUTING_PREFERENCES.get((intent, complexity), [])
    for model in preferences:
        if model in cost_filtered_models:  # Only consider cost-appropriate models
            selected_model = model
            break
    else:
        # No preferred model in cost-filtered pool — use cheapest available
        selected_model = select_cheapest(cost_filtered_models)

    # Step 7: Build cost-filtered fallback chain
    task_type = get_task_type(intent, complexity)
    full_chain = MASTER_FALLBACK_CHAINS.get(task_type, [])
    filtered_chain = [m for m in full_chain
                      if m in cost_filtered_models and m != selected_model]

    # Steps 8-10: Execute with fallback + logging
    return execute_with_fallback(selected_model, filtered_chain, request)


def get_allowed_tiers(complexity: str) -> list[str]:
    """Return allowed cost tiers for a given complexity level."""
    return {
        "SIMPLE": ["$"],                        # Budget only — no exceptions
        "MEDIUM": ["$", "$$"],                  # Budget + standard
        "COMPLEX": ["$", "$$", "$$$", "$$$$"],  # All tiers — complex tasks deserve the best
    }.get(complexity, ["$", "$$"])
```
Example flow for "what's 2+2?":
1. available_models = {opus, sonnet, haiku, flash, grok-2, ...}
2. intent = GENERAL
3. complexity = SIMPLE
4. (no special cases)
5. allowed_tiers = ["$"] ← SIMPLE means $ only
cost_filtered_models = {haiku, flash, grok-2} ← Opus/Sonnet EXCLUDED
6. preferences for (GENERAL, SIMPLE) = [flash, haiku, grok-2, sonnet]
first match in cost_filtered = flash ✓
7. fallback_chain = [haiku, grok-2] ← Also cost-filtered
8. execute with flash
Result: Opus is NEVER considered, not even momentarily.
```
┌─────────────────────────────────────────────────────────────────┐
│           COST OPTIMIZATION IMPLEMENTATION OPTIONS               │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  APPROACH 1: Explicit filter_by_cost() (shown above)             │
│  ─────────────────────────────────────────────────────────────   │
│  • Calls get_allowed_tiers(complexity) explicitly                │
│  • Filters available_models BEFORE routing table lookup          │
│  • Most defensive — impossible to route wrong tier               │
│  • Recommended for security-critical deployments                 │
│                                                                  │
│  APPROACH 2: Preference ordering (implicit)                      │
│  ─────────────────────────────────────────────────────────────   │
│  • ROUTING_PREFERENCES lists cheapest capable models first       │
│  • For SIMPLE tasks: [flash, haiku, grok-2, sonnet]              │
│  • First available match wins → naturally picks cheapest         │
│  • Simpler code, relies on correct preference ordering           │
│                                                                  │
│  This implementation uses BOTH for defense-in-depth:             │
│  • Preference ordering provides first line of cost awareness     │
│  • Explicit filter_by_cost() guarantees tier enforcement         │
│                                                                  │
│  For alternative implementations that rely solely on             │
│  preference ordering, see references/models.md for the           │
│  filter_by_cost() function if explicit enforcement is needed.    │
│                                                                  │
└─────────────────────────────────────────────────────────────────┘
```
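For readers who want the explicit helper without opening references/models.md, here is a minimal sketch of what filter_by_cost() might look like, built from get_allowed_tiers() and COST_TIERS above; the actual function in references/models.md may differ:

```python
def filter_by_cost(available_models: dict, complexity: str) -> dict:
    """Drop models whose cost tier is not allowed for this complexity level."""
    allowed = get_allowed_tiers(complexity)
    return {
        model: meta
        for model, meta in available_models.items()
        if COST_TIERS.get(model) in allowed
    }
```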
Use sessions_spawn for model routing:
```
sessions_spawn(
  task: "user's request",
  model: "selected/model-id",
  label: "task-type-query"
)
```
See references/security.md for full security guidance.
See references/models.md for detailed capabilities and pricing.