Tool-Use Pattern: Connecting Agents to the Real World
The tool-use pattern is what transforms an AI from a knowledge base into an active participant in your systems. Without tools, an LLM can only generate text based on its training data. With tools, it can query databases, call APIs, execute code, send messages, and interact with any system you connect.
This guide covers the tool-use pattern in depth—how agents decide when to use tools, how to design effective tools, and how to chain multiple tools together for complex tasks.
The Tool-Use Pattern Explained
At its simplest, the tool-use pattern works like this:
- Agent receives a task that may require external information or actions
- Agent reasons about whether tools are needed
- Agent selects a tool and specifies parameters
- System executes the tool and returns results
- Agent processes results and continues reasoning
- Repeat until the task is complete
User: "What's the weather in Tokyo?"
Agent thinks: I need current weather data. I'll use the weather tool.
Agent calls: weather_lookup(city="Tokyo")
System returns: {"temp": 22, "conditions": "Partly cloudy", "humidity": 65}
Agent responds: "It's currently 22°C and partly cloudy in Tokyo with 65% humidity."
This pattern enables agents to overcome LLM limitations:
- Knowledge cutoff: Tools provide real-time information
- Computation: Tools can perform precise calculations
- Actions: Tools can modify external state
- Verification: Tools can check facts against authoritative sources
How Agents Decide to Use Tools
The decision to use a tool involves several considerations:
1. Task Analysis
The agent first analyzes what the task requires:
def analyze_task(task: str) -> dict:
"""Agent's internal reasoning about task requirements"""
analysis = llm.generate(f"""
Analyze this task:
{task}
Determine:
1. What information is needed?
2. Is this information in my training data?
3. Does this require current/real-time data?
4. Does this require taking an action?
5. What tools might help?
Think step by step.
""")
return parse_analysis(analysis)
2. Tool Matching
The agent matches task requirements to available tools:
def select_tools(task_requirements: dict, available_tools: list) -> list:
"""Match task needs to available tools"""
prompt = f"""
Task requirements:
{task_requirements}
Available tools:
{format_tools(available_tools)}
Which tools, if any, should be used? Consider:
- Does the tool provide what's needed?
- Is the tool the best option?
- What order should tools be used?
Return a list of tools to use, or empty if none needed.
"""
return llm.generate(prompt)
3. Parameter Specification
Once a tool is selected, the agent specifies parameters:
def specify_parameters(tool: dict, context: str) -> dict:
"""Generate tool parameters from context"""
prompt = f"""
Tool: {tool['name']}
Description: {tool['description']}
Parameters: {tool['parameters']}
Context: {context}
What parameter values should be used?
Provide values for all required parameters and any relevant optional ones.
"""
return llm.generate(prompt)
Implementing Tool-Use
Here's a complete implementation of the tool-use pattern:
from anthropic import Anthropic
from typing import Callable, Any
import json
class ToolUseAgent:
def __init__(self):
self.client = Anthropic()
self.tools = []
self.tool_functions = {}
def register_tool(
self,
name: str,
description: str,
parameters: dict,
function: Callable
):
"""Register a tool the agent can use"""
self.tools.append({
"name": name,
"description": description,
"input_schema": parameters
})
self.tool_functions[name] = function
def run(self, task: str, max_iterations: int = 10) -> str:
"""Execute a task, using tools as needed"""
messages = [{"role": "user", "content": task}]
for _ in range(max_iterations):
response = self.client.messages.create(
model="claude-sonnet-4-20250514",
max_tokens=4096,
tools=self.tools,
messages=messages
)
# Check if we're done
if response.stop_reason == "end_turn":
return self._extract_text(response)
# Process tool calls
if response.stop_reason == "tool_use":
messages.append({
"role": "assistant",
"content": response.content
})
tool_results = []
for block in response.content:
if block.type == "tool_use":
result = self._execute_tool(block.name, block.input)
tool_results.append({
"type": "tool_result",
"tool_use_id": block.id,
"content": json.dumps(result)
})
messages.append({"role": "user", "content": tool_results})
return "Max iterations reached"
def _execute_tool(self, name: str, params: dict) -> Any:
"""Execute a tool and return results"""
if name not in self.tool_functions:
return {"error": f"Unknown tool: {name}"}
try:
return self.tool_functions[name](**params)
except Exception as e:
return {"error": str(e)}
    def _extract_text(self, response) -> str:
        # Join all text blocks rather than returning only the first one
        parts = [block.text for block in response.content if hasattr(block, "text")]
        return "\n".join(parts)
# Example usage
agent = ToolUseAgent()
# Register a weather tool
agent.register_tool(
name="get_weather",
description="Get current weather for a city",
parameters={
"type": "object",
"properties": {
"city": {
"type": "string",
"description": "City name"
}
},
"required": ["city"]
},
function=lambda city: weather_api.get_current(city)
)
# Register a calculator tool
agent.register_tool(
name="calculate",
description="Perform mathematical calculations",
parameters={
"type": "object",
"properties": {
"expression": {
"type": "string",
"description": "Mathematical expression to evaluate"
}
},
"required": ["expression"]
},
function=lambda expression: {"result": math_parser.calculate(expression)}
)
# Run
result = agent.run("What's the temperature in Paris in Fahrenheit?")
Tool Chaining
Complex tasks often require multiple tools in sequence:
Sequential Chaining
Tools used one after another, each building on previous results:
Task: "Find the CEO of Apple and their net worth"
1. Agent calls: company_info(company="Apple")
Returns: {"ceo": "Tim Cook", "founded": 1976, ...}
2. Agent calls: person_net_worth(name="Tim Cook")
Returns: {"net_worth_usd": 1800000000, ...}
3. Agent synthesizes: "Tim Cook is the CEO of Apple with a net worth of approximately $1.8 billion."
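With the ToolUseAgent implementation above, sequential chaining falls out of the agent loop: register both tools and the model calls them in order, feeding each result into its next decision. A minimal sketch, assuming hypothetical company_directory and networth_api backends:
# Sequential chaining with the ToolUseAgent defined earlier.
# company_directory and networth_api are hypothetical backends.
chain_agent = ToolUseAgent()

chain_agent.register_tool(
    name="company_info",
    description="Look up basic facts about a company, including its CEO",
    parameters={
        "type": "object",
        "properties": {
            "company": {"type": "string", "description": "Company name"}
        },
        "required": ["company"]
    },
    function=lambda company: company_directory.lookup(company)
)

chain_agent.register_tool(
    name="person_net_worth",
    description="Estimate the net worth of a public figure",
    parameters={
        "type": "object",
        "properties": {
            "name": {"type": "string", "description": "Person's full name"}
        },
        "required": ["name"]
    },
    function=lambda name: networth_api.estimate(name)
)

# The model calls company_info first, reads the CEO's name from the result,
# then calls person_net_worth with that name before answering.
result = chain_agent.run("Find the CEO of Apple and their net worth")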
Parallel Chaining
Independent tool calls executed together:
import asyncio

async def parallel_tool_execution(tool_calls: list) -> list:
    """Execute multiple independent tool calls in parallel.

    Assumes an async execute_tool(name, params) coroutine is available.
    """
    tasks = []
    for call in tool_calls:
        task = asyncio.create_task(
            execute_tool(call["name"], call["params"])
        )
        tasks.append(task)
    results = await asyncio.gather(*tasks)
    return results
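In the Anthropic Messages API, a single assistant turn can contain several tool_use blocks. When those calls are independent, they can be dispatched together; here is a sketch building on parallel_tool_execution, assuming execute_tool is the same async coroutine as above:
import json

async def handle_tool_use_blocks(response) -> list[dict]:
    # Collect every tool_use block from one assistant response
    calls = [
        {"id": block.id, "name": block.name, "params": block.input}
        for block in response.content
        if block.type == "tool_use"
    ]
    results = await parallel_tool_execution(calls)
    # Pair each result with the tool_use_id expected in the follow-up message
    return [
        {
            "type": "tool_result",
            "tool_use_id": call["id"],
            "content": json.dumps(result)
        }
        for call, result in zip(calls, results)
    ]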
Conditional Chaining
Tool selection depends on previous results:
Task: "Check if the server is healthy, and restart if not"
1. Agent calls: server_health_check(server_id="prod-1")
Returns: {"status": "unhealthy", "error": "high_memory"}
2. Agent reasons: Server is unhealthy, I should restart it.
3. Agent calls: server_restart(server_id="prod-1")
Returns: {"status": "restarting"}
4. Agent calls: server_health_check(server_id="prod-1")
Returns: {"status": "healthy"}
5. Agent responds: "Server was unhealthy due to high memory. Restarted successfully and now healthy."
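Conditional chaining needs no special orchestration code: the model inspects each tool result inside the same loop and decides what to call next. The main design lever is the tool description, which is where guidance like "only restart when unhealthy" belongs. A sketch, with ops_api as a hypothetical operations backend:
# Conditional chaining with the ToolUseAgent defined earlier.
# ops_api is a hypothetical operations backend.
ops_agent = ToolUseAgent()

server_schema = {
    "type": "object",
    "properties": {
        "server_id": {"type": "string", "description": "Server identifier"}
    },
    "required": ["server_id"]
}

ops_agent.register_tool(
    name="server_health_check",
    description="Check whether a server is healthy",
    parameters=server_schema,
    function=lambda server_id: ops_api.health(server_id)
)

ops_agent.register_tool(
    name="server_restart",
    description="Restart a server. Only use this after a health check reports it unhealthy.",
    parameters=server_schema,
    function=lambda server_id: ops_api.restart(server_id)
)

result = ops_agent.run("Check if server prod-1 is healthy, and restart it if not")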
Advanced Tool Patterns
Pattern 1: Tool Composition
Combine simple tools into complex operations:
class ComposedToolAgent(ToolUseAgent):
def __init__(self):
super().__init__()
self.composed_tools = {}
def register_composed_tool(
self,
name: str,
description: str,
steps: list[dict]
):
"""Register a tool that composes multiple operations"""
        def composed_function(**params):
            results = {}
            for step in steps:
                tool_name = step["tool"]
                step_params = {}
                for param, source in step["params"].items():
                    if isinstance(source, str) and source.startswith("$"):
                        # Reference a previous step's result by its step name
                        ref = source[1:]
                        step_params[param] = results[ref]
                    elif isinstance(source, str) and source.startswith("@"):
                        # Reference an input parameter of the composed tool
                        step_params[param] = params[source[1:]]
                    else:
                        # Literal value
                        step_params[param] = source
                # _execute_tool in ToolUseAgent is synchronous, so no await is needed
                results[step["name"]] = self._execute_tool(tool_name, step_params)
            return results
self.composed_tools[name] = {
"description": description,
"function": composed_function
}
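Usage might look like the sketch below, composing the get_weather tool from earlier with a hypothetical comfort_score tool (both assumed to be registered via register_tool). "@city" pulls from the composed tool's input, and "$weather" passes the full result of the step named "weather" to the next step:
# Sketch: a composed tool built from two registered tools.
# comfort_score is hypothetical; get_weather is the tool registered earlier.
composed_agent = ComposedToolAgent()

composed_agent.register_composed_tool(
    name="weather_report",
    description="Fetch current weather for a city and derive a comfort score",
    steps=[
        {"name": "weather", "tool": "get_weather", "params": {"city": "@city"}},
        {"name": "score", "tool": "comfort_score", "params": {"reading": "$weather"}}
    ]
)
# Exposing weather_report to the model (e.g. via register_tool) is left out of this sketch.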
Pattern 2: Fallback Tools
Try alternative tools when primary fails:
class FallbackToolExecutor:
def __init__(self):
self.fallback_chains = {}
def register_fallback(self, primary: str, fallbacks: list[str]):
"""Register fallback tools for when primary fails"""
self.fallback_chains[primary] = fallbacks
async def execute_with_fallback(self, tool: str, params: dict) -> dict:
"""Try primary tool, then fallbacks on failure"""
tools_to_try = [tool] + self.fallback_chains.get(tool, [])
for t in tools_to_try:
result = await self.execute_tool(t, params)
if not result.get("error"):
return result
# Log and try next
return {"error": "All tools failed", "tried": tools_to_try}
Pattern 3: Tool Validation
Validate tool results before proceeding:
class ValidatingToolAgent(ToolUseAgent):
def __init__(self):
super().__init__()
self.validators = {}
def register_validator(self, tool_name: str, validator: Callable):
"""Register a validator for a tool's output"""
self.validators[tool_name] = validator
def _execute_tool(self, name: str, params: dict) -> Any:
result = super()._execute_tool(name, params)
if name in self.validators:
is_valid, message = self.validators[name](result)
if not is_valid:
return {
"error": f"Validation failed: {message}",
"invalid_result": result
}
return result
# Example validator
def validate_weather(result):
if "temp" not in result:
return False, "Missing temperature"
if not -100 < result["temp"] < 60:
return False, f"Temperature {result['temp']} seems invalid"
return True, "OK"
Pattern 4: Cached Tool Results
Cache expensive tool calls:
import json
from datetime import datetime, timedelta
class CachedToolExecutor:
def __init__(self, cache_ttl: int = 300):
self.cache = {}
self.cache_ttl = cache_ttl
def execute_cached(self, tool: str, params: dict) -> Any:
cache_key = f"{tool}:{json.dumps(params, sort_keys=True)}"
# Check cache
if cache_key in self.cache:
result, timestamp = self.cache[cache_key]
if datetime.now() - timestamp < timedelta(seconds=self.cache_ttl):
return result
# Execute and cache
result = self.execute_tool(tool, params)
self.cache[cache_key] = (result, datetime.now())
return result
def invalidate(self, tool: str = None):
"""Clear cache, optionally for specific tool"""
if tool:
self.cache = {
k: v for k, v in self.cache.items()
if not k.startswith(f"{tool}:")
}
else:
self.cache = {}
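Caching suits read-only lookups such as weather or product searches; tools that modify state (restarts, notifications, orders) should never be cached, and related caches should be invalidated after writes. A short sketch, assuming execute_tool dispatches to the real tool functions:
cached = CachedToolExecutor(cache_ttl=300)

# Repeated lookups within five minutes hit the cache instead of the API
weather = cached.execute_cached("get_weather", {"city": "Tokyo"})

# After a write that changes the underlying data, drop the stale entries
cached.invalidate("search_products")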
Pattern 5: Rate-Limited Tools
Prevent overwhelming external services:
import asyncio
from collections import deque
from datetime import datetime, timedelta
class RateLimitedExecutor:
def __init__(self, calls_per_minute: int = 60):
self.calls_per_minute = calls_per_minute
self.call_times = deque()
async def execute_rate_limited(self, tool: str, params: dict) -> Any:
await self._wait_if_needed()
self.call_times.append(datetime.now())
return await self.execute_tool(tool, params)
async def _wait_if_needed(self):
now = datetime.now()
minute_ago = now - timedelta(minutes=1)
# Remove old entries
while self.call_times and self.call_times[0] < minute_ago:
self.call_times.popleft()
# Wait if at limit
if len(self.call_times) >= self.calls_per_minute:
oldest = self.call_times[0]
wait_time = (oldest + timedelta(minutes=1) - now).total_seconds()
if wait_time > 0:
await asyncio.sleep(wait_time)
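In use, the limiter wraps every call to a given provider. A sketch assuming execute_tool is the async dispatch to the real API and the provider allows 30 calls per minute:
async def lookup_cities(cities: list[str]) -> list:
    # Keep the hypothetical weather provider under 30 calls per minute
    limiter = RateLimitedExecutor(calls_per_minute=30)
    return [
        await limiter.execute_rate_limited("get_weather", {"city": city})
        for city in cities
    ]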
Tool Design Best Practices
1. Clear, Action-Oriented Names
# Good
"search_products"
"send_notification"
"calculate_shipping_cost"
# Bad
"products"
"notification"
"shipping"
2. Comprehensive Descriptions
{
"name": "search_database",
"description": """
Search the product database for items matching the query.
Use this when you need to find products by:
- Name or description keywords
- Category
- Price range
- Availability
Returns up to 20 results sorted by relevance.
Do NOT use for:
- Getting details of a specific known product (use get_product instead)
- Inventory queries (use check_inventory instead)
"""
}
3. Typed, Validated Parameters
{
"input_schema": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "Search keywords",
"minLength": 1,
"maxLength": 200
},
"category": {
"type": "string",
"enum": ["electronics", "clothing", "home", "food"],
"description": "Filter by category"
},
"max_price": {
"type": "number",
"minimum": 0,
"description": "Maximum price in USD"
},
"limit": {
"type": "integer",
"minimum": 1,
"maximum": 50,
"default": 10
}
},
"required": ["query"]
}
}
4. Informative Returns
def search_products(query: str, **filters) -> dict:
results = db.search(query, **filters)
return {
"query": query,
"filters_applied": filters,
"total_results": len(results),
"results": [
{
"id": r.id,
"name": r.name,
"price": r.price,
"category": r.category,
"in_stock": r.quantity > 0
}
for r in results[:10]
],
"has_more": len(results) > 10,
"suggestion": "Use get_product(id) for full details"
}
5. Graceful Error Handling
def execute_tool(name: str, params: dict) -> dict:
try:
result = tools[name](**params)
return {"success": True, "data": result}
except ValidationError as e:
return {
"success": False,
"error_type": "validation",
"message": str(e),
"suggestion": "Check parameter types and required fields"
}
except RateLimitError as e:
return {
"success": False,
"error_type": "rate_limit",
"message": str(e),
"retry_after": e.retry_after
}
except Exception as e:
return {
"success": False,
"error_type": "unknown",
"message": str(e)
}
Testing Tool-Use
Thorough testing is essential. The sketches below assume the agent records which tools it called and with what parameters (for example, last_tools_used, last_tool_params, and tools_used_in_order):
class ToolUseTests:
def test_tool_selection(self):
"""Agent selects appropriate tool for task"""
agent = ToolUseAgent()
agent.register_tool("get_weather", ...)
agent.register_tool("calculate", ...)
# Should use weather tool
result = agent.run("What's the weather in London?")
assert "weather" in agent.last_tools_used
# Should use calculator
result = agent.run("What's 15% of 340?")
assert "calculate" in agent.last_tools_used
def test_parameter_extraction(self):
"""Agent extracts correct parameters from context"""
agent = ToolUseAgent()
agent.register_tool("search", ...)
agent.run("Find red shoes under $50")
assert agent.last_tool_params == {
"query": "red shoes",
"max_price": 50
}
def test_tool_chaining(self):
"""Agent chains tools correctly"""
agent = ToolUseAgent()
agent.register_tool("get_user", ...)
agent.register_tool("get_orders", ...)
agent.run("Get all orders for user john@example.com")
assert agent.tools_used_in_order == ["get_user", "get_orders"]
def test_error_recovery(self):
"""Agent handles tool errors gracefully"""
agent = ToolUseAgent()
agent.register_tool("flaky_api", lambda: raise_error())
result = agent.run("Use the flaky API")
assert "error" in result.lower() or "unable" in result.lower()
Conclusion
The tool-use pattern is what makes AI agents truly useful. By connecting LLMs to external tools, you enable them to:
- Access real-time information
- Take actions in the real world
- Perform precise computations
- Integrate with your existing systems
Key principles:
- Design tools with clear purposes and descriptions
- Handle errors gracefully
- Chain tools for complex tasks
- Cache and rate-limit appropriately
- Test thoroughly
When done well, tool-use transforms an AI from a conversationalist into a capable assistant that can actually get things done.
Ready to combine reasoning with action? Check out The ReAct Pattern for the next level of agent capability.