Adding Tools to Your Agent
Tools are what transform a LangChain chain into a true agent. A tool is a function the LLM can call to interact with the outside world — searching the web, running code, reading files, calling APIs. In this lesson, you'll attach real tools to an agent using LangChain's create_tool_calling_agent and AgentExecutor.
Defining Your First Tool
LangChain tools are Python functions decorated with @tool. The docstring becomes the tool's description — the LLM reads it to decide when and how to use the tool.
# agent/tools/search.py
from langchain_core.tools import tool
import httpx

@tool
def search_wikipedia(query: str) -> str:
    """Search Wikipedia for factual information about a topic.

    Use this when you need factual information, definitions, or historical context.
    Returns a summary of the most relevant Wikipedia article.

    Args:
        query: The search term or question to look up
    """
    # Use Wikipedia's public REST API
    response = httpx.get(
        "https://en.wikipedia.org/api/rest_v1/page/summary/" + query.replace(" ", "_"),
        timeout=10.0,
    )
    if response.status_code == 200:
        data = response.json()
        return data.get("extract", "No summary available.")
    return f"Could not find Wikipedia article for: {query}"
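One caveat: the simple `query.replace(" ", "_")` above breaks for titles containing characters like `/` or `?`, which would corrupt the URL path. A more robust approach percent-encodes the title with the standard library. The `wiki_summary_url` helper below is a hypothetical name, not part of the lesson's code:

```python
from urllib.parse import quote

# Hypothetical helper: build a safely encoded Wikipedia REST API URL.
# Spaces become underscores (Wikipedia's title convention); everything
# else is percent-encoded so slashes can't escape the URL path segment.
def wiki_summary_url(title: str) -> str:
    encoded = quote(title.replace(" ", "_"), safe="")
    return "https://en.wikipedia.org/api/rest_v1/page/summary/" + encoded

print(wiki_summary_url("AC/DC"))
# https://en.wikipedia.org/api/rest_v1/page/summary/AC%2FDC
```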
@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression safely.

    Use this for arithmetic, percentages, unit conversions, and numeric calculations.

    Args:
        expression: A Python math expression like '2 ** 10' or '(100 * 1.15) / 12'
    """
    import ast
    import operator

    # Safe evaluation — only allow math operations
    allowed_operators = {
        ast.Add: operator.add,
        ast.Sub: operator.sub,
        ast.Mult: operator.mul,
        ast.Div: operator.truediv,
        ast.Pow: operator.pow,
        ast.USub: operator.neg,
    }

    def safe_eval(node):
        if isinstance(node, ast.Constant):
            return node.value
        elif isinstance(node, ast.BinOp):
            return allowed_operators[type(node.op)](safe_eval(node.left), safe_eval(node.right))
        elif isinstance(node, ast.UnaryOp):
            return allowed_operators[type(node.op)](safe_eval(node.operand))
        raise ValueError(f"Unsafe expression: {type(node).__name__}")

    try:
        tree = ast.parse(expression, mode="eval")
        result = safe_eval(tree.body)
        return f"{expression} = {result}"
    except Exception as e:
        return f"Calculation error: {e}"
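The AST-walking technique above doesn't depend on LangChain at all. Stripped of the @tool wrapper, the same evaluator can be exercised directly, which makes the safety property easy to verify (a minimal standalone sketch):

```python
import ast
import operator

# Whitelist of AST operator nodes mapped to their Python implementations.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(node):
    # Numeric literals pass through; anything else must be a whitelisted op.
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](safe_eval(node.left), safe_eval(node.right))
    if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
        return OPS[type(node.op)](safe_eval(node.operand))
    # Function calls, attribute access, names, etc. are all rejected here.
    raise ValueError(f"Unsafe expression: {type(node).__name__}")

print(safe_eval(ast.parse("2 ** 10", mode="eval").body))  # 1024
```

Crucially, an expression like `__import__('os')` parses to an `ast.Call` node, which falls through to the `ValueError` rather than executing — the whitelist is what makes this safer than Python's built-in `eval`.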
Building the Agent
LangChain provides create_tool_calling_agent, which implements a ReAct-style reasoning loop on top of the model's native tool-calling API:
# agent/core.py
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.agents import create_tool_calling_agent, AgentExecutor
from agent.tools.search import search_wikipedia, calculate

# 1. Initialize the model
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# 2. Define the system prompt with required placeholders
prompt = ChatPromptTemplate.from_messages([
    ("system", """You are a helpful research assistant.
You have access to tools for searching Wikipedia and performing calculations.
Always use tools when you need factual information — do not rely on your training data for facts.
When you have the information needed, provide a clear, well-structured answer."""),
    MessagesPlaceholder("chat_history", optional=True),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),  # Required for tool call history
])

# 3. Define available tools
tools = [search_wikipedia, calculate]

# 4. Create the agent
agent = create_tool_calling_agent(llm, tools, prompt)

# 5. Wrap in AgentExecutor (manages the reasoning loop)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,            # Print reasoning steps
    max_iterations=10,       # Guard against infinite loops
    handle_parsing_errors=True,
)

# 6. Run the agent
if __name__ == "__main__":
    result = agent_executor.invoke({
        "input": "What year was the transformer architecture introduced, and how many parameters did the original model have? Also calculate: if training costs $1000 per million tokens, what's the cost for 2.5 billion tokens?"
    })
    print("\n=== Final Answer ===")
    print(result["output"])
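It helps to see what AgentExecutor is doing for you: call the model, execute any tool it requests, feed the observation back, and repeat until the model produces a final answer or the iteration cap is hit. The framework-free sketch below illustrates that loop; `run_agent`, `fake_model`, and `calc_tool` are illustrative stand-ins, not LangChain APIs:

```python
# Illustrative sketch of the loop AgentExecutor manages. The "model" here is
# a scripted stand-in that calls one tool, reads the result, then answers.

def run_agent(model, tools, user_input, max_iterations=10):
    scratchpad = []  # accumulated (tool_call, observation) pairs
    for _ in range(max_iterations):
        step = model(user_input, scratchpad)
        if step["type"] == "final":
            return step["content"]
        # The model requested a tool: execute it and record the observation.
        observation = tools[step["tool"]](step["args"])
        scratchpad.append((step, observation))
    return "Stopped: max iterations reached"

def fake_model(user_input, scratchpad):
    if not scratchpad:  # first turn: decide to call the calculator
        return {"type": "tool", "tool": "calculate", "args": "1000 * 2500"}
    observation = scratchpad[-1][1]  # read the tool's result
    return {"type": "final", "content": f"The cost is ${observation}."}

def calc_tool(expression):
    # Toy calculator for the sketch — only handles 'a * b' with integers.
    a, b = expression.split(" * ")
    return str(int(a) * int(b))

print(run_agent(fake_model, {"calculate": calc_tool}, "training cost?"))
# The cost is $2500000.
```

The real AgentExecutor does the same thing, except the decisions come from the LLM's tool-calling output and the scratchpad is serialized into the `agent_scratchpad` placeholder in your prompt.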
Running the Agent
python -m agent.core
You'll see the agent's reasoning process printed in real time:
> Entering new AgentExecutor chain...
Invoking: `search_wikipedia` with `{'query': 'Transformer neural network architecture'}`
The Transformer is a deep learning model introduced in 2017...
Invoking: `calculate` with `{'expression': '1000 * 2500'}`
1000 * 2500 = 2500000
The transformer architecture was introduced in 2017 in the paper "Attention Is All You Need"...
The training cost would be $2,500,000 for 2.5 billion tokens.
> Finished chain.
Tool Call Best Practices
Write precise tool descriptions: The LLM reads your docstring to decide which tool to use. Ambiguous descriptions lead to wrong tool selection.
Always handle errors: Return informative error strings instead of raising exceptions — the agent can reason about an error string and try an alternative approach.
Validate inputs: Check arguments before acting on them and return a clear error message for bad input rather than crashing mid-call.
Keep tools focused: One tool, one purpose. An execute_anything tool is a security risk and an unclear interface. Specific, well-named tools are easier for models to use correctly.
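The practices above can be seen together in a single tool. The `read_text_file` function below is a hypothetical example, shown as a plain function for clarity; in the lesson's code it would additionally carry the @tool decorator. Note how every failure path returns an error string the agent can reason about:

```python
from pathlib import Path

# Hypothetical file-reading tool illustrating the best practices: a focused
# purpose, validated inputs, and error strings instead of raised exceptions.
def read_text_file(path: str) -> str:
    """Read a UTF-8 text file (.txt or .md) and return its contents."""
    p = Path(path)
    if p.suffix not in {".txt", ".md"}:
        return f"Error: only .txt and .md files are supported, got '{p.suffix or 'no extension'}'"
    if not p.is_file():
        return f"Error: no such file: {path}"
    try:
        text = p.read_text(encoding="utf-8")
    except UnicodeDecodeError:
        return f"Error: {path} is not valid UTF-8 text"
    return text[:10_000]  # truncate so huge files can't flood the context window

print(read_text_file("notes.pdf"))
# Error: only .txt and .md files are supported, got '.pdf'
```

Because errors come back as strings, the agent can observe "only .txt and .md files are supported" and decide to try a different file or tool, instead of the whole run dying on an exception.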
You now have a working LangChain agent with real tools. This is the foundation for everything in the rest of the course — more complex agents are built by adding more tools, more sophisticated prompts, and better memory management.