Tool Use in Agents

Function Calling Fundamentals

Function calling (also called tool use) is the API feature that transforms LLMs from text generators into agents capable of interacting with the real world. Instead of generating free-form text describing what action to take, the model returns a structured function call object that your code can reliably execute.

The Function Calling Flow

1. You send: messages + list of available functions (their schemas)
2. Model decides: should I call a function or generate text?
3. If calling: model returns finish_reason="tool_calls" with function name + JSON arguments
4. You execute: run the function with those arguments
5. You send back: the function result as a tool message
6. Model continues: uses the result to inform its next response
7. Repeat until: model generates a final text response (finish_reason="stop")
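
Steps 3–5 can be seen in miniature without calling the API. Below is a sketch with a hypothetical tool-call payload shaped like the object the API returns (the `id`, arguments, and result values are made up):

```python
import json

# Step 3: a hypothetical tool-call payload, shaped like the API's response
tool_call = {
    "id": "call_abc123",
    "type": "function",
    "function": {
        "name": "get_weather",
        "arguments": '{"city": "London", "unit": "celsius"}',
    },
}

# Step 4: arguments arrive as a JSON *string* and must be parsed before use
args = json.loads(tool_call["function"]["arguments"])

# Step 5: the result goes back as a "tool" message tied to the call id
tool_message = {
    "role": "tool",
    "tool_call_id": tool_call["id"],
    "content": json.dumps({"temperature": 18, "condition": "Partly cloudy"}),
}
```

Note that the result is serialized back to a string: tool messages carry text content, not raw Python objects.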

Defining Function Schemas

A function schema is a JSON Schema description of your function — its name, purpose, and parameters:

from openai import OpenAI
import json

client = OpenAI()

# Define the available functions
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather conditions for a city. Use this when the user asks about weather.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city name, e.g. 'London' or 'New York'"
                    },
                    "unit": {
                        "type": ["string", "null"],
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit. Pass null for the default (celsius)."
                    }
                },
                "required": ["city", "unit"],
                "additionalProperties": False
            },
            "strict": True  # Strict mode: every property must appear in
                            # "required"; optional fields use a null type union
        }
    },
    {
        "type": "function",
        "function": {
            "name": "create_calendar_event",
            "description": "Create a calendar event. Use when user wants to schedule something.",
            "parameters": {
                "type": "object",
                "properties": {
                    "title": {"type": "string", "description": "Event title"},
                    "date": {"type": "string", "description": "ISO 8601 date, e.g. '2025-03-15'"},
                    "time": {"type": "string", "description": "Time in HH:MM format, e.g. '14:30'"},
                    "duration_minutes": {
                        "type": ["integer", "null"],
                        "description": "Duration in minutes (null for the default of 60)"
                    },
                    "attendees": {
                        "type": ["array", "null"],
                        "items": {"type": "string", "format": "email"},
                        "description": "Email addresses of attendees"
                    }
                },
                "required": ["title", "date", "time", "duration_minutes", "attendees"],
                "additionalProperties": False
            },
            "strict": True
        }
    }
]
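
Writing schemas by hand gets tedious as the tool count grows; libraries such as Pydantic can generate them from models, but even the standard library's `inspect` module shows the idea. A minimal sketch (`schema_from_function` is a hypothetical helper that supports only a few scalar types and marks defaulted parameters as optional — not strict-mode-ready):

```python
import inspect

def schema_from_function(func, description: str) -> dict:
    """Build a minimal tool schema from a function's type hints.

    A sketch: only str/int/float/bool annotations are mapped, and
    parameters without defaults are treated as required.
    """
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    props, required = {}, []
    for name, param in inspect.signature(func).parameters.items():
        props[name] = {"type": type_map.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "type": "function",
        "function": {
            "name": func.__name__,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": props,
                "required": required,
                "additionalProperties": False,
            },
        },
    }

# Stub with the same signature as the weather tool above
def get_weather(city: str, unit: str = "celsius") -> dict: ...

tool = schema_from_function(get_weather, "Get current weather for a city.")
```

Keeping the schema next to the implementation this way prevents the two from drifting apart.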

The Complete Tool Use Loop

def get_weather(city: str, unit: str = "celsius") -> dict:
    """Actual implementation of the weather function."""
    # In production: call a real weather API
    return {
        "city": city,
        "temperature": 18 if unit == "celsius" else 64,
        "unit": unit,
        "condition": "Partly cloudy",
        "humidity": 65,
    }

def create_calendar_event(title: str, date: str, time: str, 
                          duration_minutes: int = 60, 
                          attendees: list[str] | None = None) -> dict:
    """Actual implementation of event creation."""
    return {
        "success": True,
        "event_id": "evt_12345",
        "title": title,
        "scheduled": f"{date} at {time}",
    }

# Map function names to implementations
FUNCTION_MAP = {
    "get_weather": get_weather,
    "create_calendar_event": create_calendar_event,
}

def run_tool_loop(user_message: str) -> str:
    """Run the complete function-calling agent loop."""
    messages = [{"role": "user", "content": user_message}]
    
    while True:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            tools=tools,
            tool_choice="auto",  # Let model decide when to use tools
        )
        
        message = response.choices[0].message
        messages.append(message)  # Add assistant message to history
        
        if response.choices[0].finish_reason != "tool_calls":
            # "stop" means a final text response; other finish reasons
            # (e.g. "length") also end the loop instead of spinning forever
            return message.content
        
        # Execute all requested tool calls
        for tool_call in message.tool_calls:
            func_name = tool_call.function.name
            func_args = json.loads(tool_call.function.arguments)
            # Strict schemas send explicit nulls for optional fields;
            # drop them so the Python defaults apply
            func_args = {k: v for k, v in func_args.items() if v is not None}
            
            # Execute the function
            if func_name in FUNCTION_MAP:
                result = FUNCTION_MAP[func_name](**func_args)
            else:
                result = {"error": f"Unknown function: {func_name}"}
            
            # Add tool result to message history
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(result)
            })

# Test it
result = run_tool_loop("What's the weather in Tokyo? Also schedule a team sync for March 20 at 10am for 45 minutes.")
print(result)
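
The loop above returns an error dict for unknown function names, but an exception raised inside a tool would still crash the whole agent. A defensive wrapper (a sketch — `safe_execute` is a hypothetical helper, not part of the SDK) keeps failures inside the conversation so the model can see the error and recover:

```python
import json

def safe_execute(func_name: str, func_args: dict, function_map: dict) -> str:
    """Execute a tool call defensively; always return a JSON string
    suitable for a tool message. A sketch: production code might also
    log the failure and truncate oversized results."""
    if func_name not in function_map:
        return json.dumps({"error": f"Unknown function: {func_name}"})
    try:
        result = function_map[func_name](**func_args)
        return json.dumps(result)
    except TypeError as e:
        # The model sent bad or missing arguments
        return json.dumps({"error": f"Invalid arguments: {e}"})
    except Exception as e:
        # The tool itself failed; surface the error to the model
        return json.dumps({"error": f"{type(e).__name__}: {e}"})
```

Returning the error as the tool result, rather than raising, lets the model retry with corrected arguments or apologize gracefully.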

Parallel Tool Calls

Modern models can request multiple tool calls in a single turn when the calls are independent:

# The model returns multiple tool_calls for independent operations
# Execute them concurrently for better performance
import asyncio

async def execute_tool_calls_parallel(tool_calls: list) -> list[dict]:
    """Execute multiple tool calls concurrently."""
    async def execute_one(tool_call) -> dict:
        func_name = tool_call.function.name
        func_args = json.loads(tool_call.function.arguments)
        
        # Run sync functions in a thread pool to avoid blocking the event loop
        loop = asyncio.get_running_loop()
        result = await loop.run_in_executor(
            None,  # Default thread pool
            lambda: FUNCTION_MAP[func_name](**func_args)
        )
        return {
            "tool_call_id": tool_call.id,
            "result": result
        }
    
    return await asyncio.gather(*[execute_one(tc) for tc in tool_calls])
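
The pattern can be exercised without the API. A self-contained sketch that uses `types.SimpleNamespace` to stand in for the SDK's tool-call objects (the function map and city values are made up):

```python
import asyncio
import json
from types import SimpleNamespace

# Hypothetical tool registry for the demo
FUNCTIONS = {
    "get_weather": lambda city: {"city": city, "temperature": 18},
}

async def run_parallel(tool_calls):
    loop = asyncio.get_running_loop()

    async def one(tc):
        args = json.loads(tc.function.arguments)
        # Offload the sync tool to the default thread pool
        result = await loop.run_in_executor(
            None, lambda: FUNCTIONS[tc.function.name](**args)
        )
        return {"tool_call_id": tc.id, "result": result}

    return await asyncio.gather(*[one(tc) for tc in tool_calls])

# Fake tool calls shaped like the SDK's objects
calls = [
    SimpleNamespace(id=f"call_{i}",
                    function=SimpleNamespace(name="get_weather",
                                             arguments=json.dumps({"city": c})))
    for i, c in enumerate(["Tokyo", "Paris"])
]
results = asyncio.run(run_parallel(calls))
```

Because `gather` preserves input order, each result can be matched back to its `tool_call_id` when building the tool messages.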

Tool Choice Control

# Modes for tool_choice parameter:
tool_choice = "auto"      # Model decides (most common)
tool_choice = "none"      # Never use tools (force text response)
tool_choice = "required"  # Must use at least one tool
tool_choice = {           # Force specific function
    "type": "function",
    "function": {"name": "get_weather"}
}

Mastering function calling is the foundation for all agent work. Every subsequent pattern — RAG, memory systems, multi-agent orchestration — builds on this core tool use mechanism.

Tool Use — Check Your Understanding

3 questions · passing score 70%

  1. In OpenAI function calling, how does the model signal it wants to call a tool?

  2. What is a tool schema (function definition) used for?

  3. What is 'tool call parallelism' and when is it beneficial?
