
Function Calling and Tool Use: LLM Integration Guide

Comprehensive technical guide for OpenAI function calling, Claude tool use, structured output generation, and integrating LLMs with external systems.

Veni AI Technical Team · January 3, 2025 · 5 min read

Function calling is a powerful capability that lets an LLM emit structured requests for external functions instead of guessing at answers it cannot know.

What is Function Calling?

```
Traditional:
User: "How is the weather in Istanbul?"
LLM: "I don't know the weather" (hallucination risk)

Function Calling:
User: "How is the weather in Istanbul?"
LLM: {"function": "get_weather", "args": {"city": "Istanbul"}}
System: get_weather("Istanbul") → 15°C
LLM: "The weather in Istanbul is 15°C"
```
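The flow above can be sketched as a minimal local dispatch loop. Everything here is illustrative — `get_weather` is a placeholder for a real weather API, and `dispatch` stands in for the glue code your application would own:

```python
import json

# Hypothetical local tool, standing in for a real weather API call.
def get_weather(city: str) -> str:
    return f"The weather in {city} is 15°C"

TOOLS = {"get_weather": get_weather}

def dispatch(model_output: str) -> str:
    """Parse the model's structured request and run the matching function."""
    request = json.loads(model_output)
    func = TOOLS[request["function"]]
    return func(**request["args"])

dispatch('{"function": "get_weather", "args": {"city": "Istanbul"}}')
```

The model never executes anything itself; it only produces the JSON request, and your system decides whether and how to run it.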

OpenAI Function Calling

Basic Usage

```python
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the weather for a specific city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "City name, e.g. Istanbul"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit"
                    }
                },
                "required": ["city"]
            }
        }
    }
]

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "How is the weather in Istanbul?"}],
    tools=tools,
    tool_choice="auto"
)
```

Parallel Function Calling

The model can request several functions in a single response:

```python
tools = [
    {"type": "function", "function": weather_function},
    {"type": "function", "function": stock_function},
    {"type": "function", "function": news_function}
]

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{
        "role": "user",
        "content": "Istanbul weather, TSLA stock and current news?"
    }],
    tools=tools,
    tool_choice="auto"
)

# The response can contain multiple tool calls
for tool_call in response.choices[0].message.tool_calls:
    print(f"Function: {tool_call.function.name}")
    print(f"Args: {tool_call.function.arguments}")
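Since parallel tool calls are independent of one another, they can also be executed concurrently. A stdlib sketch using `concurrent.futures` — the two tool implementations here are local placeholders, not real APIs:

```python
import json
from concurrent.futures import ThreadPoolExecutor

# Placeholder implementations standing in for real API-backed tools.
def get_weather(city):
    return f"{city}: 15°C"

def get_stock_price(symbol):
    return f"{symbol}: $242.50"

AVAILABLE = {"get_weather": get_weather, "get_stock_price": get_stock_price}

def run_tool_calls(tool_calls):
    """tool_calls: list of (name, json_args) pairs, e.g. extracted from message.tool_calls."""
    def run_one(call):
        name, raw_args = call
        return AVAILABLE[name](**json.loads(raw_args))

    # Threads are a reasonable fit here because tool calls are usually I/O-bound.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_one, tool_calls))

results = run_tool_calls([
    ("get_weather", '{"city": "Istanbul"}'),
    ("get_stock_price", '{"symbol": "TSLA"}'),
])
```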

Function Execution Loop

```python
import json

def execute_function_call(tool_call):
    name = tool_call.function.name
    args = json.loads(tool_call.function.arguments)

    if name == "get_weather":
        return get_weather(**args)
    elif name == "get_stock_price":
        return get_stock_price(**args)
    else:
        return f"Unknown function: {name}"

def chat_with_functions(user_message, tools):
    messages = [{"role": "user", "content": user_message}]

    while True:
        response = client.chat.completions.create(
            model="gpt-4-turbo",
            messages=messages,
            tools=tools
        )

        assistant_message = response.choices[0].message
        messages.append(assistant_message)

        if not assistant_message.tool_calls:
            # No tool calls, exit the loop
            return assistant_message.content

        # Execute each requested function and append its result
        for tool_call in assistant_message.tool_calls:
            result = execute_function_call(tool_call)
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": str(result)
            })
```
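The if/elif dispatch grows with every new tool. A dictionary registry populated by a decorator keeps it flat — a sketch, where `get_weather` is again a local placeholder:

```python
import json

FUNCTION_REGISTRY = {}

def register(func):
    """Decorator that adds a function to the dispatch table under its own name."""
    FUNCTION_REGISTRY[func.__name__] = func
    return func

@register
def get_weather(city, unit="celsius"):
    # Placeholder for a real API call.
    return f"{city}: 15 {unit}"

def dispatch_call(name, arguments_json):
    """Look up the requested function and invoke it with the parsed arguments."""
    args = json.loads(arguments_json)
    func = FUNCTION_REGISTRY.get(name)
    if func is None:
        return f"Unknown function: {name}"
    return func(**args)
```

Adding a tool is then just defining a function with `@register`; no dispatch code needs to change.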

Claude Tool Use

Tool Use with Anthropic API

```python
from anthropic import Anthropic

client = Anthropic()

tools = [
    {
        "name": "get_weather",
        "description": "Get weather for a city",
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "City name"
                }
            },
            "required": ["city"]
        }
    }
]

response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "How is the weather in Ankara?"}]
)

# Handle a tool use response
if response.stop_reason == "tool_use":
    tool_use = next(
        block for block in response.content
        if block.type == "tool_use"
    )

    # Execute the tool
    result = execute_tool(tool_use.name, tool_use.input)

    # Continue the conversation with the tool result
    response = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1024,
        tools=tools,  # pass the tools again on the follow-up call
        messages=[
            {"role": "user", "content": "How is the weather in Ankara?"},
            {"role": "assistant", "content": response.content},
            {
                "role": "user",
                "content": [
                    {
                        "type": "tool_result",
                        "tool_use_id": tool_use.id,
                        "content": str(result)
                    }
                ]
            }
        ]
    )
```

Structured Output (JSON Mode)

OpenAI JSON Mode

```python
import json

response = client.chat.completions.create(
    model="gpt-4-turbo",
    response_format={"type": "json_object"},
    messages=[
        {
            "role": "system",
            "content": "Respond in JSON format."
        },
        {
            "role": "user",
            "content": "Suggest 3 programming languages."
        }
    ]
)

data = json.loads(response.choices[0].message.content)
# e.g. {"languages": ["Python", "JavaScript", "Go"]}
```
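Even with JSON mode it is worth parsing defensively: model output occasionally arrives wrapped in a markdown code fence. A small stdlib helper for that case (a defensive sketch, not part of the OpenAI SDK):

```python
import json

FENCE = "`" * 3  # markdown code fence delimiter

def parse_json_response(text: str) -> dict:
    """Strip an optional markdown code fence before parsing the model's reply."""
    cleaned = text.strip()
    if cleaned.startswith(FENCE):
        cleaned = cleaned.split("\n", 1)[1]    # drop the opening fence line (e.g. ```json)
        cleaned = cleaned.rsplit(FENCE, 1)[0]  # drop the closing fence
    return json.loads(cleaned)
```

Plain JSON passes through untouched, so the helper is safe to apply unconditionally.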

Validation with Pydantic

```python
import json
from typing import List, Type

from pydantic import BaseModel

class ProgrammingLanguage(BaseModel):
    name: str
    use_case: str
    difficulty: str

class LanguageRecommendation(BaseModel):
    languages: List[ProgrammingLanguage]
    reasoning: str

def get_structured_response(prompt: str, model: Type[BaseModel]):
    schema = model.model_json_schema()

    response = client.chat.completions.create(
        model="gpt-4-turbo",
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": f"Respond with JSON matching this schema: {json.dumps(schema)}"
            },
            {"role": "user", "content": prompt}
        ]
    )

    return model.model_validate_json(
        response.choices[0].message.content
    )
```

Instructor Library

The instructor library makes Pydantic + OpenAI integration straightforward:

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel

client = instructor.patch(OpenAI())

class UserInfo(BaseModel):
    name: str
    age: int
    email: str

user = client.chat.completions.create(
    model="gpt-4-turbo",
    response_model=UserInfo,
    messages=[
        {"role": "user", "content": "John Doe, 25 years old, john@email.com"}
    ]
)

print(user.name)  # John Doe
print(user.age)   # 25
```

Complex Tool Definitions

Nested Parameters

```json
{
    "name": "create_calendar_event",
    "description": "Create a calendar event",
    "parameters": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "datetime": {
                "type": "object",
                "properties": {
                    "date": {"type": "string", "format": "date"},
                    "time": {"type": "string", "format": "time"},
                    "timezone": {"type": "string"}
                },
                "required": ["date", "time"]
            },
            "attendees": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "email": {"type": "string"},
                        "role": {"type": "string", "enum": ["required", "optional"]}
                    }
                }
            },
            "reminder": {
                "type": "object",
                "properties": {
                    "minutes_before": {"type": "integer"},
                    "method": {"type": "string", "enum": ["email", "popup"]}
                }
            }
        },
        "required": ["title", "datetime"]
    }
}
```
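With nested schemas it is easy for required sub-fields to go missing from the model's arguments. A small recursive checker over the schema's `required` lists — a simplified stdlib sketch; a full validator such as the `jsonschema` library covers far more cases:

```python
def missing_required(schema: dict, data: dict, path: str = "") -> list:
    """Collect dotted paths of required properties absent from data."""
    missing = []
    props = schema.get("properties", {})
    for name in schema.get("required", []):
        if name not in data:
            missing.append(f"{path}{name}")
    # Recurse into nested object properties that were provided.
    for name, sub in props.items():
        if sub.get("type") == "object" and isinstance(data.get(name), dict):
            missing += missing_required(sub, data[name], f"{path}{name}.")
    return missing

# Trimmed version of the calendar-event schema above, for illustration.
event_schema = {
    "type": "object",
    "required": ["title", "datetime"],
    "properties": {
        "title": {"type": "string"},
        "datetime": {
            "type": "object",
            "required": ["date", "time"],
            "properties": {"date": {"type": "string"}, "time": {"type": "string"}},
        },
    },
}

missing_required(event_schema, {"title": "Standup", "datetime": {"date": "2025-01-10"}})
# → ["datetime.time"]
```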

Error Handling

```python
import inspect
import json

class FunctionCallError(Exception):
    pass

def safe_execute_function(tool_call, available_functions):
    try:
        name = tool_call.function.name
        args = json.loads(tool_call.function.arguments)

        if name not in available_functions:
            raise FunctionCallError(f"Unknown function: {name}")

        # Parameter validation: every parameter without a default must be present
        func = available_functions[name]
        sig = inspect.signature(func)

        for param in sig.parameters.values():
            if param.default is inspect.Parameter.empty:
                if param.name not in args:
                    raise FunctionCallError(
                        f"Missing required parameter: {param.name}"
                    )

        # Execute with a timeout (timeout is a custom context manager, not stdlib)
        with timeout(30):
            result = func(**args)

        return {"success": True, "result": result}

    except json.JSONDecodeError as e:
        return {"success": False, "error": f"Invalid JSON: {e}"}
    except FunctionCallError as e:
        return {"success": False, "error": str(e)}
    except TimeoutError:
        return {"success": False, "error": "Function timeout"}
    except Exception as e:
        return {"success": False, "error": f"Execution error: {e}"}
```
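The `timeout(...)` context manager used above is not part of the standard library. One way to implement it on Unix is with `signal.alarm` — a sketch with real limitations: it only works in the main thread and has one-second granularity:

```python
import signal
from contextlib import contextmanager

@contextmanager
def timeout(seconds: int):
    """Raise TimeoutError if the wrapped block runs longer than `seconds`.

    Unix only, main thread only, whole-second resolution.
    """
    def handler(signum, frame):
        raise TimeoutError(f"Timed out after {seconds}s")

    old_handler = signal.signal(signal.SIGALRM, handler)
    signal.alarm(seconds)
    try:
        yield
    finally:
        signal.alarm(0)  # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)
```

For cross-platform or multi-threaded code, running the function in a separate process (e.g. `concurrent.futures.ProcessPoolExecutor` with a result timeout) is a more robust alternative.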

Best Practices

1. Clear Descriptions

```python
# Bad
{"name": "search", "description": "Searches"}

# Good
{
    "name": "search_products",
    "description": "Searches the e-commerce product database. "
                   "Can search by product name, category or brand. "
                   "Returns at most 20 results."
}
```

2. Enum Usage

```json
"payment_method": {
    "type": "string",
    "enum": ["credit_card", "bank_transfer", "crypto"],
    "description": "Payment method"
}
```

3. Default Values

```json
"limit": {
    "type": "integer",
    "default": 10,
    "description": "Result limit (default: 10)"
}
```
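Note that JSON Schema `default` values are informational; the model may still omit the field entirely, so it is safest to apply defaults yourself after parsing the arguments. A stdlib sketch (the `search_schema` here is illustrative):

```python
def apply_defaults(schema: dict, args: dict) -> dict:
    """Fill in any properties the model omitted using their schema defaults."""
    filled = dict(args)
    for name, prop in schema.get("properties", {}).items():
        if name not in filled and "default" in prop:
            filled[name] = prop["default"]
    return filled

search_schema = {
    "type": "object",
    "properties": {
        "query": {"type": "string"},
        "limit": {"type": "integer", "default": 10},
    },
    "required": ["query"],
}

apply_defaults(search_schema, {"query": "laptop"})
# → {"query": "laptop", "limit": 10}
```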

Conclusion

Function calling turns LLMs into powerful automation tools. With careful schema design and robust error handling, you can build reliable integrations between models and external systems.

At Veni AI, we develop solutions based on function calling.
