
# Function Calling and Tool Use: An LLM Integration Guide

A comprehensive technical guide to OpenAI function calling, Claude tool use, structured output generation, and integrating LLMs with external systems.

Veni AI Technical Team · January 3, 2025 · 5 min read

Function calling is a powerful capability that lets an LLM produce structured output and invoke external functions.

## What Is Function Calling?

```
Traditional:
User: "How is the weather in Istanbul?"
LLM: "I don't know the weather" (hallucination risk)

Function Calling:
User: "How is the weather in Istanbul?"
LLM: {"function": "get_weather", "args": {"city": "Istanbul"}}
System: get_weather("Istanbul") → 15°C
LLM: "The weather in Istanbul is 15°C"
```
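The flow above can be sketched end-to-end with stubbed pieces. The `get_weather` implementation and the hard-coded model output below are illustrative assumptions, not real API calls:

```python
import json

# Hypothetical local implementation of the tool (stub data for illustration).
def get_weather(city: str) -> str:
    readings = {"Istanbul": "15°C"}
    return readings.get(city, "unknown")

# Simulated model output: a structured call instead of a guessed answer.
model_output = '{"function": "get_weather", "args": {"city": "Istanbul"}}'
call = json.loads(model_output)

# The host system executes the call and feeds the result back to the model.
result = get_weather(**call["args"])
print(f"The weather in {call['args']['city']} is {result}")
```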

## OpenAI Function Calling

### Basic Usage

```python
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the weather for a specific city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "City name, e.g. Istanbul"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit"
                    }
                },
                "required": ["city"]
            }
        }
    }
]

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "How is the weather in Istanbul?"}],
    tools=tools,
    tool_choice="auto"
)
```

### Parallel Function Calling

Calling multiple functions at the same time:

```python
tools = [
    {"type": "function", "function": weather_function},
    {"type": "function", "function": stock_function},
    {"type": "function", "function": news_function}
]

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{
        "role": "user",
        "content": "Istanbul weather, TSLA stock and current news?"
    }],
    tools=tools,
    tool_choice="auto"
)

# The response can contain multiple tool calls
for tool_call in response.choices[0].message.tool_calls:
    print(f"Function: {tool_call.function.name}")
    print(f"Args: {tool_call.function.arguments}")
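Because parallel tool calls are independent of each other, the host application can execute them concurrently. A minimal sketch with `concurrent.futures`; the stub tool implementations and the simulated `(name, arguments)` pairs are assumptions for illustration:

```python
import json
from concurrent.futures import ThreadPoolExecutor

# Hypothetical local tool implementations (stub data).
def get_weather(city):
    return f"{city}: 15°C"

def get_stock_price(symbol):
    return f"{symbol}: $250"

FUNCTIONS = {"get_weather": get_weather, "get_stock_price": get_stock_price}

# Simulated parallel tool calls as (name, arguments-JSON) pairs.
tool_calls = [
    ("get_weather", '{"city": "Istanbul"}'),
    ("get_stock_price", '{"symbol": "TSLA"}'),
]

def run_call(call):
    name, raw_args = call
    return FUNCTIONS[name](**json.loads(raw_args))

# Independent calls can run in parallel threads; map preserves order.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(run_call, tool_calls))

print(results)  # ['Istanbul: 15°C', 'TSLA: $250']
```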

### Function Execution Loop

```python
import json

def execute_function_call(tool_call):
    name = tool_call.function.name
    args = json.loads(tool_call.function.arguments)

    if name == "get_weather":
        return get_weather(**args)
    elif name == "get_stock_price":
        return get_stock_price(**args)
    else:
        return f"Unknown function: {name}"

def chat_with_functions(user_message, tools):
    messages = [{"role": "user", "content": user_message}]

    while True:
        response = client.chat.completions.create(
            model="gpt-4-turbo",
            messages=messages,
            tools=tools
        )

        assistant_message = response.choices[0].message
        messages.append(assistant_message)

        if not assistant_message.tool_calls:
            # No tool call, exit loop
            return assistant_message.content

        # Execute functions
        for tool_call in assistant_message.tool_calls:
            result = execute_function_call(tool_call)
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": str(result)
            })
```

## Claude Tool Use

### Tool Use via the Anthropic API

```python
from anthropic import Anthropic

client = Anthropic()

tools = [
    {
        "name": "get_weather",
        "description": "Get weather for a city",
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "City name"
                }
            },
            "required": ["city"]
        }
    }
]

response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "How is the weather in Ankara?"}]
)

# Tool use response handling
if response.stop_reason == "tool_use":
    tool_use = next(
        block for block in response.content
        if block.type == "tool_use"
    )

    # Execute tool
    result = execute_tool(tool_use.name, tool_use.input)

    # Continue conversation (tools must be re-sent on every request)
    response = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1024,
        tools=tools,
        messages=[
            {"role": "user", "content": "How is the weather in Ankara?"},
            {"role": "assistant", "content": response.content},
            {
                "role": "user",
                "content": [
                    {
                        "type": "tool_result",
                        "tool_use_id": tool_use.id,
                        "content": result
                    }
                ]
            }
        ]
    )
```
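The Claude snippet assumes an `execute_tool` helper. A minimal dispatcher sketch; the tool registry and the stub weather data are illustrative assumptions:

```python
# Stub tool implementation for illustration.
def get_weather(city: str) -> str:
    return f"{city}: 12°C"

TOOLS = {"get_weather": get_weather}

def execute_tool(name: str, tool_input: dict) -> str:
    # tool_use.input arrives as a dict matching the tool's input_schema.
    if name not in TOOLS:
        return f"Unknown tool: {name}"
    return str(TOOLS[name](**tool_input))

print(execute_tool("get_weather", {"city": "Ankara"}))  # Ankara: 12°C
```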

## Structured Output (JSON Mode)

### OpenAI JSON Mode

```python
response = client.chat.completions.create(
    model="gpt-4-turbo",
    response_format={"type": "json_object"},
    messages=[
        {
            "role": "system",
            "content": "Respond in JSON format."
        },
        {
            "role": "user",
            "content": "Suggest 3 programming languages."
        }
    ]
)

data = json.loads(response.choices[0].message.content)
# e.g. {"languages": ["Python", "JavaScript", "Go"]}
```

### Validation with Pydantic

```python
import json
from typing import List

from pydantic import BaseModel

class ProgrammingLanguage(BaseModel):
    name: str
    use_case: str
    difficulty: str

class LanguageRecommendation(BaseModel):
    languages: List[ProgrammingLanguage]
    reasoning: str

def get_structured_response(prompt: str, model: type[BaseModel]):
    schema = model.model_json_schema()

    response = client.chat.completions.create(
        model="gpt-4-turbo",
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": f"Respond with JSON matching this schema: {json.dumps(schema)}"
            },
            {"role": "user", "content": prompt}
        ]
    )

    return model.model_validate_json(
        response.choices[0].message.content
    )
```

### The Instructor Library

A simple Pydantic + OpenAI integration:

```python
import instructor
from pydantic import BaseModel
from openai import OpenAI

client = instructor.patch(OpenAI())

class UserInfo(BaseModel):
    name: str
    age: int
    email: str

user = client.chat.completions.create(
    model="gpt-4-turbo",
    response_model=UserInfo,
    messages=[
        {"role": "user", "content": "John Doe, 25 years old, john@email.com"}
    ]
)

print(user.name)  # John Doe
print(user.age)   # 25
```

## Complex Tool Definitions

### Nested Parameters

```python
{
    "name": "create_calendar_event",
    "description": "Create a calendar event",
    "parameters": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "datetime": {
                "type": "object",
                "properties": {
                    "date": {"type": "string", "format": "date"},
                    "time": {"type": "string", "format": "time"},
                    "timezone": {"type": "string"}
                },
                "required": ["date", "time"]
            },
            "attendees": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "email": {"type": "string"},
                        "role": {"type": "string", "enum": ["required", "optional"]}
                    }
                }
            },
            "reminder": {
                "type": "object",
                "properties": {
                    "minutes_before": {"type": "integer"},
                    "method": {"type": "string", "enum": ["email", "popup"]}
                }
            }
        },
        "required": ["title", "datetime"]
    }
}
```
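Arguments for a nested schema like this arrive as a single JSON string; after `json.loads`, nested objects and arrays become plain dicts and lists. A sketch with hypothetical argument values:

```python
import json

# Hypothetical arguments a model might emit for create_calendar_event.
raw_args = """{
    "title": "Sprint planning",
    "datetime": {"date": "2025-01-10", "time": "14:00", "timezone": "Europe/Istanbul"},
    "attendees": [{"email": "ali@example.com", "role": "required"}],
    "reminder": {"minutes_before": 15, "method": "popup"}
}"""

args = json.loads(raw_args)

# Nested objects are regular dicts, arrays are regular lists.
print(args["datetime"]["date"])       # 2025-01-10
print(args["attendees"][0]["email"])  # ali@example.com
```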

## Error Handling

```python
import inspect
import json

class FunctionCallError(Exception):
    pass

def safe_execute_function(tool_call, available_functions):
    try:
        name = tool_call.function.name
        args = json.loads(tool_call.function.arguments)

        if name not in available_functions:
            raise FunctionCallError(f"Unknown function: {name}")

        # Parameter validation
        func = available_functions[name]
        sig = inspect.signature(func)

        for param in sig.parameters.values():
            if param.default is inspect.Parameter.empty:
                if param.name not in args:
                    raise FunctionCallError(
                        f"Missing required parameter: {param.name}"
                    )

        # Execute with a timeout ("timeout" is a user-supplied context manager)
        with timeout(30):
            result = func(**args)

        return {"success": True, "result": result}

    except json.JSONDecodeError as e:
        return {"success": False, "error": f"Invalid JSON: {e}"}
    except FunctionCallError as e:
        return {"success": False, "error": str(e)}
    except TimeoutError:
        return {"success": False, "error": "Function timeout"}
    except Exception as e:
        return {"success": False, "error": f"Execution error: {e}"}
```
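The `timeout(30)` context manager above is not part of the standard library. One way to implement it on Unix-like systems is with `signal.alarm` (a sketch; it only works in the main thread and with whole seconds):

```python
import signal
from contextlib import contextmanager

@contextmanager
def timeout(seconds: int):
    def handler(signum, frame):
        raise TimeoutError(f"Timed out after {seconds}s")

    old_handler = signal.signal(signal.SIGALRM, handler)
    signal.alarm(seconds)  # deliver SIGALRM after `seconds`
    try:
        yield
    finally:
        signal.alarm(0)  # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)
```

On Windows or inside worker threads, running the function through a `ThreadPoolExecutor` and calling `future.result(timeout=...)` is a more portable alternative.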

## Best Practices

### 1. Clear Descriptions

```python
# Bad
{"name": "search", "description": "Searches"}

# Good
{
    "name": "search_products",
    "description": "Searches the e-commerce product database. "
                   "Can search by product name, category or brand. "
                   "Returns a maximum of 20 results."
}
```

### 2. Use Enums

```python
"payment_method": {
    "type": "string",
    "enum": ["credit_card", "bank_transfer", "crypto"],
    "description": "Payment method"
}
```

### 3. Default Values

```python
"limit": {
    "type": "integer",
    "default": 10,
    "description": "Result limit (default: 10)"
}
```
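Note that `default` in JSON Schema is informational: the model may simply omit the field, so the caller typically applies defaults before executing the function. A small sketch; the `SCHEMA_DEFAULTS` mapping is an illustrative assumption:

```python
import json

# Defaults extracted from the tool's parameter schema (assumed mapping).
SCHEMA_DEFAULTS = {"limit": 10}

def apply_defaults(arguments_json: str, defaults: dict) -> dict:
    args = json.loads(arguments_json)
    # Explicit arguments from the model win over schema defaults.
    return {**defaults, **args}

print(apply_defaults('{"query": "laptop"}', SCHEMA_DEFAULTS))
# {'limit': 10, 'query': 'laptop'}
```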

## Conclusion

Function calling turns an LLM into a powerful automation tool. With well-designed schemas and robust error handling, you can build reliable integrations.

Veni AI develops solutions built on function calling.
