A basic agent tool-use pattern using OpenAI function calling, in ~100 lines of Python:
import json
import math
from openai import OpenAI
client = OpenAI()
# ───── 1. DEFINE TOOLS (JSON schema) ─────
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "calculator",
            "description": "Evaluate a math expression. Supports +, -, *, /, **, sqrt, sin, cos, log, pi, e. Use for any arithmetic or math.",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "Python math expression, e.g. 'sqrt(2) * 15' or '(100 + 50) / 3'",
                    }
                },
                "required": ["expression"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "web_search",
            "description": "Search the web for current information. Use for recent events, prices, weather, news, or anything past your knowledge cutoff.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string"},
                    "max_results": {"type": "integer", "default": 3, "maximum": 10},
                },
                "required": ["query"],
            },
        },
    },
]
# ───── 2. IMPLEMENT TOOLS ─────
def tool_calculator(expression: str) -> str:
    """Safe-ish eval. Production: use a sandboxed interpreter."""
    allowed = {k: getattr(math, k) for k in dir(math) if not k.startswith("_")}
    try:
        result = eval(expression, {"__builtins__": {}}, allowed)
        return f"Result: {result}"
    except Exception as e:
        return f"Error: {e}"

def tool_web_search(query: str, max_results: int = 3) -> str:
    """Stub. Production: use Tavily, Serper, or the Brave Search API."""
    # Example with Tavily:
    # from tavily import TavilyClient
    # results = TavilyClient(api_key).search(query, max_results=max_results)
    return json.dumps([
        {"title": "Demo", "url": "https://example.com",
         "snippet": f"Fake result for: {query}"}
    ])

TOOL_IMPLS = {
    "calculator": lambda args: tool_calculator(args["expression"]),
    "web_search": lambda args: tool_web_search(
        args["query"], args.get("max_results", 3)
    ),
}
# ───── 3. AGENT LOOP ─────
SYSTEM_PROMPT = """You are a research assistant. Use tools when you need:
- Math/calculations → calculator
- Current info / facts you're unsure about → web_search
Always cite which tool you used. If user asks a simple question you know, answer directly."""
def run_agent(user_query: str, max_iterations: int = 10) -> str:
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]
    for step in range(max_iterations):
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            tools=TOOLS,
            tool_choice="auto",
            temperature=0,
        )
        msg = response.choices[0].message
        messages.append(msg.model_dump(exclude_none=True))
        # No tool called → final answer
        if not msg.tool_calls:
            return msg.content or "(no response)"
        # Execute each tool call, append results
        for tc in msg.tool_calls:
            name = tc.function.name
            args = json.loads(tc.function.arguments)
            print(f"[step {step+1}] Calling {name}({args})")
            impl = TOOL_IMPLS.get(name)
            result = impl(args) if impl else f"Unknown tool: {name}"
            messages.append({
                "role": "tool",
                "tool_call_id": tc.id,
                "content": str(result),
            })
    return "Agent exceeded max iterations without final answer."
# ───── 4. USAGE ─────
if __name__ == "__main__":
    q = "What was Vietnam's GDP in 2023? If split evenly among 100 million people, how much USD does each person get?"
    print(run_agent(q))

Example workflow trace:
step 1: Calling web_search({'query': 'Vietnam GDP 2023'})
→ "Vietnam GDP 2023: $430 billion USD"
step 2: Calling calculator({'expression': '430e9 / 100e6'})
→ "Result: 4300.0"
Final: "Vietnam's 2023 GDP was about 430 billion USD [web_search].
Divided among 100 million people: about 4,300 USD per person [calculator]."

Production upgrades:
1. Calculator safety: use simpleeval or asteval instead of eval, or a Docker sandbox for code execution.
2. Real web search:
   - Tavily (AI-first, built to feed agent context).
   - Serper (Google wrapper, cheap).
   - Brave Search API.
   - Exa (semantic search).
   - Perplexity Sonar (LLM-enhanced search).
3. Error handling: try/except around each tool call, retry with backoff.
4. Streaming: stream both tool calls and the final response.
5. Tool limits: max_tool_calls_per_tool and max_tokens_per_session to prevent runaway loops.
6. Observability: log every tool call with Langfuse/LangSmith.
7. Parallel tool calls: OpenAI supports multiple tool calls in a single step; running them concurrently saves latency.
8. Tool design: clear descriptions with example usage; keep the tool count small (<20) so the model picks the right one.
9. Fallback: if a tool fails repeatedly, the model should report back to the user instead of looping forever.
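For upgrade 1, if adding a dependency like simpleeval or asteval is not an option, a stdlib-only alternative is to parse the expression with the `ast` module and whitelist node types before evaluating. This is a minimal sketch, not a hardened sandbox; the names `SAFE_NODES` and `safe_eval` are illustrative, not from any library:

```python
import ast
import math

# Whitelist of AST node types the evaluator will accept.
SAFE_NODES = (
    ast.Expression, ast.BinOp, ast.UnaryOp, ast.Call, ast.Name,
    ast.Constant, ast.Load, ast.Add, ast.Sub, ast.Mult, ast.Div,
    ast.Pow, ast.Mod, ast.FloorDiv, ast.USub, ast.UAdd,
)
# Only math-module names may be referenced.
ALLOWED_NAMES = {k: getattr(math, k) for k in dir(math) if not k.startswith("_")}

def safe_eval(expression: str):
    """Evaluate a math expression, rejecting any disallowed syntax or name."""
    tree = ast.parse(expression, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, SAFE_NODES):
            raise ValueError(f"Disallowed syntax: {type(node).__name__}")
        if isinstance(node, ast.Name) and node.id not in ALLOWED_NAMES:
            raise ValueError(f"Unknown name: {node.id}")
    return eval(compile(tree, "<expr>", "eval"), {"__builtins__": {}}, ALLOWED_NAMES)

print(safe_eval("(100 + 50) / 3"))  # 50.0
# safe_eval("__import__('os')") and safe_eval("().__class__") both raise
# ValueError before anything is evaluated, unlike the bare eval above.
```

Because attribute access (`ast.Attribute`) and subscripts are not whitelisted, the usual eval-escape tricks are rejected at parse time rather than relying on an empty `__builtins__`.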
Frameworks as an alternative to hand-rolling this (production):
- LangChain AgentExecutor / LangGraph: graph-based agents.
- LlamaIndex Agent: ReAct + tools.
- CrewAI: role-based multi-agent.
- AutoGen: conversational multi-agent.
- Vercel AI SDK useChat + tools: Next.js friendly.
- OpenAI Assistants API: managed.
- Claude Tool Use (Anthropic): Anthropic native.
MCP integration: instead of hard-coding tools, expose them via an MCP server, so the agent can connect to any MCP tool (filesystem, postgres, github, slack).
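Upgrade 3 from the list above (error handling with retry and backoff) can be sketched as a small wrapper around the tool dispatch. `execute_with_retry` is an illustrative name, not part of any SDK; it slots in where the agent loop currently calls `impl(args)` directly:

```python
import time

def execute_with_retry(impl, args, max_retries=3, base_delay=1.0):
    """Call a tool implementation, retrying transient failures with
    exponential backoff. On final failure, return an error string so the
    model can report back to the user instead of the loop crashing."""
    for attempt in range(max_retries):
        try:
            return impl(args)
        except Exception as e:
            if attempt == max_retries - 1:
                return f"Error after {max_retries} attempts: {e}"
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# In the agent loop, replace
#   result = impl(args) if impl else f"Unknown tool: {name}"
# with
#   result = execute_with_retry(impl, args) if impl else f"Unknown tool: {name}"
```

Returning an error string rather than raising doubles as the fallback behavior in upgrade 9: the error lands in the `tool` message, so the model sees the failure and can tell the user instead of looping.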