Supercharging Your Chatbot with Tools (Function Calling) - Part 2
In Part 1, we built a basic chatbot. While impressive, its capabilities were limited to the knowledge baked into the LLM. To make our chatbot truly powerful, we need to give it the ability to interact with the outside world or perform specific actions. This is where "tools" (often referred to as function calling) come in.

Technologies Used
The Vercel AI SDK provides excellent support for integrating tools into your chat flows. In this post, we'll modify our chatbot to:
- Define a "tool" for our LLM to use (e.g., a mock weather-fetching function).
- Handle the LLM's request to use a tool.
- Execute the tool and return the result to the LLM to continue the conversation.
Prerequisites
- Completion of Part 1 or a similar setup.
- An LLM that supports function calling/tool usage (e.g., OpenAI's gpt-3.5-turbo, gpt-4o, or newer).
Understanding Tools/Function Calling
Tools allow an LLM to indicate that it needs to call a predefined function you've made available to it. The flow is generally:
- User sends a message.
- LLM analyzes the message and, if appropriate, decides to use one of its available tools.
- Instead of a text response, the LLM responds with a "tool call" request, specifying the tool name and arguments.
- Your application code receives this request and executes the actual function (the "tool").
- The result of the function execution is sent back to the LLM as a new message.
- The LLM uses this result to formulate its final text response to the user.
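To make this round trip concrete, here is a simplified, illustrative sketch of the messages exchanged. The exact field names vary by provider and SDK version, so treat it as the shape of the flow rather than an exact payload:
// Illustrative only: one tool-call round trip, not exact SDK payloads.
const exampleFlow = [
  // 1. The user asks something the model can't answer on its own.
  { role: 'user', content: "What's the weather in London?" },
  // 2. Instead of text, the model emits a tool call: a tool name plus JSON arguments.
  { role: 'assistant', toolCalls: [{ toolName: 'getWeather', args: { city: 'London' } }] },
  // 3. Your code runs the real function and returns its result as a tool message.
  { role: 'tool', content: [{ toolName: 'getWeather', result: { temperature: 12, condition: 'Rainy' } }] },
  // 4. The model reads the result and produces the final answer for the user.
  { role: 'assistant', content: "It's currently 12°C and rainy in London." },
];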
Step 1: Defining a Tool (Client-Side or Shared)
Let's define a simple tool that simulates fetching the weather for a given city. We'll use zod for schema validation, which integrates nicely with the Vercel AI SDK.
Install zod:
npm install zod
# or
yarn add zod
Now, let's define our tool. This can be in a shared utility file or directly in your API route, but for clarity, let's put it in lib/tools.ts:
// lib/tools.ts (or directly in your API route)
import { z } from 'zod';
import { tool } from 'ai';

export const weatherTool = tool({
  description: 'Get the current weather for a specific location',
  parameters: z.object({
    city: z.string().describe('The city to get the weather for (e.g., San Francisco)'),
    unit: z.enum(['celsius', 'fahrenheit']).optional().default('celsius').describe('The unit for temperature'),
  }),
  execute: async ({ city, unit }) => {
    // In a real app, you'd call a weather API here
    console.log(`TOOL CALL: Fetching weather for ${city} in ${unit}`);
    let temperature;
    let condition;
    if (city.toLowerCase().includes('san francisco')) {
      temperature = unit === 'celsius' ? 15 : 59;
      condition = 'Cloudy with a chance of fog';
    } else if (city.toLowerCase().includes('tokyo')) {
      temperature = unit === 'celsius' ? 22 : 72;
      condition = 'Sunny';
    } else if (city.toLowerCase().includes('london')) {
      temperature = unit === 'celsius' ? 12 : 54;
      condition = 'Rainy';
    } else {
      temperature = unit === 'celsius' ? 20 : 68;
      condition = 'Partly cloudy';
    }
    return {
      city,
      temperature,
      unit,
      condition,
      mockData: true,
    };
  },
});
Explanation:
- tool: A helper from the ai package for defining a tool.
- description: Tells the LLM what the tool does. A clear description is crucial so the LLM knows when to use the tool.
- parameters: A zod schema defining the arguments the tool expects. The LLM will try to extract these from the user's query; describe helps it understand each parameter.
- execute: An async function that performs the tool's action. It receives the validated parameters and runs on your server (see the sketch below).
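Because execute is just an async function, you can sanity-check the tool outside of any LLM flow. Below is a minimal sketch; the file name scripts/check-weather-tool.ts is hypothetical, and the second argument to execute (a stand-in for the SDK's tool-call context) may differ slightly between SDK versions.
// scripts/check-weather-tool.ts (hypothetical) — run the tool directly, no LLM involved.
import { weatherTool } from '../lib/tools';

async function main() {
  // The zod schema validates the arguments and fills in the default unit.
  const args = weatherTool.parameters.parse({ city: 'Tokyo' });
  console.log(args); // { city: 'Tokyo', unit: 'celsius' }

  // Calling execute directly mimics what the SDK does after the LLM requests the tool.
  const result = await weatherTool.execute(args, { toolCallId: 'manual-test', messages: [] });
  console.log(result);
}

main();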
Step 2: Updating the API Route
Now, let's modify app/api/chat/route.ts to make the LLM aware of this tool and handle tool calls.
// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { weatherTool } from '@/lib/tools'; // adjust the path/alias to your project setup

// Allow streaming responses up to 60 seconds
export const maxDuration = 60;
export const runtime = 'edge';

export const POST = async (req: Request) => {
  const { messages } = await req.json();

  const result = await streamText({
    model: openai('gpt-4o'), // Use a model that supports tools well
    messages,
    tools: {
      // The weather tool defined earlier
      getWeather: weatherTool, // Expose the tool to the LLM
    },
    // Note: you normally don't need a custom tool-call handler here. Because we
    // defined `execute` on the tool, the Vercel AI SDK automatically runs it and
    // sends the result back to the LLM. An `onToolCall` handler is only useful for
    // more complex scenarios or logging, in which case you become responsible for
    // executing the tool yourself.
  });

  return result.toDataStreamResponse();
}
Key Changes:
- We import weatherTool from lib/tools.ts so the route can expose it to the model.
- We're using gpt-4o as it has better tool-following capabilities. You can try gpt-3.5-turbo, but it may be less reliable.
- tools: { getWeather: weatherTool }: We pass our defined tool(s) to streamText. The key (getWeather) is the name the LLM will use to refer to this tool.
- The tool definition includes an execute function. The Vercel AI SDK automatically calls this function when the LLM decides to use the getWeather tool, and the result of execute is automatically sent back to the LLM.
- An onToolCall handler in streamText is available for more advanced scenarios where you might want to intercept tool calls before execution or manage them differently, but for simple cases with execute defined on the tool, it's not needed.
Step 3: Client-Side Changes (Minimal)
The beauty of useChat is that it already supports the tool invocation flow! When the LLM responds with a tool call, useChat will:
- Record the tool call information (tool name and arguments) in messages (this is often an internal step).
- Your API route handles the tool execution.
- The API sends the tool result back.
- useChat attaches the tool's output to messages as well, so you can render or inspect it.
- The LLM then generates the final text response, which useChat appends as an assistant message.
Your app/page.tsx from Part 1 should mostly work as is. However, you might want to display tool messages differently or log them.
// app/page.tsx
'use client';
import { useChat, Message } from '@ai-sdk/react';

const Chat = () => {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({
    // Optionally, you can provide an `onToolCall` handler on the client too,
    // though for this example, server-side execution is more common and secure.
    // onToolCall: async ({ toolCall }) => {
    //   console.log('Client-side tool call:', toolCall);
    //   // Handle client-side tools if any, or just acknowledge
    // },
  });

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map((message) => (
        <div key={message.id} className="whitespace-pre-wrap py-2">
          <strong>{message.role === 'user' ? 'User: ' : 'AI: '}</strong>
          {message.parts.map((part, i) => {
            switch (part.type) {
              case 'text':
                return <div key={`${message.id}-${i}`}>{part.text}</div>;
              case 'tool-invocation':
                return (
                  <pre key={`${message.id}-${i}`}>
                    {JSON.stringify(part.toolInvocation, null, 2)}
                  </pre>
                );
            }
          })}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          className="fixed bottom-0 w-full max-w-md p-2 mb-8 border border-gray-300 rounded shadow-xl text-black"
          value={input}
          placeholder="Ask about the weather (e.g., 'What's the weather in London?')"
          onChange={handleInputChange}
          disabled={isLoading}
        />
        <button type="submit" disabled={isLoading} className="fixed bottom-0 right-0 p-2 mb-8 mr-2">
          Send
        </button>
      </form>
    </div>
  );
};

export default Chat;
Note on the Message type: The Message type from @ai-sdk/react exposes tool activity through each message's parts array: a part with type 'tool-invocation' carries the tool call (name and arguments) and, once your tool has executed, its result. The useChat hook manages adding these parts appropriately; the rendering above is a more detailed way to see them, and a rough example of the part shapes follows.
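The shapes below are illustrative only; the exact fields depend on your installed @ai-sdk/react version, so check its types rather than treating this as canonical.
// Illustrative shapes only — verify against your @ai-sdk/react types.
// A plain text part on an assistant message:
const textPart = { type: 'text', text: 'It is 12°C and rainy in London.' };

// A tool-invocation part. It carries the call (toolName and args) and, once your
// tool's execute function has run, the result as well.
const toolPart = {
  type: 'tool-invocation',
  toolInvocation: {
    state: 'result', // typically progresses from 'call' to 'result'
    toolCallId: 'call_abc123',
    toolName: 'getWeather',
    args: { city: 'London', unit: 'celsius' },
    result: { city: 'London', temperature: 12, unit: 'celsius', condition: 'Rainy', mockData: true },
  },
};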
Step 4: Testing Tool Usage
Run your app (pnpm dev). Try prompts like:
- "What's the weather like in San Francisco?"
- "Can you tell me the temperature in Tokyo in fahrenheit?"
Observe the console output on your server (where you run pnpm dev). You should see the "TOOL CALL: Fetching weather..." log, and the tool's (mock) weather data should appear in the chat.
If you inspect the messages array (e.g., by logging it in your component, as sketched below), you'll see the flow so far:
- The user message.
- An assistant message with a tool invocation requesting getWeather.
- The tool result (containing the weather data) attached to that invocation.
- A final assistant message that incorporates the weather information. As the next section explains, this last step needs one more setting before it happens automatically.
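One easy way to watch this flow is to log the messages array whenever it changes. The snippet below is a small, optional debugging aid layered on the component from Step 3; everything except the useEffect is unchanged.
// app/page.tsx (excerpt) — an optional debugging aid, assuming the component from Step 3.
'use client';
import { useEffect } from 'react';
import { useChat } from '@ai-sdk/react';

const Chat = () => {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat();

  useEffect(() => {
    // Log each message's role and the types of parts it contains (text, tool-invocation, ...).
    console.log(messages.map((m) => ({ role: m.role, parts: m.parts.map((p) => p.type) })));
  }, [messages]);

  return null; // keep the JSX from the Step 3 snippet here
};

export default Chat;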
Activating Multi-Step Tool Execution
You might have noticed that although the tool results appear in the chat interface, the model doesn't use them to respond to your original query. This happens because once the model generates a tool call, it considers its response complete.
To address this, you can enable multi-step tool calls by setting the maxSteps option in your useChat hook. This feature automatically feeds the tool results back into the model, prompting a follow-up generation. In this scenario, it ensures the model uses the weather tool's output to answer the user's question.
Update Your Client-side Code
Modify your app/page.tsx file to include the maxSteps option:
// app/page.tsx
'use client';
import { useChat } from '@ai-sdk/react';

const Chat = () => {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    maxSteps: 5,
  });
  // ... rest of your component code
};

export default Chat;
Return to the browser and ask about the weather in a specific location. This time, you should see the model using the weather tool's results to answer your question directly.
By setting maxSteps to 5, you're allowing the model to take up to five steps during a single generation. This enables more complex interactions, giving the model the ability to gather and process information across multiple steps when necessary. You can see this in action by adding another tool: for example, one that returns a multi-day weather forecast.
// lib/tools.ts (in addition to your current weather tool)
import { z } from 'zod';
import { tool } from 'ai';

export const forecastTool = tool({
  description: 'Get a 3-day weather forecast for a specific city',
  parameters: z.object({
    city: z.string().describe('The city to get the weather forecast for (e.g., Paris)'),
    unit: z.enum(['celsius', 'fahrenheit']).default('celsius').describe('Temperature unit'),
  }),
  execute: async ({ city, unit }) => {
    console.log(`Fetching 3-day forecast for ${city} in ${unit}`);
    // Mock forecast data, keyed by lowercase city name
    const forecasts: Record<string, { day: string; condition: string; temperature: number }[]> = {
      'paris': [
        { day: 'Tomorrow', condition: 'Sunny', temperature: unit === 'celsius' ? 23 : 73 },
        { day: 'Day After Tomorrow', condition: 'Cloudy', temperature: unit === 'celsius' ? 19 : 66 },
        { day: 'In 3 Days', condition: 'Light rain', temperature: unit === 'celsius' ? 17 : 62 },
      ],
      'new york': [
        { day: 'Tomorrow', condition: 'Rainy', temperature: unit === 'celsius' ? 16 : 61 },
        { day: 'Day After Tomorrow', condition: 'Windy', temperature: unit === 'celsius' ? 18 : 64 },
        { day: 'In 3 Days', condition: 'Sunny', temperature: unit === 'celsius' ? 22 : 72 },
      ],
    };
    // Fall back to generic data for cities without a mock entry
    const forecast = forecasts[city.toLowerCase()] || [
      { day: 'Tomorrow', condition: 'Partly cloudy', temperature: unit === 'celsius' ? 20 : 68 },
      { day: 'Day After Tomorrow', condition: 'Sunny', temperature: unit === 'celsius' ? 22 : 71 },
      { day: 'In 3 Days', condition: 'Overcast', temperature: unit === 'celsius' ? 18 : 65 },
    ];
    return {
      city,
      unit,
      forecast,
      mockData: true,
    };
  },
});
Update Your Route Handler
Update your app/api/chat/route.ts file to expose the new forecast tool alongside the weather tool:
// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { weatherTool, forecastTool } from '@/lib/tools'; // adjust the path/alias to your project setup

// Allow streaming responses up to 60 seconds
export const maxDuration = 60;
export const runtime = 'edge';

export const POST = async (req: Request) => {
  const { messages } = await req.json();

  const result = await streamText({
    model: openai('gpt-4o'), // Use a model that supports tools well
    messages,
    tools: {
      // The tools defined earlier in lib/tools.ts
      getWeather: weatherTool,
      getForecast: forecastTool,
    },
  });

  return result.toDataStreamResponse();
}
Now, when you ask something like "What's the weather in Paris right now, and what's the forecast for the next few days?", you should see a more complete interaction:
- The model will call the weather tool for the city.
- You'll see the tool result displayed.
- It will then call the forecast tool to provide a 3-day weather forecast.
- The model will then use that information to provide a natural language response about the weather in the city.
Key Takeaways
- Tools allow LLMs to interact with external systems or execute specific code you define.
- zod is excellent for defining tool parameter schemas.
- The Vercel AI SDK's tool helper and its integration with streamText and useChat make tool implementation fairly seamless.
- The LLM needs clear descriptions for tools and parameters to use them effectively.
- Tool execution typically happens on the server for security and access to backend resources.
What's Next?
So far, we've focused on conversational AI. But the Vercel AI SDK is also great for other AI tasks. In Part 3, we'll explore "completions" – generating text for non-chat use cases like summarization or simple content generation.