# Generating Text Completions with the Vercel AI SDK - Part 3
While chatbots are a dominant use case for LLMs, the underlying technology is incredibly versatile for various text generation tasks. "Completions" refer to the model generating text that follows a given prompt, without necessarily being part of an ongoing conversation.

This is useful for tasks like:
- Summarizing longer texts.
- Translating languages.
- Generating product descriptions.
- Drafting emails.
- Simple code generation.
The Vercel AI SDK provides the `streamText` and `generateText` functions (and corresponding React hooks like `useCompletion`) to facilitate these non-chat interactions. In this post, we'll build a simple UI to get text completions for a given prompt.
## Prerequisites
- A Next.js project setup (similar to Part 1).
- Vercel AI SDK and an LLM provider library installed.
- OpenAI API key (or equivalent).
## Core Concept: `streamText` vs. `generateText`
- `streamText`: Streams the completion token by token. Ideal for real-time feedback to the user, similar to how `useChat` works.
- `generateText`: Waits for the entire completion to be generated and then returns it. Simpler to handle if streaming isn't critical.
For user-facing applications, streaming (`streamText`) often provides a better experience, so we'll focus on that. The `useCompletion` hook in `@ai-sdk/react` is the client-side counterpart to `streamText` in a non-chat context.
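If streaming isn't critical (say, a server-side batch job), `generateText` is the simpler call. Here's a minimal sketch of a non-streaming variant of the route we build below, assuming the same request shape:

```ts
// Hypothetical non-streaming variant: generateText resolves once the
// full completion is ready, and its result exposes the final text.
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

export const POST = async (req: Request) => {
  const { prompt } = await req.json();

  const { text } = await generateText({
    model: openai('gpt-4o'),
    prompt,
  });

  return Response.json({ text });
};
```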
## Step 1: Creating the API Route for Completions
Let's create a new API route at `app/api/complete/route.ts`:
```ts
// app/api/complete/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export const runtime = 'edge';
export const maxDuration = 30;

export const POST = async (req: Request) => {
  const { prompt } = await req.json();

  if (!prompt) {
    return new Response('Prompt is required', { status: 400 });
  }

  const result = await streamText({
    model: openai('gpt-4o'),
    prompt,
    // You can add other parameters like maxTokens, temperature, etc.
    // maxTokens: 150,
    // temperature: 0.7,
  });

  return result.toDataStreamResponse();
};
```
Explanation:
- This route is simpler than the chat route. It expects a single `prompt` string.
- We use `streamText` with the prompt directly.
- Note: Some older models (like `text-davinci-003`) were dedicated "completion" models. Newer chat models (`gpt-3.5-turbo`, `gpt-4o`) also handle completion tasks effectively when given a direct prompt. The `@ai-sdk/openai` provider exposes `openai.completion()` for legacy completion models, but its default chat interface is versatile enough for most cases.
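If you do want to target a dedicated completion model, here's a minimal sketch using the provider's `openai.completion()` factory (assuming `gpt-3.5-turbo-instruct` is available on your account):

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

// Same streamText call as before, but backed by a legacy completion model.
const result = await streamText({
  model: openai.completion('gpt-3.5-turbo-instruct'),
  prompt: 'Write a tagline for an eco-friendly coffee brand.',
});
```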
## Step 2: Building the UI for Completions (Client-Side)
We'll use the `useCompletion` hook from `@ai-sdk/react`. This hook is analogous to `useChat` but designed for single-turn completions.
Modify `app/page.tsx` (or create a new page like `app/completion/page.tsx`):
```tsx
// app/page.tsx (or app/completion/page.tsx)
'use client';

import { useCompletion } from '@ai-sdk/react';

export default function CompletionPage() {
  const {
    completion,
    input,
    stop,
    isLoading,
    handleInputChange,
    handleSubmit,
  } = useCompletion({
    api: '/api/complete', // Specify your completion API endpoint
  });

  return (
    <div className="flex flex-col w-full max-w-xl py-12 mx-auto">
      <h1 className="text-2xl font-bold mb-4">Text Completion Demo</h1>
      <form onSubmit={handleSubmit} className="mb-4">
        <label htmlFor="prompt-input" className="block text-sm font-medium text-gray-700 mb-1">
          Enter your prompt:
        </label>
        <textarea
          id="prompt-input"
          className="w-full p-2 border border-gray-300 rounded shadow-sm text-black"
          rows={4}
          value={input}
          placeholder="e.g., Write a short story about a robot who dreams of flying."
          onChange={handleInputChange}
        />
        <button
          type="submit"
          disabled={isLoading}
          className="mt-2 px-4 py-2 bg-blue-500 text-white rounded hover:bg-blue-600 disabled:bg-gray-300"
        >
          {isLoading ? 'Generating...' : 'Generate Completion'}
        </button>
        {isLoading && (
          <button
            type="button"
            onClick={stop}
            className="mt-2 ml-2 px-4 py-2 bg-red-500 text-white rounded hover:bg-red-600"
          >
            Stop
          </button>
        )}
      </form>
      {completion && (
        <div className="mt-4 p-4 border border-gray-200 rounded bg-gray-50">
          <h3 className="text-lg font-semibold mb-2">Generated Completion:</h3>
          <p className="whitespace-pre-wrap text-gray-800">{completion}</p>
        </div>
      )}
    </div>
  );
}
```
Explanation:
- `useCompletion`:
  - `api: '/api/complete'`: Tells the hook which endpoint to hit.
  - `completion`: A string that holds the streaming/completed text from the AI.
  - `input`, `handleInputChange`, `handleSubmit`: Similar to `useChat`, for managing the prompt input.
  - `isLoading`: A boolean indicating whether a request is in progress.
  - `stop`: A function to prematurely stop the streaming generation.
- The UI allows users to enter a prompt and displays the AI-generated completion as it streams in.
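Beyond the form helpers, `useCompletion` also returns a `complete` function for firing off a request programmatically. A minimal sketch; the `SummarizeButton` component and its canned prompt are hypothetical:

```tsx
'use client';

import { useCompletion } from '@ai-sdk/react';

// Hypothetical component: triggers a completion without a form,
// using a canned summarization prompt.
export function SummarizeButton({ text }: { text: string }) {
  const { complete, completion, isLoading } = useCompletion({
    api: '/api/complete',
  });

  return (
    <div>
      <button
        onClick={() => complete(`Summarize this text:\n\n${text}`)}
        disabled={isLoading}
      >
        Summarize
      </button>
      <p className="whitespace-pre-wrap">{completion}</p>
    </div>
  );
}
```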
## Step 3: Testing Text Completions
Run `pnpm dev` and navigate to your page. Try various prompts:
- "Summarize this text: [paste a long paragraph]"
- "Translate 'Hello, how are you?' to Spanish."
- "Write three marketing taglines for a new eco-friendly coffee brand."
You'll see the text stream into the "Generated Completion" area.
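You can also hit the endpoint directly to inspect the raw stream. A quick smoke test with `fetch`, assuming the dev server is running on its default port 3000:

```ts
// Run with e.g. `npx tsx smoke-test.ts` while the dev server is up.
const res = await fetch('http://localhost:3000/api/complete', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ prompt: 'Write a haiku about TypeScript.' }),
});

// The response body uses the SDK's data-stream protocol, so expect
// prefixed chunks rather than plain text.
console.log(await res.text());
```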

(Imagine a UI with a textarea for prompt and a display area for completion.)
## Key Takeaways
- The Vercel AI SDK isn't just for chatbots; `streamText`, `generateText`, and `useCompletion` make it easy to build UIs for general text generation.
- Streaming provides a responsive user experience for longer completions.
- Prompt engineering is key: the quality of the completion heavily depends on how well you craft your prompt.
## What's Next?
In Part 4, we'll delve deeper into text generation, focusing on more creative or structured outputs and discussing prompt engineering techniques that help you get the most out of LLMs. We'll explore how to guide the AI to produce text in specific styles or formats.