## TL;DR

- Install with `npm install ai` and pick your framework hooks (`@ai-sdk/react`, `@ai-sdk/svelte`, `@ai-sdk/vue`)
- Use `useChat()` for instant chat UIs with streaming, history, and tool calling
- Swap providers (OpenAI, Anthropic, Mistral, Groq) with one line of config
- Run on Vercel Edge Functions for sub-100ms cold starts
- Tool calling and structured output work out of the box with TypeScript types
## 1. Install in Next.js / React

### 1.1 Create a Next.js project (if you don’t have one)

```bash
npx create-next-app@latest my-ai-app --typescript --eslint --tailwind
cd my-ai-app
```

### 1.2 Install the SDK

```bash
npm install ai @ai-sdk/react
# or
pnpm add ai @ai-sdk/react
```

### 1.3 Add your API key

Create `.env.local` in the project root (you only need the key for the provider you use):

```bash
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
MISTRAL_API_KEY=...
GROQ_API_KEY=...
```
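The provider packages pick these up automatically (`@ai-sdk/openai` looks for `OPENAI_API_KEY`, and so on). If you keep the key under a different name, you can pass it explicitly — a sketch using `createOpenAI` from `@ai-sdk/openai`, where `MY_OPENAI_KEY` is a hypothetical variable name:

```typescript
import { createOpenAI } from '@ai-sdk/openai';

// Explicit key instead of the default OPENAI_API_KEY lookup
const openai = createOpenAI({
  apiKey: process.env.MY_OPENAI_KEY,
});
```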
## 2. useChat & useCompletion Hooks

### 2.1 Minimal chat UI

Create `app/chat/page.tsx`:

```tsx
'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(m => (
        <div key={m.id} className="whitespace-pre-wrap">
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          className="fixed bottom-0 w-full max-w-md p-2 mb-8 border border-gray-300 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}
```
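`useChat` does not call the model directly — it POSTs the conversation to `/api/chat` on your own server. A minimal route handler to pair with the page above, sketched here with the OpenAI provider (`streamText` streams tokens back to the hook; exact response-helper names vary slightly between SDK versions):

```typescript
// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Stream the model's reply back to useChat
  const result = await streamText({
    model: openai('gpt-4-turbo'),
    messages,
  });

  return result.toDataStreamResponse();
}
```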
### 2.2 Expected output

```text
User: Hello
AI: Hi there! How can I help you today?
```
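That transcript is just the `messages` array rendered with a role prefix; the mapping in the component boils down to this pure function (the `Message` type here is a simplified stand-in for the SDK's message shape):

```typescript
// Simplified stand-in for the SDK's Message type
type Message = { id: string; role: 'user' | 'assistant'; content: string };

// Mirrors the JSX: prefix each message with its speaker
function renderTranscript(messages: Message[]): string {
  return messages
    .map(m => `${m.role === 'user' ? 'User: ' : 'AI: '}${m.content}`)
    .join('\n');
}

console.log(renderTranscript([
  { id: '1', role: 'user', content: 'Hello' },
  { id: '2', role: 'assistant', content: 'Hi there! How can I help you today?' },
]));
// User: Hello
// AI: Hi there! How can I help you today?
```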
### 2.3 useCompletion for non-chat use cases

```tsx
'use client';

import { useCompletion } from '@ai-sdk/react';

function Completion() {
  const { completion, input, handleInputChange, handleSubmit } = useCompletion();

  return (
    <form onSubmit={handleSubmit}>
      <input value={input} onChange={handleInputChange} />
      <div>{completion}</div>
    </form>
  );
}
```
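`useCompletion` POSTs `{ prompt }` to `/api/completion` by default. A matching handler, again sketched with the OpenAI provider:

```typescript
// app/api/completion/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  // Single-turn prompt instead of a messages array
  const result = await streamText({
    model: openai('gpt-4-turbo'),
    prompt,
  });

  return result.toDataStreamResponse();
}
```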
## 3. Streaming Responses

### 3.1 Enable streaming

Streaming is enabled by default in `useChat` and `useCompletion`. No extra config needed.

### 3.2 Customize streaming behavior

```tsx
const { messages } = useChat({
  streamMode: 'text', // plain text chunks instead of the default data stream
  onResponse: (response) => {
    console.log('Headers:', response.headers);
  },
  onFinish: (message) => {
    console.log('Finished:', message);
  },
});
```

### 3.3 Edge runtime streaming

Add `runtime: 'edge'` to your Next.js route:

```ts
export const runtime = 'edge';
```
## 4. Tool Calling & Structured Output

### 4.1 Define tools

```ts
import { z } from 'zod';
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const { text, toolCalls } = await generateText({
  model: openai('gpt-4-turbo'),
  tools: {
    getWeather: {
      description: 'Get the weather for a location',
      parameters: z.object({
        location: z.string(),
      }),
    },
  },
  prompt: 'What is the weather in Paris?',
});
```

### 4.2 Execute tool calls

Because the tool above has no `execute` function, the SDK returns the calls for you to run yourself (`getWeather` here is your own implementation):

```ts
if (toolCalls.length > 0) {
  const weather = await getWeather(toolCalls[0].args.location);
  console.log(weather);
}
```
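Alternatively, give the tool an `execute` function and the SDK runs it for you during the call — a sketch that assumes the same hypothetical `getWeather` helper:

```typescript
import { z } from 'zod';
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';

const { toolResults } = await generateText({
  model: openai('gpt-4-turbo'),
  tools: {
    getWeather: tool({
      description: 'Get the weather for a location',
      parameters: z.object({ location: z.string() }),
      // Hypothetical helper -- invoked by the SDK when the model calls the tool
      execute: async ({ location }) => getWeather(location),
    }),
  },
  prompt: 'What is the weather in Paris?',
});
```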
### 4.3 Structured output

```ts
import { z } from 'zod';
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';

const { object } = await generateObject({
  model: openai('gpt-4-turbo'),
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.string()),
    }),
  }),
  prompt: 'Give me a recipe for chocolate chip cookies',
});

console.log(object.recipe.name); // e.g. "Chocolate Chip Cookies"
```
## 5. Multi-Model Provider Setup

### 5.1 Configure providers

Install each provider package you use (e.g. `npm install @ai-sdk/anthropic`). Every provider exposes the same model interface, so switching is a one-line change:

```ts
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { mistral } from '@ai-sdk/mistral';
import { groq } from '@ai-sdk/groq';

const model = openai('gpt-4-turbo');
// or: anthropic('claude-3-opus-20240229')
// or: mistral('mistral-large-latest')
// or: groq('mixtral-8x7b-32768')
```
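Because the model object carries its provider, the call site never changes — for example, swapping Mistral into a `generateText` call:

```typescript
import { generateText } from 'ai';
import { mistral } from '@ai-sdk/mistral';

// Same call shape as with openai(...) -- only the model argument differs
const { text } = await generateText({
  model: mistral('mistral-large-latest'),
  prompt: 'What is the weather in Paris?',
});
```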
### 5.2 Use in useChat

`useChat` talks to your API route, so the model is chosen server-side in the route handler, not in the hook. To switch models from the client, send a hint in the request body and read it on the server:

```tsx
const { messages } = useChat({
  body: { model: 'gpt-4-turbo' }, // arrives in the route handler's request body
});
```
### 5.3 Fallback providers

The core SDK does not ship a fallback helper, but because all providers share one interface, a small wrapper is enough:

```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

// Try OpenAI first; fall back to Anthropic if the call fails
const generateWithFallback = (prompt: string) =>
  generateText({ model: openai('gpt-4-turbo'), prompt })
    .catch(() => generateText({ model: anthropic('claude-3-opus-20240229'), prompt }));
```
## 6. Edge Runtime Compatibility

### 6.1 Enable Edge runtime

Add to your Next.js route:

```ts
export const runtime = 'edge';
```

### 6.2 Edge-compatible providers

| Provider | Edge Support | Notes |
|---|---|---|
| OpenAI | ✅ Yes | Works out of the box |
| Anthropic | ❌ No | Payload too large |
| Mistral | ✅ Yes | Works out of the box |
| Groq | ✅ Yes | Works out of the box |

### 6.3 Edge runtime gotchas

- Cold starts: ~50-100ms on Vercel Edge Functions
- Memory limits: 128MB for Edge Functions
- No Node.js APIs: use `fetch` instead of `axios`
## 7. Build a Chatbot from Scratch

### 7.1 Full chatbot code

`app/chat/page.tsx` (the model itself is configured server-side in your `/api/chat` route handler — provider packages like `@ai-sdk/openai` are server-only and must not be imported in a client component):

```tsx
'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    onResponse: (response) => {
      console.log('Headers:', response.headers);
    },
  });

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(m => (
        <div key={m.id} className={`whitespace-pre-wrap ${m.role === 'user' ? 'text-blue-500' : 'text-green-500'}`}>
          <strong>{m.role === 'user' ? 'You: ' : 'AI: '}</strong>
          {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit} className="fixed bottom-0 w-full max-w-md mb-8">
        <input
          className="w-full p-2 border border-gray-300 rounded shadow-xl"
          value={input}
          placeholder="Ask me anything..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}
```
### 7.2 Add tool calling

Tools are declared server-side in the route handler, not in the hook. A tool without an `execute` function is forwarded to the client, where `useChat`'s `onToolCall` can answer it (`getWeather` is your own helper):

```ts
// app/api/chat/route.ts (excerpt)
const result = await streamText({
  model: openai('gpt-4-turbo'),
  messages,
  tools: {
    getWeather: {
      description: 'Get the weather for a location',
      parameters: z.object({ location: z.string() }),
    },
  },
});
```

```tsx
// app/chat/page.tsx (excerpt)
const { messages, input, handleInputChange, handleSubmit } = useChat({
  async onToolCall({ toolCall }) {
    if (toolCall.toolName === 'getWeather') {
      return getWeather(toolCall.args.location);
    }
  },
});
```
### 7.3 Deploy to Vercel

```bash
vercel
```

## Common Errors & Fixes

| Error | Cause | Fix |
|---|---|---|
| `401 Unauthorized` | Missing API key | Add `.env.local` with `OPENAI_API_KEY` |
| `429 Too Many Requests` | Rate limit hit | Use fallback providers or upgrade plan |
| Edge Function payload too large | Anthropic response too big | Switch to Node.js runtime |
| `TypeError: model is not a function` | Wrong import | Use `import { openai } from '@ai-sdk/openai'` |
## Alternatives

1. **LangChain**
   - Pros: better for complex workflows (RAG, agents)
   - Cons: heavier, no framework hooks
2. **OpenAI SDK**
   - Pros: official OpenAI support
   - Cons: no multi-provider support or framework hooks
## What's Next?

- **Add authentication**: use NextAuth.js to secure your chatbot (`npm install next-auth`)
- **Deploy to production**: enable Vercel Analytics for usage tracking (`vercel --prod`)
- **Extend with RAG**: use the AI SDK with `@vercel/postgres` for vector search (`npm install @vercel/postgres`)

For teams scaling AI tools, Hyperion Consulting offers end-to-end AI infrastructure and deployment consulting.
