OpenAI compatibility

Seamlessly migrate existing OpenAI integrations to Vapi with minimal code changes

Overview

Migrate your existing OpenAI chat applications to Vapi with only minimal code changes. Perfect for teams already using OpenAI SDKs, third-party tools that expect the OpenAI API format, or developers who want to leverage existing OpenAI workflows.

What You’ll Build:

  • Drop-in replacement for OpenAI chat endpoints using Vapi assistants
  • Migration path from OpenAI to Vapi with existing codebases
  • Integration with popular frameworks like LangChain and Vercel AI SDK
  • Production-ready server implementations with both streaming and non-streaming support

Prerequisites

  • Completed Chat quickstart tutorial
  • Existing OpenAI integration or familiarity with OpenAI SDK

Scenario

We’ll migrate “TechFlow’s” existing OpenAI-powered customer support chat to use Vapi assistants, maintaining all existing functionality while gaining access to Vapi’s advanced features like custom voices and tools.


1. Quick Migration Test

Step 1: Install the OpenAI SDK

If you don’t already have it, install the OpenAI SDK:

$ npm install openai

Step 2: Test with OpenAI-compatible endpoint

Use your existing OpenAI code with minimal changes:

Test OpenAI Compatibility
curl -X POST https://api.vapi.ai/chat/responses \
  -H "Authorization: Bearer YOUR_VAPI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "input": "Hello, I need help with my account",
    "stream": false,
    "assistantId": "your-assistant-id"
  }'

Step 3: Verify response format

The response follows OpenAI’s structure with Vapi enhancements:

OpenAI-Compatible Response
{
  "id": "response_abc123",
  "object": "chat.response",
  "created": 1642678392,
  "model": "gpt-4o",
  "output": [
    {
      "role": "assistant",
      "content": [
        {
          "type": "text",
          "text": "Hello! I'd be happy to help with your account. What specific issue are you experiencing?"
        }
      ]
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 23,
    "total_tokens": 35
  }
}
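
If you handle this shape in more than one place, a tiny helper keeps the text extraction in one spot. A minimal sketch, typed against the sample response above (the interface covers only the fields shown):

```typescript
// Minimal shape of the OpenAI-compatible response shown above.
interface VapiResponse {
  output: Array<{
    role: string;
    content: Array<{ type: string; text: string }>;
  }>;
}

// Collect every text part from every output message into one string.
function extractText(response: VapiResponse): string {
  return response.output
    .flatMap(message => message.content)
    .filter(part => part.type === 'text')
    .map(part => part.text)
    .join('\n');
}

const sample: VapiResponse = {
  output: [
    { role: 'assistant', content: [{ type: 'text', text: 'Hello! How can I help?' }] }
  ]
};

console.log(extractText(sample)); // → "Hello! How can I help?"
```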

2. Migrate Existing OpenAI Code

Step 1: Update your OpenAI client configuration

Change only the base URL and API key in your existing code:

Before (OpenAI)
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'your-openai-api-key'
});

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
  stream: true
});

With Vapi, only the API key and base URL change:

After (Vapi)
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'YOUR_VAPI_API_KEY',
  baseURL: 'https://api.vapi.ai/chat',
});

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
  stream: true
});

Step 2: Update your function calls

Change chat.completions.create to responses.create and add assistantId:

Before (OpenAI Chat Completions)
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'user', content: 'What is the capital of France?' }
  ],
  stream: false
});

console.log(response.choices[0].message.content);
After (Vapi Compatibility)
const response = await openai.responses.create({
  model: 'gpt-4o',
  input: 'What is the capital of France?',
  stream: false,
  assistantId: 'your-assistant-id'
});

console.log(response.output[0].content[0].text);

Step 3: Test your migrated code

Run your updated code to verify the migration works:

migration-test.ts
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'YOUR_VAPI_API_KEY',
  baseURL: 'https://api.vapi.ai/chat'
});

async function testMigration() {
  try {
    const response = await openai.responses.create({
      model: 'gpt-4o',
      input: 'Hello, can you help me troubleshoot an API issue?',
      stream: false,
      assistantId: 'your-assistant-id'
    });

    console.log('Migration successful!');
    console.log('Response:', response.output[0].content[0].text);
  } catch (error) {
    console.error('Migration test failed:', error);
  }
}

testMigration();

3. Implement Streaming with OpenAI SDK

Step 1: Migrate streaming chat completions

Update your streaming code to use Vapi’s streaming format:

Streaming via curl
curl -X POST https://api.vapi.ai/chat/responses \
  -H "Authorization: Bearer YOUR_VAPI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "input": "Explain how machine learning works in detail",
    "stream": true,
    "assistantId": "your-assistant-id"
  }'

Step 2: Update streaming JavaScript code

Adapt your existing streaming implementation:

streaming-migration.ts
async function streamWithVapi(userInput: string): Promise<string> {
  // The OpenAI SDK's stream object does not expose a raw body reader,
  // so call the endpoint directly with fetch to parse Vapi's SSE events.
  const response = await fetch('https://api.vapi.ai/chat/responses', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer YOUR_VAPI_API_KEY',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'gpt-4o',
      input: userInput,
      stream: true,
      assistantId: 'your-assistant-id'
    })
  });

  let fullResponse = '';

  const reader = response.body?.getReader();
  if (!reader) return fullResponse;

  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    const chunk = decoder.decode(value);

    // Parse and process SSE events
    const lines = chunk.split('\n').filter(line => line.trim());
    for (const line of lines) {
      if (line.startsWith('data: ')) {
        try {
          const event = JSON.parse(line.slice(6));
          if (event.path && event.delta) {
            process.stdout.write(event.delta);
            fullResponse += event.delta;
          }
        } catch (e) {
          console.error('Invalid JSON line:', line);
        }
      }
    }
  }

  console.log('\n\nComplete response received.');
  return fullResponse;
}

streamWithVapi('Write a detailed explanation of REST APIs');
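
One caveat with the read loop above: a network chunk can end mid-line, splitting a `data:` JSON payload across two reads. A small line buffer avoids dropping those events; a sketch, assuming the same `path`/`delta` event shape:

```typescript
// Accumulates raw chunks and processes only complete SSE "data:" lines,
// so JSON split across network reads is not lost.
function createSSEBuffer(onDelta: (delta: string) => void) {
  let buffer = '';

  return function push(chunk: string): void {
    buffer += chunk;
    const lines = buffer.split('\n');
    buffer = lines.pop() ?? ''; // keep the trailing partial line for the next read

    for (const line of lines) {
      if (!line.startsWith('data: ')) continue;
      try {
        const event = JSON.parse(line.slice(6));
        if (event.path && event.delta) onDelta(event.delta);
      } catch {
        console.error('Invalid JSON line:', line);
      }
    }
  };
}

// A payload split across two reads is still parsed once complete.
let text = '';
const push = createSSEBuffer(delta => { text += delta; });
push('data: {"path":"output[0].content","del');
push('ta":"Hello"}\ndata: {"path":"output[0].content","delta":" world"}\n');
console.log(text); // → "Hello world"
```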

Step 3: Handle conversation context

Implement context management using Vapi’s approach:

context-management.ts
import OpenAI from 'openai';

function createContextualChatSession(apiKey: string, assistantId: string) {
  const openai = new OpenAI({
    apiKey: apiKey,
    baseURL: 'https://api.vapi.ai/chat'
  });
  let lastChatId: string | null = null;

  async function sendMessage(input: string, stream: boolean = false) {
    const requestParams = {
      model: 'gpt-4o',
      input: input,
      stream: stream,
      assistantId: assistantId,
      // previousChatId is a Vapi-specific parameter that links this
      // message to the prior chat for context continuity.
      ...(lastChatId && { previousChatId: lastChatId })
    };

    const response = await openai.responses.create(requestParams);

    if (!stream) {
      lastChatId = response.id;
      return response.output[0].content[0].text;
    }

    return response;
  }

  return { sendMessage };
}

// Usage example
const session = createContextualChatSession('YOUR_VAPI_API_KEY', 'your-assistant-id');

const response1 = await session.sendMessage("My name is Sarah and I'm having login issues");
console.log('Response 1:', response1);

const response2 = await session.sendMessage("What was my name again?");
console.log('Response 2:', response2); // Should remember "Sarah"

4. Framework Integrations

Step 1: Integrate with LangChain

Use Vapi with LangChain’s OpenAI integration:

langchain-integration.ts
import { ChatOpenAI } from "@langchain/openai";

// A ChatOpenAI client pointed at Vapi's OpenAI-compatible base URL;
// use this for standard LangChain chat flows.
const chat = new ChatOpenAI({
  openAIApiKey: "YOUR_VAPI_API_KEY",
  configuration: {
    baseURL: "https://api.vapi.ai/chat"
  },
  modelName: "gpt-4o",
  streaming: false
});

// assistantId is a Vapi-specific parameter, so this helper calls the
// responses endpoint directly.
async function chatWithVapi(message: string, assistantId: string): Promise<string> {
  const response = await fetch('https://api.vapi.ai/chat/responses', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer YOUR_VAPI_API_KEY`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'gpt-4o',
      input: message,
      assistantId: assistantId,
      stream: false
    })
  });

  const data = await response.json();
  return data.output[0].content[0].text;
}

// Usage
const response = await chatWithVapi(
  "What are the best practices for API design?",
  "your-assistant-id"
);
console.log(response);

Step 2: Integrate with Vercel AI SDK

Use Vapi with Vercel’s AI SDK:

vercel-ai-integration.ts
import { createOpenAI } from '@ai-sdk/openai';

// A provider pointed at Vapi's OpenAI-compatible base URL; use it with the
// AI SDK's generateText/streamText helpers for standard calls.
const vapiOpenAI = createOpenAI({
  apiKey: 'YOUR_VAPI_API_KEY',
  baseURL: 'https://api.vapi.ai/chat'
});

// Vapi-specific parameters like assistantId are sent to the endpoint directly.
// Non-streaming text generation
async function generateWithVapi(prompt: string, assistantId: string): Promise<string> {
  const response = await fetch('https://api.vapi.ai/chat/responses', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer YOUR_VAPI_API_KEY`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'gpt-4o',
      input: prompt,
      assistantId: assistantId,
      stream: false
    })
  });

  const data = await response.json();
  return data.output[0].content[0].text;
}

// Streaming implementation
async function streamWithVapi(prompt: string, assistantId: string): Promise<void> {
  const response = await fetch('https://api.vapi.ai/chat/responses', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer YOUR_VAPI_API_KEY`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'gpt-4o',
      input: prompt,
      assistantId: assistantId,
      stream: true
    })
  });

  const reader = response.body?.getReader();
  if (!reader) return;

  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    const chunk = decoder.decode(value);

    // Parse and process SSE events
    const lines = chunk.split('\n').filter(line => line.trim());
    for (const line of lines) {
      if (line.startsWith('data: ')) {
        try {
          const event = JSON.parse(line.slice(6));
          if (event.path && event.delta) {
            process.stdout.write(event.delta);
          }
        } catch (e) {
          console.error('Invalid JSON line:', line);
        }
      }
    }
  }
}

// Usage examples
const text = await generateWithVapi(
  "Explain the benefits of microservices architecture",
  "your-assistant-id"
);
console.log(text);

Step 3: Create a production server

Build a simple server that exposes Vapi through OpenAI-compatible endpoints:

simple-server.ts
import express from 'express';

const app = express();
app.use(express.json());

app.post('/v1/chat/completions', async (req, res) => {
  const { messages, model, stream = false, assistant_id } = req.body;

  if (!assistant_id) {
    return res.status(400).json({
      error: 'assistant_id is required for Vapi compatibility'
    });
  }

  const lastMessage = messages[messages.length - 1];
  const input = lastMessage.content;

  const response = await fetch('https://api.vapi.ai/chat', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.VAPI_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      assistantId: assistant_id,
      input: input,
      stream: stream
    })
  });

  if (stream) {
    res.setHeader('Content-Type', 'text/event-stream');
    res.setHeader('Cache-Control', 'no-cache');
    res.setHeader('Connection', 'keep-alive');

    const reader = response.body?.getReader();
    if (!reader) {
      return res.status(500).json({ error: 'Failed to get stream reader' });
    }

    const decoder = new TextDecoder();

    while (true) {
      const { done, value } = await reader.read();
      if (done) {
        res.write('data: [DONE]\n\n');
        res.end();
        break;
      }

      const chunk = decoder.decode(value);
      res.write(chunk);
    }
  } else {
    const chat = await response.json();
    const openaiResponse = {
      id: chat.id,
      object: 'chat.completion',
      created: Math.floor(Date.now() / 1000),
      model: model || 'gpt-4o',
      choices: [{
        index: 0,
        message: {
          role: 'assistant',
          content: chat.output[0].content
        },
        finish_reason: 'stop'
      }]
    };
    res.json(openaiResponse);
  }
});

app.listen(3000, () => {
  console.log('Vapi-OpenAI compatibility server running on port 3000');
});
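
The non-streaming branch inlines the mapping from Vapi's chat response to OpenAI's chat.completion envelope. Factoring it into a helper makes the server easier to unit-test; a sketch using the same field names as the server code above:

```typescript
// Minimal shape of the Vapi chat response consumed by the server above.
interface VapiChat {
  id: string;
  output: Array<{ role: string; content: string }>;
}

// Wrap a Vapi chat response in the chat.completion envelope OpenAI clients expect.
function toOpenAICompletion(chat: VapiChat, model = 'gpt-4o') {
  return {
    id: chat.id,
    object: 'chat.completion',
    created: Math.floor(Date.now() / 1000),
    model,
    choices: [{
      index: 0,
      message: { role: 'assistant', content: chat.output[0].content },
      finish_reason: 'stop'
    }]
  };
}

const completion = toOpenAICompletion({
  id: 'chat_123',
  output: [{ role: 'assistant', content: 'All set!' }]
});
console.log(completion.choices[0].message.content); // → "All set!"
```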

Next Steps

Enhance your migrated system with Vapi's advanced features like custom voices and tools.

Need help? Chat with the team on our Discord or mention us on X/Twitter.