Custom transcriber
Integrate your own transcription service with Vapi
Overview
A custom transcriber lets you use your own transcription service with Vapi, instead of a built-in provider. This is useful if you need more control, want to use a specific provider like Deepgram, or have custom processing needs.
This guide shows you how to set up Deepgram as your custom transcriber. The same approach can be adapted for other providers.
You’ll learn how to:
- Stream audio from Vapi to your server
- Forward audio to Deepgram for transcription
- Return real-time transcripts back to Vapi
Why use a custom transcriber?
- Flexibility: Integrate with your preferred transcription service.
- Control: Implement specialized processing that isn’t available with built‑in providers.
- Cost Efficiency: Leverage your existing transcription infrastructure while maintaining full control over the pipeline.
- Customization: Tailor the handling of audio data, transcript formatting, and buffering according to your specific needs.
How it works
Connection initialization
Vapi connects to your custom transcriber endpoint (e.g. `/api/custom-transcriber`) via WebSocket. It sends an initial JSON message like this:
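The exact fields depend on your call configuration, but a representative `"start"` message (assuming 16 kHz, stereo, linear16 PCM) looks like this:

```json
{
  "type": "start",
  "encoding": "linear16",
  "container": "raw",
  "sampleRate": 16000,
  "channels": 2
}
```

After this message, audio arrives as binary WebSocket frames containing raw PCM data.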
Transcription processing
Your server forwards the audio to Deepgram (or your chosen transcriber) using its SDK. Deepgram processes the audio and returns transcript events that include a `channel_index` (e.g. `[0, ...]` for the customer, `[1, ...]` for the assistant). The service buffers the incoming audio data, processes the transcript events (with debouncing and channel detection), and emits a final transcript.
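As a rough sketch of this step, the following shows how a live session could be opened with the Deepgram Node SDK (`@deepgram/sdk` v3) and how `channel_index` might be mapped to a speaker. The function name and option values are illustrative assumptions, not fixed by Vapi:

```javascript
const { createClient, LiveTranscriptionEvents } = require("@deepgram/sdk");

// Illustrative helper: opens a Deepgram live session matching the "start"
// message (linear16 PCM, 16 kHz, 2 channels) and reports transcripts upward.
function createDeepgramSession(onTranscript) {
  const deepgram = createClient(process.env.DEEPGRAM_API_KEY);

  const connection = deepgram.listen.live({
    model: "nova-2",
    encoding: "linear16",
    sample_rate: 16000,
    channels: 2,
    multichannel: true,
    interim_results: true,
  });

  connection.on(LiveTranscriptionEvents.Transcript, (event) => {
    const text = event.channel?.alternatives?.[0]?.transcript;
    if (!text) return;

    // channel_index[0] identifies which audio channel the words came from:
    // 0 = customer, 1 = assistant.
    const channel = event.channel_index?.[0] === 0 ? "customer" : "assistant";
    onTranscript({ channel, text, isFinal: event.is_final });
  });

  // PCM chunks received from Vapi are passed to connection.send(...).
  return connection;
}

module.exports = { createDeepgramSession };
```

In this sketch, the `onTranscript` callback is where you would debounce partial results and decide when to emit a final transcript back to Vapi.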
Implementation steps
Project setup
Create a new Node.js project and install the required dependencies:
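One reasonable setup (the package choices here are assumptions, not requirements) uses Express with the `ws` WebSocket library and the Deepgram SDK:

```bash
npm init -y
npm install express ws dotenv @deepgram/sdk
```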
Create a `.env` file with the following content:
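At minimum you need your Deepgram API key; the variable names below are the ones assumed in this guide's examples:

```
DEEPGRAM_API_KEY=your_deepgram_api_key
PORT=3000
```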
Test your integration
- Deploy your server: Start your Node.js server so it can accept WebSocket connections.
- Expose your server: Use a tool like ngrok to expose your server via HTTPS/WSS.
- Initiate a call with Vapi: Use the following curl command (update the placeholders with your actual values):
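The request below is a sketch; the exact shape of the `transcriber` block (and where `secret` lives) may vary with the Vapi API version you are on, so double-check it against the current API reference:

```bash
curl https://api.vapi.ai/call \
  -X POST \
  -H "Authorization: Bearer YOUR_VAPI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "phoneNumberId": "YOUR_PHONE_NUMBER_ID",
    "customer": { "number": "+15551234567" },
    "assistant": {
      "transcriber": {
        "provider": "custom-transcriber",
        "server": {
          "url": "wss://your-server.example.com/api/custom-transcriber",
          "secret": "your-shared-secret"
        }
      },
      "firstMessage": "Hello, how can I help you?"
    }
  }'
```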
Expected behavior:
- Vapi connects via WebSocket to your custom transcriber at `/api/custom-transcriber`.
- The `"start"` message initializes the Deepgram session.
- PCM audio data is forwarded to Deepgram.
- Deepgram returns transcript events, which are processed with channel detection and debouncing.
- The final transcript is sent back as a JSON message:
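The shape this guide assumes for that message is a `transcriber-response` carrying the text and the speaker channel, for example:

```json
{
  "type": "transcriber-response",
  "transcription": "Hello, how can I help you today?",
  "channel": "customer"
}
```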
Notes and limitations
- Streaming support requirement: The custom transcriber must support streaming. Vapi sends continuous audio data over the WebSocket, and your server must handle this stream in real time.
- Secret header: The custom transcriber configuration accepts an optional field called `secret`. When set, Vapi sends this value with every request as an HTTP header named `x-vapi-secret`. This can also be configured via a `headers` field.
- Buffering: The solution buffers PCM audio and performs simple validation (e.g. ensuring stereo PCM data length is a multiple of 4). If the audio data is malformed, it is trimmed to a valid length, as shown in the sketch after this list.
- Channel detection: Transcript events from Deepgram include a `channel_index` array. The service uses the first element to determine whether the transcript is from the customer (`0`) or the assistant (`1`). Ensure Deepgram’s response format remains consistent with this logic.
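A minimal sketch of the buffer validation mentioned above (the helper name is illustrative):

```javascript
// Stereo 16-bit PCM uses 4 bytes per frame (2 bytes x 2 channels).
// Trim any trailing partial frame before forwarding audio to Deepgram.
function trimToWholeFrames(pcmBuffer) {
  const validLength = pcmBuffer.length - (pcmBuffer.length % 4);
  return pcmBuffer.subarray(0, validLength);
}

// Example usage when forwarding a chunk received from Vapi:
// connection.send(trimToWholeFrames(chunk));
```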
Conclusion
Using a custom transcriber with Vapi gives you the flexibility to integrate any transcription service into your call flows. This guide walked you through the setup, usage, and testing of a solution that streams real-time audio, processes transcripts with multi‑channel detection, and returns formatted responses back to Vapi. Follow the steps above and use the provided code examples to build your custom transcriber solution.