OpenAI Realtime

You can use OpenAI’s newest speech-to-speech model with your Vapi assistants.

The Realtime API is currently in beta, and OpenAI does not recommend it for production use. We’re excited to have you try this new feature and welcome your feedback as we continue to refine and improve the experience.

OpenAI’s Realtime API gives developers access to a native speech-to-speech model. Unlike other Vapi configurations, which orchestrate a transcriber, a model, and a voice API to simulate speech-to-speech, OpenAI’s Realtime API processes audio in and audio out natively.

To start using it with your Vapi assistants, select `gpt-4o-realtime-preview-2024-10-01` as your model.

  • Please note that only OpenAI voices may be selected while using this model. The selected voice does not drive a separate TTS (text-to-speech) model; it is used natively within the speech-to-speech model.
  • Also note that we don’t currently support Knowledge Bases with the Realtime API. Furthermore, advanced functionality is currently limited with the latest voices: Ash, Ballad, Coral, Sage, and Verse.
  • Lastly, note that our Realtime integration retains the rest of Vapi’s orchestration layer, such as the endpointing and interruption models, to ensure a reliable conversational flow.
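As a rough sketch, an assistant configured for the Realtime model might use a payload like the one below. The field names (`provider`, `voiceId`), the voice name `alloy`, and the overall payload shape are assumptions for illustration, not confirmed schema details; consult the Vapi API reference for the authoritative format.

```python
import json

# Hypothetical assistant configuration for the Realtime model.
# Field names and structure are assumptions, shown for illustration only.
assistant_config = {
    "name": "Realtime Demo Assistant",  # illustrative name
    "model": {
        "provider": "openai",
        # Select the Realtime speech-to-speech model as described above.
        "model": "gpt-4o-realtime-preview-2024-10-01",
    },
    "voice": {
        # Only OpenAI voices may be selected with this model; the voice
        # is used natively inside the speech-to-speech model, not as TTS.
        "provider": "openai",
        "voiceId": "alloy",  # assumed voice identifier
    },
}

# Serialize the configuration as it might be sent in a request body.
payload = json.dumps(assistant_config, indent=2)
print(payload)
```

Sending this body to the assistant-creation endpoint (with your API key) would then create an assistant backed by the Realtime model, subject to the limitations listed above.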