- Batch Call Operations: You can now place multiple calls to different customers at once by providing a list of customers as an array in `POST /call`.
- Google Sheets Row Append Tool Added: You can now append rows to Google Sheets directly from your assistant using `GoogleSheetsRowAppendTool`. This allows integration with Google Sheets via the API for automating data entry tasks.
- Call Control and Scheduling: You can now schedule calls using the new `SchedulePlan` feature, specifying the earliest and latest times for calls to occur. This gives you more control over call timing and scheduling.
- New Transcriber Options and Fallback Plans: New transcribers like `GoogleTranscriber` and `OpenAITranscriber` have been added, along with the ability to set a `fallbackPlan` for transcribers. This provides more choices and reliability for speech recognition in your applications.
- Multi-Structured Data Extraction with `StructuredDataMultiPlan`: You can now extract multiple sets of structured data from calls by configuring `assistant.analysisPlan.structuredDataMultiPlan`. This allows you to define various extraction plans, each producing structured outputs accessible via `call.analysis.structuredDataMulti`.
- Customizable Voice Speed and Language Settings: You can now adjust the speech speed and language of your assistant’s voice using the new `speed` and `language` properties in `Assistant.voice`. This enables you to fine-tune the voice output to better match your users’ preferences and localize the experience.
- Integration of OpenAI Transcriber: The `transcriber` property in assistants now supports `OpenAITranscriber`, allowing you to utilize OpenAI’s transcription services. A corresponding `Call.endedReason` value, `pipeline-error-openai-transcriber-failed`, has been added to help you identify when a call ends due to an OpenAI transcriber error.
- Customizable Background Sound: You can now use a custom audio file as the background sound in calls by providing a URL in the `backgroundSound` property. This allows you to enhance the call experience with personalized ambient sounds or music.
- New Recording Format Options in `ArtifactPlan`: You can specify the recording format as either `'wav;l16'` or `'mp3'` in `Assistant.artifactPlan` or `Call.artifactPlan`. This gives you control over the audio format of call recordings to suit your storage and playback preferences.
- Integrate with Langfuse for Enhanced Observability: You can now integrate with Langfuse by setting `assistant.observabilityPlan` to `langfuse`. Add `tags` and `metadata` to your traces to improve monitoring, categorization, and debugging of your application’s behavior.
- OpenAI Voice Enhancements: When using OpenAI voice models in `Assistant.voice`, you can now select specific text-to-speech models and add custom instructions to control your assistant’s voice output.
- Improved Call Error Reporting: New `Call.endedReason` codes are available for calls that fail to start or end unexpectedly because Vapi objects could not be retrieved. Refer to `Call.endedReason` for more details.
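The batch-call entry above can be sketched as a single request body. Only the `customers` array and the `POST /call` endpoint come from this changelog; the `assistantId` and `phoneNumberId` field names are assumptions for illustration, so check the API reference before relying on them:

```python
# Sketch of a batch-call body for POST /call. The "customers" array is from
# the changelog; "assistantId" and "phoneNumberId" are assumed field names.

def build_batch_call_payload(assistant_id, phone_number_id, numbers):
    """Build one POST /call body that dials several customers at once."""
    return {
        "assistantId": assistant_id,        # assumed field name
        "phoneNumberId": phone_number_id,   # assumed field name
        "customers": [{"number": n} for n in numbers],
    }

payload = build_batch_call_payload(
    "asst_123", "phone_456", ["+15550100", "+15550101"])
print(len(payload["customers"]))  # → 2
```

Sending one body with two customers replaces two separate `POST /call` requests.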
Introducing Google Calendar Integration, and Chat Test Suite / Rime AI Voice Enhancements
- Integration with Google Calendar: You can now create and manage Google Calendar events directly within your tools. Configure OAuth2 credentials through the dashboard > Build > Provider Keys to authenticate and interact with Google Calendar APIs.

- Enhanced Voice Customization for RimeAIVoice: Gain more control over Rime AI voice properties with new options like `reduceLatency`, `inlineSpeedAlpha`, `pauseBetweenBrackets`, and `phonemizeBetweenBrackets`. These settings let you optimize voice streaming and adjust speech delivery to better suit your assistant’s needs.
- Chat Test Suite Enhancements: You can now create and run chat-based tests in your test suites using the new `TestSuiteTestChat` to more comprehensively test conversational interactions in your assistant.
- Maximum Length for Test Suite Chat Scripts: When creating or updating chat tests, note that the `script` property now has a maximum length of 10,000 characters. Ensure your test scripts conform to this limit to avoid validation errors.
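A client-side guard for the script limit above can be sketched as follows. The 10,000-character cap on `script` is stated in this changelog; the surrounding test shape (the `"type"` key) is an assumption:

```python
MAX_CHAT_SCRIPT_LENGTH = 10_000  # limit stated in the changelog

def validate_chat_test(test):
    """Raise if a chat test's script exceeds the documented 10,000-char cap."""
    script = test.get("script", "")
    if len(script) > MAX_CHAT_SCRIPT_LENGTH:
        raise ValueError(
            f"script is {len(script)} chars; max is {MAX_CHAT_SCRIPT_LENGTH}")
    return test

chat_test = {"type": "chat",  # assumed key for TestSuiteTestChat
             "script": "User: hi\nAssistant: hello"}
validate_chat_test(chat_test)  # passes: well under the limit
```

Checking the length before the API call avoids a round-trip that would only return a validation error.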
Test Suite, Smart Endpointing, and Compliance Plans, Chat Completion Message Workflows, and Voicemail Detection
- Test Suite Enhancements: Developers can now define `targetPlan` and `testerPlan` when creating or updating test suites, allowing for customized testing configurations without importing phone numbers into Vapi.
- Smart Endpointing Updates: You can now select between `Vapi` and `Livekit` smart endpointing providers using `Assistant.startSpeakingPlan.smartEndpointingPlan`; the `customEndpointingRules` property is deprecated and should no longer be used.
- Compliance Plan Enhancements: Organizations can now specify compliance settings using the new `compliancePlan` property, enabling features like PCI compliance at the org level.
- Chat Completion Message Updates: When working with OpenAI chat completions, you should now use `ChatCompletionMessageWorkflows` instead of the deprecated `ChatCompletionMessage`.
- Voicemail Detection Defaults Updated: The default `voicemailExpectedDurationSeconds` for voicemail detection plans has increased from 15 to 25 seconds, affecting how voicemail detection timings are handled.
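The smart endpointing selection above can be sketched as a small config builder. Only the `smartEndpointingPlan` property path and the two provider choices come from the changelog; the inner `"provider"` key is an assumption:

```python
# Sketch of choosing a smart endpointing provider under
# assistant.startSpeakingPlan.smartEndpointingPlan; the "provider" key
# is an assumed field name, not confirmed by the changelog.

def start_speaking_plan(provider):
    if provider not in ("vapi", "livekit"):
        raise ValueError(f"unsupported smart endpointing provider: {provider}")
    return {"smartEndpointingPlan": {"provider": provider}}

plan = start_speaking_plan("livekit")
print(plan["smartEndpointingPlan"]["provider"])  # → livekit
```

Note that the deprecated `customEndpointingRules` property is deliberately absent from the built plan.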
New timeoutSeconds Property in Custom LLM Model
- New `timeoutSeconds` Property in Custom LLM Model: Developers can now specify a custom timeout duration (between 20 and 600 seconds) for connections to their custom language model provider using the new `timeoutSeconds` property. This enhancement allows for better control over response waiting times, accommodating longer operations or varying network conditions.
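The 20-600 second bounds above can be enforced before sending the config. The `timeoutSeconds` name and its range come from the changelog; the `"provider"` and `"url"` keys are assumptions for illustration:

```python
MIN_TIMEOUT, MAX_TIMEOUT = 20, 600  # documented bounds for timeoutSeconds

def custom_llm_model(url, timeout_seconds):
    """Build a custom-LLM model config, rejecting out-of-range timeouts."""
    if not MIN_TIMEOUT <= timeout_seconds <= MAX_TIMEOUT:
        raise ValueError(
            f"timeoutSeconds must be {MIN_TIMEOUT}-{MAX_TIMEOUT}, "
            f"got {timeout_seconds}")
    return {
        "provider": "custom-llm",   # assumed value
        "url": url,                 # assumed field name
        "timeoutSeconds": timeout_seconds,
    }

model = custom_llm_model("https://llm.example.com/v1", 300)
print(model["timeoutSeconds"])  # → 300
```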
Enhancements in Assistant Responses, New Gemini Model, and Call Handling
- Introduction of ‘gemini-2.0-flash-lite’ Model Option: You can now use `gemini-2.0-flash-lite` in `Assistant.model[provider="google"].model[model="gemini-2.0-flash-lite"]` for a reduced-latency, lower-cost Gemini model with a 1 million token context window.

- New Assistant Paginated Response: All `Assistant` endpoints now return paginated responses. Each response specifies `itemsPerPage`, `totalItems`, and `currentPage`, which you can use to navigate through a list of assistants.
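The three pagination fields above are enough to derive how many pages to fetch; the field names are from the changelog, and the sample numbers are illustrative:

```python
import math

def page_count(meta):
    """Number of pages implied by a paginated response's metadata."""
    return math.ceil(meta["totalItems"] / meta["itemsPerPage"])

meta = {"itemsPerPage": 25, "totalItems": 103, "currentPage": 1}
print(page_count(meta))  # → 5 (103 items at 25 per page)
```

A client would loop `currentPage` from 1 through `page_count(meta)` to list every assistant.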
Blocks Schema Deprecations, Scheduling Enhancements, and New Voice Options for Vapi Voice
- ‘scheduled’ Status Added to Calls and Messages: You can now set the status of a call or message to `scheduled`, allowing it to be executed at a future time. This enables scheduling functionality within your application for calls and messages.
- New Voice Options for Text-to-Speech: Four new voices (`Neha`, `Cole`, `Harry`, and `Paige`) have been added for text-to-speech services. You can enhance the user experience by setting the `voiceId` to one of these options in your configurations.
- Removal of Step and Block Schemas: Blocks and Steps are now officially deprecated. Developers should update their applications to adapt to these changes, possibly by using new or alternative schemas provided.
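Combining the two additions above, a call payload can be marked to run later and use one of the new voices. Only the `scheduled` status and the `voiceId` values come from this changelog; the surrounding payload shape is an assumption:

```python
def as_scheduled(call):
    """Return a copy of a call payload marked to execute at a future time."""
    return {**call, "status": "scheduled"}  # "scheduled" status per changelog

call = as_scheduled({"voice": {"voiceId": "Neha"}})  # one of the new voices
print(call["status"])  # → scheduled
```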
New Workflows API, Telnyx Phone Number Support, Voice Options, and much more
- Workflows Replace Blocks: The API has migrated from blocks to workflows with new `/workflow` endpoints (see Introduction to Workflows). You can now use `UpdateWorkflowDTO`, where conversation components (`Say`, `Gather`, `ApiRequest`, `Hangup`, and `Transfer` nodes) are explicitly connected via edges to create directed conversation flows.
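A simplified workflow might look like the sketch below. The node types and the nodes-connected-by-edges model come from the entry above; the exact `nodes`/`edges` key names and node fields are assumptions, not the definitive `UpdateWorkflowDTO` schema:

```python
# Hypothetical sketch of a directed conversation flow; field names are
# assumed for illustration, only the node types come from the changelog.
workflow = {
    "name": "greeting-flow",
    "nodes": [
        {"name": "hello",   "type": "Say", "text": "Hi! How can I help?"},
        {"name": "listen",  "type": "Gather"},
        {"name": "goodbye", "type": "Hangup"},
    ],
    "edges": [
        {"from": "hello",  "to": "listen"},
        {"from": "listen", "to": "goodbye"},
    ],
}

def edges_are_wellformed(wf):
    """Every edge must reference declared nodes for the flow to be valid."""
    names = {n["name"] for n in wf["nodes"]}
    return all(e["from"] in names and e["to"] in names for e in wf["edges"])

print(edges_are_wellformed(workflow))  # → True
```

The explicit edges are what make the flow a directed graph rather than the linear step sequence that blocks provided.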
- Telnyx Phone Number Support: Telnyx is now available as a phone number provider alongside Twilio and Vonage.
  - Use the `TelnyxPhoneNumber`, `CreateTelnyxPhoneNumberDTO`, and `UpdateTelnyxPhoneNumberDTO` schemas with `/phone-number` endpoints to create and update Telnyx phone numbers.
  - The `Call.phoneCallProviderId` now includes Telnyx’s `callControlId` alongside Twilio’s `callSid` and Vonage’s `conversationUuid`.
- New Voice Options:
  - Vapi Voices: New Vapi voices `Elliot`, `Rohan`, `Lily`, `Savannah`, and `Hana`.
  - Hume Voice: New provider with the `octave` model and customizable voice settings.
  - Neuphonic Voice: New provider with `neu_hq` (higher quality) and `neu_fast` (faster) models.
- New Cerebras Model: `CerebrasModel` supports the `llama3.1-8b` and `llama-3.3-70b` models.
- Enhanced Transcription:
  - New Providers: ElevenLabs and Speechmatics transcribers are now available.
  - DeepgramTranscriber Numerals: The new `numerals` option converts spoken numbers to digits (e.g., “nine-seven-two” → “972”).
- Improved Voicemail Detection: You can now use multiple provider implementations for `assistant.voicemailDetection` (Google, OpenAI, Twilio). The OpenAI implementation allows configuring the detection duration (5-60 seconds, default: 15).
- Smart Endpointing Upgrade: Now supports LiveKit as an alternative to Vapi’s custom-trained model in `StartSpeakingPlan.smartEndpointingEnabled`. LiveKit only supports English but may offer different endpointing characteristics.
- Observability with Langfuse: The new `assistant.observabilityPlan` property allows integration with Langfuse for tracing and monitoring of assistant calls. Configure it with `LangfuseObservabilityPlan`.
- More Credential Support: Added support for Cerebras, Google, Hume, InflectionAI, Mistral, Trieve, and Neuphonic credentials in `assistant.credentials`.
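The voicemail detection entry above can be sketched as a validated config builder. The three providers and the OpenAI 5-60 second range (default 15) are from the changelog; the `"provider"` and `"detectionDurationSeconds"` key names are assumptions:

```python
def voicemail_detection(provider, duration_seconds=15):
    """Sketch of assistant.voicemailDetection; key names are assumed."""
    if provider not in ("google", "openai", "twilio"):
        raise ValueError(f"unknown voicemail detection provider: {provider}")
    if provider == "openai" and not 5 <= duration_seconds <= 60:
        raise ValueError("OpenAI detection duration must be 5-60 seconds")
    plan = {"provider": provider}
    if provider == "openai":
        # Only the OpenAI implementation exposes a configurable duration.
        plan["detectionDurationSeconds"] = duration_seconds  # assumed key name
    return plan

print(voicemail_detection("openai")["detectionDurationSeconds"])  # → 15
```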