Get the (almost) daily changelog
  1. Batch Call Operations: You can now place multiple calls to different customers at once by providing an array of customers in POST /call.
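
A minimal sketch of a batch call request, assuming the standard https://api.vapi.ai base URL; the customers array follows this entry, while the other field names and values are illustrative placeholders.

```typescript
// Sketch: one outbound call is placed per entry in the customers array.
const response = await fetch("https://api.vapi.ai/call", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.VAPI_API_KEY}`, // your API key
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    assistantId: "YOUR_ASSISTANT_ID",      // placeholder assistant
    phoneNumberId: "YOUR_PHONE_NUMBER_ID", // placeholder outbound caller ID
    customers: [                           // new: an array instead of a single customer
      { number: "+14155550100" },
      { number: "+14155550101" },
    ],
  }),
});
console.log(await response.json());
```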

  2. Google Sheets Row Append Tool Added: You can now append rows to Google Sheets directly from your assistant using GoogleSheetsRowAppendTool. This allows integration with Google Sheets via the API for automating data entry tasks.

  3. Call Control and Scheduling: You can now schedule calls using the new SchedulePlan feature, specifying earliest and latest times for calls to occur. This gives you more control over call timing and scheduling.
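
A hedged sketch of scheduling a call within a window; the earliestAt/latestAt property names on SchedulePlan, and the surrounding call fields, are assumptions based on the description above.

```typescript
// Sketch: a call request that should be dialed between 9am and 5pm UTC.
const scheduledCall = {
  assistantId: "YOUR_ASSISTANT_ID",
  phoneNumberId: "YOUR_PHONE_NUMBER_ID",
  customer: { number: "+14155550100" },
  schedulePlan: {
    earliestAt: "2025-03-01T09:00:00Z", // assumed field: earliest time to dial
    latestAt: "2025-03-01T17:00:00Z",   // assumed field: latest time to dial
  },
};
```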

  4. New Transcriber Options and Fallback Plans: New transcribers like GoogleTranscriber and OpenAITranscriber have been added, along with the ability to set fallbackPlan for transcribers. This provides more choices and reliability for speech recognition in your applications.
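
A sketch of a transcriber with fallbacks, per the entry above; the exact shape of fallbackPlan (here, an ordered transcribers array) is an assumption.

```typescript
// Sketch: primary transcriber plus fallbacks tried in order if it fails.
const transcriber = {
  provider: "openai", // new OpenAITranscriber
  fallbackPlan: {
    transcribers: [
      { provider: "google" },                    // new GoogleTranscriber
      { provider: "deepgram", model: "nova-2" }, // existing provider as a last resort
    ],
  },
};
```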

  1. Multi-Structured Data Extraction with StructuredDataMultiPlan: You can now extract multiple sets of structured data from calls by configuring assistant.analysisPlan.structuredDataMultiPlan. This allows you to define various extraction plans, each producing structured outputs accessible via call.analysis.structuredDataMulti.
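
A sketch of an analysisPlan with two named extraction plans; each result would appear under its key in call.analysis.structuredDataMulti. The per-plan schema shape shown here is an assumption.

```typescript
// Sketch: two structured data extraction plans on one assistant.
const analysisPlan = {
  structuredDataMultiPlan: [
    {
      key: "orderDetails", // assumed: results keyed by this name in structuredDataMulti
      plan: {
        schema: {
          type: "object",
          properties: { orderId: { type: "string" }, total: { type: "number" } },
        },
      },
    },
    {
      key: "callerSentiment",
      plan: {
        schema: {
          type: "object",
          properties: { tone: { type: "string" } },
        },
      },
    },
  ],
};
```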

  2. Customizable Voice Speed and Language Settings: You can now adjust the speech speed and language for your assistant’s voice by using the new speed and language properties in Assistant.voice. This enables you to fine-tune the voice output to better match your user’s preferences and localize the experience.
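
A small sketch of the new speed and language properties on Assistant.voice; the provider and voiceId values are placeholders, and not every provider necessarily supports both settings.

```typescript
// Sketch: a slightly faster voice localized to Spanish.
const voice = {
  provider: "azure",  // placeholder provider
  voiceId: "elvira",  // placeholder voice
  speed: 1.2,         // new: relative speaking rate
  language: "es-ES",  // new: locale passed to the provider
};
```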

  3. Integration of OpenAI Transcriber: The transcriber property in assistants now supports OpenAITranscriber, allowing you to utilize OpenAI’s transcription services. A corresponding Call.endedReason value, pipeline-error-openai-transcriber-failed, has been added to help you identify when a call ends due to an OpenAI transcriber error.

  1. Customizable Background Sound: You can now use a custom audio file as the background sound in calls by providing a URL in the backgroundSound property. This allows you to enhance the call experience with personalized ambient sounds or music.

  2. New Recording Format Options in ArtifactPlan: You can specify the recording format as either 'wav;l16' or 'mp3' in Assistant.artifactPlan or Call.artifactPlan. This gives you control over the audio format of call recordings to suit your storage and playback preferences.

  3. Integrate with Langfuse for Enhanced Observability: You can now integrate with Langfuse by setting assistant.observabilityPlan to langfuse. Add tags and metadata to your traces to improve monitoring, categorization, and debugging of your application’s behavior.
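
Taken together, a sketch of an assistant using the three settings above; the URL is a placeholder, the recordingFormat key name is inferred from the description, and the Langfuse plan's tags/metadata fields are assumptions.

```typescript
// Sketch: custom background audio, mp3 recordings, and Langfuse tracing.
const assistant = {
  backgroundSound: "https://example.com/audio/office-ambience.mp3", // custom ambience
  artifactPlan: {
    recordingFormat: "mp3", // or "wav;l16"
  },
  observabilityPlan: {
    provider: "langfuse",
    tags: ["production", "support-line"], // assumed field
    metadata: { environment: "us-west" }, // assumed field
  },
};
```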

Introducing Google Calendar Integration and Chat Test Suite / Rime AI Voice Enhancements

  1. Integration with Google Calendar: You can now create and manage Google Calendar events directly within your tools. Configure OAuth2 credentials through the dashboard > Build > Provider Keys to authenticate and interact with Google Calendar APIs.
  2. Enhanced Voice Customization for RimeAIVoice: Gain more control over Rime AI voice properties with new options like reduceLatency, inlineSpeedAlpha, pauseBetweenBrackets, and phonemizeBetweenBrackets. These settings let you optimize voice streaming and adjust speech delivery to better suit your assistant’s needs.

  3. Chat Test Suite Enhancements: You can now create and run chat-based tests in your test suites using the new TestSuiteTestChat, enabling more comprehensive testing of conversational interactions with your assistant.

  4. Maximum Length for Test Suite Chat Scripts: When creating or updating chat tests, note that the script property now has a maximum length of 10,000 characters. Ensure your test scripts conform to this limit to avoid validation errors.

Test Suite, Smart Endpointing, and Compliance Plans, Chat Completion Message Workflows, and Voicemail Detection

  1. Test Suite Enhancements: Developers can now define targetPlan and testerPlan when creating or updating test suites, allowing for customized testing configurations without importing phone numbers to Vapi.

  2. Smart Endpointing Updates: You can now select between Vapi and Livekit smart endpointing providers using the Assistant.startSpeakingPlan.smartEndpointingPlan; the customEndpointingRules property is deprecated and should no longer be used.
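
A sketch of selecting the endpointing provider, assuming smartEndpointingPlan takes a provider field as described:

```typescript
// Sketch: choose LiveKit (or "vapi") smart endpointing on the start speaking plan.
const startSpeakingPlan = {
  smartEndpointingPlan: {
    provider: "livekit", // or "vapi"
  },
  // customEndpointingRules is deprecated and intentionally omitted.
};
```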

  3. Compliance Plan Enhancements: Organizations can now specify compliance settings using the new compliancePlan property, enabling features like PCI compliance at the org level.
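
A minimal sketch of an org-level compliance plan; the pciEnabled flag name is an assumption based on the PCI example mentioned above.

```typescript
// Sketch: enabling PCI compliance on the organization.
const compliancePlan = {
  pciEnabled: true, // assumed flag name
};
```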

  4. Chat Completion Message Updates: When working with OpenAI chat completions, you should now use ChatCompletionMessageWorkflows instead of the deprecated ChatCompletionMessage.

  5. Voicemail Detection Defaults Updated: The default voicemailExpectedDurationSeconds for voicemail detection plans has increased from 15 to 25 seconds, affecting how voicemail detection timings are handled.

Enhancements in Assistant Responses, New Gemini Model, and Call Handling

  1. Introduction of ‘gemini-2.0-flash-lite’ Model Option: You can now set Assistant.model to provider "google" with model gemini-2.0-flash-lite for a reduced-latency, lower-cost Gemini model with a 1 million token context window.
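
For example, on an assistant (surrounding fields omitted):

```typescript
// Selecting the lighter Gemini model.
const model = {
  provider: "google",
  model: "gemini-2.0-flash-lite", // reduced latency, lower cost, 1M-token context window
};
```
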
  2. New Assistant Paginated Response: All Assistant endpoints now return paginated responses. Each response specifies itemsPerPage, totalItems, and currentPage, which you can use to navigate through a list of assistants.
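
A sketch of the pagination fields described above; exactly where they sit in the response envelope (here under metadata, alongside a results array) is an assumption.

```typescript
// Sketch: assumed shape of a paginated assistants response.
interface PaginatedAssistantsResponse {
  results: unknown[]; // the current page of assistants (assumed envelope)
  metadata: {
    itemsPerPage: number;
    totalItems: number;
    currentPage: number;
  };
}
```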

Blocks Schema Deprecations, Scheduling Enhancements, and New Voice Options for Vapi Voice

  1. ‘scheduled’ Status Added to Calls and Messages: You can now set the status of a call or message to scheduled, allowing it to be executed at a future time. This enables scheduling functionality within your application for calls and messages.

  2. New Voice Options for Text-to-Speech: Four new voices—Neha, Cole, Harry, and Paige—have been added for text-to-speech services. You can enhance user experience by setting the voiceId to one of these options in your configurations.

  3. Removal of Step and Block Schemas:

    Blocks and Steps are now officially deprecated. Update your applications to move off these schemas; Workflows (see the New Workflows API entry below) replace Blocks.

New Workflows API, Telnyx Phone Number Support, Voice Options, and much more

  1. Workflows Replace Blocks: The API has migrated from blocks to workflows with new /workflow endpoints. You can now use UpdateWorkflowDTO, where conversation components (Say, Gather, ApiRequest, Hangup, and Transfer nodes) are explicitly connected via edges to create directed conversation flows, as in the example below.

```json
{
  "name": "Customer Support Workflow",
  "nodes": [
    {
      "id": "greeting",
      "type": "Say",
      "text": "Hello, welcome to customer support. Do you need help with billing or technical issues?"
    },
    {
      "id": "menu",
      "type": "Gather",
      "options": ["billing", "technical", "other"]
    },
    {
      "id": "billing",
      "type": "Say",
      "text": "I'll connect you with our billing department."
    },
    {
      "id": "technical",
      "type": "Say",
      "text": "I'll connect you with our technical support team."
    },
    {
      "id": "transfer_billing",
      "type": "Transfer",
      "destination": {
        "type": "number",
        "number": "+1234567890"
      }
    },
    {
      "id": "transfer_technical",
      "type": "Transfer",
      "destination": {
        "type": "number",
        "number": "+1987654321"
      }
    }
  ],
  "edges": [
    {
      "from": "greeting",
      "to": "menu"
    },
    {
      "from": "menu",
      "to": "billing",
      "condition": {
        "type": "logic",
        "liquid": "{% if input == 'billing' %} true {% endif %}"
      }
    },
    {
      "from": "menu",
      "to": "technical",
      "condition": {
        "type": "logic",
        "liquid": "{% if input == 'technical' %} true {% endif %}"
      }
    },
    {
      "from": "billing",
      "to": "transfer_billing"
    },
    {
      "from": "technical",
      "to": "transfer_technical"
    }
  ]
}
```

  2. Telnyx Phone Number Support: Telnyx is now available as a phone number provider alongside Twilio and Vonage.

  3. New Voice Options:

    • Vapi Voices: New Vapi voices - Elliot, Rohan, Lily, Savannah, and Hana
    • Hume Voice: New provider with octave model and customizable voice settings
    • Neuphonic Voice: New provider with neu_hq (higher quality) and neu_fast (faster) models
  4. New Cerebras Model: CerebrasModel supports the llama3.1-8b and llama-3.3-70b models.

  5. Enhanced Transcription:

    • New Providers: ElevenLabs and Speechmatics transcribers are now available.
    • DeepgramTranscriber Numerals: The new numerals option converts spoken numbers to digits (e.g., “nine-seven-two” → “972”); see the sketch below.
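
A small sketch of the numerals option on a Deepgram transcriber config; the other fields are placeholders.

```typescript
// Sketch: Deepgram transcriber with numerals enabled.
const transcriber = {
  provider: "deepgram",
  model: "nova-2",  // placeholder model
  numerals: true,   // "nine-seven-two" is transcribed as "972"
};
```
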
  6. Improved Voicemail Detection: You can now use multiple provider implementations for assistant.voicemailDetection (Google, OpenAI, Twilio). The OpenAI implementation allows configuring the detection duration (5-60 seconds, default: 15).
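
A sketch of an OpenAI-based voicemail detection plan; the duration property name is borrowed from the voicemailExpectedDurationSeconds field mentioned earlier in this changelog and may not match the OpenAI plan exactly.

```typescript
// Sketch: OpenAI voicemail detection with a longer listening window.
const voicemailDetection = {
  provider: "openai",
  voicemailExpectedDurationSeconds: 30, // configurable 5-60 seconds; default 15
};
```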

  7. Smart Endpointing Upgrade: Now supports LiveKit as an alternative to Vapi’s custom-trained model in StartSpeakingPlan.smartEndpointingEnabled. LiveKit only supports English but may offer different endpointing characteristics.

  8. Observability with Langfuse: New assistant.observabilityPlan property allows integration with Langfuse for tracing and monitoring of assistant calls. Configure with LangfuseObservabilityPlan.

  9. More Credential Support: Added support for Cerebras, Google, Hume, InflectionAI, Mistral, Trieve, and Neuphonic credentials in assistant.credentials.