  1. New Assistant Hook for Call Ending Events: You can now define actions to execute when a call is ending using Assistant.hooks["AssistantHookCallEnding"]. This allows you to specify actions like transferring the call, saying a message, or invoking a function at the end of a call (see the sketch after this list).

  2. Enhanced Voicemail Detection Configuration: Configure voicemail detection more precisely with new Assistant.voicemailDetection.backoffPlan and Assistant.voicemailDetection.beepMaxAwaitSeconds properties. This lets you control retry strategies and set maximum wait times for voicemail beeps.

  3. Twilio Authentication Using API Keys: Authenticate with Twilio using apiKey and apiSecret when importing a Twilio Phone Number. This replaces the need for authToken.

  4. Support for New Voicemail Detection Provider and Model: Utilize the new vapi provider for voicemail detection by configuring Assistant.voicemailDetection.provider. Additionally, the gemini-2.5-flash-preview-04-17 model is now supported in various schemas for advanced capabilities.

  5. Expanded Workflow Nodes: Workflows now support Start and Assistant nodes, enabling more complex and customizable call flow designs. This allows for greater flexibility in defining how calls are handled.
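
The sketch below shows how items 1–4 above might come together in an assistant and phone-number payload. Only the properties named in the notes (hooks, voicemailDetection.provider, backoffPlan, beepMaxAwaitSeconds, apiKey, apiSecret) come from the changelog; the trigger value, action shapes, and backoff fields are illustrative assumptions, so check the API reference for the exact schemas.

```ts
// Rough sketch of an assistant payload exercising the call-ending hook and the
// new voicemail detection options (field shapes beyond the named properties are assumptions).
const assistant = {
  hooks: [
    {
      on: "call.ending", // assumed trigger name for AssistantHookCallEnding
      do: [{ type: "say", exact: "Thanks for calling. Goodbye!" }], // assumed action shape
    },
  ],
  voicemailDetection: {
    provider: "vapi",               // new provider option
    beepMaxAwaitSeconds: 20,        // stop waiting for a voicemail beep after 20 seconds
    backoffPlan: { maxRetries: 3 }, // retry strategy; exact fields are an assumption
  },
};

// Sketch of importing a Twilio phone number with an API key/secret pair
// instead of an authToken (field names other than apiKey/apiSecret are assumptions).
const twilioNumber = {
  provider: "twilio",
  number: "+15550123456",
  apiKey: "YOUR_TWILIO_API_KEY",
  apiSecret: "YOUR_TWILIO_API_SECRET",
};
```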

  1. Adding metadata to ToolCallResult and ToolCallResultMessage: You can now include optional metadata in tool call results and messages. This allows you to send additional context or information to clients alongside standard tool responses.

  2. Adding tool.completed client message type: Assistants can now handle a new client message type, tool.completed. This enables you to notify clients when a tool has finished executing.

  3. Customizable assistant messages via message property in ToolCallResult: You can now specify exact messages for the assistant to say upon tool completion or failure using the message property. This gives you greater control over user interactions by allowing custom, context-specific responses (see the sketch below).
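
A rough sketch of a tool server response using the new metadata and message properties, plus the tool.completed client message these items describe. Only metadata, message, and the tool.completed type come from the notes above; the surrounding field names are assumptions.

```ts
// Sketch of a tool call result carrying the new metadata and message fields
// (surrounding field names are assumptions).
const toolCallResult = {
  toolCallId: "call_123",
  result: "Order #4512 has shipped.",
  metadata: { source: "orders-db", latencyMs: 42 },             // extra context forwarded to clients
  message: "Good news! Your order shipped and arrives Friday.", // exact line the assistant speaks
};

// Clients that subscribe to the new message type receive something like
// { type: "tool.completed", ... } once the tool finishes executing (shape assumed).
```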

  1. New OpenAI Models ‘o3’ and ‘o4-mini’ Added: You can now use the ‘o3’ and ‘o4-mini’ models with OpenAI by setting Assistant.model["OpenAIModel"].model (see the sketch after this list).

  2. ‘whisper’ Model Added to Deepgram Transcribers: The ‘whisper’ model is now available in Deepgram transcriber models for audio transcription. Select ‘whisper’ in the Assistant.transcriber["DeepgramTranscriber"].model property to utilize this advanced transcription model.

  3. Expanded Language Support in Deepgram Transcribers: You can now transcribe audio in ‘ar’ (Arabic), ‘he’ (Hebrew), and ‘ur’ (Urdu) when using Deepgram transcriber in your assistant.
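
A sketch of an assistant using the new model and transcriber values. The model names and language codes come from the notes above; the provider strings and exact nesting are assumptions.

```ts
// Sketch of model + transcriber settings using the newly supported values.
const assistant = {
  model: {
    provider: "openai",
    model: "o4-mini",   // "o3" is also accepted
  },
  transcriber: {
    provider: "deepgram",
    model: "whisper",   // new Whisper option for Deepgram transcription
    language: "ar",     // Arabic; "he" (Hebrew) and "ur" (Urdu) are also supported
  },
};
```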

  1. Per-Voice Caching Control Added: Developers can now enable or disable voice caching for each assistant’s voice using the new cachingEnabled property in voice configurations. This allows you to optimize performance or comply with data policies by controlling whether voice responses are cached (see the sketch after this list).

  2. ‘Condition’ Value Now Accepts Strings: When specifying conditions, the value property should now be provided as a string instead of an object. This simplifies condition definitions and makes it easier to set and interpret condition values.
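
A sketch of both items: a voice with caching disabled and a condition whose value is now a plain string. Field names other than cachingEnabled and value are assumptions, and the provider/voiceId values are illustrative.

```ts
// Sketch of per-voice caching control.
const voice = {
  provider: "11labs",
  voiceId: "some-voice-id",
  cachingEnabled: false,   // opt this voice out of response caching
};

// Sketch of a condition with a string value (other field names are assumptions).
const condition = {
  param: "intent",
  operator: "eq",
  value: "cancel-subscription", // now a plain string rather than an object
};
```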

  1. Create Sesame Voices Programmatically: You can now create and manage Sesame Voices via the API by specifying a voiceName and transcription.

  2. AWS STS Support in OAuth2 Authentication: You can now use AWS Security Token Service (STS) for authentication by setting the type of OAuth2AuthenticationPlan to 'aws-sts' (see the sketch below).
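
A sketch of both items above. Only voiceName, transcription, and the 'aws-sts' type come from the notes; everything else is an assumption, so check the API reference for the exact request shapes.

```ts
// Sketch of creating a Sesame voice via the API (request shape is assumed
// beyond the voiceName and transcription fields named above).
const sesameVoiceRequest = {
  voiceName: "support-agent-en",
  transcription: "Hi, thanks for calling. How can I help you today?",
};

// Sketch of an OAuth2 authentication plan switched to AWS STS.
const authenticationPlan = {
  type: "aws-sts", // uses AWS Security Token Service instead of a standard OAuth2 token exchange
};
```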

  1. Idle Message Count Reset in Assistant.messagePlan: You can now enable Assistant.messagePlan.idleMessageResetCountOnUserSpeechEnabled (default: false) to allow the idle message count to reset whenever the user speaks. This means the assistant can repeatedly remind an idle user throughout the conversation.
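
A sketch of a message plan with the new flag enabled. The idleMessages field is shown only for context and is an assumption here.

```ts
// Sketch: re-arm idle reminders whenever the user speaks.
const assistant = {
  messagePlan: {
    idleMessages: ["Are you still there?"],          // shown for context; assumed field
    idleMessageResetCountOnUserSpeechEnabled: true,  // default is false
  },
};
```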

  1. Custom Hooks When a Call is Ringing: You can now define custom hooks on your phone numbers to automatically perform actions when a call is ringing. This enables you to play messages or transfer calls without additional server-side code using the new hooks property, Call.phoneNumber.hooks["phoneNumberHookCallRinging"].

  2. Say and Transfer Actions in Hooks: The new phone number call ringing hook lets you specify actions that trigger when a call is ringing (on: 'call.ringing'), such as transferring calls or playing a message. Include these actions in the do array of your hook (see the sketch after this list).

  3. Enhanced Call Tracking with endedReason: When implementing call analytics, you can now track calls that ended due to hook actions through new endedReason values:

  • 'call.ringing.hook-executed-say': Call ended after playing a message via hook
  • 'call.ringing.hook-executed-transfer': Call ended after being transferred via hook

  These values let you distinguish between different automated call handling outcomes in your reporting.
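
A sketch of a phone number configured with a call-ringing hook. The on: 'call.ringing' trigger, the do array, and the endedReason values come from the notes above; the exact action shapes are assumptions.

```ts
// Sketch of a phone number with a call-ringing hook (action shapes are assumptions).
const phoneNumber = {
  hooks: [
    {
      on: "call.ringing",
      do: [
        { type: "say", exact: "Please hold while we connect you." },
        { type: "transfer", destination: { type: "number", number: "+15550199" } },
      ],
    },
  ],
};

// Calls ended by these actions show up in analytics with endedReason
// "call.ringing.hook-executed-say" or "call.ringing.hook-executed-transfer".
```
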
  1. Assistant Overrides in Testing (TargetPlan.assistantOverrides): You can now apply assistantOverrides when testing an assistant with a Target Plan, allowing modifications to the assistant’s configuration specifically for tests without changing the original assistant. This helps in testing different configurations or behaviors of an assistant without affecting the live version.

  2. Specify Voice Model with Deepgram: You can now specify the model to be used by Deepgram voices by setting the model property to "aura" or "aura-2" (default: "aura-2").

  3. Expanded Deepgram Voice Options (voiceId in DeepgramVoice and FallbackDeepgramVoice): The list of available Deepgram voice options has been greatly expanded, providing a wider selection of voices for assistants. This allows you to customize the assistant’s voice to better match your desired persona via Assistant.voice["DeepgramVoice"].voiceId (see the sketch below).
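
A sketch of a test target applying assistant overrides, and of a Deepgram voice pinned to the aura-2 model. The assistantOverrides, model, and voiceId properties come from the notes above; the override contents and the specific voiceId shown are illustrative assumptions.

```ts
// Sketch of a test target that overrides the assistant only for this test run.
const targetPlan = {
  assistantId: "asst_123",
  assistantOverrides: {
    firstMessage: "This is a test run.", // applied for the test without changing the live assistant
  },
};

// Sketch of a Deepgram voice pinned to a specific voice model.
const voice = {
  provider: "deepgram",
  voiceId: "asteria",  // one of the expanded options; value shown is illustrative
  model: "aura-2",     // or "aura"; "aura-2" is the default
};
```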

  1. Control Text Replacement Behavior (replaceAllEnabled in ExactReplacement): A new property replaceAllEnabled allows you to decide whether to replace all instances of a specified text (key) or just the first occurrence in ExactReplacement configurations. Setting replaceAllEnabled to true ensures that all instances are replaced.
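
A sketch of an ExactReplacement rule with the new flag; field names other than key and replaceAllEnabled are assumptions.

```ts
// Sketch of an ExactReplacement rule that rewrites every occurrence of the key.
const replacement = {
  type: "exact",            // assumed discriminator for ExactReplacement
  key: "acme",
  value: "ACME Corporation",
  replaceAllEnabled: true,  // false replaces only the first occurrence
};
```
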
  1. New GPT-4.1 Models Available: You can now use 'gpt-4.1', 'gpt-4.1-mini', and 'gpt-4.1-nano' as options for the model and fallbackModels with your OpenAI models. These models may offer improved performance or features over previous versions.
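
A sketch of an OpenAI model block using GPT-4.1 with the smaller variants as fallbacks. The model names and the fallbackModels property come from the note above; the provider string and nesting are assumptions.

```ts
// Sketch of an OpenAI model configuration using the new GPT-4.1 family.
const model = {
  provider: "openai",
  model: "gpt-4.1",
  fallbackModels: ["gpt-4.1-mini", "gpt-4.1-nano"], // used if the primary model is unavailable
};
```
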
  1. Expanded Voice Selection for Assistant Voices: You can now specify any valid voiceId for assistant voices without being limited to a predefined list. This provides greater flexibility to use different voices in Assistant.voice, and related configurations.