These are the options for the assistant’s transcriber.
These are the options for the assistant’s LLM.
These are the options for the assistant’s voice.
This is the mode for the first message. Default is ‘assistant-speaks-first’.
Use:
- ‘assistant-speaks-first’ to have the assistant speak first.
- ‘assistant-waits-for-user’ to have the assistant wait for the user to speak first.
- ‘assistant-speaks-first-with-model-generated-message’ to have the assistant speak first with a message generated by the model based on the conversation state (assistant.model.messages at call start, call.messages at squad transfer points).
@default ‘assistant-speaks-first’
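For illustration, a minimal sketch of setting this mode in an assistant payload. The `firstMessageMode` field name is an assumption; the mode strings are the ones documented above.

```typescript
// Sketch: selecting the first-message mode (field name assumed).
type FirstMessageMode =
  | "assistant-speaks-first"
  | "assistant-waits-for-user"
  | "assistant-speaks-first-with-model-generated-message";

const assistant = {
  firstMessageMode: "assistant-waits-for-user" as FirstMessageMode,
};
```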
When this is enabled, no logs, recordings, or transcriptions will be stored. At the end of the call, you will still receive an end-of-call-report message to store on your server. Defaults to false.
These are the messages that will be sent to your Client SDKs. Default is conversation-update, function-call, hang, model-output, speech-update, status-update, transfer-update, transcript, tool-calls, user-interrupted, voice-input. You can check the shape of the messages in ClientMessage schema.
These are the messages that will be sent to your Server URL. Default is conversation-update, end-of-call-report, function-call, hang, speech-update, status-update, tool-calls, transfer-destination-request, user-interrupted. You can check the shape of the messages in ServerMessage schema.
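A sketch of enabling HIPAA mode and narrowing both message streams. The field names (`hipaaEnabled`, `clientMessages`, `serverMessages`) are assumptions; the message types are drawn from the defaults listed above.

```typescript
// Sketch: privacy mode plus narrowed message streams (field names assumed).
const assistant = {
  hipaaEnabled: true, // nothing stored; end-of-call-report still delivered
  clientMessages: ["transcript", "status-update"],
  serverMessages: ["end-of-call-report", "status-update"],
};
```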
How many seconds of silence to wait before ending the call. Defaults to 30.
@default 30
This is the maximum number of seconds that the call will last. When the call reaches this duration, it will be ended.
@default 600 (10 minutes)
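A sketch of tuning both timeouts, assuming field names like `silenceTimeoutSeconds` and `maxDurationSeconds` (the reference above gives only the defaults):

```typescript
// Sketch: ending calls on silence or total duration (field names assumed).
const assistant = {
  silenceTimeoutSeconds: 45, // default 30
  maxDurationSeconds: 1800,  // default 600 (10 minutes)
};
```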
This is the background sound in the call. Default for phone calls is ‘office’ and default for web calls is ‘off’.
This enables filtering of noise and background speech while the user is talking.
Default false while in beta.
@default false
This determines whether the model’s output is used in the conversation history rather than the transcription of the assistant’s speech.
Default false while in beta.
@default false
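A sketch of these in-call audio and history toggles together; the field names are assumptions, and the defaults are the ones documented above.

```typescript
// Sketch: audio and history toggles (field names assumed).
const assistant = {
  backgroundSound: "off",             // 'office' is the phone-call default
  backgroundDenoisingEnabled: true,   // beta; defaults to false
  modelOutputInMessagesEnabled: true, // beta; defaults to false
};
```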
These are the configurations passed to the transport providers of the assistant’s calls, such as Twilio. You can store multiple configurations for different transport providers. For a call, only the configuration matching the call transport provider is used.
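For example, a sketch with a single Twilio entry; only the entry whose provider matches the call’s transport is applied, and the entry shape shown here is an assumption.

```typescript
// Sketch: per-provider transport configuration (entry shape assumed).
const assistant = {
  transportConfigurations: [
    { provider: "twilio", timeout: 60 }, // used only on Twilio calls
  ],
};
```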
This is the name of the assistant.
This is required when you want to transfer between assistants in a call.
This is the first message that the assistant will say. This can also be a URL to a containerized audio file (mp3, wav, etc.).
If unspecified, the assistant will wait for the user to speak and use the model to respond once they speak.
These are the settings to configure or disable voicemail detection. Alternatively, voicemail detection can be configured using model.tools=[VoicemailTool]. This setting uses Twilio’s built-in detection, while the VoicemailTool relies on the model to detect if a voicemail was reached. You can use neither of them, one of them, or both of them. By default, Twilio built-in detection is enabled while VoicemailTool is not.
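A sketch of enabling both methods at once, assuming plausible shapes for the detection config and the tool entry (neither shape is spelled out above):

```typescript
// Sketch: combining both voicemail detection methods (shapes assumed).
const assistant = {
  voicemailDetection: { provider: "twilio" }, // Twilio built-in detection
  model: {
    tools: [{ type: "voicemail" }], // model-based VoicemailTool
  },
};
```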
This is the message that the assistant will say if the call is forwarded to voicemail.
If unspecified, it will hang up.
This is the message that the assistant will say if it ends the call.
If unspecified, it will hang up without saying anything.
This list contains phrases that, if spoken by the assistant, will trigger the call to be hung up. Case insensitive.
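A sketch combining the message fields and hang-up phrases; `firstMessage`, `voicemailMessage`, and `endCallMessage` are named in the notes further below, while `endCallPhrases` is an assumed field name.

```typescript
// Sketch: root-level messages and hang-up phrases (endCallPhrases assumed).
const assistant = {
  firstMessage: "Hi, this is the scheduling assistant. How can I help?",
  voicemailMessage: "Sorry we missed you; we'll try again tomorrow.",
  endCallMessage: "Thanks for calling. Goodbye!",
  endCallPhrases: ["goodbye", "talk to you later"], // case-insensitive
};
```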
This is for metadata you want to store on the assistant.
This is the plan for analysis of the assistant’s calls. Stored in call.analysis.
This is the plan for artifacts generated during the assistant’s calls. Stored in call.artifact.
Note: recordingEnabled is currently at the root level. It will be moved to artifactPlan in the future, but will remain backwards compatible.
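Per that note, a minimal sketch using the current root-level location:

```typescript
// Sketch: enabling recording (currently root-level; moving to artifactPlan).
const assistant = {
  recordingEnabled: true, // the recording is stored in call.artifact
};
```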
This is the plan for static predefined messages that can be spoken by the assistant during the call, like idleMessages.
Note: firstMessage, voicemailMessage, and endCallMessage are currently at the root level. They will be moved to messagePlan in the future, but will remain backwards compatible.
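A sketch of idle messages via the message plan; `messagePlan` and `idleMessages` are named above, but the exact nesting is an assumption.

```typescript
// Sketch: idle messages via the message plan (nesting assumed).
const assistant = {
  messagePlan: {
    idleMessages: ["Are you still there?", "Take your time."],
  },
};
```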
This is the plan for when the assistant should start talking.
You should configure this if you’re running into these issues:
- The assistant is too slow to start talking after the customer finishes speaking.
- The assistant is too fast to start talking after the customer finishes speaking.
- The assistant is so fast that it interrupts the customer.
This is the plan for when the assistant should stop talking on customer interruption.
You should configure this if you’re running into these issues:
- The assistant is too slow to recognize the customer’s interruption.
- The assistant is too fast to recognize the customer’s interruption.
- The assistant is getting interrupted by phrases that are just acknowledgments, or by background noise.
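A sketch of tuning both plans; the inner field names (`waitSeconds`, `numWords`) are assumptions for illustration.

```typescript
// Sketch: turn-taking tuning (inner field names assumed).
const assistant = {
  startSpeakingPlan: { waitSeconds: 0.8 }, // raise if the assistant interrupts
  stopSpeakingPlan: { numWords: 2 },       // ignore one-word acknowledgments
};
```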
This is the plan for real-time monitoring of the assistant’s calls.
Usage:
- To enable live listening of the assistant’s calls, set monitorPlan.listenEnabled to true.
- To enable live control of the assistant’s calls, set monitorPlan.controlEnabled to true.
Note: serverMessages, clientMessages, serverUrl, and serverUrlSecret are currently at the root level but will be moved to monitorPlan in the future. They will remain backwards compatible.
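Per the usage notes above, a minimal sketch:

```typescript
// Sketch: enabling live listen and control.
const assistant = {
  monitorPlan: {
    listenEnabled: true,  // live listening of the assistant's calls
    controlEnabled: true, // live control of the assistant's calls
  },
};
```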
These are the credentials that will be used for the assistant’s calls. By default, all the credentials are available for use in the call, but you can provide a subset using this.
This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema.
The order of precedence is:
1. assistant.serverUrl
2. phoneNumber.serverUrl
3. org.serverUrl
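A minimal sketch of a server that receives these webhooks. The `{ message: { type } }` envelope is an assumption; check the ServerMessage schema for the actual shape.

```typescript
import * as http from "node:http";

// Sketch: a webhook receiver that persists end-of-call reports.
http
  .createServer((req, res) => {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      const payload = JSON.parse(body); // shape per ServerMessage schema
      if (payload?.message?.type === "end-of-call-report") {
        // persist the report for your records here
      }
      res.writeHead(200);
      res.end();
    });
  })
  .listen(8080);
```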
This is the unique identifier for the assistant.
This is the unique identifier for the org that this assistant belongs to.
This is the ISO 8601 date-time string of when the assistant was created.
This is the ISO 8601 date-time string of when the assistant was last updated.