Get Assistant
Path parameters
Headers
Bearer authentication of the form Bearer <token>, where token is your auth token.
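The request shape can be sketched as follows. This is a minimal illustration, not an official client: the base URL and path (`https://api.vapi.ai/assistant/{id}`) are assumptions based on this endpoint's name, and only the `Bearer <token>` header format comes directly from this reference.

```typescript
// Sketch: fetching an assistant over HTTP with Bearer auth.
// The base URL and path are assumptions; substitute your real values.

function buildAuthHeaders(token: string): Record<string, string> {
  // The Authorization header takes the form "Bearer <token>".
  return { Authorization: `Bearer ${token}` };
}

async function getAssistant(token: string, assistantId: string): Promise<unknown> {
  const res = await fetch(`https://api.vapi.ai/assistant/${assistantId}`, {
    method: "GET",
    headers: buildAuthHeaders(token),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  // The response body includes the fields documented below
  // (createdAt/updatedAt ISO 8601 strings, firstMessage, plans, etc.).
  return res.json();
}
```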
Response
This is the ISO 8601 date-time string of when the assistant was created.
This is the ISO 8601 date-time string of when the assistant was last updated.
This is the first message that the assistant will say. This can also be a URL to a containerized audio file (mp3, wav, etc.).
If unspecified, the assistant will wait for the user to speak and use the model to respond once they speak.

This is the mode for the first message. Default is ‘assistant-speaks-first’.
Use:
- ‘assistant-speaks-first’ to have the assistant speak first.
- ‘assistant-waits-for-user’ to have the assistant wait for the user to speak first.
- ‘assistant-speaks-first-with-model-generated-message’ to have the assistant speak first with a message generated by the model based on the conversation state. (assistant.model.messages at call start, call.messages at squad transfer points.)
@default ‘assistant-speaks-first’
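The three modes can be sketched as a config fragment. The field names `firstMessage` and `firstMessageMode` are assumptions inferred from this reference's wording; only the mode values themselves are stated above.

```typescript
// Sketch of an assistant config exercising the first-message options.
// Field names (firstMessage, firstMessageMode) are assumptions based on
// the option values documented above.

type FirstMessageMode =
  | "assistant-speaks-first"
  | "assistant-waits-for-user"
  | "assistant-speaks-first-with-model-generated-message";

interface FirstMessageConfig {
  firstMessage?: string; // literal greeting, or a URL to an audio file (mp3, wav, etc.)
  firstMessageMode?: FirstMessageMode; // defaults to "assistant-speaks-first"
}

const config: FirstMessageConfig = {
  firstMessage: "Hi! How can I help you today?",
  firstMessageMode: "assistant-speaks-first",
};
```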
These are the settings to configure or disable voicemail detection. Alternatively, voicemail detection can be configured using model.tools=[VoicemailTool]. This setting uses Twilio’s built-in detection, while the VoicemailTool relies on the model to detect whether a voicemail was reached. You can use neither of them, one of them, or both of them. By default, Twilio’s built-in detection is enabled while the VoicemailTool is not.
These are the messages that will be sent to your Client SDKs. Default is conversation-update, function-call, hang, model-output, speech-update, status-update, transfer-update, transcript, tool-calls, user-interrupted, voice-input, workflow.node.started. You can check the shape of the messages in the ClientMessage schema.
These are the messages that will be sent to your Server URL. Default is conversation-update, end-of-call-report, function-call, hang, speech-update, status-update, tool-calls, transfer-destination-request, user-interrupted. You can check the shape of the messages in the ServerMessage schema.
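A config sketch narrowing which events go where. The field names `clientMessages` and `serverMessages` come from the note later in this reference; the subsets chosen here are arbitrary examples drawn from the default lists above.

```typescript
// Sketch: choosing which events reach the Client SDKs vs. the Server URL.
// Values are drawn from the documented defaults; exact message shapes are
// defined by the ClientMessage and ServerMessage schemas.

const clientMessages: string[] = [
  "conversation-update",
  "transcript",
  "user-interrupted",
];

const serverMessages: string[] = [
  "end-of-call-report",
  "status-update",
  "tool-calls",
];
```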
How many seconds of silence to wait before ending the call. Defaults to 30.
@default 30
This is the maximum number of seconds that the call will last. When the call reaches this duration, it will be ended.
@default 600 (10 minutes)
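The two call-length limits can be sketched together. The field names `silenceTimeoutSeconds` and `maxDurationSeconds` are assumptions inferred from the descriptions; the values are the documented defaults.

```typescript
// Sketch: call-length limits, using the @default values documented above.
// Field names are assumptions inferred from this reference's descriptions.

const callLimits = {
  silenceTimeoutSeconds: 30, // end the call after 30 seconds of silence
  maxDurationSeconds: 600,   // hard cap on call length: 10 minutes
};
```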
This enables filtering of noise and background speech while the user is talking.
Default false while in beta.
@default false
This determines whether the model’s output is used in conversation history rather than the transcription of the assistant’s speech.
Default false while in beta.
@default false
This is the plan for analysis of the assistant’s calls. Stored in call.analysis.
This is the plan for artifacts generated during the assistant’s calls. Stored in call.artifact.
Note: recordingEnabled is currently at the root level. It will be moved to artifactPlan in the future, but will remain backwards compatible.
This is the plan for static predefined messages that can be spoken by the assistant during the call, like idleMessages.
Note: firstMessage, voicemailMessage, and endCallMessage are currently at the root level. They will be moved to messagePlan in the future, but will remain backwards compatible.
This is the plan for when the assistant should start talking.
You should configure this if you’re running into these issues:
- The assistant is too slow to start talking after the customer is done speaking.
- The assistant is too fast to start talking after the customer is done speaking.
- The assistant is so fast that it’s actually interrupting the customer.
This is the plan for when the assistant should stop talking on customer interruption.
You should configure this if you’re running into these issues:
- The assistant is too slow to recognize customer’s interruption.
- The assistant is too fast to recognize customer’s interruption.
- The assistant is getting interrupted by phrases that are just acknowledgments.
- The assistant is getting interrupted by background noises.
- The assistant is not properly stopping — it starts talking right after getting interrupted.
This is the plan for real-time monitoring of the assistant’s calls.
Usage:
- To enable live listening of the assistant’s calls, set monitorPlan.listenEnabled to true.
- To enable live control of the assistant’s calls, set monitorPlan.controlEnabled to true.
Note: serverMessages, clientMessages, serverUrl, and serverUrlSecret are currently at the root level but will be moved to monitorPlan in the future. They will remain backwards compatible.
This is where Vapi will send webhooks. You can find all available webhooks, along with their shape, in the ServerMessage schema.
The order of precedence is:
- assistant.server.url
- phoneNumber.serverUrl
- org.serverUrl
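The precedence order above can be sketched as a small resolver: the first defined URL wins, walking from the assistant level down to the org level. The helper name and the flattened argument shape are illustrative; only the precedence order itself comes from this reference.

```typescript
// Sketch of the documented server-URL precedence:
// assistant.server.url > phoneNumber.serverUrl > org.serverUrl.

interface UrlSources {
  assistantServerUrl?: string;   // assistant.server.url
  phoneNumberServerUrl?: string; // phoneNumber.serverUrl
  orgServerUrl?: string;         // org.serverUrl
}

function resolveServerUrl(s: UrlSources): string | undefined {
  // Nullish coalescing returns the first value that is defined.
  return s.assistantServerUrl ?? s.phoneNumberServerUrl ?? s.orgServerUrl;
}
```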