Create Assistant
Headers
Bearer authentication of the form Bearer <token>, where <token> is your auth token.
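A minimal sketch of the header in a create-assistant request, assuming the endpoint is POST https://api.vapi.ai/assistant and that the token lives in a VAPI_API_KEY environment variable (both assumptions, not part of this reference):

```typescript
// Sketch: pass the auth token as "Bearer <token>" in the Authorization header.
const res = await fetch("https://api.vapi.ai/assistant", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.VAPI_API_KEY}`, // assumed env var name
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ name: "Support Assistant" }),
});
const assistant = await res.json();
```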
Request
This is the first message that the assistant will say. This can also be a URL to a containerized audio file (mp3, wav, etc.).
If unspecified, the assistant will wait for the user to speak and use the model to respond once they speak.
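A sketch of both forms, assuming the field is named firstMessage; the audio URL is a placeholder:

```typescript
// Sketch: firstMessage as plain text, or as a URL to a hosted audio file.
const body = {
  firstMessage: "Hi, thanks for calling. How can I help you today?",
  // firstMessage: "https://example.com/audio/greeting.mp3", // placeholder audio URL
};
```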
These are the messages that will be sent to your Client SDKs. Default is conversation-update, function-call, hang, model-output, speech-update, status-update, transfer-update, transcript, tool-calls, user-interrupted, voice-input, workflow.node.started. You can check the shape of the messages in the ClientMessage schema.
These are the messages that will be sent to your Server URL. Default is conversation-update, end-of-call-report, function-call, hang, speech-update, status-update, tool-calls, transfer-destination-request, handoff-destination-request, user-interrupted. You can check the shape of the messages in the ServerMessage schema.
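A sketch that narrows both lists, assuming the fields are named clientMessages and serverMessages (the field names are not spelled out above):

```typescript
// Sketch: limit the events delivered to the Client SDKs and to the Server URL.
const body = {
  clientMessages: ["conversation-update", "transcript", "tool-calls"],
  serverMessages: ["end-of-call-report", "status-update", "tool-calls"],
};
```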
This is the maximum number of seconds that the call will last. When the call reaches this duration, it will be ended.
@default 600 (10 minutes)
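For example, to cap calls at 30 minutes instead of the 600-second default, assuming the field is named maxDurationSeconds:

```typescript
// Sketch: end the call automatically after 30 minutes (1800 seconds).
const body = {
  maxDurationSeconds: 1800,
};
```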
This determines whether the model's output is used in conversation history rather than the transcription of the assistant's speech. Default false while in beta.
@default false
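A sketch, assuming the field is named modelOutputInMessagesEnabled (the name is not given in this reference):

```typescript
// Sketch: put the model's output, rather than the speech transcription, into conversation history.
const body = {
  modelOutputInMessagesEnabled: true, // assumed field name; defaults to false
};
```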
This is the plan for analysis of the assistant's calls. Stored in call.analysis.
This is the plan for artifacts generated during the assistant's calls. Stored in call.artifact.
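A sketch of both plans, assuming the fields are named analysisPlan and artifactPlan and assuming the sub-fields shown, none of which are documented above:

```typescript
// Sketch: request a post-call summary and keep a recording as a call artifact.
const body = {
  analysisPlan: {
    summaryPlan: { enabled: true }, // assumed shape; result would be stored in call.analysis
  },
  artifactPlan: {
    recordingEnabled: true, // assumed flag; recording stored in call.artifact
  },
};
```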
This is where Vapi will send webhooks. You can find all available webhooks, along with their shape, in the ServerMessage schema. Setting the assistant-level URL is sketched after the list below.
The order of precedence is:
- assistant.server.url
- phoneNumber.serverUrl
- org.serverUrl
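The assistant-level URL sits at the top of that order. A minimal sketch, with a placeholder URL standing in for your own webhook endpoint:

```typescript
// Sketch: assistant.server.url takes precedence over phoneNumber.serverUrl and org.serverUrl.
const body = {
  server: {
    url: "https://example.com/vapi/webhooks", // placeholder endpoint
  },
};
```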
Response
This is the ISO 8601 date-time string of when the assistant was created.
This is the ISO 8601 date-time string of when the assistant was last updated.
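A sketch of reading these timestamps off the response, reusing the assumed endpoint from the earlier sketch and assuming the fields are named createdAt and updatedAt:

```typescript
// Sketch: the created assistant comes back as JSON with ISO 8601 timestamps.
const res = await fetch("https://api.vapi.ai/assistant", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.VAPI_API_KEY}`, // assumed env var name
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ name: "Support Assistant" }),
});
const assistant = await res.json();
console.log(assistant.createdAt, assistant.updatedAt); // ISO 8601 date-time strings
```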
This is the first message that the assistant will say. This can also be a URL to a containerized audio file (mp3, wav, etc.).
If unspecified, the assistant will wait for the user to speak and use the model to respond once they speak.
These are the messages that will be sent to your Client SDKs. Default is conversation-update, function-call, hang, model-output, speech-update, status-update, transfer-update, transcript, tool-calls, user-interrupted, voice-input, workflow.node.started. You can check the shape of the messages in the ClientMessage schema.
These are the messages that will be sent to your Server URL. Default is conversation-update, end-of-call-report, function-call, hang, speech-update, status-update, tool-calls, transfer-destination-request, handoff-destination-request, user-interrupted. You can check the shape of the messages in the ServerMessage schema.
This is the maximum number of seconds that the call will last. When the call reaches this duration, it will be ended.
@default 600 (10 minutes)
This determines whether the model's output is used in conversation history rather than the transcription of the assistant's speech. Default false while in beta.
@default false
This is the plan for analysis of the assistant's calls. Stored in call.analysis.
This is the plan for artifacts generated during the assistant's calls. Stored in call.artifact.
This is where Vapi will send webhooks. You can find all available webhooks, along with their shape, in the ServerMessage schema.
The order of precedence is:
- assistant.server.url
- phoneNumber.serverUrl
- org.serverUrl