Passing data between assistants

Three approaches for forwarding context to the next assistant in a squad — when to use each, and what each one costs.

When an assistant in a squad hands off to another assistant, you usually need to forward something — the caller’s name, an extracted intent, an upstream tool’s result, a session ID. Vapi gives you three different mechanisms to do this. Each one trades off latency, accuracy, and where the value comes from. Picking the wrong one is the single most common reason squad handoffs feel slow or unreliable.

This page is a decision guide. For end-to-end configuration of the handoff itself, see the Handoff tool page.

The three approaches at a glance

Handoff arguments (function.parameters on the handoff tool)
  • Where the value comes from: the model decides, inline with the same tool call that triggers the handoff
  • LLM involved? Yes — piggybacks on the LLM call already happening
  • Latency: zero added
  • Hallucination risk: yes (the model fills the value)
  • Best for: classifications, summaries, sentiment, intent — anything the model has to derive from the live conversation

Variable extraction (variableExtractionPlan.schema on the destination)
  • Where the value comes from: the model extracts from the full conversation transcript
  • LLM involved? Yes — a separate, dedicated LLM call
  • Latency: a full LLM round-trip (hundreds of ms)
  • Hallucination risk: yes
  • Best for: structured extraction with a dedicated prompt — e.g. pulling dateOfBirth and appointmentTime from the user’s last few utterances

Liquid templating in the destination’s prompt
  • Where the value comes from: already in the variable bag (call data, prior tool results, prior extractions)
  • LLM involved? No — pure template substitution
  • Latency: sub-millisecond per render
  • Hallucination risk: no (deterministic)
  • Best for: forwarding values that already exist — caller phone number, a prior lookupPatient result, time variables

Approach 1: Handoff arguments

Define function.parameters on the handoff tool. The LLM that’s already generating the handoff tool call also fills in your custom arguments as part of the same call — no extra round-trip.

Availability today:

  • API: Fully supported. Send the JSON below via POST /tool, or include it in your assistant’s model.tools[] via POST /assistant or PATCH /assistant.
  • Dashboard — Tools page: UX for defining function.parameters on a handoff tool is shipping soon. Use the API in the meantime.
  • Dashboard — Squad builder: Configuring a handoff via the squad member’s Handoff Tools section does NOT currently carry function.parameters through to the runtime tool (backend synthesizes the tool without the function field). Until that’s fixed, put the handoff tool directly on the assistant’s model.tools[] (via the API or the Tools page) instead of defining it per squad-member destination.
{
  "type": "handoff",
  "function": {
    "name": "handoff_to_specialist",
    "description": "Hand off to the specialist when the customer is ready",
    "parameters": {
      "type": "object",
      "required": ["destination", "customerIntent", "customerSentiment"],
      "properties": {
        "destination": {
          "type": "string",
          "enum": ["specialist"]
        },
        "customerIntent": {
          "type": "string",
          "enum": ["new-customer", "existing-customer", "billing-issue"],
          "description": "What the customer is calling about"
        },
        "customerSentiment": {
          "type": "string",
          "enum": ["positive", "neutral", "frustrated"],
          "description": "Caller's overall sentiment"
        }
      }
    }
  },
  "destinations": [
    {
      "type": "assistant",
      "assistantName": "Specialist"
    }
  ]
}

The next assistant receives customerIntent and customerSentiment in the variable bag and can reference them as {{customerIntent}} / {{customerSentiment}} in its prompts.
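For instance, the Specialist’s system prompt might open like this (illustrative wording; the variables come from the tool definition above):

You are the specialist. The caller's intent is {{customerIntent}} and their
overall sentiment is {{customerSentiment}}. If the sentiment is "frustrated",
acknowledge it before doing anything else.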

Use this when the value only exists “in the model’s head” — it has to be derived from the live conversation, but you don’t need a separate dedicated extraction call.

Avoid this when the value already exists somewhere structured (a prior tool result, the call’s customer.number, etc.) — the model could mishear or paraphrase it. Use Approach 3 for those.

Approach 2: Variable extraction (variableExtractionPlan.schema)

Define a variableExtractionPlan.schema on the handoff destination. After the handoff fires, Vapi makes a dedicated LLM call against the full conversation transcript to fill the schema, then merges the result into the variable bag for the next assistant.

{
  "type": "assistant",
  "assistantName": "Scheduler",
  "variableExtractionPlan": {
    "schema": {
      "type": "object",
      "required": ["preferredDate", "preferredTime"],
      "properties": {
        "preferredDate": {
          "type": "string",
          "description": "The date the caller asked to schedule for, in YYYY-MM-DD format"
        },
        "preferredTime": {
          "type": "string",
          "description": "The time of day the caller asked for, in 24-hour HH:MM format"
        }
      }
    }
  }
}
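Once extraction completes, the Scheduler can reference the results like any other variable in the bag. An illustrative snippet of its prompt:

You are the scheduler. The caller asked for an appointment on {{preferredDate}}
at {{preferredTime}}. Confirm both values with the caller before booking.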

Use this when the value lives across several user utterances and needs a dedicated extraction prompt to capture reliably. Schema validation gives you typed output and lets you constrain values via JSON-schema enum / pattern.

Avoid this when zero added latency matters — this path adds a full LLM round-trip per handoff (typically a few hundred ms). For high-traffic flows where the value is something the model can fill inline, Approach 1 is faster.

For full configuration details — multiple destinations, dynamic handoffs, context engineering — see the Variable extraction section of the Handoff tool page.

Approach 3: Liquid templating in the destination’s prompt

The variable bag is shared across every assistant in the squad for the lifetime of the call. Anything that’s been put into it — by Approach 1, Approach 2, by a prior tool call returning JSON, by call-level data like customer.number and phoneNumber.number, by time variables like now and year — is reachable from any subsequent assistant’s prompt via Liquid syntax. No extra wiring required.

You are the scheduling specialist. The caller is {{customer.name}}, calling
from {{customer.number}}. Their patient ID is {{patientId}} (looked up earlier
this call). They want a {{preferredAppointmentType}} appointment.
Today is {{currentDateTime}}.

If customer.name, patientId, etc. are in the bag, they render. If a variable is missing, its token renders literally ({{patientId}} stays {{patientId}}), so the caller might hear “patientId” spoken; worth handling defensively in your prompt.
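One defensive pattern, assuming standard Liquid control flow is available in your prompts, is to guard optional values so a missing entry drops the sentence instead of leaking the raw token:

{% if patientId %}Their patient ID is {{patientId}}.{% endif %}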

Use this when the value is already in the bag — there’s no reason to re-extract via LLM what you already have structurally. Sub-millisecond, deterministic, free.

Avoid this when the value isn’t in the bag yet. Liquid can’t extract from the conversation; it can only forward what’s already there.

Sensitive fields are sanitized. Vapi automatically redacts credential-like keys (twilioAuthToken, twilioApiSecret, serverUrlSecret, accountSid, callToken, credentialId, etc.) from the variable bag before any prompt rendering. References like {{phoneNumber.twilioAuthToken}} will render as [REDACTED] rather than leaking the actual credential.

Decision flowchart

What do you want the next assistant to know?
├─ "Something the model just heard / classified / summarized"
│     └─→ Approach 1: Handoff arguments
│         Zero added latency, model fills inline.
├─ "Something the user explicitly said and I want a dedicated, schema-validated extraction"
│     └─→ Approach 2: variableExtractionPlan.schema
│         Adds an LLM round-trip, but you get structured output and a focused extraction prompt.
└─ "Something I already have — call data, prior tool result, prior extraction"
      └─→ Approach 3: Reference it via Liquid in the destination's prompt
          No extra cost. Use {{customer.number}}, {{patientId}}, etc. directly.

Common patterns

Pattern: “Forward an extracted ID after a database lookup”

A lookupPatient tool returned {patientId: "p_42", dob: "1990-01-15"} on assistant A. Assistant B needs patientId.

Use Approach 3 — it’s already in the bag. Assistant B’s prompt: The patient ID is {{patientId}}. Don’t re-extract it via schema; the model could mishear digits.

Pattern: “Categorize what the caller wants and route on it”

Caller spent two turns describing a problem. Assistant A needs to classify the intent and hand off to a specialist who knows about that intent.

Use Approach 1 — handoff arguments with an enum for intent. The classifying assistant’s tool call carries the intent inline; the destination assistant reads {{intent}}.

Pattern: “Pull a structured booking request out of free-form speech”

Caller said “I want to come in next Tuesday around 2 PM, maybe earlier if there’s something”. Assistant A needs {preferredDate, preferredTime, alternativesOK} as structured fields.

Use Approach 2: variableExtractionPlan.schema on the destination. The dedicated extraction prompt plus schema validation captures the structure more reliably than inline arguments.
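An illustrative destination config for this pattern (field names follow the example above; this assumes boolean properties are accepted in the extraction schema):

{
  "type": "assistant",
  "assistantName": "Scheduler",
  "variableExtractionPlan": {
    "schema": {
      "type": "object",
      "required": ["preferredDate", "preferredTime", "alternativesOK"],
      "properties": {
        "preferredDate": {
          "type": "string",
          "description": "Requested date, in YYYY-MM-DD format"
        },
        "preferredTime": {
          "type": "string",
          "description": "Requested time, in 24-hour HH:MM format"
        },
        "alternativesOK": {
          "type": "boolean",
          "description": "Whether the caller is open to nearby alternative slots"
        }
      }
    }
  }
}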

Pattern: “Mix and match”

You can combine all three on a single handoff. Common shape: handoff arguments for the LLM-classified intent, schema extraction for one structured field that needs the dedicated prompt, and the destination’s system prompt directly references prior tool results via Liquid.
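A sketch assembled from the examples above (names illustrative): the intent rides inline on the tool call, preferredDate gets a dedicated extraction, and the Scheduler’s prompt references {{patientId}} from a prior lookup via Liquid.

{
  "type": "handoff",
  "function": {
    "name": "handoff_to_scheduler",
    "description": "Hand off to the scheduler once the caller's intent is clear",
    "parameters": {
      "type": "object",
      "required": ["destination", "customerIntent"],
      "properties": {
        "destination": { "type": "string", "enum": ["scheduler"] },
        "customerIntent": {
          "type": "string",
          "enum": ["new-customer", "existing-customer", "billing-issue"]
        }
      }
    }
  },
  "destinations": [
    {
      "type": "assistant",
      "assistantName": "Scheduler",
      "variableExtractionPlan": {
        "schema": {
          "type": "object",
          "required": ["preferredDate"],
          "properties": {
            "preferredDate": {
              "type": "string",
              "description": "The date the caller asked to schedule for, in YYYY-MM-DD format"
            }
          }
        }
      }
    }
  ]
}

The Scheduler’s system prompt can then read {{customerIntent}}, {{preferredDate}}, and {{patientId}} directly, with no further wiring.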

What if extraction fails?

Vapi’s handoff path is failure-isolated:

  • An empty variableExtractionPlan ({}) is a graceful no-op — the handoff proceeds without extraction.
  • A schema-extraction LLM failure (5xx, timeout, rate limit) is logged and the handoff proceeds with no extracted variables — it does not abort the handoff.
  • A schema-extraction result that isn’t a plain object (an array, a primitive, null) is dropped before merge — it does not corrupt the variable bag.

So extraction is best-effort; if values are critical for the next assistant to function, prefer Approach 1 (handoff arguments: fields marked required in the function schema must be filled before the tool call is emitted) or Approach 3 (reference values you already have).

Next steps