Debugging voice agents
Learn to identify, diagnose, and fix common issues with your voice assistants and workflows
Overview
Voice agents involve multiple AI systems working together—speech recognition, language models, and voice synthesis. When something goes wrong, systematic debugging helps you quickly identify and fix the root cause.
Most common issues fall into these categories:
- Agent doesn’t understand user input correctly
- Responses are inappropriate or inconsistent
- Agent sounds robotic or unnatural
- Call quality issues or audio problems
- Tool integrations failing or returning errors
- Workflow logic not executing as expected
Quick diagnostics
Start with these immediate checks before diving deeper:
Test in dashboard
Test your voice agent directly in the dashboard:
- For assistants, click “Talk to Assistant”
- For workflows, click “Call”
Benefits:
- Eliminates phone network variables
- Provides real-time transcript view
- Shows tool execution results immediately
Check logs
Navigate to the Observe section in your dashboard sidebar:
- Review call transcripts, durations, and error messages
- Check API requests and responses for integration issues
- Verify webhook deliveries and server responses
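You can also pull the same data programmatically. Below is a minimal sketch that lists recent calls over Vapi’s REST API; it assumes your API key is in the VAPI_API_KEY environment variable, and the exact response fields (status, endedReason) should be verified against the API reference:

```typescript
// Sketch: list recent calls and print why each one ended.
// Assumes GET https://api.vapi.ai/call returns an array of call objects.
async function checkRecentCalls(): Promise<void> {
  const res = await fetch("https://api.vapi.ai/call?limit=10", {
    headers: { Authorization: `Bearer ${process.env.VAPI_API_KEY}` },
  });
  if (!res.ok) throw new Error(`Vapi API error: ${res.status}`);
  const calls: Array<{ id: string; status: string; endedReason?: string }> =
    await res.json();
  for (const call of calls) {
    // endedReason is the quickest signal for calls that failed or ended early
    console.log(call.id, call.status, call.endedReason ?? "n/a");
  }
}

checkRecentCalls().catch(console.error);
```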
Test individual components
Use dashboard testing features:
- Run automated tests for assistants
- Test tools with sample data
Verify provider status
Check if AI service providers are experiencing issues:
Core Services:
- Visit Vapi Status Page for Vapi service status
Provider Status Pages:
- OpenAI Status for OpenAI language models
- Anthropic Status for Anthropic language models
- ElevenLabs Status for ElevenLabs voice synthesis
- Deepgram Status for Deepgram speech-to-text
- And other providers’ status pages as needed
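Many of these providers run statuspage.io-style status pages, which typically expose a JSON summary endpoint you can poll from a script. A hedged sketch follows; the host names and endpoint paths are assumptions, so verify each URL against the provider’s actual status page:

```typescript
// Sketch: poll public provider status endpoints. Hosts are assumed to follow
// the statuspage.io convention of /api/v2/status.json; confirm each URL first.
const statusPages: Record<string, string> = {
  OpenAI: "https://status.openai.com/api/v2/status.json",
  ElevenLabs: "https://status.elevenlabs.io/api/v2/status.json",
};

async function checkProviders(): Promise<void> {
  for (const [name, url] of Object.entries(statusPages)) {
    const res = await fetch(url);
    const body = await res.json();
    // statuspage.io responses include status.description,
    // e.g. "All Systems Operational"
    console.log(`${name}: ${body.status?.description ?? "unknown"}`);
  }
}

checkProviders().catch(console.error);
```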
Dashboard debugging resources
The Vapi dashboard provides powerful debugging features to help you identify and fix issues quickly:
Call Logs
Navigate to Observe > Call Logs to:
- Review complete call transcripts
- Check call duration and completion status
- Identify where calls failed or ended unexpectedly
- See tool execution results and errors
- Analyze conversation flow in workflows
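If you need a transcript outside the dashboard, you can fetch a single call by ID. A sketch, assuming the GET https://api.vapi.ai/call/:id endpoint; the transcript and endedReason field names are assumptions to check against the API reference:

```typescript
// Sketch: fetch one call and print its transcript and end reason.
async function inspectCall(callId: string): Promise<void> {
  const res = await fetch(`https://api.vapi.ai/call/${callId}`, {
    headers: { Authorization: `Bearer ${process.env.VAPI_API_KEY}` },
  });
  if (!res.ok) throw new Error(`Vapi API error: ${res.status}`);
  const call = await res.json();
  console.log("ended reason:", call.endedReason);
  console.log(call.transcript);
}

inspectCall("YOUR_CALL_ID").catch(console.error);
```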
API Logs
Navigate to Observe > API Logs to:
- Monitor all API requests and responses
- Check for authentication errors
- Verify request payloads and response codes
- Debug integration issues with external services
Webhook Logs
Navigate to Observe > Webhook Logs to:
- Verify webhook deliveries to your server
- Check server response codes and timing
- Debug webhook authentication issues
- Monitor event delivery failures
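When deliveries fail, it helps to reproduce the receiving side locally. Here is a minimal sketch of a webhook endpoint that logs every incoming Vapi server message and acknowledges immediately, so a slow handler doesn’t show up as a delivery failure; the message.type field is an assumption based on Vapi’s event payload shape:

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Sketch: log each incoming Vapi event and respond 200 as fast as possible.
app.post("/vapi/webhook", (req, res) => {
  const message = req.body?.message;
  console.log(new Date().toISOString(), "event:", message?.type);
  res.status(200).json({});
});

app.listen(3000, () => console.log("listening on :3000"));
```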
Voice Test Suites
Navigate to Test > Voice Test Suites to:
- Run automated tests on your assistants (not available for workflows)
- Test conversation flows with predefined scenarios
- Verify assistant behavior across different inputs
- Monitor performance over time
Tool Testing
For any tool in your Tools section:
- Navigate to Tools > [Select Tool]
- Use the Test button to send sample payloads
- Verify tool responses and error handling
- Debug parameter extraction and API calls
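When a tool test fails, compare what your server returns against the shape Vapi expects. A sketch of a tool-call handler follows; the toolCalls payload path and the results/toolCallId response format are assumptions to confirm against the tools documentation:

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Sketch: answer a Vapi tool call. Payload and response shapes are assumed.
app.post("/tools/lookup-order", (req, res) => {
  const toolCall = req.body?.message?.toolCalls?.[0];
  const { orderId } = toolCall?.function?.arguments ?? {};
  res.json({
    results: [
      {
        toolCallId: toolCall?.id,
        // Return a string result; the model reads this verbatim.
        result: `Order ${orderId} is out for delivery`,
      },
    ],
  });
});

app.listen(3000);
```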
Speech and language issues
Debug steps for response quality:
- Review system prompt - Navigate to your assistant or workflow in the dashboard and check that the system prompt is specific enough
- Check model configuration - Scroll down to the Model section and verify:
  - You’re using an appropriate model (e.g., gpt-4o)
  - Max Tokens is sufficient for the expected response length
  - Necessary tools are enabled and configured correctly
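The same settings can be changed over the API. A sketch of patching an assistant’s model block follows; the field names mirror the dashboard labels and should be checked against the API reference:

```typescript
// Sketch: update an assistant's model settings via the REST API.
async function updateModel(assistantId: string): Promise<void> {
  const res = await fetch(`https://api.vapi.ai/assistant/${assistantId}`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${process.env.VAPI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: {
        provider: "openai",
        model: "gpt-4o",
        maxTokens: 500, // raise this if responses get cut off mid-sentence
        temperature: 0.7,
      },
    }),
  });
  if (!res.ok) throw new Error(`Vapi API error: ${res.status}`);
}
```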
Tool and workflow debugging
When a workflow extracts variables from the conversation, check that each variable’s name and type match how it is referenced later in prompts and tool parameters; see the sketch below.
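For illustration, here is roughly what an extraction definition and a downstream reference look like. The schema shape is an assumption, so mirror what the workflow editor actually generates; the liquid-style {{variable}} references follow Vapi’s templating convention:

```typescript
// Sketch: variables a workflow node might extract, and how later nodes use them.
const extraction = {
  output: {
    type: "object",
    properties: {
      customerName: { type: "string", description: "Caller's full name" },
      orderId: { type: "string", description: "Order number, digits only" },
    },
    required: ["orderId"],
  },
};

// Later nodes reference extracted values with liquid-style templates:
const prompt = "Look up order {{orderId}} for {{customerName}}.";
```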
Common error patterns
Getting help
When you’re stuck, collect the evidence first: relevant call IDs, transcripts, and log excerpts from the Observe section. Before asking for help, make sure you can describe the steps that reproduce the issue.