Call Analysis
At the end of the call, you can summarize and evaluate how it went.
The Call Analysis feature allows you to summarize and evaluate calls, providing valuable insights into their effectiveness. It uses a combination of prompts and schemas to generate structured data and a success evaluation based on the call’s content. The underlying models driving our call analysis pipeline are the latest version of Anthropic’s Claude Sonnet, with OpenAI’s GPT-4o as a fallback in case of failure.
You can customize the properties below in the assistant’s assistant.analysisPlan.
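For illustration, here is a minimal sketch of what an analysisPlan update might look like over the REST API. It assumes a TypeScript environment with fetch available, a PATCH /assistant/:id endpoint at api.vapi.ai, and placeholder values for the assistant ID and API key, so treat the request details as assumptions rather than a definitive recipe:

```typescript
// Minimal sketch: update an assistant's analysisPlan over the REST API.
// The endpoint path, assistant ID, and API key handling are assumptions.
async function updateAnalysisPlan(): Promise<void> {
  const assistantId = "YOUR_ASSISTANT_ID"; // placeholder

  const response = await fetch(`https://api.vapi.ai/assistant/${assistantId}`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${process.env.VAPI_API_KEY}`, // placeholder key
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      analysisPlan: {
        summaryPrompt: "Summarize the call in two or three sentences.",
        successEvaluationRubric: "PassFail",
      },
    }),
  });

  const assistant = await response.json();
  console.log(assistant.analysisPlan);
}
```

The examples in the sections that follow show only the analysisPlan fragment; each would slot into the body of a request like this one.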
Summary Prompt
The summary prompt is used to create a concise summary of the call. This summary is stored in call.analysis.summary.
Default Summary Prompt
The default summary prompt is:
Customizing the Summary Prompt
You can customize the summary prompt by setting the summaryPrompt property in the API or SDK:
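As a sketch, the override is just a string on analysisPlan; the prompt text below is illustrative, not the default:

```typescript
// Hypothetical summaryPrompt override; the wording is an example, not the default.
const analysisPlan = {
  summaryPrompt:
    "You are a note-taker. Summarize the call in 2-3 sentences, focusing on the caller's request and the outcome.",
};
```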
To disable the summary prompt, set it to an empty string "" or "off":
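For example, to turn the summary off entirely:

```typescript
// Setting summaryPrompt to "" or "off" disables summary generation.
const analysisPlan = {
  summaryPrompt: "off",
};
```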
Structured Data Prompt
The structured data prompt extracts specific pieces of data from the call. This data is stored in call.analysis.structuredData.
Default Structured Data Prompt
The default structured data prompt is:
Customizing the Structured Data Prompt
You can set a custom structured data prompt using the structuredDataPrompt property:
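A sketch of what that might look like; the prompt wording and the fields it asks for are illustrative:

```typescript
// Hypothetical structuredDataPrompt; the extraction targets are examples.
const analysisPlan = {
  structuredDataPrompt:
    "Extract the caller's name, the reason for the call, and whether a follow-up was requested.",
};
```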
Structured Data Schema
The structured data schema enforces the format of the extracted data. It is defined using JSON Schema standards.
Customizing the Structured Data Schema
You can set a custom structured data schema using the structuredDataSchema property:
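For example, a schema matching the illustrative prompt above might look like this; the field names are assumptions made for the example, and the shape follows standard JSON Schema keywords:

```typescript
// Hypothetical JSON Schema describing the fields referenced in the example prompt.
const analysisPlan = {
  structuredDataSchema: {
    type: "object",
    properties: {
      callerName: { type: "string", description: "The caller's full name." },
      reason: { type: "string", description: "Why the caller got in touch." },
      followUpRequested: { type: "boolean" },
    },
    required: ["reason"],
  },
};
```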
Success Evaluation Prompt
The success evaluation prompt is used to determine if the call was successful. This evaluation is stored in call.analysis.successEvaluation.
Default Success Evaluation Prompt
The default success evaluation prompt is:
Customizing the Success Evaluation Prompt
You can set a custom success evaluation prompt using the successEvaluationPrompt property:
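An illustrative sketch; the success criteria here are made up for the example:

```typescript
// Hypothetical successEvaluationPrompt defining what counts as a successful call.
const analysisPlan = {
  successEvaluationPrompt:
    "A call is successful if the caller's question was answered or a follow-up was scheduled.",
};
```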
To disable the success evaluation prompt, set it to an empty string "" or "off":
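For example:

```typescript
// Setting successEvaluationPrompt to "" or "off" disables success evaluation.
const analysisPlan = {
  successEvaluationPrompt: "off",
};
```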
Success Evaluation Rubric
The success evaluation rubric defines the criteria used to evaluate the call’s success. The available rubrics are:
- NumericScale: A scale of 1 to 10.
- DescriptiveScale: A scale of Excellent, Good, Fair, Poor.
- Checklist: A checklist of criteria and their status.
- Matrix: A grid that evaluates multiple criteria across different performance levels.
- PercentageScale: A scale of 0% to 100%.
- LikertScale: A scale of Strongly Agree, Agree, Neutral, Disagree, Strongly Disagree.
- AutomaticRubric: Automatically breaks the evaluation down into several criteria, each with its own score.
- PassFail: A simple ‘true’ if the call passed, ‘false’ if not.
Customizing the Success Evaluation Rubric
You can set a custom success evaluation rubric using the successEvaluationRubric property:
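For example, to score calls on a 1 to 10 scale using the NumericScale rubric listed above:

```typescript
// Pick one of the rubric names listed above, e.g. NumericScale.
const analysisPlan = {
  successEvaluationRubric: "NumericScale",
};
```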
Combining Prompts and Rubrics
You can use prompts and rubrics in combination to create detailed instructions for the call analysis:
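A sketch of one such combination, with illustrative criteria: the rubric sets the output format, while the prompt spells out what counts as success for this assistant:

```typescript
// Hypothetical pairing of a Checklist rubric with a prompt that defines the criteria.
const analysisPlan = {
  successEvaluationPrompt:
    "Evaluate whether the agent resolved the caller's issue, stayed on script, and confirmed next steps.",
  successEvaluationRubric: "Checklist",
};
```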
By customizing these properties, you can tailor the call analysis to meet your specific needs and gain valuable insights from your calls.