Auto QA Results
Written by Cynthia Tsai
Updated over 2 months ago

It can be challenging to grade customer service conversations manually. Manual grading is time-consuming and can be inconsistent, leading to low coverage and missed opportunities. To address these challenges, Echo AI offers Auto QA, AI-powered grading software that automates the grading of contact center tickets.

What is Auto QA?

Auto QA is software that automates the grading of contact center tickets. With Auto QA, QA graders can cover 100% of their tickets, even at volumes of 1,000+ tickets a day. It is fully customizable, allowing graders to define the criteria they grade against using a scorecard similar to the one they use for Manual QA.

Why are LLMs a game changer vs. in-house machine learning?

Legacy machine learning models are expensive, slow to implement, and restricted to the training data used during setup. Large language models (LLMs) represent the next generation of AI and are more powerful and adaptable than anything that came before. They can also be used out of the box and don't require a ton of data to get started.
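For intuition on "out of the box": with a general-purpose LLM, grading can start from a plain-language prompt instead of a model-training pipeline. Below is a minimal sketch using the openai Python SDK as a generic stand-in; the model name, prompt, and criteria are illustrative assumptions, not Echo AI's implementation.

```python
# Minimal sketch: grading a support conversation with an off-the-shelf LLM.
# The openai SDK is a generic stand-in here; the model, prompt, and criteria
# are illustrative assumptions, not Echo AI's implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = """Agent: Thanks for calling, how can I help?
Customer: My order arrived damaged.
Agent: I'm sorry to hear that. I've issued a replacement at no charge."""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": (
            "You are a QA grader. Score the agent from 0-10 on empathy and "
            "resolution, and explain each score in one sentence."
        )},
        {"role": "user", "content": transcript},
    ],
)
print(response.choices[0].message.content)
```

Note that the grading criteria live entirely in the prompt, which is why no labeled training data is needed to get started.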

Numerous competitors claim to offer Auto QA, but what they offer is not true Auto QA. They often rely on basic sentiment analysis or phrase tracking, which can run over 100% of tickets but is restrictive in scope. Others offer AI assistance during the manual process, such as suggested answers for a QA grader, which does not get the grader to 100% coverage.

Setting up Auto QA

To use Auto QA, you will need an admin to set up a QA Template for your pipelines. This template defines the criteria used to grade your conversations. Please see Configuring QA Templates for more details (Coming soon!).
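For a feel of what a QA Template expresses, here is a hypothetical sketch in Python: weighted sections of questions whose points sum to the 100-point total shown in the results. All field names are assumptions for illustration, not Echo AI's schema.

```python
# Hypothetical sketch of what a QA Template expresses: sections of questions
# with point values. Field names are illustrative only, not Echo AI's schema.
qa_template = {
    "name": "Support Calls v1",
    "sections": [
        {
            "name": "Greeting",
            "questions": [
                {"text": "Did the agent introduce themselves?", "points": 10},
                {"text": "Did the agent verify the customer?", "points": 10},
            ],
        },
        {
            "name": "Resolution",
            "questions": [
                {"text": "Was the issue resolved or escalated?", "points": 40},
                {"text": "Did the agent summarize next steps?", "points": 40},
            ],
        },
    ],
}

# Points across all questions sum to 100, matching the total score
# displayed in the Auto QA results.
assert sum(q["points"]
           for s in qa_template["sections"]
           for q in s["questions"]) == 100
```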

How does Auto QA work?

After your pipelines and QA Templates are set up, your conversations flow into Echo AI to be transcribed. When a customer interaction is detected, Auto QA automatically evaluates the conversation. The AI generates a score for the conversation, which is then reflected for the agent.
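Conceptually, the flow can be sketched as below. Every function is a hypothetical stand-in for illustration; Echo AI's internal pipeline is not public.

```python
# Conceptual sketch of the Auto QA flow. All functions are hypothetical
# stand-ins for illustration, not Echo AI's API.

def transcribe(recording: str) -> str:
    # Stand-in: a real pipeline runs speech-to-text here.
    return recording

def is_customer_interaction(transcript: str) -> bool:
    # Stand-in heuristic for detecting a customer interaction.
    return "Customer:" in transcript

def grade(transcript: str, template: dict) -> dict:
    # Stand-in: in practice an LLM scores each question in the QA Template.
    return {"rating": "Good", "total": 70}

def auto_qa(recording: str, template: dict):
    transcript = transcribe(recording)           # 1. conversation is transcribed
    if not is_customer_interaction(transcript):  # 2. only customer interactions are graded
        return None
    return grade(transcript, template)           # 3. score is generated and reflected for the agent

print(auto_qa("Customer: My order arrived damaged.", {}))
```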

Reading the Results

Below is the layout of the Auto QA results for an example conversation that has been transcribed and graded, broken down by numbered area.

  1. To find the Auto QA results, open the conversation and click on the Quality tab on the right-hand side.

  2. At the top of the results tab is a rating (Failed, Good, Perfect, etc.), with the number of questions that have issues shown below it.

  3. On the far right is the total score out of 100.

  4. In the body, the questions are divided into sections. Each section name appears in bold, with the score for that section in line to the right. Below each question is a comment from the LLM explaining why the question was graded the way it was.

All sections automatically hide questions that received full marks; only questions the agent failed or received partial points for are shown. Click a section name/header to toggle this and reveal the questions with full marks.
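To make the layout concrete, here is a hypothetical sketch of the result data behind the Quality tab and the "hide full marks" rule. Field names and values are assumptions for illustration only.

```python
# Hypothetical shape of Auto QA results, for illustration only.
results = {
    "rating": "Good",
    "sections": [
        {"name": "Greeting", "questions": [
            {"text": "Did the agent introduce themselves?", "score": 30, "max": 30,
             "comment": "The agent opened with their name and a greeting."},
        ]},
        {"name": "Resolution", "questions": [
            {"text": "Was the issue resolved or escalated?", "score": 40, "max": 70,
             "comment": "The agent escalated but did not give a timeline."},
        ]},
    ],
}

# Total score out of 100, as shown on the far right of the Quality tab.
total = sum(q["score"] for s in results["sections"] for q in s["questions"])
out_of = sum(q["max"] for s in results["sections"] for q in s["questions"])

# Questions with full marks are hidden by default; issues are shown.
issues = [q for s in results["sections"] for q in s["questions"]
          if q["score"] < q["max"]]
print(f"Total: {total}/{out_of}; questions with issues: {len(issues)}")
```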

Leader Options

Leaders (this includes Admins) have a few options if the Auto QA scores don't look accurate. Action can be taken on a single question, or a whole new scorecard can be created to kick off the Manual QA process.

  • Provide feedback/Override the score for a particular question

    When hovering over a question, Leaders see a thumbs up and a thumbs down, allowing them to provide feedback on that specific question. Selecting thumbs up confirms that the score given was accurate. Selecting thumbs down opens a pop-up for the question, with the option to override the Auto QA score, input the correct score, and provide feedback for improvement (see the sketch after this list).

  • Starting the Manual QA process

    At the bottom of the Auto QA results are two buttons for kicking off the Manual QA process.

    • Create a scorecard

      This lets a user create a scorecard assigned to themselves, which is useful when they want to grade the conversation right away. After clicking the button, a popup asks which QA Template to use for the scorecard. After selecting one and clicking Continue, the user is directed to a new scorecard assigned to them, where they can provide the Manual QA scores and submit.

    • Assign to a grader

      This button is similar to Create a scorecard, but it assigns the scorecard to someone else to grade. Clicking it triggers a popup asking which grader to assign the scorecard to and which QA Template to use. Select a member of the org and the template, then click Save to make the assignment.
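The thumbs up/down flow described above amounts to a feedback record layered on top of the AI score. Here is a hypothetical sketch; the type and field names are assumptions, not Echo AI's schema.

```python
# Hypothetical record of a Leader's feedback on one Auto QA question.
# Type and field names are illustrative only, not Echo AI's schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class QuestionFeedback:
    question_id: str
    accurate: bool                         # thumbs up (True) or thumbs down (False)
    override_score: Optional[int] = None   # set only on thumbs down
    comment: str = ""                      # feedback for improvement

# Thumbs up: confirms the AI score was accurate.
approve = QuestionFeedback("greeting-1", accurate=True)

# Thumbs down: overrides the score and explains the correction.
correct = QuestionFeedback("resolution-2", accurate=False,
                           override_score=70, comment="Escalation was timely.")
```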

Cumulative Performance Scores

These Auto QA results cover individual conversations and their scores. To see scores across the whole org, a team, or any other available filter, please see the Performance tab details (Coming soon!).
