Within the Conversation Intelligence (CI) framework, the 'Quality' tab is the central place for evaluating individual conversations. It hosts two main components: AutoQA Results, which automatically grade each question in a QA scorecard to produce an overall conversation score, and ManualQA Results, a human-evaluated scorecard for more nuanced assessment. Together, the automated and manual views under the 'Quality' tab provide a thorough picture of conversation quality, setting the stage for a closer look at its core functionalities.
Key Features
AutoQA Results
AutoQA Results use automated analysis to assess the quality of conversation interactions. By grading each question against a predefined QA scorecard, AutoQA calculates an overall auto score: a quantifiable measure of the conversation's quality that lets users gauge performance quickly without manual intervention. Automated grading streamlines the evaluation and keeps scoring consistent and objective, making it a valuable tool for understanding and improving conversation quality.
Overall QA Score: A quantified evaluation of the conversation's quality based solely on automated analysis. It summarizes performance against the QA scorecard, offering a clear measure of communication effectiveness and compliance with predefined standards.
Section Score: The Section Score represents the aggregated points or total score awarded for all questions within a specific section of the QA scorecard. This score quantifies the performance on distinct aspects of conversation quality, offering detailed insights into each evaluated area.
Question Score: The individual score or points assigned to each question in the QA scorecard, directly reflecting the quality and effectiveness of responses within specific conversational aspects.
Create scorecard: Navigates to a page for constructing a new scorecard, where users can define and customize the quality metrics and questions used to evaluate conversation effectiveness.
Assign to grader: Directs the selected conversation to a human QA grader for in-depth manual QA scoring, providing an expert assessment of the conversation's quality against established standards.
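The relationship between the three score levels above can be sketched in code. This is a minimal illustration, not the CI product's actual schema: the class names, field names, and the percentage-based overall formula are all assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical scorecard structures -- names and the percentage-based
# overall formula are illustrative assumptions, not the CI product's schema.
@dataclass
class Question:
    text: str
    points_awarded: float   # Question Score
    points_possible: float

@dataclass
class Section:
    name: str
    questions: list

def section_score(section):
    """Section Score: aggregated points awarded across the section's questions."""
    return sum(q.points_awarded for q in section.questions)

def overall_score(sections):
    """Overall QA Score: points awarded as a percentage of points possible."""
    awarded = sum(section_score(s) for s in sections)
    possible = sum(q.points_possible for s in sections for q in s.questions)
    return round(100 * awarded / possible, 1) if possible else 0.0

scorecard = [
    Section("Greeting", [Question("Used the customer's name?", 5, 5)]),
    Section("Compliance", [
        Question("Read the required disclosure?", 0, 10),
        Question("Verified identity?", 8, 10),
    ]),
]

print(section_score(scorecard[1]))  # 8
print(overall_score(scorecard))     # 52.0
```

The key point is the aggregation direction: Question Scores roll up into Section Scores, and Section Scores roll up into the Overall QA Score.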
ManualQA Results
ManualQA Results appear when a conversation has undergone a human-led evaluation, with the detailed scorecard displayed directly below the AutoQA Results. They offer a nuanced view of the conversation's quality, incorporating the human grader's insights and assessments against the predefined quality criteria. This manual layer complements the automated scoring, enriching the overall understanding of conversation effectiveness and areas for improvement.
Within ManualQA Results, the Overall QA Score, Section Score, and Question Score follow the same structure as their AutoQA counterparts but are derived from human evaluation. The Overall QA Score is the human grader's comprehensive assessment of the conversation's quality; Section Scores break this down into specific areas of the conversation; and Question Scores reflect the grader's judgment of each individual response. Together, these scores provide a rich, human-centric view of conversation quality that complements the objective analysis from AutoQA.
In addition, ManualQA Results disclose the QA Grader assigned to the manual evaluation, ensuring transparency by identifying the professional responsible for assessing conversation quality.
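A ManualQA result, then, carries the same score hierarchy plus the identity of the grader. The record below is purely illustrative: the type name, field names, and sample data are assumptions, not the CI product's actual data model.

```python
from dataclasses import dataclass

# Illustrative sketch only: names and values are assumptions, not the CI schema.
@dataclass
class ManualQAResult:
    grader: str              # the QA Grader disclosed with the results
    section_scores: dict     # section name -> human-assigned Section Score
    overall_score: float     # human-derived Overall QA Score

result = ManualQAResult(
    grader="J. Rivera",                               # hypothetical grader
    section_scores={"Greeting": 5, "Compliance": 9},  # hypothetical scores
    overall_score=56.0,
)

print(f"Graded by {result.grader}: {result.overall_score}")
```

Surfacing the grader field alongside the scores is what provides the transparency described above: anyone reviewing the result can see who performed the manual assessment.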