Not every interaction can be evaluated reliably. Poor audio quality might make it impossible to determine what was actually said. Technical issues might cut off critical portions of the conversation. Background noise might obscure key exchanges. When transcript quality or available evidence is insufficient, attempting to evaluate compliance or quality creates false confidence in unreliable results.
This signal identifies interactions where the evaluation confidence is too low for reliable assessment. It flags cases where transcript quality, audio clarity, technical issues, or other factors make it impossible to accurately determine what happened during the interaction.
False positives and false negatives in quality evaluation create serious operational problems. A compliance violation that gets missed because of poor transcript quality might surface during an audit, creating regulatory exposure. A quality score that’s artificially low because of technical issues might trigger unnecessary coaching or disciplinary action.
Low-confidence evaluations also skew operational metrics. If 15% of interactions cannot be reliably evaluated but still receive scores, then overall quality metrics become unreliable. Teams cannot make informed decisions about training, process improvements, or compliance management when their data is contaminated by unreliable assessments.
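To make the contamination concrete, here is a small hypothetical illustration. The scores, confidence values, and threshold below are invented for the example, not drawn from Compass:

```python
# Hypothetical illustration: how unreliable scores distort an aggregate metric.
# Each tuple is (quality_score, evaluation_confidence); all values are invented.
interactions = [
    (92, 0.95), (88, 0.90), (95, 0.97), (90, 0.93),  # reliable evaluations
    (41, 0.30), (55, 0.25), (38, 0.20),              # low-confidence (e.g. bad audio)
]

CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff for a reliable assessment

all_scores = [score for score, _ in interactions]
reliable_scores = [score for score, conf in interactions if conf >= CONFIDENCE_THRESHOLD]

print(f"Naive average (all scores):  {sum(all_scores) / len(all_scores):.1f}")       # 71.3
print(f"Average (reliable only):     {sum(reliable_scores) / len(reliable_scores):.1f}")  # 91.2
```

The naive average reflects audio and transcription problems as much as agent performance; once the unreliable scores are excluded, the metric describes what actually happened.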
Flagging low-confidence evaluations lets teams handle these interactions appropriately, whether through manual review, re-recording requests, or exclusion from metrics until better evidence is available.
Compass evaluates the quality of the available evidence for each interaction it assesses. This includes transcript accuracy, audio clarity, conversation completeness, and other factors that affect evaluation reliability.
When evidence quality falls below the threshold needed for reliable assessment, the signal flags the interaction as low-confidence. These interactions can then be routed for manual review, excluded from automated scoring, or handled through alternative evaluation methods.
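In code, this gate might look something like the sketch below. The factor names, the min-based combination, and the threshold are assumptions chosen for illustration; Compass's actual confidence model is internal to the product.

```python
from dataclasses import dataclass

@dataclass
class EvidenceQuality:
    """Hypothetical per-interaction evidence signals, each in [0.0, 1.0]."""
    transcript_accuracy: float   # e.g. ASR confidence
    audio_clarity: float         # e.g. signal-to-noise estimate
    completeness: float          # fraction of the conversation captured

CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff; tune to your tolerance for error

def evaluation_confidence(e: EvidenceQuality) -> float:
    # The weakest factor bounds overall confidence, since any one gap
    # can make an assessment unreliable on its own.
    return min(e.transcript_accuracy, e.audio_clarity, e.completeness)

def handle(interaction_id: str, e: EvidenceQuality) -> str:
    if evaluation_confidence(e) < CONFIDENCE_THRESHOLD:
        # Low confidence: keep out of automated scoring, queue for review.
        return f"{interaction_id}: flagged low-confidence -> manual review queue"
    return f"{interaction_id}: scored automatically"

print(handle("call-001", EvidenceQuality(0.95, 0.90, 1.0)))
print(handle("call-002", EvidenceQuality(0.40, 0.85, 0.7)))  # garbled transcript
```

Taking the minimum rather than an average reflects the idea that a single weak factor, such as a garbled transcript, is enough to make the whole assessment unreliable.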
QA teams use confidence signals to prioritize manual review efforts. Instead of reviewing random samples, they focus on interactions where automated evaluation was unreliable, ensuring that quality assessments are based on solid evidence.
Operations managers track confidence rates to identify technical issues that affect evaluation reliability. Consistent low confidence on certain call types might indicate audio quality problems, system integration issues, or other technical factors that need attention.
Analytics teams exclude low-confidence interactions from performance metrics to ensure data accuracy. This prevents technical issues from contaminating quality measurements and operational decision-making.
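Both analytics uses hang off the same flag, as in this sketch (again with invented record fields): the per-call-type confidence rate that operations managers watch, and the filtering that keeps quality metrics clean.

```python
from collections import defaultdict

# Invented records: (call_type, quality_score, low_confidence_flag)
records = [
    ("billing", 90, False), ("billing", 87, False), ("billing", 92, False),
    ("support", 85, False), ("support", 44, True),  ("support", 39, True),
]

# Operations view: low-confidence rate per call type. A persistently high
# rate on one call type points at a technical cause (audio path, integration).
counts = defaultdict(lambda: [0, 0])  # call_type -> [low_confidence, total]
for call_type, _, flagged in records:
    counts[call_type][1] += 1
    counts[call_type][0] += flagged
for call_type, (low, total) in counts.items():
    print(f"{call_type}: {low / total:.0%} low-confidence")  # billing: 0%, support: 67%

# Analytics view: quality metrics computed over reliable evaluations only.
reliable = [score for _, score, flagged in records if not flagged]
print(f"avg quality (reliable only): {sum(reliable) / len(reliable):.1f}")
```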
This signal is part of Chordia’s Quality Monitoring capabilities.
We'll walk you through real interactions and show how each signal traces back to specific conversational evidence — so your team can act on what actually happened.