Compass Now Lets You Define What "Good" Means - Through Conversation

April 28, 2026

Every team defines quality differently. A retention operation cares about save rate and empathy. A sales floor cares about conversion and objection handling. A support team cares about first-call resolution and process adherence. A generic quality score can't capture these differences - and forcing every team into the same evaluation framework means nobody's scores actually reflect what matters to their business.

Compass now lets admins define their own scoring objectives - directly through conversation.

A Scoring Consultant, Not a Settings Page

This isn't a configuration form. It's a guided conversation where Compass acts as a scoring consultant, combining your operational knowledge with real data from your calls.

An admin opens Compass Chat and says something like "I want to define what a good call looks like for my team." Compass responds by showing the current scoring state and asking about the business - what types of calls the team handles, what outcomes matter most, what separates a great interaction from a mediocre one.

Then Compass does something most rubric-building processes skip: it pulls real call transcripts from across the quality spectrum and walks through them with you. "Here's a call your team would consider great. Here's one that fell short. What makes the difference?" The rubric gets built from concrete examples, not abstract criteria.
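For the technically curious, the sampling step can be pictured as a simple stratified draw across existing scores. The sketch below is an illustration under our own assumptions - the `Call` record, the score-bucket approach, and `sample_quality_spectrum` are all hypothetical, not Compass's actual implementation:

```python
import random
from dataclasses import dataclass

@dataclass
class Call:
    call_id: str
    transcript: str
    baseline_score: float  # score under the current/default rubric

def sample_quality_spectrum(calls: list[Call], per_bucket: int = 2) -> list[Call]:
    """Pick example calls spread across the quality spectrum.

    Buckets calls by their baseline score (low / mid / high) and draws
    a few from each, so the admin reviews great calls alongside ones
    that fell short, not just a random slice of the middle.
    """
    ranked = sorted(calls, key=lambda c: c.baseline_score)
    n = len(ranked)
    buckets = [ranked[: n // 3], ranked[n // 3 : 2 * n // 3], ranked[2 * n // 3 :]]
    return [
        c
        for bucket in buckets if bucket
        for c in random.sample(bucket, min(per_bucket, len(bucket)))
    ]
```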

Based on that conversation, Compass drafts a 9-level rubric - scored from 1.0 to 5.0 in half-point increments - using the admin's own language and business-specific criteria. Not generic QA terminology. The words and standards that actually mean something to the team running the operation.
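Structurally, you can think of the drafted rubric as a mapping from the nine score levels to criteria written in the team's own words. The shape below is purely illustrative - only the 1.0-to-5.0 half-point scale comes from the product, and the criteria text is a made-up retention example:

```python
# Hypothetical shape of a drafted rubric (illustrative only).
# Nine levels, 1.0-5.0 in half-point increments, each described
# in the admin's own business language rather than generic QA terms.
retention_rubric = {
    "objective": "Retention call quality",
    "levels": {
        5.0: "Customer saved; genuine empathy; offer tailored to the cancel reason",
        4.5: "Customer saved; empathy present but the offer felt scripted",
        4.0: "Strong save attempt with a relevant offer; outcome unresolved",
        3.5: "Save attempted; offer only loosely matched the cancel reason",
        3.0: "Cancel reason explored but no concrete save attempt",
        2.5: "Partial discovery; agent moved to process the cancel too quickly",
        2.0: "Minimal discovery; no empathy statements",
        1.5: "Agent processed the cancel without asking why",
        1.0: "Dismissive or policy-violating handling",
    },
}

assert len(retention_rubric["levels"]) == 9  # 1.0 to 5.0 in half points
```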

Test Before You Commit

A rubric that looks right on paper can produce surprising results in practice. That's why Compass tests every draft rubric against real calls before it goes live.

After generating the rubric, Compass scores a set of actual conversations and presents the results: "Here's how your rubric scored 5 real calls - does Call #3 at 3.5 feel right?" If something's off, you adjust and retest right there in the conversation.

Compass also flags statistical problems that would be hard to catch manually. If 80% of calls cluster at a 4.0, it tells you - because a rubric that can't differentiate between conversations isn't useful, no matter how well-worded the criteria are. The iteration cycle from "this isn't quite right" to "that's exactly what I want" happens in a single session, not over weeks of piloting.
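The clustering check amounts to a distribution test: if one level absorbs most of the test scores, the rubric isn't discriminating. A minimal sketch, assuming a simple frequency threshold - the 80% figure mirrors the example above, and the function is ours, not a Compass API:

```python
from collections import Counter

def flag_clustering(scores: list[float], threshold: float = 0.8) -> str | None:
    """Warn when one rubric level absorbs most of the test calls.

    A rubric whose scores pile up on a single level can't differentiate
    conversations, however well-worded its criteria.
    """
    if not scores:
        return None
    level, count = Counter(scores).most_common(1)[0]
    share = count / len(scores)
    if share >= threshold:
        return (
            f"{share:.0%} of calls scored {level} - "
            "consider sharpening the criteria around that level"
        )
    return None
```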

Paradigm-Aware Scoring

Not every call type should be evaluated the same way. Compass automatically classifies interactions into paradigms - resolution, conversion, recovery, or informational - and admins can optionally define different rubrics for different paradigm types.

A retention team might score calls on save rate and de-escalation; a sales team on conversion and value articulation; a support team on resolution completeness and process adherence. Instead of compromising with one rubric that partially fits every call type, multi-program operations can get precision scoring for each.

Most teams just need a single rubric. But for operations that handle fundamentally different conversation types, paradigm-specific objectives mean every call is evaluated against criteria that actually make sense for what it is.
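One way to picture paradigm-aware scoring is as a lookup from a call's classified paradigm to its rubric, with the team's single default rubric as the fallback. The four paradigm names come from the product; everything else in this sketch is an assumption:

```python
# Illustrative only: routing each classified paradigm to its own rubric.
default_rubric = {"objective": "General call quality"}

PARADIGM_RUBRICS = {
    "resolution": {"objective": "Resolution completeness and process adherence"},
    "conversion": {"objective": "Conversion and value articulation"},
    "recovery":   {"objective": "Save rate and de-escalation"},
    # "informational" intentionally omitted: it falls back to the default.
}

def rubric_for(paradigm: str) -> dict:
    """Return the scoring objective for a call's classified paradigm,
    falling back to the team's single default rubric when no
    paradigm-specific objective has been defined."""
    return PARADIGM_RUBRICS.get(paradigm, default_rubric)
```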

What Happens After a Rubric Goes Live

Once an admin confirms a rubric, the impact is immediate and systematic:

  • Every new interaction is scored against the custom objective instead of the default
  • Agent Lift models retrain using the custom objective as their target - so performance measurement aligns with the team's actual definition of quality
  • Previous rubric versions are preserved in history - nothing is lost, and you can always see how scoring criteria evolved
  • Existing scores stay as they are unless a rescore is explicitly requested

The scoring source is tracked transparently, so it's always clear which rubric produced a given score.
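For readers who like to see the bookkeeping, here's a hypothetical shape for that transparency: every score carries a pointer to the exact rubric version that produced it, and prior versions stay in history. The record types below are our own illustration, not Compass's data model:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class RubricVersion:
    rubric_id: str
    version: int           # prior versions stay in history; never deleted
    activated_at: datetime

@dataclass(frozen=True)
class Score:
    call_id: str
    value: float            # 1.0-5.0 in half-point increments
    source: RubricVersion   # exactly which rubric version produced this score

# Existing scores keep their original `source`; only an explicit rescore
# re-evaluates a call against the newly activated rubric version.
```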

No Expertise Required

The key differentiator here isn't customization itself - most QA platforms offer some form of rubric configuration. It's that Compass removes the expertise barrier entirely.

Building an effective evaluation rubric is harder than it looks. Criteria need to be specific enough to produce consistent scores but flexible enough to handle the variety of real conversations. Weights need to differentiate without distorting. Edge cases need to be anticipated. Most teams either hire consultants or spend weeks iterating through trial and error.

With Compass, the admin doesn't need to know anything about rubric design. They just need to know their business. Compass brings the real data, asks the right questions, tests the drafts against actual calls, and flags the statistical problems. The result is a production-quality rubric built in a conversation, not a project.

Access and Availability

Custom scoring objectives are available now to all Compass users. Admin and SuperAdmin roles can define and modify objectives. All other users benefit from custom scoring automatically - their evaluations simply reflect what their team has defined as quality.

To see custom scoring objectives in action, request a demo.