Model drift is the gradual decline in how well an AI model performs after it is deployed, usually because the live calls it processes start to differ from the data it was trained on. Changes in customer language, new products, seasonality, agent scripts, or updated regulations can all shift the patterns the model relies on.
In a contact center, drift matters because it can quietly reduce the reliability of automated compliance checks, risk flagging, and conversation analytics. If drift isn’t monitored, the model may miss required disclosures, misclassify sensitive topics, or generate inconsistent QA results, which can lead to audit gaps, rework, and increased regulatory exposure.
Operationally, teams manage drift by tracking model performance over time, reviewing false positives and false negatives, and retraining or recalibrating models when policies, scripts, or the call mix change. Clear ownership and regular validation help ensure compliance reporting stays accurate as the business evolves.
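To make the monitoring step concrete, the sketch below shows one common way to detect a shift in call mix: comparing the distribution of predicted call categories from a recent production window against a baseline distribution from the period the model was validated on, using the Population Stability Index (PSI). This is a minimal, illustrative example, not a prescribed implementation; the category names, sample data, and the alert threshold are hypothetical, and real monitoring would typically run on far larger windows alongside precision/recall reviews.

```python
"""Minimal drift-check sketch: compare baseline vs. recent call-category mix."""
from collections import Counter
import math


def category_distribution(labels):
    """Convert a list of predicted categories into a proportion map."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}


def population_stability_index(baseline, current, epsilon=1e-6):
    """PSI across categories; higher values indicate a larger distribution shift."""
    psi = 0.0
    for label in set(baseline) | set(current):
        b = baseline.get(label, 0.0) + epsilon  # expected (baseline) proportion
        c = current.get(label, 0.0) + epsilon   # observed (recent) proportion
        psi += (c - b) * math.log(c / b)
    return psi


if __name__ == "__main__":
    # Hypothetical predicted categories: validation period vs. last week's calls.
    baseline = category_distribution(
        ["billing", "billing", "cancellation", "disclosure", "billing"]
    )
    current = category_distribution(
        ["cancellation", "cancellation", "billing", "disclosure", "cancellation"]
    )

    psi = population_stability_index(baseline, current)
    psi_alert_threshold = 0.2  # common rule of thumb: PSI above ~0.2 suggests meaningful drift
    if psi > psi_alert_threshold:
        print(f"PSI={psi:.3f}: call mix has shifted, review model performance")
    else:
        print(f"PSI={psi:.3f}: no significant shift in call mix")
```

A check like this only signals that inputs have changed; teams would still review flagged periods against human QA labels to decide whether retraining or recalibration is actually needed.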