Conversational Intelligence Terminology

Hallucination Risk (Conversational AI)

Hallucination risk in conversational AI is the likelihood that an AI assistant or summarization tool will generate content that is not supported by the conversation, the knowledge base, or policy rules, yet is presented as factual. This can include invented account details, incorrect policy explanations, fabricated promises, or inaccurate summaries of what the customer said.

Operationally, hallucinations matter because they can drive agents to give wrong guidance, make unauthorized commitments, or document interactions inaccurately. In regulated environments, this can trigger compliance breaches (e.g., misdisclosures, misleading statements, improper advice), increase complaint and dispute rates, and create audit and legal risk.

Managing the risk typically involves defining where the AI may speak directly to customers versus only assist agents, constraining outputs to approved sources, requiring citations or confidence signals, and monitoring interactions for unsupported claims. It also requires clear escalation paths when the AI is uncertain, along with quality controls that catch errors both in real time and in post-call records.
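
To make "constraining outputs to approved sources" and "requiring citations or confidence signals" concrete, here is a minimal Python sketch of a pre-delivery check on AI suggestions. The source IDs, the confidence threshold, and the review_suggestion routing logic are illustrative assumptions, not a description of any specific product or policy.

```python
from dataclasses import dataclass, field

# Illustrative guardrail: an AI suggestion is only delivered as-is if it cites
# an approved source and clears a confidence threshold; otherwise it is flagged
# for the agent or escalated to a human reviewer.

APPROVED_SOURCES = {"kb://fee-policy", "kb://dispute-handling"}  # assumed source IDs
CONFIDENCE_THRESHOLD = 0.8  # assumed tuning value


@dataclass
class Suggestion:
    text: str
    citations: list[str] = field(default_factory=list)
    confidence: float = 0.0


def review_suggestion(s: Suggestion) -> str:
    """Return 'deliver', 'flag', or 'escalate' for a single AI suggestion."""
    cited_approved = [c for c in s.citations if c in APPROVED_SOURCES]
    if not cited_approved:
        # No supporting approved source: do not present as factual; route to a human.
        return "escalate"
    if s.confidence < CONFIDENCE_THRESHOLD:
        # Supported but uncertain: show to the agent with a caution marker.
        return "flag"
    return "deliver"


if __name__ == "__main__":
    examples = [
        Suggestion("Customer qualifies for a fee waiver.", [], 0.95),
        Suggestion("The waiver applies after 12 months.", ["kb://fee-policy"], 0.6),
        Suggestion("Disputes must be filed within 60 days.", ["kb://dispute-handling"], 0.9),
    ]
    for s in examples:
        print(review_suggestion(s), "-", s.text)
```

In this sketch, the first suggestion is escalated despite high confidence because it cites no approved source, which mirrors the fee-waiver scenario in the example below: confidence alone is not treated as evidence.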

Example:

During a voice call, an AI assistant tells an agent that the customer is eligible for a fee waiver and that it has already been approved, even though no such policy exists and the account notes don’t show approval. The agent repeats the claim to the customer, creating a compliance issue and a dispute when the waiver is later denied.
