How to Configure and Manage Agent Disputes

Setting up and using disputes for better score accuracy and agent engagement.

Updated over a week ago

Disputes give your agents a voice in the quality assurance process. By allowing them to challenge evaluation scores they believe are inaccurate, you can foster a more collaborative and transparent QA environment. This keeps agents engaged, improves the accuracy of your scoring, and helps identify areas of confusion in your scorecards.

Enabling Disputes

To begin using this feature, you first need to enable it for your account.

  1. Navigate to Settings.

  2. Select Disputes from the settings sidebar under the Evaluation settings section.

  3. Toggle the master switch to turn the feature on.

The Dispute Workflow

The dispute process is designed to be straightforward and transparent, ensuring every challenge is handled efficiently.

  1. Agent Submits Dispute: An agent disagrees with a score on a specific question in their evaluation. They submit a dispute directly from the evaluation, providing a reason for the challenge.

  2. Assignment to Evaluator: The dispute is automatically assigned to an evaluator for review based on your configuration.

  3. Evaluator Resolves: The assigned evaluator reviews the agent's reason, the interaction, and the original score. They then either Accept the dispute (overturning the original score) or Reject it (upholding the original score), providing a reason for their decision.

  4. Agent Notified: The agent receives a notification with the outcome and the evaluator's feedback, closing the loop.
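The four steps above can be sketched as a simple state model. This is an illustrative sketch only; the class, field, and status names are assumptions, not the product's actual API.

```python
from dataclasses import dataclass
from enum import Enum

class DisputeStatus(Enum):
    OPEN = "open"          # step 1: submitted by the agent, awaiting review
    ACCEPTED = "accepted"  # step 3: original score overturned
    REJECTED = "rejected"  # step 3: original score upheld

@dataclass
class Dispute:
    question_id: str
    agent_reason: str                           # required when submitting (step 1)
    status: DisputeStatus = DisputeStatus.OPEN
    evaluator_reason: str = ""                  # required when resolving (step 3)

    def resolve(self, accept: bool, reason: str) -> None:
        # The evaluator either accepts or rejects, always with a reason.
        self.status = DisputeStatus.ACCEPTED if accept else DisputeStatus.REJECTED
        self.evaluator_reason = reason

d = Dispute(question_id="q7", agent_reason="The greeting met the standard")
d.resolve(accept=True, reason="Re-reviewed the call; score overturned")
```

Note that a dispute always ends in one of two terminal states, each carrying the evaluator's reason that is sent back to the agent in step 4.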

Configuring Dispute Settings

Once enabled, you can customize how disputes are assigned and managed to fit your team's workflow.

Dispute Assignment

You control who reviews submitted disputes. This ensures fairness and consistency.

  • Assign to initial human reviewer: The dispute is sent back to the person who performed the original evaluation. If the original evaluation was performed by your AI, the dispute is instead assigned to a Fallback Reviewer, chosen in one of two ways:

    • Random selection: Randomly assigns the dispute to any user with an ADMIN or EVALUATOR role.

    • Select from specific reviewers: Allows you to choose a specific pool of users to handle disputes from AI-powered evaluations.

  • Assign to selected human reviewers: The dispute is always routed to a specific group of reviewers you define, regardless of who performed the initial evaluation. This is useful if you have a dedicated QA specialist or manager who handles all disputes.
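As a rough sketch, the routing rules above reduce to the following logic. The mode strings and function name are assumptions for illustration; the random fallback selection stands in for both fallback options described above.

```python
import random
from typing import Optional

def assign_reviewer(mode: str,
                    original_evaluator: Optional[str],  # None = evaluation was AI-performed
                    fallback_reviewers: list,           # fallback pool for AI evaluations
                    selected_reviewers: list) -> str:
    if mode == "initial_human_reviewer":
        if original_evaluator is not None:
            return original_evaluator                 # back to the original scorer
        return random.choice(fallback_reviewers)      # AI evaluation -> Fallback Reviewer
    if mode == "selected_human_reviewers":
        return random.choice(selected_reviewers)      # always the defined pool
    raise ValueError(f"unknown assignment mode: {mode}")
```

In the first mode, only AI-scored evaluations ever reach the fallback pool; in the second, the original evaluator is ignored entirely.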

Dispute Limits

To keep the process manageable, you can limit the number of open disputes an agent can have at one time. Once an agent reaches this limit, they cannot submit new disputes until their existing ones are resolved. This encourages agents to focus on their most critical challenges.
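The limit check itself is simple; a minimal sketch (function name and the `None`-means-unlimited convention are assumptions):

```python
from typing import Optional

def can_submit_dispute(open_disputes: int, limit: Optional[int]) -> bool:
    # No limit configured -> always allowed; otherwise block new disputes
    # at the cap until existing ones are resolved.
    return limit is None or open_disputes < limit
```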

Dispute Deadlines

Set timeframes to ensure disputes are submitted and resolved promptly.

  • Submission deadline (for agent): Defines the window an agent has to submit a dispute after an evaluation is completed. Options include 24 hours, 48 hours, 7 days, or no deadline.

  • Resolution deadline (for evaluator): Defines the window an evaluator has to resolve a dispute after it has been submitted. Options are the same as the submission deadline.
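Both deadlines follow the same pattern: a window measured from a start event (evaluation completed, or dispute submitted). A sketch of that check, with the option labels taken from the list above and everything else assumed:

```python
from datetime import datetime, timedelta
from typing import Optional

# The four options listed above, mapped to time windows.
WINDOWS = {
    "24 hours": timedelta(hours=24),
    "48 hours": timedelta(hours=48),
    "7 days": timedelta(days=7),
    "no deadline": None,
}

def within_deadline(start: datetime, now: datetime, option: str) -> bool:
    window = WINDOWS[option]
    return window is None or now <= start + window

t0 = datetime(2024, 1, 1, 9, 0)  # e.g. when the evaluation was completed
```

The same function covers both cases: pass the evaluation-completed time for the submission deadline, or the dispute-submitted time for the resolution deadline.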

Webhook Integration

Connect your dispute workflow to other systems (like Slack, a project management tool, or your own internal dashboards) using webhooks.

  • Enable Webhooks: Toggle the switch to turn on the integration.

  • Webhook URL: Enter the URL of the external system that will receive the HTTP POST request.

  • Webhook Events: Choose whether to send a notification when a dispute is Accepted, Rejected, or both.

  • Test Webhook: Send a test payload to your URL to confirm the connection is working correctly. The payload will contain details about the dispute, including the agent, evaluator, interaction, and resolution.
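On the receiving side, a handler might filter on the enabled events and read the payload fields. The sketch below assumes a JSON payload with the keys the article mentions (agent, evaluator, interaction, resolution); the actual schema may differ, and the function name is illustrative.

```python
import json
from typing import Optional

def build_webhook_body(event: str, enabled_events: set, dispute: dict) -> Optional[str]:
    # Only fire for the events you enabled ("accepted", "rejected", or both).
    if event not in enabled_events:
        return None
    # Assumed payload shape mirroring the details the article says are included.
    return json.dumps({"event": event, **dispute})

body = build_webhook_body(
    "accepted",
    {"accepted", "rejected"},
    {"agent": "alice", "evaluator": "bob",
     "interaction_id": "call-123", "resolution": "score overturned"},
)
```

In production, a body like this would be sent as the HTTP POST request to the Webhook URL you configured; a `None` return means the event is not enabled and nothing is sent.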

Creative Use Cases for the Disputes Feature

  • Improve Scorecard Clarity: If you notice many agents are disputing the same question on a scorecard, it's a strong indicator that the question may be ambiguous, subjective, or poorly worded. Use this feedback to refine your scorecards for better consistency.

  • Calibrate Your Team: Disputes are a fantastic tool for calibration. When an evaluator and an agent disagree, it creates a coaching opportunity to align their understanding of your quality standards. This helps calibrate not only your agents but also your evaluators.

  • Boost Agent Engagement: Giving agents a formal process to appeal scores shows that their perspective is valued. This sense of fairness and involvement can significantly increase their buy-in to the entire QA program, transforming it from a punitive process to a collaborative one.
