Calibration Sessions

Calibration Session Overview

Calibration sessions in Voxjar measure how consistently evaluators score the same interaction. By comparing scores side by side, they highlight areas where evaluators may need additional training or clearer evaluation criteria.

Creating and Running Calibration Sessions

To create a calibration session:

  1. Open an interaction

  2. Click “Calibration Sessions” in the panel on the right

  3. Select a scorecard for the session

  4. Choose the evaluators you want to participate. You can include both human evaluators and AI.

  5. Click “Initiate Calibration Session”

Once created, the calibration session will appear in the assigned users' work queues.

Calibration Session Interface

When participating in a calibration session:

  1. Access your work queue to find assigned calibration sessions.

  2. You'll see a distraction-free zone with the call details and evaluation form.

  3. Answer the questions as you would in a normal evaluation.

  4. You can add notes to questions if needed.

  5. Submit your evaluation when complete.

Calibration Session Reporting

To view calibration session results:

  1. Go to the Calibration Sessions page.

  2. Select the completed session you want to review.

  3. You'll see a side-by-side comparison of evaluator responses.

  4. Look for "disconnects": questions where evaluators gave different answers.

  5. You can drill down to see individual evaluations and notes.

This view allows you to identify inconsistencies between evaluators, which can be used to guide further training or clarification of evaluation criteria.
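If you export calibration results for your own analysis, a disconnect is simply any question where the evaluators did not all give the same answer. The sketch below is plain Python, not Voxjar's API; the data format (evaluator name mapped to question-and-answer pairs) is an assumption made for illustration.

```python
# A minimal sketch (assumed data format, not Voxjar's API) for surfacing
# "disconnects": questions where evaluators gave different answers.

from collections import Counter

def find_disconnects(responses: dict[str, dict[str, str]]) -> dict[str, Counter]:
    """Return, per question, the answer distribution where evaluators disagree.

    `responses` maps evaluator name -> {question: answer}.
    """
    per_question: dict[str, Counter] = {}
    for evaluator, answers in responses.items():
        for question, answer in answers.items():
            per_question.setdefault(question, Counter())[answer] += 1
    # Keep only questions with more than one distinct answer.
    return {q: c for q, c in per_question.items() if len(c) > 1}

# Example: two human evaluators and an AI evaluator scoring the same call.
responses = {
    "Alice": {"Greeting used?": "Yes", "Issue resolved?": "Yes"},
    "Bob":   {"Greeting used?": "Yes", "Issue resolved?": "No"},
    "AI":    {"Greeting used?": "Yes", "Issue resolved?": "No"},
}

for question, counts in find_disconnects(responses).items():
    print(f"Disconnect on '{question}': {dict(counts)}")
    # -> Disconnect on 'Issue resolved?': {'Yes': 1, 'No': 2}
```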

Using Calibration Sessions for Training

Calibration sessions can be used as a training tool:

  1. Review the side-by-side comparisons with your team.

  2. Discuss areas of disconnect to understand why evaluators scored differently.

  3. Use these discussions to refine your evaluation criteria or provide additional guidance to evaluators.

You can also include AI evaluations in calibration sessions to compare how well the AI performs against human evaluators.

Calibration sessions are excluded from regular evaluation reporting and scores, so they don't add noise to an interaction's quality data. Their results can be reviewed separately on the Calibration Sessions page.

By running calibration sessions periodically, you can ensure that all your evaluators, both human and AI, stay aligned in how they understand and apply your evaluation criteria.
