Create Auto Queues

With Auto Queues, you can automatically download call recordings on a schedule and assign them to Voxjar's AI evaluator or to your human QA team.

The queue builder lets you define rules, schedules, and data sources to initiate call evaluations.

To create a new Auto Queue, go to Queues (app.voxjar.com/queues), click "Create a Queue", and select "Auto Queue".

[Screenshot: launching the Auto Queue builder in Voxjar]

Data Sources

[Screenshot: integration selection for an Auto Queue]

Auto Queues can only run if you have integrated your telephony platform or a cloud storage bucket containing your call recordings and metadata.

If you haven't created an integration, you will see a link to set up your first integration.

Otherwise, you can create an AdHoc Queue and upload your call recordings manually.

Data Collection Filters

You can use Queue filters to identify which calls should be assigned for evaluation. Voxjar collects metadata from your integrations on the initial connection and on every run to keep your Queue filters up to date.

Standard Voxjar Queue filters that will always be selectable:

  1. Call duration

  2. Call direction

  3. Agents

The remaining filters are synced from your integration. If you don't see any additional filters, make sure your integration has finished syncing (you can check its status on the integration in Settings).

If your integration is a cloud storage provider or FTP, you'll need to make sure that your metadata fields are mapped correctly. Right now, only text-based custom metadata from cloud storage or FTP will become a Queue filter; support for other field types will expand in the future.
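
To make the filter step concrete, here is a minimal sketch of how standard and metadata-based filters narrow down a set of calls. This is illustrative only; the field names and values are assumptions, not Voxjar's actual schema:

```python
# Illustrative only: field names and filter values are hypothetical.
calls = [
    {"duration_sec": 45,  "direction": "inbound",  "agent": "dana", "campaign": "renewals"},
    {"duration_sec": 380, "direction": "outbound", "agent": "lee",  "campaign": "sales"},
    {"duration_sec": 610, "direction": "outbound", "agent": "dana", "campaign": "sales"},
]

filters = {
    "min_duration_sec": 120,      # standard filter: call duration
    "direction": "outbound",      # standard filter: call direction
    "agents": {"dana", "lee"},    # standard filter: agents
    "campaign": "sales",          # custom filter synced from integration metadata
}

matching = [
    c for c in calls
    if c["duration_sec"] >= filters["min_duration_sec"]
    and c["direction"] == filters["direction"]
    and c["agent"] in filters["agents"]
    and c["campaign"] == filters["campaign"]
]
print(matching)  # the two outbound "sales" calls over two minutes
```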

[Screenshot: Auto Queue filters for selecting call recordings]

Data Collection Rules

After you've set your filters, you'll create rules for the Queue. These rules help you set guardrails to prevent pulling too many calls and ensure Voxjar samples a good distribution of data.

There are five rules for Auto Queues:

  1. Schedule how often to run the queue

  2. Choose how far back to pull calls

  3. Set call distribution

  4. Set your sample size

  5. Set a deadline for evaluation

Data Collection Schedule

Auto Queues can be scheduled to run daily, weekly, or monthly.

The next run date will be shown below your selection so you know exactly what to expect.

The schedule anchors to the queue's creation date, so a weekly queue first runs one week after the day you create it, a monthly queue one month after, and so on.
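
As a rough illustration of that anchoring (a sketch, not Voxjar's actual scheduler; the monthly interval is simplified to 30 days here):

```python
from datetime import date, timedelta

def next_run(created: date, cadence: str) -> date:
    """Approximate the first run date for a queue created on `created`.
    'monthly' is treated as 30 days purely for illustration."""
    step = {"daily": 1, "weekly": 7, "monthly": 30}[cadence]
    return created + timedelta(days=step)

print(next_run(date(2024, 6, 3), "weekly"))  # 2024-06-10
```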

[Screenshot: setting a schedule for collecting call recordings]

Data Collection Window

Set how far back you want Voxjar to look for call recordings and metadata.

You usually do not want to look back farther than your schedule interval, or you risk duplicate call evaluations.

The farther back you look, the longer it will take Voxjar to collect data. At the moment, data collection has a one-hour time limit. If your Auto Queue fails, it is often because the window is too large and there is too much data to parse through in that time.
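
To see why a window larger than the schedule interval causes duplicates, consider this sketch (the numbers are illustrative):

```python
from datetime import date, timedelta

interval_days = 7    # weekly schedule
window_days = 14     # look-back window (larger than the interval)

run = date(2024, 6, 10)
window_start = run - timedelta(days=window_days)
prev_run = run - timedelta(days=interval_days)

# Calls between window_start and prev_run were already collected
# by the previous run, so they risk being evaluated twice.
overlap = (prev_run - window_start).days
print(f"{overlap} days of calls may be collected twice")  # 7
```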

[Screenshot: data collection window settings]

Data Distribution

By default Voxjar will randomly sample your dataset to distribute the call evaluations fairly.

You can also choose to sample a set number of calls per agent.

For this to work, your integration must identify agents in the metadata. You can confirm this when setting your Queue filters by clicking "select agents". (Be sure to reset your agent filter after checking.)
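
A minimal sketch of the two sampling modes; the function and field names are hypothetical, not Voxjar's implementation:

```python
import random
from collections import defaultdict

def random_sample(calls, n):
    """Default mode: a uniform random sample across the whole dataset."""
    return random.sample(calls, min(n, len(calls)))

def per_agent_sample(calls, n_per_agent):
    """Per-agent mode: a fixed number of calls for each agent.
    Requires agent IDs in the call metadata."""
    by_agent = defaultdict(list)
    for call in calls:
        by_agent[call["agent"]].append(call)
    sampled = []
    for agent_calls in by_agent.values():
        sampled += random.sample(agent_calls, min(n_per_agent, len(agent_calls)))
    return sampled
```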

[Screenshot: call recording distribution options]

Sample Size

[Screenshot: setting the sample size of call recordings to be downloaded]

Select a max sample size so Voxjar does not pull more calls than you want.

Your max sample size will always be respected. It sets the ultimate ceiling on data collection.

Sample size cannot currently exceed 1,000 calls.

This cap helps ensure that the queue completes within the one-hour time limit, and it protects against accidental AI credit overuse and against flooding your human QA team with too many evaluation requests.
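
In effect, the max sample size is a hard ceiling applied after filtering; a short sketch of the idea (illustrative, not Voxjar's code):

```python
MAX_SAMPLE = 1000  # current platform ceiling

def calls_to_pull(requested: int, matching_calls: int) -> int:
    # The number of calls actually pulled never exceeds the max
    # sample size, no matter how many calls match the filters.
    return min(requested, matching_calls, MAX_SAMPLE)

print(calls_to_pull(5000, 2400))  # 1000
```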

Evaluation Window

An evaluation deadline sets a target completion time window for each evaluation assigned by the queue.

This is mostly useful for manual QA. The AI evaluator is automatically queued for the soonest possible completion time and is usually done within a minute of being assigned an evaluation.

After your queue is created, you'll see how many evaluations are past due on the queue list.

[Screenshot: setting the deadline for QA scores from human evaluators]

Assign Evaluators

After setting your rules you'll be asked to assign calls to either Voxjar's AI evaluator or to your human QA team members with a scorecard of your choice.

When you're finished, the Auto Queue will be scheduled, and evaluations will be automatically assigned to your team or scored by Voxjar's AI evaluator.

AI Evaluator

Voxjar's AI evaluator can automatically evaluate the calls from your Auto Queue with scalable, high quality responses.

The AI is built around the leading large language models in the market.

That means that you do not need to write custom queries or keyword searches for the AI Evaluator.

It is automatically compatible with all scorecards created in Voxjar, so you can use the same scorecards with both a human team and the AI.

The AI evaluator operates on a credit system. Every 5 minutes of audio reviewed uses 1 credit.
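
As a rough illustration of the credit math (this sketch assumes usage rounds up per call, which is an assumption; check your plan for exact billing behavior):

```python
import math

def credits_for_call(duration_minutes: float) -> int:
    # 1 credit per 5 minutes of reviewed audio. Rounding up per call
    # is an assumption for illustration, not confirmed billing logic.
    return math.ceil(duration_minutes / 5)

print(credits_for_call(12))   # 3 credits
print(credits_for_call(4.5))  # 1 credit
```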

[Screenshot: AI evaluator selection]

Human Evaluators

You can also assign evaluations to a human QA team.

Invite evaluators to your team from the team page.

Once invited, you can select them to review calls from your queues.

Evaluations are assigned round robin.

If you select "All Evaluators", assignments will only go to users with the "Evaluator" role. The individual selection list, however, includes both "Admin" and "Evaluator" users.

Evaluations will be added to the user's Work Queue for them to evaluate in a designated workflow.
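
Round-robin assignment simply cycles through the selected evaluators in order. A minimal sketch of the idea (the names are illustrative):

```python
from itertools import cycle

evaluators = ["avery", "blake", "casey"]    # users with the Evaluator role
calls = [f"call-{i}" for i in range(1, 8)]  # seven calls from the queue

# Each call goes to the next evaluator in rotation.
for call, evaluator in zip(calls, cycle(evaluators)):
    print(call, "->", evaluator)
# call-1 -> avery, call-2 -> blake, call-3 -> casey, call-4 -> avery, ...
```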

[Screenshot: human QA evaluator selection]

Scorecard Assignment

The AI evaluator requires a scorecard assignment to operate.

We suggest assigning a specific scorecard to human evaluators, too.

Any scorecard created in Voxjar is compatible with the AI Evaluator and your human QA team's Work Queue workflow.

[Screenshot: scorecard selector]