Weekly Calibration Sessions

Call calibration is the process of standardizing how calls are evaluated and scored. The ultimate goal is to ensure that when any two people listen to and evaluate the same call, they arrive at the same score. Calibration ensures fairness and objectivity and is an absolute requirement for improving agents' call handling.

When Calibration Is Needed:
Indicators that a QA (Quality Analyst) program needs calibration include downward trends in customer and agent satisfaction survey results, widening divergence in script adherence metrics, increasing average handle time, and rising agent turnover. When QA staff assignments change over time and evaluators score similar contacts differently, unfairness or bias in evaluations is likely present. Calibrating evaluations helps create and maintain a culture of quality and fairness. When managers, supervisors, and agents understand the importance and value of calibrating evaluations, the desired organizational outcome is more likely to be achieved. Having evaluators apply the same criteria, based on benchmarked standards, when scoring similar interactions helps eliminate bias.

In addition to the agents whose performance is being evaluated and the QA evaluators who score their interactions, supervisors, quality assurance managers, and contact center managers need to be involved in the sessions.

Participation of agents increases their awareness that the objective is fairness, achieved through consistent and objective scoring. Their participation reinforces the desired quality culture and serves as a staff development tool. Having agents in the calibration sessions brings frontline input into the process, reduces “us” versus “them” feelings, and gives participants opportunities for personal growth. The drawbacks of agent participation are the possibility of a breach in the confidentiality of contacts and scoring, even if identities are redacted, and the loss of agent availability to handle interactions.

QA evaluators benefit from the establishment of a scoring baseline based on the collective feedback of the scoring group. Participating in the sessions increases their experience in judging scores, shortens their individual learning curves, and helps reduce errors when they conduct evaluations.

Participation of supervisory staff is important so that they gain insight into the QA process. Their coaching becomes easier, for example, when they understand the calibration process and can explain why deductions were made to scores and how those measurements have been applied objectively to all agents' performance.

QA managers and leads should be involved in sessions to ensure that their teams are scoring accurately across a wide spectrum of contacts. Their participation also enables them to be more flexible when creating QA assignments, eases the onboarding of new hires to the QA staff, and reduces challenges and disputes over completed evaluations.

When supervisors are not scoring evaluations, participation of department management (management representatives) provides those managers with insight into the calibration process and ensures that management's views are considered in the quality assurance process and its calibration. Building a quality culture across the organization is easier when management is directly involved in the process and able to identify staff knowledge gaps at all levels.

Establishing a Calibration Practice Includes:

  • QA form design
  • Quality standards definition
  • Ownership of the process
  • Selection of contacts for evaluation
  • Frequency of sessions
  • Measuring calibration effectiveness

QA Form Design
Creating an effective evaluation form is the first step in preparing to calibrate agent evaluations. The form records the checklist and subjective questions used to measure whether an interaction meets the center's quality standards. It should rate all components of a contact: the opening and greeting, discovery of the purpose of the contact, resolution of the customer's inquiry or issue, and the summary closing. Checklist questions are rated pass/fail and tallied. Subjective questions can be rated on a scale of 1 to 5, where 1 is the lowest (poor) and 5 is the highest (excellent) rating.
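
As a rough illustration, the sketch below models such a form in Python. The item wording, the scoring structure, and the equal weighting of checklist and subjective scores are assumptions for this example, not prescribed standards.

```python
from dataclasses import dataclass

# Hypothetical form structure; item wording, the 1-5 scale, and the
# equal weighting below are assumptions for this sketch.

@dataclass
class ChecklistItem:
    question: str
    passed: bool          # pass/fail rating

@dataclass
class SubjectiveItem:
    question: str
    rating: int           # 1 (poor) to 5 (excellent)

@dataclass
class EvaluationForm:
    checklist: list
    subjective: list

    def score(self) -> float:
        """Combine pass/fail tallies and 1-5 ratings into a 0-100 score."""
        check_pct = sum(item.passed for item in self.checklist) / len(self.checklist)
        subj_pct = sum(item.rating for item in self.subjective) / (5 * len(self.subjective))
        return round(100 * (check_pct + subj_pct) / 2, 1)  # equal weighting

form = EvaluationForm(
    checklist=[
        ChecklistItem("Used approved opening and greeting", True),
        ChecklistItem("Verified the customer's identity", True),
        ChecklistItem("Summarized the resolution at close", False),
    ],
    subjective=[
        SubjectiveItem("Discovery of the purpose of the contact", 4),
        SubjectiveItem("Resolution of the customer's inquiry", 5),
    ],
)
print(form.score())  # 78.3
```

A weighted scheme, for example one that counts compliance items more heavily than subjective ones, works the same way; only the combining arithmetic in score() changes.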

Quality Standard Definitions
All of these questions must be based on the center's Quality Standards Definition Document (QSDD) or Quality Assurance Standards (QAS) being applied. The QSDD or QAS is a reference guide for all components of the evaluation form, including definitions of performance objectives with real examples of desired contact handling. Following these standards ensures that every question on the evaluation form is clearly defined. Everyone involved in the agent evaluation process needs to be educated on these quality standards definitions and how they inform the evaluation form.

When changes in quality standards definitions or quality assurance standards occur, they need to be reflected in the QA form and calibration session processes. Management must ensure that the staff responsible for the calibration practice makes those changes to the QSDD or QAS in a timely manner and communicates the updates to the agents, supervisors, and managers in the group.

Ownership of the Process
Management decides who owns calibration. The process owner needs project management and delegation skills, the authority to enforce attendance at calibration sessions, and the communication skills to report session results and issues to senior management. The session facilitator is not responsible for making the final decision on the scoring of contacts; the facilitator is responsible only for conducting the process successfully. This includes handling the logistics of where, when, and for how long each session occurs, booking session rooms, setting up the supporting technology, sending emails with the list of contacts to be calibrated, and keeping the calibration group on track and on target through effective leadership and facilitation. The facilitator role is best filled on a rotating basis by several trained staff, so that knowledge of how to conduct the calibration practice effectively is shared and will not be lost if one or more experienced facilitators leave the organization.

Calibration Frequency
During the initial rollout months of a newly instituted practice, weekly sessions are favored so that practitioners can adjust to, fine-tune, and adopt the process as a norm. Once the group has met its initial calibration goal, frequency can be reduced to biweekly (every other week) sessions. Factors that may dictate a change in session frequency include changes in the organization's business practices, updates to quality standards definitions or the QA form, and the involvement of newly hired staff.

Selection of Contacts
Contacts for calibration can be either nominated or randomly selected. For nominated contacts, the contact and the justification for its nomination should be emailed to the calibration facilitator before the session. For random selection, use the contact list randomizer in the organization's call recording software, in conjunction with other selection criteria, to generate a specific number of contacts with which to calibrate evaluations. For most contact centers, the large majority of contacts are telephone calls, but calibration may also include online chat sessions and email responses.
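
If the recording platform does not expose a randomizer, a simple filtered random sample achieves the same effect. The Python sketch below is a generic stand-in; the field names and filter criteria are assumptions for the example.

```python
import random

# Generic stand-in for a contact-list randomizer; the field names and the
# filter criteria are assumptions for this example.

contacts = [
    {"id": 101, "channel": "call",  "duration_sec": 340},
    {"id": 102, "channel": "chat",  "duration_sec": 610},
    {"id": 103, "channel": "call",  "duration_sec": 95},
    {"id": 104, "channel": "email", "duration_sec": 0},
    {"id": 105, "channel": "call",  "duration_sec": 480},
]

def select_for_calibration(pool, n, channel=None, min_duration=0):
    """Randomly pick n contacts that meet the selection criteria."""
    eligible = [c for c in pool
                if (channel is None or c["channel"] == channel)
                and c["duration_sec"] >= min_duration]
    return random.sample(eligible, min(n, len(eligible)))

# e.g., two calls of at least two minutes for the next session
print(select_for_calibration(contacts, n=2, channel="call", min_duration=120))
```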

Measuring Calibration Effectiveness
Calibration effectiveness is measured against the organization's chosen standard deviation (SD) goal. First, compile the results of the group's calibration sessions, then track progress toward the goal. Present the results during calibration sessions and in meetings with management, and periodically adjust the standard deviation goal as appropriate. Ideally, use benchmark studies that specify best-in-class standards for contact centers in relevant industries.
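
As a minimal worked example, the Python sketch below computes the sample standard deviation of the scores a group of evaluators gave the same contact and compares it to the SD goal. The scores and the 3-point goal are invented values for illustration.

```python
from statistics import mean, stdev

# Scores five evaluators gave the same contact during a session; the
# scores and the 3-point SD goal are invented values for illustration.

scores = [82, 85, 79, 88, 84]
sd_goal = 3.0

sd = stdev(scores)  # sample standard deviation
print(f"mean={mean(scores):.1f}, sd={sd:.2f}, "
      f"{'within' if sd <= sd_goal else 'outside'} goal of {sd_goal}")
```

A falling SD across sessions indicates that evaluators are converging; a rising SD suggests the group needs to revisit the quality standards definitions.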

Implementation

Your team must understand the criteria that define successful interactions and be committed to recognizing and resolving all interpretational differences to reach fully calibrated status. Here are a few suggestions that can help your calibration effort:

  • Schedule at least one hour for each calibration session. Share five or six recorded interactions with your team before the session so each team member can come prepared. If recorded interactions are not available, listen to live calls so you can evaluate together.
  • Before the first calibration session, the team should set an initial target variance for the score. When your team is just beginning the calibration process, set a goal such as: overall call scores should be within five points (or 10 percent) of each other (see the sketch after this list). In the beginning, scores may vary greatly, but you can gradually lower the target variance in follow-up sessions until the gap is reduced to an acceptable level. It may take many sessions to achieve the initial target variance and many more to get the quality monitoring team fully calibrated.
  • Designate a facilitator to moderate discussions, take notes, and keep the team focused on the goal. Establish some facilitation ground rules.
  • Have all participants use the same monitoring form to score the interaction. After all the participants are finished, have one person summarize the interaction. During the recap, identify the areas where points were subtracted and give a final score. Have the participants take turns recapping so that everyone learns how.
  • The facilitator should ask participants to share and explain their scores. Be prepared for a passionate discussion on score variance from the participants! It’s not important that everyone agrees on a final score. The point is for everyone to understand the accepted criteria for a successful interaction, and apply that understanding consistently in evaluating future interactions. Quality monitors should provide feedback to the agents during their review/evaluation sessions on how the criteria are being applied so they can learn from the process.
  • At the end of the session, the facilitator should review the notes, highlighting any changes or group decisions, and distribute to everyone who participated.
  • Once calibration is attained, schedule periodic calibration sessions to keep the monitoring team sharp.
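
To make the target-variance suggestion above concrete, here is a minimal Python check of whether one contact's scores fall within the initial five-point goal. The evaluator names and scores are invented for the example.

```python
# Checks whether one contact's scores fall within the initial five-point
# target variance; evaluator names and scores are invented for the example.

scores = {"Ana": 88, "Ben": 84, "Chris": 91, "Dee": 86}
target_spread = 5

spread = max(scores.values()) - min(scores.values())
print(f"spread={spread} points: "
      f"{'within target' if spread <= target_spread else 'keep discussing'}")
```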

Ground Rules for Call Calibration:
Establish ground rules and communicate them to all participants in advance, so sessions are positive and productive. Keep these tips in mind as you get started:

  • Create an environment where everyone can feel comfortable sharing their opinions without fear of retribution.
  • Avoid being confrontational and allow everyone’s opinion to be heard. Allow people to finish explaining their thoughts.
  • Talk about the facts, not feelings. The performance criteria should be defined by measurable tasks, so keep the discussion focused on what can be taught, not thought.
  • When making decisions, consider what would be best for the overall success of the program. Do not make a decision just because everyone has grown tired of the discussion!
  • Enforce compliance. It’s critical to identify and warn anyone who uses their own standards while monitoring, rather than the agreed-upon standards.
  • Do not give up when the process gets difficult and some people seem ready to quit.