Industry trends suggest that Call Centers are focusing much more attention on implementing quality assurance practices, so that customer satisfaction levels remain a primary focus. This section identifies specific quality assurance practices that contribute to the overall achievement of service excellence, including call monitoring, customer call-backs, mystery calling, benchmarking, and processes and procedures.
Call Monitoring
Many Contact Centers have introduced call-monitoring strategies to ensure that quality standards are maintained when providing information to customers and to measure the quality of customer interactions. Given the increasing use of part-time staff and the hiring of new staff, call monitoring can also assist in identifying specific training needs.
Call monitoring is a useful tool for coaching staff to improve performance. However, if it is used as a disciplinary or negative performance management tool, it may result in feelings of mistrust and suspicion among staff. The primary purpose of call monitoring should be its value as a development tool, and this must be clearly communicated to staff. Call center managers and team leaders should carry out ten call observations a month for each staff member.
Call Center Agent Monitoring
Introduction
According to a 1999 study by Call Center Management Review, more than 88% of call centers monitor their agents. However, nearly one-half of respondents to the study say that those conducting the monitoring do not have enough time to provide effective feedback. Most monitoring is conducted by unit supervisors who, because of short staffing and high workloads, may delay for days or even weeks before sharing their observations with individual agents.
Furthermore, more than one-half of call centers only monitor from one to five calls per agent each month, resulting in an insufficient quantity of feedback to consistently improve knowledge and skills.
Despite the general trend, we have found that some programs are quite effective. The best monitoring programs typically meet three conditions:
- Simplicity – The evaluation instrument is short, clear and actionable. Any scoring or weighting uses simple, intuitive formulas.
- Fairness – Auditors are perceived as impartial and well-trained.
- Consistency – All customer contact agents are monitored on a consistent schedule. The plan for monitoring, reporting and coaching must be realistic; otherwise the program will deteriorate as it is superseded by day-to-day time demands on supervisors and agents.
Methods of Monitoring
Call centers generally use three methods to monitor agents:
- Side-by-Side, in which the supervisor sits at an agent’s work station and provides immediate feedback;
- Remote real-time, where the supervisor listens in on calls from a remote work station; and
- Recorded, where a supervisor or Service Quality agent reviews recorded interactions.
All three methods have inherent advantages and disadvantages.
The least advantageous is side-by-side monitoring, as it tends to alter the behavior of agents and thus fails to provide an accurate view of their skills.
Call center agents report that this method is uncomfortable for them and that they do not feel assessments made under these circumstances are representative of their actual performance.
Remote real-time monitoring is a more useful supervisory tool, but it is not ideal for providing reliable, objective evaluations that can be used for incentives and trending.
Even the best supervisors are inherently biased when assessing their own staff, particularly if they receive incentives that are linked to their unit's results.
In addition, agents are frequently able to identify when supervisors are monitoring and to alter their behavior in response to this knowledge.
Monitoring of recorded calls provides the best opportunity for reliable assessment because the agents are not aware of being monitored, and the monitoring professionals can check the accuracy of their assessments by replaying the recording.
The problem of bias still exists, however, if supervisors assess their own staff.
We recommend a dual approach to monitoring call center agents. Formal monitoring should be performed by an independent unit within the contact center or by a third-party provider, using recorded data. At the same time, supervisors should monitor their own staff on an ad hoc basis.
The results of supervisor monitoring should be used only for coaching, while formal monitoring should employ quantitative analysis with results linked to employee rewards and incentives.
Formal Monitoring
Formal monitoring generally meets the following criteria:
- Objective, third-party evaluation;
- The use of recorded calls, preferably with synchronized screen capture;
- Random selection of calls for evaluation;
- A structured evaluation tool, including an overall numeric score;
- Quick turnaround of evaluations (preferably within one day);
- Real-time, on-screen reporting showing trends, comparisons and auditor comments;
- Frequent review and coaching sessions between supervisors and agents, using both the evaluation reports and the recorded calls.
Training
Formal monitoring is performed by an independent team of trained auditors, either within the company or through a third-party provider. Adequate training and staffing of the team is essential to make the program a success. Training should, of course, include a thorough review of the evaluation form. In addition, it should include techniques for randomly selecting calls. To ensure that evaluations are consistent regardless of which team member performs the monitoring, the training should also include validation exercises. These are done by having different team members evaluate the same calls and then compare results. A high agreement rate (80% or more) is required on the evaluations to achieve validation. Until this rate can be reached, the team should not be approved to begin formal monitoring.
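To make the validation exercise concrete, the following sketch (in Python, using made-up evaluations) computes a simple pairwise agreement rate between two auditors who assessed the same call. The question names and answers are hypothetical, and real programs may use more formal agreement statistics.

```python
# Minimal sketch: agreement rate between two auditors who evaluated the
# same call. Question IDs and answers below are hypothetical examples.

def agreement_rate(auditor_a: dict, auditor_b: dict) -> float:
    """Fraction of shared questions on which both auditors agreed."""
    shared = set(auditor_a) & set(auditor_b)
    matches = sum(1 for q in shared if auditor_a[q] == auditor_b[q])
    return matches / len(shared) if shared else 0.0

a = {"greeting": "yes", "used_name": "yes", "empathy": "no", "closing": "yes"}
b = {"greeting": "yes", "used_name": "no",  "empathy": "no", "closing": "yes"}

rate = agreement_rate(a, b)
print(f"Agreement: {rate:.0%}")  # 75% for this pair
print("Validated" if rate >= 0.80 else "Further calibration needed")
```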
Selection
Calls should be selected randomly for formal monitoring – that is, for any evaluations that are used for quantitative analysis or incentives. The method of selection should be predetermined – for example, the nth call in a series. (Some CTI systems have functions that will record calls randomly or based upon predetermined criteria.) Auditors should not have the option of selecting or rejecting individual calls, even if they think a call is non-representative.
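As one illustration of predetermined selection, the sketch below implements the "nth call" rule mentioned above, with a random starting offset so the pattern is not predictable. The call identifiers and interval are assumptions for the example, not features of any particular CTI system.

```python
import random

# Minimal sketch of "nth call" systematic selection from a batch of
# recorded calls. Call IDs and the interval n are hypothetical.

def select_every_nth(call_ids: list, n: int) -> list:
    """Return every nth call, starting at a random offset within the
    first interval so auditors cannot predict which calls are chosen."""
    start = random.randrange(n)
    return call_ids[start::n]

calls = [f"call-{i:04d}" for i in range(1, 101)]  # 100 recorded calls
sample = select_every_nth(calls, n=10)            # roughly a 10% sample
print(sample)
```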
While formal monitoring should include random selection, informal monitoring by supervisors can and should be highly selective. By including a formal, third-party component to the call center’s monitoring program, supervisors are freed from the need to appear unbiased in their selection of calls. As a result, they can be up-front about selecting calls that will best illustrate the points they wish to make in coaching sessions. Since the results of informal monitoring are not linked to incentives, agents will have little reason to dispute their supervisor’s selection.
Timing
Quick turnaround of feedback is essential to the effectiveness of call monitoring. A delay of even a few days will weaken the ability of the assessment to change behavior. Because of the limited amount of time available to supervisors and agents, it may not be realistic to have a formal review and coaching session every time a call is monitored. However, a printed or on-screen report should be available within one day of the evaluation.
Communication of Results
Evaluations of individual calls or e-mails should be sent to the agent monitored and to his/her supervisor. The report should have value independent of any subsequent review or coaching by supervisors. To this end, the reports should provide not only documentation of the behaviors and skills assessed, but clear narrative suggestions for improvement. Follow-up review and coaching are best conducted by the agent's supervisor, rather than by the auditing staff who perform the assessments. Coaching should include a review of the actual recorded calls in addition to the evaluation reports. It is important that both supervisors and associates understand that the review is not an occasion to dispute the results of the independent assessment. Rather, it is an opportunity to develop strategies for improving scores in the future and for introducing quality components that are not captured in the evaluation report (best practices, etc.).
Realistically, supervisors will not be able to perform a timely review of every call monitored, and may not be able to hold a review more often than once a month. However, the immediate distribution of the completed evaluation report will allow associates to continually improve their performance, and to discuss their improvement efforts during subsequent review and coaching sessions.
Reporting
Reporting of results should be motivational as well as informative. Individual evaluation reports from formal monitoring should include specific recommendations for improvement, as well as recognition of the positive elements of the customer interaction. Other reports that should be shared with supervisors and agents include:
- Success Tracking, showing month-to-month trends for the individual associate, as well as for the unit and the company;
- Peer Comparison, showing the associate's average score compared to the average for all agents;
- Unit Comparison, showing the unit's average score compared to the company average.
Ideally, call agents and supervisors should be able to access all of these reports electronically on demand.
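As a minimal sketch of how these comparisons might be computed, the fragment below derives peer and unit comparisons from made-up monthly scores; the names, scores and data layout are assumptions for illustration only.

```python
from statistics import mean

# Hypothetical monthly evaluation scores (percentages) by unit and agent.
scores = {
    "Unit A": {"Kim": [82, 88, 90], "Lee": [75, 79, 84]},
    "Unit B": {"Pat": [91, 89, 93], "Sam": [70, 74, 78]},
}

agent_avgs = [mean(s) for unit in scores.values() for s in unit.values()]
company_avg = mean(agent_avgs)  # average across all agents

for unit, agents in scores.items():
    unit_avg = mean(mean(s) for s in agents.values())
    print(f"{unit}: {unit_avg:.1f} vs company {company_avg:.1f}")  # unit comparison
    for agent, s in agents.items():
        # Peer comparison plus a simple month-to-month trend.
        print(f"  {agent}: {mean(s):.1f} (trend {s[0]} -> {s[-1]})")
```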
Evaluation Form
An effective evaluation form includes five main components:
Context. This section includes details such as the agent’s name, time of call and customer account number. It should also provide information that will be useful for subsequent quantitative analysis, such as type of call, customer issue, resolution and escalation.
Observation. The bulk of the evaluation should be composed of questions that can be answered yes or no (or N/A). These questions should be clear and simple, with no explanation or interpretation needed. They should describe observable behaviors that either occur or do not occur, and that are based upon documented performance standards. When designing the evaluation tool it is important to avoid double-barreled questions, such as "The associate made relevant and appropriate statements." If "relevant" and "appropriate" represent two separate things, they should be measured separately. If they are synonymous, only one should be included to avoid confusion. It is also important to avoid questions that represent outcomes rather than the behaviors that lead to the outcomes. For example, "Establish rapport" can be answered yes or no (perhaps), but a "no" answer does little to help the associate improve. If establishing rapport is a desired outcome, it would be more useful to label a section of the evaluation as "Rapport-Building Skills," and within that section evaluate specific behaviors that contribute to building rapport, such as calling the customer by name.
Scaled Ratings. The evaluation should include a few ratings in order to reflect qualitative factors in the customer interaction. Ratings should be used sparingly, however, for two reasons: First, they tend to open the door to dispute, which undermines the credibility of the evaluation; and second, ratings tend to be of very little practical use to either supervisors or associates. Note that the number of points on the scale makes little difference, but every point should have a clear anchor or explanation, so that a "4" rating on a 5-point scale does not have different meanings for different evaluators.
Comments. While they do not contribute to quantitative analysis or incentives, comments tend to be the most useful component of the evaluation for associates. Because of this, evaluators should be trained to provide narratives that offer specific and detailed recommendations, as well as positive recognition when appropriate.
Scoring. For the purpose of setting goals, tracking improvement and triggering incentives, evaluations should include a numeric score. A simple and effective method is to express the score as a percentage, and to calculate it by dividing the number of points given by the number possible. Questions answered N/A do not contribute to either the numerator or the denominator. Generally, yes/no questions will return one point for yes and zero points for no. Scaled ratings are best collapsed into three points, regardless of the size of the scale used. For example, a 5-point scale might return zero points for a rating of 1 or 2, one point for a rating of 3 or 4, and two points for a rating of 5.
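The scoring rule above translates directly into a short calculation. The sketch below follows the method in the text (N/A excluded from both numerator and denominator; yes/no worth one point; a 5-point rating collapsed to 0, 1 or 2 points); the evaluation data itself is a hypothetical example.

```python
# Minimal sketch of the percentage-scoring method described above.

def collapse_rating(rating: int) -> int:
    """Collapse a 5-point rating to 0/1/2 points, per the text."""
    return 0 if rating <= 2 else (1 if rating <= 4 else 2)

def score(answers: list, ratings: list) -> float:
    points = possible = 0
    for answer in answers:           # yes/no/N-A observation questions
        if answer == "na":           # N/A drops out of both sides
            continue
        possible += 1
        points += 1 if answer == "yes" else 0
    for r in ratings:                # collapsed scaled ratings
        possible += 2                # maximum 2 points each
        points += collapse_rating(r)
    return points / possible if possible else 0.0

print(f"{score(['yes', 'yes', 'no', 'na'], [5, 3]):.0%}")  # 5/7, about 71%
```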
Weighting
Not all of the factors evaluated have the same effect on customer satisfaction, retention, or other desired outcomes. Furthermore, some skills may need to be emphasized more than others because they are more difficult to learn. For these reasons, many organizations use a weighting system when scoring evaluations.
If a weighting scheme is used, it should be simple and intuitive. The easiest system is to assign a weight of 1, 2 or 3 to each variable in the evaluation. The evaluation form should clearly indicate each variable's weight so that supervisors and associates know which skills to emphasize to achieve a higher score.
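Building on the scoring sketch above, the fragment below applies 1/2/3 weights to yes/no questions; the question names and weight assignments are illustrative, not prescribed values.

```python
# Minimal sketch of weighted scoring with weights of 1, 2 or 3.
# Question names and weights below are hypothetical.

def weighted_score(answers: dict, weights: dict) -> float:
    """Percentage score where each yes/no question counts its weight;
    N/A answers drop out of both numerator and denominator."""
    points = possible = 0
    for question, answer in answers.items():
        if answer == "na":
            continue
        w = weights.get(question, 1)    # unweighted questions default to 1
        possible += w
        points += w if answer == "yes" else 0
    return points / possible if possible else 0.0

answers = {"greeting": "yes", "used_name": "no", "empathy": "yes", "hold": "na"}
weights = {"greeting": 1, "used_name": 2, "empathy": 3, "hold": 2}
print(f"{weighted_score(answers, weights):.0%}")  # (1 + 0 + 3) / 6, about 67%
```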
It is usually best not to use a weighting system in the early stages of a formal monitoring program. After the program becomes well established and accepted, weights can be added to the scoring formula. Over time the weighting scheme can be changed as the needs of the organization evolve, provided the changes are infrequent and well communicated.
Informal Monitoring
The benefits of a formal monitoring system are that it is fair, objective, timely and consistent. However, these benefits should not come at the expense of supervisory flexibility or control. For this reason, an informal monitoring program, centered around supervisor feedback, should exist concurrently.
The logistics of informal monitoring need not change substantially from those used at most call centers today. Supervisors should be able to choose their method of monitoring (side-by-side, remote real-time or recorded), and should be able to decide which calls and which agents they wish to monitor. They should also be able to give immediate feedback or wait for a convenient time to schedule a coaching session.
The difference is that, with the introduction of a formal monitoring program, supervisors need not keep up the appearance of structure in their monitoring schedule. They can choose whom to monitor based upon their judgment rather than a company-set quota. Since their findings are not quantified (and do not even need to be recorded), the necessity for consistency from supervisor to supervisor is reduced. And, since their findings are not directly linked to staff incentives, associates will have less reason to dispute findings or “shoot the messenger” when given feedback.
Additional monitoring techniques have proven valuable in many organizations, including:
- Peer monitoring;
- Monitoring between units;
- Competitor monitoring;
- Team review of tapes to discuss “best practices”.
All of these techniques should be included under the category of informal monitoring, and are most effective when balanced with a formal, structured monitoring program administered by third-party evaluators.
Check List of Key Dimensions for Call Monitoring
As part of the process, a specific call-monitoring evaluation form should be designed to take into account the call dimensions that are relevant to the specific Call Center environment; a minimal sketch of such a form follows the checklist below. Other staff can be involved in the process of call monitoring in order to give them some insight into the standard of calls required at the Call Center and to provide them with task variety.
- Initial Greeting
- Customer Acknowledgment
- Use of courteous statements
- Displayed Empathy
- Kept customer informed when keying in
- Listened effectively
- Quality of voice tone and pitch
- Effective use of questions
- Use of positive words
- Reaffirmed call outcomes / action
- Ending of call – offered additional help / thanked customer
- Call resolution / outcome
- Overall Professionalism
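As a minimal sketch, the checklist above could be encoded directly as the observation section of the evaluation form; the identifiers and 1/2/3 weights below are assumptions for illustration.

```python
# Hypothetical encoding of the checklist as yes/no observation items:
# (identifier, wording on the form, illustrative 1/2/3 weight).
EVALUATION_DIMENSIONS = [
    ("initial_greeting",      "Initial greeting",                      1),
    ("acknowledged_customer", "Customer acknowledgment",               1),
    ("courteous_statements",  "Use of courteous statements",           1),
    ("displayed_empathy",     "Displayed empathy",                     2),
    ("informed_while_keying", "Kept customer informed when keying in", 1),
    ("listened_effectively",  "Listened effectively",                  2),
    ("voice_tone_and_pitch",  "Quality of voice tone and pitch",       1),
    ("effective_questions",   "Effective use of questions",            2),
    ("positive_words",        "Use of positive words",                 1),
    ("reaffirmed_outcomes",   "Reaffirmed call outcomes / action",     2),
    ("proper_closing",        "Ending of call / additional help / thanked", 1),
    ("call_resolution",       "Call resolution / outcome",             3),
    ("professionalism",       "Overall professionalism",               2),
]
```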