QATC Survey Results
This article details the results of the most recent QATC quarterly survey on critical quality assurance and training topics. About 65 call center professionals representing a wide variety of industries provided insight into their quality monitoring practices.
Number of Agents
Twenty-three percent of the participants are from call center operations with 101-200 agents, followed by 20% with over 50 agents and 17% each in the 51-100 and 201-300 ranges. This represents a broad cross-section of centers by size.
Technology Use for Evaluations
When asked what technology or format is used for quality monitoring evaluations, the vast majority (91%) have quality monitoring technology in place. The remainder use spreadsheets; none reported tabulating scores manually.
This suggests that the QA monitoring products are reaching a high level of market penetration in all sizes of centers.
Frequency of Forms Updates
The participants were asked how often the quality monitoring evaluation forms are updated. The responses are quite varied, with about one-third updating every 1-5 years and approximately one-quarter (28%) updating once a year. Only 15% indicated that form updates are driven by business changes rather than a planned schedule. Eighteen percent of the respondents chose “other,” which may mean they do not update their forms at all or that their frequency was not among the listed choices.
Number of Items on Quality Form
One of the challenges of completing the quality scoring process is answering each of the questions or items listed on the form. The largest group (43%) of respondents indicates there are between 11 and 20 items, while 26% report between 21 and 30. Only 11% have more than 30 items, and 20% keep the list short at 1 to 10 items. The key for many to achieving the desired results is to concentrate on the items that really make a difference to the end results in terms of customer satisfaction and accuracy. This can vary with the complexity of the interaction.
Weighting of Items
The majority (88%) of the respondents apply a weight to the items on the form while 12% do not. It is common to find that some items are much more critical to the organization and the coaching process than others and applying weights to the items takes that into account. However, these values can change over time and the weights applied may need to be reviewed each time the items on the form are changed.
Scoring Scheme
Just over half indicate that they use a simple yes/no or pass/fail scoring scheme, while about one-third use a scale such as 1 to 5 points. The remainder use another scheme, such as “meets expectations/exceeds expectations/needs improvement” or something similar. While pass/fail can work for things that are either done or not, it leaves little room for partial completion of or compliance with expectations. However, a point scale complicates the scoring and calibration process by an order of magnitude.
The purpose of the QA process may be critical to choosing the most effective scheme for an operation. Where the emphasis of QA is on coaching, a simple process can work quite well. Where the numerical score is important to differentiate agents for rewards or in reporting to others, the scale may be required.
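To make the weighting and scoring options above concrete, here is a minimal sketch of how a weighted quality score might be computed under either scheme. The item names, weights, and scores are hypothetical, invented for illustration; they are not drawn from the survey.

```python
# Hypothetical illustration: combining item weights with two scoring schemes.
# All item names, weights, and scores below are invented examples.

def weighted_score(results, weights):
    """Combine per-item results (each normalized to 0.0-1.0) using item
    weights, returning a percentage of the maximum possible score."""
    total = sum(weights[item] * score for item, score in results.items())
    maximum = sum(weights[item] for item in results)
    return 100.0 * total / maximum

# Pass/fail scheme: each item is simply 1 (pass) or 0 (fail).
pass_fail = {"greeting": 1, "verification": 1, "resolution": 0, "closing": 1}

# 1-5 point scale, normalized via (score - 1) / 4 to allow partial credit.
scaled = {"greeting": (5 - 1) / 4, "verification": (4 - 1) / 4,
          "resolution": (2 - 1) / 4, "closing": (5 - 1) / 4}

# Higher weights mark the items that matter most to the outcome.
weights = {"greeting": 1, "verification": 3, "resolution": 4, "closing": 1}

print(round(weighted_score(pass_fail, weights), 1))  # 55.6
print(round(weighted_score(scaled, weights), 1))     # 58.3
```

Note how the heavily weighted "resolution" item dominates both results, and how the point scale gives partial credit where pass/fail gives none; the weights would need review whenever the form's items change.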
Bonus Points
Over two-thirds of the respondents do not offer bonus points for exceptional performance. This largely matches the portion that uses a simple scoring process. However, those who use a point system may find that bonus points are an important differentiator that can be used to reward those who exceed the requirements.
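One hedged sketch of how bonus points could layer onto a point-based score follows; the cap and values are invented assumptions, not survey findings.

```python
# Hypothetical sketch: adding bonus points on top of a base quality score.
# The cap and the sample values are invented for illustration only.

def score_with_bonus(base_score, bonus_points, cap=105.0):
    """Add bonus points for exceptional performance, capped so that one
    outstanding call cannot dominate an agent's overall average."""
    return min(base_score + bonus_points, cap)

print(score_with_bonus(92.0, 5.0))   # 97.0
print(score_with_bonus(98.0, 10.0))  # 105.0 (capped)
```

A cap keeps the bonus a differentiator rather than a distortion of reported quality scores.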
Area for Written Comments
Nearly everyone (95%) indicated that there is a place on the forms for the evaluator to write comments or suggestions for improvement. This can be helpful for the agent but may also be important when the evaluator is not the one who coaches the agent. Having as much information as possible that can be used to guide the agent to better performance and reinforce excellence will increase the value of the entire process.
After-Call Work Evaluation
More than half of the respondents (58%) indicate that they do not include the after-call work or wrap-up work in the quality monitoring form or evaluation. This would seem to be an area ripe for expansion of the process, especially in those contacts that require significant work after the caller hangs up. For many centers focused on managing average handle time (AHT), this can also reveal any number of coaching opportunities.
Quality Definitions Document
The largest share of the respondents (86%) indicate that they do have a quality definitions document that provides descriptions and examples of each question and/or statement on the form. Another 8% are in the process of completing this important document, while 6% indicate they do not have one. It is difficult for an agent to know what is meant by each item being scored if there is not a clear definition, and calibration among evaluators is nearly impossible if items are open to individual interpretation. Sometimes an example of the desired method of handling a situation may be the best way to define it, especially for a soft skill such as an “interested and helpful manner.”
Objectivity of the Form
Objectivity is clearly difficult to achieve in any scoring program, especially if the options are a range of scores rather than yes/no. Only 16% feel that their form is completely objective, while 75% feel it is only somewhat objective or half and half. This can be a critical element of any QA program in terms of achieving buy-in from the agents. Where items are subjective, scoring is challenging and calibration difficult. When scoring is inconsistent, agents give little credibility to the process and may question every evaluation. This can take valuable time and energy better spent rewarding excellence and coaching for improvements in performance.
Summary
This survey shows that a quality assurance program is in place in most centers today and the technologies to make it more manageable have been well deployed. There is still a wide range in the number of items each evaluation considers and whether there is a point-based scoring system. To some extent, these differences may be a function of the maturity of the program and the available resources. Maintaining a focus on the end objective of the QA program can help to guide the ongoing revisions to items and the scoring processes. We hope you will complete the Winter survey, which will focus on First Contact Resolution and will be available online soon.