QATC Survey Results

[one_half]

This article details the results of the most recent QATC quarterly survey on critical quality assurance and training topics. Nearly 90 contact center professionals representing a wide variety of industries provided insight regarding the development and performance monitoring of training resources.

Number of Agents

The largest group of participants comes from call center operations with between 50 and 100 agents, closely followed by those with over 500 agents. While the insurance and financial industries are well represented, “other” was selected by the largest number of participants. This mix of respondents provides a broad spectrum of call center sizes and industries.

Training Program for New Instructors

Just over half of the respondents indicated that they do have a competency or training program for new instructors or trainers. However, nearly half do not, which risks wide variation in the capabilities and processes of the training programs delivered.

Program Content

When asked what areas the training program focuses on, the respondents were offered the opportunity to choose more than one answer. Delivery, communication, and content were selected by nearly half of all respondents, with preparation cited by about one-third. Leadership was the least often selected option.

Observation of Trainers

When asked if instructors are observed during training classes, nearly three-quarters indicated that this is part of their process. When asked how often this observation takes place, approximately half indicated that it happens “as needed.” Two to four times per year was reported by 29 percent, and only once per year by 3 percent. However, 16 percent reported some other frequency, which could be more than four times per year or less than once a year.

Scoring of Instructors

Survey participants were asked if instructors are rated or scored when they are observed. More than half indicated that they do not, while 45 percent indicated that they do. Of those who do score, three-quarters reported that they use these scores in performance reviews for the instructors.

Trainer Performance Objectives

When asked if the instructors have measurable quality and performance objectives tied to organizational goals, two-thirds indicated that they do have such objectives. When asked what these objectives are, the answers varied widely, with ensuring that trainees are competent at the end of training being the most common focus. Some focus on the timeline for agents reaching the floor. Teaching techniques were also mentioned, such as adapting to trainee styles/personalities, using games, and including assessments to ensure the students are achieving the goals. Associate feedback scores, graduation rates, retention rates, QA average scores, and adherence of associates were also listed.

[/one_half]
[one_half_last]

Primary Functions of Instructors

When asked what primary functions instructors perform, approximately one-third focus on both design and instruction. Full-time instruction and instructing while managing the group each accounted for about one-quarter of the answers, while part-time instruction accounted for only 15 percent.

Training Council

When asked if there is a training “council” made up of others in the organization such as QA, Operations, and/or WFM, just over half indicated that they do not. Such groups can be helpful in obtaining feedback and providing opportunities for continuous improvement in the training results.

Tracking Student Performance

Respondents were asked to identify how long student performance is tracked after the training programs are complete. The largest group (41 percent) indicated that they monitor for more than 60 days, while another 28 percent track for 30 to 60 days. A surprising 18 percent do not track student performance after the completion of training.

Trainee Quality Audits

Respondents were asked if the instructors perform quality audits on trainees once they leave the classroom. Slightly more than half indicated that they do. More than half of those that do these audits indicated that they review 10 or more calls, while the rest are split between 5 to 9 calls and 1 to 4 calls. Having instructors review trainee performance once trainees are on the floor can provide valuable feedback to the instructor on items that could use additional attention, not only for the audited trainee, but for the entire group in the future.

Trainee Results Tied to Instructor Objectives

Respondents were asked if the trainee results are tied back to the instructor’s objectives in any way. Slightly more than half indicated that trainee results do not tie back to instructor objectives. Those that do were asked how the quality results of trainees tie back, and the answers varied. Most indicated that instructors may receive bonuses for agent performance, while others use the measurements to identify what training modifications may be needed.

Conclusion

This survey provides insight into the development and performance management of training personnel in the contact center. While performance monitoring and coaching for agents is often rigorous, it seems to be less so for trainers. Many organizations do not set specific, measurable objectives and goals for their trainers or tie the results of training back to the instructors’ performance reviews. This appears to be a gap in overall performance management, as the purpose of training is to produce competent, quality agents who complete the program and perform to standards on the job. Measuring the effectiveness of training programs and instructors can identify areas of excellence to be rewarded and opportunities for improvement going forward.

We hope you will complete the QATC Spring survey, which focuses on Speech Analytics.

[/one_half_last]