Automated Quality: Six Steps to Success

[one_half]There is no activity in quality monitoring more time-consuming than manually scoring evaluations. With thousands of calls to evaluate — and limited resources — quality staff may struggle to accurately assess agents’ skill gaps and find time for the one-on-one coaching that can help improve agent performance.

By introducing automated components — such as scoring — you can revolutionize how your organization approaches quality, shifting resources from identifying opportunities to improve agent performance to actually improving it, and in turn helping to make every agent your best agent.

But your success with automation doesn’t depend on the technology alone; how you introduce the technology into your organization can be just as important. A gradual implementation can help your team absorb many of this new technology’s changes with minimal disruption to the processes you have in place today. Let’s explore six steps to introducing automation effectively.

Step One: Slow and Steady Wins the Race

Though you may be tempted to automate your entire program all at once, it’s important to get a feel for your technology — to understand its power and its limitations. So don’t be afraid to take the time needed to roll out automation gradually.

This can help you better understand how automation will impact your people and processes, and how you can manage the change that comes with shifting from manual processes to automated ones.

Step Two: Start Simply

The journey to full automation begins with scoring a single question. Pick a question where the scoring will be relatively easy to automate, as you build your knowledge of the solution’s capabilities. An ideal question for autoscoring is one where the answer is fairly straightforward. For example, you could select a “yes/no” question where the agent has to adhere to a script or has to read a specific disclosure. If you don’t have a question on your existing form that meets these criteria, choose a question where the answer is as objective as possible — the agent greeted the caller, the agent said thank you, the agent mentioned your company’s name, etc.
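To make this concrete, here is a minimal sketch of what a yes/no disclosure check might look like once a call transcript is available. It is purely illustrative: the phrase, function name, and simple substring match are assumptions for the example, not how any particular vendor’s scoring engine works.

```python
# Purely illustrative sketch (not any vendor's actual scoring engine):
# a simplified "yes/no" autoscore for a required disclosure, assuming
# you already have a text transcript of the call.

REQUIRED_DISCLOSURE = "this call may be recorded for quality purposes"  # hypothetical phrase

def autoscore_disclosure(transcript: str) -> bool:
    """Return True ("yes") if the required disclosure appears in the transcript."""
    return REQUIRED_DISCLOSURE in transcript.lower()

# Example: score a single call
transcript = "Thank you for calling. This call may be recorded for quality purposes."
print(autoscore_disclosure(transcript))  # True -> the question is scored "yes"
```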

Step Three: Trust, but Verify

Now that you’ve identified your question, it is important to set expectations across senior leadership, the quality team, and agents before you test it, to help ensure they understand that there will be some variation between manual and automated scores. Sometimes, a variance is easy to correct if it is the result of an oversight in your manual process or a scoring rule that requires an adjustment. Other times, it may simply be the result of shifting from a manual to an automated process.
[/one_half]
[one_half_last]To help the team absorb these differences, incorporate your automated scoring for a single question into the form used for manual evaluations. That way, the quality team can see how closely the automated scoring aligns with the way they’d score the call manually. If an autoscore seems inaccurate for a particular call, it can be overridden, and you can refine the scoring rules to address the issue. A best practice is to pilot your selected question on anywhere from 50 to 100 calls or forms, so that you can accurately assess where you need to make adjustments. Once your team is comfortable with how the technology is scoring the question, you can enable automated scoring across all of your calls.
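As a rough illustration of what that pilot review can look like, the sketch below compares manual and automated scores for a handful of calls and flags the mismatches a reviewer might override or use to refine the scoring rules. The data structure and numbers are hypothetical; they simply show the kind of agreement check a 50-to-100-call pilot supports.

```python
# Hypothetical pilot data: manual vs. automated scores for the same question.
pilot_results = [
    {"call_id": 101, "manual": "yes", "auto": "yes"},
    {"call_id": 102, "manual": "no",  "auto": "yes"},  # mismatch: review, override, or adjust the rule
    {"call_id": 103, "manual": "yes", "auto": "yes"},
]

matches = sum(1 for r in pilot_results if r["manual"] == r["auto"])
agreement = matches / len(pilot_results)
mismatches = [r["call_id"] for r in pilot_results if r["manual"] != r["auto"]]

print(f"Agreement rate: {agreement:.0%}")  # e.g. 67%
print(f"Calls to review before enabling autoscoring: {mismatches}")
```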

Step Four: Help Your Coaches Build a New Playbook

Once you’ve reached a point where you are scoring 100 percent of your calls, you can identify where agents have true skill gaps or performance issues. By setting up automated alerts to let coaches know when an agent’s scores on a particular question are low, you can enable coaches to intervene sooner, helping agents when they need it, which can make the correction more effective.
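A simple sketch of the alerting idea follows. The threshold, sample data, and notification step are assumptions for illustration only; in practice the alert would come from your quality management platform rather than a standalone script.

```python
# Illustrative only: flag agents for coaching when their recent pass rate
# on a single autoscored question drops below an assumed threshold.
from statistics import mean

ALERT_THRESHOLD = 0.80  # assumed pass rate that should trigger a coaching alert

recent_scores = {  # 1 = passed the question, 0 = missed it (hypothetical data)
    "agent_a": [1, 1, 0, 1, 1, 1, 1, 1, 1, 1],
    "agent_b": [0, 1, 0, 0, 1, 0, 1, 0, 1, 0],
}

for agent, scores in recent_scores.items():
    pass_rate = mean(scores)
    if pass_rate < ALERT_THRESHOLD:
        # In practice, this step would create a task or notification for the coach.
        print(f"Coaching alert: {agent} pass rate is {pass_rate:.0%}, below the {ALERT_THRESHOLD:.0%} target")
```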

To support this new process, it’s important to make each agent’s calls — and the areas of the call that require coaching — easily accessible for your coaches, so they can design a focused session that improves performance.

Step Five: Review Your Progress

When you feel confident in the scoring rules and the pilot results from autoscoring your first question, it’s time to get feedback from the team. Do they feel comfortable with how the technology is evaluating their calls? Do they have any suggestions for how the next questions should be autoscored and tested? What would they change? Take that feedback and incorporate it into the next question you roll out. By engaging with your team and involving them in the process, you can build advocates for automation and better pave the way for success.

Step Six: Plan for the Future

After you understand how you can automate your quality program, develop a plan for answering the questions that have arisen along the way. Consider questions such as the following: Are our evaluation criteria aligned with our service goals? Do we need to update our form or scoring criteria? How should our processes change now that we spend less time on manual scoring and have more time to coach?

By gradually rolling out automation, you gain the data you need to answer these questions thoughtfully. With this information in hand, your team can be better positioned to manage the changes automation can bring — and to reap its benefits.

Siobhan Miller is Sr. Director of Portfolio Market Strategy for Verint. She may be reached at siobhan.miller@verint.com. Please go to www.verint.com for more information about Verint.[/one_half_last]