The Math of Quality Management

By Bill Durr, Contact Center Consulting

[one_half]
Assess four to eight randomly sampled calls per agent per month.  This is what is known as quality management in contact centers.  Except it’s not.

Quality management (QM) emerged almost 100 years ago in the manufacturing sector.  The goal of QM was to ensure uniformity and adherence to specifications in manufactured goods.  Effective QM programs relied upon statistical sampling at high confidence levels.  In this way, management could be certain that all the machines producing parts and all the laborers assembling those parts into finished products were conforming to the very rigid specifications in place; that the finished products were identical.

In practice, QM works as follows.  A specification for a widget is produced.  It might look something like this:

  • Made from oak.
  • 7.25” long by 3.75” square.
  • Has two 3/8” holes drilled at 3” and 4” as measured from the bottom.

Now suppose we have a manufacturing plant with 20 machines producing these widgets and that each machine needs to be precisely “tooled” to produce the widgets as specified.  Further suppose that each machine can produce 70 widgets per day and is run five days a week.  Thus, in one month each machine will produce 1,400 widgets.  How can we be certain that there are no variations between the machines?
That’s what statistical sampling is about.  There are formulas to determine how many samples should be studied before you can conclude with some level of certainty that all the machines are producing widgets to specification.  One common formula — Cochran’s sample-size formula with a finite population correction, assuming maximum variability in the population (p = 0.5) — is:

  n = (N × Z²) / (4 × d² × (N − 1) + Z²)
In this formula,

  • “n” is the sample size required
  • “N” is the size of the population under study
  • “d” is the desired margin of error, which corresponds to the confidence level chosen
  • “Z” is the number of standard deviation units of the sample distribution that corresponds to the desired confidence level

Typically, good surveys use a 95% confidence level, so “d” would be equal to .05.  Sometimes a confidence level of 90% is good enough to draw defensible conclusions, and “d” would be equal to .1.  Finally, “Z” equates to 1.645 when the confidence level is 90% and to 1.96 when the confidence level is 95%.
Let’s use a 90% confidence level and plug in the population number of widgets produced by one machine in a month, which is 1,400 (5 days a week times 4 weeks in a month times 70 widgets a day).  Solving the formula:

  n = (1,400 × 1.645²) / (4 × 0.1² × 1,399 + 1.645²) ≈ 64.6, which rounds up to 65
We would need to draw 65 randomly selected widgets from each machine and carefully examine them for conformance with the specifications in order to be certain that each machine was producing identical widgets.
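The calculation above can be sketched in a few lines of Python; the function below is a minimal illustration of Cochran’s sample-size formula with a finite population correction, assuming maximum variability (p = 0.5):

```python
import math

def required_sample_size(population, z, margin_of_error):
    """Cochran's sample-size formula with a finite population
    correction, assuming maximum variability (p = 0.5), which
    folds the p*(1-p) term into the 4 in the denominator."""
    n = population * z**2 / (4 * margin_of_error**2 * (population - 1) + z**2)
    return math.ceil(n)

# 1,400 widgets per machine per month at 90% confidence (Z = 1.645, d = 0.1)
print(required_sample_size(1400, 1.645, 0.1))  # 65
```

At a 95% confidence level (Z = 1.96, d = .05) the same population would require 302 samples, which shows how quickly the burden grows with the desired certainty.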
The QM process worked marvels in manufacturing environments so it was only a matter of time until someone thought to apply the same methodology to services like handling customer calls.  But a lot was lost in translation.  What went wrong?

The problem is two-fold.  First, the typical contact center QM program samples somewhere between four and eight calls per agent per month.  If we assume an agent can handle 70 calls per day, they will handle 1,400 in a month.  Using the same formula as shown above, the contact center QM team would need to sample 65 calls per agent per month.  No contact center comes anywhere near that number.  With such a small sample size, the likelihood of actually uncovering material deficiencies is astonishingly slim.

Which brings us to the second problem – the specification.  In manufacturing environments the specification is detailed, objective and extremely well defined.   In the contact center environment the specification is embodied in the “call requirements” document.   Consider the following generic contact specification:

  • Section 1 – Opening
    • Used standard greeting supported by a warm, friendly tone of voice
    • Acknowledged reason for the call
    • Properly verified the customer
  • Section 2 – Soft Skills
    • Allowed the customer to speak w/o interruption
    • Did not cause the caller to repeat themselves unnecessarily
    • Used verbal acknowledgements to indicate listening
    • Eliminated periods of unexplained silence
    • Paraphrased to ensure understanding
    • Used an effective tone of voice throughout the call
    • Made empathetic statements when appropriate
    • Adjusted communication style to accommodate the caller
    • Maintained composure throughout the call
    • Maintained control of the call
    • Used proper grammar
    • Spoke clearly and understandably
    • Used courtesy phrases such as “Please” and “Thank you”
    • Made an effective attempt to handle a difficult situation
  • Section 3 – Problem Solving
    • Used effective and efficient questioning technique
    • Accurately identified the issue to be resolved
    • Offered options/alternatives when applicable
    • Made a decision that effectively balanced the needs of the caller and company
    • Surfaced and addressed all additional needs
    • Effectively addressed all objections
    • Took ownership of the call
  • Section 4 – Job Knowledge
    • Gave accurate and complete information
    • Efficiently navigated all systems
    • Effectively utilized all resources
    • Accurately entered all data
    • Completely and accurately documented the call
  • Section 5 – Closing
    • Effectively summarized main points of the call and/or next steps
    • Asked “Is there anything else I can help you with?”
    • Used an effective “totally satisfied” statement
    • Used standard closing

Admittedly, there are parts of this conversation “specification” that are specific, such as whether the standard opening and closing were used.  But many elements of the specification are purely judgmental and non-specific, such as “Used an effective tone of voice throughout the call” and “Took ownership of the call.”  To the extent that subjective elements are part of the “specification,” it becomes increasingly difficult to arrive at uniform assessments across multiple conversations and among multiple assessors.
[/one_half]
[one_half_last]
Taken together, these two problems with contact center QM often relegate the process to an institutionalized form of nit-picking. It’s no wonder that some agents regard the process with a measure of disdain.

While the current contact center QM process isn’t really about quality, it is about something just as important – branding.

What a list of “call requirements” actually represents is the way QM professionals want the interaction to be experienced. It is, in reality, branding the interaction in a rudimentary fashion.

Brand management is the analysis and planning of how a brand is perceived in the market. Brand managers define tangible elements of the brand, such as the product itself, its look, price, and packaging, as well as intangible elements like the experience consumers have had with the brand and the relationship they have with it. Brand management aims to create an emotional connection between products, companies, and their customers and constituents.

Developing a brand is a necessary and appropriate internal activity undertaken by each company in its respective market. But it is the consumer who perceives the brand, and what the consumer thinks and believes trumps whatever company employees may think.

Brand management is what the misnamed QM team is really all about. And there’s no reason to regard this change in nomenclature as somehow demeaning to QM professionals. But it’s time to realize that quality management is not what is happening when only four to eight calls per agent per month are being assessed. By calling the practice what it actually is – brand management – perhaps agents will be less inclined to dispute assessments and accept coaching that simply represents the brand better.

Now, if QM teams are loath to rename their profession and practice and actually want to manage quality, there is a way to do that.

There are some relatively new software solutions and processes that QM teams can utilize to directly address the two problems identified above: speech analytics and customer surveying. Speech analytics addresses the sampling problem and customer surveying addresses the specification problem.

Soliciting opinions from people in a systematic manner dates back to at least the 19th Century, and probably much earlier. Contact centers have been attempting to survey customers for many years using a number of tactics and techniques. Mailed surveys, Interactive Voice Response (IVR) systems and live telephone conversations — immediately after the transaction or performed some time later via a call-back — are all employed to determine how the customer feels about the interaction with the agent, the company, its products and so on.

Essentially there are two kinds of surveys. One is the post-interaction survey and the other is sometimes referred to as a panel survey or cohort survey.

Post-interaction surveys tend to be offered to customers who have called into the contact center. These surveys typically seek to learn whether the customer’s problem was resolved resulting in first contact resolution information and to gain some insight into the courtesy exhibited by the agent. These surveys are usually only a few questions in length with simple yes/no answers.
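As a hypothetical illustration (the field names and response data below are invented), a handful of yes/no post-interaction survey answers can be rolled up into a first contact resolution rate like this:

```python
# Each record is one customer's yes/no answers to a short post-interaction survey.
responses = [
    {"resolved": True,  "courteous": True},
    {"resolved": False, "courteous": True},
    {"resolved": True,  "courteous": False},
    {"resolved": True,  "courteous": True},
]

# First contact resolution: share of callers whose issue was resolved on this call.
fcr_rate = sum(r["resolved"] for r in responses) / len(responses)
print(f"First contact resolution: {fcr_rate:.0%}")  # First contact resolution: 75%
```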

Panel surveys, on the other hand, are conducted with carefully selected participants. For example, panels can be assembled consisting of top customers or new customers. The surveys typically explore topics of interest in some depth and require a deeper level of participation than post-interaction surveys. The point of panel surveys is to acquire insight into customer perceptions and attitudes.

It is obvious that directly tapping into the voice of the customer through direct surveying can reveal what they think and feel about the brand and what they like and don’t like about the products and service.
For example, if after extensive surveying a company discovers that its customers place no real value on having their names verbalized three times during a conversation, why should that specification appear in the call requirement document? Similarly, if customers find the “standard close” to be annoying scripted nonsense, why should that be a requirement? Done correctly, surveying reveals what is important to customers and what is not. These key insights reshape the assessment function and criteria.

Speech analytics technology, akin to magic, “listens” to every call and tags every call with any number of categories. In so doing, quality management is totally transformed because instead of “wrapper information” like the call length, the queue it arrived in and the agent that handled the call, the QM specialist has “content information” consisting of what the call was actually about, topics discussed and how the caller felt about the outcome.

The ability to map calls into categories depends on the ability of the management team to identify words and phrases indicative of desirable and undesirable behaviors in the call. It is easy to define the desired call opening and therefore easy for speech analytics to determine whether an agent did or did not comply. With clever and insightful definitions it is possible for speech analytics to automatically determine an impressive number of agent and customer behaviors such as:

  • Behaviors
    • Emotions were expressed in the call
    • Agent used the appropriate opening script
    • Hold language occurred
    • Proper caller identification was used
    • Caller mentioned some level of confusion
    • Caller expressed satisfaction
    • Positive words and phrases were used
  • Outcomes
    • Call required escalation
    • Agent was unable to help caller
    • Caller or agent identified a need for a call back at a later time
Clearly, “listening” to every call removes the sampling problem from traditional contact center quality management. Moreover, it greatly enhances the QM team’s ability to uncover material deficiencies in skill and knowledge. And, speech analytics can unburden a QM team from low value, high effort tasks such as ensuring that identification routines and authority-required rote disclosures are faithfully rendered.
And so, if contact centers truly wish to manage quality, they need to embrace panel survey practices to discover what is important to customers, sharpen the requirements document accordingly, and deploy speech analytics to automatically rank agents on behaviors that are important to good interactions.

About the Author

Bill Durr is an industry veteran with more than 30 years of practical experience with contact center technologies in the workforce optimization area, including workforce management and quality monitoring. He is the author of the popular books, Building a World-Class Inbound Call Center and Navigating the Customer Contact Center in the 21st Century. Learn more and contact Bill at his web site: wfoevangelist.com.[/one_half_last]