
4 Steps to Maturing Your Quality Practice

January 11, 2023 | Customer Service, Contact Center

When was the last time you really looked at and questioned how you measure the customer experience of your live (phone or chat) interactions? Many companies’ Quality Assurance programs are built on old practices and haven’t changed much in a while. Maybe that’s because they are tied to compensation, or perhaps it’s “just the way it’s always been done” and so change hasn’t been considered. These companies soon find they suffer from several Quality Program blind spots that conspire to obscure the true nature of their customer interactions.

 

If that sounds like how your company measures the quality of your customer interactions and the associated experience, perhaps now is the time to take a fresh look and ask whether there is a better way. Leading companies today are actively working to mature their Quality Practices and are doing away with outdated practices such as one-size-fits-all checklists and insufficient sampling. In their place, they are adding the voice of the customer (both direct and indirect) and interaction metrics to provide a much more complete and comprehensive picture of the customer experience across interaction channels.

 

Post-interaction surveys provide agent-, team-, and enterprise-level information on how customers perceive their interactions. Automated quality tools and speech or text analytics provide metrics and measurements around detected customer sentiment, interaction types, and resolution rates across 100% of interactions, not just a small, random sample. Desktop analytics tools can help you understand agent behaviors and adherence to processes, which is especially helpful in work-from-home environments.

 

Why a Mature Quality Practice Is a Best Practice

A mature quality practice provides better insights into ways customers engage with a business and what they think about those interactions. When contact center quality practices are thoughtfully executed, agents feel their efforts are measured in a more meaningful way, managers can be sure associates get necessary training or recognition, and customers feel validated about their decision to do business with the brand.  It truly is a win-win-win for the agent, leadership, and customers.

 

A good agent wants to be evaluated on the totality of their calls. If a company’s quality assurance methodology relies on randomly monitoring only 3-5 calls per month out of the hundreds an associate handles, it doesn’t gather enough information to offer constructive feedback. Agents feel nervous being evaluated on such an insufficient sample because leadership might catch one bad call and miss the majority of calls on which they did well.

 

Agents’ leaders - whether direct supervisors or more senior managers - can quickly gain in-the-moment insight into areas that need improvement or find shining examples of excellent interactions. In recent work with a client, we used Speech Analytics tools to find some of those shining examples that, notably, delighted customers, came to full resolution, and were about 40% shorter than average calls of the same type. Using these calls as a benchmark, we could quickly identify the agents to recognize and those to coach.

 

Most importantly, the customer wins with a mature quality practice. Better measurement means more robust interactions, greater resolution rates, and happier customers.  Confident agents can easily find information for customers and are more apt to resolve issues the first time. Customers feel like their business is appreciated, their time is valued, and their satisfaction is important to the company.

 

4 Steps to Craft a More Mature, Data-Driven Quality Program

 

1. Define the experience the company wants customers to have.

First, clearly define the customer experience before embarking on a maturation process. Success hinges on identifying specific goals to measure.

 

Should agents express a greater level of empathy in some situations? How well do agents maintain and manage control over customer conversations? Are agents compliant with reading necessary disclosure statements or following specific procedures? What about soft skills like building rapport, active listening, and using the right language and terminology? It’s important to rethink these things and ensure that what you intend for your customer interactions aligns with your brand promise and who you want to be in the marketplace.

 

2. Identify metrics that tell the story. 

Skip the random call monitoring. Instead, focus on purposeful metrics. Given the experience customers should have, identify how to measure those things. These metrics might come from direct feedback like surveys or reviews, from more indirect sources like insights gleaned from analyzing speech transcriptions or email and chat interactions, or from operational metrics that supply the data you need to tell the story of your interactions.

 

Often, there are tiers to the metrics - operational metrics like Time to Resolution for an insurance claim can be strongly linked to customer perceptions about their interactions and the company itself.  Inferred metrics like the volume, duration, and detected sentiment of repeat calls related to Claims interactions can provide more insight into how things are going. Finally, direct metrics like agent-level Net Promoter Score (NPS) or Customer Satisfaction based on post-interaction surveys for Claims can provide even deeper insights – much deeper and more specific than random listening for sure.
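To make the direct tier concrete, here is a minimal sketch of computing agent-level NPS from post-interaction survey scores. The data and field names are illustrative assumptions, not from any particular survey platform; NPS itself is simply the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6).

```python
from collections import defaultdict

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        return None
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores), 1)

# Illustrative post-interaction survey responses: (agent_id, 0-10 likelihood-to-recommend score)
responses = [("a01", 10), ("a01", 9), ("a01", 6), ("a02", 8), ("a02", 3), ("a02", 10)]

by_agent = defaultdict(list)
for agent_id, score in responses:
    by_agent[agent_id].append(score)

for agent_id, scores in sorted(by_agent.items()):
    print(agent_id, nps(scores))   # e.g. a01 33.3, a02 0.0
```

The same grouping logic extends to team or enterprise level by changing the key used to bucket responses.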

 

Once those metrics are mapped to the experience, then it’s time for the next step.

 

3. Make the metrics accessible, understandable, and actionable.

Now that you’ve determined how to measure interactions, operations teams and others across the enterprise need to be able to access those metrics, understand them, and know how to act on them.

 

In most companies, contact center supervisors, managers, and directors all know how to get information on quality evaluations today – they’ve been doing it the same way for years. Once the Quality Program starts to mature, using metrics and data, they’ll need to know how to access that information, what it means, and what they should do about it.  

 

Other groups, such as marketing and CX teams, are likely to be audiences for this information as well - although they’ll probably want it at a higher level, at a different frequency, and for different reasons.

 

Accessibility, then, is about how the data is presented, to whom, and in what format. An emailed report or a dashboard are perhaps the most common options, and the right choice depends on the company’s culture and available toolsets.

 

Understanding the data will take some training. If a company wants a standard greeting on every call, what’s the right baseline - what percentage is excellent, good, fair, or failing? Delivering a perfect greeting on 100% of calls is not achievable - we often see 85-95% as excellent. An immediate interruption, background noise, or a robocall getting through (for example) can mean the greeting is cut off, not detected, or simply not necessary on some small percentage of calls. This is just one example, but understanding those baselines and what constitutes a good target will require calibration and education.
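As one hedged illustration of that calibration, the sketch below computes a greeting-detection rate from speech-analytics flags and maps it to a band. Only the 85-95% “excellent” range comes from the observations above; the lower cutoffs and the greeting_detected field are assumptions a team would set and tune during calibration.

```python
def greeting_compliance(calls):
    """Share of calls (as a %) where the standard greeting was detected by speech analytics."""
    detected = sum(1 for call in calls if call["greeting_detected"])
    return 100 * detected / len(calls)

def band(rate_pct):
    # Illustrative cutoffs: only the 85-95% "excellent" range reflects what we typically see;
    # the lower bands are placeholders a QA team would calibrate for their own operation.
    if rate_pct >= 85:
        return "excellent"
    if rate_pct >= 75:
        return "good"
    if rate_pct >= 65:
        return "fair"
    return "failing"

# Illustrative data: 88 of 100 calls had the greeting detected.
calls = [{"greeting_detected": True}] * 88 + [{"greeting_detected": False}] * 12
rate = greeting_compliance(calls)
print(f"{rate:.1f}% -> {band(rate)}")   # 88.0% -> excellent
```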

 

Finally, there is taking action - deciding what to do based on the story the data is telling. In some cases, immediate action might be needed (like when analytics detect an agent swearing at a customer). In others, a deeper look is needed - this is where we recommend companies spend their human-evaluation time. Listen to the calls or manually read through chats and emails based on what the data reveals. If an agent is struggling to resolve interactions of a certain type, listen to a few of those and devise a targeted coaching plan. If customer sentiment is on the “very bad” side of the scale for certain transactions, then manually evaluate a sample of those, as sketched below. The actions taken will vary based on what is being measured and what needs to be done. Of course, all of this takes planning and some change management too.
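For the sentiment example, a minimal sketch of that workflow might filter an analytics export down to very negative interactions of a given type and pull a small random sample for human review. The record structure, the -1 to 1 sentiment scale, and the threshold are all illustrative assumptions rather than any specific vendor’s schema.

```python
import random

# Illustrative interaction records from an analytics export; field names are assumptions.
interactions = [
    {"id": 1, "type": "Claims", "sentiment": -0.8, "resolved": False},
    {"id": 2, "type": "Claims", "sentiment": 0.4, "resolved": True},
    {"id": 3, "type": "Billing", "sentiment": -0.9, "resolved": False},
    {"id": 4, "type": "Claims", "sentiment": -0.7, "resolved": True},
]

def review_sample(records, interaction_type, sentiment_threshold=-0.5, k=2, seed=42):
    """Pull a small random sample of very negative interactions for human evaluation."""
    flagged = [r for r in records
               if r["type"] == interaction_type and r["sentiment"] <= sentiment_threshold]
    random.seed(seed)
    return random.sample(flagged, min(k, len(flagged)))

for record in review_sample(interactions, "Claims"):
    print(record["id"], record["sentiment"])
```

The output of a query like this becomes the work queue for targeted human evaluation, rather than a random pull of calls.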

 

4. Weave the maturity into operations.

Beyond the measurement and its related reporting and actions, a maturing quality program has other considerations to take into account and impacts in other areas of the business.

 

If quality scores play a part in compensation, then changes to how those scores are determined will need to be considered as well. This takes even more change management, but if done well, compensation and incentives will be tied much more closely to the desired customer experience.

 

Similar considerations apply to processes such as bidding for shifts, providing recognition, triggering performance plans, and more. All of those are examples of downstream impacts - and there may be others that need to be understood as the program matures and changes are enacted.

 

Quality Assurance Optimization Is a Win-Win-Win

A mature quality assurance program is an efficient and effective way to continuously improve the customer experience. Identifying blind spots, solving them, and building a stronger, more mature Quality program creates a win-win-win because everyone benefits—the company, the contact center agents, and (most importantly) the customers who get the kind of experience they deserve and you’ve designed. 

 

Ready to implement an intentional plan to mature your quality program for improved customer experience? Reach out to one of our consultants today.

 

Drive your CX Forward. Request a Consultation