What is Customer Effort Score, and what is its purpose?

Customer Effort Score (CES) is a customer satisfaction metric that measures how much effort a customer perceives is required to work with a company. It's most often used to ask how much effort it took the customer to get an issue resolved. The core idea behind CES is that by reducing customer effort, you boost loyalty and can expect an increase in revenue.

How do I ask a Customer Effort Score question?

To find out your Customer Effort Score, ask your customers how easy it was for them to work with you on a scale of 1-5, with 1 indicating high effort and 5 indicating low effort.

How is CES measured, and what’s a good CES score?

Answers to CES questions fall on a scale of 1 to 5 or 1 to 7, depending on which version of CES you're using. For example, if you assign 1 to high effort and 7 to low effort, you'll want to score as high as possible. Aiming high is a good thing, so be sure to identify key areas in your customer-facing business and find out what's driving your CES score (more on that below).

To work out your company’s Customer Effort Score you’ll just need to average out the scores you’ve received from your customers (add up all CES scores, then divide by the total number of respondents).
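As a minimal sketch in Python, the calculation above looks like this (the scores are hypothetical example responses on a 1-7 CES 2.0 scale):

```python
# Hypothetical CES responses from ten customers on a 1-7 scale.
scores = [7, 6, 5, 7, 4, 6, 7, 3, 6, 5]

# Add up all CES scores, then divide by the number of respondents.
ces = sum(scores) / len(scores)
print(f"CES: {ces:.2f}")  # prints "CES: 5.60"
```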

What’s the difference between CES and CES 2.0?

As mentioned above, there are two variants of CES. The original CES operates on a scale of 1-5 and poses its questions in terms of the effort required from the respondent.

CES 2.0 emerged in 2013 in response to criticism of the original CES. Many felt that posing questions in terms of effort framed things too negatively. On top of that, benchmarking the original CES across industries was difficult, because ‘effort’ can mean very different things in different contexts. CES 2.0 took a new run at the question by having respondents evaluate a statement on a scale of 1-7. The first standardized CES 2.0 statement was: “[company name] made it easy for me to handle my issue.” This way of measuring customer interactions has proven far better suited to cross-industry benchmarking, and it's a more direct, easier question for respondents to answer.

When should I use CES?

Unlike Net Promoter Score (NPS) and many other Customer Satisfaction (CSAT) survey question types, CES is really designed to be asked at a specific moment in time. You should send out CES surveys after specific customer touchpoints or instances of contact with your service desk. Think, for example, of a customer contacting your customer service to solve a payment issue: you’d want to know how easy it was for the customer to get their solution.

You’d use a Customer Effort Score question in three main scenarios:

  • Right after purchase: How much effort did it take your customer to buy?
  • After contact with your customer service: How much effort did it take for your customer to resolve their issue?
  • Measuring overall experience: How much effort did it take to learn how to use your product?


What can CES tell me about my business?

As mentioned above, at the core of CES is the idea that customers who encounter low effort scenarios working with you or buying from you are much less likely to take their business elsewhere. Companies with a good CES score will experience less churn, and will sell more.

“Not only is it possible to quantify the impact of customer experience — but the effects are huge […] customers who had the best past experiences spend 140% more compared to those who had the poorest past experience.” – Peter Kriss, Harvard Business Review, 2014.

CEB Global was among the pioneering organisations promoting CES as a new and powerful customer satisfaction metric. As early as 2013, CEB established that CES is a much better indicator of future spending behaviour than NPS or CSAT, and an excellent predictor of customer churn/defection.

CES is an actionable metric. This means that on an ongoing basis you can use CES scoring to prioritise improvements to your business that are backed by supporting data. You can implement CES in various places:

  • On your website or app: are your help/support pages actually helping?
  • Closing a ticket in support software like Zendesk: is the issue really solved, and if so, how much effort did it cost your customer? What do the trends here tell you about your support structure?
  • Onboarding new customers to your service: we’re big fans of this one, and it’s one of the determining factors in whether we consider our onboarding of new clients successful.

This all sounds fabulous. Are there cons?

That’s tricky. On its own, CES is good to know, but essentially limited. Without diving deeper into the factors driving a bad CES score, you won’t know how to improve it. You could feasibly monitor CES case by case, rectifying poor scores by reaching out to your unhappy customers. While this is a good thing to do, you shouldn’t limit the power of CES by being merely reactive. Be proactive!

What are the best practices for working with Customer Effort Score?

The best advice for using CES is to be smart about where you implement the question. There’s more you can do with CES than just asking how easy someone found it to call you with a question.

Here at Starred we use CES ourselves to measure the level of ease perceived by our clients 3 months into working with us. We ask them how they found getting started with our tool. Is that the only question we ask? Definitely not – lining up CES alongside other survey data can show you the correlation between effort and satisfaction with other factors.

Let’s say you’re on the customer or product team at a company that’s just launched an app. The app has five key features, and you run a survey to gauge CES and ask about satisfaction with the five features. You find that only one of those features tends to score badly on satisfaction among your users. Does low satisfaction with that feature correlate strongly with your users’ perceived effort in adopting the app? If so, you’ve got insights ready to be actioned: improve the feature and survey new users again.

Things get even more interesting when you’ve got your CRM system kept up-to-date and you’ve integrated it with a feedback solution.

If you’re working with a smart, integrated feedback solution, like Starred, which can draw on the powerful information in your CRM system, you’ll have plenty of ways to break down your CES data. Segment your customer satisfaction data, including CES, into the relevant categories to understand the CES scores across the board in your client portfolio.
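A minimal sketch of that kind of segmentation in Python, using hypothetical segment labels and scores rather than a real CRM integration:

```python
# Hypothetical CES responses tagged with a CRM segment label.
from collections import defaultdict
from statistics import mean

responses = [
    ("Enterprise", 4), ("Enterprise", 5), ("SMB", 6),
    ("SMB", 7), ("SMB", 6), ("Enterprise", 3),
]

# Group scores by segment, then average each group.
by_segment = defaultdict(list)
for segment, score in responses:
    by_segment[segment].append(score)

for segment, scores in sorted(by_segment.items()):
    print(f"{segment}: CES {mean(scores):.2f} ({len(scores)} responses)")
```

In practice the segment labels would come from whatever categories your CRM holds (plan tier, journey stage, region), but the breakdown logic is the same.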

Are low value customers experiencing low effort, and your most prized clients left feeling confused? Are customers in the earliest stages of your customer journey experiencing high effort? Getting to the bottom of these types of questions will have you on your way to the promised land of low effort and happier customers in no time.

How to improve Customer Effort Score

To improve your Customer Effort Score you’ll need to find out what’s driving the score. There will be a reason why a customer feels it took too much effort to interact with you.

This comes down to correlation. In your results you’ll be able to find relationships between individual factors and your Customer Effort Score.

Let’s go with the scenario that you’re measuring Customer Effort Score after a customer has contacted your customer service. The question will read something like:

How much effort did it take for you to resolve your issue today?

If you only ask this and nothing else, you’ll find your score but you won’t know why a customer felt this way. Therefore it’s important to find out just a bit more about their experience. Keep your survey short and sweet and ask, say, 5 more questions. In this scenario you’d get your customer to give you quick ratings on aspects related to your support level. Now let’s look at some dummy averaged responses to these kinds of factors.

How satisfied were you with the:

  • Solution provided – 7/10
  • Friendliness – 8/10
  • Time taken – 6/10
  • Agent’s knowledge – 9/10
  • Availability of support – 5/10

From these results it’s clear that ‘Time taken’, rated 6/10, and ‘Availability of support’, rated 5/10, are what your customers are least satisfied with. Once they were in touch, your agents seem to be friendly enough, the solution is adequate, and the person they spoke to clearly knew enough to help. The problem is that it’s too hard to contact you, and the time it takes to sort out an issue is too long.

Look to correlate satisfaction with your service with Customer Effort Score. You’ll find that low satisfaction with aspects of your service very likely correlates with a low Customer Effort Score.
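A minimal sketch of that correlation check in plain Python, using hypothetical per-respondent ratings (not real survey data):

```python
# Correlate per-respondent factor ratings with CES scores.
# All numbers below are hypothetical example data, not real survey results.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Same six respondents in every list.
ces_scores   = [3, 4, 6, 7, 5, 2]   # 1-7 CES 2.0 scale, higher = less effort
availability = [4, 5, 7, 8, 6, 3]   # "Availability of support" ratings
friendliness = [8, 7, 8, 9, 8, 8]   # "Friendliness" ratings

print(f"Availability vs CES: {pearson(ces_scores, availability):.2f}")
print(f"Friendliness vs CES: {pearson(ces_scores, friendliness):.2f}")
```

In this made-up data, availability tracks CES far more closely than friendliness does, which is the kind of signal that tells you where to invest first.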

The lesson in this story is that by improving your support availability you will make it easier for customers to resolve their issues with you. You won’t need to waste time and resources on extra training for your support agents; you most likely just need more of them, plus clearer ways for customers to find out how to contact you.

Correlating customer satisfaction with effort scores may take some time, but it will show you how to improve your score. Sounds labour-intensive? A tool like Starred’s Priority Matrix will take away the manual labour from this task and give you these insights at a glance.

Want to learn more about how CES creates an impact for your business? Get in touch and let’s talk!