A guide to Customer Effort Score (CES) in B2B research
Contents

Why track customer effort in B2B research?

Pros and cons of using CES in B2B research

Best practices when using CES in B2B research

Why track customer effort in B2B research?

More than a decade after the introduction of CES, debate continues about its usefulness as a metric. In this context, effort means the amount of work a customer must put in to fulfill their need, or solve their problem, with your business.

Some advocates see it as a key indicator of customer loyalty or a strong overall customer experience (CX). 

Some detractors argue that CES is a limited metric that can mislead businesses, a ‘red herring’ with the potential to suggest the wrong solution to your challenges.

Arguably, when used in the right circumstances, it can be a helpful metric – but sometimes, its importance is overrated. We’ll explain some of the right and wrong ways to use it in B2B research.

CES was introduced in the Harvard Business Review back in 2010. In the provocative article Stop Trying to Delight Your Customers, the authors argued that overserving customers doesn’t improve loyalty, but removing barriers to their goals – or making things easy for them – does.

Simply put, the theory is that the more effort your customers have to go through, the less likely they are to stay loyal to your business. Gartner found that 96% of customers going through service interactions requiring high effort will become disloyal, versus only 9% of those with a low-effort experience.

There are lots of ways businesses put their clients through unnecessary hassle, including:

  • Taking a long time to resolve an issue
  • Providing generic, non-tailored experiences
  • Showing insufficient understanding of a client’s business
  • Making clients use multiple channels to complete an interaction
  • Providing inadequate information
  • Transferring a client between multiple support staff members
  • Requiring an excessive number of interactions to complete a customer journey

The CES question itself is straightforward to include in research. Answer options are usually shown on a 1-5 or 1-7 scale, with the lowest and highest numbers corresponding to very difficult and very easy, using question wording like: “How easy or difficult was it to interact with [company]?”

Alternatively, if that wording feels clunky, you can offer answer options from strongly agree to strongly disagree, presenting a statement to respond to, such as: “[Company] made it easy for me to resolve my issue.”

You can ask about the customer’s effort after any one specific interaction. It’s also commonly used in quantitative research to find points of friction, for example within a B2B purchase process or a customer service journey.

Typically, CES is included in an online survey to gather robust numbers, but it can also be explored qualitatively.

The aim of using CES is to find insights that will help you:

  • Decrease customer churn
  • Improve customer service performance
  • Increase the number of referrals
  • Grow the number of sales
  • Optimize customer service costs

Pros and cons of using CES in B2B research

CES has some similarities to other metrics used to analyze customer satisfaction in general, like CSAT scores and NPS. 

You can include them one after another in a survey as part of a logical flow, and each should be quick for a customer to answer.

Each metric tells you something different: CSAT measures satisfaction with a product, service, or interaction; NPS measures willingness to recommend; and CES measures how easy it was for the customer to get what they needed.

You should expect to see links between the metrics – for example, Gartner also found that NPS scores are 65 points higher on average for companies that are low effort to buy from.

Benefits of using CES include:

  • Direct link to purchases and repurchases
  • Link to referrals
  • Easy to use several times in a study
  • Can uncover a high-priority CX issue

Taking each of these in turn:

#1 Direct link to purchases and repurchases

A CES score can act as an indicator of how likely your customers are to buy from you.

If you track the score and improve it over time, you may find that your number of sales increases as well. However, there are caveats to this, especially for B2B audience research, which we’ll cover later in the limitations section.

One more insight from Gartner’s report: 94% of customers whose buying process was low effort intend to repurchase, compared with just 4% of those who found it high effort.

#2 Link to referrals

As mentioned, a better CES score should go hand in hand with a higher NPS score.

NPS scores are based on answers to the question: “How likely are you to recommend [company] to a friend or colleague?”

You should expect to see a connection between a good CES score and the number of referrals you receive. 

According to HubSpot, 81% of customers would speak negatively about a company to others after going through high-effort interactions.

#3 Easy to use several times in a study

It’s very simple to get an overall CES score from customers by using a basic Likert scale, e.g. a rating of 1-5.

In addition to an overall score, in a longer survey, you can ask for individual CES scores regarding different types of customer interactions.

For example, you could ask about how easy or difficult your customers find it to:

  • Find the information they’re looking for on your website
  • Resolve a query with your customer service team
  • Purchase an item
  • Receive an item
  • Use an item

And so on. Collecting several CES scores helps you to identify which part of the customer journey is causing the most friction, which you can’t do with an overall CES score.

However, don’t overdo it – asking customers to rate the effort required at too many different points of interaction will cause fatigue, risking less accurate data or incomplete surveys.
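As a rough illustration of how those touchpoint-level scores can be compared once collected, here is a minimal Python sketch. It assumes a long-format survey export with one row per answer and hypothetical columns named "touchpoint" and "ces_score"; the file name and column names are placeholders, not part of any standard tool.

```python
import pandas as pd

# Hypothetical export: one row per CES answer, tagged with the touchpoint it refers to.
df = pd.read_csv("ces_responses.csv")

# Average effort rating per touchpoint, lowest (most friction) first.
by_touchpoint = (
    df.groupby("touchpoint")["ces_score"]
      .mean()
      .sort_values()
)
print(by_touchpoint.round(2))
```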

#4 Can uncover a high-priority CX issue

If your CES score is low (indicating that customers find interacting with you high effort), that’s a sign that you urgently need to improve your brand’s touchpoints along the customer journey.

For brands unaware that their customers are going through a lot of effort to interact with them, a low CES score acts as a wake-up call. 

Failing to address the underlying high-effort interactions behind a disappointing CES score will likely lead to poor performance in terms of repeat purchases or referrals.

However, a CES score itself doesn’t tell you why customers are frustrated, or how to improve the experience for them – which leads us to the limitations of using it as a metric.

Limitations of using CES include:

  • Not relevant for all B2B companies
  • Little clarity around where and why effort is too high
  • CES is not bespoke to your business
  • Just one part of the overall customer experience

#1 Not relevant for all B2B companies

If your brand mission doesn’t have anything to do with providing effortless service, focusing on CES scores could waste your time and resources.

As Global CEM founder Sampson Lee argues, the metric works well for brands built around providing customers with goods quickly and easily, such as Amazon.com. 

As a result, CES is more likely to be relevant for B2C companies; in B2B, purchases tend to be far less frequent.

Sometimes brands need to put their customers through a little effort to deliver their brand promises consistently and more effectively. 

With finite resources, you could dilute your brand’s strengths by deprioritizing them in favor of creating effortless experiences.

#2 Little clarity around where and why effort is too high

If your customers’ interactions with you are low effort overall, but high at one or two specific touchpoints, it’s difficult to get the insights you need via CES alone.

Yes, you can include a specific CES question around customer service, for example. But you may not have enough space in a survey to find out whether the friction is high or low specifically for phone queries, email queries, or online chat queries.

And even if you do pinpoint the specific problem area, the score still won’t tell you why your customers are frustrated. 

Specifically, how is your customer experience making them put in extra unwanted effort? If it’s an issue with phone support, for example, what’s causing the friction? Is it difficulty connecting, long wait times, a lack of support staff proactivity, insufficient product knowledge, all of the above, or something else?

For that, you have to ask the right follow-up questions to get the detail you need. Often that’s easier to do qualitatively, which we’ll discuss in the next section on best practices.

#3 CES is not bespoke to your business

Just as focusing on your NPS score is unwise in B2B – many businesses see their revenues increase, while their NPS declines – it doesn’t make sense to use CES as a ‘hero’ metric.

Therefore, if you’re looking for a trackable metric that, once improved, should translate into growth, look beyond CES.

Yet it does make sense to have a hero or ‘North Star’ metric of some sort. It’s easier to unite a business around improving one number, rather than several.

The metric you use should be bespoke to your business, though. At Salesforce, for example, it’s reportedly the average number of records created per account.

For many businesses, the best metric to track is one that strongly influences sales or differentiates them from competitors.

#4 Just one part of the overall customer experience

Arguably, customers’ positive or negative experiences with your company cannot be summed up solely by how much effort they have to put in.

A wide range of other factors come into play – such as product or service pricing, quality, reliability, expectation setting, and so on.

Sometimes, businesses view CES as the key metric to rate their customer experience, but it’s too narrow for that.

As Lee continues, driving effortless experiences is not the same thing as driving memorable experiences. The latter is more likely to retain your customers and encourage them to keep spending with you.

To track and improve customer experiences, you need to build your research approach around your CX goals and business objectives. 

A broader insights program around B2B brand tracking research or the customer journey will help you optimize the customer experience in a more targeted way than a CES-led study.

Best practices when using CES in B2B research

#1 Prioritize addressing your business objectives, with or without CES

Given that customer effort is just one part of the overall customer experience, revisit your research and business objectives before using CES.

If – above all other priorities – your primary aim is to investigate customer effort, then it may well make sense to put CES at the heart of your research study.

If your objectives are to investigate and improve customer experience more broadly, then getting CES scores should not be a key goal. You may not need to ask CES questions at all.

Be clear about what’s needed first and foremost, then adapt your approach around the objectives.

#2 Design research that will add extra detail to your CES scores

As explored earlier, CES scores don’t tell you much by themselves. They show how much effort you’re putting customers through, but out of context, that’s not very helpful.

Your CES scores will be more useful if you ask the right questions before and after, to get a picture of their original needs and why their interaction with you was or wasn’t easy.

In quant research, before asking for CES scores on different types of interactions along the customer journey, add questions to establish which ones respondents have experienced recently.

After getting the scores, ask respondents to elaborate on why they gave that score. You can do this by providing a closed list of answer options and/or providing an open text box.

You may find it more insightful to get the detail around CES scores through qualitative research, though. With a few in-depth interviews, you can fully explore why customers wanted to interact with you, why the interaction wasn’t easy, the knock-on effects, the emotions involved, and what they’d prefer next time.

While quant research will give you more robust numbers, it can be harder to get the same level of detail. To provide closed lists of answer options, you need to second-guess what the issues may be – and with open text boxes, respondents tend to give very brief replies.

#3 Interpret CES the right way to tell the true story

There are several different ways to calculate CES, but some provide more powerful statistics than others.

Here are three common ways to do it, although there are others too:

  • Sum up your scores and divide by the total number of responses, for an average score out of 5 or 7
  • Take a NET of the positive scores and divide by the total number of responses, for a percentage of customers reporting effortless interactions
  • Subtract a NET of the negative scores from a NET of the positive scores, then divide by the total number of responses, for a score between -100 and 100

Option 1 takes all responses into account, including the middle scores. It also differentiates between positive and very positive scores, as well as negative and very negative ones. 

Option 2, as a percentage, can be more impactful to share within the business. However, it essentially treats positive and very positive scores as the same rating, and it doesn’t distinguish between average, negative, and very negative scores.

Option 3 has the same drawback as Option 2. However, the score can be even more impactful because it draws more attention to the negative scores.

With Options 2 and 3, you could also add more weight to the very positive or very negative scores, but this is a slightly more complicated calculation to explain. That, in turn, may hinder the impact of the scores when sharing across the business.

Ultimately, the research team needs to choose a way to calculate the score that will best represent your customers’ feedback and have the biggest impact on your stakeholders.
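To make the three options concrete, here is a minimal Python sketch of each calculation. It assumes a 1-7 scale where ratings of 5-7 count as positive (easy) and 1-3 as negative (difficult); those thresholds are an assumption, so adjust them to match your own scale and definitions.

```python
def ces_scores(responses):
    """Return the three CES figures described above for a list of 1-7 ratings."""
    n = len(responses)
    positives = sum(1 for r in responses if r >= 5)  # NET positive (easy) - assumed threshold
    negatives = sum(1 for r in responses if r <= 3)  # NET negative (difficult) - assumed threshold

    average = sum(responses) / n                   # Option 1: average score out of 7
    pct_effortless = 100 * positives / n           # Option 2: % reporting effortless interactions
    net_score = 100 * (positives - negatives) / n  # Option 3: score between -100 and 100
    return average, pct_effortless, net_score


ratings = [7, 6, 6, 5, 4, 3, 2, 7, 5, 1]  # example responses
avg, pct, net = ces_scores(ratings)
print(f"Average: {avg:.1f} / 7 | Effortless: {pct:.0f}% | Net score: {net:+.0f}")
```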

#4 Run advanced statistical techniques to find more links to high effort

In quant research, if you can’t see a clear link between specific customer frustrations and high-effort interactions, try regression analysis.

Regression analysis can reveal a relationship between the dependent variable – in this case, high effort – and independent variables, such as customer pain points.

Statistical techniques like this often help you to see what’s driving these outcomes when respondents struggle to articulate it themselves.
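As a hypothetical illustration of such a driver analysis, the sketch below fits a logistic regression in Python. It assumes a survey export named "ces_survey.csv" with a 1-7 "ces_score" column and 1/0 columns flagging the pain points each respondent experienced; the file name, column names, and the high-effort cut-off are all illustrative assumptions rather than a prescribed setup.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey export with an effort rating and pain-point flags.
df = pd.read_csv("ces_survey.csv")

# Dependent variable: flag high-effort respondents (here, ratings of 3 or below - assumed cut-off).
y = (df["ces_score"] <= 3).astype(int)

# Independent variables: candidate pain points (1 = experienced, 0 = not).
X = df[["long_wait_times", "multiple_transfers", "unclear_information"]]
X = sm.add_constant(X)  # add an intercept term

# Which pain points best predict a high-effort experience?
model = sm.Logit(y, X).fit()
print(model.summary())
```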

If you’re using qualitative research, try indirect questioning to achieve the same goal. Projective techniques – such as giving the respondent hypothetical scenarios, analogies, or image sorting tasks – explore beyond their claimed reasoning to reveal subconscious factors.

Summary

Why track customer effort in B2B research?

Potential benefits: decreasing customer churn; improving customer service performance; increasing the number of referrals; growing the number of sales; optimizing customer service costs.

Pros and cons of using CES in B2B research

The advantages include: a direct link to purchases and repurchases; a link to referrals; easy to use several times in a study; can uncover a high-priority CX issue.

The disadvantages include: not relevant for all B2B companies; little clarity around where and why effort is too high; CES is not bespoke to your business; just one part of the overall customer experience.

Best practices when using CES in B2B research

If using CES, we recommend that you: prioritize addressing your business objectives, with or without CES; design research that will add extra detail to your CES scores; interpret CES the right way to tell the true story; run advanced statistical techniques to find more links to high effort.

Chris Wells