Dissecting The Customer Effort Score (CES)

April, 2014

The Customer Effort Score (CES) has been found to be an accurate indicator of customer satisfaction and loyalty. Here we look at what might comprise a CES, and what could be done to optimise it.

The effort a customer must expend in achieving their goal for a contact session – measured as a Customer Effort Score (CES) – has been found to be an accurate indicator of customer satisfaction and loyalty, the ultimate aim of any customer-facing service.

Here we look at what might comprise a CES, and what could be done to optimise it, focusing on 3 key areas:

  1. metrics
  2. functionality
  3. agent training

Metrics

CES is especially useful because it doesn’t have to rely on a post-interaction survey; rather, it can be gleaned from various measures already within the control of the contact center.

Which metrics might be useful inputs for a CES? Mainly those relating to customer wait time; a rough sketch of how such measures might be captured and combined follows the lists below.

For voice interactions:

  • How long did the phone ring before it was answered (either by IVR or a live agent)?
  • Was the IVR easy to navigate? That is, did the caller follow a linear path, get lost and backtrack, or ‘zero out’ to reach an agent?
  • Once through the IVR, how long did they wait in the queue for a live agent?
  • When talking to an agent, were they put on hold?
  • For how long?
  • How many times?

For text-based interactions (web chat, email, etc):

  • After initiating a chat session or sending a text or email, how long did they wait before an agent first replied?
    Expectations differ by channel. A recent survey* found that 59% of customers expect resolution within 30 minutes when contacting customer service by phone, 52% expect resolution within a day via social media, and 75% expect resolution within a day via email.
    *The Omnichannel Customer Service Gap – Zendesk/Loudhouse survey, Nov 2013
  • How long were they left waiting for subsequent agent responses?
  • How many question-and-answer exchanges were required before the customer was satisfied?
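
As a rough illustration of how such wait-time measures might be captured and rolled into a single effort figure, here is a minimal Python sketch for the voice channel. The field names, divisors and weights are assumptions made for this example, not part of any standard CES formula; a text-channel equivalent would track reply delays instead.

    from dataclasses import dataclass

    # Illustrative only: field names and weights are assumptions, not a standard.
    @dataclass
    class VoiceInteraction:
        ring_seconds: float   # time the phone rang before the IVR or an agent answered
        ivr_backtracks: int   # menu steps repeated, plus any 'zero out' to an agent
        queue_seconds: float  # wait in the queue for a live agent after the IVR
        hold_count: int       # number of times placed on hold during the call
        hold_seconds: float   # total time spent on hold

    def voice_effort_score(i: VoiceInteraction) -> float:
        """Roll the wait-time measures into one effort figure (higher = more effort).
        The divisors and weights are arbitrary placeholders."""
        return (
            i.ring_seconds / 30      # ~30 s of ringing counts as one unit of effort
            + i.ivr_backtracks * 1.0
            + i.queue_seconds / 120  # ~2 minutes in the queue counts as one unit
            + i.hold_count * 0.5
            + i.hold_seconds / 120
        )

    # Example: answered after 20 s, one backtrack, 3 min in queue, two holds totalling 90 s
    print(voice_effort_score(VoiceInteraction(20, 1, 180, 2, 90)))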

For real-time metrics, targets and thresholds can be set and alerts configured to warn supervisors when a target is at risk of being breached. For post-interaction metrics, reports must be generated and analysed to identify areas for improvement.
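
As a sketch of what priming such alerts might look like, the snippet below compares live metric values against illustrative thresholds and flags any that are approaching their limit. The metric names, threshold values and warning factor are assumptions for illustration only, not a reference to any particular platform.

    from typing import Dict, List

    # Hypothetical targets: alert when a metric reaches 80% of its threshold.
    THRESHOLDS = {
        "queue_seconds": 120,       # longest acceptable wait for a live agent
        "first_reply_seconds": 60,  # longest acceptable wait for a first chat reply
    }

    def real_time_alerts(current: Dict[str, float], warn_at: float = 0.8) -> List[str]:
        """Return supervisor warnings for metrics nearing their thresholds."""
        alerts = []
        for metric, limit in THRESHOLDS.items():
            value = current.get(metric)
            if value is not None and value >= limit * warn_at:
                alerts.append(f"{metric} at {value:.0f}s is nearing the {limit}s target")
        return alerts

    # Example: callers have been queuing for 110 s, chats are answered within 20 s
    for message in real_time_alerts({"queue_seconds": 110, "first_reply_seconds": 20}):
        print("SUPERVISOR ALERT:", message)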

Functionality

For voice interactions:

  • While in the queue, were they kept informed? (“You are number x in the queue”, “The current wait time is x mins”, etc)
  • Did they have the option to leave a message, or request a callback?
  • When transferred from the IVR to an agent, or between agents, did they have to repeat information (security details, account number, etc)?
  • How many times?

Agent training

  • Was the agent able to satisfy the customer during the first contact (first contact resolution)?

In order to achieve this, a number of things must be in place:

  • Did the agent have access to the right information & systems?
  • Was the agent adequately trained to provide the required service?
  • If they couldn’t provide resolution themselves, did they have access to a remote knowledge worker who could help?
  • Did they have the right basic skills for the job? For example, could they speak and write clearly and intelligibly?

The combined answers to all these questions, and others like them, will give an informed insight into the customer experience without the need for direct customer feedback. Armed with this, the contact center is in a good position to maximise efficiency and improve services so that customer effort is reduced and customers become brand advocates.
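
One way to picture that combination is a weighted roll-up of the three areas covered above into a single, internally derived effort score. The weights and the 0-10 scale below are purely illustrative assumptions, not a standard formula.

    # Illustrative roll-up: areas mirror this article's structure; weights are assumptions.
    COMPONENT_WEIGHTS = {
        "metrics": 0.5,        # wait times: ringing, IVR, queue, hold, reply delays
        "functionality": 0.3,  # queue updates, callbacks, information repeated
        "agent": 0.2,          # first contact resolution, training, basic skills
    }

    def composite_effort(scores):
        """Weighted average of per-area effort scores, each on a 0-10 scale
        where 0 means effortless and 10 means maximum effort."""
        return sum(weight * scores.get(area, 0.0)
                   for area, weight in COMPONENT_WEIGHTS.items())

    # Example: long waits, reasonable functionality, strong agent performance
    print(composite_effort({"metrics": 6.0, "functionality": 2.5, "agent": 1.0}))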