Beyond NPS: Why the “Silver Bullet” CX Metric Misses the Mark

by Marie Serrano and Katie Monteith

There is a saying that “you deliver services the way you evaluate them.” Yet most organizations lean on one-size-fits-all metrics like Net Promoter Score (NPS) or Likelihood to Recommend as the “silver bullet” measurement of their impact on the user experience. But what does a score like NPS tell us about actionable tactics, or about specific moments in an experience?

As organizations invest in delivering customized, user-centered experiences, measuring their success and their impact on the ecosystems they touch requires going beyond standardized CX metrics. To deliver more user-centered services and drive desired behaviours, organizations must shift from an output-driven to an outcome-driven perspective. They must find new ways to translate user needs and behaviours into quantifiable, actionable measurements that demonstrate the value created for both users and organizations, or “shared value.” Doing so will enable organizations to become as good at measuring the quality of the user experience as they are at measuring its business value.

Why? Because in an omni-channel world, customers move fluidly across channels and providers, their preferences shift as they share content and experiences with their social networks, they encounter service innovations across industries that raise their expectations, and their needs are constantly evolving. To keep up, companies need to deliver service experiences that are relevant, dynamic, and fluid. A 5,000-foot assessment of services is no longer enough.

Capturing and quantifying how users experience complex services requires a new way of thinking about measurement, at a level of detail that shows organizations how and where to make meaningful improvements. In our opinion, there is no single, silver-bullet CX metric, just as there is no single, silver-bullet business metric.

That’s why we have a new proposal for measuring experiences: a set of measurement parameters that establish the sandbox for where, when, and how to measure service experiences through the lens of driving shared value.

We’ll dive into the details of these parameters (and how to assess them) in our IxDA workshop on April 24, but below is a short introduction to get the conversation started.

Parameter #1: Experience Factors

Answers the question: What are the key experience factors of the service I should perform against?

The experience factors are the foundational requirements that a service must deliver for the user or the organization to deem it a “good experience.” They measure the process of an experience (the quality and perception of the experience itself) rather than its outcomes (the resulting benefit for the organization, such as financial gains, or for the user, such as whether their task was completed).

For users: what does the user want to experience during the service? For example, transparency, personalization, proactivity. For the organization: what does the organization want the user to experience during the service? For example, speed, trustworthiness, loyalty. Rather than measuring either side alone, the benefit of measuring value for both the user and the organization (or “shared value”) comes from drawing a correlation between the factors that matter to the user and the factors that matter to the organization. This assumes that what creates value for the user can also create value for the organization: for example, by delivering a more transparent experience for users, we increase the trustworthiness of the experience for the organization.
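
To make that shared-value assumption testable, one option (and this is only a sketch, with hypothetical factor names and made-up survey scores) is to track a user factor and an organization factor side by side and check whether they move together. In Python, for example:

    # Hypothetical monthly survey averages on a 1-5 scale for one user factor
    # (transparency) and one organization factor (trustworthiness).
    from statistics import correlation  # Python 3.10+

    transparency_scores = [3.1, 3.4, 3.9, 4.2, 4.4, 4.6]      # user factor
    trustworthiness_scores = [3.0, 3.2, 3.8, 4.0, 4.3, 4.5]   # organization factor

    r = correlation(transparency_scores, trustworthiness_scores)
    print(f"Correlation between the two factors: {r:.2f}")

    # A strong positive correlation supports (but does not prove) the shared-value
    # assumption: the two factors move together across measurement periods.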

Parameter #2: Timeline

Answers the question: How do you define where to measure the experience?

Timeline looks at the critical moments in the experience and at where the experience starts and ends. It helps define where to focus your measurement efforts, based on the moments that deliver the most value. Timeline considers both how users and the organization define the boundaries of the experience (where it starts and ends) and which moments each considers most impactful.

For users: where does the user think the experience starts and ends, and what moments matter most to them? For example, users often think the experience begins when they consider the different options available to them. For organizations: where does the organization think the experience starts and ends, and what moments matter most to it? For example, organizations often think the experience begins when the user makes their first contact with a representative or channel. Rather than picking one view, the benefit of measuring value for both the user and the organization comes from redrawing the boundaries of the experience to include the starting and ending points of both, and from measuring the key moments that matter to each, whether they overlap or not. You can, however, prioritize where to measure by looking for moments that matter to both users AND the organization.
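
As a rough illustration (the journey stages, boundaries, and moments below are all invented for the example), here is how redrawing the boundaries and prioritizing the overlapping moments could be expressed:

    # Hypothetical journey stages, ordered from earliest to latest.
    journey = ["research options", "first contact", "onboarding",
               "first use", "issue resolution", "renewal"]

    user_boundaries = ("research options", "issue resolution")   # user's view
    org_boundaries = ("first contact", "renewal")                # organization's view

    user_moments = {"research options", "first use", "issue resolution"}
    org_moments = {"first contact", "issue resolution", "renewal"}

    # Redraw the boundaries so the measured span covers both views.
    start = min(journey.index(user_boundaries[0]), journey.index(org_boundaries[0]))
    end = max(journey.index(user_boundaries[1]), journey.index(org_boundaries[1]))
    measured_span = journey[start:end + 1]

    # Measure every moment that matters to either side; prioritize the overlap.
    all_moments = user_moments | org_moments
    priority_moments = user_moments & org_moments

    print("Span to measure:", measured_span)
    print("Moments to measure:", sorted(all_moments))
    print("Highest-priority moments:", sorted(priority_moments))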

Parameter #3: Reference Point

Answers the question: What am I measuring improvement against?

The reference point is the benchmark against which you measure the quality of the experience. It allows you to track progress or transformation by asking “how is my service improving against a set target?” It provides clarity about what kinds of improvements are most valuable for both the user and the organization.

For users, we tend to evaluate their experience against their original expectations of it. These expectations can be shaped by other services or experiences, not necessarily in the same sector. For example: “If Uber can do it, why can’t my bank, my phone company, or my city services?” For organizations, we tend to evaluate the user experience by comparing it to a previous version of the experience they delivered, or to a similar experience delivered by close competitors: are we delivering better or worse than we did last time? Better or worse than our competitors? For example: “We are 10% faster at delivering the service than competitor XY.” Rather than either of these, the benefit of measuring value for both the user and the organization comes from a co-created common reference point: a benchmark that establishes what the experience SHOULD BE, as collaboratively defined by users and the organization, so that performance is measured against whether the desired experience is actually being delivered.
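
To show what measuring against a co-created reference point could look like, here is a small sketch; the indicators, targets, and results are invented, and in practice the targets would come out of the co-creation work itself:

    # Hypothetical targets co-created by users and the organization for one
    # moment in the service, compared with invented results for this quarter.
    # All indicators are "higher is better" to keep the gap easy to read.
    co_created_targets = {
        "cases resolved within 2 days (%)": 90.0,
        "proactive status updates per case": 3.0,
        "post-resolution satisfaction (1-5)": 4.5,
    }
    measured_this_quarter = {
        "cases resolved within 2 days (%)": 72.0,
        "proactive status updates per case": 1.0,
        "post-resolution satisfaction (1-5)": 4.1,
    }

    # The question is not "are we better than last year or a competitor?" but
    # "are we delivering the experience we agreed the service SHOULD be?"
    for indicator, target in co_created_targets.items():
        actual = measured_this_quarter[indicator]
        print(f"{indicator}: target {target}, actual {actual}, gap {actual - target:+.1f}")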

These three parameters form the basis of effectively measuring service experiences. Once we have answers for each parameter, it’s an easy 1-2-3 process to identify the right metrics to assess each service experience (a rough sketch of how the steps chain together follows the list):

  1. Prioritize experience factors 

  2. Define your evidence 

  3. Translate evidence into metrics 
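
As a preview, and with an entirely made-up factor, evidence statements, and metric definitions, here is roughly how the three steps chain together for a single prioritized factor:

    # Hypothetical walk-through of the three steps for one experience factor.
    measurement_plan = {
        "factor": "transparency",                # Step 1: the prioritized factor
        "evidence": [                            # Step 2: what we would observe
            "user can see the status of their request at any time",
            "user is told up front how long each step will take",
        ],
        "metrics": [                             # Step 3: evidence turned into numbers
            "% of requests whose status page is viewed without a support call",
            "% of users who can correctly state the next step and its duration",
        ],
    }

    for evidence, metric in zip(measurement_plan["evidence"], measurement_plan["metrics"]):
        print(f"Evidence: {evidence}\n  Metric:   {metric}\n")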

Sound easy? Trust us, it will be. We’ll walk through each of these three steps in our session, with hands-on activities, helpful templates, and group discussion. 

And if you’re worried about whether or not you have the quantitative chops to enjoy the discussion, don’t be. We’re not statisticians, data experts, or research experts. We’re just experienced service designers with a desire to see the work we do measured in a meaningful and impactful way.

We look forward to seeing you on April 24th!