CX — The sum of all interactions that customers and potential customers have with your brand, and their on-going relationship with it.
It’s commonly believed that it’s cheaper to retain an existing customer than to acquire a new one. In tricky, uncertain economic times there is a natural tendency to compete in a ‘race to the bottom’ where price-sensitivity is the most important driver. That might be true for one-off purchases, where the relationship is going to be limited, but it isn’t the best way to grow and retain your audience. You have to show customers that you care about their experience; personalisation can be far more effective in the long run.
While a considerable amount has been written about Customer Experience, a lot of it consists of short articles with little analytical or structural worth. When measuring Customer Experience, we have to move beyond the world of rational, quantitative analysis and discover the qualitative, emotional factors that affect our customers.
This is a large audience: from people who come to the website via a tweet and immediately bounce, never to return, to dedicated, repeat visitors. To make this volume of people easier to work with, I reduced it to a framework of basic principles that can be easily applied.
Don’t worry! At the end, I’ll share a way you can introduce this kind of thinking into your design ideations and product sprints.
Why is CX Important?
Why should you care about measuring CX? Without a valid way to measure our successes, we can’t prove a return on investment for our work, and we can’t ask for the extra resources or staff needed to maintain or improve it. If we don’t know when or why we went wrong, we can’t fix our mistakes or improve in the future.
CX is useful when you can’t compete on price; then you have to compete on the quality of the service and the experience. Did it measure up to the customer’s expectations? Did it provide them with what they came for? Did it provide them with value? It’s better to retain existing users by engaging with them and improving their experience than it is to acquire and onboard new users.
CX helps with planning. By finding out what is causing our users the most pain we can prioritise the areas to fix or improve. We can target the quick wins as well as long term, strategic planning.
CX helps to confirm ROI. It achieves this by highlighting where issues have occurred and how much they have cost. You can then show what was done to fix them and, most importantly, the business effect of those changes so you can prove a definite return on the resources invested. This can lead to a higher budget in the future because an actual impact can be proved.
CX gives improved visibility of your wins and successes within the wider business. It helps spread the word about how great a job you’re doing and the effect you’re having on the bottom line. It also helps by making sure that there’s a constructive response to any failures, so you can try to avoid making similar mistakes in the future.
We first need to find out what we need from our success metrics.
A success metric should have at least some of the following attributes:
It needs to be adaptable, scalable and re-usable. It can be altered to give similar insight at different scales and for different projects.
It needs to be seasonal or trendable. You should be able to compare results from two time periods and project the trend into the future.
It needs to be replicable and consistent. Whoever performs the analysis, whenever they perform it, the results will match. People need to be able to rely on it and understand what it represents.
It needs to be understandable, impactful and easily digestible. Everyone should have an interest in the success of a project, not just the analysts and the project leads, so it should have an impact across cross-functional teams.
It needs to be segmentable, actionable and insightful. The level of detail is useful, can vary, and can serve large or small-scale projects, i.e. it can be used in different ways by different teams.
It must be trustworthy, transparent and credible. It should become a Single Source of Truth for Customer Experience and the measurement should be consistent.
It should be comprehensive. It needs to cover all channels, endpoints and touchpoints.
It must be able to identify both causes and consequences. The input and output metrics are linked and affect each other e.g. Pages per Session affects Frequency of Visit.
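To see whether an input metric and an output metric really move together, you can start with something as simple as a correlation across users. A minimal sketch for the Pages per Session / Frequency of Visit example; the per-user figures are invented for illustration:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length metric series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-user figures: pages per session (input metric)
# and visits per month (output metric).
pages_per_session = [2.0, 3.5, 5.0, 6.5, 8.0]
visits_per_month = [1, 2, 2, 4, 5]

r = pearson(pages_per_session, visits_per_month)
```

A strong positive `r` is only a starting point, of course; it flags a linked input/output pair worth investigating, not a proven causal relationship.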
Now, we have to look at the audience that we are producing the result for. Who are they? What kind of detail are they looking for? What kind of detail will be useful?
- Audiences and use-cases: who needs to track CX, and why?
The range and scope of the data make it unlikely that any single number can represent CX, so I decided it was better to use a measurement framework than a single metric. It needs to be adaptable for large or small-scale projects. It should be usable cross-functionally, across different teams and multiple roles. It should bring an awareness of what affects the Customer Experience, and how CX affects everything else, to as wide an audience as possible.
How can we make the entire customer experience into a usable framework? As we are designing this to encompass the totality of the customer experience, we must take into account the experience of the audience members who dropped out as well as those who engaged. We must have an idea of how individual users’ previous experiences affect their feelings about the brand. Overall, aggregated metrics can be quite hard to use because they are very awkward to move in any significant way. Aggregated NPS, for example, is quite monolithic and takes a lot of impact to move by a single point, while a 0.1% rise in NPS Detractors (from the same question, the same survey, the same customers) makes tracking the impact much more straightforward and obvious.
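As a concrete illustration of why the component shares are easier to act on than the headline number, here is a minimal sketch (the scores are invented) that computes both the aggregated NPS and the detractor share from the same responses:

```python
def nps_breakdown(scores):
    """Split 0-10 survey scores into promoter and detractor shares
    and the aggregated NPS (% promoters minus % detractors)."""
    n = len(scores)
    promoters = sum(1 for s in scores if s >= 9) / n   # scores 9-10
    detractors = sum(1 for s in scores if s <= 6) / n  # scores 0-6
    return {"promoters_pct": promoters * 100,
            "detractors_pct": detractors * 100,
            "nps": (promoters - detractors) * 100}

# Hypothetical responses to a single survey question.
scores = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]
breakdown = nps_breakdown(scores)
```

A single extra detractor moves `detractors_pct` by a visible step, while the headline `nps` figure blends three groups together, which is why tracking the shares separately tends to be more actionable.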
This is intended to inform different job roles of the types of metrics they might see in their area. It will help identify both key drivers and success metrics for your stakeholders, UX team and Insight/Analytics functions.
As we have so many customers with dissimilar needs, we need to identify each customer’s motivation for using our services before we can determine the quality of their experience with those services.
I’ve identified two methods to break down the customer experience into more measurable components.
· The Customer Journey — Where are they trying to go? What is their ultimate end-point? Why are they here?
· Jobs to be Done / User Tasks — What are they trying to accomplish today/in this interaction/in the course of this project? How can they do this?
The Areas of Customer Experience
Customer Experience consists of all the interactions and emotions the customer feels when they visit: positive, negative and neutral. It covers the things we can track automatically using back-end services and the things we have to ask the user directly. This includes both what is often referred to as rational, left-brain thinking and emotional, right-brain thinking.
The first area focusses on what actually happened. It’s analytical, rational thinking, where maths, logic and linear thinking tend to be thought of as happening. It is reflected by measuring the customers’ actual journey through their experience.
Actual metrics — these reflect what happened to the customer and how it materially affects the brand. This includes areas such as pages viewed, time on site and interactions with the various on-screen elements. This can also be referred to as evidence.
The second is the centre of emotional and creative thinking; it focusses on what the user feels is happening. It’s where imagination, intuition, visualisation and big-picture thinking tend to be. It is reflected by measuring the customers’ perception of what they’ve experienced.
Perceived metrics — these reflect what the customer feels about what they’ve experienced and how it affects their relationship with the company. They are self-reported and this should be taken into consideration. People often only answer when they have something that they definitely want to say. There are also sometimes cultural differences in the way scores are given. More individualistic cultures tend to mark at the extreme ends, more communal cultures tend to cluster around the middle. This can also be referred to as experience.
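One possible way to soften those response-style differences is to standardise each respondent’s ratings against their own mean and spread before comparing across people. This is a sketch of that adjustment, not the only option; it assumes each respondent has given several ratings, and the data is invented:

```python
from statistics import mean, pstdev

def standardise(responses):
    """Z-score each respondent's ratings against their own mean and
    spread, so extreme-markers and middle-markers become comparable."""
    out = {}
    for person, ratings in responses.items():
        mu, sigma = mean(ratings), pstdev(ratings)
        # If a respondent gives identical scores, sigma is 0; treat
        # every rating as neutral for them.
        out[person] = [(r - mu) / sigma if sigma else 0.0 for r in ratings]
    return out

responses = {
    "extreme_marker": [10, 1, 10, 2],  # marks at the ends of the scale
    "middle_marker": [6, 4, 6, 5],     # clusters around the middle
}
z = standardise(responses)
```

After standardising, a “high for this person” rating looks similar whether they habitually mark at the extremes or around the middle.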
These can be broken down into input and output metrics.
Input Metrics — These are the metrics that affect the Customer Experience:
3 Example Areas of Input Metrics
· How we talk to the user and what they say to us
· How the customer acts: what they do and how they feel about the process
· The quality, value and utility, both perceived and actual, of the services provided
Output Metrics — These are affected by the Customer Experience:
2 Areas of Output
· How much does the customer bring in? Revenue, margin, ROI, ROAS. Are we getting the optimal amount of revenue from each customer?
· Do they return? Do they re-book or repeat? Do they share or recommend to others?
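Most of these output metrics fall straight out of a transaction log. A minimal sketch, with invented customers and figures, computing average revenue per customer and the share of customers who come back:

```python
from collections import defaultdict

# Hypothetical transaction log: (customer_id, revenue).
transactions = [
    ("alice", 120.0), ("bob", 80.0), ("alice", 95.0),
    ("carol", 60.0), ("alice", 110.0), ("bob", 75.0),
]

revenue = defaultdict(float)
orders = defaultdict(int)
for customer, amount in transactions:
    revenue[customer] += amount
    orders[customer] += 1

# Average revenue per customer: total revenue over distinct customers.
avg_revenue_per_customer = sum(revenue.values()) / len(revenue)
# Repeat rate: share of customers with more than one order.
repeat_rate = sum(1 for c in orders if orders[c] > 1) / len(orders)
```

The same log can feed margin, ROI or ROAS once cost data is joined in; the point is that the output side of the framework is largely computable from records you already keep.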
Preparing a CX metric — First Steps
· Break the customer journey down into recognisable, defined steps or jobs. Make sure you include the measurements you’ll be recording for each stage, or what you’d like to measure.
· Overview: break the overall journey into more recognisable steps
· Where are our users going?
· Where do we want them to go?
· FEEL quadrant
· Why are they going there?
· What task(s) are they trying to achieve?
· ACT quadrant
· How do they get there?
· What is their path?
· ACT/FEEL quadrants
· What stops them achieving?
· Where are the pain points?
· THINK quadrant
· How can we tell that this is good for the user?
· How can we prove that this is good for us?
· LOVE quadrant
· How can we improve this?
· What else can we offer that the user wants?
Using this Framework in a Sprint Planning Session
a. Have plenty of pens, post-its and either a whiteboard or a flipchart pad.
b. Explain the reason behind the meeting before the meeting, via email or Slack, and let people have a think about how it applies to their role.
c. Have a list of features or issues you’d like to plan for. Include any OKRs, KPIs or other success metrics here as actual outputs.
d. Draw a 2x2 grid on a whiteboard and label the quadrants.
e. Give everyone 5–10 minutes to write down what they think you should be measuring and stick a post-it for each idea in their chosen quadrant.
f. After 5–10 minutes, consolidate any similar metrics into one note and discuss with the group why they decided to put them in that particular quadrant.
g. Decide how you’re going to measure this. What tools do you need? What segments are you targeting?
h. Transfer your metrics to the grid; there is a more specific, analytic version at the bottom for you to use and send to your stakeholders or reporting team.
i. Move on to the next idea and repeat!
Breaking down the quadrants:
1. Actual Input — THINK
Defined Input metrics.
Left-brain analytic thinking.
What actually affects CX.
The actual journey and interactions measured; the evidence behind the customer’s journey.
What can be quantitatively proved from the customer’s progression towards their initial goals? How smooth is their journey from the technical side?
2. Actual Output — ACT
Defined tangible effects.
Left-brain analytic thinking.
What actually happens.
The revenue and concrete figures produced; the evidence produced by the customer’s journey.
How much revenue, and what average revenue, did we gain from the customer? Did this track acceptably against CPA, ROI and ROAS?
3. Perceived Input — FEEL
Defined User journeys and flow.
Right-brain creative thinking.
What the user feels affects CX.
The customer’s view of the journey; the customer’s experience using the product or service.
What does the customer think and feel about their progress towards their goal? While some of this is quantitative, there will necessarily be a qualitative element. How smooth does their journey appear to them?
4. Perceived Output — LOVE
Defined loyal behaviour.
Right-brain creative thinking.
What the user felt happened and their ongoing emotions.
How the customer feels about the company, the experience that the customer takes forward from their interactions with your brand.
What the customer thinks and feels about the product after the “booking experience”/product has been received. While some of this is quantitative, there will necessarily be a qualitative element such as surveys or user interviews.
THINK: Define your Input metrics. This is left-brain, analytic thinking. It’s what actually affects CX.
FEEL: This is where we define our user journeys and flow. It’s right-brain, creative thinking. This is what the user feels affects their CX.
ACT: Where the tangible effects are defined. Again left-brain, analytic thinking. This is what actually happens and what senior management is most interested in.
LOVE: How we define loyal behaviour. Right-brain, creative, emotive thinking. What the user felt happened to them during their journey and their ongoing emotions.
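If you want to hand the quadrants to your reporting team in a machine-readable form, the framework is small enough to express directly as a data structure. A sketch, where the metric names per quadrant are illustrative examples rather than a prescribed list:

```python
# The 2x2 framework keyed by (evidence/experience, input/output).
framework = {
    ("actual", "input"): {
        "label": "THINK",
        "metrics": ["pages viewed", "time on site", "task completion"]},
    ("actual", "output"): {
        "label": "ACT",
        "metrics": ["revenue", "CPA", "ROAS"]},
    ("perceived", "input"): {
        "label": "FEEL",
        "metrics": ["ease-of-use rating", "survey feedback"]},
    ("perceived", "output"): {
        "label": "LOVE",
        "metrics": ["NPS", "repeat bookings", "recommendations"]},
}

def quadrant(kind, direction):
    """Return the quadrant label for a (kind, direction) pair,
    e.g. ('actual', 'input') -> 'THINK'."""
    return framework[(kind, direction)]["label"]
```

Structuring it this way makes it easy for each team to append its own candidate metrics to the right quadrant after a planning session.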
This is an example of how the framework could initially be completed.
This is the more technical version for Management/Analysts so you can add the metrics you are trying to measure and the methods you are going to use to measure them.
Before you go, watch out for…
The Observer Effect. The act of measuring users’ perception can disrupt the flow and distort the metrics, because it interferes with the customer’s experience. Asking the wrong question at the wrong time could take the user out of their experience. We also need to be mindful of how often each customer is asked, how many questions they are asked and how complex those questions are, e.g. alerts, pop-in surveys and pop-up modals. This is closely related to Goodhart’s Law, which states that “when a measure becomes a target, it ceases to be a good measure.”
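One practical guard against over-surveying is to cap how often any individual customer can be prompted. A minimal sketch of such a frequency cap; the `may_ask` helper and its thresholds are hypothetical and would need tuning for your own audience:

```python
from datetime import datetime, timedelta

def may_ask(last_asked, asks_this_quarter,
            cooldown_days=30, quarterly_cap=2, now=None):
    """Decide whether it's acceptable to show another survey prompt.
    Enforces a per-quarter cap and a cooldown since the last prompt."""
    now = now or datetime.now()
    if asks_this_quarter >= quarterly_cap:
        return False
    if last_asked is not None and now - last_asked < timedelta(days=cooldown_days):
        return False
    return True

# Illustrative checks against a fixed "current" date.
now = datetime(2024, 4, 1)
ok_new = may_ask(None, 0, now=now)                      # never asked before
too_soon = may_ask(datetime(2024, 3, 20), 1, now=now)   # asked 12 days ago
over_cap = may_ask(datetime(2024, 1, 1), 2, now=now)    # quarterly cap hit
```

Gating every alert, pop-in survey and pop-up modal through a check like this keeps the measurement itself from becoming a source of friction.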
This framework can be part of a lifecycle of Continuous Improvement or other Agile/Sprint Planning methodology e.g. the A-E cycle from my previous post.
Questions and comments are welcomed and encouraged. Many thanks to @Alex_Wimbleton for feedback and for suggesting Evidence and Experience in place of Actual and Perceived.
Next up, actionable NPS insights from small sample sizes.