An interview with our Managing Director, John Hughes, regarding the misuse of NPS surveys
John, you have a bit of a problem with NPS, right?
Not so much a problem, but it continues to amaze me how many column inches and blog posts I read about NPS rating scales being used differently, let’s say, from the standard method.
I posted an article on LinkedIn a while back and got a lot of feedback that said the same thing. I even heard about a survey recently where the organisation asked the customer to use a scale of 8 to 10! This fiddling about with the NPS question compromises it as a universal benchmark.
And is this common?
Not that particular example, but misuse of the rating scales and manipulation of the scores does seem to be far more commonplace than you’d imagine.
And this bothers you, why?
I just think, why do that? It makes no sense. If you are going to use the methodology, then use the same scale as everyone else, or you can’t compare the results properly. If a customer says they are ‘likely to recommend’, how do we know if that’s the same as another customer’s 7, 8 or 9? It’s the process that bothers me, and the way it makes benchmarking very difficult.
I’m not bothered about the organisations who do it. If they want to mislead themselves into thinking they have a higher NPS score by adopting these tactics, then so be it. I’m more concerned about the wider issue, because if NPS is not used properly it makes it harder to use for comparison, and it also tells me that some organisations are much more worried about the number than they are about what the number is telling them.
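For context, the comparability John describes rests on everyone computing the score the same way. A minimal Python sketch of the standard published NPS calculation (the function name and example data are illustrative, not from the interview):

```python
def nps(ratings):
    """Net Promoter Score from 0-10 'likely to recommend' ratings.

    By the standard methodology: promoters score 9-10, detractors 0-6,
    passives 7-8, and NPS = % promoters minus % detractors,
    giving a score between -100 and +100.
    """
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Illustrative responses on the standard 0-10 scale:
print(nps([10, 9, 8, 7, 6, 2]))  # 2 promoters, 2 detractors -> 0

# An "8 to 10" scale like the one John describes makes detractor
# scores impossible to record, so the resulting number cannot be
# compared with scores gathered on the standard scale.
```

The point isn’t the arithmetic, it’s that once the scale or the cut-offs change, the same customers produce a different number and the benchmark is lost.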
And that’s not good?
The reality is that the number itself doesn’t matter so much. The NPS score is only a guide to future customer behaviour.
If the scores are inflated in whatever way the organisation sees fit, they will miss this vital clue to what their customers are going to do. Longer term they will find out that the associated business growth does not follow.
And they will want to know why, won’t they?
They will, but more to the point, their board or shareholders will want to know why the growth isn’t matching the numbers and questions will be asked.
So are you not a fan of NPS then?
It’s not that. It’s actually a really good global benchmark, and IF everyone used it properly it could continue to be. But the more I hear stories like the 8 out of 10 one, and that’s the thin end of the wedge, the more I worry that its credibility is being eroded and its usefulness weakened.
We have already seen CES (customer effort score) touted as a better guide to customer behaviour than either NPS or CSAT, so care is needed that its reputation doesn’t become tarnished by bad practice.
NPS was hailed as the ‘only question you needed to ask’. I take it you don’t agree?
Not that it’s the only question, no.
For a start, you would have to ask why they scored as they did, especially if it was below 9 or 10 out of 10, so that’s at least two questions. I think there have to be questions that drill down into the actual customer experience to fully understand what the customer journey is really like, and just asking the NPS question can’t do that.
But it’s a really valid question to ask (as is CES) in a survey because it’s still a good benchmark and guide.
So it’s got a place but with a number of other measures?
Exactly. And if it’s a good survey, organisations will achieve better response rates and customer engagement.
Very often it’s about designing a better survey and using the best deployment methods. And of course, doing something with the data and telling your employees and customers what you have done as a result of the feedback.
Is too much emphasis placed on measurement though?
I would agree that there can be too much emphasis on the high level number rather than what the numbers are actually saying. But you need to know if progress is being made and what areas are letting you down, so you have to measure in some way.
One of the difficulties when trying to benchmark is that every organisation uses different questions and ratings as it is, which is why NPS was a welcome measure a few years ago. It’s good to have a universal measure that you can compare and get some context from. Just don’t make it the be all and end all, and be careful not to read too much into what the numbers tell you, because there are other factors at play. Many years ago, I followed the Baldrige awards with interest, until one of the award winners went bust!
Surely that can happen to anyone though?
Of course, I’m just making the point that winning an award or having high scores isn’t an automatic passage to success. How long will it be before organisations with NPS scores of 70 start to go bust?
John Hughes is the founder and MD of CSN. In his ‘spare time’ he is a speaker on service excellence and a judge on several awards including the UK Customer Experience Awards.