
Why we don’t like Net Promoter, and what we used instead

What is Net Promoter?

You’ve probably seen and answered this question in a survey before: How likely is it that you would recommend X to a friend or colleague? …and you pick an answer on an 11-point scale (0–10). Companies then digest these responses and label each customer as a Detractor (0–6), Passive (7–8), or Promoter (9–10), based on their answer. They then take the % of Promoters and subtract the % of Detractors to get something called NPS (Net Promoter Score). So, obviously, if you have more Promoters than Detractors, your NPS will be positive (good). If your NPS is 50+, you’re the next Apple.

[Figure: Net Promoter Score scale]
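To make the arithmetic concrete, here’s a minimal sketch (our illustration, not anything from an NPS vendor’s tooling) that buckets 0–10 answers with the standard cutoffs and computes the score:

```python
from collections import Counter

def nps(scores):
    """Compute Net Promoter Score from 0-10 survey answers.

    Standard buckets: 9-10 = Promoter, 7-8 = Passive, 0-6 = Detractor.
    Returns % Promoters minus % Detractors, so the result is in [-100, 100].
    """
    buckets = Counter(
        "promoter" if s >= 9 else "passive" if s >= 7 else "detractor"
        for s in scores
    )
    total = sum(buckets.values())
    return 100 * (buckets["promoter"] - buckets["detractor"]) / total

# Example: 4 Promoters, 3 Passives, 3 Detractors out of 10 responses
# -> 40% - 30% = NPS of 10.
print(nps([10, 9, 9, 10, 8, 7, 8, 3, 5, 6]))  # 10.0
```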

Why is it faulty?

A few reasons why I don't like Net Promoter (at least for us as a B2B SaaS provider) and think it shouldn't be used as a growth predictor:

  • To date, there is little evidence of a correlation between NPS and actual revenue growth, whether via renewal rates or recommendations (see opposing research on page 20 and supporting research here).
  • It's complex for the survey-takers. This isn't a real number; it's “how likely are you?” People have very different “0–10” mental scales, especially across different cultures.
  • Results are non-actionable. If your score is low, you have no insight into what needs to change in order to improve it.

So what did we ask our customers instead?

A question that might seem to be the same, but is very different if you take a closer look:

Have you recommended UserVoice to anyone?

  • No
  • Yes, and this is how I described it: ______

If you're not already talking to unhappy customers, we also recommend the additional question: “What could we do to make our product better?”

What’s so great about this question?

Two things:

  1. First and foremost, it’s simple. Your customers don't have to think about where they fall on an abstract scale. It's yes or no. It also means there's a lot less analysis you have to do to get an answer to the simple question: “are our customers satisfied?” (see the sketch after this list).
  2. It's actionable. Having an open-ended question where they tell you how they’d describe the product gives you a nice list of benefits they received. That’s a powerful list to have when you’re thinking about product positioning and messaging. That’s actionable.
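To show how little analysis that is in practice, here’s a hypothetical sketch; the `recommended` / `described_as` field names are invented for illustration, not UserVoice’s actual data model:

```python
# Hypothetical response records: a yes/no answer plus an optional description.
responses = [
    {"recommended": True,  "described_as": "simple way to collect feedback"},
    {"recommended": False, "described_as": None},
    {"recommended": True,  "described_as": "easy to set up"},
]

yes = sum(r["recommended"] for r in responses)
print(f"Recommended us: {yes}/{len(responses)} ({100 * yes / len(responses):.0f}%)")

# The open-ended answers double as positioning material.
for r in responses:
    if r["recommended"] and r["described_as"]:
        print("-", r["described_as"])
```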

Ok, enough theory, let’s see this in action.

[Figure: breakdown of responses to “Have you recommended UserVoice to anyone?”]

And here’s how the % breakdown has changed over the past year:

[Figure: UserVoice recommendations over time]

A few things suggest that the results are accurate. People recommend us more in fall and spring, which correlates with our revenue: that’s when the buying decisions happen. And folks who recommended our product were more likely to answer the question “how disappointed would you be if you could no longer use UserVoice?” (inspired by the smart folks at Survey.io) with “very disappointed”.
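That second check amounts to a simple cross-tab. A sketch of how one might run it, using invented example rows rather than our actual survey data:

```python
# Hypothetical joined survey data: did they recommend us, and how disappointed
# would they be if they could no longer use the product?
rows = [
    ("yes", "very disappointed"),
    ("yes", "very disappointed"),
    ("yes", "somewhat disappointed"),
    ("no",  "not disappointed"),
    ("no",  "very disappointed"),
]

def share_very_disappointed(group):
    answers = [d for r, d in rows if r == group]
    return sum(d == "very disappointed" for d in answers) / len(answers)

# If the yes/no question is a good satisfaction signal, the "yes" group
# should skew toward "very disappointed".
print(share_very_disappointed("yes"))  # 0.666...
print(share_very_disappointed("no"))   # 0.5
```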

And, of course, the “how did you describe UserVoice” question brought us some great insights on how to market our product (“simple”, “efficient”, “easy to set up”, etc.).
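With enough of these open-ended answers, even a crude word count surfaces the recurring descriptors. Another sketch, again with made-up answers:

```python
from collections import Counter
import re

# Hypothetical open-ended answers to "this is how I described it".
descriptions = [
    "Simple, efficient way to gather feedback",
    "Really easy to set up and simple to use",
    "Efficient feedback tool",
]

words = Counter(
    w for d in descriptions for w in re.findall(r"[a-z]+", d.lower())
    if w not in {"to", "and", "a", "the", "way"}  # trivial stopword list
)
print(words.most_common(5))
# [('simple', 2), ('efficient', 2), ('feedback', 2), ...]
```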

In conclusion…

Is this a brilliant, foolproof question? No…we firmly believe that it’s impossible for a single metric to tell you everything you need to know. But we do feel that this metric is far more telling than a complex scale that survey-takers may not understand.

Give it a try and let us know what you think, or tell us what you use to measure customer satisfaction!