NPS Doesn't Work for SaaS

NPS started in 2003 when Fred Reichheld, a partner at Bain & Company, wrote about it in Harvard Business Review. The idea was simple: ask customers how likely they are to recommend your company to friends or colleagues on a 0-10 scale. Anyone who says 9-10 is a "promoter," 7-8 is "passive," and 0-6 is a "detractor." Subtract the percentage of detractors from the percentage of promoters and you get your score.
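For anyone who hasn't run the numbers themselves, here's a minimal sketch of that calculation (the function name is mine, not part of any standard NPS tooling):

```python
def nps(ratings):
    """Return the Net Promoter Score for a list of 0-10 ratings (-100 to +100)."""
    promoters = sum(1 for r in ratings if r >= 9)   # 9-10
    detractors = sum(1 for r in ratings if r <= 6)  # 0-6
    return round(100 * (promoters - detractors) / len(ratings), 1)

# Note how a 7 contributes the same as a 0: neither counts as a promoter.
print(nps([9, 10, 7, 7, 2]))  # 2 promoters, 1 detractor, 5 responses -> 20.0
```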

It worked great for consumer brands. But for B2B SaaS? Not so much.

Here's what's broken:

The recommendation question makes no sense.
When was the last time you recommended enterprise software to a friend? I don't text my college roommate about our new AI video editor. Even at work, most people aren't going around recommending random software tools unless specifically asked.

The scoring is backwards.
In most of life, 7 out of 10 is pretty good. But in NPS land, it's a negative. A customer who rates you 7/10 gets lumped in with people who rate you 2/10 when calculating your score. That's ridiculous.

It doesn't predict what matters.
I've seen customers give us a 4 on NPS, then renew and expand their contract six months later. I've also seen 9s churn without warning. The score often has little connection to actual business outcomes.

What We Could Ask Instead

Instead of "How likely are you to recommend us?", I think there are better questions we could be asking. Here are some options that might actually matter:

"How disappointed would you be if you could no longer use [product name]?"

  • Very disappointed (Sticky)

  • Somewhat disappointed (At risk)

  • Not disappointed (Churning soon)

This might tell you who actually depends on your product. Very disappointed users probably don't churn much. Somewhat disappointed users likely need attention. Not disappointed users are probably already mentally gone.

"How well is [product name] meeting your current needs?"

  • Exceeds my needs (Potential expansion candidate)

  • Meets my needs (Stable)

  • Falls short of my needs (At risk)

People whose needs are exceeded might be ready for more features or seats. People whose needs aren't met probably need help, or they'll leave.

"Have you mentioned [product name] to anyone in your professional network in the past 6 months?"

  • Yes, I've actively recommended it (True advocate)

  • Yes, I've mentioned it neutrally (Satisfied user)

  • No (Not an advocate)

Then follow up with advocates: "Would you be interested in participating in case studies, reference calls, or other marketing activities?" This could identify people who are actually talking about you, not just people who might theoretically recommend you in some imaginary scenario.

A Possible Better Benchmark

Instead of NPS, we could track what I'm calling the Dependency Score. It would ask:

"How disappointed would you be if you could no longer use [product name]?"

  • Very disappointed

  • Somewhat disappointed

  • Not disappointed

Then calculate: (% Very Disappointed) - (% Not Disappointed) = Dependency Score

This would give you a number between -100 and +100, just like NPS. But it measures something that might actually predict retention and customer satisfaction: how much customers depend on your product.

Example Dependency Score:
Total responses: 195
Very disappointed: 50 (25.6%)
Somewhat disappointed: 30 (15.4%)
Not disappointed: 115 (59.0%)
Dependency Score: 25.6% - 59.0% = -33.3 (computed from the unrounded percentages: (50 - 115) / 195)
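The example above can be sketched in a few lines of code (function name and keyword arguments are mine, for illustration):

```python
def dependency_score(very, somewhat, not_disappointed):
    """Return (% very disappointed) - (% not disappointed), from -100 to +100."""
    total = very + somewhat + not_disappointed
    return round(100 * (very - not_disappointed) / total, 1)

# The article's example: 50 / 30 / 115 out of 195 total responses.
print(dependency_score(very=50, somewhat=30, not_disappointed=115))  # -> -33.3
```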


A Dependency Score of +50 would mean the share of your customers who'd be very disappointed to lose your product exceeds the share who wouldn't care at all by 50 percentage points. That seems like it could be a much better predictor of business health than whether they'd recommend you to their brother-in-law.

Worth Testing

If you want to move away from NPS, I'd suggest doing it gradually. Keep tracking NPS for a quarter or two while you test these new questions. You might find that the Dependency Score correlates better with actual retention and expansion than NPS does.

The goal isn't to get a high score on some survey. It's to build a product that customers actually depend on and can't imagine living without. These questions can tell you if you're getting there.

Your customer success team might thank you for giving them data they can actually act on. Your product team could appreciate insights that help them prioritize features. And your executives might love having metrics that actually predict revenue.

It's time to stop pretending that a scoring system designed for consumer brands works for B2B software. Our customers deserve better questions, and our businesses deserve better insights.