CSAT vs NPS vs CES: Which Metric Actually Predicts Churn
Gartner's research across 97,000 customer interactions found that 96% of customers who reported high-effort experiences became more disloyal — compared to just 9% of those with low-effort experiences. Yet most support teams still default to CSAT as their primary metric, and many couldn't tell you what their CES score is.
What do CSAT, NPS, and CES actually measure?
CSAT measures satisfaction with a specific interaction, NPS measures long-term loyalty intent, and CES measures how much effort a customer had to spend. Each answers a different question — and confusing them is the most common mistake in CX measurement.
CSAT (Customer Satisfaction Score)
CSAT asks: "How satisfied were you with this interaction?" Customers respond on a 1-5 or 1-7 scale. Your score is the percentage of respondents who selected the top two ratings (4-5 on a 5-point scale). A team with 80 positive responses out of 100 has an 80% CSAT score.
CSAT is transactional. It tells you how a customer felt about what just happened — a resolved ticket, a product delivery, an onboarding call. It says nothing about whether they'll stick around next quarter.
NPS (Net Promoter Score)
NPS asks: "How likely are you to recommend us to a friend or colleague?" on a 0-10 scale. Respondents who score 9-10 are Promoters, 7-8 are Passives, and 0-6 are Detractors. NPS = % Promoters minus % Detractors. The score ranges from -100 to +100.
NPS was created by Fred Reichheld at Bain & Company and first published in a 2003 Harvard Business Review article titled "The One Number You Need to Grow." It's relational — designed to capture overall brand sentiment rather than reaction to a single event.
CES (Customer Effort Score)
CES asks: "How easy was it to get your issue resolved?" on a 1-7 scale (1 = very difficult, 7 = very easy). The concept came from a 2010 Harvard Business Review article by Matthew Dixon, Karen Freeman, and Nicholas Toman, which argued that reducing effort matters more than creating delight.
CES is effort-based. It measures friction — how many hoops the customer had to jump through. A customer can leave satisfied (high CSAT) and still have expended unreasonable effort getting there.
| Metric | Question | Scale | Calculation | Measures |
|---|---|---|---|---|
| CSAT | "How satisfied were you?" | 1-5 or 1-7 | % top-2 ratings | Interaction satisfaction |
| NPS | "How likely to recommend?" | 0-10 | % Promoters − % Detractors | Brand loyalty intent |
| CES | "How easy was this?" | 1-7 | Average score or % scoring 5+ | Interaction effort/friction |
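The three calculations in the table above can be sketched in a few lines. This is an illustrative implementation of the formulas as described in this article (function names and the example data are made up, not any platform's API):

```python
def csat(ratings: list[int], scale_max: int = 5) -> float:
    """% of respondents choosing the top-two ratings (e.g. 4-5 on a 5-point scale)."""
    top_two = sum(1 for r in ratings if r >= scale_max - 1)
    return 100 * top_two / len(ratings)

def nps(scores: list[int]) -> float:
    """% Promoters (9-10) minus % Detractors (0-6); ranges from -100 to +100."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def ces(scores: list[int]) -> float:
    """Average effort score on the 1-7 scale (1 = very difficult, 7 = very easy)."""
    return sum(scores) / len(scores)

print(csat([5, 4, 3, 5, 2]))              # 60.0 — three of five responses in the top two
print(round(nps([10, 9, 7, 6, 3, 10]), 1))  # 16.7 — 3 Promoters, 2 Detractors out of 6
```

Note how the middle of each scale disappears from CSAT and NPS: a 3/5 and a 7/10 both count as zero, which is why the averaged CES often discriminates better at the low end.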
Which metric is the strongest predictor of churn?
CES is the strongest single predictor of customer disloyalty. Gartner's research found CES is 1.8x more predictive of customer loyalty than CSAT and 2x more predictive than NPS.
The core finding, published in Dixon, Toman, and DeLisi's book The Effortless Experience (Portfolio/Penguin, 2013) and backed by Gartner's analysis of 97,000 customer service interactions, breaks down like this:
- 96% of customers who reported high-effort interactions became more disloyal (Gartner, 2013)
- Only 9% of customers with low-effort interactions showed increased disloyalty
- 94% of low-effort customers said they intended to repurchase, compared to just 4% of high-effort customers (Gartner, 2013)
- 88% of low-effort customers said they'd increase their spending
The gap is stark: the difference between a high-effort and low-effort experience produces a 23x gap in repurchase intent. No other metric shows this kind of behavioral spread.
Why does CES outperform? CSAT captures a moment-in-time emotion that fades. NPS captures a general sentiment that's influenced by brand perception, pricing, and marketing — not just support quality. CES captures friction, and friction is what drives the decision to leave. A customer doesn't churn because they had a "meh" interaction (mediocre CSAT). They churn because getting help was exhausting (high effort).
That said, CES has blind spots. It only captures post-interaction effort. It misses customers who churned without ever contacting support — the ones who left because the product itself was frustrating, or because a competitor ran a better campaign. That's where NPS fills the gap.
Does NPS actually correlate with revenue growth?
NPS correlates with revenue growth at the company level, but it's a weak predictor of individual customer churn. Bain & Company found that NPS leaders in their industry grow at more than twice the rate of competitors (Bain & Company, 2020), but the link between a single customer's NPS response and their renewal is loose.
The problem with NPS as a churn predictor is the scale itself. A customer who scores you a 6 (Detractor) and a customer who scores you a 0 (also a Detractor) are experiencing fundamentally different levels of dissatisfaction, but NPS treats them identically. Meanwhile, the gap between a 6 and a 7 moves a customer from Detractor to Passive — the largest behavioral cliff in the scoring system.
Retently's 2025 analysis of NPS benchmarks found that the average B2B SaaS NPS sits between +30 and +40, while B2C companies average +20 to +30 (Retently, 2025). Industry-level benchmarks from Sopact's 2026 report show wide variation:
| Industry | Average NPS | Good Score | Excellent Score |
|---|---|---|---|
| SaaS / Cloud | +36 | +40 to +55 | +55+ |
| E-commerce | +45 | +50 to +65 | +65+ |
| Financial Services | +34 | +40 to +50 | +50+ |
| Telecom | +24 | +30 to +40 | +40+ |
| Healthcare | +38 | +45 to +55 | +55+ |
NPS is most useful as a quarterly or annual benchmark that tracks overall brand health. It's the check-engine light on the dashboard — it tells you something is wrong, but not what. For diagnosing individual churn risk, you need the other two metrics.
Why does high CSAT sometimes coexist with high churn?
CSAT suffers from satisfaction inflation — customers routinely score interactions 4 or 5 out of 5 even when they're planning to leave. The American Customer Satisfaction Index reported a national average CSAT of 77.3 out of 100 in Q4 2025 (ACSI, 2025), yet churn rates across SaaS companies average 5-7% monthly for SMBs (Recurly, 2025).
Three structural problems explain the disconnect:
- Response bias. Unhappy customers don't fill out surveys — they leave. Zendesk's 2026 CX Trends report found that survey response rates for support interactions average 10-20%. The customers most likely to churn are the least likely to respond, which inflates your CSAT.
- Ceiling effects. Most CSAT distributions cluster at 4-5 on a 5-point scale. When 75-85% of responses are "satisfied" or "very satisfied," the metric loses its ability to discriminate between customers who are genuinely loyal and those who are passively accepting.
- Missing the "why." A customer can rate a ticket resolution 5/5 and still cancel next month because onboarding was confusing, pricing changed, or a competitor shipped a feature they needed. CSAT captures the tree; churn is about the forest.
SurveySparrow's 2026 industry benchmarks illustrate how variable "good" CSAT actually is: healthcare averages 78%, retail 80%, SaaS 77%, and financial services 75% (SurveySparrow, 2026). Companies with identical CSAT scores can have wildly different retention rates depending on the competitive intensity of their market and the effort required across the full customer journey — not just the interactions they survey.
CSAT works best for quality-controlling specific touchpoints: Was this ticket resolved well? Was this onboarding call helpful? It fails when teams try to extrapolate a company-wide retention forecast from a collection of interaction-level scores.
When should you use CSAT vs NPS vs CES?
Use CSAT for transactional feedback after specific interactions, NPS for quarterly brand health measurement, and CES after any interaction that involves problem-solving or multi-step processes. The best CX programs run all three — timed to different moments in the customer journey.
| Moment | Best Metric | Why |
|---|---|---|
| After a support ticket closes | CSAT + CES | Was the resolution satisfactory? Was getting there easy? |
| After onboarding completes | CES | Onboarding friction is the #1 churn driver in the first 90 days |
| After a product purchase | CSAT | Capture satisfaction with the buying experience while it's fresh |
| Quarterly / Semi-annually | NPS | Track overall relationship health and benchmark against competitors |
| After a major change (pricing, UI, policy) | NPS | Detect whether the change shifted overall sentiment |
| After self-service (FAQ, chatbot, docs) | CES | Self-service failures are invisible without effort measurement |
A practical timing pattern
Send CSAT surveys immediately after ticket resolution — within the same conversation thread if possible. Send CES alongside CSAT when the interaction involved troubleshooting, returns, or escalation. Send NPS quarterly via email, separate from any support interaction, to a random sample of active customers. Stagger NPS sends across the quarter rather than blasting everyone on the same day — this smooths the data and reduces survey fatigue.
Zonka Feedback's 2026 guide recommends limiting total survey touchpoints to three per customer per quarter to avoid response rate degradation (Zonka Feedback, 2026). If you're already sending CSAT after every ticket, adding CES to every ticket doubles the survey load and tanks response rates. Pick the touchpoints where CES is most diagnostic — post-support and post-onboarding — and skip it elsewhere.
What are the most common mistakes teams make with these metrics?
The five most damaging mistakes: treating NPS as a KPI for support agents, ignoring non-respondents, benchmarking against unrelated industries, using only one metric, and measuring without acting on the data.
1. Using NPS to evaluate individual agents
NPS reflects overall brand sentiment — it's shaped by product quality, pricing, marketing, and competitor activity as much as by support. Tying agent compensation or reviews to NPS creates perverse incentives. Agents start asking customers to "rate a 9 or 10" or timing surveys to avoid unhappy customers. CustomerGauge's 2026 B2B analysis warns that NPS gaming at the agent level distorts the entire metric (CustomerGauge, 2026). Use CSAT and CES for agent-level performance; reserve NPS for company-level tracking.
2. Ignoring non-respondents
When your CSAT survey gets a 15% response rate, you're missing 85% of your customers — and they're disproportionately the dissatisfied ones. Gartner's original CES research found that high-effort customers are 81% more likely to share negative word-of-mouth but significantly less likely to complete post-interaction surveys (Gartner, 2013). A CSAT score based on a 15% response rate is a score of your happiest customers, not your customer base.
3. Benchmarking against the wrong industry
A +24 NPS is excellent in telecom but below average in e-commerce. Sopact's 2026 NPS benchmarks show a 40+ point spread between the highest and lowest industries (Sopact, 2026). Compare your scores to your direct competitors and your own historical trend, not to cross-industry averages.
4. Running only one metric
Each metric has structural blind spots. CSAT misses effort. NPS misses transaction-level quality. CES misses customers who never contacted you. Running a single metric gives you a single perspective on a multi-dimensional problem. The minimum viable CX measurement program uses at least two: CES + CSAT for support teams, NPS + CSAT for product teams.
5. Measuring without closing the loop
Collecting scores without a process to act on low ones is worse than not measuring. Customers who give negative feedback and hear nothing back become more likely to churn than customers who were never surveyed at all — because you raised expectations of being heard and then didn't follow through. Build a closed-loop process: any Detractor (NPS 0-6), any CSAT below 3, and any CES below 4 triggers an automatic review workflow.
How do you combine CSAT, NPS, and CES into a single churn-prediction model?
Don't average them. Instead, weight each metric differently based on the customer journey stage, and use CES as your primary churn signal for support-driven attrition, NPS for relationship-driven attrition, and CSAT as the diagnostic layer underneath both.
A practical framework for small and mid-size support teams:
- Set up a customer health score that combines behavioral data (login frequency, ticket volume, feature usage) with survey data. Weight CES and NPS higher than CSAT because they're more predictive of future behavior.
- Flag at-risk accounts using thresholds: CES below 4 on the 7-point scale, NPS of 6 or below, or CSAT below 3 on two consecutive interactions. Any one of these should trigger an outreach workflow.
- Prioritize by revenue impact. A Detractor on a $5,000/year contract gets a personal call. A Detractor on a $49/month plan gets an automated check-in email. The metric tells you who's at risk; the revenue data tells you how much to invest in saving them.
- Track trends, not snapshots. A customer whose NPS dropped from 8 to 6 over two quarters is a more reliable churn signal than a customer who gave a single CES of 3 once. Longitudinal declines across any metric are more predictive than any single low score.
For teams using multi-channel support tools, matching survey responses to the specific channel where the interaction happened reveals which channels generate the most friction. A business running support across WhatsApp, email, and a website chat widget can compare CES by channel and focus improvement efforts where effort is highest. Platforms like Converge ($49/month flat rate for up to 15 agents) surface CSAT data alongside conversation history, making it possible to connect satisfaction scores to specific interactions and channels without switching tools.
What are the current CSAT, NPS, and CES benchmarks by industry?
Industry benchmarks vary by 30-50 points for NPS and 10-15 percentage points for CSAT, making cross-industry comparisons misleading. Here are the 2025-2026 benchmarks from multiple sources.
| Industry | Avg CSAT | Avg NPS | Avg CES (7-pt scale) |
|---|---|---|---|
| E-commerce / Retail | 80% | +45 | 5.4 |
| SaaS / Technology | 77% | +36 | 5.1 |
| Financial Services | 75% | +34 | 4.8 |
| Healthcare | 78% | +38 | 4.6 |
| Telecom | 72% | +24 | 4.3 |
| Insurance | 74% | +31 | 4.5 |
Sources: SurveySparrow (2026), Sopact (2026), Retently (2025), Sobot (2025), Formbricks (2026). CES figures synthesized from industry reports using 7-point scale averages.
Two patterns stand out. First, industries with simpler transactions (e-commerce, retail) score higher on all three metrics than industries with complex, regulated interactions (healthcare, financial services, telecom). Second, the industries with the lowest CES scores are also the ones with the highest churn rates — telecom's average monthly churn sits around 1.9% (Recurly, 2025), nearly double that of SaaS at 1.1%.
A 1-point improvement in CES can increase retention by up to 8% in e-commerce (OpenSend, 2025). The ROI math is straightforward: if you have 1,000 customers paying an average of $100/month, an 8% retention improvement saves $96,000/year in otherwise-lost revenue. That's why CES-focused improvement programs — reducing channel switches, eliminating repeat contacts, streamlining escalation — tend to produce a faster payback than NPS-focused brand initiatives.
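The ROI arithmetic above is easy to adapt to your own numbers. A small helper, assuming (as the example does) that an 8% retention improvement means 8% of the customer base is retained that would otherwise have churned:

```python
def annual_retention_savings(customers: int,
                             avg_monthly_revenue: float,
                             retention_gain: float = 0.08) -> float:
    """Revenue retained per year if `retention_gain` of customers stop churning."""
    retained_customers = customers * retention_gain   # e.g. 1,000 × 0.08 = 80 customers
    return retained_customers * avg_monthly_revenue * 12

print(annual_retention_savings(1_000, 100))  # 96000.0 — matches the worked example above
```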
How does channel switching affect CES and churn risk?
62% of customers who switch channels during a support interaction rate the experience as "high effort" — and high effort is the single strongest predictor of disloyalty (Gartner, 2013). Reducing channel switches is the fastest lever most support teams have for improving CES.
Channel switching happens when a customer starts on chat, gets told to call, then gets transferred to email for a follow-up. Each switch resets context, forces the customer to re-explain their problem, and adds friction. Gartner's data shows that 74% of customers find it frustrating to retell their story to a new agent or on a new channel.
The operational cost is also significant. Gartner found that a low-effort interaction costs 37% less to deliver than a high-effort one, largely because low-effort interactions stay in a single channel with a single agent.
Three tactics that directly reduce channel switching:
- Unify your inbox. When agents can see conversation history across channels, they don't need to ask the customer to repeat themselves. A unified inbox that aggregates WhatsApp, email, chat widget, and social messages into one view eliminates the information gap that forces switches.
- Route by skill, not by channel. If a customer's issue requires a specialist, route the conversation to the specialist within the same channel. Don't ask the customer to call a different number or email a different address.
- Measure CES per channel. If your email CES is 5.8 but your phone CES is 3.9, you know where effort concentrates. Most teams track CSAT per channel but skip CES, missing the effort signal entirely.
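Measuring CES per channel requires only a group-by over flat survey records. A sketch, assuming each record is a simple (channel, score) pair rather than any particular platform's export format:

```python
from collections import defaultdict

def ces_by_channel(records: list[tuple[str, int]]) -> dict[str, float]:
    """Average 1-7 effort score per channel."""
    scores_per_channel: dict[str, list[int]] = defaultdict(list)
    for channel, score in records:
        scores_per_channel[channel].append(score)
    return {ch: sum(s) / len(s) for ch, s in scores_per_channel.items()}

surveys = [("email", 6), ("email", 6), ("phone", 4), ("phone", 3), ("chat", 5)]
print(ces_by_channel(surveys))  # {'email': 6.0, 'phone': 3.5, 'chat': 5.0}
```

A gap like the 6.0-vs-3.5 split above is exactly the signal described in the text: it tells you which channel concentrates effort before you invest in fixing it.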
For teams managing multiple messaging channels, a platform that lets agents handle Telegram, WhatsApp, Instagram, and website chat from a single interface (Converge supports 10+ channels at a $49/month flat rate) removes the structural cause of channel switching rather than just managing the symptoms.
How do you build a CX measurement program from zero?
Start with CES after support interactions, add CSAT to the same survey, and introduce NPS quarterly once you have 90 days of CES/CSAT baseline data. Most teams try to launch all three simultaneously and end up with poor response rates across all of them.
A phased approach for teams just starting out:
Month 1-2: CES + CSAT post-support
Add a two-question survey after ticket resolution: "How easy was it to get your issue resolved?" (CES, 1-7) and "How satisfied were you with this interaction?" (CSAT, 1-5). Two questions keep completion rates above 30%. Embed the first question directly in the resolution message — in-channel surveys get 2-3x the response rate of email follow-ups.
Month 3: Establish baselines
After 90 days, you'll have enough data to establish baseline CES and CSAT scores per channel, per agent, and per issue type. Identify the three highest-effort touchpoints (lowest CES) and build improvement plans around them. Track CES trends weekly.
Month 4+: Add NPS quarterly
Send NPS to a random 25% sample of active customers each quarter (rotating so everyone gets surveyed once per year). Use email, not in-app, for NPS — you want to measure relationship sentiment, not reaction to a specific session. Compare your NPS quarter-over-quarter and against industry benchmarks.
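One simple way to implement the rotating 25% sample is to bucket customers deterministically by a stable hash, so each customer lands in exactly one quarter per year. This is a sketch of the rotation idea, not a prescribed implementation:

```python
import hashlib

def nps_cohort(customer_id: str) -> int:
    """Assign each customer a stable cohort 0-3 (one cohort per quarter)."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return int(digest, 16) % 4

def due_this_quarter(customer_id: str, quarter: int) -> bool:
    """quarter is 0-3 within the year; each cohort is surveyed in one quarter."""
    return nps_cohort(customer_id) == quarter

# Roughly a quarter of any customer list lands in each cohort,
# and the assignment never changes between sends.
ids = [f"cust-{i}" for i in range(1000)]
quarter_0 = [c for c in ids if due_this_quarter(c, 0)]
```

Hashing (rather than random sampling each quarter) guarantees no customer is surveyed twice in a year and no customer is skipped, and it also naturally staggers sends if you spread each cohort across the weeks of its quarter.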
Ongoing: Close the loop
Any CES ≤ 3 or CSAT ≤ 2 triggers a follow-up within 24 hours. Any NPS Detractor (0-6) gets a personalized outreach within five business days. Track how many at-risk customers were retained after closed-loop outreach to measure the program's impact.
The minimum viable toolset: a support platform that triggers post-resolution surveys (most modern support tools include CSAT natively), a spreadsheet or analytics tool for tracking trends, and an agreement on who reviews and acts on low scores weekly.
Key Takeaways
- Prioritize CES as your primary churn signal — it's 1.8x more predictive of loyalty than CSAT and 2x more predictive than NPS (Gartner, 2013).
- Send CES surveys after support interactions and onboarding, CSAT after specific touchpoints, and NPS quarterly to a rotating sample.
- Track CES per channel to find where effort concentrates — 62% of channel-switching interactions are rated high-effort.
- Benchmark against your own industry, not cross-industry averages — NPS varies by 40+ points between e-commerce (+45) and telecom (+24).
- Build a closed-loop process: any CES below 4, CSAT below 3, or NPS Detractor triggers outreach — within 24 hours for low CES/CSAT scores, within a week for Detractors.
- Combine metrics with behavioral data (login frequency, ticket volume) for a customer health score rather than relying on any single metric.
- Start with two questions (CES + CSAT) post-support before adding NPS — launching all three simultaneously tanks response rates.
Frequently Asked Questions
What is a good CES score?
A good CES on a 7-point scale is 5 or above, which translates to roughly 80%+ on a percentage scale. E-commerce and retail typically score 5.2-5.4, while SaaS companies should target 5.0-5.5. Scores below 4.5 indicate friction that's likely driving churn (Formbricks, 2026).
Can you run CSAT, NPS, and CES at the same time?
Yes, but limit total survey touchpoints to three per customer per quarter. Send CES + CSAT as a two-question survey after support interactions (in the same message). Send NPS separately via email, quarterly, to a rotating 25% sample. Don't stack NPS on top of a support survey — it muddies both signals.
Is NPS still worth measuring?
NPS remains the most-used CX metric globally and the strongest benchmark for comparing brand health against competitors. Its weakness is individual churn prediction — a single NPS response is a poor predictor of whether that specific customer will renew. Use NPS at the company level, not the ticket level.
Why doesn't CSAT predict churn?
Response bias. Unhappy customers skip surveys and leave. With 10-20% response rates typical for post-interaction CSAT, you're scoring your happiest customers, not your customer base. This inflates scores and hides the churn risk that non-respondents represent.
What is the ROI of improving CES?
A 1-point CES improvement can increase retention by up to 8% in e-commerce (OpenSend, 2025). Multiply your monthly revenue by your churn rate, then by 0.08, then by 12 to estimate annual savings. For a company with $100K monthly revenue and 5% monthly churn, that's roughly $4,800/year in retained revenue per 1-point CES improvement.
Ready to try Converge?
$49/month flat. Up to 15 agents. 14-day free trial, no credit card required.
Start Free Trial