
Measuring CX Performance

Series: Civics
Date: 2025-12-19
Tags: ["cx", "analytics", "performance", "measurement", "impact"]

This article covers measuring customer experience, moving from outputs to impact.

I have worked in, on, and around civic engagement for public agencies for more than ten years. User-centered approaches continue to gain popularity, "CX" and customer experience have emerged as disciplines in the public sector, and measurement of effective service delivery has followed to support this trend.

Public agencies spend substantial resources to instrument websites with analytics and create dashboards and reporting tools to inform staff and the public.


Here's a list of common CX metrics, how they roll up from outputs to outcomes to impact, and how to ground "performance" in long-standing accounting and business practices like ROI, not just sentiment.

Output Metrics (Activity & Engagement)

  • Website visits / sessions / unique users: Indicates reach; easily gamed; no guarantee of completion or value.
  • Page views & time on page: Measures attention, not success. Longer time may mean confusion.
  • Click-throughs / funnel step completions: Useful for diagnosing drop-offs; still activity-level.
  • “Was this page helpful?” votes (thumbs up/down): Lightweight signal of perceived usefulness; biased toward extremes; not an outcome.
  • Quantitative survey responses (CSAT, 5-star, Likert): Fast pulse on satisfaction; sample bias and recency effects apply.
  • Qualitative survey responses (open text): Rich context for root causes; low N and harder to quantify; best for shaping hypotheses.
  • Trust metrics (e.g., trust-in-agency or trust-in-service): Perception signal; lagging and influenced by macro factors outside the service.
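Output metrics earn their keep as diagnostics. As a minimal sketch of how funnel step completions reveal drop-offs (the step names and counts here are hypothetical):

```python
# Hypothetical funnel: ordered (step name, users reaching step) pairs.
funnel = [
    ("landing", 10_000),
    ("form_start", 4_200),
    ("form_submit", 2_100),
    ("confirmation", 1_950),
]

def step_conversion(steps):
    """Conversion rate between each pair of consecutive funnel steps."""
    return [
        (f"{a} -> {b}", n_b / n_a)
        for (a, n_a), (b, n_b) in zip(steps, steps[1:])
    ]

for label, rate in step_conversion(funnel):
    print(f"{label}: {rate:.0%}")
```

The largest drop between consecutive steps marks the friction worth investigating first, but this is still activity-level data, not proof of success.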

Outcome Metrics (Service Completion & Quality)

  • Completion rate: % of users who start and finish the intended journey (e.g., application submitted, claim filed).
  • First-pass yield / first-time-right rate: % processed without rework or resubmission—strong proxy for clarity and operational quality.
  • Cycle time / time to decision: User-facing latency from start to finish; shorter is usually better if accuracy is maintained.
  • Abandonment rate: % who drop off before completion; reveals friction more than sentiment scores.
  • Error rate / rejection rate: How often the service fails the user; key for quality and fairness.
  • Queue and wait times (virtual or in-person): Directly felt by users; ties to staffing and capacity.
  • Accessibility and equity coverage: Completion and quality across demographics and channels; highlights structural gaps.
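The core outcome ratios above reduce to simple arithmetic over case counts. A minimal sketch (the counts are illustrative):

```python
def outcome_metrics(started, completed, first_time_right):
    """Core outcome ratios for a service journey.

    started: users who began the journey
    completed: users who finished it
    first_time_right: completions processed without rework or resubmission
    """
    completion_rate = completed / started if started else 0.0
    return {
        "completion_rate": completion_rate,
        "abandonment_rate": 1.0 - completion_rate,
        "first_pass_yield": first_time_right / completed if completed else 0.0,
    }

# Illustrative: 1,000 starts, 640 completions, 512 processed without rework.
print(outcome_metrics(started=1_000, completed=640, first_time_right=512))
```

Note that first-pass yield is computed over completions, not starts; a service can have a high completion rate and still generate heavy rework.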

Impact Metrics (Value, Sustainability, Equity)

  • Benefit uptake / coverage vs. eligible population: Measures how much of the total addressable population receives the benefit; anchors to saturation.
  • Value realized by users: Harder to measure; can be proxied via post-service status (e.g., benefits delivered, disputes resolved, reinstatements).
  • Reduction in downstream burden: Fewer appeals, escalations, or repeat contacts per user; signals durable quality.
  • Cost-to-serve per completed outcome: Total cost / completed cases; central for ROI.
  • Return on investment: Net value delivered vs. cost (financial or societal proxy); ties CX to accounting discipline.
  • Resilience and reliability: Uptime, incident rates, and recovery times; foundational for trust and sustained impact.
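Cost-to-serve and ROI, as defined above, are simple ratios; the hard part is the inputs, not the math. A sketch with illustrative figures:

```python
def cost_to_serve(total_cost, completed_cases):
    """Fully loaded cost divided by successfully completed cases."""
    return total_cost / completed_cases

def roi(value_delivered, total_cost):
    """Net value delivered relative to cost (financial or societal proxy)."""
    return (value_delivered - total_cost) / total_cost

# Illustrative figures: a $1.2M program completing 8,000 cases and
# delivering $2.0M in benefits (a societal-value proxy).
print(f"cost per completed outcome: ${cost_to_serve(1_200_000, 8_000):,.2f}")
print(f"ROI: {roi(2_000_000, 1_200_000):.0%}")
```

The denominator discipline matters: dividing by completed cases rather than total visits is what ties spending to outcomes instead of activity.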

Flow From Outputs → Outcomes → Impact

  • Outputs fuel diagnostics: Engagement and perception data (visits, clicks, helpfulness votes, surveys, trust) show where attention is and how users feel, but they don’t prove success.
  • Outcomes prove service performance: Completion, first-pass yield, cycle time, abandonment, and error rates show whether users actually get through and receive correct service.
  • Impact ties to mission and economics: Coverage vs. eligible population, value realized, reduced rework, and cost/ROI show whether the service achieves its purpose sustainably and equitably.

Visual shorthand:

Outputs (visits, clicks, votes, surveys, trust)
↓ diagnose & prioritize
Outcomes (completion, first-pass yield, cycle time, abandonment, errors)
↓ demonstrate service performance
Impact (coverage vs. eligible population, value realized, cost-to-serve, ROI, equity)
↓ sustain and scale

Grounding CX in Performance (Accounting + ROI)

  • Cost per completed outcome: Track the fully loaded cost to deliver a successful case; lower it without sacrificing quality.
  • Marginal cost and capacity: Understand the cost and latency to serve the next user; informs staffing, automation, and channel mix.
  • Productivity and rework: Measure rework rates and appeals; rework inflates cost and erodes trust.
  • Capital vs. operating spend: Separate one-time investments from run costs to understand payback periods.
  • Portfolio view: Compare services on uptake vs. eligible population and cost-to-serve to prioritize investments.
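Separating capital from operating spend makes payback periods computable. A minimal sketch, assuming the one-time investment is recovered through lower monthly run costs (all figures hypothetical):

```python
def payback_months(one_time_investment, monthly_run_cost_before, monthly_run_cost_after):
    """Months until a one-time investment is recovered by lower run costs.

    Returns None if run costs did not fall, i.e. the investment never pays back
    on cost savings alone.
    """
    monthly_savings = monthly_run_cost_before - monthly_run_cost_after
    if monthly_savings <= 0:
        return None
    return one_time_investment / monthly_savings

# Hypothetical: a $300k automation build cuts run cost from $80k to $55k/month.
print(payback_months(300_000, 80_000, 55_000))
```

This ignores value delivered to users, so it is a floor on the case for investment, not the whole case.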

Guardrails Against Vanity Metrics

  • Require outcome linkage: Every output metric should map to a hypothesized lift in outcomes; if not, deprioritize.
  • Saturation anchoring: Track coverage against the total eligible population and opt-outs; prevents celebrating small local wins while many remain unserved.
  • Equity checks: Break down outcome and impact metrics by demographic and channel; close gaps before chasing new features.
  • Validation loops: Pair survey signals with behavioral and operational data to confirm whether perceived improvements translate into real performance.
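The equity check above can be automated as a simple gap flag: compare each group's completion rate to the best-served group and surface any gap over a threshold (group names, rates, and the 5-point threshold here are all hypothetical):

```python
def equity_gaps(completion_by_group, max_gap=0.05):
    """Flag groups whose completion rate trails the best-served group
    by more than max_gap (expressed as a proportion)."""
    best = max(completion_by_group.values())
    return {
        group: best - rate
        for group, rate in completion_by_group.items()
        if best - rate > max_gap
    }

# Hypothetical completion rates by channel.
rates = {"online": 0.78, "phone": 0.71, "in_person": 0.64, "mail": 0.52}
print(equity_gaps(rates))
```

Flagged groups are candidates for closing gaps before chasing new features, in line with the guardrail above.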

Constructive stance: use common CX metrics as diagnostics, but ground performance in completed outcomes, coverage of the eligible population, cost-to-serve, and ROI. Human outcomes matter most—even if they’re harder to measure—and accounting discipline keeps CX focused on substantive improvements instead of vanity signals.