Issue No. 218 · 29 April MMXXVI
Editorial Quarterly · Customer Architecture Review

The Customer Relationship Audit, MMXXVI: Why most CRMs fail their second year — and what the operators whose systems don't are doing instead.

The Claritas Editorial Board · Issue 218

For eighteen years we have observed, instrumented, and occasionally rebuilt customer-relationship systems for institutions across Asia-Pacific. The pattern that recurs — across industries, across team sizes, across the reigning fashions in software — is the one we wish we had named earlier.

CRMs do not fail because they lack features. They fail because they were procured as software and operated as furniture. The systems that survive their second year are the ones treated as architecture: opinionated about the work, integrated into the pipe, and defended by an operator who understands the building.

What follows is the field guide we wish had existed in 2008. The methodology is at the back. The data are at claritas.asia/data/MMXXVI.

§ Section 02
The Three Failures
Three patterns recur in the data. They are not, individually, surprising. Together, they describe the entire failure surface of the modern CRM deployment.
I. The Procurement Trap

CRMs are bought by IT, deployed by Operations, and used by Sales — three departments with different definitions of success. Most failed deployments were already failing at the RFP stage.

Source: Claritas Customer Architecture Review, n=247 enterprise customers, 2008–MMXXV.
II. The Adoption Cliff

Median weekly active use of CRM seats falls 64% between month 3 and month 9. The cliff is not a training problem; it is the moment the operator realises the system is asking for more than it gives back.

See Methodology Note B for the cohort definition and seat-activity sampling.
III. The Architecture Premium

Deployments treated as architecture — opinionated schema, defended by a single operator, integrated at the event layer — show 4.2× the 5-year retention of deployments treated as software.

Architecture defined per the Claritas Framework, §3 (the operator, the event log, the integration spine).
Exhibit A
Comparative outcomes
n=247 · MMXX–MMXXV · self-reported, audited subset
Industry baseline drawn from the IDC Customer-Engagement Survey, MMXXV. Claritas figures audited annually by Ernst & Young, Singapore.
Metric                               Industry baseline¹   Claritas cohort   Δ²
Median weekly active use, mo. 9      23%                  78%               +239%
Forecast accuracy, ±10% of forecast  31%                  84%               +171%
Time to first value (days)           84                   11                −87%
5-year renewal rate                  22%                  94%               +327%
Net revenue retention (NRR)          98%                  138%              +40 pts
Operator hours saved / wk / seat     —                    9.4               —
¹ "Industry baseline" defined as the median of the IDC dataset, excluding deployments under 100 seats. ² "Δ" computed as relative change against the industry baseline median.
Methodology
How we measure

Sample: All 247 enterprise customers under contract continuously between MMXX and MMXXV. Excluded: 18 customers under 12-month tenure; 6 customers in active migration.
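
Expressed as a filter, the sample rule reduces to a single predicate. A sketch with illustrative field names — not the audit query itself:

    # Cohort rule above, as a predicate over a customer record. The field
    # names (contract_start_year, contract_end_year, tenure_months,
    # migrating) are illustrative, not Claritas's schema.
    def in_cohort(c: dict) -> bool:
        return (c["contract_start_year"] <= 2020     # under contract from MMXX...
                and c["contract_end_year"] >= 2025   # ...through MMXXV, continuously
                and c["tenure_months"] >= 12         # drops the 18 short-tenure accounts
                and not c["migrating"])              # drops the 6 in active migration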

Adoption: Weekly active use defined as ≥3 distinct authenticated sessions per seat-week, ≥30s active per session.
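
As a computation, that definition is a set-membership test per seat-week. A minimal sketch, assuming a session log with illustrative field names:

    from collections import defaultdict

    # Weekly active use per the definition above. Session-log fields
    # (seat_id, week, session_id, active_seconds) are illustrative.
    def active_seat_weeks(sessions):
        qualifying = defaultdict(set)
        for s in sessions:
            if s["active_seconds"] >= 30:            # >=30s active per session
                qualifying[(s["seat_id"], s["week"])].add(s["session_id"])
        # A seat-week counts only with >=3 distinct qualifying sessions.
        return {sw for sw, ids in qualifying.items() if len(ids) >= 3}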

Forecast accuracy: Quarter-end committed forecast vs. quarter-end booked revenue, absolute deviation as % of forecast.
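
The accuracy test, written out — a sketch with the ±10% tolerance from Exhibit A; the names are ours, not Claritas's:

    # Forecast accuracy per the definition above: absolute deviation of
    # booked revenue from the committed forecast, as a share of forecast.
    def forecast_hit(committed: float, booked: float, tol: float = 0.10) -> bool:
        return abs(booked - committed) / committed <= tol

    # Cohort accuracy is then the share of quarters that hit, e.g.
    #   sum(forecast_hit(c, b) for c, b in quarters) / len(quarters)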

Audit: Renewal, NRR, and forecast figures audited annually by Ernst & Young, Singapore. Audit memos available to subscribers under NDA.

Available to subscribers

The full report — 47 pages, with audit annex — is available without charge to qualified institutional readers.

REQUEST THE FULL REPORT → · BRIEF THE EDITORIAL BOARD

Subscribers receive the quarterly print edition by post and have access to the underlying dataset.