
The Customer Who Left Last Month Tried to Tell You in October

When you lose a customer, the signals were there months earlier — in the emails they stopped opening, the meetings they cancelled, and the questions they stopped asking.

Bill Eisenhauer
March 17, 2026 · 5 min read

A veterinary practice management software company lost 12 customers over six months. Each cancellation felt sudden — a termination email with a polite explanation and a 30-day notice. The team marked each one as churn, updated the spreadsheet, and moved on.

Then they did something most businesses never do: they went back and looked at the data.

All 12 customers showed clear warning signals 90 days before cancellation. Every one. Engagement had dropped — fewer logins, fewer support tickets, fewer features adopted. In 11 of 12 cases, the primary contact at the practice had changed (an office manager left, and the replacement never logged in). The signals were sitting in the data the entire time. Nobody was watching.

The financial cost of those 12 missed signals: $98,000 in annual recurring revenue that walked out the door while the warning lights were flashing.

What does a customer exit actually look like in slow motion?

Looking forward, churn feels sudden. Looking backward, it almost never is. The data across industries shows an 84-120 day deterioration pattern — a slow-motion exit that’s visible in the numbers if you know where to look.

Here’s what the typical pattern looks like when you reconstruct it after the fact:

Day -120 to -90: The engagement cliff. The customer’s interaction frequency drops from their personal baseline. Not to zero — just lower. Fewer logins. Fewer support requests. Fewer responses to your emails. This stage is the hardest to detect because the customer still appears “active” in aggregate metrics. But relative to their own history, something has shifted.

Day -90 to -60: The support spike, then silence. Many churning customers go through a brief period of elevated support contact — they’re trying to make the product work for a changed situation. When it doesn’t resolve, they stop contacting support entirely. This spike-then-silence pattern is one of the most reliable churn predictors, hitting a 78% accuracy rate in the case studies I’ve reviewed (a detection sketch follows this timeline).

Day -60 to -30: The stakeholder drift. The person who chose you stops showing up. In B2B relationships, when the decision-maker delegates all interaction to a junior team member, the relationship is in hospice. The delegate doesn’t have the context, the history, or the authority to advocate for keeping you.

Day -30 to 0: The quiet exit. By this point, the customer has mentally moved on. The cancellation email is a formality — the decision was made weeks or months ago. Any save attempt at this stage has a success rate below 10%. The window for intervention closed 60 days earlier.
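That spike-then-silence stage is also the easiest to automate. Here is a minimal sketch in Python, assuming you can export weekly support-contact counts per customer; the window sizes and the 2x spike factor are placeholder assumptions to tune against your own helpdesk data.

```python
import pandas as pd

def spike_then_silence(weekly_contacts, spike_factor=2.0,
                       spike_window=6, silent_weeks=4):
    """Flag a support spike followed by near-total silence.

    weekly_contacts: support-contact counts indexed by week, oldest first.
    """
    if len(weekly_contacts) < spike_window + silent_weeks + 4:
        return False  # not enough history for a stable baseline
    baseline = weekly_contacts.iloc[:-(spike_window + silent_weeks)].mean()
    spike = weekly_contacts.iloc[-(spike_window + silent_weeks):-silent_weeks]
    silence = weekly_contacts.iloc[-silent_weeks:]
    return bool((spike > spike_factor * max(baseline, 1.0)).any()
                and (silence == 0).all())

# A customer at 1-2 contacts a week spikes to 4-6, then goes quiet for a month:
history = pd.Series([1, 2, 1, 1, 2, 1, 1, 2, 5, 6, 4, 1, 0, 0, 0, 0])
print(spike_then_silence(history))  # True
```

Note the guard clause: with too little history to establish a baseline, the check abstains rather than guesses.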

Why don’t businesses catch these signals?

They measure averages, not baselines. Most dashboards show aggregate metrics: total logins, average support tickets, overall engagement scores. A customer whose engagement dropped 45% from their personal norm looks “fine” in an aggregate view — because their reduced usage still falls within the normal range for the customer base. The signal is in the change from baseline, not the absolute number.

They track activity, not trajectory. A customer who logged in 3 times this month and 3 times last month looks stable. But if they logged in 12 times a month for the previous year, 3 logins is a 75% drop. Without historical trajectory, the deterioration is invisible; a short sketch below shows the computation.

They respond to complaints, not silence. Most businesses have a support process for customers who raise issues. Almost none have a process for customers who go quiet. The silence is interpreted as satisfaction — “no news is good news” — when it’s often the opposite. The customer has stopped investing effort in the relationship.
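All three failures share one fix: compare each customer to their own history, not to the crowd. Here is a minimal sketch in Python of that baseline-relative computation, using the 12-logins-then-3 example above; the column names and the 12-month window are illustrative assumptions, not a prescription.

```python
import pandas as pd

logins = pd.DataFrame({
    "customer": ["acme"] * 13,
    "month":    pd.period_range("2025-01", periods=13, freq="M"),
    "logins":   [12, 11, 13, 12, 12, 11, 13, 12, 12, 11, 13, 12, 3],
})

def drop_from_baseline(df, baseline_months=12):
    """Percent drop of the latest month versus the customer's own trailing average."""
    df = df.sort_values("month")
    baseline = df["logins"].iloc[-(baseline_months + 1):-1].mean()  # personal norm
    latest = df["logins"].iloc[-1]
    return 100 * (baseline - latest) / baseline

print(round(drop_from_baseline(logins)))  # 75: 3 logins against a 12-a-month norm
```

In an aggregate dashboard this customer still clears the bar; against their own trailing average, the alarm is unmistakable.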

What does early detection change?

The veterinary software company implemented one change after their retrospective analysis: when any customer’s engagement dropped 40% from their 90-day average, the account manager received an alert.

The intervention was simple — a genuine check-in: “I noticed we haven’t connected in a few weeks. Is everything working the way you need it?” No discount. No save offer. Just attention.

Of the next four at-risk customers flagged by the system, they retained three. $36,000 in annual revenue preserved — from a process that required no new tools, no new staff, and about 20 minutes per intervention.

The retention literature is consistent on this point: timely beats complex. A genuine check-in at the 60-day mark saves more customers than a 20% discount at the cancellation mark. By the time someone is cancelling, the relationship is over. Sixty days earlier, it’s still recoverable.

How do you build the retrospective view?

Before you can catch signals going forward, you need to understand what signals you’ve been missing:

Pull your last 6 months of lost customers. For each one, reconstruct three data points: their engagement pattern over the 120 days before cancellation (logins, support tickets, email opens, meeting attendance — whatever you track), any contact changes (did the primary person leave?), and any payment friction (failed charges, late payments, billing disputes).

Look for the shared pattern. After 5-6 reconstructions, the pattern will emerge. Maybe your churn customers all reduce email engagement 60 days before cancelling. Maybe they all skip the quarterly review. Maybe they all have a support spike followed by silence. The specific pattern varies by business — but there is always a pattern.

Set the threshold and monitor. Once you know what the deterioration looks like in your business, define the trigger. “Engagement drops 40% from 90-day average” or “no support contact for 45 days when baseline is weekly.” The threshold doesn’t need to be perfect — it needs to exist. A false alarm that prompts an unnecessary check-in costs you 15 minutes. A missed signal that leads to a cancelled customer costs you thousands.
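If your interaction data lives anywhere you can query, both the retrospective pull and the go-forward trigger fit in two short functions. A sketch in Python with pandas follows; the event-log shape, the column names, and the 40%-below-baseline rule are all assumptions to adapt to whatever you actually track.

```python
import pandas as pd

# Hypothetical event log: one row per interaction (login, ticket, email open, ...)
events = pd.DataFrame({
    "customer":  ["acme"] * 8,
    "timestamp": pd.to_datetime([
        "2025-09-22", "2025-09-25", "2025-10-01", "2025-10-08",
        "2025-10-15", "2025-11-03", "2025-12-01", "2026-01-02",
    ]),
})

def pre_cancellation_window(events, customer, cancel_date, days=120):
    """Weekly interaction counts over the `days` before a cancellation."""
    start = cancel_date - pd.Timedelta(days=days)
    mask = ((events["customer"] == customer)
            & (events["timestamp"] >= start)
            & (events["timestamp"] < cancel_date))
    return events.loc[mask].resample("W", on="timestamp").size()

def should_alert(weekly_counts, drop_threshold=0.40, baseline_weeks=13):
    """True when the last two weeks run 40%+ below the ~90-day baseline."""
    if len(weekly_counts) < baseline_weeks + 2:
        return False  # not enough history to establish a personal baseline
    baseline = weekly_counts.iloc[:baseline_weeks].mean()
    recent = weekly_counts.iloc[-2:].mean()
    return baseline > 0 and (baseline - recent) / baseline >= drop_threshold

# Retrospective: reconstruct one lost customer's final 120 days
weekly = pre_cancellation_window(events, "acme", pd.Timestamp("2026-01-15"))
print(weekly)  # steady weeks, then a taper toward zero
```

The same `should_alert` check, run weekly over live data instead of a post-mortem window, approximates the kind of trigger the veterinary software company wired to its account managers.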

What does AI actually do for churn detection?

AI turns churn detection from a manual retrospective exercise into a continuous, automatic monitoring system. An AI churn detection system tracks every customer’s engagement against their personal baseline — not an aggregate average — and flags the moment any metric deviates by a threshold you define. It correlates signals across multiple data sources simultaneously: email engagement dropping while support tickets spike while payment gets delayed. No human can monitor these patterns across 200 customers daily. AI does it without effort, and surfaces only the accounts that need human attention — typically 3-5 per month, each with a specific explanation of what changed and when. The intervention stays human. The detection becomes automated.
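To make the correlation step concrete, here is a minimal sketch of the flagging rule itself (not a trained model); the metric names, the 40% threshold, and the two-signal minimum are illustrative assumptions, and the baseline-relative changes are assumed to be computed upstream, per customer.

```python
def flag_account(changes, threshold=-0.40, min_signals=2):
    """Return an explanation when 2+ metrics deviate past the threshold.

    changes: per-metric percent change versus the customer's own baseline
             (computed upstream; negative means decline).
    """
    hits = {m: pct for m, pct in changes.items() if pct <= threshold}
    if len(hits) < min_signals:
        return None  # one noisy metric alone shouldn't page a human
    return "; ".join(f"{m} down {abs(pct):.0%} vs. personal baseline"
                     for m, pct in sorted(hits.items()))

# One customer's current deviations from their own 90-day baselines:
print(flag_account({"email_opens": -0.55, "logins": -0.48, "support_tickets": 0.10}))
```

Requiring two concurrent signals is what keeps the surfaced list down to a handful of accounts a month instead of a wall of false alarms.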

Key takeaways

  • Customer exits follow an 84-120 day deterioration pattern that’s visible in engagement data, support patterns, stakeholder changes, and payment behavior. By the time the cancellation arrives, the decision was made months ago.
  • The most reliable churn signal is a support spike followed by silence — the customer tried to make it work, failed, and disengaged. This pattern predicts churn with roughly 78% accuracy in the case studies I’ve analyzed.
  • Timely beats complex. A genuine check-in at the 60-day mark retains more customers than a discount at the cancellation mark. The window for intervention is narrow — and most businesses miss it because nobody is watching.
  • Start with a retrospective: pull your last 6 months of lost customers and reconstruct their engagement pattern over the 120 days before they left. The shared pattern you find is the early warning system you’ve been missing.

What's hiding in the customer data you're not using?

This article explored one category. The free diagnostic scores all four — and gives you a dollar estimate in 90 seconds.

Take the Free Diagnostic