Journey Management · Apr 21, 2026

When to Use Data vs. When to Talk to People

SCQA dossier (SJ95)

Situation: The debate between quantitative and qualitative research methods in customer experience work tends to get framed as a values question: data-driven teams trust numbers; design-thinking teams trust stories.
Complication: The old frame no longer explains the work cleanly.
Question: When to use data vs. when to talk to people?
Answer: Both methods are tools for understanding customer reality; the question worth asking is not which is better but which is better for the specific question you are trying to answer right now.

The debate between quantitative and qualitative research methods in customer experience work tends to get framed as a values question: data-driven teams trust numbers; design-thinking teams trust stories. This framing is not useful. Both methods are tools for understanding customer reality, and the question worth asking is not which is better but which is better for the specific question you are trying to answer right now.

Journey management uses both, in sequence and in combination, because the two methods answer different questions and because neither is sufficient without the other.

What Data Tells You

Behavioral data — drop-off rates, feature adoption patterns, time-on-task, support ticket volumes, churn rates — tells you what is happening. When the activation stage experience score is –1.4, the data shows you where in the activation flow customers are losing engagement: how many complete each step, where the largest drop-off occurs, how long the average customer takes to reach activation. This is precise, scalable, and necessary.

What the data cannot tell you is why it is happening. The drop-off at the onboarding step where customers are asked to connect their email might be the result of a confusing UI, a privacy concern, a technical failure that affects a particular browser version, or a fundamental mismatch between what customers expect to do in that moment and what the product is asking of them. Each of these causes suggests a different solution. The data shows that there is a problem at this step; it cannot distinguish between these explanations.

"Data tells you where to look. The conversation tells you what to see when you get there."

What Conversations Tell You

Qualitative research — customer interviews, ethnographic observation, facilitated inquiry — surfaces the why. A thirty-minute conversation with a customer who dropped off during activation will typically reveal the specific reason: not the statistical likelihood that any given customer experiences a confusing UI, but the actual experience this specific customer had and what they did as a result.

The limitation of qualitative research is the opposite of data's limitation: conversations are not scalable, they are not representative by default, and they are susceptible to the specific biases of the researcher and the relationship dynamics of the conversation. A customer who is interviewed by someone they know works for the company they are critiquing will naturally soften their criticism. A researcher who has strong hypotheses will naturally hear evidence that confirms them.

The three-tier confidence system is the mechanism for tracking these limitations explicitly. An insight derived from a single customer conversation is an assumption — possibly true, but not yet confirmed. An insight confirmed by multiple customers across different research contexts and corroborated by behavioral data is validated. The confidence tier tracks not just what is known but how it was learned and how reliable that learning is.
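The article names only two of the three tiers explicitly (assumption and validated). As a minimal illustrative sketch, not a described implementation, the tracking logic might look like this, where the middle tier's name ("hypothesis") and all field names are assumptions of this example:

```python
from dataclasses import dataclass, field
from enum import IntEnum

class Confidence(IntEnum):
    ASSUMPTION = 1   # derived from a single conversation, unconfirmed
    HYPOTHESIS = 2   # middle tier; name is illustrative, not from the article
    VALIDATED = 3    # multiple customers plus behavioral corroboration

@dataclass
class Insight:
    summary: str
    qualitative_sources: set[str] = field(default_factory=set)  # e.g. interview IDs
    behavioral_corroboration: bool = False

    @property
    def confidence(self) -> Confidence:
        # Track not just what is known but how it was learned:
        # the tier is computed from the evidence, never set by hand.
        multiple_sources = len(self.qualitative_sources) >= 2
        if multiple_sources and self.behavioral_corroboration:
            return Confidence.VALIDATED
        if multiple_sources or self.behavioral_corroboration:
            return Confidence.HYPOTHESIS
        return Confidence.ASSUMPTION
```

The design choice worth noting is that the tier is derived from the recorded evidence rather than stored as a separate label, so an insight cannot claim more confidence than its sources support.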

The Sequence That Works

In practice, the most productive sequence is: use data to identify where to look, use qualitative research to understand what you find there, and use data again to measure whether the solution worked.

The experience score baseline is quantitative. The discovery interviews that surface the causes of a low score are qualitative. The clustering and synthesis of discovery insights involves both: quantitative judgment about the frequency and severity of each pain pattern, qualitative judgment about the structural connections between seemingly different problems. The Big Solution is designed on the basis of qualitative insight grounded in quantitative pattern. The test plan combines a behavioral hypothesis (the drop-off rate will decrease) with a qualitative validation (customers will report feeling less confused). The delta at the end of the cycle is quantitative.

Neither method is primary. The skill is knowing, at each stage of the process, which type of evidence the next question requires — and building the research practice that can produce it.
