Use Internal Insights as Hypotheses, Not as Facts
Internal discovery produces a rich set of observations about the customer experience — what colleagues believe customers struggle with, what they think customers need, where they assume the experience is working or failing. This material is genuinely valuable. It is also second-hand. And treating it as first-hand evidence is one of the most common analytical errors in journey work.
The shift from "this is what we know" to "this is what we think" is not a methodological nicety. It is the practical precondition for conducting useful external research.
The Gap Between Internal Belief and External Reality
Organizations develop strong internal models of their customers over time. These models are built from customer service patterns, sales feedback, product usage data, and the accumulated lore of teams who have been working on the same service for years. They are often substantially correct — experienced teams do develop genuine insight into customer behavior.
They are also systematically biased.
Internal models tend to overweight problems that have surfaced loudly (the complaints that generated escalations, the features that drove the most support tickets) and underweight problems that customers have quietly accommodated — the frictions they work around without complaining, the needs they have stopped articulating because past experience has shown that articulating them produces no response.
They also tend to reflect the organization's structure as much as the customer's experience. Teams describe the journey from the perspective of the functions they own. The onboarding experience, as described by the product team, reflects product's version of onboarding. The post-purchase experience, as described by customer service, reflects customer service's version of it. The seams between these perspectives — the places where the customer's continuous experience crosses organizational boundaries — are often invisible in internal accounts.
"When Discovery ends, you don't have answers — but a direction. Use internal insights as hypotheses: does this reflect your experience? Does this problem exist for you? What's actually happening here?"
How to Carry Internal Insights Into External Research
The mechanism is straightforward: take the patterns identified in internal discovery and convert them into testable hypotheses for external interviews.
"We believe customers have difficulty assessing product quality before purchase because our product descriptions are written for enthusiasts rather than newcomers." This is a hypothesis. It might be exactly right. It might be wrong in the detail — perhaps the difficulty is in comparing products rather than understanding individual ones. Or it might be wrong entirely — perhaps customers trust brand reputation and the product descriptions are irrelevant to the purchase decision.
Present the hypothesis to customers explicitly. "We've heard from our team that there's confusion around X — does that match your experience?" This approach gives customers a clear frame to respond to, makes the conversation efficient, and signals that the organization has already been thinking seriously about their challenges.
When the customer confirms the hypothesis, the insight moves up the confidence tier, from Assumption (or Internally Reasoned) to Validated. When the customer provides a more nuanced version — "yes, but the actual problem is Y" — the insight is refined and becomes more actionable. When the customer contradicts the hypothesis — "that hasn't been my experience at all" — the internal model is corrected, and the organization avoids investing in solving the wrong problem.
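For teams that track insights in a backlog or spreadsheet, these transition rules can be made mechanical. The sketch below is one illustrative way to encode the three tiers and the interview outcomes described above; the names (`Confidence`, `apply_interview_outcome`, the outcome labels) are assumptions for the example, not part of any prescribed tooling.

```python
from enum import Enum

class Confidence(Enum):
    """The three confidence tiers an insight can carry."""
    ASSUMPTION = 1
    INTERNALLY_REASONED = 2
    VALIDATED = 3

def apply_interview_outcome(tier: Confidence, outcome: str) -> Confidence:
    """Update an insight's tier after a customer interview.

    `outcome` is one of 'confirmed', 'refined', or 'contradicted'
    (illustrative labels for the three responses discussed above).
    """
    if outcome == "confirmed":
        # External confirmation moves the insight to the top tier.
        return Confidence.VALIDATED
    if outcome == "refined":
        # The reworded, more specific insight is validated in its new form.
        return Confidence.VALIDATED
    if outcome == "contradicted":
        # The internal model was wrong; the insight drops back to an
        # assumption and the hypothesis must be reformulated.
        return Confidence.ASSUMPTION
    raise ValueError(f"unknown interview outcome: {outcome}")
```

The point of encoding this, even informally, is that the tier becomes a visible field on every insight rather than an impression held in someone's head, which is what makes the risk accounting in the next section possible.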
The Confidence Tier in Practice
This is why the confidence tier system — tagging each insight as Assumption, Internally Reasoned, or Validated — is more than an academic classification. It is an operational tool for managing the risk in decision-making.
A team that decides to invest in addressing a high-confidence, validated insight is taking a calculated risk. A team that invests in addressing an assumption is taking a larger risk — which may be acceptable if the opportunity is large enough, but should be visible as a risk rather than hidden as a certainty.
Making assumptions visible does not paralyze decision-making. It makes decision-making more honest. The organization can choose to act on an assumption when the potential upside justifies it — but it knows it is doing so, builds a test plan accordingly, and measures the result rather than treating the investment as self-evidently correct.