A Three-Actor System: A Structural Shift in CX Design
Customer-centricity was a structural correction. Before it, businesses designed from the inside out (from what they had, what they made, what they wanted to sell). The shift toward understanding what customers were actually trying to accomplish changed how products were conceived. That was real progress, and it did produce better products.
The problem it left unresolved is visible in most product and service design workrooms today: ideas are developed around desirability (what the customer wants) while viability and feasibility enter the conversation late, as filters rather than as constitutive inputs [Osterwalder, Value Proposition Design, 2014; IDEO design thinking framework].
This compression was always a structural weakness, but it erred in favour of the customer, and that was tolerable. What makes it consequential now is AI: when AI systems arrive as late-stage implementation decisions, the customer gets less value, and the business offering the service inherits problems that can become business-critical.
So AI needs to be designed in natively: AI systems carry decision-making logic, operational boundaries, and failure modes that belong in the earliest conversations about what a service should do.
The three elements here are actors, each with its own requirements. Bolting intelligence onto a model designed without it produces systems that are incoherent at the level of their own design: capable in narrow conditions, unstable when those conditions change.
---
Three Actors, Each With Requirements
Any service system operating with AI involves three distinct subjects: the customer, the business, and the system itself.
The customer brings intent, attention, and trust — and continuously evaluates whether the interaction is worth the cost of their continued engagement. Jobs to Be Done captures this well [Christensen, Competing Against Luck, 2016]: the customer has a specific progress they are trying to make, and the entire relationship with the service turns on whether that progress is actually happening.
The business brings infrastructure, governance, and capability — and carries constraints around sustainability, regulatory compliance, risk, and strategic positioning that are not peripheral to design but constitutive of what is possible. A service designed without accounting for those limits is not user-centred; it is incomplete. Creative work could get away with ignoring them because creativity (the abundance of ideas flying around on sticky notes) was expensive. Now that cost is trending towards zero, and there is no turning back.
The system — the AI agent — is now the third actor, and this is where the structural shift is clearest. A website built in 2010 was a designed possibility space: every state was anticipated, every output was predetermined. When it failed, it simply failed technically.
An AI-native system generates responses through inference, reads context, and produces outputs its designers never explicitly coded. The same input in a different context produces a different result, because the system exercised judgment. That is what separates an actor from a tool: not consciousness, but the capacity to transform a situation rather than merely transmit through it [Latour, Reassembling the Social, 2005] (this citation is a proud moment for continental philosophy).
This is also legible in how we respond when AI systems fail. A broken website is a bug. A failing AI system produces bias, hallucination, breach of trust: categories that carry moral and legal weight. The entire apparatus of AI governance and regulation is a structural acknowledgement that these systems are no longer neutral instruments [EU AI Act, 2024]. This change in legal perception alone is enough to treat the system as an actor that must be actively designed.
---
Alignment as the “Design Problem”
What changes when all three actors are taken seriously is the nature of the design problem itself. The question is how to maintain coherence across three sets of requirements simultaneously: customer progress, business sustainability, and agent reliability. Donella Meadows identified suboptimisation as one of the most common failure modes in complex systems: optimising a part for its own performance tends to degrade the whole [Meadows, Thinking in Systems, 2008]. As bad as this sounds, customer-centricity, as currently practiced, is a suboptimisation — it produces local excellence in one actor's experience while the structural conditions that make that experience possible erode.
Meadows also observed that systems with strong feedback loops between their components are more resilient than those where components operate in isolation [ibid]. Each of the three actors in this model generates feedback the others must absorb and respond to: the customer's trust affects what the agent is permitted to do; the agent's decisions affect business risk; business constraints shape what the agent can offer. Designing without making those feedback relationships explicit removes the very mechanism by which the system could learn.
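To make that three-way circuit concrete, here is a minimal sketch in Python. Every name (`trust_score`, `risk_budget`, `allowed_actions`) and every threshold is an illustrative assumption for this post, not a construct from the frameworks cited above:

```python
from dataclasses import dataclass

# Illustrative sketch: three actors exchanging feedback signals.
# Names and thresholds are invented for this example.

@dataclass
class Customer:
    trust_score: float  # 0.0-1.0, updated after each interaction

@dataclass
class Business:
    risk_budget: float  # remaining tolerance for agent-driven risk

@dataclass
class Agent:
    def allowed_actions(self, customer: Customer, business: Business) -> list[str]:
        # Customer trust gates what the agent may attempt;
        # business constraints shape what it can offer.
        actions = ["answer_question"]
        if customer.trust_score > 0.5:
            actions.append("make_recommendation")
        if customer.trust_score > 0.8 and business.risk_budget > 0.2:
            actions.append("execute_transaction")
        return actions

def run_interaction(customer: Customer, business: Business,
                    agent: Agent, outcome_ok: bool) -> list[str]:
    """One turn of the feedback circuit: each actor's state updates the others."""
    actions = agent.allowed_actions(customer, business)
    if outcome_ok:
        customer.trust_score = min(1.0, customer.trust_score + 0.05)
    else:
        # A bad outcome erodes customer trust AND consumes business risk budget,
        # which in turn narrows what the agent is allowed to do next turn.
        customer.trust_score = max(0.0, customer.trust_score - 0.2)
        business.risk_budget = max(0.0, business.risk_budget - 0.1)
    return actions
```

The point of the toy is the coupling: no actor's state is private, and a failure in one place contracts the permission space everywhere else, which is exactly what a flow diagram of a single journey cannot express.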
This means the work shifts from designing flows to designing decision logic: how the agent behaves in unanticipated situations, how business constraints are embedded into its reasoning, where human oversight re-enters the loop, and how customer outcomes are protected at scale. The Value Proposition design work [Osterwalder et al., 2014] needs to expand accordingly, from a dyadic exchange between product and customer to a three-way circuit in which each actor contributes resources and carries obligations.
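As a sketch of what "designing decision logic" can mean in practice, the fragment below hardcodes a hypothetical business constraint into the agent's routing and defaults anything undesigned to human oversight. The categories, limit, and confidence floor are invented for illustration, not prescriptions:

```python
# Illustrative decision logic: business constraints embedded in the agent's
# reasoning, with human oversight re-entering the loop at designed boundaries.

REFUND_LIMIT = 100.0      # hypothetical business constraint, part of the design
CONFIDENCE_FLOOR = 0.75   # below this, the agent must not act alone

def decide(request_type: str, amount: float, model_confidence: float) -> str:
    """Route a request: act autonomously, act within limits, or escalate."""
    if model_confidence < CONFIDENCE_FLOOR:
        return "escalate_to_human"      # uncertain or unanticipated situation
    if request_type == "refund":
        if amount <= REFUND_LIMIT:
            return "auto_approve"       # inside the designed boundary
        return "escalate_to_human"      # constraint breached: oversight re-enters
    if request_type == "information":
        return "answer"
    return "escalate_to_human"          # anything undesigned defaults to a human

print(decide("refund", 40.0, 0.9))  # → auto_approve
```

Notice that the escalation paths are not error handling bolted on afterwards; they are the design, specifying in advance where the agent's judgment ends and the business's obligations take over.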
None of this replaces Jobs to Be Done, Unique Value Proposition work, or customer-centric journey design. The customer is STILL at the centre, but the customer is not alone. Other actors orbit them, affecting the system directly and in complex ways. It is a system with three subjects, each capable of destabilising the whole if left undesigned, and each able to add value to a stable system that can now learn and improve continuously.