Methodologies · Apr 24, 2026

The Three-Actor Canvas

The Value Proposition Canvas was built for two parties. AI-native concepts require three. Customer, Business, and Agent — each with their own profile, each with their own design problem.

002 · 8 min · Methodology, Value Proposition, AI, Strategic Design, Product
AI and Judgment · Strategic Design Methods · Work and Organizations

Every concept that involves AI touches three parties simultaneously. The traditional Value Proposition Canvas was built for two: the business and the customer. You mapped jobs to be done on one side, your product's value on the other, and looked for fit.

That two-party model no longer covers what needs to be designed.

When AI enters the system — not as a feature, but as an actor — it brings its own capability envelope, its own failure modes, its own escalation logic. It interprets, routes, personalises, and in some cases decides. It is not neutral. It changes the experience for the customer, it shapes what the business can promise, and it introduces a new category of design problem: what happens when the AI acts in ways the business didn't anticipate, or the customer didn't expect to trust?

The Three-Actor Canvas maps all three. It replaces the two-party fit test with a three-way alignment check — asking not just whether your value proposition matches customer needs, but whether the Agent can reliably deliver it, whether the Business can govern it, and whether the Customer is willing to engage with a system that acts on their behalf.

Use it early. The further into development you get before asking these three-way questions, the more expensive the misalignment becomes.


How to use it

Tactically — Fill in each actor's panel for a concept you're developing. Start with the Customer (jobs, pains, gains, trust threshold), then the Agent (capabilities, failure modes, required context, escalation), then the Business (intent, constraints, governance posture). The gaps between panels are your design work.

Strategically — Bring this into concept reviews to surface unstated assumptions. Most early-stage AI products implicitly assume the Agent is capable, trustworthy, and well-governed. Making those assumptions explicit — and stress-testing them against the model's known limitations — changes what gets built, what gets promised, and what gets scoped out.
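The three panels can be sketched as a plain data structure. Field names follow the panels described below; the types and the gap-finding helper are illustrative assumptions, not part of the canvas itself:

```python
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    jobs_to_be_done: list[str] = field(default_factory=list)
    pains: list[str] = field(default_factory=list)
    gains: list[str] = field(default_factory=list)
    trust_threshold: str = ""  # when will they let the agent act for them?

@dataclass
class BusinessProfile:
    strategic_intent: str = ""
    value_commitment: str = ""
    constraints: list[str] = field(default_factory=list)  # what the agent can never do
    governance_posture: str = ""  # in-the-loop / on-the-loop / full autonomy

@dataclass
class AgentProfile:
    capability_envelope: list[str] = field(default_factory=list)
    failure_modes: list[str] = field(default_factory=list)
    required_context: list[str] = field(default_factory=list)
    escalation_logic: list[str] = field(default_factory=list)

@dataclass
class ThreeActorCanvas:
    customer: CustomerProfile
    business: BusinessProfile
    agent: AgentProfile

    def open_questions(self) -> list[str]:
        """Any empty panel field is unexamined design work."""
        gaps = []
        for actor in (self.customer, self.business, self.agent):
            for name, value in vars(actor).items():
                if not value:
                    gaps.append(f"{type(actor).__name__}.{name}")
        return gaps
```

Run `open_questions()` in a concept review: every empty field is an assumption the team is making implicitly.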


[Diagram: three-way fit. Customer (intent & trust: jobs, pains, gains) · Business (strategy & governance: intent, constraints) · Agent (judgment & scale: capability, limits), linked by Value Delivery, Experience Layer, and Operational Logic.]
Customer · Seeks progress. Brings intent, data, and trust — and sets the threshold at which they'll let an agent act on their behalf.
Business · Sustains the system. Defines strategic intent, compliance limits, and governance posture. Accountable when the agent acts outside expected bounds.
Agent · New. Interprets, routes, personalises, decides. Not a feature — an actor with a real capability envelope, failure modes, and escalation logic of its own.
Actor · 01
Customer
Seeks progress. Brings intent, data, and trust — and a trust threshold that determines how much they'll let the system act autonomously on their behalf.
Jobs to be done
What progress are they trying to make?
Not features — outcomes. What situation are they moving from, and what do they want to arrive at? Jobs are functional, social, and emotional. All three matter when AI mediates the experience.
Pains
What is frustrating, risky, or blocked today?
Obstacles, fears, and bad outcomes from the current approach. Include pains specific to AI interaction — opacity, loss of control, wrong outputs with no recourse.
Gains
What would genuinely delight — beyond the minimum?
Expected gains (table stakes) vs. unexpected gains (delight). AI can unlock gains the customer didn't know to ask for — anticipation, speed, personalisation at scale.
Trust threshold · New dimension
When will they let the agent act for them?
The line between AI-assisted and AI-autonomous. What earns it? What breaks it? What must the customer see, hear, or control before they hand over the decision?
Actor · 02
Business
Sustains the system and is accountable when it fails. Must define not just what the system delivers, but what it is never allowed to do — and who governs that boundary.
Strategic intent
What problem is this actually solving for the business?
Not the product pitch — the real bet. Cost reduction, differentiation, retention, market entry? The intent shapes what gets resourced, measured, and cut.
Value commitment
What is the explicit promise to the customer?
The traditional VPC territory — pain relievers and gain creators. Now filtered through what the Agent can actually deliver reliably, not what sounds good in a brief.
Constraints
What can the agent never do?
Legal, regulatory, ethical, reputational. The hard limits. These define the operating envelope the agent must stay inside. Design them before you ship.
Governance posture · New dimension
Who reviews agent decisions — and at what threshold?
Human-in-the-loop (approves before acting) · Human-on-the-loop (reviews after) · Full autonomy. Which applies where — and what triggers each mode to switch?
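The three postures reduce to a small policy switch. A sketch only; the decision classes, mode names, and the strict default are assumptions, not prescriptions:

```python
from enum import Enum

class GovernanceMode(Enum):
    HUMAN_IN_THE_LOOP = "approve_before_acting"
    HUMAN_ON_THE_LOOP = "review_after_acting"
    FULL_AUTONOMY = "no_routine_review"

# Illustrative posture map: which mode applies to which decision class.
POSTURE = {
    "refund_under_50": GovernanceMode.FULL_AUTONOMY,
    "refund_over_50": GovernanceMode.HUMAN_ON_THE_LOOP,
    "account_closure": GovernanceMode.HUMAN_IN_THE_LOOP,
}

def requires_approval(decision_class: str) -> bool:
    # Unknown decision classes default to the strictest posture.
    mode = POSTURE.get(decision_class, GovernanceMode.HUMAN_IN_THE_LOOP)
    return mode is GovernanceMode.HUMAN_IN_THE_LOOP
```

The design choice worth noting is the default: anything the posture map doesn't name falls back to approval-before-acting, so new decision classes are governed until someone explicitly relaxes them.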
Actor · 03 — New
Agent
The actor the traditional VPC omits entirely. The Agent has a real capability envelope, real failure modes, and real escalation logic — all of which must be designed explicitly, not assumed.
Capability envelope · New dimension
What does this agent do reliably at production scale?
Not theoretical capability — observed, repeatable performance under real load and real edge cases. Most concepts overestimate this. Stress-test before writing the value promise.
Failure modes · New dimension
Where does it hallucinate, refuse, or degrade?
Every model has known failure regions. Mapping them is not pessimism — it's scoping. Failure modes define where human oversight is non-negotiable and what UX must gracefully absorb.
Required context · New dimension
What does the agent need to function well?
Data, history, permissions, user preferences. The agent is only as good as what it knows. What must the customer provide? What must the business supply? What's missing in the current data architecture?
Escalation logic · New dimension
When does it hand back to a human?
Confidence threshold, detected ambiguity, high-stakes decision class? Escalation is a design decision, not a fallback. Define the triggers before going live — not after the first incident.
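Those triggers can be written down before go-live. A sketch assuming a single confidence floor; a real system would likely set one per decision class:

```python
def should_escalate(confidence: float,
                    ambiguity_detected: bool,
                    high_stakes: bool,
                    confidence_floor: float = 0.8) -> tuple[bool, str]:
    """Return (escalate?, reason) — triggers defined as explicit design
    decisions, checked in priority order."""
    if high_stakes:
        return True, "high-stakes decision class"
    if ambiguity_detected:
        return True, "detected ambiguity"
    if confidence < confidence_floor:
        return True, f"confidence {confidence:.2f} below floor {confidence_floor}"
    return False, "within envelope"
```

Returning a reason alongside the decision matters: it is what the Agent hands to the human, and what the audit trail records.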
Relationship
Value Delivery
The contract between Business and Customer — what gets delivered, through which touchpoints, measured how, and who is accountable when the Agent gets it wrong.
Business ↔ Customer
This is the traditional VPC fit line — but now it runs through an intermediary. The business no longer delivers directly. The Agent mediates the moment of value, which means the promise must account for what the Agent can actually do, consistently, at scale.

When the Agent underperforms, the Customer blames the Business. When the Business over-promises, the Agent fails in the field. The gap between the two is where AI products lose trust.
Through which touchpoints does value reach the customer — and which are AI-mediated?
How is delivery measured? What signals confirm the value landed?
When the agent makes a mistake, what is the recovery path — and who owns it?
Relationship
Experience Layer
The moment of contact between Agent and Customer — tone, pacing, personalisation depth, and the signals that build or break trust in real time.
Agent ↔ Customer
This edge is entirely new — it didn't exist in two-party design. The Agent is no longer a tool the customer uses. It is an entity the customer interacts with. The experience layer defines how that feels: what it knows about them, how it responds, when it defers, and what it signals when uncertain.

Trust here is accumulated in milliseconds and lost in one bad interaction. The design question is not just what the Agent says — it's what it signals about who's in control.
What does the customer feel when the agent acts autonomously — in control, or watched?
What signals — tone, transparency, pace — build confidence in the system?
At what moments must the agent visibly defer, explain itself, or invite correction?
Relationship
Operational Logic
The rules that govern what the Agent can do, how it behaves under constraint, and who reviews it — the architecture of accountability between Business and Agent.
Business ↔ Agent
The operational logic is where governance becomes concrete. Guardrails, policies, audit trails, review cycles, rollback procedures. This is not a legal afterthought — it is a design layer as important as the UX.

The business must define the envelope clearly enough that the Agent can operate within it reliably — and flag when it's approaching the boundary.
What can the agent decide autonomously, and what requires approval before acting?
How are agent decisions logged, audited, and reviewed?
What triggers a governance review — volume, impact, complaint rate, edge case detection?
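A minimal version of that logging and review-trigger layer. The record fields, volume cap, and complaint-rate cap are illustrative assumptions:

```python
import time

def log_decision(log: list, decision_class: str, action: str,
                 confidence: float, autonomous: bool) -> None:
    """Append one auditable record per agent decision."""
    log.append({
        "ts": time.time(),
        "decision_class": decision_class,
        "action": action,
        "confidence": confidence,
        "autonomous": autonomous,
    })

def needs_governance_review(log: list, complaint_rate: float,
                            volume_cap: int = 1000,
                            complaint_cap: float = 0.02) -> bool:
    # Two of the triggers named above: decision volume and complaint rate.
    return len(log) > volume_cap or complaint_rate > complaint_cap
```

Even a sketch this small makes the point: if decisions aren't logged in a reviewable form, the governance posture chosen earlier cannot actually be enforced.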

Where the three-way tension lives

The canvas surfaces a design problem that two-party thinking misses entirely: the three actors have partially conflicting incentives.

The Customer wants control and transparency. The Business wants efficiency and scale. The Agent optimises for task completion within its training — which may not map cleanly onto either.

Fit in AI-native design isn't a bilateral match. It's a negotiation across three parties, held in tension by governance on one side and trust on the other. The canvas gives you the structure to have that negotiation explicitly — before it happens by accident in production.
