The Three-Actor Canvas
The Value Proposition Canvas was built for two parties. AI-native concepts require three. Customer, Business, and Agent — each with their own profile, each with their own design problem.
Every concept that involves AI touches three parties simultaneously. The traditional Value Proposition Canvas was built for two: the business and the customer. You mapped jobs to be done on one side, your product's value on the other, and looked for fit.
That two-party model no longer covers what needs to be designed.
When AI enters the system — not as a feature, but as an actor — it brings its own capability envelope, its own failure modes, its own escalation logic. It interprets, routes, personalises, and in some cases decides. It is not neutral. It changes the experience for the customer, it shapes what the business can promise, and it introduces a new category of design problem: what happens when the AI acts in ways the business didn't anticipate, or the customer didn't expect to trust?
The Three-Actor Canvas maps all three. It replaces the two-party fit test with a three-way alignment check — asking not just whether your value proposition matches customer needs, but whether the Agent can reliably deliver it, whether the Business can govern it, and whether the Customer is willing to engage with a system that acts on their behalf.
Use it early. The further into development you get before asking these three-way questions, the more expensive the misalignment becomes.
How to use it
Tactically — Fill in each actor's panel for a concept you're developing. Start with the Customer (jobs, pains, gains, trust threshold), then the Agent (capabilities, failure modes, required context, escalation), then the Business (intent, constraints, governance posture). The gaps between panels are your design work.
Strategically — Bring this into concept reviews to surface unstated assumptions. Most early-stage AI products implicitly assume the Agent is capable, trustworthy, and well-governed. Making those assumptions explicit — and stress-testing them against the model's known limitations — changes what gets built, what gets promised, and what gets scoped out.
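The tactical exercise above can be sketched as a simple data structure. This is a minimal illustration only — every class, field, and method name here is hypothetical, chosen to mirror the panel headings, not part of the canvas itself — but it shows how the three panels and one crude alignment check might be held together:

```python
from dataclasses import dataclass

@dataclass
class CustomerPanel:
    jobs: list[str]            # jobs to be done
    pains: list[str]
    gains: list[str]
    trust_threshold: str       # e.g. "will accept suggestions, not autonomous actions"

@dataclass
class AgentPanel:
    capabilities: list[str]
    failure_modes: list[str]
    required_context: list[str]
    escalation: str            # when, and to whom, the Agent hands off

@dataclass
class BusinessPanel:
    intent: str
    constraints: list[str]
    governance_posture: str

@dataclass
class ThreeActorCanvas:
    customer: CustomerPanel
    agent: AgentPanel
    business: BusinessPanel

    def alignment_gaps(self) -> list[str]:
        """One crude instance of the three-way alignment check:
        flag customer jobs with no matching Agent capability."""
        gaps = []
        for job in self.customer.jobs:
            if not any(job.lower() in cap.lower() for cap in self.agent.capabilities):
                gaps.append(f"No Agent capability covers customer job: {job!r}")
        return gaps
```

The point of the sketch is the last method: the gaps between panels are not an afterthought but a first-class output of filling the canvas in.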
Click each actor to open their canvas. Click the relationship labels to see what each connection must define.
Customer ↔ Business — When the Agent underperforms, the Customer blames the Business. When the Business over-promises, the Agent fails in the field. The gap between promise and delivery is where AI products lose trust.
Customer ↔ Agent — Trust here is accumulated in milliseconds and lost in one bad interaction. The design question is not just what the Agent says — it's what it signals about who's in control.
Business ↔ Agent — The Business must define the envelope clearly enough that the Agent can operate within it reliably — and flag when it's approaching the boundary.
Where the three-way tension lives
The canvas surfaces a design problem that two-party thinking misses entirely: the three actors have partially conflicting incentives.
The Customer wants control and transparency. The Business wants efficiency and scale. The Agent optimises for task completion within its training — which may not map cleanly onto either.
Fit in AI-native design isn't a bilateral match. It's a negotiation across three parties, held in tension by governance on one side and trust on the other. The canvas gives you the structure to have that negotiation explicitly — before it happens by accident in production.