Essays · Feb 18, 2026

Adding AI Isn’t Becoming AI-Native


We Are Repeating a Familiar Mistake

In the early years of digitalization, many organizations proudly announced that they had created “digital products.” In practice, what they had done was scan books into PDFs.

The artifact changed format, but the logic did not.

The book remained a linear, static object. The workflow remained one-directional, and the experience remained bound to the constraints of paper, only now viewed on a screen.

Only later did organizations step back and ask a more difficult question:

If content is the raw material, what should a book become?

That question led to platforms, searchability, adaptive content, embedded multimedia, collaborative annotation, recommendation engines, and dynamic publishing ecosystems. In other words, this is the difference between a digitalized artifact and a digital-native product.

Today, the same pattern is repeating with AI: retrofitting isn’t rethinking, and the trade-off is often a costly way to buy time:

  • A chatbot layered on top of an existing support workflow
  • A search box enhanced with natural language
  • A recommendation widget embedded in legacy journeys
  • A productivity copilot attached to pre-existing task structures

The system is new, but the architecture of value creation is not. This is AI retrofitting, and it resembles a scanned PDF: an old system with a new interface.

---

A Structural Reframing: Working with Three Actors

The fundamental error lies in how we conceptualize the “system” that powers a product, service or business.

Traditional software systems operated in deterministic feedback loops. They processed structured inputs and returned predictable outputs. Their agency was minimal; they were instruments with simple, predictable rules, where repetition meant safe scalability.

Large-scale AI systems are qualitatively different. They reason probabilistically, infer context, generate content, learn patterns, and initiate actions.

This changes the architecture of any viable product or service.

Historically, we designed for two primary actors:

  1. The user, the quintessential customer in customer-centricity;
  2. The business, as a supplying entity with its must-dos and must-haves.

I’d argue we now need a third actor, one that is similarly subjective, with its own complexities and trade-offs:

  3. The system.

An AI-native system is not a feature. It is an autonomous participant within the value network, and it defines at a far deeper level what the product or service will be.

If we treat it as a decorative layer, we constrain its potential and increase complexity without structural gain.

If we treat it as an actor, we must redesign workflows, decision rights, governance structures, and economic models accordingly. This is the same cognitive shift required when moving from digitized artifacts to digital-native platforms.

---

Why Retrofitting Happens: Risk Perception and Strategic Optionality

Retrofitting is not a failure of intelligence. It is a rational response to perceived risk.

If organizations do not retrofit, they must redesign workflows, decision rights, governance structures, incentive systems, and in many cases their economic model. That level of reinvention feels expensive, politically difficult, and operationally destabilizing.

Retrofitting, by contrast, appears controlled. It allows leaders to signal innovation without disrupting the underlying system. It preserves strategic optionality.

The logic often sounds like this:

Why commit to a full AI-native CRM redesign today if it remains unclear whether Salesforce, HubSpot, or autonomous OpenAI-based agent ecosystems will define the dominant architecture in three years? Why rebuild processes around a vendor stack that may not become the industry standard?

Retrofitting becomes a waiting strategy. It buys time.

This reasoning made sense in previous technological cycles, where dominant platforms took longer to consolidate and where competitive shifts unfolded gradually.

The difference today is acceleration.

The diffusion of generative AI capabilities is occurring at a pace rarely seen in enterprise technology adoption. Model performance improves quarterly. Infrastructure costs decline. Regulatory frameworks such as the EU AI Act are already structuring governance expectations. Competitive baselines shift in months, not years.

Under these conditions, waiting carries a rising opportunity cost.

---

The Hidden Cost: Structural Technical Debt

Technical debt is typically understood as accumulated engineering shortcuts that require later rework.

In the AI era, the concept expands.

There is architectural debt.

When organizations layer AI on top of legacy workflows instead of redesigning around system autonomy, they entrench outdated coordination models. Every incremental retrofit increases the complexity of future transformation.

The cost of waiting therefore compounds in three ways:

  1. Integration complexity grows as retrofitted layers accumulate.
  2. Organizational habits solidify around suboptimal AI usage patterns.
  3. Competitors redesign their economics while incumbents optimize interfaces.

By the time strategic certainty arrives, the redesign is no longer a greenfield opportunity. It becomes a high-friction overhaul.

This is why retrofitting feels safe in the short term but becomes expensive in the medium term.

---

Why Retrofitting Is Strategically Wasteful

AI retrofitting is attractive because it feels incremental. It fits within existing procurement logic and budget lines. It minimizes organizational discomfort.

However, it introduces three structural inefficiencies:

1. Workflow Duplication

Old workflows persist while AI generates parallel suggestions. Humans remain responsible for verification, effectively doubling cognitive load instead of reducing it.

2. Decision Latency

If the system is not granted structured autonomy, every AI output becomes advisory. Human review remains mandatory. Cycle times shrink only marginally.

3. Competitive Drift

Competitors who redesign around AI-native logic eventually operate with fundamentally lower coordination costs and faster iteration cycles.

The risk is not immediate failure.

It is gradual strategic irrelevance.

---

So, What Does AI-Native Actually Require?

An AI-native redesign begins with a more uncomfortable question:

What value are we fundamentally delivering?

Not:

Where can we plug AI in?

But, instead:

If intelligent systems can reason, generate, monitor, and act, what becomes possible that was structurally impossible before?

Examples of AI-native rethinking include:

  • Moving from ticket resolution to predictive issue prevention
  • Moving from periodic reporting to continuous decision orchestration
  • Moving from product configuration to autonomous personalization
  • Moving from customer service response to anticipatory lifecycle management

In each case, the workflow itself is redesigned around the system as a reasoning participant.

---

Designing for Three Actors

If user, business, and system are co-actors, governance must evolve.

User

  • Trust calibration
  • Transparency of reasoning
  • Agency preservation

Business

  • Risk tolerance thresholds
  • Accountability frameworks
  • Economic optimization logic

System

  • Scope of autonomy
  • Learning boundaries
  • Escalation protocols
  • Memory architecture

This triadic design model forces clarity about where decisions are made, who carries liability, and how performance is measured, for at least three audiences: business people, technical people, and experience people. A win on all three fronts.

---

Do We Need to Talk KPIs? No, But It Helps

I’m adding this because it may help you position yourself when discussing these types of transformative projects. If you (or someone you know!) are allocating budgets between €50K and €2M, you should require measurable indicators that confirm you’re rethinking, not retrofitting.

Here are four KPIs for early-stage projects, which also suit early-stage AI adoption within a company:

1. Decision Latency Reduction

What is measured: Average time from signal detection to decision execution.

Why it matters: AI-native systems compress coordination cycles.

Early indicator: 10–20 percent reduction in targeted workflows.

Mature state: 40–60 percent sustained reduction across multiple functions.
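Measured concretely, this KPI needs only two ingredients per workflow: signal-detection and decision-execution timestamps, averaged and compared against a pre-AI baseline. A minimal sketch, with invented numbers purely for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical sketch: decision latency = time from signal detection
# to decision execution; the KPI is the relative reduction vs. baseline.
def avg_latency(events: list[tuple[datetime, datetime]]) -> timedelta:
    deltas = [executed - detected for detected, executed in events]
    return sum(deltas, timedelta()) / len(deltas)

def latency_reduction(baseline: timedelta, current: timedelta) -> float:
    return 1 - current / baseline

t = datetime(2026, 1, 1, 9, 0)
baseline = avg_latency([(t, t + timedelta(hours=10)),
                        (t, t + timedelta(hours=14))])  # 12h average
current = avg_latency([(t, t + timedelta(hours=9)),
                       (t, t + timedelta(hours=12))])   # 10.5h average
print(f"{latency_reduction(baseline, current):.1%}")    # -> 12.5%
```

A 12.5 percent reduction would sit squarely in the 10–20 percent early-indicator band described above.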

---

2. AI-Enabled Revenue Contribution

What is measured: Percentage of revenue directly influenced by AI-mediated interactions or optimizations.

Why it matters: Moves AI from cost center to growth engine.

Early indicator: Revenue uplift in specific pilots.

Mature state: 15–30 percent revenue contribution tied to AI-native journeys.

---

3. Autonomous Resolution Rate

What is measured: Share of tasks resolved end-to-end by the system within defined governance boundaries.

Why it matters: Indicates true system participation.

Early indicator: Partial task completion without escalation.

Mature state: 50 percent or more of defined micro-processes autonomously executed.
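The key subtlety in this KPI is the denominator: only tasks inside the system’s governed scope count, and an escalated task is not autonomous even if it was eventually resolved. A minimal sketch with hypothetical field names:

```python
# Hypothetical sketch: autonomous resolution rate = share of in-scope
# tasks the system closes end-to-end without escalating to a human.
def autonomous_resolution_rate(tasks: list[dict]) -> float:
    in_scope = [t for t in tasks if t["in_governance_scope"]]
    autonomous = [t for t in in_scope
                  if t["resolved"] and not t["escalated"]]
    return len(autonomous) / len(in_scope) if in_scope else 0.0

tasks = [
    {"in_governance_scope": True,  "resolved": True,  "escalated": False},
    {"in_governance_scope": True,  "resolved": True,  "escalated": True},
    {"in_governance_scope": True,  "resolved": False, "escalated": True},
    {"in_governance_scope": False, "resolved": True,  "escalated": False},
]
print(autonomous_resolution_rate(tasks))  # 1 of 3 in-scope tasks
```

Scoping the denominator this way keeps the metric honest: widening the governance boundary without raising real autonomy pushes the rate down, not up.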

---

4. Trust and Usage Persistence

What is measured: Sustained voluntary usage across roles.

Why it matters: AI-native value depends on behavioral integration.

Early indicator: Weekly active use within pilot groups.

Mature state: Cross-functional reliance embedded in daily routines.

---

The Timeless Insight

Every technological shift initially produces translation rather than transformation.

Companies attempt to interpret the new capability through the structures they already understand. Books were digitized before publishing models were redesigned. Retail moved online before marketplaces reshaped supply chains and margin logic. Meetings became video calls before coordination models were reconsidered. The first response preserves the architecture and modifies the interface. The second response rethinks the architecture itself.

---

SUMMARY PROTOCOL

  • TOPIC: AI-retrofitted versus AI-native product design
  • PROBLEM: Organizations are layering AI onto legacy workflows rather than redesigning value creation around intelligent systems
  • QUICK TAKEAWAY: AI retrofitting extends old architectures; AI-native design redefines them
  • CORE CONTENT: Introduces the “third actor” model, outlines structural inefficiencies of retrofitting, and proposes KPIs to measure AI-native maturity
  • POLITICAL LANDSCAPE: EU AI Act introduces governance obligations that reinforce the need to treat AI as an operational actor
  • QUICK ACTION: Audit one core workflow and map where AI is advisory versus autonomous
  • RISK OF DOING NOTHING: Gradual competitive drift through slower coordination and duplicated cognitive load
  • FUTURE PROTOCOL: Design systems where user, business, and AI operate as co-actors within a measurable governance framework