Essays · Mar 28, 2026

Human, not AI: make sure you’re at either end of a decision

There is a genre of AI commentary that reassures professionals by telling them the thing that matters most is something AI cannot do.


Usually the word is “wisdom,” sometimes “judgment,” occasionally “empathy.” The argument follows a predictable arc: intelligence is being commodified, therefore the scarce resource becomes some ineffable human quality that machines lack. The audience feels better, hope is intact, the narrative is simplified, nothing is truly clarified.

The problem is that wisdom, judgment, and empathy are not job descriptions. They are not line items in a contract or conditions for liability. No one has ever been fired for a lack of wisdom. People get fired for making bad decisions they were formally responsible for — or for failing to make decisions at all.

What actually survives automation is not a cognitive trait but a structural position: the capacity to bear accountability for consequential choices. This is not a philosophical claim about the soul. It is a feature of how firms, legal systems, and regulatory frameworks are built.

Someone must initiate a disturbance — commission a product, approve a strategy, place a bet on a direction that did not previously exist — and someone must ratify the outcome, confirming that the new state of affairs is acceptable, or rejecting it and absorbing the cost of reversal.

Fama and Jensen described this decades ago as the separation of decision management from decision control: the right to propose and the right to approve are kept apart precisely because concentrating them creates unaccountable risk.

AI, at this stage, sits in neither position. It occupies the increasingly productive middle — the analysis, the synthesis, the generation of options, the processing of evidence. Anthropic’s most recent economic report measures this directly: among occupations with the highest observed AI exposure, the tasks being automated are overwhelmingly those rated as theoretically feasible for acceleration by an LLM, tasks like drafting, data processing, and pattern recognition (Appel et al. 2026).

The tasks that remain uncovered tend to involve coordination across people, authorization, and the absorption of risk when things go wrong. These are not uncovered because they are mysteriously human. They are uncovered because no one — no client, no regulator, no board — has agreed to let a model be the one who bears the consequences.

This reframes what professional development in an AI-augmented environment actually requires. The skill is not “being wise.” The skill is learning to make better decisions at the two exposed ends of the arc (the provocation and the confirmation) while using AI to radically improve the quality of everything that happens between them.

A strategist who can frame the right question and then hold the organization accountable to the answer is more valuable than one who spends weeks producing the intermediate analysis that a model now handles in minutes.

A product manager who knows which disturbance to introduce into a system, and who can judge whether the resulting state is better than what came before, is doing the work that cannot yet be delegated.

The relevant question for any professional is therefore not “can AI do my job?” but “am I standing at one of the two ends where decisions are actually made, or am I still in the middle, where the automation is?”

Judgment isn’t a job description; it’s a function. But deciding what to do, and answering for the results, is guidance concrete enough to act on.
