Everybody's smart. Now what? Professional upskilling after AI
# The anxiety of 'what will I do now?'
We are absorbing, adapting, and trying to be proactive about the future. But what should we learn next? Ten years ago, you'd hop into an Executive Education program, get a certification, or learn the basics of a technology.
That path no longer makes the same sense.
There is a particular kind of professional anxiety that doesn't show up in surveys because it's too vague to name: a persistent sense that terrain you knew well has become unfamiliar, without being able to point to exactly when the shift happened.
People in design, strategy, product management, and consulting have been living with this feeling for about two years now (and before that, when Covid-19 hit), and the responses available to us have not been adequate: "learn to prompt", "develop your judgment", "stay curious". These are not bad suggestions, but they arrive without enough substance to act on.
On February 24, Ezra Klein interviewed Jack Clark, co-founder of Anthropic, on his New York Times show. The discussion started from Anthropic's report on AI's coverage of intellectual jobs. One exchange, about 50 minutes in, caught my attention.
About senior workers:
> Something that we found is that the value of more senior people with really well-calibrated intuitions and taste is going up, and the value of more junior people is a bit more dubious.
About entry-level workers:
> It's like kids who grew up on the internet (...) People who spend a lot of time playing around with this stuff will develop very valuable intuitions, and they will come into organizations and be able to be extremely productive.
# The uncertainty of where to invest time, effort and focus
This gives clues to how new workers (and the entire educational system, especially universities) should think about education, and it suggests an entire path ahead for senior workers, who rarely focus on their so-called "soft skills". This reversal may be the fastest shift in importance we have seen. If AI can do the analysis, then knowing what to ask, which areas to probe, and how to govern the complexity of organizational decision-making becomes the skill AI cannot replicate in the foreseeable future.
# It's not about becoming "faster with AI"
Why does this matter? AI is absorbing a lot of processes and outcomes. An insight used to be read by a person, who needed to interpret and act on it. Now the insight is read by the AI, and the action is taken by the AI. The decision to act is still human, but the rest is not. And this change is hard to accept.
It's worth being specific about what has actually changed, because the specificity matters for knowing what to do about it. The report published by Anthropic's economists earlier this month, which spread everywhere within days, describes a reality already underway: it tracks real usage patterns across occupations, rather than modeling hypothetical capability.
For market research analysts, the observed exposure rate is around 65 percent. For financial and investment analysts, 57 percent. So the work hasn't disappeared, and it's not simply changing into "smaller teams doing the same work".
A large portion of its intellectual layer (the research, the synthesis, the first draft, the generation of options, the lateral thinking, the covering of all bases across scenarios) has been absorbed into tools that most professionals now use daily, often without fully noticing the cumulative shift in how they spend their time. It's the first time we have a tool that does intellectual reasoning, instead of computational work alone.
This rethinking of what exactly we should be doing is what makes the upskilling question harder than it looks. Fighting an uphill battle to learn the elements of Python may well not pay off when a) for practical purposes, Claude Code will do the everyday work, b) a seasoned developer can be your best teammate, and/or c) the whole field moves in a different direction, and you end up with a sunk cost that sinks more than just weekends, certificate money, and effort.
So what are safer bets for people working with business, intellectual work, strategy, design, market research and adjacencies of managerial positions?
Nobody knows the answer for sure. AI keeps telling us that judgment and 'taste' (in a warped definition of the word) are the next skills to have. But then again, that's a fluffy answer that leaves little to work with.
# Four speculative Executive Education programs for the Present-Future of Intellectual Workers
What I constructed with Claude for this article, based on that now-infamous graph of AI exposure across the workforce, are four speculative courses. Speculative design is a rich exercise in imagining futures, and it can greatly inform the present.
These could be offered as an Executive Education course, as a first semester for a Bachelor's Degree in a University (for a Degree that doesn't exist yet), or as a self-development track. They don't exist in practice, but they may offer clues on what to do next with your drive to stay up to date.
Here they are.
---
Course 1
# Critical Interpretation in Complex Environments
Rhetoric · Semiotics · Epistemology · Aesthetic theory
The capability that has become strange to describe, because it was never treated as a discipline, is discernment — the ability to look at ten well-constructed outputs and know which one is actually right for this audience, this organization, this moment, and to explain why in terms that go beyond personal preference. Every team working with AI generation is now running into this problem daily. The tools produce competent work with very little friction. What fails is the evaluative layer: the capacity to read whether something is true to its context, whether the framing will land or misfire, whether the argument carries genuine conviction or merely its surface features.
Rhetoric, in its original sense, was exactly this study — not the art of sounding persuasive, but the systematic understanding of how meaning is made and received in specific human situations. Paired with semiotics, which asks how signs and structures carry significance across cultural contexts, and with epistemology, which becomes urgent the moment a tool can produce plausible-sounding versions of almost anything, this week builds toward something more durable than a preference vocabulary. The applied work runs participants through deliberate mismatches: communications, strategies, and product narratives that are technically accomplished but contextually wrong, with sustained analytical attention to identifying the specific mechanism of failure. By the end of the week, participants are not developing taste, which is private and slow to transmit. They are developing a transferable critical method — the ability to diagnose, explain, and correct, which is what organizations actually need from someone in this role.
---
Course 2
# Cognitive Foundations of Strategic Choice
Cognitive science · Game theory · Facilitation theory · Philosophy of action
Decision-making frameworks taught in business education share a foundational assumption that has not held up well to empirical scrutiny: that the quality of a decision improves monotonically with the quantity and quality of information available, up to some optimal point. The actual cognitive science literature has been eroding this assumption for decades, and the current situation — in which good analysis can be generated faster than any organization can absorb it — has exposed the gap between the assumption and reality in a way that is now professionally consequential.
The binding constraint on most strategic decisions today is not insufficient information. It is the absence of a reliable mechanism for converting information into commitment under conditions of genuine uncertainty. These are different problems, and they require different tools. Game theory offers a rigorous framework for understanding how decisions made by multiple actors under uncertainty interact and evolve — which turns out to be an accurate description of most organizational decisions. Philosophy of action, a field that business training almost never touches, asks what it means to commit to a course of action when the outcome is unknowable and the alternatives remain live. Facilitation theory, at its most serious, is the discipline of moving a group from divergent interpretation toward shared direction without suppressing the divergence that made the deliberation worth having. Together, these three bodies of thinking address a problem that exists in every organization right now: analysis accumulates, options proliferate, and the room stalls. The week is built around that moment — entering it, diagnosing it, and moving through it with enough intellectual structure that the movement is repeatable.
---
Course 3
# Human Factors in Intelligent Work Environments
Cognitive ergonomics · Skill acquisition theory · Philosophy of technology · Practice analysis
There is a question that almost no one in professional development is asking, and its absence is becoming expensive. When a skilled professional offloads a task to an AI tool — not occasionally, but routinely, over months — what happens to the underlying capability that the task was exercising? The output may be identical in the short run. The practitioner's competence may not be. Skill acquisition research, going back to work by Ericsson and colleagues in the 1990s and extended since, is fairly clear that expertise depends on the kind of deliberate, effortful practice that produces the errors and corrections through which skill is actually built. If the difficult part of a task is systematically removed, the development that difficulty was producing stops.
This course takes that finding seriously as a professional design problem. It draws on cognitive ergonomics — the study of how human cognitive systems interact with tools and environments — and on the philosophy of technology, specifically the tradition running through Heidegger and Stiegler that asks how tools reshape the people who use them over time, not just the tasks they perform. The applied work is a personal audit: participants map their own practice at the task level, identifying where AI assistance has entered, what the practice looked like before, and what, if anything, has been lost. The goal is not a judgment about whether to use AI — that question is settled — but a specific, defensible account of where the human process is non-substitutable and why, and therefore where continued deliberate practice remains essential. That account is the deliverable. Most professionals don't have one. They should.
---
Course 4
# Constraint Design for Consequential Systems
Institutional design · Constitutional theory · Complexity science · Applied ethics
As AI enters consequential organizational decisions — hiring, resource allocation, customer-facing judgment, strategic planning — a specific kind of professional capacity has become necessary that almost no existing training produces. Someone needs to be able to design the governance layer for these systems: not the ethics review that happens afterward, and not the legal compliance that establishes minimum floors, but the architectural decisions about what the system is and is not allowed to do, where human judgment is required rather than merely available, how errors surface and get corrected, and what accountability looks like when the decision-maker is partially synthetic. Right now, those decisions are being made ad hoc, or not at all.
Constitutional theory is the intellectual tradition concerned with how you build durable rules for complex human systems, rules that survive organizational pressure and don't collapse the moment they become inconvenient. This framework offers more relevant tools for the problem than anything in current AI ethics discourse. Complexity science contributes the understanding that systems with many interacting components produce emergent behaviors that rule-makers consistently fail to anticipate, which is both a caution and a design principle. Applied ethics, in the serious sense of constraint engineering rather than value declaration, asks which things a system should structurally be unable to do, and how that incapacity gets built into the architecture rather than asserted in policy. The week works through a series of design challenges — a hiring pipeline, a recommendation system, an internal AI tool used in leadership decisions — with the task at each stage being not evaluation but construction: building the specific review points, failure modes, and accountability structures that make involvement responsible. The graduate of this week is not a compliance officer, and not an ethicist. They are someone who can design systems that are trustworthy by construction, which is a different and considerably harder thing.
---
There is a misguided idea that these capabilities are defensive measures: things to develop so that you remain useful as the tools improve. That framing is too narrow for what is actually being asked of intellectual workers right now.
The professionals who will do consequential work in the coming years will understand what kind of interpretation they can perform that a system cannot, how to move a room from stall to commitment, and what kind of governance they can establish, day to day, that performs under real-world organizational pressure.
These four courses do not deliver an ultimate answer. But they do offer intellectual frameworks that can concretely disturb assumptions you didn't know you were making. That disturbance is the exact point where reimagining, not faster horses, starts to form.
---
Photos by Hoi An and Da Nang Photographer, Lisa, Philippe Bout and Giammarco Boscaro on Unsplash.