Essays · Apr 29, 2026

The Older Question Beneath AI and Morality

Training data, displacement, and longtermism look like new AI problems. Beneath them is an older moral question about who gets to govern the resources that sustain their continuation.

028 · 10 min · AI, Morality, Ethics, Longtermism, Governance
AI and Judgment · Morality, Power, and Governance
A red-lit introspective portrait. Credit: Deep Patel. Caption: The older question beneath new machines.
SCQA dossier 028
Situation AI debates have splintered into arguments about copyright, labor displacement, accumulation, and speculative future beneficiaries.
Complication Those debates often use separate moral vocabularies, which makes the underlying resource-governance question harder to see.
Question What if AI ethics is not a new moral territory, but an old question about self-preservation and resource governance returning in a new technical form?
Answer Morality is the governance of resources for self-preservation. AI becomes morally serious when it compromises the substrate access of existing beings while justifying extraction through scale, efficiency, or hypothetical futures.

What we argue about when we argue about training data, displacement, and longtermism


There is a moral intuition that almost everyone shares and almost no one can adequately explain. When someone tortures an animal that does not belong to them, in private, for no reason, we recognize it as wrong. The animal has no property claim, no contract, no standing in any legal sense that would make the act a violation. Most contemporary moral vocabularies struggle here. Virtue ethics tells us that the torturer has a defective character, which is true but does not explain why the act is wrong rather than merely unattractive. Consequentialism counts the suffering, which is closer, but it cannot quite explain why a brief instance of animal suffering registers as a categorically different kind of wrong than, say, the discomfort of a long flight. Deontology gestures at the dignity of sentient beings, which is true but circular: the dignity is what we are trying to explain.

What the intuition is actually responding to, I believe, is the deprivation of a being's capacity to govern its own continuation. The animal has resources at stake — its body, its time, its capacity to act in the world — and the torture deprives it of those resources without any countervailing claim that could justify the deprivation. The wrongness is not in the cruelty as a feeling but in the violation of the more fundamental relation: that beings which exist in the world hold a certain governance claim over the resources that sustain their existence, and that other beings cannot deprive them of these resources arbitrarily. The animal case is the clean test because nothing else is in play. There is no contract, no property, no community, no consequence. There is only the substrate relation, and it is enough to ground the intuition.

Morality as the Governance of Resources

This suggests that morality, at its root, is the governance of resources for self-preservation. The argument is older than it sounds. Hobbes located the foundation of political order in the recognition that beings competing for scarce resources require governance to prevent mutual destruction. Hume, more carefully, argued that justice is the institutional response to the conjunction of limited generosity and moderate scarcity; in conditions of total abundance or total deprivation, the question of justice does not arise at all (Hume 1751). The contemporary anthropological literature on the evolution of moral norms tells a similar story from a different direction: cooperative groups facing problems of resource division, defense, and reproduction develop normative systems that regulate who may do what to whom, and these systems are the substrate of what we later call morality (Boehm 2012; Tomasello 2016).

The move I want to make is to extend the resource concept along the line that Darwin and Dawkins have already drawn for us. Resources are not only material. They include the substrates of self-replication, both genetic and memetic. The body is a resource. Reproductive choice is a resource. Cognitive autonomy is a resource. Time is a resource. The capacity to act in the world according to one's own purposes is a resource. Once the concept is generalized this way, the framework absorbs cases that the narrower property-and-livelihood version of resource ethics cannot reach. Sexual violence is categorically grave because it violates reproductive sovereignty, which is the most basic resource of self-replication, not because of modesty or honor or any contingent cultural framing. Incest, likewise, is morally repulsive because it tends to produce genetic anomalies that threaten the continuation of the gene pool. Slavery is a violation of the resource of self-direction, prior to and independent of any economic calculation. The prohibition on cruelty to animals follows from the same principle, applied to beings whose substrate relation we recognize even though they cannot articulate a claim. Morality thus universally creates narratives with emotional resonance out of what is rationally explainable. That is likely one of our most distinctive evolutionary traits: just as animals learn to repel foul or poisonous food, morality is our memetic (or cultural) mechanism for effectively repelling or adopting behaviors.

Why the Hard Cases Confirm the Framework

The framework's discipline is that it predicts which cases will be morally clear and which will be morally ambiguous. Clear cases involve the unambiguous deprivation of substrate resources from beings whose claims are not in conflict with anyone else's claims. Ambiguous cases involve competing governance claims over the same resources, or beings whose self-governance is itself in question. Consider the alcoholic who cannot manage their own continuation. Is it a violation of their resource claim to deprive them of the freedom to drink themselves to death? The framework says: it depends on whether their self-governance is intact enough to constitute a real claim, and whether their actions are externalizing harm onto others whose resource claims must also be governed. The ambiguity is not a flaw in the framework. It is what the framework predicts will happen when governance claims overlap or fail. A theory of morality that resolved every hard case cleanly would be more suspicious than one that locates the hard cases at exactly the points where the resource calculus genuinely is hard.

This is also why the framework can accommodate the legal-versus-moral distinction without collapsing into either positivism or pure intuitionism. Legal systems are the institutional materialization of moral judgments about resource governance, but they are downstream of the moral substrate rather than constitutive of it. A legal regime can fail to track the moral question — by recognizing claims that are not morally serious, or by failing to recognize claims that are. The test is empirical: does the legal arrangement actually preserve the substrate access of the beings it governs, or does it permit deprivations that the moral framework would recognize as violations? The question can be asked of feudal property law, of slavery, of contemporary intellectual property regimes, and of whatever institutional response we eventually develop to artificial intelligence. In each case the legal arrangement is being evaluated against a more fundamental criterion, and the criterion is the same one that grounds the intuition about animal cruelty.

Moral Systems and the Configurations They Govern

One implication is worth drawing out before we turn to the contemporary case, because it will matter for what follows. Moral systems are not static, because the resources they govern are not static. The shape of substrate access depends on the technological, economic, and institutional configuration of a given society, and when the configuration changes, the moral system has to adjust. Feudal property arrangements were morally coherent within a particular constellation of land, labor, and lineage; industrialization disrupted that constellation, and the moral framework had to be rebuilt to accommodate beings whose substrate access now depended on wage labor rather than land tenure. The rebuilding was contested, incomplete, and in many cases violent, but the underlying principle did not change. Beings need resources for self-preservation, and the institutional arrangements that govern resource distribution are morally serious or not depending on whether they honor that need.

It is also important to notice that, even if it is plausibly ruled by the laws that govern self-preservation, morality is an elastic concept. The history of capitalism is a testament to that. What seemed acceptable in the early days of the industrial revolution, such as sixteen-hour workdays and children laboring in factories, is now unacceptable in many developed countries. Interestingly, people who would not accept those conditions for themselves commonly tolerate reports of such conditions in the supply chains of products they purchase, for example in fast fashion. The case is moral, but it is processed at a rational remove: the reports of abusive labor conditions are distant enough, and diffuse enough, to be dismissed.

As the rules of what is acceptable change, legal changes follow, often with significant delays (pushback from the status quo being one of the main slowing forces). We are now in another such moment: significant change is under way, and the moral and legal limits of what is acceptable in technology are being rapidly debated and redrawn, especially around the widespread use of Artificial Intelligence.

The Cognitive Substitution

The transition we are in is the substitution of cognitive labor itself, accompanied by the simultaneous consumption of the cumulative memetic output of human culture as the input to that substitution. This is the structural feature that distinguishes the AI moment from earlier technological transitions. Industrialization changed the substrate of livelihood from land to wage labor, but it did not also consume the prior cultural inheritance as raw material for its own operation. Recent empirical work on creative-task substitution finds that generative systems now match or exceed median human performance on tasks that were considered cognitively distinctive only a few years ago, with the largest displacement effects concentrated in mid-skill creative and analytical work (Doshi and Hauser 2024; Noy and Zhang 2023). The framework lets us see this as a transition that operates simultaneously on two substrate relations: the livelihood of those whose cognitive labor is being substituted, and the memetic inheritance from which the substituting systems are built.

Extraction and the Limits of the Binary

The extraction question becomes legible when posed in these terms. The framework's contribution is to refuse the binary that treats AI training on copyrighted material as either categorically infringement or categorically permissible, and to ask instead whether the training compromises the substrate access of the originator. Some cases are clear at one end. A brand that has imposed itself on public cognition through decades of advertising has surrendered some governance claim by the act of saturation; the Mona Lisa and Os Lusíadas have entered the cultural substrate so thoroughly that any individual claim has effectively dissolved. Other cases are clear at the other end. A working illustrator whose distinctive style is reproduced at scale by a model trained on her uncompensated portfolio has been deprived of substrate access in a way the framework recognizes as a violation. Most cases sit between these extremes, and the framework's diagnostic move is to treat the question empirically rather than categorically. The current wave of litigation, including the high-profile suits against the major foundation-model providers, is best read as a mixture of morally serious claims and institutional defenses of historical revenue patterns, and the framework lets us sort them rather than treat them as a single phenomenon.

Displacement and the Locatable Failure

The displacement question yields a sharper diagnosis than the standard framing allows. Consider the lighthouse keeper whose role is automated. The substitution produces a net preservation gain — a thousand prevented shipwrecks against one displaced livelihood — and the framework recognizes this as morally tolerable provided the displacement is managed. The same analytic applies to the contemporary cases, which are not categorically different even when they feel more disruptive. Translators, illustrators, copywriters, junior developers, and customer-service workers are being displaced at scale, and the question the framework sharpens is where the moral failure actually lies. It is not in the technology, which is doing what technologies have always done when the resource calculus shifts. The failure is in the institutional arrangement that surrounds the technology, which has been organized to externalize the costs of transition onto the displaced. Recent labor-market analyses find that exposure to generative AI is concentrated in occupations with limited collective-bargaining infrastructure and weak retraining provision (Eloundou et al. 2023), which is the institutional pattern the framework would predict will produce the most severe substrate-access violations. The diagnostic point is that the failure is locatable, and it is not located where most commentary assumes it is.

Accumulation Beyond the Governance Constraint

The accumulation question is where the framework does its most original analytical work. Piketty's argument is that accumulation becomes corrosive not when inequality is unpleasant but when concentrated capital begins to stifle the conditions of further productive activity, including the substrate access of those who must labor under it (Piketty 2014). The framework grounds Piketty's economic argument in a more fundamental moral principle: accumulation past a certain threshold compromises others' resource governance, and the institutions that permit this commit the same kind of failure as institutions that permit theft. The contemporary AI economy is producing accumulation at speeds and concentrations that the existing institutional infrastructure cannot govern. A small number of foundation-model providers and the capital structures that fund them are absorbing the productivity gains of cognitive substitution while the displacement costs flow outward. The pattern is recognizable, and the framework names it precisely.

The longtermist justification is the rhetorical structure that allows this accumulation to escape the governance constraint, and it is here that the framework's diagnostic power is sharpest. The argument advanced by figures including William MacAskill, Nick Bostrom, and the broader effective altruist and longtermist communities holds that present resource concentration is justified by its expected contribution to vast future populations whose flourishing depends on the technological and economic capacity being built now (MacAskill 2022; Bostrom 2014). The framework reveals this as a category violation. Governance claims, in the moral sense the framework establishes, belong to beings who exist and whose substrate access is materially at stake. A speculative future population whose existence is contingent on the very accumulation being justified cannot exert governance claims; you cannot deprive someone who does not yet exist, and you cannot trade their hypothetical preservation against the certain preservation of those who do. The probability calculations that longtermism uses to weigh small chances of vast future gains against certain present losses operate within a moral grammar that the framework does not recognize. The accumulation is real. The displacement is real. The future beneficiaries are hypothetical, and they cannot be promoted into governance claimants by the rhetorical force of the argument that invokes them. The rocket programs and the planetary-scale ambitions that have come to symbolize this position are not eccentric expressions of an otherwise sound philosophy; they are the predictable output of a justification structure that has suspended the governance constraint entirely.

What the Framework Returns to Us

What the framework returns to us, then, is a way of seeing the contemporary AI debates as continuous with a much older moral question rather than as a novel ethical territory. The disputes about training data, displacement, and accumulation are not three separate problems requiring three different vocabularies. They are three faces of a single question about whether our institutional arrangements continue to govern resources for the self-preservation of the beings who depend on them, or whether they have been organized to suspend that governance under various justifications. The novelty of the technology has obscured the familiarity of the question. The framework gives the question back its proper shape, and once it has its proper shape, the apparent diversity of the disputes resolves into something the moral tradition has been working on for considerably longer than the present moment would suggest.


References

Boehm, Christopher. 2012. Moral Origins: The Evolution of Virtue, Altruism, and Shame. New York: Basic Books.

Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.

Doshi, Anil R., and Oliver P. Hauser. 2024. "Generative AI Enhances Individual Creativity but Reduces the Collective Diversity of Novel Content." Science Advances 10 (28).

Eloundou, Tyna, Sam Manning, Pamela Mishkin, and Daniel Rock. 2023. "GPTs Are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models." arXiv preprint arXiv:2303.10130.

Hume, David. 1751. An Enquiry Concerning the Principles of Morals. London.

MacAskill, William. 2022. What We Owe the Future. New York: Basic Books.

Noy, Shakked, and Whitney Zhang. 2023. "Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence." Science 381 (6654): 187–192.

Piketty, Thomas. 2014. Capital in the Twenty-First Century. Cambridge, MA: Harvard University Press.

Tomasello, Michael. 2016. A Natural History of Human Morality. Cambridge, MA: Harvard University Press.
