TESCREAL+: A White Paper on Ethics of Strategic Inefficiency

Prologue: Some Useful Background

This white paper, and the follow-up articles, mini-articles, and social media posts it will inevitably lead to, represent my latest attempt at making sense of the efficiency-obsessed world we all live in.

As a technology hawk (I would like to think of myself as a harmless technology optimist, but who’s to say), I arrive at this point with more than two decades of work in AI (Artificial Intelligence) across various industries. My experience and studies encompass the first wave of logic-based AI, aka the good old days of intelligence based on certainty, as well as the new-age AI, or what I have previously called the age of probability-infested pseudo-intelligence. And I am certainly crazy (or romantic?) enough to hang around as long as I can to see, and hopefully be part of, the next phase of AI development (why a next phase is needed is a subject of more articles in the near future).

With this background in mind, I suppose it should not come as much of a surprise that I have gradually found myself drawn to the Ethics of AI. This gradual shift is partly due to the fact that getting the ethics of AI right is an inherently fascinating technical challenge to overcome, and partly a result of my intense belief that AI, as a mirror for HI (Human Intelligence), is humanity’s best chance to come to terms with our deepest biases, fears, and hopes.

One thing you learn when contemplating the impact of modern AI is the increasing influence of efficiency as a driving force for making any decision and for improving any algorithm. This obsession with efficiency has not only been tolerated but, in fact, aggressively encouraged, as a normal mode of operation, by most CEOs and technology investors whose definition of progress can be summed up with a few “strategically defined” KPIs (Key Performance Indicators) that tend to simplify (oversimplify is a more accurate word here) anything and everything into a set of “measurable” numbers.

While I cannot deny that this approach has resulted in tangible progress in many areas, its overuse has certainly contributed to the erosion of nuance, in favor of a more easily understood “average behavior”, and ultimately, a slowing of the rate of innovation. After all, there are only a few ways one can apply a given set of techniques to “optimize” a problem, and if everyone did just that, all solutions to the problem would soon converge and we would be looking at a world of uninteresting uniformity where outliers are dismissed and thinking differently is abandoned on the charge of leading to suboptimal outcomes.

The technological evidence of this “efficiency driven” approach is best observed in the way the new Gen AI (Generative AI) models have swept the AI landscape. Those of us who have been working with the latest Gen AI “toys” did not take long to notice the shortcomings of such solutions in their current form, from the annoying tendency to repeat the same incomplete or wrong answers to a painful “uniformity or blandness of output” across multiple Gen AI engines, which lack both creativity and tolerance for nuance. Needless to say, such shortcomings can, and often do, lead to outcomes that are ethically questionable, to say the least.

The topic of AI Ethics touches on many areas of discourse, such as philosophy, psychology, language, etc., and it is easy for anyone to lose their bearings, especially in the beginning of their journey. Therefore, I consider myself quite fortunate to have come across the seminal paper by Torres and Gebru(1), which helped me greatly in looking at the problems of modern AI, not as a set of simple mathematical equations to solve, but as an extension of a broader set of philosophies that drive its progress towards its nirvana of AGI (Artificial General Intelligence).

Going through this marvelous paper a few times allowed me to connect a few dots that had been nagging at me for many years. And on further reflection, I decided to:

  1. Take on the challenge of extending their original bundle of related philosophies, which they call TESCREAL, and introduce TESCREAL+,
  2. Put forward an argument for relating the above extended bundle of philosophies to an “unhealthy” search for efficiency,
  3. Propose a manifesto for reframing this particular area of AI development in order to bring much needed humanity to its progress.

In what follows, I have tried, as much as possible, to adhere to the expected format of a formal white paper throughout (while remaining accessible to a wider audience), but as always, I appreciate constructive feedback on both form and content from everyone who shares a common interest in the topic.

Here we go folks…

Introduction

In 2023, Torres and Gebru(1) introduced the concept of TESCREAL, an acronym for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism, which, they argued, collectively form the ideological foundation behind contemporary AGI development. Interestingly, they traced the genealogical roots of TESCREAL to 20th century eugenics and techno-utopian worldviews, highlighting how the promise of AGI is often expressed in terms of optimization and going beyond accepted human limits (an idea known as transcendence).

In this paper we build upon that foundation but shift the analytical approach somewhat. We propose an expanded framework, called TESCREAL+, which incorporates additional ideological movements not originally addressed, namely Accelerationism, Dataism, and Computational Immortalism. While these ideologies are clearly distinct in both their origins and terminology, we argue that they share a fundamental core idea, namely an undying commitment to efficiency as the moral foundation of progress. Whether through the pursuit of algorithmic optimization, technological advancement, or human transcendence, the guiding principle of TESCREAL+ is the optimization of all human systems, be they biological, cognitive, social, or ethical, towards an imagined state of perfection.

We term this phenomenon Optimization Determinism: a belief that friction, ambiguity, and redundancy are not only inefficient but morally inferior, and must be avoided at all costs. Drawing further on Holling’s systems theory(2), as well as work on human-centered ethics by Vallor(3) and Benjamin(4), we argue that this Optimization Determinism risks repeating historical mistakes, particularly those rooted in eugenic, colonial, and technocratic approaches.

In response to this worldview, we introduce Constrainism, a counter-philosophy based on what we call Strategic Inefficiency. Where the search for optimization encourages the elimination of friction (any form of resistance), Constrainism sees constraint as an integral source of ethical insight and moral wisdom. It offers a new design ethic for AI systems, one which values and welcomes ambiguity, friction, and, above all, human judgment. In other words, Constrainism treats these characteristics not as design flaws, but as features essential to a fair and resilient system.

We conclude the paper by anticipating, and addressing, a number of potential critiques of the TESCREAL+ framework and propose a better path forward for how AI is taught, researched, used, and managed. And we argue that unless we move away from treating optimization as a virtue and efficiency as a moral guide, AI will not just reflect existing inequalities, it will, in fact, make them worse, even as its biggest supporters, investors, and evangelists claim to be solving them.

TESCREAL Revisited: A Brief Refresher on the Original Ideological Bundle

As referenced by Torres and Gebru(1), TESCREAL stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. Each of these philosophical branches has its own focus, but they all share a central idea: that advanced technology, guided by abstract ethical reasoning, can and should radically improve the human condition. Here’s a brief summary of each(5):

  • Transhumanism promotes using technology to transcend (enhance) human abilities, be they biological, mental, or emotional, through tools such as AI, genetic engineering, and nanotechnology.
  • Extropianism, which is a spin on transhumanism developed in the 1980s by thinkers such as Max More, supports a belief in the merits of endless progress and overcoming physical or biological limits.
  • Singularitarianism, in its more recent formulation, centers on the idea that AI will eventually surpass human intelligence, triggering a singularity or the birth of superintelligence, which renders human control unnecessary.
  • Cosmism combines science with spiritual ideas, imagining a future where we are capable of resurrecting the dead, colonizing space, and extending consciousness forever.
  • Rationalism refers to a philosophical viewpoint that sees logic, Bayesian reasoning, and utility maximization as the most trustworthy ways to make decisions.
  • Effective Altruism (EA) tries to apply reason and data to charity, or funding actions that aim to save (or improve) the most lives per money or resource spent. Some versions of this philosophy go further, focusing on long-term risks in the context of speculative futures.
  • Longtermism argues that future lives matter much more than present ones. Its main objective is the reduction of existential risks to humanity, and therefore, it claims that most of our energy and resources should be dedicated to shaping humanity’s long-term survival.

As Torres and Gebru argue, what connects these ideas isn’t just futurism or the idea of abstract morality. It’s a deeper logic that resembles early 20th century eugenics, the belief that humanity should be optimized, or engineered, towards a universal idea of progress. They link TESCREAL to an earlier wave of technocratic thinking that valued purity and efficiency over diversity and complexity.

This paper builds on their work but focuses more on what we call TESCREAL’s operational core, which is the belief that optimization is a moral good, and that ambiguity, resistance, and ethical friction should be eliminated from system design. From this angle, TESCREAL is not just a collection of disparate ideas, it’s a design blueprint that increasingly ignores human complexity, unpredictability, and personal experience.

That doesn’t mean all these ideologies are the same. For instance, transhumanism values individual enhancement, while longtermism focuses on more collective outcomes. But in practice, they often converge on a shared mindset, which favors perfect abstraction over real-life messiness, speed over reflection, and simplicity over nuance.

In the next section, we’ll expand the TESCREAL bundle by introducing additional ideologies that further deepen this logic of optimization, into a new framework which we call TESCREAL+.

Expanding the Bundle: Why TESCREAL+?

The original TESCREAL framework provides a powerful foundation for analyzing AGI ideologies, but it is not complete. Since its introduction, other influential movements have emerged, most notably Accelerationism, which Torres and Gebru(1) noted as a variant of Effective Altruism. Alongside it, Dataism and Computational Immortalism have gained popularity and gradually become mainstream, particularly in the tech industry(a). These additions somewhat magnify TESCREAL’s core ideas, most notably the belief that optimization, disruption, and a constant focus on growth and scale are not just tools, but unquestioned virtues or moral principles.

To capture this broader bundle of related ideas, we propose the term TESCREAL+, which, in its current form, also includes the above additional ideological viewpoints: Accelerationism, Dataism, and Computational Immortalism.

Accelerationism

The origin of Accelerationism, as a philosophical movement, can be traced back to Karl Marx and his view of free trade in 1848(6). While Marx never used the word “Accelerationism,” his position fits the idea. He defended free trade even though he believed it would worsen class conflict. As he put it, free trade “…pushes the antagonism of the proletariat and the bourgeoisie to the extreme point.” However, he still supported it, based on the belief that the “…free trade system hastens the social revolution”. This belief that speeding up harmful systems can force needed change is a core part of what later became known as Accelerationism.

The term itself was introduced by Benjamin Noys in his book Malign Velocities: Accelerationism and Capitalism(7), in which he argued that embracing speed as a political strategy risks worsening the negative effects of the very systems that strategy hopes to change.

More recently, some right-wing thinkers and modern technologists have embraced Accelerationism as a strategy for speeding up social, economic, and technological change. They argue that rapid change can help society break free from existing political systems and mental limitations. And in its current AI-manifested form, Accelerationism has taken on new urgency and encourages:

  • Breaking things over improving them (move fast and break things),
  • Treating instability as innovation, and
  • Taking pursuit of speed as a valid strategy towards truth.

Accelerationism often discourages ethical reasoning, portrays caution as weakness, and paints a preferred version of the future as both the “natural” and “inevitable” next step in evolution. It fits seamlessly with longtermist and singularitarian outlooks, which similarly advocate for technological progress that outpaces human reflection. Importantly, Accelerationist rationales are often found in startup culture, crypto-finance, and the venture-capital discourse that actively funds AGI labs (not to mention in the boardrooms of most major enterprises in the western world).

Dataism

The term Dataism was first used by David Brooks in a 2013 New York Times column(8), but it was Yuval Noah Harari who gave it broader visibility in his 2016 book Homo Deus(9). Dataism is the belief that the universe and human society are best understood as a series of data flows, and that the most ethical or advanced systems are those that maximize data collection and processing.

While Rationalism trusts internal reasoning, Dataism moves the focus outward, to measurements and algorithms. This shift ultimately results in treating more data as equivalent to more truth, considering algorithmic insights as integral to better decision making, and relegating subjective experience to the rank of statistical noise.

Though not often framed as a formal ideology (at least not so far), Dataism has quietly become the default worldview in many branches of the AI industry. It underlies everything from recommendation engines to predictive analytics. And in its more aggressive interpretation, it risks eliminating subjectivity as an ethical category altogether.

Computational Immortalism

Over the past two decades, the growing tendency to center human progress around technological breakthroughs has given rise to a set of worldviews that blend advanced computing with spiritual or metaphysical beliefs. While spiritualism itself is broad, ranging from religious traditions to metaphysical speculation, what concerns us here is a more specific thread, exemplified by influential thinkers such as Ray Kurzweil in his 1999 book The Age of Spiritual Machines(10).

We use the term Computational Immortalism to describe this emerging worldview, which unites several closely linked ideas, including:

  • The belief that consciousness can be uploaded or preserved digitally,
  • The view that AGI is a stepping-stone towards godlike intelligence, and
  • The idea that death is an engineering problem to be solved.

These ideas, while once considered fringe, now surface in AI labs, investor circles, policy documents, and the rhetoric of technopreneurs. This belief system often borrows religious tones, for example, by referencing transcendence, eternal life, or destiny, with the slight deviation that it replaces the divine with software (god as code). In doing so, it offers moral meaning (and justification) for high-risk or speculative technological paths, postponing ethical responsibility with promises of a radically improved future, a pattern we describe as moral deferral.

Computational Immortalism differs from (its most closely related) TESCREAL components in key ways:

  • Unlike Transhumanism, which focuses on enhancing the human body or mind, this ideology seeks to transcend them entirely,
  • Unlike Cosmism, which imagines the resurrection of the dead and expansion across the cosmos, it centers digital continuity over physical extension, and
  • Unlike Longtermism, which justifies present sacrifice in service of future generations, it justifies sacrifice in pursuit of personal digital salvation.

The impact of this idea on AI discourse is already visible, be it in projects that aim to simulate our loved ones, preserve digital legacies, or train models to mimic our consciousness. As such, Computational Immortalism is not just an extension of TESCREAL; it is a critical update for understanding where AI ideology may be heading next.

A Unified Logic Driving TESCREAL+

The inclusion of Accelerationism, Dataism, and Computational Immortalism into the TESCREAL+ bundle is not merely additive, for the sake of including new variants of the same core ideas. It reveals a deeper convergence of worldviews. Across all these ideologies, we find:

  • A moral elevation of speed, scale, and abstract thinking above all else,
  • A consistent treatment of resistance to a preferred (and highly abstract) vision of progress as irrational or regressive, and
  • A tendency to sideline human experience in favor of universal optimization logics.

TESCREAL+ is thus more than a set of philosophies. In fact, one may call it a composite operating system for imagining the future of intelligence, morality, and governance. It informs both design and development, reinterpreting ethics itself through the lens of calculability, economic utility (i.e., maximizing outcomes or efficiency), and risk-reward calculus.

In the next section, we identify the shared foundation uniting TESCREAL and TESCREAL+, namely the obsession with efficiency, and its ethical, political, and knowledge-related consequences.

Efficiency as Ideological Core: Optimization as Moral Principle

Across the ideologies grouped under TESCREAL and its extended formulation, TESCREAL+, one theme appears with undeniable regularity: the treatment of efficiency as a moral principle. Whether it is framed through biological enhancement, existential risk reduction, increased data throughput, or the long-term planning of civilization, optimization is not just a technical objective; it is a statement about what should be done.

This belief isn’t always made explicit. For example, in Effective Altruism, it shows up in cost-effectiveness calculations that prioritize interventions based on measurable impact. In Accelerationism and Singularitarianism, it is tied to indicators of exponential growth, technological inevitability, and recursive self-improvement. Across the board, efficiency appears as a moral compass. It justifies simplification (perhaps, one should say oversimplification), allows trade-offs to pass unquestioned (as long as mathematically justified), and turns complex ethical issues into seemingly neutral equations that can be solved (interestingly enough, often with inaccurate/estimated solutions!).

But this reliance on efficiency can be dangerous. When systems are optimized without space for uncertainty, contradiction, or judgment, they risk erasing the very things that make ethical decision-making possible and necessary. And as a consequence, bypassing moral considerations can be promoted as progress.

Optimization Determinism

We term this phenomenon Optimization Determinism. This denotes the belief that technological systems should, and inevitably will, be optimized towards some ideal end state, whether cognitive, moral, or economic. This belief rests on several interrelated assumptions:

  • That systems can and should be made frictionless,
  • That calculation outperforms deliberation and reflection,
  • That any form of slowness signals inefficiency, and therefore should be considered as failure, and
  • That constraints are flaws, not features.

This worldview, while often expressed in progressive language, reduces moral complexity to solvable problems (an optimization challenge) and frames resistance or objection as ignorance or obstruction. In doing so, it fosters overconfidence about what we can truly know, dismisses diverse perspectives, and limits the range of acceptable ethical thinking.

From Eugenics to Efficiency

This logic closely mirrors the thinking behind early 20th century eugenics, which emphasized rational planning, large-scale control, and the belief that society could (and should) be engineered from the top down. The problem with this approach, as seen in past social planning efforts, was not necessarily a lack of intelligence, but too much confidence in our collective ability to simplify and control complex realities.

Today’s systems may not sterilize or segregate, as was the case with eugenics, but they still rank, filter, and exclude, using algorithms that reward things such as productivity, intelligence scores, or credit ratings. In this way, the language of optimization hides the fact that many of the same harmful patterns continue, just in new forms. In other words, despite our undoubted progress and intelligence, we keep repeating the same mistakes over and over again.

Ethical Friction and the Loss of Insight

In many AI systems inspired by TESCREAL+ logic, speed is treated as synonymous with intelligence. Human-in-the-loop processes are removed in favor of real-time prediction. Edge cases are framed as statistical noise rather than indicators of potential structural bias or system limitations.

But friction is not inherently negative. It often signals the presence of moral complexity, and edge cases (outliers) often mark the thresholds of insight and the birth of innovation. Historically, many scientific revolutions have emerged from unexpected anomalies: the Michelson–Morley experiment preceded the theory of special relativity, and quantum mechanics was born from deviations in blackbody radiation models. In literature and art, too, breakthroughs often emerge from disruption, ambiguity, or the scrutiny of established forms. When systems are trained to suppress the exception, they may unwittingly block the conditions of innovation itself.

The irony is quite amusing: Many of the ideologies within TESCREAL+ are promoted by the very figures who publicly champion disruption and originality. Yet the knowledge habits they promote, including over-optimization, standardization, and the elimination of uncertainty, tend towards sameness and a loss of intellectual depth. The risk, then, is not just ethical or political, but also about how we understand the world. And the path they offer is likely to lead to a future that is efficient, smooth, and utterly incapable of surprise. Indeed, death by boredom seems to be the order of the day in the future!

Optimization and the Lessons of the Past

Concern about the risks of treating all human challenges as optimization problems is not new. Many researchers and thinkers have been warning for decades that this mindset can lead to fragile and harmful systems. Their insights remind us that the drive for efficiency, when left unchecked, often repeats the mistakes of the past, be they related to the earlier eugenics movement, colonial philosophies, or technocratic projects.

For example, Holling’s work(2) on complex ecosystems shows that attempts to over-control natural environments, such as industrial forest management, often lead to collapse. Systems lose resilience whenever diversity and flexibility are stripped away in the search for increased efficiency. Vallor(3) reminds us that true ethical progress comes from cultivating virtues, such as patience and humility, rather than focusing only on measurable outcomes. And Benjamin(4) shows how algorithmic tools used in policing and credit scoring can deepen racial and social inequalities, especially when optimization replaces human judgment. Together, these lessons warn that focusing only on speed, efficiency, or outcome risks repeating the same patterns of harm observed in the past, as in earlier efforts to reshape society through control or exclusion.

Efficiency and the Delegitimization of Constraint

Constraint, in many optimization approaches, is treated as a technical limitation to be overcome, as seen in technological challenges involving compute power, latency, noise, and variability. Yet in human systems, constraint is often a moral safeguard. Constraints preserve space for reflection and resistance, allow for error correction, and prevent systems from spiraling out of control.

When these are stripped away, whether in infrastructure, education, or AI governance, what remains is a high-performing system with no ethical safety locks. This leads to systems that are optimized for something, but accountable to no one.

In the following section, we introduce Constrainism. This is a counter-philosophy that does not reject efficiency wholesale, but repositions it within a broader ethical architecture. Constrainism proposes that some inefficiencies, and indeed, some frictions, are not bugs but ethical features.

In doing so, it seeks to reinforce the value of slowness, ambiguity, redundancy, and resistance within the practice of technological design.

Strategic Inefficiency and the Case for Constrainism

If TESCREAL+ is driven by a belief in relentless optimization, then any serious challenge to its influence must address its ideas, design logic, and real-world consequences. To this end, we propose Constrainism, an alternative design ethic and philosophical stance, which treats Strategic Inefficiency as a necessary and principled response to the moral and knowledge-related failures of over-optimization.

Defining Strategic Inefficiency

Strategic inefficiency is not a rejection of performance or a call for technological stagnation. Rather, it is the deliberate embedding of friction, redundancy, and constraint within technological solutions, particularly those engaged in decision-making, classification, or resource distribution. It operates on the assumption that some forms of slowness, resistance, and ambiguity are not only tolerable but ethically indispensable.

Strategic inefficiency is evident in long-standing human systems:

  • In democratic governance, where checks and balances intentionally slow executive action,
  • In judicial systems, where appeals and adversarial reasoning delay resolution but preserve fairness, and
  • In education, where tests and revisions are integral to effective learning.

These systems treat constraint as an essential part of their designs, and consider inefficiency as the cost of reflection, pluralism, and safety.

What Is Constrainism?

Constrainism is the core idea behind strategic inefficiency. It offers an alternative to the accelerationist, rationalist, and utopian ideologies that dominate the current AI landscape. While TESCREAL+ envisions the future as a perfectly optimized and engineered nirvana, Constrainism argues that ethical systems must stay human-centric, use case specific, and open to interruption and reflection. Its core ideas include:

  • Friction is a feature: Ethics should resist the urge to flatten ambiguity into certainty,
  • Redundancy promotes resilience: Layered systems can help avoid catastrophic failures,
  • Human agency (judgment) can’t be removed or outsourced: Some decisions must remain deeply human,
  • Transparency matters: Some parts of a system, such as how decisions are made or what data is used, should be open and easy to verify, and
  • Integrity matters too: Other parts, including emotions, intuition, or personal experiences, should be protected from being turned into tools (instrumentalization) or tracked (measured) as cold, hard numbers (KPIs).

Constrainism is not anti-AI. But it does reject the idea that performance scores alone determine ethical value in AI solutions. It encourages a deeper commitment to how knowledge is understood and applied, and it portrays intelligence and ethics as processes that are paused, disrupted, and reflected on.

From Theory to Design: Constrainism in Practice

The principles of Constrainism can be translated into practical design constraints. These include:

  • Human-in-the-loop architectures, where oversight is not optional but structurally required,
  • Decision latency layers, where algorithmic recommendations must wait for human confirmation under conditions of uncertainty,
  • Plural objective functions, which allow systems to weigh competing ethical criteria rather than optimize a single scalar value,
  • Error-promoting simulations, where edge-case behavior is stress-tested and embraced as a learning vector, and
  • Schema enforcement for data systems, where logic is closely tied to use cases, errors are discouraged by design, and deviations trigger human oversight.

These are not merely technical suggestions. They reflect deeper commitments about how knowledge is understood and applied. They reflect a view of intelligence and ethics as processes interrupted and shaped by disagreement, disruption, and delay.
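
To make a few of these constraints concrete, the following minimal sketch (in Python, with every name, threshold, and objective invented for illustration rather than drawn from any existing system) shows how a decision latency layer, plural objective functions, and a structurally required human-in-the-loop gate might fit together: a recommendation is only released automatically when every objective clears a confidence floor; otherwise it waits for human confirmation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional


@dataclass
class Recommendation:
    action: str
    # Scores for several, possibly competing, objectives (plural objective
    # functions) rather than a single scalar utility.
    objective_scores: Dict[str, float]  # e.g. {"accuracy": 0.93, "fairness": 0.58}


@dataclass
class Decision:
    action: Optional[str]
    decided_by: str  # "system" or "human"
    rationale: str


def decision_latency_gate(
    rec: Recommendation,
    confidence_floor: float,
    ask_human: Callable[[Recommendation], Decision],
) -> Decision:
    """Release a recommendation only when every objective clears the floor;
    otherwise defer to a human reviewer (oversight is structurally required)."""
    weakest = min(rec.objective_scores, key=rec.objective_scores.get)
    if rec.objective_scores[weakest] >= confidence_floor:
        return Decision(rec.action, "system",
                        f"all objectives >= {confidence_floor}")
    # Strategic inefficiency: under uncertainty the system waits for a human
    # instead of optimizing the ambiguity away.
    return ask_human(rec)


# Illustrative usage with a stub reviewer standing in for a real person.
def reviewer(rec: Recommendation) -> Decision:
    weakest = min(rec.objective_scores, key=rec.objective_scores.get)
    return Decision(None, "human", f"deferred for review: weak score on {weakest!r}")


decision = decision_latency_gate(
    Recommendation("approve_application", {"accuracy": 0.93, "fairness": 0.58}),
    confidence_floor=0.75,
    ask_human=reviewer,
)
print(decision)
```

The point of the sketch is not the specific thresholds but the architecture: the deferral path is not an error handler bolted on afterwards; it is part of the system’s normal behavior.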

Sidebar: Chilán as a Constrainist Language

One instantiation of Constrainism in technical practice is the design of Chilán(b), a graph-native functional programming language aimed at enforcing schema discipline, constraint propagation, and interpretability in large-scale, complex data environments. By leveraging functional programming’s emphasis on correctness, referential transparency, and immutability, Chilán avoids the lack of clarity and hyper-flexibility often found in contemporary machine learning pipelines and languages.

Unlike many present-day systems that tolerate or even encourage schema-less flexibility, Chilán is built on the principle that constraints and schemas are essential safeguards. They ensure clarity of purpose, enforce domain-specific logic, and reduce dependence (over-reliance) on probabilistic guesswork. By embracing well-defined structures rather than avoiding them, Chilán implements the Constrainist belief that safety and intelligence begin with intentional design boundaries.
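
Chilán’s own syntax is not shown here; instead, purely as an analogy, the hypothetical Python sketch below illustrates the constraint-first pattern the sidebar describes, with every field name and rule invented for illustration: a schema is declared before any data is accepted, violations are rejected by design, and deviations trigger human oversight rather than being silently absorbed.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List


@dataclass
class Field:
    name: str
    dtype: type
    check: Callable[[Any], bool]  # domain-specific constraint on the value


# The schema is declared up front: clarity of purpose instead of
# schema-less flexibility. Fields and rules here are purely illustrative.
PATIENT_RECORD_SCHEMA: List[Field] = [
    Field("age", int, lambda v: 0 <= v <= 130),
    Field("diagnosis_code", str, lambda v: len(v) > 0),
]


def validate(record: Dict[str, Any], schema: List[Field],
             escalate: Callable[[str], None]) -> bool:
    """Reject constraint violations by design; route deviations to a human."""
    ok = True
    for field in schema:
        value = record.get(field.name)
        if not isinstance(value, field.dtype):
            escalate(f"missing or mistyped field: {field.name}")
            ok = False
        elif not field.check(value):
            escalate(f"constraint violated on field: {field.name}")
            ok = False
    return ok


# A deviation (an implausible age) is flagged for review, not guessed around.
validate({"age": 212, "diagnosis_code": "A10"}, PATIENT_RECORD_SCHEMA,
         escalate=lambda msg: print("flag for human review:", msg))
```

The mechanics are trivial; the design commitment is what matters: the boundaries are intentional, visible, and enforced before any downstream optimization takes place.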

Anticipating and Addressing Criticism

Any approach that questions dominant ideas, especially ones painted as “rational,” “effective,” or “progressive,” must be ready to face resistance, particularly when those ideas are deeply rooted, institutionalized, and hold considerable cultural influence. In this section, we look at the most likely critiques of both the expanded TESCREAL+ analysis and the Constrainist alternative, and present our responses as part of the wider debates around AI ethics, knowledge, and system design.

Bundling Too Many Distinct Ideologies

Critique: The TESCREAL+ framework combines diverse ideological projects with different goals and origins. Transhumanism is not the same as Effective Altruism, Rationalism is not Longtermism, and Computational Immortalism is not Dataism.

Response: We acknowledge that these ideologies are internally diverse. Our claim is not that they are identical, but that they share a similar structure, with a common commitment to optimization, abstraction, and technological transformation as paths to moral progress. While they differ at face value, they converge on a design logic that favors speed, scale, and KPIs. The TESCREAL+ bundle is useful not as a classification, but as a diagnostic tool for tracing shared assumptions across the philosophical ideologies that shape today’s technological progress, specifically within the realm of AI (and AGI).

Attacking Something that Saves Lives

Critique: Optimization is not inherently harmful. In fact, it has saved lives, for example, through better logistics in disaster relief, more accurate diagnostics in medicine, and higher efficiency in energy systems. Why critique it so broadly?

Response: Our argument is not against optimization per se, but against its unquestioned elevation to a moral principle. We distinguish between situational optimization (bounded, contextual, and deliberate) and optimization determinism, which holds that systems should always aim to eliminate friction, redundancy, and ambiguity. Many of the harms associated with algorithmic injustice come not from optimization alone, but from applying it without constraint, reflection, or considering diverse perspectives.

Luddism or Technophobia Revisited

Critique: The proposal to embed inefficiency into AI systems sounds like a technophobic resistance to progress. Isn’t this simply a modern version of anti-technology sentiment?

Response: Constrainism is not against technology; it is against the idea that progress has only one fixed objective, namely, some form of optimization. We don’t reject AI systems. Instead, we argue they should be built to protect, and to promote, human oversight, moral complexity, and humility about what we know, while remaining cautious in situations where we do not know! Constrainism relies on careful systems thinking and good design. What it rejects is the reduction of moral values to narrow and impersonal metrics.

Operationalizing a Vague Concept

Critique: Terms like “friction” and “strategic inefficiency” are philosophically rich but technically vague. How would this actually be implemented in code, process, or policy?

Response: Constrainism is already visible in real systems, including human-in-the-loop pipelines, adversarial training in machine learning, techniques that protect individual data privacy, democratic oversight bodies, and regulatory tools like algorithmic impact assessments (formal reviews of how automated systems might affect people and society). What we offer is not an entirely new approach, but a reconsideration of design decisions as ethical choices. In addition, tools like Chilán (discussed earlier) demonstrate that constraint-first design is both technically viable and productive in how it supports meaningful knowledge work.

AI Ethics by Another Name

Critique: Isn’t Constrainism just a repackaging of responsible AI or human-centered design principles?

Response: Constrainism builds on, but goes beyond, existing AI ethics frameworks. While many responsible AI initiatives emphasize explainability, fairness, and safety, they often treat ethics as something to be added after the system has been optimized. Constrainism challenges this logic. It puts constraint at the core of the design process, not as an afterthought, nor as a separate filter, but as the starting point. This is not ethics-as-regulation. Rather, it is ethics as the foundation of the system itself (ethics-as-architecture).

Short-Term Thinking: Why Ignoring the Past Undermines the Future

One of the most dominant characteristics of TESCREAL+ ideologies is how unevenly they treat time. Longtermism, Accelerationism, Singularitarianism, and other forms of futurism all place the distant future at the center of moral consideration. They treat the present as an obstacle (a mathematical challenge, if you will) to overcome and the past as little more than an amusing footnote. But this approach is not just ethically shaky, it is also logically flawed.

Speculation Without Foundation

Many TESCREAL+ ideologies appeal to rationalism, evidence, and formal reasoning. And yet, they frequently rely on speculative futures that cannot be tested, updated, or falsified. Projections about posthuman intelligence, intergalactic civilizations, or moral value across astronomical timeframes are often presented with confidence and utmost mathematical precision. But precision is not the same as certainty or clarity. These speculative claims are intellectually fragile as they contradict their own assumptions by operating without reasonable verification or moral accountability.

The Erasure of the Past

By prioritizing an imagined future, TESCREAL+ advocates often treat the past as irrelevant. Consequently, ethical lessons from history, deep cultural knowledge, and intergenerational memory are dismissed, oversimplified, and eventually bypassed altogether. But this obsession with the current vision of the future, while dismissing history, fails to recognize that every future eventually becomes a past in its own right. Designing for the future while erasing the legacy of past harms and mistakes only serves to deepen injustice. A system that ignores its own historical journey cannot learn, cannot adapt, and cannot be just.

The Insignificance of the Present

In many TESCREAL+ ideologies, the present is treated as morally insignificant. A single suffering person today is seen as less valuable than trillions of hypothetical lives in the future. Cultural diversity, systemic injustice, and ecological collapse become footnotes to the primary objective of safeguarding an imagined future populated by imagined beings. But such reasoning ignores the fact that the present is the only place where tangible action is possible. Disregarding the now in favor of speculative futures leads to a collapse of morality, where human pain becomes an acceptable cost of optimization on the road to a, seemingly, better future.

Towards a More Coherent View of Time

Constrainism takes a different view. It views time as layered, cyclical, and morally complex. The past is a source of wisdom and reflection, the present is a site of responsibility and action, and the future is a domain of possibility and progress. But none can be ignored in favor of the others. Systems built on Constrainist principles honor memory, preserve ambiguity, and embrace the complexity of ethical consideration across time. In doing so, they promote neither misplaced nostalgia nor vague techno-utopian fantasies. They are not paralyzed by the past, but guided by it.

The inability of TESCREAL+ ideologies to honor both the past and the present makes them logically incoherent. Constrainism, by contrast, insists that ethical design must work across time, not outside it.

Sidebar: A Technologist’s Ethical Dilemma: Am I What I Am Critiquing?

Constrainism reflects something rather personal for me. As a technologist, I was trained to value elegance, precision, and optimization. I built systems meant to reduce complexity, uncover structure, and eliminate waste. My early career was shaped by rationalist logic, measurable progress, clean models, and formal systems. I admired thinkers who promised a better world through intelligence, and I believed that technology, if designed rigorously enough, could save us from ourselves.

But, somehow along the way, something shifted for me.

The more I worked on AI, the more I realized that the problems were not just technical. They were philosophical. The tools I used were powerful, but the assumptions behind them were questionable. I started noticing how easily ideas that at first felt like rational thinking turned into clever justification (in what you might call a shift from rationalism to rationalization). I saw how useful tools and approaches, such as optimization techniques, were used to mask harmful oversimplifications. The systems we were designing reflected not just our intelligence, but also our blind spots, and perhaps even our worst emotional instincts and biases.

So, I started to entertain the following question: Am I what I am critiquing?

In many ways, yes. I have certainly benefited from the very ideas I now interrogate. I still believe in the power of science and technology. I still write code and admire clarity. But I no longer believe that clarity is always good, or that efficiency is always right, or that abstraction is always harmless. I no longer believe that the best systems are the ones that disappear into uniformity. I believe in systems that interrupt, invite pause, slow us down, and, above all, leave space for judgment, contradiction, and reflection on context.

This paper, and the concept of Constrainism, is my attempt to reconcile those internal tensions. To build a future not free from friction, but informed by it. To show that constraint is not an obstacle to intelligence, but a condition, a necessary condition, for wisdom. And to remind myself, and anyone listening (reading), that the most dangerous systems are not the ones we mistrust. They are the ones we think are inevitable and beyond criticism.

Sidebar: What AI Could Be: A Constrainist Vision of Progress

Despite the sharp critiques laid out in this paper, Constrainism is not a rejection of AI. It is a refusal to build it without reflection. It is a refusal to frame AI as destiny. And it is an invitation to design it deliberately.

In the TESCREAL+ worldview, AI is often imagined as a god, a prophecy, or a governor. Something that is destined to arrive, and is assumed to be both polished and optimized. In contrast, Constrainism sees AI as a mirror: it reflects our fears, our values, and our contradictions. Its role is not to dictate the future, but to open a continuous dialogue and deepen our understanding of ourselves.

A Constrainist AI is interruptible. It is not a black box or an oracle, but a tool that slows itself down when the stakes are high. It welcomes ambiguity rather than rushing decisions based on forced clarity. It augments human thought, but never replaces it. It is accountable, and not simply autonomous. It learns from the past. It accounts for the present. And it is designed for moral friction, not moral shortcuts.

What AI could be is not the endpoint of intelligence. It could be the beginning of humility. If we build with constraint, we do not lose power. Instead, we gain clarity about when not to use it.

Conclusion: Embracing Constraint, Reclaiming Ethics

The future of AI (and AGI) is not being written purely in code. Instead, it is being guided by a diverse, and mostly invisible, set of ideologies, TESCREAL and its expanded offshoots, which promote the pursuit of speed, scale, and oversimplification as the only morally correct path for the future of humanity. These ideologies do not merely guide technical advances. They tend to redefine the very essence of ethical reasoning.

In this paper, we’ve argued that what ties these ideologies together is not just a shared vision of the future. It is a much deeper belief in what we call Optimization Determinism. In response, we introduced Constrainism, as a counter-approach that sees Strategic Inefficiency not as a flaw, but as a core design and ethical value. Constrainism argues that constraint, often dismissed as technical debt or unwelcome friction, is essential for reflection, resilience, and fairness. And more than that, constraint is a way to inject much-needed human intelligence into AI. While automation removes nuance and promotes speed as an ultimate objective, well-placed constraints encourage human reasoning, ethical considerations, and contextual thinking.

In doing so, constraints do not become limits to intelligence, but act as extensions of it, highlighting that learning is not just about computation, but also conflict, balance, and context.

The implications are broad: For system designers, it means treating constraint not as a limitation, but as a source of guidance and control. For researchers, it means developing new methodologies which preserve and utilize ambiguity rather than simply eliminate it. For educators, it means resisting the temptation to reduce AI ethics to predefined checklists, secondary filters, or compliance dashboards and abstract KPIs.

It is an uncomfortable fact that we are living in a world where ethics is sold as a service, or an external collection of requirements, designed to simply enhance an already built system. Constrainism rejects this “ethics-as-a-service” worldview and instead, promotes ethics as an essential component of responsible system design and architecture.

To embrace constraint is, therefore, to bring ethics back to real life. It means accepting that uncertainties, interruptions, and different points of view are not problems to fix, but realities we must consciously design for.

The choice before us is not between progress and stagnation. It is between over-optimization without understanding or accountability, and bounded systems that preserve space for reflection, discovery, objection and, ultimately, responsibility. Constrainism does not reject the future. It simply insists that we arrive there thoughtfully, and with intent.

Appendix A: Teaching AI Within Constraint: Towards a Constrainist Approach to Education

As the TESCREAL+ ideologies continue to shape not only the design of AI systems but also the training of those who build, use, and promote them, it becomes increasingly important to examine how AI education treats their underlying logics. Technical curricula often internalize optimization determinism implicitly. For example, performance metrics are emphasized, deployment speed is glorified, and ambiguity is framed as a problem to solve, not a condition to explore and factor in.

If system design is a consequence of ideology, then teaching must be considered as instrumental to design. In other words, the approach we take in classrooms, and the mindset we promote, will influence the systems we create. That’s why Constrainism must be treated as more than a design philosophy. It should also guide how we teach.

From Optimization Mindsets to Ethical Architectures

A Constrainist worldview challenges educators to change the framing of AI education, from efficiency to restraint, from blind solutionism (the belief that all problems, including complex social issues, can be solved, often with the aid of technology) to deeper exploration, and from automation by default to guided intervention.

It invites instructors to explore how ethical friction, ambiguity, and redundancy can be integrated more broadly into learning, rather than introduced as disparate or standalone “ethics modules” or “compliance lectures”.

Curriculum Design Principles

A Constrainist curriculum might include:

  • Ethical Friction Labs: Exercises that deliberately embed ambiguity, conflicting objectives, or incomplete information to promote careful judgment over blind pursuit of automation at all costs,
  • Failure as Feature: Assignments in which identifying and analyzing modes of failure are valued just as highly as, or above, arriving at a pre-defined “correct” result,
  • Diverse Logics: Exposure to non-Western ideologies, indigenous knowledge systems, and alternative ways of managing data as legitimate inputs to AI design,
  • Strategic Interruptibility: Projects that enforce human-in-the-loop mechanisms and test systems’ ability to pause, reflect, or defer as part of their “normal” behavior, and
  • Constraint Injection: Creative design tasks where constraints are intentionally, and randomly, introduced (e.g., no floating-point math, no cloud access, limited time horizon, etc.) to encourage creative thinking and fresh perspectives.

The Manifesto as a Teaching Tool

The Constrainist Manifesto (see Appendix B below) can serve as a useful and flexible guide, offering:

  • A conversation starter in design ethics lectures, seminars, and the like,
  • A framework for debate in multidisciplinary workshops on design, and
  • A challenge to stimulate student reflection, inviting critiques, idea generation, or alternative manifestos.

By inviting students to take the manifesto as a starting point for discussion, instructors can foster a more active, reflective, and consequential engagement with AI design.

A Call to Educators

Teaching Constrainism is not a rejection of technology or technical excellence. It is an invitation to deepen our understanding of technology. It insists that ethical reasoning is not separate from code, architecture, or systems design, but should be embedded in them. It challenges us to produce technologists who are not only builders, but interpreters, stewards, and critics.

In a time when “ethics-as-a-service” threatens to turn morality into a one-dimensional compliance checklist, Constrainism reminds us that education must resist that logic. And it offers a simple principle that reads: To teach constraint is to teach sound judgment.

Appendix B: The Constrainist Manifesto

Constraint is not a failure of design. It is its foundation.

Constraint is not a lack of imagination. It is an invitation to imagine differently.

We live in a time when optimization is treated as morality, any form of slowness is translated as inefficiency, redundancy is dismissed as waste, and ambiguity is considered failure. But these assumptions are not purely technical or neutral. They are, in fact, ideological. They reflect a worldview in which speed, scale, and simplicity are valued above justice, resilience, and meaning.

Constrainism rejects this worldview.

We believe that:

  • Slowness leads to wisdom: Systems that move too fast cannot be interrupted, questioned, or redirected,
  • Redundancy offers protection: A single elegant solution is more fragile than a network of imperfect backups,
  • Ambiguity promotes insight: Uncertainty and contradictions are invitations to reflect, not errors to suppress,
  • Constraint brings clarity: The most reliable systems are those that define limits, expose clear boundaries, and enable deeper understanding, and
  • Human judgment is irreducible: No optimization function can perfectly capture moral reasoning, cultural complexity, or historical depth.

Constrainism claims that ethics must be built into system architecture, not treated as an afterthought or a separate interface. It is a commitment to design systems that protect human judgment, not eliminate it.

We do not reject technology. We reject blind solutionism.

We do not reject progress. We reject mindless acceleration without thoughtful sense of direction.

We do not reject intelligence. We reject intelligence without care or accountability.

And above all:

Good Artificial Intelligence Must Be a Mirror to Human Intelligence.

***Notes

a. To the best of our knowledge, Computational Immortalism is a term introduced for the first time in this paper. When we state that “…Computational Immortalism have gained popularity and gradually become mainstream…”, we mean that many of the ideas that make up this concept are becoming more visible in current technology-related discussions.

b. Contact the writer at: info@parvizforoughian.com for more details about Chilán programming language.

***References

  1. Torres, É. P., & Gebru, T. (2023). The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence. First Monday: https://firstmonday.org/ojs/index.php/fm/article/view/13636
  2. Holling, C. S. (2001). Understanding the Complexity of Economic, Ecological, and Social Systems. Springer Nature Link: https://link.springer.com/article/10.1007/s10021-001-0101-5
  3. Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford University Press: https://www.research.ed.ac.uk/en/publications/technology-and-the-virtues-a-philosophical-guide-to-a-future-wort
  4. Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. (Polity 2019): https://www.ruhabenjamin.com/race-after-technology
  5. For more background on these ideologies, see Torres & Gebru (2023, as per reference 1 above), and related entries on Wikipedia.
  6. Marx, K. (1848). On the Question of Free Trade. Democratic Association of Brussels, 9 January: https://cooperative-individualism.org/marx-karl_on-the-question-of-free-trade-1848.htm
  7. Noys, B. (2014). Malign Velocities: Accelerationism and Capitalism. Zero Books: https://www.amazon.sg/dp/1782793003
  8. Brooks, D. (2013). The Philosophy of Data. New York Times: https://www.nytimes.com/2013/02/05/opinion/brooks-the-philosophy-of-data.html
  9. Harari, Y. N. (2016). Homo Deus: A Brief History of Tomorrow: https://tinyurl.com/mppnfjsp
  10. Kurzweil, R. (2000). The Age of Spiritual Machines: https://tinyurl.com/4h9u5f6c
