TESCREAL+: A White Paper on Ethics of Strategic Inefficiency

Prologue: Some Useful Background

This white paper, and the follow-up articles, mini-articles, and social media posts it will inevitably lead to, are my latest attempt at making sense of the efficiency-obsessed world we all live in.

As a technology hawk (I would like to think of myself as a harmless technology optimist, but who’s to say) I arrive at this point with more than two decades of work in AI (Artificial Intelligence) across various industries. My experience and studies encompass the first wave of logic-based AI, aka the good old days of intelligence based on certainty, as well as the new-age AI, or what I have previously called the age of probability-infested pseudo-intelligence. And I am certainly crazy (or romantic?) enough to hang around as much as I can to see, and hopefully be part of, the next phase of AI development (why a next phase is needed is a subject of more articles in the near future).

With this background in mind, I suppose there should not be much of a surprise to learn that I have gradually found myself drawn to the Ethics of AI. This gradual shift is partly due to the fact that getting the ethics of AI right is an inherently fascinating technical challenge to overcome, and partly a result of my intense belief that AI, as a mirror for HI (Human Intelligence), is humanity’s best chance to come to terms with our deepest biases, fears, and hopes.

One thing you learn when contemplating the impact of modern AI is the increasing influence of efficiency as a driving force for making any decision and for improving any algorithm. This obsession with efficiency has not only been tolerated but, in fact, aggressively encouraged, as a normal mode of operation, by most CEOs and technology investors whose definition of progress can be summed up with a few “strategically defined” KPIs (Key Performance Indicators) that tend to simplify (oversimplify is a more accurate word here) anything and everything into a set of “measurable” numbers.

While I cannot deny that this approach has resulted in tangible progress in many areas, overuse of it has certainly contributed towards the erosion of nuance, in favor of a more easily understood “average behavior”, and ultimately, the slowing down of the rate of innovation. After all, there are only a few ways one can apply a given set of techniques to “optimize” a problem, and if everyone does just that, all solutions soon converge and we end up in a world of uninteresting uniformity, where outliers are dismissed and thinking differently is abandoned on the charge of leading to suboptimal outcomes.

The technological evidence of this “efficiency driven” approach is best observed in the way the new Gen AI (Generative AI) models have swept the AI landscape. Those of us who have been working with the latest Gen AI “toys” did not take long to notice the shortcomings of such solutions in their current form, from the annoying tendency to repeat the same incomplete or wrong answers to a painful “uniformity or blandness of output” across multiple Gen AI engines, which lack both creativity and tolerance for nuance. Needless to say, such shortcomings can, and often do, lead to outcomes that are ethically questionable, to say the least.

The topic of AI Ethics touches on many areas of discourse, such as philosophy, psychology, language, etc., and it is easy for anyone to lose their bearings, especially in the beginning of their journey. Therefore, I consider myself quite fortunate to have come across the seminal paper by Torres and Gebru(1), which helped me greatly in looking at the problems of modern AI, not as a set of simple mathematical equations to solve, but as an extension of a broader set of philosophies that drive its progress towards its nirvana of AGI (Artificial General Intelligence).

Going through this marvelous paper a few times allowed me to connect a few dots that have been nagging at me for many years. And on further reflection, I decided to:

  1. Take on the challenge of extending their original bundle of related philosophies, which they call TESCREAL, and introduce TESCREAL+,
  2. Put forward an argument for relating the above extended bundle of philosophies to an “unhealthy” search for efficiency,
  3. Propose a manifesto for reframing this particular area of AI development in order to bring much needed humanity to its progress.

In what follows, I have tried, as much as possible, to adhere to the expected format of a formal white paper throughout (while remaining accessible to a wider audience) but as always, I appreciate constructive feedback on both form and content from everyone who shares a common interest in the topic.

Here we go folks…

Introduction

In 2023, Torres and Gebru(1) introduced the concept of TESCREAL, an acronym for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism, which, they argued, collectively form the ideological foundation behind contemporary AGI development. Interestingly, they traced the genealogical roots of TESCREAL to 20th century eugenics and the techno-utopian worldview, highlighting how the promise of AGI is often expressed in terms of optimization and going beyond accepted human limits (an idea known as transcendence).

In this paper we build upon that foundation but shift the analytical approach somewhat. We propose an expanded framework, called TESCREAL+, which incorporates additional ideological movements not originally addressed, namely Accelerationism, Dataism, and Computational Immortalism. While these ideologies are clearly distinct in both their origins and terminology, we argue that they share a fundamental core idea, namely an undying commitment to efficiency as the moral foundation of progress. Whether through the pursuit of algorithmic optimization, technological advancement, or human transcendence, the guiding principle of TESCREAL+ is the optimization of all human systems, be they biological, cognitive, social, or ethical, towards an imagined state of perfection.

We term this phenomenon Optimization Determinism, the belief that friction, ambiguity, and redundancy are not only inefficient but morally inferior, and must be avoided at all costs. Inspired further by Holling’s systems theory(2), as well as works on human-centered ethics by Vallor(3) and Benjamin(4), we argue that this Optimization Determinism risks repeating historical mistakes, particularly those rooted in eugenic, colonial, and technocratic approaches.

In response to this worldview, we introduce Constrainism, a counter-philosophy based on what we call Strategic Inefficiency. Where search for optimization encourages elimination of friction (any form of resistance), Constrainism sees constraint as an integral source of ethical insight and moral wisdom. It offers a new design ethic for AI systems, which values and welcomes ambiguity, friction, and, above all, human judgment. In other words, Constrainism treats these characteristics not as design flaws, but as features essential to a fair and resilient system.

We conclude the paper by anticipating, and addressing, a number of potential critiques of the TESCREAL+ framework, and by proposing a better path forward for how AI is taught, researched, used, and managed. And we argue that unless we move away from treating optimization as a virtue and efficiency as a moral guide, AI will not just reflect existing inequalities; it will, in fact, make them worse, even as its biggest supporters, investors, and evangelists claim to be solving them.

TESCREAL Revisited: A Brief Refresher on the Original Ideological Bundle

As referenced by Torres and Gebru(1), TESCREAL stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism.

Each of these philosophical branches has its own focus, but they all share a central idea: that advanced technology, guided by abstract ethical reasoning, can and should radically improve the human condition. Here’s a brief summary of each(5):

  • Transhumanism promotes using technology to transcend (enhance) human abilities, be they biological, mental, or emotional, through tools such as AI, genetic engineering, and nanotechnology.
  • Extropianism, which is a spin on transhumanism developed in the 1980s by thinkers such as Max More, supports a belief in the merits of endless progress and overcoming physical or biological limits.
  • Singularitarianism, in its more recent formulation, centers on the idea that AI will eventually surpass human intelligence, triggering a singularity or the birth of superintelligence, which renders human control unnecessary.
  • Cosmism combines science with spiritual ideas, imagining a future where we are capable of resurrecting the dead, colonizing space, and extending consciousness forever.
  • Rationalism refers to a philosophical viewpoint that sees logic, Bayesian reasoning, and utility maximization as the most trustworthy ways to make decisions.
  • Effective Altruism (EA) tries to apply reason and data to charity, funding actions that aim to save (or improve) the most lives per unit of money or resources spent. Some versions of this philosophy go further, focusing on long-term risks in the context of speculative futures.
  • Longtermism argues that future lives matter much more than present ones. Its main objective is the reduction of existential risks to humanity, and therefore, it claims that most of our energy and resources should be dedicated to shaping humanity’s long-term survival.

As Torres and Gebru argue, what connects these ideas isn’t just futurism or the idea of abstract morality. It’s a deeper logic that resembles early 20th century eugenics, the belief that humanity should be optimized, or engineered, towards a universal idea of progress. They link TESCREAL to an earlier wave of technocratic thinking that valued purity and efficiency over diversity and complexity.

This paper builds on their work but focuses more on what we call TESCREAL’s operational core, which is the belief that optimization is a moral good, and that ambiguity, resistance, and ethical friction should be eliminated from system design. From this angle, TESCREAL is not just a collection of disparate ideas, it’s a design blueprint that increasingly ignores human complexity, unpredictability, and personal experience.

That doesn’t mean all these ideologies are the same. For instance, transhumanism values individual enhancement, while longtermism focuses on more collective outcomes. But in practice, they often converge on a shared mindset, which favors perfect abstraction over real-life messiness, speed over reflection, and simplicity over nuance.

In the next section, we’ll expand the TESCREAL bundle by introducing additional ideologies that further deepen this logic of optimization, in a new framework which we call TESCREAL+.

Expanding the Bundle: Why TESCREAL+?

The original TESCREAL framework provides a powerful foundation for analyzing AGI ideologies, but it is not complete. Since its introduction, other influential movements have emerged, most notably Accelerationism, which Torres and Gebru(1) noted as a variant of Effective Altruism. Alongside it, Dataism and Computational Immortalism have gained popularity and gradually become mainstream, particularly in the tech industry(a). These additions somewhat magnify TESCREAL’s core ideas, most notably the belief that optimization, disruption, and a constant focus on growth and scale are not just tools, but unquestioned virtues or moral principles.

To capture this broader bundle of related ideas, we propose the term TESCREAL+, which, in its current form, also includes the above additional ideological viewpoints: Accelerationism, Dataism, and Computational Immortalism.

Accelerationism

The origin of Accelerationism, as a philosophical movement, can be traced back to Karl Marx and his view of free trade in 1848(6). While Marx never used the word “Accelerationism,” his position fits the idea. He defended free trade even though he believed it would worsen class conflict. As he put it, free trade “…pushes the antagonism of the proletariat and the bourgeoisie to the extreme point.” However, he still supported it based on the belief that the “…free trade system hastens the social revolution”. This belief, that speeding up harmful systems can force needed change, is a core part of what later became known as Accelerationism.

The term itself was introduced by Benjamin Noys in his book Malign Velocities: Accelerationism and Capitalism(7), in which he argued that embracing speed as a political strategy risks worsening the negative effects of the very systems the strategy hopes to change.

More recently, some right-wing thinkers and modern technologists have embraced Accelerationism as a strategy for speeding up social, economic, and technological change. They argue that rapid change can help society break free from existing political systems and mental limitations. And in its current AI-manifested form, Accelerationism has taken on new urgency and encourages:

  • Breaking things over improving them (move fast and break things),
  • Treating instability as innovation, and
  • Taking the pursuit of speed as a valid strategy towards truth.

Accelerationism often discourages ethical reasoning, portrays caution as weakness, and paints a preferred version of the future as both the “natural” and “inevitable” next step in evolution. It fits seamlessly with longtermist and singularitarian outlooks, which similarly advocate for technological progress that outpaces human reflection. Importantly, Accelerationist rationales are often found in startup culture, crypto-finance, and the venture-capital discourse that actively funds AGI labs (not to mention in the boardrooms of most major enterprises in the western world).

Dataism

The term Dataism was first used by David Brooks in a 2013 New York Times column(8), but it was Yuval Noah Harari who gave it broader visibility in his 2016 book Homo Deus(9). Dataism is the belief that the universe and human society are best understood as a series of data flows, and that the most ethical or advanced systems are those that maximize data collection and processing.

While Rationalism trusts internal reasoning, Dataism moves the focus outward to measurements and algorithms. This shift ultimately results in treating more data as equivalent to more truth, considering algorithmic insights as integral to better decision making, and relegating subjective experience to the rank of statistical noise.

Though not often framed as a formal ideology (at least not so far), Dataism has quietly become the default worldview in many branches of the AI industry. It underlies everything from recommendation engines to predictive analytics. And in its more aggressive interpretation, it risks eliminating subjectivity as an ethical category altogether.

Computational Immortalism

Over the past two decades, the growing tendency to center human progress around technological breakthroughs has given rise to a set of worldviews that blend advanced computing with spiritual or metaphysical beliefs. While spiritualism itself is broad, ranging from religious traditions to metaphysical speculation, what concerns us here is a more specific thread, exemplified by influential thinkers such as Ray Kurzweil in his 1999 book The Age of Spiritual Machines(10).

We use the term Computational Immortalism to describe this emerging worldview, which unites several closely linked ideas, including:

  • The belief that consciousness can be uploaded or preserved digitally,
  • The view that AGI is a stepping-stone towards godlike intelligence, and
  • The idea that death is an engineering problem to be solved.

These ideas, while once considered fringe, now surface in AI labs, investor circles, policy documents, and the rhetoric of technopreneurs. This belief system often borrows religious tones, for example, by referencing transcendence, eternal life, or destiny, with the slight deviation that it replaces the divine with software (god as code). In doing so, it offers moral meaning (and justification) for high-risk or speculative technological paths, postponing ethical responsibility with promises of a radically improved future, a pattern we describe as moral deferral.

Computational Immortalism differs from (its most closely related) TESCREAL components in key ways:

  • Unlike Transhumanism, which focuses on enhancing the human body or mind, this ideology seeks to transcend them entirely,
  • Unlike Cosmism, which imagines the resurrection of the dead and expansion across the cosmos, it centers digital continuity over physical extension, and
  • Unlike Longtermism, which justifies present sacrifice in service of future generations, it justifies sacrifice in pursuit of personal digital salvation.

The impact of this idea on AI discourse is already visible, be it in projects that aim to simulate our loved ones, preserve digital legacies, or train models to mimic our consciousness. As such, Computational Immortalism is not just an extension of TESCREAL; it is a critical update for understanding where AI ideology may be heading next.

A Unified Logic Driving TESCREAL+

The inclusion of Accelerationism, Dataism, and Computational Immortalism into the TESCREAL+ bundle is not merely additive, for the sake of including new variants of the same core ideas. It reveals a deeper convergence of worldviews. Across all these ideologies, we find:

  • A moral elevation of speed, scale, and abstract thinking above all else,
  • A consistent treatment of resistance to a preferred (and highly abstract) vision of progress as irrational or regressive, and
  • A tendency to sideline human experience in favor of universal optimization logics.

TESCREAL+ is thus more than a set of philosophies. In fact, one may call it a composite operating system for imagining the future of intelligence, morality, and governance. It informs both design and development, reinterpreting ethics itself through the lens of calculability, economic utility (i.e., maximizing outcomes or efficiency), and risk-reward calculus.

In the next section, we identify the shared foundation uniting TESCREAL and TESCREAL+, namely the obsession with efficiency, and its ethical, political, and knowledge-related consequences.

Efficiency as Ideological Core: Optimization as Moral Principle

Across the ideologies grouped under TESCREAL and its extended formulation, TESCREAL+, one theme appears with undeniable regularity: the treatment of efficiency as a moral principle. Whether it is framed through biological enhancement, existential risk reduction, increased data throughput, or long-term planning of civilization, optimization is not just a technical objective; it is a statement about what should be done.

This belief isn’t always made explicit. For example, in Effective Altruism, it shows up in cost-effectiveness calculations that prioritize interventions based on measurable impact. In Accelerationism and Singularitarianism, it is tied to indicators of exponential growth, technological inevitability, and recursive self-improvement. Across the board, efficiency appears as a moral compass. It justifies simplification (perhaps, one should say oversimplification), allows trade-offs to pass unquestioned (as long as mathematically justified), and turns complex ethical issues into seemingly neutral equations that can be solved (interestingly enough, often with inaccurate/estimated solutions!).
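To make the mechanism concrete, here is a minimal sketch, in Python and with purely invented numbers, of how a cost-effectiveness ranking of the kind used in Effective Altruism style reasoning collapses very different interventions into a single comparable score; whatever resists measurement simply scores zero and drops to the bottom of the list, regardless of its actual value.

    # Hypothetical illustration only: ranking interventions purely by measurable
    # impact per dollar, the way a cost-effectiveness calculation might.
    # All figures below are invented for the example.
    interventions = {
        "bed nets":    {"cost": 100_000, "lives_saved": 25},
        "food bank":   {"cost": 100_000, "lives_saved": 2},
        "art therapy": {"cost": 100_000, "lives_saved": 0},  # real benefit, but unmeasured
    }

    # Rank by lives saved per dollar; anything that resists measurement scores zero.
    ranked = sorted(interventions.items(),
                    key=lambda item: item[1]["lives_saved"] / item[1]["cost"],
                    reverse=True)

    for name, data in ranked:
        print(f"{name}: {data['lives_saved'] / data['cost']:.6f} lives per dollar")

The calculation itself is trivial; the point is what the single number hides, which is exactly the flattening this section describes.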

But this reliance on efficiency can be dangerous. When systems are optimized without space for uncertainty, contradiction, or judgment, they risk erasing the very things that make ethical decision-making possible and necessary. And as a consequence, bypassing moral considerations can be promoted as progress.

Optimization Determinism

We term this phenomenon Optimization Determinism. This denotes the belief that technological systems should, and inevitably will, be optimized towards some ideal end state, whether cognitive, moral, or economic. This belief rests on several interrelated assumptions:

  • That systems can and should be made frictionless,
  • That calculation outperforms deliberation and reflection,
  • That any form of slowness signals inefficiency, and therefore should be considered as failure, and
  • That constraints are flaws, not features.

This worldview, while often expressed in progressive language, reduces moral complexity to solvable problems (an optimization challenge) and frames resistance or objection as ignorance or obstruction. In doing so, it fosters overconfidence about what we can truly know, dismisses diverse perspectives, and limits the range of acceptable ethical thinking.

From Eugenics to Efficiency

This logic closely mirrors the thinking behind early 20th century eugenics, which emphasized rational planning, large-scale control, and the belief that society could (and should) be engineered from the top down. The problem with this approach, as seen in past social planning efforts, was not necessarily a lack of intelligence, but too much confidence in our collective ability to simplify and control complex realities.

Today’s systems may not sterilize or segregate, as was the case with eugenics, but they still rank, filter, and exclude, using algorithms that reward things such as productivity, intelligence scores, or credit ratings. In this way, the language of optimization hides the fact that many of the same harmful patterns continue, just in new forms. In other words, despite our undoubted progress and intelligence, we keep repeating the same mistakes over and over again.

Ethical Friction and the Loss of Insight

In many AI systems inspired by TESCREAL+ logic, speed is treated as synonymous with intelligence. Human-in-the-loop processes are removed in favor of real-time prediction. Edge cases are framed as statistical noise rather than indicators of potential structural bias or system limitations.

But friction is not inherently negative. It often signals the presence of moral complexity, and edge cases (outliers) often mark the thresholds of insight and the birth of innovation. Historically, many scientific revolutions have emerged from unexpected anomalies: the Michelson–Morley experiment preceded the theory of special relativity, and quantum mechanics was born from deviations in blackbody radiation models. In literature and art, too, breakthroughs often emerge from disruption, ambiguity, or the scrutiny of established forms. When systems are trained to suppress the exception, they may unwittingly block the conditions of innovation itself.

The irony is quite amusing: Many of the ideologies within TESCREAL+ are promoted by the very figures who publicly champion disruption and originality. Yet the knowledge habits they promote, including over-optimization, standardization, and the elimination of uncertainty, tend towards sameness and a loss of intellectual depth. The risk, then, is not just ethical or political, but also about how we understand the world. And the path they offer is likely to lead to a future that is efficient, smooth, and utterly incapable of surprise. Indeed, death by boredom seems to be the order of the day in the future!

Optimization and the Lessons of the Past

Concern about the risks of treating all human challenges as optimization problems is not new. Many researchers and thinkers have been warning for decades that this mindset can lead to fragile and harmful systems. Their insights remind us that the drive for efficiency, when left unchecked, often repeats mistakes of the past, be it the earlier eugenics movement, colonial philosophies, or technocratic projects.

For example, Holling’s work(2) on complex ecosystems shows that attempts to over-control natural environments, such as industrial forest management, often lead to collapse. Systems lose resilience whenever diversity and flexibility are stripped away in the search for increased efficiency. Vallor(3) reminds us that true ethical progress comes from cultivating virtues, such as patience and humility, rather than focusing only on measurable outcomes. And Benjamin(4) shows how algorithmic tools used in policing and credit scoring can deepen racial and social inequalities, especially when optimization replaces human judgment. Together, these lessons warn that focusing only on speed, efficiency, or outcome risks repeating the same patterns of harm observed in the past, as in earlier efforts to reshape society through control or exclusion.

Efficiency and the Delegitimization of Constraint

Constraint, in many optimization approaches, is treated as a technical limitation to be overcome, as seen in technological challenges involving compute power, latency, noise, and variability. Yet in human systems, constraint is often a moral safeguard. Constraints preserve space for reflection and resistance, allow for error correction, and prevent systems from spiraling out of control.

When these are stripped away, whether in infrastructure, education, or AI governance, what remains is a high-performing system with no ethical safety locks. This leads to systems that are optimized for something, but accountable to no one.

In the following section, we introduce Constrainism. This is a counter-philosophy that does not reject efficiency wholesale, but repositions it within a broader ethical architecture. Constrainism proposes that some inefficiencies, and indeed, some frictions, are not bugs but ethical features.

In doing so, it seeks to reinforce the value of slowness, ambiguity, redundancy, and resistance within the practice of technological design.

Strategic Inefficiency and the Case for Constrainism

If TESCREAL+ is driven by a belief in relentless optimization, then any serious challenge to its influence must address its ideas, design logic, and real-world consequences. To this end, we propose Constrainism, an alternative design ethic and philosophical stance, which treats Strategic Inefficiency as a necessary and principled response to the moral and knowledge-related failures of over-optimization.

Defining Strategic Inefficiency

Strategic inefficiency is not a rejection of performance or a call for technological stagnation. Rather, it is the deliberate embedding of friction, redundancy, and constraint within technological solutions, particularly those engaged in decision-making, classification, or resource distribution. It operates on the assumption that some forms of slowness, resistance, and ambiguity are not only tolerable but ethically indispensable.

Strategic inefficiency is evident in long-standing human systems:

  • In democratic governance, where checks and balances intentionally slow executive action,
  • In judicial systems, where appeals and adversarial reasoning delay resolution but preserve fairness, and
  • In education, where tests and revisions are integral to effective learning.

These systems treat constraint as an essential part of their designs, and consider inefficiency as the cost of reflection, pluralism, and safety.

What Is Constrainism?

Constrainism is the core idea behind strategic inefficiency. It offers an alternative to the accelerationist, rationalist, and utopian ideologies that dominate the current AI landscape. While TESCREAL+ envisions the future as a perfectly optimized and engineered nirvana, Constrainism argues that ethical systems must stay human-centric, use case specific, and open to interruption and reflection. Its core ideas include:

  • Friction is a feature: Ethics should resist the urge to flatten ambiguity into certainty,
  • Redundancy promotes resilience: Layered systems can help avoid catastrophic failures,
  • Human agency (judgment) can’t be removed or outsourced: Some decisions must remain deeply human,
  • Transparency matters: Some parts of a system, such as how decisions are made or what data is used, should be open and easy to verify, and
  • Integrity matters too: Other parts, including emotions, intuition, or personal experiences, should be protected from being turned into tools (instrumentalization) or tracked (measured) as cold, hard numbers (KPIs).

Constrainism is not anti-AI. But it does reject the idea that performance scores alone determine ethical value in AI solutions. It encourages a deeper commitment to how knowledge is understood and applied. And it portrays intelligence and ethics as processes that can be paused, disrupted, and reflected upon.

From Theory to Design: Constrainism in Practice

The principles of Constrainism can be translated into practical design constraints. These include:

  • Human-in-the-loop architectures, where oversight is not optional but structurally required,
  • Decision latency layers, where algorithmic recommendations must wait for human confirmation under conditions of uncertainty,
  • Plural objective functions, which allow systems to weigh competing ethical criteria rather than optimize a single scalar value,
  • Error-promoting simulations, where edge-case behavior is stress-tested and embraced as a learning vector, and
  • Schema enforcement for data systems, where logic is closely tied to use cases, errors are discouraged by design, and deviations trigger human oversight.

These are not merely technical suggestions. They reflect deeper commitments about how knowledge is understood and applied. They reflect a view of intelligence and ethics as processes interrupted and shaped by disagreement, disruption, and delay.
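As a rough illustration of how some of these constraints might look in practice, consider the following Python sketch. It combines a decision latency layer (recommendations below a confidence floor must wait for a human) with plural objective functions (several criteria reported side by side, never collapsed into one number). All names, thresholds, and scores here are hypothetical assumptions made for the example, not a reference implementation.

    from dataclasses import dataclass
    from typing import Callable, Dict, Tuple

    @dataclass
    class Recommendation:
        action: str
        confidence: float           # model's own confidence, in [0, 1]
        scores: Dict[str, float]    # one score per ethical/operational criterion

    def plural_objectives(action: str) -> Dict[str, float]:
        # Plural objective functions: each criterion is evaluated separately and
        # surfaced as-is; no weighted sum collapses them into a single scalar.
        criteria: Dict[str, Callable[[str], float]] = {
            "accuracy": lambda a: 0.92,        # placeholder evaluators
            "fairness_gap": lambda a: 0.15,
            "reversibility": lambda a: 0.40,
        }
        return {name: evaluate(action) for name, evaluate in criteria.items()}

    def decide(rec: Recommendation, confidence_floor: float = 0.8) -> Tuple[str, Dict[str, float]]:
        # Decision latency layer: below the floor the system pauses and defers to
        # a human reviewer instead of acting, exposing every criterion for review.
        if rec.confidence < confidence_floor:
            return "deferred_to_human", rec.scores
        return "auto_approved", rec.scores

    rec = Recommendation(action="approve_application", confidence=0.65,
                         scores=plural_objectives("approve_application"))
    print(decide(rec))   # -> ('deferred_to_human', {'accuracy': 0.92, ...})

The design choice worth noticing is that the deferral path is structural: no downstream component can act on a low-confidence recommendation without a human having seen the full set of criteria.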

Sidebar: Chilán as a Constrainist Language

One instantiation of Constrainism in technical practice is the design of Chilán(b), a graph-native functional programming language aimed at enforcing schema discipline, constraint propagation, and interpretability in large-scale, complex data environments. By leveraging functional programming’s emphasis on correctness, referential transparency, and immutability, Chilán avoids the lack of clarity and hyper-flexibility often found in contemporary machine learning pipelines and languages.

Unlike many present-day systems that tolerate or even encourage schema-less flexibility, Chilán is built on the principle that constraints and schemas are essential safeguards. They ensure clarity of purpose, enforce domain-specific logic, and reduce dependence (over-reliance) on probabilistic guesswork. By embracing well-defined structures rather than avoiding them, Chilán implements the Constrainist belief that safety and intelligence begin with intentional design boundaries.
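Since Chilán itself is not specified in this paper, the following Python sketch only illustrates the constraint-first principle described above, under assumed names and rules: the schema is declared before any data flows, records are immutable once validated, and anything that deviates is rejected and escalated rather than silently coerced.

    from dataclasses import dataclass

    @dataclass(frozen=True)              # immutability, in the functional spirit
    class PatientRecord:
        patient_id: str
        age_years: int
        diagnosis_code: str              # must follow the domain's coding scheme

    def validate(raw: dict) -> PatientRecord:
        # Constraint-first ingestion: deviations are surfaced for human review,
        # never patched over with probabilistic guesswork.
        age = raw.get("age_years", -1)
        if not (0 <= age <= 130):
            raise ValueError("age_years outside allowed range: escalate to a human")
        if not str(raw.get("diagnosis_code", "")).startswith("ICD-"):
            raise ValueError("unrecognized diagnosis coding scheme: escalate to a human")
        return PatientRecord(raw["patient_id"], age, raw["diagnosis_code"])

    record = validate({"patient_id": "p-001", "age_years": 54, "diagnosis_code": "ICD-E11"})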

Anticipating and Addressing Criticism

Any approach that questions dominant ideas, especially ones painted as “rational,” “effective,” or “progressive,” must be ready to face resistance, particularly when those ideas are deeply rooted, institutionalized, and hold considerable cultural influence. In this section, we look at the most likely critiques of both the expanded TESCREAL+ analysis and the Constrainist alternative, and present our responses as part of the wider debates around AI ethics, knowledge, and system design.

Bundling Too Many Distinct Ideologies

Critique: The TESCREAL+ framework combines diverse ideological projects with different goals and origins. Transhumanism is not the same as Effective Altruism, Rationalism is not Longtermism, and Computational Immortalism is not Dataism.

Response: We acknowledge that these ideologies are internally diverse. Our claim is not that they are identical, but that they share a similar structure, with a common commitment to optimization, abstraction, and technological transformation as paths to moral progress. While they differ at face value, they converge on a design logic that favors speed, scale, and KPIs. The TESCREAL+ bundle is useful not as a classification, but as a diagnostic tool for tracing shared assumptions across the philosophical ideologies that shape today’s technological progress, specifically within the realm of AI (and AGI).

Attacking Something that Saves Lives

Critique: Optimization is not inherently harmful. In fact, it has saved lives, for example, through better logistics in disaster relief, more accurate diagnostics in medicine, and higher efficiency in energy systems. Why critique it so broadly?

Response: Our argument is not against optimization per se, but against its unquestioned elevation to a moral principle. We distinguish between situational optimization (bounded, contextual, and deliberate) and optimization determinism, which holds that systems should always aim to eliminate friction, redundancy, and ambiguity. Many of the harms associated with algorithmic injustice come not from optimization alone, but from applying it without constraint, reflection, or considering diverse perspectives.

Luddism or Technophobia Revisited

Critique: The proposal to embed inefficiency into AI systems sounds like a technophobic resistance to progress. Isn’t this simply a modern version of anti-technology sentiment?

Response: Constrainism is not against technology; it is against the idea that progress has only one fixed objective, namely, some form of optimization. We don’t reject AI systems. Instead, we argue they should be built to protect, and to promote, human oversight, moral complexity, and humility about what we know, while remaining cautious in situations where we do not know! Constrainism relies on careful systems thinking and good design. What it rejects is the reduction of moral values to narrow and impersonal metrics.

Operationalizing a Vague Concept

Critique: Terms like “friction” and “strategic inefficiency” are philosophically rich but technically vague. How would this actually be implemented in code, process, or policy?

Response: Constrainism is already visible in real systems, including human-in-the-loop pipelines, adversarial training in machine learning, techniques that protect individual data privacy, democratic oversight bodies, and regulatory tools like algorithmic impact assessments (formal reviews of how automated systems might affect people and society). What we offer is not an entirely new approach, but a reconsideration of design decisions as ethical choices. In addition, tools like Chilán (discussed earlier) demonstrate that constraint-first design is both technically viable and productive in how it supports meaningful knowledge work.

AI Ethics by Another Name

Critique: Isn’t Constrainism just a repackaging of responsible AI or human-centered design principles?

Response: Constrainism builds on, but goes beyond, existing AI ethics frameworks. While many responsible AI initiatives emphasize explainability, fairness, and safety, they often treat ethics as something to be added after the system has been optimized. Constrainism challenges this logic. It puts constraint at the core of the design process, not as an afterthought, nor as a separate filter, but as the starting point. This is not ethics-as-regulation. Rather, it is ethics as the foundation of the system itself (ethics-as-architecture).

Short-Term Thinking: Why Ignoring the Past Undermines the Future

One of the most dominant characteristics of TESCREAL+ ideologies is how unevenly they treat time. Longtermism, Accelerationism, Singularitarianism, and other forms of futurism all place the distant future at the center of moral consideration. They treat the present as an obstacle (a mathematical challenge, if you will) to overcome and the past as little more than an amusing footnote. But this approach is not just ethically shaky, it is also logically flawed.

Speculation Without Foundation

Many TESCREAL+ ideologies appeal to rationalism, evidence, and formal reasoning. And yet, they frequently rely on speculative futures that cannot be tested, updated, or falsified. Projections about posthuman intelligence, intergalactic civilizations, or moral value across astronomical timeframes are often presented with confidence and utmost mathematical precision. But precision is not the same as certainty or clarity. These speculative claims are intellectually fragile as they contradict their own assumptions by operating without reasonable verification or moral accountability.

The Erasure of the Past

By prioritizing an imagined future, TESCREAL+ advocates often treat the past as irrelevant. Consequently, ethical lessons from history, deep cultural knowledge, and intergenerational memory are dismissed, oversimplified, and eventually bypassed altogether. But this obsession with the current vision of the future, while dismissing history, fails to recognize that every future eventually becomes a past in its own right. Designing for the future while erasing the legacy of past harms and mistakes only serves to deepen injustice. A system that ignores its own historical journey cannot learn, cannot adapt, and cannot be just.

The Insignificance of the Present

In many TESCREAL+ ideologies, the present is treated as morally insignificant. A single suffering person today is seen as less valuable than trillions of hypothetical lives in the future. Cultural diversity, systemic injustice, and ecological collapse become footnotes to the primary objective of safeguarding an imagined future populated by imagined beings. But such reasoning ignores the fact that the present is the only place where tangible action is possible. Disregarding the now in favor of speculative futures leads to a collapse of morality, where human pain becomes an acceptable cost of optimization on the road to a, seemingly, better future.

Towards a More Coherent View of Time

Constrainism takes a different view. It views time as layered, cyclical, and morally complex. The past is a source of wisdom and reflection, the present is a site of responsibility and action, and the future is a domain of possibility and progress. But none can be ignored in favor of the others. Systems built on Constrainist principles honor memory, preserve ambiguity, and embrace the complexity of ethical consideration across time. In doing so, they promote neither misplaced nostalgia nor vague techno-utopian fantasies. They are not paralyzed by the past, but guided by it.

The inability of TESCREAL+ ideologies to honor both the past and the present makes them logically incoherent. Constrainism, by contrast, insists that ethical design must work across time, not outside it.

Sidebar: A Technologist’s Ethical Dilemma: Am I What I Am Critiquing?

Constrainism reflects something rather personal for me. As a technologist, I was trained to value elegance, precision, and optimization. I built systems meant to reduce complexity, uncover structure, and eliminate waste. My early career was shaped by rationalist logic, measurable progress, clean models, and formal systems. I admired thinkers who promised a better world through intelligence, and I believed that technology, if designed rigorously enough, could save us from ourselves.

But, somehow along the way, something shifted for me.

The more I worked on AI, the more I realized that the problems were not just technical. They were philosophical. The tools I used were powerful, but the assumptions behind them were questionable. I started noticing how easily ideas that at first felt like rational thinking turned into clever justification (in what you may call a shift from rationalism to rationalization). I saw how useful tools and approaches, such as optimization techniques, were used to mask harmful oversimplifications. The systems we were designing reflected not just our intelligence, but also our blind spots, and perhaps even our worst emotional instincts and biases.

So, I started to entertain the following question: Am I what I am critiquing?

In many ways, yes. I have certainly benefited from the very ideas I now interrogate. I still believe in the power of science and technology. I still write code and admire clarity. But I no longer believe that clarity is always good, or that efficiency is always right, or that abstraction is always harmless. I no longer believe that the best systems are the ones that disappear into uniformity. I believe in systems that interrupt, invite pause, slow us down, and, above all, leave space for judgment, contradiction, and reflection on context.

This paper, and the concept of Constrainism, is my attempt to reconcile those internal tensions. To build a future not free from friction, but informed by it. To show that constraint is not an obstacle to intelligence, but a condition, a necessary condition, for wisdom. And to remind myself, and anyone listening (reading), that the most dangerous systems are not the ones we mistrust. They are the ones we think are inevitable and beyond criticism.

Sidebar: What AI Could Be: A Constrainist Vision of Progress

Despite the sharp critiques laid out in this paper, Constrainism is not a rejection of AI. It is a refusal to build it without reflection. It is a refusal to frame AI as destiny. And it is an invitation to design it deliberately.

In the TESCREAL+ worldview, AI is often imagined as a god, a prophecy, or a governor. Something that is destined to arrive, and is assumed to be both polished and optimized. In contrast, Constrainism sees AI as a mirror: it reflects our fears, our values, and our contradictions. Its role is not to dictate the future, but to open a continuous dialogue and deepen our understanding of ourselves.

A Constrainist AI is interruptible. It is not a black box or an oracle, but a tool that slows itself down when the stakes are high. It welcomes ambiguity rather than rushing decisions based on forced clarity. It augments human thought, but never replaces it. It is accountable, and not simply autonomous. It learns from the past. It accounts for the present. And it is designed for moral friction, not moral shortcuts.

What AI could be is not the endpoint of intelligence. It could be the beginning of humility. If we build with constraint, we do not lose power. Instead, we gain clarity about when not to use it.

Conclusion: Embracing Constraint, Reclaiming Ethics

The future of AI (and AGI) is not being written purely in code. Instead, it is being guided by a diverse, and mostly invisible, set of ideologies, TESCREAL and its expanded offshoots, which promote the pursuit of speed, scale, and oversimplification as the only morally correct path for the future of humanity. These ideologies do not merely guide technical advances. They tend to redefine the very essence of ethical reasoning.

In this paper, we’ve argued that what ties these ideologies together is not just a shared vision of the future. It is a much deeper belief in what we call Optimization Determinism. In response, we introduced Constrainism, as a counter-approach that sees Strategic Inefficiency not as a flaw, but as a core design and ethical value. Constrainism argues that constraint, often dismissed as technical debt or unwelcome friction, is essential for reflection, resilience, and fairness. And more than that, constraint is a way to inject much-needed human intelligence into AI. While automation removes nuance and promotes speed as an ultimate objective, well-placed constraints encourage human reasoning, ethical considerations, and contextual thinking.

In doing so, constraints do not become limits to intelligence, but act as extensions of it, highlighting that learning is not just about computation, but also conflict, balance, and context.

The implications are broad: For system designers, it means treating constraint not as a limitation, but as a source of guidance and control. For researchers, it means developing new methodologies which preserve and utilize ambiguity rather than simply eliminate it. For educators, it means resisting the temptation to reduce AI ethics to predefined checklists, secondary filters, or compliance dashboards and abstract KPIs.

It is an uncomfortable fact that we are living in a world where ethics is sold as a service, or an external collection of requirements, designed to simply enhance an already built system. Constrainism rejects this “ethics-as-a-service” worldview and instead, promotes ethics as an essential component of responsible system design and architecture.

To embrace constraint is, therefore, to bring ethics back to real life. It means accepting that uncertainties, interruptions, and different points of view are not problems to fix, but realities we must consciously design for.

The choice before us is not between progress and stagnation. It is between over-optimization without understanding or accountability, and bounded systems that preserve space for reflection, discovery, objection and, ultimately, responsibility. Constrainism does not reject the future. It simply insists that we arrive there thoughtfully, and with intent.

Appendix A: Teaching AI Within Constraint: Towards a Constrainist Approach to Education

As the TESCREAL+ ideologies continue to shape not only the design of AI systems but also the training of those who build, use, and promote them, it becomes increasingly important to examine how AI education treats their underlying logics. Technical curricula often internalize optimization determinism implicitly. For example, performance metrics are emphasized, deployment speed is glorified, and ambiguity is framed as a problem to solve, not a condition to explore and factor in.

If system design is a consequence of ideology, then teaching must be considered as instrumental to design. In other words, the approach we take in classrooms, and the mindset we promote, will influence the systems we create. That’s why Constrainism must be treated as more than a design philosophy. It should also guide how we teach.

From Optimization Mindsets to Ethical Architectures

A Constrainist worldview challenges educators to change the framing of AI education, from efficiency to restraint, from blind solutionism (the belief that all problems, including complex social issues, can be solved, often with the aid of technology) to deeper exploration, and from automation by default to guided intervention.

It invites instructors to explore how ethical friction, ambiguity, and redundancy can be integrated more broadly into learning, rather than introduced as disparate or standalone “ethics modules” or “compliance lectures”.

Curriculum Design Principles

A Constrainist curriculum might include:

  • Ethical Friction Labs: Exercises that deliberately embed ambiguity, conflicting objectives, or incomplete information to promote careful judgment over blind pursuit of automation at all costs,
  • Failure as Feature: Assignments in which identifying and analyzing modes of failure are valued as highly as, or above, arriving at a pre-defined “correct” result,
  • Diverse Logics: Exposure to non-Western ideologies, indigenous knowledge systems, and alternative ways of managing data as legitimate inputs to AI design,
  • Strategic Interruptibility: Projects that enforce human-in-the-loop mechanisms and test a system’s ability to pause, reflect, or defer as part of its “normal” behavior, and
  • Constraint Injection: Creative design tasks where constraints are intentionally, and randomly, introduced (e.g., no floating-point math, no cloud access, limited time horizon, etc.) to encourage creative thinking and fresh perspectives.

The Manifesto as a Teaching Tool

The Constrainist Manifesto (see Appendix B below) can serve as a useful and flexible guide offering:

  • A conversation starter in design ethics lectures, seminars, and the like,
  • A framework for debate in multidisciplinary workshops on design, and
  • A challenge to stimulate student reflection, inviting critiques, idea generation, or alternative manifestos.

By inviting students to take the manifesto as a starting point for discussion, instructors can foster a more active, reflective, and consequential engagement with AI design.

A Call to Educators

Teaching Constrainism is not a rejection of technology or technical excellence. It is an invitation to deepen our understanding of technology. It insists that ethical reasoning is not separate from code, architecture, or systems design, but should be embedded in them. It challenges us to produce technologists who are not only builders, but interpreters, stewards, and critics.

In a time when “ethics-as-a-service” threatens to turn morality into a one-dimensional compliance checklist, Constrainism reminds us that education must resist that logic. And it offers a simple principle that reads: To teach constraint is to teach sound judgment.

Appendix B: The Constrainist Manifesto

Constraint is not a failure of design. It is its foundation.

Constraint is not a lack of imagination. It is an invitation to imagine differently.

We live in a time when optimization is treated as morality, any form of slowness is translated as inefficiency, redundancy is dismissed as waste, and ambiguity is considered failure. But these assumptions are not purely technical or neutral. They are, in fact, ideological. They reflect a worldview in which speed, scale, and simplicity are valued above justice, resilience, and meaning.

Constrainism rejects this worldview.

We believe that:

  • Slowness leads to wisdom: Systems that move too fast cannot be interrupted, questioned, or redirected,
  • Redundancy offers protection: A single elegant solution is more fragile than a network of imperfect backups,
  • Ambiguity promotes insight: Uncertainty and contradictions are invitations to reflect, not errors to suppress,
  • Constraint brings clarity: The most reliable systems are those that define limits, expose clear boundaries, and enable deeper understanding, and
  • Human judgment is irreducible: No optimization function can perfectly capture moral reasoning, cultural complexity, or historical depth.

Constrainism claims that ethics must be built into system architecture, not treated as an afterthought or a separate interface. It is a commitment to design systems that protect human judgment, not eliminate it.

We do not reject technology. We reject blind solutionism.

We do not reject progress. We reject mindless acceleration without thoughtful sense of direction.

We do not reject intelligence. We reject intelligence without care or accountability.

And above all:

Good Artificial Intelligence Must Be a Mirror to Human Intelligence.

***Notes

a. To the best of our knowledge, Computational Immortalism is a term introduced for the first time in this paper. When we state that “…Computational Immortalism have gained popularity and gradually become mainstream…” we mean that many of the ideas that make up this concept are becoming more visible in the current technology-related discussions.

b. Contact the writer at: info@parvizforoughian.com for more details about Chilán programming language.

***References

  1. Torres, É. P., & Gebru, T. (2023). The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence. First Monday: https://firstmonday.org/ojs/index.php/fm/article/view/13636
  2. Holling, C. S. (2001). Understanding the Complexity of Economic, Ecological, and Social Systems. Springer Nature Link: https://link.springer.com/article/10.1007/s10021-001-0101-5
  3. Vallor, S. (2016). Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press: https://www.research.ed.ac.uk/en/publications/technology-and-the-virtues-a-philosophical-guide-to-a-future-wort
  4. Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity: https://www.ruhabenjamin.com/race-after-technology
  5. For more background on these ideologies, see Torres & Gebru (2023, as per reference 1 above), and related entries on Wikipedia.
  6. Marx, K. (1848). On the Question of Free Trade. Democratic Association of Brussels, 9 January: https://cooperative-individualism.org/marx-karl_on-the-question-of-free-trade-1848.htm
  7. Noys, B. (2014). Malign Velocities: Accelerationism and Capitalism. Zero Books: https://www.amazon.sg/dp/1782793003
  8. Brooks, D. (2013). The Philosophy of Data. The New York Times: https://www.nytimes.com/2013/02/05/opinion/brooks-the-philosophy-of-data.html
  9. Harari, Y. N. (2016). Homo Deus: A Brief History of Tomorrow: https://tinyurl.com/mppnfjsp
  10. Kurzweil, R. (2000). The Age of Spiritual Machines: https://tinyurl.com/4h9u5f6c


The Missing Structure in a Fragmented World

Why Graphs Matter Now

We’ve built AI systems that can predict, write, and generate.
We’ve built data pipelines that move petabytes across clouds.

But ask most systems:
“How are these things connected?”
And you’ll often get a silent response or a sketchy one.

That’s where graph thinking comes in, not just as a data model, but as a way of seeing the world.

Graphs let us:

  • Represent relationships, not just records,
  • Understand context, not just content,
  • Build reasoning systems where logic is traceable, explainable, and human-aligned.

And yet, most organizations still think in rows and columns, even as they claim to be AI-driven.
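As a toy illustration of the difference, the Python sketch below (with invented data) contrasts a “rows and columns” view of some facts with a graph view of the same facts, in which relationships are explicit, typed, and traversable.

    # Rows capture records, but the relationships between them stay implicit.
    rows = [
        {"customer": "Asha", "product": "insulin pump"},
        {"customer": "Ben",  "product": "insulin pump"},
    ]

    # The same facts as a graph: nodes connected by typed, traceable edges.
    edges = [
        ("Asha", "PURCHASED", "insulin pump"),
        ("Ben",  "PURCHASED", "insulin pump"),
        ("insulin pump", "TREATS", "type 1 diabetes"),
    ]

    def neighbours(node):
        # "How are these things connected?" becomes a simple traversal.
        return [(relation, target) for source, relation, target in edges if source == node]

    print(neighbours("Asha"))          # -> [('PURCHASED', 'insulin pump')]
    print(neighbours("insulin pump"))  # -> [('TREATS', 'type 1 diabetes')]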

What We Will Explore

In this category, we’ll cover:

  • The difference between graph models and traditional databases,
  • Why graph-native reasoning is essential for explainable and ethical AI,
  • How graph thinking can unify data strategy, AI strategy, and even Quantum models (oh, I love this one and this topic alone is worth you clicking the subscribe button and following me on LinkedIn for latest developments),
  • Use cases in healthcare (to start with), and more.

Graphs aren’t a niche tool. They’re an opportunity to rebuild meaning, one connection at a time.

I am currently designing a new graph-native, explainable, Quantum-aware programming language called Chilán. Subscribe to this blog and follow me on LinkedIn to get early exposure to it.


Navigating Probability-Infested Times

The Illusion of Intelligence

We are living in probability-infested times.

Text models predict the next word. Image models fill in noise. Code models autocomplete syntax. And all of them do so without true understanding.

This doesn’t just lead to hallucinated facts; it leads to hallucinated decisions, with very real consequences.

What We Will Explore

Generative AI is the new kid in town, and every day we hear more people claiming years and years of experience in it (as well as tons of experience in Agentic AI and god knows what else). Anyway, it is not possible to ignore the beast, so that alone means it deserves its own category. Having said that, let me be clear that, in the right hands (or as those in the hood call it, with the right prompt), it can be a wonderful tool that can massively boost our productivity.

I am actually a big fan, as you will see throughout the coming posts, but I refuse to ignore its deficiencies or sell perceived capabilities that are simply not there yet (and some of those capabilities are unlikely to ever be there, as they are tightly coupled with the initial assumptions and the overall approach). Think of it this way: if real AI (or AGI as some will call it) is compared to Relativity, then the current Generative version of AI is equivalent to Newtonian Mechanics (I love Newton, but there are problems that his wonderful laws of Mechanics are just not equipped to answer, hence the birth of Relativity and the emergence of a certain Albert Einstein).

Now, enough of this. For this introductory post, let’s say that under the Generative AI category we will cover many topics, including the following:

  • Why relying exclusively on probabilistic logic is both risky and regressive,
  • What AI Agents vs. Agentic AI really means,
  • Silly but illuminating examples of probabilistic failure in daily life that can help in our overall understanding of Generative AI, its strengths and its limitations,
  • How we might use Quantum logic, structured data, and human values to rebuild trust.

This space is crowded with hype. Our aim here is to teach clearly, challenge assumptions, and reclaim agency.

The above tagline was originally written as “Our aim here is to cut through the hype and some monumental BS so that we can better understand our assumptions and see more clearly.” The end result, however, is a revision by ChatGPT :-). I think we can all agree that ChatGPT’s output is more succinct and certainly more polite, so hats off to ChatGPT here. Like I said, if you know how to use these toys, they find a way to please you.



A Chance to Reclaim Logic in an AI World: Rationality at Scale!

The Paradox

Ironically, the most probabilistic field in science, Quantum Physics, might help us restore structure and logic to the AI ecosystem!

Where classical AI is drifting toward black-box opacity, Quantum gives us an opportunity to:

  • Represent complexity with nuance,
  • Model relationships more organically,
  • Reintroduce logic-based computation into systems currently driven by surface-level patterns.

Let’s not let Quantum go the way of Generative AI: all hype, no ethics.
I am optimistic about this and believe that we still have time to learn from the AI hype, and in this series, we’ll explore how to use Quantum wisely.

What We Will Explore

In this category we will explore a number of topics including:

  • What Quantum can and can’t do (let’s demystify this thing once and for all and not let Quantum Computing become another Blockchain that nobody understands! A toy sketch follows this list),
  • How Quantum Computing could complement Agentic AI and ethical reasoning,
  • Why probabilistic computation might paradoxically help counterbalance the dark side of probabilistic AI.
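
As promised in the first bullet, here is a toy, purely classical simulation of what measuring a single qubit looks like. No quantum hardware or library is involved, and the amplitudes are chosen arbitrarily; it only illustrates the flavour of probabilistic computation we will be discussing.

```python
import random
from collections import Counter

# Toy, purely classical simulation of measuring one qubit prepared (on paper)
# in the state sqrt(0.7)|0> + sqrt(0.3)|1>. The amplitudes are arbitrary and
# no quantum hardware or library is involved.
P_ZERO = 0.7  # probability of observing |0>, i.e. |amplitude|^2

def measure():
    return 0 if random.random() < P_ZERO else 1

shots = Counter(measure() for _ in range(10_000))
print(shots)  # roughly 7,000 zeros and 3,000 ones

# Each individual outcome is irreducibly random, yet the structure behind it
# (the state and its amplitudes) is exact and fully inspectable. That pairing
# of probability with rigorous, checkable logic is what I want to borrow.
```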

Related posts:
AI Strategy | AI Ethics

Not Just Oversight: It’s About Upholding Human Dignity

Beyond Bias and Audits

AI Ethics is not just about minimizing harm.
It’s about defending what matters: dignity, autonomy, and accountability, in the face of powerful and ever-expanding automation.

If our AI system optimizes engagement at the cost of truth, or automates policy decisions without oversight, that is not an accident happening to us; it is a failure of design showing its ugly face.

What We Will Explore

Under the category of AI Ethics, we will explore all things human, as we should, given that the I in AI is ultimately supposed to refer back to Human Intelligence. A sample of what we will cover includes:

  • Why AI systems must have clear, value-aligned objectives,
  • What human-in-the-loop really means in practice (a minimal sketch follows this list),
  • Why ethical oversight is not just about compliance, but responsibility.
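
As a first taste of the human-in-the-loop discussion, here is a minimal sketch. Every name in it (CONFIDENCE_THRESHOLD, review_queue, and so on) is hypothetical, and real systems need far more than a threshold, but the shape of the idea is this: the system only acts on its own above a confidence bar, everything else goes to a person, and every decision records who (or what) made it.

```python
# Minimal, hypothetical sketch of one concrete meaning of "human-in-the-loop".
CONFIDENCE_THRESHOLD = 0.90
review_queue = []

def decide(case_id, model_label, model_confidence):
    if model_confidence >= CONFIDENCE_THRESHOLD:
        # High confidence: the model decides, but the record says so explicitly.
        return {"case": case_id, "label": model_label,
                "decided_by": "model", "confidence": model_confidence}
    # Low confidence: a human makes the final call.
    review_queue.append(case_id)
    return {"case": case_id, "label": None,
            "decided_by": "pending_human_review", "confidence": model_confidence}

print(decide("claim-001", "approve", 0.97))
print(decide("claim-002", "reject", 0.55))
print(review_queue)  # ['claim-002']
```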

A white paper on AI Ethics is coming soon.
Subscribe to this blog and follow me on LinkedIn to get early access.


Related posts:
AI Strategy | Data Strategy | The Hidden Thread

Why Human Values Must Anchor Every System We Build

Strategy ≠ Tools

AI strategy isn’t about which model to use. It’s about whether we are building something that reflects our values (more accurately, our human values) or just optimizing for what’s easy to measure.

Too many systems are being built with unclear assumptions, opaque outputs, and no explainability.
This isn’t just a technical flaw. It’s a failure of leadership, and it reflects a lack of a holistic view of life in general.

What We Will Explore

The articles published in this category (AI Strategy) will explore a number of interesting topics, including:

  • The difference between having a model and having a strategy,
  • Why auditability and explainability must be systemic, not superficial (a minimal sketch follows this list),
  • The risks of unchecked probabilistic models, and how to build AI systems that reflect human dignity (I bet most of you did not even think of dignity in the context of AI, right? Buckle up! I have a lot to say about that in AI Strategy, AI Ethics and Rant categories!).
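
Here is a minimal sketch of what I mean by systemic auditability. The field names and the model version string are hypothetical, but the principle is that the audit record is produced at the moment of the decision, not reconstructed after the fact.

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch: every automated decision carries its inputs, the model
# version, and a human-readable rationale, so it can be explained later.
def record_decision(inputs, output, model_version, rationale):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,        # exactly what the model saw
        "output": output,        # exactly what it decided
        "rationale": rationale,  # the reason, however rough, captured up front
    }

audit_log = []
audit_log.append(record_decision(
    inputs={"age": 42, "region": "EU", "score": 0.81},
    output="approved",
    model_version="risk-model-2.3.1",
    rationale="score above policy threshold of 0.75",
))
print(json.dumps(audit_log[-1], indent=2))
```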

You’ll also see how the content here connects to Agentic Systems, Quantum Logic, and the ability to reason at scale.


Related posts:
Data Strategy as Foundation | The Hidden Thread

Why Data Strategy Is the Foundation of AI, Quantum, and Everything Else

The Problem Nobody Talks About

Many organizations today are eager to adopt AI, deploy Generative models, or explore Quantum Computing. And yet, most still haven’t solved the basics.

Data strategy remains the weakest link.
Disparate systems, undefined ownership, unclear lineage, and more, are not just annoying inefficiencies (and boy, are they annoying and inefficient!). They’re risks, sometimes big risks. And they prevent everything else from working responsibly.

You cannot build trustworthy, explainable, or ethical AI on a foundation of scattered, misaligned data.

If this resonates, my book Data Unplugged unpacks this in detail, no fluff, no buzzwords (I did not want to make this or any of my posts a sales pitch, but objectively speaking, this is a good book :-)).

What We Will Explore

I wrote these lines quickly as a starting point so you get an idea of what is coming in the next few weeks and beyond. I have a lot to go through and share with you, and I have to balance it across a number of categories, so if there is a specific topic you cannot wait for, send me a comment. And speaking of topics my readers want next: I promised my dear friend Willem to write something juicy on Data Strategy, so once I have put a few lines under each category in the blog, I will jump back on Data Strategy.

Anyway, in the Data Strategy category, I will discuss the following, and then some:

  • Why data strategy is not just technical architecture, but organizational architecture,
  • The link between governance, ethics, and business value,
  • Why most enterprises are still operating on data sandcastles.

Related posts:
25 Not-To-Dos of Data | The Hidden Thread

What CEOs, Lawyers, Politicians, and Journalists Have in Common

Good day, my good readers!

I woke up with a headache today (it happens frequently, especially when I go to bed reading about world events) and immediately found myself wrestling with the following “dirty” thoughts:

When we think of CEOs, lawyers, politicians, and journalists (or the Painful Four, as some may call them), we usually imagine vastly different worlds: boardrooms, courtrooms, parliaments, and press rooms. But beneath the surface, these professions often operate with surprisingly similar habits:

1- They tend to prefer binary answers in a world full of ambiguity (as a trained mathematician, physicist, and computer scientist, working for more than two decades on the latest technologies, I have a healthy respect for the binary, so just hear me out, OK?).

2- They look for quick fixes rather than long-term systemic change.

3- They often rely on technical tricks or simplified narratives to gain leverage.

4- And sometimes, they rely on drama to get attention.

At their best, these habits can be used for real public good. At their worst, they amplify division, mask complexity, and concentrate value among a small few. The rest of society is left with oversimplified stories and underwhelming outcomes. This becomes especially problematic (even dangerous) when these patterns affect how we use and regulate powerful new technologies such as Generative AI (that is for another post to delve into under the Generative AI category. I started this post as a Rant, so forgive me for continuing the rant; I have a headache, remember?).

Now, let’s take a closer look.


CEOs: Impact Versus Optics

The Best Case
Yvon Chouinard, founder of Patagonia, transferred ownership of the company to a trust and nonprofit. His goal: ensure that all future profits are used to combat climate change and protect the planet. It turns out that this was not a branding stunt or a tax dodge. It was a rejection of short-term profit-taking in favor of long-term environmental responsibility.

The Worst Case
Elizabeth Holmes promised revolutionary medical diagnostics. Instead, she sold a black-box illusion that fooled even experienced investors. The result was broken trust and burned billions.

The Pattern
We reward CEOs who pitch bold solutions, especially if they sound futuristic or, worse, unrealistic. We are seeing this “Impact vs Optics” dilemma playing itself out with the surge of Generative AI. Generative AI is based on probability and pattern prediction, not on a deterministic or logical approach. When we force complexity into simple business narratives, we end up creating tools and results that are misunderstood, misapplied, or mis-sold.


Lawyers: Justice Versus Justification

The Best Case
Bryan Stevenson, founder of the Equal Justice Initiative, uses the law to challenge systemic injustice. His work is slow, complex, and grounded in ethics.

The Worst Case
Legal teams working for Big Tobacco and Big Oil have spent decades defending harmful practices through legal loopholes. They win cases by focusing on technicalities, not on harm reduction. I often find myself admiring lawyers for their innovative work with the law, and with language, when exploiting a nasty loophole, while at the same time feeling bitterly disappointed watching them justify the harm their brilliant legal maneuvering has done to its victims.

The Pattern
Legal culture often celebrates the win, not the impact. In the world of AI, where systems are embedded into compliance processes and automated decision-making, legal professionals may hide behind technical accuracy. But law, like code, reflects values. If ethics are not baked in, then technical correctness becomes a shield for irresponsible systems.


Politicians: Substance Versus Soundbites

The Best Case
Angela Merkel often took political risks to communicate complex decisions with clarity. Her leadership was not perfect, but it was thoughtful and consistent. It is hard to find a clear instance of her lying about anything for the sake of electability, and that is saying a lot in this day and age.

The Worst Case
UK Brexiteers used oversimplified slogans like “Get Brexit Done” (or some other version of it) to fuel an outcome that was deeply misunderstood, impractical, and wasteful. The consequences are still unfolding.

The Pattern
Political messaging favors clarity, not complexity. That becomes a problem when the issues are technical and long-term, such as climate change, managing misinformation, or designing resilient infrastructure. Simple slogans rarely make good policy, but here we are, looking to vote in the next (or the same) politicians based on how catchy their latest slogans are.


Journalists: Depth Versus Drama

The Best Case
Maria Ressa exposed authoritarian tactics and online disinformation with courage and depth. Her journalism is committed to complexity and accountability.

The Worst Case
On the opposite end, we have the example of the News of the World phone hacking scandal, where journalists illegally accessed voicemails of public figures and even a murdered teenager. The intrusion gave false hope to the victim’s family and interfered with the police investigation. What was framed as sensational reporting ended up exposing a toxic newsroom culture driven by profit, not public service, and ultimately led to the paper’s shutdown and a national inquiry into media ethics.

The Pattern
In a media economy based on attention, nuance is expensive. It is faster to simplify and it pays to be dramatic. As we all find new ways to get angry at each other, the temptation to blur the line between facts and fiction will grow. If journalists cannot hold the line, the public will not know what to trust.


So Who Is Really Responsible?

That is the hard part.

Is it the professionals? Or is it the rest of us, who reward these behaviors? And don’t forget that before professionals become professionals, they are the rest of us!

We are the ones who share clickbait. We demand certainty. We celebrate confidence even when it lacks substance. We are often too impatient for process, and too distracted for detail.

In a world of Deepfakes, synthetic media, and AI hallucinations, our demand for clarity over complexity is no longer just a personal habit. It is a societal vulnerability, a dangerous weakness that is bound to be exploited.

If we want better leaders, better laws, better journalism, and better outcomes, we need to become better at embracing complexity. That means making space for slower thinking. That means tolerating uncertainty. That means expecting more, not less, from the people and systems that shape our world.

The challenge is not just technical. It is cultural.


Where have you seen these patterns show up in your field? What would it take to shift the system rather than just the symptoms?

Subscribe to my blog or follow me on LinkedIn to get notified when I start ranting again!

25 Not-To-Dos of Data (Hello Blog World 🎉)

Starting any journey is tough, and more so when it involves data! Since I decided to join the blog community, I have been thinking actively about what to write, and it suddenly dawned on me that highlighting a few Not-To-Dos might be an easy way to start this blog experiment. After all, it is always easier to tell people what not to do, even if what not to do is what you actually do yourself (and know you should not!). Hope you are feeling me. Anyway, here comes the first blog post on this site, so I hope you agree with the contents. I am happy to hear from you on how to improve this and future articles. Oh, and remember, 25 is an arbitrary number: somewhat round, not too small and not too big, but we could go to 50 or more, as what is definitely not in short supply is bad data habits :-).

Finally, this is meant to be a way for me to get back to heavy writing, find like-minded colleagues, and get feedback to learn from. Each item only scratches the surface. I am hoping to pick a few of these short articles in the next few weeks and months to explore further. On the other hand, I have been thinking about AI Ethics a lot recently, so maybe that will jump ahead. We will see. It is too early to say how, but now you know what the “plan” is!

For now, here is a list of 25 “Not-To-Dos” of data (based on first-hand experience over two decades), each of which, one way or another, affects effective Data Management:

  1. Neglecting Data Governance
    1. What is it: Failing to implement a framework that outlines how data is managed, who owns it, and how it’s used.
    2. Potential Impact: Poor data quality, lack of accountability, and compliance issues.
    3. Best Practice Fix: Establish a formal data governance framework with clear roles, policies, and ownership for all data assets.
  2. Ignoring Data Quality Management
    1. What is it: Not monitoring or enforcing the accuracy, completeness, and reliability of data.
    2. Potential Impact: Decisions based on inaccurate data lead to poor outcomes and financial losses.
    3. Example: JP Morgan’s London Whale incident where poor data quality in risk models led to $6 billion in losses (Google it!).
    4. Best Practice Fix: Implement regular data quality checks and establish clear KPIs around data integrity.
  3. Allowing/Encouraging/Tolerating Siloed Data
    1. What is it: Storing data in isolated systems where it cannot be easily accessed or shared.
    2. Potential Impact: Difficulty in gaining cross-departmental insights, leading to fragmented decision-making.
    3. Best Practice Fix: Break down data silos by using centralized platforms or data lakes that integrate all data sources. Remember, a centralized solution is not necessarily physical (I hope to touch on this in more detail sooner rather than later).
  4. Over-Complicating Data Architecture
    1. What is it: Building overly complex data pipelines and systems that slow down the processing and decision-making.
    2. Potential Impact: Increased operational overhead and slower time to insights.
    3. Best Practice Fix: Design data architecture to be scalable and modular, focusing on simplicity and agility.
  5. Relying on Manual Data Processes
    1. What is it: Using manual data handling methods such as data entry and validation.
    2. Potential Impact: Human errors, inefficiencies, and high labor costs.
    3. Example: Just Google for examples of financial services firms that lost millions due to manual data entry errors in transactions.
    4. Best Practice Fix: Automate data workflows and validations to reduce human intervention and error.
  6. Lack of Standardization Across Data Sources
    1. What is it: Failing to standardize data formats, structures, and naming conventions across systems.
    2. Potential Impact: Inconsistent reporting, confusion, and difficulty in combining datasets. And maybe I should really say: too many reports for the same topics! I suspect many of you know exactly what I mean here.
    3. Best Practice Fix: Establish data standards and enforce them across all data entry points and sources.
  7. Underinvesting in Data Security
    1. What is it: Neglecting to implement proper data security measures such as encryption, access control, and monitoring.
    2. Potential Impact: Data breaches, compliance violations, and loss of customer trust.
    3. Example: Equifax’s 2017 breach exposed 147 million records due to poor security practices.
    4. Best Practice Fix: Implement a multi-layered security approach with encryption, access controls, and real-time monitoring.
  8. Not Defining Clear Data Ownership
    1. What is it: Failing to assign responsibility for specific data assets to individuals or departments.
    2. Potential Impact: Lack of accountability, leading to mismanagement or neglect of critical data.
    3. Best Practice Fix: Assign data owners who are responsible for the quality, usage, and lifecycle of their data.
  9. Skipping Proper Data Documentation
    1. What is it: Failing to maintain documentation of data sources, transformations, and governance. This is one place where I go on about the dangers of “Agile Methodology”. Not that it is bad on its own, but documentation is typically the first thing that suffers when you seek agility in an organization!
    2. Potential Impact: Difficulties in understanding and trusting the data, leading to errors in decision-making.
    3. Best Practice Fix: Maintain comprehensive metadata and data lineage documentation to track the flow of data.
  10. Overlooking Data Privacy Regulations
    1. What is it: Ignoring or inadequately adhering to data privacy laws such as GDPR or CCPA.
    2. Potential Impact: Heavy fines, reputational damage, and loss of customer trust.
    3. Example: Google was fined $57 million for GDPR violations due to improper consent practices. An interesting one to Google about, of course:-).
    4. Best Practice Fix: Regularly audit data processes for compliance and update practices according to evolving regulations.
  11. Using Outdated or Unsupported Data Tools
    1. What is it: Relying on legacy systems or tools that are no longer maintained or secure.
    2. Potential Impact: System failures, security vulnerabilities, and reduced performance.
    3. Best Practice Fix: Regularly evaluate and update data infrastructure to ensure tools are modern, secure, and supported.
  12. Failing to Prioritize Data Integration
    1. What is it: Not connecting data from different systems, leading to fragmentation and incomplete insights.
    2. Potential Impact: Inaccurate reporting, poor customer service, and disjointed operations.
    3. Best Practice Fix: Use processes (ETL, etc.), middleware, or other relevant methodologies/technologies to integrate data from disparate sources into a single repository.
  13. Assuming All Data is Valuable
    1. What is it: Collecting and storing every piece of data without assessing its relevance or usefulness.
    2. Potential Impact: Increased storage costs, complexity in data management, and lower performance.
    3. Best Practice Fix: Focus on collecting high-quality, relevant data and regularly purge unnecessary or outdated data.
  14. Not Providing Data Literacy Training
    1. What is it: Failing to train employees on how to interpret and use data effectively.
    2. Potential Impact: Misinterpretation of data, poor decision-making, and underutilization of data tools.
    3. Best Practice Fix: Implement regular data literacy training programs tailored to different levels of expertise within the organization.
  15. Lack of Scalability in Data Infrastructure
    1. What is it: Building data systems that cannot grow with increasing data volumes or business needs.
    2. Potential Impact: Performance bottlenecks, downtime, and costly system overhauls.
    3. Best Practice Fix: Design data systems to scale dynamically by using cloud-native architectures and horizontal scaling techniques.
  16. Not Establishing Clear Data Use Policies
    1. What is it: Lacking formal policies on how data should be accessed, used, and shared within the organization.
    2. Potential Impact: Data misuse, security breaches, or legal violations.
    3. Best Practice Fix: Create comprehensive data use policies that define access levels, usage rules, and protocols for handling sensitive data.
  17. Rushing to Implement AI Without Clean Data
    1. What is it: Deploying AI models without ensuring the underlying data is accurate, consistent, and complete.
    2. Potential Impact: Poor AI model performance and inaccurate predictions.
    3. Example: IBM Watson for Oncology made incorrect treatment recommendations due to poor training data.
    4. Best Practice Fix: Focus on data cleansing and quality assurance before feeding data into AI models.
  18. Failing to Align Data Strategy with Business Objectives
    1. What is it: Implementing data initiatives without considering their alignment with the organization’s overall goals.
    2. Potential Impact: Underutilized systems, wasted resources, and missed business opportunities.
    3. Best Practice Fix: Develop a data strategy that directly supports key business objectives and regularly review its effectiveness.
  19. Not Archiving Old or Unused Data
    1. What is it: Retaining data indefinitely without determining its relevance or necessity.
    2. Potential Impact: Increased storage costs and legal risks from holding onto unnecessary or sensitive data.
    3. Best Practice Fix: Implement data lifecycle management policies that archive or delete old data based on usage patterns and regulatory requirements.
  20. Assuming Cloud Migration Solves All Data Problems
    1. What is it: Believing that moving data to the cloud will automatically resolve governance, quality, or integration issues.
    2. Potential Impact: Continued data problems, only now in a cloud environment, along with unexpected costs.
    3. Best Practice Fix: Plan cloud migrations carefully, addressing data governance and quality beforehand, and ensure cloud solutions are cost-effective and scalable.
  21. Lack of Real-Time Data Access
    1. What is it: Failing to provide access to real-time data, limiting the ability to respond quickly to changes.
    2. Potential Impact: Decision-making based on outdated information, leading to missed opportunities or poor outcomes.
    3. Best Practice Fix: Implement real-time data streaming or event-driven architectures to provide up-to-date insights.
  22. Relying Too Much on One Vendor
    1. What is it: Becoming overly dependent on a single vendor for critical data infrastructure or tools.
    2. Potential Impact: Vendor lock-in, pricing power shifts, and increased risk during outages or failures.
    3. Best Practice Fix: Diversify vendors and maintain vendor-neutral solutions where possible, with contingency plans in place.
  23. Over-Reliance on Shadow IT for Data Management
    1. What is it: Allowing departments to create and manage their own data solutions outside the oversight of the central IT team.
    2. Potential Impact: Uncontrolled data sprawl, security risks, and lack of governance.
    3. Best Practice Fix: Integrate shadow IT initiatives into the broader data strategy, providing IT governance while allowing flexibility.
  24. Not Having a Disaster Recovery Plan for Data
    1. What is it: Failing to create a backup or disaster recovery strategy for critical data systems.
    2. Potential Impact: Data loss, operational disruptions, and severe financial consequences.
    3. Example: In 2011, a Japanese automotive company lost critical data due to a natural disaster, as they had no offsite backup.
    4. Best Practice Fix: Establish a disaster recovery plan with regular backups (offsite or cloud-based) and conduct recovery drills to ensure business continuity.
  25. Failing to Automate Data Management
    1. What is it: Overlooking the automation of core data management functions such as metadata tracking, access control, lineage tracing, and policy enforcement, even when tools exist to streamline them.
    2. Potential Impact: Operational bottlenecks, inconsistent governance, reduced data trust, and higher risk of regulatory gaps. Manual oversight simply can’t keep pace with modern data volumes or complexity.
    3. Best Practice Fix: Adopt intelligent automation across your data management stack. Automation at the management layer ensures consistency, compliance, and scalability, not just efficiency (a minimal sketch of this idea follows the list).
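
To close with something concrete, here is a minimal sketch of automating one small slice of data management, in the spirit of item 25 (with a nod to item 2). The fields and rules are hypothetical; the idea is simply that declarative quality rules run on every incoming batch and failures are logged rather than quietly ignored.

```python
# Minimal, hypothetical sketch: declarative quality rules applied to every
# record in a batch, with failures reported instead of silently accepted.
rules = {
    "patient_id": lambda v: isinstance(v, int) and v > 0,
    "email": lambda v: isinstance(v, str) and "@" in v,
    "age": lambda v: isinstance(v, int) and 0 <= v <= 120,
}

def validate(record):
    """Return the list of fields that are missing or fail their rule."""
    return [field for field, check in rules.items()
            if field not in record or not check(record[field])]

batch = [
    {"patient_id": 1, "email": "ana@example.com", "age": 42},
    {"patient_id": -5, "email": "not-an-email", "age": 130},
]

for record in batch:
    failures = validate(record)
    if failures:
        print(f"REJECTED {record} -> failed: {failures}")
```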

This list is just the beginning. As I hinted at earlier, I plan to explore many of these issues in greater detail over time, and in doing so, I’ll inevitably cover a range of interconnected topics that shape the future of data and intelligent systems.

Here’s what you can expect from my blog posts going forward:

  • Data Strategy
  • AI Strategy
  • AI Ethics
  • Generative AI
  • Quantum Computing
  • Graph Models and Graph Thinking
  • General Reflections (which this first post falls into)

Whether you’re leading transformation efforts, building intelligent systems, or just trying to make better decisions with data, I hope you’ll find ideas here that challenge, clarify, and contribute.

See you in the next post.