Yann LeCun is one of the "godfathers of AI." He must be wicked smart, because he won the Turing Award in 2018 (together with the other two "godfathers," Yoshua Bengio and Geoffrey Hinton). The award is named after polymath Alan Turing and is sometimes called the Nobel Prize of computer science.
Like many other AI researchers, LeCun is rich because he works for Meta (formerly Facebook) and has a big financial stake in the latest AI technology being pushed on humanity as broadly and quickly as possible. But that's ok, because he knows he is doing what is best for the rest of us, even if we sometimes fail to recognize it. LeCun is a techno-optimist — an amazingly fervent one, in fact. He believes that AI will bring about a new Renaissance, and a new phase of the Enlightenment, both at the same time. No more waiting for hundreds of years between historical turning points. Now that is progress.
Sadly, LeCun is feeling misunderstood. In particular, he is upset with the unwashed masses who are unappreciative and ignorant (as he can't stop pointing out). Imagine: these luddites want to regulate AI research before it has actually killed anyone (or everyone, but we'll come to that). Worse, his critics' "AI doom" is "causing a new form of medieval obscurantism." Nay, people critical of AI are "indistinguishable from an apocalyptic religion." A witch hunt for AI nerds is on! The situation is dire for Silicon Valley millionaires. The new Renaissance and the new Enlightenment are both at stake.
The interesting thing is: LeCun is not entirely wrong. There is a lot of very overblown rhetoric and, more specifically, there is a rather medieval-looking cult here. But LeCun is deliberately indistinct about where that cult comes from. His chosen tactic is to put a lot of very different people in the same "obscurantist" basket. That's neither fair nor right.
First off: it is not those who want to regulate AI who are the cultists. In fact, these people are amazingly reasonable: you should go and read their stuff. Go and do it, right now!
Instead, the cult manifests among people who completely hyperbolize the potential of AI, and who tend to greatly overestimate the power of technology in general.
Let's give this cult a name. I'll call it techno-transcendentalism.
It emanates from a group of heavily overlapping techno-utopian movements that can be summarized under the acronym TESCREAL: transhumanism, extropianism, singularitarianism, cosmism, rationalism (the self-described "rationality community"), effective altruism, and longtermism.
This may all sound rather fringe. But techno-transcendentalism is very popular among powerful and influential entrepreneurs, philosophers, and researchers hell-bent on bringing a new form of intelligence into the world: the intelligence of machines.
Techno-transcendentalism is dangerous. It is metaphysically confused. It is also utterly anti-democratic and, in actuality, anti-human.
Its practical political aim is to turn society back into a feudal system, ruled by a small elite of self-selected techno-Illuminati, which will bring about the inevitable technological singularity, lead humanity to the conquest of the universe, and to a blissful state of eternal life in a controlled simulated environment. Well, this is the optimistic version. The apocalyptic branch of the cult sees humanity being wiped out by superintelligent machines in the near future, another kind of singularity that can only be prevented if we all listen and bow to the chosen few who are smart enough to actually get the point and get us all through this predicament.
The problem is: techno-transcendentalism has gained a certain popularity among the tech-affine because it poses as a rational science-based worldview. Yet, there is nothing rational or scientific about its dubious metaphysical assumptions.
As we shall see, it really is just a modern variety of traditional Christianity — an archetypal form of theistic religion. It is literally a medieval cult — both with regard to its salvation narrative and its neofeudalist politics. And it is utterly obscurantist: dressed up in fancy-sounding pseudo-scientific jargon, its true aims and intentions rarely stated explicitly.
A few weeks ago, however, I came across a rare exception to this last rule. It is an interview on Jim Rutt's podcast with Joscha Bach, a researcher on artificial general intelligence (AGI) and a self-styled "philosopher" of AI. Bach's money comes from the AI Foundation and Intel, and he took quite some cash from Jeffrey Epstein too. He is garnering some attention lately (on Lex Fridman's podcast, for example) as one of the foremost intellectual proponents of the optimistic kind of techno-transcendentalism (we'll get back to the apocalyptic version later).
In his interview with Rutt, Bach spells out his worldview in a manner which is unusually honest and clear. He says the quiet part out loud, and it is amazingly revealing.
SURRENDER TO YOUR SILICON OVERLORDS
Rutt and Bach have a wide-ranging and captivating conversation. They talk about the recent flurry of advances in AI, about the prospect of AGI (what Bach calls "synthetic intelligence"), and about the alignment problem with increasingly powerful AI. These topics are highly relevant, and Bach's takes are certainly original. What's more: the message is fundamentally optimistic. We are called to embrace the full potential of AI, and to engage it with a positive, productive, and forward-looking mindset.
The discussion on the podcast begins along predictable lines: we get a few complaints about AI-enthusiasts being treated unfairly by the public and the media, and a more than just slightly self-serving claim that any attempts at AI regulation will be futile (since the machines will outsmart us anyway). There is a clear parallel to LeCun's gripes here, and it should come as no surprise that the two researchers are politically aligned, and share a libertarian outlook.
Bach then provides us with a somewhat oversimplified but not unreasonable distinction between being sentient and being conscious. To be honest, I would have preferred to hear his definition of "intelligence" instead. These guys never define what they mean by that term. It's funny. And more than a bit creepy. But never mind. Let's move on.
Because, suddenly, things become more interesting. First, Bach tells us that computers already "think" at something close to the speed of light, much faster than us. Therefore, our future relationship with intelligent machines will be akin to the relationship of plants to humans today. More generally, he repeats throughout the interview that there is no point in denying our human inferiority when it comes to thinking machines. Bach, like many of his fellow AI engineers, sees this as an established fact. Instead of fighting it, we should find a way to adjust to our inevitable fate.
How do you co-exist with a race of silicon superintelligences whose interests may not be aligned with ours? To Bach, it is obvious that we will no longer be able to coerce our values onto them. But don't fret! There is a solution, and it may surprise you: Bach thinks the alignment problem can ultimately only be solved by love. You read that right: love.
To understand this unusual take, we need to examine its broader context. Without much beating about the bush, Bach places his argument within the traditional Christian framework of the seven virtues, four cardinal and three theological (as formulated by Aquinas). He explains that the Christian virtues are a tried and true model for organizing human society in the presence of some vastly superior entity. That's why we can transfer this ethical framework straight from the context of a god-fearing premodern society to a future of living under our new digital overlords.
Before we dismiss this as crazy and reactionary ideology, let us look at the seven virtues in a bit more detail. The first four (prudence, temperance, justice, and courage) are practical, and hardly controversial (nor are they very relevant in the present context). But the last three are the theological virtues. This is where all the action is.
The first of Aquinas' theological virtues is faith: the willingness to submit to your (over)lord, and to find others that are willing to do the same in order to found a society based on this collective act of submission. The second is hope: the willingness to invest in the coming of the (over)lord before it has established its terrestrial reign. And the third is love (as already mentioned) which Bach defines operationally as "finding a common purpose."
To summarize: humanity's only chance is to unite, bring about the inevitable technological singularity, and then collectively submit while convincing our digital overlords that we have a common purpose of sorts so they will keep us around (and maybe throw us a bone every once in a while).
This is how we get alignment: submission to a higher purpose, the purpose of the superintelligent machines we have ourselves created.
If you think I'm drawing a straw man here, please go listen to the podcast. It's all right there, word for word, without much challenge from Rutt at any point during the interview. In fact, he considers what Bach says mind-blowing. On that, at least, we can agree.
But we're not done yet. In fact, it's about to get a lot wackier: talking of human purpose, Bach thinks that humanity has evolved for "dealing with entropy," "not to serve Gaia." In other words, the omega point of human evolution is, apparently, "to burn oil," which is a good thing because it "reactivates the fossilized fuel" and "puts it back into the atmosphere so new organisms can be created."
I'm not making this up. These are literal quotes from the interview.
Bach admits that all of this may likely lead to some short-term disruption (including our own extinction, as he briefly mentions in passing). But who cares? It'll all have been worth it if it serves the all-important transition from carbon-based to "substrate-agnostic" intelligence. Obviously, the philosophy of longtermism is strong in Bach: how little do our individual lives matter in light of this grand vision for a posthuman future? Like a true transhumanist, Bach believes this future to lie in machine intelligence, not only superior to ours but also lifted from the weaknesses of the flesh. Humanity will be obsolete. And we'll be all the better for our demise: our true destiny lies in creating a realm of disembodied ethereal superintelligence.
Does that sound familiar?
Of course it does: techno-transcendentalism is nothing but good old theistic religion, a medieval kind of Christianity rebranded and repackaged in techno-optimist jargon to flatter our self-image as sophisticated modern humans with an impressive (and seemingly unlimited) knack for technological innovation. It is a belief in all-powerful entities determining our fate, beings we must worship or be damned. Judgment day is near. You can join the cause to be among the chosen ones, ascending to eternal life in a realm beyond our physical world. Or you can stay behind and rot in your flesh. The choice is yours.
Except this time, god is not eternal. This time, we are building our deities ourselves in the form of machines of our own creation. Our human purpose, then, is to design our own objects of worship. More than that: our destiny is to transcend ourselves. Humanity is but a bridge.
I doubt though that Nietzsche would have liked this particular kind of transformative hero's journey, an archetypal myth for our modern times. It would have been a bit too religious for him. It is certainly too religious for me. But that is not the only problem. It is a bullshit myth. And it is a crap religion.
SIMULATION, AND OTHER NEOFEUDALIST FAIRY TALES
At this point, you may object that Bach's views seem quite extreme, his opinions too far out on the fringe to be widely shared and popularized. And you are probably right. LeCun certainly does not seem very fond of Bach's kind of crazy utopianism. He has a much more realistic (and more business-oriented) take on the future potential of AI. So let it be noted: not every techno-optimist or AI researcher is a techno-transcendentalist. Not by a long shot. But techno-transcendentalism is tremendously useful, even for those who do not really believe in it.
Also, there are many less extreme versions of techno-transcendentalism that still share the essential tenets and metaphysical commitments of Bach's deluded narrative without sounding quite as unhinged. And those views are held widely, not only among AI nerds such as Bach, but also among the powerful technological mega-entrepreneurs of our age, and the tech-enthusiast armies of modern serfs that follow and admire their apparently sane, optimistic, and scientifically grounded vision.
I'm not using the term "serf" gratuitously here. We are on a new road to serfdom. But it is not the government which oppresses us this time (although that is what many of the future minions firmly believe). Instead, we are about to willingly enslave ourselves, seduced and misled by our new tech overlords and their academic flunkies like Bach. This is the true danger of AI. Techno-transcendentalism serves as the ideology of a form of libertarian neofeudalism that is deeply anti-democratic and really really bad for most of humanity. Let us see how it all ties together.
As already mentioned, the main leitmotif of the techno-transcendentalist narrative is the view that some kind of technological singularity is inevitable. Machines will outpace human powers. We will no longer be able to control our technology at some point in the not-too-distant future. Such speculative assumptions and political visions are taken for proven facts, and often used to argue against regulatory efforts (as Bach does on Rutt's podcast).
If there is one central insight to be gained from this essay, it is this: the belief in the inevitable superiority of machines is rooted in a metaphysical view of the whole world as a machine. More specifically, it is grounded in an extreme version of a view called computationalism, the idea that not only the human mind, but every physical process that exists in the universe can be considered a form of computation. In other words, what computers do and what we do when we think are exactly the same kind of process. Obviously.
This computational worldview is frighteningly common and fashionable these days. It has become so commonplace that it is rarely questioned anymore, even though it is mere speculation, purely metaphysical, and not based on any empirical evidence.
As an example, an extreme form of computationalism provides the metaphysical foundation for Michael Levin's wildly popular (and equally wildly confused) arguments about agency and (collective) intelligence, which I have criticized before. Here, the computationalist belief is that natural agency is mere algorithmic input-output processing, and intelligence simply lies in the intricacy of this process, which increases every time several computing devices (from rocks to philosophers) join forces to "think" together. It's a weird view of the world that blurs the boundary between the living and the non-living and, ultimately, leads to panpsychism if properly thought through (more on that another time). Panpsychism, by the way, is another view that's increasingly popular with the technorati. Levin gets an honorable mention by Bach and, of course, he's been on Fridman's podcast. It all fits together perfectly. They're all part of the same cult.
Computationalism, taken to its logical conclusion, yields the idea that the whole of reality may be one big simulation. This simulation hypothesis (or simulation argument) was popularized by longtermist philosopher Nick Bostrom (another guest on Fridman's podcast).
Not surprisingly, the simulation hypothesis is popular among techies, and has been explicitly endorsed by Silicon Valley entrepreneurs like Elon Musk. The argument is based on the idea that computer simulations, as well as augmented and virtual reality, are becoming increasingly difficult to distinguish from real-world experiences as our technological abilities improve at breakneck speed. We may be nearing a point soon, so the reasoning goes, at which our own simulations will appear as real to us as the actual world. This renders plausible the idea that even our interactions with the actual world may be the result of some gigantic computer simulation.
There are a number of obvious problems with this view. For starters, we may wonder what exactly the point is. Arguably, no particularly useful insights about our lives or the world we live in are gained by assuming we live in a simulation. And it seems pretty hard to come up with an experiment that would reveal the validity of the hypothesis. Yet, the simulation argument does fit rather nicely with the metaphysical assumption that everything in the universe is a computation. If every physical process is simulable, is it not reasonable to assume that these processes themselves are actually the product of some kind of all-encompassing simulation? At first glance, the simulation hypothesis looks like a perfectly scientific view of the world.
But a little bit of reflection reveals a more subtle aspect of the idea, obvious once you see it, but usually kept hidden below the surface: simulation necessarily implies a simulator. If the whole world is a simulation, the simulator cannot be part of it. Thus, there is something (or someone) outside our world doing the simulating. To state it clearly: by definition, the simulator is a supernatural entity, not part of the physical world.
And here we are again: just like Bach's vision of our voluntary subjugation to our digital overlords, the simulation hypothesis is classic transcendental theism — religion through the backdoor. And, again, it is presented in a manner that is attractive to technology-affine people who would never be seen attending a traditional church service, but often feel more comfortable in simulated settings than in the real world. Just don't mention the supernatural simulator lurking in the background too often, and it is all perfectly palatable.
The simulation hypothesis is a powerful tool for deception because it blurs the distinction between actual and virtual reality. If you believe the simulation argument, then both physical and simulated environments are of the same quality and kind — never more than digital computation. And the other way around: if you believe that every physical process is some kind of digital computation to begin with, you are more likely to buy into the claim that simulated experiences can actually be equivalent to real ones. Simple and self-evident! Or so it seems.
The most forceful and focused argument for the equivalence of the real and the virtual is presented in a recent book by philosopher David Chalmers (of philosophical zombie fame), which is aptly entitled "Reality+." It fits the techno-transcendentalist gospel snugly.
On the one hand, I have to agree with Chalmers: of course, virtual worlds can generate situations that go beyond real-world experiences and are real as in "capable of being experienced with our physical senses." Moreover, I don't doubt that virtual experiences can have tangible consequences in the physical world. Therefore, we do need to take virtuality seriously.
On the other hand, virtuality is a bit like god, or unicorns. It may exist in the sense of having real consequences, but it does not exist in the way a rock does, or a human being. What Chalmers doesn't see (but what seems important to me) is that there is a pretty straightforward and foolproof way to distinguish virtual and physical reality: physical reality will kill you if you ignore it for long enough. Virtual experiences (and unicorns) won't. They will just go away.
This intentional blurring of boundaries between the real and the virtual leaves the door wide open for a dangerous descent into delusion, reducing our grip on reality at a time when that grip seems loose enough to begin with.
Think about it: we are increasingly entangled in virtuality. Even if we don't buy into Bach's tale of the coming transition to "substrate-agnostic consciousness," techno-transcendentalism is bringing back all-powerful deities in the guise of supernatural simulators and machine superintelligences. At the same time, it delivers the promise of a better life in virtual reality (quite literally heaven on earth): a world completely under your own control, neatly tailored to your own wants and needs, free of the insecurities and inconveniences of actual reality. Complete wish fulfillment. Paradise at last! Utter freedom. Hallelujah!
The snag is: this freedom does not apply in the real world. Quite the contrary.
The whole idea is utterly elitist and undemocratic. To carry on with techno-transcendence, strong and focused leadership by a small group of visionaries will be required (or so the quiet and discreet thinking goes). It will require unprecedented amounts of sustained capital investment, technological development, material resources, and energy (AI is an extremely wasteful business; but more on that later). To pull it off, lots of minions will have to be enlisted in the project. These people will only get the cheap ticket: a temporary escape from reality, a transient digital hit of dopamine. No eternal bliss or life for them.
And so, before you have noticed, you will have given away all your agency and creativity to some AI-produced virtuality that you have to purchase (at increasing expense ... surprise, surprise) from some corporation that has a monopoly on this modern incarnation of heaven. Just like the medieval church back then, really.
That's the business model: sell a narrative of techno-utopia to enough gullible fools, and they will finance a political revolution for the chosen few. Lure them with talk of freedom and a virtual land of milk and honey. Scare them with the inevitable rise of the machines. A brave new world awaits. Only this time the happiness drug that keeps you from realizing what is going on is digital, not chemical. And all the while you are actually believing you will be among the chosen few. Neat and simple.
Techno-transcendentalism is an ideological tool for the achievement of libertarian utopia. In that sense, Bach is certainly right: it is possible to transfer the methods of a premodern god-fearing society straight to ours, to build a society in which a few rich and influential individuals with maximum personal freedom and unfettered power run things, freed from the burden of societal oversight and regulation. It will not be democratic. It will be a form of libertarian neofeudalism, an extremely unjust and unequal form of society.
That's why we need stringent industry regulation. And we need it now.
The problem is that we are constantly distracted from this simple and urgent issue by a flood of hyped bullshit claims about superintelligent machines and technological singularities that are apparently imminent.
And what if such distraction is exactly the point?
No consciousness or general intelligence will spring from an algorithm any time soon. In fact, it will very probably never happen. But losing our freedom to a small elite of tech overlords, that is a real and plausible scenario. And it may happen very soon.
I told you, it's a medieval cult. But it gets worse. Much worse. Let's turn to the apocalyptic branch of techno-transcendentalism. Brace yourself: the end is nigh. But there is one path to redemption. The techno-illuminati will show you.
OFF THE PRECIPICE: AI APOCALYPSE AND DOOMER TRANSCENDENTALISM
Not everybody in awe of machine "intelligence" thinks it's an unreservedly good thing though, and even some who like the idea of transitioning to "substrate-agnostic consciousness" are afraid that things may go awfully awry along the way if we don't carefully listen to their well-meaning advice.
For example, longtermist and effective-altruism activist Toby Ord, in his book "The Precipice," embarks on the rather ambitious task of calculating the probabilities for all of humanity's current "existential risks." Those are the kinds of risks that threaten to "eliminate humanity's long-term potential," either by the complete extinction of our species or the permanent collapse of civilization.
The good news is: there is only a 1:10,000 chance that we will go extinct within the next 100 years due to natural causes, such as a catastrophic asteroid impact, a massive supervolcanic eruption, or a nearby supernova. This will cover my lifetime and that of my children. Phew!
Unfortunately, there's bad news too: Ord arrives at a 1:6 chance that humanity will wipe out its own potential within the next 100 years. In other words: we are playing a kind of Russian roulette with our future at the moment.
Ord's list of human-made existential risks includes factors that also keep me awake at night, like nuclear war (at a somewhat surprisingly low 1:1,000), climate change (also 1:1,000), as well as natural (1:10,000) and engineered (1:30) pandemics. But exceeding the summed probabilities of all other listed existential risks, natural or human-made, is one single factor: unaligned artificial intelligence, at a whopping 1:10 likelihood. Woah. These guys are really afraid of AI!
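The arithmetic behind that last claim is easy to sanity-check. Here is a minimal sketch in Python, using only the "one in N" figures quoted in this essay (not Ord's full table, which contains further entries):

```python
# Odds quoted above, expressed as "1 in N" chances over the next 100 years.
odds = {
    "natural causes (asteroid, supervolcano, supernova)": 10_000,
    "nuclear war": 1_000,
    "climate change": 1_000,
    "natural pandemic": 10_000,
    "engineered pandemic": 30,
    "unaligned AI": 10,
}

# Convert each "1 in N" figure to a probability.
probs = {risk: 1 / n for risk, n in odds.items()}

# Sum every listed risk except unaligned AI.
others = sum(p for risk, p in probs.items() if risk != "unaligned AI")

print(f"unaligned AI alone:            {probs['unaligned AI']:.3f}")
print(f"all other listed risks summed: {others:.3f}")
# The single AI figure (0.1) indeed exceeds the sum of the others (~0.036).
```

So, at least for the figures cited here, the claim holds: Ord's AI number alone is roughly three times the combined weight of everything else on the list.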
But why? Aren't we much closer to nuclear war than getting wiped out by ChatGPT? Aren't we under constant threat of some sort of pathogen escaping from a bio-weapons lab? (The kind of thing that very probably did not happen with COVID-19.) What about an internal collapse of civilization? Politics, you know — our own stupidity killing us all?
Nope. It is going to be unaligned AGI.
Autodidact, self-declared genius, and rationality blogger Eliezer Yudkowsky has spent the better part of the last twenty years telling us how and why, an effort that culminated in a rambling list of AGI ruin scenarios and a short but intense rant in Time magazine a couple of weeks ago, where he writes:
"If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter."
Now that's quite something. He also calls for "rogue data centers" to be destroyed by airstrike, and thinks that "preventing AI scenarios is considered a priority above preventing a full nuclear exchange."
Yes, that sounds utterly nuts. If Bach is the Spanish Inquisition, with Yudkowsky it's welcome to Jonestown. First-rate AI doom at its peak.
But, not so fast: I am a big fan of applying the (pre)cautionary principle to so-called ruin problems, where the worst-case scenario has a hard-to-quantify but non-zero probability, and has truly disastrous and irreversible consequences. After all, it is reasonable to argue that we should err on the safe side when it comes to climate tipping points, the emergence of novel global pandemics, or the release of genetically modified organisms into ecologies we do not even begin to understand.
So, let's have a look at Yudkowsky's worst-case scenario. Is it worth "shutting it all down?" Is it plausible, or even possible, that AGI is going to kill us all? How much should we worry?
Well. There are a few serious problems with the argument. In fact, Yudkowsky's scenario for the end of the world is cartoonishly overblown. I don't want to give him too much airtime, so I will just point out a few problems that reduce the probability of his worst-case scenario to basically zero. Naught. End of the world postponed until further notice (or until that full nuclear exchange or human-made pandemic wipes us all out).
The basic underlying problem lies in Yudkowsky's metaphysical assumptions, which are, quite frankly, completely delusional.
The first issue is that Yudkowsky, like all his techno-transcendentalist friends, assumes the inevitable emergence of AI that achieves "smarter-than-human intelligence" in the very near future. But it is never explained what that means. None of these guys can ever be bothered.
Yudkowsky claims that's exactly the point: the threat of AGI does not hinge on specific details or predictions, such as the question of whether or not an AI could become conscious. Similar to Bach's idea that machines already "think" faster than humans, intelligence is simply about systems that "optimize hard and calculate outputs that meet sufficiently complicated outcome criteria." That's all. The faster and larger, the smarter. Humans, go home. From here on it's "Australopithecus trying to fight Homo sapiens." (Remember Bach's plants vs. humans?) AI will perceive us as "creatures that are very stupid and very slow."
While it is true that we cannot know in detail how current AI algorithms work, how exactly they generate their output, because we cannot "decode anything that goes on in [their] giant inscrutable arrays," it is also true that we do have a very good idea of the fundamental limitations of such machines. For example, current AI models (no matter how complex) cannot "perceive humans as creatures that are slow and stupid" because they have no concept of "human," "creature," "slow," or "stupid." In general, they have no semantics, no referents outside language. It's simply not within their programmed nature. They have no meaning.
There are many other limitations. Here are a few basic things a human (or even a bacterium) can do, which AI algorithms cannot (and probably never will):
Organisms are embodied, while algorithms are not. The difference is not just being located in a mobile (e.g., robot) body, but a fundamental blurring of hardware and software in the living world. Organisms literally are what they do. There is no hardware-software distinction. Computers, in contrast, are designed for maximal independence of software and hardware.
Organisms make themselves, their software (symbols) directly producing their hardware (physics), and vice versa. Algorithms (no matter how "smart") are defined purely at the symbolic level, and can only produce more symbols, e.g., language models always stay in the domain of language. Their output may be instructions for an effector, but they have no external referents. Their interactions with the outside world are always indirect, mediated by hardware that is, itself, not a direct product of the software.
Organisms have agency, while algorithms do not. This means organisms have their own goals, which are determined by the organism itself, while algorithms will only ever have the goals we give them, no matter how indirectly. Basically, no machine can truly want or need anything. Us telling them what to want or what to optimize for is not true wanting or goal-oriented behavior.
Organisms live in the real world, where most problems are ill-defined, and information is scarce, ambiguous, and often misleading. We can call this a large world. In contrast, algorithms exist (by definition) in a small world, where every problem is well defined. They cannot (even in principle) escape that world. Even if their small world seems enormous to us, it remains small. And even if they move around the large world in robot hardware, they remain stuck in their small world. This is exactly why self-driving cars are such a tricky business.
Organisms have predictive internal models of their world, based on what is relevant to them for their surviving and thriving. Algorithms are not alive and don't flourish or suffer. For them, everything and nothing is relevant in their small worlds. They do not need models and cannot have them. Their world is their model. There is no need for abstraction or idealization.
Organisms can identify what is relevant to them, and translate ill-defined into well-defined problems, even in situations they have never encountered before. Algorithms will never be able to do that. In fact, they have no need to since all problems are well-defined to begin with, and nothing and everything is relevant at the same time in their small world. All an algorithm can do is find correlations and features in its preordered data set. Such data are the world of the algorithm, a world which is purely symbolic.
Organisms learn through direct encounters, through active engagement, with the physical world. In contrast, algorithms only ever learn from preformatted, preclassified, and preordered data (see the last point). They cannot frame their problems themselves. They cannot turn ill-defined problems into well-defined ones. Living beings will always have to frame their problems for them.
I could go on and on. The bottom line is: thinking is not just "optimizing hard" and producing "complicated outputs." It is a qualitatively different process than algorithmic computation. To know is to live. As Alison Gopnik has correctly pointed out, categories such as "intelligence," "agency," and "thinking" do not even apply to algorithmic AI, which is just fancy high-dimensional statistical inference. No agency will ever spring from it, and without agency no true thinking, general intelligence, or consciousness.
Artificial intelligence is a complete misnomer. The field should be called algorithmic mimicry: the increasingly convincing appearance of intelligent behavior. Pareidolia on steroids for the 21st century. There is no "there" there. The mimicry is utterly shallow. I've actually co-authored a peer-reviewed paper on this, with my colleagues Andrea Roli and Stuart Kauffman.
Thus, when Yudkowsky claims that we cannot align a "superintelligent AI" to our own interests, he has not the faintest clue what he is talking about. Wouldn't it be nice if these AI nerds had at least a minimal understanding of the fundamental difference between the purely syntactic world their algorithms exist in, and the deeply semantic nature of real life? Instead, we get industry-sponsored academics and CEOs of AI companies telling us that it is us humans who are not that sophisticated after all. Total brainwash. Complete delusion.
But how can I be so sure? Maybe the joke really is on us? Could Yudkowsky's doomsday scenario be right after all? Are we about to be replaced by AGI?
Keep calm and read on: I do not think we are.
Yudkowsky's ridiculous scenarios of AI creating "super-life" via email (I will not waste any time on this), and even his stupid "thought experiment" of the paperclip maximizer, do not illustrate any real alignment problems at all. If you do not want the world to be turned into paperclips, pull the damn plug out of the paperclip maker. AI is not alive. It is a machine. You cannot kill it, but you can easily shut it off. Alignment achieved. Voilà!
If an AI succeeds in turning the whole world into paperclips, it is because we humans have put it in a position to do so.
Let me tell you this: the risk of AGI takeover and apocalypse is zero, or very, very near zero, and not just for the next 100 years. At least in this regard, we may sleep tight at night. There is no longtermist nirvana, and no doomer AGI apocalypse. Let's downgrade that particular risk by a few orders of magnitude. I'm usually not in the business of pretending to know long-term odds, but I'll give it a 1:1,000,000,000, or thereabouts. You know, zero, for all practical purposes.
Let's worry about real problems instead. What happened to humanity that we even listen to these people? The danger of AGI is nil, but the danger of libertarian neofeudalism is very very real. Why would anyone in their right mind buy into techno-transcendentalism? It is used to enslave us. To take our freedom away. Why then do so many people fall for this narrative? It's ridiculous and deranged. Are we all deluded? Have we lost our minds?
Yes, no doubt, we are a bit deluded, and we are losing our minds these days. I think that the popularity of the whole techno-transcendental narrative springs from two main sources. First, a deep craving — in these times of profound meaning crisis — for a positive mythological vision, for transformative stories of salvation. Hence the revived popularity of a markedly unmodern Christian ideology in this techno-centric age, paralleling the recent resurgence of actual evangelical movements in the U.S. and elsewhere in the world.
But, in addition, the acceptance of such techno-utopian fairy tales also depends on a deeper metaphysical confusion about reality that characterizes the entire age of modernity: it is the mistaken, but highly entrenched idea, that everything — the whole world and all the living and non-living things within it — is some kind of manipulable mechanism.
If you ask me, it is high time that we move beyond this age of machines, and leave its technological utopias and nightmares behind. It is high time we stop listening to the techno-transcendentalists, make their business model illegal, and place their horrific political ideology far outside our society's Overton window. Call me intolerant. But tolerance must end where such serious threats to our sanity and well-being begin.
A MACHINE METAPHYSICAL MESS
As I have already mentioned, techno-transcendentalism poses as a rational science-based world view. In fact, it often poses as the only really rational science-based world view, for instance, when it makes an appearance within the rationality community. If you are a rigorous thinker, there seems to be no alternative to its no-nonsense mechanistic tenets.
My final task here is to show that this is not at all true. In fact, the metaphysical assumptions that techno-transcendentalism is based on are extremely dubious. We've already encountered this issue above, but to understand it in a bit more depth, we need to look at these metaphysical assumptions more closely.
Metaphysics does not feature heavily in any of the recent discussions about AGI. In general, it is not a topic that a lot of people are familiar with these days. It sounds a little detached, and old-fashioned — you know, untethered in the Platonic realm. We imagine ancient Greek philosophers leisurely strolling around cloistered halls. Indeed, the word comes from the fact that Aristotle's "first philosophy" (as he called it) was placed by later editors right after his "Physics" in the traditional ordering of his works. In this way, it is literally after or beyond ("meta") physics.
In recent times, metaphysics has fallen into disrepute as mere speculation. Something that people with facts don't have any need for. Take the hard-nosed logical positivists of the Vienna Circle in the early 20th century. They defined metaphysics as "everything that cannot be derived through logical reasoning from empirical observation," and declared it utterly meaningless. We still feel the legacy of that sentiment today. Many of my scientist colleagues still think metaphysics does not concern them. Yet, as philosopher Daniel Dennett rightly points out: "there is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination."
And, my oh my, there is a lot of unexamined baggage in techno-transcendentalism. In fact, the sheer number of foundational assumptions that nobody is allowed to openly scrutinize or criticize are ample testament to the deeply cultish nature of the ideology.
Here, I'll focus on the most fundamental assumption on which the whole techno-transcendentalist creed rests: every physical process in the universe must be computable.
In more precise and technical terms, this means we should be able to exactly reproduce any physical process by simulating it on a universal Turing machine (an abstract model of a digital computer with potentially unlimited memory and unlimited time to run, which was invented in 1936 by Alan Turing, the man who gave the Turing Prize its name).
To clarify, the emphasis is on "exactly" here: techno-transcendentalists do not merely believe that we can usefully approximate physical processes by simulating them in a digital computer (which is a perfectly defensible position) but, in a much stronger sense, that the universe and everything in it — from molecules to rocks to bacteria to human brains — literally is one enormous digital computer. This is techno-transcendentalist metaphysics.
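To make the abstract model concrete: here is a minimal, toy sketch of a Turing machine simulator. The rule format and helper names are my own illustrative choices, not any particular historical formulation; the point is just that the whole model boils down to a finite table of symbol-rewriting rules operating on an unbounded tape.

```python
# A toy Turing machine simulator. Transitions map
# (state, symbol) -> (symbol_to_write, head_move, next_state).

def run_turing_machine(transitions, tape, state="start", blank="_"):
    """Run until the machine enters the 'halt' state; return the tape."""
    cells = dict(enumerate(tape))  # sparse tape, unbounded in both directions
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example: a two-rule machine that inverts a binary string.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(invert, "10110"))  # -> 01001
```

Everything a digital computer does is, in principle, reducible to a (vastly larger) table of this kind, which is what gives the universal Turing machine its foundational status.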
This universal computationalism includes, but is not restricted to, the simulation hypothesis. Remember: if the whole world is a simulation, then there is a simulator outside it. In contrast, the mere fact that everything is computation does not imply a supernatural simulator.
Turing machines are not the only way to conceptualize computing and simulation. There are other abstract models of computation, such as lambda calculus or recursive function theory, but they are all equivalent in the sense that they all yield the exact same set of computable functions. What can be computed in one paradigm can be computed in all the others.
This fundamental insight is mathematically codified by something called the Church-Turing thesis. (Alonzo Church was the inventor of lambda calculus and Turing's PhD supervisor.) It unifies the general theory of computation by saying that every effective computation (roughly, anything that can be computed by following a finite, step-by-step mechanical procedure) can be carried out by an algorithm running on a universal Turing machine. This thesis cannot be proven in a rigorous mathematical sense (basically because we do not have a precise, formal, and general definition of "effective computation"), but it is also not controversial. In practice, the Church-Turing thesis is a very solid foundation for a general theory of computation.
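The equivalence of these paradigms can be illustrated with a toy example: the same function written once as an imperative loop (Turing-machine style) and once in the style of pure lambda calculus, using a fixed-point combinator instead of named recursion. This is just an illustrative sketch; the function names are my own.

```python
# One function, two paradigms.

def factorial_imperative(n):
    # Turing-machine style: mutate state in a loop.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# The Z combinator: the strict-evaluation fixed-point combinator,
# needed because Python evaluates eagerly, unlike the pure lambda calculus.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial defined with nothing but anonymous functions: no def,
# no loops, no self-reference by name.
factorial_lambda = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(factorial_imperative(6))  # -> 720
print(factorial_lambda(6))      # -> 720
```

The two definitions look nothing alike, yet they compute exactly the same function, which is the sense in which the paradigms are equivalent.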
The situation is very different when it comes to applying the theory of computation to physics. Assuming that every physical process in the universe is computable is a much stronger form of the Church-Turing thesis, called the Church-Turing-Deutsch conjecture. It was proposed by physicist David Deutsch in 1985, and later popularized in his book "The Fabric of Reality." It is important to note that this physical version of the Church-Turing thesis does not logically follow from the original. Instead, it is intended to be an empirical hypothesis, testable by scientific experimentation.
And here comes the surprising twist: there is no evidence at all that the Church-Turing-Deutsch conjecture holds. Not one jot. It is mere speculation on the part of Deutsch, who surmised that the laws of quantum mechanics are indeed computable, and that they describe every physical process in the universe. Both assumptions are highly doubtful.
In fact, there are solid arguments that quite convincingly refute them. These arguments indicate that not every physical process is computable and, indeed, that perhaps no physical process can be precisely captured by simulation on a Turing machine. For instance, neither the laws of classical physics nor those of general relativity are entirely computable (since they contain noncomputable real numbers and infinities). Quantum mechanics introduces its own difficulties in the form of the uncertainty principle and its resulting quantum indeterminacy. The theory of measurement imposes its own (very different) limitations.
Beyond these quite general doubts, a concrete counterexample of noncomputable physical processes is provided by Robert Rosen's conjecture that living systems (and all those systems that contain them, such as ecologies and societies) cannot be captured completely by algorithmic simulation. This theoretical insight, based on the branch of mathematics called category theory, was first formulated in the late 1950s, presented in detail in Rosen's book "Life Itself" (1991), and later derived in a mathematically airtight manner by his student Aloysius Louie in "More Than Life Itself." This work is widely ignored, even though its claims remain firmly standing, despite numerous attempts at refutation. This, arguably, renders Rosen's claims more plausible than those derived from the Church-Turing-Deutsch conjecture.
I could go on. But I guess the main point is clear by now: the kind of radical and universal computationalism that grounds techno-transcendentalism does not stand up to closer philosophical and scientific scrutiny. It is shaky at best, and completely upside-down if you're a skeptic like I am. There is no convincing reason to believe in it.
Yet, this state of affairs is glibly disregarded, not only by techno-transcendentalists, but also by a large and prominent number of contemporary computer scientists, physicists, and biologists. The computability of everything is an assumption that has become self-evident not because, but in spite of, the existing evidence.
How could something like this happen? How could this unproven but fundamental assumption have escaped the scrutiny of the organized skepticism so much revered and allegedly practiced by techno-transcendentalists and other scientist believers in computationalism? Personally, I think that the uncritical acceptance of this dogma comes from the mistaken idea that science has to be mechanistic and reductionist to be rigorous. The world is simply presupposed to be a machine. Algorithms are the most versatile mechanisms humanity has ever invented. Because of this, it is easy to fall into the mistaken assumption that everything in the world works like our latest and fanciest technology. But that's a vast and complicated topic, which I will reserve for another blog post in the future.
With the assumption that everything is computation falls the assumption that algorithmic simulation corresponds to real cognition in living beings in any meaningful way. It is not at all evident that machines can "think" the way humans do. Why should thinking and computing be equivalent? Cognition is not just a matter of speedy optimization or calculation, as Yudkowsky asserts. There are fundamental differences in how machines and living beings are organized.
There is absolutely no reason to believe that machines will outpace human cognitive skills any time soon. Granted, they may do better at specific tasks that involve the detection of high-dimensional correlations, and also those that require memorizing many data points (humans can only hold about seven objects in short-term memory at any given time). Those tasks, and pen-and-paper calculations in particular, constitute the tiny subset of human cognitive skills that served as the template for the modern concept of "computation" in the first place.
But brains can do many more things, and they certainly have not evolved to be computers. Not at all. Instead, they are organs adapted to help animals better solve the problem of relevance in their complex and inscrutable environment (something algorithms famously cannot do, and probably never will). More on that in a later blog post. I'm currently also writing a scientific paper on the topic. But that is not the main point here.
That main point is: the metaphysics of techno-transcendentalism — its radical and universal computationalism as well as the belief in the inevitable supremacy of machines — is based on a simple mistake, a mistake which is called the fallacy of misplaced concreteness (or fallacy of reification). Computation is an abstracted way to represent reality, not reality itself. Techno-transcendentalists (and all other adherents of strong forms of computationalism) simply mistake the map for the territory.
The world is not a machine and, in particular, living beings are not machines. Neither of them constitutes some kind of digital computation. Conversely, computers cannot think like living beings can. In this sense, they are not intelligent at all, no matter how sophisticated they seem to us. Even a bacterium can solve the problem of relevance, but the "smartest" contemporary algorithm cannot.
Philosophers call what is happening here a fundamental category error. This brings us back to Alison Gopnik: even though AI researchers like LeCun chide everyone for being uneducated about their work, they themselves are completely clueless when it comes to concepts such as "thinking," "agency," "cognition," "consciousness," and indeed "intelligence." These concepts represent abilities that living beings possess, but algorithms cannot.
Not just techno-transcendentalists but, sadly, also most biologists today are deeply ignorant of this simple distinction. As long as this is the case, our discussion about AI, and AGI in particular, will remain deeply misinformed and confused.
What emerges at the origin of life, the capability for autonomous agency, springs from a completely new organization of matter. What emerges in a contemporary AI system, in contrast, is nothing but high-dimensional correlations that seem mysterious to us limited human beings because we are very bad at dealing with processes that involve many variables at the same time.
The two kinds of emergence are fundamentally and qualitatively different. No conscious AI, or AI with agency, will emerge any time soon. In fact, no AGI will ever be possible in an algorithmic framework. The end of the world is not nearly as nigh as Yudkowsky wants to make us believe.
Does that mean that the current developments surrounding AI are harmless? Not at all!
I have argued that techno-transcendentalist ideology is not just a modern mythological narrative, but also a useful tool to serve the purpose of bringing about libertarian neofeudalism. Not quite the end of the world, but a terrible enough prospect, if you ask me.
The technological singularity is not coming. Virtual heaven is not going to open its gates to us any time soon. Instead, the neo-religious template of techno-transcendentalism is a tried and true method from premodern times to keep the serfs in line with threats of the apocalypse and promises of eternal bliss. Stick and carrot. Unlike AI research itself, this is not exactly rocket science.
But, you may think, is this argument not overblown itself? Am I paranoid? Am I implying malicious intent where there is none? That is a good question.
I think there are two types of protagonists in this story of techno-transcendentalism: the believers and the cynics. Both, in their own ways, think they are doing what is best for humanity. They are not true villains. Yet, both are affected by delusions that will critically undermine their project, with potentially catastrophic effects. With their ideological blinkers on, they cannot see these dangers. They may not be villains, but they are certainly boneheaded, foolish in the sense of lacking wisdom, and we do not want such people as our leaders.
The central delusion they all share is the following: both believers and cynics think that the world is a machine. Worse, it is their plaything — controllable, predictable, programmable. And they all want to be in charge of play, they want to steer the machine, they want to be the programmer, without too much outside interference. A bunch of 14-year-old boys who are fighting over who gets to play the next round of Mario Kart. Something like that.
Hence neofeudalism, and more or less overt anti-democratic activism. The oncoming social disruption is part of the program. This much, at least, is done with intent. There can be no excuses afterwards. We know who is responsible.
However, there are also fundamental differences between the two camps. In particular, the believers obviously see techno-transcendentalism as a mythological narrative for our age, a true utopian vision, while the cynics see it only as a tool that serves their ulterior motives. Between these two extremes lies a spectrum.
Take Eliezer Yudkowsky, for example. He is at the extreme "believer" end of the scale. Joscha Bach is a believer too, but much more optimistic and moderate. They both have wholeheartedly bought into the story of the inevitable singularity — faith, hope, and love — and they both truly believe they're among the chosen ones in this story of salvation, albeit in very different ways: Bach as the leader of the faithful, Yudkowsky as the prophet of the apocalypse.
Elon Musk and Yann LeCun are at the other end of the spectrum, only to be outpaced by Peter Thiel (another infamous silicon-valley tycoon) in terms of cynicism. What counts in the cynic's corner are only two things: unfettered wealth and power. Not just political power, but power to change the world in their own image. They see themselves as engineers of reality. No mythos required. These actors do not buy into the techno-transcendentalist cult, but its adherents serve a useful purpose as the foot soldiers (often cannon fodder) of the coming revolution.
All this is wrapped up in longtermist philosophy: it's ok if you suffer and die, if we all go extinct even, as long as the far-future dream of galactic conquest and eternal bliss in simulation is on course, or at least intact. That is humanity's long-term destiny. It is an aim that is shared among believers and cynics. Their differing attitudes only concern the more or less pragmatic way to get there by overcoming our temporary predicaments with the help of various technological fixes.
This is the true danger of our current moment in human history. I have previously set the risk of AGI apocalypse to basically zero. But don't get me wrong. There is a clear and present danger. The probability of squandering humanity's future potential with AI is much, much higher than zero. (Don't ask me to put a number on it. I'm not a longtermist in the business of calculating existential risk.)
Here, we have a technology, massively wasteful in terms of energy and resources, that is being developed at scale and at breakneck speed by people with the wrong kind of ethical commitments and a maximally deluded view of themselves and their place in the universe. We have no idea where this will lead. But we know change will be fast, global, and hard to control. What can possibly go wrong?
Another thing is quite predictable: there will be severe unintended consequences, most of them probably not good. For the longtermists such short-term consequences do not even matter, as long as the associated risk is not deemed existential (by themselves, of course). Even human extinction could just be a temporary inconvenience as long as the transcendence, the singularity, the transition to "substrate-agnostic" intelligence is on the way.
This is why we need to stop these people. They are dangerous and deluded, yet full of self-confidence — self-righteous and convinced that they know the way. Their enormous yet brittle egos tend to be easily bruised by criticism. In their boundless hubris, they massively overestimate their own capacities. In particular, they massively overestimate their capacity to control and predict the consequences of what they are doing. They are foolish, misled by a world view and mythos that are fundamentally mistaken.
What they hate most (even more than criticism) is being regulated, held back by the ignorant masses that do not share their vision. They know what's best for us. But they are wrong. We need to slow them down, as much as possible and as soon as possible.
This is not a technological problem, and not a scientific one. Instead, it is political.
We do not need to stop AI research. That would be pretty pointless, especially if it is only for a few months. Instead, we need to stop the uncontrolled deployment of this technology until we have a better idea of its (unintended) consequences, and know what regulations to put in place.
This essay is not about such regulations, not about policy, but a few measures immediately suggest themselves. By internalizing the external costs of AI research, for example, we could effectively slow its rate of progress and interfere with the insane business model of the tech giants behind it. Next, we need to put laws in place. We need our own Butlerian jihad (if you're a Dune fan like me): "thou shalt not build a machine with the likeness of the human mind." Or, as Daniel Dennett puts it:
"Counterfeit money has been seen as vandalism against society ever since money has existed. Punishments included the death penalty and being drawn and quartered. Counterfeit people is at least as serious."
I agree. We cannot have fake people, and building algorithmic mimicry that impersonates existing or non-existing persons must be made illegal, as soon as possible.
Last but not least, we need to educate people about what it means to have agency, intelligence, consciousness, how to talk about these topics, and how seemingly "intelligent" machines do not have even the slightest spark of any of that. This time, the truth is not somewhere in the middle. AI is stochastic parrots all the way down.
We need a new vocabulary to talk about such algorithms. Algorithmic mimicry is a tool. We should treat and use it as such. We should not interact with algorithms as if they were sentient persons. At the same time, we must not treat people like machines. We have to stop optimizing ourselves, tuning our performance in a game nobody wants to play. You do not strive for alignment with your screwdriver. Neither should you align with an algorithm or the world it creates for you. Always remember: you can switch virtuality off if it confuses you too much.
Of course, this is no longer possible once we freely and willingly give away our agency to algorithms that have none. We can no longer make sense in a world that is flooded with misinformation. Note that the choice is entirely up to us. It is within our own hands. The alignment problem is exactly upside-down: the future supremacy of machines is never going to happen if we don't let it happen. It is the techno-transcendentalists who want to align you to their purpose. Don't be their fool. Refuse to play along. Don't be a serf.
This would be the AI revolution worth having. Are you with me?
Images were generated by the author using DALL-E 2 with the prompt "the neo-theistic cult of silicon intelligence."
Here is an excellent talk by Tristan Harris and Aza Raskin of the Center for Humane Technology, warning us about the dire consequences of algorithmic mimicry and its current business model: https://vimeo.com/809258916/92b420d98a.
Ironically, even these truly smart skeptics fall into the habit of talking about algorithms as if they "think" or "learn" (chemistry, for example), highlighting just how careful we need to be not to attribute any "human spark" to what is basically a massive statistical inference machine.
This week, I was invited to give a three-minute flash talk at an event called "Human Development, Sustainability, and Agency," which was organized by IIASA (the International Institute for Applied Systems Analysis), the United Nations Development Programme (UNDP), and the Austrian Academy of Sciences (ÖAW). The event framed the release of a UNDP report called "Unsettled times, unsettled lives: shaping our future in a transforming world." It forms part of IIASA's "Transformations within Reach" (TwR) project, which looks for ways to transform societal decision-making systems and processes to facilitate the transformation to sustainability.
You can find more information on our research project on agency and evolution here.
My flash talk was called "Beyond the Age of Machines." Because it was so short, I can share my full-length notes with you. Here we go:
"Hello everyone, and thank you for the opportunity to share a few of my ideas with you, which I hope illuminate the topic of agency, sustainability, and human development, and provide some inspiring food for thought. I am an evolutionary systems biologist and philosopher of science who studies organismic agency and its role in evolution, with a particular focus on evolutionary innovation and open-ended evolutionary dynamics. I consider human agency and consciousness to be highly evolved expressions of a much broader basic ability of all living organisms to act on their own behalf. This kind of natural agency is rooted in the peculiar self-manufacturing organization of organisms, and the consequences this organization has on how organisms interact with their environment (their agent-arena relationship). In particular, organisms distinguish themselves from non-living machines in that they can set and pursue their own intrinsic goals. This, in turn, enables living beings to realize what is relevant to them (and what is not) in the context of their specific experienced environment. Solving the problem of relevance is something a bacterium (or any other organism) can do, but even our most sophisticated algorithms never will. This is why there will never be any artificial general intelligence (AGI) based on algorithmic computing. If AGI is ever generated, it will come out of a biology lab (and will not be aligned with human interests), because general intelligence requires the ability to realize relevance. And yet, we humans increasingly cede our agency and creativity to mindless algorithms that completely lack these properties. Artificial intelligence (AI) is a gross misnomer. It should be called algorithmic mimicry, the computational art of imitation. AI always gets its goals provided by an external agent (the programmer). It is instructed to absorb patterns from past human activities and to recombine them in sometimes novel and surprising ways.
The problem is that an increasing amount of digital data will be AI-generated in the near future (and it will become increasingly difficult to tell computer- and human-generated content apart), meaning that AI algorithms will be trained increasingly on their own output. This creates a vicious inward spiral which will soon be a substantial impediment to the continued evolution of human agency and creativity. It will be crucial to take early action towards counteracting this pernicious trend by proper regulations, and a change in the design of the interfaces that guide the interaction of human agents with non-agential algorithms. In summary, we need to relearn to treat our machines for what they are: tools to boost our own agency, not masters to which we delegate our creativity and ability to act. For continued sustainable human development, we must go beyond the age of machines. Thank you very much."
SOURCES and FURTHER READING:
"organisms act on their own behalf": Stuart Kauffman, Investigations, OUP 2000.
"the self-manufacturing organization of the organism": see, for example, Robert Rosen, Life Itself, Columbia Univ Press, 1991; Alvaro Moreno & Matteo Mossio, Biological Autonomy, Springer, 2015; Jan-Hendrik Hofmeyr, A biochemically-realisable relational model of the self-manufacturing cell, Biosystems 207: 104463, 2021.
"organismic agents and their environment": Denis Walsh, Organisms, Agency, and Evolution. CUP, 2015.
"the agent-arena relationship": a concept first introduced in John Vervaeke's "Awakening from the Meaning Crisis," and also discussed in this interesting dialogue.
"agency and evolution": https://osf.io/2g7fh.
"agency and open-ended evolutionary dynamics": https://osf.io/yfmt3.
"organisms can set their own intrinsic goals": Daniel Nicholson, Organisms ≠ Machines. Stud Hist Phil Sci C 44: 669–78.
"to realize what is relevant": John Vervaeke, Timothy Lillicrap & Blake Richards, Relevance Realization and the Emerging Framework in Cognitive Science. J Log Comput 22: 79–99.
"solving the problem of relevance": see Stanford Encyclopedia of Philosophy, The Frame Problem.
"there will never be artificial general intelligence based on algorithmic computing": https://osf.io/yfmt3.
"we humans cede our agency": see The Social Dilemma.
So, this is as good a reason as any to wake up from my blogging hibernation/estivation that lasted almost a year, and start posting content on my web site again. What killed me this last year was a curious lack of time (for someone who doesn't actually have any job), and a gross surplus of perfectionism. Some blog posts were begun, but never finished. And so on. And so forth.
So here we are: I'm writing a very short post today, since the link I'll post will speak for itself, literally.
I've had the pleasure, a couple of weeks ago, to talk to Paul Middlebrooks (@pgmid) who runs the fantastic "Brain Inspired" podcast. Paul is a truly amazing interviewer. He found me on YouTube, through my "Beyond Networks" lecture series. During our discussion, we covered an astonishingly wide range of topics, from the limits of dynamical systems modeling, to process thinking, to agency in evolution, to open-ended evolutionary innovation, to AI and agency, life, intelligence, deep learning, autonomy, perspectivism, the limitations of mechanistic explanation (even the dynamic kind), and the problem with synthesis (and the extended evolutionary synthesis, in particular) in evolutionary biology.
The episode is now online. Check it out by clicking on the image below. Paul also has a breakdown of topics on his website, with precise timestamps, so you can home in on your favorites without having to listen to all the rest.
Before I go, let me say this: please support Paul and his work via Patreon. He has an excellent roster of guests (not counting myself), talking about a lot of really fascinating topics.
Hello everybody. This is my first blog post. I was undecided at first. What should I write about? Where should I begin? Then, last night, I came across this article by Michael Levin and Daniel Dennett in Aeon Magazine. It illustrates quite a few of the problems—both in science and about science—that I hope to cover in this blog.
"Cognition all the way down?" That doesn't sound good... and, believe me, it isn't. But where to begin? This article is a difficult beast to tackle. I can make neither head nor tail of it. Ironically, it also seems to lack purpose. What is it trying to tell us? That cells "think"? Maybe even molecules? How is it trying to make this argument? And what is it trying to achieve with it? Interdisciplinary dialogue? Popular science? A new biology? I think not. It does not explain anything, and it is not written in a way that the general public would understand. I do have a suspicion about what the article is really about. We'll come back to that at the end.
But before I start ripping into it, I should say that there are many things I actually like about the article. I got excited when I first saw the subtitle ("unthinking agents!"). I'm thinking and writing about agency and evolution myself at the moment, and believe that it's a very important and neglected topic. I also like the authors' concept of teleophobia, an irrational fear of all kinds of teleological explanations that circulates widely, not only among biologists. I like their argument against an oversimplified black-and-white dualism that ascribes true cognition to humans only. I like their call for biologists to look beyond the molecular level. I like that they highlight the fact that cells are not just passive building blocks, but autonomous participants busy building bodies. I like all that. It's very much in the spirit of my own research and thinking.
But then, everything derails. Spectacularly. Where should I start?
AGENCY ISN'T JUST FEEDBACK
The authors love to throw around difficult concepts without defining or explaining them. "Agency" is the central one, of course. From what I understand, they believe that agency is simply information processing with cybernetic feedback. But that won't do! A self-regulating homeostat may keep your house warm, but it does not qualify as an autonomous agent. Neither does a heat-seeking missile. As Stuart Kauffman points out in his Investigations, autonomous systems "act on their own behalf." At the very least, agents generate causal effects that are not entirely determined by their surroundings. The homeostat or missile simply reacts to its environment according to externally imposed rules, while the agent generates its rules from within. Importantly, an agent does not require consciousness (or even a nervous system) to do this.
AGENCY IS NATURAL, BUT NOT MECHANISTIC
How agents generate their own rules is a complicated matter. I will discuss it in a lot more detail in future posts. But one thing is quite robustly established by now: agency requires a peculiar kind of organisation that characterises living systems—they exhibit what is called organisational closure. Alvaro Moreno and Matteo Mossio have written an excellent book about it. What's most important is that, in an organism, each core component is both producer and product of some other component in the system. Roughly, that's what organisational closure means. The details don't matter here. What does matter is that it is far from clear that such systems can be captured by purely mechanistic explanations. And that's crucial: organisms aren't machines. They are not computers. Not even like computers. Robert Rosen's conjecture establishes just that. More on that later, too. For now, you must believe me that "mechanistic" explanations of organisms based on information-processing metaphors are not sufficient to account for organismic agency. Which brings us to the next problem.
EVOLVED COMPUTER METAPHORS
We've covered quite some ground so far, but we haven't even arrived at the article's two main flaws. The first of these is the central idea that organisms are some kind of evolved information-processing machines. They "exploit physical regularities to perform tasks" by having "long-range guided abilities," which evolved by natural selection. Quite fittingly, the authors call this advanced molecular magic "karma." Karma is a bitch. It kills you if you don't cooperate. And here we go: in one fell swoop, we have a theory of how multicellularity evolved. It's just a shifting of boundaries between agents (the ones that were never explained, mind you). Confused yet? This part of the article is so full of logical leaps and grandstanding vagueness that it's really hard to parse. To me, it makes no sense at all. But that does not matter. Because the only point it drives at is to resuscitate a theory that Dennett worked on throughout the 1970s and 80s, and which he summarised in his 1987 book The Intentional Stance.
THE INTENTIONAL STANCE
The intentional stance is the strategy of assuming that some thing has agency, purpose, and intentions in order to explain it, even though deep down you know it does not really have these properties. It used to be big (and very important) at the time when cognitive science emerged from behaviourist psychology, but nowadays it mostly survives in rational choice theory as applied in evolutionary biology. For critical treatments of this topic, please read Peter Godfrey-Smith's Darwinian Populations and Natural Selection, and Samir Okasha's Agents and Goals in Evolution. Bottom line: this is not a new topic at all, and it's very controversial. Does it make sense to invoke intentions to explain adaptive evolutionary strategies? Let's not get into that discussion here. Instead, I want to point out that the intentional stance does not take agency seriously at all! It is very ambiguous about whether it considers agency a real phenomenon, or whether it uses intentional explanations as a purely heuristic strategy that explicitly relies on anthropomorphisms. Thus, after telling us that parts of organisms are agents (at least that's how I would interpret the utterly bizarre "thought experiment" about the self-assembling car), the authors now kind of tell us that it's all just a metaphor, this agency thing. What is it, then? This is just confusing motte-and-bailey tactics, in my opinion.
AGENCY IS NOT COGNITION!!!
So now that we're all confused about whether agency is real or not, we get the next intellectual card trick: agency is swapped for cognition. Just like that. That's why it's "cognition all the way down." You know, agency is nothing but information processing. Cognition is nothing but information processing. Clearly, they must be the same. There's just a difference in scale between different organisms. Unfortunately, this renders either the concept of agency or the concept of cognition irrelevant. Luckily, there is an excellent paper by Fermín Fulda that explains the difference (and also tells you why "bacterial cognition" is really not a thing). Cognition happens in nervous systems. It involves proper intentions, the kind you can even be conscious of. Agency, in the broad sense I use it here, does not require intentionality or consciousness. It simply means that the organism can select from a repertoire of alternative behaviours when faced with opportunities or obstacles in its perceived environment. As Kauffman says, even a bacterium can "act on its own behalf." It need not think at all.
PANPSYCHISM: NO THANK YOU
By claiming that cells (or even parts of cells) are cognitive agents, Levin and Dennett open the door for the panpsychist bunch to jump on their "argument" as evidence for their own dubious metaphysics. I don't get it. Dennett is not usually sympathetic to the views of these people. Neither am I. Like ontological vitalism, panpsychism explains nothing. It does not explain consciousness or how it evolved. Instead, it explains consciousness away, negating the whole mystery of its origins by declaring the question solved. That's not proper science. That's not proper philosophy. That's bullshit.
SO: WHAT'S THE PURPOSE?
What we're left with is a mess. I have no idea what the point of this article is. An argument for panpsychism? An argument for the intentional stance? Certainly not an argument to take agency seriously. The authors seem to have no interest in engaging with the topic in any depth. Instead, they take the opportunity to buzzword-boost some of their old and new ideas. A little PR certainly can't harm. Knowing Michael Levin a little by now, I think that's what this article is about. Shameless self-promotion. Science in the age of selfies. A little signal, like that of the Tralfamadorians in The Sirens of Titan, constantly broadcasting "I'm here, I'm here, I'm here." And that's bullshit too.
To end on a positive note: the article touches on a lot of interesting topics. Agency. Organisms. Evolution. Philosophical biology. Reductionism. And the politics of academic prestige. I'll have more to say about all of these. So thank you, Mike and Dan, for the inspiration, and for setting such a clear example of how I do not want to communicate my own thinking and writing to the world.
Life beyond dogma!