Yann LeCun is one of the "godfathers of AI." He must be wicked smart, because he won the Turing Award in 2018 (together with the other two "godfathers," Yoshua Bengio and Geoffrey Hinton). The award is named after polymath Alan Turing and is sometimes called the Nobel Prize for computer scientists. Like many other AI researchers, LeCun is rich because he works for Meta (formerly Facebook) and has a big financial stake in the latest AI technology being pushed on humanity as broadly and quickly as possible. But that's ok, because he knows he is doing what is best for the rest of us, even if we sometimes fail to recognize it.

LeCun is a techno-optimist — an amazingly fervent one, in fact. He believes that AI will bring about a new Renaissance, and a new phase of the Enlightenment, both at the same time. No more waiting for hundreds of years between historical turning points. Now that is progress.

Sadly, LeCun is feeling misunderstood. In particular, he is upset with the unwashed masses, who are unappreciative and ignorant (as he can't stop pointing out). Imagine: these luddites want to regulate AI research before it has actually killed anyone (or everyone, but we'll come to that). Worse, his critics' "AI doom" is "causing a new form of medieval obscurantism." Nay, the views of people critical of AI are "indistinguishable from an apocalyptic religion." A witch hunt for AI nerds is on! The situation is dire for silicon-valley millionaires. The new renaissance and the new enlightenment are both at stake.

The interesting thing is: LeCun is not entirely wrong. There is a lot of very overblown rhetoric and, more specifically, there is a rather medieval-looking cult here. But LeCun is deliberately vague about where that cult comes from. His chosen tactic is to put a lot of very different people in the same "obscurantist" basket. That's neither fair nor right.

First off: it is not those who want to regulate AI who are the cultists. In fact, these people are amazingly reasonable: you should go and read their stuff. Go and do it, right now! Instead, the cult manifests among people who completely hyperbolize the potential of AI, and who tend to greatly overestimate the power of technology in general.

Let's give this cult a name. I'll call it techno-transcendentalism. It emanates from a group of heavily overlapping techno-utopian movements that can be summarized under the acronym TESCREAL: transhumanism, extropianism, singularitarianism, cosmism, the rationality community, effective altruism, and longtermism. This may all sound rather fringe. But techno-transcendentalism is very popular among powerful and influential entrepreneurs, philosophers, and researchers hell-bent on bringing a new form of intelligence into the world: the intelligence of machines.

Techno-transcendentalism is dangerous. It is metaphysically confused. It is also utterly anti-democratic and, in actuality, anti-human. Its practical political aim is to turn society back into a feudal system, ruled by a small elite of self-selected techno-Illuminati, who will bring about the inevitable technological singularity, lead humanity to the conquest of the universe, and to a blissful state of eternal life in a controlled simulated environment. Well, this is the optimistic version.
The apocalyptic branch of the cult sees humanity being wiped out by superintelligent machines in the near future, another kind of singularity that can only be prevented if we all listen and bow to the chosen few who are smart enough to actually get the point and get us all through this predicament.

The problem is: techno-transcendentalism has gained a certain popularity among the tech-affine because it poses as a rational, science-based worldview. Yet there is nothing rational or scientific about its dubious metaphysical assumptions. As we shall see, it really is just a modern variety of traditional Christianity — an archetypal form of theistic religion. It is literally a medieval cult, both with regard to its salvation narrative and its neofeudalist politics. And it is utterly obscurantist: dressed up in fancy-sounding pseudo-scientific jargon, its true aims and intentions rarely stated explicitly.

A few weeks ago, however, I came across a rare exception to this last rule. It is an interview on Jim Rutt's podcast with Joscha Bach, a researcher on artificial general intelligence (AGI) and a self-styled "philosopher" of AI. Bach's money has come from the AI Foundation and Intel, and he took quite some cash from Jeffrey Epstein too. He has been garnering some attention lately (on Lex Fridman's podcast, for example) as one of the foremost intellectual proponents of the optimistic kind of techno-transcendentalism (we'll get back to the apocalyptic version later). In his interview with Rutt, Bach spells out his worldview in a manner which is unusually honest and clear. He says the quiet part out loud, and it is amazingly revealing.

SURRENDER TO YOUR SILICON OVERLORDS

Rutt and Bach have a wide-ranging and captivating conversation. They talk about the recent flurry of advances in AI, about the prospect of AGI (what Bach calls "synthetic intelligence"), and about the alignment problem with increasingly powerful AI. These topics are highly relevant, and Bach's takes are certainly original. What's more: the message is fundamentally optimistic. We are called to embrace the full potential of AI, and to engage it with a positive, productive, and forward-looking mindset.

The discussion on the podcast begins along predictable lines: we get a few complaints about AI enthusiasts being treated unfairly by the public and the media, and a more than slightly self-serving claim that any attempts at AI regulation will be futile (since the machines will outsmart us anyway). There is a clear parallel to LeCun's gripes here, and it should come as no surprise that the two researchers are politically aligned and share a libertarian outlook.

Bach then provides us with a somewhat oversimplified but not unreasonable distinction between being sentient and being conscious. To be honest, I would have preferred to hear his definition of "intelligence" instead. These guys never define what they mean by that term. It's funny. And more than a bit creepy. But never mind. Let's move on. Because, suddenly, things become more interesting.

First, Bach tells us that computers already "think" at something close to the speed of light, much faster than us. Therefore, our future relationship with intelligent machines will be akin to the relationship of plants to humans today. More generally, he repeats throughout the interview that there is no point in denying our human inferiority when it comes to thinking machines. Bach, like many of his fellow AI engineers, sees this as an established fact.
Instead of fighting it, we should find a way to adjust to our inevitable fate. How do you co-exist with a race of silicon superintelligences whose interests may not be aligned with ours? To Bach, it is obvious that we will no longer be able to impose our values on them. But don't fret! There is a solution, and it may surprise you: Bach thinks the alignment problem can ultimately only be solved by love. You read that right: love.

To understand this unusual take, we need to examine its broader context. Without much beating about the bush, Bach places his argument within the traditional Christian framework of the seven virtues (as formulated by Aquinas). He explains that the Christian virtues are a tried and true model for organizing human society in the presence of some vastly superior entity. That's why we can transfer this ethical framework straight from the context of a god-fearing premodern society to a future of living under our new digital overlords.

Before we dismiss this as crazy and reactionary ideology, let us look at the seven virtues in a bit more detail. The first four, the cardinal virtues (prudence, temperance, justice, and courage), are practical, and hardly controversial (nor are they very relevant in the present context). But the last three are the theological virtues. This is where all the action is. The first of Aquinas' theological virtues is faith: the willingness to submit to your (over)lord, and to find others who are willing to do the same in order to found a society based on this collective act of submission. The second is hope: the willingness to invest in the coming of the (over)lord before it has established its terrestrial reign. And the third is love (as already mentioned), which Bach defines operationally as "finding a common purpose."

To summarize: humanity's only chance is to unite, bring about the inevitable technological singularity, and then collectively submit while convincing our digital overlords that we have a common purpose of sorts so they will keep us around (and maybe throw us a bone every once in a while). This is how we get alignment: submission to a higher purpose, the purpose of the superintelligent machines we have ourselves created.

If you think I'm attacking a straw man here, please go listen to the podcast. It's all right there, word for word, without much challenge from Rutt at any point during the interview. In fact, he considers what Bach says mind-blowing. On that, at least, we can agree.

But we're not done yet. In fact, it's about to get a lot wackier: talking of human purpose, Bach thinks that humanity has evolved for "dealing with entropy," "not to serve Gaia." In other words, the omega point of human evolution is, apparently, "to burn oil," which is a good thing because it "reactivates the fossilized fuel" and "puts it back into the atmosphere so new organisms can be created." I'm not making this up. These are literal quotes from the interview.

Bach admits that all of this may well lead to some short-term disruption (including our own extinction, as he briefly mentions in passing). But who cares? It'll all have been worth it if it serves the all-important transition from carbon-based to "substrate-agnostic" intelligence. Obviously, the philosophy of longtermism is strong in Bach: how little do our individual lives matter in light of this grand vision for a posthuman future? Like a true transhumanist, Bach believes this future to lie in machine intelligence, not only superior to ours but also lifted from the weaknesses of the flesh.
Humanity will be obsolete. And we'll be all the better for our demise: our true destiny lies in creating a realm of disembodied, ethereal superintelligence. Does that sound familiar? Of course it does: techno-transcendentalism is nothing but good old theistic religion, a medieval kind of Christianity rebranded and repackaged in techno-optimist jargon to flatter our self-image as sophisticated modern humans with an impressive (and seemingly unlimited) knack for technological innovation. It is a belief in all-powerful entities determining our fate, beings we must worship or be damned. Judgment day is near. You can join the cause to be among the chosen ones, ascending to eternal life in a realm beyond our physical world. Or you can stay behind and rot in your flesh. The choice is yours.

Except this time, god is not eternal. This time, we are building our deities ourselves in the form of machines of our own creation. Our human purpose, then, is to design our own objects of worship. More than that: our destiny is to transcend ourselves. Humanity is but a bridge. I doubt, though, that Nietzsche would have liked this particular kind of transformative hero's journey, an archetypal myth for our modern times. It would have been a bit too religious for him. It is certainly too religious for me. But that is not the only problem. It is a bullshit myth. And it is a crap religion.

SIMULATION, AND OTHER NEOFEUDALIST FAIRY TALES

At this point, you may object that Bach's views seem quite extreme, his opinions too far out on the fringe to be widely shared and popularized. And you are probably right. LeCun certainly does not seem very fond of Bach's kind of crazy utopianism. He has a much more realistic (and more business-oriented) take on the future potential of AI. So let it be noted: not every techno-optimist or AI researcher is a techno-transcendentalist. Not by a long shot. But techno-transcendentalism is tremendously useful, even for those who do not really believe in it.

Also, there are many less extreme versions of techno-transcendentalism that still share the essential tenets and metaphysical commitments of Bach's deluded narrative without sounding quite as unhinged. And those views are held widely, not only among AI nerds such as Bach, but also among the powerful technological mega-entrepreneurs of our age, and the tech-enthusiast armies of modern serfs who follow and admire their apparently sane, optimistic, and scientifically grounded vision.

I'm not using the term "serf" gratuitously here. We are on a new road to serfdom. But it is not the government which oppresses us this time (although that is what many of the future minions firmly believe). Instead, we are about to willingly enslave ourselves, seduced and misled by our new tech overlords and their academic flunkies like Bach. This is the true danger of AI. Techno-transcendentalism serves as the ideology of a form of libertarian neofeudalism that is deeply anti-democratic and really, really bad for most of humanity. Let us see how it all ties together.

As already mentioned, the main leitmotif of the techno-transcendentalist narrative is the view that some kind of technological singularity is inevitable. Machines will outpace human powers. We will no longer be able to control our technology at some point in the not-too-distant future. Such speculative assumptions and political visions are taken for proven facts, and often used to argue against regulatory efforts (as Bach does on Rutt's podcast).
If there is one central insight to be gained from this essay, it is this: the belief in the inevitable superiority of machines is rooted in a metaphysical view of the whole world as a machine. More specifically, it is grounded in an extreme version of a view called computationalism, the idea that not only the human mind, but every physical process that exists in the universe, can be considered a form of computation. In other words, what computers do and what we do when we think are exactly the same kind of process. Obviously.

This computational worldview is frighteningly common and fashionable these days. It has become so commonplace that it is rarely questioned anymore, even though it is mere speculation, purely metaphysical, and not based on any empirical evidence. As an example, an extreme form of computationalism provides the metaphysical foundation for Michael Levin's wildly popular (and equally wildly confused) arguments about agency and (collective) intelligence, which I have criticized before. Here, the computationalist belief is that natural agency is mere algorithmic input-output processing, and intelligence simply lies in the intricacy of this process, which increases every time several computing devices (from rocks to philosophers) join forces to "think" together. It's a weird view of the world that blurs the boundary between the living and the non-living and, ultimately, leads to panpsychism if properly thought through (more on that another time). Panpsychism, by the way, is another view that's increasingly popular with the technorati. Levin gets an honorable mention by Bach and, of course, he's been on Fridman's podcast. It all fits together perfectly. They're all part of the same cult.

Computationalism, taken to its logical conclusion, yields the idea that the whole of reality may be one big simulation. This simulation hypothesis (or simulation argument) was popularized by longtermist philosopher Nick Bostrom (another guest on Fridman's podcast). Not surprisingly, the simulation hypothesis is popular among techies, and has been explicitly endorsed by silicon-valley entrepreneurs like Elon Musk. The argument is based on the idea that computer simulations, as well as augmented and virtual reality, are becoming increasingly difficult to distinguish from real-world experiences as our technological abilities improve at breakneck speed. We may soon be nearing a point, so the reasoning goes, at which our own simulations will appear as real to us as the actual world. This lends plausibility to the idea that even our interactions with the actual world may be the result of some gigantic computer simulation.

There are a number of obvious problems with this view. For starters, we may wonder what exactly the point is. Arguably, no particularly useful insights about our lives or the world we live in are gained by assuming we live in a simulation. And it seems pretty hard to come up with an experiment that would test the validity of the hypothesis. Yet the simulation argument does fit rather nicely with the metaphysical assumption that everything in the universe is a computation. If every physical process is simulable, is it not reasonable to assume that these processes themselves are actually the product of some kind of all-encompassing simulation?

At first glance, the simulation hypothesis looks like a perfectly scientific view of the world. But a little bit of reflection reveals a more subtle aspect of the idea, obvious once you see it, but usually kept hidden below the surface: a simulation necessarily implies a simulator.
If the whole world is a simulation, the simulator cannot be part of it. Thus, there is something (or someone) outside our world doing the simulating. To state it clearly: by definition, the simulator is a supernatural entity, not part of the physical world. And here we are again: just like Bach's vision of our voluntary subjugation to our digital overlords, the simulation hypothesis is classic transcendental theism — religion through the backdoor. And, again, it is presented in a manner that is attractive to technology-affine people who would never be seen attending a traditional church service, but often feel more comfortable in simulated settings than in the real world. Just don't mention the supernatural simulator lurking in the background too often, and it is all perfectly palatable.

The simulation hypothesis is a powerful tool for deception because it blurs the distinction between actual and virtual reality. If you believe the simulation argument, then both physical and simulated environments are of the same quality and kind — never more than digital computation. And the other way around: if you believe that every physical process is some kind of digital computation to begin with, you are more likely to buy into the claim that simulated experiences can actually be equivalent to real ones. Simple and self-evident! Or so it seems.

The most forceful and focused argument for the equivalence of the real and the virtual is presented in a recent book by philosopher David Chalmers (of philosophical zombie fame), which is aptly entitled "Reality+." It fits the techno-transcendentalist gospel snugly. On the one hand, I have to agree with Chalmers: of course, virtual worlds can generate situations that go beyond real-world experiences and are real as in "capable of being experienced with our physical senses." Moreover, I don't doubt that virtual experiences can have tangible consequences in the physical world. Therefore, we do need to take virtuality seriously. On the other hand, virtuality is a bit like god, or unicorns. It may exist in the sense of having real consequences, but it does not exist in the way a rock does, or a human being. What Chalmers doesn't see (but what seems important to me somehow) is that there is a pretty straightforward and foolproof way to distinguish virtual and physical reality: physical reality will kill you if you ignore it for long enough. Virtual experiences (and unicorns) won't. They will just go away.

This intentional blurring of boundaries between the real and the virtual leaves the door wide open for a dangerous descent into delusion, reducing our grip on reality at a time when that grip seems loose enough to begin with. Think about it: we are increasingly entangled in virtuality. Even if we don't buy into Bach's tale of the coming transition to "substrate-agnostic consciousness," techno-transcendentalism is bringing back all-powerful deities in the guise of supernatural simulators and machine superintelligences. At the same time, it delivers the promise of a better life in virtual reality (quite literally heaven on earth): a world completely under your own control, neatly tailored to your own wants and needs, free of the insecurities and inconveniences of actual reality. Complete wish fulfillment. Paradise at last! Utter freedom. Hallelujah! The snag is: this freedom does not apply in the real world. Quite the contrary. The whole idea is utterly elitist and undemocratic.
To carry techno-transcendence forward, strong and focused leadership by a small group of visionaries will be required (or so the quiet and discreet thinking goes). It will require unprecedented amounts of sustained capital investment, technological development, material resources, and energy (AI is an extremely wasteful business; but more on that later). To pull it off, lots of minions will have to be enlisted in the project. These people will only get the cheap ticket: a temporary escape from reality, a transient digital hit of dopamine. No eternal bliss or life for them. And so, before you have noticed, you will have given away all your agency and creativity to some AI-produced virtuality that you have to purchase (at increasing expense ... surprise, surprise) from some corporation that has a monopoly on this modern incarnation of heaven. Exactly like the medieval church back then, really.

That's the business model: sell a narrative of techno-utopia to enough gullible fools, and they will finance a political revolution for the chosen few. Lure them with talk of freedom and a virtual land of milk and honey. Scare them with the inevitable rise of the machines. A brave new world awaits. Only this time the happiness drug that keeps you from realizing what is going on is digital, not chemical. And all the while you actually believe you will be among the chosen few. Neat and simple.

Techno-transcendentalism is an ideological tool for the achievement of libertarian utopia. In that sense, Bach is certainly right: it is possible to transfer the methods of a premodern god-fearing society straight to ours, to build a society in which a few rich and influential individuals with maximum personal freedom and unfettered power run things, freed from the burden of societal oversight and regulation. It will not be democratic. It will be a form of libertarian neofeudalism, an extremely unjust and unequal form of society.

That's why we need stringent industry regulation. And we need it now. The problem is that we are constantly distracted from this simple and urgent issue by a constant flood of hyped bullshit claims about superintelligent machines and technological singularities that are apparently imminent. And what if such distraction is exactly the point? No consciousness or general intelligence will spring from an algorithm any time soon. In fact, it will very probably never happen. But losing our freedom to a small elite of tech overlords, that is a real and plausible scenario. And it may happen very soon. I told you, it's a medieval cult.

But it gets worse. Much worse. Let's turn to the apocalyptic branch of techno-transcendentalism. Brace yourself: the end is nigh. But there is one path to redemption. The techno-illuminati will show you.

OFF THE PRECIPICE: AI APOCALYPSE AND DOOMER TRANSCENDENTALISM

Not everybody in awe of machine "intelligence" thinks it's an unreservedly good thing, though, and even some who like the idea of transitioning to "substrate-agnostic consciousness" are afraid that things may go awfully awry along the way if we don't carefully listen to their well-meaning advice. For example, longtermist and effective-altruism activist Toby Ord, in his book "The Precipice," embarks on the rather ambitious task of calculating the probabilities of all of humanity's current "existential risks." Those are the kind of risks that threaten to "eliminate humanity's long-term potential," either by the complete extinction of our species or the permanent collapse of civilization.
The good news is: there is only a 1:10,000 chance that we will go extinct within the next 100 years due to natural causes, such as a catastrophic asteroid impact, a massive supervolcanic eruption, or a nearby supernova. This will cover my lifetime and that of my children. Phew! Unfortunately, there's bad news too: Ord arrives at a 1:6 chance that humanity will wipe out its own potential within the next 100 years. In other words: we are playing a kind of Russian roulette with our future at the moment. Ord's list of human-made existential risks includes factors that also keep me awake at night, like nuclear war (at a somewhat surprisingly low 1:1,000), climate change (also 1:1,000), as well as natural (1:10,000) and engineered (1:30) pandemics. But exceeding the summed probabilities of all other listed existential risks, natural or human-made, is one single factor: unaligned artificial intelligence, at a whopping 1:10 likelihood.

Whoa. These guys are really afraid of AI! But why? Aren't we much closer to nuclear war than to getting wiped out by ChatGPT? Aren't we under constant threat of some sort of pathogen escaping from a bio-weapons lab? (The kind of thing that very probably did not happen with COVID-19.) What about an internal collapse of civilization? Politics, you know — our own stupidity killing us all? Nope. It is going to be unaligned AGI.

Autodidact, self-declared genius, and rationality blogger Eliezer Yudkowsky has spent the better part of the last twenty years telling us how and why, an effort that culminated in a rambling list of AGI ruin scenarios and a short but intense rant in Time magazine a couple of weeks ago, where he writes: "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter." Now that's quite something. He also calls for "rogue data centers" to be destroyed by airstrike, and thinks that "preventing AI scenarios is considered a priority above preventing a full nuclear exchange." Yes, that sounds utterly nuts. If Bach is the Spanish Inquisition, with Yudkowsky it's welcome to Jonestown. First-rate AI doom at its peak.

But, not so fast: I am a big fan of applying the (pre)cautionary principle to so-called ruin problems, where the worst-case scenario has a hard-to-quantify but non-zero probability, and truly disastrous and irreversible consequences. After all, it is reasonable to argue that we should err on the safe side when it comes to climate tipping points, the emergence of novel global pandemics, or the release of genetically modified organisms into ecologies we do not even begin to understand. So, let's have a look at Yudkowsky's worst-case scenario. Is it worth "shutting it all down"? Is it plausible, or even possible, that AGI is going to kill us all? How much should we worry?

Well. There are a few serious problems with the argument. In fact, Yudkowsky's scenario for the end of the world is cartoonishly overblown. I don't want to give him too much airtime, and will just point out a few problems that leave the probability of his worst-case scenario at basically zero. Naught. End of the world postponed until further notice (or until that full nuclear exchange or human-made pandemic wipes us all out). The basic underlying problem lies in Yudkowsky's metaphysical assumptions, which are, quite frankly, completely delusional.
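(Before getting to those assumptions, a quick back-of-the-envelope check on Ord's numbers as quoted above, just to see how heavily the AI figure dominates his accounting. This is a minimal sketch in Python using only the odds listed in this essay; Ord's full table contains more entries, so it is an illustration, not a reconstruction of his overall 1:6 estimate.)

```python
# Odds per century as quoted in this essay from Ord's "The Precipice"
# (his book lists further risk categories not reproduced here).
other_listed_risks = {
    "nuclear war": 1 / 1_000,
    "climate change": 1 / 1_000,
    "natural pandemic": 1 / 10_000,
    "engineered pandemic": 1 / 30,
    "all natural risks combined": 1 / 10_000,  # asteroid, supervolcano, supernova
}
unaligned_ai = 1 / 10

others_combined = sum(other_listed_risks.values())
print(f"Other listed risks, summed: {others_combined:.4f}")                 # ~0.0355
print(f"Unaligned AI alone:         {unaligned_ai:.4f}")                    # 0.1000
print(f"Ratio (AI / all the rest):  {unaligned_ai / others_combined:.1f}")  # ~2.8
```

By this accounting, the single AI entry carries almost three times the weight of all the other risks listed above put together.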
The first issue is that Yudkowsky, like all his techno-transcendentalist friends, assumes the inevitable emergence of AI that achieves "smarter-than-human intelligence" in the very near future. But it is never explained what that means. None of these guys can ever be bothered. Yudkowsky claims that's exactly the point: the threat of AGI does not hinge on specific details or predictions, such as the question of whether or not an AI could become conscious. Similar to Bach's idea that machines already "think" faster than humans, intelligence is simply about systems that "optimize hard and calculate outputs that meet sufficiently complicated outcome criteria." That's all. The faster and larger, the smarter. Humans, go home. From here on it's "Australopithecus trying to fight Homo sapiens." (Remember Bach's plants vs. humans?) AI will perceive us as "creatures that are very stupid and very slow."

While it is true that we cannot know in detail how current AI algorithms work, how exactly they generate their output, because we cannot "decode anything that goes on in [their] giant inscrutable arrays," it is also true that we do have a very good idea of the fundamental limitations of such machines. For example, current AI models (no matter how complex) cannot "perceive humans as creatures that are slow and stupid" because they have no concept of "human," "creature," "slow," or "stupid." In general, they have no semantics, no referents outside language. It's simply not within their programmed nature. They have no meaning.

There are many other limitations. Here are a few basic things a human (or even a bacterium) can do, which AI algorithms cannot (and probably never will):

- Organisms are embodied, while algorithms are not. The difference is not just being located in a mobile (e.g., robot) body, but a fundamental blurring of hardware and software in the living world. Organisms literally are what they do. There is no hardware-software distinction. Computers, in contrast, are designed for maximal independence of software and hardware.

- Organisms make themselves, their software (symbols) directly producing their hardware (physics), and vice versa. Algorithms (no matter how "smart") are defined purely at the symbolic level, and can only produce more symbols; language models, for example, always stay in the domain of language. Their output may be instructions for an effector, but they have no external referents. Their interactions with the outside world are always indirect, mediated by hardware that is, itself, not a direct product of the software.

- Organisms have agency, while algorithms do not. This means organisms have their own goals, which are determined by the organism itself, while algorithms will only ever have the goals we give them, no matter how indirectly. Basically, no machine can truly want or need anything. Us telling them what to want or what to optimize for is not true wanting or goal-oriented behavior.

- Organisms live in the real world, where most problems are ill-defined, and information is scarce, ambiguous, and often misleading. We can call this a large world. In contrast, algorithms exist (by definition) in a small world, where every problem is well defined. They cannot (even in principle) escape that world. Even if their small world seems enormous to us, it remains small. And even if they move around the large world in robot hardware, they remain stuck in their small world. This is exactly why self-driving cars are such a tricky business.
- Organisms have predictive internal models of their world, based on what is relevant to them for their surviving and thriving. Algorithms are not alive and don't flourish or suffer. For them, everything and nothing is relevant in their small worlds. They do not need models and cannot have them. Their world is their model. There is no need for abstraction or idealization.

- Organisms can identify what is relevant to them, and translate ill-defined into well-defined problems, even in situations they have never encountered before. Algorithms will never be able to do that. In fact, they have no need to, since all problems are well-defined to begin with, and nothing and everything is relevant at the same time in their small world. All an algorithm can do is find correlations and features in its preordered data set. Such data are the world of the algorithm, a world which is purely symbolic.

- Organisms learn through direct encounters, through active engagement, with the physical world. In contrast, algorithms only ever learn from preformatted, preclassified, and preordered data (see the last point). They cannot frame their problems themselves. They cannot turn ill-defined problems into well-defined ones. Living beings will always have to frame their problems for them.

I could go on and on. The bottom line is: thinking is not just "optimizing hard" and producing "complicated outputs." It is a qualitatively different process from algorithmic computation. To know is to live. As Alison Gopnik has correctly pointed out, categories such as "intelligence," "agency," and "thinking" do not even apply to algorithmic AI, which is just fancy high-dimensional statistical inference. No agency will ever spring from it, and without agency there is no true thinking, general intelligence, or consciousness. Artificial intelligence is a complete misnomer. The field should be called algorithmic mimicry: the increasingly convincing appearance of intelligent behavior. Pareidolia on steroids for the 21st century. There is no "there" there. The mimicry is utterly shallow. I've actually co-authored a peer-reviewed paper on this, with my colleagues Andrea Roli and Stuart Kauffman.

Thus, when Yudkowsky claims that we cannot align a "superintelligent AI" to our own interests, he has not the faintest clue what he is talking about. Wouldn't it be nice if these AI nerds had at least a minimal understanding of the fundamental difference between the purely syntactic world their algorithms exist in, and the deeply semantic nature of real life? Instead, we get industry-sponsored academics and CEOs of AI companies telling us that it is us humans who are not that sophisticated after all. Total brainwash. Complete delusion.

But how can I be so sure? Maybe the joke really is on us? Could Yudkowsky's doomsday scenario be right after all? Are we about to be replaced by AGI? Keep calm and read on: I do not think we are. Yudkowsky's ridiculous scenarios of AI creating "super-life" via email (I will not waste any time on this), and even his stupid "thought experiment" of the paperclip maximizer, do not illustrate any real alignment problems at all. If you do not want the world to be turned into paperclips, pull the damn plug out of the paperclip maker. AI is not alive. It is a machine. You cannot kill it, but you can easily shut it off. Alignment achieved. Voilà! If an AI succeeds in turning the whole world into paperclips, it is because we humans have put it in a position to do so.
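To see just how little magic there is behind the curtain, it helps to look at the basic principle in its most primitive form. Below is a deliberately tiny sketch of algorithmic mimicry: a word-level bigram model that produces text purely by sampling co-occurrence statistics from its training data. The toy corpus and everything else in it are my own illustration, not anyone's actual system; real language models are incomparably larger and more sophisticated, but they, too, never leave the domain of token statistics.

```python
import random
from collections import defaultdict

# A toy "training corpus" (purely illustrative).
corpus = (
    "the machine predicts the next word "
    "the machine has no idea what a word means "
    "the next word follows from the previous word"
).split()

# Count which word follows which: pure co-occurrence statistics, no semantics.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def mimic(start: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling a statistically plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:  # dead end: no statistics available for this word
            break
        word = random.choice(follows[word])
        output.append(word)
    return " ".join(output)

print(mimic("the"))
# e.g. "the machine has no idea what a word follows from the"
# Locally plausible sequences, zero understanding: nothing in this program
# refers to anything outside its own table of word counts.
```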
Let me tell you this: the risk of AGI takeover and apocalypse is zero, or very, very near zero, and not just for the next 100 years. At least in this regard, we may sleep tight at night. There is no longtermist nirvana, and no doomer AGI apocalypse. Let's downgrade that particular risk by a few orders of magnitude. I'm usually not in the business of pretending to know long-term odds, but I'll give it a 1:1,000,000,000, or thereabouts. You know, zero, for all practical purposes. Let's worry about real problems instead.

What happened to humanity that we even listen to these people? The danger of AGI is nil, but the danger of libertarian neofeudalism is very, very real. Why would anyone in their right mind buy into techno-transcendentalism? It is used to enslave us. To take our freedom away. Why then do so many people fall for this narrative? It's ridiculous and deranged. Are we all deluded? Have we lost our minds?

Yes, no doubt, we are a bit deluded, and we are losing our minds these days. I think that the popularity of the whole techno-transcendentalist narrative springs from two main sources. First, a deep craving — in these times of profound meaning crisis — for a positive mythological vision, for transformative stories of salvation. Hence the revived popularity of a markedly unmodern Christian ideology in this techno-centric age, paralleling the recent resurgence of actual evangelical movements in the U.S. and elsewhere in the world. But, in addition, the acceptance of such techno-utopian fairy tales also depends on a deeper metaphysical confusion about reality that characterizes the entire age of modernity: it is the mistaken, but highly entrenched, idea that everything — the whole world and all the living and non-living things within it — is some kind of manipulable mechanism.

If you ask me, it is high time that we move beyond this age of machines, and leave its technological utopias and nightmares behind. It is high time we stop listening to the techno-transcendentalists, make their business model illegal, and place their horrific political ideology far outside our society's Overton window. Call me intolerant. But tolerance must end where such serious threats to our sanity and well-being begin.

A MACHINE METAPHYSICAL MESS

As I have already mentioned, techno-transcendentalism poses as a rational, science-based world view. In fact, it often poses as the only really rational, science-based world view, for instance, when it makes an appearance within the rationality community. If you are a rigorous thinker, there seems to be no alternative to its no-nonsense mechanistic tenets. My final task here is to show that this is not at all true. In fact, the metaphysical assumptions that techno-transcendentalism is based on are extremely dubious. We've already encountered this issue above, but to understand it in a bit more depth, we need to look at these metaphysical assumptions more closely.

Metaphysics does not feature heavily in any of the recent discussions about AGI. In general, it is not a topic that a lot of people are familiar with these days. It sounds a little detached and old-fashioned — you know, untethered in the Platonic realm. We imagine ancient Greek philosophers leisurely strolling around cloistered halls. Indeed, the word comes from the fact that Aristotle's "first philosophy" (as he called it) was placed in a book that came right after his "Physics." In this way, it is literally after or beyond ("meta") physics. In recent times, metaphysics has fallen into disrepute as mere speculation.
Something that people with facts don't have any need for. Take the hard-nosed logical positivists of the Vienna Circle in the early 20th century. They defined metaphysics as "everything that cannot be derived through logical reasoning from empirical observation," and declared it utterly meaningless. We still feel the legacy of that sentiment today. Many of my scientist colleagues still think metaphysics does not concern them. Yet, as philosopher Daniel Dennett rightly points out: "there is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination."

And, my oh my, there is a lot of unexamined baggage in techno-transcendentalism. In fact, the sheer number of foundational assumptions that nobody is allowed to openly scrutinize or criticize is ample testament to the deeply cultish nature of the ideology. Here, I'll focus on the most fundamental assumption on which the whole techno-transcendentalist creed rests: every physical process in the universe must be computable. In more precise and technical terms, this means we should be able to exactly reproduce any physical process by simulating it on a universal Turing machine (an abstract model of a digital computer with potentially unlimited memory and processing speed, invented in 1936 by Alan Turing, the man who gave the Turing Award its name). To clarify, the emphasis is on "exactly" here: techno-transcendentalists do not merely believe that we can usefully approximate physical processes by simulating them in a digital computer (which is a perfectly defensible position) but, in a much stronger sense, that the universe and everything in it — from molecules to rocks to bacteria to human brains — literally is one enormous digital computer. This is techno-transcendentalist metaphysics. This universal computationalism includes, but is not restricted to, the simulation hypothesis. Remember: if the whole world is a simulation, then there is a simulator outside it. In contrast, the mere claim that everything is computation does not imply a supernatural simulator.

Turing machines are not the only way to conceptualize computing and simulation. There are other abstract models of computation, such as the lambda calculus or recursive function theory, but they are all equivalent in the sense that they all yield the exact same set of computable functions. What can be computed in one paradigm can be computed in all the others. This fundamental insight is mathematically codified by something called the Church-Turing thesis. (Alonzo Church was the inventor of the lambda calculus and Turing's PhD supervisor.) It unifies the general theory of computation by saying that every effective computation (roughly, anything you can actually compute in practice) can be carried out by an algorithm running on a universal Turing machine. This thesis cannot be proven in a rigorous mathematical sense (basically because we do not have a precise, formal, and general definition of "effective computation"), but it is also not controversial. In practice, the Church-Turing thesis is a very solid foundation for a general theory of computation.

The situation is very different when it comes to applying the theory of computation to physics. Assuming that every physical process in the universe is computable is a much stronger form of the Church-Turing thesis, called the Church-Turing-Deutsch conjecture. It was proposed by physicist David Deutsch in 1985, and later popularized in his book "The Fabric of Reality."
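Before assessing that conjecture, it may help to make the original, purely mathematical thesis a little more concrete. The sketch below (plain Python, my own toy illustration, nothing from Turing, Church, or Deutsch) computes one and the same function in two superficially very different styles: an imperative loop with explicit state, roughly the flavor of a machine working step by step, and a purely recursive definition in the spirit of the lambda calculus. The Church-Turing thesis says that this kind of agreement is not an accident: whichever reasonable formalism you pick, you end up with the same set of computable functions.

```python
def factorial_machine_style(n: int) -> int:
    """Imperative style: explicit state updated step by step,
    loosely analogous to a machine executing instructions."""
    acc = 1
    while n > 1:
        acc, n = acc * n, n - 1
    return acc

def factorial_recursive_style(n: int) -> int:
    """Recursive style, in the spirit of the lambda calculus:
    no mutable state, just function definition and application."""
    return 1 if n <= 1 else n * factorial_recursive_style(n - 1)

# Different formalisms, same computable function.
for n in range(10):
    assert factorial_machine_style(n) == factorial_recursive_style(n)
print("Both styles agree on every tested input.")
```

The open question, and it is an empirical one, is whether physics fits inside that shared set of computable functions. That is what the Church-Turing-Deutsch conjecture asserts, and what the next paragraphs examine.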
It is important to note that this physical version of the Church-Turing thesis does not logically follow from the original. Instead, it is intended to be an empirical hypothesis, testable by scientific experimentation. And here comes the surprising twist: there is no evidence at all that the Church-Turing-Deutsch conjecture applies. Not one jot. It is mere speculation on Deutsch's part, who surmised that the laws of quantum mechanics are indeed computable, and that they describe every physical process in the universe. Both assumptions are highly doubtful. In fact, there are solid arguments that quite convincingly refute them. These arguments indicate that not every physical process is computable and that, indeed, perhaps no physical process can be precisely captured by simulation on a Turing machine. For instance, neither the laws of classical physics nor those of general relativity are entirely computable (since they contain noncomputable real numbers and infinities). Quantum mechanics introduces its own difficulties in the form of the uncertainty principle and its resulting quantum indeterminacy. The theory of measurement imposes its own (very different) limitations.

Beyond these quite general doubts, a concrete counterexample to universal computability is provided by Robert Rosen's conjecture that living systems (and all those systems that contain them, such as ecologies and societies) cannot be captured completely by algorithmic simulation. This theoretical insight, based on the branch of mathematics called category theory, was first formulated in the late 1950s, presented in detail in Rosen's book "Life Itself" (1991), and later derived in a mathematically airtight manner by his student Aloysius Louie in "More Than Life Itself." This work is widely ignored, even though its claims remain firmly standing, despite numerous attempts at refutation. This, arguably, renders Rosen's claims more plausible than those derived from the Church-Turing-Deutsch conjecture.

I could go on. But I guess the main point is clear by now: the kind of radical and universal computationalism that grounds techno-transcendentalism does not stand up to closer philosophical and scientific scrutiny. It is shaky at best, and completely upside-down if you're a skeptic like I am. There is no convincing reason to believe in it. Yet this state of affairs is glibly disregarded, not only by techno-transcendentalists, but also by a large and prominent number of contemporary computer scientists, physicists, and biologists. The computability of everything is an assumption that has become self-evident not because of, but in spite of, the existing evidence.

How could something like this happen? How could this unproven but fundamental assumption have escaped the scrutiny of the organized skepticism so much revered and allegedly practiced by techno-transcendentalists and other scientist believers in computationalism? Personally, I think that the uncritical acceptance of this dogma comes from the mistaken idea that science has to be mechanistic and reductionist to be rigorous. The world is simply presupposed to be a machine. Algorithms are the most versatile mechanisms humanity has ever invented. Because of this, it is easy to fall into the mistaken assumption that everything in the world works like our latest and fanciest technology. But that's a vast and complicated topic, which I will reserve for another blog post.
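Before moving on, one small, concrete way to get a feel for the gap between "usefully approximating" a process and "exactly reproducing" it, which is where the whole dispute lies. The sketch below is my own toy illustration, not an argument from Deutsch or Rosen: it runs the same simple nonlinear recurrence twice, once at full floating-point precision and once rounded to six decimal places at every step. The two runs soon disagree completely. This proves nothing about noncomputable physics, of course; it only illustrates that any digital simulation commits to a finite representation, and that this commitment is not innocent.

```python
# Logistic map x -> r*x*(1-x), a textbook chaotic recurrence.
r = 3.9
x_full = x_rounded = 0.2

for step in range(1, 61):
    x_full = r * x_full * (1.0 - x_full)
    x_rounded = round(r * x_rounded * (1.0 - x_rounded), 6)  # same rule, truncated precision
    if step % 15 == 0:
        print(f"step {step:2d}:  rounded={x_rounded:.6f}  full={x_full:.6f}")

# After a few dozen steps the two finite-precision runs of the very same rule
# have drifted apart entirely; neither of them is the exact real-valued orbit.
```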
With the assumption that everything is computation falls the assumption that algorithmic simulation corresponds to real cognition in living beings in any meaningful way. It is not at all evident that machines can "think" the way humans do. Why should thinking and computing be equivalent? Cognition is not just a matter of speedy optimization or calculation, as Yudkowsky asserts. There are fundamental differences in how machines and living beings are organized. There is absolutely no reason to believe that machines will outpace human cognitive skills any time soon. Granted, they may do better at specific tasks that involve the detection of high-dimensional correlations, and also those that require memorizing many data points (humans can only hold about seven objects in short-term memory at any given time). Those tasks, and pen-and-paper calculations in particular, constitute the tiny subset of human cognitive skills that served as the template for the modern concept of "computation" in the first place. But brains can do many more things, and they certainly have not evolved to be computers. Not at all. Instead, they are organs adapted to help animals better solve the problem of relevance in their complex and inscrutable environment (something algorithms famously cannot do, and probably never will). More on that in a later blog post. I'm currently also writing a scientific paper on the topic.

But that is not the main point here. The main point is: the metaphysics of techno-transcendentalism — its radical and universal computationalism as well as the belief in the inevitable supremacy of machines — is based on a simple mistake, a mistake which is called the fallacy of misplaced concreteness (or fallacy of reification). Computation is an abstracted way to represent reality, not reality itself. Techno-transcendentalists (and all other adherents of strong forms of computationalism) simply mistake the map for the territory. The world is not a machine and, in particular, living beings are not machines. Neither of them constitutes some kind of digital computation. Conversely, computers cannot think like living beings can. In this sense, they are not intelligent at all, no matter how sophisticated they seem to us. Even a bacterium can solve the problem of relevance, but the "smartest" contemporary algorithm cannot. Philosophers call what is happening here a fundamental category error.

This brings us back to Alison Gopnik: even though AI researchers like LeCun chide everyone for being uneducated about their work, they themselves are completely clueless when it comes to concepts such as "thinking," "agency," "cognition," "consciousness," and indeed "intelligence." These concepts represent abilities that living beings possess but algorithms lack. Not just techno-transcendentalists but, sadly, also most biologists today are deeply ignorant of this simple distinction. As long as this is the case, our discussion about AI, and AGI in particular, will remain deeply misinformed and confused. What emerges at the origin of life, the capability for autonomous agency, springs from a completely new organization of matter. What emerges in a contemporary AI system, in contrast, is nothing but high-dimensional correlations that seem mysterious to us limited human beings because we are very bad at dealing with processes that involve many variables at the same time. The two kinds of emergence are fundamentally and qualitatively different. No conscious AI, or AI with agency, will emerge any time soon.
In fact, no AGI will ever be possible in an algorithmic framework. The end of the world is not nearly as nigh as Yudkowsky wants to make us believe. Does that mean that the current developments surrounding AI are harmless? Not at all! I have argued that techno-transcendentalist ideology is not just a modern mythological narrative, but also a useful tool serving the purpose of bringing about libertarian neofeudalism. Not quite the end of the world, but a terrible enough prospect, if you ask me. The technological singularity is not coming. Virtual heaven is not going to open its gates to us any time soon. Instead, the neo-religious template of techno-transcendentalism is a tried and true method from premodern times to keep the serfs in line with threats of the apocalypse and promises of eternal bliss. Stick and carrot. Unlike AI research itself, this is not exactly rocket science.

But, you may think, is this argument not overblown itself? Am I paranoid? Am I implying malicious intent where there is none? That is a good question. I think there are two types of protagonists in this story of techno-transcendentalism: the believers and the cynics. Both, in their own ways, think they are doing what is best for humanity. They are not true villains. Yet both are affected by delusions that will critically undermine their project, with potentially catastrophic effects. With their ideological blinkers on, they cannot see these dangers. They may not be villains, but they are certainly boneheaded enough, foolish in the sense of lacking wisdom, that we do not want them as our leaders.

The central delusion they all share is the following: both believers and cynics think that the world is a machine. Worse, it is their plaything — controllable, predictable, programmable. And they all want to be in charge of play, they want to steer the machine, they want to be the programmer, without too much outside interference. A bunch of 14-year-old boys fighting over who gets to play the next round of Mario Kart. Something like that. Hence neofeudalism, and more or less overt anti-democratic activism. The oncoming social disruption is part of the program. This much, at least, is done with intent. There can be no excuses afterwards. We know who is responsible.

However, there are also fundamental differences between the two camps. In particular, the believers obviously see techno-transcendentalism as a mythological narrative for our age, a true utopian vision, while the cynics see it only as a tool that serves their ulterior motives. The two are extremes along a spectrum. Take Eliezer Yudkowsky, for example. He is at the extreme "believer" end of the scale. Joscha Bach is a believer too, but much more optimistic and moderate. They have both wholeheartedly bought into the story of the inevitable singularity — faith, hope, and love — and they both truly believe they're among the chosen ones in this story of salvation, albeit in very different ways: Bach as the leader of the faithful, Yudkowsky as the prophet of the apocalypse. Elon Musk and Yann LeCun are at the other end of the spectrum, only to be outdone by Peter Thiel (another infamous silicon-valley tycoon) in terms of cynicism. What counts in the cynic's corner are only two things: unfettered wealth and power. Not just political power, but power to change the world in their own image. They see themselves as engineers of reality. No mythos required.
These actors do not buy into the techno-transcendentalist cult, but its adherents serve a useful purpose as the foot soldiers (often cannon fodder) of the coming revolution. All this is wrapped up in longtermist philosophy: it's ok if you suffer and die, if we all go extinct even, as long as the far-future dream of galactic conquest and eternal bliss in simulation is on course, or at least intact. That is humanity's long-term destiny. It is an aim that is shared among believers and cynics. Their differing attitudes only concern the more or less pragmatic way to get there by overcoming our temporary predicaments with the help of various technological fixes.

This is the true danger of our current moment in human history. I have previously set the risk of AGI apocalypse to basically zero. But don't get me wrong. There is a clear and present danger. The probability of squandering humanity's future potential with AI is much, much higher than zero. (Don't ask me to put a number on it. I'm not a longtermist in the business of calculating existential risk.) Here, we have a technology, massively wasteful in terms of energy and resources, that is being developed at scale and at breakneck speed by people with the wrong kind of ethical commitments and a maximally deluded view of themselves and their place in the universe. We have no idea where this will lead. But we know change will be fast, global, and hard to control. What can possibly go wrong? Another thing is quite predictable: there will be severe unintended consequences, most of them probably not good. For the longtermists, such short-term consequences do not even matter, as long as the risk associated with them is not deemed existential (by themselves, of course). Even human extinction could just be a temporary inconvenience as long as the transcendence, the singularity, the transition to "substrate-agnostic" intelligence is on the way.

This is why we need to stop these people. They are dangerous and deluded, yet full of self-confidence — self-righteous and convinced that they know the way. Their enormous yet brittle egos tend to be easily bruised by criticism. In their boundless hubris, they massively overestimate their own capacities. In particular, they massively overestimate their capacity to control and predict the consequences of what they are doing. They are foolish, misled by a world view and mythos that are fundamentally mistaken. What they hate most (even more than criticism) is being regulated, held back by the ignorant masses who do not share their vision. They know what's best for us. But they are wrong.

We need to slow them down, as much as possible and as soon as possible. This is not a technological problem, and not a scientific one. Instead, it is political. We do not need to stop AI research. That would be pretty pointless, especially if it is only for a few months. Instead, we need to stop the uncontrolled deployment of this technology until we have a better idea of its (unintended) consequences, and know what regulations to put in place. This essay is not about such regulations, not about policy, but a few measures immediately suggest themselves. By internalizing the external costs of AI research, for example, we could effectively slow its rate of progress and interfere with the insane business model of the tech giants behind it. Next, we need to put laws in place. We need our own Butlerian jihad (if you're a Dune fan like me): "thou shalt not make a machine in the likeness of a human mind."
Or, as Daniel Dennett puts it: "Counterfeit money has been seen as vandalism against society ever since money has existed. Punishments included the death penalty and being drawn and quartered. Counterfeit people is at least as serious." I agree. We cannot have fake people, and building algorithmic mimicry that impersonates existing or non-existing persons needs to be made illegal, as soon as we can manage it. Last but not least, we need to educate people about what it means to have agency, intelligence, and consciousness, how to talk about these topics, and how seemingly "intelligent" machines do not have even the slightest spark of any of that. This time, the truth is not somewhere in the middle. AI is stochastic parrots all the way down.

We need a new vocabulary to talk about such algorithms. Algorithmic mimicry is a tool. We should treat and use it as such. We should not interact with algorithms as if they were sentient persons. At the same time, we must not treat people like machines. We have to stop optimizing ourselves, tuning our performance in a game nobody wants to play. You do not strive for alignment with your screwdriver. Neither should you align with an algorithm or the world it creates for you. Always remember: you can switch virtuality off if it confuses you too much. Of course, this is no longer possible once we freely and willingly give away our agency to algorithms that have none. Then we can no longer make sense of a world that is flooded with misinformation.

Note that the choice is entirely up to us. It is within our own hands. The alignment problem is exactly upside-down: the future supremacy of machines is never going to happen if we don't let it happen. It is the techno-transcendentalists who want to align you to their purpose. Don't be their fool. Refuse to play along. Don't be a serf. This would be the AI revolution worth having. Are you with me?

Images were generated by the author using DALL-E 2 with the prompt "the neo-theistic cult of silicon intelligence."

Here is an excellent talk by Tristan Harris and Aza Raskin of the Center for Humane Technology, warning us about the dire consequences of algorithmic mimicry and its current business model: https://vimeo.com/809258916/92b420d98a. Ironically, even these truly smart skeptics fall into the habit of talking about algorithms as if they "think" or "learn" (chemistry, for example), highlighting just how careful we need to be not to attribute any "human spark" to what is basically a massive statistical inference machine.