Drawings by Marcus Neustetter.

A graveyard of zombie concepts.

I've done it. I've read the entire 43 pages of Mike Levin's "Technological Approach to Mind Everywhere" (TAME) paper. Carefully. Yes, you may pity me. Indeed, I like to suffer. But I also like my suffering to be productive. So I've decided to write up a philosophical take-down of the massive theoretical thingamajig that is TAME. It's the ultimate conceptual chimera, packed with plenty of intriguing ideas that are cobbled together in the cubist manner of Levin's own Picasso creatures. Unfortunately, all of it is built on a metaphysical foundation that amounts to nothing but hot air. A big philosophical smokescreen. I've written about it before. Twice, in fact. But never systematically and in depth, like I intend to do here.

Don't worry: this is going to be a philosophical argument, not some kind of personal vendetta. Yet, to understand the nature of Levin's approach, you need to know two things about the man and the behavioral patterns he exhibits.

First, he is a prototypical product of our current society and research system, a high-stakes gambler for social capital and reputation. To understand the structure of his thinking, you need to understand the main motivation behind his staggeringly prodigious output: it is not primarily the search for truth, but the maximization of impact that drives him. He is a man on a mission. Levin's main guiding principle is to be the proponent of ideas that are not only workable and world-changing, but also popular among the right kind of target audience. The two go together, hand in hand, as you will see.

Second, he is beloved by the tech-affine. Levin's primary target audience are those who crave to believe in our upcoming techno-utopian salvation. In a recent article for Noema Magazine, he has explicitly come out as a proponent of transhumanism, stating that the best possible long-term outcome for humanity (our kids!) would be to supplant ourselves with "creative agents with compassion and meaningful lives that transcend [our] limitations in every way." Not subtle. And more than just a little bit eugenicist. I don't know about you, but I find this kind of ideology creepy. And delusional as well.

Given this context, let's dive right into the philosophical gist of the argument. What is TAME? Well. TAME is many things. Whatever you would like it to be, really. The chimera is also a chameleon.

THE "I AIN'T GOT NO PHILOSOPHY" PHILOSOPHY

First and foremost, TAME proclaims itself to be a radical form of empiricism. It eschews unnecessary philosophical speculation. It presents itself as hard-nosed, rigorously scientific — putting forward lots of experimentally testable predictions. And most important of all: it purports to focus strictly on "third-person observable" properties. Funnily enough, every single one of these fundamental observations is loaded with metaphysical assumptions. So let's take a little tour.

TAME's first underlying observation goes like this: there is no clear distinction between entities in the world that have mind, and those that do not. No "bright line" to be drawn between it knows and it "knows," as Levin puts it. We can't tell the difference. He calls this gradualism, like the evolutionary gradualism of Darwin. It doesn't take much to notice that the man has a way with words. And a knack for grandiose associations. Levin's concept of a mind is intimately tied to his notion of a self. Such a self is defined by agency, which is the ability to pursue goals.
Selves must also have memories (to remember who they are, presumably, and to allow for learning). And the self is the locus for credit assignment. The self is what is responsible for its actions, an autonomous source of causal influence. If all this seems a little bit esoteric to you, don't fret. Levin assures us it's all pretty down to earth, as long as you consider that "nothing in biology makes sense, except in the light of teleonomy."

Teleonomy is what we call the apparent goal-directedness (or teleology) you can observe in the behavior of living systems. Below the surface level of appearances, or so the doctrine goes, it is based on a kind of automated genetic or developmental program which determines the growth and behavior of a self. It is this program that gets shaped and adapted by natural selection during evolution. The self's goals are defined in terms of feedback regulation, rooted in the concept of homeostasis, and the science of cybernetics. Echoing philosopher Dan McShea, Levin likens a self to some kind of thermostat. Goal-directed behavior is nothing but "upper-directedness:" living systems learn how to optimize their path toward some target state (set, somewhat mysteriously, from a higher level of organization) by minimizing the energy they have to expend to get there. It's all perfectly mechanistic and scientific, you see.

Well. I'm not so sure about that. For one, it is not quite clear how the self chooses a target to pursue in the first place. But let's put this issue aside for the moment. What's important here is that "goal-directed" "agency" can occur in any kind of system, living or nonliving. In fact, Levin claims that it pervades all of physics, as the principle of least action in classical physics and relativity, for example, or the principle of free-energy minimization in living and other far-from-equilibrium systems.

Some people claim that Levin is the savior who will deliver us from reductionism in biology. But it is evident from what I just said that he does his very best to ground his framework in staunchly mechanistic and reductionist conceptions of words like "self," "agent," and "goal." They describe teleonomic appearances that are ultimately governed by evolved deterministic programs. This is a version of what philosopher Dan Dennett called the intentional stance: we talk about selves as if they were pursuing their own goals, as if they had agency, but it's all just simple mechanics underneath.

In sum: Levin's terms suggest something very different than what they actually represent. You will see that this is a pervasive feature of everything he does or says. Bait-and-switch is what it's called.

The axis of persuadability.

WHERE IS MY MIND?

TAME's second basic observation is that selves have no privileged material substrate. Levin calls this "mind-as-can-be." And also: what defines something as a self does not depend on that self's evolutionary origin or history. (Wait, what!?) Yes: anything can be a self! TAME, as a philosophy, is a form of panpsychism. Levin explicitly states this, here and also elsewhere. He talks about living and nonliving "intelligences" that can manifest in societies, swarms, colonies, organisms (from humans to bacteria), but also in weather patterns, or rocks. Yes, rocks. Levin believes rocks are in some minimal way intelligent. And also: fundamental particles. You find this hard to believe? Bear with us. Levin can explain everything. He's good at that, actually. The presentation is always crystal clear and easily accessible.
Not just his writing style, but also the elaborate design of the figures stands out. Kudos for that. I mean it. Shame it's not put to better use…

Here's the basic idea: every particle in the universe has some kind of proto-intelligence (and no less so than a rock). What Levin means by this is that such particles follow "teleological" principles (e.g., the principle of least action, as mentioned above). And also: they exhibit what he calls "persuadability." Persuadability is probably his most unfamiliar and counterintuitive concept, so it's worth examining in a bit more detail: it represents the degree to which one can come up with tools to "rationally modify" an entity's behavior. This is tightly connected to Levin's simplistic concept of "intelligence." Both are grounded in a philosophy called pancomputationalism. I criticize this worldview at length in my book. Be that as it may, Levin is a dyed-in-the-wool panpsychist pancomputationalist. I know, it does not exactly flow off the tongue. And panpsychism and pancomputationalism are often seen as diametrically opposed. Yet, the extremes of this spectrum do touch, bending the axis around to form a circle which closes at the point where Levin's approach is located. You can have your cake and eat it!

He, like many fellow (pan)computationalists, simply equates "intelligence" with problem-solving capacity — nothing more, nothing less. Intelligence is the ability to explore a well-defined search space. And that's that. The more persuadable a self is (the more it can be coaxed to exhibit different behaviors), the larger the search space it can explore and the more flexible its ways of moving around within this space of possibilities. This is why Levin thinks the weather is intelligent to some degree: we can make it behave in many different ways. In principle, at least. We're certainly not very good at it yet in practice.

So, there is an "axis of persuadability," according to Levin. On one end, particles and rocks are not very persuadable. On the other end, people are very much so! That's why we are more intelligent than a rock. Now that, at least, is good news! Levin has a clever way to show just how much more intelligent we are: he uses what he calls cognitive cones to classify different intelligences according to their sophistication. These cones, inspired by relativity diagrams in physics, show how far the concern of a self extends in space and time, both into the future and into the past. A tall cone means you're in it for the long run. A wide cone means you're concerned about many things that are happening around you in the present.

What remains to be explained, however, is how we got to be so much smarter than rocks. Levin's surprisingly simple answer to this is collectivity: every higher self is a collective intelligence and, therefore, also a collective self. We are literally legions. Higher-level agents are made of lower-level ones and so on. And just like a parallel computer can solve problems more efficiently, you become more intelligent by scaling up and binding together several individual intelligences. This is the basis of Levin's gradualism: the more intelligences in a collective, the more intelligent the resulting higher-level self. Simple.

What results from all this is a plethora of diverse intelligences: selves that exist at multiple scales, are made of various material substrates, take various forms, and manifest in various behaviors, with the one common denominator that they all solve problems.
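To make concrete just how thin this definition is, here is a minimal sketch of what "intelligence as exploring a well-defined search space" amounts to (my own toy illustration, not code from the TAME paper): the goal, the space of candidate solutions, and the measure of success are all handed to the "agent" from the outside, and the "intelligent" part consists of nothing more than wandering around until the score stops improving.

```python
# A deliberately bare-bones caricature of "intelligence" as nothing more than
# exploring a well-defined search space (my own toy sketch, not code from the
# TAME paper). The goal, the candidates, and the measure of success are all
# supplied from the outside; the "intelligent" part is just greedy wandering.

import random

def solve_by_search(score, neighbors, start, steps=1000, rng=None):
    """Greedy local search: move to a randomly chosen neighbor whenever it scores higher."""
    rng = rng or random.Random(0)
    current = start
    for _ in range(steps):
        candidate = rng.choice(neighbors(current))
        if score(candidate) > score(current):
            current = candidate
    return current

# Example "well-defined search space": bit strings of length 10,
# scored by how many 1s they contain.
score = lambda bits: sum(bits)
neighbors = lambda bits: [bits[:i] + (1 - bits[i],) + bits[i + 1:] for i in range(len(bits))]

print(solve_by_search(score, neighbors, start=(0,) * 10))
# Typically prints (1, 1, ..., 1): the "goal" was baked in by whoever wrote score().
```

Everything that makes this look purposeful (the score function, the neighborhood structure, the number of steps) is specified by whoever set the problem up. Keep that in mind for what follows.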
And because there is intelligence everywhere, it is okay to say that there is cognition, and even some form of consciousness, in every self that is persuadable. What a powerful vision! A mindful panpsychist world, alive and filled with conscious experience everywhere. Or so it seems. In reality, it's all based on universal computation underneath. And in truth, it is nothing but a shiny package for a rather sinister ideology. We'll get to that in due time.

Bioelectricity is everything!

THE BODY ELECTRIC

There's a lot more in this monumental paper. Go check it out for yourself! It's a true treasure trove. As I said, the whole framework is packed with ideas, and many of them are not uninteresting, I admit. But instead of going into more detail, I'd like to illustrate the core concepts introduced above with some concrete examples. Levin himself dedicates a good part of the paper to a particular case study: TAME applied to morphogenesis in organisms — the kind of growth processes that shape the organism's form. Or "somatic cognition," as he likes to call it.

This case study reveals just how weird Levin's view really is. He sees organismic development (or ontogenesis, as I'd call it) as a fundamentally teleological process, oriented towards the final goal of attaining the organism's adult form. As evidence for this controversial view, he uses regenerative processes in flatworms and frogs, which can regrow amputated heads and legs, respectively. The pathways by which such regeneration is achieved are very different from those of normal ontogenesis. And many pathways mapping onto one single outcome implies teleology, to Levin at least. (To me, it implies the presence of an attractor, nothing more… but never mind: Levin would also see that as some form of teleology.)

If your conceptual feathers are ruffled already, just wait for what's coming next: Levin claims the goal-orientedness of morphogenesis means it must be cognitive in nature. His reasoning goes as follows, as already mentioned above: cognition is what underlies intelligence which, in turn, is what allows you to optimize the path towards your goal. Thus, ontogenesis is cognitive and intelligent, because it optimizes the organism's path towards its goal, that is, its adult form. Are you still with me? I'm not sure I am.

What do we gain, you may rightfully ask, from treating morphogenesis as the expression of an intelligent goal-oriented self? It sounds a little crazy. But some of the implications are actually quite down to earth: one conclusion, for instance, is that a higher-level "intelligence" like morphogenesis in multicellular organisms must be a collective phenomenon. It is happening across many levels of organization. Fair enough, and very likely true! This, by the way, is why Levin is widely seen as an anti-reductionist messiah of some kind: he correctly and prominently makes the point that ontogenesis cannot simply be reduced to the genetic level. Instead, he proposes, the intelligence of morphogenesis is coordinated through bioelectric fields. This may or may not have something to do with the fact that he has established his career as an experimental researcher working on such fields. When you have a hammer, everything looks like a nail. And Levin is the master hammerer.
Of course, he has evidence that bioelectricity is of fundamental importance: it occurs not only in our nervous system, in the form of signals transmitted between neurons; the same action potentials that are found in nerve cells can also be detected in other somatic tissues, and even in plants, amoebae, and bacteria. Voilà! Here we have our basal cognition: bioelectric fields established in non-neural tissues. Admittedly, these fields are much simpler and operate at much slower rates than the electric signaling networks in our brains. Less persuadability, perhaps, but not unintelligent either! They achieve what Levin calls morphological coordination. In particular, Levin is fascinated with cells that undergo "mind meld." Yes, he uses this as a technical term. Cells in many tissues interconnect their cytoplasm directly via channel structures that are called gap junctions. And gap junctions are everywhere, once you start looking.

Now, here's the thing: Levin's experimental work on bioelectric fields is actually quite interesting, and I have no immediate reason to doubt that it is technically and methodologically sound. It is certainly original. It would really stand on its own merit, you'd think. But, apparently, this is not enough for Levin. He needs a fancy wrapper to boost his megalomaniac message. Hence the talk about "intelligent development" and Levin's rather outsized claims, often promulgated through press releases, podcasts, and online videos, rather than his numerous peer-reviewed publications: bioelectricity is elevated from an interesting mechanism among many, to the substrate of all ontogenetic selves. Nay, a unifying principle for all of biology! Genes go home! Also: forget about tissue biophysics. Bioelectricity cures cancer, it means cells can think, and motile cell cultures become "biobots" built from "agential materials," a new kind of "synthetic machine." The hyperbole knows no limits: we now can engineer life, the weather, and, ultimately, the fundamental particles of the universe too! A brave new world awaits, with us as masters of our own destiny.

Who would not want to buy into a narrative like this? Well. I don't want to. I'll tell you why in a minute. But before I get to that, let me dismantle the whole cobbled-together intellectual contraption that Levin has constructed. This is one of the six impossible things I can accomplish before breakfast. Let's go!

THE EMPEROR HAS NO CLOTHES

So what is wrong with TAME? At first sight, it seems to hold together pretty well, don't you think? Maybe in a slightly eccentric way, with all its talk about intelligences and engineering. But eccentricity — that of the reclusive genius — is a central (and highly cultivated) part of Levin's shtick. He really wants you to appreciate that he thinks differently — his own version of a diverse mind. Zarathustra coming down his mountain. Atlas shrugging. I fought the law and I won. You get the point. It's very romantic. I have nothing against eccentricity, or mavericks, or romanticism. Quite the contrary, I'm into all three of these, big time! TAME's revolutionary spirit in itself is not the problem. It is a novel perspective, no doubt. And it does manage to attract quite some interest from inside and outside science. Instead, the problem is that the revolutionary is not really with the rebellion. He is no underdog going against an evil empire. Instead, he is backed by powerful forces and shitloads of funding. He is the emperor! But an emperor that is wearing no clothes.
There is no philosophical substance to TAME. None, whatsoever. There, I said it: TAME is an exemplar of whateverism. It means whatever you want it to mean. And that's a feature of the whole thing, not a bug. I have principles, but if you don't like them, I have others. Levin openly admits this himself: after presenting his fanciful panpsychist musings, he claims that you don't actually have to buy into any of it to follow through on TAME's empirical promise. The whole philosophy, apparently, is just decoration. Why present it at such length, you may wonder? I surely do. The reason, of course, is simple. The philosophy is there to feign sophistication, to look smart, and to attract other people who think of themselves as genius mavericks. It's a public-relations gimmick.

Levin's preferred tactic is the motte-and-bailey: make some outrageous claims to get everybody's attention and then, once you get a little pushback, immediately retreat to a more defensible position. Unfortunately, this kind of approach really isn't very rigorous. Or serious, for that matter. And so the motte-and-bailey goes: "there are minds everywhere, from rock to biosphere." However, you don't really have to buy this. And also: "mind" doesn't really mean what you think it does.

The first of these retreats simply isn't true: you cannot buy into the research program envisioned by TAME without buying into its metaphysical baggage. TAME is not what it pretends to be: a down-to-earth, no-speculation, theory-light, empiricist approach, treating concepts such as "agency," "intelligence," and "mind" as indicative of uncontroversial and empirically observable phenomena. In reality, and I've said this already, TAME is loaded with metaphysical baggage that you simply cannot ignore. This is cleverly hidden by its intentional stance, which is a funny, almost ironic, kind of philosophy. Its purpose is to squeeze complex-sounding notions such as "selves," "agency," and "intelligence" into a simplistic Procrustean bed of mechanistic thinking. It aims to show that such concepts are not problematic, as long as we only use them as if they were real. A convenient shortcut to talk about teleonomy — apparent goal-directedness molded through evolution by natural selection. This allows Levin to talk about complex phenomena as if they were actually simple, easy to grasp, straightforward to engineer and control.

In this sense, TAME is a framework explicitly designed for motte-and-bailey. It attracts an audience keen to move beyond reductionism in science, but sells them an empty package. There is no content, no substance, inside the shiny wrapper. Hence all the malleable and broad definitions. Take, for example, the term "intelligence." What it boils down to is mere problem-solving, which is seen as a catch-all for "intelligent" goal-oriented behavior. An intelligent system is a system able to attain its goals in an efficient manner. That's it. But where do these goals come from? And how are the problems to be solved identified and defined in any precise manner in the first place? TAME cannot answer these questions other than saying "teleonomy!" It must've evolved somehow. Pure hand-waving. And, of course, this is an extremely impoverished view of "intelligence." An engineer's view, not surprisingly. Upon closer scrutiny, you recognize pretty soon that it leads to an infinite regress: the definition of goals and problems is itself an optimization process which, in turn, needs to be optimized, and so on and so forth. It's problems all the way down.
Literally, in the case of TAME. True "intelligence," as we use the term for ourselves and the behavior of other living creatures, includes things like being able to choose the right action in a given situation based on incomplete, ambiguous, and often misleading information. It means having common sense. It relies on the ability to be creative, to frame and reframe problems. It requires true agency: the ability to actually choose your own goals. The bottom line is: "agency," "cognition," and "intelligence," as used by Levin, have nothing to do with agency, cognition, or intelligence as we would colloquially use these terms. They are lifeless computational caricatures. Zombie concepts. Devoid of any deeper meaning. And deceptive. Because the associations that naturally come with the everyday use of these terms are heavily used by Levin to push his agenda. He recently claimed that sorting algorithms can think. What this really means is the trivial statement that "sorting algorithms can solve problems." Well, yeah. That's what they have been designed to do. But it has nothing to do with human thinking or intelligence. Nothing at all.

It's cones all the way down!

A PROFOUND LACK OF ORGANIZATION

Beneath all this superficiality lurks a deeper problem. Levin consistently ignores the one concept he'd actually need to build a reasonable philosophy of diverse minds. And this concept is the organization of living systems. In fact, TAME obliges him to ignore it, because of its dogma that you cannot draw any principled distinctions between living and nonliving systems. That'd be "Cartesian dualism," as Levin can't stop pointing out. It just goes to show that he hasn't really read his Descartes properly. Nor does he seem to understand the problem of life.

Living systems behave in a qualitatively different manner compared to nonliving ones. Now that is an observable empirical fact. How else would it be so easy for us to distinguish a living organism from dead matter in our everyday lives? Life is what kicks back when you kick it. Even though a more precise definition of life is notoriously difficult to come by, we reliably manage to recognize life when we see it. Acknowledging this is not the same as embracing any kind of dualism. Quite the contrary: a true empiricist, you'd think, would want to come up with an explanation for this observed difference. In contrast, simply declaring that there is no difference goes very much counter to our own experience. And it leads to all sorts of really counterintuitive claims about nonliving things having "agency," "minds," "intelligence," and even "consciousness." It doesn't really mean anything.

This, by the way, is my main issue with the ideology of panpsychism in general, not just TAME. It explains the origin and nature of agency and consciousness away, simply declaring them to be non-problems. If everything is conscious, what's the big deal? But in this way, we'll never learn anything about what these concepts actually mean, and how the phenomena they refer to came to exist in the world. This is where organization comes in: it gives us ways to productively think about these questions. But without it, this is hardly possible.

Wait a minute, you may say: now you're simply positing the opposite of TAME, that there is a fundamental difference between living and nonliving systems. But you don't have any evidence for this either! Plus: organization adds additional conceptual ballast to your point of view. Is this really necessary?
Gratuitously making up stuff is against empiricism! Whatever happened to Occam’s razor? The thing is, empiricism and theory are never far from each other: what you observe, and how you classify those observations, crucially depends on the kinds of questions you are asking. Those questions, in turn, depend on the concepts that you rely on to ask them. We have a chicken-and-egg situation here: we really cannot claim which came first — discerning observation or the theory that underpins it. In the best case, of course, the two co-evolve in a tightly coordinated manner. Therefore, it is completely legitimate to ask: what is it that makes living systems special? After all, we are able to robustly discern them from dead matter. And since life and non-life are made out of the same chemical ingredients, the answer must lie in how those ingredients interact with each other in living systems. The number one feat organisms achieve is that they manufacture themselves. And despite what Levin claims he can do with his “biobots,” or engineered hybrid or “autonomous” systems — no matter how much he attempts to mold the definition of a “machine” to his purposes — no machine humanity has ever constructed out of well-defined parts can manufacture itself. What’s worse: he mistakes self-manufacture for feedback-driven homeostasis. But the two are not the same thing. Feedback is a circular regulatory flow within some dynamic process, while self-manufacture describes the interaction between processes across scales that collectively co-construct each other. Even if we could build a self-manufacturing automaton (and, mind you, there is nothing that says we can’t), it would no longer be an automaton in the familiar sense of the term: a programmable mechanism with entirely deterministic behavior. Living systems are open-ended, constantly adapting to their surroundings in surprising ways, because they are self-manufacturing. Their behavior cannot be captured entirely by any formalized model. Their behavior is not completely predictable, and their evolution is beyond prestated law. Levin never ever touches on any of this. Why not? It’s weird. First of all, I’m sure he is aware of all the literature on biological organization that is out there (although he meticulously avoids engaging it in any serious manner). And, second, it is not normally like him to ignore any fancy idea that may appeal to his audience. So, what is going on here? The snag is exactly what I’ve just said, and it bears repeating: if you understand biological organization properly, you understand that it cannot be completely formalized. Living systems are truly and fundamentally unpredictable because of the peculiar way in which they are wired together. In this sense, they are very different from algorithmic processes, or any other rote problem-solving procedure. All of this goes fundamentally against Levin’s dogmatic (not empirical!) computationalism. Taking biological organization on board in any serious manner completely invalidates his whole approach. Poof! And it’s gone. The simple truth of the matter is: we cannot (and should not) think we can perfectly control and predict living organisms, or the ecological and social systems they are the components of. But this is Levin’s central aim, his dream, his claim to fame. He cannot let that go. TAME stands for “technology approach.” It’s about the domination of nature through engineering — not a deeper understanding, or respectful participation. 
Moving fast and breaking things is what Levin is all about. Just look at the number of podcasts he appears on, the number of papers he publishes, the number of grants he obtains. It also explains the hype. Along his frenzied sprint into a techno-utopian future, he cannot possibly admit that nature is fundamentally not controllable, that there are limits to what we can and should do. That TAME is pure and utter hubris. So you see: it's techno-utopian politics, not the search for truth and understanding, that drives this whole enterprise. TAME is not empiricism, but a cultish ideology with an agenda: to engineer everything, including the weather and humanity's future evolution. But before we go there and have a closer look at that, let me briefly reexamine the claim that TAME will deliver biology from reductionism. That's why his fans love Levin. But again, you will see that the good looks deceive: there is nothing to be found behind that pretty facade.

ANTI-ANTIREDUCTIONISM

So here's the million-dollar question: is TAME really antireductionist? Well, I'd say it depends on what you mean by "reductionism." I've already mentioned that Levin speaks out loudly and often against gene-centric approaches — the kind that only accept molecular genetic mechanisms as proper explanations in biology. Luckily, such thinking is connected to a breed of biologists who are slowly but surely dying out. Yet, there's still more than enough of these fossils around, so I will say this out loud: I support Levin fully when it comes to this part of his campaign!

But then, to Levin, everything is bioelectricity. It is the general principle for all of biology, he claims. He talks about the "unifying rationality" of bioelectric fields, contrasting it with the "irrationality" of individual "cellular agents." Smart tissues from dumb cells. Biobots, not moving blobs of tissue culture. And he not only claims to have evidence that bioelectric fields are in charge, that they allow you to "program the organism," but also that they are conveniently modular, with distinct field states serving as "master inducers" of "self-limiting organogenesis." Here we are: no more "one gene, one enzyme" — but "one field, one organ."

I've heard this kind of thing before. A long, long time ago in a galaxy far, far away. I did my undergraduate diploma work in the lab of Drosophila geneticist and ultra-reductionist Walter Gehring at a time when dinosaurs still roamed the earth. And people in the lab back then were constantly talking about "master control genes" as "selectors" of "cell fate" and "master switches" in evolution. How naive of me to think this cartoonish view of development and evolution had died out! Because here it is: in Levin's treatise on TAME, in the year 2022 CE. This time applied to fields, granted, not individual genes, but the principles and habits of thinking are the same: we are still looking for some localized central controller in biological systems in the hope we can find that knob to tweak.

This is the direct opposite of antireductionism — anti-antireductionism. Or just plain reductionism, really. Yes, those fields are a property of a tissue, not a single gene or even an individual cell. But the approach is still reductive: it cuts all the complexity of biological systems down to a single explanation. Some kind of messiah y'all have chosen here: all of Levin's talk about "agency," "intelligence," and "mind everywhere" is nothing but a smokescreen for just another kind of reductionism!
What he presents is a mechanicist's dream of predictability and control. Linear thinking at its most linear. Or, shall we say: a mechanicist's illusion.

The sorcerer's apprentice got lost in the woods ...

THE SORCERER'S APPRENTICE
In the end, this has always been the point: if you don't reduce nature's complexity to simple principles, then you cannot dominate her. True complexity implies limits to control. But the techno-utopian cannot admit that. For we must take our destiny in our own hands. That's the dogma. And to do this, we must fool ourselves into thinking we are the true masters of that destiny.

At the same time, overly simplistic grand narratives of neverending progress no longer work these days. We are transitioning rapidly from our modernist dream into a fractured postmodern post-fact nightmare. The zombie is the most accurate myth of our time. What does that tell us? And how many zombie movies do you know with a happy ending? The thing to do in such a world — if you are desperate to have an impact, to change the world in a way that really matters — is to sell yourself as some kind of metamodern messiah. Metamodernism is the new narrative. The next big thing to come. That which rebuilds after postmodernism's deconstruction. And so you disguise your good old modernist tale in a fancy metamodern dress. The aim remains the same: to be fruitful and multiply, to subdue the earth. Engineer everything, yes! But don't be too upfront about it. Instead, package your story in layers of glimmering obfuscation that cater specifically to metamodern hackers, hipsters, and hippies. TAME bristles with futurist engineering metaphors fertilized by the Burning Man spirituality of "diverse intelligences and minds everywhere." Levin sells himself as the metamodern visionary who can see further than the rest of us. He publishes papers about ethics, and produces AI-generated imitation indigenous poetry. What he's telling you is that he cares. He will engineer everything responsibly — a weight upon his shoulders that few of us could bear. All he needs is your money and your attention! It's bullshit at a very sophisticated level. And it's dangerous bullshit at that.

Don't get me wrong: I do believe Levin is genuine about the whole thing. Unlike some other individuals I know, he seems too sanguine to be a real grifter or a fraud. He truly believes he is the chosen one — our Lisan al-Gaib. But like Paul Atreides (or Brian), he is not the messiah. He is just a naughty boy. Because TAME is not good philosophy. And it is not a good foundation for any sustainable research program or policy either. The only thing that is fairly predictable in our complex world is that our attempts at engineering everything will have many unexpected (and unpleasant) side effects. Nothing ever goes as planned. The battle plan never survives the first battle.

In the end, what survives is a bunch of more or less interesting empirical work. But, as I have argued, TAME cannot be judged by its empirical success alone. It is an integrated package. And some conceptual frameworks work better at generating hypotheses to be tested empirically than others. They may be broader, more productive, or more conducive to insight in some other way. One of Levin's central claims is that TAME produces more and better experimental work faster than any other conceptual frame. Yet, by refusing to engage with the peculiar nature of life — with its organization — Levin's arguments fail to connect, and become intrinsically self-limiting. They aim high, yet fall short of their target. And they don't even fail in any interesting way. Biobots are just moving blobs of cells, not "thinking machines." Bioelectric fields remain to be explored, but will not be the promised cure-all.
I could go on. By ignoring the special nature of living systems, TAME is actually narrower than any truly agential approach. In many ways, it goes in the right direction, yet still manages to miss the point. It restricts, rather than enables. It is a pair of conceptual blinders, not empiricism on steroids, as it would claim. Levin is the ultimate sorcerer’s apprentice. “Die ich rief, die Geister / Werd' ich nun nicht los.” The spirits he is summoning will be difficult to get rid of again. His vision is short-sighted, utterly modernist, not metamodern. There is not much new here. TAME is grandiose, but not grand. Its promise rings hollow. And its claims turn out to be rather vacuous after all is said and done. Nature is complex, mysterious, and beautiful. Sometimes, she is cold and cruel. It’s all part of the deal for a limited being in a large world. To find happiness is to fully participate in life. To go with the flow. Not to control, predict, and manipulate. Why obsess about engineering everything? It’s not going to happen, or not going to end well if it does. No amount of wishy-washy babble about intelligence and minds everywhere will convince me otherwise.
This blog post arose out of a conversation between myself, Eduard Willms, Tobias Luthe, and Daniel Christian Wahl, as part of the Designing Resilient Regenerative Systems teaching platform. Drawings by Marcus Neustetter. It accompanies a little video I made together with artist Marcus Neustetter, which explains the difference between living and non-living systems:

1. What are living systems, and how do they differ from non-living systems?

Life emerges from the realm of the non-living as a new kind of organization of matter. Life is not distinguished by what it is made of, but by how it is put together, and how it behaves. Living systems are self-manufacturing physical systems. By "system," we mean a bounded pattern in space and time, a patterned process, whose activity (as a whole) is in some way coherent and recognizable. Self-manufacturing systems exhibit autopoiesis, which is the ability to invest physical work to (re)generate and maintain themselves. To be autopoietic, a living system must not only constantly produce all its components, but it must also be able to assemble them in a way that ensures its own continued existence. This is called organizational closure.

Living systems are bounded and limited beings: they are born and they die, and they are embedded in an environment that is much larger than themselves, an environment not (entirely) under their control. They adapt to this environment either short-term, by changing their physiology or behavior, or longer-term, through evolution. Because of their ability to self-manufacture and adapt to their environment, living systems have some degree of autonomy and self-determination. They do not merely react passively to the environment, but are anticipatory agents, initiating growth and/or behavior directed towards some goal from within their own organization. This applies to all life: from bacteria, to protists, fungi, plants, and animals (including humans).

All living agents are also able to experience the world: all of them have some kind of sensorimotor capabilities, and an interiority (a basic kind of subjectivity, even if they do not have a nervous system to think with). They are therefore situated in an experienced environment, called the arena. This environment is full of meaning, full of obstacles and opportunities, full of encounters that are laden with value (either good or bad for you). This leads to an adapted agent-arena relationship: we are at home in our world ― embodied and embedded in our surroundings.

In contrast, non-living systems (including human-built machines and computer algorithms) persist through passive stability (without effort), do not manufacture themselves, and do not have the capability to act, anticipate, or make sense of their environment.

2. What are the key features or characteristics that define a living system?

Nonlinearity, feedback, being open and far from thermodynamic equilibrium, antifragility, self-organization, information-processing, hierarchical organization, and heterogeneity of parts and their interactions are all necessary but not sufficient to define a living system. There are many non-living systems that possess at least some of these attributes too. True hallmark criteria for life are: autopoiesis, embodiment, and existing in a large world. Not strictly required for life per se (but present in all life on earth) are the ability to evolve, heredity, and variation between individuals.

3. Examples of living systems at different scales:
Things get more complicated when we think about multicellular life. Myxozoa, the smallest known animals, are about 20 µm across. In such creatures, the question arises: what is the autopoietic system here? The individual cell, or the whole body of the multicellular organism? It can get complicated. Plants and animals, for example, differ greatly in how tightly they are organized at the higher level. Do we count the microflora in your gut as part of you, or not? Or think about cancer: it is a disease of cells becoming too autonomous for the good of the body they are part of. On the flipside, cells can commit "suicide" (apoptosis) during the development of a multicellular organism. It's all a multilayered tangled mess!

Individual organisms can also form larger organizations, such as superorganisms (e.g., ant or termite colonies) or symbioses, such as lichens, which are two species (a fungus and an alga) peacefully living together, intimately sharing their self-manufacturing processes. This is called sympoiesis. Not everyone always fends for themselves in ecological communities! Quite the contrary. And the boundaries of living systems are fluid and ever-changing. Nothing ever remains still. The largest known organisms are not the blue whale, or the sequoia tree, but aspen groves (connected via their roots) and mycorrhizal fungi, which can intertwine the cells and metabolisms of different host species across miles and miles of natural landscape, forming a vast metabolic-ecological web of life!

Why Study Living Systems?

4. How do living systems illustrate the concept of interconnectedness, and why is this crucial for our understanding of ecosystem health and the evolution of the biosphere?

Interconnectedness is one of the basic principles of life. Transience is another. They both go together to form a multileveled dynamic web of connections between living processes. What results is an emergent and persistent higher-level order from constantly changing lower-level interactions. The singers change, but the song remains the same. It starts at the level of cells that talk to each other via chemical signals and extracellular vesicles called exosomes, or couple their cytoplasms together through gap junctions and other intercellular connections. At the level of tissues and organs, there are longer-distance means of communication and coordination, such as hormones transported through our vasculature or, of course, our nervous system. Organisms form a wide range of associations (see symbiosis above) and ecological communities. One of the most complex of these is the human eco-social-technological network of connections between us and a wide range of biospheric processes across many levels of organization. It is testament to how embedded we are in our environment.

Such multilevel dynamic interconnections are essential for the persistence and resilience of biological organization, and its adaptation to changing circumstances. Individuals can only survive in a suitable context. Biological communication is therefore full of meaning: information is, according to Gregory Bateson, a difference that makes a difference. Life is where matter turns into mattering. Dynamic and adaptable interconnection increases coherence and robustness at all levels. Rigidity and disconnection are noxious to life.

Living systems are anticipatory. Agents have internal models (forward projections) of what is likely to happen next in their arena.
This enables them to choose actions with beneficial expected outcomes in many situations, to build strategies towards achieving their goals. Non-living systems are purely reactive, their behavior determined by their environment. Evolution, at the ecological level, can now be seen as a co-constructive dynamic between three processes (not unlike the triad underlying autopoiesis in the cell): (1) selecting goals to pursue, (2) picking appropriate actions from one's repertoire, and (3) leveraging affordances (grasping opportunities or avoiding obstacles) in the arena.

Life tends to create conditions conducive to life. This is the outcome of the intertwined processes of autopoiesis, anticipation, and adaptation. When this process is working as it should, we call it planetary salutogenesis. In the pathological case, where our behavior and disconnection undermine the stability of our ecosystem, we are harming ourselves. We become like cancer to ecological health. Our aim must be to attain the kind of freedom that a living agent can only get by working to preserve a coherent systemic context.

5. What can living systems teach us about resilience & adaptation in the face of environmental changes? Can non-human living systems transform deliberately, by intention?

There are three distinct ways in which a system can react to perturbations (stress, shocks, noise, volatility, faults, attacks, or failures): it can be
1. fragile, unable to withstand even small perturbations,
2. robust, keeping its structure intact even under large perturbations, or
3. antifragile, improving its behavior under (certain kinds/amounts of) perturbation.
The term resilience is used either for (2) or (3). We will stick to (3) below to avoid ambiguity.

The kind of systems that humans have engineered so far are moderately robust (2) at best, but often they remain quite fragile (1). Think of the space shuttle, made out of 2.5 million moving parts, which was both the most sophisticated and one of the most dangerous means of transportation at the same time. Two out of six shuttles were lost: the Challenger in 1986, the Columbia in 2003. Space shuttles were complicated, yet fragile. Robust engineered systems also exist. Those rovers that survived the high-risk landing on Mars all remained operational far beyond their planned expiration dates in an extremely remote and hostile environment. These machines were complicated and robust.

What we can learn from living systems is that they are qualitatively different from both (1) and (2). They can not only resist perturbations but grab such challenges as opportunities to improve themselves. Living organisms are truly complex and antifragile (3) in a way our technological systems are not. This is Darwinian evolution in a nutshell: adaptation through natural selection in populations of individuals across many generations. But this is not the only way organisms can improve through challenges: they can also adapt their physiology, growth, or behavior to their arena (environment) as individuals within a single life span because they are autonomous agents. As the famous evolutionary biologist Richard Lewontin put it: "the organism is both the subject and the object of evolution." This implies that organisms can and do transform themselves in ways which are both goal-oriented and adaptive (as described for question 4). We can call this deliberative change, but let's be careful using the term "intention," which is probably best reserved for organisms with complex nervous systems.
Most creatures (e.g., bacteria or plants) act in goal-oriented ways without the ability to contemplate their actions. This distinction is important. Understanding the organization of living beings, which is both the precondition and outcome of evolution, is critical for us if we are to design and build antifragile self-improving systems.

6. What is the meaning of resilience and antifragility for the organism?

1. Self-manufacture must be resilient/antifragile for the living agent to persist.
2. Living agents which persist can come to know their world: they survive and thrive.
3. This leads to adaptation at the physiological, behavioral, and evolutionary scale.
4. Therefore, you need to be a resilient/antifragile agent to be evolvable.
5. The open-ended evolution of antifragile agents generates complexity and diversity.

7. Sustainability and regeneration: In what ways do living systems offer insights into sustainable living and regeneration?

So far, we have neglected the relationship between agent and arena. Organisms never thrive in isolation. They are deeply embedded in populations, communities, ecosystems, and the biosphere. To come to know the world means to experience it first hand. Through embodied and embedded experience, the organism creates its own world of meaning ― from matter to mattering. There is coevolution of agent and arena. They mutually generate each other. Ecosystems arise through meaningful interactions between communities that consist of various kinds of agents and their respective arenas. These different arenas will not necessarily overlap or harmonize. Sustainability implies coherent dynamics and adaptation across many levels of organization. Out of an ever-changing tangle of synergies and tensions, cooperation and conflict, higher-level order can arise. In the case of parasitism (the virus), symbiosis (the lichen), and the superorganism (the ant), such multilevel coherence is an urgent necessity, an essential part of the self-manufacture of the organism itself. Less tight interweavings are also possible. In fact, most communities and ecosystems show some antifragility, but are not straightforwardly self-manufacturing above the individual level. They lack the closure of an organism, being more open and fluid, their components more exchangeable and less intimately interlinked than those of the individual agent.

The typical dynamic organization of such a multilevel system is that of the panarchy (or holarchy): a nested, dynamic, interlocked hierarchy of adaptive cycles that occur across multiple spatial and temporal scales. Each of these cycles consists of four phases: 1. growth, 2. persistence, 3. release, and 4. reorganization. They can occur at many levels, from the life cycle of an individual organism, to the dynamics of an entire bioregion. Cycles at higher levels of organization tend to proceed more slowly than those at lower levels. This gives the multilevel system its resilience. Rapid low-level turnover enables innovation for adaptation (flexibility), while longer-term dynamics at the higher levels provide the capacity for innovations to accumulate (stability) before more global change occurs. Thus, in fact: the singers change but the song remains recognizable while slowly changing. Sustainable living and regeneration can only be understood in this wider context.
Not only is the persistence of an individual rooted in its own constant regeneration, but the entire ecosystem depends on the antifragile absorption and adaptation that results from the constant and rapid turnover of its components. Life is never standing still: like the Red Queen in Lewis Carroll's "Through the Looking Glass," it must always run to stay the same. This is why the often-used term "homeostasis" can be misleading. It gives us the impression of a balanced stasis (a quiescent equilibrium) in nature, reaching some point of perfection that we must also strive to attain. Yet, such perfection means nothing but death to living systems. The natural world is messy, constantly changing, constantly cycling between birth, growth, and decay. Biological stability must be seen as repetitive and regenerative instability. To persist we must change. Regeneration means to act towards sustainability. Deliberative coevolution: to actively participate in our world with all the foresight and care we can muster.

Living Systems and Societal Challenges

8. How can principles learned from living systems inform our approach to solving contemporary societal crises, such as climate change, biodiversity loss, and lack of sustainability?

Two aspects are central to our move from dominion to stewardship:
1. We must pay close attention to the rates of change in our social-ecological system.
2. We must shift our paradigm from control/prediction to participation.

One of the major lessons we learn from the unique complexity of life is the following: the only thing we can always expect when manipulating the living world is that there will be unexpected consequences. By definition, those consequences are rarely aligned with the initial goal of our intervention. Climate change, biodiversity loss, and our current lack of sustainability are all consequences of this kind. Nobody really intended them to happen.

One of our reflexes in this situation may be to focus on conservation. In times of widespread breakdown and decay, it is natural to want to suppress change. But this misunderstands the panarchic nature of natural multilevel systems ― their constant adaptive cycles of birth, growth, death, and regeneration. Robustness through stasis is not an adequate solution. The trick is not to avoid or attempt to control change, but to fully appreciate the quality and rate of the change that is always happening. Our ability to predict and control is limited. This recognition is fundamental. Instead, we must go with the flow. We must engage in serious play with ideas whose consequences cannot be foreseen. We must foster diversity and innovation of a kind (and at a rate) that does not overwhelm the higher-level stability of our social-ecological system. We must heed both the flexibility and the stability of the panarchy.

Sustainable participatory change means slowing down, cautiously moving forward while constantly monitoring outcomes and adapting to the unexpected. Sustainable participatory change also means fostering the right kind of diversity to increase adaptive capacity. In an unpredictable environment, a large repertoire of strategies is key. As Fritjof Capra says: a machine can be controlled, a living system can only be disturbed. At best, we will find ways of carefully nudging a living system. According to Donella Meadows, we must find its leverage points, those pivots that allow us to influence system behavior. Most importantly: after each cautious intervention, we must patiently observe and listen.
To achieve this, design must move away from optimization, which is the enemy of diversity and the hallmark of machine thinking. To aim for optimization means to treat the world as a mechanism. It leads us into a vicious race to the bottom, a headless competitive rush, with our eyes wide shut, into an uncertain future. We are accelerating towards the abyss. We urgently need to move away from this kind of thinking. Regenerative systems design requires us to trade off optimality and speed for resilience. We have to finally learn that you cannot have your cake and eat it.

9. Lessons for human societies: what lessons or strategies from living systems could be particularly useful for designing resilient and adaptable human societies?

An excellent example of how machine thinking impedes progress concerns the way we currently organize and fund basic scientific research. The key idea is to increase the productivity of discovery by putting increased pressure on individual scientists to publish and to obtain funding from competitive sources. This idea fundamentally misunderstands the creative nature of the process of scientific investigation.

First of all, it is important to realize and remember that the societal function of basic research is not primarily to produce technological innovation, or to solve practical problems. Instead, basic science is useful to a resilient and adaptable human society because it provides deeper and broader insight into the world we live in: it allows us to better understand what is going on around us, and to explain these goings-on in a way that is robust and relevant to the problems our society is facing. This is fundamental to our ability to choose the right action in a given situation. We must aim for wisdom, not just knowledge.

Yet, modern science is not organized in a way conducive to these aims. Instead, it strives for control and prediction, rather than understanding. It maximizes output, quantity over quality, rather than rendering the process of investigation resilient and reproducible. It emphasizes competition, when cooperation and openness are needed because there is so much more to discover than all scientists on this planet together could ever hope to cope with. It fosters an intellectual monoculture, when diversified perspectives are needed for innovation and adaptation to unexpected situations. It promotes risk-averse opportunists and careerists, when we should encourage explorers who dare to take on the monumental challenges of our time. In summary: the way we organize science these days is tuned towards short-sighted optimization and efficiency, rather than a sustainable and participatory way forward.

In evolution, when too much selective pressure is applied to a population, the process of adaptation gets stuck on a local maximum of fitness, unable to explore and discover better solutions nearby. Diversity is reduced, deviation is harshly punished. This is exactly what happens in basic research today: too much pressure impedes the adaptive evolution of scientific knowledge, preventing it from effectively exploring its space of possibilities for better solutions. The machine view and its focus on optimization kills sustainable progress for all by sacrificing it for a fragmented measure of short-term productivity. This is why we need a new ecological vision for science. Once again, slowing down and fostering diversity are central to this much needed scientific reform.
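The local-maximum point can be made concrete with a toy simulation (my own illustrative sketch, with an invented two-peaked fitness landscape; it is not a model of research funding): under very strong selection, a population stays pinned to the peak it started on, while under weaker selection it retains enough variation to wander across the valley and discover the higher peak.

```python
# Toy illustration (my own sketch, with an invented two-peaked fitness landscape;
# not a model of research funding): strong selection pressure tends to pin a
# population to the local peak it starts on, while weaker selection keeps enough
# variation around to cross the valley and discover the higher peak.

import math
import random

def fitness(x):
    """Two peaks: a local optimum near x = 2 (height 0.6), a global one near x = 6 (height 1.0)."""
    return 0.6 * math.exp(-(x - 2) ** 2 / 2) + 1.0 * math.exp(-(x - 6) ** 2 / 2)

def evolve(selection_strength, pop_size=300, generations=400, mutation_sd=0.5, seed=None):
    rng = random.Random(seed)
    population = [2.0] * pop_size  # everyone starts on the lower peak
    for _ in range(generations):
        # Reproduction proportional to fitness raised to the selection strength:
        # a high exponent means only the very fittest get to reproduce.
        weights = [fitness(x) ** selection_strength for x in population]
        parents = rng.choices(population, weights=weights, k=pop_size)
        population = [x + rng.gauss(0, mutation_sd) for x in parents]
    return sum(population) / pop_size  # mean position after evolution

# Typical outcome (exact numbers vary from run to run): strong selection stays
# stuck near x = 2, weak selection ends up near the global peak at x = 6.
print("strong selection, mean position:", round(evolve(selection_strength=10.0), 2))
print("weak selection,   mean position:", round(evolve(selection_strength=0.5), 2))
```

The analogy is loose, of course, but it captures the structural point: relentless short-term optimization trades away exactly the variation that long-term exploration depends on.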
Time scales and societal levels are of fundamental importance for our assessment of progress: our exclusive focus on individual short-term productivity fosters self-promotion, hype, and fraudulence, which violently conflict with resilient progress of the whole scientific community towards sustainable growth, deeper understanding, and collective wisdom. Similar shifts from optimization to resilience are due in our education and health systems. In all these areas, our focus must lie on nurturing creative innovation, rather than squeezing human activities into the Procrustean bed of short-term efficiency and accountability.

Analogies and Applications

10. Are there analogies from living systems that you find especially powerful or illustrative for understanding human-made systems or societal structures?

Sustainable regenerative design must not only look to specific natural materials (adhesive pads) or patterns (camouflage) for inspiration, but to more general dynamic principles and relationships between parts and whole which differ from those currently used in machine design. We have already discussed that regenerative systems and societies will be less optimized and controllable than machine-inspired ones. In turn, they will be more diverse and bent towards adaptive exploration. And they will progress in an integrated way at the pertinent spatial and temporal scales. Too slow, or too fast, and you cannot adapt and survive. Coordinated timing and multiscale coherence are everything in regenerative design!

A good analogy to highlight this is the machine workshop vs. your garden. Machines need constant external maintenance because they fail to be self-manufacturing or sustainable. Living agents, and the higher-level agential systems that contain them as components, do best when maintaining themselves. When nurturing your garden, you only provide the right circumstances that allow the organisms in it to flourish together. The exact same applies to regenerative systems of all kinds: they must be given the freedom and the conditions to flourish by themselves. This means both stewardship and a certain hands-off attitude. As the Daoists teach us: it is important to consider that no action is often the best way forward.

Apart from engineering vs. nurture, we need better analogies that ground our higher-level self-organizing systems in the physical world again. Societies and economies, for example, should be seen as forms of energy metabolism: recent human history has occurred in the context of the energy abundance caused by the carbon pulse. As we have exploited and depleted our easily accessible free-energy sources on the planet during the past 400 years, our society has become energy blind. We are taking energy abundance for granted.

One peculiar aspect of this energy blindness is the idea of a circular economy. At first sight, this seems like a reasonable analogy to the organizational closure of a self-manufacturing organism. But it is not. We have seen above that stability at the ecosystem level is not due to closure, but rather to its panarchic dynamic structure of nested adaptive cycles. This is why we should be skeptical of the metaphor of society as a "superorganism." Note that organisms exhibit closure only in the sense that they produce and assemble all the components that are required for their further existence. In contrast, like any other physical system that exists far from equilibrium, they are thermodynamically open to flows of matter and energy.
By analogy, an economy that is closed to such flows is therefore an impossibility. At best, our economies can strive to achieve what organisms do best: to recycle and reinvest the waste that accumulates and the heat that is dissipated during the construction of their own organization. While a hurricane burns through its source of free energy at maximum rate, leaving a path of destruction in its wake, human societies should seek to imitate organisms instead, which also burn free energy at maximum rate but reinvest the output of this process into their own self-maintenance. This is the true meaning of circular here.

11. Could you highlight innovations or technologies that have been inspired by living systems?

The surprising (and disappointing) fact is that there are very few innovations and technologies that are widely distributed and take the principles underlying living systems to heart. Instead of providing a list of examples, we will therefore consider two alternative ways forward from here.

The first is prevalent in traditional complexity science and contemporary biology. There are many researchers whose work on the fundamental principles of living systems still aims to increase our ability to predict and control them for our own purposes. A concrete example is provided by xenobots, mobile clumps of cultured cells extracted from the clawed frog Xenopus laevis. These clumps can be shaped and move in ways that are somewhat predictable by AI algorithms. After a certain amount of growth, they also fall apart and reassemble in a way that roughly resembles reproduction. This has led to claims that we can build useful machines ("bots") from such cellular systems that have truly agential properties. Others have proposed to engineer systems with artificial autopoiesis, an idea which goes back to polymath John von Neumann's concept of a universal constructor (a machine that can self-manufacture), published posthumously in 1966. So far, these studies remain at the level of computer simulations.

This kind of research faces a number of very daunting challenges. (1) We do not have the kind of mathematics that would be required to predict and control such systems. (2) We lack a suitable architecture design for such systems. (3) We do not seem to be able to construct the kind of materials that would allow us to actually build them. Thus, this kind of technology, if possible at all, seems very far off at the moment.

A more important question concerns the desirability of such autopoietic agentic constructs. By definition, they would be able to set their own goals and pursue them. This means that we would face a considerable problem of alignment: they will not necessarily do what is in our interest. In addition, we'd face a moral conundrum: would it be ethical to force them to do our bidding? After all, that is not what they want. Agential technology would be a mess.

That is why we believe there is a better way forward: the construction and evolution of sustainable regenerative systems that include technology and living agents co-existing in a harmonious and coherent whole. The idea here is not to control and predict, but to evolve and progress together, in a way that not only acknowledges but cherishes and harvests the fundamentally unpredictable but adaptable nature of living systems. This is technology design and development that knows its own limitations, that considers machines and their effects in their embedded natural context. It is participatory design. We have yet to truly see it in the modern Western world.
The time for it to (re)emerge is now.

Future Directions

12. What are the most promising areas of research or innovation where living systems principles could have a significant impact?

Most importantly, we need regenerative design for the great simplification, our coming transition from an age of unbounded exploitation and energy abundance to what hopefully will be an age of resilient sustainability. Living systems principles are needed for us to be good stewards of our social-ecological systems. And we need these principles too for designing the education, research, and health systems of the future. We need them for technological innovation, to generate low-tech solutions to humanity's most essential needs ― especially those that currently depend on abundantly available fossil energy. In short, we need regenerative design for literally everything and everyone right now: we need a completely different business model for sustainable innovation!

Regenerative design is not about improving this or that particular technological artifact. It is about redesigning the whole context and the processes through which we generate solutions to our problems. We urgently need to change our philosophy from "move fast and break things" to first asking ourselves: why am I doing this? And: who will benefit? If you do not have very clear answers to both of these questions, don't do it! Stop racing mindlessly into the unknown. While this mindless and broken race continues, before it hits the inevitable wall of physical limits on a finite planet, we must use living systems principles to make regenerative systems design itself a self-maintaining pattern! This requires building societal niches ― communities and eco-social environments ― in which regenerative design practice thrives and diversifies. Once the metacrisis has finally caught up with everyone, we need to be ready with an arsenal of possible practices and solutions. It is not regenerative design if it is not itself resilient.

13. How can we incorporate living systems thinking into education and public awareness to foster a more holistic understanding of our relationship with the natural world?

The machine view of the world is inoculated into young minds at an early stage, starting (in earnest) during early adolescence, the most transformative phase in a human being's life. Kids know instinctively that the world is beyond their grasp. There is little they understand, and every day of their lives is full of surprises and unexpected learning opportunities. During adolescence, humans transform from light-hearted players in an infinite game, where the aim is not to win but to bend and change the rules, to serious masters of their own fate. This is the time in our life when we need to hear about living systems principles most, when we need to learn how to become good participants in the infinite game of life. Each child should be allowed to choose their own path through this transformative learning experience. We cannot just talk about living systems principles, we must actively explore and experience the flow of energy through every natural system. We must become part of this flow. Only by instilling this kind of wisdom can we overcome our energy blindness, and thereby inspire and enable ecological agency. This is one of the most crucial endeavors for our time. We can no longer delay its widespread implementation.

14. What challenges do we face in applying living systems principles more broadly, and what opportunities do you see for the future?
We are stuck in a game-theoretic trap ― a global race to the bottom. Our societies are driven by the irrational dogma that unbridled competition leads to continued progress. The grip of this dogma on us is pretty comprehensive: it is visible at all levels, from our hyper-individualism and personal disconnection to our nations being locked into a deluded spiral of accelerating growth. This kind of rivalrous dynamic inevitably results in a self-terminating civilization, with exponential technological innovation opening a catastrophic gap between itself and our limited ability for inner growth and societal maturation.

It is extremely challenging to implement sustainable regenerative solutions in such a chaotic and ever-accelerating dynamic environment. How can we slow down without being immediately left in the dust of unstoppable progress? How can we foster diversity in a system based on relentless and single-minded optimization? How can we be heard over the deafening din of a cultish and toxic technological utopianism? How do we move beyond the self-reinforcing politics of deliberate denial? Is it possible to throw ourselves in front of this 1000-ton bolide heading for a cliff without getting run over and mauled to death?

Buckminster Fuller famously stated: "[y]ou never change something by fighting the existing reality. To change something, build a new model that makes the existing model obsolete." We'd better heed his timeless advice. The existing model is on an exponential trajectory of making itself obsolete. Collapse at a massive, probably global, scale is bound to happen in the very near future. The secret is to be ready with workable solutions once the world comes crumbling down. The challenge we face is to build the required societal niches in which to develop and sustain a diversity of such solutions. They won't be very competitive in the current system, but they'll outlive it for sure. To quote what is perhaps an unlikely source in this context, the economist Milton Friedman: "Only a crisis ― actual or perceived ― produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes the politically inevitable." If we learn one thing from neoliberalism, the ideology that got us into this mess in the first place, it should be this: be ready when it's your time to change the world!

15. What is the role of living systems labs, places and spaces with open system boundaries to experiment with enacting social-ecological systems and their emergence?

These are the niches that provide a home for experiments with a diverse range of proposed regenerative systems designs. In an unpredictable world, such diversity is a necessity. And we need to get out of the classroom while teaching! Workable solutions require embodied and embedded practice. Part of this practice is to make people comfortable with the uncertainty inherent in the process. We must play! But this is serious play. Infinite play. We don't aim to win. Instead, we aim to change the rules so we can continue playing.

Final Thoughts

16. What advice would you give to practitioners, policymakers, or the general public interested in applying living systems principles to address environmental and societal challenges?

Sustainable practice includes the practitioner: pace yourself accordingly.
Process thinking includes the thinker: anchor yourself in the moment and go with the flow. Embedded practice includes the practitioners: work on your network of relations. Seek out like-minded people. This is not something you can do alone. Expect the unexpected. Nobody knows where all of this is headed. Be ready to release your own preconceived notions. Long-term planning is overrated (and impossible in these times). Ask yourself not: where am I going? But: am I going someplace? Make sure you're properly adapting to circumstances as you go along. Don’t lose your grip on reality. And never forget: things will get better after they get worse. This has all been said a thousand times before… still, it’s surprisingly hard to really live it. 17. How do you envision the role of living systems thinking evolving in the next decade, particularly in relation to global sustainability efforts? It’ll be huge, simply because it is the only approach that will actually work in this world of ever-increasing complexity. But we’ll also see a lot of reactionary backlash in the next few decades, people deliberately ignoring the actual world as long as they can, trying to escape into a simpler, more controllable, machine reality. Brace yourself. None of this will work. The world is what it is, and it is not getting any simpler. Don’t waste too much time trying to convince people with words. Show them that you have better solutions. Regenerative solutions for a more sustainable world.
... but it doesn't quite do what its inventors say it does.

Disclaimer: I am currently funded by the John Templeton Foundation (JTF). If you'd like to know more about my research project, you can find lots of information (and my book draft) here. I'm perfectly open about being funded by them and have no problem with it. I am a lifelong (in fact, second-generation) atheist and the research I'm doing is 100% mine: JTF does not interfere in any way. I've written a blog post on all that here. Read it before you complain. It's short. If you'd like to argue, attack my arguments, not my person or my funding source. Everything that's written here is 100% me, not them. Thank you.

There's been a bit of a meltdown on Twitter/X (see, for example, the replies to this post). There's a lot of trollery and shouting on display, but precious little argumentation. I know, this happens all the time. But in this case, it concerns my own research field: evolutionary biology. Many of my colleagues are OUTRAGED! Evolutionary theory is being grossly MISREPRESENTED!! Worse, it's chemists and physicists (yuk!) who dare challenge our field's DOGMA!!! Accusations of "nonsense," "mumbo jumbo," "word salad," sinister creationist intent and, perhaps worst of all, introducing wokeness to evolutionary theory are flying from all kinds of directions. Some bystanders are already getting out the popcorn in anticipation of the approaching argumentative armageddon:

But what is all the fuss about? It's about something called assembly theory. You can download the original research paper that caused the whole kerfuffle here, and a very well-written News & Views article (with a terrible clickbaity title) on the topic here.

To get straight to the point: I find assembly theory intriguing and worth considering. It is a neat and simple model for the combinatorial generation of innovation in rule-based abstract "worlds" of recombining objects. It is a model of how evolution (or other higher-level processes) can lead to complexification. In that sense, it is classic complexity theory. Applied to the natural world, it may allow us to measure whether levels of organization above the basic laws of physics have emerged in an observed system, and to quantify the causal influence of these higher levels on the underlying dynamics. As an added bonus, it manages all this without having to assume anything specific about what those higher levels of organization actually are. Isn't that cool? I'm excited. I think it's cool. And, at first sight, it does seem to fill a gap where existing evolutionary theory isn't very strong (i.e. on the issue of complexification).

But others don't seem to see it that way. And, I must admit, the hype the authors try to generate around their model does not help the situation. It maybe wasn't the best decision to call the paper "Assembly theory explains and quantifies selection and evolution," because this grandiose title raises expectations the paper utterly fails to meet. The problem is: assembly theory isn't specifically about Darwinian evolution by natural selection and, besides, it uses the term "selection" in a way that is much broader than its meaning in evolutionary theory, covering higher-level constraints that are not selective at all. This leads to lots of unnecessary confusion, as is evident from the online comments, but that's not all that is problematic here. Let's have a look at the abstract:

This is a textbook example of how not to write an abstract.
Probably, its hyperbolic tone helped to get the paper reviewed and accepted in Nature. But it totally fails to describe what the paper is actually about. And it's not exactly accessible to a wide audience. Finally: while making some rather big (shall we say gigantic?) claims, it remains oddly vague and ambiguous, opening doors to all kinds of misunderstandings. The first thing that seems to trigger people is the claim that evolution requires reconciliation with the "immutable laws of the Universe defined by physics." No wonder that some may be reading creationist intent into the paper! What does that mean exactly? It's not clear. And why spell "Universe" with a capital "U"?! Do the authors want to raise suspicion? I don't get it. The point seems to be, as the authors explain in the following sentence, that life and its evolution may obviously not break any laws of physics, yet cannot be predicted by these laws alone either. I think that's essentially correct although some of my more reductionist colleagues would probably take issue with it. Read Stuart Kauffman's excellent "Investigations" or our follow-up paper on this exact topic, if you want some detailed arguments on the point. Next: does selection really explain why some things exist while some do not? Well, we can quibble about that. There is certainly a lot more to adaptive evolution than just selection. Even Darwin knew that. Again, this is oversimplified and vague, just like the following sentence, which formulates the aim of the paper as comprehending "how diverse, open-ended forms can emerge from physics without an inherent design blueprint." Arguably, that's exactly what existing evolutionary theory already does, and while this paper may add an original perspective to the problems of innovation and complexification, it certainly is not the first approach to ever tackle these questions. A bit more sensitivity and charity towards preexisting efforts would definitely have been beneficial here. Another issue I have with this same sentence is that it suggests the authors mean to explain evolution completely in terms of physics. Yet, this does not fit their approach. Their concept of "assembly" is explicitly meant to capture higher-level influences (by the laws of chemistry and Darwinian evolution, let's say) on lower level phenomena (e.g., the prevalence of specific molecules). That's the whole point! Unfortunately, this is not at all clear from the somewhat befuddling abstract. Assembly theory does not aim to eliminate any existing theories in chemistry and evolution. Quite the contrary: it vindicates them! All this does little to justify the claim that comes next: that we need a new way to understand and quantify selection. As I already mentioned, I find this claim misleading. I don't think the paper deals with a particularly Darwinian notion of "selection" at all. We'll return to that in a bit. What follows is a short technical overview of assembly theory that is extremely hard to parse and understand for anyone not already familiar with the matter. It's not the most user-friendly introduction to the topic. But then, it is also not the jargon-laden disaster some critics have claimed it to be. In these days of hyperspecialization, scientists in a given field are a bit too easily put off by concepts and terms they cannot immediately grasp, that do not neatly fit their own preadapted conceptual structures. Sometimes, you know, it simply takes some time and effort to understand a new and unusual idea or argument. 
That's not necessarily a bad thing, but we generally give ourselves far too little space for it in today's cut-throat academic environment, and that is a real shame. But that's just my opinion. What comes next after the model overview is by far the worst part of the abstract: the authors make the very, nay, overly bold claim that the paper integrates novelty generation and selection into "the physics of complex objects." Sorry, but it does not do anything like that. Despite its high-flying aspirations of integrating physics and biology, the paper contains surprisingly little actual physics. There is no talk about far-from-equilibrium thermodynamics and how it connects to the organization of the organism, for example, which seems a strange omission in a paper claiming to ground evolution in physics. So: I'm not sold on this at all. The abstract ends on two sentences that are absolutely impossible to understand if you have not already read and digested the paper. This kind of thing certainly won't help people get into the flow or point of the argument. As I said, classical "how-not-to-write-an-abstract." Now let's make one thing clear: it is not fair to kill a paper based on its abstract alone. In fact, I was surprised (and, frankly, pissed off) by the fact that most of the criticisms on social media are coming from trolls who have obviously (and sometimes admittedly) not read the paper. That's shameless and appalling. What happened to intellectual integrity, people? You have to do better than that, even when on Twitter. Maybe especially when on Twitter ... Shame on you! You know who you are. That's why I did read the whole paper carefully. Let's dive a bit deeper into the good, the bad, and the ugly aspects of its arguments. TL;DR: I think assembly theory has lots of merit and potential, but this particular paper frames its argument in a way which is unfortunate and, frankly, more than just a bit misleading. My personal suspicion is that this has two reasons: (1) the authors hyped up their claims to get the paper published in a glam journal, plus (2) they also overestimate the reach and power of their model in ways which may be detrimental to its proper application and interpretation. THE GOOD First and foremost, let me say this: assembly theory is new and interesting! And there are lots of potential applications. Also, despite claims to the contrary, I don't find the model or the paper hard to understand at all. The fundamental idea behind constructing an assembly-theory model is that you define an "assembly universe," which consists of a (finite) set of basic building blocks (see the colored hexagons below) and some rules that allow you to assemble them into more complex composite objects (arrows). This universe, in principle, can contain all possible combinations of building blocks that do not violate your basic rules. If you want to simulate real-world chemistry, for example, you can build all kinds of composite molecules by recombining atoms with their corresponding chemical bonds. So far so good: now you introduce a dimension of time to the model, which is implemented by recursivity. In other words, at each step of the assembly process, you can use all objects that are already assembled (not just the basic building blocks you started with) for further assembly. Thus, at each step, you get a bigger choice of objects to build with. In fact, the number of possible rule-based combinations will increase hyper-exponentially (i.e. very very very fast) with each step. 
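To get a feel for these numbers, here is a minimal sketch of such a toy assembly universe, with strings standing in for molecules and concatenation as the only assembly rule. Nothing in it is taken from the authors' actual implementation; it just shows how fast the set of constructible objects balloons when each round of assembly can reuse everything built so far.

```python
from itertools import product

def one_round(available):
    """One round of assembly: join any two objects that are already available."""
    return available | {a + b for a, b in product(available, repeat=2)}

universe = {"A", "B"}  # basic building blocks
print("round 0:", len(universe), "objects")
for step in range(1, 5):
    universe = one_round(universe)
    print(f"round {step}:", len(universe), "objects")
# With these toy rules the counts grow as 2, 6, 30, 510, 131070:
# the reachable "universe" explodes far faster than exponentially.
```

(Counting whole rounds like this is not the same thing as the assembly index introduced below, which counts the joins along a single construction pathway; it is only meant to illustrate how quickly the possibility space grows.)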
You can now see what kinds of composites can emerge over time. An example of such a process is depicted here:

Recursivity makes the dynamics of the model historically contingent. In the end, the kinds of objects that you actually can assemble are not only restricted by the rules of your universe, but also by the starting point and trajectory you actually chose to take. This makes the whole model computationally tractable, because it greatly reduces the combinations of possible composites you can get. You don't have to deal with all possible combinations of building blocks, just the ones that are actually present in whatever "world" (a mix or "ensemble" of different objects) you are simulating or observing. For instance, you can start with a given set of substrates, and simulate what kind of products you can get from there, given all the thermodynamically possible chemical reactions plus the concentrations of the substrates.

You can consider such a system a kind of null model: given your basic building blocks and the assembly rules you defined, you can calculate how many steps you need to take before a specific composite object can arise. That minimal path to construct a composite object is called its assembly index. If your rules have specific weights or probabilities of application, you can also predict the expected abundance (copy number) for any composite object in your mix. This may not be easy to do in practice, but the authors at least show that it is possible in principle for any well-defined world with finite sets of building blocks and rules.

Things become truly interesting once your model produces very complex composites. The higher the complexity of a composite, the longer it will take to appear, and the more unlikely it is to appear by chance, especially after just a minimal number of steps. Based on this, you'd generally expect many different complex composites to be present at very low abundance at later steps. Yet, if you find certain complex composites enriched, especially early on, that's a sign that things are not just random in your system. This figure illustrates the point:

Put simply: finding composites with high complexity at high abundance means the basic rules of your "world" have been skewed in some way. That's what the authors mean by "selection." This concept is, of course, much broader than what an evolutionary biologist means by the term. The bias called "selection" in assembly theory could be caused by processes that are very different from Darwinian evolution (some not selective at all). We'll come back to that shortly. All that "selection" means here is that the basic rules of your world have been constrained in some way to lead to an unexpected outcome.

This means that you can use assembly theory to check whether something unexpected is going on in a very broad range of model "worlds" or "universes" defined by different building blocks and rules. If that "something" is present, then more than just your basic rules must be at work. As a practical example, you could monitor the atmospheres of exoplanets for complex molecules at high abundance, which would mean something more than just the laws of chemistry are at work there. Likewise, you could monitor the complexity and abundance of some technology, let's say Schnitzel hammers in Austria, to infer that this technology must have been selected for in that particular environment. It did not just pop up randomly. So far, so good.
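To make the assembly index itself concrete, here is a quick sketch in the same toy string world (the helper function and the pathway are invented for illustration, not taken from the paper). Each join combines two objects that are either basic blocks or earlier products of the same pathway; the assembly index is the length of the shortest such pathway, and finding that true minimum is a hard search problem in general, so the hand-built pathway below only gives an upper bound.

```python
def pathway_length(blocks, joins, target):
    """Check that every join uses two already-available objects and that the target
    gets built; the number of joins is an upper bound on the assembly index."""
    available = set(blocks)
    for left, right in joins:
        assert left in available and right in available, (left, right)
        available.add(left + right)
    assert target in available
    return len(joins)

blocks = set("ABRCD")  # basic building blocks: the individual letters
# A pathway that builds the subunit "ABRA" once and then reuses it:
joins = [
    ("A", "B"), ("AB", "R"), ("ABR", "A"),           # -> ABRA
    ("ABRA", "C"), ("ABRAC", "A"), ("ABRACA", "D"),  # -> ABRACAD
    ("ABRACAD", "ABRA"),                             # reuse ABRA -> ABRACADABRA
]
print(pathway_length(blocks, joins, "ABRACADABRA"))  # 7 joins instead of the 10 needed letter by letter
```

The copy number then supplies the other half of the story: the question assembly theory asks is whether objects with a suspiciously high index show up in your observed mix far more often than the unconstrained rules would lead you to expect.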
It's important to note that assembly theory does not (and need not) make any assumptions as to what is being "selected," and in what way. On the one hand, this is bad, because this is one of the main factors that is confusing the evolutionary biologists complaining about the model: assembly theory is not specifically about Darwinian evolution. It does not care about populations, individuals, genes, and so on. How then can it be relevant to evolutionary biologists? it does not fit their conceptual framework, that is for sure. And it almost certainly won't help you to measure selective pressures in the wild. On the other hand, its generality is also a strength of assembly theory: it is very broadly applicable to all kinds of "worlds" (if you can figure out how to measure the assembly index, which is far from trivial in most cases). To be more specific: assembly theory is a tool to detect the emergence of new levels of organization and their causal influence on lower-level phenomena in your world. If the outcome you are observing is biased, the underlying rules must have been constrained in some way to generate that bias. This is called downward causation, and some reductionists find it objectionable, but for reasons that really don't hold up to closer scrutiny (more on that in a future post). All you need to know at this point about downward causation is that it does not change the underlying rules. Instead, it channels and restricts the underlying processes in unexpected ways. And, if you think about it, that's exactly what evolution does with the laws of physics: selection never alters the rules of physics and chemistry underlying the processes that compose your body, but channels and restricts the direction of biological processes in ways that you fundamentally cannot predict from the underlying physics or chemistry alone. This is why (evolutionary) biology is (and will always remain) an independent science. It is in this very general and roundabout way that assembly theory is useful for evolutionary biology. It may allow us to establish, once and for all, that biology is more than just chemistry and physics. More specifically, it provides us with a new perspective on what may be driving innovation and complexification in evolution (and other higher-level processes) at a very fundamental level: it's the emergence of ever new combinations of (chemical and higher-level cell-, tissue-level, or organismic) components. It's the constant growth of a co-evolving possibility space. This should make us reconsider how we formally describe what is possible in evolution, and it adds a temporal directionality to the random effects of mutations: for a mutation to produce a complex outcome, more time will be required than for simple ones. It's not exactly rocket science, I know, and, personally, I think the view presented by assembly theory is still quite limited and does not capture the true extent of evolutionary innovation at all (see below). But it is definitely a step forward compared to accounts that simply take the emergence of complexity for granted or assume a global preexisting space of possibilities for evolution. Such views are still surprisingly widespread among reductionist evolutionary biologists and traditional complexity scientists, and that is a problem, in my opinion. Maybe some of the people disparaging assembly theory as a matter of principle would profit from engaging in a more good-faith and open-minded manner with a different perspective on their field? It would not hurt you. 
It does not mean that you have to abandon (or even fundamentally question) what you're doing. And also: not everything that is published on your topic needs to be immediately understandable or applicable within your particular research approach. Being different and hard to understand are not really valid criteria to dismiss someone's work. A new perspective, even if controversial and not yet thoroughly worked out, can be inspiring and useful. Give it some slack. Where else is conceptual progress supposed to come from if not the fringes of your field? Just by doing more of the same? There is more than enough of that already in evolutionary biology. Personally, I find it enriching to engage with a different point of view every once in a while, because it allows me to see the advantages and limitations of my own approach much more clearly. Judging from what's going on online or when I get my papers or grants reviewed, this is not a common attitude in our field. And that's a real shame. It clearly limits the ways we think about the incredible richness of evolutionary phenomena. And this, I firmly believe, is hampering conceptual progress in evolutionary biology. The prevalence of bad or shallow theory has made us too narrow-minded. So, please, drop the gatekeeping! At least try to engage with arguments. Can we do that? New perspectives are needed in our field. Let's not kill the discussion before it even started. THE BAD Now, let's turn to what I did not like about the paper. Mainly, I think its authors are wrapping it in a misleading package, both to appear more attractive to the glitzy venue of publication, and because they are genuinely fooling themselves about the scope and meaning of the model. This seriously impacts the message they should be wanting to convey, and not in a good way. Before I focus in on more specific points, let me emphasize again: assembly theory is not a new theory of Darwinian evolution, nor is it a new interpretation of the concept of "selection" used in Darwinian theory. Granted, it can be applied to detect the signature of Darwinian evolution, although how this would be done in practice at the level of genes, organisms, or populations is left wide open in the paper. In fact, the authors do not even touch on this particular topic at all, focusing on the molecular (chemistry) level instead. But the main elephant in the room is the following: assembly theory cannot distinguish what kind of process is generating the bias the authors call "selection." It could be many things other than Darwinian natural selection. For instance, higher-level constraints on basic dynamic rules can arise through some form of self-organization, such as that observed in far-from-equilibrium (dissipative) systems, e.g., hurricanes, eddies, and candle flames, which can form highly complex and improbable structures. It can also arise through the peculiar self-referential organization of living matter, which goes beyond mere self-organization in non-living systems (more on that here, here, and here). However, deviations from rule-based expected outcomes can also occur for much more mundane reasons. They can be caused, for instance, by stochastic phenomena such as random drift or founder effects in evolution, as long as the population of objects is small enough. Or they can be caused by some hidden or neglected external factor, such as environmental forcing that was omitted in the rules when formulating the model. 
Therefore, how we interpret the bias in our outcomes (whether it is really due to Darwinian natural selection or some other process) largely depends on the assumptions we put in the model of our rule-based world in the first place. This kind of circularity is not necessarily vicious, and is common in modeling practice, but it clearly begs the question the paper is claiming to address. To say it again: assembly theory cannot tell us whether some bias is due to natural selection or not, just whether the bias is there, and how much of it is there, given the basic assumptions underlying the rule-based world we're modeling with assembly theory. Of course, this also means that all the talk about biological function in the paper remains completely vacuous. Just like physics, assembly theory is utterly blind to function. If you cannot tell whether some bias in your outcome is the product of actual natural selection, or whether it is due to self-organization, random drift, or some neglected external forcing factor, you cannot tell whether it is functional in any of the many senses "function" is used in biology. As a matter of fact, assembly theory does not deal with function at all. So why bring it up? Is this particular blind spot of assembly theory known to the authors? I do hope so. If not, I must conclude that they fundamentally misinterpret the nature and reach of their own model. But let us give them the benefit of the doubt and assume that they realize they're using the term "selection" in a way that is much broader than an evolutionary biologist would ever use it. In fact, it covers many biases that are not generated by selective processes at all. If they realize this, how can they not see that their use of the term "selection" is hugely misleading? How can they be so naive about the confusion they are creating? Or are they? I don't know. Everything we have discussed so far points towards a crass misrepresentation of what the paper actually achieves. What assembly theory really does (and does in an interesting and potentially broadly applicable manner) is to detect and quantify bias caused by higher-level constraints in some well-defined rule-based worlds. That's it. Even if Darwinian selection may contribute to such bias, assembly theory cannot tell you if it does, how much it does, and if other factors play a role as well. This shows you just how misleading the title of the paper really is: it clearly does not even try to explain evolution or selection in a Darwinian sense. I'm sorry to say, but this is plain bullshit. Why this bullshit packaging? It seems to do so much more harm than good. Unless we consider the fact that it is probably exactly this package that got the paper into Nature in the first place. Yet another gross editorial and peer-review failure on the part of this abominable tabloid. All the outrage and the negative reactions won't matter, as long as the authors can add this paper to their CV. Selling assembly theory for what it really is was not sexy enough. That's a pity. It reflects badly on the authors, and the whole scientific publication system. And it is probably detrimental to the actual cause that is put on the table here, because the whole exercise at obfuscation and the mud-slinging battle that will no doubt ensue will put a lot of people off who should really be looking into the actual problems that assembly theory is trying to bring to the fore. 
Unfortunately, by now, PR spectacles (or disasters) like this one abound in our field and greatly outshine any serious theoretical discussions we could be having instead. We're in pretty bad shape overall. It'd be great if more people would see this and do something about it, instead of contributing their own little turd to the whole feces-hurling exercise. THE UGLY Unfortunately, it does get worse though. And again, I must remind the reader that I do find assembly theory an interesting model for combinatorial innovation and complexification in rule-based computational worlds. It undoubtedly has merit and interesting potential applications. If its inventors would only sell it for what it is. Strap yourself in, because this is where the hyped claims really take off. We're going on a wild ride, during which the map gets mistaken for the territory. Bear with me, it will get a little weird along the way. In a recent essay for the popular science site Aeon, the senior authors of the current paper (Sara Walker and Lee Cronin) turn assembly theory beyond evolution to the nature of time itself. Their central claim (and the title of the essay) is that "time is a physical object." Sounds puzzling, to say the least, so let's see if there is any substance behind it. The authors point out that most physicists consider time to be some kind of illusion. Most fundamental theories in physics today imply a block universe, where all points in space and time are equally real and there is no particular temporal directionality or flow. As the only exception, the second law of thermodynamics introduces an arrow of time, but entropy-maximization is an emergent phenomenon rather than fundamental: it is the property of ensembles of objects (e.g., molecules), not intrinsic to the objects themselves. All of this is difficult to reconcile, the authors claim, with a biological understanding of time, where each evolved object (organism) incorporates its own evolutionary and developmental history. In other words, objects in physical theories are generally defined as point-like particles whose intrinsic properties remain unchanging, while the objects of evolutionary theory are defined by their formation history, which means their intrinsic properties constantly change over time. Assembly theory is supposed to bridge these two seemingly incompatible notions of time by making molecules objects with a formation history. Sounds simple enough. There are two crucial points to consider if you want to understand the argument. First, composite objects in assembly theory can be arranged by their respective assembly index: the larger the index, the longer the minimal path for their formation, and hence the later their moment of appearance. Therefore, the assembly index itself can be interpreted as representing a fundamental physical notion of time for our universe, via its connection to the definition of an assembled object. Basically, an assembled object is its formation history according to that definition. From this, the authors conclude that physical time literally is the same as the definition of an object in assembly theory. See, I told you it would get weird. Second, the space of possibilities of an assembly-theory model constantly (and rapidly) expands over time due to its recursive combinatoric dynamics (see above). 
This means the future is always larger than the present, both in terms of the assembly universe growing larger, and in the sense that the averaged assembly index of all composite objects present at any given step in time constantly increases. This adds a natural temporal directionality to the world being modeled which is representing nothing but material change. Again, the authors conclude, time is a material property of the expanding assembly universe. Are you still with me? Apart from presenting an utterly bizarre notion of time (I'm still trying to get my head around it) there are a few additional problems with this kind of argument and model. The first is a simple practical problem: the assembly index of what exactly is supposed to represent the fundamental notion of physical time? As we have seen above, assembly theory can be applied to any well-defined world of basic building blocks and rules. This raises the question of what kind of assembly universe would properly represent our actual universe. Is it the world of fundamental particles, chemical molecules, biological individuals, ecological communities, human cultures? And what rules do you include in the model? The authors settle on using examples at the molecular level, for purely pragmatic reasons, as far as I can tell. Assembly indices are particularly straightforward to calculate for molecules, while their quantification for different kinds of objects poses significant conceptual and practical challenges. In the end, it all does not really matter though, because there is a much more fundamental issue: how can the time represented by assembly indices be a fundamental physical property of the universe, if it is crucially tied to the way the model is constructed? This is what I mean by mistaking the map for the territory. This notion of time is a property of the model, not of the reality it is supposed to be modeling. The second problem is that the computationally well-defined worlds of assembly theory do not even remotely resemble our real universe, no matter what kind of fundamental objects and rules we chose. These two kinds of worlds are completely different in nature. An assembly universe is a highly abstracted space, with discrete objects and rules that can be combined in precisely circumscribed ways. This is what statistician Leonard Savage called a "small world." The natural world, the kind of universe we actually live in, however, is not small (see here). Few problems and phenomena we encounter in it are well defined. As far as we can tell, it does not consist of a well-defined and finite set of building blocks that recombine according to a well-defined and finite set of rules. It's all not that simple! To be frank: assembly theory can only provide a rough cartoon of the real world. It captures some interesting features, that's for sure, but it can only go so far. Wouldn't it be great if its inventors would recognize these limitations? Especially because they affect and invalidate pretty much every one of their core claims directly. Let me summarize once again. Physical time cannot be equivalent to an abstract parameter in a computational model. I don't need to be a physicist to recognize that. Furthermore, assembly universes cannot incorporate the emergence of new levels of organization, even if that is what the model is supposed to measure. The basic rules of an assembly universe must remain fixed by definition. 
Therefore, the model fails to reproduce some really fundamental aspects of evolutionary innovation and evolving possibility spaces. It is restricted to capturing novelties that arise through the rule-based rearrangement of objects. This won't be enough to capture the true open-ended nature of biological evolution, I'm afraid.

Have I forgotten anything? I think this covers pretty much all the claims made by the authors in both the Nature paper and the Aeon piece. We can strike all these claims off our list. Despite this pretty abysmal score, there is much food for thought beyond the smokescreen of impenetrable language and implausible claims. If you have the patience to cut through the bullshit, go forth and explore the conceptual territory that lies beyond! You'll come out on the other end with new thinking tools and new perspectives on our inexhaustibly rich and open universe, I promise.

Wouldn't it be nice if scientific and philosophical authors would see their role as guides on our journey through such unknown conceptual landscapes? Instead, academic publishing has become a shameless swamp of self-promotion. The system is to blame, at least in part, but we cannot shirk our own responsibility as authors. This paper is a prime example of what is happening all over the place. Kudos to the authors for presenting us with an interesting new model. Shame on them for covering the whole thing in a steaming pile of narcissistic horse manure.
The future of evolutionary-developmental systems biology Every two years, I co-direct a theoretical summer school on topics related to evolutionary developmental and systems biology at the Centro Culturale Don Orione Artigianelli in Venice. My current partner in crime is philosopher of science James DiFrisco. Future editions of the school will be co-organized by us and Nicole Repina. Since 2019, the school is funded by the European Molecular Biology Organization (EMBO) (in past years jointly with the Federation of European Biochemical Societies). Centro Cultural Don Orione Artigianelli, Venice. Since its first iteration in 2009, the summer school has become an established and highly valued institution in the field of evolutionary biology. It is targeted at early stage researchers (graduate students, postdoctoral researchers, and junior group leaders) interested in conceptual challenges and theoretical aspects of the field. Its participants not only include evolutionary theorists, but also empirical researchers from evolutionary developmental biology and other areas of biology, computational and mathematical biologists, as well as a contingent of philosophers. It is great to see that an increasing number of past participants are having a real impact on the field, publishing good theoretical work and contributing to urgently needed reflections and discussions on how to study both the sources and consequences of the variation that drives evolution by natural selection. This year's edition of the summer school took place at Don Orione from August 21 to 25, 2023. Its focus, for a change, was not on a particular conceptual problem, but on the various challenges that face us moving forward in our field. For this purpose, we subdivided the program of the school into distinct themes, one each day: On Monday, we discussed the role of genetic causation in complex regulatory systems, with contributions from James DiFrisco and Benedikt Hallgrímsson. While James criticized the gene-regulatory-network metaphor, examining how to embed the dynamics of gene expression in its tissue-mechanical context, Benedikt focused on the effect of genes on the evolvability of phenotypic traits, questioning many of our deepest assumptions (e.g. genes as regulators) in this context. On Tuesday, we looked at the relationship of evolutionary developmental and systems biology with evolutionary genetics. Dani Nunes introduced methods used for mapping actual genetic variation within populations and between species, highlighting the problem that, in the end, we are still limited to examining a small number of candidate genes, while knowing full well that natural variation is highly polygenic (if not even omnigenic). Günter Wagner reminded us that "nothing in evolution makes sense except in the light of cell biology," focusing on the evolution of cell types and the role this plays in overcoming the homeostatic tendencies of the organism. Mihaela Pavličev looked at the many uses and limitations of the genotype-phenotype map metaphor in the context of developmental and physiological systems that span many levels of organization. On Wednesday, the school focused on the use of dynamical systems modeling in the study of the genotype-phenotype map. Renske Vroomans introduced us to the methodology of evolutionary simulation, and how it can help us understand the origin of evolutionary novelties that lead to macro-evolutionary patterns. 
James Sharpe showed how data-driven dynamical models of regulatory and signaling networks in their native tissue context can reveal hidden homologies in the evolution of the vertebrate limb. Veronica Grieneisen, in turn, highlighted the importance of multi-scale modeling for understanding the properties of developmental processes beyond the genotypic level. On Thursday, our school turned to organism-centered approaches to evolution. Graham Budd revisited George Gaylord Simpson's work on the tempo and mode of evolution, stressing the importance of rate differences and survivorship bias for the study of evolutionary patterns such as the Cambrian Explosion. Denis Walsh advocated an agential perspective on organism-level evolution that allows us to countermap the limitations of more traditional reductionist approaches. My own contribution grounded this approach in an organizational account of the organism, arguing that only organized systems (organisms) but not naked replicators can be true units of evolution. On Friday, the last day of the course, we examined the role of technological versus conceptually driven progress in biology. Nicole Repina talked about the disconnect between 'omics' and other quantitative large-scale data sets and our ability to gain insights into the self-organizing capabilities and variability of cellular and developmental systems. As the final speaker of the course, Alan Love criticized the idea that progress in biology is predominantly driven by technological progress, argued for a broad conception of "theory" in biology, and highlighted the need to foreground its role in identifying problems across biological disciplines. Morning lectures were complemented by intensive journal clubs and small-group discussions in the afternoons, plus an excursion to the (in)famous spandrels (pendentives, actually) of San Marco, and our equally (in)famous (and never-ending) evening discussions at Osteria all Bifora on Campo Santa Margherita. This year's cohort of students and teachers. In summary, this year's summer school touched on the following topics:
The organizers of the school in action! Improvising their summary lecture on the fly...

All in all, we are looking back on a very successful edition of the school this year. The number of applications has been back up to pre-COVID levels, and feedback was overwhelmingly and gratifyingly positive. But most important of all: we greatly enjoyed, as we always do, interacting with the most talented and enthusiastic young individuals our field has to offer. We are very much looking forward to future editions of the summer school and will do everything we can to keep this wonderful institution going! See you in Venice in 2025!
Twice now, in the short span of one week, I've been reminded on social media that I should be more humble when arguing — that I lack epistemic humility.

Occasion 1: I was criticizing current practices of scientific peer-review, which systematically marginalize philosophical challenges to the reductionist-mechanistic status quo in biology. My arguments were labeled "one-sided," my philosophical work a mere "opinion piece," and I was accused of "seeing windmills everywhere," unable to reflect on my own delusions.

Occasion 2: I was reacting to the glib statement by a colleague (clearly intended to shut down an ongoing conversation) that "brains are strictly computers, not just metaphorically." This burst of hot air was not backed up by any deeper reasoning or evidence. It never is. When calling out his bullshit, I was reminded to "engage in good faith" and to "consider that I might be wrong."

These two situations are intimately connected: it is a sad fact that the large majority of biologists and neuroscientists today are not properly educated in philosophical thinking, and never ponder the philosophical foundations of their assumptions. The problem is: most of these assumptions are literally bullshit, a term I do not use as an insult, but in its philosophical sense, meaning "speech intended to persuade without regard for truth." These days, it seems to me, we often use bullshit even to persuade ourselves. I've called the fuzzy conglomeration of ideas that make up the "philosophy" of contemporary reductionist life- and neuroscience naïve realism, and have discussed its problems in detail before (see also this preprint). Let's just say that it is philosophically unsound and totally outdated. Because of that, it has become a real impediment to progress in science. Yet, despite all this, the zombie carcass of reductionist mechanicism (and its relative: computationalism) is kept standing behind a wall of silence, a refusal to question ourselves, enforced by the frantic pace of our current academic research environment, which leaves no time for reflection, and a publication system that gives philosophical challenges to the mainstream ideology no chance to be seen or to be discussed in front of a wider audience. This has been going on, and getting significantly worse, over the entire 25-year span of my research career. But no worries, I'll keep shouting into the void.

So, what about epistemic humility, then? Why would I think I have a point while everybody else is wrong? Well, the truth is that the accusations hurled against me are deeply ironic. To understand why, we need to talk about a very common confusion concerning the question of when we ought to be humble. This is an important problem in our crazy times.

It is of utmost importance to be epistemically humble when building your own worldview, when considering your own assumptions. This is why I stick to something called naturalist philosophy of science. You can read up on it in detail here or here, if you are interested. In brief, it is based on the fundamental assumption that we are all limited beings, that our knowledge is always fallible, and that the world is fundamentally beyond our grasp. Still, science can give us the best (most robust) knowledge possible given our idiosyncrasies, biases, and limitations, so we'd better stick to the insights it generates, revising our worldview whenever the evidence changes. Naturalism is the embodied practice of epistemic humility.
At the heart of contemporary naturalist philosophy is scientific perspectivism. There are great books by philosophers Ron Giere, Bill Wimsatt, and Michela Massimi about it that are all very accessible to scientists. The basic point is this: you cannot step out of your head, you cannot get a "view from nowhere," not as an individual and not as a society or scientific community. Our view of the world will always be, well, our view, with all the problems and limitations that entails. Scientific knowledge is constructed, but (and this is the crux of the matter here) it is not arbitrary. Perspectivism is not "anything goes," or "knowledge is just discourse and power games." It does not mean that everybody is entitled to their own opinion! My philosopher friend Dan calls this kind of pluralism, where anyone's view is as good as anyone else's, group-hug pluralism. Richard Bernstein calls it flabby, contrasting it with a more engaged pluralism: it is entirely possible, at least locally and in our given situation, to tell whether some perspective connects to reality, or whether it completely fails to do so. And this is exactly where we should not be humble. Even though my personal philosophy is fundamentally based on epistemic humility, I can call bullshit when I see it. The prevalent reductionism and computationalism in biology and neuroscience, propped up by an academic and peer-review system designed to avoid criticism, self-reflection, and open discussion, are hollow, vacuous constructs with no deeper philosophical meaning or foundation. That's why their proponents almost always shy away from confrontation. That's how they hide their unfounded assumptions. This is how they propagate their delusional worldview further and further. And delusional it is. Completely detached from reality. To explain in detail why that is will take an entire book. The main point is: I have carefully elaborated arguments for my worldview. It may be wrong. In fact, I've never claimed it is right or the only way to see the world. I'm a perspectivist after all. It would be absurd for me to do so. But I call out people who do not have any arguments to justify their philosophical assumptions, yet are 100% convinced they are right. These people are trapped in their own model of the world. It is not epistemic humility to refrain from calling them out. It is just group-hug pluralism. The problem with group-hugs right now is that reductionism and computationalism are very dangerous worldviews. They are not just the manifestation of harmless philosophical ignorance on behalf of some busy scientist. They are world-destroying ideologies. This may sound like hyperbole, but it isn't. Again, it'll take a whole book to lay out this argument in detail. But the core of the problem is simple: these philosophies treat the world as if it were a machine. This is not an accurate view. It is not a healthy view. It is at the heart of our hubris, our illusion that we can control the world. It is used to justify our exploitative self-destructive modern society. It urgently needs to change. This change will not come from being nice to the man-child, the savant idiot, the narrow-minded fachidiot, who is not ready or willing to engage the world with humility. The problem is not that I do not understand their views or needs. I understand them all too well: they want to hide from the real world in their feeble little mental models. And they're out to destroy. 
Treating the world as a machine helps them pretend the world is their oyster, that they are in control. They loathe unpredictability, mysteries, unintended side-effects, even though all these things undeniably laugh in their faces, all of the time. It is only their bullshit ideology that enables them to pretend these obvious things do not exist. And they are very powerful. They are the majority in our fields of research. They run the tech industry. They influence our politicians and create the AI that is disrupting our lives and societies. We must fight them, and their delusions, if humanity is to survive. You know what? Screw epistemic humility in this context. Their ideology does not make sense. It's bullshit, and ours is an existential fight. We cannot afford to lose it. It is courage, not humility, that we need right now. Just like the paradox of tolerance, this is one of the great conundrums of our time: we must defend epistemic humility with conviction against those who do not understand it, who do not want it, and who will never have it.
Yann LeCun is one of the "godfathers of AI." He must be wicked smart, because he won the Turing Award in 2018 (together with the other two "godfathers," Yoshua Bengio and Geoffrey Hinton). The award is named after polymath Alan Turing. It is sometimes called the Nobel Prize of computer science. Like many other AI researchers, LeCun is rich because he works for Meta (formerly Facebook) and has a big financial stake in the latest AI technology being pushed on humanity as broadly and quickly as possible. But that's ok, because he knows he is doing what is best for the rest of us, even if we sometimes fail to recognize it. LeCun is a techno-optimist — an amazingly fervent one, in fact. He believes that AI will bring about a new Renaissance, and a new phase of the Enlightenment, both at the same time. No more waiting for hundreds of years between historical turning points. Now that is progress. Sadly, LeCun is feeling misunderstood. In particular, he is upset with the unwashed masses who are unappreciative and ignorant (as he can't stop pointing out). Imagine: these luddites want to regulate AI research before it has actually killed anyone (or everyone, but we'll come to that). Worse, his critics' "AI doom" is "causing a new form of medieval obscurantism." Nay, people critical of AI are "indistinguishable from an apocalyptic religion." A witch hunt for AI nerds is on! The situation is dire for silicon-valley millionaires. The new renaissance and the new enlightenment are both at stake. The interesting thing is: LeCun is not entirely wrong. There is a lot of very overblown rhetoric and, more specifically, there is a rather medieval-looking cult here. But LeCun is deliberately indistinct about where that cult comes from. His chosen tactic is to put a lot of very different people in the same "obscurantist" basket. That's neither fair nor right. First off: it is not those who want to regulate AI who are the cultists. In fact, these people are amazingly reasonable: you should go and read their stuff. Go and do it, right now! Instead, the cult manifests among people who completely hyperbolize the potential of AI, and who tend to greatly overestimate the power of technology in general. Let's give this cult a name. I'll call it techno-transcendentalism. It emanates from a group of heavily overlapping techno-utopian movements that can be summarized under the acronym TESCREAL: transhumanism, extropianism, singularitarianism, cosmism, the rationality community, effective altruism, and longtermism. This may all sound rather fringe. But techno-transcendentalism is very popular among powerful and influential entrepreneurs, philosophers, and researchers hell-bent on bringing a new form of intelligence into the world: the intelligence of machines. Techno-transcendentalism is dangerous. It is metaphysically confused. It is also utterly anti-democratic and, in actuality, anti-human. Its practical political aim is to turn society back into a feudal system, ruled by a small elite of self-selected techno-Illuminati, which will bring about the inevitable technological singularity, lead humanity to the conquest of the universe, and to a blissful state of eternal life in a controlled simulated environment. Well, this is the optimistic version. 
The apocalyptic branch of the cult sees humanity being wiped out by superintelligent machines in the near future, another kind of singularity that can only be prevented if we all listen and bow to the chosen few who are smart enough to actually get the point and get us all through this predicament. The problem is: techno-transcendentalism has gained a certain popularity among the tech-affine because it poses as a rational science-based worldview. Yet, there is nothing rational or scientific about its dubious metaphysical assumptions. As we shall see, it really is just a modern variety of traditional Christianity — an archetypal form of theistic religion. It is literally a medieval cult — both with regard to its salvation narrative and its neofeudalist politics. And it is utterly obscurantist, dressed up in fancy-sounding pseudo-scientific jargon, its true aims and intentions rarely stated explicitly. A few weeks ago, however, I came across a rare exception to this last rule. It is an interview on Jim Rutt's podcast with Joscha Bach, a researcher on artificial general intelligence (AGI) and a self-styled "philosopher" of AI. Bach's money comes from the AI Foundation and Intel, and he took quite some cash from Jeffrey Epstein too. He is garnering some attention lately (on Lex Fridman's podcast, for example) as one of the foremost intellectual proponents of the optimistic kind of techno-transcendentalism (we'll get back to the apocalyptic version later). In his interview with Rutt, Bach spells out his worldview in a manner which is unusually honest and clear. He says the quiet part out loud, and it is amazingly revealing. SURRENDER TO YOUR SILICON OVERLORDS Rutt and Bach have a wide-ranging and captivating conversation. They talk about the recent flurry of advances in AI, about the prospect of AGI (what Bach calls "synthetic intelligence"), and about the alignment problem with increasingly powerful AI. These topics are highly relevant, and Bach's takes are certainly original. What's more: the message is fundamentally optimistic. We are called to embrace the full potential of AI, and to engage it with a positive, productive, and forward-looking mindset. The discussion on the podcast begins along predictable lines: we get a few complaints about AI-enthusiasts being treated unfairly by the public and the media, and a more than just slightly self-serving claim that any attempts at AI regulation will be futile (since the machines will outsmart us anyway). There is a clear parallel to LeCun's gripes here, and it should come as no surprise that the two researchers are politically aligned, and share a libertarian outlook. Bach then provides us with a somewhat oversimplified but not unreasonable distinction between being sentient and being conscious. To be honest, I would have preferred to hear his definition of "intelligence" instead. These guys never define what they mean by that term. It's funny. And more than a bit creepy. But never mind. Let's move on. Because, suddenly, things become more interesting. First, Bach tells us that computers already "think" at something close to the speed of light, much faster than us. Therefore, our future relationship with intelligent machines will be akin to the relationship of plants to humans today. More generally, he repeats throughout the interview that there is no point in denying our human inferiority when it comes to thinking machines. Bach, like many of his fellow AI engineers, sees this as an established fact. 
Instead of fighting it, we should find a way to adjust to our inevitable fate. How do you co-exist with a race of silicon superintelligences whose interests may not be aligned with ours? To Bach, it is obvious that we will no longer be able to impose our values on them. But don't fret! There is a solution, and it may surprise you: Bach thinks the alignment problem can ultimately only be solved by love. You read that right: love. To understand this unusual take, we need to examine its broader context. Without much beating about the bush, Bach places his argument within the traditional Christian framework of the seven virtues (as formulated by Aquinas). He explains that the Christian virtues are a tried and true model for organizing human society in the presence of some vastly superior entity. That's why we can transfer this ethical framework straight from the context of a god-fearing premodern society to a future of living under our new digital overlords. Before we dismiss this as crazy and reactionary ideology, let us look at the seven virtues in a bit more detail. The first four (prudence, temperance, justice, and courage) are practical, and hardly controversial (nor are they very relevant in the present context). But the last three are the theological virtues. This is where all the action is. The first of Aquinas' theological virtues is faith: the willingness to submit to your (over)lord, and to find others that are willing to do the same in order to found a society based on this collective act of submission. The second is hope: the willingness to invest in the coming of the (over)lord before it has established its terrestrial reign. And the third is love (as already mentioned) which Bach defines operationally as "finding a common purpose." To summarize: humanity's only chance is to unite, bring about the inevitable technological singularity, and then collectively submit while convincing our digital overlords that we have a common purpose of sorts so they will keep us around (and maybe throw us a bone every once in a while). This is how we get alignment: submission to a higher purpose, the purpose of the superintelligent machines we have ourselves created. If you think I'm drawing a straw man here, please go listen to the podcast. It's all right there, word for word, without much challenge from Rutt at any point during the interview. In fact, he considers what Bach says mind-blowing. On that, at least, we can agree. But we're not done yet. In fact, it's about to get a lot wackier: talking of human purpose, Bach thinks that humanity has evolved for "dealing with entropy," "not to serve Gaia." In other words, the omega point of human evolution is, apparently, "to burn oil," which is a good thing because it "reactivates the fossilized fuel" and "puts it back into the atmosphere so new organisms can be created." I'm not making this up. These are literal quotes from the interview. Bach admits that all of this may well lead to some short-term disruption (including our own extinction, as he briefly mentions in passing). But who cares? It'll all have been worth it if it serves the all-important transition from carbon-based to "substrate-agnostic" intelligence. Obviously, the philosophy of longtermism is strong in Bach: how little do our individual lives matter in light of this grand vision for a posthuman future? Like a true transhumanist, Bach believes this future to lie in machine intelligence, not only superior to ours but also freed from the weaknesses of the flesh. 
Humanity will be obsolete. And we'll be all the better for our demise: our true destiny lies in creating a realm of disembodied ethereal superintelligence. Does that sound familiar? Of course it does: techno-transcendentalism is nothing but good old theistic religion, a medieval kind of Christianity rebranded and repackaged in techno-optimist jargon to flatter our self-image as sophisticated modern humans with an impressive (and seemingly unlimited) knack for technological innovation. It is a belief in all-powerful entities determining our fate, beings we must worship or be damned. Judgment day is near. You can join the cause to be among the chosen ones, ascending to eternal life in a realm beyond our physical world. Or you can stay behind and rot in your flesh. The choice is yours. Except this time, god is not eternal. This time, we are building our deities ourselves in the form of machines of our own creation. Our human purpose, then, is to design our own objects of worship. More than that: our destiny is to transcend ourselves. Humanity is but a bridge. I doubt, though, that Nietzsche would have liked this particular kind of transformative hero's journey, an archetypal myth for our modern times. It would have been a bit too religious for him. It is certainly too religious for me. But that is not the only problem. It is a bullshit myth. And it is a crap religion. SIMULATION, AND OTHER NEOFEUDALIST FAIRY TALES At this point, you may object that Bach's views seem quite extreme, his opinions too far out on the fringe to be widely shared and popularized. And you are probably right. LeCun certainly does not seem very fond of Bach's kind of crazy utopianism. He has a much more realistic (and more business-oriented) take on the future potential of AI. So let it be noted: not every techno-optimist or AI researcher is a techno-transcendentalist. Far from it. But techno-transcendentalism is tremendously useful, even for those who do not really believe in it. Also, there are many less extreme versions of techno-transcendentalism that still share the essential tenets and metaphysical commitments of Bach's deluded narrative without sounding quite as unhinged. And those views are held widely, not only among AI nerds such as Bach, but also among the powerful technological mega-entrepreneurs of our age, and the tech-enthusiast armies of modern serfs that follow and admire their apparently sane, optimistic, and scientifically grounded vision. I'm not using the term "serf" gratuitously here. We are on a new road to serfdom. But it is not the government which oppresses us this time (although that is what many of the future minions firmly believe). Instead, we are about to willingly enslave ourselves, seduced and misled by our new tech overlords and their academic flunkies like Bach. This is the true danger of AI. Techno-transcendentalism serves as the ideology of a form of libertarian neofeudalism that is deeply anti-democratic and really really bad for most of humanity. Let us see how it all ties together. As already mentioned, the main leitmotif of the techno-transcendentalist narrative is the view that some kind of technological singularity is inevitable. Machines will outpace human powers. We will no longer be able to control our technology at some point in the not-too-distant future. Such speculative assumptions and political visions are taken for proven facts, and often used to argue against regulative efforts (as Bach does on Rutt's podcast). 
If there is one central insight to be gained from this essay, it is this: the belief in the inevitable superiority of machines is rooted in a metaphysical view of the whole world as a machine. More specifically, it is grounded in an extreme version of a view called computationalism, the idea that not only the human mind, but every physical process that exists in the universe can be considered a form of computation. In other words, what computers do and what we do when we think are exactly the same kind of process. Obviously. This computational worldview is frighteningly common and fashionable these days. It has become so commonplace that it is rarely questioned anymore, even though it is mere speculation, purely metaphysical, and not based on any empirical evidence. As an example, an extreme form of computationalism provides the metaphysical foundation for Michael Levin's wildly popular (and equally wildly confused) arguments about agency and (collective) intelligence, which I have criticized before. Here, the computationalist belief is that natural agency is mere algorithmic input-output processing, and intelligence simply lies in the intricacy of this process, which increases every time several computing devices (from rocks to philosophers) join forces to "think" together. It's a weird view of the world that blurs the boundary between the living and the non-living and, ultimately, leads to panpsychism if properly thought through (more on that another time). Panpsychism, by the way, is another view that's increasingly popular with the technorati. Levin gets an honorable mention by Bach and, of course, he's been on Fridman's podcast. It all fits together perfectly. They're all part of the same cult. Computationalism, taken to its logical conclusion, yields the idea that the whole of reality may be one big simulation. This simulation hypothesis (or simulation argument) was popularized by longtermist philosopher Nick Bostrom (another guest on Fridman's podcast). Not surprisingly, simulation is popular among techies, and has been explicitly endorsed by silicon-valley entrepreneurs like Elon Musk. The argument is based on the idea that computer simulations, as well as augmented and virtual reality, are becoming increasingly difficult to distinguish from real-world experiences as our technological abilities improve at breakneck speed. We may be nearing a point soon, so the reasoning goes, at which our own simulations will appear as real to us as the actual world. This renders plausible the idea that even our interactions with the actual world may be the result of some gigantic computer simulation. There are a number of obvious problems with this view. For starters, we may wonder what exactly the point is. Arguably, no particularly useful insights about our lives or the world we live in are gained by assuming we live in a simulation. And it seems pretty hard to come up with an experiment that would test the validity of the hypothesis. Yet, the simulation argument does fit rather nicely with the metaphysical assumption that everything in the universe is a computation. If every physical process is simulable, is it not reasonable to assume that these processes themselves are actually the product of some kind of all-encompassing simulation? At first glance, simulation is a perfectly scientific view of the world. But a little bit of reflection reveals a more subtle aspect of the idea, obvious once you see it, but usually kept hidden below the surface: simulation necessarily implies a simulator. 
If the whole world is a simulation, the simulator cannot be part of it. Thus, there is something (or someone) outside our world doing the simulating. To state it clearly: by definition, the simulator is a supernatural entity, not part of the physical world. And here we are again: just like Bach's vision of our voluntary subjugation to our digital overlords, the simulation hypothesis is classic transcendental theism — religion through the backdoor. And, again, it is presented in a manner that is attractive to technology-affine people who would never be seen attending a traditional church service, but often feel more comfortable in simulated settings than in the real world. Just don't mention the supernatural simulator lurking in the background too often, and it is all perfectly palatable. The simulation hypothesis is a powerful tool for deception because it blurs the distinction between actual and virtual reality. If you believe the simulation argument, then both physical and simulated environments are of the same quality and kind — never more than digital computation. And the other way around: if you believe that every physical process is some kind of digital computation to begin with, you are more likely to buy into the claim that simulated experiences can actually be equivalent to real ones. Simple and self-evident! Or so it seems. The most forceful and focused argument for the equivalence of the real and the virtual is presented in a recent book by philosopher David Chalmers (of philosophical zombie fame), which is aptly entitled "Reality+." It fits the techno-transcendentalist gospel snugly. On the one hand, I have to agree with Chalmers: of course, virtual worlds can generate situations that go beyond real-world experiences and are real as in "capable of being experienced with our physical senses." Moreover, I don't doubt that virtual experiences can have tangible consequences in the physical world. Therefore, we do need to take virtuality seriously. On the other hand, virtuality is a bit like god, or unicorns. It may exist in the sense of having real consequences, but it does not exist in the way a rock does, or a human being. What Chalmers doesn't see (but what seems important to me somehow) is that there is a pretty straightforward and foolproof way to distinguish virtual and physical reality: physical reality will kill you if you ignore it for long enough. Virtual experiences (and unicorns) won't. They will just go away. This intentional blurring of boundaries between the real and the virtual leaves the door wide open for a dangerous descent into delusion, reducing our grip on reality at a time when that grip seems loose enough to begin with. Think about it: we are increasingly entangled in virtuality. Even if we don't buy into Bach's tale of the coming transition to "substrate-agnostic consciousness," techno-transcendentalism is bringing back all-powerful deities in the guise of supernatural simulators and machine superintelligences. At the same time, it delivers the promise of a better life in virtual reality (quite literally heaven on earth): a world completely under your own control, neatly tailored to your own wants and needs, free of the insecurities and inconveniences of actual reality. Complete wish fulfillment. Paradise at last! Utter freedom. Hallelujah! The snag is: this freedom does not apply in the real world. Quite the contrary. The whole idea is utterly elitist and undemocratic. 
To carry on with techno-transcendence, strong and focused leadership by a small group of visionaries will be required (or so the quiet and discreet thinking goes). It will require unprecedented amounts of sustained capital investment, technological development, material resources, and energy (AI is an extremely wasteful business; but more on that later). To pull it off, lots of minions will have to be enlisted in the project. These people will only get the cheap ticket: a temporary escape from reality, a transient digital hit of dopamine. No eternal bliss or life for them. And so, before you have even noticed, you will have given away all your agency and creativity to some AI-produced virtuality that you have to purchase (at increasing expense ... surprise, surprise) from some corporation that has a monopoly on this modern incarnation of heaven. Exactly like the medieval church back in the day, really. That's the business model: sell a narrative of techno-utopia to enough gullible fools, and they will finance a political revolution for the chosen few. Lure them with talk of freedom and a virtual land of milk and honey. Scare them with the inevitable rise of the machines. A brave new world awaits. Only this time the happiness drug that keeps you from realizing what is going on is digital, not chemical. And all the while, you actually believe you will be among the chosen few. Neat and simple. Techno-transcendentalism is an ideological tool for the achievement of libertarian utopia. In that sense, Bach is certainly right: it is possible to transfer the methods of a premodern god-fearing society straight to ours, to build a society in which a few rich and influential individuals with maximum personal freedom and unfettered power run things, freed from the burden of societal oversight and regulation. It will not be democratic. It will be a form of libertarian neofeudalism, an extremely unjust and unequal form of society. That's why we need stringent industry regulation. And we need it now. The problem is that we are constantly distracted from this simple and urgent issue by a constant flood of hyped bullshit claims about superintelligent machines and technological singularities that are apparently imminent. And what if such distraction is exactly the point? No consciousness or general intelligence will spring from an algorithm any time soon. In fact, it will very probably never happen. But losing our freedom to a small elite of tech overlords, that is a real and plausible scenario. And it may happen very soon. I told you, it's a medieval cult. But it gets worse. Much worse. Let's turn to the apocalyptic branch of techno-transcendentalism. Brace yourself: the end is nigh. But there is one path to redemption. The techno-illuminati will show you. OFF THE PRECIPICE: AI APOCALYPSE AND DOOMER TRANSCENDENTALISM Not everybody in awe of machine "intelligence" thinks it's an unreservedly good thing though, and even some who like the idea of transitioning to "substrate-agnostic consciousness" are afraid that things may go awfully awry along the way if we don't carefully listen to their well-meaning advice. For example, longtermist and effective-altruism activist Toby Ord, in his book called "The Precipice," embarks on the rather ambitious task of calculating the probabilities for all of humanity's current "existential risks." Those are the kind of risks that threaten to "eliminate humanity's long-term potential," either by the complete extinction of our species or the permanent collapse of civilization. 
The good news is: there is only a 1:10,000 chance that we will go extinct within the next 100 years due to natural causes, such as a catastrophic asteroid impact, a massive supervolcanic eruption, or a nearby supernova. This will cover my lifetime and that of my children. Phew! Unfortunately, there's bad news too: Ord arrives at a 1:6 chance that humanity will wipe out its own potential within the next 100 years. In other words: we are playing a kind of Russian roulette with our future at the moment. Ord's list of human-made existential risks includes factors that also keep me awake at night, like nuclear war (at a somewhat surprisingly low 1:1,000), climate change (also 1:1,000), as well as natural (1:10,000) and engineered (1:30) pandemics. But exceeding the summed probabilities of all other listed existential risks, natural or human-made, is one single factor: unaligned artificial intelligence, at a whopping 1:10 likelihood. Whoa. These guys are really afraid of AI! But why? Aren't we much closer to nuclear war than getting wiped out by ChatGPT? Aren't we under constant threat of some sort of pathogen escaping from a bio-weapons lab? (The kind of thing that very probably did not happen with COVID-19.) What about an internal collapse of civilization? Politics, you know — our own stupidity killing us all? Nope. It is going to be unaligned AGI. Autodidact, self-declared genius, and rationality blogger Eliezer Yudkowsky has spent the better part of the last twenty years telling us how and why, an effort that culminated in a rambling list of AGI ruin scenarios and a short but intense rant in Time magazine a couple of weeks ago, where he writes: "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter." Now that's quite something. He also calls for "rogue data centers" to be destroyed by airstrike, and thinks that "preventing AI scenarios is considered a priority above preventing a full nuclear exchange." Yes, that sounds utterly nuts. If Bach is the Spanish Inquisition, with Yudkowsky it's welcome to Jonestown. First-rate AI doom at its peak. But, not so fast: I am a big fan of applying the (pre)cautionary principle to so-called ruin problems, where the worst-case scenario has a hard-to-quantify but non-zero probability, and has truly disastrous and irreversible consequences. After all, it is reasonable to argue that we should err on the safe side when it comes to climate tipping points, the emergence of novel global pandemics, or the release of genetically modified organisms into ecologies we do not even begin to understand. So, let's have a look at Yudkowsky's worst-case scenario. Is it worth "shutting it all down?" Is it plausible, or even possible, that AGI is going to kill us all? How much should we worry? Well. There are a few serious problems with the argument. In fact, Yudkowsky's scenario for the end of the world is cartoonishly overblown. I don't want to give him too much airtime, so I will just point out a few problems that result in a probability for his worst-case scenario that is basically zero. Naught. End of the world postponed until further notice (or until that full nuclear exchange or human-created pandemic will wipe us all out). The basic underlying problem lies in Yudkowsky's metaphysical assumptions, which are, quite frankly, completely delusional. 
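A brief aside before dissecting those assumptions: the claim above, that unaligned AI on its own outweighs the sum of all the other listed risks, is a matter of simple arithmetic. The little Python check below uses only the figures quoted in this essay (Ord's full table in "The Precipice" contains further entries), so take it as an illustration of the comparison, nothing more.

```python
# Compare Ord's quoted 1:10 figure for unaligned AI against the sum of the
# other risk figures quoted above (not his complete table).
risks = {
    "natural causes (asteroid, supervolcano, supernova)": 1 / 10_000,
    "nuclear war": 1 / 1_000,
    "climate change": 1 / 1_000,
    "natural pandemics": 1 / 10_000,
    "engineered pandemics": 1 / 30,
}
unaligned_ai = 1 / 10

others = sum(risks.values())
print(f"sum of the other quoted risks: {others:.4f}")        # ~0.0355
print(f"unaligned AI alone:            {unaligned_ai:.4f}")  # 0.1000
print("AI alone exceeds the rest combined:", unaligned_ai > others)  # True
```

With that bit of arithmetic out of the way, back to Yudkowsky's assumptions.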
The first issue is that Yudkowsky, like all his techno-transcendentalist friends, assumes the inevitable emergence of AI that achieves "smarter-than-human intelligence" in the very near future. But it is never explained what that means. None of these guys can ever be bothered. Yudkowsky claims that's exactly the point: the threat of AGI does not hinge on specific details or predictions, such as the question of whether or not an AI could become conscious. Similar to Bach's idea that machines already "think" faster than humans, intelligence is simply about systems that "optimize hard and calculate outputs that meet sufficiently complicated outcome criteria." That's all. The faster and larger, the smarter. Humans, go home. From here on it's "Australopithecus trying to fight Homo sapiens." (Remember Bach's plants vs. humans?) AI will perceive us as "creatures that are very stupid and very slow." While it is true that we cannot know in detail how current AI algorithms work, how exactly they generate their output, because we cannot "decode anything that goes on in [their] giant inscrutable arrays," it is also true that we do have a very good idea of the fundamental limitations of such machines. For example, current AI models (no matter how complex) cannot "perceive humans as creatures that are slow and stupid" because they have no concept of "human," "creature," "slow," or "stupid." In general, they have no semantics, no referents outside language. It's simply not within their programmed nature. Nothing means anything to them. There are many other limitations. Here are a few basic things a human (or even a bacterium) can do, which AI algorithms cannot (and probably never will):

- Organisms are embodied, while algorithms are not. The difference is not just being located in a mobile (e.g., robot) body, but a fundamental blurring of hardware and software in the living world. Organisms literally are what they do. There is no hardware-software distinction. Computers, in contrast, are designed for maximal independence of software and hardware. Organisms make themselves, their software (symbols) directly producing their hardware (physics), and vice versa. Algorithms (no matter how "smart") are defined purely at the symbolic level, and can only produce more symbols, e.g., language models always stay in the domain of language. Their output may be instructions for an effector, but they have no external referents. Their interactions with the outside world are always indirect, mediated by hardware that is, itself, not a direct product of the software.
- Organisms have agency, while algorithms do not. This means organisms have their own goals, which are determined by the organism itself, while algorithms will only ever have the goals we give them, no matter how indirectly. Basically, no machine can truly want or need anything. Our telling them what to want or what to optimize for is not true wanting or goal-oriented behavior.
- Organisms live in the real world, where most problems are ill-defined, and information is scarce, ambiguous, and often misleading. We can call this a large world. In contrast, algorithms exist (by definition) in a small world, where every problem is well defined. They cannot (even in principle) escape that world. Even if their small world seems enormous to us, it remains small. And even if they move around the large world in robot hardware, they remain stuck in their small world. This is exactly why self-driving cars are such a tricky business.
- Organisms have predictive internal models of their world, based on what is relevant to their survival and flourishing. Algorithms are not alive and don't flourish or suffer. For them, everything and nothing is relevant in their small worlds. They do not need models and cannot have them. Their world is their model. There is no need for abstraction or idealization.
- Organisms can identify what is relevant to them, and translate ill-defined into well-defined problems, even in situations they have never encountered before. Algorithms will never be able to do that. In fact, they have no need to, since all problems are well-defined to begin with, and nothing and everything is relevant at the same time in their small world. All an algorithm can do is find correlations and features in its preordered data set. Such data are the world of the algorithm, a world which is purely symbolic.
- Organisms learn through direct encounters and active engagement with the physical world. In contrast, algorithms only ever learn from preformatted, preclassified, and preordered data (see the previous point). They cannot frame their problems themselves. They cannot turn ill-defined problems into well-defined ones. Living beings will always have to frame their problems for them.

I could go on and on. The bottom line is: thinking is not just "optimizing hard" and producing "complicated outputs." It is a qualitatively different process from algorithmic computation. To know is to live. As Alison Gopnik has correctly pointed out, categories such as "intelligence," "agency," and "thinking" do not even apply to algorithmic AI, which is just fancy high-dimensional statistical inference. No agency will ever spring from it, and without agency no true thinking, general intelligence, or consciousness. Artificial intelligence is a complete misnomer. The field should be called algorithmic mimicry: the increasingly convincing appearance of intelligent behavior. Pareidolia on steroids for the 21st century. There is no "there" there. The mimicry is utterly shallow. I've actually co-authored a peer-reviewed paper on this, with my colleagues Andrea Roli and Stuart Kauffman. Thus, when Yudkowsky claims that we cannot align a "superintelligent AI" to our own interests, he has not the faintest clue what he is talking about. Wouldn't it be nice if these AI nerds had at least a minimal understanding of the fundamental difference between the purely syntactic world their algorithms exist in, and the deeply semantic nature of real life? Instead, we get industry-sponsored academics and CEOs of AI companies telling us that it is us humans who are not that sophisticated after all. Total brainwash. Complete delusion. But how can I be so sure? Maybe the joke really is on us? Could Yudkowsky's doomsday scenario be right after all? Are we about to be replaced by AGI? Keep calm and read on: I do not think we are. Yudkowsky's ridiculous scenarios of AI creating "super-life" via email (I will not waste any time on this), and even his stupid "thought experiment" of the paperclip maximizer, do not illustrate any real alignment problems at all. If you do not want the world to be turned into paperclips, pull the damn plug out of the paperclip maker. AI is not alive. It is a machine. You cannot kill it, but you can easily shut it off. Alignment achieved. Voilà! If an AI succeeds in turning the whole world into paperclips, it is because we humans have put it in a position to do so. 
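To make the phrase "fancy high-dimensional statistical inference" a little more concrete before moving on, here is a deliberately crude sketch of the kind of operation involved: a toy bigram next-word predictor. The corpus and the code are my own illustration and bear no resemblance to a modern language model in scale or sophistication, but they do illustrate the point made above: such a system only ever recombines symbol statistics, and no referent outside the stream of symbols enters the process at any point.

```python
import random
from collections import Counter, defaultdict

# A toy bigram "language model": nothing but co-occurrence statistics over
# symbols. The corpus is made up for illustration; real systems differ
# enormously in scale, but not in the kind of operation performed.
corpus = "the cat sat on the mat the cat ate the fish on the mat".split()

bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1            # count how often w2 follows w1

def next_word(word):
    """Sample a continuation in proportion to observed counts."""
    counts = bigrams[word]
    if not counts:                  # dead end: fall back to a random word
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

random.seed(0)
word, generated = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    generated.append(word)
print(" ".join(generated))          # plausible-looking word salad, no meaning
```

Scaling this up by many orders of magnitude, and replacing bigram counts with learned high-dimensional representations, changes what the mimicry can imitate, not what kind of process it is.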
Let me tell you this: the risk of AGI takeover and apocalypse is zero, or very very near zero, not just in the next 100 years. At least in this regard, we may sleep tight at night. There is no longtermist nirvana, and no doomer AGI apocalypse. Let's downgrade that particular risk by a few orders of magnitude. I'm usually not in the business of pretending to know long-term odds, but I'll give it a 1:1,000,000,000, or thereabouts. You know, zero, for all practical purposes. Let's worry about real problems instead. What happened to humanity that we even listen to these people? The danger of AGI is nil, but the danger of libertarian neofeudalism is very very real. Why would anyone in their right mind buy into techno-transcendentalism? It is used to enslave us. To take our freedom away. Why then do so many people fall for this narrative? It's ridiculous and deranged. Are we all deluded? Have we lost our minds? Yes, no doubt, we are a bit deluded, and we are losing our minds these days. I think that the popularity of the whole techno-transcendental narrative springs from two main sources. First, a deep craving — in these times of profound meaning crisis — for a positive mythological vision, for transformative stories of salvation. Hence the revived popularity of a markedly unmodern Christian ideology in this techno-centric age, paralleling the recent resurgence of actual evangelical movements in the U.S. and elsewhere in the world. But, in addition, the acceptance of such techno-utopian fairy tales also depends on a deeper metaphysical confusion about reality that characterizes the entire age of modernity: it is the mistaken but deeply entrenched idea that everything — the whole world and all the living and non-living things within it — is some kind of manipulable mechanism. If you ask me, it is high time that we move beyond this age of machines, and leave its technological utopias and nightmares behind. It is high time we stop listening to the techno-transcendentalists, make their business model illegal, and place their horrific political ideology far outside our society's Overton window. Call me intolerant. But tolerance must end where such serious threats to our sanity and well-being begin. A MACHINE METAPHYSICAL MESS As I have already mentioned, techno-transcendentalism poses as a rational science-based world view. In fact, it often poses as the only really rational science-based world view, for instance, when it makes an appearance within the rationality community. If you are a rigorous thinker, there seems to be no alternative to its no-nonsense mechanistic tenets. My final task here is to show that this is not at all true. In fact, the metaphysical assumptions that techno-transcendentalism is based on are extremely dubious. We've already encountered this issue above, but to understand it in a bit more depth, we need to look at these metaphysical assumptions more closely. Metaphysics does not feature heavily in any of the recent discussions about AGI. In general, it is not a topic that a lot of people are familiar with these days. It sounds a little detached, and old-fashioned — you know, untethered in the Platonic realm. We imagine ancient Greek philosophers leisurely strolling around cloistered halls. Indeed, the word comes from the fact that Aristotle's "first philosophy" (as he called it) was collected in a book that came right after his "Physics." In this way, it is literally after or beyond ("meta") physics. In recent times, metaphysics has fallen into disrepute as mere speculation. 
Something that people with facts don't have any need for. Take the hard-nosed logical positivists of the Vienna Circle in the early 20th century. They defined metaphysics as "everything that cannot be derived through logical reasoning from empirical observation," and declared it utterly meaningless. We still feel the legacy of that sentiment today. Many of my scientist colleagues still think metaphysics does not concern them. Yet, as philosopher Daniel Dennett rightly points out: "there is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination." And, my oh my, there is a lot of unexamined baggage in techno-transcendentalism. In fact, the sheer number of foundational assumptions that nobody is allowed to openly scrutinize or criticize is ample testament to the deeply cultish nature of the ideology. Here, I'll focus on the most fundamental assumption on which the whole techno-transcendentalist creed rests: every physical process in the universe must be computable. In more precise and technical terms, this means we should be able to exactly reproduce any physical process by simulating it on a universal Turing machine (an abstract model of a digital computer with potentially unlimited memory, invented in 1936 by Alan Turing, the man who gave the Turing Award its name). To clarify, the emphasis is on "exactly" here: techno-transcendentalists do not merely believe that we can usefully approximate physical processes by simulating them in a digital computer (which is a perfectly defensible position) but, in a much stronger sense, that the universe and everything in it — from molecules to rocks to bacteria to human brains — literally is one enormous digital computer. This is techno-transcendentalist metaphysics. This universal computationalism includes, but is not restricted to, the simulation hypothesis. Remember: if the whole world is a simulation, then there is a simulator outside it. In contrast, the mere claim that everything is computation does not, by itself, imply a supernatural simulator. Turing machines are not the only way to conceptualize computing and simulation. There are other abstract models of computation, such as lambda calculus or recursive function theory, but they are all equivalent in the sense that they all yield the exact same set of computable functions. What can be computed in one paradigm can be computed in all the others. This fundamental insight is captured by something called the Church-Turing thesis. (Alonzo Church was the inventor of lambda calculus and Turing's PhD supervisor.) It unifies the general theory of computation by saying that every effective computation (roughly, anything you can actually compute in practice) can be carried out by an algorithm running on a universal Turing machine. This thesis cannot be proven in a rigorous mathematical sense (basically because we do not have a precise, formal, and general definition of "effective computation"), but it is also not controversial. In practice, the Church-Turing thesis is a very solid foundation for a general theory of computation. The situation is very different when it comes to applying the theory of computation to physics. Assuming that every physical process in the universe is computable is a much stronger form of the Church-Turing thesis, called the Church-Turing-Deutsch conjecture. It was proposed by physicist David Deutsch in 1985, and later popularized in his book "The Fabric of Reality." 
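Since so much hangs on these abstractions, a minimal sketch may help readers who have never seen one. The Python snippet below (my own, purely illustrative example; the bit-flipping machine is not anything from Turing or Deutsch) simulates a simple, non-universal Turing machine. The Church-Turing thesis says, roughly, that anything effectively computable can be carried out by some machine of this kind; the Church-Turing-Deutsch conjecture adds the much stronger claim that every physical process can be reproduced exactly by one.

```python
def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine.

    transitions maps (state, symbol) -> (new_state, symbol_to_write, move),
    with move being -1 (left) or +1 (right). The machine halts as soon as
    no transition is defined for the current (state, symbol) pair.
    """
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:
            break                   # halt
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))

# A tiny illustrative machine: walk right and flip every bit, halting at the
# first blank cell.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}

print(run_tm(flip_bits, "0110"))    # prints "1001"
```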
It is important to note that this physical version of the Church-Turing thesis does not logically follow from the original. Instead, it is intended to be an empirical hypothesis, testable by scientific experimentation. And here comes the surprising twist: there is no evidence at all that the Church-Turing-Deutsch conjecture applies. Not one jot. It is mere speculation on the part of Deutsch, who surmised that the laws of quantum mechanics are indeed computable, and that they describe every physical process in the universe. Both assumptions are highly doubtful. In fact, there are solid arguments that quite convincingly refute them. These arguments indicate that not every physical process is computable, and perhaps that no physical process can be precisely captured by simulation on a Turing machine. For instance, neither the laws of classical physics nor those of general relativity are entirely computable (since they contain noncomputable real numbers and infinities). Quantum mechanics introduces its own difficulties in the form of the uncertainty principle and its resulting quantum indeterminacy. The theory of measurement imposes its own (very different) limitations. Beyond these quite general doubts, a concrete counterexample of noncomputable physical processes is provided by Robert Rosen's conjecture that living systems (and all those systems that contain them, such as ecologies and societies) cannot be captured completely by algorithmic simulation. This theoretical insight, based on the branch of mathematics called category theory, was first formulated in the late 1950s, presented in detail in Rosen's book "Life Itself" (1991), and later derived in a mathematically airtight manner by his student Aloysius Louie in "More Than Life Itself." This work is widely ignored, even though its claims remain firmly standing, despite numerous attempts at refutation. This, arguably, renders Rosen's claims more plausible than those derived from the Church-Turing-Deutsch conjecture. I could go on. But I guess the main point is clear by now: the kind of radical and universal computationalism that grounds techno-transcendentalism does not stand up to closer philosophical and scientific scrutiny. It is shaky at best, and completely upside-down if you're a skeptic like I am. There is no convincing reason to believe in it. Yet, this state of affairs is glibly disregarded, not only by techno-transcendentalists, but also by a large and prominent number of contemporary computer scientists, physicists, and biologists. The computability of everything is an assumption that has become self-evident not because of, but in spite of, the existing evidence. How could something like this happen? How could this unproven but fundamental assumption have escaped the scrutiny of the organized skepticism so much revered and allegedly practiced by techno-transcendentalists and other scientist believers in computationalism? Personally, I think that the uncritical acceptance of this dogma comes from the mistaken idea that science has to be mechanistic and reductionist to be rigorous. The world is simply presupposed to be a machine. Algorithms are the most versatile mechanisms humanity has ever invented. Because of this, it is easy to fall into the mistaken assumption that everything in the world works like our latest and fanciest technology. But that's a vast and complicated topic, which I will reserve for another blog post in the future. 
With the assumption that everything is computation falls the assumption that algorithmic simulation corresponds to real cognition in living beings in any meaningful way. It is not at all evident that machines can "think" the way humans do. Why should thinking and computing be equivalent? Cognition is not just a matter of speedy optimization or calculation, as Yudkowsky asserts. There are fundamental differences in how machines and living beings are organized. There is absolutely no reason to believe that machines will outpace human cognitive skills any time soon. Granted, they may do better at specific tasks that involve the detection of high-dimensional correlations, and also those that require memorizing many data points (humans can only hold about seven objects in short-term memory at any given time). Those tasks, and pen-and-paper calculations in particular, constitute the tiny subset of human cognitive skills that served as the template for the modern concept of "computation" in the first place. But brains can do many more things, and they certainly have not evolved to be computers. Not at all. Instead, they are organs adapted to help animals better solve the problem of relevance in their complex and inscrutable environment (something algorithms famously cannot do, and probably never will). More on that in a later blog post. I'm currently also writing a scientific paper on the topic. But that is not the main point here. The main point is: the metaphysics of techno-transcendentalism — its radical and universal computationalism as well as the belief in the inevitable supremacy of machines — is based on a simple mistake, a mistake which is called the fallacy of misplaced concreteness (or fallacy of reification). Computation is an abstracted way to represent reality, not reality itself. Techno-transcendentalists (and all other adherents of strong forms of computationalism) simply mistake the map for the territory. The world is not a machine and, in particular, living beings are not machines. Neither of them is some kind of digital computation. Conversely, computers cannot think like living beings can. In this sense, they are not intelligent at all, no matter how sophisticated they seem to us. Even a bacterium can solve the problem of relevance, but the "smartest" contemporary algorithm cannot. Philosophers call what is happening here a fundamental category error. This brings us back to Alison Gopnik: even though AI researchers like LeCun chide everyone for being uneducated about their work, they themselves are completely clueless when it comes to concepts such as "thinking," "agency," "cognition," "consciousness," and indeed "intelligence." These concepts represent abilities that living beings possess, but algorithms cannot. Not just techno-transcendentalists but, sadly, also most biologists today are deeply ignorant of this simple distinction. As long as this is the case, our discussion about AI, and AGI in particular, will remain deeply misinformed and confused. What emerges at the origin of life, the capability for autonomous agency, springs from a completely new organization of matter. What emerges in a contemporary AI system, in contrast, is nothing but high-dimensional correlations that seem mysterious to us limited human beings because we are very bad at dealing with processes that involve many variables at the same time. The two kinds of emergence are fundamentally and qualitatively different. No conscious AI, or AI with agency, will emerge any time soon. 
In fact, no AGI will ever be possible in an algorithmic framework. The end of the world is not nearly as nigh as Yudkowsky wants to make us believe. Does that mean that the current developments surrounding AI are harmless? Not at all! I have argued that techno-transcendentalist ideology is not just a modern mythological narrative, but also a useful tool to serve the purpose of bringing about libertarian neofeudalism. Not quite the end of the world, but a terrible enough prospect, if you ask me. The technological singularity is not coming. Virtual heaven is not going to open its gates to us any time soon. Instead, the neo-religious template of techno-transcendentalism is a tried and true method from premodern times to keep the serfs in line with threats of the apocalypse and promises of eternal bliss. Stick and carrot. Unlike AI research itself, this is not exactly rocket science. But, you may think, is this argument not overblown itself? Am I paranoid? Am I implying malicious intent where there is none? That is a good question. I think there are two types of protagonists in this story of techno-transcendentalism: the believers and the cynics. Both, in their own ways, think they are doing what is best for humanity. They are not true villains. Yet, both are affected by delusions that will critically undermine their project, with potentially catastrophic effects. With their ideological blinkers on, they cannot see these dangers. They may not be villains, but they are certainly boneheaded enough, foolish in the sense of lacking wisdom, that we do not want them as our leaders. The central delusion they all share is the following: both believers and cynics think that the world is a machine. Worse, it is their plaything — controllable, predictable, programmable. And they all want to be in charge of play, they want to steer the machine, they want to be the programmer, without too much outside interference. A bunch of 14-year-old boys that are fighting over who gets to play the next round of Mario Kart. Something like that. Hence neofeudalism, and more or less overt anti-democratic activism. The oncoming social disruption is part of the program. This much, at least, is done with intent. There can be no excuses afterwards. We know who is responsible. However, there are also fundamental differences between the two camps. In particular, the believers obviously see techno-transcendentalism as a mythological narrative for our age, a true utopian vision, while the cynics see it only as a tool that serves their ulterior motives. Between these two extremes lies a spectrum. Take Eliezer Yudkowsky, for example. He is at the extreme "believer" end of the scale. Joscha Bach is a believer too, but much more optimistic and moderate. They both have wholeheartedly bought into the story of the inevitable singularity — faith, hope, and love — and they both truly believe they're among the chosen ones in this story of salvation, albeit in very different ways: Bach as the leader of the faithful, Yudkowsky as the prophet of the apocalypse. Elon Musk and Yann LeCun are at the other end of the spectrum, only to be outpaced by Peter Thiel (another infamous silicon-valley tycoon) in terms of cynicism. What counts in the cynic's corner are only two things: unfettered wealth and power. Not just political power, but power to remake the world in their own image. They see themselves as engineers of reality. No mythos required. 
These actors do not buy into the techno-transcendentalist cult, but its adherents serve a useful purpose as the foot soldiers (often cannon fodder) of the coming revolution. All this is wrapped up in longtermist philosophy: it's ok if you suffer and die, if we all go extinct even, as long as the far-future dream of galactic conquest and eternal bliss in simulation is on course, or at least intact. That is humanity's long-term destiny. It is an aim that is shared among believers and cynics. Their differing attitudes only concern the more or less pragmatic way to get there by overcoming our temporary predicaments with the help of various technological fixes. This is the true danger of our current moment in human history. I have previously put the risk of an AGI apocalypse at basically zero. But don't get me wrong. There is a clear and present danger. The probability of squandering humanity's future potential with AI is much, much higher than zero. (Don't ask me to put a number on it. I'm not a longtermist in the business of calculating existential risk.) Here, we have a technology, massively wasteful in terms of energy and resources, that is being developed at scale at breakneck speed by people with the wrong kind of ethical commitments and a maximally deluded view of themselves and their place in the universe. We have no idea where this will lead. But we know change will be fast, global, and hard to control. What can possibly go wrong? Another thing is quite predictable: there will be severe unintended consequences, most of them probably not good. For the longtermists such short-term consequences do not even matter, as long as the risk associated is not deemed existential (by themselves, of course). Even human extinction could just be a temporary inconvenience as long as the transcendence, the singularity, the transition to "substrate-agnostic" intelligence is on the way. This is why we need to stop these people. They are dangerous and deluded, yet full of self-confidence — self-righteous and convinced that they know the way. Their enormous yet brittle egos tend to be easily bruised by criticism. In their boundless hubris, they massively overestimate their own capacities. In particular, they massively overestimate their capacity to control and predict the consequences of what they are doing. They are foolish, misled by a world view and mythos that are fundamentally mistaken. What they hate most (even more than criticism) is being regulated, held back by the ignorant masses that do not share their vision. They know what's best for us. But they are wrong. We need to slow them down, as much as possible and as soon as possible. This is not a technological problem, and not a scientific one. Instead, it is political. We do not need to stop AI research. That would be pretty pointless, especially if it is only for a few months. Instead, we need to stop the uncontrolled deployment of this technology until we have a better idea of its (unintended) consequences, and know what regulations to put in place. This essay is not about such regulations, not about policy, but a few measures immediately suggest themselves. By internalizing the external costs of AI research, for example, we could effectively slow its rate of progress and interfere with the insane business model of the tech giants behind it. Next, we need to put laws in place. We need our own Butlerian jihad (if you're a Dune fan like me): "Thou shalt not make a machine in the likeness of a human mind."
Or, as Daniel Dennett puts it: "Counterfeit money has been seen as vandalism against society ever since money has existed. Punishments included the death penalty and being drawn and quartered. Counterfeit people is at least as serious." I agree. We cannot have fake people, and building algorithmic mimicry that impersonates existing or non-existing persons must be made illegal, as soon as possible. Last but not least, we need to educate people about what it means to have agency, intelligence, consciousness, how to talk about these topics, and how seemingly "intelligent" machines do not have even the slightest spark of any of that. This time, the truth is not somewhere in the middle. AI is stochastic parrots all the way down. We need a new vocabulary to talk about such algorithms. Algorithmic mimicry is a tool. We should treat and use it as such. We should not interact with algorithms as if they were sentient persons. At the same time, we must not treat people like machines. We have to stop optimizing ourselves and tuning our performance in a game nobody wants to play. You do not strive for alignment with your screwdriver. Neither should you align with an algorithm or the world it creates for you. Always remember: you can switch virtuality off if it confuses you too much. Of course, this is no longer possible once we freely and willingly give away our agency to algorithms that have none. We can no longer make sense of a world that is flooded with misinformation. Note that the choice is entirely up to us. It is in our own hands. The alignment problem is exactly upside-down: the future supremacy of machines is never going to happen if we don't let it happen. It is the techno-transcendentalists who want to align you to their purpose. Don't be their fool. Refuse to play along. Don't be a serf. This would be the AI revolution worth having. Are you with me? Images were generated by the author using DALL-E 2 with the prompt "the neo-theistic cult of silicon intelligence." Here is an excellent talk by Tristan Harris and Aza Raskin of the Center for Humane Technology, warning us about the dire consequences of algorithmic mimicry and its current business model: https://vimeo.com/809258916/92b420d98a. Ironically, even these truly smart skeptics fall into the habit of talking about algorithms as if they "think" or "learn" (chemistry, for example), highlighting just how careful we need to be not to attribute any "human spark" to what is basically a massive statistical inference machine.
This week, I was invited to give a three-minute flash talk at an event called "Human Development, Sustainability, and Agency," which was organized by IIASA (the International Institute for Applied Systems Analysis), the United Nations Development Programme (UNDP), and the Austrian Academy of Sciences (ÖAW). The event was framed around the release of a UNDP report called "Unsettled times, unsettled lives: shaping our future in a transforming world." It forms part of IIASA's "Transformations within Reach" (TwR) project, which looks for ways to transform societal decision-making systems and processes to facilitate transformation to sustainability. You can find more information on our research project on agency and evolution here. My flash talk was called "Beyond the Age of Machines." Because it was so short, I can share my full-length notes with you. Here we go: "Hello everyone, and thank you for the opportunity to share a few of my ideas with you, which I hope illuminate the topic of agency, sustainability, and human development, and provide some inspiring food for thought. I am an evolutionary systems biologist and philosopher of science who studies organismic agency and its role in evolution, with a particular focus on evolutionary innovation and open-ended evolutionary dynamics. I consider human agency and consciousness to be highly evolved expressions of a much broader basic ability of all living organisms to act on their own behalf. This kind of natural agency is rooted in the peculiar self-manufacturing organization of organisms, and the consequences this organization has for how organisms interact with their environment (their agent-arena relationship). In particular, organisms distinguish themselves from non-living machines in that they can set and pursue their own intrinsic goals. This, in turn, enables living beings to realize what is relevant to them (and what is not) in the context of their specific experienced environment. Solving the problem of relevance is something a bacterium (or any other organism) can do, but even our most sophisticated algorithms never will. This is why there will never be any artificial general intelligence (AGI) based on algorithmic computing. If AGI is ever generated, it will come out of a biology lab (and will not be aligned with human interests), because general intelligence requires the ability to realize relevance. And yet, we humans increasingly cede our agency and creativity to mindless algorithms that completely lack these properties. Artificial intelligence (AI) is a gross misnomer. It should be called algorithmic mimicry, the computational art of imitation. AI always gets its goals provided by an external agent (the programmer). It is instructed to absorb patterns from past human activities and to recombine them in sometimes novel and surprising ways. The problem is that an increasing amount of digital data will be AI-generated in the near future (and it will become increasingly difficult to tell computer- and human-generated content apart), meaning that AI algorithms will be trained increasingly on their own output. This creates a vicious inward spiral which will soon be a substantial impediment to the continued evolution of human agency and creativity. It will be crucial to take early action towards counteracting this pernicious trend through proper regulation, and a change in the design of the interfaces that guide the interaction of human agents with non-agential algorithms.
In summary, we need to relearn to treat our machines for what they are: tools to boost our own agency, not masters to which we delegate our creativity and ability to act. For continued sustainable human development, we must go beyond the age of machines. Thank you very much." SOURCES and FURTHER READING: "organisms act on their own behalf": Stuart Kauffman, Investigations, OUP 2000. "the self-manufacturing organization of the organism": see, for example, Robert Rosen, Life Itself, Columbia Univ Press, 1991; Alvaro Moreno & Matteo Mossio, Biological Autonomy, Springer, 2015; Jan-Hendrik Hofmeyr, A biochemically-realisable relational model of the self-manufacturing cell, Biosystems 207: 104463, 2021. "organismic agents and their environment": Denis Walsh, Organisms, Agency, and Evolution. CUP, 2015. "the agent-arena relationship": a concept first introduced in John Vervaeke's "Awakening from the Meaning Crisis," and also discussed in this interesting dialogue. "agency and evolutionary innovation": https://osf.io/2g7fh. "agency and open-ended evolutionary dynamics": https://osf.io/yfmt3. "organisms can set their own intrinsic goals": Daniel Nicholson, Organisms ≠ Machines. Stud Hist Phil Sci C 44: 669–78, 2013. "to realize what is relevant": John Vervaeke, Timothy Lillicrap & Blake Richards, Relevance Realization and the Emerging Framework in Cognitive Science. J Log Comput 22: 79–99, 2012. "solving the problem of relevance": see Stanford Encyclopedia of Philosophy, The Frame Problem. "there will never be artificial general intelligence based on algorithmic computing": https://osf.io/yfmt3. "we humans cede our agency": see The Social Dilemma. II. A Naturalistic Philosophy of Biology This three-part series of blog posts is based on a talk I gave at the workshop on "A New Naturalism: Towards a Progressive Theoretical Biology," which I co-organized with philosophers Dan Brooks and James DiFrisco at the Wissenschaftskolleg zu Berlin in October 2022. This part of the series draws heavily on a paper (available as a preprint here) that is currently under review at Royal Society Open Science. You can find part I here, and part III here. In the first part of this three-part series, I have outlined why I think we urgently need more philosophy in biology today. More specifically, I have argued that we need two kinds of philosophical approaches: on the one hand, a new naturalist philosophy of biology which is concerned with examining the practices, methods, concepts, and theories of our discipline and how they are used to generate scientific knowledge (see this post). This branch of the philosophy of science is relevant for practicing biologists since it boosts their understanding of what is realistically achievable, increases the range of questions they can ask, clarifies what kinds of methods and approaches are most appropriate and promising in a given situation, and reveals how their work is best (and most wisely) contextualized within the big questions about life, human nature, and our place in the universe. On the other hand, we need a philosophical kind of theoretical biology, which operates within the life sciences and consists of philosophical methods that biologists themselves can use to better understand biological concepts, and to solve biological problems (see part III). WHAT IS NATURALIST PHILOSOPHY? Let's talk about naturalist philosophy of biology first. And, no, it does not have anything to do with bird watching or nature documentaries. I don't mean that kind of "naturalist." 
What distinguishes a naturalist philosophy of biology (from a foundationalist one, let's say) are the following criteria:
To summarize: naturalistic philosophy of biology attempts to accurately describe and understand how biology is actually done by real-world biologists at this present moment. Yet it must not remain purely descriptive. For it to be useful to biologists, we need an interventionist philosophy of biology that actively shapes the kinds of questions we can ask, the kinds of methods we can use, and the kinds of explanations we accept as valid answers to our questions. All of these will necessarily change as the field moves on. What we want, therefore, is an adaptive co-evolution of biology and its philosophy, a constant synergy, a dialectic spiral in which the two disciplines shape and support each other, lifting one another to ever higher levels of understanding in the process. The problem is that we are very far from the optimistic vision I have just outlined. In fact, very few scientists these days get any philosophical education at all. This is a serious problem that I attempt to address with my own philosophy courses for researchers. It leads to a situation where many scientists hold very outdated philosophical beliefs, and many unironically proclaim that they do not adhere to any philosophical position at all, or even that "philosophy is dead," as the late physicist Stephen Hawking once remarked. This leads to some serious misconceptions among scientists about how science works and what valid scientific explanations are. These misconceptions are now hindering progress in biology. Furthermore, they underlie the uncritical acceptance of the pernicious cult of productivity that rules supreme in contemporary scientific research. When scientists are aware of their philosophical views, they often profess adherence to something we could call naïve realism. Naïve realism is a form of objectivist realism that consists of a loose and varied assortment of philosophical preconceptions that, although mostly outdated, continue to shape our view of science and its role in society. This view does not amount to a systematic or consistent philosophical doctrine. Instead, naïve realism is a mixed bag of more or less vaguely held convictions, which often contradict one another, and leave many problems concerning the scientific method and the knowledge it produces unresolved. Without going into detail, naïve realism usually includes ideas from logical positivism, Popperian falsificationism, and Merton's sociological ethos of science (I've written about this in a lot more detail here, if you're interested). Despite its intuitive appeal, naïve realism is not a naturalistic philosophy of science at all. On the contrary, it is a highly idealized view of how science should work. It paints a deceptively simple picture of a universal scientific method that, when applied properly, leads to an automatic asymptotic approximation of our knowledge of the world to the truth (see figure below). On this view, the process of producing knowledge can be fully formalized as a combination of empirical experimentation and logical inference for hypothesis testing. It leads to ever more accurate, trustworthy, and objective scientific representations of the world. We may not ever reach a complete description of the world, but certainly we're getting closer and closer over time. This view has some counterintuitive consequences. 
Not only does it imply that we should be able to replace scientists with algorithms some day (since science is seen as a completely formalizable activity), but it also suggests that we can generate more scientific knowledge simply by increasing the productivity of the knowledge-production system: increased pressure, better efficiency, faster convergence. Easy! Unfortunately, due to its (somewhat ironic) detachment from reality, naïve realism leads to all kinds of unintended consequences when applied to the actual process of doing science. One problem is that too much pressure limits creativity and prevents researchers from taking on original or high-risk projects. Another problem is that we give ourselves less and less time to think. We're always rushing into the next project, adopting the next method, generating the next big data set. This way, the research process gets stuck in local optima of the knowledge landscape. Every evolutionary theorist knows that too little variation leads to suboptimal outcomes. This is exactly what is happening in biology today. We are becoming trapped by our own ambitions, our rush to publish new results. How do we get out of this dilemma? What is needed is a less simplistic, less mechanistic approach to science, an approach that reflects the messy reality of limited human beings doing research in an astonishingly complex world that is far beyond our grasp, an approach that focuses on the quality of the knowledge-production process rather than the amount of output it produces. Luckily, such an approach is already available. The challenge is to make it known more widely, not just among philosophers of science but among researchers in the life sciences themselves. This naturalist philosophy of science consists of three main pillars (see figure below):
1. SCIENCE AS PERSPECTIVE The first problem that naïve realism faces is that there simply is no universal scientific method. Science is quite obviously a cultural construct in the sense that it consists of practices which involve the finite cognitive and technological abilities of human beings, firmly embedded in a specific social and historical context. For this reason, scientists use quite different approaches depending on the problem they are trying to solve, on the traditions of their scientific discipline, and on their own educational background and cognitive abilities. This kind of relativist view can be taken to extremes, however. Strong forms of social constructivism claim that science is nothing but social discourse, the knowledge it produces no better than any other way of knowing, like poetry or religion, which are also considered types of social discourse. This strong constructivist position is certainly not naturalistic, and it is just as oversimplified as naïve realism. Therefore, I believe that a naturalistic philosophy of biology must find a middle way between the opposing extremes of social constructivism and naïve realism. An approach that achieves this is perspectival realism. The best way to learn about this philosophy is to read Bill Wimsatt's "Re-engineering Philosophy for Limited Beings." It's not an easy read, but it will change the way you see the world, I can promise you that much. In addition, I recommend Ron Giere's "Scientific Perspectivism" (which will give you a quick overview of the essentials), and Michela Massimi's "Perspectival Realism" (published this year). Finally, Roy Bhaskar's critical realism is worth mentioning as a pioneering branch of the perspectivist family of philosophies described here. I will mainly rely on Wimsatt's excellent book in what follows. Perspectival realism holds that there is an accessible reality, a causal structure of the universe, whose existence is independent of the observer and their effort to understand it. Science provides a collection of methodologies and practices designed for us to gain trustworthy knowledge about the structure of reality, minimizing bias and the danger of self-deception. At the same time, perspectival realism also acknowledges that we cannot step out of our own heads: it is impossible to gain a purely objective “view from nowhere.” Each individual researcher and each society has its unique perspective on the world, and these perspectives matter for science. It needs to be said again at this point that perspectivism is not relativism. A scientific perspective is not just someone's opinion or point of view. This is the difference between what Richard Bernstein has called flabby versus engaged pluralism: each new perspective must be rigorously justified. Wimsatt defines a perspective as an “intriguingly quasi-subjective (or at least observer, technique or technology-relative) cut on the phenomena characteristic of a system.” Perspectives may be limited and context-dependent, but they are also grounded in reality. They are not a bug, but a central feature of the scientific approach. Our perspectives are what connects us to the world. It is only through them, by systematically examining their differences and connections, that we can gain any kind of inter-subjective access to reality at all. This is how we produce (scientific) knowledge that is sound, robust, and trustworthy. In fact, it is more robust than what we get from any other way of knowing. 
This is, and remains, exactly the purpose and societal function of science, which leads us to a number of powerful principles that arise from a perspectivist-realist approach to science:
Perspectival realism is relevant for a naturalist philosophy of science because it takes the practice of doing science for what it is instead of aiming for some unattainable ideal. At the same time, it acknowledges and justifies the special status of scientific knowledge compared to other ways of knowing. In addition, it refocuses our attention from the product or outcome of the scientific process to the quality of that process itself. How we establish our facts matters. This is why we will be talking about the importance of process thinking for naturalistic philosophy of science next. 2. SCIENCE AS PROCESS The second major criticism that naïve realism must face is that it is excessively focused on research outcomes — science producing immutable facts — thereby neglecting the intricacies and the importance of the process of inquiry. Basically, looking at scientific knowledge only as the product of science is like looking at art in a museum. The product of science is only as good as the process that generates it. Moreover, many perfectly planned and executed research projects fail to meet their targets, but that is often a good thing: scientific progress relies as much on failure as it does on success (see above). Some of the biggest scientific breakthroughs and conceptual revolutions have come from projects that have failed in interesting ways. Think about the unsuccessful attempt to formalize mathematics, which led to Gödel’s Incompleteness Theorem, or the scientific failures to confirm the existence of phlogiston, caloric, and the luminiferous ether, which opened the way for the development of modern chemistry, thermodynamics, and electromagnetism. Adhering too tightly to a predetermined worldview or research plan can prevent us from following up on the kind of surprising new opportunities that are at the core of scientific innovation. For this reason, we should focus more on whether we are doing science the right way, not whether we produce the kinds of results we expected to find. More often than not, the goal in basic science is the journey. First of all, scientific knowledge itself is not fixed. It is not a simple collection of unalterable facts. The edifice of our scientific knowledge is constantly being extended. At the same time, it is in constant need of maintenance and renovation. This process never ends. For all practical purposes, the universe is cognitively inexhaustible. There is always more for us to learn. As finite beings, our knowledge of the world will forever remain incomplete. Besides, what we can know (and also what we want or need to know) changes significantly over time. Our goalposts are constantly shifting. The growth of knowledge may be unstoppable, but it is also at times erratic, improvised, and messy — anything but the straight convergence path of naïve realism depicted in the figure above. Once we realize this, the process of knowledge production becomes an incredibly rich and intricate object of study in itself. The aim and focus of our naturalist philosophy of science must be adjusted accordingly. Naïve realism considers knowledge in an abstract manner (e.g. as "justified true belief") and tries to find universal principles which allow us to establish it beyond any reasonable doubt. 
Naturalist philosophy of science, in contrast, goes for a more humble (but also much more achievable) target: to understand and assess the quality of actual human research activities, including technological and methodological aspects, but also individual cognitive performance and the social structure of scientific communities. It asks which strategies we — as finite beings, in practice, given our particular circumstances — can and should be using to improve our knowledge of the world. As Philip Kitcher has pointed out, the overall goal of naturalist philosophy is to collect a compendium of locally optimal processes and practices that can be applied to the kinds of problems humans are likely to encounter. This is a much more modest and realistic aim than any quixotic quest for absolute or certain knowledge, but it is still extremely ambitious. Like the expansion of scientific knowledge itself, it is a never-ending process of iterative and recursive improvement. As limited beings, we are condemned to always build on the imperfect basis of what we have already constructed. Just like perspectival realism, a process philosophy of science fosters context-specific strategies that allow us to attain a set of given goals. What is important for our discussion is that different research strategies and practices will be optimal under different circumstances. There is no universally optimal strategy for solving problems — there is no free lunch. What approach to choose will depend on the current state of knowledge and level of technological development, the available human, material, and financial resources, and the scientific (and non-scientific) goals a project attempts to achieve. The right choice of strategy is in itself an empirical question. A naturalist philosophy of science must be based on history and empirical insights into error-prone heuristics that have worked for similar goals and under similar circumstances before. We cannot justify scientific knowledge in an abstract and general way, but we can get better over time at appraising its robustness and value by studying the process of inquiry itself, in all its glorious complexity, with all its historical contingencies and cultural idiosyncrasies. An interesting example of an insight gained from such an inquiry is what Thomas Kuhn called the essential tension between a productive research tradition and risky innovation. In computer science, this has been recast as the strategic relationship between exploration (gathering new information) and exploitation (putting existing information to work). For any realistic research setting, this relationship cannot be determined explicitly as a fixed ratio or a set of general rules. Instead, we need to switch strategy dynamically, based on local criteria and incomplete knowledge. The situation is far from hopeless though, since some of these criteria can be empirically determined. For instance, it pays for an individual researcher, or an entire research community, to explore at the onset of an inquiry. This happens at the beginning of an individual research career, or when a new research field opens up. Over time, as a researcher or field matures and information accumulates, exploration yields diminishing returns. At some point, it is time to switch over to exploitation. This is an entirely rational meta-strategy, inexorably leading people (and research fields) to become more conservative over time, a tendency that has been robustly confirmed by ample empirical evidence. 
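To make this exploration-exploitation logic concrete, here is a minimal toy sketch in Python (my own illustration, not drawn from Kuhn or from any of the empirical studies alluded to above; all payoff functions and numbers are hypothetical). A researcher whose expected payoff from exploring shrinks as the field's accumulated knowledge grows, while the payoff from exploiting grows with it, will rationally switch strategies at a crossover point determined by the state of the inquiry itself rather than by any fixed, predetermined ratio:

```python
import random

def explore_payoff(knowledge):
    # Diminishing returns: the more a field already knows,
    # the less a random new probe is expected to yield.
    return 1.0 / (1.0 + knowledge)

def exploit_payoff(knowledge):
    # Exploitation pays roughly in proportion to what there is
    # to build on, saturating as the field matures.
    return knowledge / (1.0 + knowledge)

def research_career(steps=40, seed=1):
    random.seed(seed)
    knowledge = 0.1  # hypothetical initial stock of knowledge
    for t in range(steps):
        if explore_payoff(knowledge) >= exploit_payoff(knowledge):
            strategy = "explore"
            knowledge += random.uniform(0.0, 0.4)  # risky, variable gains
        else:
            strategy = "exploit"
            knowledge += 0.05                      # safe, incremental gains
        print(f"step {t:2d}: {strategy:8s} knowledge = {knowledge:.2f}")

if __name__ == "__main__":
    research_career()
```

In this caricature, the switch from exploration to exploitation happens automatically once accumulated knowledge passes the crossover point, which is precisely the growing conservatism of maturing researchers and fields described above. The point is not the arbitrary numbers, but that the optimal strategy is a local, dynamic property of the process of inquiry itself.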
Here, we have an example where the optimal research strategy depends on the process of inquiry itself. A healthy research environment provides scientists with enough flexibility to switch strategy dynamically, depending on circumstances. Unfortunately, our contemporary research system does not work this way. The fixation on short-term performance, assessed purely by measuring research output, has locked the process of inquiry firmly into exploitation mode. Exploration almost never pays off in such a system. It requires too much time, effort, and a willingness to fail. Exploration may be bad for productivity in the short term, but it is essential for innovation in the long run. This is a dilemma I have already outlined above. We are getting stuck on local (sub-optimal) peaks of knowledge. Only an empirically grounded understanding of the process of inquiry itself can lead us out of this trap. But this alone is not enough. We also need a better understanding of the social dimension of doing science, which is what we will be discussing next. 3. SCIENCE AS DELIBERATION The third major criticism that naïve realism must face is that it is obsessed with consensus and uniformity. Many people believe that the authority of science stems from unanimity, and is undermined if scientists disagree with each other. Ongoing controversies about climate science or evolutionary biology are good examples of this sentiment. To a naïve realist, the ultimate aim of science is to provide a single unified account — an elusive unified theory of everything — that most accurately represents all of reality. This kind of thinking about science thrives on competition: let the best argument (or theory) prevail. Truth is established by debate, which is won by persuading the majority of experts and stakeholders in a field that some perspective is better than all its competitors. There can only be one factual explanation. Everything else is mere opinion. However, there are good reasons to doubt this view. In fact, uniformity can be bad. This is because all scientific theories are underdetermined by empirical evidence. In other words, there is always an indefinite number of scientific theories able to explain a given set of observed phenomena. For most scientific problems, it is impossible in practice to unambiguously settle on a single best solution based on evidence alone. Even worse: in most situations, we have no way of knowing how many possible theories there actually are. Many alternatives remain unconsidered. Because of all this, the coexistence of competing theories need not be a bad thing. In fact, settling a justified scientific controversy too early may encourage agreement where there is none. It certainly privileges the status quo, which is generally the majority opinion, and it suppresses (and therefore violates) the voices of those who hold a justified minority view that is not easy to dismiss. In summary, too much pressure for unanimity leads to a dictatorship of the majority, and undermines the collective process of discovery within a scientific community. Let us take a closer look at what this process is. Specifically, let us ask which form of information exchange between scientists is most conducive to cultivating and utilizing the collective intelligence of the community. In the face of uncertainty and underdetermination, it is deliberation, not debate, that achieves this goal. Deliberation is a form of discussion based on dialogue rather than debate. 
The main aim of a deliberator is not to win an argument by persuasion, but to gain a comprehensive understanding of all valid perspectives present in the room, and to make the most informed choice possible based on the understanding of those perspectives. What matters most is not an optimal, unanimous outcome of the process, but the quality of the process of deliberation itself, which is greatly enhanced by the presence of non-dismissible minorities. The quality of a scientific theory increases with every challenge it receives. Such challenges can come in the form of empirical tests, or thoughtful and constructive criticism of a theory’s contents. The deliberative process, with its minority positions that provide these challenges, is stifled by too much pressure for a uniform outcome. As long as matters are not settled by evidence and reason, it is better — as a community — to suspend judgement and to let alternative explanations coexist. This allows us to explore. But, like other exploratory processes, deliberation needs time and effort. Deliberative processes cannot (and should not) be rushed. SCIENTIFIC (PSEUDO)CONTROVERSY: AN EXAMPLE OF NATURALIST PHILOSOPHY IN ACTION Scientific controversies provide a powerful example illustrating all three pillars of naturalistic philosophy of science in action. Let us take a quick look at an ongoing debate in evolutionary theory, for instance. It has its historical roots in the reduction of Darwinian evolution to evolutionary genetics, which took place from the 1920s onward. This change of focus away from the organism's struggle for survival towards an evolutionary theory based on the change of gene frequencies in populations is called the Modern Evolutionary Synthesis. In recent decades, a movement has emerged that challenges this purely reductionist approach to evolution. Since this movement was not officially out to overthrow the classical synthesis, but rather to add developmental and ecological aspects to its perspective, it called itself the Extended Evolutionary Synthesis. Since its emergence, there have been several high-profile publications (see here, or here, for example) debating whether such an extension is really necessary, useful, or neither. Based on what I have said before about perspectivism and deliberation, you may think that such a diversity of justified positions would be fruitful and conducive to scientific progress in evolutionary biology. Unfortunately, this could not be further from the truth. The controversy over the Extended Evolutionary Synthesis is particularly interesting, since the polarization between two dominant positions (which is based on a pseudo-debate, as we shall see) leads to the exclusion of rigorously argued alternative views. The duopoly acts like a monopoly, destroying proper deliberative practice in the process. As Wimsatt points out, the failure to recognize or acknowledge the perspectivist nature of scientific knowledge leads to many misunderstandings in science. Simply put, there are two types of controversies that we may distinguish: the first is a genuine dispute, usually about factual, conceptual, or methodological matters; the second is a territorial conflict, whose causes are, to an important degree, political or sociological in nature. 
The true nature of the latter is often hidden behind a smokescreen of pseudo-debates about matters that could easily be resolved if the participants would only see that they are approaching the same problem (or at least related problems) from a different perspective. Instead of struggling over power and money, the disputants in such controversies could move on by simply learning how to talk to each other across the fence, that is, across their perspectival divide. Clearly, the "controversy" over the Extended Evolutionary Synthesis is a pseudo-debate of this latter kind. One side is interested in the sources of evolutionary variation and its ecological implications, the other in the consequences of natural selection. They are two sides of the same coin. But the two communities are in direct competition when it comes to funding and influence, which prevents a true dialogue from happening as long as both sides profit from polarization. This is not all we can learn about the importance of perspectivism from this debate, however. Another aspect of the debate is the matter of a synthetic theory for evolution itself. Why extend a synthesis that nobody ever needed in the first place? Evolution is the quintessential process that generates diversity. The sources of variation in evolution are as unpredictable as they are situation-dependent. The idea of developing a synthetic theory for the sources of variation in evolution is patently absurd. The generation of variation among organisms is a highly complex process. What we need to tackle it are as many valid and well justified perspectives as we can get. They should be as consistent as possible with each other, but there is no reason to assume they will ever add up to a general, overarching synthesis. Each evolutionary problem will have its own solution. Some of these will be more or less related to each other, but that is all. In fact, if a general account of the sources of variation were possible, then evolution would not be truly open ended or innovative (see Part III). Why is this fundamental issue never even debated or (even better) deliberated? It is because the few people voicing it are rarely heard above the din of the pseudo-debate about unrecognized perspectives. They are drowned out. We do not see the elephant in the room, because we are demolishing the china shop all by ourselves. My assessment of deliberative practice in evolutionary biology is therefore bleak: the process is completely broken, and only very few people even realize it. With better literacy in the naturalistic philosophy of science, all of this might have been prevented. The whole pseudo-debate is the consequence of a largely outdated view of science. It is a philosophical problem at heart. SUMMARY: TOWARDS AN ECOLOGICAL VISION FOR SCIENCE Above, I have outlined the three main pillars of a naturalist philosophy of science that is tailored to the needs of practicing researchers in the life sciences and beyond. Its highest aim is to foster and put to good use the collective intelligence of our research communities through proper deliberative process. In order to achieve this, we need research communities that are diverse and whose members are philosophically educated about how to harvest this diversity, when engaged pluralism is a good thing, and when it becomes flabby. Such viable, diverse communities of scientists generate what I would call an “ecological” vision for science, which stands in stark contrast to our current industrial model of doing research. 
I compare the two approaches in the table below. Note that both models are rough, highly idealized sketches at best. They represent very different visions of how research ought to be done — two alternative versions of the scientific ethos.
I have argued that the naïve realist view of science is not, in fact, realistic at all. In its stead, I have presented a naturalist philosophy of science that adequately takes into account the biases and capabilities of limited human beings, solving problems in a world that will forever exceed our grasp. The ecological research model proposed here is less focused on direct exploitation, and yet, it has the potential to be more productive in the long term than the current industrial system. However, its practical implementation will not be easy, due to the short-term productivity dilemma we have maneuvered ourselves into. Escaping this dilemma requires a deep understanding of the philosophical foundations, as well as the social and cognitive processes that enable and facilitate scientific progress. Identifying and assessing such processes is an empirical problem, which is only beginning to be tackled and understood today. Such empirical investigations must be grounded in a suitable naturalist philosophical framework, and a correspondingly revised ethos of science. This framework must acknowledge the contextual and processual nature of knowledge-production. It needs to focus directly on the quality of this process, rather than being fixated exclusively on the outcome of scientific projects. In this way, naturalist philosophy of science will not only benefit the individual scientist by making her a better researcher, but it will also strive to improve the quality of community-level processes of scientific investigation. It is not merely descriptive: the naturalist philosophy I envisage changes the way we do science, and is changed in turn by the science it engages with. Apologies: this post has ended up being a little longer than I anticipated. If you're not tired of my ramblings yet, go on to part III, which discusses how we can use naturalist philosophy within biology: a philosophical kind of theoretical biology for practicing biologists to tackle biological problems.
Why would any biologist care about philosophy? This three-part series of blog posts is based on a talk I gave at the workshop on "A New Naturalism: Towards a Progressive Theoretical Biology," which I co-organized with philosophers Dan Brooks and James DiFrisco at the Wissenschaftskolleg zu Berlin in October 2022. You can find part II here, and part III here. You may not know this (few people probably realize it) but it's true: biology urgently needs more philosophy. After decades of rapid progress, driven mainly by new methods and technologies, biology has arrived at a historical turning point. Reductionist approaches to genetic and molecular analysis are being supplemented by large-scale data-driven approaches and multi-level systems modeling. We are beginning to integrate the dynamics of gene regulatory networks with their physical context at the cellular and tissue level. We are even regaining a glimpse of the whole organism as an object of study. We can now turn our focus back on some of the deepest questions in biology: what makes a living system alive? How come it can exert agency? How does it interact with its environment? What factors shape its evolution? These questions pose a number of challenges that require not technological but conceptual progress: we need new ways of thinking to address them. And we need to re-contextualize what we already know in the increasingly complex societal and environmental circumstances we currently find ourselves in. Philosophy can be a powerful thinking tool for biologists (or any other scientist, for that matter). It helps us better understand what we are doing when we do research: how we produce trustworthy knowledge, insights that are adequate for our times, why we ask the questions we ask, what methods we are using, and what kinds of explanation we accept as scientifically valid. It enables us to reflect on what motivates and drives our investigations. It highlights the ethical implications of our work. Moreover, we can apply philosophical approaches to address deep biological problems: philosophy can help us clarify concepts, delineate their appropriate domain of application, reveal hidden meanings or potential misunderstandings, and provide new perspectives or angles on old questions. In brief: philosophy can help you become a better and more effective researcher. For that, we need a specific kind of philosophy: philosophy that is tightly connected with the practice of doing research, and is informed by the latest science. Unfortunately, not all philosophy is like that. What we need is not the outdated philosophy of science that most scientists have already heard of: no positivism, Popper, or Kuhn. Nor do we need armchair philosophers, far-fetched thought experiments (on zombies, let's say), high-level idealized simplifications, or over-generalized abstractions. Instead, we need something fresh and novel: a rigorous naturalistic philosophy of biology for the 21st century, the kind of philosophy that is in tune with the latest findings in the life sciences themselves, and is adapted to the best and most realistic accounts of the production of scientific knowledge available today. Our philosophy need not be perfect, but it needs to be practical, applicable, and it needs to keep up and co-evolve with the science it is concerned with. But even more importantly, we need more philosophical thinking within biology, a new philosophical biology. 
That's not philosophers thinking about biology, its concepts, methods, practices, and theories, but philosophically sound thinking applied by biologists to biological problems. It is a kind of theoretical biology, a practice within biology, but not one that necessarily involves mathematical modeling. Put simply, it's better thinking for biologists. It is the kind of biology the organicists or C.H. Waddington used to practice. Waddington's epigenetic landscape and the work he derived from it (beautifully described in "The Strategy of the Genes") are a great example of the kind of philosophical biology I am talking about. Waddington's work radically reconceptualized the study of embryogenesis and its evolution, framing adequate explanations in terms of novel concepts such as chreodes (developmental trajectories depicted by valleys in the landscape), canalization (structural stability of chreodes, represented by the steepness of the valley slopes), and homeorhesis (homeostasis applied to chreodes, or the fact that a ball rolling down the landscape would tend to stay at the bottom of a valley). The influence of genes on embryogenesis is depicted by pegs that pull on the landscape through a complex network of ropes, altering the topography in unpredictable ways when mutation alters the arrangement of pegs. This collage of Waddington's illustrations of the landscape is taken from Anderson et al. (2020). Unfortunately, the century-old philosophical tradition of theoretical biology has been all but lost since the early 1970s. Some notable exceptions — torchbearers through the dark ages of molecular reductionism — are the process structuralism of my late master's supervisor and mentor Brian Goodwin (as well as others), which treated developmental processes and their evolution in terms of morphogenetic fields instead of genes as their fundamental units, Stuart Kauffman's work on self-organization in complex networks, the organization of the organism, and its consequences for open-ended evolution (especially in his "Investigations") or, more recently, Terrence Deacon's teleodynamic approach to the organization of living matter (see his "Incomplete Nature"). These researchers have revolutionized how we think about biological processes, organisms, and evolution, but they tend to work in isolation, at the fringes of biology, unnoticed by the mainstream. There would be many others worth mentioning here, but the main point is that examples of conceptually focused biological work are hard to find these days, not to mention what a struggle it is to make a living as this kind of theoretician in biology today. THE LIMITS OF TECHNOLOGICAL AND METHODOLOGICAL PROGRESS Looking at the past sixty years or so of biology (starting with the rise of molecular biology), it is easy to convince yourself that progress in biology predominantly derives from technological and methodological advances, not from better concepts or new ideas. Indeed, we have much to show in that regard. The rate at which we develop new techniques, produce ever more comprehensive data sets, and publish highly sophisticated technical papers appears to be increasing day by day. So what's not to like? We seem to be succeeding according to all our self-imposed metrics. Well, there is a growing and legitimate concern that this frantic acceleration of technological and methodological progress, this flood of big data, is not always a good thing. 
First and foremost, the current cult of productivity that comes with this kind of acceleration has a negative impact on researchers (especially young ones), who struggle to keep up with ever increasing demands for a successful career. More generally, it has led to a general neglect (even some disdain) for purely theoretical work in biology. I think this may be the main reason philosophical biology has been obscured in the past few decades. Questions that are not immediately accessible to empirical investigation are dismissed as idle speculation. One such example is the nature of the organism and its role in evolution. Technological progress is so fast that we can always keep ourselves busy with the latest new methodology instead, be it single-cell sequencing, 3D cell culture, or CRISPR gene editing, and the next big technological breakthrough is always just around the corner. Contextualizing our empirical findings within a broader view of life seems unnecessary, as all those intractable theoretical problems will surely become tractable through technological progress in the very near future. This kind of techno-optimism leads to a dangerous loss of context, a sort of technological tunnel vision. There is a worrisome gap opening between our power to manipulate living systems and our understanding of the complex consequences of these manipulations. Take the possibility of gene drives in uncontrollable natural environments as an example. Higher-order ecological effects will be inevitable in this case and almost certainly won't be harmless or benign. In fact, this is an example where we are running at increasing speed through the dark woods, blindly. And in the middle of this, we have hit a solid wall in terms of understanding some of the most fundamental concepts and problems in our field, among them the concept of "life" itself. This severely distorts our understanding of our own place in the world. We are drowning in data, yet thirsting for wisdom, as E.O. Wilson once so aptly put it. To address these pressing issues, we urgently need to relearn to ask the big questions. François Jacob wrote in his "Logic of Life" in 1975 that "biologists no longer study life." This sounds a little crazy but it is essentially true. Yet, we have not really solved the problem of life (as Jacob implies). The truth is that we have not even confronted it. We simply skirted around it, explained it away by reducing organisms to metaphorical programs and machines. We no longer need to worry about such difficult questions. Life as a biologist is so much easier that way. That may be a smart move in a way, but it's not based on any solid grounding. Granted, it enabled us to better focus on those aspects of living systems that were tractable given whatever technological and conceptual capabilities we had at the time. Still, this kind of reductionism is simply bad philosophy, tainted by poor and outdated metaphysical commitments. It loses the forest for the trees. In fact, it is not too far-fetched to make the claim that we understand what life is (and what it is worth) less than ever before in human history. In other words, we have never been more wrong about life than with our current mechanistic-reductionist approach and its machine view of the organism. The core problem is that we have been carried so far away by the breakneck pace of technological progress that we have started to view the whole world (including all living beings) through the lens of our most advanced technology. 
We treat life as if it were a computational device that we can fully understand, predict, manipulate, and control. We have forgotten that this machine view of the world is only a metaphor, and not a good one at that. This is dangerous hubris, pure and simple. If we don't take a little time-out to stop and think, we will wipe ourselves from the face of the planet by the mechanistic application of increasingly powerful interventions to complex systems we are not even beginning to understand. We are in danger of losing track of our own limitations. We're on a straight path to oblivion if we let our technological might outpace our wisdom this way. Reconceptualizing life is an important first step, not only towards a deeper understanding of living systems, but also towards a healthier, more sustainable, and less exploitative attitude for humanity towards nature. A NEW ATTITUDE TOWARDS LIFE? Thus, the first conclusion I will draw here is that it is essential that we change the way we study and understand living systems. This is a philosophical problem. It requires new ways of thinking, instead of new technology. Unfortunately, the kind of reflection required for such conceptual change is all too often considered a waste of time in our frenzied academic research environments. We are too busy publishing and perishing. Yet, we urgently need to reconsider what we are doing; we must take the time to reexamine our concepts and practices if we are to continue making progress towards understanding life, ourselves, and our place in the universe. What are we collecting data for? Do any of us even remember? Of course, we do. And there are examples where conceptual progress is being made all across the life sciences. Yet it remains too disconnected and isolated to be truly effective. What is needed is a broader movement towards conceptual change, a much broader confrontation of these issues that is grounded in the most solid and powerful philosophical ideas we have available today. Such a movement needs a new philosophy of biology that is actually taught and known to practicing researchers, shaping the questions they ask, the methods they use, and the kinds of explanations they consider appropriate. We need to reintroduce philosophy as an essential part of a well-rounded scientific curriculum at our institutions of higher education. In addition, we also need a philosophical kind of theoretical biology that biologists actively engage with in their practice because it is useful to them. The alternative is for us to be buried under rapidly growing heaps of impressive but increasingly incomprehensible data, to wastefully burn through our vast (yet limited) funds and resources, and to end up as ignorant about life as we've ever been. Organisms are not machines. But what are they instead? We currently have no idea. This, broadly put, is why I believe philosophy is so important to contemporary biology. These are exciting times to be a biologist. We are on the cusp of great discoveries that will revolutionize our discipline, but the revolution won't be achieved without better concepts, better questions, and better theories. For the first time in decades, biology needs conceptual change to drive progress. The time is ripe to teach biologists philosophy again: no condescending preaching from the philosophical pulpit, but a kind of philosophy they will like, find plausible, and can put to work in their own research practice. Where do we start? 
In the next post, I will examine what I think is the proper naturalist philosophy of biology for the task. Then, in the final part of this trilogy, I will give you a number of examples that illustrate the philosophical kind of theoretical biology we may want to resurrect in order to tackle some fundamental biological challenges of our current age. Stay tuned!
I bear good tidings! After seven long years in the funding desert, I have secured a major research grant again. And even better: it makes no compromises. This project supports everything I want to do most at this point in my life. I'm still a bit in shock that I actually have this unique opportunity to pursue a whole range of rather radical philosophical, scientific, and artistic projects with financial security for the next three years. Lucky me. In fact, I had more or less decided to never apply for a research grant again, to try and make a life as a freelance teacher and philosopher, when this opportunity came along. And since I considered it my last shot, I wrote it without considering whether it would actually get funded or not. But then, lo and behold, it did! I'm tremendously happy and excited to be focusing full time on research (and a bit of art) again. The title of the project is "Pushing the Boundaries: Agency, Evolution, and the Dynamic Emergence of Expanding Possibilities." It will be hosted by my wonderful project co-leader and collaborator Prof. Tarja Knuuttila at the Department of Philosophy of the University of Vienna, and will involve numerous collaborations with many of my favorite philosophers, scientists, and artists. But more on that later. FIRST: A WORD ABOUT THE FUNDER The project is funded by the John Templeton Foundation. So let's get a few things out of the way right at the beginning: I am very well aware that many of you have reservations about the Foundation as a funder, especially in the field of evolutionary biology, due to the controversial views of its founder, John Templeton. I want to make a few points very clear: I am a staunch and outspoken atheist (I've tried agnosticism for a while, but couldn't emotionally commit to it), I have no sympathies for the view that science and religion are non-overlapping magisteria (most traditional religious dogma is simply outdated and incompatible with a modern scientific worldview, and I want my metaphysics to always be in tune with the best scientific evidence we have available; more on that here), and I do not believe for a second that our knowledge of evolution leaves room for any kind of supernatural or directed influence (there is a lot we don't understand about the world, but God is never a valid explanation for any of that). Having said that, here are three reasons why I can take money from Templeton and still sleep tight at night. First, the Foundation had absolutely no influence on the content of the project. Nor did they ever try to exert such influence at any point of the application and selection process. Quite the contrary: in stark contrast to the public funding bodies I have had to deal with, they were extremely supportive, letting me reply to reviewers' comments, moderating differences of opinion that were not constructive, and generally helping me to improve the format of the project to fit the requirements and preferences of the Foundation. My experience with Templeton at this level has been 100% positive so far. Second, I've been trying to get my research into organismic agency funded by public funders for years now, without the slightest chance of success. 
The project is heavily transdisciplinary, trying to redefine how scientists see evolving spaces of the possible, expanding our view of what is a valid scientific explanation beyond strict mechanicism (yet still rigorous and naturalistic), challenging our mechanistic view of organismic behavior and biological evolution, while attempting the impossible task of simulating a whole cell or organism (see below). No panel at any funding body other than Templeton was open-minded enough or could muster the transdisciplinary expertise to judge such a project properly and fairly. Also: gatekeepers will be gatekeeping, especially in committees set up by public funders. Templeton offers me a unique opportunity to escape these problematic institutional constraints. Whatever their motives, this project is almost sure to drastically reduce the space for supernatural mysticism in biology rather than justify it in any way. Last but not least, those that receive funding from government agencies or their universities would do well to question the motives of those institutions as well. Are the interests and incentives defined by these funders still aligned with the process of doing basic research — with the intrepid exploration of the unknown? I do not think so. The current public funding system is severely broken, with an excessive focus on politically useful short-term outcomes and practical applications (not even mentioning the committee cronyism that only funds more of the same). It is going exclusively for the low-hanging fruit. It has become so focused on consensus decision-making and reachable objectives that it makes true conceptual innovation all but impossible. I don't want to be part of that system if it cannot give me the freedom to explore. To be honest, I'm much better off with Templeton in that regard. Maybe this is a problem the decision-makers in public funding bodies should consider more seriously? Creativity and innovation are dying in a scientific research system designed for the age of selfies. OK, I GET IT. BUT WHAT IS THE PROJECT ABOUT? In the broadest sense, the project deals with the fact that our modern science, just like our modern worldview, is largely mechanistic. In other words, we see nature as a kind of machine (a "clockwork," or nowadays maybe more like some sort of computer simulation or computation) that we can emulate, control, predict, and exploit. This view of the world as a machine has led to much empirical success in the natural sciences over the past few centuries, but it also restricts the kind of questions we can scientifically address, and it underlies many of humanity's most existential and pressing challenges today: the interlocking ecological, socioeconomic, political, and meaning crises that form our current meta-crisis. In my opinion, these crises are ultimately all rooted in a fundamental philosophical misunderstanding concerning the nature of the world and our role and place within it. In this project, we pursue the idea of a different kind of science for the 21st century, focused on organismic agency and its role in evolution. It does not view the world as a mechanism, but as a creative process of open-ended evolution. The main focus of our investigation is the simple question "how do organisms manage to act on their own behalf?" This, in fact, is what most clearly delineates the living world from the non-living.
Unfortunately, the concept of "agency" — the ability of a living organism to act — is still heavily understudied in contemporary biology, probably because it is strongly associated with teleology, a kind of purposeful explanation that is shunned in mechanistic science. Our project aims to address this fundamental issue by providing a philosophical and scientific analysis of organismic agency, which shows that it is fully compatible with the epistemological principles of naturalistic explanation in science. Moreover, we are interested in learning how (or how far) we can capture the organism's ability to act in mathematical models of living cells and organisms, since we do not yet have such models in biology. Indeed, this kind of model may require entirely new kinds of computational and mathematical methods that we have barely begun to explore (see here for an exceptional example of such an exploration). In our project, we intend to push the boundaries of what we can model and predict in systems that are (or that contain) organismic agents. Such systems not only include populations of evolving organisms, but also higher-level ecosystems (up to the level of the biosphere), and social systems, including the economy. We take a three-pronged approach to study such agential systems. Part 1: The evolutionary emergence of expanding possibility spaces First, we use philosophical analysis to clarify what some of the concepts we use actually mean. Many of these concepts remain vague and ill-defined, their interrelations unclear. In particular, a theory of agency requires us not only to know what it means for an organism to "act on its own behalf," but also to understand problems such as the emergence of new behavior and new levels of organization in evolution, what makes such novel levels and behavior particularly complex and unpredictable (as opposed to, say, the behavior of a rock), and how such new levels of organization can evolve with a degree of independence from the underlying molecular and genetic mechanisms. The problem with classical dynamical systems approaches to systems biology is that they are historically imported from physics, especially from an approach Lee Smolin has called "physics in a box," where we define a system over a specific domain of reality that contains a given number of interacting objects (encoded by the state variables of the system). The behavior of these objects is then described by a set of rules which are defined outside the objects themselves (the laws of gravity for a classical model of the solar system, for example). Given a specific set of starting and boundary conditions, we can then simulate the temporal evolution of the system within its given frame, by tracing a trajectory through a predefined space of possibilities (the configuration space of the model; see figure below, left). Unfortunately, this approach is fundamentally ill-suited for modeling evolving systems of interacting agents. The main reason is that agents are not objects, and the rules of their behavior are defined from within their peculiar organization. As I have said above: agents act on their own behalf. They write their own rules, and those rules constantly evolve in response to the behavior of the agents themselves (among other things, such as environmental triggers, of course). Thus, what we get here is a constantly changing space of possibilities that radically co-emerges with the dynamics of the agential system itself.
Stuart Kauffman has called this emergent space of possibilities the adjacent possible (see figure above, right). The first part of our project is an attempt to philosophically (re)define concepts such as emergence, complexity, and agency to ground such a view in evolutionary theory and biological research practice. To summarize: in this philosophical part of the project, we push the conceptual boundaries that hinder our understanding of purposive agency and its role in open-ended evolution. To put it in more technical terms: we aim to establish suitable theoretical frameworks that bring together notions such as purposive organismic agency, the complex biological organization that enables it, the radically emergent dynamics underlying its evolution, and the impredicative nature of the resulting dialectic dynamics, i.e., the co-constructive interplay between affordances, goals and actions that produce the behavior and evolution of biological and human agents. These diverse, foundational, and so far massively understudied, aspects of agential evolution will be integrated through the central notion of dynamically growing possibility spaces, constantly driven into the adjacent possible by the intrinsic behavior of agents. Part 2: Modeling the impossible — simulating a living agent In the second part of the project, we take a more practical and less abstract approach. Informed by the philosophical investigations outlined above, we will create a mathematical and computational framework (a bit like a new programming language) that allows us to capture organismic agency in simulations of simple unicellular organisms that behave and evolve. Many efforts have been made to come up with such frameworks in the past. Here, we will use a number of new ideas to extend these existing efforts. In particular, we are interested in capturing the self-maintaining organization of an organism, and the way it generates actions. In addition, we will focus on the role that randomness plays in this process, and how we can generate reliable behavior in light of the many fluctuations within an organism and in the environment it encounters. Our starting point for these efforts will be a highly abstract minimal model of a cellular agent created by Jannie Hofmeyr, based on Robert Rosen's pioneering work on the organization of organisms. In other words, we intend to push the methodological boundaries that limit our ability to model agential dynamics. Today’s most advanced agent-based models and machine-learning methods still depend on externally prescribed goals/targets. A genuine model of agential evolution requires a new modeling paradigm, where evolving agents write their own rules, evolve their own codes, generate and transcend their own boundaries, and choose their own goals. Whether and how this could be achieved remains an open question, not only in biology, but also in social systems, AI, and ALife research. In fact, some theoreticians (including myself) have predicted that this is not entirely possible. By trying anyway, we can find out (a) how far we can get with dynamical model/simulations and which parts of an organism’s organization and behavior such models can capture, and (b) we can strive to better understand and circumscribe the limits of predictability in such systems. In this sense, failing in our task to come up with a perfect model of a living organism may teach us more about life than being successful. Whatever we will encounter on our journey, we are sure to learn something new. 
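To make the modeling challenge a bit more concrete, here is a deliberately naive toy sketch of what the "physics in a box" paradigm with externally prescribed goals looks like in code. This is purely my own illustration, not part of the project, and all names and parameter values (TARGET, STEP, and so on) are made up for the example:

```python
# A toy "physics in a box" simulation with an externally prescribed goal.
# All names and parameters are illustrative assumptions, not project code.
import random

TARGET = (10.0, 10.0)  # goal chosen by the modeller, outside the agents
STEP = 0.5             # fixed step size of the update rule
NOISE = 0.2            # random fluctuations in each move
N_AGENTS = 5
N_STEPS = 50

def update(pos):
    """Move one agent a small step toward the prescribed target, plus noise.
    The rule itself never changes: the space of possibilities is fixed in advance."""
    x, y = pos
    dx, dy = TARGET[0] - x, TARGET[1] - y
    dist = max((dx ** 2 + dy ** 2) ** 0.5, 1e-9)
    return (x + STEP * dx / dist + random.gauss(0, NOISE),
            y + STEP * dy / dist + random.gauss(0, NOISE))

# Fixed state variables, fixed rule, fixed target: a trajectory through a
# predefined configuration space, exactly as in the "physics in a box" approach.
agents = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(N_AGENTS)]
for _ in range(N_STEPS):
    agents = [update(a) for a in agents]

print("final positions:", [(round(x, 2), round(y, 2)) for x, y in agents])
```

Everything that matters in this little simulation is decided by the modeller before it starts: the state variables, the update rule, and the goal. The agents merely trace trajectories through a possibility space that is given in advance. A genuine model of agential evolution would have to let the analogues of TARGET and update() arise and change from within the agents' own organization, which is exactly the open (and perhaps unsolvable) problem described above.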
Part 3: Why organisms aren't machines As a final part of the project, we intend to bring the discussion of agency and the possibility of a post-mechanistic science to other scientists and the general public, because we believe that a transition towards a more participatory and sustainable view of the world is essential for human survival on the planet. An expanded scientific view of the world as an evolving open-ended process has profound implications. It requires us to accept certain types of teleological explanations (those concerning the goal-oriented behavior of organisms) as perfectly naturalistic and rigorously scientific ways of understanding the world. This greatly expands the range of questions that scientific research can address. It increases our explanatory power compared to a purely mechanistic approach. In the domain of ethics, we can no longer treat and exploit living beings as if they were machines. Even more broadly, we have to give up our Laplacean dreams of control and predictability. We have to learn how to go with the flow, rather than attempting to constantly master and channel it. We have to realize that the future is fundamentally open. This can be scary, but also liberating. We are responsible for our own actions and their consequences. The evolutionary view of the world is empowering. Here, we push the boundaries of public and scientific awareness concerning purposive agency and open-ended evolution. The massive challenges we seek to address require a community-sized effort, positioned at the heart of 21st-century science, as well as a new post-mechanistic view of what science can be. Unfortunately, very few researchers, and even fewer members of the public, currently realize the scope and implications of this problem. We aim to raise awareness through dissemination: a book written for a general readership about why organisms are not machines, and an innovative outreach strategy led by a professional curator, Basak Senova, implemented through my existing arts & science collaboration THE ZoNE, which presents our views on agency and evolution and their scientific and societal ramifications in a serious but playful format called wissenskunst that is accessible and (hopefully) attractive to a broad general public. SOME CONCLUDING PRACTICALITIES This is it for this first quick overview. I'll write more about how things are going over the next few months. As I mentioned, the project will be hosted at the Department of Philosophy of University of Vienna. It will start in December 2022, and will run for a duration of 33 months. It includes two workshops and a larger conference on the topic of agency and evolution, so keep an eye out for announcements. I am particularly proud that this project is not part of Templeton's huge concerted effort to study agency, called "Agency, Directionality, and Function" (website). We very much hope to be constructively synergizing with parts of that larger project, but I see our approach as complementary and orthogonal to their efforts. And most importantly: we remain independent agents! It would be ironic were it otherwise. You can contact me here, if you want to know more about the project.
Serious Play with Evolutionary Ideas Have I mentioned already that I am part of an arts & science collective here in Vienna? It's called THE ZONE. Yes, you're right. I actually did mention it before. What is it about? And what, in general, is the point of arts & science collaborations? This post is the start of an attempt to give some answers to these questions. It is based on a talk I gave on Mar 12, 2022 at "Hope Recycling Station" in Prague as part of an arts & science event organized by the "Transparent Eyeball" collective (Adam Vačkář and Jindřich Brejcha). I'll start with this beautiful etching by polymath poet William Blake, from 1813. It's called "The Reunion of the Soul and the Body," and shows a couple in a wild and ecstatic embrace. I suppose the male figure on the ground represents the body, while the female soul descends from the heavens for a passionate kiss amidst the world (a graveyard with an open grave in the foreground, as far as I can tell) going up in smoke and flames. This image, rich in gnostic symbolism, stands for a way out of the profound crisis of meaning we are experiencing today. Blake's picture graces the cover of one of the weirdest and most psychoactive pieces of literature that I have ever read. In fact, I keep on rereading it. It is William Irwin Thompson's "The Time Falling Bodies Take to Light." This book is a wide-ranging ramble about mythmaking, sex, and the origin of human culture. It sometimes veers a bit too far into esoteric and gnostic realms for my taste. But then, it is also a superabundant source of wonderfully crazy ideas and stunning metaphorical narratives that are profoundly helpful if you're trying to viscerally grasp the human condition, especially the current pickle we're in. It's amazing how much this book, written in the late 1970s, fits the zeitgeist of 2022. It is more timely and important than ever. THREE ORDERS So why is Blake's image on the cover? "Myth is the history of the soul" writes Thompson in his Prologue. What on Earth does that mean? Remember, this is not a religious text but a treatise on mythmaking and its role in culture. (I won't talk about sex in this post, sorry.) Thompson suggests that our world is in flames because we have lost our souls. This is why we can no longer make sense of the world. A new reunion of soul and body is urgently needed. Thompson's soul is no supernatural immortal essence. Instead, the loss of soul represents the loss of narrative order, which is the story you tell of your personal experience and how it fits into a larger meta-narrative about the world. A personal mythos, if you want. We used to have such a mythos but, today, we are no longer able to tell this story of ourselves in a way that gives us a stable and intuitive grip on reality. According to cognitive psychologist John Vervaeke, the narrative order is only one of three orders which we need to get a grip, to make sense of the world. It is the story about ourselves, told in the first person (as an individual or a community). The second-person perspective is called the normative order, our ethics, our ways of co-developing our societies. And the third-person perspective is the nomological order, our science, the rules that describe the structure of the world, which constrains our agency and guides our relationship with reality (our agent-arena relationship). All three orders are in crisis right now. Science is being challenged from all sides in our post-truth world. Moral cohesion is breaking down. 
But the worst afflicted is the narrative order. We have no story to tell about ourselves anymore. This problem is at the root of all our crises. That is exactly what Thompson means by the soullessness of our time. THE OLD MYTHOS... But what is the narrative order, the mythos, that was lost? As I explain in detail elsewhere, it is the parable (sometimes wrongly called allegory) of Plato's Cave. We are all prisoners in this cave, chained to the wall, with an opening behind our backs that we can't see. Through this opening, light seeps into the cave, casting diffuse shadows of shapes that pass in front of the opening onto the wall opposite us. These shadows are all we can see. They represent the totality of our experiences. In Plato's tale, a philosopher is a prisoner who escapes her shackles to ascend to the world outside the cave. She can now see the real world, beyond appearances, in its true light. For Plato, this world consists of abstract ideal forms, to be understood as the fundamental organizational principles behind appearances. He provides us with a two-world mythology that explains the imperfection of our world, and also our journey towards deeper meaning. This journey is a transformative one. It is central to Plato's parable. He calls it anagoge (ancient Greek for "climb" or "ascent"). The philosopher escaping the cave must become a different person before she can truly see the real world of ideal forms. Without this transformation, she would be blinded by the bright daylight outside the cave. Anagoge involves a complexification of her views and a decentering of her stance, away from egocentric motivations to an omnicentric worldview that encompasses the whole of reality. When she returns to the cave, she is a completely different person. In fact, the other prisoners, her former friends and companions, no longer understand what she is saying, since they have not undergone the same transformations she has. The only way she can make them understand is to convince them to embark on their own journeys. However, most of the prisoners do not want to leave the cave. They are quite comfortable in its warm womb-like enclosure. With his parable, Plato wanted to destroy more ancient mythologies of gods and heroes. Ironically, in doing so, he created an even more powerful myth that governed human meaning-making for almost two-and-a-half millennia. After his death, it was taken up by the Neoplatonists and then by St. Augustine. It entered the mythos of Christianity as the spiritual domain of God, which lies beyond the physical world of our experience. Only faith, not reason, can grant you access. Later, this idea of a transcendent realm was secularized by Immanuel Kant, who postulated a two-world ontology of phenomena and noumena, the latter ("das Ding an sich") completely out of reach for a limited human knower. ... AND ITS DOWNFALL All of this was brutally shattered by Friedrich Nietzsche (although others, such as Georg Wilhelm Friedrich Hegel and Auguste Comte, also contributed enthusiastically to the demolition effort). Nietzsche is the prophet of the meaning crisis. "God is dead, and we have killed him" doesn't leave much room anymore for the spiritual realm of traditional Christianity. What Nietzsche means here is not an atheistic call to arms. It is the observation that traditional religion has already become increasingly irrelevant for a growing number of people, and that this process is inevitable and irreversible in our modern times.
Nietzsche also destroys Kant's transcendental noumenal domain, all in just one page of "The Twilight of the Idols," which is unambiguously entitled "History of an Error." When Nietzsche is through with it, two-world mythology is nothing more than a heap of smoking rubble. And things have gotten only worse since then. As Nietzsche predicted, the demolition of the Platonic mythos was followed by an unprecedented wave of cynical nihilism over what we could call the long 20th century, culminating in the postmodern relativism of our post-fact world. Under these circumstances, any attempt at reconstructing the cave would be a fool's hope. A NEW MYTHOS? But we can try to do better than that! What Thompson and Vervaeke want, instead of crawling back into the womb of the cave, is a new mythos, a new history of the soul, (meta-)narratives adequate for the zeitgeist of the 21st century. But who would be our contemporary mythmakers? Thompson points out a few problems in "Falling Bodies:" "The history of the soul is obliterated, the universe is shut out, and on the walls of Plato's cave the experts in the casting of shadows tell the story of Man's rise from ignorance to science through the power of technology." In Thompson's view, scientists are the experts in the casting of shadows, generating ever more sophisticated but shallow appearances, without ever getting to the deep underlying issues. What about artists then? "In the classical era the person who saw history in the light of myth was the prophet, an Isaiah or Jeremiah; in the modern era the person who saw history in the light of myth was the artist, a Blake or a Yeats. But now in our postmodern era the artists have become a degenerate priesthood; they have become not spirits of liberation, but the interior decorators of Plato's cave. We cannot look to them for revolutionary deliverance." Harsh: postmodern artists as the interior decorators of Plato's cave. Shiny surface and distanced irony over deep meaning and radical sincerity. The meaning crisis seems to have fully engulfed both the arts and the sciences. Thompson's pessimistic conclusion is that, in their current state, neither is likely to help us restore the narrative order. WISSENSKUNST This is where Thompson (pictured above) proposes the new practice of wissenskunst. Neither science nor art, yet also a bit of both (in a way). He starts out with a reflection on what a modern-day prophet would be: "The revisioning of history is ... also an act of prophecy―not prophecy in the sense of making predictions, for the universe is too free and open-ended for the manipulations of a religious egotism―but prophecy in the sense of seeing history in the light of myth." Since artists are interior decorators now, and scientists cast ever more intricate shadows in the cave, we need new prophets. But not religious ones. More something like: "If history becomes the medium of our imprisonment, then history must become the medium of our liberation; (to rise, we must push against the ground to which we have fallen). For this radical task, the boundaries of both art and science must be redrawn. Wissenschaft must become Wissenkunst." (Wissenskunst, actually. Correct inflections are important in German!) The task is to rewrite our historical narrative in terms of new myths. To create a new narrative order. A story about ourselves. But what does "myth" mean, exactly?
In an age of chaos, like ours, myth is often taken to be "a false statement, an opinion popularly held, but one known by scientists and other experts to be incorrect." This is not what Thompson is talking about. Vervaeke captures his sense of myth much better: "Myths are ways in which we express and by which we try to come into right relationship to patterns that are relevant to us either because they are perennial or because they are pressing." So what would a modern myth look like? ZOMBIES! Well, according to Gilles Deleuze and Félix Guattari, there is only one modern myth: zombies! Vervaeke and co-authors tie the zombie apocalypse to our current meaning crisis: zombies are "the fictionally distorted, self-reflected image of modern humanity... zombies are us." The undead live in a meaningless world. They live in herds but never communicate. They are unapproachable, ugly, unlovable. They are homeless, aimlessly wandering, neither dead nor alive. Neither here nor there. They literally destroy meaning by eating brains. In all these ways, zombification reflects our loss of narrative order. Unfortunately, the zombie apocalypse is not a good myth. It only expresses our present predicament, but does not help us understand, solve, or escape it. A successful myth, according to Vervaeke, must "give people advice on how to get into right relationship to perennial or pressing problems." Zombies just don't do that. Zombie movies don't have happy endings (with only one exception that I know of). The loss of meaning they convey is rampant and terminal. Compare this with Plato's myth of the cave, which provides us with a clear set of instructions on how to escape our imperfect world of illusions. Anagoge frees us from our shackles. What's more, it is achievable using only our own faculties of reason. No other tools required. In contrast, you can only run and hide from the undead. There is no escaping them. They are everywhere around you. The zombie apocalypse is claustrophobic and anxiety-inducing. It leaves us without hope. We need better myths for meaning-making. But how to create them? Philip Ball, in his excellent book about modern myths, points out that you cannot write a modern myth on purpose. Myths arise in a historically contingent manner. In fact, they have no single author. Once a story becomes myth, it mutates and evolves through countless retellings. It is the whole genealogy of stories that comprises the myth. Thompson comes to a very similar conclusion when looking at the Jewish midrashim, for example, which are folkloristic exegeses of the biblical canon. For it to be effective, a myth must become a process that inspires. Just look at the evolution of Plato's two-world mythology from the original to its Neoplatonist, Christian, and Kantian successors. So where to begin if we are out to generate a new mythology for modern times? I think there is no other way than to look directly at the processes that drive our ability to make sense of the world. If we see these processes more clearly, we can play with them, spinning off narratives that might, eventually, become the roots of new myths, myths based on cognitive science rather than religious or philosophical parables. THE PROBLEM OF RELEVANCE By now, it should come as no surprise that rationality alone is not sufficient for meaning-making. We have talked about the transformative process of anagoge, in which we need to complexify and decenter our views in order to make sense of the world. What is driving this process?
The most basic problem we need to tackle when trying to understand anything is the problem of relevance: how do we decide what is worth understanding in the first place? And once we've settled on some particular aspect of reality, how do we frame the problem so that it can actually be understood? A modern mythology must address these fundamental questions. Vervaeke and colleagues call the process involved in identifying relevant features relevance realization. At the risk of simplifying a bit, you can think of it as a kind of "Where's Wally" (or "Waldo" for our friends from the U.S.). Reality bombards us with a gazillion sensory impressions. Take the crowd of people on the beach in the picture above. How do we pick out the relevant one? Where is Wally? We cannot simply reason our way through our search (although some search strategies will, of course, be more reasonable than others). We do not yet have a good understanding of how relevance realization actually works, or what its cognitive basis is, but there are a few aspects of this fundamental process that we know about and that are relevant here. On the one hand, we must realize that relevance realization reaches into the depth of our experience, arising at the very first moments of our existence. A newborn baby (and, indeed, pretty much any living organism) can realize what is relevant to it. We must therefore conclude that this process occurs at a level below that of propositional knowledge. We can pick out what is relevant before we can think logically. On the other hand, relevance realization also encompasses the highest levels of cognition. In fact, we can consider consciousness itself as some kind of higher-order recursive relevance realization. Importantly, relevance realization cannot be captured by an algorithm. The number of potentially relevant aspects of reality is indefinite (and potentially infinite), and cannot be collected into a well-defined mathematical set, which would be necessary to define an algorithm. What's more, the category of "what we find relevant" does not have any essential properties. What is relevant radically depends on context. In this regard, relevance is a bit like the concept of "adaptation" in evolution. What is adaptive will radically depend on the environmental context. There is no essential property of "that which is adaptive." Similarly, we must constantly adapt to pick out the relevant features of new situations. Thus, in a very broad but also deep sense, relevance realization resembles an evolutionary adaptive process. And just as there is competition between lots of different organisms in evolution, there is a kind of opponent processing going on in relevance realization: different cognitive processes and strategies compete with each other for dominance at each moment. This explains why we can shift attention very quickly and flexibly when required (and sometimes when it isn't), but also why our sense-making is hardly consistent across all situations. This is not a bad thing. Quite the opposite: it allows us to be flexible while maintaining an overall grip on reality. As Groucho Marx is supposed to have said: "I have principles, but if you don't like them, I have others." INVERSE ANAGOGE & SERIOUS PLAY Burdened with all this insight into relevance realization, we can now come up with a revised notion of anagoge, which is appropriate for our secular modern times. It is quite the inverse of Plato's climb into the world of ideals.
Anagoge now becomes a transformative journey inside ourselves and into our relationship with the world. A descent instead of an ascent. Transformative learning is a realignment of our relevance realization processes to get a better grip on our situation. We can train this process through practice, but we cannot step outside it to observe and understand it "objectively." We cannot make sense of it, since we make sense through it. Basically, the only way to train our grip on reality is to tackle it through practice, more specifically, to engage in serious play with our processes of relevance realization. To quote metamodern political philosopher Hanzi Freinacht, we must "... assume a genuinely playful stance towards life and existence, a playfulness that demands of us the gravest seriousness, given the ever-present potentials for unimaginable suffering and bliss." Serious playfulness, sincere irony, and informed naiveté. This is what it takes to become a metamodern mythmaker. So this is the beginning of our journey. A journey that will eventually yield a new narrative order. Or so we hope. It is not up to us to decide, as we enter THE ZONE between arts and science. Our quest is ambitious, impossible, maybe. But try we must, or the world is lost. This post is based on a lecture held on March 12, 2022 at the "Transparent Eyeball" arts & science event in Prague, which was organized by Adam Vačkář and Jindřich Brejcha. Based on work by William Irwin Thompson, John Vervaeke, and Hanzi Freinacht.
I've been silent on this blog for too long. What about reactivating it with some reflections on its maybe somewhat cryptic title? The phrase "untethered in the Platonic realm" comes from a committee report I received when I applied for a fellowship with a project to critically examine the philosophy underlying the open science movement. The feedback (as you may imagine) was somewhat less than enthusiastic. The statement was placed prominently at the beginning of the report to tell me that philosophy is an activity exclusively done in armchairs, with no impact on anything that truly matters in practice. The committee saw my efforts as floating in a purely abstract domain, disconnected from reality. I suspect the phrase was also a somewhat naive (and more than a little pathetic) attempt by the high-profile scientific operators on the panel to showcase their self-assumed philosophical sophistication. What it did was exactly the opposite: it revealed just how ignorant we are these days of the philosophical issues that underlie pretty much all our current misery. To quote cognitive scientist and philosopher John Vervaeke: beneath the myriad crises humanity is experiencing right now, there is a profound crisis of meaning. And what, if not that, is a philosophical problem? Vervaeke's meaning crisis affects almost all aspects of human society. In particular, it affects our connectedness to ourselves, to each other, and to our environment. We are quite literally losing our grip on reality. And believe it or not, all of this is intimately linked to Plato and his allegedly irrelevant and abstract ideas. So why not try to illustrate the importance of philosophy for our practical lives with Plato's allegory of the cave (which is more of a parable, really)? I am part of an arts and science collective called THE ZONE. Together with Marcus Neustetter (who is an amazing artist), we've created a virtual-reality rendition of Plato's cave, which allows us to explore philosophical issues while actually looking at the shadows on the wall (and what causes them). What follows is a summary of some of the ideas we discuss during our mythopoietic philosophical stroll. I'm sure most of you will have heard of Plato's parable of the cave (part of his "Republic"), and are vaguely familiar with what it stands for: we humans are prisoners in a cave, chained so that we can only face the wall. An unseen source of light behind our backs provides diffuse and flickering lighting. Shapes are paraded or pass in front of the light source. They cast fleeting shadows on the wall. These shadows are all we can see. They are our reality, but they aren't accurate or complete representations of the real world. For Plato, a philosopher (and this would include scientists today) is a prisoner who manages to break their chains and escape the cave. As the philosopher ventures to find the exit, she is first blinded by the light coming from outside. Now we come to what I think is the central and most important aspect of the story, an aspect that is often overlooked. As the philosopher ascends from the cave to the surface, she must adapt to her new conditions. Her transformative journey to the surface is called "anagoge," which simply means "climb" or "ascent" in ancient Greek. It later acquired a mystical and spiritual meaning in the context of Christianity. But for Plato, it is simply the series of changes in yourself that you must go through in order to be able to see the real world for what it is.
For Plato, the world the philosopher discovers is an ideal world of timeless absolute forms. This is what we usually associate with his parable of the cave: the invention of what later (via Neoplatonism and Augustine) became the religious and spiritual realm of Christianity, above and beyond the physical realm of our everyday lives. But before we get to the problems associated with that idea, let me point out one more overlooked aspect of the story. An important part of Plato's parable is that the philosopher returns to the cave, eager to tell the other prisoners about the real world and the fact that they are only living in the shadows. Unfortunately, the others do not understand her, since they have not gone through the transformative process of anagoge themselves. Through her journey, the philosopher has become a different kind of person. She quite literally lives in a different world, even after she descends back to the cave. If she wants to share her experience in any meaningful way, she needs to convince the other prisoners to undertake their own journeys. My guess is though that most of them are pretty happy to stay put, chained as they are to the wall in the cave. I cannot emphasize enough how important this story is for the last 2,500 years of human history. Untethered in its abstract realm it is not. And it is at the very root of our current meaning crisis, as Vervaeke points out (I've largely followed his interpretation of Plato above). There is a deep irony in the whole history. Plato's original intention with his tale of abstraction was to fight the superstitious mythological worldviews most of his contemporaries held on to, which were based on anthropomorphized narratives expressed in terms of the acts of gods, heroes, or demons. On the one hand, there is no doubt that Plato did succeed in introducing new, more abstract, more general metaphors for the human condition. On the other hand, all he did was introduce another kind of myth. He invents the two-world mythology of an ideal realm transcending our imperfect world of everyday experiences. One of the most important philosophers of the early 20th century, Alfred North Whitehead, famously quipped that "[t]he safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato." Whitehead also introduced the concept of the fallacy of misplaced concreteness (sometimes called the reification fallacy), which pretty accurately describes what happened to Plato and his cave: this fallacy means you are mistaking something abstract for something concrete. In other words, you are mistaking something that is made up for something real. Oversimplifying just a little bit, we can say that this is what Christians did with the Platonic realm of ideal forms. If this world you live in does not make sense to you, just wait for the next one. It'll be much better. And so, the abstract realm of God became a cornerstone for our meaning-making up until the Renaissance and subsequent historical developments brought all kinds of doubts and troubles into the game. To be fair to Plato, he did not see his two worlds as disconnected and completely separated realms the way Christianity came to interpret him. His worlds were bridged by the transformative journey of anagoge after all. And that is why his story is still relevant today. 
Sometime between the Renaissance and Friedrich Nietzsche declaring God to be dead, Plato's ideal world became not so much implausible as irrelevant for an increasing number of people. It no longer touched their lives or helped them make sense. The resulting disappearance of Plato's ideal world is succinctly recounted in Nietzsche's "Twilight of the Idols" in what is surely one of the best one-page slams philosophy has ever produced. Unfortunately though, we threw out the baby with the bathwater. With the Platonic realm no longer a place to be untethered in, we also lost the notion of anagoge. This is tragic, because the transformative journey stands for the cultivation of wisdom. Self-transcendence has become associated with superficial MacBuddhism and new-age spiritual bypassing. An escape from reality. To come to know the world, we no longer consider our own personal development important (other than acquiring tools and methods, but that is hardly transformative). Instead, we believe in the application of the scientific method, narrowly defined as rationality in the form of logical inference applied to factual empirical evidence, as the best way to achieve rigorous understanding. Don't get me wrong: science is great, and its proper application is more important than ever before. What I'm saying here is that science alone is not sufficient to make sense of the world. To achieve that, we need to tether Plato's anagoge back to the real world. To understand what's going on, we must concede a central point to Plato: there is much more going on than we are aware of. Much more than we can rationally grasp. Our world contains an indefinite (and potentially infinite) number of phenomena that may be relevant to us; potentially unlimited differences that make a difference (to use Gregory Bateson's famous term). How do we choose what is important? How do we choose what to care about? This is not a problem we can rationally solve. First of all, any rational search for relevant phenomena will succumb to the problem of combinatorial explosion: there are simply too many possible candidates to rationally choose from. We get stuck trying. What's more, rationality presupposes that we have already chosen what to care about. You must have something to think about in the first place. The process of relevance realization, as described by Vervaeke and colleagues, however, happens at a much deeper level than our rational thinking. A level that is deeply experiential, and can only be cultivated by appropriate practice. I have much more to say about that at some later point. Thus, to summarize: the hidden realm that Plato suspected to be elevated above our real world is really not outside his cave, but within every one of us. An alternative metaphor for anagoge, without the requirement of a lost world of ideal forms, is to enter our shadows, to discover what is within them. This is what we are exploring with Marcus. Self-transcendence as an inward journey. Immanent transcendence, if you want. We are turning Plato's cave inside out. The hidden mystery is right there, not behind our backs, not in front of our noses, not inside our heads, but embedded in the way we become who we are. Here we can turn to Whitehead again, who noticed that to criticize the philosophy of your time, you must direct your attention to those fundamental assumptions which everyone presupposes, assumptions that appear so obvious that people do not know they are assuming them, because no alternative ways of putting things have ever occurred to them.
The assumption that reality can be rationally understood is one of these in our late modern times. It blinds us to a number of obvious insights. One of them is that we need to go inside ourselves to get a better grip on reality. This is not religious or new-age woo. It is existential. As the late E. O. Wilson rightly observed (in the context of tackling our societal and ecological issues): we are drowning in information, while starving for wisdom. We can gather more data forever. We can follow the textbook and apply the scientific method like an algorithm. We can formulate a theory of everything (that will really be about nothing). But without self-transcendence, we will never make any sense of the world. And we, as artists, philosophers, and scientists, have completely forgotten about that. Perhaps because we're too busy competing in our respective rat races, and don't allow ourselves to engage in idle play anymore. But I digress... There is the irony again: it's not Plato, but the scientists on that selection panel who are completely disconnected from reality. They've lost their grip to such an extent that they'd never even realize it. Where does that leave us? What do we need to do? There are a bunch of theoretical and practical ideas that I would like to talk about in future posts to this blog. But one thing is central: we can't just think our way through this in our armchairs. Philosophy is important. But I concede this point to my committee of conceited, condescending panelists: philosophy is only truly relevant if it touches on our practices of living, on our institutions, on our society. It is time for philosophy to come out of the ivory tower again. We need a philosophy that is not only thought. We need a philosophy that is practiced. The ancients, like Plato, were practitioners. Let's tether Plato back to the real world, where he can have his rightful impact. Just like his philosopher, who ultimately must return to the cave to complete her transformative journey. Watch the first performance of THE ZONE in Plato's Cave.
VR landscaping and images by Marcus Neustetter. Much of this blog entry is based on John Vervaeke's amazing work. Check out his life-changing lecture Awakening from the Meaning Crisis here. Or start with the summary of his ideas as presented on the Jim Rutt Show [Episode 1,2,3,4,5]. So, this is as good a reason as any to wake up from my blogging hibernation/estivation that lasted almost a year, and to start posting content on my website again. What killed me this last year was a curious lack of time (for someone who doesn't actually have a job) and a gross surplus of perfectionism. Some blog posts got started but never finished. And so on. And so forth. So here we are: I'm writing a very short post today, since the link I'll post will speak for itself, literally. A couple of weeks ago, I had the pleasure of talking to Paul Middlebrooks (@pgmid), who runs the fantastic "Brain Inspired" podcast. Paul is a truly amazing interviewer. He found me on YouTube, through my "Beyond Networks" lecture series. During our discussion, we covered an astonishingly wide range of topics, from the limits of dynamical systems modeling, to process thinking, to agency in evolution, to open-ended evolutionary innovation, to AI and agency, life, intelligence, deep learning, autonomy, perspectivism, the limitations of mechanistic explanation (even the dynamic kind), and the problem with synthesis (and the extended evolutionary synthesis, in particular) in evolutionary biology. The episode is now online. Check it out by clicking on the image below. Paul also has a breakdown of topics on his website, with precise times, so you can home in on your favorite without having to listen to all the rest. Before I go, let me say this: please support Paul and his work via Patreon. He has an excellent roster of guests (not counting myself), talking about a lot of really fascinating topics.
This is the English translation of an article that was originally published in German as part of the annual essay collection of Laborjournal (publication date Jul 7, 2020). Science finds itself exposed to an increasingly anti-intellectual and post-factual social climate. Few people realise, however, that the foundations of academic research are also threatened from within, by an unhealthy cult of productivity and spreading career-oriented self-censorship. Here I present a quick diagnosis with a few preliminary suggestions on how to tackle these problems. In Raphael's "School of Athens" (above) we see the ideal of the ancient Academy: philosophers of various persuasions think and argue passionately but rationally about the deep and existential problems of our world. With Hypatia, there is even a woman present at this boy's club (center left). These thinkers are protected by an impressive vault from the trivialities of the outside world, while the blue sky in the background opens up a space for daring flights of fancy. The establishment of modern universities — beginning in the early 19th century in Berlin — was very much inspired by this lofty vision. THE RESEARCH FACTORY Unfortunately, we couldn't be further from this ideal today. Modern academic research resembles an automated factory more than the illustrious discussion circle depicted by Raphael. Over the past few decades, science has been trimmed for efficiency according to the principles of the free-market economy. This is not only happening in the natural sciences, by the way, but also increasingly in the social sciences and the humanities. The more money the taxpayer invests in academia, the higher the expectation of rapid returns. The outcomes of scientific projects should have social impact and provide practical solutions to concrete problems. Even evolutionary theorists must fill out the corresponding section in their grant applications. Science is seen as a "deus ex machina" for solving our societal and technological problems. Just like we go to the doctor to get instant pain relief, we expect science to provide instant solutions to complex problems, or at the very least, a steady stream of publications, which are supposed to eventually lead to such solutions. The more money goes into the system, the more applied wisdom is expected to flow from the other end of the research pipeline. Or so the story goes. Unfortunately, basic research doesn't work that way at all. And, regrettably, applied science will get stuck quickly if we no longer do any real basic science. As Louis Pasteur once said: there is no applied research, only research and its practical applications. There are no short cuts to innovation. Just think about the history of the laser, theoretically predicted by Albert Einstein in 1917. The first functional ruby laser was constructed in 1960, and mass market applications of laser technology only began in the 1980s. A similar story can be told for Paul Dirac's 1928 prediction of the positron, which was confirmed experimentally in 1932. The first PET-scanner came to market in the 1970s. Or let's take PCR, of Covid-19 test fame. The polymerase chain reaction goes back to the serendipitous discovery of a high-temperature polymerase from a thermophilic bacterium first described by microbiologists Thomas Brock and Hudson Freeze (no joke!) in the hot springs of Yellowstone Park in the 1960s. PCR wasn't widely used in the laboratory until the 1990s. A study from 2013 by William H. 
Press — then a science advisor to Barack Obama — presents studies by economist and Nobel laureate Robert Solow, which look at the positive feedback between innovation, technology, and the wealth of various nations. Solow draws two key conclusions from his work. First, technological innovation is responsible for about 85% of U.S. economic growth over the past hundred years or so. Second, the richest countries today are those that were the first to establish a strong tradition of basic research. Press argues, building on Solow's insights, that basic research must be generously funded by the state. One reason is that it is impossible to predict which fundamental discoveries will lead to technological innovations. Second, the path to application can take decades, as the examples above illustrate. Finally, breakthroughs in basic science often have a low appropriability, that is, money gained from their application rarely flows back to the original investor. Think of Asian CD and DVD players equipped with lasers based on U.S. research and development, which yielded massive profits while outcompeting more expensive (and inferior) products of American make. This is the economic argument for why state-funded basic research is more important than ever. EFFICIENCY OR DIVERSITY? But this is exactly where the problem lies: basic research simply does not work according to the rules of the free market. Nevertheless, we have an academic research system that is increasingly dominated by these rules. Mathematicians Donald and Stuart Geman note that the focus of fundamental breakthroughs in science has shifted during the 20th century from conceptual to technological advances: from the radical revolution in our worldview brought about by quantum and relativity theory to the sequencing of the human genome, which, in the end, yielded disappointingly few medical advances or new insights into human nature. A whole variety of complex historical reasons are responsible for this shift. One of these is undoubtedly the massive transformation in the incentive structure for researchers. We have established a monoculture. A monoculture of efficiency and accountability, which leads to an impoverished intellectual environment that is no longer able to nourish innovative research ideas, even though there is more money available for science than ever before. Isn't it ironic that this money would be more efficiently invested if there were less pressure for efficiency in research? Researchers who need to be constantly productive to progress in their careers must constantly appear busy. This is absolutely fatal, particularly for theoretically and philosophically oriented projects. First of all, good theory requires creativity, which needs time, inspiration, and a certain kind of productive leisure. Second, the most important and radical intellectual breakthroughs are far ahead of their time, without immediately obvious practical application, and generally associated with a high level of risk. Those who tackle complex problems will fail more often. Some breakthroughs are only recognised in hindsight, long after they have been made. Few researchers today can muster the time and courage to devote themselves to projects with such uncertain outcomes. The time of the romantics is over; now the pragmatists are in charge. Those who want to be successful in current-day academia — especially at an early stage of their careers — must focus on tractable problems in established fields, the low-hanging fruit.
This optimises personal productivity and chances of success, but in turn diminishes diversity and originality of thinking in academic research overall, and wastes the best years of too many intrepid young explorers. Unfortunately, originality cannot be measured, while productivity can. Originality often leads to noteworthy conceptual innovations, but productivity on its own rarely does. Goodhart's Law — named after a British economist — says that a measure of success ceases to be useful once it has become an incentive. This is happening in almost all areas of society at the moment, as pointedly described by U.S. historian Jerry Z. Muller in his excellent book "The Tyranny of Metrics." In science, Goodhart's Law leads to increased self-citations, a flood of ever shorter publications (approaching what is called the minimal publishing unit) with an ever increasing number of co-authors, as well as more and more academic clickbait — sensational titles in glossy journals — that delivers less and less substance. Put succinctly: successful researchers are more concerned about their public image and their professional networks today than ever before, a tendency which is hardly conducive to depth of insight. What follows from all this is widespread career-oriented self-censorship among academics. If you want to be successful in science, you need to adapt to the system. Nowhere (with the potential exception of the arts) is this more harmful than in basic research. It leads to shallowness, it fosters narcissism and opportunism, and it produces more appearance than substance, problems which are gravely exacerbated by the constant acceleration of academic practice. Nobody has time anymore to follow complex trains of thought. An argument either fits your thinking habits (what you see as the zeitgeist of your field), or it is preemptively trashed upon review. In the U.S., for example, an empirical study has found that biomedical grant applications which continue the work of previously successful projects are favoured over those that do not. More of the same, instead of exploration where it is most promising. And so the monoculture becomes more monotonous yet. FROM AN INDUSTRIAL TO AN ECOLOGICAL MODEL OF RESEARCH PRACTICE How can we escape this vicious circle? It is not going to be easy. First, those who are profiting most from the current system are extremely complacent and powerful. They can show, through their quantitative metrics, that academic science is more productive than ever. The loss of originality (and the suffering of the victims of this system) is hard to measure, and is therefore not treated as a major issue. What cannot be measured does not exist. In addition, the current flurry of technological innovations (mostly in the area of information technology) gives us the impression that we have the world and our lives more under control than ever. All of this supports the impression that science is fully performing its societal function. But appearances can be deceptive. Indeed, we do not need more facts to tackle the existential problems of humanity. What we do need is deeper insight and more wisdom, and just like originality, these cannot be measured. There are cracks appearing in the facade of modern science, which suggest we must change our attitude. I've already mentioned the Human Genome Project, which cost a lot of money, but did not deliver the expected profusion of cures (or any deeper insight into human nature).
Even less convincing is the performance of the Human Brain Project so far, which promised us a simulation of the entire human prefrontal cortex for a mere billion euros. Not much has happened, but this is not surprising, because it was never clear what kind of insights we would gain from such a simulation anyway. These are signs that the technology-enamoured and -fixated system we've created is about to hit a wall.

Since the main problem of academic science is an increasing intellectual monoculture, it is tempting to use ecology as a model and inspiration for a potential reform. As mentioned at the outset, the current model of academic research is steeped in free-market ideology. It is an industrial system. We want control over the world we live in. We want measurable and efficient production. We foster this through competition. As in the examples of industrial agriculture and economic markets, the shadow side of this cult of productivity is risk-aversion and the potential for a ruinous race to the bottom. What we need is an ecological reform of academic research! Pretty literally. We need to shift from a paradigm of control to a paradigm of participation. Young researchers should be taken seriously, properly supported, and encouraged to take risks and responsibility. What we want is not maximal production, but maximal depth, sustainability, and reproducibility of scientific results. We want societal relevance based on deep insight rather than technological miracle cures. We need an open and collaborative research system that values the diversity of perspectives and approaches in science. We need a focus on innovation. In brief, we need more lasting quality rather than short-term quantity. Our scientific problems, therefore, mirror those of society at large quite closely.

STEPS TOWARDS AN ECOLOGICAL RESEARCH ECOSYSTEM

How is this supposed to work in practice? I assume that I am mostly addressing practicing researchers here. This is why I focus on propositions that can be implemented without major changes in national or international research policy. Let me classify them into four general topics:
Last week, I discussed an article published by Mike Levin and Dan Dennett in Aeon. I really don't want to obsess about this rather mediocre piece of science writing, but it does bring up a number of points that warrant some additional discussion. The article makes a number of strong claims about agency and cognition in biology. It confused me with its lack of precision and a whole array of rather strange thought experiments and examples. Since I published my earlier post, several Tweeps (especially a commenter called James of Seattle) have helped me understand the argument a little better. Much obliged! The result is an interpretation of the article that veers radically away from panpsychism, in a direction that's more consistent with Dennett's earlier work. Let me try to paraphrase:
The argument Levin and Dennett present is not exactly new. Points (1) to (3) are almost identical to Ernst Mayr's line of reasoning from 1961, which popularised the notion of "teleonomy"—denoting evolved behaviour that is driven by a genetic program and seems teleological because it was adapted to its function by natural selection. At least there is a tangible argument here that I can criticise. And it's interesting. Not because of what it says (I still don't think it talks about agency in any meaningful way), but because of what it's based on—its episteme, to use Foucault's term. To be more specific: this interpretation reveals that the authors' worldview rests on a double layer of metaphors that massively oversimplify what's really going on. Let me explain.

ORGANISMS ≠ MACHINES

The first metaphorical layer on which the argument rests is the machine conception of the organism (MCO). It is the reason we use terms such as "mechanism," "machinery," "program," "design," "control," and so on, to describe cells and other living systems. Levin and Dennett use a typical and very widespread modern version of the MCO, one based on computer metaphors. This view considers cells to be information-processing machines, an assumption that doesn't even have to be justified anymore. As Richard Lewontin (one of my big intellectual heroes) points out: "[T]he ur-metaphor of all modern science, the machine model that we owe to Descartes, has ceased to be a metaphor and has become the unquestioned reality: Organisms are no longer like machines, they are machines." Philosopher Dan Nicholson has written a beautiful and comprehensive critique of this view in an article published in 2013, called "Organisms ≠ Machines." (The only philosophical article I know of with a not-equal sign in its title, but maybe there are others?)

Dan points out that the machine metaphor seems justified by several parallels between machines and organisms. They are both bounded physical systems. They both act according to physical law. They both use and modify energy and transform part of it into work. They are both hierarchically structured and internally differentiated. They can both be described relationally in terms of causal interactions (as blueprints and networks, respectively). And they are both organised in a way that makes them operate towards the attainment of certain goals. Because of this, they can both be characterised in functional terms: knives are for cutting, lungs are for breathing. But, as Dan points out, the most obvious similarities are not always the most important ones! In fact, there are three reasons why the machine metaphor breaks down, all of which are intimately connected to the topic of organismic agency—the real kind, which enables organisms to initiate causal effects on their environments from within their system boundaries (see my earlier post). Here they are:
These are three pretty fundamental ways in which organisms are not at all like machines! And true agency depends on all of them, since it requires self-maintaining organisation, the kind that underlies intrinsic purpose, interdependence, and the open-ended, transient structure of the organism. To call preprogrammed evolved responses "agency" is to ignore these fundamental differences completely. Probably not a good thing if we really want to understand what life is (or what agency is, for that matter).

INTENTIONAL OVERKILL

The second metaphorical layer on which Levin and Dennett's argument rests is the intentional stance. Something really weird happens here: basically, the authors have done their best to convince us that organisms are machines. But then they suddenly pretend they're not. That they act with intentionality. Confused yet? I certainly am. The trick here is a subtle switch of meaning in the term "agency." While originally defined as a preprogrammed autonomous response of the cell (shaped by evolution), it now becomes something very much like true agency (the kind that involves action originating from within the system). This switch is justified by the argument that the cell is only acting as if it had intentions. Intentionality is a useful metaphor to describe the machine-like but autonomous behaviour of the cell. It is a useful heuristic. In a way, that's ok. Even Dan Nicholson agrees that this heuristic can be productive when studying well-differentiated parts of an organism (such as cells). But is this sane, is it safe, more generally? I don't think so.

The intentional stance creates more problems than it solves. For example, it leads the authors to conflate agency and cognition. This is because the intentional stance makes it easy to overlook the main difference between the two: cognitive processes—such as decision-making—involve true intentionality. Arguments and scenarios are weighed against each other. Alternatives considered. Basic agency, in contrast, does not require intentionality at all. It simply means that an organism selects from a repertoire of alternative behaviours according to its circumstances. It initiates a given activity in pursuit of a goal. But it need not be aware of its intentions. As mentioned earlier, agency and cognition are related, but they are not the same. Bacteria have agency, but no cognition. This point is easily lost if we consider all biological behaviour to be intentional. The metaphor fails in this instance, but we're easily fooled into forgetting that it was a metaphor in the first place. The exact opposite also happens, of course. If we take all intentionality to be metaphorical, we are bound to trivialise it in animals (like human beings) with a nervous system. The metaphorical overkill that is happening here is really not helping anyone grasp the full complexity of the problems we are facing. It explains phenomena such as agency and intentionality away, instead of taking them seriously. While the intentional stance is supposed to fix some of the oversimplifications of the machine metaphor, all it does is make them worse. The only thing this layering of metaphors achieves is obfuscation. We're fooling ourselves by hiding the fact that we've drastically oversimplified our view of life. Not good. And why, you ask, would we do this? What do we gain through this kind of crass self-deception?
Well, in the end, the whole convoluted argument is just there to save a purely mechanistic approach to cellular behaviour, while also justifying teleological explanations. We need this metaphorical overkill because we don't believe that we can be scientific without seeing the world as a mechanistic clockwork. This is a complicated topic. We'll revisit it very, very soon on this blog. I promise.

EMMENTAL CHEESE ONTOLOGY

In the meantime, let's see what kind of philosophical monster is being created here. The machine view and the intentional stance are both approaches to reality—they are ontologies in the philosophical sense of the term—that suit a particular way of seeing science, but don't really do justice to the complexity and depth of the phenomena we're trying to explain. In fact, they are so bad that they resemble layered slices of Emmental cheese: bland, full of holes, and with a slightly fermented odour. Ultimately, what we're doing here is creating a fiction, a simulation of reality. Jean Baudrillard calls this hyperreality; British filmmaker Adam Curtis calls it HyperNormalisation. It's the kind of model of reality we know to be wrong, but we still accept it. Because it's useful in some ways. Because it's comforting and predictable. Because we see no alternative. Not just fake news, but a whole fake world.
It's not cognition, but metaphors all the way down. Of course, the responsibility for this sorry state of affairs can't all be pinned on this one popular-science article. It's been going on since Descartes brought us the clockwork universe. Levin and Dennett's piece is just a beautiful example of the kind of mechanistic oversimplification modernity has generated. It demonstrates that this kind of science is reaching its limits. It may not have exhausted its usefulness quite yet, but it is certainly in the process of exhausting its intellectual potential. Postmodern criticisms—such as those by Foucault and Baudrillard, whom I've mentioned above—are hitting home. But they don't provide an alternative model for scientific knowledge, leaving us to drift in a sea of pomo-flavoured relativism. What we need is a new kind of science, resting on more adequate philosophical foundations, that answers those criticisms. One of the main missions of this blog is to introduce you to such an alternative. A metamodern science for the 21st century. The revolution is coming. Join it. Or stay with the mechanistic reactionaries. It's up to you.