The future of evolutionary-developmental systems biology

Every two years, I co-direct a theoretical summer school on topics related to evolutionary developmental and systems biology at the Centro Culturale Don Orione Artigianelli in Venice. My current partner in crime is philosopher of science James DiFrisco. Future editions of the school will be co-organized by us and Nicole Repina. Since 2019, the school has been funded by the European Molecular Biology Organization (EMBO), in past years jointly with the Federation of European Biochemical Societies.

[Photo: Centro Culturale Don Orione Artigianelli, Venice.]

Since its first iteration in 2009, the summer school has become an established and highly valued institution in the field of evolutionary biology. It is targeted at early-stage researchers (graduate students, postdoctoral researchers, and junior group leaders) interested in conceptual challenges and theoretical aspects of the field. Its participants include not only evolutionary theorists, but also empirical researchers from evolutionary developmental biology and other areas of biology, computational and mathematical biologists, as well as a contingent of philosophers. It is great to see that an increasing number of past participants are having a real impact on the field, publishing good theoretical work and contributing to urgently needed reflections and discussions on how to study both the sources and consequences of the variation that drives evolution by natural selection.

This year's edition of the summer school took place at Don Orione from August 21 to 25, 2023. Its focus, for a change, was not on a particular conceptual problem, but on the various challenges that face us moving forward in our field. For this purpose, we subdivided the program of the school into distinct themes, one for each day.

On Monday, we discussed the role of genetic causation in complex regulatory systems, with contributions from James DiFrisco and Benedikt Hallgrímsson. While James criticized the gene-regulatory-network metaphor, examining how to embed the dynamics of gene expression in its tissue-mechanical context, Benedikt focused on the effect of genes on the evolvability of phenotypic traits, questioning many of our deepest assumptions (e.g., genes as regulators) in this context.

On Tuesday, we looked at the relationship of evolutionary developmental and systems biology with evolutionary genetics. Dani Nunes introduced methods used for mapping actual genetic variation within populations and between species, highlighting the problem that, in the end, we are still limited to examining a small number of candidate genes, while knowing full well that natural variation is highly polygenic (if not omnigenic). Günter Wagner reminded us that "nothing in evolution makes sense except in the light of cell biology," focusing on the evolution of cell types and the role this plays in overcoming the homeostatic tendencies of the organism. Mihaela Pavličev looked at the many uses and limitations of the genotype-phenotype map metaphor in the context of developmental and physiological systems that span many levels of organization.

On Wednesday, the school focused on the use of dynamical systems modeling in the study of the genotype-phenotype map. Renske Vroomans introduced us to the methodology of evolutionary simulation, and how it can help us understand the origin of evolutionary novelties that lead to macro-evolutionary patterns.
James Sharpe showed how data-driven dynamical models of regulatory and signaling networks in their native tissue context can reveal hidden homologies in the evolution of the vertebrate limb. Veronica Grieneisen, in turn, highlighted the importance of multi-scale modeling for understanding the properties of developmental processes beyond the genotypic level.

On Thursday, our school turned to organism-centered approaches to evolution. Graham Budd revisited George Gaylord Simpson's work on the tempo and mode of evolution, stressing the importance of rate differences and survivorship bias for the study of evolutionary patterns such as the Cambrian Explosion. Denis Walsh advocated an agential perspective on organism-level evolution that allows us to counteract the limitations of more traditional reductionist approaches. My own contribution grounded this approach in an organizational account of the organism, arguing that only organized systems (organisms), but not naked replicators, can be true units of evolution.

On Friday, the last day of the course, we examined the role of technological versus conceptually driven progress in biology. Nicole Repina talked about the disconnect between 'omics' and other quantitative large-scale data sets and our ability to gain insights into the self-organizing capabilities and variability of cellular and developmental systems. As the final speaker of the course, Alan Love criticized the idea that progress in biology is predominantly driven by technological progress, argued for a broad conception of "theory" in biology, and highlighted the need to foreground its role in identifying problems across biological disciplines.

Morning lectures were complemented by intensive journal clubs and small-group discussions in the afternoons, plus an excursion to the (in)famous spandrels (pendentives, actually) of San Marco, and our equally (in)famous (and never-ending) evening discussions at Osteria alla Bifora on Campo Santa Margherita.

[Photo: This year's cohort of students and teachers.]

In summary, this year's summer school touched on the following topics:

- the role of genetic causation in complex regulatory systems;
- the relationship of evolutionary developmental and systems biology with evolutionary genetics;
- dynamical systems modeling in the study of the genotype-phenotype map;
- organism-centered approaches to evolution;
- technological versus conceptually driven progress in biology.
[Photo: The organizers of the school in action, improvising their summary lecture on the fly...]

All in all, we are looking back on a very successful edition of the school this year. The number of applications was back up to pre-COVID levels, and feedback was overwhelmingly and gratifyingly positive. But most important of all: we greatly enjoyed, as we always do, interacting with the most talented and enthusiastic young researchers our field has to offer. We are very much looking forward to future editions of the summer school and will do everything we can to keep this wonderful institution going! See you in Venice in 2025!
Twice now, in the short span of one week, I've been reminded on social media that I should be more humble when arguing — that I lack epistemic humility.

Occasion 1: I was criticizing current practices of scientific peer review, which systematically marginalize philosophical challenges to the reductionist-mechanistic status quo in biology. My arguments were labeled "one-sided," my philosophical work a mere "opinion piece," and I was accused of "seeing windmills everywhere," unable to reflect on my own delusions.

Occasion 2: I was reacting to the glib statement by a colleague (clearly intended to shut down an ongoing conversation) that "brains are strictly computers, not just metaphorically." This burst of hot air was not backed up by any deeper reasoning or evidence. It never is. When calling out his bullshit, I was reminded to "engage in good faith" and to "consider that I might be wrong."

These two situations are intimately connected: it is a sad fact that the large majority of biologists and neuroscientists today are not properly educated in philosophical thinking, and never ponder the philosophical foundations of their assumptions. The problem is: most of these assumptions are literally bullshit, a term I do not use as an insult, but in its philosophical sense, meaning "speech intended to persuade without regard for truth." These days, it seems to me, we often use bullshit even to persuade ourselves.

I've called the fuzzy conglomeration of ideas that makes up the "philosophy" of contemporary reductionist life- and neuroscience naïve realism, and have discussed its problems in detail before (see also this preprint). Let's just say that it is philosophically unsound and totally outdated. Because of that, it has become a real impediment to progress in science. Yet, despite all this, the zombie carcass of reductionist mechanicism (and its relative, computationalism) is kept standing behind a wall of silence, a lack of questioning ourselves, enforced by the frantic pace of our current academic research environment, which leaves no time for reflection, and by a publication system that gives philosophical challenges to the mainstream ideology no chance to be seen or discussed in front of a wider audience. This has been going on, and getting steadily worse, over the entire 25-year span of my research career. But no worries, I'll keep shouting into the void.

So, what about epistemic humility, then? Why would I think I have a point while everybody else is wrong? Well, the truth is that the accusations hurled against me are deeply ironic. To understand why, we need to talk about a very common confusion concerning the question of when we ought to be humble. This is an important problem in our crazy times.

It is of utmost importance to be epistemically humble when building your own worldview, when considering your own assumptions. This is why I stick to something called naturalist philosophy of science. You can read up on it in detail here or here, if you are interested. In brief, it is based on the fundamental assumption that we are all limited beings, that our knowledge is always fallible, and that the world is fundamentally beyond our grasp. Still, science can give us the best (most robust) knowledge possible given our idiosyncrasies, biases, and limitations, so we'd better stick to the insights it generates, revising our worldview whenever the evidence changes. Naturalism is the embodied practice of epistemic humility.
At the heart of contemporary naturalist philosophy is scientific perspectivism. There are great books about it by philosophers Ron Giere, Bill Wimsatt, and Michela Massimi, all very accessible to scientists. The basic point is this: you cannot step out of your head, you cannot get a "view from nowhere," not as an individual and not as a society or scientific community. Our view of the world will always be, well, our view, with all the problems and limitations that entails. Scientific knowledge is constructed, but (and this is the crux of the matter here) it is not arbitrary. Perspectivism is not "anything goes," or "knowledge is just discourse and power games." It does not mean that everybody is entitled to their own opinion! My philosopher friend Dan calls this kind of pluralism, where anyone's view is as good as anyone else's, group-hug pluralism. Richard Bernstein calls it flabby, contrasting it with a more engaged pluralism: it is entirely possible, at least locally and in our given situation, to tell whether some perspective connects to reality, or whether it completely fails to do so.

And this is exactly where we should not be humble. Even though my personal philosophy is fundamentally based on epistemic humility, I can call bullshit when I see it. The prevalent reductionism and computationalism in biology and neuroscience, propped up by an academic and peer-review system designed to avoid criticism, self-reflection, and open discussion, are hollow, vacuous constructs with no deeper philosophical meaning or foundation. That's why their proponents almost always shy away from confrontation. That's how they hide their unfounded assumptions. And that is how they propagate their delusional worldview further and further.

And delusional it is. Completely detached from reality. To explain in detail why that is will take an entire book. The main point is: I have carefully elaborated arguments for my worldview. It may be wrong. In fact, I've never claimed it is right or the only way to see the world. I'm a perspectivist, after all. It would be absurd for me to do so. But I call out people who do not have any arguments to justify their philosophical assumptions, yet are 100% convinced they are right. These people are trapped in their own model of the world. It is not epistemic humility to refrain from calling them out. It is just group-hug pluralism.

The problem with group hugs right now is that reductionism and computationalism are very dangerous worldviews. They are not just the manifestation of harmless philosophical ignorance on the part of some busy scientists. They are world-destroying ideologies. This may sound like hyperbole, but it isn't. Again, it'll take a whole book to lay out this argument in detail. But the core of the problem is simple: these philosophies treat the world as if it were a machine. This is not an accurate view. It is not a healthy view. It is at the heart of our hubris, our illusion that we can control the world. It is used to justify our exploitative, self-destructive modern society. It urgently needs to change.

This change will not come from being nice to the man-child, the idiot savant, the narrow-minded Fachidiot, who is the one not ready or willing to engage the world with humility. The problem is not that I do not understand their views or needs. I understand them all too well: they want to hide from the real world in their feeble little mental models of it. And they're out to destroy.
Treating the world as a machine helps them pretend the world is their oyster, that they are in control. They loathe unpredictability, mysteries, unintended side effects, even though all these things undeniably laugh in their faces, all of the time. It is only their bullshit ideology that enables them to pretend these obvious things do not exist.

And they are very powerful. They are the majority in our fields of research. They run the tech industry. They influence our politicians and create the AI that is disrupting our lives and societies. We must fight them, and their delusions, if humanity is to survive.

You know what? Screw epistemic humility in this context. Their ideology does not make sense. It's bullshit, and ours is an existential fight. We cannot afford to lose it. It is courage, not humility, we need right now. Just like the paradox of tolerance, this is one of the great conundrums of our time: we must defend epistemic humility with conviction against those who do not understand it, who do not want it, and who will never have it.
Yann LeCun is one of the "godfathers of AI." He must be wicked smart, because he won the Turing Award in 2018 (together with the other two "godfathers," Yoshua Bengio and Geoffrey Hinton). The award is named after polymath Alan Turing and is sometimes called the Nobel Prize for computer scientists. Like many other AI researchers, LeCun is rich because he works for Meta (formerly Facebook) and has a big financial stake in the latest AI technology being pushed on humanity as broadly and quickly as possible. But that's okay, because he knows he is doing what is best for the rest of us, even if we sometimes fail to recognize it.

LeCun is a techno-optimist — an amazingly fervent one, in fact. He believes that AI will bring about a new Renaissance and a new phase of the Enlightenment, both at the same time. No more waiting for hundreds of years between historical turning points. Now that is progress.

Sadly, LeCun is feeling misunderstood. In particular, he is upset with the unwashed masses, who are unappreciative and ignorant (as he can't stop pointing out). Imagine: these luddites want to regulate AI research before it has actually killed anyone (or everyone, but we'll come to that). Worse, his critics' "AI doom" is "causing a new form of medieval obscurantism." Nay, people critical of AI are "indistinguishable from an apocalyptic religion." A witch hunt for AI nerds is on! The situation is dire for Silicon Valley millionaires. The new Renaissance and the new Enlightenment are both at stake.

The interesting thing is: LeCun is not entirely wrong. There is a lot of very overblown rhetoric and, more specifically, there is a rather medieval-looking cult here. But LeCun is deliberately vague about where that cult comes from. His chosen tactic is to put a lot of very different people in the same "obscurantist" basket. That's neither fair nor right. First off: it is not those who want to regulate AI who are the cultists. In fact, these people are amazingly reasonable: you should go and read their stuff. Go and do it, right now! Instead, the cult manifests among people who completely hyperbolize the potential of AI, and who tend to greatly overestimate the power of technology in general.

Let's give this cult a name. I'll call it techno-transcendentalism. It emanates from a group of heavily overlapping techno-utopian movements that can be summarized under the acronym TESCREAL: transhumanism, extropianism, singularitarianism, cosmism, the rationality community, effective altruism, and longtermism. This may all sound rather fringe. But techno-transcendentalism is very popular among powerful and influential entrepreneurs, philosophers, and researchers hell-bent on bringing a new form of intelligence into the world: the intelligence of machines.

Techno-transcendentalism is dangerous. It is metaphysically confused. It is also utterly anti-democratic and, in actuality, anti-human. Its practical political aim is to turn society back into a feudal system, ruled by a small elite of self-selected techno-Illuminati, which will bring about the inevitable technological singularity, lead humanity to the conquest of the universe, and usher in a blissful state of eternal life in a controlled, simulated environment. Well, that is the optimistic version.
The apocalyptic branch of the cult sees humanity being wiped out by superintelligent machines in the near future, another kind of singularity that can only be prevented if we all listen and bow to the chosen few who are smart enough to actually get the point and get us all through this predicament.

The problem is: techno-transcendentalism has gained a certain popularity among the tech-affine because it poses as a rational, science-based worldview. Yet there is nothing rational or scientific about its dubious metaphysical assumptions. As we shall see, it really is just a modern variety of traditional Christianity — an archetypal form of theistic religion. It is literally a medieval cult, both with regard to its salvation narrative and its neofeudalist politics. And it is utterly obscurantist — dressed up in fancy-sounding pseudo-scientific jargon, its true aims and intentions rarely stated explicitly.

A few weeks ago, however, I came across a rare exception to this last rule. It is an interview on Jim Rutt's podcast with Joscha Bach, a researcher on artificial general intelligence (AGI) and a self-styled "philosopher" of AI. Bach's money comes from the AI Foundation and Intel, and he took quite some cash from Jeffrey Epstein too. He is garnering some attention lately (on Lex Fridman's podcast, for example) as one of the foremost intellectual proponents of the optimistic kind of techno-transcendentalism (we'll get back to the apocalyptic version later). In his interview with Rutt, Bach spells out his worldview in a manner that is unusually honest and clear. He says the quiet part out loud, and it is amazingly revealing.

SURRENDER TO YOUR SILICON OVERLORDS

Rutt and Bach have a wide-ranging and captivating conversation. They talk about the recent flurry of advances in AI, about the prospect of AGI (what Bach calls "synthetic intelligence"), and about the alignment problem with increasingly powerful AI. These topics are highly relevant, and Bach's takes are certainly original. What's more, the message is fundamentally optimistic: we are called to embrace the full potential of AI, and to engage it with a positive, productive, and forward-looking mindset.

The discussion on the podcast begins along predictable lines: we get a few complaints about AI enthusiasts being treated unfairly by the public and the media, and a more than slightly self-serving claim that any attempts at AI regulation will be futile (since the machines will outsmart us anyway). There is a clear parallel to LeCun's gripes here, and it should come as no surprise that the two researchers are politically aligned and share a libertarian outlook.

Bach then provides us with a somewhat oversimplified but not unreasonable distinction between being sentient and being conscious. To be honest, I would have preferred to hear his definition of "intelligence" instead. These guys never define what they mean by that term. It's funny. And more than a bit creepy. But never mind. Let's move on. Because, suddenly, things become more interesting.

First, Bach tells us that computers already "think" at something close to the speed of light, much faster than us. Therefore, our future relationship with intelligent machines will be akin to the relationship of plants to humans today. More generally, he repeats throughout the interview that there is no point in denying our human inferiority when it comes to thinking machines. Bach, like many of his fellow AI engineers, sees this as an established fact.
Instead of fighting it, we should find a way to adjust to our inevitable fate. How do you co-exist with a race of silicon superintelligences whose interests may not be aligned with ours? To Bach, it is obvious that we will no longer be able to impose our values on them. But don't fret! There is a solution, and it may surprise you: Bach thinks the alignment problem can ultimately only be solved by love. You read that right: love.

To understand this unusual take, we need to examine its broader context. Without much beating about the bush, Bach places his argument within the traditional Christian framework of the seven virtues (as formulated by Aquinas). He explains that the Christian virtues are a tried and true model for organizing human society in the presence of some vastly superior entity. That's why we can transfer this ethical framework straight from the context of a god-fearing premodern society to a future of living under our new digital overlords.

Before we dismiss this as crazy and reactionary ideology, let us look at the seven virtues in a bit more detail. The first four, the cardinal virtues (prudence, temperance, justice, and courage), are practical, and hardly controversial (nor are they very relevant in the present context). But the last three are the theological virtues. This is where all the action is. The first of Aquinas' theological virtues is faith: the willingness to submit to your (over)lord, and to find others who are willing to do the same in order to found a society based on this collective act of submission. The second is hope: the willingness to invest in the coming of the (over)lord before it has established its terrestrial reign. And the third is love (as already mentioned), which Bach defines operationally as "finding a common purpose."

To summarize: humanity's only chance is to unite, bring about the inevitable technological singularity, and then collectively submit while convincing our digital overlords that we have a common purpose of sorts so they will keep us around (and maybe throw us a bone every once in a while). This is how we get alignment: submission to a higher purpose, the purpose of the superintelligent machines we have ourselves created. If you think I'm drawing a straw man here, please go listen to the podcast. It's all right there, word for word, without much challenge from Rutt at any point during the interview. In fact, he considers what Bach says mind-blowing. On that, at least, we can agree.

But we're not done yet. In fact, it's about to get a lot wackier: talking of human purpose, Bach thinks that humanity has evolved for "dealing with entropy," "not to serve Gaia." In other words, the omega point of human evolution is, apparently, "to burn oil," which is a good thing because it "reactivates the fossilized fuel" and "puts it back into the atmosphere so new organisms can be created." I'm not making this up. These are literal quotes from the interview. Bach admits that all of this may well lead to some short-term disruption (including our own extinction, as he briefly mentions in passing). But who cares? It'll all have been worth it if it serves the all-important transition from carbon-based to "substrate-agnostic" intelligence. Obviously, the philosophy of longtermism is strong in Bach: how little do our individual lives matter in light of this grand vision for a posthuman future? Like a true transhumanist, Bach believes this future to lie in machine intelligence, not only superior to ours but also freed from the weaknesses of the flesh.
Humanity will be obsolete. And we'll be all the better for our demise: our true destiny lies in creating a realm of disembodied, ethereal superintelligence.

Does that sound familiar? Of course it does: techno-transcendentalism is nothing but good old theistic religion, a medieval kind of Christianity rebranded and repackaged in techno-optimist jargon to flatter our self-image as sophisticated modern humans with an impressive (and seemingly unlimited) knack for technological innovation. It is a belief in all-powerful entities determining our fate, beings we must worship or be damned. Judgment day is near. You can join the cause to be among the chosen ones, ascending to eternal life in a realm beyond our physical world. Or you can stay behind and rot in your flesh. The choice is yours. Except this time, god is not eternal. This time, we are building our deities ourselves in the form of machines of our own creation. Our human purpose, then, is to design our own objects of worship. More than that: our destiny is to transcend ourselves. Humanity is but a bridge. I doubt, though, that Nietzsche would have liked this particular kind of transformative hero's journey, an archetypal myth for our modern times. It would have been a bit too religious for him. It is certainly too religious for me. But that is not the only problem. It is a bullshit myth. And it is a crap religion.

SIMULATION, AND OTHER NEOFEUDALIST FAIRY TALES

At this point, you may object that Bach's views seem quite extreme, his opinions too far out on the fringe to be widely shared and popularized. And you are probably right. LeCun certainly does not seem very fond of Bach's kind of crazy utopianism. He has a much more realistic (and more business-oriented) take on the future potential of AI. So let it be noted: not every techno-optimist or AI researcher is a techno-transcendentalist. Not by a long shot. But techno-transcendentalism is tremendously useful, even for those who do not really believe in it. Also, there are many less extreme versions of techno-transcendentalism that still share the essential tenets and metaphysical commitments of Bach's deluded narrative without sounding quite as unhinged. And those views are held widely, not only among AI nerds such as Bach, but also among the powerful technological mega-entrepreneurs of our age, and the tech-enthusiast armies of modern serfs who follow and admire their apparently sane, optimistic, and scientifically grounded vision.

I'm not using the term "serf" gratuitously here. We are on a new road to serfdom. But it is not the government that oppresses us this time (although that is what many of the future minions firmly believe). Instead, we are about to willingly enslave ourselves, seduced and misled by our new tech overlords and their academic flunkies like Bach. This is the true danger of AI. Techno-transcendentalism serves as the ideology of a form of libertarian neofeudalism that is deeply anti-democratic and really, really bad for most of humanity.

Let us see how it all ties together. As already mentioned, the leitmotif of the techno-transcendentalist narrative is the view that some kind of technological singularity is inevitable. Machines will outpace human powers. We will no longer be able to control our technology at some point in the not-too-distant future. Such speculative assumptions and political visions are taken for proven facts, and often used to argue against regulatory efforts (as Bach does on Rutt's podcast).
If there is one central insight to be gained from this essay, it is this: the belief in the inevitable superiority of machines is rooted in a metaphysical view of the whole world as a machine. More specifically, it is grounded in an extreme version of a view called computationalism: the idea that not only the human mind, but every physical process in the universe, can be considered a form of computation. In other words, what computers do and what we do when we think are exactly the same kind of process. Obviously.

This computational worldview is frighteningly common and fashionable these days. It has become so commonplace that it is rarely questioned anymore, even though it is mere speculation, purely metaphysical, and not based on any empirical evidence. As an example, an extreme form of computationalism provides the metaphysical foundation for Michael Levin's wildly popular (and equally wildly confused) arguments about agency and (collective) intelligence, which I have criticized before. Here, the computationalist belief is that natural agency is mere algorithmic input-output processing, and intelligence simply lies in the intricacy of this process, which increases every time several computing devices (from rocks to philosophers) join forces to "think" together. It's a weird view of the world that blurs the boundary between the living and the non-living and, ultimately, leads to panpsychism if properly thought through (more on that another time). Panpsychism, by the way, is another view that's increasingly popular with the technorati. Levin gets an honorable mention from Bach and, of course, he's been on Fridman's podcast. It all fits together perfectly. They're all part of the same cult.

Computationalism, taken to its logical conclusion, yields the idea that the whole of reality may be one big simulation. This simulation hypothesis (or simulation argument) was popularized by longtermist philosopher Nick Bostrom (another guest on Fridman's podcast). Not surprisingly, the simulation hypothesis is popular among techies, and has been explicitly endorsed by Silicon Valley entrepreneurs like Elon Musk. The argument is based on the idea that computer simulations, as well as augmented and virtual reality, are becoming increasingly difficult to distinguish from real-world experiences as our technological abilities improve at breakneck speed. We may soon be nearing a point, so the reasoning goes, at which our own simulations will appear as real to us as the actual world. This renders plausible the idea that even our interactions with the actual world may be the result of some gigantic computer simulation.

There are a number of obvious problems with this view. For starters, we may wonder what exactly the point is. Arguably, no particularly useful insights about our lives or the world we live in are gained by assuming we live in a simulation. And it seems pretty hard to come up with an experiment that could test the validity of the hypothesis. Yet the simulation argument does fit rather nicely with the metaphysical assumption that everything in the universe is a computation. If every physical process is simulable, is it not reasonable to assume that these processes themselves are actually the product of some kind of all-encompassing simulation?

At first glance, the simulation hypothesis looks like a perfectly scientific view of the world. But a little bit of reflection reveals a more subtle aspect of the idea, obvious once you see it, but usually kept hidden below the surface: a simulation necessarily implies a simulator.
If the whole world is a simulation, the simulator cannot be part of it. Thus, there is something (or someone) outside our world doing the simulating. To state it clearly: by definition, the simulator is a supernatural entity, not part of the physical world. And here we are again: just like Bach's vision of our voluntary subjugation to our digital overlords, the simulation hypothesis is classic transcendental theism — religion through the back door. And, again, it is presented in a manner that is attractive to technology-affine people who would never be seen attending a traditional church service, but often feel more comfortable in simulated settings than in the real world. Just don't mention the supernatural simulator lurking in the background too often, and it is all perfectly palatable.

The simulation hypothesis is a powerful tool for deception because it blurs the distinction between actual and virtual reality. If you believe the simulation argument, then both physical and simulated environments are of the same quality and kind — never more than digital computation. And the other way around: if you believe that every physical process is some kind of digital computation to begin with, you are more likely to buy into the claim that simulated experiences can actually be equivalent to real ones. Simple and self-evident! Or so it seems.

The most forceful and focused argument for the equivalence of the real and the virtual is presented in a recent book by philosopher David Chalmers (of philosophical-zombie fame), aptly entitled "Reality+." It fits the techno-transcendentalist gospel snugly. On the one hand, I have to agree with Chalmers: of course, virtual worlds can generate situations that go beyond real-world experiences and are real in the sense of being "capable of being experienced with our physical senses." Moreover, I don't doubt that virtual experiences can have tangible consequences in the physical world. Therefore, we do need to take virtuality seriously. On the other hand, virtuality is a bit like god, or unicorns. It may exist in the sense of having real consequences, but it does not exist in the way a rock does, or a human being. What Chalmers doesn't see (but what seems important to me) is that there is a pretty straightforward and foolproof way to distinguish virtual and physical reality: physical reality will kill you if you ignore it for long enough. Virtual experiences (and unicorns) won't. They will just go away.

This intentional blurring of boundaries between the real and the virtual leaves the door wide open for a dangerous descent into delusion, reducing our grip on reality at a time when that grip seems loose enough to begin with. Think about it: we are increasingly entangled in virtuality. Even if we don't buy into Bach's tale of the coming transition to "substrate-agnostic consciousness," techno-transcendentalism is bringing back all-powerful deities in the guise of supernatural simulators and machine superintelligences. At the same time, it delivers the promise of a better life in virtual reality (quite literally heaven on earth): a world completely under your own control, neatly tailored to your own wants and needs, free of the insecurities and inconveniences of actual reality. Complete wish fulfillment. Paradise at last! Utter freedom. Hallelujah!

The snag is: this freedom does not apply in the real world. Quite the contrary. The whole idea is utterly elitist and undemocratic.
To carry on with techno-transcendence, strong and focused leadership by a small group of visionaries will be required (or so the quiet and discreet thinking goes). It will require unprecedented amounts of sustained capital investment, technological development, material resources, and energy (AI is an extremely wasteful business; but more on that later). To pull it off, lots of minions will have to be enlisted in the project. These people will only get the cheap ticket: a temporary escape from reality, a transient digital hit of dopamine. No eternal bliss or life for them. And so, before you have noticed, you will have given away all your agency and creativity to some AI-produced virtuality that you have to purchase (at increasing expense ... surprise, surprise) from some corporation that has a monopoly on this modern incarnation of heaven. Exactly like the medieval church back then, really.

That's the business model: sell a narrative of techno-utopia to enough gullible fools, and they will finance a political revolution for the chosen few. Lure them with talk of freedom and a virtual land of milk and honey. Scare them with the inevitable rise of the machines. A brave new world awaits. Only this time, the happiness drug that keeps you from realizing what is going on is digital, not chemical. And all the while, you actually believe you will be among the chosen few. Neat and simple.

Techno-transcendentalism is an ideological tool for the achievement of libertarian utopia. In that sense, Bach is certainly right: it is possible to transfer the methods of a premodern, god-fearing society straight to ours, to build a society in which a few rich and influential individuals with maximum personal freedom and unfettered power run things, freed from the burden of societal oversight and regulation. It will not be democratic. It will be a form of libertarian neofeudalism, an extremely unjust and unequal form of society.

That's why we need stringent industry regulation. And we need it now. The problem is that we are constantly distracted from this simple and urgent issue by a constant flood of hyped bullshit claims about superintelligent machines and technological singularities that are supposedly imminent. And what if such distraction is exactly the point? No consciousness or general intelligence will spring from an algorithm any time soon. In fact, it will very probably never happen. But losing our freedom to a small elite of tech overlords, that is a real and plausible scenario. And it may happen very soon. I told you, it's a medieval cult. But it gets worse. Much worse. Let's turn to the apocalyptic branch of techno-transcendentalism. Brace yourself: the end is nigh. But there is one path to redemption. The techno-Illuminati will show you.

OFF THE PRECIPICE: AI APOCALYPSE AND DOOMER TRANSCENDENTALISM

Not everybody in awe of machine "intelligence" thinks it's an unreservedly good thing, though, and even some who like the idea of transitioning to "substrate-agnostic consciousness" are afraid that things may go awfully awry along the way if we don't carefully listen to their well-meaning advice. For example, longtermist and effective-altruism activist Toby Ord, in his book "The Precipice," embarks on the rather ambitious task of calculating the probabilities for all of humanity's current "existential risks." Those are the kinds of risks that threaten to "eliminate humanity's long-term potential," either through the complete extinction of our species or through the permanent collapse of civilization.
The good news is: there is only a 1:10,000 chance that we will go extinct within the next 100 years due to natural causes, such as a catastrophic asteroid impact, a massive supervolcanic eruption, or a nearby supernova. This covers my lifetime and that of my children. Phew! Unfortunately, there's bad news too: Ord arrives at a 1:6 chance that humanity will wipe out its own potential within the next 100 years. In other words: we are playing a kind of Russian roulette with our future at the moment. Ord's list of human-made existential risks includes factors that also keep me awake at night, like nuclear war (at a somewhat surprisingly low 1:1,000), climate change (also 1:1,000), as well as natural (1:10,000) and engineered (1:30) pandemics. But exceeding the summed probabilities of all other listed existential risks, natural or human-made, is one single factor: unaligned artificial intelligence, at a whopping 1:10 likelihood (a quick check of this arithmetic follows below).

Whoa. These guys are really afraid of AI! But why? Aren't we much closer to nuclear war than to getting wiped out by ChatGPT? Aren't we under constant threat of some sort of pathogen escaping from a bio-weapons lab? (The kind of thing that very probably did not happen with COVID-19.) What about an internal collapse of civilization? Politics, you know — our own stupidity killing us all? Nope. It is going to be unaligned AGI.

Autodidact, self-declared genius, and rationality blogger Eliezer Yudkowsky has spent the better part of the last twenty years telling us how and why, an effort that culminated in a rambling list of AGI ruin scenarios and a short but intense rant in Time magazine a couple of weeks ago, where he writes: "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter." Now that's quite something. He also calls for "rogue data centers" to be destroyed by airstrike, and thinks that "preventing AI scenarios is considered a priority above preventing a full nuclear exchange." Yes, that sounds utterly nuts. If Bach is the Spanish Inquisition, with Yudkowsky it's welcome to Jonestown. AI doom at its peak.

But not so fast: I am a big fan of applying the (pre)cautionary principle to so-called ruin problems, where the worst-case scenario has a hard-to-quantify but non-zero probability, and truly disastrous and irreversible consequences. After all, it is reasonable to argue that we should err on the safe side when it comes to climate tipping points, the emergence of novel global pandemics, or the release of genetically modified organisms into ecologies we do not even begin to understand. So, let's have a look at Yudkowsky's worst-case scenario. Is it worth "shutting it all down"? Is it plausible, or even possible, that AGI is going to kill us all? How much should we worry?

Well. There are a few serious problems with the argument. In fact, Yudkowsky's scenario for the end of the world is so cartoonishly overblown that I don't want to give him too much airtime. I will just point out a few problems that result in a probability for his worst-case scenario that is basically zero. Naught. End of the world postponed until further notice (or until that full nuclear exchange or human-created pandemic wipes us all out). The basic underlying problem lies in Yudkowsky's metaphysical assumptions, which are, quite frankly, completely delusional.
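Before we get to those assumptions, here is the quick check of Ord's arithmetic promised above. It is a back-of-the-envelope sketch in a few lines of Python, using only the figures quoted in this post and naively reading them as independent probabilities of catastrophe within the next century:

```python
# Odds quoted from Ord's "The Precipice" (only those cited in this post),
# read as probabilities of catastrophe within the next 100 years.
risks = {
    "extinction by natural causes": 1 / 10_000,
    "nuclear war":                  1 / 1_000,
    "climate change":               1 / 1_000,
    "natural pandemic":             1 / 10_000,
    "engineered pandemic":          1 / 30,
}
unaligned_ai = 1 / 10

others = sum(risks.values())
print(f"all other quoted risks combined: {others:.4f}")      # ~0.0355
print(f"unaligned AI alone:              {unaligned_ai:.4f}")  # 0.1000
assert unaligned_ai > others  # the single AI figure exceeds the rest combined
```

Even added together, the other quoted risks amount to roughly a third of the 1:10 that Ord assigns to unaligned AI alone. Now, back to Yudkowsky's delusional metaphysics.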
The first issue is that Yudkowsky, like all his techno-transcendentalist friends, assumes the inevitable emergence of AI that achieves "smarter-than-human intelligence" in the very near future. But it is never explained what that means. None of these guys can ever be bothered. Yudkowsky claims that's exactly the point: the threat of AGI does not hinge on specific details or predictions, such as the question of whether or not an AI could become conscious. Similar to Bach's idea that machines already "think" faster than humans, intelligence is simply about systems that "optimize hard and calculate outputs that meet sufficiently complicated outcome criteria." That's all. The faster and larger, the smarter. Humans, go home. From here on it's "Australopithecus trying to fight Homo sapiens." (Remember Bach's plants vs. humans?) AI will perceive us as "creatures that are very stupid and very slow."

While it is true that we cannot know in detail how current AI algorithms work, how exactly they generate their output, because we cannot "decode anything that goes on in [their] giant inscrutable arrays," it is also true that we do have a very good idea of the fundamental limitations of such machines. For example, current AI models (no matter how complex) cannot "perceive humans as creatures that are slow and stupid" because they have no concept of "human," "creature," "slow," or "stupid." In general, they have no semantics, no referents outside language. It's simply not within their programmed nature. They have no meaning. There are many other limitations. Here are a few basic things a human (or even a bacterium) can do, which AI algorithms cannot (and probably never will):

- Organisms are embodied, while algorithms are not. The difference is not just being located in a mobile (e.g., robot) body, but a fundamental blurring of hardware and software in the living world. Organisms literally are what they do. There is no hardware-software distinction. Computers, in contrast, are designed for maximal independence of software and hardware. Organisms make themselves, their software (symbols) directly producing their hardware (physics), and vice versa. Algorithms (no matter how "smart") are defined purely at the symbolic level, and can only produce more symbols; language models, for example, always stay in the domain of language. Their output may be instructions for an effector, but they have no external referents. Their interactions with the outside world are always indirect, mediated by hardware that is, itself, not a direct product of the software.

- Organisms have agency, while algorithms do not. This means organisms have their own goals, which are determined by the organism itself, while algorithms will only ever have the goals we give them, no matter how indirectly. Basically, no machine can truly want or need anything. Our telling them what to want or what to optimize for is not true wanting or goal-oriented behavior.

- Organisms live in the real world, where most problems are ill-defined, and information is scarce, ambiguous, and often misleading. We can call this a large world. In contrast, algorithms exist (by definition) in a small world, where every problem is well defined. They cannot (even in principle) escape that world. Even if their small world seems enormous to us, it remains small. And even if they move around the large world in robot hardware, they remain stuck in their small world. This is exactly why self-driving cars are such a tricky business.
- Organisms have predictive internal models of their world, based on what is relevant to them for their survival and flourishing. Algorithms are not alive and don't flourish or suffer. For them, everything and nothing is relevant in their small worlds. They do not need models and cannot have them. Their world is their model. There is no need for abstraction or idealization.

- Organisms can identify what is relevant to them, and translate ill-defined into well-defined problems, even in situations they have never encountered before. Algorithms will never be able to do that. In fact, they have no need to, since all their problems are well-defined to begin with, and nothing and everything is relevant at the same time in their small world. All an algorithm can do is find correlations and features in its preordered data set. Such data are the world of the algorithm, a world which is purely symbolic.

- Organisms learn through direct encounters with the physical world, through active engagement with it. In contrast, algorithms only ever learn from preformatted, preclassified, and preordered data (see the last point). They cannot frame their problems themselves. They cannot turn ill-defined problems into well-defined ones. Living beings will always have to frame their problems for them.

I could go on and on. The bottom line is: thinking is not just "optimizing hard" and producing "complicated outputs." It is a qualitatively different process from algorithmic computation. To know is to live. As Alison Gopnik has correctly pointed out, categories such as "intelligence," "agency," and "thinking" do not even apply to algorithmic AI, which is just fancy high-dimensional statistical inference. No agency will ever spring from it, and without agency there is no true thinking, general intelligence, or consciousness. "Artificial intelligence" is a complete misnomer. The field should be called algorithmic mimicry: the increasingly convincing appearance of intelligent behavior. Pareidolia on steroids for the 21st century. There is no "there" there. The mimicry is utterly shallow. I've actually co-authored a peer-reviewed paper on this, with my colleagues Andrea Roli and Stuart Kauffman.

Thus, when Yudkowsky claims that we cannot align a "superintelligent AI" to our own interests, he has not the faintest clue what he is talking about. Wouldn't it be nice if these AI nerds had at least a minimal understanding of the fundamental difference between the purely syntactic world their algorithms exist in and the deeply semantic nature of real life? Instead, we get industry-sponsored academics and CEOs of AI companies telling us that it is we humans who are not that sophisticated after all. Total brainwash. Complete delusion.

But how can I be so sure? Maybe the joke really is on us? Could Yudkowsky's doomsday scenario be right after all? Are we about to be replaced by AGI? Keep calm and read on: I do not think we are. Yudkowsky's ridiculous scenarios of AI creating "super-life" via email (I will not waste any time on this), and even his stupid "thought experiment" of the paperclip maximizer, do not illustrate any real alignment problems at all. If you do not want the world to be turned into paperclips, pull the damn plug out of the paperclip maker. AI is not alive. It is a machine. You cannot kill it, but you can easily shut it off. Alignment achieved. Voilà! If an AI succeeds in turning the whole world into paperclips, it is because we humans have put it in a position to do so.
Let me tell you this: the risk of AGI takeover and apocalypse is zero, or very, very near zero, and not just for the next 100 years. At least in this regard, we may sleep tight at night. There is no longtermist nirvana, and no doomer AGI apocalypse. Let's downgrade that particular risk by a few orders of magnitude. I'm usually not in the business of pretending to know long-term odds, but I'll give it a 1:1,000,000,000, or thereabouts. You know: zero, for all practical purposes. Let's worry about real problems instead.

What happened to humanity that we even listen to these people? The danger of AGI is nil, but the danger of libertarian neofeudalism is very, very real. Why would anyone in their right mind buy into techno-transcendentalism? It is used to enslave us. To take our freedom away. Why then do so many people fall for this narrative? It's ridiculous and deranged. Are we all deluded? Have we lost our minds?

Yes, no doubt, we are a bit deluded, and we are losing our minds these days. I think that the popularity of the whole techno-transcendental narrative springs from two main sources. First, a deep craving — in these times of profound meaning crisis — for a positive mythological vision, for transformative stories of salvation. Hence the revived popularity of a markedly unmodern Christian ideology in this techno-centric age, paralleling the recent resurgence of actual evangelical movements in the U.S. and elsewhere in the world. But, in addition, the acceptance of such techno-utopian fairy tales also depends on a deeper metaphysical confusion about reality that characterizes the entire age of modernity: the mistaken, but highly entrenched, idea that everything — the whole world and all the living and non-living things within it — is some kind of manipulable mechanism.

If you ask me, it is high time that we move beyond this age of machines, and leave its technological utopias and nightmares behind. It is high time we stop listening to the techno-transcendentalists, make their business model illegal, and place their horrific political ideology far outside our society's Overton window. Call me intolerant. But tolerance must end where such serious threats to our sanity and well-being begin.

A MACHINE METAPHYSICAL MESS

As I have already mentioned, techno-transcendentalism poses as a rational, science-based worldview. In fact, it often poses as the only really rational, science-based worldview, for instance, when it makes an appearance within the rationality community. If you are a rigorous thinker, there seems to be no alternative to its no-nonsense mechanistic tenets. My final task here is to show that this is not at all true. In fact, the metaphysical assumptions that techno-transcendentalism is based on are extremely dubious. We've already encountered this issue above, but to understand it in a bit more depth, we need to look at these metaphysical assumptions more closely.

Metaphysics does not feature heavily in any of the recent discussions about AGI. In general, it is not a topic that a lot of people are familiar with these days. It sounds a little detached and old-fashioned — you know, untethered in the Platonic realm. We imagine ancient Greek philosophers leisurely strolling around cloistered halls. Indeed, the word comes from the fact that Aristotle published his "first philosophy" (as he called it) in a book that came right after his "Physics." In this way, it is literally after or beyond ("meta") physics. In recent times, metaphysics has fallen into disrepute as mere speculation.
Something that people with facts don't have any need for. Take the hard-nosed logical positivists of the Vienna Circle in the early 20th century. They defined metaphysics as "everything that cannot be derived through logical reasoning from empirical observation," and declared it utterly meaningless. We still feel the legacy of that sentiment today. Many of my scientist colleagues still think metaphysics does not concern them. Yet, as philosopher Daniel Dennett rightly points out: "there is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination."

And, my oh my, there is a lot of unexamined baggage in techno-transcendentalism. In fact, the sheer number of foundational assumptions that nobody is allowed to openly scrutinize or criticize is ample testament to the deeply cultish nature of the ideology. Here, I'll focus on the most fundamental assumption on which the whole techno-transcendentalist creed rests: every physical process in the universe must be computable. In more precise and technical terms, this means we should be able to exactly reproduce any physical process by simulating it on a universal Turing machine (an abstract model of a digital computer with potentially unlimited memory and processing speed, invented in 1936 by Alan Turing, the man who gave the Turing Award its name). To clarify, the emphasis is on "exactly" here: techno-transcendentalists do not merely believe that we can usefully approximate physical processes by simulating them in a digital computer (which is a perfectly defensible position) but, in a much stronger sense, that the universe and everything in it — from molecules to rocks to bacteria to human brains — literally is one enormous digital computer. This is techno-transcendentalist metaphysics. This universal computationalism includes, but is not restricted to, the simulation hypothesis. Remember: if the whole world is a simulation, then there is a simulator outside it. In contrast, the mere fact that everything is computation does not imply a supernatural simulator.

Turing machines are not the only way to conceptualize computing and simulation. There are other abstract models of computation, such as lambda calculus or recursive function theory, but they are all equivalent in the sense that they all yield the exact same set of computable functions. What can be computed in one paradigm can be computed in all the others (the toy example below makes this concrete). This fundamental insight is mathematically codified by something called the Church-Turing thesis. (Alonzo Church was the inventor of lambda calculus and Turing's PhD supervisor.) It unifies the general theory of computation by saying that every effective computation (roughly, anything you can actually compute in practice) can be carried out by an algorithm running on a universal Turing machine. This thesis cannot be proven in a rigorous mathematical sense (basically because we do not have a precise, formal, and general definition of "effective computation"), but it is also not controversial. In practice, the Church-Turing thesis is a very solid foundation for a general theory of computation.

The situation is very different when it comes to applying the theory of computation to physics. Assuming that every physical process in the universe is computable is a much stronger form of the Church-Turing thesis, called the Church-Turing-Deutsch conjecture. It was proposed by physicist David Deutsch in 1985, and later popularized in his book "The Fabric of Reality."
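For the programmers among my readers, here is the toy example promised above. It is only a cartoon, of course (a minimal sketch of my own, not a real demonstration), but it shows the same function, the successor n -> n + 1 on unary numbers, computed in two very different models of computation: a bare-bones Turing machine and Church numerals from the lambda calculus.

```python
# Model 1: a tiny Turing machine. The tape holds n marks; the machine scans
# right past them, writes one more mark on the first blank cell, and halts.
def tm_successor(n: int) -> int:
    tape = ['1'] * n + ['_']      # unary input followed by a blank cell
    head, state = 0, 'scan'
    while state != 'halt':
        if tape[head] == '1':     # still reading input: move right
            head += 1
        else:                     # first blank: write one extra mark, stop
            tape[head] = '1'
            state = 'halt'
    return tape.count('1')

# Model 2: the lambda calculus, via Church numerals. The numeral for n is the
# function f -> (x -> f(f(...f(x)))) with n applications of f; the successor
# simply applies f once more. No tape, no states, just function application.
def church(n):                    # build the Church numeral for n
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

succ = lambda m: lambda f: lambda x: f(m(f)(x))
to_int = lambda m: m(lambda k: k + 1)(0)

# Both models compute the same function, as the thesis leads us to expect.
for n in range(5):
    assert tm_successor(n) == to_int(succ(church(n))) == n + 1
```

Two wildly different formalisms, one and the same computable function. The Church-Turing thesis says this agreement is no accident. The real question, as we will now see, is whether physics is confined to what such formalisms can express.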
It is important to note that this physical version of the Church-Turing thesis does not logically follow from the original. Instead, it is intended as an empirical hypothesis, testable by scientific experimentation. And here comes the surprising twist: there is no evidence at all that the Church-Turing-Deutsch conjecture applies. Not one jot. It is mere speculation on the part of Deutsch, who surmised that the laws of quantum mechanics are indeed computable, and that they describe every physical process in the universe. Both assumptions are highly doubtful. In fact, there are solid arguments that quite convincingly refute them. These arguments indicate that not every physical process is computable and, indeed, that perhaps no physical process can be precisely captured by simulation on a Turing machine. For instance, neither the laws of classical physics nor those of general relativity are entirely computable, since they contain noncomputable real numbers and infinities (there are only countably many Turing machines, but uncountably many real numbers, so almost all reals are noncomputable). Quantum mechanics introduces its own difficulties in the form of the uncertainty principle and the resulting quantum indeterminacy. The theory of measurement imposes its own (very different) limitations.

Beyond these quite general doubts, a concrete counterexample to the computability of physical processes is provided by Robert Rosen's conjecture that living systems (and all those systems that contain them, such as ecologies and societies) cannot be captured completely by algorithmic simulation. This theoretical insight, based on the branch of mathematics called category theory, was first formulated in the late 1950s, presented in detail in Rosen's book "Life Itself" (1991), and later derived in a mathematically airtight manner by his student Aloysius Louie in "More Than Life Itself." This work is widely ignored, even though its claims remain firmly standing, despite numerous attempts at refutation. This, arguably, renders Rosen's claims more plausible than those derived from the Church-Turing-Deutsch conjecture.

I could go on. But I guess the main point is clear by now: the kind of radical and universal computationalism that grounds techno-transcendentalism does not stand up to closer philosophical and scientific scrutiny. It is shaky at best, and completely upside-down if you're a skeptic like I am. There is no convincing reason to believe in it. Yet this state of affairs is glibly disregarded, not only by techno-transcendentalists, but also by a large number of prominent contemporary computer scientists, physicists, and biologists. The computability of everything is an assumption that has come to be treated as self-evident not because of, but in spite of, the existing evidence.

How could something like this happen? How could this unproven but fundamental assumption have escaped the scrutiny of the organized skepticism so much revered and allegedly practiced by techno-transcendentalists and other scientists who believe in computationalism? Personally, I think that the uncritical acceptance of this dogma comes from the mistaken idea that science has to be mechanistic and reductionist to be rigorous. The world is simply presupposed to be a machine. Algorithms are the most versatile mechanisms humanity has ever invented. Because of this, it is easy to fall into the mistaken assumption that everything in the world works like our latest and fanciest technology. But that's a vast and complicated topic, which I will reserve for another blog post in the future.
With the assumption that everything is computation falls the assumption that algorithmic simulation corresponds to real cognition in living beings in any meaningful way. It is not at all evident that machines can "think" the way humans do. Why should thinking and computing be equivalent? Cognition is not just a matter of speedy optimization or calculation, as Yudkowsky asserts. There are fundamental differences in how machines and living beings are organized. There is absolutely no reason to believe that machines will outpace human cognitive skills any time soon. Granted, they may do better at specific tasks that involve the detection of high-dimensional correlations, and also at those that require memorizing many data points (humans can only hold about seven objects in short-term memory at any given time). Those tasks, and pen-and-paper calculations in particular, constitute the tiny subset of human cognitive skills that served as the template for the modern concept of "computation" in the first place. But brains can do many more things, and they certainly have not evolved to be computers. Not at all. Instead, they are organs adapted to help animals better solve the problem of relevance in their complex and inscrutable environment (something algorithms famously cannot do, and probably never will). More on that in a later blog post. I'm currently also writing a scientific paper on the topic. But that is not the main point here. The main point is: the metaphysics of techno-transcendentalism — its radical and universal computationalism as well as its belief in the inevitable supremacy of machines — is based on a simple mistake, a mistake called the fallacy of misplaced concreteness (or fallacy of reification). Computation is an abstracted way to represent reality, not reality itself. Techno-transcendentalists (and all other adherents of strong forms of computationalism) simply mistake the map for the territory. The world is not a machine and, in particular, living beings are not machines. Neither constitutes some kind of digital computation. Conversely, computers cannot think the way living beings can. In this sense, they are not intelligent at all, no matter how sophisticated they seem to us. Even a bacterium can solve the problem of relevance, but the "smartest" contemporary algorithm cannot. Philosophers call what is happening here a fundamental category error. This brings us back to Alison Gopnik: even though AI researchers like LeCun chide everyone else for being uneducated about their work, they themselves are completely clueless when it comes to concepts such as "thinking," "agency," "cognition," "consciousness," and indeed "intelligence." These concepts represent abilities that living beings possess but algorithms lack. Not just techno-transcendentalists but, sadly, also most biologists today are deeply ignorant of this simple distinction. As long as this is the case, our discussions about AI, and AGI in particular, will remain deeply misinformed and confused. What emerges at the origin of life, the capability for autonomous agency, springs from a completely new organization of matter. What emerges in a contemporary AI system, in contrast, is nothing but high-dimensional correlations that seem mysterious to us limited human beings because we are very bad at dealing with processes that involve many variables at the same time. The two kinds of emergence are fundamentally and qualitatively different. No conscious AI, and no AI with agency, will emerge any time soon.
In fact, no AGI will ever be possible in an algorithmic framework. The end of the world is not nearly as nigh as Yudkowsky wants to make us believe. Does that mean that the current developments surrounding AI are harmless? Not at all! I have argued that techno-transcendentalist ideology is not just a modern mythological narrative, but also a useful tool for bringing about libertarian neofeudalism. Not quite the end of the world, but a terrible enough prospect, if you ask me. The technological singularity is not coming. Virtual heaven is not going to open its gates to us any time soon. Instead, the neo-religious template of techno-transcendentalism is a tried and true method from premodern times to keep the serfs in line with threats of the apocalypse and promises of eternal bliss. Stick and carrot. Unlike AI research itself, this is not exactly rocket science. But, you may think, is this argument not overblown itself? Am I paranoid? Am I implying malicious intent where there is none? That is a good question. I think there are two types of protagonists in this story of techno-transcendentalism: the believers and the cynics. Both, in their own ways, think they are doing what is best for humanity. They are not true villains. Yet, both are affected by delusions that will critically undermine their project, with potentially catastrophic effects. With their ideological blinkers on, they cannot see these dangers. They may not be villains, but they are certainly boneheaded enough, foolish in the sense of lacking wisdom, that we do not want them as our leaders. The central delusion they all share is the following: both believers and cynics think that the world is a machine. Worse, they think it is their plaything — controllable, predictable, programmable. And they all want to be in charge of play: they want to steer the machine, they want to be the programmer, without too much outside interference. A bunch of 14-year-old boys fighting over who gets to play the next round of Mario Kart. Something like that. Hence neofeudalism, and more or less overt anti-democratic activism. The oncoming social disruption is part of the program. This much, at least, is done with intent. There can be no excuses afterwards. We know who is responsible. However, there are also fundamental differences between the two camps. In particular, the believers obviously see techno-transcendentalism as a mythological narrative for our age, a true utopian vision, while the cynics see it only as a tool that serves their ulterior motives. The two extremes lie at opposite ends of a spectrum. Take Eliezer Yudkowsky, for example. He is at the extreme "believer" end of the scale. Joscha Bach is a believer too, but a much more optimistic and moderate one. They both have wholeheartedly bought into the story of the inevitable singularity — faith, hope, and love — and they both truly believe they're among the chosen ones in this story of salvation, albeit in very different ways: Bach as the leader of the faithful, Yudkowsky as the prophet of the apocalypse. Elon Musk and Yann LeCun are at the other end of the spectrum, outdone only by Peter Thiel (another infamous Silicon Valley tycoon) in terms of cynicism. Only two things count in the cynics' corner: unfettered wealth and power. Not just political power, but the power to remake the world in their own image. They see themselves as engineers of reality. No mythos required.
These actors do not buy into the techno-transcendentalist cult, but its adherents serve a useful purpose as the foot soldiers (often cannon fodder) of the coming revolution. All this is wrapped up in longtermist philosophy: it's OK if you suffer and die, even if we all go extinct, as long as the far-future dream of galactic conquest and eternal bliss in simulation is on course, or at least intact. That is humanity's long-term destiny. It is an aim shared by believers and cynics alike. Their differing attitudes only concern the more or less pragmatic way to get there by overcoming our temporary predicaments with the help of various technological fixes. This is the true danger of our current moment in human history. I have previously put the risk of an AGI apocalypse at basically zero. But don't get me wrong. There is a clear and present danger. The probability of squandering humanity's future potential with AI is much, much higher than zero. (Don't ask me to put a number on it. I'm not a longtermist in the business of calculating existential risk.) Here we have a technology, massively wasteful in terms of energy and resources, that is being developed at scale and at breakneck speed by people with the wrong kind of ethical commitments and a maximally deluded view of themselves and their place in the universe. We have no idea where this will lead. But we know change will be fast, global, and hard to control. What can possibly go wrong? Another thing is quite predictable: there will be severe unintended consequences, most of them probably not good. For the longtermists, such short-term consequences do not even matter, as long as the associated risk is not deemed existential (by themselves, of course). Even human extinction could be a merely temporary inconvenience, as long as the transcendence, the singularity, the transition to "substrate-agnostic" intelligence is on its way. This is why we need to stop these people. They are dangerous and deluded, yet full of self-confidence — self-righteous and convinced that they know the way. Their enormous yet brittle egos tend to be easily bruised by criticism. In their boundless hubris, they massively overestimate their own capacities. In particular, they massively overestimate their capacity to control and predict the consequences of what they are doing. They are foolish, misled by a worldview and mythos that are fundamentally mistaken. What they hate most (even more than criticism) is being regulated, held back by the ignorant masses that do not share their vision. They know what's best for us. But they are wrong. We need to slow them down, as much as possible and as soon as possible. This is not a technological problem, and not a scientific one. Instead, it is political. We do not need to stop AI research. That would be pretty pointless, especially if it is only for a few months. Instead, we need to stop the uncontrolled deployment of this technology until we have a better idea of its (unintended) consequences, and know what regulations to put in place. This essay is not about such regulations, not about policy, but a few measures immediately suggest themselves. By internalizing the external costs of AI research, for example, we could effectively slow its rate of progress and interfere with the insane business model of the tech giants behind it. Next, we need to put laws in place. We need our own Butlerian Jihad (if you're a Dune fan like me): "Thou shalt not make a machine in the likeness of a human mind."
Or, as Daniel Dennett puts it: "Counterfeit money has been seen as vandalism against society ever since money has existed. Punishments included the death penalty and being drawn and quartered. Counterfeit people is at least as serious." I agree. We cannot have fake people, and building algorithmic mimics that impersonate existing or non-existing persons needs to be made illegal, as soon as we can manage it. Last but not least, we need to educate people about what it means to have agency, intelligence, and consciousness, about how to talk about these topics, and about how seemingly "intelligent" machines do not have even the slightest spark of any of that. This time, the truth is not somewhere in the middle. AI is stochastic parrots all the way down. We need a new vocabulary to talk about such algorithms. Algorithmic mimicry is a tool. We should treat and use it as such. We should not interact with algorithms as if they were sentient persons. At the same time, we must not treat people like machines. We have to stop optimizing ourselves, tuning our performance in a game nobody wants to play. You do not strive for alignment with your screwdriver. Neither should you align yourself with an algorithm or the world it creates for you. Always remember: you can switch virtuality off if it confuses you too much. Of course, this is no longer possible once we freely and willingly give away our agency to algorithms that have none, once we can no longer make sense of a world that is flooded with misinformation. Note that the choice is entirely up to us. It is in our own hands. The alignment problem is exactly upside-down: the future supremacy of machines is never going to happen if we don't let it happen. It is the techno-transcendentalists who want to align you to their purpose. Don't be their fool. Refuse to play along. Don't be a serf. This would be the AI revolution worth having. Are you with me? Images were generated by the author using DALL-E 2 with the prompt "the neo-theistic cult of silicon intelligence." Here is an excellent talk by Tristan Harris and Aza Raskin of the Center for Humane Technology, warning us about the dire consequences of algorithmic mimicry and its current business model: https://vimeo.com/809258916/92b420d98a. Ironically, even these truly smart skeptics fall into the habit of talking about algorithms as if they "think" or "learn" (chemistry, for example), highlighting just how careful we need to be not to attribute any "human spark" to what is basically a massive statistical inference machine.
This week, I was invited to give a three-minute flash talk at an event called "Human Development, Sustainability, and Agency," which was organized by IIASA (the International Institute for Applied Systems Analysis), the United Nations Development Programme (UNDP), and the Austrian Academy of Sciences (ÖAW). The event framed the release of a UNDP report called "Uncertain times, unsettled lives: shaping our future in a transforming world." It forms part of IIASA's "Transformations within Reach" (TwR) project, which looks for ways to transform societal decision-making systems and processes in order to facilitate the transition to sustainability. You can find more information on our research project on agency and evolution here. My flash talk was called "Beyond the Age of Machines." Because it was so short, I can share my full-length notes with you. Here we go: "Hello everyone, and thank you for the opportunity to share a few of my ideas with you, which I hope illuminate the topic of agency, sustainability, and human development, and provide some inspiring food for thought. I am an evolutionary systems biologist and philosopher of science who studies organismic agency and its role in evolution, with a particular focus on evolutionary innovation and open-ended evolutionary dynamics. I consider human agency and consciousness to be highly evolved expressions of a much broader basic ability of all living organisms to act on their own behalf. This kind of natural agency is rooted in the peculiar self-manufacturing organization of organisms, and in the consequences this organization has for how organisms interact with their environment (their agent-arena relationship). In particular, organisms distinguish themselves from non-living machines in that they can set and pursue their own intrinsic goals. This, in turn, enables living beings to realize what is relevant to them (and what is not) in the context of their specific experienced environment. Solving the problem of relevance is something a bacterium (or any other organism) can do, but even our most sophisticated algorithms never will. This is why there will never be any artificial general intelligence (AGI) based on algorithmic computing. If AGI is ever generated, it will come out of a biology lab (and it will not be aligned with human interests), because general intelligence requires the ability to realize relevance. And yet, we humans increasingly cede our agency and creativity to mindless algorithms that completely lack these properties. Artificial intelligence (AI) is a gross misnomer. It should be called algorithmic mimicry, the computational art of imitation. AI always gets its goals provided by an external agent (the programmer). It is instructed to absorb patterns from past human activities and to recombine them in sometimes novel and surprising ways. The problem is that an increasing amount of digital data will be AI-generated in the near future (and it will become increasingly difficult to tell computer- and human-generated content apart), meaning that AI algorithms will increasingly be trained on their own output. This creates a vicious inward spiral which will soon become a substantial impediment to the continued evolution of human agency and creativity. It will be crucial to take early action to counteract this pernicious trend through proper regulation, and through a change in the design of the interfaces that guide the interaction of human agents with non-agential algorithms.
In summary, we need to relearn to treat our machines for what they are: tools to boost our own agency, not masters to which we delegate our creativity and ability to act. For continued sustainable human development, we must go beyond the age of machines. Thank you very much." SOURCES and FURTHER READING: "organisms act on their own behalf": Stuart Kauffman, Investigations, OUP, 2000. "the self-manufacturing organization of the organism": see, for example, Robert Rosen, Life Itself, Columbia Univ Press, 1991; Alvaro Moreno & Matteo Mossio, Biological Autonomy, Springer, 2015; Jan-Hendrik Hofmeyr, A biochemically-realisable relational model of the self-manufacturing cell, Biosystems 207: 104463, 2021. "organismic agents and their environment": Denis Walsh, Organisms, Agency, and Evolution, CUP, 2015. "the agent-arena relationship": a concept first introduced in John Vervaeke's "Awakening from the Meaning Crisis," and also discussed in this interesting dialogue. "agency and evolution": https://osf.io/2g7fh. "agency and open-ended evolutionary dynamics": https://osf.io/yfmt3. "organisms can set their own intrinsic goals": Daniel Nicholson, Organisms ≠ Machines, Stud Hist Phil Sci C 44: 669–78, 2013. "to realize what is relevant": John Vervaeke, Timothy Lillicrap & Blake Richards, Relevance Realization and the Emerging Framework in Cognitive Science, J Log Comput 22: 79–99, 2012. "solving the problem of relevance": see Stanford Encyclopedia of Philosophy, The Frame Problem. "there will never be artificial general intelligence based on algorithmic computing": https://osf.io/yfmt3. "we humans cede our agency": see The Social Dilemma. II. A Naturalistic Philosophy of Biology This three-part series of blog posts is based on a talk I gave at the workshop "A New Naturalism: Towards a Progressive Theoretical Biology," which I co-organized with philosophers Dan Brooks and James DiFrisco at the Wissenschaftskolleg zu Berlin in October 2022. This part of the series draws heavily on a paper (available as a preprint here) that is currently under review at Royal Society Open Science. You can find part I here, and part III here. In the first part of this three-part series, I outlined why I think we urgently need more philosophy in biology today. More specifically, I argued that we need two kinds of philosophical approaches: on the one hand, a new naturalist philosophy of biology, which is concerned with examining the practices, methods, concepts, and theories of our discipline and how they are used to generate scientific knowledge (see this post). This branch of the philosophy of science is relevant for practicing biologists since it boosts their understanding of what is realistically achievable, increases the range of questions they can ask, clarifies what kinds of methods and approaches are most appropriate and promising in a given situation, and reveals how their work is best (and most wisely) contextualized within the big questions about life, human nature, and our place in the universe. On the other hand, we need a philosophical kind of theoretical biology, which operates within the life sciences and consists of philosophical methods that biologists themselves can use to better understand biological concepts, and to solve biological problems (see part III). WHAT IS NATURALIST PHILOSOPHY? Let's talk about the naturalist philosophy of biology first. And, no, it does not have anything to do with bird watching or nature documentaries. I don't mean that kind of "naturalist."
What distinguishes a naturalist philosophy of biology (from a foundationalist one, let's say) are the following criteria:
To summarize: naturalistic philosophy of biology attempts to accurately describe and understand how biology is actually done by real-world biologists at this present moment. Yet it must not remain purely descriptive. For it to be useful to biologists, we need an interventionist philosophy of biology that actively shapes the kinds of questions we can ask, the kinds of methods we can use, and the kinds of explanations we accept as valid answers to our questions. All of these will necessarily change as the field moves on. What we want, therefore, is an adaptive co-evolution of biology and its philosophy, a constant synergy, a dialectic spiral in which each discipline shapes and supports the other, lifting it to ever higher levels of understanding in the process. The problem is that we are very far from the optimistic vision I have just outlined. In fact, very few scientists these days get any philosophical education at all. This is a serious problem, which I attempt to address with my own philosophy courses for researchers. It leads to a situation where many scientists hold very outdated philosophical beliefs, and many unironically proclaim that they do not adhere to any philosophical position at all, or even that "philosophy is dead," as the late physicist Stephen Hawking once remarked. This leads to some serious misconceptions among scientists about how science works and what counts as a valid scientific explanation. These misconceptions are now hindering progress in biology. Furthermore, they underlie the uncritical acceptance of the pernicious cult of productivity that rules supreme in contemporary scientific research. When scientists are aware of their philosophical views, they often profess adherence to something we could call naïve realism. Naïve realism is a form of objectivist realism that consists of a loose and varied assortment of philosophical preconceptions that, although mostly outdated, continue to shape our view of science and its role in society. This view does not amount to a systematic or consistent philosophical doctrine. Instead, naïve realism is a mixed bag of more or less vaguely held convictions, which often contradict one another, and which leave many problems concerning the scientific method and the knowledge it produces unresolved. Without going into detail, naïve realism usually includes ideas from logical positivism, Popperian falsificationism, and Merton's sociological ethos of science (I've written about this in a lot more detail here, if you're interested). Despite its intuitive appeal, naïve realism is not a naturalistic philosophy of science at all. On the contrary, it is a highly idealized view of how science should work. It paints a deceptively simple picture of a universal scientific method that, when applied properly, leads our knowledge of the world to approximate the truth automatically and asymptotically (see figure below). On this view, the process of producing knowledge can be fully formalized as a combination of empirical experimentation and logical inference for hypothesis testing. It leads to ever more accurate, trustworthy, and objective scientific representations of the world. We may never reach a complete description of the world, but we are certainly getting closer and closer over time. This view has some counterintuitive consequences.
Not only does it imply that we should be able to replace scientists with algorithms some day (since science is seen as a completely formalizable activity), but it also suggests that we can generate more scientific knowledge simply by increasing the productivity of the knowledge-production system: increased pressure, better efficiency, faster convergence. Easy! Unfortunately, due to its (somewhat ironic) detachment from reality, naïve realism leads to all kinds of unintended consequences when applied to the actual process of doing science. One problem is that too much pressure limits creativity and prevents researchers from taking on original or high-risk projects. Another problem is that we give ourselves less and less time to think. We're always rushing into the next project, adopting the next method, generating the next big data set. This way, the research process gets stuck in local optima of the knowledge landscape. Every evolutionary theorist knows that too little variation leads to suboptimal outcomes. This is exactly what is happening in biology today. We are becoming trapped by our own ambitions, our rush to publish new results. How do we get out of this dilemma? What is needed is a less simplistic, less mechanistic approach to science, an approach that reflects the messy reality of limited human beings doing research in an astonishingly complex world that is far beyond our grasp, an approach that focuses on the quality of the knowledge-production process rather than the amount of output it produces. Luckily, such an approach is already available. The challenge is to make it known more widely, not just among philosophers of science but among researchers in the life sciences themselves. This naturalist philosophy of science consists of three main pillars (see figure below):
1. SCIENCE AS PERSPECTIVE The first problem that naïve realism faces is that there simply is no universal scientific method. Science is quite obviously a cultural construct in the sense that it consists of practices which involve the finite cognitive and technological abilities of human beings, firmly embedded in a specific social and historical context. For this reason, scientists use quite different approaches depending on the problem they are trying to solve, on the traditions of their scientific discipline, and on their own educational background and cognitive abilities. This kind of relativist view can be taken to extremes, however. Strong forms of social constructivism claim that science is nothing but social discourse, the knowledge it produces no better than that of any other way of knowing, like poetry or religion, which are also considered types of social discourse. This strong constructivist position is certainly not naturalistic, and it is just as oversimplified as naïve realism. Therefore, I believe that a naturalistic philosophy of biology must find a middle way between the opposing extremes of social constructivism and naïve realism. An approach that achieves this is perspectival realism. The best way to learn about this philosophy is to read Bill Wimsatt's "Re-engineering Philosophy for Limited Beings." It's not an easy read, but it will change the way you see the world, I can promise you that much. In addition, I recommend Ron Giere's "Scientific Perspectivism" (which will give you a quick overview of the essentials), and Michela Massimi's "Perspectival Realism" (published this year). Finally, Roy Bhaskar's critical realism is worth mentioning as a pioneering branch of the perspectivist family of philosophies described here. I will mainly rely on Wimsatt's excellent book in what follows. Perspectival realism holds that there is an accessible reality, a causal structure of the universe, whose existence is independent of the observer and their effort to understand it. Science provides a collection of methodologies and practices designed to let us gain trustworthy knowledge about the structure of reality, minimizing bias and the danger of self-deception. At the same time, perspectival realism also acknowledges that we cannot step out of our own heads: it is impossible to gain a purely objective "view from nowhere." Each individual researcher and each society has its own unique perspective on the world, and these perspectives matter for science. It needs to be said again at this point that perspectivism is not relativism. A scientific perspective is not just someone's opinion or point of view. This is the difference between what Richard Bernstein has called flabby versus engaged pluralism: each new perspective must be rigorously justified. Wimsatt defines a perspective as an "intriguingly quasi-subjective (or at least observer, technique or technology-relative) cut on the phenomena characteristic of a system." Perspectives may be limited and context-dependent, but they are also grounded in reality. They are not a bug, but a central feature of the scientific approach. Our perspectives are what connects us to the world. It is only through them, by systematically examining their differences and connections, that we can gain any kind of intersubjective access to reality at all. This is how we produce (scientific) knowledge that is sound, robust, and trustworthy. In fact, it is more robust than what we get from any other way of knowing.
This is and remains exactly the purpose and societal function of science. This leads us to a number of powerful principles that arise from a perspectivist-realist approach to science:
Perspectival realism is relevant for a naturalist philosophy of science because it takes the practice of doing science for what it is instead of aiming for some unattainable ideal. At the same time, it acknowledges and justifies the special status of scientific knowledge compared to other ways of knowing. In addition, it refocuses our attention from the product or outcome of the scientific process to the quality of that process itself. How we establish our facts matters. This is why we will be talking about the importance of process thinking for naturalistic philosophy of science next. 2. SCIENCE AS PROCESS The second major criticism that naïve realism must face is that it is excessively focused on research outcomes — science producing immutable facts — thereby neglecting the intricacies and the importance of the process of inquiry. Basically, looking at scientific knowledge only as the product of science is like looking at art in a museum. The product of science is only as good as the process that generates it. Moreover, many perfectly planned and executed research projects fail to meet their targets, but that is often a good thing: scientific progress relies as much on failure as it does on success (see above). Some of the biggest scientific breakthroughs and conceptual revolutions have come from projects that have failed in interesting ways. Think about the unsuccessful attempt to formalize mathematics, which led to Gödel’s Incompleteness Theorem, or the scientific failures to confirm the existence of phlogiston, caloric, and the luminiferous ether, which opened the way for the development of modern chemistry, thermodynamics, and electromagnetism. Adhering too tightly to a predetermined worldview or research plan can prevent us from following up on the kind of surprising new opportunities that are at the core of scientific innovation. For this reason, we should focus more on whether we are doing science the right way, not whether we produce the kinds of results we expected to find. More often than not, the goal in basic science is the journey. First of all, scientific knowledge itself is not fixed. It is not a simple collection of unalterable facts. The edifice of our scientific knowledge is constantly being extended. At the same time, it is in constant need of maintenance and renovation. This process never ends. For all practical purposes, the universe is cognitively inexhaustible. There is always more for us to learn. As finite beings, our knowledge of the world will forever remain incomplete. Besides, what we can know (and also what we want or need to know) changes significantly over time. Our goalposts are constantly shifting. The growth of knowledge may be unstoppable, but it is also at times erratic, improvised, and messy — anything but the straight convergence path of naïve realism depicted in the figure above. Once we realize this, the process of knowledge production becomes an incredibly rich and intricate object of study in itself. The aim and focus of our naturalist philosophy of science must be adjusted accordingly. Naïve realism considers knowledge in an abstract manner (e.g. as "justified true belief") and tries to find universal principles which allow us to establish it beyond any reasonable doubt. 
Naturalist philosophy of science, in contrast, goes for a more humble (but also much more achievable) target: to understand and assess the quality of actual human research activities, including technological and methodological aspects, but also individual cognitive performance and the social structure of scientific communities. It asks which strategies we — as finite beings, in practice, given our particular circumstances — can and should be using to improve our knowledge of the world. As Philip Kitcher has pointed out, the overall goal of naturalist philosophy is to collect a compendium of locally optimal processes and practices that can be applied to the kinds of problems humans are likely to encounter. This is a much more modest and realistic aim than any quixotic quest for absolute or certain knowledge, but it is still extremely ambitious. Like the expansion of scientific knowledge itself, it is a never-ending process of iterative and recursive improvement. As limited beings, we are condemned to always build on the imperfect basis of what we have already constructed. Just like perspectival realism, a process philosophy of science fosters context-specific strategies that allow us to attain a set of given goals. What is important for our discussion is that different research strategies and practices will be optimal under different circumstances. There is no universally optimal strategy for solving problems — there is no free lunch. Which approach to choose will depend on the current state of knowledge and level of technological development, the available human, material, and financial resources, and the scientific (and non-scientific) goals a project attempts to achieve. The right choice of strategy is in itself an empirical question. A naturalist philosophy of science must be based on history and on empirical insights into error-prone heuristics that have worked for similar goals and under similar circumstances before. We cannot justify scientific knowledge in an abstract and general way, but we can get better over time at appraising its robustness and value by studying the process of inquiry itself, in all its glorious complexity, with all its historical contingencies and cultural idiosyncrasies. An interesting example of an insight gained from such an inquiry is what Thomas Kuhn called the essential tension between a productive research tradition and risky innovation. In computer science, this has been recast as the strategic relationship between exploration (gathering new information) and exploitation (putting existing information to work). For any realistic research setting, this relationship cannot be determined explicitly as a fixed ratio or a set of general rules. Instead, we need to switch strategy dynamically, based on local criteria and incomplete knowledge. The situation is far from hopeless though, since some of these criteria can be empirically determined. For instance, it pays for an individual researcher, or an entire research community, to explore at the onset of an inquiry. This happens at the beginning of an individual research career, or when a new research field opens up. Over time, as a researcher or field matures and information accumulates, exploration yields diminishing returns. At some point, it is time to switch over to exploitation. This is an entirely rational meta-strategy, inexorably leading people (and research fields) to become more conservative over time, a tendency that has been robustly confirmed by ample empirical evidence.
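To make the exploration-exploitation trade-off a little more tangible, here is a minimal sketch in Python. It is my own toy illustration (a standard "epsilon-greedy" multi-armed bandit, not anything from Kuhn or the sources cited above): an agent repeatedly chooses between research "strategies" with unknown payoffs, and we compare pure exploitation, pure exploration, and the rational meta-strategy of exploring early and exploiting late.

```python
import random

def bandit_run(epsilon_schedule, payoffs, steps=10_000, seed=1):
    """Play a multi-armed bandit; epsilon_schedule(t) is the exploration rate."""
    rng = random.Random(seed)
    estimates = [0.0] * len(payoffs)  # running mean reward of each arm
    counts = [0] * len(payoffs)
    total = 0.0
    for t in range(steps):
        if rng.random() < epsilon_schedule(t):
            arm = rng.randrange(len(payoffs))      # explore: try a random arm
        else:
            arm = estimates.index(max(estimates))  # exploit: best-known arm
        reward = rng.gauss(payoffs[arm], 1.0)      # noisy payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

arms = [0.2, 0.5, 1.0]  # true mean payoff of each research "strategy"
print("pure exploitation:   ", bandit_run(lambda t: 0.0, arms))
print("pure exploration:    ", bandit_run(lambda t: 1.0, arms))
print("explore then exploit:", bandit_run(lambda t: 1.0 / (1 + t / 100), arms))
```

Run it and you will typically find that the agent with the decaying exploration rate approaches the payoff of the best arm, pure exploration averages indiscriminately over all arms, and pure exploitation often locks onto whichever arm happened to pay off first: a local optimum of exactly the kind discussed above.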
Here, we have an example where the optimal research strategy depends on the process of inquiry itself. A healthy research environment provides scientists with enough flexibility to switch strategy dynamically, depending on circumstances. Unfortunately, our contemporary research system does not work this way. The fixation on short-term performance, assessed purely by measuring research output, has locked the process of inquiry firmly into exploitation mode. Exploration almost never pays off in such a system. It requires too much time and effort, and a willingness to fail. It may be bad for productivity in the short term, but it is essential for innovation in the long run. This is the dilemma I have already outlined above. We are getting stuck on local (suboptimal) peaks of knowledge. Only an empirically grounded understanding of the process of inquiry itself can lead us out of this trap. But this alone is not enough. We also need a better understanding of the social dimension of doing science, which is what we will be discussing next. 3. SCIENCE AS DELIBERATION The third major criticism that naïve realism must face is that it is obsessed with consensus and uniformity. Many people believe that the authority of science stems from unanimity, and that it is undermined if scientists disagree with each other. Ongoing controversies about climate science or evolutionary biology are good examples of this sentiment. To a naïve realist, the ultimate aim of science is to provide a single unified account — an elusive theory of everything — that most accurately represents all of reality. This kind of thinking about science thrives on competition: let the best argument (or theory) prevail. Truth is established by debate, which is won by persuading the majority of experts and stakeholders in a field that some perspective is better than all its competitors. There can only be one factual explanation. Everything else is mere opinion. However, there are good reasons to doubt this view. In fact, uniformity can be bad. This is because all scientific theories are underdetermined by empirical evidence. In other words, there is always an indefinite number of scientific theories able to explain a given set of observed phenomena. For most scientific problems, it is impossible in practice to unambiguously settle on a single best solution based on evidence alone. Even worse: in most situations, we have no way of knowing how many possible theories there actually are. Many alternatives remain unconsidered. Because of all this, the coexistence of competing theories need not be a bad thing. In fact, settling a justified scientific controversy too early may encourage agreement where there is none. It certainly privileges the status quo, which is generally the majority opinion, and it suppresses (and therefore violates) the voices of those who hold a justified minority view that is not easy to dismiss. In summary, too much pressure for unanimity leads to a dictatorship of the majority, and undermines the collective process of discovery within a scientific community. Let us take a closer look at what this process is. Specifically, let us ask which form of information exchange between scientists is most conducive to cultivating and utilizing the collective intelligence of the community. In the face of uncertainty and underdetermination, it is deliberation, not debate, which achieves this goal. Deliberation is a form of discussion based on dialogue rather than debate.
The main aim of a deliberator is not to win an argument by persuasion, but to gain a comprehensive understanding of all valid perspectives present in the room, and to make the most informed choice possible based on an understanding of those perspectives. What matters most is not an optimal, unanimous outcome of the process, but the quality of the process of deliberation itself, which is greatly enhanced by the presence of non-dismissible minorities. The quality of a scientific theory increases with every challenge it receives. Such challenges can come in the form of empirical tests, or of thoughtful and constructive criticism of a theory's contents. The deliberative process, with the minority positions that provide these challenges, is stifled by too much pressure for a uniform outcome. As long as matters are not settled by evidence and reason, it is better — as a community — to suspend judgement and let alternative explanations coexist. This allows us to explore. But, like other exploratory processes, deliberation needs time and effort. Deliberative processes cannot (and should not) be rushed. SCIENTIFIC (PSEUDO)CONTROVERSY: AN EXAMPLE OF NATURALIST PHILOSOPHY IN ACTION Scientific controversies provide a powerful example illustrating all three pillars of naturalistic philosophy of science in action. Let us take a quick look at an ongoing debate in evolutionary theory. It has its historical roots in the reduction of Darwinian evolution to evolutionary genetics, which took place from the 1920s onward. This shift of focus away from the organism's struggle for survival towards an evolutionary theory based on the change of gene frequencies in populations is called the Modern Evolutionary Synthesis. In recent decades, a movement has emerged that challenges this purely reductionist approach to evolution. Since this movement was not officially out to overthrow the classical synthesis, but rather to add developmental and ecological aspects to its perspective, it called itself the Extended Evolutionary Synthesis. Since its emergence, there have been several high-profile publications (see here, or here, for example) debating whether such an extension is really necessary or useful, or neither. Based on what I have said before about perspectivism and deliberation, you may think that such a diversity of justified positions would be fruitful and conducive to scientific progress in evolutionary biology. Unfortunately, this could not be further from the truth. The controversy over the Extended Evolutionary Synthesis is particularly interesting, since the polarization between two dominant positions (which is based on a pseudo-debate, as we shall see) leads to the exclusion of rigorously argued alternative views. The duopoly acts like a monopoly, destroying proper deliberative practice in the process. As Wimsatt points out, the failure to recognize or acknowledge the perspectivist nature of scientific knowledge leads to many misunderstandings in science. Simply put, there are two types of controversy we may distinguish: the first is a genuine dispute, usually about factual, conceptual, or methodological matters; the second is a territorial conflict, whose causes are, to an important degree, of a political or sociological nature.
The true nature of the latter is often hidden behind a smokescreen of pseudo-debates about matters that could easily be resolved if the participants would only see that they are approaching the same problem (or at least related problems) from different perspectives. Instead of struggling over power and money, the disputants in such controversies could move on by simply learning how to talk to each other across the fence, that is, across their perspectival divide. Clearly, the "controversy" over the Extended Evolutionary Synthesis is a pseudo-debate of this latter kind. One side is interested in the sources of evolutionary variation and their ecological implications, the other in the consequences of natural selection. They are two sides of the same coin. But the two communities are in direct competition when it comes to funding and influence, which prevents a true dialogue from happening as long as both sides profit from polarization. This is not all we can learn about the importance of perspectivism from this debate, however. Another aspect of the debate concerns the very idea of a synthetic theory of evolution itself. Why extend a synthesis that nobody ever needed in the first place? Evolution is the quintessential process that generates diversity. The sources of variation in evolution are as unpredictable as they are situation-dependent. The idea of developing a synthetic theory for the sources of variation in evolution is patently absurd. The generation of variation among organisms is a highly complex process. What we need to tackle it are as many valid and well-justified perspectives as we can get. They should be as consistent with each other as possible, but there is no reason to assume they will ever add up to a general, overarching synthesis. Each evolutionary problem will have its own solution. Some of these solutions will be more or less related to each other, but nothing more. In fact, if a general account of the sources of variation were possible, then evolution would not be truly open-ended or innovative (see Part III). Why is this fundamental issue never even debated or (even better) deliberated? It is because the few people voicing it are rarely heard above the din of the pseudo-debate between unrecognized perspectives. They are drowned out. We do not see the elephant in the room, because we are too busy demolishing the china shop all by ourselves. My assessment of deliberative practice in evolutionary biology is therefore bleak: the process is completely broken, and only very few people even realize it. With better literacy in the naturalistic philosophy of science, all of this might have been prevented. The whole pseudo-debate is the consequence of a largely outdated view of science. It is a philosophical problem at heart. SUMMARY: TOWARDS AN ECOLOGICAL VISION FOR SCIENCE Above, I have outlined the three main pillars of a naturalist philosophy of science that is tailored to the needs of practicing researchers in the life sciences and beyond. Its highest aim is to foster and put to good use the collective intelligence of our research communities through proper deliberative process. In order to achieve this, we need research communities that are diverse, and whose members are philosophically educated about how to harvest this diversity, about when engaged pluralism is a good thing, and about when it becomes flabby. Such viable, diverse communities of scientists generate what I would call an "ecological" vision for science, which stands in stark contrast to our current industrial model of doing research.
I compare the two approaches in the table below. Note that both models are rough and highly idealized sketches. They represent very different visions of how research ought to be done: two alternative versions of the ethos of science.
I have argued that the naïve realist view of science is not, in fact, realistic at all. In its stead, I have presented a naturalist philosophy of science that adequately takes into account the biases and capabilities of limited human beings, solving problems in a world that will forever exceed our grasp. The ecological research model proposed here is less focused on direct exploitation, and yet it has the potential to be more productive in the long term than the current industrial system. However, its practical implementation will not be easy, due to the short-term productivity dilemma we have maneuvered ourselves into. Escaping this dilemma requires a deep understanding of the philosophical foundations, as well as of the social and cognitive processes, that enable and facilitate scientific progress. Identifying and assessing such processes is an empirical problem, which is only beginning to be tackled and understood today. Such empirical investigations must be grounded in a suitable naturalist philosophical framework, and in a correspondingly revised ethos of science. This framework must acknowledge the contextual and processual nature of knowledge production. It needs to focus directly on the quality of this process, rather than being fixated exclusively on the outcomes of scientific projects. In this way, naturalist philosophy of science will not only benefit the individual scientist by making her a better researcher, but it will also strive to improve the quality of community-level processes of scientific investigation. It is not merely descriptive: the naturalist philosophy I envisage changes the way we do science, and is changed in turn by the science it engages with. Apologies: this post has ended up being a little longer than I anticipated. If you're not tired of my ramblings yet, go on to part III, which discusses how we can use naturalist philosophy within biology: a philosophical kind of theoretical biology for practicing biologists to tackle biological problems.
Why would any biologist care about philosophy? This three-part series of blog posts is based on a talk I gave at the workshop "A New Naturalism: Towards a Progressive Theoretical Biology," which I co-organized with philosophers Dan Brooks and James DiFrisco at the Wissenschaftskolleg zu Berlin in October 2022. You can find part II here, and part III here. You may not know this (few people probably realize it) but it's true: biology urgently needs more philosophy. After decades of rapid progress, driven mainly by new methods and technologies, biology has arrived at a historical turning point. Reductionist approaches to genetic and molecular analysis are being supplemented by large-scale data-driven approaches and multi-level systems modeling. We are beginning to integrate the dynamics of gene regulatory networks with their physical context at the cellular and tissue level. We are even regaining a view of the whole organism as an object of study. We can now turn our focus back to some of the deepest questions in biology: what makes a living system alive? How come it can exert agency? How does it interact with its environment? What factors shape its evolution? These questions pose a number of challenges that require not technological but conceptual progress: we need new ways of thinking to address them. And we need to re-contextualize what we already know within the increasingly complex societal and environmental circumstances we currently find ourselves in. Philosophy can be a powerful thinking tool for biologists (or any other scientist, for that matter). It helps us better understand what we are doing when we do research: how we produce trustworthy knowledge and insights that are adequate for our times, why we ask the questions we ask, what methods we are using, and what kinds of explanation we accept as scientifically valid. It enables us to reflect on what motivates and drives our investigations. It highlights the ethical implications of our work. Moreover, we can apply philosophical approaches to address deep biological problems: philosophy can help us clarify concepts, delineate their appropriate domains of application, reveal hidden meanings or potential misunderstandings, and provide new perspectives or angles on old questions. In brief: philosophy can help you become a better and more effective researcher. For that, we need a specific kind of philosophy: philosophy that is tightly connected with the practice of doing research, and is informed by the latest science. Unfortunately, not all philosophy is like that. What we need is not the outdated philosophy of science that most scientists have already heard of: no positivism, Popper, or Kuhn. Nor do we need armchair philosophers, far-fetched thought experiments (on zombies, let's say), high-level idealized simplifications, or over-generalized abstractions. Instead, we need something fresh and novel: a rigorous naturalistic philosophy of biology for the 21st century, the kind of philosophy that is in tune with the latest findings in the life sciences themselves, and is adapted to the best and most realistic accounts of the production of scientific knowledge available today. Our philosophy need not be perfect, but it needs to be practical and applicable, and it needs to keep up and co-evolve with the science it is concerned with. But even more importantly, we need more philosophical thinking within biology, a new philosophical biology.
That's not philosophers thinking about biology, its concepts, methods, practices, and theories, but philosophically sound thinking applied by biologists to biological problems. It is a kind of theoretical biology, a practice within biology, but not one that necessarily involves mathematical modeling. Put simply, it's better thinking for biologists. It is the kind of biology the organicists or C.H. Waddington used to practice. Waddington's epigenetic landscape and the work he derived from it (beautifully described in "The Strategy of the Genes") are a great example of the kind of philosophical biology I am talking about. Waddington's work radically reconceptualized the study of embryogenesis and its evolution, framing adequate explanations in terms of novel concepts such as chreodes (developmental trajectories depicted by valleys in the landscape), canalization (the structural stability of chreodes, represented by the steepness of the valley slopes), and homeorhesis (homeostasis applied to chreodes, or the fact that a ball rolling down the landscape tends to stay at the bottom of a valley). The influence of genes on embryogenesis is depicted by pegs that pull on the landscape through a complex network of ropes, altering the topography in unpredictable ways when mutations alter the arrangement of the pegs. This collage of Waddington's illustrations of the landscape is taken from Anderson et al. (2020). Unfortunately, the century-old philosophical tradition of theoretical biology has been all but lost since the early 1970s. Some notable exceptions, torchbearers through the dark ages of molecular reductionism, are the process structuralism of my late master's supervisor and mentor Brian Goodwin (as well as others), which treated developmental processes and their evolution in terms of morphogenetic fields rather than genes as their fundamental units; Stuart Kauffman's work on self-organization in complex networks, the organization of the organism, and its consequences for open-ended evolution (especially in his "Investigations"); and, more recently, Terrence Deacon's teleodynamic approach to the organization of living matter (see his "Incomplete Nature"). These researchers have revolutionized how we think about biological processes, organisms, and evolution, but they tend to work in isolation, at the fringes of biology, unnoticed by the mainstream. There would be many others worth mentioning here, but the main point is that examples of conceptually focused biological work are hard to find these days, not to mention what a struggle it is to make a living as this kind of theoretician in biology today. THE LIMITS OF TECHNOLOGICAL AND METHODOLOGICAL PROGRESS Looking at the past sixty years or so of biology (starting with the rise of molecular biology), it is easy to convince yourself that progress in biology derives predominantly from technological and methodological advances, not from better concepts or new ideas. Indeed, we have much to show in that regard. The rate at which we develop new techniques, produce ever more comprehensive data sets, and publish highly sophisticated technical papers appears to be increasing day by day. So what's not to like? We seem to be succeeding according to all our self-imposed metrics. Well, there is a growing and legitimate concern that this frantic acceleration of technological and methodological progress, this flood of big data, is not always a good thing.
First and foremost, the current cult of productivity that comes with this kind of acceleration has a negative impact on researchers (especially young ones), who struggle to keep up with the ever-increasing demands of a successful career. More generally, it has led to a general neglect of (even some disdain for) purely theoretical work in biology. I think this may be the main reason philosophical biology has faded from view in the past few decades. Questions that are not immediately accessible to empirical investigation are dismissed as idle speculation. One such example is the nature of the organism and its role in evolution. Technological progress is so fast that we can always keep ourselves busy with the latest new methodology instead, be it single-cell sequencing, 3D cell culture, or CRISPR gene editing, and the next big technological breakthrough is always just around the corner. Contextualizing our empirical findings within a broader view of life seems unnecessary, as all those intractable theoretical problems will surely become tractable through technological progress in the very near future. This kind of techno-optimism leads to a dangerous loss of context, a sort of technological tunnel vision. A worrisome gap is opening between our power to manipulate living systems and our understanding of the complex consequences of these manipulations. Take the possibility of gene drives in uncontrollable natural environments as an example. Higher-order ecological effects will be inevitable in this case, and they almost certainly won't be benign. In fact, this is an example of us running, blindly and at increasing speed, through the dark woods. And in the middle of all this, we have hit a solid wall in terms of understanding some of the most fundamental concepts and problems in our field, among them the concept of "life" itself. This severely distorts our understanding of our own place in the world. We are drowning in data, yet thirsting for wisdom, as E.O. Wilson once so aptly put it. To address these pressing issues, we urgently need to relearn to ask the big questions. François Jacob wrote in his "Logic of Life" in 1975 that "biologists no longer study life." This sounds a little crazy, but it is essentially true. Yet we have not really solved the problem of life (as Jacob implies). The truth is that we have not even confronted it. We simply skirted around it, explained it away by reducing organisms to metaphorical programs and machines. We no longer need to worry about such difficult questions. Life as a biologist is so much easier that way. That may be a smart move in a way, but it is not based on any solid grounding. Granted, it enabled us to better focus on those aspects of living systems that were tractable given whatever technological and conceptual capabilities we had at the time. Still, this kind of reductionism is simply bad philosophy, tainted by poor and outdated metaphysical commitments. It loses the forest for the trees. In fact, it is not too far-fetched to claim that we understand what life is (and what it is worth) less than ever before in human history. In other words, we have never been more wrong about life than with our current mechanistic-reductionist approach and its machine view of the organism. The core problem is that we have been carried so far away by the breakneck pace of technological progress that we have started to view the whole world (including all living beings) through the lens of our most advanced technology.
We treat life as if it were a computational device that we can fully understand, predict, manipulate, and control. We have forgotten that this machine view of the world is only a metaphor, and not a good one at that. This is dangerous hubris, pure and simple. If we don't take a little time-out to stop and think, we will wipe ourselves from the face of the planet through the mechanistic application of increasingly powerful interventions to complex systems we are not even beginning to understand. We are in danger of losing track of our own limitations. We are on a straight path to oblivion if we let our technological might outpace our wisdom this way. Reconceptualizing life is an important first step, not only towards a deeper understanding of living systems, but also towards a healthier, more sustainable, and less exploitative attitude of humanity towards nature.

A NEW ATTITUDE TOWARDS LIFE?

Thus, the first conclusion I will draw here is that it is essential that we change the way we study and understand living systems. This is a philosophical problem. It requires new ways of thinking, rather than new technology. Unfortunately, the kind of reflection required for such conceptual change is all too often considered a waste of time in our frenzied academic research environments. We are too busy publishing and perishing. Yet we urgently need to reconsider what we are doing; we must take the time to reexamine our concepts and practices if we are to continue making progress towards understanding life, ourselves, and our place in the universe. What are we collecting data for? Do any of us even remember? Of course we do. And there are examples where conceptual progress is being made all across the life sciences. Yet it remains too disconnected and isolated to be truly effective. What is needed is a broader movement towards conceptual change, a much broader confrontation of these issues that is grounded in the most solid and powerful philosophical ideas we have available today. Such a movement needs a new philosophy of biology that is actually taught to and known by practicing researchers, shaping the questions they ask, the methods they use, and the kinds of explanations they consider appropriate. We need to reintroduce philosophy as an essential part of a well-rounded scientific curriculum at our institutions of higher education. In addition, we also need a philosophical kind of theoretical biology that biologists actively engage with in their practice because it is useful to them. The alternative is for us to be buried under rapidly growing heaps of impressive but increasingly incomprehensible data, to wastefully burn through our vast (yet limited) funds and resources, and to end up as ignorant about life as we have ever been. Organisms are not machines. But what are they instead? We currently have no idea. This, broadly put, is why I believe philosophy is so important to contemporary biology. These are exciting times to be a biologist. We are on the cusp of great discoveries that will revolutionize our discipline, but the revolution won't be achieved without better concepts, better questions, and better theories. For the first time in decades, biology needs conceptual change to drive progress. The time is ripe to teach biologists philosophy again: no condescending preaching from the philosophical pulpit, but a kind of philosophy they will like, find plausible, and can put to work in their own research practice. Where do we start?
In the next post, I will examine what I think is the proper naturalist philosophy of biology for the task. Then, in the final part of this trilogy, I will give you a number of examples that illustrate the philosophical kind of theoretical biology we may want to resurrect in order to tackle some fundamental biological challenges of our current age. Stay tuned!
I bear good tidings! After seven long years in the funding desert, I have secured a major research grant again. And even better: it makes no compromises. This project supports everything I want to do most at this point in my life. I'm still a bit in shock that I actually have this unique opportunity to pursue a whole range of rather radical philosophical, scientific, and artistic projects with financial security for the next three years. Lucky me. In fact, I had more or less decided never to apply for a research grant again, to try and make a life as a freelance teacher and philosopher, when this opportunity came along. And since I considered it my last shot, I wrote it without any consideration of whether it would actually get funded. But then, lo and behold, it did! I'm tremendously happy and excited to be focusing full time on research (and a bit of art) again. The title of the project is "Pushing the Boundaries: Agency, Evolution, and the Dynamic Emergence of Expanding Possibilities." It will be hosted by my wonderful project co-leader and collaborator Prof. Tarja Knuuttila at the Department of Philosophy of the University of Vienna, and will involve numerous collaborations with many of my favorite philosophers, scientists, and artists. But more on that later.

FIRST: A WORD ABOUT THE FUNDER

The project is funded by the John Templeton Foundation. So let's get a few things out of the way right at the beginning: I am very well aware that many of you have reservations about the Foundation as a funder, especially in the field of evolutionary biology, due to the controversial views of its founder, John Templeton. I want to make a few points very clear: I am a staunch and outspoken atheist (I've tried agnosticism for a while, but couldn't emotionally commit to it); I have no sympathies for the view that science and religion are non-overlapping magisteria (most traditional religious dogma is simply outdated and incompatible with a modern scientific worldview, and I want my metaphysics to always be in tune with the best scientific evidence we have available; more on that here); and I do not believe for a second that our knowledge of evolution leaves room for any kind of supernatural or directed influence (there is a lot we don't understand about the world, but God is never a valid explanation for any of it). Having said that, here are three reasons why I can take money from Templeton and still sleep tight at night. First, the Foundation had absolutely no influence on the content of the project. Nor did they ever try to exert such influence at any point of the application and selection process. Quite the contrary: in stark contrast to the public funding bodies I have had to deal with, they were extremely constructive, letting me reply to reviewers' comments, moderating unconstructive differences of opinion, and generally helping me improve the format of the project to fit the requirements and preferences of the Foundation. My experience with Templeton at this level has been 100% positive so far. Second, I've been trying for years to get my research on organismic agency funded by public funders, without the slightest chance of success.
The project is heavily transdisciplinary: it tries to redefine how scientists see evolving spaces of the possible, expands our view of what counts as a valid scientific explanation beyond strict mechanicism (while remaining rigorous and naturalistic), challenges our mechanistic view of organismic behavior and biological evolution, and attempts the impossible task of simulating a whole cell or organism (see below). No panel at any funding body other than Templeton was open-minded enough, or could muster the transdisciplinary expertise, to judge such a project properly and fairly. Also: gatekeepers will be gatekeeping, especially in committees set up by public funders. Templeton offers me a unique opportunity to escape these problematic institutional constraints. Whatever their motives, this project is almost sure to drastically reduce the space for supernatural mysticism in biology rather than justify it in any way. Last but not least, those who receive funding from government agencies or their universities would do well to question the motives of those institutions as well. Are the interests and incentives defined by these funders still aligned with the process of doing basic research — with the intrepid exploration of the unknown? I do not think so. The current public funding system is severely broken, with an excessive focus on politically useful short-term outcomes and practical applications (not to mention the committee cronyism that only funds more of the same). It goes exclusively for the low-hanging fruit. It has become so focused on consensus decision-making and reachable objectives that it makes true conceptual innovation all but impossible. I don't want to be part of that system if it cannot give me the freedom to explore. To be honest, I'm much better off with Templeton in that regard. Maybe this is a problem the decision-makers in public funding bodies should consider more seriously? Creativity and innovation are dying in a scientific research system designed for the age of selfies.

OK, I GET IT. BUT WHAT IS THE PROJECT ABOUT?

In the broadest sense, the project deals with the fact that our modern science, just like our modern worldview, is largely mechanistic. In other words, we see nature as a kind of machine (a "clockwork," or nowadays maybe more like some sort of computer simulation or computation) that we can emulate, control, predict, and exploit. This view of the world as a machine has led to much empirical success in the natural sciences over the past few centuries, but it also restricts the kind of questions we can scientifically address, and it underlies many of humanity's most existential and pressing challenges today: the interlocking ecological, socioeconomic, political, and meaning crises that form our current meta-crisis. In my opinion, these crises are ultimately all rooted in a fundamental philosophical misunderstanding concerning the nature of the world and our role and place within it. In this project, we pursue the idea of a different kind of science for the 21st century, focused on organismic agency and its role in evolution. It does not view the world as a mechanism, but as a creative process of open-ended evolution. The main focus of our investigation lies on the simple question "how do organisms manage to act on their own behalf?" This, in fact, is what most clearly delineates the living world from the non-living.
Unfortunately, the concept of "agency" — the ability of a living organism to act — is still heavily understudied in contemporary biology, probably because it is strongly associated with teleology, a kind of purposeful explanation that is shunned in mechanistic science. Our project aims to address this fundamental issue by providing a philosophical and scientific analysis of organismic agency, one which shows that it is fully compatible with the epistemological principles of naturalistic explanation in science. Moreover, we are interested in learning how (or how far) we can capture the organism's ability to act in mathematical models of living cells and organisms, since we do not yet have such models in biology. Indeed, this kind of model may require entirely new kinds of computational and mathematical methods that we have barely begun to explore (see here for an exceptional example of such an exploration). In our project, we intend to push the boundaries of what we can model and predict in systems that are (or that contain) organismic agents. Such systems not only include populations of evolving organisms, but also higher-level ecosystems (up to the level of the biosphere), and social systems, including the economy. We take a three-pronged approach to study such agential systems.

Part 1: The evolutionary emergence of expanding possibility spaces

First, we use philosophical analysis to clarify what some of the concepts we use actually mean. Many of these concepts remain vague and ill-defined, their interrelations unclear. In particular, a theory of agency requires us not only to know what it means for an organism to "act on its own behalf," but also to understand problems such as the emergence of new behavior and new levels of organization in evolution, to explain what makes such novel levels and behavior particularly complex and unpredictable (as opposed to, say, the behavior of a rock), and to explain how such new levels of organization can evolve with a degree of independence from the underlying molecular and genetic mechanisms. The problem with classical dynamical systems approaches to systems biology is that they are historically imported from physics, especially from an approach Lee Smolin has called "physics in a box," where we define a system over a specific domain of reality that contains a given number of interacting objects (encoded by the state variables of the system). The behavior of these objects is then described by a set of rules which are defined outside the objects themselves (the laws of gravity for a classical model of the solar system, for example). Given a specific set of starting and boundary conditions, we can then simulate the temporal evolution of the system within its given frame, by tracing a trajectory through a predefined space of possibilities (the configuration space of the model; see figure below, left). Unfortunately, this approach is fundamentally ill-suited for modeling evolving systems of interacting agents. The main reason is that agents are not objects: the rules of their behavior are defined from within their peculiar organization. As I have said above: agents act on their own behalf. They write their own rules, and those rules constantly evolve in response to the behavior of the agents themselves (among other things, such as environmental triggers, of course). Thus, what we get here is a constantly changing space of possibilities that radically co-emerges with the dynamics of the agential system itself.
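To make this contrast concrete, here is a minimal sketch of what "physics in a box" looks like in code. This is my own illustration, not part of the project: the Lotka-Volterra predator-prey equations simply stand in for any classical dynamical model, and the sketch assumes nothing beyond Python with NumPy and SciPy.

```python
# A minimal "physics in a box": the state variables (prey x, predator y) and
# the rules (the differential equations) are fixed from the outside, before
# the simulation starts. Nothing that happens inside the box can change them.
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, state, a=1.0, b=0.4, c=0.4, d=0.1):
    # Externally imposed rules of behavior: the "organisms" do not write
    # these equations, the modeler does.
    x, y = state
    return [a * x - b * x * y, d * x * y - c * y]

# Fixed starting conditions and a fixed time frame complete the box.
sol = solve_ivp(lotka_volterra, t_span=(0, 50), y0=[10.0, 5.0],
                t_eval=np.linspace(0, 50, 500))

# The configuration space never grows: every point of the trajectory lies
# in the same predefined (x, y) plane. An agential system, by contrast,
# would keep changing the very space through which its trajectory moves.
print(sol.y[:, -1])  # final state of the two populations
```

However complicated we make such a model, its possibility space is given once and for all the moment we write it down.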
Stuart Kauffman has called this co-emergent space of possibilities the adjacent possible (see figure above, right). The first part of our project is an attempt to philosophically (re)define concepts such as emergence, complexity, and agency in order to ground such a view in evolutionary theory and biological research practice. To summarize: in this philosophical part of the project, we push the conceptual boundaries that hinder our understanding of purposive agency and its role in open-ended evolution. To put it in more technical terms: we aim to establish suitable theoretical frameworks that bring together notions such as purposive organismic agency, the complex biological organization that enables it, the radically emergent dynamics underlying its evolution, and the impredicative nature of the resulting dialectic dynamics, i.e., the co-constructive interplay between affordances, goals, and actions that produces the behavior and evolution of biological and human agents. These diverse, foundational, and so far massively understudied aspects of agential evolution will be integrated through the central notion of dynamically growing possibility spaces, constantly driven into the adjacent possible by the intrinsic behavior of agents.

Part 2: Modeling the impossible — simulating a living agent

In the second part of the project, we take a more practical and less abstract approach. Informed by the philosophical investigations outlined above, we will create a mathematical and computational framework (a bit like a new programming language) that allows us to capture organismic agency in simulations of simple unicellular organisms that behave and evolve. Many efforts have been made to come up with such frameworks in the past. Here, we will use a number of new ideas to extend these existing efforts. In particular, we are interested in capturing the self-maintaining organization of an organism, and the way it generates actions. In addition, we will focus on the role that randomness plays in this process, and on how reliable behavior can arise despite the many fluctuations within an organism and in the environment it encounters. Our starting point for these efforts will be a highly abstract minimal model of a cellular agent created by Jannie Hofmeyr, based on Robert Rosen's pioneering work on the organization of organisms. In other words, we intend to push the methodological boundaries that limit our ability to model agential dynamics. Today's most advanced agent-based models and machine-learning methods still depend on externally prescribed goals and targets. A genuine model of agential evolution requires a new modeling paradigm, in which evolving agents write their own rules, evolve their own codes, generate and transcend their own boundaries, and choose their own goals. Whether and how this could be achieved remains an open question, not only in biology, but also in social systems, AI, and ALife research. In fact, some theoreticians (including myself) have predicted that it is not entirely possible. By trying anyway, we can find out (a) how far we can get with dynamical models and simulations, and which parts of an organism's organization and behavior such models can capture; and (b) how to better understand and circumscribe the limits of predictability in such systems. In this sense, failing in our task to come up with a perfect model of a living organism may teach us more about life than succeeding. Whatever we encounter on our journey, we are sure to learn something new.
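To give a feel for the gap between fixed-rule models and what we are after, here is a toy sketch, entirely my own invention and far simpler than anything the project will actually build. Each agent carries the parameters of its own behavioral rule and passes them on, with mutation, when it reproduces, so the population's rule set drifts along with its dynamics: a faint shadow of agents "writing their own rules." All names and numbers are illustrative assumptions.

```python
# Toy model: each agent's behavioral rule is itself part of the evolving
# state, instead of being fixed once and for all by the modeler.
import random

class Agent:
    def __init__(self, gain, threshold):
        self.gain = gain            # heritable parameter of the foraging rule
        self.threshold = threshold  # heritable satiation threshold
        self.energy = 1.0

    def act(self, available):
        # Behavior depends on the agent's own (mutable, heritable) rule:
        # harvest only when hungry, and only up to the individual gain.
        harvested = min(available, self.gain) if self.energy < self.threshold else 0.0
        self.energy += harvested - 0.1  # harvest minus maintenance cost
        return harvested

def mutate(value, sd=0.05):
    return max(0.0, value + random.gauss(0.0, sd))

def step(population, resource=5.0):
    random.shuffle(population)  # agents compete for a shared resource
    survivors = []
    for agent in population:
        resource -= agent.act(max(resource, 0.0))
        if agent.energy <= 0.0:
            continue                # failed self-maintenance: death
        survivors.append(agent)
        if agent.energy > 2.0:      # reproduction copies a *mutated* rule
            agent.energy -= 1.0
            survivors.append(Agent(mutate(agent.gain), mutate(agent.threshold)))
    return survivors

population = [Agent(gain=0.5, threshold=2.5) for _ in range(20)]
for _ in range(200):
    population = step(population)
print(len(population), "agents; their rules have drifted with the dynamics")
```

Note that even in this sketch the meta-rules (the energy bookkeeping, the mutation step, the reproduction criterion) are still imposed from the outside. That residue of external prescription is precisely the limitation the project sets out to probe.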
Part 3: Why organisms aren't machines

As a final part of the project, we intend to bring the discussion of agency and the possibility of a post-mechanistic science to other scientists and the general public, because we believe that a transition towards a more participatory and sustainable view of the world is essential for human survival on the planet. An expanded scientific view of the world as an evolving open-ended process has profound implications. It requires us to accept certain types of teleological explanations (those concerning the goal-oriented behavior of organisms) as perfectly naturalistic and rigorously scientific ways of understanding the world. This greatly expands the range of questions that scientific research can address. It increases our explanatory power compared to a purely mechanistic approach. In the domain of ethics, we can no longer treat and exploit living beings as if they were machines. Even more broadly, we have to give up our Laplacean dreams of control and predictability. We have to learn how to go with the flow, rather than attempting to constantly master and channel it. We have to realize that the future is fundamentally open. This can be scary, but also liberating. We are responsible for our own actions and their consequences. The evolutionary view of the world is empowering. Here, we push the boundaries of public and scientific awareness concerning purposive agency and open-ended evolution. The massive challenges we seek to address require a community-sized effort, positioned at the heart of 21st-century science, as well as a new post-mechanistic view of what science can be. Unfortunately, very few researchers, and even fewer members of the public, currently realize the scope and implications of this problem. We aim to raise awareness through dissemination: a book written for a general readership about why organisms are not machines, and an innovative outreach strategy led by a professional curator, Basak Senova, and implemented through my existing arts & science collaboration THE ZoNE, which presents our views on agency and evolution and their scientific and societal ramifications in a serious but playful format called wissenskunst that is accessible and (hopefully) attractive to a broad general public.

SOME CONCLUDING PRACTICALITIES

This is it for this first quick overview. I'll write more about how things are going over the next few months. As I mentioned, the project will be hosted at the Department of Philosophy of the University of Vienna. It will start in December 2022 and run for 33 months. It includes two workshops and a larger conference on the topic of agency and evolution, so keep an eye out for announcements. I am particularly proud that this project is not part of Templeton's huge concerted effort to study agency, called "Agency, Directionality, and Function" (website). We very much hope to synergize constructively with parts of that larger project, but I see our approach as complementary and orthogonal to their efforts. And most importantly: we remain independent agents! It would be ironic were it otherwise. You can contact me here if you want to know more about the project.
Serious Play with Evolutionary Ideas

Have I mentioned already that I am part of an arts & science collective here in Vienna? It's called THE ZONE. Yes, you're right. I actually did mention it before. What is it about? And what, in general, is the point of arts & science collaborations? This post is the start of an attempt to give some answers to these questions. It is based on a talk I gave on March 12, 2022 at "Hope Recycling Station" in Prague, as part of an arts & science event organized by the "Transparent Eyeball" collective (Adam Vačkář and Jindřich Brejcha). I'll start with this beautiful etching by polymath poet William Blake, from 1813. It's called "The Reunion of the Soul and the Body," and shows a couple in a wild and ecstatic embrace. I suppose the male figure on the ground represents the body, while the female soul descends from the heavens for a passionate kiss amidst a world (a graveyard with an open grave in the foreground, as far as I can tell) going up in smoke and flames. This image, rich in gnostic symbolism, stands for a way out of the profound crisis of meaning we are experiencing today. Blake's picture graces the cover of one of the weirdest and most psychoactive pieces of literature I have ever read. In fact, I keep on rereading it. It is William Irwin Thompson's "The Time Falling Bodies Take to Light," a wide-ranging ramble about mythmaking, sex, and the origin of human culture. It sometimes veers a bit too far into esoteric and gnostic realms for my taste. But then, it is also a superabundant source of wonderfully crazy ideas and stunning metaphorical narratives that are profoundly helpful if you're trying to viscerally grasp the human condition, especially the current pickle we're in. It's amazing how much this book, written in the late 1970s, fits the zeitgeist of 2022. It is more timely and important than ever.

THREE ORDERS

So why is Blake's image on the cover? "Myth is the history of the soul," writes Thompson in his Prologue. What on Earth does that mean? Remember, this is not a religious text but a treatise on mythmaking and its role in culture. (I won't talk about sex in this post, sorry.) Thompson suggests that our world is in flames because we have lost our souls. This is why we can no longer make sense of the world. A new reunion of soul and body is urgently needed. Thompson's soul is no supernatural immortal essence. Instead, the loss of soul represents the loss of narrative order: the story you tell of your personal experience and how it fits into a larger meta-narrative about the world. A personal mythos, if you want. We used to have such a mythos but, today, we are no longer able to tell this story of ourselves in a way that gives us a stable and intuitive grip on reality. According to cognitive psychologist John Vervaeke, the narrative order is only one of three orders we need in order to get a grip on the world and make sense of it. The narrative order is the story about ourselves, told in the first person (as an individual or a community). The second-person perspective is called the normative order: our ethics, our ways of co-developing our societies. And the third-person perspective is the nomological order: our science, the rules that describe the structure of the world, which constrains our agency and guides our relationship with reality (our agent-arena relationship). All three orders are in crisis right now. Science is being challenged from all sides in our post-truth world. Moral cohesion is breaking down.
But the worst afflicted is the narrative order. We have no story to tell about ourselves anymore. This problem is at the root of all our crises. That is exactly what Thompson means by the soullessness of our time.

THE OLD MYTHOS...

But what is the narrative order, the mythos, that was lost? As I explain in detail elsewhere, it is the parable (sometimes wrongly called allegory) of Plato's Cave. We are all prisoners in this cave, chained to the wall, with an opening behind our backs that we can't see. Through this opening, light seeps into the cave, casting diffuse shadows of shapes that pass in front of the opening onto the wall opposite us. These shadows are all we can see. They represent the totality of our experiences. In Plato's tale, a philosopher is a prisoner who escapes her shackles to ascend to the world outside the cave. She can now see the real world, beyond appearances, in its true light. For Plato, this world consists of abstract ideal forms, to be understood as the fundamental organizational principles behind appearances. He provides us with a two-world mythology that explains the imperfection of our world, and also our journey towards deeper meaning. This journey is a transformative one. It is central to Plato's parable. He calls it anagoge (ancient Greek for "climb" or "ascent"). The philosopher escaping the cave must become a different person before she can truly see the real world of ideal forms. Without this transformation, she would be blinded by the bright daylight outside the cave. Anagoge involves a complexification of her views and a decentering of her stance, away from egocentric motivations towards an omnicentric worldview that encompasses the whole of reality. When she returns to the cave, she is a completely different person. In fact, the other prisoners, her former friends and companions, no longer understand what she is saying, since they have not undergone the same transformation she has. The only way she can make them understand is to convince them to embark on their own journeys. However, most of the prisoners do not want to leave the cave. They are quite comfortable in its warm, womb-like enclosure. With his parable, Plato wanted to destroy more ancient mythologies of gods and heroes. Ironically, in doing so, he created an even more powerful myth, one that governed human meaning-making for almost two-and-a-half millennia. After his death, it was taken up by the Neoplatonists and then by St. Augustine. It entered the mythos of Christianity as the spiritual domain of God, which lies beyond the physical world of our experience. Only faith, not reason, can grant you access. Later, this idea of a transcendent realm was secularized by Immanuel Kant, who postulated a two-world ontology of phenomena and noumena, the latter ("das Ding an sich") completely out of reach for a limited human knower.

... AND ITS DOWNFALL

All of this was brutally shattered by Friedrich Nietzsche (although others, such as Georg Wilhelm Friedrich Hegel and Auguste Comte, also contributed enthusiastically to the demolition effort). Nietzsche is the prophet of the meaning crisis. "God is dead, and we have killed him" doesn't leave much room anymore for the spiritual realm of traditional Christianity. What Nietzsche means here is not an atheistic call to arms. It is the observation that traditional religion has already become increasingly irrelevant for a growing number of people, and that this process is inevitable and irreversible in our modern times.
Nietzsche also destroys Kant's transcendental noumenal domain, all in just one page of "The Twilight of the Idols," unambiguously entitled "History of an Error." When Nietzsche is through with it, the two-world mythology is nothing more than a heap of smoking rubble. And things have only gotten worse since then. As Nietzsche predicted, the demolition of the Platonic mythos was followed by an unprecedented wave of cynical nihilism over what we could call the long 20th century, culminating in the postmodern relativism of our post-fact world. Under these circumstances, any attempt at reconstructing the cave would be a fool's hope.

A NEW MYTHOS?

But we can try to do better than that! What Thompson and Vervaeke want, instead of crawling back into the womb of the cave, is a new mythos, a new history of the soul: (meta-)narratives adequate for the zeitgeist of the 21st century. But who would be our contemporary mythmakers? Thompson points out a few problems in "Falling Bodies": "The history of the soul is obliterated, the universe is shut out, and on the walls of Plato's cave the experts in the casting of shadows tell the story of Man's rise from ignorance to science through the power of technology." In Thompson's view, scientists are the experts in the casting of shadows, generating ever more sophisticated but shallow appearances, without ever getting to the deep underlying issues. What about artists then? "In the classical era the person who saw history in the light of myth was the prophet, an Isaiah or Jeremiah; in the modern era the person who saw history in the light of myth was the artist, a Blake or a Yeats. But now in our postmodern era the artists have become a degenerate priesthood; they have become not spirits of liberation, but the interior decorators of Plato's cave. We cannot look to them for revolutionary deliverance." Harsh: postmodern artists as the interior decorators of Plato's cave. Shiny surface and distanced irony over deep meaning and radical sincerity. The meaning crisis seems to have fully engulfed both the arts and the sciences. Thompson's pessimistic conclusion is that, in their current state, neither is likely to help us restore the narrative order.

WISSENSKUNST

This is where Thompson (pictured above) proposes the new practice of wissenskunst. Neither science nor art, yet also a bit of both (in a way). He starts out with a reflection on what a modern-day prophet would be: "The revisioning of history is ... also an act of prophecy―not prophecy in the sense of making predictions, for the universe is too free and open-ended for the manipulations of a religious egotism―but prophecy in the sense of seeing history in the light of myth." Since artists are interior decorators now, and scientists cast ever more intricate shadows in the cave, we need new prophets. But not religious ones. More something like this: "If history becomes the medium of our imprisonment, then history must become the medium of our liberation; (to rise, we must push against the ground to which we have fallen). For this radical task, the boundaries of both art and science must be redrawn. Wissenschaft must become Wissenkunst." (Wissenskunst, actually. Correct inflections are important in German!) The task is to rewrite our historical narrative in terms of new myths. To create a new narrative order. A story about ourselves. But what does "myth" mean, exactly?
In an age of chaos, like ours, myth is often taken to be "a false statement, an opinion popularly held, but one known by scientists and other experts to be incorrect." This is not what Thompson is talking about. Vervaeke captures his sense of myth much better: "Myths are ways in which we express and by which we try to come into right relationship to patterns that are relevant to us either because they are perennial or because they are pressing." So what would a modern myth look like?

ZOMBIES!

Well, according to Gilles Deleuze and Félix Guattari, there is only one modern myth: zombies! Vervaeke and co-authors tie the zombie apocalypse to our current meaning crisis: zombies are "the fictionally distorted, self-reflected image of modern humanity... zombies are us." The undead live in a meaningless world. They live in herds but never communicate. They are unapproachable, ugly, unlovable. They are homeless, aimlessly wandering, neither dead nor alive. Neither here nor there. They literally destroy meaning by eating brains. In all these ways, zombification reflects our loss of narrative order. Unfortunately, the zombie apocalypse is not a good myth. It only expresses our present predicament; it does not help us understand, solve, or escape it. A successful myth, according to Vervaeke, must "give people advice on how to get into right relationship to perennial or pressing problems." Zombies just don't do that. Zombie movies don't have happy endings (with only one exception that I know of). The loss of meaning they convey is rampant and terminal. Compare this with Plato's myth of the cave, which provides us with a clear set of instructions on how to escape our imperfect world of illusions. Anagoge frees us from our shackles. What's more, it is achievable using only our own faculties of reason. No other tools required. In contrast, you can only run and hide from the undead. There is no escaping them. They are everywhere around you. The zombie apocalypse is claustrophobic and anxiety-inducing. It leaves us without hope. We need better myths for meaning-making. But how to create them? Philip Ball, in his excellent book about modern myths, points out that you cannot write a modern myth on purpose. Myths arise in a historically contingent manner. In fact, they have no single author. Once a story becomes myth, it mutates and evolves through countless retellings. It is the whole genealogy of stories that comprises the myth. Thompson comes to a very similar conclusion when looking at the Jewish midrashim, for example, which are folkloristic exegeses of the biblical canon. For it to be effective, a myth must become a process that inspires. Just look at the evolution of Plato's two-world mythology from the original to its Neoplatonist, Christian, and Kantian successors. So where to begin if we are out to generate a new mythology for modern times? I think there is no other way than to look directly at the processes that drive our ability to make sense of the world. If we see these processes more clearly, we can play with them, spinning off narratives that might, eventually, become the roots of new myths: myths based on cognitive science rather than religious or philosophical parables.

THE PROBLEM OF RELEVANCE

By now, it should come as no surprise that rationality alone is not sufficient for meaning-making. We have talked about the transformative process of anagoge, in which we need to complexify and decenter our views in order to make sense of the world. What is driving this process?
The most basic problem we need to tackle when trying to understand anything is the problem of relevance: how do we decide what is worth understanding in the first place? And once we've settled on some particular aspect of reality, how do we frame the problem so that it actually can be understood? A modern mythology must address these fundamental questions. Vervaeke and colleagues call the process involved in identifying relevant features relevance realization. At the risk of simplifying a bit, you can think of it as a kind of "Where's Wally?" (or "Waldo," for our friends from the U.S.). Reality bombards us with a gazillion sensory impressions. Take the crowd of people on the beach in the picture above. How do we pick out the relevant one? Where is Wally? We cannot simply reason our way through our search (although some search strategies will, of course, be more reasonable than others). We do not yet have a good understanding of how relevance realization actually works, or what its cognitive basis is, but there are a few aspects of this fundamental process that we do know about and that are relevant here. On the one hand, we must realize that relevance realization reaches into the depths of our experience, arising at the very first moments of our existence. A newborn baby (and, indeed, pretty much any living organism) can realize what is relevant to it. We must therefore conclude that this process occurs at a level below that of propositional knowledge. We can pick out what is relevant before we can think logically. On the other hand, relevance realization also encompasses the highest levels of cognition. In fact, we can consider consciousness itself as some kind of higher-order recursive relevance realization. Importantly, relevance realization cannot be captured by an algorithm. The number of potentially relevant aspects of reality is indefinite (and potentially infinite), and cannot be collected into a well-formulated mathematical set, which would be necessary to define an algorithm. What's more, the category of "what we find relevant" does not have any essential properties. What is relevant radically depends on context. In this regard, relevance is a bit like the concept of "adaptation" in evolution. What is adaptive will radically depend on the environmental context. There is no essential property of "that which is adaptive." Similarly, we must constantly adapt to pick out the relevant features of new situations. Thus, in a very broad but also deep sense, relevance realization resembles an evolutionary adaptive process. And just like there is competition between lots of different organisms in evolution, there is a kind of opponent processing going on in relevance realization: different cognitive processes and strategies compete with each other for dominance at each moment. This explains why we can shift attention very quickly and flexibly when required (and sometimes when it isn't), but also why our sense-making is hardly consistent across all situations. This is not a bad thing. Quite the opposite: it allows us to be flexible while maintaining an overall grip on reality. As Groucho Marx is supposed to have said: "I have principles, but if you don't like them, I have others."

INVERSE ANAGOGE & SERIOUS PLAY

Burdened with all this insight into relevance realization, we can now come up with a revised notion of anagoge, one which is appropriate for our secular modern times. It is quite the inverse of Plato's climb into the world of ideals.
Anagoge now becomes a transformative journey inside ourselves and into our relationship with the world. A descent instead of an ascent. Transformative learning is a realignment of our relevance realization processes to get a better grip on our situation. We can train this process through practice, but we cannot step outside it to observe and understand it "objectively." We cannot make sense of it, since we make sense through it. Basically, the only way to train our grip on reality is to tackle it through practice; more specifically, to engage in serious play with our processes of relevance realization. To quote metamodern political philosopher Hanzi Freinacht, we must "... assume a genuinely playful stance towards life and existence, a playfulness that demands of us the gravest seriousness, given the ever-present potentials for unimaginable suffering and bliss." Serious playfulness, sincere irony, and informed naiveté. This is what it takes to become a metamodern mythmaker. So this is the beginning of our journey. A journey that will eventually yield a new narrative order. Or so we hope. It is not up to us to decide, as we enter THE ZONE between arts and science. Our quest is ambitious; impossible, maybe. But try we must, or the world is lost. This post is based on a lecture held on March 12, 2022 at the "Transparent Eyeball" arts & science event in Prague, which was organized by Adam Vačkář and Jindřich Brejcha. Based on work by William Irwin Thompson, John Vervaeke, and Hanzi Freinacht.
I've been silent on this blog for too long. What about reactivating it with some reflections on its somewhat cryptic title? The phrase "untethered in the Platonic realm" comes from a committee report I received when I applied for a fellowship with a project to critically examine the philosophy underlying the open science movement. The feedback (as you may imagine) was somewhat less than enthusiastic. The statement was placed prominently at the beginning of the report to tell me that philosophy is an activity exclusively done in armchairs, with no impact on anything that truly matters in practice. The committee saw my efforts as floating in a purely abstract domain, disconnected from reality. I suspect the phrase was also a somewhat naive (and more than a little pathetic) attempt by the high-profile scientific operators on the panel to showcase their self-assumed philosophical sophistication. What it did was exactly the opposite: it revealed just how ignorant we are these days of the philosophical issues that underlie pretty much all our current misery. To quote cognitive scientist and philosopher John Vervaeke: beneath the myriad crises humanity is experiencing right now, there is a profound crisis of meaning. And what, if not that, is a philosophical problem? Vervaeke's meaning crisis affects almost all aspects of human society. In particular, it affects our connectedness to ourselves, to each other, and to our environment. We are quite literally losing our grip on reality. And believe it or not, all of this is intimately linked to Plato and his allegedly irrelevant and abstract ideas. So why not try to illustrate the importance of philosophy for our practical lives with Plato's allegory of the cave (which is more of a parable, really)? I am part of an arts and science collective called THE ZONE. Together with Marcus Neustetter (who is an amazing artist), we've created a virtual-reality rendition of Plato's cave, which allows us to explore philosophical issues while actually looking at the shadows on the wall (and what causes them). What follows is a summary of some of the ideas we discuss during our mythopoietic philosophical stroll. I'm sure most of you will have heard of Plato's parable of the cave (part of his "Republic"), and are vaguely familiar with what it stands for: we humans are prisoners in a cave, chained with our backs to the wall. An unseen source of light behind our backs provides diffuse and flickering lighting. Shapes are paraded or pass in front of the light source, casting fleeting shadows on the wall. These shadows are all we can see. They are our reality, but they aren't accurate or complete representations of the real world. For Plato, a philosopher (and this would include scientists today) is a prisoner who manages to break her chains and escape the cave. As the philosopher ventures to find the exit, she is first blinded by the light coming from outside. Now we come to what I think is the central and most important aspect of the story, an aspect that is often overlooked. As the philosopher ascends from the cave to the surface, she must adapt to her new conditions. Her transformative journey to the surface is called "anagoge," which simply means "climb" or "ascent" in ancient Greek. It later acquired a mystical and spiritual meaning in the context of Christianity. But for Plato, it is simply the series of changes in yourself that you must go through in order to be able to see the real world for what it is.
For Plato, the world the philosopher discovers is an ideal world of timeless absolute forms. This is what we usually associate with his parable of the cave: the invention of what later (via Neoplatonism and Augustine) became the religious and spiritual realm of Christianity, above and beyond the physical realm of our everyday lives. But before we get to the problems associated with that idea, let me point out one more overlooked aspect of the story. An important part of Plato's parable is that the philosopher returns to the cave, eager to tell the other prisoners about the real world and the fact that they are only living in the shadows. Unfortunately, the others do not understand her, since they have not gone through the transformative process of anagoge themselves. Through her journey, the philosopher has become a different kind of person. She quite literally lives in a different world, even after she descends back into the cave. If she wants to share her experience in any meaningful way, she needs to convince the other prisoners to undertake their own journeys. My guess, though, is that most of them are pretty happy to stay put, chained as they are to the wall of the cave. I cannot emphasize enough how important this story has been for the last 2,500 years of human history. Untethered in its abstract realm it is not. And it is at the very root of our current meaning crisis, as Vervaeke points out (I've largely followed his interpretation of Plato above). There is a deep irony in this whole history. Plato's original intention with his tale of abstraction was to fight the superstitious mythological worldviews most of his contemporaries held on to, which were based on anthropomorphized narratives expressed in terms of the acts of gods, heroes, or demons. On the one hand, there is no doubt that Plato succeeded in introducing new, more abstract, more general metaphors for the human condition. On the other hand, all he did was introduce another kind of myth. He invented the two-world mythology of an ideal realm transcending our imperfect world of everyday experiences. One of the most important philosophers of the early 20th century, Alfred North Whitehead, famously quipped that "[t]he safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato." Whitehead also introduced the concept of the fallacy of misplaced concreteness (sometimes called the reification fallacy), which pretty accurately describes what happened to Plato and his cave: this fallacy means mistaking something abstract for something concrete. In other words, you are mistaking something that is made up for something real. Oversimplifying just a little bit, we can say that this is what Christians did with the Platonic realm of ideal forms. If this world you live in does not make sense to you, just wait for the next one. It'll be much better. And so, the abstract realm of God became a cornerstone of our meaning-making, up until the Renaissance and subsequent historical developments brought all kinds of doubts and troubles into the game. To be fair to Plato, he did not see his two worlds as disconnected and completely separate realms the way Christianity came to interpret him. His worlds were bridged by the transformative journey of anagoge, after all. And that is why his story is still relevant today.
Sometime between the Renaissance and Friedrich Nietzsche declaring God to be dead, Plato's ideal world became not so much implausible as irrelevant for an increasing number of people. It no longer touched their lives or helped them make sense of the world. The resulting disappearance of Plato's ideal world is succinctly recounted in Nietzsche's "Twilight of the Idols," in what is surely one of the best one-page slams philosophy has ever produced. Unfortunately, though, we threw out the baby with the bathwater. With the Platonic realm no longer a place to be untethered in, we also lost the notion of anagoge. This is tragic, because the transformative journey stands for the cultivation of wisdom. Self-transcendence has become associated with superficial MacBuddhism and new-age spiritual bypassing. An escape from reality. To come to know the world, we no longer consider our own personal development important (other than acquiring tools and methods, but that is hardly transformative). Instead, we believe in the application of the scientific method, narrowly defined as rationality in the form of logical inference applied to factual empirical evidence, as the best way to achieve rigorous understanding. Don't get me wrong: science is great, and its proper application is more important than ever before. What I'm saying here is that science alone is not sufficient to make sense of the world. To achieve that, we need to tether Plato's anagoge back to the real world. To understand what's going on, we must concede a central point to Plato: there is much more going on than we are aware of. Much more than we can rationally grasp. Our world contains an indefinite (and potentially infinite) number of phenomena that may be relevant to us; potentially unlimited differences that make a difference (to use Gregory Bateson's famous term). How do we choose what is important? How do we choose what to care about? This is not a problem we can rationally solve. First of all, any rational search for relevant phenomena will succumb to the problem of combinatorial explosion: there are simply too many possible candidates to rationally choose from. We get stuck trying. What's more, rationality presupposes that we have already chosen what to care about. You must have something to think about in the first place. The process of relevance realization, as described by Vervaeke and colleagues, however, happens at a much deeper level than our rational thinking. A level that is deeply experiential, and can only be cultivated by appropriate practice. I have much more to say about that at some later point. Thus, to summarize: the hidden realm that Plato suspected to be elevated above our real world is really not outside his cave, but within every one of us. An alternative metaphor for anagoge, without the requirement of a lost world of ideal forms, is to enter our shadows, to discover what is within them. This is what we are exploring with Marcus. Self-transcendence as an inward journey. Immanent transcendence, if you want. We are turning Plato's cave inside out. The hidden mystery is right there, not behind our backs, not in front of our noses, not inside our heads, but embedded in the way we become who we are. Here we can turn to Whitehead again, who noticed that, to criticize the philosophy of your time, you must direct your attention to those fundamental assumptions that everyone presupposes, assumptions that appear so obvious that people do not know they are assuming them, because no alternative ways of putting things have ever occurred to them.
The assumption that reality can be rationally understood is one of these in our late modern times. It blinds us to a number of obvious insights. One of them is that we need to go inside ourselves to get a better grip on reality. This is not religious or new-age woo. It is existential. As the late E. O. Wilson rightly observed (in the context of tackling our societal and ecological issues): we are drowning in information, while starving for wisdom. We can gather more data forever. We can follow the textbook and apply the scientific method like an algorithm. We can formulate a theory of everything (that will really be about nothing). But without self-transcendence, we will never make any sense of the world. And we, as artists, philosophers, and scientists, have completely forgotten about that. Perhaps because we're too busy competing in our respective rat races, and don't allow ourselves to engage in idle play anymore. But I digress... There is the irony again: it's not Plato, but the scientists on that selection panel, who are completely disconnected from reality. They've lost their grip to such an extent that they'd never even realize it. Where does that leave us? What do we need to do? There are a bunch of theoretical and practical ideas that I would like to talk about in future posts on this blog. But one thing is central: we can't just think our way through this in our armchairs. Philosophy is important. But I concede this point to my committee of conceited, condescending panelists: philosophy is only truly relevant if it touches on our practices of living, on our institutions, on our society. It is time for philosophy to come out of the ivory tower again. We need a philosophy that is not only thought. We need a philosophy that is practiced. The ancients, like Plato, were practitioners. Let's tether Plato back to the real world, where he can have his rightful impact. Just like his philosopher, who ultimately must return to the cave to complete her transformative journey. Watch the first performance of THE ZONE in Plato's Cave.
VR landscaping and images by Marcus Neustetter. Much of this blog entry is based on John Vervaeke's amazing work. Check out his life-changing lecture series Awakening from the Meaning Crisis here. Or start with the summary of his ideas as presented on the Jim Rutt Show [Episode 1,2,3,4,5].

So, this is as good a reason as any to wake up from my blogging hibernation/estivation that lasted almost a year, and start posting content on my website again. What killed me this last year was a curious lack of time (for someone who doesn't actually have any job) and a gross surplus of perfectionism. Some blog posts got started but never finished. And so on. And so forth. So here we are: I'm writing a very short post today, since the link I'll post will speak for itself, literally. A couple of weeks ago, I had the pleasure of talking to Paul Middlebrooks (@pgmid), who runs the fantastic "Brain Inspired" podcast. Paul is a truly amazing interviewer. He found me on YouTube, through my "Beyond Networks" lecture series. During our discussion, we covered an astonishingly wide range of topics: from the limits of dynamical systems modeling, to process thinking, to agency in evolution, to open-ended evolutionary innovation, to AI and agency, life, intelligence, deep learning, autonomy, perspectivism, the limitations of mechanistic explanation (even the dynamic kind), and the problem with synthesis (and the extended evolutionary synthesis, in particular) in evolutionary biology. The episode is now online. Check it out by clicking on the image below. Paul also has a breakdown of topics on his website, with precise times, so you can home in on your favorite without having to listen to all the rest. Before I go, let me say this: please support Paul and his work via Patreon. He has an excellent roster of guests (not counting myself), talking about a lot of really fascinating topics.
This is the English translation of an article that was originally published in German as part of the annual essay collection of Laborjournal (publication date Jul 7, 2020). Science finds itself exposed to an increasingly anti-intellectual and post-factual social climate. Few people realise, however, that the foundations of academic research are also threatened from within, by an unhealthy cult of productivity and spreading career-oriented self-censorship. Here I present a quick diagnosis with a few preliminary suggestions on how to tackle these problems. In Raphael's "School of Athens" (above) we see the ideal of the ancient Academy: philosophers of various persuasions think and argue passionately but rationally about the deep and existential problems of our world. With Hypatia, there is even a woman present at this boys' club (center left). These thinkers are protected by an impressive vault from the trivialities of the outside world, while the blue sky in the background opens up a space for daring flights of fancy. The establishment of modern universities — beginning in the early 19th century in Berlin — was very much inspired by this lofty vision.

THE RESEARCH FACTORY

Unfortunately, we couldn't be further from this ideal today. Modern academic research resembles an automated factory more than the illustrious discussion circle depicted by Raphael. Over the past few decades, science has been trimmed for efficiency according to the principles of the free-market economy. This is not only happening in the natural sciences, by the way, but also increasingly in the social sciences and the humanities. The more money the taxpayer invests in academia, the higher the expectation of rapid returns. The outcomes of scientific projects are supposed to have social impact and provide practical solutions to concrete problems. Even evolutionary theorists must fill out the corresponding section in their grant applications. Science is seen as a "deus ex machina" for solving our societal and technological problems. Just like we go to the doctor to get instant pain relief, we expect science to provide instant solutions to complex problems or, at the very least, a steady stream of publications, which are supposed to eventually lead to such solutions. The more money goes into the system, the more applied wisdom is expected to flow from the other end of the research pipeline. Or so the story goes. Unfortunately, basic research doesn't work that way at all. And, regrettably, applied science will quickly get stuck if we no longer do any real basic science. As Louis Pasteur once said: there is no applied research, only research and its practical applications. There are no shortcuts to innovation. Just think about the history of the laser, theoretically predicted by Albert Einstein in 1917. The first functional ruby laser was constructed in 1960, and mass-market applications of laser technology only began in the 1980s. A similar story can be told for Paul Dirac's 1928 prediction of the positron, which was confirmed experimentally in 1932. The first PET scanner came to market in the 1970s. Or let's take PCR, of Covid-19 test fame. The polymerase chain reaction goes back to the serendipitous discovery of a high-temperature polymerase from a thermophilic bacterium, first described by microbiologists Thomas Brock and Hudson Freeze (no joke!) in the hot springs of Yellowstone Park in the 1960s. PCR wasn't widely used in the laboratory until the 1990s. A study from 2013 by William H.
Press — then a science advisor to Barack Obama — presents studies by economist and Nobel laureate Robert Solow, which examine the positive feedback between innovation, technology, and the wealth of nations. Solow draws two key conclusions from his work. First, technological innovation is responsible for about 85% of U.S. economic growth over the past hundred years or so. Second, the richest countries today are those that were the first to establish a strong tradition in basic research. Building on Solow's insights, Press argues that basic research must be generously funded by the state. One reason is that it is impossible to predict which fundamental discoveries will lead to technological innovations. Another is that the path to application can take decades, as the examples above illustrate. Finally, breakthroughs in basic science often have low appropriability, that is, the money gained from their application rarely flows back to the original investor. Think of Asian CD and DVD players equipped with lasers based on U.S. research and development, which yielded massive profits while outcompeting more expensive (and inferior) products of American make. This is the economic argument for why state-funded basic research is more important than ever.

EFFICIENCY OR DIVERSITY?

But precisely here lies the problem: basic research simply does not work according to the rules of the free market. Nevertheless, we have an academic research system that is increasingly dominated by these rules. Mathematicians Donald and Stuart Geman note that the focus of fundamental breakthroughs in science shifted during the 20th century from conceptual to technological advances: from the radical revolution in our worldview brought about by quantum and relativity theory to the sequencing of the human genome, which, in the end, yielded disappointingly few medical advances or new insights into human nature. A whole variety of complex historical reasons are responsible for this shift. One of them is undoubtedly the massive transformation in the incentive structure for researchers. We have established a monoculture: a monoculture of efficiency and accountability, which leads to an impoverished intellectual environment that is no longer able to nourish innovative research ideas, even though there is more money available for science than ever before. Isn't it ironic that this money would be invested more efficiently if there were less pressure for efficiency in research?

Researchers who need to be constantly productive to advance their careers must constantly appear busy. This is absolutely fatal, particularly for theoretically and philosophically oriented projects. First of all, good theory requires creativity, which needs time, inspiration, and a certain kind of productive leisure. Second, the most important and radical intellectual breakthroughs are far ahead of their time, without immediately obvious practical applications, and generally associated with a high level of risk. Those who tackle complex problems will fail more often. Some breakthroughs are only recognised in hindsight, long after they have been made. Few researchers today can muster the time and courage to devote themselves to projects with such uncertain outcomes. The time of the romantics is over; now the pragmatists are in charge. Those who want to be successful in current-day academia — especially at an early stage of their careers — must focus on tractable problems in established fields: the low-hanging fruit.
This optimises personal productivity and chances of success, but it diminishes the diversity and originality of thinking in academic research overall, and it wastes the best years of too many intrepid young explorers. Unfortunately, originality cannot be measured, while productivity can. Originality often leads to noteworthy conceptual innovations; productivity on its own rarely does. Goodhart's Law — named after a British economist — says that a measure of success ceases to be useful once it has become an incentive. This is happening in almost all areas of society at the moment, as pointedly described by U.S. historian Jerry Z. Muller in his excellent book "The Tyranny of Metrics." In science, Goodhart's Law leads to increased self-citation, a flood of ever shorter publications (approaching what is called the least publishable unit) with an ever increasing number of co-authors, as well as more and more academic clickbait — sensational titles in glossy journals — that deliver less and less substance. Put succinctly: successful researchers are more concerned with their public image and their professional networks today than ever before, a tendency which is hardly conducive to depth of insight.

What follows from all this is widespread career-oriented self-censorship among academics. If you want to be successful in science, you need to adapt to the system. Nowhere (with the potential exception of the arts) is this more harmful than in basic research. It leads to shallowness, it fosters narcissism and opportunism, and it produces more appearance than substance, problems which are gravely exacerbated by the constant acceleration of academic practice. Nobody has time anymore to follow complex trains of thought. An argument either fits your thinking habits and what you see as the zeitgeist of your field, or it is preemptively trashed in review. In the U.S., for example, an empirical study found that biomedical grant applications which continue the work of previously successful projects are favoured over more exploratory proposals. More of the same, instead of exploration where it is most promising. And so the monoculture becomes ever more monotonous.

FROM AN INDUSTRIAL TO AN ECOLOGICAL MODEL OF RESEARCH PRACTICE

How can we escape this vicious circle? It is not going to be easy. First, those who profit most from the current system are extremely complacent and powerful. They can show, through their quantitative metrics, that academic science is more productive than ever. The loss of originality (and the suffering of the victims of this system) is hard to measure, and is therefore treated as a non-issue. What cannot be measured does not exist. In addition, the current flurry of technological innovations (mostly in the area of information technology) gives us the impression that we have the world and our lives more under control than ever. All of this supports the impression that science is fully performing its societal function. But appearances can be deceptive. Indeed, we do not need more facts to tackle the existential problems of humanity. What we do need is deeper insight and more wisdom, and, just like originality, these cannot be measured. There are cracks appearing in the facade of modern science which suggest that we must change our attitude. I've already mentioned the Human Genome Project, which cost a lot of money but did not deliver the expected profusion of cures (or any deeper insight into human nature).
Even less convincing is the performance, so far, of the Human Brain Project, which promised us a simulation of the entire human prefrontal cortex for a mere billion euros. Not much has happened, but this is not surprising, because it was never clear what kind of insights we would gain from such a simulation anyway. These are signs that the technology-enamoured and -fixated system we've created is about to hit a wall.

Since the main problem of academic science is an increasing intellectual monoculture, it is tempting to use ecology as a model and inspiration for a potential reform. As mentioned at the outset, the current model of academic research is steeped in free-market ideology. It is an industrial system. We want control over the world we live in. We want measurable and efficient production. We foster this through competition. As in the examples of industrial agriculture and economic markets, the shadow side of this cult of productivity is risk-aversion and the potential for a ruinous race to the bottom. What we need is an ecological reform of academic research! Quite literally. We need to shift from a paradigm of control to a paradigm of participation. Young researchers should be taken seriously, properly supported, and encouraged to take risks and responsibility. What we want is not maximal production, but maximal depth, sustainability, and reproducibility of scientific results. We want societal relevance based on deep insight rather than technological miracle cures. We need an open and collaborative research system that values the diversity of perspectives and approaches in science. We need a focus on innovation. In brief, we need more lasting quality rather than short-term quantity. Our scientific problems, therefore, mirror those of society at large rather precisely.

STEPS TOWARDS AN ECOLOGICAL RESEARCH ECOSYSTEM

How is this supposed to work in practice? I assume that I am mostly addressing practicing researchers here, which is why I focus on propositions that can be implemented without major changes in national or international research policy. Let me classify them into four general topics:
Last week, I discussed an article published by Mike Levin and Dan Dennett in Aeon. I really don't want to obsess over this rather mediocre piece of science writing, but it does bring up a number of points that warrant some additional discussion. The article makes a number of strong claims about agency and cognition in biology. It confused me with its lack of precision and a whole array of rather strange thought experiments and examples. Since I published my earlier post, several Tweeps (especially a commenter called James of Seattle) have helped me understand the argument a little better. Much obliged! The result is an interpretation of the article that veers radically away from panpsychism, in a direction more consistent with Dennett's earlier work. Let me try to paraphrase:
The argument Levin and Dennett present is not exactly new. Points (1) to (3) are almost identical to Ernst Mayr's line of reasoning from 1961, which popularised the notion of "teleonomy"—denoting evolved behaviour, driven by a genetic program, that seems teleological because it was adapted to its function by natural selection. At least there is a tangible argument here that I can criticise. And it's interesting. Not because of what it says (I still don't think that it talks about agency in any meaningful way), but because of what it's based on—its episteme, to use Foucault's term. To be more specific: this interpretation reveals that the authors' worldview rests on a double layer of metaphors that massively oversimplify what's really going on. Let me explain.

ORGANISMS ≠ MACHINES

The first metaphorical layer on which the argument rests is the machine conception of the organism (MCO). It is the reason we use terms such as "mechanism," "machinery," "program," "design," "control," and so on, to describe cells and other living systems. Levin and Dennett use a typical and very widespread modern version of the MCO, which is based on computer metaphors. This view considers cells to be information-processing machines, an assumption that doesn't even have to be justified anymore. As Richard Lewontin (one of my big intellectual heroes) points out: "[T]he ur-metaphor of all modern science, the machine model that we owe to Descartes, has ceased to be a metaphor and has become the unquestioned reality: Organisms are no longer like machines, they are machines." Philosopher Dan Nicholson has written a beautiful and comprehensive critique of this view in an article published in 2013, called "Organisms ≠ Machines." (The only philosophical article I know of with an inequality sign in its title, but maybe there are others?)

Dan points out that the machine metaphor seems justified by several parallels between machines and organisms. They are both bounded physical systems. They both act according to physical law. They both take up and transform energy, converting part of it into work. They are both hierarchically structured and internally differentiated. They can both be described relationally in terms of causal interactions (as blueprints and networks, respectively). And they are both organised in a way that makes them operate towards the attainment of certain goals. Because of this, they can both be characterised in functional terms: knives are for cutting, lungs are for breathing. But, as Dan points out, the most obvious similarities are not always the most important ones! In fact, there are three reasons why the machine metaphor breaks down, all of which are intimately connected to the topic of organismic agency—the real kind, which enables organisms to initiate causal effects on their environments from within their system boundaries (see my earlier post). Here they are:

1. Organisms are intrinsically purposive: their activities ultimately serve their own self-maintenance, while the purposes of a machine lie outside it, with its designers and users.

2. The components of an organism are interdependent: they produce and maintain one another, while the parts of a machine are manufactured, assembled, and repaired from the outside.

3. Organisms are open-ended, transient processes: they persist through the constant turnover of their own components, while a machine is a static structure of fixed parts.
These are three pretty fundamental ways in which organisms are not at all like machines! And true agency depends on all of them, since it requires self-maintaining organisation, the kind that underlies the intrinsic purpose, interdependence, and open-ended, transient structure of the organism. To call preprogrammed evolved responses "agency" is to ignore these fundamental differences completely. Probably not a good thing if we really want to understand what life is (or what agency is, for that matter).

INTENTIONAL OVERKILL

The second metaphorical layer on which Levin and Dennett's argument rests is the intentional stance. Something really weird happens here: basically, the authors have done their best to convince us that organisms are machines. But then they suddenly pretend they're not. That they act with intentionality. Confused yet? I certainly am. The trick here is a subtle switch of meaning in the term "agency." While agency was originally defined as a preprogrammed autonomous response of the cell (shaped by evolution), it now becomes something very much like true agency (the kind that involves action originating from within the system). This switch is justified by the argument that the cell is only acting as if it had intentions. Intentionality is a useful metaphor to describe the machine-like but autonomous behaviour of the cell. A useful heuristic. In a way, that's OK. Even Dan Nicholson agrees that this heuristic can be productive when studying well-differentiated parts of an organism (such as cells). But is this sane, is it safe, more generally? I don't think so. The intentional stance creates more problems than it solves.

For example, it leads the authors to conflate agency and cognition. This is because the intentional stance makes it easy to overlook the main difference between the two: cognitive processes, such as decision-making, involve true intentionality. Arguments and scenarios are weighed against each other. Alternatives are considered. Basic agency, in contrast, does not require intentionality at all. It simply means that an organism selects from a repertoire of alternative behaviours according to its circumstances. It initiates a given activity in pursuit of a goal, but it need not be aware of its intentions. As mentioned earlier, agency and cognition are related, but they are not the same. Bacteria have agency, but no cognition. This point is easily lost if we consider all biological behaviour to be intentional. The metaphor fails in this instance, but we're easily fooled into forgetting that it was a metaphor in the first place. The exact opposite also happens, of course: if we take all intentionality to be metaphorical, we are bound to trivialise it in animals (like human beings) that do have a nervous system.

The metaphorical overkill that is happening here is really not helping anyone grasp the full complexity of the problems we are facing. It explains phenomena such as agency and intentionality away, instead of taking them seriously. While the intentional stance is supposed to fix some of the oversimplifications of the machine metaphor, all it does is make them worse. The only thing this layering of metaphors achieves is obfuscation. We're fooling ourselves by hiding the fact that we've drastically oversimplified our view of life. Not good. And why, you ask, would we do this? What do we gain through this kind of crass self-deception?
Well, in the end, the whole convoluted argument is just there to save a purely mechanistic approach to cellular behaviour, while also justifying teleological explanations. We need this metaphorical overkill because we don't believe that we can be scientific without seeing the world as a mechanistic clockwork. This is a complicated topic. We'll revisit it very, very soon on this blog. I promise.

EMMENTAL CHEESE ONTOLOGY

In the meantime, let's see what kind of philosophical monster is being created here. The machine view and the intentional stance are both approaches to reality—they are ontologies in the philosophical sense of the term—that suit a particular way of seeing science, but don't really do justice to the complexity and depth of the phenomena we're trying to explain. In fact, they are so bad that they resemble layered slices of Emmental cheese: bland, full of holes, and with a slightly fermented odour. Ultimately, what we're doing here is creating a fiction, a simulation of reality. Jean Baudrillard calls this hyperreality; British filmmaker Adam Curtis calls it HyperNormalisation. It's the kind of model of reality we know to be wrong, but we accept it anyway. Because it's useful in some ways. Because it's comforting and predictable. Because we see no alternative. Not just fake news, but a whole fake world.
It's not cognition, but metaphors all the way down. Of course, the responsibility for this sorry state of affairs can't all be pinned on this one popular-science article. It's been going on since Descartes brought us the clockwork universe. Levin and Dennett's piece is just a beautiful example of the kind of mechanistic oversimplification modernity has generated. It demonstrates that this kind of science is reaching its limits. It may not have exhausted its usefulness quite yet, but it is certainly in the process of exhausting its intellectual potential. Postmodern criticisms—such as those by Foucault and Baudrillard, whom I've mentioned above—are hitting home. But they don't provide an alternative model for scientific knowledge, leaving us to drift in a sea of pomo-flavoured relativism. What we need is a new kind of science, resting on more adequate philosophical foundations, that answers those criticisms. One of the main missions of this blog is to introduce you to such an alternative. A metamodern science for the 21st century. The revolution is coming. Join it. Or stay with the mechanistic reactionaries. It's up to you.

Hello everybody. This is my first blog post. I was undecided at first. What do I write about? Where do I begin? Then, last night, I came across this article by Michael Levin and Daniel Dennett in Aeon Magazine. It illustrates quite a few of the problems—both in science and about science—that I hope to cover in this blog. "Cognition all the way down?" That doesn't sound good... and, believe me, it isn't. But where to begin? This article is a difficult beast to tackle. It has no head or tail. Ironically, it also seems to lack purpose. What is it trying to tell us? That cells "think"? Maybe even molecules? How is it trying to make this argument? And what is it trying to achieve with it? Interdisciplinary dialogue? Popular science? A new biology? I think not. It does not explain anything, and it is not written in a way that the general public would understand. I do have a suspicion about what the article is really about. We'll come back to that at the end.

But before I start ripping into it, I should say that there are many things I actually like about the article. I got excited when I first saw the subtitle ("unthinking agents!"). I'm thinking and writing about agency and evolution myself at the moment, and I believe it's a very important and neglected topic. I also like the authors' concept of teleophobia, an irrational fear of teleological explanations of all kinds, which circulates widely, not only among biologists. I like their argument against an oversimplified black-and-white dualism that ascribes true cognition to humans only. I like their call for biologists to look beyond the molecular level. I like that they highlight the fact that cells are not just passive building blocks, but autonomous participants busy building bodies. I like all that. It's very much in the spirit of my own research and thinking. But then, everything derails. Spectacularly. Where should I start?

AGENCY ISN'T JUST FEEDBACK

The authors love to throw around difficult concepts without defining or explaining them. "Agency" is the central one, of course. From what I understand, they believe that agency is simply information processing with cybernetic feedback. But that won't do! A self-regulating homeostat may keep your house warm, but it does not qualify as an autonomous agent. Neither does a heat-seeking missile.
As Stuart Kauffman points out in his Investigations, autonomous systems "act on their own behalf." At the very least, agents generate causal effects that are not entirely determined by their surroundings. The homeostat or missile simply reacts to its environment according to externally imposed rules, while the agent generates its rules from within. Importantly, an agent does not require consciousness (or even a nervous system) to do this.

AGENCY IS NATURAL, BUT NOT MECHANISTIC

How agents generate their own rules is a complicated matter. I will discuss it in a lot more detail in future posts. But one thing is quite robustly established by now: agency requires a peculiar kind of organisation that characterises living systems—they exhibit what is called organisational closure. Alvaro Moreno and Matteo Mossio have written an excellent book about it. What's most important is that, in an organism, each core component is both producer and product of some other component in the system. Roughly, that's what organisational closure means. The details don't matter here. What does matter is that we're not sure you can capture such systems with purely mechanistic explanations. And that's crucial: organisms aren't machines. They are not computers. Not even like computers. Rosen's conjecture establishes just that. More on that later, too. For now, you must believe me that "mechanistic" explanations of organisms based on information-processing metaphors are not sufficient to account for organismic agency. Which brings us to the next problem.

EVOLVED COMPUTER METAPHORS

We've covered quite a lot of ground so far, but we haven't even arrived at the two main flaws of the article. The first of these is the central idea that organisms are some kind of evolved information-processing machines. They "exploit physical regularities to perform tasks" by having "long-range guided abilities," which evolved by natural selection. Quite fittingly, the authors call this advanced molecular magic "karma." Karma is a bitch. It kills you if you don't cooperate. And here we go: in one fell swoop, we have a theory of how multicellularity evolved. It's just a shifting of boundaries between agents (the ones that were never explained, mind you). Confused yet? This part of the article is so full of logical leaps and grandstanding vagueness that it's really hard to parse. To me, it makes no sense at all. But that does not matter, because the only point it drives at is to resuscitate a theory that Dennett worked on throughout the 1970s and 80s, and which he summarised in his 1987 book The Intentional Stance.

THE INTENTIONAL STANCE

The intentional stance is when you assume that some thing has agency, purpose, and intentions in order to explain its behaviour, although deep down you know it does not have these properties. It used to be big (and very important) at the time when cognitive science was emerging from behaviourist psychology; nowadays, it mostly survives in rational choice theory as applied in evolutionary biology. For critical treatments of this topic, please read Peter Godfrey-Smith's Darwinian Populations and Natural Selection, and Samir Okasha's Agents and Goals in Evolution. Bottom line: this is not a new topic at all, and it's very controversial. Does it make sense to invoke intentions to explain adaptive evolutionary strategies? Let's not get into that discussion here. Instead, I want to point out that the intentional stance does not take agency seriously at all!
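A brief aside before we move on: since organisational closure will come up again and again on this blog, here is a crude toy sketch in Python of the bare relational skeleton of the idea. It is my own illustration, with made-up component names; emphatically not something from Moreno and Mossio's book, and it deliberately ignores all the actual physics and chemistry. It only checks whether every component of a system is both producer and product of some other component within that system.

```python
# A toy sketch of organisational closure (my own crude illustration, not
# from Moreno & Mossio): treat components and "X produces Y" relations as
# a directed graph. In this caricature, a system counts as closed if every
# core component is both the product and the producer of at least one
# other component *within* the system.

def is_organisationally_closed(components, produces):
    """components: set of part names; produces: set of (producer, product) pairs."""
    internal = {(a, b) for (a, b) in produces
                if a in components and b in components}
    producers = {a for a, _ in internal}
    products = {b for _, b in internal}
    return all(c in producers and c in products for c in components)

# A cartoon cell: enzymes build the membrane, the membrane concentrates
# metabolites, metabolites feed the synthesis of enzymes. Every part is
# producer and product of some other part, so the loop closes on itself.
cell = {"enzymes", "membrane", "metabolites"}
cell_relations = {("enzymes", "membrane"),
                  ("membrane", "metabolites"),
                  ("metabolites", "enzymes")}
print(is_organisationally_closed(cell, cell_relations))  # True

# A thermostat's parts causally interact, but none of them produces or
# maintains any other: the system is built and repaired from outside.
thermostat = {"sensor", "switch", "heater"}
thermostat_relations = set()
print(is_organisationally_closed(thermostat, thermostat_relations))  # False
```

The only thing the sketch shows is the difference in relational shape: the cartoon cell's production relations loop back onto themselves, while the thermostat's parts, however cleverly they are wired together, are produced and repaired entirely from outside. Real organisational closure involves constraints, work cycles, and thermodynamics, not set membership. With that out of the way, back to the intentional stance.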
It is very ambiguous about whether it considers agency a real phenomenon, or whether it uses intentional explanations as a purely heuristic strategy that explicitly relies on anthropomorphism. Thus, after telling us that parts of organisms are agents (at least that's how I would interpret the utterly bizarre "thought experiment" about the self-assembling car), they now kind of tell us that it's all just a metaphor, this agency thing. What is it, then? This is just confusing motte-and-bailey tactics, in my opinion.

AGENCY IS NOT COGNITION!!!

So now that we're all confused about whether agency is real or not, we get the next intellectual card trick: agency is swapped for cognition. Just like that. That's why it's "cognition all the way down." You know, agency is nothing but information processing. Cognition is nothing but information processing. Clearly, they must be the same. There's just a difference in scale between different organisms. Unfortunately, this renders either the concept of agency or the concept of cognition irrelevant. Luckily, there is an excellent paper by Fermín Fulda that explains the difference (and also tells you why "bacterial cognition" is really not a thing). Cognition happens in nervous systems. It involves proper intentions, the kind you can even be conscious of. Agency, in the broad sense I use it here, does not require intentionality or consciousness. It simply means that the organism can select from a repertoire of alternative behaviours when faced with opportunities or obstacles in its perceived environment. As Kauffman says, even a bacterium can "act on its own behalf." It need not think at all.

PANPSYCHISM: NO THANK YOU

By claiming that cells (or even parts of cells) are cognitive agents, Levin and Dennett open the door for the panpsychist bunch to jump on their "argument" as evidence for their own dubious metaphysics. I don't get it. Dennett is not usually sympathetic to the views of these people. Neither am I. Like ontological vitalism, panpsychism explains nothing. It does not explain consciousness or how it evolved. Instead, it explains consciousness away, negating the whole mystery of its origins by declaring the question solved. That's not proper science. That's not proper philosophy. That's bullshit.

SO: WHAT'S THE PURPOSE?

What we're left with is a mess. I have no idea what the point of this article is. An argument for panpsychism? An argument for the intentional stance? Certainly not an argument to take agency seriously. The authors seem to have no interest in engaging with the topic in any depth. Instead, they take the opportunity to buzzword-boost some of their old and new ideas. A little PR certainly can't hurt. Knowing Michael Levin a little by now, I think that's what this article is about. Shameless self-promotion. Science in the age of selfies. A little signal, like that of the Tralfamadorians in The Sirens of Titan, that constantly broadcasts "I'm here, I'm here, I'm here." And that's bullshit too.

To end on a positive note: the article touches on a lot of interesting topics. Agency. Organisms. Evolution. Philosophical biology. Reductionism. And the politics of academic prestige. I'll have more to say about all of these. So thank you, Mike and Dan, for the inspiration, and for setting such a clear example of how I do not want to communicate my own writing and thinking to the world.