Metaphors All the Way Down
Last week, I discussed an article published by Mike Levin and Dan Dennett in Aeon. I really don't want to obsess about this rather mediocre piece of science writing, but it does bring up a number of points that warrant some additional discussion. The article makes several strong claims about agency and cognition in biology. It confused me with its lack of precision and its whole array of rather strange thought experiments and examples.
Since I published my earlier post, several Tweeps (especially a commenter called James of Seattle) have helped me understand the argument a little better. Much obliged!
This results in an interpretation of the article that veers radically away from panpsychism into a direction that's more consistent with Dennett's earlier work. Let me try to paraphrase:

1. Organisms (and cells) are evolved information-processing machines.
2. Their autonomous responses to the environment are preprogrammed by natural selection.
3. These preprogrammed autonomous responses are what the authors call "agency."
4. It is useful to describe this "agency" as if it involved real intentions, even though it does not (this is the intentional stance).
The argument Levin and Dennett present is not exactly new. Points (1) to (3) are almost identical to Ernst Mayr's line of reasoning from 1961, which popularised the notion of "teleonomy"—denoting evolved behaviour, driven by a genetic program, that seems teleological because it was adapted to its function by natural selection.
At least, there is a tangible argument here that I can criticise. And it's interesting. Not because of what it says (I still don't think that it talks about agency in any meaningful way), but more because of what it's based on—its episteme, to use Foucault's term.
To be more specific: this interpretation reveals that the authors' world view rests on a double layer of metaphors that massively oversimplify what's really going on. Let me explain.
ORGANISMS ≠ MACHINES
The first metaphorical layer on which the argument rests is the machine conception of the organism (MCO). It is the reason we use terms such as "mechanism," "machinery," "program," "design," "control," and so on, to describe cells and other living systems.
Levin and Dennett use a typical and very widespread modern version of the MCO, which is based on computer metaphors. This view considers cells to be information-processing machines, an assumption that doesn't even have to be justified anymore. As Richard Lewontin (one of my big intellectual heroes) points out: "[T]he ur-metaphor of all modern science, the machine model that we owe to Descartes, has ceased to be a metaphor and has become the unquestioned reality: Organisms are no longer like machines, they are machines."
Philosopher Dan Nicholson has written a beautiful and comprehensive critique of this view in an article published in 2013, called "Organisms ≠ Machines." (The only philosophical article I know with a not-equal sign in its title, but maybe there are others?) Dan points out that the machine metaphor seems justified by several parallels between machines and organisms. They are both bounded physical systems. They both act according to physical law. They both use and modify energy and transform part of it into work. They are both hierarchically structured and internally differentiated. They can both be described relationally in terms of causal interactions (as blueprints and networks, respectively). And they are both organised in a way that makes them operate towards the attainment of certain goals. Because of this, they can both be characterised in functional terms: knives are for cutting, lungs are for breathing. But, as Dan points out, the most obvious similarities are not always the most important ones!
In fact, there are three reasons why the machine metaphor breaks down, all of which are intimately connected to the topic of organismic agency—the real kind, which enables organisms to initiate causal effects on their environments from within their system boundaries (see my earlier post). Here they are:

1. Machines have extrinsic purposes, imposed on them by their makers and users; organisms have intrinsic purposes, rooted in their own self-maintenance.
2. The parts of a machine are fabricated independently and then assembled; the parts of an organism are inter-dependent, each produced and maintained by the others.
3. Machines are static structures that persist unchanged when switched off; organisms are open-ended, transient processes that persist only by continuously remaking themselves.
These are three pretty fundamental ways in which organisms are not at all like machines! And true agency depends on all of them, since it requires self-maintaining organisation, the kind that underlies intrinsic purpose, inter-dependence, and the open-ended, transient structure of the organism. To call preprogrammed evolved responses "agency" is to ignore these fundamental differences completely. Probably not a good thing if we really want to understand what life is (or what agency is, for that matter).
The second metaphorical layer on which Levin and Dennett's argument rests is the intentional stance. Something really weird happens here: basically, the authors have done their best to convince us that organisms are machines. But then they suddenly pretend they're not. That they act with intentionality. Confused yet? I certainly am.
The trick here is a subtle switch of meaning in the term "agency." While originally defined as a preprogrammed autonomous response of the cell (shaped by evolution), it now becomes something very much like true agency (the kind that involves action originating from within the system). This switch is justified by the argument that the cell is only acting as if it has intention. Intentionality is a useful metaphor to describe the machine-like but autonomous behaviour of the cell. It is a useful heuristic. In a way, that's ok. Even Dan Nicholson agrees that this heuristic can be productive when studying well-differentiated parts of an organism (such as cells). But is this sane, is it safe, more generally? I don't think so.
The intentional stance creates more problems than it solves. For example, it leads the authors to conflate agency and cognition. This is because the intentional stance makes it easy to overlook the main difference between the two: cognitive processes—such as decision-making—involve true intentionality. Arguments and scenarios are weighed against each other. Alternatives considered. Basic agency, in contrast, does not require intentionality at all. It simply means that an organism selects from a repertoire of alternative behaviours according to its circumstances. It initiates a given activity in pursuit of a goal. But it need not be aware of its intentions. As mentioned earlier, agency and cognition are related, but they are not the same. Bacteria have agency, but no cognition. This point is easily lost if we consider all biological behaviour to be intentional. The metaphor fails in this instance, but we're easily fooled into forgetting that it was a metaphor in the first place.
The exact opposite also happens, of course. If we take all intentionality to be metaphorical, we are bound to trivialise it in animals (like human beings) with a nervous system. The metaphorical overkill happening here is really not helping anyone grasp the full complexity of the problems we are facing. It explains phenomena such as agency and intentionality away, instead of taking them seriously. While the intentional stance is supposed to fix some of the oversimplifications of the machine metaphor, all it does is make them worse. The only thing this layering of metaphors achieves is obfuscation. We're fooling ourselves by hiding the fact that we've drastically oversimplified our view of life. Not good.
And why, you ask, would we do this? What do we gain through this kind of crass self-deception? Well, in the end, the whole convoluted argument is just there to save a purely mechanistic approach to cellular behaviour, while also justifying teleological explanations. We need this metaphorical overkill because we don't believe that we can be scientific without seeing the world as a mechanistic clockwork. This is a complicated topic. We'll revisit it very, very soon on this blog. I promise.
EMMENTAL CHEESE ONTOLOGY
In the meantime, let's see what kind of philosophical monster is being created here. The machine view and the intentional stance are both approaches to reality—they are ontologies in the philosophical sense of the term—that suit a particular way of seeing science, but don't really do justice to the complexity and depth of the phenomena we're trying to explain. In fact, they are so bad that they resemble layered slices of Emmental cheese: bland, full of holes, and with a slightly fermented odour.
Ultimately, what we're doing here is creating a fiction, a simulation of reality. Jean Baudrillard calls this hyperreality; British filmmaker Adam Curtis calls it HyperNormalisation. It's the kind of model of reality we know to be wrong, but we still accept it. Because it's useful in some ways. Because it's comforting and predictable. Because we see no alternative. Not just fake news, but a whole fake world.
It's not cognition, but metaphors all the way down.
Of course, the responsibility for this sorry state of affairs can't all be pinned on this one popular-science article. It's been going on since Descartes brought us the clockwork universe. Levin and Dennett's piece is just a beautiful example of the kind of mechanistic oversimplification modernity has generated. It demonstrates that this kind of science is reaching its limits. It may not have exhausted its usefulness quite yet, but it is certainly in the process of exhausting its intellectual potential. Postmodern criticisms—such as those by Foucault and Baudrillard, whom I've mentioned above—are hitting home. But they don't provide an alternative model for scientific knowledge, leaving us to drift in a sea of pomo-flavoured relativism. What we need is a new kind of science, resting on more adequate philosophical foundations, that answers to those criticisms. One of the main missions of this blog is to introduce you to such an alternative. A metamodern science for the 21st century.
The revolution is coming. Join it. Or stay with the mechanistic reactionaries. It's up to you.
Hello everybody. This is my first blog post. I was undecided at first. What do I write about? Where do I begin? Then, last night, I came across this article by Michael Levin and Daniel Dennett in Aeon Magazine. It illustrates quite a few of the problems—both in science and about science—that I hope to cover in this blog.
"Cognition all the way down?" That doesn't sound good... and, believe me, it isn't. But where to begin? This article is a difficult beast to tackle. It has no head or tail. Ironically it also seems to lack purpose. What is it trying to tell us? That cells "think"? Maybe even molecules? How is it trying to make this argument? And what is it trying to achieve with it? Interdisciplinary dialogue? Popular science? A new biology? I think not. It does not explain anything, and is not written in a way that the general public would understand. I do have a suspicion what the article is really about. We'll come back to that at the end.
But before I start ripping into it, I should say that there are many things I actually like about the article. I got excited when I first saw the subtitle ("unthinking agents!"). I'm thinking and writing about agency and evolution myself at the moment, and believe that it's a very important and neglected topic. I also like the authors' concept of teleophobia, an irrational fear of all kinds of teleological explanations that circulates widely, not only among biologists. I like their argument against an oversimplified black-and-white dualism that ascribes true cognition to humans only. I like their call for biologists to look beyond the molecular level. I like that they highlight the fact that cells are not just passive building blocks, but autonomous participants busy building bodies. I like all that. It's very much in the spirit of my own research and thinking.
But then, everything derails. Spectacularly. Where should I start?
AGENCY ISN'T JUST FEEDBACK
The authors love to throw around difficult concepts without defining or explaining them. "Agency" is the central one, of course. From what I understand, they believe that agency is simply information processing with cybernetic feedback. But that won't do! A self-regulating homeostat may keep your house warm, but does not qualify as an autonomous agent. Neither does a heat-seeking missile. As Stuart Kauffman points out in his Investigations, autonomous systems "act on their own behalf." At the very least, agents generate causal effects that are not entirely determined by their surroundings. The homeostat or missile simply reacts to its environment according to externally imposed rules, while the agent generates rules from within. Importantly, it does not require consciousness (or even a nervous system) to do this.
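To make the feedback point concrete, here is a toy thermostat sketch (the function names, setpoint, and temperature dynamics are my own illustration, not anything from the article). Notice that its entire behaviour is fixed from outside by a rule and a setpoint; it cannot generate or revise its own rules, which is exactly why it fails Kauffman's criterion of acting on its own behalf:

```python
# Toy homeostat: a thermostat keeping a room near a setpoint.
# Its rule and setpoint are imposed from outside; it cannot revise them.

def thermostat_step(temperature: float, setpoint: float = 20.0) -> str:
    """Externally imposed rule: heat when below the setpoint, idle otherwise."""
    return "heat" if temperature < setpoint else "idle"

def simulate(start_temp: float, steps: int) -> list[str]:
    """Run the feedback loop: heating raises the temperature, idling lets it drop."""
    temp, actions = start_temp, []
    for _ in range(steps):
        action = thermostat_step(temp)
        actions.append(action)
        temp += 1.0 if action == "heat" else -0.5
    return actions

# Every "decision" is fully determined by the fixed external rule.
print(simulate(18.0, 4))  # → ['heat', 'heat', 'idle', 'heat']
```

The point of the sketch is purely negative: however many feedback loops you wire together, the system only ever executes rules handed to it from outside, whereas an agent, in Kauffman's sense, is the source of its own rules.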
AGENCY IS NATURAL, BUT NOT MECHANISTIC
How agents generate their own rules is a complicated matter. I will discuss this in a lot more detail in future posts. But one thing is quite robustly established by now: agency requires a peculiar kind of organisation that characterises living systems—they exhibit what is called organisational closure. Alvaro Moreno and Matteo Mossio have written an excellent book about it. What's most important is that in an organism, each core component is both producer and product of some other component in the system. Roughly, that's what organisational closure means. The details don't matter here. What does matter is that we're not sure you can capture such systems with purely mechanistic explanations. And that's crucial: organisms aren't machines. They are not computers. Not even like computers. Rosen's conjecture establishes just that. More on that later too. For now, you must believe me that "mechanistic" explanations of organisms based on information-processing metaphors are not sufficient to account for organismic agency. Which brings us to the next problem.
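To illustrate the "producer and product" condition just mentioned, here is a cartoon check for organisational closure (a drastic simplification of Moreno and Mossio's notion; the component names are hypothetical and the biology is not meant to be accurate):

```python
# Cartoon version of organisational closure: every core component must
# appear both as a product of some component and as a producer of some component.

def organisationally_closed(network: dict[str, set[str]]) -> bool:
    """network maps each producer to the set of components it produces."""
    producers = set(network)
    products = set().union(*network.values()) if network else set()
    core = producers | products
    # Closed iff each core component both produces something and is produced by something.
    return core <= producers and core <= products

# Hypothetical toy cycle: each component is made by another component in the set.
cell = {
    "enzymes": {"membrane", "metabolites"},
    "metabolites": {"enzymes"},
    "membrane": {"metabolites"},
}

# A machine-like case: the factory makes the widget, but nothing makes the factory.
factory = {"factory": {"widget"}}

print(organisationally_closed(cell))     # → True
print(organisationally_closed(factory))  # → False
```

The contrast between the two networks is the whole point: in the closed toy "cell", production loops back on itself, while in the factory case the producer stands outside what it produces, which is the situation typical of machines.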
EVOLVED COMPUTER METAPHORS
We've covered quite a lot of ground so far, but haven't even arrived at the two main flaws of the article. The first of these is the central idea that organisms are some kind of evolved information-processing machines. They "exploit physical regularities to perform tasks" by having "long-range guided abilities," which evolved by natural selection. Quite fittingly, the authors call this advanced molecular magic "karma." Karma is a bitch. It kills you if you don't cooperate. And here we go: in one fell swoop, we have a theory of how multicellularity evolved. It's just a shifting of boundaries between agents (the ones that were never explained, mind you). Confused yet? This part of the article is so full of logical leaps and grandstanding vagueness that it's really hard to parse. To me, it makes no sense at all. But that does not matter. Because the only point it drives at is to resuscitate a theory that Dennett worked on throughout the 1970s and 80s, and which he summarised in his 1987 book The Intentional Stance.
THE INTENTIONAL STANCE
The intentional stance is the strategy of assuming that some thing has agency, purpose, and intentions in order to explain it, even though deep down you know it does not have these properties. It was big (and very important) at the time when cognitive science emerged from behaviourist psychology, but nowadays it mostly survives in the rational-choice-style reasoning used in evolutionary biology. For critical treatments of this topic, please read Peter Godfrey-Smith's Darwinian Populations and Natural Selection, and Samir Okasha's Agents and Goals in Evolution. Bottom line: this is not a new topic at all, and it's very controversial. Does it make sense to invoke intentions to explain adaptive evolutionary strategies? Let's not get into that discussion here. Instead, I want to point out that the intentional stance does not take agency seriously at all! It is very ambiguous about whether it considers agency a real phenomenon, or whether it uses intentional explanations as a purely heuristic strategy that explicitly relies on anthropomorphism. Thus, after telling us that parts of organisms are agents (at least that's how I would interpret the utterly bizarre "thought experiment" about the self-assembling car), the authors now kind of tell us that it's all just a metaphor, this agency thing. What is it, then? This is just confusing motte-and-bailey tactics, in my opinion.
AGENCY IS NOT COGNITION!!!
So now that we're all confused whether agency is real or not, we already get the next intellectual card trick: agency is swapped for cognition. Just like that. That's why it's "cognition all the way down." You know, agency is nothing but information processing. Cognition is nothing but information processing. Clearly they must be the same. There's just a difference in scale in different organisms. Unfortunately, this renders either the concept of agency or the concept of cognition irrelevant. Luckily, there is an excellent paper by Fermín Fulda that explains the difference (and also tells you why "bacterial cognition" is really not a thing). Cognition happens in nervous systems. It involves proper intentions, the kind you can even be conscious of. Agency, in the broad sense I use it here, does not require intentionality or consciousness. It simply means that the organism can select from a repertoire of alternative behaviours when faced with opportunities or obstacles in its perceived environment. As Kauffman says, even a bacterium can "act on its own behalf." It need not think at all.
PANPSYCHISM: NO THANK YOU
By claiming that cells (or even parts of cells) are cognitive agents, Levin and Dennett open the door for the panpsychist bunch to jump on their "argument" as evidence for their own dubious metaphysics. I don't get it. Dennett is not usually sympathetic to the views of these people. Neither am I. Like ontological vitalism, panpsychism explains nothing. It does not explain consciousness or how it evolved. Instead, it explains it away, negating the whole mystery of its origins by declaring the question solved. That's not proper science. That's not proper philosophy. That's bullshit.
SO: WHAT'S THE PURPOSE?
What we're left with is a mess. I have no idea what the point of this article is. An argument for panpsychism? An argument for the intentional stance? Certainly not an argument to take agency seriously. The authors seem to have no interest in engaging with the topic in any depth. Instead, they take the opportunity to buzzword-boost some of their old and new ideas. A little PR certainly can't harm. Knowing Michael Levin a little by now, I think that's what this article is about. Shameless self-promotion. Science in the age of selfies. A little signal, like that of the Tralfamadorians in The Sirens of Titan, constantly broadcasting "I'm here, I'm here, I'm here." And that's bullshit too.
To end on a positive note: the article touches on a lot of interesting topics. Agency. Organisms. Evolution. Philosophical biology. Reductionism. And the politics of academic prestige. I'll have more to say about all of these. So thank you, Mike and Dan, for the inspiration, and for setting such a clear example of how I do not want to communicate my own writing and thinking to the world.
Life beyond dogma!