Last week, I discussed an article published by Mike Levin and Dan Dennett in Aeon. I really don't want to obsess about this rather mediocre piece of science writing, but it does bring up a number of points that warrant some additional discussion. The article makes a number of strong claims about agency and cognition in biology. It confused me with its lack of precision and its whole array of rather strange thought experiments and examples. Since I published my earlier post, several Tweeps (especially a commenter called James of Seattle) have helped me understand the argument a little better. Much obliged! The result is an interpretation of the article that veers radically away from panpsychism, in a direction that's more consistent with Dennett's earlier work. Let me try to paraphrase:
The argument Levin and Dennett present is not exactly new. Points (1) to (3) are almost identical to Ernst Mayr's line of reasoning from 1961, which popularised the notion of "teleonomy": evolved behaviour, driven by a genetic program, that seems teleological because it was adapted to its function by natural selection. At least there is a tangible argument here that I can criticise. And it's interesting. Not because of what it says (I still don't think that it talks about agency in any meaningful way), but because of what it's based on: its episteme, to use Foucault's term. To be more specific: this interpretation reveals that the authors' world view rests on a double layer of metaphors that massively oversimplifies what's really going on. Let me explain.

ORGANISMS ≠ MACHINES

The first metaphorical layer on which the argument rests is the machine conception of the organism (MCO). It is the reason we use terms such as "mechanism," "machinery," "program," "design," "control," and so on, to describe cells and other living systems. Levin and Dennett use a typical and very widespread modern version of the MCO, one based on computer metaphors. This view considers cells to be information-processing machines, an assumption that doesn't even have to be justified anymore. As Richard Lewontin (one of my big intellectual heroes) points out: "[T]he ur-metaphor of all modern science, the machine model that we owe to Descartes, has ceased to be a metaphor and has become the unquestioned reality: Organisms are no longer like machines, they are machines."

Philosopher Dan Nicholson has written a beautiful and comprehensive critique of this view in an article published in 2013, called "Organisms ≠ Machines." (The only philosophical article I know with an inequality sign in its title, but maybe there are others?) Dan points out that the machine metaphor seems justified by several parallels between machines and organisms:

- They are both bounded physical systems.
- They both act according to physical law.
- They both use and modify energy, transforming part of it into work.
- They are both hierarchically structured and internally differentiated.
- They can both be described relationally in terms of causal interactions (as blueprints and networks, respectively).
- They are both organised in a way that makes them operate towards the attainment of certain goals. Because of this, they can both be characterised in functional terms: knives are for cutting, lungs are for breathing.

But, as Dan points out, the most obvious similarities are not always the most important ones! In fact, there are three reasons why the machine metaphor breaks down, all of which are intimately connected to the topic of organismic agency: the real kind, which enables organisms to initiate causal effects on their environments from within their system boundaries (see my earlier post). Here they are:
These are three pretty fundamental ways in which organisms are not at all like machines! And true agency depends on all of them, since it requires self-maintaining organisation: the kind that underlies intrinsic purpose, interdependence, and the open-ended, transient structure of the organism. To call preprogrammed evolved responses "agency" is to ignore these fundamental differences completely. Probably not a good thing if we really want to understand what life is (or what agency is, for that matter).

INTENTIONAL OVERKILL

The second metaphorical layer on which Levin and Dennett's argument rests is the intentional stance. Something really weird happens here: the authors have done their best to convince us that organisms are machines, but then they suddenly pretend they're not. That they act with intentionality. Confused yet? I certainly am.

The trick here is a subtle switch of meaning in the term "agency." While originally defined as a preprogrammed autonomous response of the cell (shaped by evolution), it now becomes something very much like true agency (the kind that involves action originating from within the system). This switch is justified by the argument that the cell is only acting as if it has intention. Intentionality, on this view, is a useful metaphor, a heuristic for describing the machine-like but autonomous behaviour of the cell.

In a way, that's OK. Even Dan Nicholson agrees that this heuristic can be productive when studying well-differentiated parts of an organism (such as cells). But is it sane, is it safe, more generally? I don't think so. The intentional stance creates more problems than it solves. For example, it leads the authors to conflate agency and cognition. This is because the intentional stance makes it easy to overlook the main difference between the two: cognitive processes, such as decision-making, involve true intentionality. Arguments and scenarios are weighed against each other. Alternatives are considered.
Basic agency, in contrast, does not require intentionality at all. It simply means that an organism selects from a repertoire of alternative behaviours according to its circumstances. It initiates a given activity in pursuit of a goal, but it need not be aware of its intentions. As mentioned earlier, agency and cognition are related, but they are not the same. Bacteria have agency, but no cognition. This point is easily lost if we consider all biological behaviour to be intentional. The metaphor fails in this instance, but we're easily fooled into forgetting that it was a metaphor in the first place.

The exact opposite also happens, of course. If we take all intentionality to be metaphorical, we are bound to trivialise it in animals with a nervous system (like human beings). The metaphorical overkill that is happening here is really not helping anyone grasp the full complexity of the problems we are facing. It explains phenomena such as agency and intentionality away, instead of taking them seriously. While the intentional stance is supposed to fix some of the oversimplifications of the machine metaphor, all it does is make them worse. The only thing this layering of metaphors achieves is obfuscation. We're fooling ourselves by hiding the fact that we've drastically oversimplified our view of life. Not good.

And why, you ask, would we do this? What do we gain through this kind of crass self-deception? Well, in the end, the whole convoluted argument is just there to save a purely mechanistic approach to cellular behaviour, while also justifying teleological explanations. We need this metaphorical overkill because we don't believe that we can be scientific without seeing the world as a mechanistic clockwork. This is a complicated topic. We'll revisit it very, very soon on this blog. I promise.

EMMENTAL CHEESE ONTOLOGY

In the meantime, let's see what kind of philosophical monster is being created here.
The machine view and the intentional stance are both approaches to reality (ontologies, in the philosophical sense of the term) that suit a particular way of doing science, but don't really do justice to the complexity and depth of the phenomena we're trying to explain. In fact, they are so bad that they resemble layered slices of Emmental cheese: bland, full of holes, and with a slightly fermented odour. Ultimately, what we're doing here is creating a fiction, a simulation of reality. Jean Baudrillard calls this hyperreality; British filmmaker Adam Curtis calls it HyperNormalisation. It's the kind of model of reality we know to be wrong, but we still accept it. Because it's useful in some ways. Because it's comforting and predictable. Because we see no alternative. Not just fake news, but a whole fake world.
It's not cognition, but metaphors all the way down. Of course, the responsibility for this sorry state of affairs can't all be pinned on this one popular-science article. It's been going on since Descartes brought us the clockwork universe. Levin and Dennett's piece is just a beautiful example of the kind of mechanistic oversimplification modernity has generated. It demonstrates that this kind of science is reaching its limits. It may not have exhausted its usefulness quite yet, but it is certainly in the process of exhausting its intellectual potential. Postmodern criticisms, such as those by Foucault and Baudrillard, whom I've mentioned above, are hitting home. But they don't provide an alternative model for scientific knowledge, leaving us to drift in a sea of pomo-flavoured relativism. What we need is a new kind of science, resting on more adequate philosophical foundations, that answers those criticisms. One of the main missions of this blog is to introduce you to such an alternative. A metamodern science for the 21st century. The revolution is coming. Join it. Or stay with the mechanistic reactionaries. It's up to you.
2 Comments
justin benedict nimmo
4/1/2024 16:29:29
I fear you have misunderstood Levin's work. As I understand it, he does not argue that we are machines - quite the contrary. Nor does he discount that cells have autopoietic intention; he merely refrains from elaborating on intention and/or consciousness pending further empirical proof.
31/5/2024 23:27:26
Great point, Yogi. While I agree somewhat with the above comment, your core point about the organization of life being missing is spot on. On the other hand, in terms of "closure" (organizational, operational, constraint, autopoietic), Levin's work demands that we add "informational" closure, with bio-electricity a deeper, more universal layer of information processing across self-similar levels in Kantian Wholes. Once information is in the story, you get to how sentient creatures interact with the causal flows of nature via simple binary sensory cues: as I have said, the "fundamental semantic information bit".
Johannes Jäger
Life beyond dogma!