This is the English translation of an article that was originally published in German as part of the annual essay collection of Laborjournal (publication date Jul 7, 2020).
Science finds itself exposed to an increasingly anti-intellectual and post-factual social climate. Few people realise, however, that the foundations of academic research are also threatened from within, by an unhealthy cult of productivity and spreading career-oriented self-censorship. Here I present a quick diagnosis with a few preliminary suggestions on how to tackle these problems.
In Raphael's "School of Athens" (above) we see the ideal of the ancient Academy: philosophers of various persuasions think and argue passionately but rationally about the deep and existential problems of our world. With Hypatia, there is even a woman present at this boys' club (center left). These thinkers are protected by an impressive vault from the trivialities of the outside world, while the blue sky in the background opens up a space for daring flights of fancy. The establishment of modern universities — beginning in the early 19th century in Berlin — was very much inspired by this lofty vision.
THE RESEARCH FACTORY
Unfortunately, we couldn't be further from this ideal today. Modern academic research resembles an automated factory more than the illustrious discussion circle depicted by Raphael. Over the past few decades, science has been trimmed for efficiency according to the principles of the free-market economy. This is not only happening in the natural sciences, by the way, but also increasingly in the social sciences and the humanities. The more money the taxpayer invests in academia, the higher the expectation of rapid returns. The outcomes of scientific projects should have social impact and provide practical solutions to concrete problems. Even evolutionary theorists must fill out the corresponding section in their grant applications. Science is seen as a "deus ex machina" for solving our societal and technological problems. Just like we go to the doctor to get instant pain relief, we expect science to provide instant solutions to complex problems, or at the very least, a steady stream of publications, which are supposed to eventually lead to such solutions. The more money goes into the system, the more applied wisdom is expected to flow from the other end of the research pipeline.
Or so the story goes. Unfortunately, basic research doesn't work that way at all. And, regrettably, applied science will get stuck quickly if we no longer do any real basic science. As Louis Pasteur once said: there is no applied research, only research and its practical applications. There are no shortcuts to innovation. Just think about the history of the laser, theoretically predicted by Albert Einstein in 1917. The first functional ruby laser was constructed in 1960, and mass market applications of laser technology only began in the 1980s. A similar story can be told for Paul Dirac's 1928 prediction of the positron, which was confirmed experimentally in 1932. The first PET scanner came to market in the 1970s. Or let's take PCR, of Covid-19 test fame. The polymerase chain reaction goes back to the serendipitous discovery of a high-temperature polymerase from a thermophilic bacterium first described by microbiologists Thomas Brock and Hudson Freeze (no joke!) in the hot springs of Yellowstone Park in the 1960s. PCR wasn't widely used in the laboratory until the 1990s.
A study from 2013 by William H. Press — then a science advisor to Barack Obama — presents studies by economist and Nobel laureate Robert Solow, which look at the positive feedback between innovation, technology, and the wealth of various nations. Solow draws two key conclusions from his work. First, technological innovation is responsible for about 85% of U.S. economic growth over the past hundred years or so. Second, the richest countries today are those that were the first to establish a strong tradition of basic research.
Press argues, building on Solow's insights, that basic research must be generously funded by the state. One reason is that it is impossible to predict which fundamental discoveries will lead to technological innovations. Second, the path to application can take decades, as the examples above illustrate. Finally, breakthroughs in basic science often have a low appropriability, that is, money gained from their application rarely flows back to the original investor. Think of Asian CD and DVD players equipped with lasers based on U.S. research and development, which yielded massive profits while outcompeting more expensive (and inferior) products of American make. This is the economic argument for why state-funded basic research is more important than ever.
EFFICIENCY OR DIVERSITY?
But here exactly lies the problem: basic research simply does not work according to the rules of the free market. Nevertheless, we have an academic research system that is increasingly dominated by these rules. Mathematicians Donald and Stuart Geman note that the focus of fundamental breakthroughs in science has shifted during the 20th century from conceptual to technological advances: from the radical revolution in our worldview brought about by quantum and relativity theory to the sequencing of the human genome which, in the end, yielded disappointingly few medical advances or new insights into human nature. A whole variety of complex historical reasons are responsible for this shift. One of these is undoubtedly the massive transformation in the incentive structure for researchers. We have established a monoculture. A monoculture of efficiency and accountability, which leads to an impoverished intellectual environment that is no longer able to nourish innovative research ideas, even though there is more money available for science than ever before. Isn't it ironic that this money would be more efficiently invested if there was less pressure for efficiency in research?
Researchers who need to be constantly productive to progress in their careers must constantly appear busy. This is absolutely fatal, particularly for theoretically and philosophically oriented projects. First of all, good theory requires creativity, which needs time, inspiration, and a certain kind of productive leisure. Second, the most important and radical intellectual breakthroughs are far ahead of their time, without immediately obvious practical application, and generally associated with a high level of risk. Those who tackle complex problems will fail more often. Some breakthroughs are only recognised in hindsight, long after they have been made. Few researchers today can muster the time and courage to devote themselves to projects with such uncertain outcomes. The time of the romantics is over; now the pragmatists are in charge. Those who want to be successful in current-day academia — especially at an early stage of their careers — must focus on tractable problems in established fields, the low-hanging fruit. This optimises personal productivity and chances of success, but in turn diminishes the diversity and originality of thinking in academic research overall, and wastes the best years of too many intrepid young explorers. Unfortunately, originality cannot be measured, while productivity can. Originality often leads to noteworthy conceptual innovations, but productivity on its own rarely does.
Goodhart's Law — named after a British economist — says that a measure of success ceases to be useful once it has become an incentive. This is happening in almost all areas of society at the moment, as pointedly described by U.S. historian Jerry Z. Muller in his excellent book "The Tyranny of Metrics." In science, Goodhart's Law leads to increased self-citations, a flood of ever shorter publications (approaching what is called the least publishable unit) with an ever increasing number of co-authors, as well as more and more academic clickbait — sensational titles in glossy journals — that deliver less and less substance. Put succinctly: successful researchers are more concerned about their public image and their professional networks today than ever before, a tendency which is hardly conducive to depth of insight.
What follows from all this is widespread career-oriented self-censorship among academics. If you want to be successful in science, you need to adapt to the system. Nowhere (with the potential exception of the arts) is this more harmful than in basic research. It leads to shallowness, it fosters narcissism and opportunism, and it produces more appearance than substance, problems which are gravely exacerbated by the constant acceleration of academic practice. Nobody has time anymore to follow complex trains of thought. An argument either fits your thinking habits, what you see as the zeitgeist of your field, or it is preemptively trashed upon review. In the U.S., for example, an empirical study has found that grant reviewers favour biomedical applications that continue the work of previously successful projects. More of the same, instead of exploration where it is most promising. And so the monoculture becomes more monotonous yet.
FROM AN INDUSTRIAL TO AN ECOLOGICAL MODEL OF RESEARCH PRACTICE
How can we escape this vicious circle? It is not going to be easy. First, those who profit most from the current system are extremely complacent and powerful. They can show, through their quantitative metrics, that academic science is more productive than ever. The loss of originality (and the suffering of the victims of this system) is hard to measure, and therefore no major issue. What cannot be measured does not exist. In addition, the current flurry of technological innovations (mostly in the area of information technology) gives us the impression that we have the world and our lives more under control than ever. All of this supports the impression that science is fully performing its societal function.
But appearances can be deceptive. Indeed, we do not need more facts to tackle the existential problems of humanity. What we do need is deeper insight, more wisdom, and just like originality, these cannot be measured. There are cracks appearing in the facade of modern science, which suggest we must change our attitude. I've already mentioned the Human Genome Project, which cost a lot of money, but did not deliver the expected profusion of cures (or any deeper insight into human nature). Even less convincing is the performance of the Human Brain Project so far, which promised us a simulation of the entire human prefrontal cortex, for a mere billion euros. Not much happened, but this is not surprising, because it was never clear what kind of insights we would gain from such a simulation anyway. These are signs that the technology-enamoured and -fixated system we've created is about to hit a wall.
Since the main problem of academic science is an increasing intellectual monoculture, it is tempting to use ecology as a model and inspiration for a potential reform. As mentioned at the outset, the current model of academic research is steeped in free-market ideology. It is an industrial system. We want control over the world we live in. We want measurable and efficient production. We foster this through competition. As in the examples of industrial agriculture and economic markets, the shadow side of this cult of productivity is risk-aversion and the potential of a ruinous race to the bottom.
What we need is an ecological reform of academic research! Pretty literally. We need to shift from a paradigm of control to a paradigm of participation. Young researchers should be taken seriously, properly supported, and encouraged to take risk and responsibility. What we want is not maximal production, but maximal depth, sustainability, and reproducibility of scientific results. We want societal relevance based on deep insight rather than technological miracle cures. We need an open and collaborative research system that values the diversity of perspectives and approaches in science. We need a focus on innovation. In brief, we need more lasting quality rather than short-term quantity. Our scientific problems, therefore, mirror those in society at large pretty exactly.
STEPS TOWARDS AN ECOLOGICAL RESEARCH ECOSYSTEM
How is this supposed to work in practice? I assume that I am mostly addressing practicing researchers here. This is why I focus on propositions that can be implemented without major changes in national or international research policy. Let me classify them into four general topics:
Last week, I discussed an article published by Mike Levin and Dan Dennett in Aeon. I really don't want to obsess about this rather mediocre piece of science writing, but it does bring up a number of points that warrant some additional discussion. The article makes a number of strong claims about agency and cognition in biology. It confused me with its lack of precision and a whole array of rather strange thought experiments and examples.
Since I published my earlier post, several Tweeps (especially a commenter called James of Seattle) have helped me understand the argument a little better. Much obliged!
This results in an interpretation of the article that veers radically away from panpsychism into a direction that's more consistent with Dennett's earlier work. Let me try to paraphrase:
The argument Levin and Dennett present is not exactly new. Points (1) to (3) are almost identical to Ernst Mayr's line of reasoning from 1961, which popularised the notion of "teleonomy"—denoting evolved behaviour driven by a genetic program, that seems teleological because it was adapted to its function by natural selection.
At least, there is a tangible argument here that I can criticise. And it's interesting. Not because of what it says (I still don't think that it talks about agency in any meaningful way), but more because of what it's based on—its episteme, to use Foucault's term.
To be more specific: this interpretation reveals that the authors' world view rests on a double layer of metaphors that massively oversimplify what's really going on. Let me explain.
ORGANISMS ≠ MACHINES
The first metaphorical layer on which the argument rests is the machine conception of the organism (MCO). It is the reason we use terms such as "mechanism," "machinery," "program," "design," "control," and so on, to describe cells and other living systems.
Levin and Dennett use a typical and very widespread modern version of the MCO, which is based on computer metaphors. This view considers cells to be information-processing machines, an assumption that doesn't even have to be justified anymore. As Richard Lewontin (one of my big intellectual heroes) points out: "[T]he ur-metaphor of all modern science, the machine model that we owe to Descartes, has ceased to be a metaphor and has become the unquestioned reality: Organisms are no longer like machines, they are machines."
Philosopher Dan Nicholson has written a beautiful and comprehensive critique of this view in an article published in 2013, which is called "Organisms ≠ Machines." (The only philosophical article I know with an inequality sign in its title, but maybe there are others?) Dan points out that the machine metaphor seems justified by several parallels between machines and organisms. They are both bounded physical systems. They both act according to physical law. They both use and modify energy and transform part of it into work. They are both hierarchically structured and internally differentiated. They can both be described relationally in terms of causal interactions (as blueprints and networks, respectively). And they both are organised in a way that makes them operate towards the attainment of certain goals. Because of this, they can both be characterised in functional terms: knives are for cutting, lungs are for breathing. But, as Dan points out, the most obvious similarities are not always the most important ones!
In fact, there are three reasons why the machine metaphor breaks down, all of which are intimately connected to the topic of organismic agency—the real kind, which enables organisms to initiate causal effects on their environments from within their system boundaries (see my earlier post). Here they are:
These are three pretty fundamental ways in which organisms are not at all like machines! And true agency depends on all of them, since it requires self-maintaining organisation, the kind that underlies intrinsic purpose, inter-dependence, and the open-ended, transient structure of the organism. To call preprogrammed evolved responses "agency" is to ignore these fundamental differences completely. Probably not a good thing if we really want to understand what life is (or what agency is, for that matter).
The second metaphorical layer on which Levin and Dennett's argument rests is the intentional stance. Something really weird happens here: basically, the authors have done their best to convince us that organisms are machines. But then they suddenly pretend they're not. That they act with intentionality. Confused yet? I certainly am.
The trick here is a subtle switch of meaning in the term "agency." While originally defined as a preprogrammed autonomous response of the cell (shaped by evolution), it now becomes something very much like true agency (the kind that involves action originating from within the system). This switch is justified by the argument that the cell is only acting as if it has intention. Intentionality is a useful metaphor to describe the machine-like but autonomous behaviour of the cell. It is a useful heuristic. In a way, that's ok. Even Dan Nicholson agrees that this heuristic can be productive when studying well-differentiated parts of an organism (such as cells). But is this sane, is it safe, more generally? I don't think so.
The intentional stance creates more problems than it solves. For example, it leads the authors to conflate agency and cognition. This is because the intentional stance makes it easy to overlook the main difference between the two: cognitive processes—such as decision-making—involve true intentionality. Arguments and scenarios are weighed against each other. Alternatives considered. Basic agency, in contrast, does not require intentionality at all. It simply means that an organism selects from a repertoire of alternative behaviours according to its circumstances. It initiates a given activity in pursuit of a goal. But it need not be aware of its intentions. As mentioned earlier, agency and cognition are related, but they are not the same. Bacteria have agency, but no cognition. This point is easily lost if we consider all biological behaviour to be intentional. The metaphor fails in this instance, but we're easily fooled into forgetting that it was a metaphor in the first place.
The exact opposite also happens, of course. If we take all intentionality to be metaphorical, we are bound to trivialise it in animals (like human beings) with a nervous system. The metaphorical overkill that is happening here is really not helping anyone grasp the full complexity of the problems we are facing. It explains phenomena such as agency and intentionality away, instead of taking them seriously. While the intentional stance is supposed to fix some of the oversimplifications of the machine metaphor, all it does is make them worse. The only thing this layering of metaphors achieves is obfuscation. We're fooling ourselves by hiding the fact that we've drastically oversimplified our view of life. Not good.
And why, you ask, would we do this? What do we gain through this kind of crass self-deception? Well, in the end, the whole convoluted argument is just there to save a purely mechanistic approach to cellular behaviour, while also justifying teleological explanations. We need this metaphorical overkill because we don't believe that we can be scientific without seeing the world as a mechanistic clockwork. This is a complicated topic. We'll revisit it very, very soon on this blog. I promise.
EMMENTAL CHEESE ONTOLOGY
In the meantime, let's see what kind of philosophical monster is being created here. The machine view and the intentional stance are both approaches to reality—they are ontologies in the philosophical sense of the term—that suit a particular way of seeing science, but don't really do justice to the complexity and depth of the phenomena we're trying to explain. In fact, they are so bad that they resemble layered slices of Emmental cheese: bland, full of holes, and with a slightly fermented odour.
Ultimately, what we're doing here is creating a fiction, a simulation of reality. Jean Baudrillard calls this hyperreality, British filmmaker Adam Curtis calls it HyperNormalisation. It's the kind of model of reality we know to be wrong, but we still accept it. Because it's useful in some ways. Because it's comforting and predictable. Because we see no alternative. Not just fake news, but a whole fake world.
It's not cognition, but metaphors all the way down.
Of course, the responsibility for this sorry state of affairs can't all be pinned on this one popular-science article. It's been going on since Descartes brought us the clockwork universe. Levin and Dennett's piece is just a beautiful example of the kind of mechanistic oversimplification modernity has generated. It demonstrates that this kind of science is reaching its limits. It may not have exhausted its usefulness quite yet, but it is certainly in the process of exhausting its intellectual potential. Postmodern criticisms—such as those by Foucault and Baudrillard, whom I've mentioned above—are hitting home. But they don't provide an alternative model for scientific knowledge, leaving us to drift in a sea of pomo-flavoured relativism. What we need is a new kind of science, resting on more adequate philosophical foundations, that answers to those criticisms. One of the main missions of this blog is to introduce you to such an alternative. A metamodern science for the 21st century.
The revolution is coming. Join it. Or stay with the mechanistic reactionaries. It's up to you.
Hello everybody. This is my first blog post. I was undecided at first. What do I write about? Where do I begin? Then, last night, I came across this article by Michael Levin and Daniel Dennett in Aeon Magazine. It illustrates quite a few of the problems—both in science and about science—that I hope to cover in this blog.
"Cognition all the way down?" That doesn't sound good... and, believe me, it isn't. But where to begin? This article is a difficult beast to tackle. I can make neither head nor tail of it. Ironically, it also seems to lack purpose. What is it trying to tell us? That cells "think"? Maybe even molecules? How is it trying to make this argument? And what is it trying to achieve with it? Interdisciplinary dialogue? Popular science? A new biology? I think not. It does not explain anything, and is not written in a way that the general public would understand. I do have a suspicion what the article is really about. We'll come back to that at the end.
But before I start ripping into it, I should say that there are many things I actually like about the article. I got excited when I first saw the subtitle ("unthinking agents!"). I'm thinking and writing about agency and evolution myself at the moment, and believe that it's a very important and neglected topic. I also like the authors' concept of teleophobia, an irrational fear of all kinds of teleological explanations that circulates widely, not only among biologists. I like their argument against an oversimplified black-and-white dualism that ascribes true cognition to humans only. I like their call for biologists to look beyond the molecular level. I like that they highlight the fact that cells are not just passive building blocks, but autonomous participants busy building bodies. I like all that. It's very much in the spirit of my own research and thinking.
But then, everything derails. Spectacularly. Where should I start?
AGENCY ISN'T JUST FEEDBACK
The authors love to throw around difficult concepts without defining or explaining them. "Agency" is the central one, of course. From what I understand, they believe that agency is simply information processing with cybernetic feedback. But that won't do! A self-regulating homeostat may keep your house warm, but it does not qualify as an autonomous agent. Neither does a heat-seeking missile. As Stuart Kauffman points out in his Investigations, autonomous systems "act on their own behalf." At the very least, agents generate causal effects that are not entirely determined by their surroundings. The homeostat or missile simply reacts to its environment according to externally imposed rules, while the agent generates rules from within. Importantly, it does not require consciousness (or even a nervous system) to do this.
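To make the contrast concrete, here is a minimal sketch in Python of a homeostat as a pure feedback controller (the names `homeostat_step` and `run` are my own illustrative inventions, not from the article or any library). Its entire behaviour is fixed by an externally imposed setpoint and update rule; it merely reacts, which is exactly why feedback alone falls short of agency in the sense discussed here.

```python
# Minimal sketch of a self-regulating homeostat (illustrative names):
# a proportional feedback controller whose "goal" (the setpoint) and
# "behaviour" (the update rule) are both imposed from outside.

def homeostat_step(temperature: float, setpoint: float = 20.0,
                   gain: float = 0.5) -> float:
    # The controller only reacts: it nudges the temperature toward the
    # externally imposed setpoint. It has no repertoire of alternative
    # behaviours to choose from, and it generates no rules from within.
    error = setpoint - temperature
    return temperature + gain * error

def run(temperature: float, steps: int = 20) -> float:
    # Iterate the feedback loop; with gain 0.5, the deviation from the
    # setpoint halves at every step, so the system settles wherever it
    # was told to settle, regardless of where it starts.
    for _ in range(steps):
        temperature = homeostat_step(temperature)
    return temperature
```

Starting at 10 degrees, twenty steps bring the system to within a hundredth of a degree of the setpoint. Everything interesting about this loop (the goal, the gain, the very fact that it regulates temperature) was decided by its designer, which is the point: such a system reacts to externally imposed rules rather than acting on its own behalf.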
AGENCY IS NATURAL, BUT NOT MECHANISTIC
How agents generate their own rules is a complicated matter. I will discuss this in a lot more detail in future posts. But one thing is quite robustly established by now: agency requires a peculiar kind of organisation that characterises living systems—they exhibit what is called organisational closure. Alvaro Moreno and Matteo Mossio have written an excellent book about it. What's most important is that in an organism, each core component is both producer and product of some other component in the system. Roughly, that's what organisational closure means. The details don't matter here. What does matter is that we're not sure you can capture such systems with purely mechanistic explanations. And that's crucial: organisms aren't machines. They are not computers. Not even like computers. Rosen's conjecture establishes just that. More on that later too. For now, you must believe me that "mechanistic" explanations of organisms based on information-processing metaphors are not sufficient to account for organismic agency. Which brings us to the next problem.
EVOLVED COMPUTER METAPHORS
We've covered quite a lot of ground so far, but haven't even arrived at the two main flaws of the article. The first of these is the central idea that organisms are some kind of evolved information-processing machines. They "exploit physical regularities to perform tasks" by having "long-range guided abilities," which evolved by natural selection. Quite fittingly, the authors call this advanced molecular magic "karma." Karma is a bitch. It kills you if you don't cooperate. And here we go: in one fell swoop, we have a theory of how multicellularity evolved. It's just a shifting of boundaries between agents (the ones that were never explained, mind you). Confused yet? This part of the article is so full of logical leaps and grandstanding vagueness that it's really hard to parse. To me, it makes no sense at all. But that does not matter. Because the only point it drives at is to resuscitate a theory that Dennett worked on throughout the 1970s and 80s, and which he summarised in his 1987 book The Intentional Stance.
THE INTENTIONAL STANCE
The intentional stance is when you assume that some thing has agency, purpose, and intentions in order to explain it, although deep down you know it does not have these properties. It used to be big (and very important) at the time when cognitive science emerged from behaviourist psychology, but nowadays it mostly survives in rational choice theory as applied in evolutionary biology. For critical treatments of this topic, please read Peter Godfrey-Smith's Darwinian Populations and Natural Selection, and Samir Okasha's Agents and Goals in Evolution. Bottom line: this is not a new topic at all, and it's very controversial. Does it make sense to invoke intentions to explain adaptive evolutionary strategies? Let's not get into that discussion here. Instead, I want to point out that the intentional stance does not take agency seriously at all! It is very ambiguous about whether it considers agency a real phenomenon, or whether it uses intentional explanations as a purely heuristic strategy that explicitly relies on anthropomorphisms. Thus, after telling us that parts of organisms are agents (at least that's how I would interpret the utterly bizarre "thought experiment" about the self-assembling car), the authors now kind of tell us that it's all just a metaphor, this agency thing. What is it, then? This is just confusing motte-and-bailey tactics, in my opinion.
AGENCY IS NOT COGNITION!!!
So now that we're all confused whether agency is real or not, we already get the next intellectual card trick: agency is swapped for cognition. Just like that. That's why it's "cognition all the way down." You know, agency is nothing but information processing. Cognition is nothing but information processing. Clearly they must be the same. There's just a difference in scale in different organisms. Unfortunately, this renders either the concept of agency or the concept of cognition irrelevant. Luckily, there is an excellent paper by Fermín Fulda that explains the difference (and also tells you why "bacterial cognition" is really not a thing). Cognition happens in nervous systems. It involves proper intentions, the kind you can even be conscious of. Agency, in the broad sense I use it here, does not require intentionality or consciousness. It simply means that the organism can select from a repertoire of alternative behaviours when faced with opportunities or obstacles in its perceived environment. As Kauffman says, even a bacterium can "act on its own behalf." It need not think at all.
PANPSYCHISM: NO THANK YOU
By claiming that cells (or even parts of cells) are cognitive agents, Levin and Dennett open the door for the panpsychist bunch to jump on their "argument" as evidence for their own dubious metaphysics. I don't get it. Dennett is not usually sympathetic to the views of these people. Neither am I. Like ontological vitalism, panpsychism explains nothing. It does not explain consciousness or how it evolved. Instead, it explains it away, negating the whole mystery of its origins by declaring the question solved. That's not proper science. That's not proper philosophy. That's bullshit.
SO: WHAT'S THE PURPOSE?
What we're left with is a mess. I have no idea what the point of this article is. An argument for panpsychism? An argument for the intentional stance? Certainly not an argument to take agency seriously. The authors seem to have no interest in engaging with the topic in any depth. Instead, they take the opportunity to buzzword-boost some of their old and new ideas. A little PR certainly can't harm. Knowing Michael Levin a little by now, I think that's what this article is about. Shameless self-promotion. Science in the age of selfies. A little signal, like that of the Tralfamadorians in The Sirens of Titan that constantly broadcasts "I'm here, I'm here, I'm here." And that's bullshit too.
To end on a positive note: the article touches on a lot of interesting topics. Agency. Organisms. Evolution. Philosophical biology. Reductionism. And the politics of academic prestige. I'll have more to say about all of these. So thank you, Mike and Dan, for the inspiration, and for setting such a clear example of how I do not want to communicate my own writing and thinking to the world.