I don’t know what is more surprising: the fact that, in just a month, a short 497-word letter (IIT-Concerned et al., 2023), read almost 60,000 times and downloaded more than 11,000 times, has generated such a backlash in the consciousness world, or the fact that I was invited to sign it. Because I really was not expecting to be asked to sign it, to be honest. After all, unlike most—if not all—of my fellow co-signers, I don’t work on consciousness. I used to, though. I was in college in the early 2000s, diligently reading and studying philosophy of mind, so I obviously fell in love with the problem of consciousness. I devoured the “classics”—Churchland, Lycan, Dennett, Chalmers, Hurley—but also some less “mainstream” views, including Zoltan Torey, Rodolfo Llinás and (gulp) Roger Penrose, among many others. Then I had the great fortune of studying under Dennett, with whom I talked about consciousness often. In fact, I wrote my writing sample for grad school on consciousness: a critical evaluation of Searle’s “biological dualism”, if I recall correctly. By the time I started grad school I was still so interested in consciousness that my first lab rotation in the psychology department at UNC was in Joe Hopfinger’s attention lab, as my intention at the time was to work on attention and its relation to consciousness.
In grad school, however, my other passion—memory—took over and my interest in attention and consciousness declined. In retrospect, this switch of interests was not unprincipled. As I progressed in grad school I became sort of an industrial cognitive scientist, fascinated by the kind of tightly controlled, rigorous, and self-contained experimental designs of good old-fashioned memory researchers—think Slamecka or even Tulving in the 60s and 70s. The world of perfectly controlled (or so I thought then) independent variables, and straightforward behavioral measures such as reaction times and hit and false-alarm rates, became my world. At the same time—we are talking 2005-2006—I was getting frustrated with the many terminological disagreements among researchers exploring whether attention was necessary or sufficient for consciousness. I read a bazillion papers on this issue, and I couldn’t help but think that, often, authors were just talking past one another because they were using the terms “attention” and “consciousness”, and even “necessary” and “sufficient”, differently. One would think that operationalizing a term would help to prevent such semantic disagreements. But my sense is that even though many researchers said that they were operationalizing their terms—typically with such locutions as “in this paper, we operationalize ‘attention’ as so-and-so”—the truth is that they often didn’t, because their alleged operationalizations weren’t tied to a measure. Or, when they were, the measure was rather idiosyncratic, dependent on the exact paradigm employed, and often hard to render commensurable with other measures in the offing.
I guess this is somewhat similar to the situation Bridgman (1927)—to whom we owe the notion of “operationalization”—experienced in physics. Frustrated by the terminological disagreements among theoretical physicists on how to understand the meaning of theoretical terms, he advocated for anchoring them in specific measurement operations. There is plenty wrong with Bridgman’s operationalism (e.g., Gillies, 1972), no doubt, but his concerns were still valid. Likewise, I was starting to feel that there was no clarity as to whether a stimulus had or had not been attended, how long attention lasted, whether reportability was a sufficient criterion for conscious awareness, and all that good stuff. The world of memory, with its simple word, item, and category learning paradigms, and such basic terms as ‘old’ and ‘new’ and ‘hits’ and ‘misses’, gave me some sort of methodological solace.
So, to go back to the story, I was very surprised when Alan and Hakwan asked me to sign the letter. I haven’t published on consciousness in over 10 years (my last paper on the topic was De Brigard, 2012), I have never been to the ASSC meeting, and I also never went to Tucson, even though I always wanted to—but mainly because I was a young grad student and I heard it was basically a mini-Burning Man for consciousness nerds. Yet the truth is that even today I still have an interest in consciousness, and I love reading books and articles that offer the latest “scientific theory of consciousness” or “of subjective experience”. (Recently, for instance, I read and enjoyed Godfrey-Smith’s “Other Minds” and “Metazoa” and Graziano’s “Rethinking Consciousness”.) As a result, I definitely feel that now I am an outsider among consciousness researchers; we don’t hang out together anymore, as it were, but we are still friends on Facebook. So perhaps it is not that absurd that they asked me to sign the letter—or so I told myself.
But should I have? More importantly, am I qualified to sign this letter? At the time, I knew just a little about the adversarial collaboration, the preprint of the results, the media coverage, and even the events that transpired at the 25th anniversary of the ASSC, in New York. What the letter sought to achieve—or what I saw the letter as wanting to accomplish—was to register an “expression of concern”, something that is not terribly unusual in the sciences. Expressions of concern are sometimes issued when an article is published in a high-profile venue and other researchers, who are experts in the field, note something worrisome about the paper, perhaps not caught during the peer-review process—for instance, about the way in which the data were handled or analyzed, or about the way in which a fundamental aspect of the research is characterized. As a result, the purpose of the letter seemed pretty straightforward to me: it was an expression of concern by experts in the field who wanted to register that a view that is presented as a leading scientific theory is, in fact, not so. (I must confess that, at the time, I thought the manuscript with the results of the adversarial collaboration (Cogitate Consortium et al., bioRxiv) had already been accepted for publication, rather than still being under review. That wouldn’t have changed my decision to sign, though; I just think it is a bit unusual to issue expressions of concern for unpublished manuscripts. But in this era of Rxivs, open science, pre-registrations, contributions in proceedings that are published at breakneck speed, and the infectious dissemination of manuscripts through social media, perhaps it isn’t that absurd.)
Anyways, as with many drafts, the letter was posted on the archive. In the next couple of days, my email, Facebook Messenger, and even LinkedIn flooded with notes, mostly in support but some with quite explicit hate. Twitter—or whatever is the name of its current devalued iteration—was the worst. What surprised me, though, is that most of the disagreement was voiced against the part of the letter that I found least controversial: the idea of treating IIT as pseudoscience. That IIT is a pseudoscientific theory is, I think, the least controversial part of the letter. Other aspects of the letter are perhaps more controversial. Maybe the reaction of the media was not as uniform as the letter depicts. Also, using the fact that IIT endorses panpsychism as an argument against it is rather weak. Accusing a scientific theory of being unscientific because it has unpalatable metaphysical consequences is not a good argument: plenty of bona fide scientific theories have commitments that were likely seen as metaphysically unpalatable when they were proposed—think of how unpalatable it must have seemed, at the time, to reject corpuscularism or the aether. Don’t get me wrong, though. I don’t think that panpsychism is unproblematic. As a metaphysical theory, it is quite murky. More importantly: I don’t think panpsychism contributes to solving the problem of consciousness at all. To say that everything is conscious does not help us in any way to understand how or why it is that we are conscious the way we are. As Bill Lycan quipped once in class, panpsychism turns the mind-body problem into a mind-mind problem.
Why do I think that the appellation “pseudoscience” is appropriate to describe IIT? Let me start to answer that question with a full disclosure, and a story. The disclosure is that I actually did read Tononi’s “Phi: A Voyage from the Brain to the Soul”, a couple of years after it came out. I also read a few other papers Tononi had published before the book, aimed at “explaining” IIT. I did it mainly because I’d heard people quietly rumoring, in philosophy and neuroscience conferences, that IIT was just “mathematical mumbo-jumbo” or “technical BS”. And around that time, in May 2014, Scott Aaronson published what many—including me—thought was a decisive argument against IIT: he proved that, using its own mathematical apparatus, IIT predicts consciousness in physical systems “that no sane person would regard as particularly ‘conscious’ at all” (Aaronson, 2014). “Another one bites the dust”, I must’ve thought at the time. One more theory of consciousness that had been shown to be just wrong. On occasion I’d see a philosopher here and there publish a paper on IIT, but I didn’t think much of it. I really had no idea that IIT had not been debunked. And I really had no clue that, instead, it was growing in popularity.
Now, the story. I was invited to give a talk at the Max Planck Institute for Human Development in Berlin, in December of 2018, for the Symposium on Consciousness: Nature/Culture. Given that I received the invitation just a few weeks before the event, it was very clear that I was the replacement of a replacement of a replacement for someone who had to cancel at the last minute. Nevertheless, I accepted—mainly because it was my chance to meet some minds I admired, such as Michael Graziano, Lucia Melloni and, yes, Giulio Tononi. I wasn’t ready yet to think of IIT as mathematical mumbo-jumbo, but I wasn’t convinced it was a scientific theory either. I was really hoping I’d have the chance to hear Tononi’s talk, and maybe even chat with him about IIT, Aaronson’s criticisms, and—why not?—panpsychism.
I can’t recall all the details of his talk, but I remember it had to do with sleep and the conscious experience of dreaming. It had some empirical data, mostly high-density EEG and intracranial recordings. And the data were, of course, interpreted within the theoretical framework of IIT. What I remember vividly is asking Tononi why, if the theory assumes that the basic units are neurons, coarse measures such as those afforded by EEG or even intracranial recordings are suitable for providing evidence for the theory. I remember that his answer baffled me, for it didn’t seem to matter that much whether the basic units—the “elements in a state”—were neurons. Neuronal populations, brain areas (whatever that means), voxels, what-have-you, could potentially be the relevant basic mechanisms. This was very surprising, for I thought that a requirement of the theory is that the elements are clearly hierarchically and mechanistically organized, according to what the axiom of “composition” demands. But these measures are not sensitive to such idealized hierarchies. I was missing something.
Thankfully, I had the great fortune of sitting in front of Tononi that evening, at the dinner table. And what followed was an extraordinary and delightful conversation, which lasted until midnight, and which convinced me of one thing: that IIT is not a scientific but a metaphysical theory. Indeed, Tononi’s talk of a “top-down approach”, of starting from “consciousness itself”, was eerily reminiscent of modern rationalism. It felt as if I were talking with a sort of contemporary Leibniz, who is trying to convince us of the existence of monads—pesky metaphysical basic units that make up the universe but, lacking extension, are empirically undetectable. Or perhaps a sort of contemporary Kant, trying to prove that there must be a transcendental self, a unity of apperception that can’t be empirically found but that needs to be assumed given the structure of our mind. I left the dinner, and the conference, thinking that IIT, if it was a theory of consciousness at all, was a metaphysical one, the soundness of which was to be evaluated the same way other metaphysical theories of consciousness are evaluated: by arguments and reasoning, rather than by EEGs, fMRIs, and all that other empirical mumbo-jumbo.
I was very surprised, then, not only to see that IIT was heralded as a leading empirical theory of consciousness, but that it has been used to vie for research grants that are destined for scientific projects. It just didn’t fit. It is as if Leibniz had applied to the NSF for money to build a large hadron collider so he could find monads, or Kant had put in an NIH grant to find the seat of the transcendental self in the brain by sticking people in an MRI scanner. It is not simply that it is hard to understand how IIT can be empirically interpretable; it is that it does not seem to be the sort of theory that can be empirically interpretable. I had this impression, by the way, not only from my conversation with Tononi 5 years ago, but from actually reading stuff that he and Koch have written more recently. (Incidentally, I had no idea Koch was such a huge fan of IIT. My recollection from 20 years ago is that he thought that the claustrum was the neural correlate of consciousness!) Consider, for instance, the vagueness of this statement, from Tononi and Koch (2015): “physical systems are considered as elements in a state, such as neurons or logic gates that are either ON or OFF. All that is required is that such elements have two or more internal states, inputs that can influence these states in a certain way and outputs that in turn depend on these states” (pp. 6-7). Should we take the locution “such as” at face value and consider neurons as the basic elements in a state? If so, shall we think of the mathematical architecture of IIT as a super-duper computational cognitive neuroscience model of how interacting neurons bring about consciousness? Because if so, then things start to look messy.
For one, given the number of neurons in the brain, and the number of connections between them, calculating phi is simply intractable. In fact, it looks like it is intractable for a whole lot of physical systems much simpler than human brains (Barrett and Mediano, 2019). The axiom of composition that is essential to the theory assumes at best a naïve, and at worst a false, hierarchical structure of neural systems. There is a silly way in which brains are hierarchical: some things are bigger than others. But if we have learned anything from the last 150 years of neuroscientific research, it is that functional hierarchies do not mirror physical ones. Different physical levels are causally constrained by all sorts of influences from other levels, leaving us with a picture of the brain as a system that is rather—how can I put it—“entangled” (Pessoa, 2022). Or think about the difficulty of giving an empirical interpretation to the postulate of “intrinsic existence”, which states that a system—that is, a neuron—“must exist intrinsically”, which means that it “must have a cause-effect power”, which in turn is defined as “make a difference to the probability of some past and future state of the system”. The philosopher in me screams in agony, of course, not only because it is false to say that for a neuron to exist it must have a cause-effect power (think, à la Max Black, of a possible universe in which just a lonely neuron exists) but also because the notion of causation described in this postulate is unnervingly similar to so-called “probability-raising theories of causation”, which the past four decades of philosophical and statistical research have shown to be extraordinarily problematic (Hitchcock, 2021).
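To get a feel for the intractability worry, consider that finding the “minimum information partition” at the heart of the phi calculation requires, at the very least, comparing a system against its bipartitions, and the number of bipartitions of an n-element system is 2^(n−1) − 1 (full IIT analyses search even larger partition spaces, and must do so for every candidate subsystem). The following back-of-the-envelope sketch is my own illustration, not anyone’s published calculation:

```python
import math

def bipartitions(n: int) -> int:
    """Number of ways to split n elements into two non-empty parts:
    (2**n - 2) / 2 = 2**(n - 1) - 1."""
    return 2 ** (n - 1) - 1

for n in (4, 10, 20, 40):
    print(f"{n} elements -> {bipartitions(n):,} bipartitions")

# For a brain-scale system (~86 billion neurons), even writing the count
# down is hopeless: the number of bipartitions has ~26 billion decimal digits.
n_neurons = 86_000_000_000
digits = math.floor((n_neurons - 1) * math.log10(2)) + 1
print(f"~{digits:,} decimal digits in the bipartition count for a human brain")
```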
Or take this other gem, from the same 2015 paper: “A theory is the more powerful the more it makes correct predictions that violate prior expectations. One counterintuitive prediction of IIT is that a system such as the cerebral cortex may generate experience even if the majority of its pyramidal neurons are nearly silent” (p. 9). First of all, why would a theory be more powerful when it corroborates predictions that violate prior expectations? This seems to get things exactly backwards. Theories are stronger when their corroborated predictions conform to prior expectations. Doesn’t the theory of general relativity grow stronger whenever a prediction that conforms to what’s expected is corroborated? Maybe what they mean is a violation of prior expectations, not of IIT, but of neuroscience in general? Is this what they mean when they talk about “nearly silent” neurons? I must confess that I am baffled by this notion. What is a silent neuron? A neuron not firing? Because neurons are doing all sorts of things even when they aren’t firing. If neurons are “silent” during resting state, are they “super silent” during the refractory period? And, more importantly, why should we think that because a neuron isn’t firing at a certain time, it isn’t causally contributing to a downstream effect? From the point of view of neurobiology, these claims are profoundly controversial, if not downright wrong[1]. And, sad to say, this only scratches the surface of the many details of the formal machinery of IIT that would have to be ironed out if we are to think of it as a computational model of how neurons bring about or instantiate consciousness.
Leaving aside the complications with its mathematical formalisms, the obscurity with which the empirical interpretation of IIT is shrouded makes it very hard not to think of it as closer to metaphysics than to an empirical theory. And since IIT has been strongly promoted as an empirical theory, with allegedly supporting experimental findings published in scientific journals and research grants normally earmarked to advance scientific projects, it is even harder not to think of it as a paradigmatic case of pseudoscience, alongside the likes of mesmerism and Lysenkoism. While I imagine that there is a big sociological component as to why IIT continues to receive such popular support, there is also another reason why I think it hasn’t been dismissed as empirically uninterpretable: it looks like a very sophisticated computational theory of the cognitive neuroscience of consciousness. Computational models in cognitive neuroscience involve all sorts of parameters, sometimes embedded in fanciful formalisms, that are interpretable by fitting them to brain and behavioral data. Here’s an example. It has long been known that striatal dopamine is involved in reward processing and modulates behavior in inter-temporal choice tasks. Recently, Wagner and colleagues (2020) tested a drift diffusion computational model while directly manipulating striatal dopamine, increasing its release with haloperidol, a D2-receptor antagonist. Sure enough, with this direct manipulation of dopamine release during inter-temporal choice trials, a drift diffusion model with non-linear trialwise drift rate scaling fit both the neural and the behavioral data quite well. Now, is this the sort of thing that we are supposed to do with IIT? Maybe, but if so, I have yet to find a paper that does exactly that. And my suspicion—which I am happy to be corrected on—is that I haven’t found them because they don’t exist. I wouldn’t even know how to do with IIT, understood as a computational model, what Wagner et al. (2020) did with their DDM. For if we are to interpret each basic element as a neuron, we not only don’t have the data to test it, but also—as mentioned above—computing phi would be computationally intractable.
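For readers unfamiliar with this style of modeling, here is a minimal sketch of what a drift diffusion simulation with trialwise drift-rate scaling looks like. It is only loosely in the spirit of the model class Wagner et al. (2020) fit: the parameter names, values, and scaling rule below are illustrative assumptions of mine, not their actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm_trial(drift, boundary=1.0, noise=1.0, dt=0.001, max_t=5.0):
    """Euler-Maruyama simulation of one trial; returns (choice, reaction time)."""
    x, t = 0.0, 0.0
    while abs(x) < boundary and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x > 0 else 0), t

# Hypothetical non-linear trialwise scaling: the drift rate grows
# sublinearly with each trial's subjective value difference.
value_diffs = rng.uniform(0.1, 2.0, size=200)   # one value per trial
v_coeff, v_power = 1.5, 0.7                     # illustrative free parameters
drifts = v_coeff * value_diffs ** v_power

trials = [simulate_ddm_trial(v) for v in drifts]
rts = np.array([t for _, t in trials])
choices = np.array([c for c, _ in trials])
print(f"mean RT: {rts.mean():.3f} s; P(upper boundary): {choices.mean():.2f}")
```

Fitting, rather than merely simulating, such a model means searching for the parameter values (here, the boundary, the coefficient, and the exponent) that best reproduce observed choices and reaction times given measured trialwise inputs; it is that step for which, as far as I can tell, IIT has no analogue.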
Given some of the hate mail and nasty tweets I got, I must’ve been wrong about the empirical interpretability of the mathematical apparatus of IIT because, apparently, there is an app for that. More precisely, it looks like there is a cute Python toolbox, “PyPhi”, that helps you calculate the phi value of a system by “unfolding the full cause-effect structure of discrete dynamical systems of binary elements” (Mayner et al., 2018). Being an outsider, I must confess I didn’t know such a tool existed, so I haven’t had the chance to play with it. But apparently I don’t have to, as others have used this very tool to show that all published phi values have been chosen arbitrarily from a large set of “equally valid alternatives” (Hanson and Walker, 2023). More worryingly, sometimes one solution gives phi = 0, while another gives phi > 0. If there is no principled way to solve this non-uniqueness problem in IIT, it’s hard not to conclude that the users of these algorithms are basically—how can I put it—“phi-hacking”.
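For the curious, the basic usage pattern described in the PyPhi documentation (Mayner et al., 2018) looks roughly like the sketch below. The three-node transition probability matrix is, if I read the documentation correctly, the toolbox’s stock example; I am relying on the documented API here, not on first-hand experience with the tool.

```python
import numpy as np
import pyphi

# State-by-node transition probability matrix for a three-node network of
# binary elements (the stock example from the PyPhi documentation).
tpm = np.array([
    [0, 0, 0],
    [0, 0, 1],
    [1, 0, 1],
    [1, 0, 0],
    [1, 1, 0],
    [1, 1, 1],
    [1, 1, 1],
    [1, 1, 0],
])
# Connectivity matrix: entry (i, j) = 1 if element i sends input to element j.
cm = np.array([
    [0, 0, 1],
    [1, 0, 1],
    [1, 1, 0],
])
network = pyphi.Network(tpm, cm=cm, node_labels=("A", "B", "C"))
state = (1, 0, 0)                       # current state of A, B, C
subsystem = pyphi.Subsystem(network, state, (0, 1, 2))
print(pyphi.compute.phi(subsystem))     # a small positive phi for this network
```

Note what the toolbox demands as input: a complete, discrete transition probability matrix over every state of every binary element. Whatever one thinks of the mathematics, nothing remotely like that object is available for even a modest patch of cortex, which is presumably why Hanson and Walker (2023) could probe the formalism only on toy systems of this size.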
So IIT is competing with other research programs for scientific clout, even though it is difficult—if not impossible—to interpret it empirically, and even though it has all the makings of a metaphysical rather than an empirical theory, however problematic it may be. Yet, judging by some of the lovely messages I’ve received, we are still not justified in calling it “pseudoscience”. Let’s explore some of these messages. A very strident one said that I “should know better, as there is no clear criterion of demarcation [between science and pseudoscience]”, for which, evidently, I “should be ashamed of calling myself a philosopher”. Well, apparently it does not take a philosopher to show that this argument is invalid. There are lots of cases of things that are clearly p or not-p, even when there isn’t a clear criterion of demarcation between things that are p and things that are not-p. That there is no criterion for demarcating when someone is bald versus not-bald does not mean that there aren’t bald people as well as people who aren’t bald. Likewise, there are very clear instances of pseudoscience, even if we don’t have a clear criterion of demarcation.
Other comments were slightly more sophisticated. Some claimed, for instance, that “IIT is obviously a science” because it makes “predictions that have been empirically validated”. Tweets, like 500-word-limit letters to the editor, do not offer much space for elaboration, so that one message had little by way of support. But others chimed in with links to papers that, somehow, are supposed to report empirical corroborations of predictions of IIT. In particular, they referenced Massimini et al. (2005) and Casali et al. (2013) as clear empirical corroborations of predictions derived from the theory. I read them, as carefully as I could, yet I couldn’t see how their measure of “integration”—what they call the “perturbational complexity index” or PCI—is in any way derived from the formalism of IIT. If I understand correctly—and I’ve been involved in some network modeling of imaging data myself (e.g., Huang et al., 2021; Setton et al., 2023; Uddin et al., 2023)—PCI is basically a measure of the compressibility of a ginormous binary matrix. And insofar as it is primarily a topological measure, it behooves us to interpret it, at least initially, as ontologically neutral. This is because there are lots of ways in which different underlying mechanisms—indeed, different underlying materials—can yield similar measures. In other words, there are lots of biological reasons why these experimental manipulations would have the topological properties uncovered by these two studies, and nothing of what I read indicates that they derive mathematically from the formalism of IIT. I don’t deny that the experiments are cool, or that PCI could have some predictive value—in fact, I’ve argued that one can use topological measures to generate predictions and guide interventions even if they lack clear empirical and/or mechanistic interpretations (Gessell et al., 2020). What I deny is that they are derived from IIT, the way in which a measure of temperature is derived from the second law of thermodynamics, say, or the motion of a pendulum is derived from the harmonic oscillation equation.[2]
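To see why a compressibility measure is, by itself, ontologically neutral, consider a toy sketch. PCI is built around the Lempel-Ziv complexity of a binarized spatiotemporal response matrix; what follows is my own drastic simplification (the published PCI pipeline also involves TMS-evoked source modeling and a normalization step), but it shows how such a measure responds to structure in any binary matrix whatsoever, regardless of what produced it.

```python
import numpy as np

def lz76_phrases(s: str) -> int:
    """Count the phrases in a Lempel-Ziv (1976)-style parsing of a string:
    each new phrase is the shortest substring not seen in the preceding text."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

rng = np.random.default_rng(1)
# Toy "responses": channels x time binary matrices, flattened row-wise.
structured = np.tile(rng.integers(0, 2, 50), (20, 1))  # all channels identical
random = rng.integers(0, 2, (20, 50))                  # independent channels

for name, m in (("structured", structured), ("random", random)):
    s = "".join(map(str, m.flatten()))
    print(f"{name}: {lz76_phrases(s)} phrases")
```

The repeated-rows matrix parses into far fewer phrases than the random one, and nothing about that fact depends on neurons, let alone on the axioms of IIT.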
Incidentally, some Twitter users claimed that IIT has been empirically verified by studies showing that undergoing TMS in parieto-occipital cortex causes certain effects on our conscious visual experience, perhaps akin to momentary blindsight or inattentional blindness. I am baffled by these assertions, to be honest, for I don’t understand how these findings are predictions of the theory, rather than observations that a theory of consciousness needs to accommodate. Everyone who has ever been TMS-ed in occipital cortex while doing a visual task (myself included) knows that visual experience is profoundly affected, and any respectable theory of consciousness should explain why this is the case. As a result, I don’t see how a particular theory gets to claim authority over such findings. They constitute observations that a theory of consciousness needs to account for, rather than predictions with which a theory of consciousness is tested. It is like saying that my theory of pain must be right because it predicts that if I punch you in the face, it will hurt. Any theory of pain would predict that. What a theory of pain should do is explain why, when you are punched in the face, it hurts. And it is unclear whether IIT would do a better job at that than other theories of consciousness in the offing.
To be fair, there were some people online who voiced support for the letter in ways that I also find somewhat problematic. Some said, for instance, that IIT is obviously a pseudoscience because its predictions are unverifiable or empirically untestable. My sense is that what happens is sort of the opposite: what IIT allegedly predicts seems to be verifiable by too much. Consider, for instance, this bit: “IIT also predicts that the NCC [neural correlate of consciousness] is not necessarily fixed, but may expand, shrink and even move within a given brain depending on various conditions” (Tononi and Koch, 2015, p. 10). How much could it expand or shrink? In which direction? How far can it move? And what are the conditions under which these changes can occur? The worry is that the vagueness of these claims allows supporters of IIT to accommodate basically any finding to fit the theory. This worry is of course reminiscent of Popper’s criticism of Marxism and psychoanalysis, and I think that the vagueness and borderline triviality of some of these alleged predictions put IIT in the same boat.
Speaking of Popper, others voiced their support for the letter by arguing that IIT is unfalsifiable and that, as such, it should be considered a pseudoscience. As a philosopher, I am sometimes surprised by how easily scientists accept falsificationism as a reliable demarcation criterion. In his brilliant work, “Conjectures and Refutations: The Growth of Scientific Knowledge” (1963), Popper offered a straightforward way of distinguishing science from pseudoscience: “the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability” (p. 44). The problem is that, almost immediately, philosophers of science identified serious difficulties with falsificationism. First of all, it is hard to know exactly when a prediction has been falsified. If you ever conducted an experiment in your chemistry class, you’d remember when the liquid in your beaker did not measure exactly what the formula predicted, or when the volume of some catalyst or other ended up slightly off from what the math said. Should you go about proclaiming that inorganic chemistry is falsified? Of course not: there is always error in measurement and some wiggle room for these observations to vary. How much wiggle room? Well, it is hard to say, and the less of a handle you have on the error term—as occurs in psychology and neuroscience—the less sure you can be about the size of the wiggle room. Second, falsificationism does relatively well with some sciences, but it may mark off as pseudoscience certain doctrines we want to consider scientific, such as cosmology and evolutionary biology. Finally, and perhaps more interestingly, as philosopher of science Larry Laudan (1983) famously argued, it seems as though any theory, however wacky it may be, would end up counting as falsifiable so long as there is some ascertainably improbable observation that practitioners of that science would accept as a valid falsification. Ditto for IIT. Tononi could accept that if one were to find a conscious system composed of, say, only one basic element, or two basic elements that do not interact with one another, then the theory would be falsified. Of course, there is no such thing. But were there to be such a thing, IIT would be falsified. In sum, neither verificationism nor falsificationism gives us satisfactory reasons to claim that IIT is a pseudoscience: the former because IIT is verifiable—although by highly probable observations—and the latter because it is falsifiable, albeit by very improbable ones.
It is worth discussing one last, perhaps more conciliatory kind of message many voiced: why call it “pseudoscience”, which is loaded with all sorts of negative social connotations, as opposed to “non-scientific” or simply “false”? My response here is that I’m not sure these are better labels for IIT. When I think about a theory or a doctrine that is not scientific, I think of something like astrology. People say things like “she’s self-absorbed because she’s a Libra” or “he’s obviously holding a grudge because he’s a Scorpio”, or stuff like that. They seem like explanations, and they often work as such. In fact, people may even curb their behavior on the basis of these “explanations”, deciding never to date a Leo again because, being a Pisces, they are not compatible. But, for me, the difference between a non-scientific theory, like astrology, and a pseudoscientific one, like IIT, is that the former is not employed by its practitioners to vie for funding or recognition that’s destined for scientific enterprises. I doubt that the NIH gets many applications based on the scientific promise of astrology, and the kinds of magazines in which astrological predictions are published tend not to be found in the same stands as scientific journals. We have a word for a non-scientific theory that wants to pass as a scientific one: pseudoscience.
What about “false”? Well, I think in a sense all signatories agree that IIT is false, but the letter wanted to convey something other than simply an assessment of its truth. In fact, among the many angry messages I read, there were several that pointed out that the letter did not contain any arguments as to why IIT is wrong. I agree. That was not the purpose of the letter. To be honest, the letter may have been less effective in creating the kind of effect it did had it been yet another paper arguing why IIT is false. After all, there are dozens of papers out there criticizing the theory, arguing for its unverifiability, for the unclarity and unintuitiveness of its axioms, and even proving the intractability and arbitrariness of its mathematical apparatus—and yet the theory is still heralded as a leading empirical theory of consciousness. Many have called it “false”, and have argued for that claim, and yet it has made no difference. What the letter sought to do was not to call attention to IIT’s falsehood, but to the fact that it continues to be regarded as a beautifully dressed emperor among scientific theories when, in reality, it has no clothes.
Let me end with a couple of thoughts that have bothered me about the reaction to this letter as well as the structure of this adversarial collaboration. The first has to do with one particular reaction that was voiced neither by friends nor foes of IIT, but by those watching from the sidelines: isn’t it the case that some of the same concerns that are presented against IIT also apply to other scientific theories of consciousness, such as the global neuronal workspace hypothesis or the attention schema theory? Although my sense is that many of the concerns I discussed here are germane to IIT and likely don’t apply to other theories of consciousness, I also think that there is something to this worry. Many, if not all, of our current scientific theories of consciousness employ all sorts of placeholder terms for yet-to-be-understood brain processes that are supposed to play critical roles in giving rise to conscious awareness, and I can’t help but think that a careful exploration will often show that their empirical interpretability is far from straightforward. Moreover, I wouldn’t be surprised if part of the reason this letter elicited such an emotional reaction from so many researchers is that they saw that for every finger pointing at IIT there were three pointing back at themselves: if IIT is pseudoscience, how do I know that my favored neuroscientific theory of consciousness is not? Worse: how do I know that my preferred neuroscientific theory of attention, memory, or perception shouldn’t be called pseudoscientific too? Could calling IIT pseudoscientific open the floodgates to questioning the scientific legitimacy of other theories in cognitive neuroscience? Here’s the thing: that wouldn’t be so bad. Look: we—as in we cognitive psychologists/neuroscientists—work on trying to understand how it is that such an incredibly complex organ as the brain is connected to such incredibly weird stuff as the mind. This is a really hard task, and we all strive for our theories to be closer to thermodynamics or molecular biology than to alchemy or humourism. Yet many of the neurocognitive theories we now regard as scientific will likely be considered pseudoscience in years to come, for all sorts of different reasons (Gordin, 2021). We must therefore approach our labor with a healthy dose of intellectual humility and skepticism, and we should always strive to clearly articulate the ontic commitments of our theories and models. Although it may be primarily a social sanction, the declaration of a theory as pseudoscientific is not simply the product of name-calling, but of the slow and painstaking process of scientific revision, which involves the conducting and reporting of experiments, the careful articulation of arguments and theoretical positions and, yes, the occasional publication of brief letters with expressions of concern.
The last thought has to do with the structure of this adversarial collaboration. To be honest, I find it very strange that this whole thing was predicated on the idea that a single study, containing just a handful of experiments, could be used to disprove an entire theory. Any philosopher of science could tell you that science does not work that way. Theories rarely, if ever, get falsified in toto by a single study. In fact, even what is arguably the best example of a theoretical refutation by experimentation—Eddington and Dyson’s measurement of the deflection of starlight by the sun during the solar eclipse of May 29, 1919—was far from uncontroversial, let alone sufficient to render Einstein’s theory of general relativity immediately acceptable to the scientific community. So expecting that a single study could refute a whole theory of consciousness and corroborate another one is odd. Nevertheless, there is a very important lesson to be learned from the Eddington and Dyson experiment, an insight that actually inspired Popper to think that falsification was very important in science: “the impressive thing about this case”—he wrote—“is the risk involved in a prediction of this kind” (cited in Gordin, 2021, p. 4; my emphasis). After all, any mismatch between the predicted curvature and the observed measurements that couldn’t be accounted for by known sources of measurement error would render Einstein’s theory false. Experimentally, this was a very severe test, and a big risk for Einstein.
Unfortunately, my perception is that this is not what occurred with the design and execution of the adversarial collaboration. Everything I’ve read suggests that the agreed-upon predictions were rather vague, somewhat idiosyncratic, and definitely not risky. The chances of the data not speaking directly in favor of or against either of the two tested “theories” were huge. In fact, this is exactly what happened. And it is not surprising. Did anyone really think that the authors of two of the most cited and well-known theories of consciousness, whose huge reputations are built precisely upon the alleged fact that their theories are empirically supported, were going to put their heads together and really come up with an experiment that would make one of them fall from academic glory with absolute certainty? Of course not. The idea of adversarial collaborations is wonderful in principle, but in practice it is really hard to implement.
Not that anyone cares, but if it were up to me, I would suggest that if there is money for adversarial collaborations, it should be funneled not toward studies testing whole theories, but rather toward more constrained experiments, aimed at testing bold and risky predictions that clearly derive from a theory, in which incongruities between prediction and measurement can be accounted for by accepted values for the error term, and for which the results require as little interpretation and admit as little post-hoc rationalization as possible. In simpler words, I think adversarial collaborations should abide by what philosopher of science Michael Strevens calls “the iron rule of explanation”, which invites scientists to resolve their differences by conducting empirical tests the results of which require no further interpretation, by letting the evidence speak for itself, as it were (Strevens, 2020). For the neuroscience of consciousness—indeed, for cognitive neuroscience in general—this would demand theories that are much more transparent about how the phenomenon relates to the relevant measure, as well as more clarity as to when a source of error is known or expected. This may not be sufficient to assuage our concerns about the scientific legitimacy of our preferred neuroscientific theories of mental phenomena, but I think it would constitute a step in the right direction. Whether this is possible for IIT, I doubt, but I am happy to be proven wrong—just as supporters of IIT should be, if they want to show us why we are mistaken in calling it a pseudoscience.
References
Aaronson, S. (2014). Why I am not an integrated information theorist (or, The Unconscious Expander). https://scottaaronson.blog/?p=1799
Barrett, A. B., & Mediano, P. A. (2019). The Phi measure of integrated information is not well-defined for general physical systems. Journal of Consciousness Studies, 26(1-2), 11-20.
Bridgman, P. W. (1927). The logic of modern physics (Vol. 3). Macmillan.
Casali, A. G., Gosseries, O., Rosanova, M., Boly, M., Sarasso, S., Casali, K. R., ... & Massimini, M. (2013). A theoretically based index of consciousness independent of sensory processing and behavior. Science Translational Medicine, 5(198), 198ra105.
Cogitate Consortium et al. (bioRxiv). An adversarial collaboration to critically evaluate theories of consciousness. https://doi.org/10.1101/2023.06.23.546249
De Brigard, F. (2012). The role of attention in conscious recollection. Frontiers in Psychology, 3, 29.
Gessell, B. S., Stanley, M. L., Geib, B., & De Brigard, F. (2020). Prediction and topological models in neuroscience. In F. Calzavarini & M. Viola (Eds.), Neural Mechanisms: New Challenges in the Philosophy of Neuroscience (pp. 35-56). Springer.
Gillies, D. A. (1972). Operationalism. Synthese, 25, 1-24.
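Gordin, M. D. (2021). On the Fringe: Where Science Meets Pseudoscience. Oxford University Press.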
Hanson, J. R., & Walker, S. I. (2023). On the non-uniqueness problem in integrated information theory. Neuroscience of Consciousness, 2023(1), niad014.
Hitchcock, C. (2021). Probabilistic Causation. The Stanford Encyclopedia of Philosophy. E.N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2021/entries/causation-probabilistic/>.
Huang, S., Faul, L., Sevinc, G., Mwilambwe-Tshilobo, L., Setton, R., Lockrow, A. W., Ebner, N. C., Turner, G. R., Spreng, R. N., & De Brigard, F. (2021). Age differences in intuitive moral decision-making: Associations with inter-network neural connectivity. Psychology and Aging, 36(8), 902-916.
IIT-Concerned et al. (PsyArXiv). The Integrated Information Theory of Consciousness as Pseudoscience. https://doi.org/10.31234/osf.io/zsr78
Laudan, L. (1983). The demise of the demarcation problem. In Physics, philosophy and psychoanalysis: Essays in honour of Adolf Grünbaum (pp. 111-127). Dordrecht: Springer Netherlands.
Massimini, M., Ferrarelli, F., Huber, R., Esser, S. K., Singh, H., & Tononi, G. (2005). Breakdown of cortical effective connectivity during sleep. Science, 309(5744), 2228-2232.
Mayner, W. G., Marshall, W., Albantakis, L., Findlay, G., Marchman, R., & Tononi, G. (2018). PyPhi: A toolbox for integrated information theory. PLoS Computational Biology, 14(7), e1006343.
Pessoa, L. (2022). The entangled brain: How perception, cognition, and emotion are woven together. MIT Press.
Popper, K. R. (1963). Conjectures and Refutations: The Growth of Scientific Knowledge. Routledge & Kegan Paul.
Sanchez-Vives, M. V., & McCormick, D. A. (2000). Cellular and network mechanisms of rhythmic recurrent activity in neocortex. Nature neuroscience, 3(10), 1027-1034.
Setton, R., Mwilambwe-Tshilobo, L., Girn, M., Lockrow, A. W., Baracchini, G., Hughes, C., Lowe, A. J., Cassidy, B. N., Li, J., Luh, W.-M., Bzdok, D., Leahy, R. M., Ge, T., Margulies, D. S., Mišić, B., Bernhardt, B. C., Stevens, W. D., De Brigard, F., Kundu, P., Turner, G. R., & Spreng, R. N. (2023). Age differences in the functional architecture of the human brain. Cerebral Cortex, 33(1), 114-134.
Strevens, M. (2020). The knowledge machine: How irrationality created modern science. Liveright Publishing.
Tononi, G., & Koch, C. (2015). Consciousness: Here, there and everywhere? Philosophical Transactions of the Royal Society B: Biological Sciences, 370(1668), 20140167.
Uddin, L. Q., Betzel, R. F., Cohen, J. R., Damoiseaux, J. S., De Brigard, F., Eickhoff, S. B., Fornito, A., Gratton, C., Gordon, E. M., Laird, A. R., Larson-Prior, L., McIntosh, A. R., Nickerson, L. D., Pessoa, L., Pinho, A. L., Poldrack, R. A., Razi, A., Sadaghiani, S., Shine, J. M., Yendiki, A., Yeo, B. T., & Spreng, R. N. (2023). Controversies and progress on standardization of large-scale brain networks nomenclature. Network Neuroscience, 7(3), 864-905.
Wagner, B., Clos, M., Sommer, T., & Peters, J. (2020). Dopaminergic modulation of human intertemporal choice: A diffusion model analysis using the D2-receptor antagonist haloperidol. Journal of Neuroscience, 40(41), 7936-7948.
[1] I did some searching to try to understand what they could possibly mean by “silent neurons”. In his 2004 paper, which I think was the first published version of IIT, Tononi references “silent states”, but not neurons. The first reference to “silent neurons” I could find was in Massimini et al. (2005), where the term is introduced in reference to a finding by Sanchez-Vives and McCormick (2000). But that paper does not use the term “silent neuron” at all. What it reports is the in vitro generation of slow (0.1–0.5 Hz) oscillations in slices of ferret neocortex. This finding is definitely not at the level of a neuron, and it definitely does not show the inactivity or “silence” of a neuron. So, again, I am not sure what the notion refers to.
[2] Some may argue here that the measures aren’t supposed to be “derived” from but “inspired” by the theory. I find this response unsatisfying. I can be inspired to develop a particular measure of attentional binding in object perception from having read Kant on transcendental schematism, but my experimental results wouldn’t constitute empirical proof of Kant’s theory. Inspiration simply doesn’t constitute the kind of connection one needs to establish between a measure and a theory such that the former can lend empirical support to the latter.