I don’t know what is more surprising: that in just a month a short 497-word letter (IIT-Concerned et al., 2023), read almost 60,000 times and downloaded more than 11,000 times, has generated such a backlash in the consciousness world, or that I was invited to sign it. Because I really was not expecting to be asked to sign it, to be honest. After all, unlike most—if not all—of my fellow co-signers, I don’t work on consciousness. I used to, though. I was in college in the early 2000s, diligently reading and studying philosophy of mind, so I obviously fell in love with the problem of consciousness. I devoured the “classics”—Churchland, Lycan, Dennett, Chalmers, Hurley—but also some less “mainstream” views, including Zoltan Torey, Rodolfo Llinás, and (gulp) Roger Penrose, among many others. Then I had the great fortune of studying under Dennett, with whom I talked about consciousness often. In fact, I wrote my writing sample for grad school on consciousness: a critical evaluation of Searle’s “biological dualism”, if I recall correctly. By the time I started grad school I was still so interested in consciousness that my first lab rotation in the psychology department at UNC was in Joe Hopfinger’s attention lab, as my intention at the time was to work on attention and its relation to consciousness.
A good read, Felipe! For adversarial collaboration, you're right that one way to go is to restrict them to highly informative experiments that terminate further interpretation. But check our new Neuron paper providing a Bayesian approach to adversarial collaborations that can score quite diverse theories against each other, as long as there is any difference in predictions. Then one theory can get ahead in the ongoing Bayesian horse race, allowing the rest of the scientific field to better place their bets (even if the losing theorist keeps flogging their theory). Adversaries are then wise to consider carefully what to say about the opponent's predictions (e.g., if they are considered banal, then make the same prediction, which renders that prediction uninformative). This moves much focus onto the bridge principles from theory to prediction, which is what you discuss for IIT (silent neurons, grid structure, etc.). Conceived like this, adversarial collaborations become quite interesting tools for science, even for fledgling fields like consciousness, or well-established ones like memory.
Corcoran, A. W., Hohwy, J., & Friston, K. J. (2023). Accelerating scientific progress through Bayesian adversarial collaboration. Neuron. DOI: 10.1016/j.neuron.2023.08.027
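The "horse race" described above can be illustrated with a toy sketch of Bayesian model comparison. This is my own illustration of the general idea, not code from the Corcoran et al. paper: each theory assigns a probability to the outcome of each experiment, and observed outcomes update the posterior odds via the likelihood ratio. When both theories make the same prediction, the ratio is 1 and the experiment is uninformative, exactly as the comment notes.

```python
def posterior_odds(prior_odds, experiments):
    """Accumulate posterior odds for theory A over theory B.

    Each experiment is a pair (p_a, p_b): the probability each theory
    assigned to the outcome that was actually observed.
    """
    odds = prior_odds
    for p_a, p_b in experiments:
        odds *= p_a / p_b  # likelihood ratio; 1.0 when predictions agree
    return odds

# Hypothetical numbers: theory A predicted the observed outcomes more
# strongly in the first two experiments; in the third, both theories
# made the same prediction, so it contributes nothing to the race.
odds = posterior_odds(1.0, [(0.9, 0.3), (0.8, 0.4), (0.7, 0.7)])
print(round(odds, 2))  # 6.0 — the field can now bet 6:1 on theory A
```

Note how the design incentive falls out directly: an adversary who matches the opponent's "banal" prediction zeroes out its evidential value, pushing the real action onto the bridge principles that generate divergent predictions.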
Some interesting points of view here. I point you to the article below for a different perspective.
The strength of weak integrated information theory
https://www.sciencedirect.com/science/article/pii/S1364661322000924