I've read this several times, but only just now read Chalmers' response, with which I agree.
While Aaronson identifies concerns with the math, as indicated above, he views IIT as a serious, honorable attempt to address what he calls the Pretty-Hard Problem of Consciousness. (In other words, it's not rubbish, despite the opinion of our resident HCT guru.) As noted, Koch's enthusiasm and Chalmers' public interest should be considered as well. (None of which means, of course, that IIT is correct.)
However, I think Aaronson's main critique of IIT is foolish: "In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all."
Chalmers' response:
If consciousness is fundamental, and it may be, then humans don't have the market cornered. There may be many non-human systems that are having experiences.
Not only do I think Aaronson's critique is erroneous, I also wonder whether it is ultimately born of a confusion about consciousness.
He has apparently shown that some very simple systems could, according to IIT, be experiencing consciousness; as noted, he rejects IIT for that reason.
In this case, my concern is with his last statement. First, consciousness is not the same as intelligence. I don't think I need to say anything more than that.
Second, for a system to be conscious is for it to feel like something to be that system. What does it feel like to be a system that simply applies the matrix W to an input vector x? I haven't a clue. Perhaps it feels like a prickly rash, perhaps it feels like a shade of purple, perhaps it feels like the scent of lavender. It probably doesn't feel like any of those things. And it doesn't have to in order to feel like something.
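(For concreteness, here is a minimal sketch, in Python/NumPy, of the kind of system in question. It is only illustrative, not Aaronson's exact construction, though he does call his example a Vandermonde system: the system's entire dynamics amount to multiplying an input vector x by a fixed matrix W.)

    import numpy as np

    # Illustrative "system": its entire behavior is computing y = W x
    # for a fixed Vandermonde matrix W. Nothing else is going on.
    n = 8
    points = np.arange(1, n + 1)               # arbitrary distinct points
    W = np.vander(points, n, increasing=True)  # the fixed matrix W
    x = np.random.randint(0, 2, size=n)        # some input state x
    y = W @ x                                  # the system's one and only operation
    print(y)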
Third, consciousness is not the same as the human mind and all its myriad wonderful qualities. That is, for a system to be conscious does not mean that the system must see like us, hear like us, smell like us, taste like us, touch like us, have thoughts like us, have emotions like us, or have moods like us, etc.
If consciousness is fundamental, then different systems will mold it into minds in different ways. Do we think that what it feels like to be a bat is the same as what it feels like to be a dog, or a shrew, or someone who is deaf, blind, quadriplegic, and in a vegetative state?
IIT may be wrong, but I don't think Aaronson's intuition proves that it is.
I found Aaronson's reply to Chalmers, which I think may be helpful:
David Chalmers #125: Thanks very much for the comment! As I once told you, reading The Conscious Mind as a teenager had a significant impact on my thinking, so it’s an honor to have you here on my blog.
Now, regarding your distinction between PHP1 and PHP2: for me, like for Alex Mennen #132, the key question is whether it’s possible to articulate a sense in which a solution to the Pretty-Hard Problem could “still be verifiably correct,” even though it rendered absurd-seeming judgments about consciousness or unconsciousness in cases where we thought we already knew the answers. And, if so, what would the means of verification be?
It’s worthwhile keeping in front of us what we’re talking about. We’re not talking about a scientific hypothesis that contradicts common sense in some subtle but manifestly-testable way, in a realm far removed from everyday experience—like relativity telling us about clocks ticking slower for the twin on the spaceship. Rather, we’re talking about a theory that predicts, let’s say, that a bag of potato chips is more conscious than you or me. (No, I have no reason to think a bag of potato chips has a large Φ-value, but it will suffice for this discussion.)
I don’t know about you, but if the world’s thousand wisest people assured me that such a theory had been shown to be correct, my reaction wouldn’t be terror that I had gravely underestimated the consciousness of potato-chip bags, or that I’d inadvertently committed mass murder (or at least chipslaughter) at countless snacktimes. My reaction, instead, would be that these wise people must be using the word “consciousness” to mean something different than what I meant by that word—and that the very fact that potato-chip bags were “conscious” by their definition was virtually a proof of that semantic disagreement. So, following a strategy you once recommended, I’d simply want to ban the word “consciousness” from the discussion, and see whether the wise people could then convey to me the content of what had been discovered about potato-chip bags.
By contrast, suppose there were an improved consciousness-measure Φ’, and suppose Φ’ assigned tiny values to livers, existing computers, and my Vandermonde system, large values to human brains, and somewhat smaller but still large values to chimpanzee and dolphin brains; and after years of study, no one could construct any system with a large Φ’ that didn’t seem to look and act like a brain. In that case, I wouldn’t be able to rule out the hypothesis that what people were referring to by large Φ’ was indeed what I meant by consciousness, and would accordingly be interested in knowing the Φ’-values for fetuses, coma patients, AIs, and various other interesting cases.
What I’m missing, right now, is what sort of state of affairs could possibly convince me that (a) potato-chip bags have property X, and (b) property X refers to the same thing that I had previously meant by “consciousness.”