. . . Maybe our sense of experience is dependent on the way the switches themselves work, and arises as a byproduct of the functioning of the system. In other words, substrate independence may require replicating not only the neuronal switches but also the fields and other properties generated by biological cells, and it might not be possible to do that with anything other than biological cells arranged in a precise way. Or maybe it has something to do with the beliefs found in the land of woo. I don't know (though I wouldn't personally place any bets there).
I take that as a modification/moderation of what you referred to as a 'straw man' [without identifying the straw man] in your response last night to my earlier question: "So the next question is what is meant by 'artificial neurons'?" If so, I think you're now on the right track toward understanding what Tononi and Koch are saying and doing with IIT 3.0. The earlier two versions of IIT presupposed that human consciousness (on the basis of which mind develops) could be explained in terms of quantities of 'information' that somehow produced the qualities of information we recognize in our own consciousness and that of others. Tononi and Koch make it very clear in the paper we've been referring to in this thread that they now recognize that qualitative experience is what calls for explanation in consciousness studies and philosophy of mind. And so they now begin with characteristics of human consciousness previously recognized through phenomenological descriptions of human responses to the encountered environments in which we and other animals live. Somewhere along the way in their attempt to build an informational theory of consciousness, quantified in terms of sufficient 'integration' of information, they have recognized (perhaps with help from others working in consciousness studies and philosophy of mind) that the 'integration' they postulate arises in the interaction of the subjective and objective poles of experience already elaborated in phenomenological philosophy (and implicit as well in Jaak Panksepp's identification of the 'affectivity' of primitive organisms leading to proto-consciousness and ultimately to consciousness as we experience it). If their original goal was to explain how consciousness arises in a material world described in solely 'objectivist' terms, it appears that they are no longer comfortable working within those terms, within that presupposition. IIT 3.0 thus now pursues an inquiry distinct from the question of whether 'artificial intelligences' can be expected to experience qualitative consciousness as it shows up in the behaviors of animals and humans interacting in and with their physical environments over time.
Yesterday Soupie linked (in the C&P thread) another paper by Tononi and Koch, also published in May of this year, in which they elaborate the differences between IIT 1.0/2.0 and IIT 3.0 for a technically knowledgeable audience of researchers who have been attempting to use the earlier versions of IIT in their own research:
http://www.ploscompbiol.org/article/fetchObject.action?uri=info:doi/10.1371/journal.pcbi.1003588&representation=PDF
That paper is daunting for ordinary readers, but its concluding paragraphs are not difficult to understand:
". . . the primary aim of IIT 3.0 is simply to begin characterizing, in a self-consistent and explicit manner, the fundamental properties of consciousness and of the physical systems that can support it. Hopefully, heuristic measures and experimental approaches inspired by this theoretical framework will make it possible to test some of the predictions of the theory [14,69]. Deriving bounded approximations to the explicit formalism of IIT 3.0 is also crucial for establishing in more complex networks how some of the properties described here scale with system size and as a function of system architecture.

The above formulation of IIT 3.0 is also incomplete:

i) We did not discuss the relationship between MICS and specific aspects of phenomenology, such as the clustering into modalities and submodalities, and the characteristic 'feel' of different aspects of experience (space, shape, color and so on; but see [4–6,18]).

ii) In the examples above, we assumed that the 'micro' spatio-temporal grain size of elementary logic gates updating every time step was optimal. In general, however, for any given system the optimal grain size needs to be established by examining at which spatio-temporal level integrated information reaches a maximum [20]. In terms of integrated information, then, the macro may emerge over the micro, just like the whole may emerge above the parts.

iii) While emphasizing that meaning is always internal to a complex (it is self-generated and self-referential), we did not discuss in any detail how meaning originates through the nesting of concepts within MICS (its holistic nature).

iv) In IIT, the relationship between the MICS generated by a complex of mechanisms, such as a brain, and the environment to which it is adapted, is not one of 'information processing', but rather one of 'matching' between internal and external causal structures [4,6]. Matching can be quantified as the distance between the set of MICS generated when a system interacts with its typical environment and those generated when it is exposed to a structureless ('scrambled') version of it [6,70]. The notion of matching, and the prediction that adaptation to an environment should lead to an increase in matching and thereby to an increase in consciousness, will be investigated in future work, both by evolving simulated agents in virtual environments ('animats' [71–73]), and through neurophysiological experiments.

v) IIT 3.0 explicitly treats integrated information and causation as one and the same thing, but the many implications of this approach need to be explored in depth in future work. For example, IIT implies that each individual consciousness is a local maximum of causal power. Hence, if having causal power is a requirement for existence, then consciousness is maximally real. Moreover, it is real in and of itself – from its own intrinsic perspective – without the need for an external observer to come into being."