The following is important regarding AI and the immense gap between how we understand the world experientially and culturally, and what gets lost or distorted when language is mistaken for lived reality. How can AI avoid profound confusion between what is real and palpable and what is reduced to concepts and images in language itself?
Robo-writers: the rise and risks of language-generating AI
The following paper should help to clarify this issue:
HYPOTHESIS AND THEORY ARTICLE
Front. Psychol., 09 May 2011
https://doi.org/10.3389/fpsyg.2011.00086
The radical plasticity thesis: how the brain learns to be conscious
Axel Cleeremans*
Consciousness, Cognition and Computation Group, Université Libre de Bruxelles, Bruxelles, Belgium
In this paper, I explore the idea that consciousness is something that the brain learns to do rather than an intrinsic property of certain neural states and not others. Starting from the idea that neural activity is inherently unconscious, the question thus becomes: How does the brain learn to be conscious? I suggest that consciousness arises as a result of the brain’s continuous attempts at predicting not only the consequences of its actions on the world and on other agents, but also the consequences of activity in one cerebral region on activity in other regions. By this account, the brain continuously and unconsciously learns to redescribe its own activity to itself, so developing systems of meta-representations that characterize and qualify the target first-order representations. Such learned redescriptions, enriched by the emotional value associated with them, form the basis of conscious experience. Learning and plasticity are thus central to consciousness, to the extent that experiences only occur in experiencers that have learned to know they possess certain first-order states and that have learned to care more about certain states than about others. This is what I call the “Radical Plasticity Thesis.” In a sense thus, this is the enactive perspective, but turned both inwards and (further) outwards. Consciousness involves “signal detection on the mind”; the conscious mind is the brain’s (non-conceptual, implicit) theory about itself. I illustrate these ideas through neural network models that simulate the relationships between performance and awareness in different tasks.
Consider the humble but proverbial thermostat. A thermostat is a simple device that can turn a furnace on or off depending on whether the current temperature exceeds a set threshold. Thus, the thermostat can appropriately be said to be sensitive to temperature. But is there some sense in which the thermostat can be characterized as being aware of temperature? Contra Chalmers (1996), I will argue that there is no sense in which the thermostat can be characterized as being aware of temperature. There are two important points that I would like to emphasize in developing this argument. The first is that there is no sense in which the thermostat can be characterized as being aware of temperature because it does not know that it is sensitive to temperature. The second point is that there is no sense in which the thermostat can be characterized as being aware of temperature because it does not care about whether its environment is hot or cold. I will further argue that these two features – knowledge of one’s own internal states and the emotional value associated with such knowledge – are constitutive of conscious experience. Finally, I will argue that learning (or, more generally, plasticity) is necessary for both features to emerge in cognitive systems. From this, it follows that consciousness is something that the brain learns to do through continuously operating mechanisms of neural plasticity. This I call the “Radical Plasticity Thesis.”
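To make this distinction concrete in code rather than prose, here is a minimal toy sketch (my own illustration, not from the paper) of a thermostat that is merely sensitive to temperature, alongside a hypothetical variant with the two ingredients Cleeremans says the thermostat lacks: a record of its own states ("knowing") and a learned valuation over them ("caring"). The class names, the update rule, and the numbers are all assumptions made purely for illustration.

```python
# Toy sketch (not from the paper): "sensitive" versus "knowing and caring".

class Thermostat:
    """Responds to temperature; holds no representation of its own states."""
    def __init__(self, threshold: float):
        self.threshold = threshold

    def step(self, temperature: float) -> str:
        # Pure stimulus-response mapping: no record of being in a state,
        # no preference for one state over another.
        return "furnace_off" if temperature > self.threshold else "furnace_on"


class SelfModelingAgent(Thermostat):
    """Hypothetical extension adding (1) a redescription of its own
    first-order states and (2) a learned valuation attached to them."""
    def __init__(self, threshold: float):
        super().__init__(threshold)
        self.history = []   # meta-representation: "I was in state X"
        self.value = {"furnace_on": 0.0, "furnace_off": 0.0}  # crude 'caring'

    def step(self, temperature: float) -> str:
        action = super().step(temperature)
        self.history.append(action)  # knows *that* it responded, and how
        # Arbitrary illustrative rule: warmth-off states accrue positive value.
        self.value[action] += 0.1 if action == "furnace_off" else -0.1
        return action


if __name__ == "__main__":
    plain = Thermostat(threshold=20.0)
    agent = SelfModelingAgent(threshold=20.0)
    for t in (18.0, 22.5, 25.0):
        plain.step(t)
        agent.step(t)
    # The plain thermostat has nothing comparable to print here.
    print(agent.history, agent.value)
```

Nothing in this sketch is claimed to be conscious; it only separates responding to temperature from keeping track of, and assigning value to, one's own responses.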
Information processing can undoubtedly take place without consciousness, as abundantly demonstrated not only by empirical evidence (the best example of which is probably blindsight), but also by the very fact that extremely powerful information-processing machines, namely computers, have now become ubiquitous. Only but a few would be willing to grant any quantum of conscious experience to contemporary computers, yet they are undeniably capable of sophisticated information processing – from recognizing faces to analyzing speech, from winning chess tournaments to helping prove theorems. Thus, consciousness is not information processing; experience is an “extra ingredient” (Chalmers, 2007a) that comes over and beyond mere computation.
With this premise in mind – a premise that just restates Chalmers’ (1996) hard problem, that is, the question of why it is the case that information processing is accompanied by experience in humans and other higher animals – there are several ways in which one can think about the problem of consciousness.
One is to simply state, as per Dennett (e.g., Dennett, 1991, 2001), that there is nothing more to explain. Experience is just (a specific kind of) information processing in the brain; the contents of experience are just whatever representations have come to dominate processing at some point in time (“fame in the brain”); consciousness is just a harmless illusion. From this perspective, it is easy to imagine that machines will be conscious when they have accrued sufficient complexity; the reason they are not conscious now is simply because they are not sophisticated enough: They lack the appropriate architecture perhaps, they lack sufficiently broad and diverse information-processing abilities, and so on. Regardless of what is missing, the basic point here is that there is no reason to assume that conscious experience is anything special. Instead, all that is required is one or several yet-to-be-identified functional mechanisms: Recurrence, perhaps (Lamme, 2003), stability of representation (O’Brien and Opie, 1999), global availability (Baars, 1988; Dehaene et al., 1998), integration and differentiation of information (Tononi, 2003, 2007), or the involvement of higher-order representations (Rosenthal, 1997, 2006), to name just a few.
Another perspective is to consider that experience will never be amenable to a satisfactory functional explanation. Experience, according to some (e.g., Chalmers, 1996), is precisely what is left over once all functional aspects of consciousness have been explained. Notwithstanding the fact that, so defined, experience is simply not something one can approach from a scientific point of view, this position recognizes that consciousness is a unique (a hard) problem in the Cognitive Neurosciences. But that is a different thing from saying that a reductive account is not possible. A non-reductive account, however, is exactly what Chalmers’ Naturalistic Dualism attempts to offer, by proposing that information, as a matter of ontology, has a dual aspect – a physical aspect and a phenomenal aspect. “Experience arises by virtue of its status as one aspect of information, when the other aspect is found embodied in physical processing” (Chalmers, 2007b, p. 366). This position leads him to defend the possibility that experience is a fundamental aspect of reality. Thus, even thermostats, for instance, may be endowed with very simple experiences, in virtue of the fact that they can toggle between two different states.
What, however, do we mean when we speak of “subjective experience” or of “quale”? The simplest definition of these concepts (Nagel, 1974) goes right to the heart of the matter: “Experience” is what it feels like for a conscious organism to be that organism. There is something it is like for a bat to be a bat; there is nothing it is like for a stone to be a stone. As Chalmers (2007a) puts it: “When we see, for instance, we experience visual sensations: The felt quality of redness, the experience of dark and light, the quality of depth in a visual field” (p. 226).
Let us try to engage in some phenomenological analysis at this point to try to capture what it means for each of us to have an experience. Imagine you see a patch of red (Humphrey, 2006). You now have a red experience – something that a camera recording the same patch of red will most definitely not have. What is the difference between you and the camera? Tononi (2007), from whom I borrow this simple thought experiment, points out that one key difference is that when you see the patch of red, the state you find yourself in is but one among billions, whereas for a simple light-sensitive device, it is perhaps one of only two possible states – thus the state conveys a lot more differentiated information for you than for a light-sensitive diode. A further difference is that you are able to integrate the information conveyed by many different inputs, whereas the chip on a camera can be thought of as a mere array of independent sensors among which there is no interaction.
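Tononi's point about differentiated information can be put in rough numbers. The repertoire sizes below are purely illustrative assumptions, but they convey the scale of the difference between a two-state photodiode and a visual system that can find itself in one of billions of discriminable states.

```python
import math

# Back-of-the-envelope comparison; the figure of 1e9 discriminable states
# for a visual system is an illustrative assumption, not a measurement.
photodiode_states = 2
visual_system_states = 1_000_000_000

bits_photodiode = math.log2(photodiode_states)    # 1.0 bit per discrimination
bits_visual = math.log2(visual_system_states)     # ~29.9 bits per discrimination

print(f"Photodiode: {bits_photodiode:.1f} bit")
print(f"Visual system: {bits_visual:.1f} bits")
```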
Hoping not to sound presumptuous, it strikes me, however, that both Chalmers’ (somewhat paradoxically) and Tononi’s analyses miss fundamental facts about experience: Both analyze it as a rather abstract dimension or aspect of information, whereas experience – what it feels like – is anything but abstract. On the contrary, what we mean when we say that seeing a patch of red elicits an “experience” is that the seeing does something to us – in particular, we might feel one or several emotions, and we may associate the redness with memories of red. Perhaps seeing the patch of red makes you remember the color of the dress that your prom night date wore 20 years ago. Perhaps it evokes a vague anxiety, which we now know is also shared by monkeys (Humphrey, 1971). To a synesthete, perhaps seeing the color red will evoke the number 5. The point is that if conscious experience is what it feels like to be in a certain state, then “what it feels like” can only mean the specific set of associations that have been established by experience between the stimulus or the situation you now find yourself in, on the one hand, and your memories, on the other. This is what one means by saying that there is something it is like to be you in this state rather than nobody or somebody else: The set of memories evoked by the stimulus (or by actions you perform, etc.), and, crucially, the set of emotional states associated with each of these memories. This is essentially the perspective that Damasio (2010) defends.
Thus, a first point about the very notion of subjective experience I would like to make here is that it is difficult to see what experience could mean beyond (1) the emotional value associated with a state of affairs, and (2) the vast, complex, richly structured, experience-dependent network of associations that the system has learned to associate with that state of affairs. “What it feels like” for me to see a patch of red at some point seems to be entirely exhausted by these two points. Granted, one could still imagine an agent that accesses specific memories, possibly associated with emotional value, upon seeing a patch of red and who fails to “experience” anything. But I surmise that this would be mere simulation: One could design such a zombie agent, but any real agent that is driven by self-developed motivation, and that cannot help but be influenced by his emotional states, will undoubtedly have experiences much like ours.
Hence, there is nothing it is like for the camera to see the patch of red simply because it does not care: The stimulus is meaningless; the camera lacks even the most basic machinery that would make it possible to ascribe any interpretation to the patch of red; it is instead just a mere recording device for which nothing matters. There is nothing it is like to be that camera at that point in time simply because (1) the experiences of different colors do not do anything to the camera; that is, colors are not associated with different emotional valences; and (2) the camera has no brain with which to register and process its own states. It is easy to imagine how this could be different. To hint at my forthcoming argument, a camera could, for instance, keep a record of the colors it is exposed to, and come to “like” some colors better than others. Over time, your camera would like different colors than mine, and it would also know that in some non-trivial sense. Appropriating one’s mental contents for oneself is the beginning of individuation, and hence the beginning of a self.
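The camera the author imagines here is easy to caricature in a few lines of code. The sketch below is mine, not the paper's: it merely keeps a record of the colors it has been exposed to and attaches a crude, exposure-based "liking" to them, which is all it takes for two cameras with different histories to diverge.

```python
from collections import Counter
import random

# Toy sketch of a camera that records its own color exposures and develops
# preferences. The "familiarity breeds fondness" rule is an illustrative
# assumption, not a mechanism proposed in the paper.

class PreferringCamera:
    def __init__(self):
        self.exposure = Counter()   # record of its own past states
        self.liking = {}            # learned valuation over those states

    def see(self, color: str) -> None:
        self.exposure[color] += 1
        self.liking[color] = self.liking.get(color, 0.0) + 0.05

    def favorite(self) -> str:
        return max(self.liking, key=self.liking.get)


yours, mine = PreferringCamera(), PreferringCamera()
for _ in range(200):
    yours.see(random.choice(["red", "green", "blue"]))
    mine.see(random.choice(["red", "amber", "violet"]))

# Two cameras with different histories now "like" different colors, and each
# holds a record of its own states -- the bare beginning of the individuation
# the paper gestures at. Nothing here is conscious, of course.
print(yours.favorite(), mine.favorite())
```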
Thus a second point about experience that I perceive as crucially important is that it does not make any sense to speak of experience without an experiencer who experiences the experiences. Experience is, almost by definition (“what it feels like”), something that takes place not in any physical entity but rather only in special physical entities, namely cognitive agents.
Chalmers’ (1996) thermostat fails to be conscious because, despite the fact that it can find itself in different internal states, it lacks the ability to remove itself from the causal chain which it instantiates. In other words, it lacks knowledge that it can find itself in different states; it is but a mere mechanism that responds to inputs in certain ways. While there is indeed something to be experienced there (the different states the thermostat can find itself in), there is no one home to be the subject of these experiences – the thermostat simply lacks the appropriate machinery to do so. The required machinery, I surmise, minimally involves the ability to know that one finds itself in such or such a state. . . .
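The abstract mentions neural network models that relate performance to awareness. For readers who think better in code, here is a bare-bones sketch of the general first-order/second-order idea: one network does a task, and a second network observes the first network's hidden states and issues a confidence judgment about them. The layer sizes, the random weights, and the omission of any training are my simplifications for illustration; this is not the architecture reported in the paper or in the Cleeremans, Timmermans & Pasquali article cited below.

```python
import numpy as np

rng = np.random.default_rng(0)

# First-order network: maps a stimulus to a response (the "performance" side).
W1 = rng.normal(scale=0.5, size=(4, 8))   # input -> hidden
W2 = rng.normal(scale=0.5, size=(8, 2))   # hidden -> response

# Second-order network: observes the first-order *hidden* activity and judges
# how much that response can be trusted (the "awareness"/wagering side).
W_meta = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def first_order(stimulus):
    hidden = sigmoid(stimulus @ W1)
    response = sigmoid(hidden @ W2)
    return hidden, response

def second_order(hidden):
    # Redescription of the system's own internal state: a confidence judgment.
    return sigmoid(hidden @ W_meta)

stimulus = rng.normal(size=4)
hidden, response = first_order(stimulus)
confidence = second_order(hidden)
print("response:", np.round(response, 2),
      "confidence in own response:", np.round(confidence, 2))

# In the actual simulations both networks are trained (the first on the task,
# the second on predicting the first network's success); training is omitted
# here to keep the sketch to its bare structural point.
```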
Also this paper:
Cleeremans, A., Timmermans, B., & Pasquali, A. (2007). Consciousness and metarepresentation: A computational sketch. Neural Networks, 20(9), 1032-1039.