Soupie
Paranormal Adept
I just dropped in (to see what condition my condition was in): How ‘blind insight’ changes our view of metacognition | NeuroBanter
Neuroscientists have long appreciated that people can make accurate decisions without knowing they are doing so. This is particularly impressive in blindsight: a phenomenon where people with damage to the visual parts of their brain can still make accurate visual discriminations while claiming to not see anything. But even in normal life it is quite possible to make good decisions without having reliable insight into whether you are right or wrong.
In a paper published this week in Psychological Science, our research group – led by Ryan Scott – has for the first time shown the opposite phenomenon: blind insight. This is the situation in which people know whether or not they’ve made accurate decisions, even though they can’t make decisions accurately!
This is important because it changes how we think about metacognition. ...
The discovery of blind insight changes the way we think about decision-making. ... Exactly what is going on remains an open question. Several possible mechanisms could account for our results. One exciting possibility appeals to predictive processing, which is the increasingly influential idea that perception depends on top-down predictions about the causes of sensory signals. If top-down influences are also involved in metacognition, they could carry the additional information needed for blind insight. This would mean that metacognition, like perception, is best understood as a process of probabilistic inference. ...
This brings us to consciousness (of course). Metacognitive judgments are often used as a proxy for consciousness, on the logic that confident decisions are assumed to be based on conscious experiences of the signal (e.g., the dim light was consciously seen), whereas guesses signify that the signal was processed only unconsciously. If metacognition involves top-down inference, this raises the intriguing possibility that metacognitive judgments actually give rise to conscious experiences, rather than just provide a means for reporting them.
In the comments, the author is asked to expand on how this research relates to consciousness:
Well that’s the highly speculative bit! In visual perception, we are already starting to see how top-down expectations can have dramatic influences on conscious contents – or on what reaches consciousness in the first place (we are writing up some of these experiments now; a key issue here is to distinguish expectation from attention, which is difficult but possible). So the thought is that if metacognition involves top-down processes – perhaps in shaping expectations about the statistics of probability distributions underlying perceptual decision – then the act of making a metacognitive judgement could actually shape (or maybe even ‘give rise to’ in the sense of crossing a threshold) – the corresponding conscious contents. This fits nicely with the idea that behavioural report of an experience (which by definition involves metacognition) is an action, not just a passive read-out of some pre-existing information. And actions can shape how perceptions arise from sensations. What we need to do next is to develop theoretical models of how metacognitive judgements actually arise, and then do the experiments to show whether (and how) engaging in these judgements changes the conscious contents these judgements are about.
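The core statistical point can be made concrete with a toy simulation (my own sketch, not the paper's model or data): if the metacognitive system has access to evidence that the first-order decision never received, then confidence can track accuracy even while the decisions themselves sit at chance. All names here (`trial`, `meta_evidence`) are illustrative assumptions.

```python
# Toy illustration of "blind insight" as probabilistic inference:
# the first-order decision sees only noise, so accuracy is at chance,
# yet the metacognitive level sees a weak copy of the stimulus and can
# judge whether the guess was right.
import random

random.seed(1)

def trial():
    stim = random.choice([-1, 1])          # true stimulus category
    # First-order decision: no usable evidence, so it is a pure guess.
    decision = random.choice([-1, 1])
    # Metacognitive evidence: a weak, noisy copy of the stimulus that
    # the decision process never had access to.
    meta_evidence = stim + random.gauss(0, 1.5)
    # Confidence = does the guess agree with the meta-level evidence?
    confident = (meta_evidence > 0) == (decision > 0)
    correct = decision == stim
    return correct, confident

trials = [trial() for _ in range(20000)]
accuracy = sum(c for c, _ in trials) / len(trials)
acc_when_confident = (sum(1 for c, f in trials if c and f)
                      / max(1, sum(1 for _, f in trials if f)))
acc_when_guessing = (sum(1 for c, f in trials if c and not f)
                     / max(1, sum(1 for _, f in trials if not f)))
print(f"overall accuracy:        {accuracy:.2f}")           # ~ chance
print(f"accuracy when confident: {acc_when_confident:.2f}")  # above chance
print(f"accuracy when guessing:  {acc_when_guessing:.2f}")   # below chance
```

Confidence here carries real information about accuracy even though no trial-by-trial decision is better than a coin flip, which is the structure the blind-insight result points at.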
My own thinking has been that an organism can have subjective experience, but lack self-aware consciousness (and the ability to reflect on their subjective experience).
However, the author's musings about metacognition and consciousness, as well as Graziano's theory, imply that subjective experience — the "what it's like" — arises in part due to meta-awareness.
This is not my current belief.
How consciousness works – Michael Graziano – Aeon
Lately, the problem of consciousness has begun to catch on in neuroscience. How does a brain generate consciousness? In the computer age, it is not hard to imagine how a computing machine might construct, store and spit out the information that ‘I am alive, I am a person, I have memories, the wind is cold, the grass is green,’ and so on. But how does a brain become aware of those propositions? The philosopher David Chalmers has claimed that the first question, how a brain computes information about itself and the surrounding world, is the ‘easy’ problem of consciousness. The second question, how a brain becomes aware of all that computed stuff, is the ‘hard’ problem.
The related question I have asked is: How does information become aware of itself?
This question is scientifically approachable, and the attention schema theory supplies the outlines of an answer.
One way to think about the relationship between brain and consciousness is to break it down into two mysteries. I call them Arrow A and Arrow B. Arrow A is the mysterious route from neurons to consciousness. If I am looking at a blue sky, my brain doesn’t merely register blue as if I were a wavelength detector from Radio Shack. I am aware of the blue. ...
The attention schema theory does not suffer from these difficulties. It can handle both Arrow A and Arrow B. Consciousness isn’t a non-physical feeling that emerges. Instead, dedicated systems in the brain compute information. Cognitive machinery can access that information, formulate it as speech, and then report it. When a brain reports that it is conscious, it is reporting specific information computed within it. It can, after all, only report the information available to it. In short, Arrow A and Arrow B remain squarely in the domain of signal-processing. ...
What are out-of-body experiences then? One view might be that no such things exist, that charlatans invented them to fool us. Yet such experiences can be induced in the lab, as a number of scientists have now shown. A person can genuinely be made to feel that her centre of awareness is disconnected from her body. The very existence of the out-of-body experience suggests that awareness is a computation and that the computation can be disrupted. Systems in the brain not only compute the information that I am aware, but also compute a spatial framework for it, a location, and a perspective. Screw up the computations, and I screw up my understanding of my own awareness. ...
I think Graziano has something here, but it's not the answer to the Hard Problem. Like the first paper on metacognition, it might indicate that self-awareness (or meta-awareness) is a necessary ingredient for subjective experience.
If this is the case, then organisms lacking meta- or self-awareness will not have human-like phenomenal experiences; they will not merely lack the ability to report them.
Graziano suggests that his model provides an answer to the hard problem, and while attention/awareness may ultimately play a role in the realization of consciousness, I think what Graziano's model does best is describe how the brain creates a model of the mental self. The "I" that resides inside my body instead of the big tree out front.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3223025/pdf/nihms328502.pdf
... Second, people routinely compute the state of awareness of other people. A fundamental part of social intelligence is the ability to compute information of the type, “Bill is aware of X.” In the present proposal, the awareness we attribute to another person is our reconstruction of that person’s attention. This social capability to reconstruct other people’s attentional state is probably dependent on a specific network of brain areas that evolved to process social information, though the exact neural instantiation of social intelligence is still in debate.
Third, in the present hypothesis, the same machinery that computes socially relevant information of the type, “Bill is aware of X,” also computes information of the type, “I am aware of X.” When we introspect about our own awareness, or make decisions about the presence or absence of our own awareness of this or that item, we rely on the same circuitry whose expertise is to compute information about other people’s awareness.
Fourth, awareness is best described as a perceptual model. It is not merely a cognitive or semantic proposition about ourselves that we can verbalize. Instead it is a rich informational model that includes, among other computed properties, a spatial structure. A commonly overlooked or entirely ignored component of social perception is spatial localization. Social perception is not merely about constructing a model of the thoughts and emotions of another person, but also about binding those mental attributes to a location. We do not merely reconstruct that Bill believes this, feels that, and is aware of the other, but we perceive those mental attributes as localized within and emanating from Bill. In the present hypothesis, through the use of the social perceptual machinery, we assign the property of awareness to a location within ourselves. ...
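The structural claim in the quoted passage can be sketched in a few lines (my own toy reading of the proposal, not Graziano's implementation): a single routine builds an awareness model for another agent and for oneself, and binds that awareness to a spatial location. The names (`AwarenessModel`, `model_awareness`) are hypothetical.

```python
# Toy sketch: one and the same machinery attributes awareness to others
# ("Bill is aware of X") and to the self ("I am aware of X"), and ties
# that attributed awareness to a location.
from dataclasses import dataclass

@dataclass
class AwarenessModel:
    owner: str                    # whose awareness is being modeled
    target: str                   # what they are modeled as aware of
    location: tuple[float, float] # where the awareness is perceived to reside

def model_awareness(owner: str, attended_target: str,
                    body_location: tuple[float, float]) -> AwarenessModel:
    """Reconstruct an agent's attention as an attributed awareness model.
    Invoked identically whether the agent is another person or the self."""
    return AwarenessModel(owner, attended_target, body_location)

# "Bill is aware of the apple" -- awareness localized in Bill's body.
bill = model_awareness("Bill", "apple", (3.0, 1.0))
# "I am aware of the apple" -- the same routine, self-directed,
# localizes awareness inside my own body.
me = model_awareness("I", "apple", (0.0, 0.0))

assert type(bill) is type(me)  # one mechanism, two owners
```

The point of the sketch is only that nothing in the mechanism distinguishes the self-directed case from the other-directed case except its inputs, which is the symmetry the "Third" point above asserts.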
Again, I think Graziano has something here: the same brain processes that allow us to project awareness and intention onto other objects (oftentimes erroneously) likely play a role in the creation of our own "I" centered in our bodies.