Consciousness and the Paranormal

@Constance - is Tononi saying that consciousness is necessary given the proper organization?

In the opening of this lecture, he discusses it in terms of ". . . what I might do to you in a few years" and evokes the Zombie thought experiment, but then veers away, and he ends with:

"why would it (consciousness) evolve b/c if people have the idea that consciousness is an epiphenomenon that sits on top of the working of the brain and the brain does all the behavior, why should it evolve at all? - this theory gives a very clear answer to that question . . . consciousness is a fundamental property, like mass or charge" . . . and then it cuts off and I can't find the full talk on the internet. I was hoping he would deal with mental causation. Maybe it's in the paper?


This is a whole interesting area - b/c it means qualia, or what it "feels like," would have to figure in evolutionarily and so would have to be causal?
 
@Constance

Raises lots of fascinating questions:

http://www.nytimes.com/2010/09/21/science/21consciousness.html?pagewanted=3&_r=2&sq=integrated information theory&st=nyt&scp=1

"In one series of experiments, researchers put people in vegetative or minimally conscious states into fMRI scanners and asked them to think about playing tennis. In some patients, regions of the brain became active in a pattern that was a lot like that in healthy subjects.

Dr. Tononi thinks these experiments identify consciousness in some patients, but they have serious limitations. “It’s complicated to put someone in a scanner,” he said. He also notes that thinking about tennis for 30 seconds can demand a lot from people with brain injuries. “If you get a response I think it’s proof that’s someone’s there, but if you don’t get it, it’s not proof of anything,” Dr. Tononi said."

This is where false negatives become critical . . . and it still doesn't quantify subjectivity - level of consciousness doesn't equate to the quality of subjective experience. For example, some of the more pleasant subjective states I've experienced have been coming to consciousness from sleep . . . let's say we measure Phi there and it's lower than someone in a persistent vegetative state? Perhaps that person is in a continuously blissful state - should we sustain that life? I've also had some very disturbing if incoherent experiences coming out of what I assume is deep sleep. For all we know, our comatose patient is in a perpetually painful subjective state composed of inchoate horror.

If my Phi falls below the level of a dog . . . do you pull the plug on me? You wouldn't on a dog obviously because everything is fine for the dog - but what does a dog-level of Phi mean to me? Compare to IQ - a person with a normal IQ for a dog (I know that's not really possible to compare) wouldn't be doing very well as a human being - and I bring that up b/c we've made these kinds of decisions (sterilization for example) using IQ tests in the past. Would we sign a living will based on a specific Phi level?

But what am I experiencing for a given level of Phi? We have to rely on verbal reports as indicated in the experiment.

Should a presidential candidate have to take a Phi test?

But if we take the technology a step further - and we have a way to represent the images and sounds in my head - convert these "signals" to a screen - we still don't know what I am experiencing. Suppose it shows a beautiful dream of me running in the field with my dog - and this dream repeats over and over and over . . . do we pull the plug b/c nothing else seems to be going on, or do we presume I'm having a great time . . . or will I get bored, and how would you tell from the screen? Now, suppose it's a horrific nightmare image playing over and over? And who has to look at this - family? Do they decide it's time to pull the plug? Do we have to inform them that despite what they see on the screen - I might be very peaceful and happy?

@Soupie:

Now, the crux of all this is to do with the hard problem - b/c a solution to the hard problem would give us the means to objectively answer all of the above questions . . . so until we have that, I don't think we have answered the hard problem of consciousness. And to me, that's more than a little gap in any existing theory. And I think Chalmers, Nagel and Tononi might not disagree!
 
@scmder You have a double negative in the first sentence and I'm not sure what it's referring to - when you say it doesn't mean it's untrue - what point are you saying is true? Epiphenomenalism?

Epiphenomenalism being undesirable <> Epiphenomenalism being true or untrue

@scmder The whole point of the argument is that I would be unable to show a living human with no qualia - that a Zombie would be indistinguishable from anyone else.

Haha okay. Show me a zombie indistinguishable from anyone else and I'll show you a working steam engine with no steam.

So let's combine the discussion of Tononi's consciousness meter with science fiction. Let's assume Zombies aren't metaphysically impossible but they don't yet exist - so the meter is now an accurate guide to consciousness.

Have you ever read Philip K. Dick's Second Variety? It was filmed as Screamers. Another story, more disturbing to me, was filmed with Gary Sinise in 2001 as Impostor.

Impostor (2001) - IMDb

So let's assume robotics has come to this level: AI+ so it's smarter than we are.

But the Phi meter should still work, right? In fact, it might indicate who the robot is by showing a higher level of Phi. This means robots trying to infiltrate human society would have to be no more conscious than humans as defined by Tononi . . . but what does that mean? I'm not sure.

And what does even a human level of Phi mean in a robot? I'm not sure of that either - the subjective experience I guess would be very different - what's it like to be a robot could be very, very different than what it's like to be a human.

But - what if there isn't anything it is like to be a robot? Because, although they never existed before - this new form of intelligence might be entirely unconscious (I don't see anything in Tononi's theory that says otherwise) - even if it were part of our evolutionary history to be conscious - it might only be so b/c of the material used (carbon for example) or because of other evolutionary happenstance - how do we know there aren't other ways to build a brain - we certainly admit and ascribe tremendous capacity to our own unconscious mind (but this may be a misnomer by Tononi's theory) - ... or are there arguments for mental causation in humans that would apply to this entirely other form of intelligence and to all possible forms?

This is what would be involved in dealing with the Zombie argument. John Searle, by the way, thinks that there is something in the brain itself that is necessary for consciousness - so that anything conscious would be a brain (narrowly defined, if not a human brain). What arguments are there that the only kind of consciousness is human consciousness? Is that intuitive? Or is it unappealing?

Point is - the Phi meter would discriminate the conscious human from the unconscious but not the conscious human from the unconscious robot.
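To make the "whole versus parts" intuition behind Phi concrete, here's a toy sketch in Python. To be clear, this is not Tononi's actual Phi (which minimizes over all partitions and uses perturbational probability distributions), and the names `mutual_information` and `toy_integration` are mine - it only illustrates why a coupled system scores higher than an equally "busy" but decoupled one:

```python
from collections import Counter
from itertools import product
from math import log2

def mutual_information(pairs):
    """Mutual information (bits) between paired samples, weighted uniformly."""
    n = len(pairs)
    px, py, pxy = Counter(), Counter(), Counter(pairs)
    for x, y in pairs:
        px[x] += 1
        py[y] += 1
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

def toy_integration(update):
    """'Whole minus parts': information the whole two-node system carries
    about its next state, minus what each node carries on its own."""
    states = list(product([0, 1], repeat=2))  # all past states, uniform
    whole = mutual_information([(s, update(s)) for s in states])
    parts = sum(
        mutual_information([(s[i], update(s)[i]) for s in states])
        for i in range(2)
    )
    return whole - parts
```

A system whose two nodes swap states each step scores 2 bits (only the whole predicts its next state; each node alone predicts nothing), while two nodes that each just copy themselves score 0 - the parts already account for everything the whole does.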
 
@scmder You have a double negative in the first sentence and I'm not sure what it's referring to - when you say it doesn't mean it's untrue - what point are you saying is true? Epiphenomenalism?

Epiphenomenalism being undesirable <> Epiphenomenalism being true or untrue

@scmder The whole point of the argument is that I would be unable to show a living human with no qualia - that a Zombie would be indistinguishable from anyone else.

Haha okay. Show me a zombie indistinguishable from anyone else and I'll show you a working steam engine with no steam.

That's packed very tightly, whether by intention or not. You seem to be saying that, as you see it, a) humans are distinguishable from zombies [which by definition do not experience the world the way we do because they lack consciousness], but that the only way we could distinguish them from ourselves is that they would not, like working steam engines, produce 'steam'. Steam appears in your system to stand in for 'qualia', which -- again in your view -- are produced by the brain's reception of naked 'information' [information without qualities, such as digital binary information] in the world rather than through consciously apprehended experiences of embodied consciousnesses existing in the phenomenological thickness of the natural world we are born into. Or maybe you're saying something else. {?} It would help if you would work harder to clarify what you're saying. I realize that you tried earlier on to make your IPS theory of reality clear, but it made no sense to me then and that situation continues. Maybe it's time to discontinue this dialogue?
 
What animals do this? Even in our fighting we are unique.........



What machine would do the above - what would have to be present to be 'human' in this way? Though we do anthropomorphize our teddy bears - even in this famous scene, the 'soul' shows up (the dove} - it's the only way we have of understanding consciousness/awareness........


I wish I could watch the clips - but I love the Blade Runner scene - and the way Roy Batty shows mercy to Deckard before dying. Hauer was perfectly cast and has been a favorite actor - it's been years since I've seen it, but I remember he was very powerful in Escape from Sobibor.
 
@Scmder Is Tononi saying that consciousness is necessary given the proper organization?

I'm not sure if that is Tononi's belief, but that is my belief.

I am a panpsychist; I believe that all information processing systems of X level of complexity emit qualia.

Re: epiphenomenalism

My intuition is that having a stream of experience (integrated information) is epiphenomenal, but that a self-aware stream of experience (integrated information) has causal influence on the system. This is where theory of mind and self-regulation would enter the picture.
 
That's packed very tightly, whether by intention or not. You seem to be saying that, as you see it, a) humans are distinguishable from zombies [which by definition do not experience the world the way we do because they lack consciousness], but that the only way we could distinguish them from ourselves is that they would not, like working steam engines, produce 'steam'. Steam appears in your system to stand in for 'qualia', which -- again in your view -- are produced by the brain's reception of naked 'information' [information without qualities, such as digital binary information] in the world rather than through consciously apprehended experiences of embodied consciousnesses existing in the phenomenological thickness of the natural world we are born into. Or maybe you're saying something else. {?} It would help if you would work harder to clarify what you're saying. I realize that you tried earlier on to make your IPS theory of reality clear, but it made no sense to me then and that situation continues. Maybe it's time to discontinue this dialogue?
I can reply to this later... I gotta run now. See my comparison of my IPS/music analogy to Tononi's IIT. I can't make it much clearer than that. :)

But yes, you are right. You don't grok my view. I'm not sure why though.
 
@Scmder Is Tononi saying that consciousness is necessary given the proper organization?

I'm not sure if that is Tononi's belief, but that is my belief.



Re: epiphenomenalism

My intuition is that having a stream of experience (integrated information) is epiphenomenal, but that a self-aware stream of experience (integrated information) has causal influence on the system. This is where theory of mind and self-regulation would enter the picture.

What is the causal mechanism of experience on the physical system?
 
@Scmder What is the causal mechanism of experience on the physical system?

Haha, you couldn't wait to ask that, huh! ;) I already gave you an answer. You must have missed it in your rush to throw paradoxes at me.

@Constance You seem to be saying that, as you see it, a) humans are distinguishable from zombies [which by definition do not experience the world the way we do because they lack consciousness], but that the only way we could distinguish them from ourselves is that they would not, like working steam engines, produce 'steam'. Steam appears in your system to stand in for 'qualia', which -- again in your view -- are produced by the brain's reception of naked 'information' [information without qualities, such as digital binary information] in the world rather than through consciously apprehended experiences of embodied consciousnesses existing in the phenomenological thickness of the natural world we are born into...

It would help if you would work harder to clarify what you're saying.

Oh, that gave me a chuckle!

Since you seem to "understand" Tononi's IIT, we can say that I share that view.

What you don't seem to get is that just because a brain has experiences (in IIT that would be integrated information) that doesn't mean there is an awareness that one is having experiences.

I'm not sure how I can state that any more clearly. You may disagree with this concept, but that doesn't mean 1) it's unclear, or 2) that it's wrong.

Let me try this:

A cat has qualia. A cat is not aware that it is a cat with qualia.

A human has qualia. A human is aware that it is a human with qualia.

Thoughts?
 
@Scmder What is the causal mechanism of experience on the physical system?

Haha, you couldn't wait to ask that, huh! ;) I already gave you an answer. You must have missed it in your rush to throw paradoxes at me.

@Constance You seem to be saying that, as you see it, a) humans are distinguishable from zombies [which by definition do not experience the world the way we do because they lack consciousness], but that the only way we could distinguish them from ourselves is that they would not, like working steam engines, produce 'steam'. Steam appears in your system to stand in for 'qualia', which -- again in your view -- are produced by the brain's reception of naked 'information' [information without qualities, such as digital binary information] in the world rather than through consciously apprehended experiences of embodied consciousnesses existing in the phenomenological thickness of the natural world we are born into...

It would help if you would work harder to clarify what you're saying.

Oh, that gave me a chuckle!

Since you seem to "understand" Tononi's IIT, we can say that I share that view.

What you don't seem to get is that just because a brain has experiences (in IIT that would be integrated information) that doesn't mean there is an awareness that one is having experiences.

I'm not sure how I can state that any more clearly. You may disagree with this concept, but that doesn't mean 1) it's unclear, or 2) that it's wrong.

Let me try this:

A cat has qualia. A cat is not aware that it is a cat with qualia.

A human has qualia. A human is aware that it is a human with qualia.

Thoughts?

I'm just trying to clarify your position. So yes, I missed it - would you mind repeating it? The question, to be specific:

Phenomenal states deal with the first-person aspect of the mind, whereas psychological states deal with the third-person aspect of the mind.

Mental Causation
Book review of David Chalmers
“The paradox to be explained is not that body and mind communicate but that cognition and consciousness communicate.”

So the issue is how a subjective experience has a causal effect on a physical system. In the case of "I am hungry, so I go to the refrigerator," what initiates the action is the subjective feeling of hunger, not any physical mechanism that preceded the subjective feeling of hunger. Otherwise, we could say they co-occur and/or that the physical mechanism accounts for both.

As far as I know, that's not a paradox.
 
@scmder “The paradox to be explained is not that body and mind communicate but that cognition and consciousness communicate.” ... As far as I know, that's not a paradox.

Huh?

Here's my "position":
How do qualia supervene on the IPS from which they emerge? I'm not sure, but I believe they do. Like Chalmers suggests, this is in need of exploring.

How does a flock supervene on the individual birds of which it is composed? How does liquid supervene on the molecules of which it is composed? How does a forest supervene on the trees of which it is composed? In all these cases, it's clear that the emergent property supervenes, albeit indirectly, on the units of which it is composed. I think qualia and IPSs are no different.
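For what it's worth, the flock example can be made concrete in a few lines of Python (a toy sketch - `center` is just my stand-in for any collective property): the flock-level property is fixed entirely by the birds' positions and can't change unless some bird changes, which is roughly what supervenience claims.

```python
import random

random.seed(0)
# A toy "flock": each bird is nothing but a position.
birds = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(500)]

def center(flock):
    """A flock-level property: no individual bird has a 'center'."""
    xs, ys = zip(*flock)
    return (sum(xs) / len(flock), sum(ys) / len(flock))

# The collective property supervenes on the parts: it cannot change
# unless at least one bird's position changes.
cx, cy = center(birds)
```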

Re: epiphenomenalism

My intuition is that having a stream of experience (integrated information) is epiphenomenal, but that a self-aware stream of experience (integrated information) has causal influence on the system. This is where theory of mind and self-regulation would enter the picture.
 
@scmder “The paradox to be explained is not that body and mind communicate but that cognition and consciousness communicate.” ... As far as I know, that's not a paradox.

Huh?

Here's my "position":

That's a quote from the Chalmers book review - not my words. I don't think of the question as being a paradox.

See SEP on mental causation - specifically the problems with Property Dualism.

Mental Causation (Stanford Encyclopedia of Philosophy)

One possibility is that subjective experience just isn't causal - that it is just what it feels like as things are happening. Some experiments seem to be able to predict a person's actions before they make a decision to do something (under certain conditions) . . . which supports (but doesn't prove) the idea that our subjective experience doesn't have any effect on what we actually do - that we can do everything we do without it feeling like anything at all. If the feeling comes along a few thousandths of a second after the decision is actually made, then the big question is why it comes along at all. This rules out any evolutionary explanation - that we feel because feeling has survival value . . . other explanations in terms of consciousness being fundamental have possibilities - but then the question is why do conscious experiences match what is happening to us? If the subjective experience of pain is not causal - then why should we feel pain?
 
@scmder “The paradox to be explained is not that body and mind communicate but that cognition and consciousness communicate.” ... As far as I know, that's not a paradox.

Huh?

Here's my "position":

As far as I know - no theory solves the hard problem of consciousness or mental causation . . . Chalmers says that all the time - even in the video I posted, I think it's Chalmers who says we don't have any idea what such a theory would look like in detail, and Nagel would agree, I'm pretty sure . . . so all of this seems to call for a paradigm change in how we view "what-is" . . . or perhaps we have to have a theory within our current physics (because all sciences right now are seen as reducing to physics) that solves the hard problem and can thus make predictions about subjective experience the same way it makes predictions about physical things . . . OR it somehow has to show that there really is no hard problem of consciousness (e.g. eliminative materialism) - but I haven't seen anything that does either of these without major challenges. This doesn't mean we throw out everything we know or that we can't make progress.

@Constance posted this a while back on quantum theories of consciousness:

Quantum Approaches to Consciousness (Stanford Encyclopedia of Philosophy)

It is widely accepted that consciousness or, more generally, mental activity is in some way correlated to the behavior of the material brain. Since quantum theory is the most fundamental theory of matter that is currently available, it is a legitimate question to ask whether quantum theory can help us to understand consciousness. Several programmatic approaches answering this question affirmatively, proposed in recent decades, will be surveyed. It will be pointed out that they make different epistemological assumptions, refer to different neurophysiological levels of description, and use quantum theory in different ways. For each of the approaches discussed, problematic and promising features will be equally highlighted.
 
Steve wrote:

. . . what does even a human level of Phi mean in a robot? I'm not sure of that either - the subjective experience I guess would be very different - what's it like to be a robot could be very, very different than what it's like to be a human.

The question is whether subjective experience would/could take place in a robot. Subjective experience is another way of saying 'consciousness'. Robotics and AI triggered the development of interdisciplinary Consciousness Studies because computer engineers sought to build machine intelligences superior to human intelligences. It soon became apparent to them that consciousness is involved in the development of intelligence (indeed requisite for it in the naturally evolved world) and that their machines would need to duplicate consciousness somehow. AI engineers have issued, for a long time now, a promissory note that they will eventually build conscious robots. I doubt it.

- what's it like to be a robot could be very, very different than what it's like to be a human.

I think it would have to be. For one thing, consciousness as we know it is influenced by subconscious impulses and ideations, as well as by collectively unconscious ones expressed throughout human cultural history in archetypes. The individual subconscious and the collective unconscious exist by virtue of memory accrued over the lifetime of an individual and millennia of our species' history. All of this is also 'information', and not of a digitized numeric or algorithmic type. It is information that informs our bodies, our feelings, our intuitions, and our intelligence in sensory and global ways.

 
@Scmder What is the causal mechanism of experience on the physical system?

Haha, you couldn't wait to ask that, huh! ;) I already gave you an answer. You must have missed it in your rush to throw paradoxes at me.

I don't think your response to Steve/smcder (quoting some earlier claims of yours) actually answered his question. Also, he is not 'throwing paradoxes' at you but calling your attention to critical issues debated in Consciousness Studies in order to understand how you respond to them in terms of your unique theory.

@Constance You seem to be saying that, as you see it, a) humans are distinguishable from zombies [which by definition do not experience the world the way we do because they lack consciousness], but that the only way we could distinguish them from ourselves is that they would not, like working steam engines, produce 'steam'. Steam appears in your system to stand in for 'qualia', which -- again in your view -- are produced by the brain's reception of naked 'information' [information without qualities, such as digital binary information] in the world rather than through consciously apprehended experiences of embodied consciousnesses existing in the phenomenological thickness of the natural world we are born into...

It would help if you would work harder to clarify what you're saying.
Oh, that gave me a chuckle!

Since you seem to "understand" Tononi's IIT, we can say that I share that view.

I don't think you do since you ignore his recognition of the role of subjective phenomenal experience in human consciousness and cognition.

What you don't seem to get is that just because a brain has experiences (in IIT that would be integrated information) that doesn't mean there is an awareness that one is having experiences.

In human consciousness, information is integrated by and through more than the brain (which for you seems to be identical to a computer). Information is first felt and integrated through the body, which experiences a palpable world via all of its primary senses (and perhaps subtler ones that we are not yet aware of). Our perception of phenomena in the world (through our eyes, our ears, our sense of touch, etc.) leads to the perception of our own point of view on phenomena, then to reflection on our phenomenal experience itself {already incipient cognition}, and thence to our recognition of our consciousness of an environment in which we exist as a self standing in some degree apart from our surroundings -- and capable of thinking about the relation of consciousness and mind to the physical world {voila: philosophy, and especially phenomenological philosophy}.


Let me try this:

A cat has qualia. A cat is not aware that it is a cat with qualia.

My cat is undoubtedly aware of the qualia she experiences, just as humans are.

Self-awareness/self-definition of one's being a cat or a human or a dolphin is not the issue relative to establishing the experiential reality of phenomenal consciousness as such in humans and in many animals, informing those beings who possess it about how to navigate in and survive in the local physical world.

A human has qualia. A human is aware that it is a human with qualia.

Indeed, and that is just the beginning of what a human becomes aware of in moving about in the visible, audible, tactile world.


Quoted from the NY Times article concerning Tononi's theory that Steve linked above:

“It’s the sort of proposal that I think people should be generating at this point: a simple and powerful hypothesis about the relationship between brain processing and conscious experience,” said David Chalmers, a philosopher at Australian National University. “As with most simple and powerful hypotheses, reality will probably turn out to be more complicated, but we’ll learn something from the attempt. I’d say that it doesn’t solve the problem of consciousness, but it’s a useful starting point.”
 
Steve wrote:



The question is whether subjective experience would/could take place in a robot. Subjective experience is another way of saying 'consciousness'. Robotics and AI triggered the development of interdisciplinary Consciousness Studies because computer engineers sought to build machine intelligences superior to human intelligences. It soon became apparent to them that consciousness is involved in the development of intelligence (indeed requisite for it in the naturally evolved world) and that their machines would need to duplicate consciousness somehow. AI engineers have issued, for a long time now, a promissory note that they will eventually build conscious robots. I doubt it.



I think it would have to be. For one thing, consciousness as we know it is influenced by subconscious impulses and ideations, as well as by collectively unconscious ones expressed throughout human cultural history in archetypes. The individual subconscious and the collective unconscious exist by virtue of memory accrued over the lifetime of an individual and millennia of our species' history. All of this is also 'information', and not of a digitized numeric or algorithmic type. It is information that informs our bodies, our feelings, our intuitions, and our intelligence in sensory and global ways.


I agree there are real challenges - I'd like to read Dreyfus's two books on the subject, What Computers Can't Do and What Computers Still Can't Do.

After Chalmers's talk on the singularity, I thought about AI developed in a virtual environment - it's in competition with 14 billion years of experience, but Chalmers says it would have a head start with a little intelligence (that we would give the process) - but, if Nagel is right:

"... what Nagel calls "natural teleology," the hypothesis that the universe has an internal logic that inevitably drives matter from nonliving to living, from simple to complex, from chemistry to consciousness, from instinctual to intellectual."

... so we would have to build this teleology into the virtual environment? And we'd have to provide a properly rich virtual environment - and even then, if the AI evolves, how does it make the transition out of the virtual environment into the real world? It seems its embodied cognition would be limited. And do we just assume it's made of stuff that wouldn't face any biological threats like we have been dealing with (that it wouldn't get sick)?

However, sufficient cognition with the ability to self-replicate might be all that's required to displace us.

But I think we are more likely to augment our intelligence with machines or biological changes - and I'm not sure what this would produce in terms of still being human . . . it's also possible that some kind of AI would forcefully hybridize with us or within us - all it would need is something like the complexity of a bacterium or virus . . . or make itself attractive enough that we carry it around every- oh wait, that's already happened! :)

Someone posted an article on Kurzweil's push into AI - he's bought up a number of robotics and AI companies - comparing it to a "Manhattan Project" for AI. Not sure that's accurate.

And did you catch the part of Peterson's talk (I think it was the one on Piaget) where he talked about Big Dog?

BigDog - Wikipedia, the free encyclopedia
Boston Dynamics - YouTube

... he said he had a friend who works in AI, and although the idea is that BigDog would carry equipment for soldiers, it was likely they would become armed platforms, capable of shooting not only where you are but the six places you are likely to be next. This is a replay of when man learned to harness the superior smell and hearing of wolves (dogs) for hunting and war - which means maybe the most likely scenario is just a continuation of the one we are on: man + ever more sophisticated machine. That pairing would seem to have a competitive edge in terms of making use of already evolved human intelligence and consciousness, while the other possibilities require tremendous energy and resources. Any AI that evolves, it seems, would be in competition with us and a lot of other living things for energy - it would face ecological barriers. A human requires about the same energy as a 60 watt light bulb . . .
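That last figure is easy to sanity-check with a unit conversion (a sketch - the exact wattage depends on the caloric intake you assume): a typical 2,000 kcal/day diet averages closer to 100 W, while the 60 W figure corresponds to an assumed intake of roughly 1,250 kcal/day.

```python
KCAL_TO_JOULES = 4184          # one food calorie (kcal) in joules
SECONDS_PER_DAY = 24 * 60 * 60

def metabolic_watts(kcal_per_day):
    """Average power (watts) implied by a daily caloric throughput."""
    return kcal_per_day * KCAL_TO_JOULES / SECONDS_PER_DAY

# metabolic_watts(2000) is roughly 97 W; metabolic_watts(1250) is about 60 W.
```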
 
I don't think your response to Steve/smcder (quoting some earlier claims of yours) actually answered his question. Also, he is not 'throwing paradoxes' at you but calling your attention to critical issues debated in Consciousness Studies in order to understand how you respond to them in terms of your unique theory.



I don't think you do since you ignore his recognition of the role of subjective phenomenal experience in human consciousness and cognition.



In human consciousness, information is integrated by and through more than the brain (which for you seems to be identical to a computer). Information is first felt and integrated through the body, which experiences a palpable world via all of its primary senses (and perhaps subtler ones that we are not yet aware of). Our perception of phenomena in the world (through our eyes, our ears, our sense of touch, etc.) leads to the perception of our own point of view on phenomena, then to reflection on our phenomenal experience itself {already incipient cognition}, and thence to our recognition of our consciousness of an environment in which we exist as a self standing in some degree apart from our surroundings -- and capable of thinking about the relation of consciousness and mind to the physical world {voila: philosophy, and especially phenomenological philosophy}.


My cat is undoubtedly aware of the qualia she experiences, just as humans are.

Self-awareness/self-definition of one's being a cat or a human or a dolphin is not the issue relative to establishing the experiential reality of phenomenal consciousness as such in humans and in many animals, informing those beings who possess it about how to navigate in and survive in the local physical world.


Indeed, and that is just the beginning of what a human becomes aware of in moving about in the visible, audible, tactile world.


Quoted from the NY Times article concerning Tononi's theory that Steve linked above:

In human consciousness, information is integrated by and through more than the brain (which for you seems to be identical to a computer). Information is first felt and integrated through the body, which experiences a palpable world via all of its primary senses (and perhaps subtler ones that we are not yet aware of). Our perception of phenomena in the world (through our eyes, our ears, our sense of touch, etc.) leads to the perception of our own point of view on phenomena, then to reflection on our phenomenal experience itself {already incipient cognition}, and thence to our recognition of our consciousness of an environment in which we exist as a self standing in some degree apart from our surroundings -- and capable of thinking about the relation of consciousness and mind to the physical world {voila: philosophy, and especially phenomenological philosophy}.

We have to bring in the evidence of phenomena like NDEs and OOBs, re-incarnation and pre-cognition etc. to any theory of consciousness.
 
@smcder As far as I know - no theory solves the hard problem of consciousness or mental causation . . .

I don't disagree.

One idea to explore - as I've already said - is what are the properties of qualia? How might these properties supervene on IPS and integrated information?

For instance, just as the properties of liquid supervene on the molecules of which the liquid is composed, so might the properties of qualia supervene on the integrated information of which they are composed.

For instance, an H2O molecule on its own cannot swirl around in a glass cup. However, if that H2O molecule is engaged with X number of other molecules, it can swirl around in a glass, as this is a property of liquid.

Such might be the relationship of qualia to the integrated information of which they are composed. We don't know. That is why I agree with Chalmers that qualia likely have ontologically new properties which we need to explore.
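The liquid analogy can be sketched as a loose programming analogy (the classes and the threshold below are invented purely for illustration; this is not a physical model): a property like "swirling" is defined only on the aggregate, not on any single element.

```python
from dataclasses import dataclass

@dataclass
class Molecule:
    x: float
    y: float
    # a single molecule has a position, but no notion of "swirling"

class Liquid:
    def __init__(self, molecules):
        self.molecules = molecules

    def can_swirl(self):
        # the aggregate-level property exists only once multiple
        # molecules interact; the threshold here is arbitrary
        return len(self.molecules) > 1

print(Liquid([Molecule(0, 0)]).can_swirl())                  # False
print(Liquid([Molecule(0, 0), Molecule(1, 1)]).can_swirl())  # True
```

The point of the sketch is only that `can_swirl` is not a property of any `Molecule`; it supervenes on the collection, just as the post suggests qualia might supervene on integrated information.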

Finally, keep in mind that Chalmers is a property dualist, not a substance dualist. Meaning: my perspective and his perspective are closer than you seem to believe, smcder.

@Constance I don't think your response to Steve/smcder (quoting some earlier claims of yours) actually answered his question.

Did I solve the mind/body paradox? The hard problem? No. Did I answer smcder's question? Yes. (From here forward, Constance, if I don't reply to a direct question of yours, it won't be out of spite, but because I feel that you have allowed your own biases to color my words. This has happened too many times already. I can't waste any more time on it.)

@Constance I don't think you do since you ignore his recognition of the role of subjective phenomenal experience in human consciousness and cognition.

No, I don't.

Taken directly from Consciousness as Integrated Information: a Provisional Manifesto:

"[T]he quality of experience is specified by the set of informational relationships generated within that complex. Integrated information (Φ) is defined as the amount of information generated by a complex of elements, above and beyond the information generated by its parts. Qualia space (Q) is a space where each axis represents a possible state of the complex, each point is a probability distribution of its states, and arrows between points represent the informational relationships among its elements generated by causal mechanisms (connections). Together, the set of informational relationships within a complex constitute a shape in Q that completely and univocally specifies a particular experience."

> the role of subjective phenomenal experience in human consciousness

What you still don't seem to understand is that subjective phenomenal experience is consciousness.

@Constance In human consciousness, information is integrated by and through more than the brain (which for you seems to be identical to a computer). Information is first felt and integrated through the body, which experiences a palpable world via all of its primary senses (and perhaps subtler ones that we are not yet aware of).

The underlined may be your view, but that is certainly not what Tononi and IIT suggest. Information is not "felt" until and unless it is integrated.

@Constance My cat is undoubtedly aware of the qualia she experiences, just as humans are.

Unless your cat is self-aware, it will not be aware that it is experiencing qualia. Note that this is not the same as saying your cat does not experience qualia.

Metacognition.PNG


@smcder . . . what does even a human level of Phi mean in a robot? I'm not sure of that either - the subjective experience I guess would be very different - what it's like to be a robot could be very, very different from what it's like to be a human.

According to IIT, two different complexes can theoretically generate the same experience. But since IIT says this:

(i) the quantity of consciousness corresponds to the amount of integrated information generated by a complex of elements; (ii) the quality of experience is specified by the set of informational relationships generated within that complex. Integrated information (Φ) is defined as the amount of information generated by a complex of elements, above and beyond the information generated by its parts. Qualia space (Q) is a space where each axis represents a possible state of the complex, each point is a probability distribution of its states, and arrows between points represent the informational relationships among its elements generated by causal mechanisms (connections).

It's likely that a "robot" would have much richer qualia than humans, just as humans have much richer qualia than an earthworm.

As IIT suggests that qualia = integrated information, any robot brain that functioned by generating integrated information would thus have qualia. (Note that Chalmers agrees that AI may have qualia.) However, it's possible that some AI might not generate integrated information, and thus would not have qualia.
 

Yes - much clearer now. I think mental causation might be a better way to introduce the hard problem - people take consciousness for granted, but asking how a thought causes an action brings the problem out and motivates a look at the possible positions and their problems.

Yes, Chalmers:

Chalmers characterizes his view as "naturalistic dualism": naturalistic because he believes mental states are caused by physical systems (such as brains); dualist because he believes mental states are ontologically distinct from and not reducible to physical systems.
 