Consciousness and the Paranormal — Part 2

I think reading on for at least several pages beyond the thread link posted above would be advisable at this point because the discussion there included many ideas about consciousness, mind, and their relation to -- and insights into -- reality that constitute a significant part of the context produced so far in the C&P discussion. Those pages include several of the most informative posts that Steve, Soupie, and Pharoah have written here.
 
I've often thought of Java's garbage collection phase as its version of sleep or zoning out.

It's how it goes through and releases memory back to the OS, closes abandoned threads, etc.

A maintenance activity.

Like washing acetylcholine out of my brain with a glass of whiskey.
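The analogy can be made concrete with a toy mark-and-sweep pass. To be clear, `ToyGc`, `mark`, and `sweep` are my own illustrative sketch of the general idea, not the JVM's actual collector, which is far more sophisticated:

```java
// Toy mark-and-sweep sketch of the "maintenance" idea above.
// Illustrative only -- not how the real JVM collector works.
import java.util.*;

public class ToyGc {
    // Each "object" is just an id plus the ids of the objects it references.
    static Map<Integer, List<Integer>> heap = new HashMap<>();

    // Mark: walk everything reachable from the roots.
    static Set<Integer> mark(List<Integer> roots) {
        Set<Integer> live = new HashSet<>();
        Deque<Integer> todo = new ArrayDeque<>(roots);
        while (!todo.isEmpty()) {
            int id = todo.pop();
            if (live.add(id)) todo.addAll(heap.getOrDefault(id, List.of()));
        }
        return live;
    }

    // Sweep: release everything that was never marked.
    static int sweep(Set<Integer> live) {
        int before = heap.size();
        heap.keySet().retainAll(live);
        return before - heap.size();  // number of objects reclaimed
    }

    public static void main(String[] args) {
        heap.put(1, List.of(2));  // root -> 2
        heap.put(2, List.of());   // reachable, survives
        heap.put(3, List.of(4));  // abandoned garbage
        heap.put(4, List.of());
        int freed = sweep(mark(List.of(1)));
        System.out.println("reclaimed=" + freed);  // reclaimed=2
    }
}
```

The "maintenance" character is visible in the structure: nothing productive happens during the pass, it only walks what is still in use and releases what isn't.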
 

As I suggested above, the current discussion and all of us involved in it will proceed with increased mutual understanding if we read the pages of the thread beginning at the link below [not necessarily everything since then, but a half-dozen pages or so starting there]. The groundwork for our current discussion has been laid there. :)

Consciousness and the Paranormal — Part 2 | Page 22 | The Paracast Community Forums
 
Here is a description of Dennett's "Multiple Drafts" model of consciousness that he devised to address a problem he sees with many models. (Although I'm not sure it's a problem, at least for materialists.)

Multiple drafts model - Scholarpedia

The multiple drafts model of consciousness (Dennett, 1991, 1996, 1998, Dennett and Kinsbourne, 1992) was developed as an alternative to the perennially attractive, but incoherent, model of conscious experience Dennett calls Cartesian materialism, the idea that after early unconscious processing occurs in various relatively peripheral brain structures "everything comes together" in some privileged central place in the brain -- which Dennett calls the Cartesian Theater -- for "presentation" to the inner self or homunculus. There is no such place in the brain, but many theories seem to presuppose that there must be something like it. For instance:

  1. They postpone indefinitely the task of saying where and when the results of all the transformations and discriminations are “made available to conscious awareness,” which suggests sending their products to some higher center (for what purpose? what would happen there?), or
  2. They argue that a decision to move a limb takes several hundred milliseconds to “rise to consciousness,” creating an ominous picture of human agents as deluded about their ability to make a conscious decision (e.g., Libet, 1985, Libet et al., 1999, Wegner, 2002), or
  3. They suppose that the transduction by sense organs of light and sound and odor and so forth into an unconscious neural code must be followed (somewhere in the brain) by a second transduction into some other “medium”, the medium of consciousness (e.g., Mangan, 1993). ...
Now, when thinkers propose that there really is a non-material, non-temporary mental self (homunculus) I do think that is a problem. However, I don't think that's what materialists believe; at least not this materialist.

In trying to understand consciousness, we have to be careful not to focus solely on human consciousness and more specifically self-reflective (meta-cognitive) consciousness (which I think gives rise to the sense of self).

What I've tried to say here in this discussion is that there are many layers of consciousness (which I believe directly relate to "layers" or structures of the physical brain). Not all organisms possess these layers.

And I will say that I've thought hard about this problem - the idea that "everything comes together" in one place in the brain - because of IIT, which might be understood to suggest this (but I don't think it necessarily does).

This "problem" is exactly why I've always been careful to say that phenomenal/qualitative consciousness can and does exist in the absence of a sense of self. There are organisms that experience phenomenal/qualitative consciousness without also experiencing a sense of (mental) self.

What I think happens is that organisms and qualitative experience evolved together, conceptual cognition evolved/developed later, followed by meta-cognition. The sense of self (or sense of homunculus) arises when the conceptual cognition layer becomes aware of itself (self-aware). Many animals - I believe - lack this conceptual layer and the meta-conceptual layer; furthermore, this layer in humans is not always "active."

I go back to the phenomenon of Blind Sight mentioned in Part One. This is the phenomenon of the brain pulling in and processing visual information but the conscious (meta-cognitive/self-reflexive) mind not having access to this information.
 
Regarding Free Will

@smcder You have asked "why the movie?" If our CM does not control (all of) our thoughts or actions, what does it do?

Going back to Peterson, he talked about how we humans live in a self-created narrative. As I've said, I think the higher cognitive abilities (conceptual cognition and meta-cognition) are there for social problem solving.

I think the conscious mind gets the information/awareness second hand in order to construct a narrative. "Life is happening to us." We know that the narratives people create are neither accurate nor objective. However, my guess is that this ability is adaptive. I struggle to understand how it could exist and be epiphenomenal. I mean, it's not even an accurate narrative!

My guess is that this meta-cognitive, conscious awareness and the narrative (meaning) it creates has some causal influence further down the road.
 
I see it a little differently. Forgive going back to the computer methodology.

Every process enters into the processor, goes through a series of logic gates, then goes out the other side. How many gates a process needs to go through before it comes out the other side is known as its gate depth.

So, a floating point divide (a computationally expensive operation) may have a gate depth of, say, 20 gates. Each level of logic gates takes a finite period of time, so a floating point divide takes the gate time x 20 to deliver.

A simple "true or false" comparison may take, say, 2 gates to go through to deliver. So that's 1/10th as computationally expensive.
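That arithmetic can be written out directly. The per-gate delay and the depths (20 and 2) are the illustrative numbers from the paragraphs above, not figures for any real processor:

```java
// Back-of-envelope version of the gate-depth arithmetic above.
// GATE_TIME_PS and the depths are illustrative, not real CPU figures.
public class GateDepth {
    static final int GATE_TIME_PS = 50;  // assumed per-gate delay, in picoseconds

    // Total latency is just depth * per-gate delay: each level of
    // logic gates adds one gate time before the result comes out.
    static int latencyPs(int gateDepth) {
        return gateDepth * GATE_TIME_PS;
    }

    public static void main(String[] args) {
        int fdiv = latencyPs(20);  // floating point divide: 20 gates deep
        int cmp  = latencyPs(2);   // true/false comparison: 2 gates deep
        System.out.println("fdiv=" + fdiv + "ps, cmp=" + cmp + "ps");
        System.out.println("relative cost=" + (double) cmp / fdiv);  // 0.1
    }
}
```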

What I'm surmising is that there are different levels of awareness, and different levels of autonomy in our actions.

Consciousness is probably a neuronally expensive task. We have bazillions of neurons, and they're a network, but let's say just for the sake of argument they have a neuron depth of 20 to keep our higher consciousness up and running.

Pulling our hand away from something hot may have been evolutionarily optimized by our neurons a long time ago, so let's say it passes through whatever the equivalent is of 2 neuron "gates" to be done. The act may be done before we're actually aware of it.

Let's also say that we have various subcomponents of the mind. One is for autonomic functions, one is for sympathetic functions, one for parasympathetic functions, etc. My brain, for the sake of local optimization, may carve off a set of neuron pathways to drive me into work. That's its job -- just get me into work and don't make me think too much about it. Let's say it's optimized so it has a neuron depth of 10.

So it happily chugs along, and our consciousness is freed up to think as we drive into work. We may, then, take a turn or jam on the brakes before we become aware of it.

This doesn't mean that the higher level computational task of consciousness can't itself trigger a lower level event -- like taking a left hand turn around a traffic jam I just heard about on the radio.

In short, I see it kinda this way, with a series of pretty elastically defined sub processes being executed massively in parallel, with different timings.

I don't think I'm quite there yet. When you say:

This doesn't mean that the higher level computational task of consciousness can't itself trigger a lower level event -- like taking a left hand turn around a traffic jam I just heard about on the radio.

What do you mean by consciousness itself? If you mean the subjective experience, the awareness, it still sounds like (for you) awareness is a product of neural processing, something that emerges; but it's always the neurons firing that do the work: the work of moving the hand and the work of creating the awareness. If that is the case, then how can an epiphenomenon cause the hand to move, and why would you need two causes (the consciousness, if that's how you are using it, and the neurons firing) to move the hand? If it's only ever the neurons doing it all, why do we need consciousness, unless it's just how it feels for the neurons to fire (I think that's what you are saying)? And if it is, then how/where do you see free will entering? Everything is decided before awareness arises and everything is caused by the neurons -

if that's true, we might be able to create a drug that suppresses the function of the neurons to create consciousness (subjective experience) but allows the person to otherwise function - a "zombie" drug ... that would kind of do in any sort of meaningful definition of free will ... or are you saying that for you free will exists at the neural level?

If not, if you are really saying that consciousness/subjective experience or awareness causes the hand to move - that seems to be close to dualism, or you have to have some kind of "de-emergence" process and I don't think that's coherent.
 
It was Benjamin Libet who did the experiment you referenced above (highlighted in red), which was subsequently celebrated by materialists as evidence that consciousness is not involved in our actions, an interpretation that Libet criticized in a paper I linked about a month ago in Part II of this thread. I'll find and post the link to that paper again.

The following paper,
"Neurophenomenology for neurophilosophers" by Evan Thompson et al., outlines the interdisciplinary research in progress that addresses issues raised in Libet's experiment, among others:

http://brainimaging.waisman.wisc.edu/~lutz/ET&AL&DC.Neuropheno_intro_2004.pdf

Good! That's what I'm looking for ... thanks Constance.
 

I'm not sure there is too much opposition around here to most of what you are saying, which is why you probably don't get a lot of response ... can you say more about what you mean by a non-material, non-temporal mental self (homunculus)? I understand the concept - I've read Dennett ... but I want to know how you conceive of that ... what would a non-material mental self be like? If it's not material, what is it made of? That question seems to not make any sense, so if we can't talk about it in terms of what it's made of - how do we talk about it?

Nagel and Chalmers talk about consciousness being a fundamental property of the universe and panpsychism and you've talked about this - about dual substance monism ... and this seems to help with the problem of how subjectivity gets on the scene - i.e. it is part of the scene from the get go ... but then there is the combination problem. And at any rate, this is not a non-material thing - it's an aspect of quality of the "primal stuff" (your term) everything is made of ... so I think you have a bit of a straw man set up here?

Or is it that these questions don't make sense given materialist pre-suppositions? What I mean by this is that to say something is non-material is to define it (actually, to de-define it) in terms of materialism ... and the pre-supposition is that everything is made of material, so ... this ends up being non-sense by definition. If that's not just word-play, under what framework would it make sense? Then we wouldn't of course be saying non-material ...

This "problem" is exactly why I've always been careful to say that phenomenal/qualitative consciousness can and does exist in the absence of a sense of self. There are organisms that experience phenomenal/qualitative consciousness without also experiencing a sense of (mental) self.

What would this be like? What is it like to have phenomenal/qualitative consciousness in the absence of a sense of self? I think any animal that experiences suffering, knows it is the one suffering. My six dogs have a very clear sense of individuality.

Also, phenomenal/qualitative consciousness without a sense of self ... you say this is not always "turned on" in humans ... so now it makes more sense to ask what it is like, since we are talking about humans ... but do we experience it at the time or only when we remember it when awareness is turned back on? I can't think of a time where I have had an experience without some degree of self-awareness ... I have driven down the road and come back to my senses and have no memory of the drive - but I did have experiences during that time. I've also been told of things I've done that I have no memory of under various circumstances ... is this memory or was I functional but unconscious (a zombie) at the time?

I know you've said you've been so absorbed in something that there was nothing else in your experience but that experience, yet somehow you had that experience at the time right? And to "have" an experience seems to me to indicate some self-awareness? You seem to be aware that you were only aware of the experience ...

Just trying to "sharpen your saw" a bit.

;-)
 
can you say more about what you mean by a non-material, non-temporal mental self (homunculus)?
I mean non-physical.

Or is it that these questions don't make sense given materialist pre-suppositions? What I mean by this is that to say something is non-material is to define it (actually, to de-define it) in terms of materialism ... and the pre-supposition is that everything is made of material, so ... this ends up being non-sense by definition. If that's not just word-play, under what framework would it make sense? Then we wouldn't of course be saying non-material ...
Good stuff.

As you know, I do think information is immaterial. However, I know that you disagree. (The following picture prompted a deep meditation the other day...)

[attached image: tumblr_ncbputWOqb1qbycdbo1_500.png]


What is it like to have phenomenal/qualitative consciousness in the absence of a sense of self?
The mind is green. Green is what it's like.

I think any animal that experiences suffering, knows it is the one suffering. My six dogs have a very clear sense of individuality.
Well, suffering implies self-awareness, no?

How about pain? Would you say every object/organism that experiences pain knows that it is an object/organism experiencing pain?

This gets into the difference I've distinguished between awareness of the body-self and the mental-self (awareness of the awareness of the body-self). I think even "basic" organisms are aware that their body-self is experiencing pain; I do not think all animals know that they are an organism experiencing pain. That's a subtle but very important distinction.

do we experience it at the time [or only when we remember it when awareness is turned back on?] ...

I have driven down the road and come back to my senses and have no memory of the drive - but I did have experiences during that time.
This gets at the heart of the problem discussed by Dennett above: Do we experience all the information we receive, or just the information our awareness (meta-cognition) is focused on (the Cartesian Theater)? It's a very good question!

As I believe organisms lacking self-awareness have qualitative experiences, my answer is: We have experiences all the time, even if our awareness isn't directed at these experiences.

Thus, we may drive 50 miles down the road and arrive at our destination only to realize we were not "paying attention" to the road at all! Did we "not experience" all the sights and sounds along the way? If we search our memory, we may not discover a full and rich memory of the drive, but in my experience we can recall much of it. Ergo, I think, yes, we were having qualitative experiences on the drive despite our lack of conscious awareness of the drive.

I can't think of a time where I have had an experience without some degree of self-awareness
Odd. I do all the time. During intense games of basketball as a teenager my sense of self (and time) would often disappear. I've written before about "flow" and how I think the sense of self and time is altered in these states. Goodness, my sense of self disappears all the time. Even on my drive to work when I'm reflecting deeply on ideas and whatnot.

I know you've said you've been so absorbed in something that there was nothing else in your experience but that experience, yet somehow you had that experience at the time right? And to "have" an experience seems to me to indicate some self-awareness?
Homunculus alert!

Is it my mental-self that is having the experience or my body-self? It's my body-self having (generating) the experience.

I don't think "having" an experience requires a sense of mental-self, but it certainly requires having a body, haha.

You seem to be aware that you were only aware of the experience ...
Again, the mind is green. I wasn't "aware" that I was seeing green, I (my mind) was green.

After the fact, I can do a little metacognating to determine that I was seeing green. And there are many times that while seeing green, I am very aware that I am seeing green. But not all the time. And for some organisms, none of the time.
 

What the heck is that a picture of? And has anyone else had a green mind? I feel like I am missing out.

If you are saying information is "out there" waiting on energy and matter to instantiate it, then I think we can use "immaterial" - if it is a property of material, then why don't we just say information is a property of material? Otherwise, we get into a kind of dualistic talk.

The rest of it sounds like terminology; on my view, experience is what I am aware of - if I'm not aware of it, I don't experience it, but I may remember it later. That sort of thing happens, I think, to everyone frequently. There are billions or trillions of bits of information in the environment and we consciously process, I think, four bits at a time? Anyway, not much.

As for organisms, I think what they are aware of is tightly confined to what they can do with the information. A dog has less need of individual identity than a human - but I think still has a considerable amount of it. A bug maybe needs to know where it ends and the environment begins ...

Does suffering imply self-awareness? I probably meant to say pain there, though I've seen these words used more or less inter-changeably, and for me, usage is definition ... in Buddhism there is a lot of talk about suffering ... the saying is pain is inevitable, suffering is an option (this is the idea of the second arrow ...) and so suffering would be something only a human would experience.

In terms of animals, I've heard this both ways: that humans suffer, animals only feel pain - but then I've heard it that humans have various ways to cope with pain, animals can only experience brute suffering, with no mental resources to cope.

Very generally my idea then is that any organism only experiences what it can make use of .... at some level, it's probably not even "pain" in terms of what we think - but "aversive stimuli" - more and more like a reflex ... but that just might not be the case ... that's pretty hard to think about and it puts Darwin on the hook a bit in terms of efficiency. Again, from a Darwinian standpoint, Zombies look pretty good, right? No expensive neural processing to create consciousness, no lag time ... no philosophy!

Have you ever felt pain in a dream?

I'm still riddling out awareness and experience ... I think you've turned a couple of perfectly good words that I used to know into terminology!!
 

This is the key:

This doesn't mean that the higher level computational task of consciousness can't itself trigger a lower level event -- like taking a left hand turn around a traffic jam I just heard about on the radio.
computational task of consciousness - which resolves to neurons firing, right? So the computational task triggers the lower level event - not the subjective experience, because the subjective experience is also a result of the computational task ... right?

Because if we say:

neurons firing --(emergence)--> subjective experience --> hand moving

then you are saying the subjective experience is causing the hand to move ... and that's weird from a physicalist view.
 

(underlining mine - smcder)

That sorted out Libet's account in purely empirical terms: in philosophical terms the problems were only beginning. The research (subsequently repeated and corroborated by others) seemed to provide a scientific proof that free will was a delusion. How could we consider ourselves responsible for decisions we were not even aware of until after they had been made? Some would have been happy to see the demise of free will, but Libet himself was not ready to let it go so easily. Although the subject's decision to move occurred too early for it to have been initiated by conscious thought, there was still - just - a window of opportunity in which conscious awareness might conceivably veto the move. This window lasts, on Libet's account, no more than about 100 milliseconds. Experimental proof is difficult. Libet has conducted experiments in which the subjects were asked to form an intention to move and then veto it at the last moment: apparently an RP appeared and then dissipated, but the weirdness of the mental gymnastics required of the subject seem to leave an element of doubt about the process. Is it possible to decide to move at a random moment while simultaneously holding on to the belief that you will not, in fact, execute the movement?
Interestingly, Libet has a moral argument here. He rightly points out that free will is important partly because it underpins the idea of moral responsibility: but morality, he suggests, is mainly a matter of vetoing things we have a built-in tendency to do. This is a faintly depressing prospect, and I think Libet underestimates the difficulty of discriminating between doing and not-doing. If I veto my momentary desire to kill you, that surely is morally good: but what if I repress the automatic tendency to grab your hand when you are about to fall off the cliff?
 
This is the key:

This doesn't mean that the higher level computational task of consciousness can't itself trigger a lower level event -- like taking a left hand turn around a traffic jam I just heard about on the radio.
computational task of consciousness
- which resolves to neurons firing, right? So the computational task triggers the lower level event - not the subjective experience, because the subjective experience is also a result of the computational task ... right?

Because if we say:

neurons firing --(emergence)--> subjective experience --> hand moving

then you are saying the subjective experience is causing the hand to move ... and that's weird from a physicalist view.
Why is that weird?
I see it as a loop. Your subjective experience is more than a little informed by your environment, and it can inform your body how to respond.

What am I missing?
 
I don't think I'm quite there yet. When you say:

This doesn't mean that the higher level computational task of consciousness can't itself trigger a lower level event -- like taking a left hand turn around a traffic jam I just heard about on the radio.

What do you mean by consciousness itself? If you mean the subjective experience, the awareness, it still sounds like (for you) awareness is a product of neural processing, something that emerges; but it's always the neurons firing that do the work: the work of moving the hand and the work of creating the awareness. If that is the case, then how can an epiphenomenon cause the hand to move, and why would you need two causes (the consciousness, if that's how you are using it, and the neurons firing) to move the hand? If it's only ever the neurons doing it all, why do we need consciousness, unless it's just how it feels for the neurons to fire (I think that's what you are saying)? And if it is, then how/where do you see free will entering? Everything is decided before awareness arises and everything is caused by the neurons -
I'm not sure how free will enters into the equation, if at all.
It may be a trick we play on ourselves.

if that's true, we might be able to create a drug that suppresses the function of the neurons to create consciousness (subjective experience) but allows the person to otherwise function - a "zombie" drug ... that would kind of do in any sort of meaningful definition of free will ... or are you saying that for you free will exists at the neural level?
I call that scotch.
If not, if you are really saying that consciousness/subjective experience or awareness causes the hand to move - that seems to be close to dualism, or you have to have some kind of "de-emergence" process and I don't think that's coherent.
I'm missing it. Why can't the act of a bunch of neurons firing be perceived by itself as consciousness?
 
Why is that weird?
I see it as a loop. Your subjective experience is more than a little informed by your environment, and it can inform your body how to respond.

What am I missing?

What informs your body how to respond? Neurons firing or the subjective experience?

if I understand your model:

1. subjective experience is a result of neurons firing
2. your hand moves as a result of neurons firing

so the subjective experience is what we feel, but it doesn't do anything (on this model)

but if you say "it (your subjective experience, not the neurons firing, the experience) can inform your body how to respond", then I ask how a subjective experience can have an objective effect?
 