

Philosophy, Science, & The Unexplained - Main Thread


I have no idea how all these strike-throughs have appeared in the text of this post. Any suggestions on how to remove them? I drafted the post within the screen and word-processing system provided here, so the strike-throughs must have arisen here.

Well . . . it may be that these strike-throughs are an emergent property arising somehow from the complexity of your thoughts, or it could be that strike-throughs are an irreducible aspect of typing . . . I'm just not sure! ;-)
 
Let's slow down because we're getting ahead of ourselves just a little. We almost had it back there where I commented that two separate types of "accounting for" are in play when it comes to the formulation of the so-called hard problem.
  1. On one hand we're talking about the design, materials, properties, and operating principles of a conscious being.
  2. On the other hand, we're talking about conveying the subjective experience of the fully assembled and functioning conscious being.
Solving the hard problem seems to require that the type of "accounting for" in 1. ( above ) also perform the same type of "accounting for" as in 2. ( above ). So:
  1. Is it sufficient to convey a subjective experience by analogy or illustration ( what it's like )?
  2. Or must it go beyond that to impart a "one and the same" type of subjective experience?
If the answer is 1. ( above ), then we have done that to some extent with the video example, and hypothetically could do something similar for the other perceptual information that makes up a substantial part of what we call consciousness. However, if the answer is 2. ( above ), then what is being asked is impossible and the question becomes pointless. It's like saying that just because we have a cookbook doesn't mean we have food. So what? That's not relevant to the initial question, because the cookbook still explains food sufficiently. Get all the ingredients, follow the instructions, and out pops food. A fresh-baked loaf of bread might take the better part of an hour. In the case of humans, it takes about nine months, and the evidence for this is rather overwhelming.

Constance - can you help? It seems like Ufology and I have two very different understandings here . . . I read Nagel as just challenging the physicalist claim to be able to account for everything by reference only to the physical (because physicalists believe that is all there is) - he points out that physicalists don't seem to be able to account for subjective experience this way, and therefore haven't explained everything. Initially, in 1974, he said they might one day be able to do this when physical science developed sufficiently, but he didn't think any current attempts were satisfactory. A video doesn't meet the criteria because it assumes the consciousness it's trying to explain, and it is also not a full physical accounting of the underlying structures of consciousness . . . Now, Nagel's view (2012 book) seems to be that consciousness can't be explained physically because it is an irreducible property of the universe, so solving the hard problem with a physical answer is impossible - but that doesn't make the hard problem pointless (rather, consciousness being irreducible makes the hard problem go away, because you are no longer trying to explain it physically). So, Ufology, it seems like you are too early in dismissing the hard problem as pointless, because you haven't explained consciousness/the subjective with a physical explanation - without that, you have to maintain a physicalist position tentatively, on the faith that a physical explanation will come in time - or are you not maintaining a physicalist position?

that's all I take Nagel's argument to be - ?
 
Perhaps the Noe material holds some other views that make the hard problem challenge to physicalism go away or hold yet other possibilities -
 

again the food analogy is just like the water-h20 analogy or any other example of that type - and Nagel deals with that at the very beginning of the paper and then later with objective/subjective - so he deals with that argument - the cookbook only explains food if you assume consciousness in your argument (but that's what we're trying to explain, so it's circular) - this is why he says consciousness/subjective awareness is a unique problem - sorry, I just have to keep pointing you back to re-read those sections, I guess . . . ?
 

I see the hard problem as you and Chalmers and Nagel see it, and I've no doubt that Noe sees it as well. You also wrote:

Perhaps the Noe material holds some other views that make the hard problem challenge to physicalism go away or hold yet other possibilities -

As I read Noe's sensorimotor theory of perception, he isn't attempting to dissolve the hard problem but to approach the nexus of consciousness with perception in and of the physical world. His view of consciousness as embodied (bodily as well as mental) and his enactive view of perception both originate in his study of Merleau-Ponty's phenomenology. Noe has studied and worked with neuroscientists in order to better understand the nature of sensory and neurological contributions to, and enablements of, consciousness reaching out at a precognitive level into the tangible world in order to function in it. MP distinguishes between prereflective and reflective consciousness in some of the most enlightening passages he has written, evoking our inseverable bond with the world in which we find ourselves.
 
The question seems to be: do we know all of the ingredients that constitute perceptual processing in the brain? You refer to "the other perceptual information that makes up a substantial part of what we call consciousness." Would you clarify or specify what you mean by that phrase?
The video I posted only showed a visual analogy, so what I was probably referring to are the other senses through which we identify with the outside world.
It seems that neuroscientists now understand a good deal about visual perception in terms of how the brain and its connected visual sensoria present that which is seen in the world through our visual apparatus. The problem in understanding consciousness and mind seems to be how to account for what happens to, and through, information initially conveyed through perception and further processed in thought.

Alva Noe is a philosopher of mind knowledgeable about both phenomenological and neurological approaches to the problem of consciousness. His sensorimotor theory of perceptual experience has challenged both externalist and internalist thinkers in consciousness studies and has found support from specialists on both sides. Noe's theory is summarized in the short version, linked below, of a much longer paper presenting this theory, which I read several years ago. It is further elaborated and contextualized in several books by Noe published in recent years and is likely presented at its fullest in the final chapter of Alva Noe and Evan Thompson, eds., Vision and Mind, briefly described here:

I think Noe's sensorimotor theory might help us move closer to the core of the problem we are discussing in this thread.

OK. I read the paper, A Sensorimotor Theory of Perceptual Experience, but I think it misses the connection between having all the parts in place to perform a function that resembles conscious behavior, and consciousness itself. For example, the missile guidance system analogy only requires basic pattern matching and sensors that can detect an aircraft. Certainly sensory feedback and analysis form a substantial component of conscious experience. As I've suggested in another post, we can even get an idea what it's like to see the way other animals see. Yet as @smcder has pointed out, this still doesn't explain what it is that's perceiving the perception, and it is that something else that we call consciousness.

To illustrate further, hypothetically a whole sensorimotor apparatus could be run by a series of gears and cogs, what smcder called a "clockwork orange", and I think there's more to consciousness than that. I think it's an emergent property, analogous to a magnetic field. If we wrap some wire around a core and apply some electricity, it appears. However the field is not the wire nor the core nor the electricity. It's something else, invisible, intangible, yet real. It can store energy and even scientists don't know what it's actually made of. They do however theorize something called virtual photons, and I can't think of a better analogy for what happens when we imagine something. We're obviously seeing something, but there are no real photons involved, so "virtual photons" seem to fit the bill perfectly. Here's a page with some interesting ideas in that direction: Quantum Consciousness
 
OK. I read the paper, A Sensorimotor Theory of Perceptual Experience, but I think it misses the connection between having all the parts in place to perform a function that resembles conscious behavior, and consciousness itself. For example the missile guidance system analogy only requires basic pattern matching and sensors that can detect an aircraft. Certainly sensory feedback and analysis is a substantial component of conscious experience. As I've suggested in another post, we can even get an idea what it's like to see the way other animals see. Yet as @smcder has pointed out, this still doesn't explain what it is that's perceiving the perception, and it is that something else that we call consciousness.

Good. I agree with what you've said here. This shortened version of the paper I read some years ago has been gutted and has had its thesis watered down, no doubt by the 30-some critiques of it by cognitive scientists and neuroscientists that followed it in the format I saw. Noe seems increasingly out of touch with the phenomenology of Merleau-Ponty, which he studied for his doctoral work, after which he worked with Daniel Dennett and tried to retrain himself in neuroscience.
 
I like ufology's concept of consciousness as a kind of field phenomenon in lieu of a more linear process. In creating ever more detailed maps of the brain's connections, we may be unwittingly courting a more modern version of the ancient alchemical view: that every action and experience of the human organism is made possible by virtue of a tiny homunculus within, whose actions are in turn made possible by a tinier homunculus within the first, and so on to infinity. The question "How is it that I am self-aware?" is an irrational and irreducible process, like the value of pi. (Still--to wonder is to be self-aware.) True, thought is affected and moderated by complex algorithms that form an interface with "reality". The brain is a virtual reality generator, but one continually dependent on the real-time flow of larger processes. Study of this algorithm is a field now ripe for investigation. It may take our consciousness places of which we cannot dream. But we are at constant risk of falling into a kind of recursivity that keeps us forever chasing our tails in pursuit of an analytical explanation of "I am". The algorithm, however complex, is not self-awareness. Even if we succeed in realizing self-awareness in non-biological substrates, we will still be left with the puzzle of "I am." Indeed, one indicator that we have succeeded in creating strong AI might be entities that are baffled by their own self-awareness.

Does the universe contain more than one electron? The answer seems self-evident. But electrons are elemental and have no intrinsic identity. They are differentiated only by virtue of dynamic properties that may not be at all as they appear in Newtonian physics. (John Wheeler toyed with this idea.) Analysis of what makes a given electron unique has a way of devolving into mathematical equations describing not individual particles, but rather macro probability fields.

So, a loose and shaky analogy, and then I promise to behave: self-awareness is every bit as elemental to how this universe works as is the electron, and the field can be likened to our algorithm. What we experience as individuated awareness is a kind of emergent field phenomenon that yields only to analysis that is statistical, not analytical, in nature.
 

In creating ever more detailed maps of the brain's connections, we may be unwittingly courting a more modern version of the ancient alchemical view: that every action and experience of the human organism is made possible by virtue of a tiny homunculus within, whose actions are in turn made possible by a tinier homunculus within the first, and so on to infinity.

It's turtles all the way down! It does seem that we're carrying on very ancient and venerable pursuits, but awareness of that is masked by our sense of superiority in this "high-tech" age - I think the project of mapping the brain needs to go on, but with more of the spirit of discovery, instead of burrowing for the "I Am" in hopes that, when found, we will have one more commodity that we can then capitalize on. And with respect for those places marked "Here Be Dragons".

" I am". The algorithm, however complex, is not self-awareness. Even if we succeed in realizing self-awareness in non-biological substrates, we will still be left with the puzzle of "I am." Indeed, one indicator that we have suceeded in creating strong AI might be entities that are baffled by their own self awareness.

- brilliant, there's a short-story in that idea . . . maybe in the future where we have turned over all contemplation, meditation and spiritual quests to the machines . . . somewhere on the planet a bank of computers is humming along generating peace and equanimity for the planet while we get busy finding other trouble to get into -
 
Good. I'll listen to it tonight. Thanks.
Challenging stuff - needs a re-listen. I liked the metaphor of hiking requiring the terrain. I also wondered about his apple example and other results of introspection - is that true for all humans, or is it culturally mediated? Do different types of minds (not just, say, the autism spectrum - who, after all, really is "neuro-typical"?) have different landscapes - whole eco-systems? I've always wondered what goes on in other people's minds - what it is like to be them - and if we had more of that, would Noe's examples hold up? Or is his introspection not also heavily mediated by his education and experiences? . . . Why should we not expect every mind to be its own unique eco-system?

Your brain is a rain forest | OdeWire
 
Constance - can you help? It seems like Ufology and I have two very different understandings here . . . I read it that Nagel is just challenging the physicalist claim to be able to account for everything by reference only to the physical (because physicalists believe that is all there is) but he points out that they don't seem to be able to account for subjective experience this way - therefore they haven't explained everything - initially in 1974 he said they might one day be able to do this when physical science developed sufficiently but he didn't think any current attempts were satisfactory.
Sorry for interjecting here Constance, but there are a couple of things I should clarify for you first. Otherwise I don't think my position is being represented accurately.

What I see is a confusion between what it means to "account for" and what it means to "explain", and the formulation of the so-called hard problem uses those words in two different contexts against each other in an incoherent manner. We have already determined what we mean by an accounting of the physical components, so that part is fine. However, what doesn't seem to be getting across is that such an accounting doesn't also have to explain how something works. Explaining subjective experience is an entirely separate part of the physical model. A parts list for a clock doesn't explain how a clock works. So what? It doesn't have to. How it works and what it's made of are two completely separate concepts. But the formulation of the hard problem mushes them together and claims that because the inventory of the parts doesn't explain how it works, the account of the assembled unit will be incomplete. That's faulty logic. The discussion has gotten even more messed up because it also mushes the idea of "explaining" together with the idea of "being", by making the phrase "what it's like" to be something ambiguous, and we still haven't decided which sense to use.

a video doesn't meet the criteria because it assumes the consciousness it's trying to explain, and it is also not a full physical accounting of the underlying structures of consciousness . . .
Not exactly. It assumes that the word "like" in the phrase "what it's like" to be something, is to be taken literally as "similar" and not to be confused with "what it is to be" something. Therefore the only coherent way to interpret the problem is to "assume the consciousness we're explaining" will experience things "like this or that", and if we can do that, then we can assume that we have shown that the science behind the explanation is valid. It follows from there that because we agree that perception is a substantial part of consciousness, if we can show what the perception of something else is "like" we are addressing the problem as it has been stated in the discussion.

On the other hand, if the problem doesn't permit us to "assume the consciousness" we're explaining, then the problem is no longer requiring that we simply show "what it's like", and needs to be restated to convey that in addition to accounting for the physical components, the explanation must also impart the subjective experience of that something's consciousness, which is impossible. So either we have addressed the problem as it has been stated, or the problem is impossible because two separate contexts are being equated with a single aspect of a larger problem. However, separate those two contexts, apply them each to their respective aspects of the same problem, be satisfied that "explaining" is sufficient rather than having to impart identical subjective experience itself, and the incoherency is gone. What we are left with is a physical view of consciousness that explains consciousness as well as anything else can be explained.

But to press this point home, let's suppose for a moment that some physical device were invented that could impart via physical means the conscious experience of one conscious being onto another ( this isn't logically possible but for the sake of argument ), how would that in any way explain the existence of consciousness? It wouldn't. It's the same question as asking someone to explain emergence. So in this context, the formulation of the hard problem results in the exercise of unwrapping the mystery of consciousness to find that it's really the same as the bigger problem of emergence, and that problem is the real "hard problem", almost on par with that of existence itself.

Now, Nagel's view (2012 book) seems to be that consciousness can't be explained physically because it is an irreducible property of the universe, so solving the hard problem with a physical answer is impossible but that doesn't make the hard problem pointless (because consciousness being irreducible makes the hard problem go away, because you are no longer trying to explain it physically) - so Ufology it seems like you are too early in dismissing the hard problem as pointless because you haven't explained consciousness/the subjective with a physical explanation - without that, you have to maintain a physicalist position tentatively on the faith that a physical explanation will come in time - or are you not maintaining a physicalist position?

that's all I take Nagel's argument to be - ?
There seems to be confusion about what constitutes a physical explanation. Irreducibility is recognized by science as a property of emergence, and is simply accepted as a part of nature. This is why I constantly revert to the idea of a magnetic field. Depending on which scientist or philosopher you talk to, magnetic fields are "physical" because they have properties that are measurable. However, they are also emergent properties of materials, and nobody knows exactly what a magnetic field is made of. Similarly, I submit that a consciousness field may be a particular organization of EM fields generated by a functioning brain that results in the emergent experience we call consciousness. In this context, consciousness is still within the physical domain, so we can do away with all the objections that go along with traditional ideas around substance dualism that suggest consciousness is some kind of non-physical substance. Certainly it's non-material, but it's still within the bounds of the physical ( again, not to be confused with "material" ).
 
boomerang wrote: "The algorithm, however complex, is not self-awareness."

ufology wrote: "What I see is a confusion between what it means to "account for" and what it means to "explain" and that the formulation of the so-called hard problem is using those words in two different contexts against each other in an incoherent manner. We have already determined what we mean by an accounting of the physical components, so that part is fine."

smcder wrote: "so Ufology it seems like you are too early in dismissing the hard problem as pointless because you haven't explained consciousness/the subjective with a physical explanation - without that, you have to maintain a physicalist position tentatively on the faith that a physical explanation will come in time - or are you not maintaining a physicalist position?"

I think that if physical science could reach down far enough into the quantum substrate and discover precisely how interactions of particles and fields and their entanglement produce the physical world we live in, we might have a better understanding of how nature produces consciousness from the bottom up -- of what enables the evolution of consciousness -- but that that alone would not sufficiently account for the nature of consciousness as we experience it . . . and the ways in which we experience the world (and alter it) by virtue of the level of complexity of consciousness we possess. To recognize all of that -- what consciousness is in the classical and historical macroworld we inhabit -- requires the contributions of many additional disciplines developed in our species' exploration of the world.
 
That's what I was trying to get at when I wrote: "It seems that neuroscientists now understand a good deal about visual perception in terms of how the brain and its connected visual sensoria present that which is seen in the world through our visual apparatus. The problem in understanding consciousness and mind seems to be how to account for what happens to, and through, information initially conveyed through perception {and experience in the world} and further processed in thought." I've added the phrase in braces in that last line because 'information' is too abstract and bloodless a term to convey the palpable depth and breadth of the experienced world that founds our ideas and insights concerning existence.
 
I think that if physical science could reach down far enough into the quantum substrate and discover precisely how interactions of particles and fields and their entanglement produce the physical world we live in, we might have a better understanding of how nature produces consciousness from the bottom up -- of what enables the evolution of consciousness -- but that that alone would not sufficiently account for the nature of consciousness as we experience it ...

There's that expression again, "account for". What does that mean exactly? I don't see any way that it's applicable as an argument against a physical explanation for consciousness when used in the context of "the nature of consciousness as we experience it". A physical model for consciousness doesn't have to "account for" the nature of consciousness as we experience it. It simply has to describe the situation in enough detail to duplicate it, and if that can be done, then that's sufficient to explain it as well as we can explain anything else, including the material that makes up our brain and the world around us. Accounting for its "nature" is a whole other issue. As mentioned above, it's essentially the same as asking what the ultimate nature of emergence or existence itself is. We don't have those answers and probably never will. But that doesn't mean we can't accumulate enough information to create consciousness. We already give birth to it biologically, so it's obviously reproducible, and that is the essential question. Can we reproduce it via physical means? I see no reason why not. Answering the question of its "nature" is beside that particular point.
 

We're just going to have to leave this one be for now - maybe it will come up again in a way that is more helpful. I would just say: re-read Nagel's discussion of objective/subjective and why he thinks the problem of consciousness is not like other problems, such as water --> H2O - why he thinks it is unique . . . It's not exactly right to say you are being too literal here, but that is the one word that keeps coming up - so I'll put it there, and I'll try to have another closer look at your posts above . . . but I think this is where I am going to settle out on it for now - it's not a sticking point to discussing other aspects of consciousness . . .
 
That's what I was trying to get at when I wrote: "It seems that neuroscientists now understand a good deal about visual perception in terms of how the brain and its connected visual sensoria present that which is seen in the world through our visual apparatus. The problem in understanding consciousness and mind seems to be how to account for what happens to, and through, information initially conveyed through perception {and experience in the world} and further processed in thought." I've added the phrase in braces in that last line because 'information' is too abstract and bloodless a term to convey the palpable depth and breadth of the experienced world that founds our ideas and insights concerning existence.

you might also enjoy listening to this interview - Interview with Philosopher Alva Noë (BSP 58) — Brain Science Podcast
 