Consciousness and the Paranormal

What Searle clarifies is what I originally pointed out regarding IIT:

IIT provides a theory of how living organisms produce experiences (qualia) but not how this experience (qualia) becomes aware of itself. Indeed, this goes way, way back to my first posts in this thread, where I outlined streams of experience vs. self-aware streams of experience.

I still don't understand what you mean in that highlighted clause: that living organisms "produce experiences." Can you clarify what you mean or cite a source that expresses and supports that claim? If we think in terms of autopoiesis, as defined by Maturana and Varela, an organism doesn't produce its own experiences of sensing and interacting with its environing situation vis-à-vis the boundaries that define self and not-self. Its experience is the arising of that relationship between self and not-self, a sense of its standing out from the situation in which it is embedded, a sense of boundaries across which it moves to acquire what it needs (nutriment) while maintaining its own integrity. It's the 'inner/outer' experience that others we've cited describe, and the boundaries sensed are porous (like the boundaries between the subconscious and conscious mind). The arising of the sense of self and nonself does not belong only to the 'inner' but also to the 'outer' from which it distinguishes itself. It is a qualitative difference in being that arises with life. At a deep level it is an experience of the environing earth itself as well as that of the living being responding to its sense of its environment and of itself within it. The speaker in this brief video expresses the deep symbolic significance of the sense of this relationship on which protoconsciousness and consciousness rest.


While 'information' exchanged in nature in its increasing complexity at purely physical levels no doubt enables the development of life, presence (awareness), consciousness, and mind, something new exists at the point when differentiation of life from nonlife begins. The bottom line of this direction of reasoning is that we cannot think ourselves beyond nature, outside of nature, nor can we think away our experience of our own point of view and consequent thought arising from the recognition of the difference of our own being within the being of nature.

But let's not throw the baby out with the bathwater!

By all means, let's not!!!

I'm out of time to respond to the balance of your post, but will return to it this evening.

ps: point to be taken up: 'experience' is not merely 'qualia', but involves qualia.


IIT may not explain self-aware experience, but it may provide a good working hypothesis of how experience arises. I believe this is what Chalmers meant when he said it was a good theory, but didn't answer the hard problem of how cognition and experience interact.

Note: This is exactly why Jaynes says organisms that produce experience can still have no "what it's like" to be them. Recall blindsight: the information can be there, but if there is no awareness, there is no "what it's like."

The paradox of the tree falling in the forest is relevant here. Does it make a sound if no one is there to hear it? Here, "hear" means to give the sound waves/information meaning.

Can information be considered "experience" if no one is there to experience it (give it meaning)? That is what Searle is saying.

Tononi says the information/experience is there, but Searle says "not if no one is there to give it meaning."

So perhaps rather than "stream of experience" it's more accurate to say "stream of integrated information." But once this "stream of integrated information" becomes aware of itself, it then becomes a "stream of experience."

I think this relates to the Phenomenological position that all experience requires self-awareness.

IIT explains the experience, but not the awareness of the experience.
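
(To make "integrated information" slightly more concrete, here is a minimal toy sketch in Python. To be clear, this is not Tononi's actual Φ calculus; it just uses ordinary mutual information as a crude stand-in for the intuition the phrase trades on, that a whole can carry structure beyond what its parts carry separately.)

```python
import math
from itertools import product

def entropy(dist):
    """Shannon entropy in bits of a distribution given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for a joint distribution {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return entropy(px) + entropy(py) - entropy(joint)

# Two subsystem states that are perfectly correlated: the whole carries
# one bit of structure that neither half carries on its own.
coupled = {(0, 0): 0.5, (1, 1): 0.5}

# Two independent subsystems: cutting the system in half loses nothing.
independent = {(x, y): 0.25 for x, y in product([0, 1], repeat=2)}

print(mutual_information(coupled))      # 1.0
print(mutual_information(independent))  # 0.0
```

Tononi's real measure is computed over the partition of the system that loses the least information, and over cause-effect structure rather than a static joint distribution; the toy above only gestures at the "whole vs. parts" contrast.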

(As a side note, Searle has interesting ideas about consciousness. He is what I might call an Uber Monist. Not only does he think consciousness is a purely biological phenomenon, he believes only biological processes can give rise to consciousness. He views AI/AGI as a form of dualism! Thus, I'd love to hear Searle's response to David Deutsch.)

I'm not done with this article yet, but it is an interesting one.

I believe what DD is saying is just what Searle - indeed many thinkers - is asking: How can information make meaning of itself (another way of saying self-aware experience)?

Edit: I need to take a step back; Looks like DD is asking: How does information make meaning!

I had said earlier that I hadn't read any books on consciousness, but indeed I have. In college I read two books by Hofstadter, one of them being I Am a Strange Loop. In the book, he talks about self-aware experience (my phrase) arising via the phenomenon of a strange loop. An idea that is clearly not mine (a la Peterson) but which has influenced my thinking on how consciousness arises.
 
I still don't understand what you mean in that highlighted clause: that living organisms "produce experiences." Can you clarify what you mean or cite a source that expresses and supports that claim?
I believe that in the absence of living organisms (or more specifically Information Processing Systems*) there will be no subjective experience (or what-it's-like).

That is to say, we can conceive of a universe not-unlike ours filled with differentiated matter, but which lacked Information Processing Systems. This universe would also lack subjective experience. That is, subjective experience does not exist in the absence of Information Processing Systems, such as brains.

Thus, Information Processing Systems are differentiated systems of matter that interact with the environment and in doing so, produce or generate subjective experience. In my view, this is what Evan Thompson is seeking. To "understand the emergence of living subjectivity from living being."

*I don't believe that all Information Processing Systems create/produce/generate subjective experience, only certain ones. Also, I don't think human brains are the only IPS's that can create subjective experience. For example, I'm open to the possibility that our universe itself is an IPS capable of producing subjective experience. I think there are likely a multitude of non-human IPS's in our universe capable of generating subjective experience.
 
Thanks for your reply. I'd like to call your attention to several places in which I think the hypothesis you apply does not, cannot, stand on its own as an explanation of what consciousness is.

I believe that in the absence of living organisms (or more specifically Information Processing Systems*) there will be no subjective experience (or what-it's-like).

That is to say, we can conceive of a universe not-unlike ours filled with differentiated matter, but which lacked Information Processing Systems.

That would be a universe very unlike ours given the contemporary scientific recognition that information is fundamental in our universe and -- in its integration and entanglement -- holds this universe together. How would a universe 'filled with differentiated matter' but lacking integrated information systems -- indeed not evolving and maintaining a system of systems -- hold itself together rather than flying apart?

This universe would also lack subjective experience. That is, subjective experience does not exist in the absence of Information Processing Systems, such as brains.

Thus, Information Processing Systems are differentiated systems of matter that interact with the environment and in doing so, produce or generate subjective experience. In my view, this is what Evan Thompson is seeking. To "understand the emergence of living subjectivity from living being."

It rather seems to me that interacting physical systems {by exchanging information} produce the environment in which fields, forces, stars, planets, and galaxies can form, and on certain planets produce local environments in which biological life can arise and ultimately move from protoconsciousness to consciousness -- i.e., subjectivity. Maturana, Varela, and Thompson have recognized all along that protoconsciousness and consciousness must be understood in terms of phenomenology, particularly the phenomenology of embodiment and awareness developed by Merleau-Ponty. Computational systems theorists are going to have to recognize at some point that meaning is a human creation based in and developed historically in socially lived experience, not a pre-given 'information system' that humans express/act out only after their computational brains have absorbed it first.

I don't believe that all Information Processing Systems create/produce/generate subjective experience, only certain ones. Also, I don't think human brains are the only IPS's that can create subjective experience. For example, I'm open to the possibility that our universe itself is an IPS capable of producing subjective experience. I think there are likely a multitude of non-human IPS's in our universe capable of generating subjective experience.

@ufology, who posted in this thread earlier, holds to a similar view. I hope he'll respond to my tag and rejoin this discussion. My next question is how these 'IPS's' 'capable of generating subjective experience' arise in nature only in some places rather than in the whole of the universe.

You also wrote in that last paragraph: "I don't think human brains are the only IPS's that can create subjective experience." Again, it remains to be demonstrated that a nonhuman 'brain' can "create subjective experience" [when experience as we subjectively know it occurs within and by embodied interaction with a directly experienced phenomenal world].
 
And I appreciate you taking the time to reply to me! :)

How would a universe 'filled with differentiated matter' but lacking integrated information systems -- indeed not evolving and maintaining a system of systems -- hold itself together rather than flying apart?
Well, if what scientists tell us about the history of the universe and Earth is correct, at one point in time in our universe there were no planets, and thus no Earth, and thus no Earthlife. Thus it's conceivable there was a (long period of) time in our universe when there was no life, and therefore as I contend, no subjective experience.
Computational systems theorists are going to have to recognize at some point that meaning is a human creation based in and developed historically in socially lived experience, not a pre-given 'information system' that humans express/act out only after their computational brains have absorbed it first.
I agree that meaning is something that only conscious beings can create. Jordan Peterson has talked about this on the macro level of existence (h/t to @smcder), and I feel that DD in the essay you linked above did an excellent job of outlining this on the micro level. Humans are meaning-creation machines.
You also wrote in that last paragraph: "I don't think human brains are the only IPS's that can create subjective experience." Again, it remains to be demonstrated that a nonhuman 'brain' can "create subjective experience" [when experience as we subjectively know it occurs within and by embodied interaction with a directly experienced phenomenal world].
I agree with you 100%. I understand that incredibly intelligent people such as Searle don't believe it's possible. He may be right. Consciousness may be substrate dependent, but I've not read a convincing case of why/how that might be so. And if consciousness is a result of dualism, then it definitely isn't substrate dependent and can presumably localize in/on a non-human just as easily as a human.

Finally, I don't believe "we" directly experience the world, if by we you mean our minds. I would agree our "living bodies" directly interact with the world, but our bodies do not have subjective experiences, they generate subjective experiences. Again, I thought Evan Thompson said it so well:
[Y]ou are a living bodily subject of experience and an intersubjective mental being.
@Constance What do you think Evan Thompson is saying in that line above?

What is a living bodily subject of experience?

What is an intersubjective mental being?

He's making a distinction between the two, and I think I know what he means, but I also think I may be wrong.
 
(As a side note, Searle has interesting ideas about consciousness. He is what I might call an Uber Monist. Not only does he think consciousness is a purely biological phenomenon, he believes only biological processes can give rise to consciousness. He views AI/AGI as a form of dualism! Thus, I'd love to hear Searle's response to David Deutsch.)

Unless Searle has written something since this - it seems he is open to artificial intelligence.

"The fact that brain processes cause consciousness does not imply that only brains can be conscious. The brain is a biological machine, and we might build an artificial machine that was conscious; just as the heart is a machine, and we have built artificial hearts. Because we do not know exactly how the brain does it we are not yet in a position to know how to do it artificially." (Biological Naturalism, 2004)
 
@Constance - I only had time to skim Nagel's Mind and Cosmos and McGilchrist's The Master and His Emissary - before they were due back through interlibrary loan.

You can make short work of Nagel's book as it is short and written as a straightforward argument ... interestingly, he makes some use of an argument similar to Plantinga's evolutionary argument against naturalism (Nagel is an atheist) ... and there are some other very interesting bits.

McGilchrist's book is very rich and deals with many things we've discussed about modern culture with a compelling explanation ... I wanted to post some quotes, but there was something on almost every page I turned to worth posting in my opinion (you can probably see this on Googlebooks ...). I plan to buy it.

10,000 word essay version for 99 cents on Kindle, I downloaded this a while back and recommend the full book instead:

 
@Constance

My next question is how these 'IPS's' 'capable of generating subjective experience' arise in nature only in some places rather than in the whole of the universe.
I believe IPSs can and likely do exist throughout our universe - on planets and conceivably off-planets as well.
 
http://en.wikipedia.org/wiki/Phenomenology_(philosophy)

McGilchrist's take on this adds the twist of there being two worlds - with the left hemisphere being in ascendance, even though it is or was neurologically "subordinate" to the right hemisphere - his discussion of changes in language (reading left to right) is fascinating in this regard, as well as changes in art and portrayals of self.

This being more the right hemisphere approach:

If we are to understand technology we need to ‘return’ to the horizon of meaning that made it show up as the artifacts we need, want and desire. We also need to consider how these technologies reveal (or disclose) us.[29]

That this isn't what is happening, is what McGilchrist diagnoses.

The Hubert Dreyfus approach (contemporary society)
In critiquing the artificial intelligence (AI) programme Hubert Dreyfus (1992) argues that the way skill development has become understood in the past has been wrong. He argues, this is the model that the early artificial intelligence community uncritically adopted. In opposition to this view he argues, with Heidegger, that what we observe when we learn a new skill in everyday practice is in fact the opposite. We most often start with explicit rules or preformulated approaches and then move to a multiplicity of particular cases, as we become an expert. His argument draws directly on Heidegger's account in Being and Time of humans as beings that are always already situated in-the-world. As humans ‘in-the-world’ we are already experts at going about everyday life, at dealing with the subtleties of every particular situation—that is why everyday life seems so obvious. Thus, the intricate expertise of everyday activity is forgotten and taken for granted by AI as an assumed starting point.[29] What Dreyfus highlighted in his critique of AI was the fact that technology (AI algorithms) does not make sense by itself. It is the assumed, and forgotten, horizon of everyday practice that make technological devices and solutions show up as meaningful. If we are to understand technology we need to ‘return’ to the horizon of meaning that made it show up as the artifacts we need, want and desire. We also need to consider how these technologies reveal (or disclose) us.[29]
 

downloading video now to listen to tomorrow ...
 
Indeed and that is critically important for how we live now (which is why I return to theoretical discussions like this one). The Buddhist makes one kind of choice; the existential phenomenologist makes another -- to take the phenomenally disclosed world as real, with real and outrageous and unnecessary suffering in it, and to construct rational social theory and resulting politico-economic programs to relieve that suffering and injustice.

This is one I'd also like to further explore ... I worked in mental health advocacy and with the homeless and prior to that with the state legislature and I saw so many "good intentions" and other wasteful activity that I became pretty cynical about programs and I'm interested in what alternatives there might be ...
 
I don't believe "we" directly experience the world, if by we you mean our minds. I would agree our "living bodies" directly interact with the world, but our bodies do not have subjective experiences, they generate subjective experiences.

You'd need to read some phenomenological philosophy to be persuaded otherwise. Maybe Thompson's Mind in Life, which you indicated an interest in, will do it for you.


Again, I thought Evan Thompson said it so well:

[Y]ou are a living bodily subject of experience and an intersubjective mental being.

@Constance What do you think Evan Thompson is saying in that line above?

What is a living bodily subject of experience?

What is an intersubjective mental being?

He's making a distinction between the two, and I think I know what he means, but I also think I may be wrong.


Where did you find that sentence? Was it in Thompson's precis of Mind in Life that I posted maybe a week ago? What context (paragraph) does it appear in? I'd need that to be sure about what he meant in that sentence by seeing it in context. But I am sure he's not separating the body and the mind ontologically; he's a follower and an exponent of Merleau-Ponty's phenomenology of embodied consciousness, embodied subjectivity, as he states in the first half of your quoted sentence. In the second half of the sentence he is referring to the intersubjectivity of human experience and thought expressed in the history of human culture, and borne out of our individual recognition of other humans, all other humans, as like ourselves in their existential situation, their consciousness, their needs, and their moral claims upon us to safeguard their freedom and integrity. You might look up Emmanuel Levinas, another phenomenological philosopher who emphasized the relation of the individual and the other.
 
http://en.wikipedia.org/wiki/Phenomenology_(philosophy)

McGilchrist's take on this adds the twist of there being two worlds - with the left hemisphere being in ascendance, even though it is or was neurologically "subordinate" to the right hemisphere - his discussion of changes in language (reading left to right) is fascinating in this regard, as well as changes in art and portrayals of self.

This being more the right hemisphere approach:

If we are to understand technology we need to ‘return’ to the horizon of meaning that made it show up as the artifacts we need, want and desire. We also need to consider how these technologies reveal (or disclose) us.[29]

That this isn't what is happening, is what McGilchrist diagnoses.

I have not yet read McGilchrist but have been intending to do so based on your earlier references to him. I will do so tomorrow.


The Hubert Dreyfus approach (contemporary society)
In critiquing the artificial intelligence (AI) programme Hubert Dreyfus (1992) argues that the way skill development has become understood in the past has been wrong. He argues, this is the model that the early artificial intelligence community uncritically adopted. In opposition to this view he argues, with Heidegger, that what we observe when we learn a new skill in everyday practice is in fact the opposite. We most often start with explicit rules or preformulated approaches and then move to a multiplicity of particular cases, as we become an expert. His argument draws directly on Heidegger's account in Being and Time of humans as beings that are always already situated in-the-world. As humans ‘in-the-world’ we are already experts at going about everyday life, at dealing with the subtleties of every particular situation—that is why everyday life seems so obvious. Thus, the intricate expertise of everyday activity is forgotten and taken for granted by AI as an assumed starting point.[29] What Dreyfus highlighted in his critique of AI was the fact that technology (AI algorithms) does not make sense by itself. It is the assumed, and forgotten, horizon of everyday practice that make technological devices and solutions show up as meaningful. If we are to understand technology we need to ‘return’ to the horizon of meaning that made it show up as the artifacts we need, want and desire. We also need to consider how these technologies reveal (or disclose) us.[29]

These comments following the Deutsch paper are relevant:

haig:
"I enjoyed David's thoughts, but at the end he makes an error when he says:
"the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees. So in one respect I can agree with the AGI-is-imminent camp: it is plausible that just a single idea stands between us and the breakthrough."

The differences in DNA between humans and chimpanzees specify the differences in phenotype from chimp to human, but you can't get the human without all the previous evolutionary advances. It is not that we understand everything about brains except the parts that are responsible for higher functioning in humans. On the contrary, AI has had the most success in recreating the highest level reasoning processes, but does not have a clue how to incorporate the lower level behavior of nervous systems. Chess is easy, object recognition is hard.

What needs to be acknowledged is that the goals of AGI or Strong AI, things like nuanced language recognition and common-sense reasoning, creativity, metaphorical thinking, etc, are activities that require all the baggage that came before they arose in humans, plus our need to understand those processes themselves as additions to and which take advantage of, the lower-level functions of less complex nervous systems."


Guest:
"Thoughtful comment, but you have made an error as well.

It is not that we understand everything about brains except the parts that are responsible for higher functioning in humans. On the contrary, AI has had the most success in recreating the highest level reasoning processes, but does not have a clue how to incorporate the lower level behavior of nervous systems. Chess is easy, object recognition is hard.

The way we humans think about chess, as David mentioned, is qualitatively different than the way our current "AI" systems think about chess. A computer goes through millions of possible situations each turn, computing which one is the most advantageous. A grandmaster only considers a limited amount of moves, each a few turns in, guided mainly by experience and intuition as to which moves to consider. Thus, when a computer beats a human at chess, it gives the appearance that the computer has achieved higher-level thinking.

We understand how neurons work in transmitting signals, but our understanding quickly breaks down when we reach the symbol level of reasoning (groups of neurons), and deteriorates towards zero as we move up the hierarchy of intelligence. This is a fundamental problem in AGI programming (as David mentioned). To first understand how to build AGI machines, we must first understand ourselves. An interesting proposition to say the least. Does this mean that by the time we are capable of AGI we can effectively predict human behavior? And in turn, predict the behavior of AGI? (The alternative to this is somehow accidentally stumbling upon the qualitative feature of AGI, an event that has what I consider an infinitesimally small chance of occurring).

For a more in-depth look into the fundamental problems of AGI take a look at Gödel, Escher, Bach: An Eternal Golden Braid (I'm only half kidding... that book is huge!)."


  • Jessie Henshaw
    "Good point, it's not just the finishing touches of life we don't understand how to reproduce in a computer, it's really every part. There's a very fundamental difference between the information relationships and physical ones that is visible enough, but not yet of interest to science it seems. Physical process work by locally emerging complex development processes, inherently not possible to represent in information. It's an odd implication of the type of continuity in physical systems implied by energy conservation. Information processes just create images of the rules made up by people."
 
Unless Searle has written something since this - it seems he is open to artificial intelligence.

"The fact that brain processes cause consciousness does not imply that only brains can be conscious. The brain is a biological machine, and we might build an artificial machine that was conscious; just as the heart is a machine, and we have built artificial hearts. Because we do not know exactly how the brain does it we are not yet in a position to know how to do it artificially." (Biological Naturalism, 2004)
From Searle's Wikipedia page:

A consequence of biological naturalism is that if we want to create a conscious being, we will have to duplicate whatever physical processes the brain goes through to cause consciousness. Searle thereby means to contradict what he calls "Strong AI", defined by the assumption that as soon as a certain kind of software is running on a computer, a conscious being is thereby created.[37]
I'm certainly not a computer scientist, but Searle's contention seems valid to me.

I think mind arises from physical brains interacting with the physical environment. I think mind can be conceived as a property of these physical systems (brains) a la Chalmers and Max Tegmark.

Like I say, I don't know enough about computer science to have a sense of whether an entire physical brain can be simulated via software and thus generate subjective experience. For sure, the complexity of our physical environment can't be simulated by a computer program.

As I think mind arises from the interaction of physical organisms with our physical environment, I'm leaning in Searle's direction. True AGI will require a physical brain, not an algorithm alone. Mind is substrate-dependent in the sense that it needs a substrate. Whether a completely digital substrate would work, I doubt - if only because we could never represent the environment digitally.

I believe what Searle is saying is that to argue otherwise would be to say we can have the property "liquid" without any physical molecules. Now, if the property liquid can be reproduced digitally, I stand corrected.
 
You'd need to read some phenomenological philosophy to be persuaded otherwise. Maybe Thompson's Mind in Life, which you indicated an interest in, will do it for you.
Yes, I think so. I do plan to read that book.
Evan Thompson, Mind in Life:

[Y]ou are a living bodily subject of experience and an intersubjective mental being.

@Constance

Where did you find that sentence? Was it in Thompson's precis of Mind in Life that I posted maybe a week ago? What context (paragraph) does it appear in? ... I am sure he's not separating the body and the mind ontologically.
Yes, it's at the very end.

And, yes, I interpreted it as him separating the body and mind, but I also vaguely understand that Phens don't believe that. (Which I find interesting about you @Constance because I get the sense that you believe in the immortality of the soul, that is, that the mind/soul exists before and after the body. But how can that be so if the physical body and mental mind are indivisible?)

Thompson was the most coherent of all the Phens that I've read so far, so I look forward to reading him.
 
This man's description of pre-reflective consciousness and reflective consciousness made perfect sense to me.

My issue earlier was the addition of the word "self" in the term: So instead of pre-reflective consciousness (which made sense to me), the term pre-reflective self-consciousness was being used, which didn't make sense to me. If it's "pre" reflective, then there is no sense of self.

Here is a question I've been pondering.

When we are in a state of pre-reflective consciousness - when our mind is drifting off whilst reading a book - we are absorbed in our thoughts and have no sense of self. But, because we have the capacity for reflective consciousness, we eventually "catch" ourselves and can reflect on current thoughts and even reflect back on prior, pre-reflective thoughts.

My contention is that some organisms don't have the capacity for reflective consciousness, but they do have the capacity for pre-reflective consciousness. Thus, they have thoughts, experiences, emotions, etc., but they don't have the capacity to reflect on them.

Do these organisms possessing only the capacity for pre-reflective consciousness have a what-it's-like? Or are they zombies? When you're drifting off in thought whilst reading a book, you have no sense of self (for a moment), but imagine an organism that can "drift" off in thought but never "catch" itself and reflect...

(I'll have to read about the Phen concept of protoconsciousness.)
 
A long read that is applicable to this discussion on the direct level and meta level. :)

What’s the evidence on using rational argument to change people’s minds? : May 2014 : Contributoria - community funded, collaborative journalism

... The wider context is the recent progress in the sciences that puts our species in the biological context of the animals, a project that most psychologists are signed up to to some degree. A reflection of this is all the experiments which attempt to give a mechanistic - that is natural - account of the mind, an account which downplays idiosyncrasy, subjectivity and nondeterminism. The philosopher John Gray was reflecting on this trend in research, as well as giving vent to his own enthusiastic pessimism, when he wrote:

"We cannot wake up or fall asleep, remember or forget our dreams, summon or banish our thoughts, by deciding to do so. When we greet someone on the street we just act, and there is no actor standing behind what we do. Our acts are end points in long sequences of unconscious responses. They arise from a structure of habits and skills that is almost infinitely complicated. Most of our life is enacted without conscious awareness."

The science, and those who promote it, seem to be saying that we're unreasonable creatures. That's a problem, given that many of our social institutions (such as democracy) are based on an assumption that rational persuasion can occur. If I believed the story told in these books I would be forced to choose between my profession as a cognitive scientist and political commitment as a citizen and democrat.

Fortunately, as a cognitive scientist, I don't have to believe what I'm told about human nature - I can look into it myself.
 
On Chalmer's hard problem of consciousness:
From Wikipedia: The hard problem of consciousness is the problem of explaining how and why we have qualia or phenomenal experiences — how sensations acquire characteristics, such as colours and tastes.[1] David Chalmers, who introduced the term "hard problem" of consciousness,[2] contrasts this with the "easy problems" of explaining the ability to discriminate, integrate information, report mental states, focus attention, etc. Easy problems are easy because all that is required for their solution is to specify a mechanism that can perform the function. That is, their proposed solutions, regardless of how complex or poorly understood they may be, can be entirely consistent with the modern materialistic conception of natural phenomena. Chalmers claims that the problem of experience is distinct from this set, and he argues that the problem of experience will "persist even when the performance of all the relevant functions is explained".[3]
I can read this paragraph with much more ease after weeks of discussing and thinking about consciousness.

I believe even more strongly now that consciousness (subjective experience) is a property or state (still confused as to which is more appropriate) that is emergent from brains. Just as the state of matter known as liquid cannot be reduced to single molecules but emerges "magically" from certain, unique arrangements of molecules, so it is with subjective experience - subjective experience emerges "magically" from certain, unique arrangements of matter.

@Constance, you posted the following article in a thread here some time ago:
Why Physicists Are Saying Consciousness Is A State Of Matter, Like a Solid, A Liquid Or A Gas — Medium

Tegmark’s approach is to think of consciousness as a state of matter, like a solid, a liquid or a gas. “I conjecture that consciousness can be understood as yet another state of matter. Just as there are many types of liquids, there are many types of consciousness,” he says.

He goes on to show how the particular properties of consciousness might arise from the physical laws that govern our universe. And he explains how these properties allow physicists to reason about the conditions under which consciousness arises and how we might exploit it to better understand why the world around us appears as it does.
Seeking to "explain" how subjective experience emerges from brains is perhaps as futile as explaining how liquidity emerges from systems of molecules.
 
Here is a graphic I created to organize my thoughts, and to clarify the question I asked above:
When we are in a state of pre-reflective consciousness - when our mind is drifting off whilst reading a book - we are absorbed in our thoughts and have no sense of self. But, because we have the capacity for reflective consciousness, we eventually "catch" ourselves and can reflect on current thoughts and even reflect back on prior, pre-reflective thoughts.

My contention is that some organisms don't have the capacity for reflective consciousness, but they do have the capacity for pre-reflective consciousness. Thus, they have thoughts, experiences, emotions, etc., but they don't have the capacity to reflect on them.

Do these organisms possessing only the capacity for pre-reflective consciousness have a what-it's-like? Or are they zombies? When you're drifting off in thought whilst reading a book, you have no sense of self (for a moment), but imagine an organism that can "drift" off in thought but never "catch" itself and reflect...

[Attached image: Self Aware Experience Graphic.JPG]


Objective Reality: First of all, the human body/brain is a foam of particles as well, not just the environment. But the point is that objective reality is composed of particles.

Subjective Experience: Our bodies and the environment interact, and from this interaction arises subjective experience. (Note, though, that even in the absence of almost all environmental stimuli, the brain can still generate subjective experience.) However, there is no one to "observe" this subjective experience. I would correlate this mental state with pre-reflective consciousness. I believe humans can enter this state of consciousness, and I believe many organisms are always in this state of consciousness.

Self-Aware Subjective Experience: This is a state of consciousness in which subjective experience is aware of itself and becomes a "self." It becomes the observer of itself observing the universe. This is when there is a sense of "I." Thus begins a never-ending feedback loop. This state of consciousness I would correlate with reflective self-consciousness. I think humans can enter this state, but I believe many organisms cannot. It is this "level" of consciousness that allows one to create meaning via the use and manipulation of symbols.

In the Subjective Experience state the qualia of peppers exists, but as there is no "observer" this qualia has no meaning. Once this qualia is observed in the Self-Aware Subjective Experience state, meaning and thus a "what-it's-like" arises.

I believe Tononi/IIT explains how the organism (matter) creates Subjective Experience (an emergent property/state, i.e., Integrated Information). But for me, the "hard problem" is how does this Subjective Experience observe itself? Somehow, a stream of Subjective Experience "sees" itself.
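
(Here is a purely illustrative toy sketch in Python of the layering in the graphic above - not a claim about how brains do it. One object merely registers stimuli into a running "stream"; a second additionally keeps a record about its own registering, a crude analogue of a stream that "sees" itself.)

```python
class Organism:
    """Registers stimuli but keeps no model of itself doing so ("pre-reflective")."""
    def __init__(self):
        self.stream = []                 # running "stream of experience"

    def sense(self, stimulus):
        self.stream.append(stimulus)

class ReflectiveOrganism(Organism):
    """Additionally records observations about its own stream -- a crude self-model."""
    def __init__(self):
        super().__init__()
        self.self_model = []

    def sense(self, stimulus):
        super().sense(stimulus)
        # The loop: the system takes its own just-updated state as an object.
        self.self_model.append(f"noticed myself sensing {stimulus!r}")

o = ReflectiveOrganism()
o.sense("red pepper")
print(o.stream)        # ['red pepper']
print(o.self_model)    # ["noticed myself sensing 'red pepper'"]
```

Of course, everything hard is left out: nothing here explains why a system taking its own state as an object should feel like anything, which is exactly the problem stated above.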
 
I'm certainly not a computer scientist, but Searle's contention seems valid to me.

I think mind arises from physical brains interacting with the physical environment. I think mind can be conceived as a property of these physical systems (brains) a la Chalmers and Max Tegmark.

Like I say, I don't know enough about computer science to have a sense of whether an entire physical brain can be simulated via software and thus generate subjective experience. For sure, the complexity of our physical environment can't be simulated by a computer program.

As I think mind arises from the interaction of physical organisms with our physical environment, I'm leaning in Searle's direction. True AGI will require a physical brain, not an algorithm alone. Mind is substrate-dependent in the sense that it needs a substrate. Whether a completely digital substrate would work, I doubt - if only because we could never represent the environment digitally.

I believe what Searle is saying is that to argue otherwise would be to say we can have the property "liquid" without any physical molecules. Now, if the property liquid can be reproduced digitally, I stand corrected.

I was responding to your statement:

As a side note, Searle has interesting ideas about consciousness. He is what I might call an Uber Monist. Not only does he think consciousness is a purely biological phenomenon, he believes only biological processes can give rise to consciousness. He views AI/AGI as a form of dualism! Thus, I'd love to hear Searle's response to David Deutsch.

- with a quote by Searle that clarified for me his position on artificial intelligence:

"The fact that brain processes cause consciousness does not imply that only brains can be conscious. The brain is a biological machine, and we might build an artificial machine that was conscious; just as the heart is a machine, and we have built artificial hearts. Because we do not know exactly how the brain does it we are not yet in a position to know how to do it artificially." (Biological Naturalism, 2004)


 

listened to this video on pre reflective consciousness:

horizon of intelligibility

"earth thinks of thinkers thinking thoughts of thinkers thinking"

- it really does seem similar to some Buddhist reading on awareness/mindfulness I did this week - no surprise there as I think the Buddhists are good phenomenologists - although their concern is the alleviation of suffering, so everything is focused on that goal. "emerge to myself" - the construction of an "I" bringing pre-reflective consciousness to consciousness ... the experiment to stop your thoughts was interesting and the recognition that you hold your breath, breath being central to meditation practice - the training is to bring the attention to the breath, if you have a difficult emotion, attend to the breath and see what it's doing, same for thoughts - so the goal isn't stopping thought but awareness of thought, then release it and return to the breath ... so it was very interesting that he commented that most people stop the breath to stop thoughts - the connection between speech/breath and both being semi-autonomous, also interesting ... it may be stretching the connection but there is an enormous emphasis on "wise speech" in Buddhism, in part because of how much of it we produce and of its impact on people, so connecting the breath to speech to ethical behavior ... and also his discussion of training thought through reading and other communication technologies, so thought too is semi-autonomous ... breath, speech, thought - all as flows or processes that we can control to some extent with awareness and intention ... and I like the idea of Earth thinking thoughts, of who "we" really are, what the "I" is as a construction - I'll look for more from this speaker.


 