Consciousness and the Paranormal — Part 6

p.s. If I were a solipsist, I would say that the MBP did not exist until 49 years ago. I reject solipsism: most people I come across (though interestingly not all) seem to share my sense of being isolated in this world. For a solipsist, mental properties, it seems, have nothing to do with mind.
 
@Soupie
answering you more directly, you say,
"My understanding is that most philosophers of mind would say that the Sense of Self was a (mental) property of consciousness, not the other way around, as you seem to suggest.
And, you say that all mental aspects* could be properties of physical processes, but you haven't articulated a model of how this might be."

The indications are that physical processes cause mental characteristics; there are no known mental characteristics that cannot be disrupted by physical interventions. I don't have to explain how sleeping tablets cause unconsciousness to make this kind of argument: no model is required.

You may be right about most philosophers. Most philosophers' ideas about consciousness are predicated on their attitude toward dualism/monism/materialism. But that is because consciousness is a mystery that is entangled with self. Consciousness (and other mental properties) is entangled with mind. For me, I don't know what is a property of what: I have not figured out the self and never will ;).
When I fall asleep tonight, I'll hazard a guess that my self will continue to exist such that when I wake in the morning (God willing) it will be me that wakes and kisses my wife, not someone else.
 
@Soupie:
you say, 'The SEP articulates it concisely'

Where?
You asked what all those problems had in common. I quoted the MBP as articulated at the SEP:

"What is the relationship between physical properties and mental properties."

That is what all those problems have in common: the hard problem, the mind-body problem, the explanatory gap, the objectivity-subjectivity problem.

What they all have in common is the problem of "how the mental and the physical relate."

As noted, I now recall that you apparently do not think the question of "how the mental and physical relate" is a problem. In fact, I recall Smcder and I questioning whether you grokked the Hard Problem. Not everyone does.

You say: "The indications are that physical processes cause mental characteristics; there are no known mental characteristics that cannot be disrupted by physical interventions."

Can you provide a model of how this might be? For instance, can you briefly outline physical processes that might cause the experience of the color red to exist? (An in depth explanation would be preferred of course.)

Pharoah you say: "The puzzle of the mind is not that minds have mental properties and characteristics, but that you have the particular one you have."

Okay, let me see if I understand what you are saying.

(1) The puzzle of the mind is not that minds have mental properties and characteristics.

The mind-body problem is how physical properties (organisms) relate to mental properties (minds). So I'm not "puzzled" as to why minds have mental properties; I'm puzzled about how organisms and their conscious minds relate. (However, I certainly am interested in how minds come to have the various properties that they do.)

(2) [The puzzle of the mind is] that you have the particular one you have.

This may seem like I'm splitting hairs, but it's a serious question: To what does the "you" refer in the above statement?

Does it refer to the physical body? So the statement would be:

"The puzzle of the mind is that your body has the particular mind that it does."

But I don't see why this is a puzzle at all! You've just said that "physical processes cause mental characteristics."

If I understand you correctly, then it follows that: physical bodies cause the mental mind.

On this view, mental properties, being caused by physical processes, are essentially physical properties. On your view, there is no sense in distinguishing between mental and physical properties because, as you say, so-called mental properties are caused by physical processes; why not call them physical properties then?

So asking why a particular body causes a particular mind is like asking why a particular thunderstorm causes a particular bolt of lightning.

I'm not seeing why this is a problem or a puzzle.

Now, I sense that the "you" in your statement above does not refer to the physical body. What does the "you" refer to if not the physical body?

Does the "you" refer to the mind?

"The puzzle of the mind is not that minds have mental properties and characteristics, but that [your mind] has the particular [mind it has]."

Can minds have minds? That doesn't make sense.

So we've got three entities here:

(1) Body
(2) Mind
(3) You

You suggest that minds are physical, so really we just have two:

(1) Body/Mind
(2) You

If you assert that the mind and body are one, then surely you've solved the mind-body problem (how mental properties and physical properties are related). Please share the solution!

But that still leaves us with the question of the ontological identity of (2) you.

If "you" is not the physical body and "you" is not the mental mind, then what is the ontological identity of "you"?

And if this "you" does not have any physical properties or any mental properties (being distinct from the body and the mind) what properties might it have?
 
@Soupie
A lot in your post.
1. "That is what all those problems have in common: the hard problem, the mind-body problem, the explanatory gap, the objectivity-subjectivity problem.
What they all have in common is the problem of "how the mental and the physical relate." "

Ok. Perhaps I should have said, what distinguishes them from one another?
I recognise that they are grappling with mental/physical problems, but how do they differ? What distinction does each draw, such that we can say why all their terms of reference are necessary for expressing different aspects of problems to do with mental/physical stuff?
 
@Soupie
2. "You say: "The indications are that physical processes cause mental characteristics; there are no known mental characteristics that cannot be disrupted by physical interventions."
Can you provide a model of how this might be? For instance, can you briefly outline physical processes that might cause the experience of the color red to exist? (An in depth explanation would be preferred of course.)"

Not sure how your question relates to what I said. Does it?
But, to consider your question: recently I came up with a way of explaining to you why the experience of red exists because of physical processes. I recognise that I have failed in the past. I'll post it when I have written it, time permitting.
 
@Soupie
2. "You say: "The indications are that physical processes cause mental characteristics; there are no known mental characteristics that cannot be disrupted by physical interventions."
Can you provide a model of how this might be? For instance, can you briefly outline physical processes that might cause the experience of the color red to exist? (An in depth explanation would be preferred of course.)"

Not sure how your question relates to what I said. Does it?
But, to consider your question: recently I came up with a way of explaining to you why the experience of red exists because of physical processes. I recognise that I have failed in the past. I'll post it when I have written it, time permitting.
Do you not consider the "experience of red" to be a mental characteristic or property?
 
@Soupie
2.a) It is not a property or a characteristic. It is a qualitatively relevant interactive response by a certain dynamic composite physical entity, of which your body is one example.

3.
"The mind-body problem is how physical properties (organisms) relate to mental properties (minds). So I'm not "puzzled" as to why minds have mental properties; I'm puzzled about how organisms and their conscious minds relate."
Of course, I disagree with the first statement (SEP), and your puzzlement amounts to what I consider an empirical query. You keep asking for models of explanation. I think of you as needing empirical explanations.
I have not read Levine's explanatory gap paper (because it costs money), but from what I understand from commentaries, it is about the empirical question as to how physical states can give rise to mental characteristics. Correct me if I am wrong.
I don't think of these queries as being about mind at all insofar as their eventual answers would not satisfy the metaphysical question as to what my mind is above and beyond the physical processes that cause mental characteristics.
 
@Soupie
4. "[The puzzle of the mind is] that you have the particular one you have.
This may seem like I'm splitting hairs, but it's a serious question: To what does the "you" refer in the above statement?
Does it refer to the physical body?"

No.
Your body and mental states are different from mine. That difference is not the "you" to which I refer.
 
@Soupie
5. re "you"
I look out of my eyes and you look out of yours. I have my forever-changing body and mental states and you have yours. There is nothing about these properties or characteristics that indicates why, from your inception in your mother's womb, your window onto the world would be Soupie and not Pharoah or any of a zillion other potential individuals. Neither the physical differences nor the physical similarities can answer why your window on the world ended up where it has ended up in the universe of time and space.
 
@Soupie
2.a) It is not a property or a characteristic. It is a qualitatively relevant interactive response by a certain dynamic composite physical entity, of which your body is one example.
So qualia (such as red) are not mental properties or characteristics, but rather "responses."

I have not read Levine's explanatory gap paper (because it costs money), but from what I understand from commentaries, it is about the empirical question as to how physical states can give rise to mental characteristics. Correct me if I am wrong.
I think it is an empirical question, but as discussed here at length, there are reasons to believe that consciousness cannot be empirically observed or experienced. Therefore, how unobservable, conscious processes relate to physical, observable processes is currently unexplained. This is the explanatory gap.

For example, while someone is, say, looking out a window, we can empirically observe the neurological workings of their brain, but we cannot empirically observe their conscious experiences (about looking out the window).

I don't think of these queries as being about mind at all insofar as their eventual answers would not satisfy the metaphysical question as to what my mind is above and beyond the physical processes that cause mental characteristics.
It seems to me that if mental characteristics are caused by physical processes, then there is no need to assume the mind is anything beyond said mental characteristics. Wouldn't the mind be constituted of these mental characteristics?

@Soupie
5. re "you"

I look out of my eyes and you look out of yours. I have my forever-changing body and mental states and you have yours. There is nothing about these properties or characteristics that indicates why, from your inception in your mother's womb, your window onto the world would be Soupie and not Pharoah or any of a zillion other potential individuals. Neither the physical differences nor the physical similarities can answer why your window on the world ended up where it has ended up in the universe of time and space.
Again, we could say the same thing about any dynamic system, indeed anything.

Why is the oak tree in my neighbor's yard in his yard and not the one in your neighbor's yard? Why is the raindrop falling from the sky in Brazil not the raindrop falling from the sky in England?

Again, you say "I look out of my eyes." What is this "I" and "you" that you speak of? If it's not the body or the mind, what is it?
 
Read it without the parentheses.

"HCT says that environmental stimuli are qualitatively relevant to the organism and this process results in phenomenal consciousness."

The bottom line is that HCT does not provide an explanation of the "what its like" or "feeling" aspect of consciousness.

Does HCT have to provide that explanation to be a useful model of what Pharoah calls evolutionary 'constructs'? The 'what it's like' and 'feeling' aspects of consciousness constitute the hard problem, the great problem (paralleling the mind-body problem) that the whole field of consciousness studies must deal with.
 
""The mind-body problem is how physical properties (organisms) relate to mental properties (minds). So I'm not "puzzled" as to why minds have mental properties; I'm puzzled about how organisms and their conscious minds relate."

Of course, I disagree with the first statement (SEP), and your puzzlement amounts to what I consider an empirical query. You keep asking for models of explanation. I think of you as needing empirical explanations.

I have not read Levine's explanatory gap paper (coz it costs), but from what I understand from commentaries, it is about the empirical question as to how physical states can give rise to mental characteristics. Correct me if I am wrong.

I don't think of these queries as being about mind at all insofar as their eventual answers would not satisfy the metaphysical question as to what my mind is above and beyond the physical processes that cause mental characteristics."
 
@Soupie—Here goes:
You agree with the notion of physiological mechanisms being qualitatively relevant.
You say HCT does not then tell us how the feeling of qualia comes about. It does not explain phenomenal consciousness.

One can assume that pretty much any environmental trigger could potentially be important to a species. Therefore any physiological mechanism that makes individuals responsive to such triggers could be beneficial to survival.
Such environmental triggers (or properties or characteristics of the environment, if you prefer) might include colours, hues, colour saturations, movements, sizes, shapes, temperature, pH, humidity, vibrations in any frequency range, molecular character, etc.

So, over time, a more basic species is likely to evolve physiologically in a manner that makes it increasingly responsive to all of these kinds of triggers.
Some of the triggers are beneficial, others harmful. Some are beneficial in certain circumstances and harmful in others. Sometimes certain triggers are highly important and at other times totally irrelevant, etc.
Inevitably, diverse physiological mechanisms come into play that make the individuals of the species more responsive to these differences. Speed of response is also crucial in a rapidly changing environment.

Furthermore, at any given instant, a more sophisticated organism may have a trillion sensory inputs, ALL of which are of qualitative relevance, but some more than others. Neural networks facilitate rapid assimilations. Certain neurological features will do a better job than others.

How do neural networks assimilate all of these qualitative inputs? I don't know. There are some obvious candidates. And one can speculate... ask the neuroscientist. Computationalists are keen to guess too.

I am saying with HCT that the character of phenomenal consciousness—such as its diversity of qualaic impressions, its memory of past experiences (I have observed that some flies remember the window they flew in through, while other fly species do not), its anticipation of future events (interestingly, adult birds anticipate landing on a branch, but many insects just crash because they do not anticipate future landings), its unity of experience, etc.—is the consequence of the management of trillions of inputs of qualitative relevance.

So "red" grabs the attention, is vibrant, blah blah... because during the assimilation of experience, in the mass weighing of a trillion potentially relevant qualitative assimilations, neural networks make it so—comparatively. Before you introspect (like a human), this is the nature of phenomenal consciousness. For instance, I just saw a red highlighter mark on my wooden desk and, before I had time to think about it, my first experiential impression was an intake of breath and a 'zoning in'—with unity of thought and a bit of anxiety—on what I thought (before I introspected) was blood. That is the constant nature of phenomenal consciousness for those creatures lacking introspective awareness of phenomenal consciousness; i.e., for these creatures, there are no labels. Rather, there is just a rich, individuated tapestry of changing qualitative 'impressions' about the qualitative relevance of things occupying time and space.
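The comparative weighing described above can be caricatured in a few lines of code. This is a minimal sketch under invented assumptions: the feature names, weights, and numbers are all hypothetical, not anything HCT itself specifies.

```python
def salience(signal, weights):
    """Toy 'qualitative relevance' score: a weighted sum of a signal's
    features (hue, motion, size). Purely illustrative; the features and
    weights are invented for this sketch."""
    return sum(weights[k] * v for k, v in signal.items())

# Hypothetical weights an organism might have evolved: red hues weighted
# heavily (blood, ripeness), motion moderately, size barely at all.
weights = {"redness": 5.0, "motion": 2.0, "size": 0.5}

# A field of competing sensory inputs at one instant.
inputs = [
    {"redness": 0.9, "motion": 0.0, "size": 0.1},  # red mark on a desk
    {"redness": 0.1, "motion": 0.2, "size": 0.8},  # the desk itself
    {"redness": 0.0, "motion": 0.9, "size": 0.3},  # something moving nearby
]

# 'Attention' goes to whichever input wins the comparative weighing:
# red "grabs the attention" only because the weights make it so.
attended = max(inputs, key=lambda s: salience(s, weights))
print(attended)  # the red mark wins
```

The point of the toy is only that "vibrancy" falls out of the comparison across all inputs, not out of any single input considered alone.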
 
So qualia (such as red) are not mental properties or characteristics, but rather "responses."

I think it is an empirical question, but as discussed here at length, there are reasons to believe that consciousness cannot be empirically observed or experienced. Therefore, how unobservable, conscious processes relate to physical, observable processes is currently unexplained. This is the explanatory gap.

For example, while someone is, say, looking out a window, we can empirically observe the neurological workings of their brain, but we cannot empirically observe their conscious experiences (about looking out the window).

It seems to me that if mental characteristics are caused by physical processes, then there is no need to assume the mind is anything beyond said mental characteristics. Wouldn't the mind be constituted of these mental characteristics?

Again, we could say the same thing about any dynamic system, indeed anything.

Why is the oak tree in my neighbor's yard in his yard and not the one in your neighbor's yard? Why is the raindrop falling from the sky in Brazil not the raindrop falling from the sky in England?

Again, you say "I look out of my eyes." What is this "I" and "you" that you speak of? If it's not the body or the mind, what is it?

@Soupie #630 :
"So qualia (such as red) are not mental properties or characteristics, but rather "responses." "
"responses" is a rubbish way of expressing what I was trying to say.

This is all much simpler than recent posts would have us believe:
Mental states existed before you existed,
therefore, mental states do not explain your existence.
To my way of thinking, the logic of this is unassailable.
(The fact that your existence is contingent on mental states existing is immaterial; the mental states in themselves are not a sufficient cause of your mind.)

So, given that mental states do not encapsulate mind (in its entirety), the MBP is not expressed merely in terms of mental properties and physical properties.
 
You say HCT does not then tell us how the feeling of qualia comes about. [That] it does not explain phenomenal consciousness. ...

Furthermore, at any given instance in time, a more sophisticated organism may have a trillion sensory inputs, ALL of which are of qualitative relevancy, but some more than others. Neural networks facilitate rapid assimilations. ...

I am saying with HCT, that the character of phenomenal consciousness ... [is] the consequence of the management of trillions of inputs of qualitative relevance.
Yes, as I noted when you first introduced it to us, I think it is a rich idea. And I also agree that it is indeed most likely this neurophysiological process—the dynamic integration (assimilation) of multiple modes of sensory input—from which phenomenal consciousness "emerges." (Incidentally, I would argue this is not a causal relationship in the usual sense.)

Coincidentally, the paper @Constance and I have just read offers a similar explanation of the emergence of phenomenal consciousness:

http://kognitywistyka.umcs.lublin.pl/wp-content/uploads/2014/04/Zlatev2009-CS.pdf

"Still, a basic form of “evaluation” can be performed even without any consciousness (i.e. phenomenal experience), on the basis of a biological value system, as in Damasio’s (2000) somatic marker theory, where “emotions” are defined as bodily states outside of awareness, and not as experiential ones. Some simple forms of learning may be performed on this basis, by connecting “features” of a disjoint Umwelt with such bodily states.

However, “binding” the Umwelt into coherent multimodal wholes – objects, scenes and situations – would be functional for behavioural flexibility, learning, anticipation and “problem-solving”, which are all necessary for navigating in a complex environment (Edelman 1992). The hypothesis that I am proposing is that consciousness emerged as a biological adaptation in creatures in need of a “common currency” for multimodal perception, action and evaluation, so that attentional resources can be allocated flexibly, and evaluation can be performed efficiently via feelings, e.g. for the purpose of anticipating the results of actions. Thus, (part of) the Umwelt becomes transformed into a Lebenswelt, perceived as separate from the acting and feeling subject. This on its side would lead to the pre-reflective self-consciousness of the minimal self, and to what Husserl called “the correlational structure of intentionality”: intentional objects are perceived as external to the self, but are simultaneously categorized and “felt” on the basis of internal phenomenal value systems. This, I believe, is what is meant by concepts such as “core consciousness” (Damasio 2000) and “primary consciousness” (Edelman 1992).

The advantage of such a view is that consciousness is clearly functional (cf. Donald 2001), and ceases to be a mystery for evolutionary theory. As mentioned in Section 3.2, there are even empirically grounded proposals for the neural bases of this adaptation: the widely distributed and interconnected thalamocortical system present in mammals, but much less developed in e.g. reptiles. As is generally the case in evolution, consciousness even in its “core” and “primary” forms hardly emerged wholesale. But since even a minimal form of what was sketched above would have been adaptive for animals inhabiting a complex Umwelt, it would have been selected for in our distant ancestors, and its evolution into “higher” forms, e.g. with more flexible control of attention, more diverse feelings etc., under way. "

The integration (assimilation) of multiple modes of sensory signals (input) into a unified medium (common currency) is a prime candidate for phenomenal experience.

Pharoah you ask: "How do neural networks assimilate all of these [sensory] inputs?"

As you note, this is a problem neuroscientists are certainly working to answer. But in the meantime, we can ask: If phenomenal consciousness does indeed emerge as an ontologically new phenomenon as a result of the process of sensory integration, how do we characterize this emergence?

(1) The emergence of p-consciousness is ultimately the result of neurological processes, or,

(2) The emergence of p-consciousness is ultimately the result of signal integration processes?
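The "common currency" hypothesis in the quoted passage—many modalities folded onto one evaluative scale so attention and feeling can be allocated across them—can be sketched as a toy valuation function. The modality names and numbers below are invented for illustration; this is not Zlatev's model or anyone else's.

```python
def integrate(modalities):
    """Toy 'common currency': collapse signals from different sensory
    modalities onto one shared evaluative scale so they can be compared.
    Each modality reports a (signal_strength, learned_value) pair."""
    return sum(strength * value for strength, value in modalities.values())

# Hypothetical snapshot of a creature's Umwelt at one moment:
snapshot = {
    "vision": (0.8, -1.0),  # looming shape: strong signal, aversive value
    "smell":  (0.3, +0.5),  # food trace: weak signal, appetitive value
    "touch":  (0.1,  0.0),  # neutral contact
}

valuation = integrate(snapshot)
print(valuation)  # negative overall: the looming shape dominates
```

Note that nothing in the sketch cares whether the integration is implemented by neurons or by something else; that neutrality is exactly what questions (1) and (2) above are probing.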
 
Yes, as I noted when you first introduced it to us, I think it is a rich idea. And I also agree that it is indeed most likely this neurophysiological process—the dynamic integration (assimilation) of multiple modes of sensory input—from which phenomenal consciousness "emerges." (Incidentally, I would argue this is not a causal relationship in the usual sense.)

Coincidentally, the paper @Constance and I have just read offers a similar explanation of the emergence of phenomenal consciousness:

http://kognitywistyka.umcs.lublin.pl/wp-content/uploads/2014/04/Zlatev2009-CS.pdf

"Still, a basic form of “evaluation” can be performed even without any consciousness (i.e. phenomenal experience), on the basis of a biological value system, as in Damasio’s (2000) somatic marker theory, where “emotions” are defined as bodily states outside of awareness, and not as experiential ones. Some simple forms of learning may be performed on this basis, by connecting “features” of a disjoint Umwelt with such bodily states.

However, “binding” the Umwelt into coherent multimodal wholes – objects, scenes and situations – would be functional for behavioural flexibility, learning, anticipation and “problem-solving”, which are all necessary for navigating in a complex environment (Edelman 1992). The hypothesis that I am proposing is that consciousness emerged as a biological adaptation in creatures in need of a “common currency” for multimodal perception, action and evaluation, so that attentional resources can be allocated flexibly, and evaluation can be performed efficiently via feelings, e.g. for the purpose of anticipating the results of actions. Thus, (part of) the Umwelt becomes transformed into a Lebenswelt, perceived as separate from the acting and feeling subject. This on its side would lead to the pre-reflective self-consciousness of the minimal self, and to what Husserl called “the correlational structure of intentionality”: intentional objects are perceived as external to the self, but are simultaneously categorized and “felt” on the basis of internal phenomenal value systems. This, I believe, is what is meant by concepts such as “core consciousness” (Damasio 2000) and “primary consciousness” (Edelman 1992).

The advantage of such a view is that consciousness is clearly functional (cf. Donald 2001), and ceases to be a mystery for evolutionary theory. As mentioned in Section 3.2, there are even empirically grounded proposals for the neural bases of this adaptation: the widely distributed and interconnected thalamocortical system present in mammals, but much less developed in e.g. reptiles. As is generally the case in evolution, consciousness even in its “core” and “primary” forms hardly emerged wholesale. But since even a minimal form of what was sketched above would have been adaptive for animals inhabiting a complex Umwelt, it would have been selected for in our distant ancestors, and its evolution into “higher” forms, e.g. with more flexible control of attention, more diverse feelings etc., under way. "

The integration (assimilation) of multiple modes of sensory signals (input) into a unified medium (common currency) is a prime candidate for phenomenal experience.

Pharoah you ask: "How do neural networks assimilate all of these [sensory] inputs?"

As you note, this is a problem neuroscientists are certainly working to answer. But in the meantime, we can ask: If phenomenal consciousness does indeed emerge as an ontologically new phenomenon as a result of the process of sensory integration, how do we characterize this emergence?

(1) The emergence of p-consciousness is ultimately the result of neurological processes, or,

(2) The emergence of p-consciousness is ultimately the result of signal integration processes?
Yes... I liked the Zlatev I read. He is on the right track for sure. This will happen: People will come round to HCT.

SIP is not a great term, btw. But to answer the question: both are necessary, and they evolve in sophistication in tandem.
 
Pharoah you ask: "How do neural networks assimilate all of these [sensory] inputs?"

As you note, this is a problem neuroscientists are certainly working to answer. But in the meantime, we can ask: If phenomenal consciousness does indeed emerge as an ontologically new phenomenon as a result of the process of sensory integration, how do we characterize this emergence?


(1) The emergence of p-consciousness is ultimately the result of neurological processes, or,

(2) The emergence of p-consciousness is ultimately the result of signal integration processes?


I think Zlatev's paper shows that those two preliminary 'characterizations' of how consciousness emerges are both too presuppositional and too reductive, and it seems that that is what Pharoah means by his question:

"How do neural networks assimilate all of these [sensory] inputs?"

How, indeed, is the question, and 'assimilation' seems to me to be the correct word to use to recognize that consciousness takes root in and develops through the direct sensory experiences of evolving species of life. As Zlatev's reference to Husserl's recognition of "prereflective consciousness" indicates, subjective perspectives are incipient in life long before neural nets are developed, and they affectively motivate neuronal networking rather than being wholly explicable as effects of it.
 
@Soupie, coming back to your statement following Pharoah's question:

Pharoah you ask: "How do neural networks assimilate all of these [sensory] inputs?" As you note, this is a problem neuroscientists are certainly working to answer. But in the meantime, we can ask: If phenomenal consciousness does indeed emerge as an ontologically new phenomenon as a result of the process of sensory integration, how do we characterize this emergence?

It seems to me that we have to understand it ["this emergence"] before we can "characterize" the emergence of consciousness in a 'model'. And that we have a much better chance of understanding consciousness if we begin by recognizing that, in fact, we don't understand it yet and therefore require truly interdisciplinary approaches to the phenomenon of consciousness before we can construct an empirically justifiable 'model' of how it arises. Not only neuroscientists but also biologists, ethologists, and philosophers of mind are "working to answer" the question of what consciousness is. The clear implication of your phrase "an ontologically new phenomenon" is that our ontological concepts themselves are in question when we address phenomena we cannot account for. In other words, our 'models' themselves are in question if they proceed on the basis of presuppositional thinking. We need to approach the phenomenon of consciousness from outside the presuppositions of models such as the physicalist/objectivist paradigm still assumed to be the foundation of 'truth' in the hard sciences and in most of current neuroscience.
 
... One potential that I have been investigating on this front is whether the "what it is like" aspect of consciousness is a phase state of matter, not unlike gas, liquid, solid, and plasma. Of course it would be unlike these states of matter in the important sense that those states are objective and consciousness is subjective.
In my own reflections on it, the physics part is more analogous to a magnetic field ( virtual photons - an emergent property ) rather than a "state" of matter, and the "what it's like" part is a property of such fields.
 