Consciousness and the Paranormal — Part 6

From the wiki entry on Naive realism:

Naïve realism is known as direct as against indirect or representative realism when its arguments are developed to counter the latter position, also known as epistemological dualism;[2] that our conscious experience is not of the real world but of an internal representation of the world.
Ugh. Sigh. What a horrible misrepresentation of the representationalist position. And the mysterious "observer" slips in there again!

According to a correct description of the representationalist approach, conscious experience is not of an internal representation of the world, rather, conscious experience is an internal representation of the world.*

I wonder if it's a coincidence, @Constance, that I have noted you making this same assertion, which I think is a misunderstanding, and then find this same exact (mistaken) assertion in an entry on Naive realism.

*My own view is better described as Intentionalism; I don't believe our conscious perceptions equal exact, virtual replicas of reality. Rather, I think our conscious perceptions are subjective, shaped over the eons by evolutionary processes.
 
Based on what we know about reality, the brain, and conscious perception, I don't see how naive/direct realism can be the case.

Naïve realism - Wikipedia, the free encyclopedia

Interesting article. Some stuff makes sense. I guess it depends on how far one wants to take it. If you look at the five beliefs, they all make sense up to number five, where it states:

"By means of our senses, we perceive the world directly, and pretty much as it is. In the main, our claims to have knowledge of it are justified."

The point of contention there hinges on the word "directly". I suspect we would both agree that perceptual experience, although as direct as is humanly possible, is still an interpretation created by our brain-body system, and therefore the world cannot logically be considered to be "directly perceived". That being said, I would still argue that firsthand experience is among the best evidence for determining whether claims to knowledge about the world are justified. Sure there are blind spots and misperceptions and all the rest, but generally speaking, the stimulus response tends to provide fairly reliable information, and most decisions based on that information tend to work out well. If it didn't work that way, I doubt we would have survived as a species.
 
...*My own view is better described as Intentionalism; I don't believe our conscious perceptions equal exact, virtual replicas of reality. Rather, I think our conscious perceptions are subjective, shaped over the eons by evolutionary processes.
Looking up the word "Intentionalism", it appears to be synonymous with "Representationalism". I can see why you'd go there. It makes a lot of sense, and the arguments against it aren't really all that coherent, because they make assumptions that may not be applicable. It seems that you and I and Chalmers tend to gravitate toward a representational perspective.
 
Soupie wrote:

"From the wiki entry on Naive realism:

Naïve realism is known as direct as against indirect or representative realism when its arguments are developed to counter the latter position, also known as epistemological dualism,[2] that our conscious experience is not of the real world but of an internal representation of the world.

Ugh. Sigh. What a horrible misrepresentation of the representationalist position. And the mysterious "observer" slips in there again!

According to a correct description of the representationalist approach, conscious experience is not of an internal representation of the world, rather, conscious experience is an internal representation of the world."



Take it easy, Soupie. There are many philosophers, cognitive neuroscientists, and even information theorists who would not agree with either position as stated in that wiki sentence. You really do need to inform yourself about the scope and variations among the ideas and positions of the various brain, mind, and consciousness investigators involved in Consciousness Studies.

I wonder if it's a coincidence, @Constance, that I have noted you making this same assertion, which I think is a misunderstanding, and then find this same exact (mistaken) assertion in an entry on Naive realism.

What assertion do you think I’ve made, Soupie? I don't think you've ever understood what I've expressed here since to do so you would have had to read the papers I've linked for two years now.

My own view is better described as Intentionalism; I don't believe our conscious perceptions equal exact, virtual replicas of reality.

I don’t remember anyone in this thread ever claiming that conscious (or unconscious) perceptions “equal exact, virtual replicas of reality.” I do know that I have never made such a bizarre claim.


Defining yourself as ‘an Intentionalist’ is not really informative. SEP does not even have an article devoted to ‘Intentionalism’, but the SEP article at the link below, entitled “Perceptual Experience and Perceptual Justification,” does use the term in one section:

Perceptual Experience and Perceptual Justification (Stanford Encyclopedia of Philosophy)


You might also want to read a paper entitled "Intentionalism Defended" by Alex Byrne, which clarifies some distinctions in ‘intentionalism’ that might help you place yourself in current discourse.

http://web.mit.edu/abyrne/www/Intentionalism.pdf


Thompson and Zahavi clarify the meanings of 'intentionality' in Husserl and also in Merleau-Ponty in the third section of the paper linked next, which I've linked here in the thread twice in the past. You asked me above for a paper 'defending naive realism', and this Thompson-Zahavi paper is certainly not a defense of so-called 'naive realism'. {I wonder if there is a philosopher in the world today who would write a paper defending what has been termed 'naïve realism', which is a hostile misrepresentation of 'direct realism'.} The paper linked below will, however, provide you, I hope, with enough insight into phenomenology to understand that 'direct realism' is also not an accurate, thus appropriate, descriptor of phenomenology.

http://cfs.ku.dk/staff/zahavi-publications/phenomenology-thompson-zahavi.pdf

This paper might also provide you with a clearer understanding of neurophenomenology than you seem to have gathered from your partial reading of Thompson's Mind in Life: Biology, Phenomenology, and the Sciences of Mind.
 
It happens that the Internet Encyclopedia of Philosophy also has an article that would be useful as a survey of the various ideas concerning perception expressed in the disciplines involved in Consciousness Studies. The title is "Perception of Objects," and it problematizes the term 'intentionalist'. Reading it you might discover that you are actually an 'Adverbialist' or perhaps a 'Disjunctivist'.

Perception, Objects of | Internet Encyclopedia of Philosophy
 
@Soupie, I've extracted these paragraphs from the linked Thompson-Zahavi paper because they are so clarifying for people coming to phenomenology for the first time and should be grasped before continuing to the rest of the paper.

". . . The purpose of the phenomenological reduction, therefore, contrary to many misunderstandings, is neither to exclude the world from consideration nor to commit one to some form of methodological solipsism. Rather, its purpose is to enable one to explore and describe the spatiotemporal world as it is given. For Husserl, the phenomenological reduction is meant as a way of maintaining this radical difference between philosophical reflection on phenomenality and other modes of thought. Henceforth, we are no longer to consider the worldly object naïvely; rather, we are to focus on it precisely as a correlate of experience. If we restrict ourselves to that which shows itself (whether in straightforward perception or a scientific experiment), and if we focus specifically on that which tends to be ignored in daily life (because it is so familiar), namely, on phenomenal manifestation as such, the sheer appearances of things, then we cannot avoid being led back (re-ducere) to subjectivity. Insofar as we are confronted with the appearance of an object, that is, with an object as presented, perceived, judged, or evaluated, we are led back to the intentional structures to which these modes of appearance are correlated. We are led to the intentional acts of presentation, perception, judgement, and evaluation, and thereby to the subject (or subjects), in relation to whom the object as appearing must necessarily be understood. Through the phenomenological attitude we thus become aware of the givenness of the object. Yet the aim is not simply to focus on the object exactly as it is given, but also on the subjective side of consciousness. We thereby become aware of our subjective accomplishments, specifically the kinds of intentionality that must be in play in order for anything to appear as it does. When we investigate appearing objects in this way, we also disclose ourselves as ‘datives of manifestation’ (Sokolowski, 2000), as those to whom objects appear.

One can discern a certain ambivalence in the phenomenological tradition regarding the theoretical and practical or existential dimensions of the epoché. On the one hand, Husserl’s great concern was to establish phenomenology as a new philosophical foundation for science, and so the epoché in his hands served largely as a critical tool of theoretical reason. [4] On the other hand, because Husserl’s theoretical project was based on a radical reappraisal of experience as the source of meaning and knowledge, it necessitated a constant return to the patient, analytic description of lived experience through the phenomenological reduction. This impulse generated a huge corpus of careful phenomenological analyses of numerous different dimensions and aspects of human experience—the perceptual experience of space (Husserl, 1997), kinesthesis and the experience of one’s own body (Husserl, 1989, 1997), time-consciousness (Husserl, 1991), affect (Husserl, 2001), judgement (Husserl, 1975), imagination and memory (Husserl, 2006), and intersubjectivity (Husserl, 1973), to name just a few. Nevertheless, the epoché as a practical procedure—as a situated practice carried out in the first-person by the phenomenologist—has remained strangely neglected in the phenomenological literature, even by so-called existential phenomenologists such as Heidegger and Merleau-Ponty, who took up and then recast in their own ways the method of the phenomenological reduction (see Heidegger, 1982, pp. 19-23; Merleau-Ponty, 1962, pp. xi-xiv). For this reason, one new current in phenomenology aims to develop more explicitly the pragmatics of the epoché as a ‘first-person method’ for investigating consciousness (Depraz, 1999; Depraz, Varela & Vermersch, 2003; Varela & Shear 1999). 
This pragmatic approach has also compared the epoché to first-person methods in other domains, such as contemplative practice (Depraz, Varela & Vermersch, 2003), and explored the relevance of first-person methods for producing more refined first-person reports in experimental psychology and cognitive neuroscience (Varela, 1996; Lutz & Thompson, 2003). This latter endeavour is central to the research programme known as ‘neurophenomenology’, introduced by Francisco Varela (1996, 1999) and developed by other researchers (Lloyd, 2002, 2003; Lutz & Thompson, 2003; Rainville, 2005; Thompson, 2007; Thompson, Lutz, and Cosmelli, 2005; see also Cosmelli, Lachaux, & Thompson, this volume; and Lutz, Dunne & Davidson, this volume).

3. Intentionality

Implicit in the foregoing treatment of phenomenological method is the phenomenological concept of intentionality.
According to Husserlian phenomenology, consciousness is intentional, in the sense that it ‘aims toward’ or ‘intends’ something beyond itself. This sense of ‘intentional’ should not be confused with the more familiar sense of having a purpose in mind when one acts, which is only one kind of intentionality in the phenomenological sense. Rather, ‘intentionality’ is a generic term for the pointing beyond-itself proper to consciousness (from the Latin intendere, which once referred to drawing a bow and aiming at a target). Phenomenologists distinguish different types of intentionality. In a narrow sense, intentionality is defined as object-directedness. In a broader sense, which covers what Husserl (2001, p. 206) and Merleau-Ponty (1962, p. xviii) called ‘operative intentionality’ (see below), intentionality is defined as openness toward otherness (or ‘alterity’). In both cases, the emphasis is on denying that consciousness is self-enclosed. Object-directedness characterizes almost all of our experiences, in the sense that in having them we are exactly conscious of something. We do not merely love, fear, see, or judge; we love, fear, see, or judge something. Regardless of whether we consider a perception, a thought, a judgement, a fantasy, a doubt, an expectation, a recollection, and so on, these diverse forms of consciousness are all characterized by the intending of an object. In other words, they cannot be analyzed properly without a look at their objective correlates, that is, the perceived, the doubted, the expected, and so forth. The converse is also true: The intentional object cannot be analyzed properly without a look at its subjective correlate, the intentional act. Neither the intentional object nor the mental act that intends it can be understood apart from the other. . . . ."


[4 This sense of the epoché is well put by the noted North American and Indian phenomenologist J. N. Mohanty (1989, pp. 12-13): “I need not emphasize how relevant and, in fact, necessary is the method of phenomenological epoche for the very possibility of genuine description in philosophy. It was Husserl’s genius that he both revitalized the descriptive method for philosophy and brought to the forefront the method of epoche, without which one cannot really get down to the job. The preconceptions have to be placed within brackets, beliefs suspended, before philosophy can begin to confront phenomena as phenomena. This again is not an instantaneous act of suspending belief in the world or of directing one’s glance towards the phenomena as phenomena, but involves a strenuous effort at recognizing preconceptions as preconceptions, at unraveling sedimented interpretations, at getting at presuppositions which may pretend to be self-evident truths, and through such processes aiming asymptotically at the prereflective experience.”]
 
Another paper by Hut and a coauthor concerning consciousness and the hard problem (unfortunately not available online):

Turning the "hard problem" upside-down and sideways

Piet Hut & Roger N. Shepard
Journal of Consciousness Studies 3 (4):313-29 (1996)

Abstract

"Instead of speaking of conscious experience as arising in a brain, we prefer to speak of a brain as arising in conscious experience. From an epistemological standpoint, starting from direct experiences strikes us as more justified. As a first option, we reconsider the ‘hard problem’ of the relation between conscious experience and the physical world by thus turning that problem upside down. We also consider a second option: turning the hard problem sideways. Rather than starting with the third-person approach used in physics, or the first-person approach of starting with individual conscious experience, we consider starting from an I-and-you basis, centered around the second-person. Finally, we present a candidate for what could be considered to underlie conscious experience: ‘sense’. We consider this to be a shot in the dark, but at least a shot in the right direction: somewhere between upside down and sideways. Our notion of sense can be seen as an alternative to panpsychism. To give an analogy, using the notions of space and time is more convenient than trying to analyse the phenomenon of motion in terms of a space-based ‘pandynamism’. Similarly, when approaching the phenomenon of consciousness, we prefer the triad of space, time and sense, over a spacetime-based form of panpsychism."

Click Piet Hut's name above to access his other papers linked at philpapers.org.
 
Large Red Man Reading

There were ghosts that returned to earth to hear his phrases,
As he sat there reading, aloud, the great blue tabulae.
They were those from the wilderness of stars that had expected more.

There were those that returned to hear him read from the poem of life,
Of the pans above the stove, the pots on the table, the tulips among them.
They were those that would have wept to step barefoot into reality,

That would have wept and been happy, have shivered in the frost
And cried out to feel it again, have run fingers over leaves
And against the most coiled thorn, have seized on what was ugly

And laughed, as he sat there reading, from out of the purple tabulae,
The outlines of being and its expressings, the syllables of its law:
Poesis, poesis, the literal characters, the vatic lines,

Which in those ears and in those thin, those spended hearts,
Took on color, took on shape and the size of things as they are
And spoke the feeling for them, which was what they had lacked.

~~ Wallace Stevens
 
ps, given the Stevens poem I copied above (and indeed his entire oeuvre), his response to that question would be that, whatever its origin, what consciousness does is engage us bodily, emotionally, and mentally in our temporal existence in the actual world we live in. The role of consciousness in being, then, is to incorporate both the objective and subjective perspectives that we are able to take on the being of what-is, which includes our own being. Here is one of his last poems expressing the poet's role in this situation:

The Planet on the Table

Ariel was glad he had written his poems.
They were of a remembered time
Or of something seen that he liked.

Other makings of the sun
Were waste and welter
And the ripe shrub writhed.

His self and the sun were one
And his poems, although makings of his self,
Were no less makings of the sun.

It was not important that they survive.
What mattered was that they should bear
Some lineament or character,

Some affluence, if only half-perceived,
In the poverty of their words,
Of the planet of which they were part.
 
@ufology @Constance @Pharoah

Re Searle's Chinese Room thought experiment, I've been reading a little about the so-called Symbol Grounding Problem (SGP). From Wiki:

"The symbol grounding problem is related to the problem of how words (symbols) get their meanings, and hence to the problem of what meaning itself really is ( link )."

It's an open debate whether the SGP has been theoretically solved: some think it has, others think it has not. I'm in the camp that thinks it either has been solved or can be solved in the near future.

The second question is whether solving the SGP solves the mind-body problem (MBP). The reasoning would be: if meaning is synonymous with consciousness, and we have a theory of how meaning arises in physical systems, then we have a theory of how consciousness arises in physical systems.

To illustrate: if a physical system interacts with a rose, and the system knows that the rose is a rose; if the object means rose-ness to the system, then just maybe the system is having a conscious experience of the rose. Of course, this would be conceptual consciousness, and not necessarily phenomenal consciousness. We've talked here about conceptual consciousness being grounded in phenomenal consciousness. That is, in order to have conceptual consciousness of a rose, one would first need to experience phenomenal consciousness of the rose's smell, texture, colors, etc.

I've argued that phenomenal qualities (colors, smells, sounds, etc.) are non-conceptual meanings that have arisen between organisms and environmental stimuli.

I feel that @Pharoah 's HCT offers a theory for the SGP. Pharoah's theory offers an explanation of how neurophysiological processes come to acquire meaning. The meaning is derived from the "qualitative relevancy" of the neurophysiological processes.

However, it's not clear that solving the SGP will solve the MBP. It's conceptually possible that there might be a physical system capable of giving real meaning to symbols, while not being a conscious system. (I say it's conceptually possible because we really don't know; it's possible that any system capable of creating/manipulating grounded symbols would be conscious.)

This is my beef with HCT; I think it is a model for the SGP but not necessarily a model for the MBP. And here's the main reason why: the distinction between non-conscious and conscious brain states and their related, complex behaviors. Put simply, there appear to be human brain states that utilize meaning to guide behavior but that are not associated with conscious experience. This is why I have repeatedly asked Pharoah whether HCT can explain why some brain states are correlated with conscious experience but not others.

It thus seems to me that meaning (symbol grounding) is a necessary but not sufficient condition for (human-like) consciousness. Brain states involved in symbol grounding are sometimes conscious and sometimes not conscious. Imo HCT provides a model of how/why certain neurophysiological processes have meaning (grounded symbols), but not why certain neurophysiological processes are conscious.

Here's more from the Wiki entry to clarify (or further confuse) the issue:

"No, the problem of intentionality is not the symbol grounding problem; nor is grounding symbols the solution to the problem of intentionality. The symbols inside an autonomous dynamical symbol system that is able to pass the robotic Turing test are grounded, in that, unlike in the case of an ungrounded symbol system, they do not depend on the mediation of the mind of an external interpreter to connect them to the external objects that they are interpretable (by the interpreter) as being "about"; the connection is autonomous, direct, and unmediated. But grounding is not meaning. Grounding is an input/output performance function. Grounding connects the sensory inputs from external objects to internal symbols and states occurring within an autonomous sensorimotor system, guiding the system's resulting processing and output.

Meaning, in contrast, is something mental. But to try to put a halt to the name-game of proliferating nonexplanatory synonyms for the mind/body problem without solving it (or, worse, implying that there is more than one mind/body problem), let us cite just one more thing that requires no further explication: feeling. The only thing that distinguishes an internal state that merely has grounding from one that has meaning is that it feels like something to be in the meaning state, whereas it does not feel like anything to be in the merely grounded functional state. Grounding is a functional matter; feeling is a felt matter. And that is the real source of Brentano's vexed peekaboo relation between "intentionality" and its internal "intentional object": All mental states, in addition to being the functional states of an autonomous dynamical system, are also feeling states: Feelings are not merely "functed," as all other physical states are; feelings are also felt.

Hence feeling is the real mark of the mental. But the symbol grounding problem is not the same as the mind/body problem, let alone a solution to it. The mind/body problem is actually the feeling/function problem: Symbol-grounding touches only its functional component. This does not detract from the importance of the symbol grounding problem, but just reflects that it is a keystone piece to the bigger puzzle called the mind ( link )."​
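The quoted passage's core claim, that grounding is a purely functional input/output performance connecting sensory inputs to internal symbols with no external interpreter in the loop, can be made concrete with a toy sketch. Everything below (the `GroundedLexicon` class, the nearest-prototype rule, the feature vectors) is a hypothetical illustration of the idea, not anything proposed in the literature; note that nothing in it even gestures at feeling:

```python
import math

class GroundedLexicon:
    """Toy illustration of grounding as a pure input/output function:
    raw 'sensory' vectors are mapped autonomously to internal symbols,
    with no external interpreter needed to make the connection."""

    def __init__(self):
        # prototype sensory vector associated with each internal symbol
        self.prototypes = {}

    def learn(self, symbol, sensory_vector):
        """Associate an internal symbol with a prototype sensory input."""
        self.prototypes[symbol] = sensory_vector

    def ground(self, sensory_vector):
        """Map a raw sensory input to the nearest internal symbol."""
        return min(self.prototypes,
                   key=lambda s: math.dist(self.prototypes[s], sensory_vector))

# A "rose" here is just a point in a made-up feature space
# (redness, scent intensity, thorniness).
lex = GroundedLexicon()
lex.learn("rose",  (0.9, 0.8, 0.7))
lex.learn("daisy", (0.2, 0.3, 0.0))

print(lex.ground((0.85, 0.75, 0.6)))  # -> rose
```

The system's symbols are "grounded" in the quoted passage's functional sense (input connects to symbol without mediation), which is exactly why the example supports the passage's point: a complete functional description of the mapping says nothing about whether it feels like anything to be the system.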

The author above distinguishes symbol grounding from meaning, which I find confusing. To me, a grounded symbol is a symbol that has meaning. However, I agree that a grounded symbol is not necessarily a "conscious" symbol. In my own words, I think the above argument is better stated as:

The only thing that distinguishes an internal state that is grounded from one that is conscious is that it feels like something to be in the conscious state, whereas it does not feel like anything to be in the merely grounded functional state.

Conceptually, I am tempted to question the above argument. Intuitively it makes sense that if a brain state "generates" meaning then that same brain state may be "generating" conscious experience; it seems conceptually right to me that meaning and consciousness should be synonymous.

At the same time, it seems evident that there are brain states associated with meaning (symbol grounding) that are not associated with conscious experience. Therefore, symbol grounding/meaning seems to be distinct from conscious experience.

This has at least one very interesting ramification which I've advocated here but which others, namely Pharoah, are completely against.
 
@ufology @Constance @Pharoah

Re Searle's Chinese Room thought experiment, I've been reading a little about the so-called Symbol Grounding Problem (SGP). From Wiki:

"The symbol grounding problem is related to the problem of how words (symbols) get their meanings, and hence to the problem of what meaning itself really is ( link )."

It's an open debate whether the SGP has been theoretically solved. Some think it has been solved, others think it has not been solved. I'm in the camp who thinks it either has been solved or will/can be solved in the near future.

The second question is whether solving the SGP solves the mind-body problem (MBP). In other words, if meaning is synonymous with consciousness, and we have a theory of how meaning arises in physical systems, then we have a theory of how consciousness arises in physical systems.

In other words, if a physical system interacts with a rose, and the system knows that the rose is a rose; if the object means rose-ness to the system, then just maybe the system is having a conscious experience of the rose. Of course, this would be conceptual consciousness, and not necessarily phenomenal consciousness. We've talked here about conceptual consciousness being grounded in phenomenal consciousness. That is, in order to have conceptual consciousness of a rose, one would first need to experience phenomenal consciousness of the roses' smell, texture, colors, etc.

I've argued that phenomenal qualities (colors, smells, sounds, etc.) are non-conceptual meanings that have arisen between organisms and environmental stimuli.

I feel that @Pharoah 's HCT offers a theory for the SGP. Pharoah's theory offers an explanation of how neurophysiological processes come to acquire meaning. The meaning is derived from the "qualitative relevancy" of the neurophysiological processes.

However, it's not clear that solving the SGP will solve the MBP. It's conceptually possible that there might be a physical system capable of giving real meaning to symbols, while not being a conscious system. (I say it's conceptually possible because we really don't know; it's possible that any system capable of creating/manipulating grounded symbols would be conscious.)

This is my beef with HCT; I think it is a model for the SGP but not necessarily a model for the MBP. And here's the main reason why: Non-conscious brain states versus conscious brain states and related, complex behaviors. Put simply, there appear to be human brain states that utilize meaning to guide behavior but that are not associated with conscious experience. This is why I have repeatedly asked Pharoah if HCT can explain why some brain states are correlated with conscious experience but not others.

It thus seems to me that meaning (symbol grounding) is a necessary but not sufficient condition for (human-like) consciousness. Brain states involved in symbol grounding are sometimes conscious and sometimes not conscious. Imo HCT provides a model of how/why certain neurophysiological processes have meaning (grounded symbols), but not why certain neurophysiological processes are conscious.

Here's more from the Wiki entry to clarify (or further confuse) the issue:

"No, the problem of intentionality is not the symbol grounding problem; nor is grounding symbols the solution to the problem of intentionality. The symbols inside an autonomous dynamical symbol system that is able to pass the robotic Turing test are grounded, in that, unlike in the case of an ungrounded symbol system, they do not depend on the mediation of the mind of an external interpreter to connect them to the external objects that they are interpretable (by the interpreter) as being "about"; the connection is autonomous, direct, and unmediated. But grounding is not meaning. Grounding is an input/output performance function. Grounding connects the sensory inputs from external objects to internal symbols and states occurring within an autonomous sensorimotor system, guiding the system's resulting processing and output.

Meaning, in contrast, is something mental. But to try to put a halt to the name-game of proliferating nonexplanatory synonyms for the mind/body problem without solving it (or, worse, implying that there is more than one mind/body problem), let us cite just one more thing that requires no further explication: feeling. The only thing that distinguishes an internal state that merely has grounding from one that has meaning is that it feels like something to be in the meaning state, whereas it does not feel like anything to be in the merely grounded functional state. Grounding is a functional matter; feeling is a felt matter. And that is the real source of Brentano's vexed peekaboo relation between "intentionality" and its internal "intentional object": All mental states, in addition to being the functional states of an autonomous dynamical system, are also feeling states: Feelings are not merely "functed," as all other physical states are; feelings are also felt.

Hence feeling is the real mark of the mental. But the symbol grounding problem is not the same as the mind/body problem, let alone a solution to it. The mind/body problem is actually the feeling/function problem: Symbol-grounding touches only its functional component. This does not detract from the importance of the symbol grounding problem, but just reflects that it is a keystone piece to the bigger puzzle called the mind ( link )."​

The author above confusingly distinguishes symbol grounding from meaning. I think that is confusing. To me, a grounded symbol is a symbol that has meaning. However, I agree that a grounded symbol is not necessarily a "conscious" symbol. In my own words, I think the above argument is better stated as:

The only thing that distinguishes an internal state that is grounded from one that is conscious is that it feels like something to be in the conscious state, whereas it does not feel like anything to be in the merely grounded functional state.

Conceptually, I am tempted to question the above argument. Intuitively it makes sense that if a brain state "generates" meaning then that same brain state may be "generating" conscious experience. That just seems right to me. That is, it seems conceptually right to me that meaning and consciousness should be synonymous.

At the same time, it seems evident that there are brain states associated with meaning (symbol grounding) that are not associated with conscious experience. Therefore, symbol grounding/meaning seems to be distinct from conscious experience.

This has at least one very interesting ramification which I've advocated here but which others, namely Pharoah, are completely against.
@Soupie you say,
"This is my beef with HCT; I think it is a model for the SGP but not necessarily a model for the MBP"
I have always made it clear that HCT does not provide a solution to the MBP. It is not a model of the MBP.

When you say @Soupie, "This has at least one very interesting ramification which I've advocated here but which others, namely Pharoah, are completely against," what are you saying I am completely against? "This" refers to what?

btw you say, "if meaning is synonymous with consciousness, and we have a theory of how meaning arises in physical systems, then we have a theory of how consciousness arises in physical systems."
meaning is not synonymous with consciousness.
 
An interesting post, Soupie, and one I hope we will see discussed at some length here.

Your problem with the wiki article on 'symbol-grounding' comes up with the section on Brentano. Note that wiki also has a problem with that section, or rather one or more wiki editors do, to the extent that they raise a possible "conflict of interest" on the part of the unnamed author of that section [see the Talk page]. It seems to me that those editors' problem, and I think yours as well, lies in the continual conflict between phenomenology and computationalism concerning the differences between computation and human thinking.

The robot acquires its ‘dictionary’ of categories and symbols from its human builder.

If the robot (equipped with sensorimotor capacities like ours) were to be ‘growed up’ from an infant-like state for the 20 or so years required to develop consciousness and mind in an actual human environment, we might expect that it would experience others, other minds, and the natural and cultural environment as a growing child does, developing and discarding along the way many interpretations of the nature of reality and what goes on in the society of embodied minds in which it lives. But the robot is born as an ‘adult’ mind fed systems of symbols and means of symbol manipulation that are radically different from the way a naturally developed human develops his or her mind over time out of conscious experience and exploration of its situation in the actual world, and with opportunities for education.

Thus, ‘symbol-grounding’ in robots/AI is not comparable to symbol-grounding in humans. You seem to accept uncritically the belief embedded in computationalism and robotics that the ‘meanings’ arrived at in embodied consciousnesses over decades of ‘lived reality’ can be mimicked in Turing machines, just because symbol-grounding is assumed in robotics to amount to no more than the ‘symbols’ and ‘categories’ represented by language. As I’ve said for a long time now, the ‘linguistic turn’ in philosophy has in itself confused the issue of how meaning is generated in the activities of consciousness and mind in coping with and exploring the world in which embodied consciousnesses exist. Most languages are living systems, open-ended and evolved over time, carrying along sedimented meanings from the human cultural past but not closed to recognizing new situations, new categories, new meanings based in what we think and do in the ever opening present in which each of us lives our temporal existence.
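The contrast drawn above, between a hand-fed 'dictionary' of symbols and genuine sensorimotor grounding, can be put in toy form. The sketch below is purely illustrative (every name in it is hypothetical, not any actual robotics API): an 'ungrounded' symbol is defined only by other symbols, while a 'grounded' one is tied to a stand-in sensor routine that picks out its referent from raw input.

```python
# Ungrounded: "zebra" is defined purely in terms of other symbols,
# which are themselves defined in terms of still more symbols.
symbolic_dictionary = {
    "zebra": "horse with stripes",
    "horse": "large four-legged animal",
    "stripes": "alternating bands of color",
}

# "Grounded": the symbol is connected to a (stand-in) sensorimotor test
# applied to raw input, rather than to more words.
def looks_striped(pixels):
    """Stand-in sensor routine: detects alternating light/dark bands."""
    changes = sum(1 for a, b in zip(pixels, pixels[1:]) if a != b)
    return changes > len(pixels) // 2

grounded_symbols = {"stripes": looks_striped}

raw_input_pixels = [0, 1, 0, 1, 0, 1, 0, 1]
print(grounded_symbols["stripes"](raw_input_pixels))  # True
```

Note that even the 'grounded' case here is grounding only in the functional sense: nothing in the sketch feels like anything, which is exactly the gap the thread is discussing.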

I hope the difference I’m talking about is clear. If not, let me know. Even the editor who wrote the opening paragraph of that wiki entry sees the problem, expressed in the last two sentences of that paragraph:

“The symbols in an autonomous hybrid symbolic+sensorimotor system—a Turing-scale robot consisting of both a symbol system and a sensorimotor system that reliably connects its internal symbols to the external objects they refer to, so it can interact with them Turing-indistinguishably from the way a person does—would be grounded. But whether its symbols would have meaning rather than just grounding is something that even the robotic Turing test—hence cognitive science itself—cannot determine, or explain.”

Sure, we can think of the signs and symbols fed into a computer as ‘grounded’ in the dictionary-like system of ‘meaning indicators’ it is schooled in, but that's a philosophical misuse of the term 'grounding' when applied to human consciousness and mind. A human language system does not constitute ‘meanings’ as permanent – rather it records them from the changing usages and structures of language as it is used and transformed in temporal embodied existence in the changing world that words refer to.

Words and concepts drop away from living languages when human thought has moved beyond the no-longer-useful categories and definitions of the past, which are still sedimented in the language. Words and concepts become ‘archaic’, yet their archaic definitions are retained in scholarly dictionaries for their historical significance and because some of them are still used by contemporary thinkers. Heidegger and other philosophers still quote and use [in the original Greek] words, phrases, and statements made by the pre-Socratic philosophers because some of the insights of those philosophers remain relevant in philosophy today.

In short, we are not done with the signs and symbols of the earliest philosophers (Western as well as Eastern), though the modern languages we use have changed significantly under the pressure of technological developments influencing the way we live and think. Meaning developed by humans across numerous cultures and times into our own time is a panorama of what our species has thus far been able to think and understand, and we are far from the end of thinking about what we are and what the nature of the world is.

The section ‘Words and Meanings’ is not bad, but its last paragraph concerning the applicability of Peirce’s semiotic theory to computationalism opens a question that requires philosophical analyses that are not provided. Instead we get this:

“So if we take a word's meaning to be the means of picking out its referent, then meanings are in our brains. That is meaning in the narrow sense. If we use "meaning" in a wider sense, then we may want to say that meanings include both the referents themselves and the means of picking them out. So if a word (say, "Tony-Blair") is located inside an entity (e.g., oneself) that can use the word and pick out its referent, then the word's wide meaning consists of both the means that that entity uses to pick out its referent, and the referent itself: a wide causal nexus between (1) a head, (2) a word inside it, (3) an object outside it, and (4) whatever "processing" is required in order to successfully connect the inner word to the outer object.

But what if the "entity" in which a word is located is not a head but a piece of paper (or a computer screen)? What is its meaning then? Surely all the (referring) words on this screen, for example, have meanings, just as they have referents.

In 19th century, the semiotician Charles Sanders Peirce suggested what some think is a similar model: according to his triadic sign model, meaning requires (1) an interpreter, (2) a sign or representamen, (3) an object, and is (4) the virtual product of an endless regress and progress called Semiosis.[2] Some have interpreted Peirce as addressing the problem of grounding, feelings, and intentionality for the understanding of semiotic processes.[3] In recent years, Peirce's theory of signs is rediscovered by an increasing number of artificial intelligence researchers in the context of symbol grounding problem.”

AI researchers may have “rediscovered” Peirce’s theory of signs {actually, most philosophers who deal with language and semiotics continue to study Peirce}, but the question is whether computationalists will be able to understand it. If they don’t take the trouble to study Peirce's semiotic theory they’re most likely to reduce what he wrote beyond recognition. Of course, others will correct that error.
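Peirce's triadic model as quoted above (interpreter, sign/representamen, object, plus the open-ended process of semiosis) can at least be sketched as a minimal data structure. This is a toy illustration under my own naming, not a serious formalization of Peirce, and the endless regress of interpretants is artificially truncated to three steps:

```python
from dataclasses import dataclass

@dataclass
class Sign:
    representamen: str  # the sign vehicle, e.g. a word
    obj: str            # the object the sign stands for

class Interpreter:
    def interpret(self, sign: Sign) -> Sign:
        # Each interpretation yields an "interpretant" -- itself a new sign
        # of the same object, which can be interpreted in turn (semiosis).
        return Sign(representamen=f"thought-of({sign.representamen})", obj=sign.obj)

word = Sign(representamen="Tony-Blair", obj="the person Tony Blair")
interpreter = Interpreter()

# Semiosis as a (here, artificially truncated) regress of interpretants:
s = word
for _ in range(3):
    s = interpreter.interpret(s)
print(s.representamen)  # thought-of(thought-of(thought-of(Tony-Blair)))
```

The point the sketch makes visible is that nothing in the chain ever terminates in meaning as felt; each interpretant is just another functional state, which is the objection raised throughout this thread.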

Immediately after this gesture toward a possible use of Peirce in robotics, the wiki article proceeds to this section:

“Consciousness[edit]
Here is where the problem of consciousness rears its head.[5] For there would be no connection at all between scratches on paper and any intended referents if there were no minds mediating those intentions, via their own internal means of picking out those intended referents.”

That first sentence is amusing when you read the attached note [5]:

“5. Cf. antipsychologism, psychologism, mentalism, intuitionism, constructivism, anti-realism, realism”
It’s amusing because, though a really good Turing Machine might conceivably have words (signs) in its database to match the theories listed in that note, it would not (I’m betting) have the slightest idea what any of them mean. And unlike us humans interested in these subjects and theories, it probably would not ask for a year’s sabbatical to study the disciplinary histories constructing those theories in order to provide an intelligent response to a question concerning one or more of those terms (words, signs).

I want to respond to more of your post, and will, but need to take a break.
 
@Soupie you say,
"This is my beef with HCT; I think it is a model for the SGP but not necessarily a model for the MBP"

I have always made it clear that HCT does not provide a solution to the MBP. It is not a model of the MBP.
But you do say HCT explains consciousness? Or is a model of consciousness? HCT gives us a bridge between objectivity and subjectivity?

I think HCT is a model for symbol grounding in organisms, but is not a model for how consciousness/feeling arises from physical processes.

When you say @Soupie, "This has at least one very interesting ramification which I've advocated here but which others, namely Pharoah, are completely against," what are you saying I am completely against? "This" refers to what?
If meaning is not synonymous with consciousness, then meaning can exist in the absence of consciousness, and consciousness can exist in the absence of meaning.

What would it look like for consciousness (feeling) to exist in the absence of meaning? Why, it would look like panpsychism.

For example, it's conceivable that a system might exist that possessed intelligence (symbol grounding) but lacked consciousness (feeling).

Likewise, it's conceivable that a system might exist that was conscious (sentient), but lacked grounded symbols (intelligence).

btw you say, "if meaning is synonymous with consciousness, and we have a theory of how meaning arises in physical systems, then we have a theory of how consciousness arises in physical systems."

meaning is not synonymous with consciousness.
 
In 19th century, the semiotician Charles Sanders Peirce suggested what some think is a similar model: according to his triadic sign model, meaning requires (1) an interpreter, (2) a sign or representamen, (3) an object, and is (4) the virtual product of an endless regress and progress called Semiosis.[2] Some have interpreted Peirce as addressing the problem of grounding, feelings, and intentionality for the understanding of semiotic processes.[3] In recent years, Peirce's theory of signs is rediscovered by an increasing number of artificial intelligence researchers in the context of symbol grounding problem.”
http://kognitywistyka.umcs.lublin.pl/wp-content/uploads/2014/04/Zlatev2009-CS.pdf

"This article outlines a general theory of meaning, The Semiotic Hierarchy, which distinguishes between four major levels in the organization of meaning: life, consciousness, sign function and language, where each of these, in this order, both rests on the previous level, and makes possible the at- tainment of the next. This is shown to be one possible instantiation of the Cognitive Semiotics program, with influences from phenomenology, Popper’s tripartite ontology, semiotics, linguistics, enactive cognitive sci- ence and evolutionary biology. Key concepts such as “language” and “sign” are defined, as well as the four levels of The Semiotic Hierarchy, on the basis of the type of (a) subject, (b) value-system and (c) world in which the subject is embedded. Finally, it is suggested how the levels can be united in an evolutionary framework, assuming a strong form of emergence giving rise to “ontologically” new properties: consciousness, signs and languages, on the basis of a semiotic, though not standardly biosemiotic, understanding of life."
 
In 19th century, the semiotician Charles Sanders Peirce suggested what some think is a similar model: according to his triadic sign model, meaning requires (1) an interpreter, (2) a sign or representamen, (3) an object, and is (4) the virtual product of an endless regress and progress called Semiosis.[2]
What if the object being represented and interpreted by the interpreter is the interpreter itself (à la my current avatar)?
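That self-referential case can be made concrete in a toy way (hypothetical names, illustration only): the same triadic relation holds, but the object slot is filled by the interpreting system itself.

```python
class Interpreter:
    """Toy interpreter in a Peirce-style triad: it relates a sign to an object."""
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return f"<{self.name} itself>"

    def interpret(self, sign_label, obj):
        # The triad: interpreter (self), sign (sign_label), object (obj).
        return f"{self.name} reads '{sign_label}' as standing for {obj}"

i = Interpreter("the interpreter")

# Ordinary case: the object is external to the interpreter.
print(i.interpret("tree", "an external tree"))

# Self-referential case: the object of the sign is the interpreter,
# so semiosis loops back onto the system doing the interpreting.
print(i.interpret("self", i))  # the interpreter reads 'self' as standing for <the interpreter itself>
```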
 
What if the object being represented and interpreted by the interpreter is the interpreter itself (à la my current avatar)?
And so we go 'round and 'round and 'round. This thread is "Consciousness and the Paranormal". Any thoughts on how the paranormal fits in here anywhere?
 
But you do say HCT explains consciousness? Or is a model of consciousness? HCT gives us a bridge between objectivity and subjectivity?

I think HCT is a model for symbol grounding in organisms, but is not a model for how consciousness/feeling arises from physical processes.


If meaning is not synonymous with consciousness, then meaning can exist in the absence of consciousness, and consciousness can exist in the absence of meaning.

What would it look like for consciousness (feeling) to exist in the absence of meaning? Why, it would look like panpsychism.

For example, it's conceivable that a system might exist that possessed intelligence (symbol grounding) but lacked consciousness (feeling).

Likewise, it's conceivable that a system might exist that was conscious (sentient), but lacked grounded symbols (intelligence).
@Soupie.
1. Precisely: an explanation of consciousness is not an explanation of the MBP. I would even go so far as to say that the conflation of the two problems is one of the greatest obstacles to understanding consciousness in philosophy of mind.
2. You say, "I think HCT is a model for symbol grounding in organisms, but is not a model for how consciousness/feeling arises from physical processes."
What is the meaning of the slash ("/")? Does it mean "and" or "or"? We already have consciousness/mind from you (maybe). HCT does not say how physical processes actually create phenomenal consciousness, insofar as it does not give a neurological process explanation. But it does explain phenomenal consciousness... the qualitative, subjective nature of consciousness and why it emerges. You say it does not; I say it does. I might not have explained this part of the theory clearly, or you might not have understood it.
3. Meaning exists in the absence of consciousness, but consciousness does not exist in the absence of meaning.
4. The validity of your next statements depends on how you define "intelligence". I'm not so keen on the term "conceivable" either; e.g., all the conceivability arguments are flawed thought experiments.

@ufology
To answer your question: No. Perhaps an explanation of the phenomenon of consciousness that is not an answer to the MBP makes room for the paranormal...
Have you?
 