Consciousness and the Paranormal — Part 11

:eek:

I still don't understand why some have such an issue with "representation" as it now seems to be regarded in a very loose sense. In any case, the following popped into my stream a day or so ago:

Ecological Representations

"Abstract

Cognitive science has three main motivations for claiming that cognition requires representation; the need for intentional access to the world, poverty of perceptual access to that world, and the need to support 'higher-order' cognition. In addition to these motivations, all representational systems must address two major problems: symbol grounding and system-detectable error. Here we argue that James J Gibson's ecological information fits the basic definition of a representation, solves both problems and immediately addresses the first two motivations. We then develop an argument (begun in Golonka, 2015) that informational representations and the resulting neural representations can also support 'higher-order' cognition and provides an ecological framework for understanding neural activity. Ecological psychology can be a complete theory of cognition, and the key is the way that information represents the world."

It seems pretty clear to me that—while we don't know exactly how—computation*, cognition, perception, information, representation/intention, and subjective experience have an intimate, if not direct, relationship.

*note that I'm not talking about programs and algorithms

Also note that I leave (phenomenal) consciousness out because it's a different beast.

without looking (Googling):

"*note that I'm not talking about programs and algorithms"

talk about computation without programs and algorithms ...

...bonus ... talk about a program without talking about algorithms ...

 
"The successful reduction of the phenomenal character of a state to its intentional content would provide a solution to the hard problem of consciousness once a physicalist account of intentionality is worked out."

"It's simple," said the mouse. "To know where the cat is...all we gotta do is put bells around its neck!"

 
"Alas", said the mouse, "the whole world is growing smaller every day. At the beginning it was so big that I was afraid, I kept running and running, and I was glad when I saw walls far away to the right and left, but these long walls have narrowed so quickly that I am in the last chamber already, and there in the corner stands the trap that I am running into."

"You only need to change your direction," said the cat, and ate it up.
 

Notes from Two Scientific Psychologists: The only non-representational cognitive psychologist in the village

From 2010:

There are a number of unresolved issues with this representational stance: First, there is no theory of what representations actually are or of what information they contain. Second, many cognitive phenomena seem to defy a computational explanation. For instance, attempts to use a computational framework to model cognitive behaviours have often failed to produce anything as flexible or interesting as what we humans get up to. Third, alternative stances (e.g., that there are no discrete representations or that they are not processed algorithmically) have not been thoroughly explored. Cognitive psychologists usually take representations for granted; their existence is assumed, rarely defined or tested. This just isn’t good science. I’m just raising these points here; in future posts I’ll lay out the evidence.
 
"The radical implications of taking embodiment even a little bit seriously

What I think the real hypothesis of embodied cognition is that the type of perception and action systems we have radically alters what 'cognition' needs to look like. My favourite current example is the outfielder problem (how a baseball outfielder is able to catch a fly ball; McBeath et al, 1995), and the contrast between the straight-forward cognitive, predictive strategy versus the perception-based prospective control solutions. The contents of cognition for the predictive strategy includes a) an estimate of the initial conditions of the ball's flight, derived from perception, b) an internal model of projectile motion which can take those initial conditions as input and c) a mapping from that model to the motor control system to allow you to move in the right direction.

The embodied solutions, for which there is clear evidence, utterly change this content. You no longer require an internal model of projectile motion; you simply need the ability to visually perceive the motion of the ball and the experience to move so as to produce a particular pattern to the optical motion of the ball. So it would make no sense to go looking for effects of the body on your internal model of projectile motion, because, when you take embodiment seriously, you cease to think there will be one."
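Wilson's contrast is easy to make concrete in code. Below is a toy one-dimensional version of the outfielder problem — the numbers, function names, and the simplified constant-optical-rate controller are my own illustration, not McBeath et al.'s actual model. The predictive fielder consults an internal projectile model once; the prospective fielder never models the flight at all, it just watches one optical variable and moves to keep its rate of change constant, in the spirit of Optical Acceleration Cancellation.

```python
G, DT = 9.81, 0.01  # gravity (m/s^2) and simulation timestep (s)

def fly_ball(v0x=15.0, v0y=20.0):
    """Yield (x, y) of a projectile, Euler-integrated until it lands."""
    x, y, vy = 0.0, 0.0, v0y
    while True:
        x += v0x * DT
        vy -= G * DT
        y += vy * DT
        if y <= 0:
            return
        yield x, y

def predictive_fielder(start_x=80.0, speed=8.0, v0x=15.0, v0y=20.0):
    """'Cognitive' strategy: an internal model of projectile motion turns
    the estimated initial conditions into a predicted landing point, then
    the fielder just runs toward that point."""
    landing = v0x * (2 * v0y / G)          # the internal model's output
    fx = start_x
    for _ in fly_ball(v0x, v0y):
        fx += speed * DT * (1 if landing > fx else -1)
    return abs(fx - landing)               # distance from the catch point

def prospective_fielder(start_x=80.0, speed=8.0, v0x=15.0, v0y=20.0):
    """Prospective control: no model of the ball's flight. Watch a single
    optical variable, tan(elevation angle of the ball), and move so that
    its rate of change stays constant: if the image speeds up the ball
    will carry over you, so back up; if it slows down, move in."""
    fx = start_x
    prev_tan = r0 = None
    for bx, by in fly_ball(v0x, v0y):
        gap = fx - bx
        if gap < 0.5:                      # ball is arriving; stop steering
            break
        tan_theta = by / gap               # the optical variable
        if prev_tan is not None:
            rate = (tan_theta - prev_tan) / DT
            if r0 is None:
                r0 = rate                  # lock in the initial optical rate
            else:
                fx += speed * DT * (1 if rate > r0 else -1)
        prev_tan = tan_theta
    landing = v0x * (2 * v0y / G)          # used only to score the run
    return abs(fx - landing)
```

Both fielders end up at the catch point; the point is what each needs "in the head" to get there — a model plus estimated initial conditions for the first, a single perceivable quantity and legs for the second.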
 
Apparently I am conflating information processing with computation. Computation seems to have a much narrower meaning. Duly noted.

Computation vs. Information Processing

Comments are interesting too.

Re representation and embodiment

It has always interested me, the difference between embodied knowledge and action, and embodiment and perception.

I can see how the body can act sans representation, but I can't see how a body can phenomenally perceive without some form of representation/intention.
 

I don't know much about this...Really. There are parallels with developments in robotics...

The bit on the blog about the governor on steam engines and this example are very interesting:

Notes from Two Scientific Psychologists: "Smart" perceptual mechanisms
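The steam-governor point (pressed at length in van Gelder's "What might cognition be, if not computation?") is that the device regulates engine speed through continuous coupling — no symbols read, no algorithmic steps executed. Here's a minimal sketch; the linear model and all the constants are invented for illustration, not taken from the blog or from van Gelder:

```python
# Toy Watt governor: engine speed spins the flyballs up, the flyball arm
# directly closes the throttle, and the throttle drives the engine. The
# three quantities are continuously coupled; nothing in the loop reads or
# writes a symbol.

K_ENGINE, FRICTION, K_ARM, DT = 2.0, 0.5, 3.0, 0.01  # invented constants

def run_governor(seconds=40.0):
    omega = arm = 0.0            # engine speed and arm angle, arbitrary units
    for _ in range(int(seconds / DT)):
        throttle = max(0.0, 1.0 - arm)      # a raised arm chokes the valve
        omega += (K_ENGINE * throttle - FRICTION * omega) * DT
        arm += K_ARM * (omega - arm) * DT   # arm lags behind engine speed
    return omega

# The loop settles where K_ENGINE * (1 - omega) == FRICTION * omega,
# i.e. omega = K_ENGINE / (K_ENGINE + FRICTION) = 0.8 in these units.
```

Disturb any constant (a load change, say) and the same coupling finds a new equilibrium — which is the sense in which the governor "solves" its task without computing anything.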
 

Another site for Wilson...

Dr Andrew D Wilson

"...but I can't see how a body can phenomenally perceive without some form of representation/intention."

I'm not following this...representation seems an extra step...the way early robots tried to map a room before attempting to drive across it...Brooks came along with subsumption architecture, sensors wired to effectors, and produced capable robots of very little brain.
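The subsumption idea can be gestured at in a few lines: independent behavior layers each run straight off the raw sensor values, and a higher layer simply suppresses the output of the layers beneath it — no world model is ever consulted. A toy sketch; the layer names and sensor format are my own, not Brooks's code:

```python
def wander(sensors):
    """Lowest layer: with nothing better to do, cruise forward."""
    return ("forward", 1.0)

def avoid(sensors):
    """Higher layer: wired straight from the range sensor to the motors.
    A close obstacle makes it turn away; otherwise it stays silent."""
    if sensors["range"] < 0.5:
        return ("turn", 1.0)
    return None                  # no output: lower layers pass through

def arbitrate(layers, sensors):
    """Subsumption: the highest-priority layer with an output wins outright."""
    for layer in layers:         # ordered highest priority first
        command = layer(sensors)
        if command is not None:
            return command
    return ("stop", 0.0)

# Open space -> the wander layer drives; obstacle -> avoid subsumes it.
```

Nothing in the loop maps the room first; competence comes from the wiring between sensing and acting, which is exactly the point being made about representation.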
 
"Vehicles" by V. Braitenberg - if you haven't read it, I think it helps a bit here too.
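For anyone who hasn't read it, the book's point is easy to demo: two light sensors wired directly to two wheels already produce behavior an observer would call "fear" (vehicle 2a, same-side wiring) or "aggression" (vehicle 2b, crossed wiring), with nothing inside the vehicle that represents the light. A rough simulation — all constants are my own choices, not Braitenberg's:

```python
import math

DT, VMAX, GAIN, WIDTH = 0.01, 10.0, 50.0, 0.2  # made-up constants

def run_vehicle(crossed, steps=2000):
    """Differential-drive vehicle with two light sensors on its nose.

    crossed=True  -> each sensor drives the OPPOSITE wheel ('aggression',
                     vehicle 2b): it turns toward the light and charges it.
    crossed=False -> same-side wiring ('fear', vehicle 2a): it turns away.

    Returns (closest approach to the light, final distance from it).
    """
    lx, ly = 5.0, 5.0                       # light source position
    x = y = heading = 0.0
    closest = math.hypot(lx - x, ly - y)
    for _ in range(steps):
        readings = []
        for bearing in (0.6, -0.6):         # left sensor, right sensor
            sx = x + 0.3 * math.cos(heading + bearing)
            sy = y + 0.3 * math.sin(heading + bearing)
            d2 = (lx - sx) ** 2 + (ly - sy) ** 2
            readings.append(min(VMAX, GAIN / (d2 + 0.01)))  # 1/d^2 falloff
        s_left, s_right = readings
        # the entire "nervous system": sensor value -> wheel speed
        v_left, v_right = (s_right, s_left) if crossed else (s_left, s_right)
        heading += (v_right - v_left) / WIDTH * DT
        speed = (v_left + v_right) / 2.0
        x += speed * math.cos(heading) * DT
        y += speed * math.sin(heading) * DT
        closest = min(closest, math.hypot(lx - x, ly - y))
    return closest, math.hypot(lx - x, ly - y)
```

Run both wirings from the same start and the crossed vehicle's closest approach is far smaller than the fear vehicle's — yet neither contains anything resembling a map or a model of the light, just wires.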

 
Understanding phenomenological differences in how affordances solicit action. An exploration
Roy Dings
Phenom Cogn Sci (2018) 17:681–699

Abstract

Affordances are possibilities for action offered by the environment. Recent research on affordances holds that there are differences in how people experience such possibilities for action. However, these differences have not been properly investigated. In this paper I start by briefly scrutinizing the existing literature on this issue, and then argue for two claims. First, that whether an affordance solicits action or not depends on its relevance to the agent’s concerns. Second, that the experiential character of how an affordance solicits action depends on the character of the concern to which it is relevant. Concerns are conceived of as bodily forms of responsiveness, and solicitations are experienced through this responsiveness. The main aim of this paper is to make clear that an understanding of experiential differences in solicitations has to be based on a phenomenological appreciation of how one experiences one’s responsiveness to those solicitations. In the remainder of the paper I show how such a phenomenological appreciation reveals several characteristics of our responsiveness and I briefly explore three of them: valence, force and mineness. In the final section I discuss the self-referentiality of affordances in light of the current proposal, and argue that this self-referentiality is broader than is typically acknowledged.

Keywords: Affordances, Solicitations, Phenomenology, Hermeneutics, Narrative, Concerns, Responsiveness

https://link.springer.com/content/pdf/10.1007/s11097-017-9534-y.pdf
 
When there's a headline about someone being normal even though most of their brain is missing, there's the possibility that someone might interpret it to mean the brain isn't necessary for consciousness, which could in turn be construed as evidence for paranormal phenomena like ghosts or reincarnation. So I'm not claiming that anyone in particular has said consciousness is paranormal, just that it's a situation where we should be careful not to make that assumption. Given that the name of the thread is Consciousness and the Paranormal, we're already oriented in that direction, so IMO it's a reasonable concern.

Referencing your highlighted opening clause, I think only people who are predisposed to believe that consciousness is produced solely by the brain [neurons, neural nets] are troubled by cases such as those @Soupie has cited. Immersing yourself for a while in phenomenological philosophy, particularly in Merleau-Ponty's works, might help you to gain an appreciation of the demonstrable origins of consciousness in the body and its prereflective experience of being in a world. The paper I linked a few minutes ago might be a good place to begin.
 
:eek:
. . . Also note that I leave (phenomenal) consciousness out because it's a different beast.

I'd argue instead [indeed have done so ad nauseam here] that phenomenal consciousness is the essential ground out of which all felt and thought experience arises.
 
"Alas", said the mouse, "the whole world is growing smaller every day. At the beginning it was so big that I was afraid, I kept running and running, and I was glad when I saw walls far away to the right and left, but these long walls have narrowed so quickly that I am in the last chamber already, and there in the corner stands the trap that I am running into."

"You only need to change your direction," said the cat, and ate it up.

Love this. Is it from something you're writing or somewhere else?
 

Note this key extract from the above paper clarifying the way in which phenomenology overcomes Descartes' radical dualism:

"It is important to note that from a phenomenological point of view, the solicitation and responsiveness are intertwined. That is, we do not first experience a solicitation and then the bodily responsiveness. Rather, a solicitation (the ‘call’ to act) is experienced through one’s responsiveness. As Bruineberg & Rietveld (2014, p.2) put it, a solicitation is 'the (prereflective) experiential equivalent of a bodily action-readiness'. (Note that in section 4 I will argue that a solicitation can be more rich and nuanced on account of the various ways in which the bodily responsiveness may be experienced)." (pg. 687)

https://link.springer.com/content/pdf/10.1007/s11097-017-9534-y.pdf
 
Referencing your highlighted opening clause, I think only people who are predisposed to believe that consciousness is produced solely by the brain [neurons, neural nets] are troubled by cases such as @Soupie has cited.
Exactly, as they should be. Meanwhile those who believe the brain isn't required for the emergence of consciousness could see such an article as evidence for that belief ( confirmation bias ) and therefore not be bothered by it at all. The only problem is that the article isn't necessarily the kind of evidence they might think it is. I felt it important to make that distinction.
 
Exactly, as they should be.


Why?

Meanwhile those who believe the brain isn't required for the emergence of consciousness could see such an article as evidence for that belief ( confirmation bias ) and therefore not be bothered by it at all. The only problem is that the article isn't necessarily the kind of evidence they might think it is. I felt it important to make that distinction.

I think your thinking about this remains limited by your presuppositional objectivist approach. It's not an either/or question whether the evolution of neural nets in our species and others leads to more effective 'grips' on that which we encounter within our physical environments and in interactions with one another and other species of life. But consciousness begins in pre-reflective experience in the environing world and enables reflection and thought to begin, and the thinking of our species has long included not only physics but metaphysics, built on senses of and reasoning about the nature and extent of 'what-is' beyond the horizons of the visible that limit our perception locally while constituting the larger whole within which -- and in large evolutionary terms out of which -- our experienced reality becomes knowable by us to the extent that it does.
 
Here is an extract from a paper the whole of which might be helpful to us in gaining a grip on the concept of 'representation'. The paper is entitled "The Ontology of Concepts—Abstract Objects or Mental Representations?"; the author is Eric Margolis.

". . . The Psychological View is the default position in many areas of cognitive science and enjoys a good deal of support in the philosophy of mind. It is at the center of a rich and powerful model of the mind, but two of its benefits are especially worth mentioning. The first is the promise of explaining the productivity of thought. Productivity refers to the fact that, under suitable idealization, there is no upper bound to the range of semantically distinct thoughts. One way of appreciating just how vast our cognitive capacities are is to consider the thoughts associated with the sentences of a natural language. As Noam Chomsky has noted, nearly every sentence we speak or hear is a sentence we have never before encountered, but despite the novelty of these sentences, we have no difficulty entertaining the corresponding thoughts. (The sentences of this paper are an example. It’s unlikely that readers have come across most of these very sentences before.) The psychologist George Miller (1995) makes the point all the more vivid by focusing on just 20-word sentences, asking how many of these we can understand. Assuming conservatively that there are on average 10 words to draw from for each word choice as a sentence is constructed, the implication is that we understand at least 10^20 20-word sentences. That's one hundred million trillion of them. By comparison, the human brain contains roughly 10^11 neurons, and the number of seconds in the history of the Universe is estimated to be on the order of 10^17. So assuming that each sentence corresponds to a distinct thought, and sticking only to 20-word sentences (that is, ignoring not just longer sentences but also shorter ones), the number of thoughts we arrive at is more than a billion times the number of neurons in the brain and about a thousand times the number of seconds in the history of the Universe. According to RTM and the Psychological View, this is just the tip of the iceberg. Once we abstract away from limitations of memory and attention and other factors that interact with our thinking, the human capacity for entertaining new thoughts is without limits. The actual thoughts that people entertain in their lifetime constitute a tiny and idiosyncratic subset of the thoughts that their conceptual system makes possible. . . ."
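Miller's back-of-the-envelope numbers in that passage check out exactly:

```python
# Miller's conservative setup: ~10 word choices per slot, 20 slots per sentence
sentences = 10 ** 20          # distinct 20-word sentences
neurons   = 10 ** 11          # rough neuron count in a human brain
seconds   = 10 ** 17          # order of the age of the Universe in seconds

print(f"{sentences:,}")       # 100,000,000,000,000,000,000 -- "one hundred million trillion"
print(sentences // neurons)   # 10**9: more than a billion times the neuron count
print(sentences // seconds)   # 10**3: about a thousand times the seconds
```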

Isn't there a line in Through the Looking-Glass about believing six impossible things before breakfast?
 
That should be self-evident because the overwhelming evidence favors the hypothesis that "consciousness is produced solely by the brain [neurons, neural nets]", at least so far as we know at this time. The key there is in how I interpret your use of the word "produced". To use an analogy, consciousness clearly ≠ brain any more than light = light bulb. Yet each appears to be dependent on the other in the context in which they are associated. Sure, we could say that the light seems to be produced by a light bulb, but that's only a correlation, and the light is really some mysterious thing that is merely associated with light bulbs; but in all honesty, how reasonable is that?
I think your thinking about this remains limited by your presuppositional objectivist approach.
Let's clarify that statement. Firstly, an objectivist position isn't presuppositional other than that it defines an objective view as the quality of being true outside a subject's individual biases, interpretations, feelings, and imaginings. In this sense it is far from limiting, at least if the objective is truth, because it doesn't pander to confirmation bias. This means that it makes fewer suppositions and instead relies on evidence and critical thinking to determine the most reasonable state of affairs given the situation. Secondly, as a living, feeling being, I'm not entirely devoid of intuition and subjective analysis either. I've just learned from experience that it's wiser to grant more weight to objective evidence than to wishful thinking.
It's not an either/or question whether the evolution of neural nets in our species and others leads to more effective 'grips' on that which we encounter within our physical environments and in interactions with one another and other species of life.
You say "It's not an either/or question", but you don't list both options. To clarify, it would help if you could answer the following: it's not a question of neural nets or ( what specifically ), and with respect to what exactly? The sentence you give ( below ) doesn't appear to indicate that; it is more of a statement in and of itself. I could assume that you meant "It's not an either/or question whether the evolution of neural nets in our species does or doesn't ...", but I'm not sure that's what you actually meant. If it is, then in the context of these last few exchanges, that is, from a language perspective, exactly the nature of the question.
But consciousness begins in pre-reflective experience in the environing world and enables reflection and thought to begin, and the thinking of our species has long included not only physics but metaphysics, built on senses of and reasoning about the nature and extent of 'what-is' beyond the horizons of the visible that limit our perception locally while constituting the larger whole within which -- and in large evolutionary terms out of which -- our experienced reality becomes knowable by us to the extent that it does.
That seems fine in and of itself, but it doesn't seem relevant to why we should or shouldn't be objective in considering the evidence. Metaphysics and physics ( or any other science ) both use critical thinking to bring us closer to the truth. Metaphysics might also employ more subjective methods like meditation, but such tools are equally useful for the scientifically minded, and I would submit that it makes far more sense to meditate on something backed by the best evidence possible than on blind faith in nonsense. BTW, I'm not saying that you ( personally ) have blind faith in any nonsense; I'm just making a statement about a hypothetical situation.
 