Consciousness and the Paranormal — Part 7

Status
Not open for further replies.
Regarding information theory

I would agree with you that searching for a computational origin for consciousness is a red herring. I also know you have an affinity for a non-brain-based/non-neuron-based origin for consciousness. However, you do search for a physical origin for consciousness, an origin based in the physical processes of life.

Yes, in my view we have to reach an understanding of how consciousness has evolved in the evolution of species on the planet whose history we know to any extent -- still a very limited extent. As you know, I think Jaak Panksepp and the discipline of Affective Neuroscience he has led make a foundational contribution to this inquiry.

However, I think this is a red herring as well. The problems with discontinuous physical models of consciousness have been well covered in this discussion.

What do you mean by "discontinuous physical models of consciousness" and where in this thread has this 'problem' been well-covered?

Furthermore, despite your dismissal, exploration of the nexus of mind, information, and brain is not fading. It's true that some hold out hope that these three things can be reduced to one another, which may be misguided, but the claim that they are related is not.

Of course they are related. The still-open question is how they are related.

It's clear that processing information is a major part of what the brain does. I'm not suggesting the brain is running a software program composed of a preprogrammed set of algorithms. Neuroscientists do not understand exactly how the brain processes information.

Agreed.

In any case, it's becoming clear that when we are conscious, we are conscious of information in the brain. However there is lots of information in the brain, and we are not conscious of all of it.

Agreed that we are not conscious of all 'information' in the brain. The question is: what feeds 'information' into the brain, and where in the brain is 'information' 'processed' to produce our sense of the actuality of the world in which we exist? It's by now widely accepted that our species' orientation to its environing world begins in subconscious -- i.e., pre-reflective -- experience, out of which fund of experience the mind -- whether of the child born into our world or the proto-human species from which we developed -- attempts to 'make sense'. In itself, this effort to make sense of ourselves and our surroundings more than suggests that the origin of 'information' motivating our and other species' development of consciousness arises in lived experience -- the interaction of increasingly aware living creatures with the palpable, sensed, worldly environment in which they have found themselves {and as we find ourselves} existing. Much of what you have brought forward in this thread chronically seeks to avoid the recognition of the organic nature of awareness, affectivity, and experience, from primordial species of life [Panksepp, Maturana, Varela] through the proliferating developments of awareness, seeking behavior, and protoconsciousness demonstrated in the evolution of species, to our own species' capacities for reflection on experience funding in the achievement of mind itself. This has been the core of your and my disagreements over the past two years.

Why are we conscious of some of it and not all of it? And why are we conscious at all?

Re your first question, I think the answer is that we don't need to be conscious of many bodily processes regulated by brain activity. The more interesting question is your second one: "Why are we conscious at all?" I think the answer is that our physical, bodily, existence in an actual environing world -- like other animal species' existence in actual environing worlds -- has required and promoted the development of both consciousness and brain development. The growth of awareness and consciousness is primary in my view, and both capacities require for their development a tangible, sensed, environment of things and others that are experientially present (manifest) to the individual organism in its attempt to survive, function, and thrive.

As I noted, Hoffman's model gives a pathway toward mental causation by making consciousness primary and the physical derivative of consciousness. Because the physical is constituted of the mental, the two can interact.

Sorry, but I find that thinking, and the way you've expressed it, to be fuzzy. Hoffman does not seem to have what we could call a 'model' of consciousness but rather a conjecture for which there seems to be no evidence. His thinking seems to be a curious hybrid of dualistic Platonism and Idealism in philosophical terms refitted for contemporary cognitive and computational neuroscientific consumption and/or approval in the current age of 'information theory' and computer science.

The problem with free will is the causal flow. We appear to live in an orderly universe with an ordered causal flow. How can we have free will—the ability to step out of this causal flow—without disrupting it?

It's a considerable leap from recognizing causality in the evolution or development of the complex forces constituting the physical universe as we perceive it [to the extent that we understand it] to postulating that every aware creature evolved in the universe is micromanaged in what it feels, what it makes sense of, and accordingly how it behaves, given its capacities to act in its own lifetime.

As I've speculated in other threads, it may be that causality is a feature of the species specific user interface and not a hard feature of what-it, the level at which consciousness originates.

I need a bit of help with the syntax of that sentence before I can comment further. Am I correct in reading your final phrase – “the level at which consciousness originates” – as standing in apposition to “the species specific user interface” as Hoffman defines it? I’m fairly sure, but not certain, that that’s what you mean to claim based on your posts of the past.

But I’m wondering why -- if you speculate that “causality is a feature of the ‘informational’ species-specific interface a la Hoffman -- you seem here to speculate further that this informational interface is “not a hard feature of what-is” {if that’s indeed what you meant to type}? If so, are you now proposing that Hoffman’s interface hypothesis does not actually touch what you refer to as “the level – {of experienced life? of being?} -- at which consciousness originates”? Please clarify.
 
The two perspectives I'm referring to are on the notion of free will, not consciousness. I've "characterized" them as "commonly held" based on my personal experience reading about and discussing the issue with numerous people over the years -- e.g., statements like "Free will is the ability to choose between different possible courses of action" (Wikipedia) and "It would be misleading to specify a strict definition of free will since in the philosophical work devoted to this notion there is probably no single concept of it" (Stanford Encyclopedia of Philosophy) -- and then distilling out the essential differences.

So you have been concerned only to distinguish between the perspectives a) that we do have capacities for freely willed action and b) that we don't? Hmm, my original response was to this comment of yours:

"Given a brain based model for decision making, it's still a fact that the choices we become conscious of are formed before we become conscious of them."

That comment surely expresses a concept of consciousness, one which backtracks somewhat, as I recall, from one or more previous posts of yours. So I'm still wondering what your concept of consciousness is. Do you think that consciousness cannot be free because subconscious experience plays into it and determines what we think and do? Just curious.

 
So I'm still wondering what your concept of consciousness is.
I wasn't really trying to get into what my concept of consciousness is, so much as evoke thought on the issue of free will, but for the sake of discussion, my view of consciousness as a concept is that it's experiential in nature.
Do you think that consciousness cannot be free because subconscious experience plays into it and determines what we think and do? Just curious.
I'm not saying what I think so much as posing food for thought, not unlike the question your response evokes: Can consciousness be free? That is certainly another doozy. My initial reaction is that consciousness cannot be free because it seems to be dependent on circumstance, but then again, maybe the question doesn't apply to consciousness any more than it applies to the surface of lakes. Or maybe true freedom lies in the absence of existence. I guess it all depends on what paradigm one applies as a filter.
 
@smcder @Constance

If we deny that the brain processes information but at the same time, as Constance does, seek a purely naturalistic (materialistic) approach to cognition and consciousness, then we've reduced cognition and consciousness to mechanisms.

No information = no meaning

In the last article I posted which I described as depressing, that's exactly what the author was angling at. Constance lauded the article.

Embodied, situated, and enactive accounts of cognition do not have room for free will or mental causation. These accounts hold that cognition and consciousness are byproducts of the mechanical interaction between the organism and its environment.

Informational accounts on the other hand hold that the organism—utilizing information processing—actively makes meaning of its reciprocal interaction with the world and self-regulates accordingly.

Yes, this is a non-reductive, non-deterministic approach which is why the (depressing) article above labeled it as neo-egocentrism.

Any model that proposes something other than a determined, mechanistic approach to anything will be deemed non-scientific.

I know that the associations often see between different theories of cognition and consciousness often leave you all befuddled, but despite their core ideologies there is overlap between many of them.

Here is the beginning of an attempt to bridge Enactivism and Predictive Processing:

Does Action-oriented Predictive Processing offer an enactive account of sensory substitution? | iCog
 
I also want to point out that asserting that the brain "processes information" is not the same as saying the brain is a computer.

Yes, there are cognitive neuroscientists who believe the brain is a computer running evolutionarily predetermined algorithms that, among other things, somehow compute consciousness. This is absurd.

However, I think it's equally absurd to claim that none of the processes of the brain are "information processing" processes akin to those of man-made information-processing systems.

Technically, couldn't the physical processes of a computer be described in purely mechanical, non-informational terms? There's no information processing happening here, just the determined clanging together of atoms, right?

Information is still a controversial subject. Does it have any causal power? As Searle says, the notion of information presupposes consciousness. And, as I said, technically speaking, given causal closure at the quantum realm, nothing above it has causal power. So why even consider computers to be information processors? They are just agglomerations of atoms doing their thing. The notion that they are processing information is a product of our species-specific user interface. We only say they are processing information because we are conscious humans.
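One part of that observer-relativity point can be made concrete with Shannon's formal definition of information, which is defined only relative to a probability model that some observer supplies. This is a minimal sketch of my own for illustration, not anything from the linked articles:

```python
import math

def shannon_entropy(probs):
    """Average information content of a source in bits, per Shannon:
    H = -sum(p * log2(p)). Note that H is defined only relative to a
    probability model -- the same physical signal carries a different
    amount of 'information' under different models an observer assigns."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# The same 8-symbol alphabet under two observer-assigned models:
uniform = [1/8] * 8            # maximal uncertainty
skewed = [0.9] + [0.1/7] * 7   # mostly predictable, so far fewer bits

print(shannon_entropy(uniform))  # 3.0 bits per symbol
print(shannon_entropy(skewed))   # well under 1 bit per symbol
```

Nothing in the physics of the signal changes between the two cases; only the model does, which is one precise sense in which "information" is not a free-standing physical quantity.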

Three interesting responses below:

Is the human brain just another information processing device? - Quora

And here's a response to a response of the article smcder posted. Pretty interesting. And it seems the author has an affinity for a quantum explanation for the origin of consciousness and mind (which ironically also appeals to information processing).

A response to “A response to `The Empty Brain` — The Information Processing Brain”

http://www.sciencedirect.com/science/article/pii/S1571064513001188?via=ihub

I'm all for the notion that what the brain is doing is not just information processing (i.e., the brain is not a computer), but to say that the brain does not process information seems wrong.
 
"I'm all for the notion that what the brain is doing is not just information processing ... but to say that the brain does not process information seems wrong."

Change is hard.
 
the sounds of one knee jerking ... and the jerks may be right ... but this Epstein fellow intrigues.

Maybe because I've not thought about this in a good, long time.
 
Andy Clark - Wikipedia

"Clark’s work explores a number of disparate but interrelated themes. Many of these themes run against established wisdom in cognitive processing and representation. According to traditional computational accounts, the function of the mind is understood as the process of creating, storing and updating internal representations of the world, on the basis of which other processes and actions may take place. Representations are updated to correspond with an environment in accordance with the function, goal-state, or desire of the system in question at any given time. Thus, for example, learning a new route through a maze-like building would be mirrored in a change in the representation of that building. Action, on this view, is the outcome of a process which determines the best way to achieve the goal-state or desire, based on current representations. Such a determinative process may be the purview of a Cartesian "central executive" or a distributed process like homuncular decomposition.

In contrast to traditional models of cognition, which often posit the one-way flow of sensory information from the periphery towards more remote areas of the brain, Clark has suggested a two-way "cascade of cortical processing" underlying perception, action, and learning. The concept of predictive processing lies at the heart of this view, wherein top-down predictions attempt to correctly guess or "explain away" bottom-up sensory information in an iterative, hierarchical manner. Discrepancies between the expected signal and actual signal, in essence the "prediction error," travel upward to help refine the accuracy of future predictions. Interactions between forward flow of error (conveyed by "error units") and backward flow of prediction are dynamic, with attention playing a key role in weighting the relative influence of either at each level of the cascade (dopamine is mentioned as "one possible mechanism for encoding precision" with regard to error units). Action (or action-oriented predictive processing) also plays an important role in Clark's account as another means by which the brain can reduce prediction error by directly influencing the environment. To this, he adds that "personal, affective, and hedonic" factors would be implicated along with the minimization of prediction error, creating a more nuanced model for the relationship between action and perception.[4]

According to Clark, the computational model, which forms the philosophical foundation of artificial intelligence, engenders several intractable problems. One of the more salient is an information bottleneck: if, in order to determine appropriate actions, it is the job of the mind to construct detailed inner representations of the external world, then, as the world is constantly changing, the demands on the mental system will almost certainly preclude any action taking place. For Clark, we need relatively little information about the world before we may act effectively upon it. We tend to be susceptible to "grand illusion", where our impressions of a richly detailed world obscures a reality of minimal environmental information and quick action. We needn't try to reconstruct the detail of this world, as it is able to serve as its own best model from which to extract information "just in time"."
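The prediction-error loop described in the quoted passage can be caricatured in a few lines. This is a toy sketch of the general idea only, not Clark's actual model: a single top-down prediction is repeatedly revised by bottom-up prediction error, with a precision weight setting how strongly each error revises the prediction:

```python
def predictive_update(prediction, observations, precision=0.3):
    """Toy illustration of hierarchical predictive processing collapsed
    to one level (an assumption-laden sketch, not Clark's model): a
    top-down prediction is corrected by bottom-up prediction error,
    with 'precision' weighting how much each error counts."""
    for sensory_input in observations:
        error = sensory_input - prediction   # bottom-up prediction error
        prediction += precision * error      # top-down revision
    return prediction

# A stream of noisy sensory samples around a true value of 5.0:
stream = [4.8, 5.2, 5.1, 4.9, 5.0, 5.05, 4.95]
estimate = predictive_update(prediction=0.0, observations=stream)
print(round(estimate, 2))  # 4.58 -- moving toward the true value of 5.0
```

In Clark's fuller account this happens at every level of a cortical hierarchy at once, and "action-oriented" prediction adds the option of changing the world (moving, sampling) rather than changing the prediction; none of that is captured in this one-level sketch.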

Elsewhere the article discusses the notion of extended cognition. The terms cognition and mind seem to be used interchangeably. I think that's problematic, especially if we are suggesting there is no information in the brain.

What is non-conscious cognition? What is non-conscious mind? If we are suggesting the brain does not process information then these terms are meaningless. There is no mind or cognition unless it is conscious. Just as there is no information unless there is consciousness. Mind, cognition, information all presuppose consciousness.
 
"... Mind, cognition, information all presuppose consciousness."

Too many words. Big words. Can you do an executive summary in layman's terms?
 
Why do I say “rightly controversial”? Because if one interprets it as above, the whole idea fails to make sense. Hypothesising “a direct interaction between organisms and their world” means that there would be nothing to learn in studying the mechanisms which mediate the interactions and happen to occur inside bodies (would count as indirect?). In other words, it declares the reductionist approach a dead-end a priori. Trouble is, nobody does this: we do study how sensory signals travel along nerves towards the central nervous system and also what happens within brains in similar ways. The only problem I have with Radical Embodiment is that it might superficially seem to espouse such a view, while I happen to think that it tries to do something which is much more important, and orders of magnitudes more useful.

Radical Embodiment is challenging our understanding of “representations” and showing how they are far less “information rich” than what our common intuitions would suggest. It is doing so by showing how much the interaction with the world is necessary for guiding and fine-tuning behaviour. It does challenge the idea that we hold detailed models of the world and interact with those (instead of interacting with the world), and does so for a lot of good reasons, but, as exemplified in this brief exchange, it does not challenge the IP metaphor, it is merely showing how to apply it better!
 
To me, it is self-evident that this radical idea is basically correct, and at the same time, it is a reason why it is so difficult to figure out how brains work. One needs to account for much more than just neurons… At the same time, while I do accept the basic idea without reservations, I am also worried that, as exemplified by the short discussion I’ve linked above, radically rejecting all uses of the “representation” concept isn’t going to work: what needs to be done is different, but perhaps something that is best left for another time.
 
Thankfully I recognized the provocative nature of the article in question and didn't launch into detailed counterpoint over it, because Graziosi does such a fine job of that on his own. One thing that Epstein and I, and most certainly @Constance and perhaps even yourself probably agree on, is that consciousness ≠ computation. Once upon a time I could see no reason why sheer computational power could not somehow give rise to consciousness. I've since done a complete 180 on that.
 
"If we deny that the brain processes information but at the same time, as Constance does, seek a purely naturalistic (materialistic) approach to cognition and consciousness, then we've reduced cognition and consciousness to mechanisms. ..."

Soupie, you make many assertions in this post, including misinterpretations of what you suppose my approach to consciousness and its origins to be. On the whole, imo, your assertions oversimplify a number of persisting issues in consciousness research and also, apparently, in your claims about IP hypotheses relating to consciousness. I'll read this latest paper you link. I suggest that you read the entirety of the comments following the Anil Seth paper you linked earlier to gain an appreciation of the informed critiques that have been made of the reductive techno-neuro hypothesis of Seth and his followers. Read beyond Seth to the extensive following comments here:

The hard problem of consciousness is a distraction from the real one | Aeon Essays
 
ps, I also want to note the presumptuousness and defensiveness you display in the following comment apparently addressed to Steve and me (given that that whole post begins by addressing us specifically):

"I know that the associations often see[n?] between different theories of cognition and consciousness often leave you all befuddled, but despite their core ideologies there is overlap between many of them."

Of course there is both overlap and persistent contradiction among these theories of cognition and consciousness concerning core issues yet to be resolved. What your presumptuous comment demonstrates is that you have rarely made an effort to understand the ideas expressed in posts by Steve or me, thus have not in fact understood what we have written. This is likely not because we are inarticulate or 'befuddled', but because you have rarely read the sources we cite as necessary supplements to -- fuller explications of -- our numerous posts over these last two years.
 