
Consciousness and the Paranormal — Part 2

What I'm saying is that the subjective experience is the act of specific neurons firing in a specific way.

And that's what I'm saying too.

So phenomenal experience isn't constituted of neurons, but rather of the pattern of information they produce. Ergo phenomenal experience equals information, no?

Smcder, you stated that you believe consciousness might arise in non-organic computers. This to me seems to imply even more that mind isn't constituted of any specific physical substance, but that mind is embodied by physical substances, be they organic or silicon.

It's the patterns of information they produce that matter, not the atomic composition of the physical substance producing them.

In my thinking:

Information = Proto qualitative experience (proto mind)

Integrated information = Qualitative experience (mind)

This means proto-mind has existed from the moment matter existed; and mind has existed from the moment matter gave rise to integrated information.

As I noted earlier, I wonder if the first moment of mind arising from proto-mind occurred when a physical system first arranged itself to represent another physical system.

However, the internal, affectual "information management" concept of the origin of mind is interesting too. I've not read that paper yet.

For me it comes down to the fact that mind appears not to be constituted of any physical substance, yet appears to be intimately related to physical organisms. What we know about physical organisms is that they are information processing systems. The information comes to them via "lived experience" in a real, physical world, but all this lived experience is converted to information via neuronal spikes in the body-brain.

This converted and integrated information, I believe, must be the mind.

We've been on a quale hunt and we've looked all over the physical body-brain and not found any qualia! But the qualia are jumping all over and around the body-brain. Where are the qualia coming from? What are they made of? Here, let's move all this information out of the way that also seems to be spilling out of the body-brain so we can have a better look...
 
The Chalmers and Nagel papers aren't linked as statements of my position; they were where we started on the thread ... so everyone who's been here a while has read them, and it's so we don't have to re-hash what's been covered.
Other than being an overview or map of the different ideas, what's he offering here?
 
So phenomenal experience isn't constituted of neurons, but rather of the pattern of information they produce. Ergo phenomenal experience equals information, no?
Erm, I'm not sure.
I'm thinking it's data, not information, but subjectively I guess it would be information once it's mapped to the experience one is perceiving.

Smcder, you stated that you believe consciousness might arise in non-organic computers. This to me seems to imply even more that mind isn't constituted of any specific physical substance, but that mind is embodied by physical substances, be they organic or silicon.

It's the patterns of information they produce that matter, not the atomic composition of the physical substance producing them.
I'm with you so far.
In my thinking:

Information = Proto qualitative experience (proto mind)

Integrated information = Qualitative experience (mind)
Here's where it goes a bit pear-shaped for me.
There are lots of kinds of information that won't lead to a mind.
This means proto-mind has existed from the moment matter existed; and mind has existed from the moment matter gave rise to integrated information.

As I noted earlier, I wonder if the first moment of mind arising from proto-mind occurred when a physical system first arranged itself to represent another physical system.

However, the internal, affectual "information management" concept of the origin of mind is interesting too. I've not read that paper yet.
I'm lost.
For me it comes down to the fact that mind appears not to be constituted of any physical substance, yet appears to be intimately related to physical organisms. What we know about physical organisms is that they are information processing systems. The information comes to them via "lived experience" in a real, physical world, but all this lived experience is converted to information via neuronal spikes in the body-brain.

This converted and integrated information, I believe, must be the mind.

We've been on a quale hunt and we've looked all over the physical body-brain and not found any qualia! But the qualia are jumping all over and around the body-brain. Where are the qualia coming from? What are they made of? Here, let's move all this information out of the way that also seems to be spilling out of the body-brain so we can have a better look...
I think the assertion is that it actually is constituted of a physical substance: the substrate doing the information processing. It's just the special arrangement and process that makes the mind.
 

Chalmers trained under Hofstadter, not Nagel.

Nagel's paper is #17 of the 100 most cited papers in philosophy of mind

MindPapers: 100 most cited works by philosophers in MindPapers according to Google Scholar

... so, good, bad or indifferent, it's probably been thrown at him over the (I think) 40 years since its publication, and there have been counter-arguments and counter-counter-arguments, etc. ... so we should be able to address your concerns above - and most of them are pretty common reactions in discussions of the bat paper, for example:

That's like saying since "physicalism" doesn't currently have all the answers, it never will, and because I (at least think I) have a subjective experience, it must not exist physically!

The first bit is more in line with McGinn's position of New Mysterianism, which should drive you even battier than Nagel's paper.

But the overall critique is addressed in a couple of places in the text:

What moral should be drawn from these reflections, and what should be done next? It would be a mistake to conclude that physicalism must be false. Nothing is proved by the inadequacy of physicalist hypotheses that assume a faulty objective analysis of mind. It would be truer to say that physicalism is a position we cannot understand because we do not at present have any conception of how it might be true. Perhaps it will be thought unreasonable to require such a conception as a condition of understanding. After all, it might be said, the meaning of physicalism is clear enough: mental states are states of the body; mental events are physical events. We do not know which physical states and events they are, but that should not prevent us from understanding the hypothesis. What could be clearer than the words 'is' and 'are'?
Of course he goes on, but he does recognize/anticipate your critique.

And in footnote 15:

I have not defined the term 'physical'. Obviously it does not apply just to what can be described by the concepts of contemporary physics, since we expect further developments. Some may think there is nothing to prevent mental phenomena from eventually being recognized as physical in their own right. But whatever else may be said of the physical, it has to be objective. So if our idea of the physical ever expands to include mental phenomena, it will have to assign them an objective character—whether or not this is done by analyzing them in terms of other phenomena already regarded as physical. It seems to me more likely, however, that mental-physical relations will eventually be expressed in a theory whose fundamental terms cannot be placed clearly in either category.

This is a good link that points out the rhetorical aspect of Nagel's argument that can be missed in a first reading:

Conscious Entities » What is it like to be a bat

You might find it a good tonic for your limbic system. What I'm interested in getting from Nagel is this: from what I've seen, people split neatly on this paper ... they either kind of go "yeah, that's obvious" or they go ballistic like you did ... so I'm curious if he'll share how he has taught his ideas through the years and handled various objections to them.
I'm reading it now, will post later. However I'll offer this for the moment:

My overall sense is that philosophy is a dead end in general, and in artificial intelligence in particular.

I think, of course, of all the pontificators who thought that heavier-than-air travel was philosophically impossible, even after it was done. I say it can be done, and it will be done.

Of course, I could be wrong. Mind could be the stuff of not of this universe. Of course, we could all be given consciousness by the incantations of dread Cthulhu, too.

But I doubt it.
 
Here's where it goes a bit pear-shaped for me.

There are lots of kinds of information that won't lead to a mind.

I absolutely agree. I use the term "integrated information" very loosely.

While the mind may consist of information, it's certainly not just any information.

I define integrated information as "data patterned in such a way that phenomenal experience emerges." (Emerges from what? Proto-phenomenal experience, i.e., non-integrated information.)

So, saying that mind is integrated information is one thing; we are still left with the task of determining in which ways brains integrate information to give rise to mind. (The combination problem?)
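To make "data patterned in such a way" slightly less hand-wavy, here's a toy sketch in the spirit of Tononi's integrated information theory. To be clear, this is my own improvised stand-in, not Tononi's actual phi calculation: it scores a system by how much its whole state predicts its next state, beyond what its parts predict in isolation.

from itertools import product
from math import log2

def mutual_information(pairs):
    # I(prev; next) from a list of equally likely (prev, next) state pairs.
    n = len(pairs)
    p_joint, p_prev, p_next = {}, {}, {}
    for prev, nxt in pairs:
        p_joint[(prev, nxt)] = p_joint.get((prev, nxt), 0) + 1 / n
        p_prev[prev] = p_prev.get(prev, 0) + 1 / n
        p_next[nxt] = p_next.get(nxt, 0) + 1 / n
    return sum(p * log2(p / (p_prev[a] * p_next[b]))
               for (a, b), p in p_joint.items())

def toy_phi(update):
    # Whole-system predictive information minus the sum of what each
    # node, taken alone, predicts about its own next state.
    states = list(product([0, 1], repeat=2))
    whole = mutual_information([(s, update(s)) for s in states])
    parts = sum(mutual_information([(s[i], update(s)[i]) for s in states])
                for i in (0, 1))
    return whole - parts

def coupled(s):      # each node copies the OTHER node: a feedback loop
    return (s[1], s[0])

def independent(s):  # each node just copies itself: no integration
    return (s[0], s[1])

print(toy_phi(coupled))      # 2.0 bits: the whole exceeds its parts
print(toy_phi(independent))  # 0.0 bits: nothing beyond the parts

Both systems carry the same amount of raw data; only the coupled one scores above zero. On this toy measure it's the coupling between the parts, not the quantity of information, that buys "integration" - which fits your point that lots of kinds of information won't lead to a mind.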

I'm lost.

I'm just speculating about the first instance of integrated information.
 
I'm reading it now, will post later. However I'll offer this for the moment:

My overall sense is that philosophy is a dead end in general, and in artificial intelligence in particular.

I think, of course, of all the pontificators who thought that heavier-than-air travel was philosophically impossible, even after it was done. I say it can be done, and it will be done.

Of course, I could be wrong. Mind could be the stuff of not of this universe. Of course, we could all be given consciousness by the incantations of dread Cthulhu, too.

But I doubt it.

What can and will be done? AI? Chalmers thinks AI is possible; I posted a talk he gave on the topic earlier in this thread about how he thought it would happen. I don't know where Nagel stands on the issue.

I'm not sure what you mean by mind is "the stuff of not of this universe" ... I think you have a typo in there somewhere.
 
I'm reading it now, will post later. However I'll offer this for the moment:

My overall sense is that philosophy is a dead end in general, and in artificial intelligence in particular.

I think, of course, of all the pontificators who thought that heavier-than-air travel was philosophically impossible, even after it was done. I say it can be done, and it will be done.

Of course, I could be wrong. Mind could be the stuff of not of this universe. Of course, we could all be given consciousness by the incantations of dread Cthulhu, too.
But I doubt it.

A couple of papers by Chalmers I posted a while back on this thread:

Why Isn't There More Progress in Philosophy? http://consc.net/papers/progress.pdf speaks to your sense that philosophy is a "dead end":

For example:

"4) Greater distance from data. An answer naturally suggested by the discussion of decisive arguments is that there is less convergence in philosophy than in science because philosophy tends to concern domains that are remote from clear data. To put this in a Quinean mode, philosophical theses are a long way from the periphery in the network of belief.

Still, on the face of it, the same goes for many highly theoretical claims in science, for example concerning the distant past and the very small. And plausibly the same goes for mathematics. In that case one might point to mathematical axioms and intuitions as data, but this then raises the question of why we don’t have analogous philosophical data to settle philosophical questions. So this option tends to relabel the problem rather than to solve it."

He has seven such explanations (you might find #7, evolutionary explanations, interesting) and concludes:

"Finally: what are the prospects for further philosophical progress? Is it possible that we may eventually converge to the truth on the big questions of philosophy? To get a grip on this, we need to address the question of whether the answers to these questions are even knowable in principle, by sufficiently ideal reasoners. Here I will just flag my own positive view on this question. In Constructing the World, I argued for a scrutability thesis (called Fundamental Scrutability in the book) holding that all truths are a priori entailed by fundamental empirical truths concerning fundamental natural properties and laws. It follows (roughly) that if someone could know all the fundamental empirical truths and reason ideally, they could know all the truths, including all the philosophical truths."

For the case that humans fall below the threshold level of intelligence required to solve the big questions (if it is a matter of intelligence) - the position known as New Mysterianism - Chalmers proposes the possibility of an AI solution:

"If McGinn and van Inwagen are right, it remains open that we could answer philosophical questions by first improving our intelligence level, perhaps by cognitive enhancement or extension. Alternatively, we could construct artificial beings more intelligent than us, who will then be able to construct artificial beings more intelligent than them, and so on. The resulting intelligence explosion might lead to creatures who could finally answer the big philosophical questions."

The other reading I can recommend for this thread, and that I've posted before, is Chalmers' work on "verbal disputes", found at the same site. As far as I can tell, Chalmers does not discuss Cthulhu anywhere in his corpus. In fact, I believe that may be the first mention of it on this thread, though Lovecraft has come up a time or two.
 
What can and will be done? AI? Chalmers thinks AI is possible; I posted a talk he gave on the topic earlier in this thread about how he thought it would happen. I don't know where Nagel stands on the issue.

I'm not sure what you mean by mind is "the stuff of not of this universe" ... I think you have a typo in there somewhere.
That's essentially what the dualist argument is.

Mind is a magical substance that isn't of the physical universe, yet interacts with it.
 
That's essentially what the dualist argument is.

Mind is a magical substance that isn't of the physical universe, yet interacts with it.

I'm familiar with dualism.

Here's what I was trying to clarify with neurons, subjective experience and action. The diagram is the best I can do, it's just two arrows pointing from neurons firing ... hope it makes sense.

neurons firing -----------(emergence)-----> subjective experience ("I will move my hand.")
|
|
|
V
action (my hand moves)

The question is: what relation, if any, is there between the subjective experience and the action?

I'll stop there, because I want to go one step at a time and make sure I understand. The kind of free will I would want would be that there is a direct relationship between my thinking, my subjective experience of willing or intending my hand to move and my hand moving ... but it looks like my subjective experience and my hand moving are both caused by the neurons firing?

As I understand it, the idea that subjective experience emerges from neurons firing is epiphenomenalism, and eliminative materialism would say there is no advantage conferred by having subjective experience; it's just what it feels like for the neurons to fire. Natural selection put together a brain, and it so happens that subjective experience is a quality of that brain.

So there is no relationship between the subjective experience of thinking "I will move my hand." and the hand moving, other than that the hand moving and the subjective experience are both caused by the neurons firing.

This seems to be one way to interpret experiments that appear to show a person becomes aware of the intention to move their hand after the impulse is already formed - under this reading, I believe free will can be eliminated and consciousness can be seen as an "illusion", in that there is no direct relationship between what you are doing and what is going on in your head; both are effects of neurons firing.

As I said there are other possible (maybe) relationships between neurons, subjective experience and action, but I have questions about those too.
 
I know about axioms, I was just saying if you want to bust out the math I'll try to keep up.

Very Zen answer!

I just pulled 185 deadlift, 5 lbs over bodyweight, the old man ain't dead yet.

Oops, one hand deadlift, two wouldn't be particularly impressive ... still a long way to go.
 
I'm familiar with dualism.

Here's what I was trying to clarify with neurons, subjective experience and action. The diagram is the best I can do, it's just two arrows pointing from neurons firing ... hope it makes sense.

neurons firing -----------(emergence)-----> subjective experience ("I will move my hand.")
|
|
|
V
action (my hand moves)

The question is: what relation, if any, is there between the subjective experience and the action?

I'll stop there, because I want to go one step at a time and make sure I understand. The kind of free will I would want would be that there is a direct relationship between my thinking, my subjective experience of willing or intending my hand to move and my hand moving ... but it looks like my subjective experience and my hand moving are both caused by the neurons firing?

As I understand it, the idea that subjective experience emerges from neurons firing is epiphenomenalism, and eliminative materialism would say there is no advantage conferred by having subjective experience; it's just what it feels like for the neurons to fire. Natural selection put together a brain, and it so happens that subjective experience is a quality of that brain.

So there is no relationship between the subjective experience of thinking "I will move my hand." and the hand moving, other than that the hand moving and the subjective experience are both caused by the neurons firing.

This seems to be one way to interpret experiments that appear to show a person becomes aware of the intention to move their hand after the impulse is already formed - under this reading, I believe free will can be eliminated and consciousness can be seen as an "illusion", in that there is no direct relationship between what you are doing and what is going on in your head; both are effects of neurons firing.

As I said there are other possible (maybe) relationships between neurons, subjective experience and action, but I have questions about those too.

I see it a little differently. Forgive going back to the computer methodology.

Every process enters the processor, goes through a series of logic gates, then goes out the other side. How many gates a process needs to go through before it comes out the other side is known as its gate depth.

So, a floating point divide (a computationally expensive operation) may have a gate depth of, say, 20 gates. Each level of logic gates takes a finite period of time, so a floating point divide takes the per-gate time x 20 to deliver.

A simple "true or false" comparison may take, say, 2 gates to go through to deliver. So that's 1/10th as computationally expensive.

What I'm surmising is that there are different levels of awareness, and different levels of autonomy in our actions.

Consciousness is probably a neuronally expensive task. We have bazillions of neurons, and they're a network, but let's say just for the sake of argument they have a neuron depth of 20 to keep our higher consciousness up and running.

Pulling our hand away from something hot may have been evolutionarily optimized by our neurons a long time ago, so let's say it passes through whatever the equivalent is of 2 neuron "gates" to be done. The act may be done before we're actually aware of it.

Let's also say that we have various subcomponents of the mind. One is for autonomic functions, one is for sympathetic functions, one for parasympathetic functions, etc. My brain, for the sake of local optimization, may carve off a set of neuron pathways to drive me into work. That's its job -- just get me into work and don't make me think too much about it. Let's say it's optimized so it has a neuron depth of 10.

So it happily chugs along, and our consciousness is freed up to think as we drive into work. We may, then, take a turn or jam on the brakes before we become aware of it.

This doesn't mean that the higher-level computational task of consciousness can't itself trigger a lower-level event -- like taking a left-hand turn around a traffic jam I just heard about on the radio.

In short, I see it kinda this way, with a series of pretty elastically defined subprocesses being executed massively in parallel, with different timings.
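Here's a quick sketch of that picture, with invented depths and an invented per-layer delay (none of this is physiological data): the same stimulus fans out to parallel pathways, and the shallow ones finish long before the deep "consciousness" one does.

STEP_MS = 10  # hypothetical delay per neuron "layer", in milliseconds

PATHWAYS = {
    "reflex (pull hand from heat)": 2,   # the 2-"gate" reflex above
    "habit (drive me into work)": 10,    # the depth-10 commute driver
    "higher consciousness": 20,          # the depth-20 aware self
}

def completion_order(pathways, step_ms):
    # Finish time of each pathway, soonest first.
    return sorted((depth * step_ms, name) for name, depth in pathways.items())

for finish_ms, name in completion_order(PATHWAYS, STEP_MS):
    print(f"{finish_ms:4d} ms  {name}")

On these numbers the reflex lands at 20 ms and the habit layer at 100 ms, while the "aware" layer doesn't finish until 200 ms: the hand has moved, and the car has braked, before consciousness hears about it.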
 
I'm familiar with dualism.

Here's what I was trying to clarify with neurons, subjective experience and action. The diagram is the best I can do, it's just two arrows pointing from neurons firing ... hope it makes sense.

neurons firing -----------(emergence)-----> subjective experience ("I will move my hand.")
|
|
|
V
action (my hand moves)

The question is: what relation, if any, is there between the subjective experience and the action?

I'll stop there, because I want to go one step at a time and make sure I understand. The kind of free will I would want would be that there is a direct relationship between my thinking, my subjective experience of willing or intending my hand to move and my hand moving ... but it looks like my subjective experience and my hand moving are both caused by the neurons firing?

As I understand it, the idea that subjective experience emerges from neurons firing is epiphenomenalism, and eliminative materialism would say there is no advantage conferred by having subjective experience; it's just what it feels like for the neurons to fire. Natural selection put together a brain, and it so happens that subjective experience is a quality of that brain.

So there is no relationship between the subjective experience of thinking "I will move my hand." and the hand moving, other than that the hand moving and the subjective experience are both caused by the neurons firing.

This seems to be one way to interpret experiments that appear to show a person becomes aware of the intention to move their hand after the impulse is already formed - under this reading, I believe free will can be eliminated and consciousness can be seen as an "illusion", in that there is no direct relationship between what you are doing and what is going on in your head; both are effects of neurons firing.


As I said there are other possible (maybe) relationships between neurons, subjective experience and action, but I have questions about those too.

It was Benjamin Libet who did the experiment you referenced above (highlighted in red), which was subsequently celebrated by materialists as evidence that consciousness is not involved in our actions, an interpretation that Libet criticized in a paper I linked about a month ago in Part II of this thread. I'll find and post the link to that paper again.

The following paper, "Neurophenomenology for neurophilosophers" by Evan Thompson et al., outlines the interdisciplinary research in progress that addresses issues raised in Libet's experiment, among others:

http://brainimaging.waisman.wisc.edu/~lutz/ET&AL&DC.Neuropheno_intro_2004.pdf
 
I see it a little differently. Forgive going back to the computer methodology.

Every process enters the processor, goes through a series of logic gates, then goes out the other side. How many gates a process needs to go through before it comes out the other side is known as its gate depth.

So, a floating point divide (a computationally expensive operation) may have a gate depth of, say, 20 gates. Each level of logic gates takes a finite period of time, so a floating point divide takes the per-gate time x 20 to deliver.

A simple "true or false" comparison may take, say, 2 gates to go through to deliver. So that's 1/10th as computationally expensive.

What I'm surmising is that there are different levels of awareness, and different levels of autonomy in our actions.

Consciousness is probably a neuronally expensive task. We have bazillions of neurons, and they're a network, but let's say just for the sake of argument they have a neuron depth of 20 to keep our higher consciousness up and running.

Pulling our hand away from something hot may have been evolutionarily optimized by our neurons a long time ago, so let's say it passes through whatever the equivalent is of 2 neuron "gates" to be done. The act may be done before we're actually aware of it.

Let's also say that we have various subcomponents of the mind. One is for autonomic functions, one is for sympathetic functions, one for parasympathetic functions, etc. My brain, for the sake of local optimization, may carve off a set of neuron pathways to drive me into work. That's its job -- just get me into work and don't make me think too much about it. Let's say it's optimized so it has a neuron depth of 10.

So it happily chugs along, and our consciousness is freed up to think as we drive into work. We may, then, take a turn or jam on the brakes before we become aware of it.

This doesn't mean that the higher-level computational task of consciousness can't itself trigger a lower-level event -- like taking a left-hand turn around a traffic jam I just heard about on the radio.

In short, I see it kinda this way, with a series of pretty elastically defined subprocesses being executed massively in parallel, with different timings.

There's nothing to forgive in your "going back to the computer methodology." Your work is centered in that area, and Steve has engaged you in discussing it. Those of us not working in computer science need to understand what it involves and how far it can go in accounting for consciousness and mind. The question is whether computational systems, applied analogously to neurological systems in brain science, actually account for consciousness and mind.
 
Here is another paper that we should read at this point. I copied it into my Word files but need to find the link and post it for others here.

Journal of Cosmology, 2011, Vol. 14.
JournalofCosmology.com, 2011

The Spread Mind: Seven Steps to Situated Consciousness


Riccardo Manzotti
Institute of Communication and Behavior, "G. Fabris", IULM University, Via Carlo Bo, 8, 20143 Milano, Italy

Abstract

This paper outlines a radical version of phenomenal vehicle externalism dubbed "The Spread Mind" which suggests that both the content and the vehicles of phenomenal experience are identical to a process beginning in the environment and ending in the cortex. In seven conceptual steps, the Spread Mind outlines a counterintuitive yet logically possible hypothesis – namely that the physical underpinnings of consciousness may comprehend a part of the environment and thus may extend in space and time beyond the skin. If this view had any merit, consciousness would be situated in a strong sense.

KEY WORDS: Consciousness, Phenomenal experience, Externalism, Time, Ontology, Causation, Representation


1 Where to Look for the Physical Basis of Phenomenal Experience

The quest for the physical underpinnings of consciousness is still an unresolved one (Koch 2007; Tallis 2010; van Boxtel and de Regy 2010). When one perceives a red patch, what is the necessary and sufficient physical basis of such a phenomenal experience? Indeed what is the physical phenomenon that is one’s phenomenal experience of a red patch? Such questions single out the hard-problem (Chalmers 1996): "From all the low-level facts about physical configurations and causation, we can in principle derive all sorts of high-level facts about macroscopic systems, their organization, and the causation among them. One could determine all the facts about biological function, and about human behavior and the brain mechanisms by which it is caused. But nothing in this vast causal story would lead one who had not experienced it directly to believe that there should be any consciousness. The very idea would be unreasonable; almost mystical, perhaps" (Chalmers 1996, p. 102). More recently, Christof Koch wrote that "How brain processes translate to consciousness is one of the greatest unsolved questions in science. The scientific method […] has utterly failed to satisfactorily explain how subjective experience is created" (Koch 2007).

Such a lack of a physical explanation might be a consequence of one or more ill-chosen assumptions as to the nature of the physical world (Strawson 2006; Skrbina 2009; Strawson 2011). Of course, there is plenty of evidence showing that neural activity is correlated and indeed necessary to consciousness. In the last twenty years, several researchers presented outstanding and remarkable results as to the ways in which conscious experience is related with brain activity (Logothetis 1998; Zeki 2001; Andrews, Schluppeck et al. 2002; Crick and Koch 2003; Changeux 2004; Buzsáki 2007; Hohwy 2009; Laureys and Tononi 2009; van Boxtel and de Regy 2010). Yet what is still missing is a theory outlining a conceptual and causal connection between neural activity and phenomenal experience. Thus it may be worth considering other hypotheses, however counterintuitive they may seem to be. After all, "If no theories seem to be capable of accounting for conscious experience, this probably means that there is something inherent in our assumptions that divorce theories from conscious experience" (Rockwell 2005, p. 49).

An assumption that is sometimes taken for granted is that the physical underpinnings of consciousness have to be internal to the nervous system. Neuroscientists like Christof Koch, Antti Revonsuo, Giulio Tononi, Semir Zeki have explicitly made this assumption. For instance, Antti Revonsuo stated that "sensory input and motor output are not necessary for producing a fully realized phenomenal level of organization. The dreaming brain creates the phenomenal level in an isolated form, and in that sense provides us with insights into the processes that are sufficient for producing the phenomenal level" (Revonsuo 2000, p. 58). Along the same line, Christof Koch believes that "The goal [of the scientific study of consciousness] is to discover the minimal set of neuronal events and mechanisms jointly sufficient for a specific conscious percept" (Koch 2004, p. 16). My opinion is that these authors assume that, although the environment has a key role in shaping neural networks during development, once the brain is developed and running, there is a set of neural events sufficient for specific conscious percepts [Crick and Koch 1990]. Such a hypothesis has become widely accepted not only in neuroscience but also in philosophy of mind. Consider a philosopher like Jaegwon Kim stating that "if you are a physicalist of any stripe, as most of us are, you would likely believe in the local supervenience of qualia – that is, qualia are supervenient on the internal physical/biological states of the subject" (Kim 1995, p. 160). Yet, is such an assumption unavoidable? After all, if you are a physicalist, you ought to look for physical phenomena – any kind of physical phenomena – and not just for an "internal physical/biological state of the subject". What occurs outside the body is physical too. Neural processes are only a subset of a much larger domain of feasible physical processes.

Here, I will consider a rather counterintuitive hypothesis – dubbed the spread mind – that might shed a new light on the issue of the physical underpinnings of phenomenal experience (Manzotti 2006c). This hypothesis is a somewhat more radical version of other related views (Varela, Thompson et al. 1991; O'Regan and Noë 2001; Rockwell 2005; Honderich 2006; Thompson 2007; Noë 2009; Velmans 2009). In a nutshell, I will consider whether it makes any sense to suppose that the physical underpinnings of consciousness are temporally and spatially larger than the subject body. Is consciousness situated in the environment?

2 Beyond the Skin

Instead of focusing on the brain and the nervous system, a few authors considered whether the relation between the mind and the environment is closer than traditionally assumed (Varela, Thompson et al. 1991; Clark and Chalmers 1998; O'Regan and Noë 2001; Rockwell 2005; Honderich 2006; Thompson 2007; Noë 2009; Velmans 2009). This insight gave rise to various views both in cognitive science and in philosophy of mind (Rowlands 2003; Robbins and Aydede 2009a; Hurley 2010): the embodied mind, the embedded mind, the extended mind, and the larger stance dubbed situated cognition. The insight has unfolded by degrees. For instance, for some authors, the environment provides the right place for the development of cognitive functions (Anderson 2003; Gallagher 2005) while, for other scholars, the environment allows off-loading cognitive work (Clark 1989; Wilson 2004). Eventually, David Chalmers and Andy Clark suggested that, somehow, the cognitive mind leaks into the world (Clark and Chalmers 1998; Clark 2008). During the last twenty years, many scholars gave rise to a heated debate whether and to what extent the mind may extend into the environment (Varela, Thompson et al. 1991; Thompson and Varela 2001; Chrisley and Ziemke 2002; Anderson 2003; Pfeifer, Lungarella et al. 2007; Chemero 2009; Robbins and Aydede 2009a; Robbins and Aydede 2009b; Rupert 2009; Shanahan 2010). Although most of these authors limited their proposal to the cognitive mind (Clark and Chalmers 1998; Wilson 2004; Clark 2008) nevertheless they collected their share of criticism (Rupert 2004; Adams and Aizawa 2008; Rupert 2009). Since they focus on cognitive skills rather than on phenomenal experience, these views may be grouped under the label of situated cognition (Anderson 2003; Robbins and Aydede 2009b). Only a handful of authors ventured to suggest that although the mechanisms of the conscious mind remain safely inside the brain, phenomenal content may literally either depend on or be constituted by the outside world (Dretske 1996; Velmans 2000; Lycan 2001; O'Regan and Noë 2001; Honderich 2006; Noë 2009). Yet, even these phenomenal externalists have continued to distinguish between internal representations and the external world. For instance, Fred Dretske stated that "sensory experience gives primary representation to the properties of distant objects and not to the properties of those more proximal events on which it (causally) depends" (Dretske 1981, p. 165). Three decades later he still remarks that "The experiences themselves are in the head […] but nothing in the head needs have the qualities that distinguish these experiences" (Dretske 1996, p. 144-145). As a result they are often labeled as supporters of phenomenal content externalism – namely the view that although the content of experience depends on states of affairs external to the body, nevertheless the vehicles of experience remain inside the body.

Here, I will venture one step further. I will openly consider a perilous hypothesis: are the vehicles of phenomenal experience spread in time and space beyond the boundaries of the skin? Is phenomenal experience itself extended in time and space? Is consciousness situated in a strong sense? So far, many authors stepped back from this counterintuitive view. For instance, David Chalmers, in the foreword to Andy Clark’s book on the extended mind, wrote that "[the extended mind does] not rule out the supervenience of consciousness on the internal" (Chalmers 2008, p. 6). As we have seen, Kim’s dictum explicitly rules out such a possibility. Likewise, many objections have been raised against the hypothesis that the processes underpinning phenomenal experience might be totally or partially external to the body (Kim 1995; Clark and Chalmers 1998; Wilson 2004; Velmans 2007; Adams and Aizawa 2008; Clark 2008).

And yet, what are the strong arguments against strong situated consciousness – namely the hypothesis that the physical processes constituting consciousness are larger than the nervous system? Here I will venture to consider and put under scrutiny such a hypothesis. I will consider whether our phenomenal experience might indeed be extended in time and space beyond the limits of the nervous system. . . .
 
@Soupie, rooting around in my Word files just now I came across the following Discover article where I first read about the default network (sometimes referred to as default consciousness), which you and I briefly discussed a week or two ago. Here is an extract from it followed by the link:

". . .Neuroscientists are investigating this paradox by searching for the signatures of mind wandering in the brain. To that end, Schooler and Smallwood recently ran yet another experiment (pdf)—this one in collaboration with Alan Gordon of Stanford University, University of British Columbia neuroscientist Kalina Christoff, and Christoff’s graduate student Rachelle Smith. The researchers put people in a functional magnetic resonance imaging (fMRI) scanner and gave them the standard press-a-key-unless-you-see-three test. From time to time they asked the subjects if they were paying attention to the task; if they hadn’t been, the researchers asked if they had been aware that their mind had wandered. The subjects reported mind wandering 43 percent of the time they were asked. In nearly half those cases, they said they hadn’t been aware of their inattentiveness until the scientists asked.

Later, the scientists pored over the scans, looking closely at the activity in people’s brains right before they were asked about their state of mind. Overall, people who said they were mind wandering had a pattern of brain activity quite different from those who were focused on the task.

The regions of the brain that become active during mind wandering belong to two important networks. One is known as the executive control system. Located mainly in the front of the brain, these regions exert a top-down influence on our conscious and unconscious thought, directing the brain’s activity toward important goals. The other regions belong to another network called the default network. In 2001 a group led by neuroscientist Marcus Raichle at Washington University discovered that this network was more active when people were simply sitting idly in a brain scanner than when they were asked to perform a particular task. The default network also becomes active during certain kinds of self-referential thinking, such as reflecting on personal experiences or picturing yourself in the future.

The fact that both of these important brain networks become active together suggests that mind wandering is not useless mental static. Instead, Schooler proposes, mind wandering allows us to work through some important thinking. Our brains process information to reach goals, but some of those goals are immediate while others are distant. Somehow we have evolved a way to switch between handling the here and now and contemplating long-term objectives. It may be no coincidence that most of the thoughts that people have during mind wandering have to do with the future.

Even more telling is the discovery that zoning out may be the most fruitful type of mind wandering. In their fMRI study, Schooler and his colleagues found that the default network and executive control systems are even more active during zoning out than they are during the less extreme mind wandering with awareness. When we are no longer even aware that our minds are wandering, we may be able to think most deeply about the big picture.

Because a fair amount of mind wandering happens without our ever noticing, the solutions it lets us reach may come as a surprise. There are many stories in the history of science of great discoveries occurring to people out of the blue. The French mathematician Henri Poincaré once wrote about how he struggled for two weeks with a difficult mathematical proof. He set it aside to take a bus to a geology conference, and the moment he stepped on the bus, the solution came to him. It is possible that mind wandering led him to the solution. John Kounios of Drexel University and his colleagues have done brain scans that capture the moment when people have a sudden insight that lets them solve a word puzzle. Many of the regions that become active during those creative flashes belong to the default network and the executive control system as well. . . ."

The Brain: Stop Paying Attention: Zoning Out Is a Crucial Mental State | DiscoverMagazine.com
 