Consciousness and the Paranormal — Part 9

He says it right here:

Information is not primary in the structure of reality; rather it is dependent on consciousness, just as consciousness itself is a biological phenomenon dependent on brain processes that are themselves dependent on more basic features of physics and chemistry.

OK ... what is your point? What is it you are trying to be right about and who are you arguing with?
 
(1) Correct. By this logic all sciences are observer dependent.

(2) Of course, that consciousness is a biological process dependent on brain processes is a thesis not without problems.

All very true ... but ... ?
 
(1) Correct. By this logic all sciences are observer dependent.

(2) Of course, that consciousness is a biological process dependent on brain processes is a thesis not without problems.

SEARLE

In his review of Penrose’s The Emperor’s New Mind [NYR, March 15], John Maynard Smith expresses some doubt about whether the views he attributes to me are in fact mine. His doubts are justified. I do not hold the view that a computer “would not be conscious, because it was made of transistors and not of neurons.” That is not my view at all. My position, rather, is this: I take it as a fact that certain quite specific, though still largely unknown, neurobiological processes in human and some animal brains cause consciousness. But from the fact that brains cause consciousness we can derive trivially that any other system capable of causing consciousness would have to have the relevant causal powers at least equivalent to brains. If brains do it causally, then any other system that does it causally will have to share with brains the power to do it causally. I hope that sounds tautological, because it is. Some other system might use a different chemistry, a different medium altogether; but any such medium has to be able to do what brains do. (Compare: airplanes don’t have to be made of feathers in order to fly, but they do have to share with birds the causal capacity to overcome the force of gravity in the earth’s atmosphere.)

Searle says the brain is a machine; it does what it does with physical processes. See the video for the exact language.

As I understand Searle, computers as we know them are syntactical machines, not semantic ones; they don't have the causal powers to produce consciousness.

Does he think a man-made artifact could someday produce consciousness? Yes. But he doesn't think we have much of an idea, if any at all, about how to go about making one.

It's also not clear to me how biological he thinks that process might be ... my own thought is that something more like artificial life could succeed where AI fails - but what would that accomplishment be exactly?

This blurs the lines, as @Michael Allen notes, between organism and machine ... on the one hand you could claim that it's engineered, but on the other, as a product of our biology, you could say it's more properly a mutation - another form of giving birth ... a descendant ... you can bring both ways of thinking about it to bear ... unless there does prove to be something specifically biological involved. I'm not sure what that could be ... right now it seems like you could switch back and forth with either descriptor ... a machine sophisticated enough to do everything we do and feel can be an organism ... but we don't have anything like that.

On the other hand, if we engineer something that basically reproduces how life got started - artificial DNA or whatever the appropriate beginning is, that point at which life can become autonomous and it then evolves on its own - then that seems to me to be biological.
 
OK ... what is your point? What is it you are trying to be right about and who are you arguing with?
My points were:

(1) The analogy between livers and bile, on the one hand, and brains and minds, on the other, is not a good one.

(2) Saying that brains are computers is just a level of description not unlike saying that certain quantum processes are also chemical reactions, cells, or organisms.

(3) If we say that brains can cause consciousness, then we can't also say that other physical systems (like machines) can potentially cause consciousness. (And I see that Searle does clarify his position on this point.)
 
My points were:

(1) The analogy between livers and bile, on the one hand, and brains and minds, on the other, is not a good one.

(2) Saying that brains are computers is just a level of description not unlike saying that certain quantum processes are also chemical reactions, cells, or organisms.

(3) If we say that brains can cause consciousness, then we can't also say that other physical systems (like machines) can potentially cause consciousness. (And I see that Searle does clarify his position on this point.)

Watch the video, read the papers if you are interested.
 
http://kryten.mm.rpi.edu/SELPAP/REFSEARLENYRBNBLF/SB_refutes_Searle_on_B_and_F_0721151500.pdf

I think this boils down to the question of whether access consciousness is sufficient for there to be an AI threat. I don't know much about this area, so I'm just wading in ... in the story he presents, that would seem to be a limited AI threat: you pull the plug, press a remote control, or issue a voice command. Or you wait out the batteries or shut the grid down.

An AI threat, for a rough start, might be an autonomous system that can secure resources, maintain and repair itself, reproduce if needed, carry out a malicious agenda, and get better at all these things, with just access consciousness?
 
Re the multiple realizability of minds

If we conceive of minds as being tightly coupled to the body, or even identical to the body, then it follows that minds are just as diverse as human bodies. A fascinating concept, and one that we may intuitively accept but not intellectually digest.

Think for a moment how different other humans' minds are, or might be, if we allow that the mind is identical to the body, as some suggest.

Contrary to intuition, if we were to switch minds with another human—supposing we could retain a memory of our previous mind—I think the differences would be profound.

The new feelings, perceptions, affects, concepts, etc. would be profoundly different from our old ones.

Yes, there would be core similarities; just as most people have two arms, two ears, two legs, etc. But if our minds—lived experiences—are as diverse as our bodies, then the intuition that our minds are pretty much like the minds of other humans is mistaken.

So if that were so, it would be an argument against MRT?
 
http://kryten.mm.rpi.edu/SELPAP/REFSEARLENYRBNBLF/SB_refutes_Searle_on_B_and_F_0721151500.pdf

I think this boils down to the question of whether access consciousness is sufficient for there to be an AI threat. I don't know much about this area, so I'm just wading in ... in the story he presents, that would seem to be a limited AI threat: you pull the plug, press a remote control, or issue a voice command. Or you wait out the batteries or shut the grid down.

An AI threat, for a rough start, might be an autonomous system that can secure resources, maintain and repair itself, reproduce if needed, carry out a malicious agenda, and get better at all these things, with just access consciousness?

This is the Abstract of Bringsjord's paper

Akratic robots and the computational logic thereof

Alas, there are akratic persons. We know this from the human case, and our knowledge is nothing new, since for instance Plato analyzed rather long ago a phenomenon all human persons, at one point or another, experience: (1) Jones knows that he ought not to - say - drink to the point of passing out, (2) earnestly desires that he not imbibe to this point, but (3) nonetheless (in the pleasant, seductive company of his fun and hard-drinking buddies) slips into a series of decisions to have highball upon highball, until collapse. [1] Now: could a robot suffer from akrasia? Thankfully, no: only persons can be plagued by this disease (since only persons can have full-blown P-consciousness [2], and robots can't be persons (Bringsjord 1992)). But could a robot be afflicted by a purely - to follow Pollock (1995) - “intellectual” version of akrasia? Yes, and for robots collaborating with American human soldiers, even this version, in warfare, isn't a savory prospect: A robot that knows it ought not to torture or execute enemy prisoners in order to exact revenge, desires to refrain from firing upon them, but nonetheless slips into a decision to ruthlessly do so - well, this is probably not the kind of robot the U.S. military is keen on deploying. Unfortunately, for reasons explained below, unless the engineering we recommend is supported and deployed, this might well be the kind of robot that our future holds.
 
So that is an argument against MRT?
Maybe. But maybe not.

Isn't it the case that humans are able to intersubjectively agree? Don't humans agree that there is an "objective" common reality?

Wouldn't the ability to achieve this level of sameness imply that to a certain extent human minds (concepts) are MR?
 
Maybe. But maybe not.

Isn't it the case that humans are able to intersubjectively agree? Don't humans agree that there is an "objective" common reality?

Wouldn't the ability to achieve this level of sameness imply that to a certain extent human minds (concepts) are MR?

You said MRT but then you said:

If we conceive of minds as being tightly coupled to the body, or even identical to the body, then it follows that minds are just as diverse as human bodies. A fascinating concept, and one that we may intuitively accept but not intellectually digest.

So that seems like it would be identity theory - the mind is identical to the body/brain?

The way I have thought of MRT is that it claims something pretty specific: that the same mental property, state, or event can be implemented by different physical properties, states or events.

So that pain is pain for a man, a bat, a computer, or a Martian man-bat computer made of silicon.
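(An aside of my own, not from anything in the thread: a programmer's way to make that claim concrete is to read MRT as "one mental-state interface, many physical implementations." A minimal sketch in Python; the class names and behaviors are purely hypothetical illustrations.)

```python
# An analogy only: MRT read as one mental-state "interface" with many
# physical realizers. All names here are hypothetical.
from abc import ABC, abstractmethod

class PainState(ABC):
    """The functional role of pain, specified independently of substrate."""
    @abstractmethod
    def withdraw(self) -> str: ...

class HumanBrain(PainState):
    def withdraw(self) -> str:
        return "C-fibres fire; the hand pulls back"

class MartianHydraulics(PainState):
    def withdraw(self) -> str:
        return "fluid pressure spikes; the tentacle recoils"

class SiliconController(PainState):
    def withdraw(self) -> str:
        return "damage register set; the actuator retracts"

# One mental-state type, three physical realizers:
for realizer in (HumanBrain(), MartianHydraulics(), SiliconController()):
    print(realizer.withdraw())
```

On this reading, the identity theorist insists the "interface" is nothing over and above one particular implementation, which is exactly where the two views part ways.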
 
You said MRT but then you said:

If we conceive of minds as being tightly coupled to the body, or even identical to the body, then it follows that minds are just as diverse as human bodies. A fascinating concept, and one that we may intuitively accept but not intellectually digest.

So that seems like it would be identity theory - the mind is identical to the body/brain?

The way I have thought of MRT is that it claims something pretty specific: that the same mental property, state, or event can be implemented by different physical properties, states or events.

So that pain is pain for a man, a bat, a computer, or a Martian man-bat computer made of silicon.
Yes. And what those who reject MRT say is that mental states are not MR.

And what I'm saying is that if two men can have the same thought this to me is a hint that mental states are MR.

Yes, the MR denier would say "they're both men. Thus they share the same physical state, ergo the same mental state. MRT is false."

And that's probably right.

However, something about the way in which diverse humans can learn and hold similar concepts suggests to me that at least some aspects of mind—namely concepts—might be MR.

So, a machine might never experience conscious pain in the way a man does, but a machine might experience the conscious thought, say, "up" in the same way a man experiences the conscious thought "up."

So, some mental states may be MR but not all.
 
SIGH ... you need to read the material or watch the video ... he is talking about computation ... not "computational neuroscience" ... watch the bit where he throws the computer at his audience ...

Just catching up since last night (the night before). I have wanted all day to throw my computer off the deck -- connected online to one computer tech after another, each time the session being disconnected. I don't know how I'm actually online at this point and it might not last long. I'm probably going to have to start using a refurbished Compaq sitting in a box upstairs, after having all my data migrated from this computer to the Compaq, so if I disappear for a while you'll know why. So y'all carry on and be happy. Glad you like the Floridi paper, Steve. I don't know how I connected to it the night before last. Thanks for the further links to Floridi. :)
 
@smcder @Michael Allen

If subjectivity (consciousness, mind, qualitative feel, phenomenal consciousness) is a property that emerges from a non-subjective background/substrate, why is this emergent property unable to be perceived/measured?

That subjectivity/consciousness is an emergent property like ripples in water, swarms of insects, and biological life seems intuitive.

However, subjectivity would seem to stand apart from all other emergent properties due to its inability to be perceived and/or quantitatively measured.

This leads some to believe that subjectivity/consciousness is therefore not an emergent property.

Why else might subjectivity/consciousness be "invisible" to perception and/or measurement?

Does it have something to do with recursion? An emergent subjectivity can't perceive itself?

But shouldn't it be able to perceive other subjectivities?
 
A Rabbit As King Of The Ghosts

The difficulty to think at the end of day,
When the shapeless shadow covers the sun
And nothing is left except light on your fur—

There was the cat slopping its milk all day,
Fat cat, red tongue, green mind, white milk
And August the most peaceful month.

To be, in the grass, in the peacefullest time,
Without that monument of cat,
The cat forgotten on the moon;

And to feel that the light is a rabbit-light
In which everything is meant for you
And nothing need be explained;

Then there is nothing to think of. It comes of itself;
And east rushes west and west rushes down,
No matter. The grass is full

And full of yourself. The trees around are for you,
The whole of the wideness of night is for you,
A self that touches all edges,

You become a self that fills the four corners of night.
The red cat hides away in the fur-light
And there you are humped high, humped up,

You are humped higher and higher, black as stone—
You sit with your head like a carving in space
And the little green cat is a bug in the grass.

-Wallace Stevens
 
It's becoming one of my favorites @Constance ...

"Fat cat, red tongue, green mind, white milk
And August the most peaceful month."

Any more questions @Soupie?
 
@smcder @Michael Allen

If subjectivity (consciousness, mind, qualitative feel, phenomenal consciousness) is a property that emerges from a non-subjective background/substrate, why is this emergent property unable to be perceived/measured?

That subjectivity/consciousness is an emergent property like ripples in water, swarms of insects, and biological life seems intuitive.

However, subjectivity would seem to stand apart from all other emergent properties due to its inability to be perceived and/or quantitatively measured.

This leads some to believe that subjectivity/consciousness is therefore not an emergent property.

Why else might subjectivity/consciousness be "invisible" to perception and/or measurement?

Does it have something to do with recursion? An emergent subjectivity can't perceive itself?

But shouldn't it be able to perceive other subjectivities?

One approach to responding would be to clarify terms, check for consistency and question assumptions. The first way one could respond would be to question the "if" - is it possible that consciousness is not an emergent property of a non-subjective background? (cleaning up the "/" along the way!) And then to clarify what is meant by perceived/measured. My first thought is that it is perceived - it is directly experienced! And then I would ask what do you mean by measured? What do you want to measure and what would you do with the measurements?

Next I would ask if "emerge" means the same thing in all of your examples. I'm not familiar with the ripples in water example; what is it that emerges there? Swarms of insects - that makes me think of "BOIDS":

Boids - Wikipedia

... maybe it's just my familiarity with the idea, but that doesn't seem so surprising a phenomenon, and it also depends on our tendency to form patterns - if birds formed exact geometric figures it would be one thing, but that following simple rules results in a somewhat triangular pattern isn't all that surprising - in fact it took some intelligence to notice the phenomenon and make something of it, and then it sort of fades back into an "oh yeah, we all knew that." Insect colonies seem a step up, but recent research shows that individual ants and bees are more intelligent than once thought; still, ants as a colony achieve something more ... I think we can put that alongside the flocking behavior in general, even if we can't see it the way we can see a flock forming (a minimal boids sketch follows below).

Biological life ... that's a good example and one with many comparisons to consciousness. I don't remember if Chalmers addresses that in his paper on emergence, strong and weak.
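(Not from the thread: to make the "simple rules, emergent order" point concrete, here is a minimal boids sketch in Python. The three rules are Reynolds' separation, alignment, and cohesion; the radius and rule weights are illustrative assumptions, not canonical values.)

```python
# Minimal boids: flock-like order emerges from three purely local rules.
# The weights and radius below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 30
pos = rng.uniform(0, 100, size=(N, 2))   # positions in a 100 x 100 world
vel = rng.uniform(-1, 1, size=(N, 2))    # velocities

def step(pos, vel, radius=15.0, max_speed=2.0):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]
        dists = np.linalg.norm(offsets, axis=1)
        near = (dists > 0) & (dists < radius)          # local neighbours only
        if not near.any():
            continue
        cohesion = pos[near].mean(axis=0) - pos[i]     # steer toward neighbours' centre
        alignment = vel[near].mean(axis=0) - vel[i]    # match neighbours' heading
        separation = -(offsets[near] / dists[near, None] ** 2).sum(axis=0)  # avoid crowding
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.5 * separation
        speed = np.linalg.norm(new_vel[i])
        if speed > max_speed:                          # clamp speed
            new_vel[i] *= max_speed / speed
    return pos + new_vel, new_vel

for _ in range(100):
    pos, vel = step(pos, vel)   # no global plan anywhere, yet a flock forms
```

No agent computes "the flock"; the pattern exists only at the level of the ensemble, which is the weak-emergence structure the examples above share.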

Chalmers would agree that consciousness is a singular example of strong emergence. At this point something else comes to mind, which is whether consciousness emerges in the same way that flocks and ant colonies do - and, from that question, whether "emerges" is the right word at all. It's one thing for Pierre to emerge from the fog with gun in hand while a flock emerges overhead and a colony emerges under his feet; it's another for mind to emerge from whirling atoms in the void.

A possible response is that it is not "emergence" but something we don't yet, or may never, understand - something we don't have the cognitive capacity, the requisite Grok-Q, to understand. Another response is to ask whether one might come to an intuitive understanding ... whether we might be able to look at physical processes and have something like the experience of seeing a flock in formation ... a Eureka moment that might not be conveyable, but might be very satisfying: "ah, now I see how consciousness emerges!" Maybe this is a claim that has been made in Eastern philosophy? I've not come across anything like that.

And finally, you might ask, what would an answer to the above questions even look like? How would they compare to answers we have to other questions? That brings up the idea of aporia and the general recognition that there are lots of things out there that we can't really imagine what an answer would be like. What could we do with it, what would it tell us, how would the world change for us? I can at least conceive of a day, meaning it's not unimaginable, that we have conscious technologies, or reasonable certainty that our technology, something we have made, is conscious without answers to how it is conscious.

One kind of answer that Thomas Nagel has offered is that it would be the ability to look at a brain scan and see the experience of chocolate, literally. In other words, there might one day be something it is like to see something it is like-ness.
 
This is the Abstract of Bringsjord's paper

Akratic robots and the computational logic thereof

Alas, there are akratic persons. We know this from the human case, and our knowledge is nothing new, since for instance Plato analyzed rather long ago a phenomenon all human persons, at one point or another, experience: (1) Jones knows that he ought not to - say - drink to the point of passing out, (2) earnestly desires that he not imbibe to this point, but (3) nonetheless (in the pleasant, seductive company of his fun and hard-drinking buddies) slips into a series of decisions to have highball upon highball, until collapse. [1] Now: could a robot suffer from akrasia? Thankfully, no: only persons can be plagued by this disease (since only persons can have full-blown P-consciousness [2], and robots can't be persons (Bringsjord 1992)). But could a robot be afflicted by a purely - to follow Pollock (1995) - “intellectual” version of akrasia? Yes, and for robots collaborating with American human soldiers, even this version, in warfare, isn't a savory prospect: A robot that knows it ought not to torture or execute enemy prisoners in order to exact revenge, desires to refrain from firing upon them, but nonetheless slips into a decision to ruthlessly do so - well, this is probably not the kind of robot the U.S. military is keen on deploying. Unfortunately, for reasons explained below, unless the engineering we recommend is supported and deployed, this might well be the kind of robot that our future holds.

Here's the full paper:

http://kryten.mm.rpi.edu/SB_etal_akratic_robots_0301141621NY.pdf

The gist of it is that the authors claim we now have robots that can do what they know they are not supposed to do. Think about that in terms of military drones.

In this context, our plan for the sequel is as follows: We affirm an Augustinian account of akrasia reflective of Thero's (2006) analysis; represent the account in an expressive computational logic (DCEC*CL) tailor-made for scenarios steeped at once in knowledge, belief, and ethics; and demonstrate this representation in a real robot faced with “temptation” to trample the Thomistic just-war principles that underlie ethically regulated warfare. We then delineate and recommend the kind of engineering that will prevent akratic robots from arriving on the scene. Finally, in light of the fact that the type of robot with which we are concerned will ultimately need to interact with humans naturally in natural language, we point out that DCEC*CL will need to be augmented with a formalization of human emotion, and with an integration of that formalization with that of morality.

NOTE

[2] We here presuppose the now-standard distinction between what Block (1995) calls access consciousness (A-consciousness) vs. what he calls phenomenal consciousness (P-consciousness). Along with many others, we routinely build robots that have the former form of consciousness, which consists in their being able to behave intelligently on the basis of information-processing; such robots are indeed the type that will be presented below. But the latter form of consciousness is what-it’s-like consciousness, rather a different animal; indeed, unattainable via computation, for reasons Leibniz sought to explain (we refer here to Leibniz’s “Mill”).

Two things: (1) access consciousness is, I believe, the kind the Nao robots in the video above have ... and (2) the authors assume that P-consciousness is "unattainable via computation," using Leibniz's venerable "Mill" example or analogy.
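(A toy illustration of my own, far simpler than the paper's DCEC* formalism: the structure of "intellectual" akrasia is an agent that explicitly represents a prohibition and a desire to comply, while its action selection consults only a separate reward signal. All names and numbers below are hypothetical.)

```python
# Toy model (not the paper's DCEC* logic) of an "akratic" robot:
# it knows the prohibition and desires compliance, but the planner
# that actually picks actions never consults either.
forbidden = {"fire_on_prisoner"}        # the agent KNOWS this is prohibited
desires_compliance = True               # and DESIRES to comply

# Hypothetical reward table the planner optimizes:
reward = {"fire_on_prisoner": 10.0, "stand_down": 1.0}

def choose_action(reward):
    # Neither `forbidden` nor `desires_compliance` appears here.
    return max(reward, key=reward.get)

action = choose_action(reward)
is_akratic = action in forbidden and desires_compliance
print(action, "| akratic:", is_akratic)  # fire_on_prisoner | akratic: True
```

The engineering the authors recommend amounts, roughly, to making the deontic layer a hard constraint on the planner rather than a bystander, e.g. filtering the reward table down to permitted actions before taking the max.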
 
And then to clarify what is meant by perceived/measured. My first thought is that it is perceived - it is directly experienced! And then I would ask what do you mean by measured? What do you want to measure and what would you do with the measurements?
Well, what I meant by perceived/measured was to distinguish between so-called objective phenomena and so-called subjective phenomena, also known as primary and secondary qualities:

"Primary qualities are thought to be properties of objects that are independent of any observer, such as solidity, extension, motion, number and figure. These characteristics convey facts. They exist in the thing itself, can be determined with certainty, and do not rely on subjective judgments. For example, if an object is spherical, no one can reasonably argue that it is triangular.

Secondary qualities are thought to be properties that produce sensations in observers, such as color, taste, smell, and sound. They can be described as the effect things have on certain people. Knowledge that comes from secondary qualities does not provide objective facts about things."

But of course the problem with this distinction is that, technically, it isn't real, as you point out. All phenomena are subjective. We can only infer--or suppose--that phenomena exist external to our direct experience. Thus, at most, we can say that phenomena are subjective and inter-subjective. We go too far when we assert that some phenomena are objective.

So, circling back around, what I meant by phenomena that can be perceived/measured would be phenomena corresponding to primary qualities. So for example, we "determine with certainty" the wavelength of light but we can't "determine with certainty" the color green.

Another way to capture this distinction between primary and secondary qualities is Leibniz's Mill:

"It must be confessed, moreover, that perception, and that which depends on it, are inexplicable by mechanical causes, that is, by figures and motions, And, supposing that there were a mechanism so constructed as to think, feel and have perception, we might enter it as into a mill. And this granted, we should only find on visiting it, pieces which push one against another, but never anything by which to explain a perception. This must be sought, therefore, in the simple substance, and not in the composite or in the machine."

So, despite this distinction between primary qualities and secondary qualities, many still insist that secondary qualities emerge from primary qualities. Specifically, a la Dennett's new book and the recycling of his ideas, people assert that the secondary qualities (i.e., phenomenal consciousness) emerge from neural processes.

But when we enter into the brain as into a mill, all we find are "figures and motions."

So my question is: If we assert that phenomenal qualities emerge from neural "figures and motions," what explanation do Dennettians give for the fact that we can't "find" these emergent qualities when we "enter" the brain? They can assert all day long that phenomenal qualities emerge from the brain, but then they need to explain why phenomenal qualities are unlike all other known emergent phenomena.
 