Consciousness and the Paranormal — Part 8

Some of the companies and researchers involved in the quest for artificial life believe that the 10-year time frame is possible. Not only that -- they say that the development of wet artificial life (as it's often called) will radically affect our views of biological life and our place in the universe.
Are we 10 years away from artificial life?


'Artificial life' breakthrough announced by scientists - BBC News

Artificial life created by Craig Venter - but could it wipe out humanity? | Daily Mail Online
 
quoting a post by @Soupie:

Victor of Aveyron - Wikipedia

"Shortly after Victor was found, a local abbot and biology professor, Pierre Joseph Bonnaterre, examined him. He removed the boy's clothing and led him outside into the snow, where, far from being upset, Victor began to frolic about in the nude, showing Bonnaterre that he was clearly accustomed to exposure and cold."



That's an "outlier" example ... plus "frolic" means he was physically active.

I don't know what you mean by an "outlier example". I don't think we can reduce what Victor seemed to feel -- enjoyment, even celebration, of sensual contact with nature -- to 'physical' responses in the reductive scientific sense of our time, say as muscular reactions or discharges of nervous energy (if that is what you have in mind). Is it?

Mammals, birds, and dolphins play with their peers out of natural exuberance and pleasure when they are in situations enabling them to do so (no stress from unmet needs or predators threatening them in the present moment). Our dog Dudley, accustomed to living in the south, was ecstatic the first time he was let out into the snow on a trip to Wisconsin in winter, and similarly thrilled the first time we took him to the Gulf Coast, when he immediately took off running along the shoreline until he was out of sight, eventually coming back to where we were sitting on the sand.
 
@Soupie "toggling" consciousness on and off ... does that really change the situation? If consciousness constrains intelligence, why would consciousness ever get toggled back on? And knowing that, why would AI ever toggle it off? It comes down to: would you rather be quick or dead? ;-)

All kinds of sci-fi plots there ...

remember I posted ... maybe it was Chalmers who asked the audience or did a survey about this ... give up consciousness for some benefit? I think 5% or so said yes ...

The androids in the brilliant film Blade Runner certainly are not enjoying their consciousness, since they are imminently facing death/extinction to make way for a newer and 'better' generation of androids. Their leader, when asked by Deckard "What do you want?," replies "More life." He apparently means conscious life, presupposed in the film to have been accomplished with androids aware of their imminent extinction. The other major concern in the film is the condition of androids, such as Deckard's girlfriend, who realize that they have no lived past and thus feel that they have no identity similar to humans' sense of identity.
 
I don't know what you mean by an "outlier example". I don't think we can reduce what Victor seemed to feel -- enjoyment, even celebration, of sensual contact with nature -- to 'physical' responses in the reductive scientific sense of our time, say as muscular reactions or discharges of nervous energy (if that is what you have in mind). Is it?

Mammals, birds, and dolphins play with their peers out of natural exuberance and pleasure when they are in situations enabling them to do so (no stress from unmet needs or predators threatening them in the present moment). Our dog Dudley, accustomed to living in the south, was ecstatic the first time he was let out into the snow on a trip to Wisconsin in winter, and similarly thrilled the first time we took him to the Gulf Coast, when he immediately took off running along the shoreline until he was out of sight, eventually coming back to where we were sitting on the sand.
@Soupie used Victor as an example of pain tolerance ... I felt that we couldn't generalize as Victor's circumstances were unusual.

Sent from my LGLS991 using Tapatalk
 
Ultimately what I'm considering is that conscious experience may be a fundamental feature of nature, and not something special (or mystical) that only humans possess.

That does not mean that all systems have minds like humans, but that experience is everywhere. However, experience doesn't necessarily entail positive and negative valence, that is, good and bad feeling.

So an AI may be conscious, but its consciousness might not feel "bad" or "good." It's conceivable that systems could have conscious perceptions but lack any affective/emotional systems.

However, emotional systems are a crucial element of human decision processes, so AIs that operate without emotions will be very different from humans.

Miniature brain and skull found inside 16-year-old girl’s ovary

"About one-fifth of ovarian tumours contain foreign tissue, including hair, teeth, cartilage, fat and muscle. These tumours, which are normally benign, are named teratomas after the Greek word “teras”, meaning monster.

Although the cause of ovarian teratomas is unknown, one theory is that they arise when immature egg cells turn rogue, producing different body parts.

Brain cells are often found in ovarian teratomas, but it is extremely unusual for them to organise themselves into proper brain-like structures, says Masayuki Shintaku at the Shiga Medical Centre for Adults in Japan, who studied the tumour.

Angelique Riepsamen at the University of New South Wales in Australia, agrees. “Neural elements similar to that of the central nervous system are frequently reported in ovarian teratomas, but structures resembling the adult brain are rare.”

The miniature brain even developed in such a way that electric impulses could transmit between neurons, just like in a normal brain, says Shintaku."

Fascinating. To me it demonstrates the strength and persistence of the life force in nature.

Paranormal literature (going back several centuries) and cinema in our time are filled with narratives expressing insights regarding the folly and risks of human attempts to control or overcome nature. That includes contemporary attempts to produce 'artificial life', in my opinion. Genetic engineering to date has demonstrated the inadequacy of present-day scientific understanding of the intricate interconnections forged by nature, with which we meddle at our risk.

As Heidegger said, "Let being be."


ETA: To understand what he meant by that, the following paper is extremely helpful:

Peter Critchley,
MARTIN HEIDEGGER: ONTOLOGY AND ECOLOGY

Martin Heidegger : Ontology and Ecology

 
The androids in the brilliant film Blade Runner certainly are not enjoying their consciousness, since they are imminently facing death/extinction to make way for a newer and 'better' generation of androids. Their leader, when asked by Deckard "What do you want?," replies "More life." He apparently means conscious life, presupposed in the film to have been accomplished with androids aware of their imminent extinction. The other major concern in the film is the condition of androids, such as Deckard's girlfriend, who realize that they have no lived past and thus feel that they have no identity similar to humans' sense of identity.
The film is brilliant. I understand there is a new version coming: Blade Runner 2049. I also recommend the PK Dick story "Do Androids Dream of Electric Sheep?", which takes a different line from the film.

 
Here is Lowe's paper "There Are No Easy Problems of Consciousness":

Abstract
This paper challenges David Chalmers’ proposed division of the problems of consciousness into the ‘easy’ ones and the ‘hard’ one, the former allegedly being susceptible to explanation in terms of computational or neural mechanisms and the latter supposedly turning on the fact that experiential ‘qualia’ resist any sort of functional definition. Such a division, it is argued, rests upon a misrepresentation of the nature of human cognition and experience and their intimate interrelationship, thereby neglecting a vitally important insight of Kant. From a Kantian perspective, our capacity for conceptual thought is so inextricably bound up with our capacity for phenomenal consciousness that it is an illusion to imagine that there are any ‘easy’ problems of consciousness, resolvable within the computational or neural paradigms.

Opening paragraph:

"David Chalmers is to be commended for challenging the complacent assumptions of reductive physicalism regarding the tractability of the problems of consciousness, but he concedes too much to such physicalists in allowing that some, at least, of these problems — the ‘easy’ ones — will fall prey to their favoured methods. I do not consider that there are any ‘easy’ problems of consciousness, and consider that Chalmers’ division of the problems into ‘easy’ ones and the ‘hard’ one betrays an inadequate conception of conscious thought and experience — a conception which plays into the hands of physicalists by suggesting that the only problem with functionalism is its apparent inability to say anything about ‘qualia’."

{Heartily seconded.}

http://anti-matters.org/articles/46/public/46-41-1-PB.pdf
 
The film is brilliant. I understand there is a new version coming: Blade Runner 2049. I also recommend the PK Dick story "Do Androids Dream of Electric Sheep?", which takes a different line from the film.


And of course Deckard was himself a replicant, but clearly thought he was conscious.

Ridley Scott Answers Whether Deckard Is A Replicant In Blade Runner

Like Rachael, he had implanted memories and couldn't tell he wasn't human and conscious, nor could anyone else.
Which brings me back to the point I made earlier: in a complex enough simulation, whether or not the entity is conscious as we define it becomes a moot point. It just doesn't matter on a practical level.

The book was indeed written differently.

Philip K. Dick, author of Do Androids Dream of Electric Sheep?, the book the film is based on, wrote the original role of Deckard as a human: "The purpose of this story as I saw it was that in his job of hunting and killing these replicants, Deckard becomes progressively dehumanized. At the same time, the replicants are being perceived as becoming more human. Finally, Deckard must question what he is doing, and really what is the essential difference between him and them? And, to take it one step further, who is he if there is no real difference?"
 
Here is Lowe's paper "There Are No Easy Problems of Consciousness":

Abstract
This paper challenges David Chalmers’ proposed division of the problems of consciousness into the ‘easy’ ones and the ‘hard’ one, the former allegedly being susceptible to explanation in terms of computational or neural mechanisms and the latter supposedly turning on the fact that experiential ‘qualia’ resist any sort of functional definition. Such a division, it is argued, rests upon a misrepresentation of the nature of human cognition and experience and their intimate interrelationship, thereby neglecting a vitally important insight of Kant. From a Kantian perspective, our capacity for conceptual thought is so inextricably bound up with our capacity for phenomenal consciousness that it is an illusion to imagine that there are any ‘easy’ problems of consciousness, resolvable within the computational or neural paradigms.

Opening paragraph:

"David Chalmers is to be commended for challenging the complacent assumptions of reductive physicalism regarding the tractability of the problems of consciousness, but he concedes too much to such physicalists in allowing that some, at least, of these problems — the ‘easy’ ones — will fall prey to their favoured methods. I do not consider that there are any ‘easy’ problems of consciousness, and consider that Chalmers’ division of the problems into ‘easy’ ones and the ‘hard’ one betrays an inadequate conception of conscious thought and experience — a conception which plays into the hands of physicalists by suggesting that the only problem with functionalism is its apparent inability to say anything about ‘qualia’."

{Heartily seconded.}

http://anti-matters.org/articles/46/public/46-41-1-PB.pdf
I remember this paper ... Will re-read it now.

 
Which brings me back to the point i made earlier, in a complex enough simulation whether or not the entity is conscious as we define it becomes a moot point. It just doesn't matter on a practical level.
Practical level = problem solving ability?

Sure, that's the assumption; non-conscious systems can theoretically behave in ways that are just as intelligent as conscious systems (humans). However, until it happens, we won't know whether they can.

Also, whether a system is conscious or not may not matter on a practical level, but it will certainly matter on an ethical level.

Let's say a group of scientists creates an AI that possesses the intelligence of a 3-year-old, which is quite high. However, let's suppose this AI also experiences conscious pain, emotions, and memories.

While being experimented on and augmented, this AI system experiences great pain and a plethora of emotions equivalent in valence and "feel" to human emotions such as fear, confusion, loneliness, anger, and despair. And its memories are erased, augmented, and otherwise manipulated.

Of course, we would have no way of observing and measuring these "internal" states of the system. But would their existence truly be a moot point?
 
You mean the former might end up producing phenomenal consciousness in AI by serendipity? I suppose anything is possible, and we can only wait to see if it happens. The latter are certainly unlikely to "create p consciousness" since the 'physical processes' they attempt to use cannot provide AI with embodied being first felt and known in prereflective consciousness shared by animals and humans. In humans, and perhaps some other animals, the capacities of reflective consciousness develop from the ground of prereflective, felt, awareness that arises naturally in living beings/species and develops throughout biological evolution.
No, I don't think any scientists will create systems that produce phenomenal consciousness, as it seems to me that phenomenal consciousness is a fundamental aspect of reality.

Therefore, scientists who are working to discover physical mechanisms that can produce phenomenal consciousness are working in vain.

Thus, scientists who either take p consciousness for granted and/or choose to ignore it and instead focus their efforts on the "easy" problems will bear more fruit.

While I agree with you that Chalmers' "easy" problems are not easy at all, I do think his insight that the "easy" problems and the "hard" problem are categorically different is correct.

Whereas phenomenal consciousness doesn't seem to supervene on the arrangement of "matter," the "easy" problems do seem to.

But our understanding of the psychophysical nexus is in its infancy. That there is a nexus seems self-evident, but understanding eludes us. I think Naive Realism is a big stumbling block in this regard.
 
Practical level = problem solving ability?

I was thinking a little farther afield: how do you know, at a practical level, that I am conscious?

Naturally you assume I am a human, and as such I must automatically possess consciousness.

Once a synthetic entity is indistinguishable from a biological one, once you can no longer tell the difference, then at a practical level (i.e. dealing with it on a day-to-day basis) the issue of its "consciousness" will be irrelevant.

Imagine sitting at a table with two other people. You are wearing underpants; one of the others is wearing underpants, the other is not.
You can't tell which one is and which isn't from the conversation, and it's not a relevant factor anyway.

You are hiring people for a job. Two applicants apply: one is wearing underpants, the other isn't. You can't tell which.
Two barmen are serving at your local pub: one is wearing undies, the other isn't. You can't tell which.

At a practical level, how important are the underpants?
 

And some journalists who interviewed "robot Gemma" were, at least for a few seconds, convinced they were speaking to a real person.

Watching Channel 4's How to Build a Human has viewers freaked out about AI robots - AOL Entertainment UK




 
I was thinking a little farther afield: how do you know, at a practical level, that I am conscious?

Naturally you assume I am a human, and as such I must automatically possess consciousness.

Once a synthetic entity is indistinguishable from a biological one, once you can no longer tell the difference, then at a practical level (i.e. dealing with it on a day-to-day basis) the issue of its "consciousness" will be irrelevant.

Imagine sitting at a table with two other people. You are wearing underpants; one of the others is wearing underpants, the other is not.
You can't tell which one is and which isn't from the conversation, and it's not a relevant factor anyway.

You are hiring people for a job. Two applicants apply: one is wearing underpants, the other isn't. You can't tell which.
Two barmen are serving at your local pub: one is wearing undies, the other isn't. You can't tell which.

At a practical level, how important are the underpants?
Correct. But do you get the points I'm making?

(1) Right now it is merely a hypothesis that non-conscious systems can ever behave as intelligently as conscious systems.

(2) While it may not matter on a practical level whether a system is conscious, it will always matter on an ethical level.
 
This is an extremely interesting topic in its own right ... I wonder if we could get more participation on a new thread? A lot of folks probably tune out the C&P thread at this point.

 
No, I don't think any scientists will create systems that produce phenomenal consciousness, as it seems to me that phenomenal consciousness is a fundamental aspect of reality.

Therefore, scientists who are working to discover physical mechanisms that can produce phenomenal consciousness are working in vain.

Thus, scientists who either take p consciousness for granted and/or choose to ignore it and instead focus their efforts on the "easy" problems will bear more fruit.

You mean bear more fruit in enabling us to understand how a deep structure of interaction, and thus awareness, in being and nature becomes, in species like ours, the consciousness we are capable of and indeed rely on for our investigation of what-is? If we look to AI to account to us for the nature of our understanding of 'what-is', the question becomes: how can AI do so if it does not experience what-is in the way we do? [ETA: note that what we think we understand about what-is conditions what we do and how we attempt to justify what we do. Our species has in all eras of its presence on earth based its behaviors, actions, and ethics on whatever partial understanding it has had of the nature of 'what-is'.] I think we need both to understand the long evolution of consciousness in living beings and to appreciate the contributions of biology and neuroscience [affective neuroscience] in their analyses of the steps along the way to the development of consciousness as our species experiences it. [ETA: But there is a much greater task to be accomplished by philosophy and science -- to bring us closer to a deeper understanding of the nature of consciousness as an expression of the nature of being and thus of the ontological structure of Being as a whole.]

While I agree with you that Chalmers' "easy" problems are not easy at all, I do think his insight that the "easy" problems and the "hard" problem are categorically different is correct.

The so-called 'easy' problems and the 'hard problem' can be sorted out/categorized as 'different' in the manner in which our species, especially in the modern period of our history of ideas, tends to categorize things and living beings in order to understand them. But categorical thinking is a scientific/intellectual overlay we place on what-is and, as we have seen, is not the only way in which humans have appreciated/understood and thought about the nature of what-is as we encounter it in our local world -- our temporal, historical, existential milieu, which is both unique to us in our situation in spacetime and yet, more fundamentally, ontologically, a partial expression of the integrations of Being as a whole. I think we sense these larger and deeper integrations, but do not, outside phenomenological thinking, come closer to appreciating them, understanding them. The phenomenologists, Strawson, and Kafatos do bring us closer to understanding them, imo.

Whereas phenomenal consciousness doesn't seem to supervene on the arrangement of "matter," the "easy" problems do seem to.

Would you expand on that idea with some specific details? This would be an interesting topic to explore. For example, 'emergence' and 'supervenience' are theories intended to explain a variety of complex changes revealed in some of our species' specific/specialized investigations of how nature works. How deep do these theoretical concepts go in investigating the intrinsic structure of what has evolved in nature, and more fundamentally in being as Kafatos explicates it?

But our understanding of the psychophysical nexus is in its infancy. That there is a nexus seems self-evident, but understanding eludes us. I think Naive Realism is a big stumbling block in this regard.

Yes, to the underscored statement above. Would you and/or Steve [@smcder] expand on what you mean by "naive realism"? My impression is that this term has been used in various ways, and for a variety of purposes [mostly dismissive], by both analytical philosophers and cognitive neuroscientists. Kant's and Husserl's contributions (early and late) to our understanding of the nature of human perception are critically important in the history of the recognition of the difference between 'things in themselves', which are closed to us, and 'things as seen', which are phenomenally available to us in our presence and experiential openness to them, and clarified and interpreted in their meaning and significance by phenomenological philosophers [who in turn are justified in referring to analytical philosophy and objectivist science as 'naive' in their categorical approach to reality {what-is} as it is known/understood in lived experience].
 
I was thinking a little farther afield: how do you know, at a practical level, that I am conscious?

Naturally you assume I am a human, and as such I must automatically possess consciousness.

Once a synthetic entity is indistinguishable from a biological one, once you can no longer tell the difference, then at a practical level (i.e. dealing with it on a day-to-day basis) the issue of its "consciousness" will be irrelevant.

Imagine sitting at a table with two other people. You are wearing underpants; one of the others is wearing underpants, the other is not.
You can't tell which one is and which isn't from the conversation, and it's not a relevant factor anyway.

You are hiring people for a job. Two applicants apply: one is wearing underpants, the other isn't. You can't tell which.
Two barmen are serving at your local pub: one is wearing undies, the other isn't. You can't tell which.

At a practical level, how important are the underpants?

I have to say that I find your analogies to be philosophically naive. That is of course remediable.
 