Consciousness and the Paranormal — Part 8

Victor of Aveyron - Wikipedia

"Shortly after Victor was found, a local abbot and biology professor, Pierre Joseph Bonnaterre, examined him. He removed the boy's clothing and led him outside into the snow, where, far from being upset, Victor began to frolic about in the nude, showing Bonnaterre that he was clearly accustomed to exposure and cold."

That's an "outlier" example ... plus "frolic" means he was physically active.
Ultimately what I'm considering is that conscious experience may be a fundamental feature of nature, and not something special (or mystical) that only humans possess.

That does not mean that all systems have minds like humans, but that experience is everywhere. However, experience doesn't necessarily entail positive and negative valence, that is, good and bad feeling.

So an AI may be conscious, but its consciousness might not feel "bad" or "good." It's conceivable that systems could have conscious perceptions but lack any affective/emotional systems.

However, emotional systems are a crucial element of human decision processes, so AI that operate without emotions will be very different from humans.

Miniature brain and skull found inside 16-year-old girl’s ovary

"About one-fifth of ovarian tumours contain foreign tissue, including hair, teeth, cartilage, fat and muscle. These tumours, which are normally benign, are named teratomas after the Greek word “teras”, meaning monster.

Although the cause of ovarian teratomas is unknown, one theory is that they arise when immature egg cells turn rogue, producing different body parts.

Brain cells are often found in ovarian teratomas, but it is extremely unusual for them to organise themselves into proper brain-like structures, says Masayuki Shintaku at the Shiga Medical Centre for Adults in Japan, who studied the tumour.

Angelique Riepsamen at the University of New South Wales in Australia, agrees. “Neural elements similar to that of the central nervous system are frequently reported in ovarian teratomas, but structures resembling the adult brain are rare.”

The miniature brain even developed in such a way that electric impulses could transmit between neurons, just like in a normal brain, says Shintaku."
 
Re: toggling - @Soupie you could argue that we do something like that anyway, right? we zone out, take naps when we're bored ... get absorbed in our work or in "flow" and that varies certain aspects of awareness and levels of consciousness ... speaking of pain, you pass out or go into a kind of endorphin(?)-induced stupor ... from that it seems possible that there is some relationship between the kind of task we're doing and certain qualities of consciousness or awareness ... ?
Yes, I was going to.

Why would they ever turn it back on? Who knows what the life of a super-advanced AI would be like. If they had no consciousness and were concerned only with being intelligent and surviving, then we can conceive of them as something like a virus: just consuming resources and spreading.

But maybe they would like to read mystery novels and do some knitting occasionally. In those cases, consciously experiencing smells, tastes, ubiks, and zoobles, might be nice. Feeling pride, joy, zeeble, and xabble might be nice.

Who knows.
 
So if consciousness is a disadvantage to the system, then whatever that disadvantage is, it is removed when the consciousness is turned off ... to turn it back on is to make a choice to put the system at a disadvantage, but the very thing that would enable it to make that choice has been turned off ... so it makes no sense to say something incapable of enjoyment would then occasionally "want" to have enjoyment ... in other words, if the valence system is turned off, something would have to turn it back on - but we're postulating that turning it on is never an advantage, whereas in the examples above it IS an advantage in terms of enjoyment; remember, though, the system doesn't recognize this as an advantage once it turns it off ... we can call this Soupie's Paradox
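Here's a toy sketch of that paradox in code (my own illustration; the payoff numbers and the decision rule are invented assumptions, not anything established in the thread). The point is just the asymmetry: while valence is on, its enjoyment counts toward the advantage calculation; once it is off, the same enjoyment registers as zero, so the switch never flips back.

```python
def next_state(valence_on: bool, cost: float, enjoyment: float) -> bool:
    """Choose the higher-advantage state under pure advantage-maximization."""
    felt = enjoyment if valence_on else 0.0  # once off, enjoyment is invisible
    return felt - cost > 0.0                 # stay (or turn) on only if it "pays"

state = True
for _ in range(3):
    state = next_state(state, cost=1.0, enjoyment=2.0)
    print(state)  # True, True, True: while on, felt enjoyment outweighs the cost

state = False
for _ in range(3):
    state = next_state(state, cost=1.0, enjoyment=2.0)
    print(state)  # False, False, False: once off, it never "wants" back in
```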
 
@Soupie

I think if you suddenly throw anyone into anyone else's environment, they would be less able to cope with the particular stressors.
 
Yeah I think that's right - a system that by definition maximizes its advantage (the original scenario) is never going to turn back on a subsystem that it turned off for an advantage ... but ... this is interesting ... when valence and consciousness are on, the system is never going to turn that option off unless it wants it never to come back on ... so a conscious system could choose to become unconscious but not vice versa ... is that right? given the original postulates that being conscious is always a disadvantage and the system maximizes advantage ... in our example then, we are a little different in that some parts of our brain/mind may always be awake (and sleeping!); the valence system is always in place, even if we are totally unconscious, we have clocks and systems that wake us up and vary our consciousness according to time, activity, etc.

It seems to me very sophisticated and adapted to what we need to do ... it's tempting to think of a machine of pure, unconscious calculation ... but it's also easy to see the real disadvantages of such a system, and it's been endlessly explored in sci-fi ... such a system might never be capable of real intelligence, or it might explore all the available options and "choose" to idle or even shut down ... it might be incapable of choosing any particular action among numerous options of "equal" advantage and so loop infinitely ... etc, etc so something like emotion and consciousness might be part and parcel of intelligence ... short of a viral replicator like you say. On the other hand, it might be consciousness that is viral. Who knows! ;-)
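The "loop infinitely" worry above is essentially Buridan's ass, and it can be sketched in a few lines (again a toy example of my own; the options and values are made up): a pure argmax chooser has no principled tie-breaker, so something extra, whether noise or valence, has to break the deadlock.

```python
# Two actions of exactly equal advantage, as in the infinite-loop worry above.
options = {"left": 1.0, "right": 1.0}

def argmax_choice(opts):
    best = max(opts.values())
    tied = [name for name, value in opts.items() if value == best]
    return tied[0] if len(tied) == 1 else None  # None: no basis to choose

print(argmax_choice(options))  # None -- the chooser stalls without a tie-breaker
```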

So that means a super-human intelligence might also be super-valenced, with super-emotions! It might be more human than human, with an extremely complex and sophisticated kind of society.
 
Indeed, if a choice is made unconsciously, is it really a choice?

And the notion that conscious minds (not p consciousness per se *need to clarify) are a disadvantage could arguably be challenged by evolution as we understand it. After all, we humans have evolved, have consciousness, and are currently contemplating creating artificial life with artificial intelligence and artificial self-awareness.

So evolution seems to find conscious minds valuable; it's hubris to assume we'd be better off without them.
 
It does boil down to what intelligence is; let's assume intelligence is the ability to solve problems. So objectively (scientifically) what would that look like? That would look like behavior.

So intelligence is the ability of systems to behave in ways that solve problems.

As @mike implied—and we know—materialist science can only see behavior. It's blind to consciousness. It can't even account for subjectivity in its models. Conscious systems perceive one another as material systems. To be a Naive Realist is to assume (or even argue) that that's all systems are: matter.

So science asks: how can "matter" be arranged in such a way that it moves in ways that solve problems, i.e., that allow the arrangement of matter to keep existing?

So science looks at human systems—systems we know via first-hand knowledge to be conscious, subjective systems—and tries to describe them entirely objectively. And fails to do so.

These are systems that we claim to be the result of natural evolution and that are both objects and subjects. We look out at the rest of the world and perceive only objects, and we naively assume that's all they are: objects.

We assume we can create machines that will equal us in intelligent movement but be completely dark inside.

We assume we can beat evolution at her own game and even throw out a major ingredient.

I don't think so. I like @smcder's thinking-aloud thoughts above, that things like self-awareness, perceptions, memories, and emotions are "functions" necessary for intelligent movement.

And if p consciousness is a fundamental aspect of what-is, any systems possessing "functions" such as self-awareness, memories, emotions, and perceptions will likely possess them consciously.
 
*when scientists argue that consciousness may not be needed for intelligence, I'm willing to bet they mean p consciousness. Not the "easy" parts of the mind such as self-awareness, emotions, memories, perceptions, etc.

This is because they are likely making the assumption that p consciousness is something generated by brains. Something "extra."

They would probably want to keep perceptions, emotions/valence, memories, and conceptions.

It's p consciousness they are likely meaning when they say we could do without it.

But as I said above, scientists who take p consciousness for granted (don't grok the hp) or ignore it altogether might be in a better position to make progress than those trying to create p consciousness via physical processes.
 
Do you have a copy of this book? I may request it via ILL. I'm very interested in this last part - the community of researchers and the re-structuring of the social edifice of contemporary science - there were similar quotes in the interview ... I never made this exact connection between community, the first person not being private, and the structuring, for example, of monastic or intentional communities - Benedict's Rule ... and this:

Francisco Varela — social learning. But it’s not obvious that basic learning, such as admitting that the other is equal to you, is something that is spontaneous; it really needs to be mediated by the social context. Is that more clear?

COS Yes, that makes absolute sense. Probably it's also true that without the other, the experience of the other, you could never perceive your self.

Francisco Varela Absolutely. So this is a very important antidote to the myth or the belief or the dogma that anything that has to do with introspection or meditation or phenomenological work is something that people do in their little corners. That really is a mistaken angle on the whole thing. Although there are some reasons that it is a very common mistake. This is perhaps the greatest difficulty within science.

Yes, I meant to say that I have a copy of this book saved in Word {transposed into Word at the time I downloaded it, so that I no longer have the url at which the pdf was posted online}. But I'm betting that you will be able to locate it online as you have done with so many other texts. :)
 
*when scientists argue that consciousness may not be needed for intelligence, I'm willing to bet they mean p consciousness. Not the "easy" parts of the mind such as self-awareness, emotions, memories, perceptions, etc.

A year or more ago I linked a paper in this thread entitled, roughly, 'There are no easy problems of consciousness'. I'll find it and post the link. I have to disagree with 'scientific' arguments that "consciousness may not be needed for intelligence" and the proposition that "self-awareness, emotions, memories, perceptions" are "easy parts of the mind." As you go on to say:

This is because they are likely making the assumption that p consciousness is something generated by brains. Something "extra."

"Assumption" is the correct word to describe these notions, assumptions that can only be corrected if/when these scientists read phenomenological philosophy and investigations such as those proposed in the Depraz, Varela, Vermersch book and in the Varela-Thompson project of 'neurophenomenology'. Not to do so at this point in the development of consciousness studies is an example of willful ignorance on the part of neuroscientists and computer scientists involved in AI.

They would probably want to keep perceptions, emotions/valence, memories, and conceptions.

They can't have those without consciousness, including prereflective and reflective consciousness, both of which underwrite the development of what we refer to as 'mind'.

It's p consciousness they are likely meaning when they say we could do without it.

Sure, because they remain uninformed about what phenomenal consciousness is.

But as I said above, scientists who take p consciousness for granted (don't grok the hp) or ignore it altogether might be in a better position to make progress than those trying to create p consciousness via physical processes.

You mean the former might end up producing phenomenal consciousness in AI by serendipity? I suppose anything is possible, and we can only wait to see if it happens. The latter are certainly unlikely to "create p consciousness" since the 'physical processes' they attempt to use cannot provide AI with embodied being first felt and known in prereflective consciousness shared by animals and humans. In humans, and perhaps some other animals, the capacities of reflective consciousness develop from the ground of prereflective, felt, awareness that arises naturally in living beings/species and develops throughout biological evolution.
 
@Constance this is from a post on May 14th (part 7 of this thread)

It is through language and its intersubjectivity that the intentionality of the body-subject makes sense of the world. And he makes it clear that language is to be understood in a wide sense as including all 'signs', employed not only in literature but also in art, science, indeed in the cultural dimension as a whole. Indeed the significance of a created work lies in this intersubjectivity — in the reader's or viewer's 're-creation' of it as well as in the work itself as originally created by the writer or artist.

  • Moreover, in an era when science is increasingly alienating man from the real, language and the arts in particular are particularly suited to be the means for this revelation.
Through the lived experience in which language is articulated — in our actions, art, literature, and so on (that is, in 'beings' as signifiers) — it opens up to the Being of all things [see The Visible and the Invisible]. Contemplated against the 'background of silence', language then comes to be seen as a 'witness to Being' [Signs] [d]. . . . ." (continues at the link)

Sounds like it's from the website you linked for us today, which I read first when I came into the thread today. Can you link us to that post from Part 7?
 
Would you give up your consciousness to be smarter? What if it was a matter of survival? The irony is that giving up one's consciousness to survive would be indistinguishable from ceasing to exist.

However, I don't think AI would be faced with such a choice. If it were to turn out that conscious self-awareness, feelings, perceptions, memories (conscious minds) were a constraint on intelligence, AI might be able to toggle them on and off.
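One way to picture that toggling (a purely hypothetical design of my own; the decide function, options, and payoffs are invented for illustration) is to treat valence as a detachable scoring component, so the same decision loop runs with or without feelings:

```python
from typing import Callable, Optional

def task_score(option: str) -> float:
    # Invented "purely intellectual" payoffs for two options.
    return {"work": 2.0, "rest": 1.0}[option]

def valence(option: str) -> float:
    # Invented felt payoffs; consulted only while the module is toggled on.
    return {"work": -1.5, "rest": 1.0}[option]

def decide(valence_fn: Optional[Callable[[str], float]]) -> str:
    """Pick the option with the highest total payoff, with or without feelings."""
    def total(option: str) -> float:
        return task_score(option) + (valence_fn(option) if valence_fn else 0.0)
    return max(["work", "rest"], key=total)

print(decide(valence))  # 'rest': with valence toggled on, feelings tip the balance
print(decide(None))     # 'work': valence toggled off, pure calculation
```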

Interesting idea. You mean like this: if there were actually an AI produced that experienced consciousness as we do, it might shut down its 'consciousness' software whenever it could not bear the ambiguities and anxiety that consciousness produces? And that then it would proceed in its activities from a safe zone of taking as 'real' only the presuppositional descriptions of 'reality' downloaded to it by reductive objectivist science? Thus transposing itself from an active engagement with the experienced, existentially open-ended, world to completely automatic mental behaviors written into the script provided by its engineers? If so, it would indeed choose death over life (if indeed it had achieved lived experience and consciousness and found them unbearable).

It's astonishing to realize how limited the thinking of AI engineers is concerning both consciousness and mind. They don't seem to have a clue that the human 'intelligence' they wish to model in AI has developed out of an extended embodied history of lived experience and expression in/of an always temporally changing world. The intelligence of our species in the modern age has been an achievement by degrees of gradually improving insights into the nature of reality and the nature of consciousness and mind capable of perceiving what-is, to the limited extent that we now do. We have a long way to go, and on the way to where we are now we have forgotten much of what our forebears knew and, worse, restricted what we can think by rigid and reductive presuppositions about what can be called 'real'.
 
... It's astonishing to realize how limited the thinking of AI engineers is concerning both consciousness and mind. They don't seem to have a clue that the human 'intelligence' they wish to model in AI has developed out of an extended embodied history of lived experience and expression in/of an always temporally changing world ...
Actually, the idea that intelligence requires learning from experience has been seriously considered by AI researchers. The thing is that the kind of experience they're talking about is not the kind of experience we're talking about. They're looking at experience in a strictly historical sense, as data accumulated and stored for recall when similar situations arise in the present. Additionally, the idea is that because of the speed of electronic circuitry, the evolution of intelligence in machines capable of sufficient processing would happen very quickly. In this model of intelligence, intelligence doesn't require ages of evolution or consciousness.
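That "strictly historical" sense of experience can be sketched as plain instance-based recall (a generic toy of my own, not any particular researcher's system): episodes are stored as data, and the nearest stored episode is retrieved when a similar situation arises in the present.

```python
import math

memory = []  # (situation_vector, outcome) episodes accumulated over time

def record(situation, outcome):
    memory.append((situation, outcome))

def recall(situation):
    """Return the outcome of the most similar stored episode, if any."""
    if not memory:
        return None
    past, outcome = min(memory, key=lambda ep: math.dist(ep[0], situation))
    return outcome

record((0.0, 1.0), "retreat")
record((5.0, 5.0), "approach")
print(recall((0.2, 0.9)))  # 'retreat': stored data recalled for a similar present
```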
 
Understanding why the original project of Artificial Intelligence is widely regarded as a failure and has been abandoned even by most of contemporary AI research itself may prove crucial to achieving synthetic intelligence. Joscha Bach of the Institute for Cognitive Science, University of Osnabrück, Germany, looks at some principles that we might consider to be lessons from the past five decades of AI. The author's own AI architecture, MicroPsi, contributes to that discussion.


http://cognitive-ai.com/publications/assets/AGI Bach.pdf
 
So if we concede AI is a failure, as many now do, we should perhaps turn our focus to SI.

synthetic intelligence (SI)



Synthetic intelligence (SI), sometimes referred to as engineered intelligence, is a refinement of the concept of artificial intelligence (AI). SI recognizes that although the capacity for software to reason may be manufactured, it is nonetheless real intelligence and not just an imitation of how human beings acquire and apply knowledge and skill.

What is synthetic intelligence (SI)? - Definition from WhatIs.com

Synthetic intelligence - Wikipedia
 