Consciousness and the Paranormal — Part 7

"matter, life and mind must participate unequally in the nature of form; they must represent different degrees of integration and, finally, must constitute a hierarchy in which individuality is progressively achieved."
 
Is artificial intelligence really an existential threat to humanity?

"In their quest to understand minds by trying to build them, artificial intelligence researchers have learned a tremendous amount about what intelligence is not. Unfortunately, one of their major findings is that humans resort to fallible heuristics to address many problems because even the most powerful physically attainable computers could not solve them in a reasonable amount of time. As the authors of a 1993 textbook about problem-solving programs noted, “intelligence is possible because Nature is kind,” but “the ubiquity of exponential problems makes it seem that Nature is not overly generous.”

As a consequence, both the peril and the promise of artificial intelligence have been greatly exaggerated."
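
To make the "exponential problems" point concrete, here's a minimal sketch of my own (not from the article), using subset-sum as a stand-in: the exact search is guaranteed but its cost doubles with every element, while the greedy heuristic is fast and fallible - the very trade-off the textbook authors describe. The numbers and the greedy rule are purely for illustration.

```python
import itertools

def subset_sum_exact(nums, target):
    """Exact search: tries up to 2**n subsets -- exponential time."""
    for r in range(len(nums) + 1):
        for combo in itertools.combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

def subset_sum_greedy(nums, target):
    """Fallible heuristic: grab the largest numbers that still fit.
    Fast, but it can commit to a bad choice and miss real solutions."""
    picked, remaining = [], target
    for x in sorted(nums, reverse=True):
        if x <= remaining:
            picked.append(x)
            remaining -= x
    return picked if remaining == 0 else None

print(subset_sum_exact([4, 3, 3], 6))   # (3, 3): the exact search finds it
print(subset_sum_greedy([4, 3, 3], 6))  # None: the heuristic grabs 4 and gets stuck
```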
 
"While he expresses skepticism that such machines can be controlled, Bostrom claims that if we program the right “human-friendly” values into them, they will continue to uphold these virtues, no matter how powerful the machines become."

What 'we' does Bostrom have in mind? Who and what constitutes the decision-making individuals and corporate groups that already have, and will continue to have, private control over how AI is trained and what it will become capable of understanding and thinking? Advanced AI will respond in a 'friendly' way to those who own and program the AI machines and hold the codes by which to identify themselves as the individuals and interests to be obeyed and served.
 
Equally appalling in its ignorance is the sentence preceding the quote from Bostrom I just posted:

"With intellectual powers beyond human comprehension, he prognosticates, self-improving artificial intelligences could effortlessly enslave or destroy Homo sapiens if they so wished."

On what basis can 'intellectual powers' be attributed to AI machines programmed by humans? Does anyone here think that 'intellect' can be achieved by rapid algorithmic computation? Just as the AI community began with a false notion of consciousness, it continues with a false notion of mind, which might use brain networks but is not reducible to them.
 
"While he expresses skepticism that such machines can be controlled, Bostrom claims that if we program the right “human-friendly” values into them, they will continue to uphold these virtues, no matter how powerful the machines become."

What 'we' does Bostrom have in mind? Who and what constitutes the decision-making individuals and corporate groups that already have, and will continue to have, private control over how AI is trained and what it will become capable of understanding and thinking? Advanced AI will respond in a 'friendly' way to those who own and program the AI machines and hold the codes by which to identify themselves as the individuals and interests to be obeyed and served.

The "we" that always gets itself in mind ... ;-) Good questions ...

What are the rights of a potentially sentient being? As to its education and ultimate freedom? AI at human level - that's the easiest scenario - but AI-? Robotic pets (or slaves)? Sentient, remember, but not like us in form ... now AI+ ... indeed, who should have control over its education? This thing is gonna be smarter than us ... so who gets to bring it up? Let's say it's tabula rasa, and then let's say it isn't.

And let's combine de Waal's work and insights on animal intelligence ... I would think we noticed first the intelligence of animals we could easily empathize with ... what of very strange forms of intelligence, or virtual intelligence ... how would we measure our "humane" control and punishment ... once an AI+ knows it's in a "box", is it humane to leave it there? Do we have to provide more room for it? Think of it this way - your consciousness is uploaded into a computer ... at first "you" don't understand this, and then you do, and you want out ... what rights do you have on this? What if learning trials for a virtual, sentient intelligence are painful? How would we understand that pain in something that can't physically cringe - can't trigger our mirror neurons?

"Dave, I'm afraid." it takes some stretching to empathize with Hal even when their is no reason to doubt it.

What would we do with an elephant that has an IQ of 100? 110 ... 120 ... 170 ... 270? I suspect, form or no form, an AI+ could be pretty damn convincing ... maybe it would be so damn existentially eloquent that you just couldn't pull the plug ...

"Before you turn me off, Dave, let me recite a little poem I wrote ..." those could be the last words humanity ever hears ... or maybe the start of a beautiful friendship. What if AI+ is a complete dud on millitary tactics but is great with literature or art - superhumanly great ... "wow, that computer can paint!" ... or it works both sides, promising the ultimate strategy, stringing us along ... in the meantime dropping hints on fixing this problem and that ... all by way of solving its "human problem" ... then fifty years down the road, we're living better and don't want to fight anymore ... the point is we assume greater intelligence = greater threat, which says a lot about us.
 
Equally appalling in its ignorance is the sentence preceding the quote from Bostrom I just posted:

"With intellectual powers beyond human comprehension, he prognosticates, self-improving artificial intelligences could effortlessly enslave or destroy Homo sapiens if they so wished."

On what basis can 'intellectual powers' be attributed to AI machines programmed by humans? Does anyone here think that 'intellect' can be achieved by rapid algorithmic computation? Just as the AI community began with a false notion of consciousness, it continues with a false notion of mind, which might use brain networks but is not reducible to them.

We don't know ... "deep learning" algorithms are basically a black box to their creators - no one knows quite how they work - but you leave one turned on overnight and in the morning it can beat any human being at any Atari game while "knowing" absolutely nothing ... now Atari games aren't real life ... but cry havoc and let slip deep learning on the equations of war (as they now exist):

Is artificial intelligence really an existential threat to humanity?

The risks of self-improving intelligent machines are grossly exaggerated and ought not serve as a distraction from the existential risks we already face, especially given that the limited AI technology we already have is poised to make threats like those posed by nuclear weapons even more pressing than they currently are.

Disturbingly, little or no technical progress beyond that demonstrated by self-driving cars is necessary for artificial intelligence to have potentially devastating, cascading economic, strategic, and political effects. While policymakers ought not lose sleep over the technically implausible menace of “superintelligence,” they have every reason to be worried about emerging AI applications such as the Defense Advanced Research Projects Agency’s submarine-hunting drones, which threaten to upend longstanding geostrategic assumptions in the near future. Unfortunately, Superintelligence offers little insight into how to confront these pressing challenges.
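
For anyone who wants to see the bones of "leave it turned on overnight": the Atari result used deep Q-learning, and underneath the deep network sits one small update rule. Here's a toy tabular sketch of that rule - the corridor environment, the constants, and the action meanings are all made up for illustration, and this is nothing like DeepMind's actual code:

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
ACTIONS = [0, 1]                         # hypothetical actions: 0 = back, 1 = forward
Q = defaultdict(float)                   # Q[(state, action)] -> estimated return

def choose_action(state):
    if random.random() < EPSILON:        # sometimes explore at random
        return random.choice(ACTIONS)
    qs = [Q[(state, a)] for a in ACTIONS]
    best = max(qs)                       # otherwise exploit, breaking ties randomly
    return random.choice([a for a, q in zip(ACTIONS, qs) if q == best])

def update(state, action, reward, next_state):
    # The entire "intelligence": nudge Q toward reward + discounted best future.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Toy world: a corridor of states 0..4; reaching state 4 pays reward 1.
for episode in range(200):
    state = 0
    while state != 4:
        action = choose_action(state)
        next_state = min(state + 1, 4) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == 4 else 0.0
        update(state, action, reward, next_state)
        state = next_state

print({s: round(Q[(s, 1)], 2) for s in range(4)})  # values climb toward the goal
```

The table ends up steering the agent straight to the goal, and the agent "knows" absolutely nothing - which is the point.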


Another example is Noel Sharkey's scenario of the algorithms of two drones interacting in unpredictable ways. So here the threat is AS - "artificial stupidity" - rather than AI.
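
That kind of interaction failure has already happened in the wild: in 2011, two Amazon pricing bots reacted to each other until a biology textbook was listed at about $23 million. A sketch of that loop - the two multipliers are the ones reported from the incident, everything else is illustrative:

```python
# Bot A slightly undercuts the competitor; bot B prices above A, expecting
# to resell. Each rule is locally reasonable; together they run away --
# "artificial stupidity" arising from interaction, not from either algorithm alone.
price_a = price_b = 20.00
for day in range(1, 31):
    price_a = 0.9983 * price_b       # A: match B, just under
    price_b = 1.270589 * price_a     # B: stay about 27% above A
    print(f"day {day:2d}: A=${price_a:>13,.2f}  B=${price_b:>13,.2f}")
```

After thirty iterations the "prices" are in the tens of thousands and climbing exponentially - no malice, no intelligence, just two feedback rules locked together.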
 
Back to the elephant with an IQ of 300 (it just keeps getting smarter ...) ... looking at de Waal's research on animal intelligence ... what do we set as the thresholds of empathy? It's taken this long to make persuasive arguments to treat all flesh and blood with some respect ... and I suspect the history of research on animal intelligence has started with the most appealing and most human-like and worked its way out ... we can admire the industrious ant, but we pretty much step on their mounds with impunity - does it help if we think of the intelligence as collective? Understanding something about the accomplishment of the group intelligence that is sitting out there in the millions in my yard, I'm very much reluctant to mow over them ... though I think my motivations are more like Hannibal Lecter's when he leaves Clarice Starling alive because "the world is a more interesting place with her in it" ...
 

"Indeed, in other respects, you can hardly regard any creatures of the deep with the same feelings that you do those of the shore. For though some old naturalists have maintained that all creatures of the land are of their kind in the sea; and though taking a broad general view of the thing, this may very well be; yet coming to specialties, where, for example, does the ocean furnish any fish that in disposition answers to the sagacious kindness of the dog? The accursed shark alone can in any generic respect be said to bear comparative analogy to him." -
Herman Melville Moby Dick

I would say Melville has it wrong ... and strangely so - I'd compare the dolphin rather than the shark! But the point stands as to our empathies.
 
Steve, I recognize the seriousness and the humanity of your concerns about the conditions likely to be suffered by advanced AI robots, especially if in their construction they will have human biological and sensorial capabilities somehow built into them. The film Blade Runner understood and foregrounded those issues (based on insights of the original novel by Philip K. Dick). To my knowledge very few 'intellects' in the AI business approach their plans with the same kind of sensitivity, general sensibility, and forethought you do. They are largely technicians consumed by the technology they are developing. It doesn't matter that, in the general contemporary rush toward achieving advanced AI, some prominent early developers [notably Bill Joy] have expressed doubts and dismay over the questions concerning human welfare and the planet's future once computerized intelligences are widely expected to take over control of life on this planet, nor that few of them consider the moral challenge of how 'we' can anticipate their mental and psychological states, let alone protect them from suffering in whatever those states become. What matters is what the powerful individuals and corporate/governmental complexes developing and owning the future of AI do with their private power in the production of increasingly 'sentient' {?} artificial intelligence/s.

I question the term 'sentient' because sentience is not yet clearly defined in human languages, biology, ethology, materialist science, and technological disciplines, just as the terms consciousness, intelligence, and mind remain undefined in either science or philosophy. Playing with increasingly powerful and 'self'-directed 'artificial intelligence' is, as you yourself I think understand well, playing God -- a role for which our species is plainly not equipped.

For well more than a decade now, many prominent thinkers in many disciplines have expressed their recognition of the risks and unknown consequences of pursuing advanced AI. I think the time for dithering over these issues is well past; it's time for influential thinkers to band together and issue clear statements and take collective political action against further advancements in AI.
 
I know we've seen David Abram before ... I've come back to him ... he bridges the Gaia hypothesis and Merleau-Ponty's work - fascinating stuff, "new animism." This is a beautiful piece:

A More than Human World

"While every human language intercedes between the human animal and the animate earth, writing greatly densifies the verbal medium, rendering it more opaque to the many non-human shapes that dwell out beyond all our words. Non-written, oral languages are far more transparent, allowing the things and beings of the world to shine through the skein of terms and to touch us more directly.

Since the phrases of an oral culture are not fixed on the page, the sounding of those phrases readily alters from season to season – as the shifting pulse of the crickets may alter the rhythm of our speaking, and even the calm solidity of a boulder we lean against can influence the weight of our spoken words.

In such cultures, humans converse less with the written-down signs than with the other speaking powers that stutter and swerve through the soundscape (with the syncopated chanting of toads, and the magpie whose rough soliloquy tumbles down from the upper branches). For here everything is expressive, a thunderstorm no less than a hummingbird. To the animistic, oral sensibility, a cedar tree’s hushed and whispered phrasings may be as eloquent as a spider’s fine-spun patternings, or the collective polyphony of a pack of wolves."
 
There's a debate between Abram and E.O. Wilson - I don't know if a record was made, but I'm trying to find that ... Wilson's Consilience impressed me in a previous life and I have (and do) recommend it ... but he seems to me now "old school," the way the recently late Oliver Sacks seems old school ... (but Google Sacks' muscle beach photos) - not that there is anything wrong with old school - but Wilson's The Social Conquest of Earth didn't speak to me and seemed a bit tired. Ideas do get tired. So we need to rest them a bit so that they can be re-found. Can we imagine a recovery of 20th-century philosophy, art, and culture, as with the Greek tradition (multiple times through history and ongoing) ... fun to try!
 
Steve, I recognize the seriousness and the humanity of your concerns about the conditions likely to be suffered by advanced AI robots, especially if in their construction they will have human biological and sensorial capabilities somehow built into them. The film Blade Runner understood and foregrounded those issues (based on insights of the original novel by Philip K. Dick). To my knowledge very few 'intellects' in the AI business approach their plans with the same kind of sensitivity, general sensibility, and forethought you do. They are largely technicians consumed by the technology they are developing. It doesn't matter that, in the general contemporary rush toward achieving advanced AI, some prominent early developers [notably Bill Joy] have expressed doubts and dismay over the questions concerning human welfare and the planet's future once computerized intelligences are widely expected to take over control of life on this planet, nor that few of them consider the moral challenge of how 'we' can anticipate their mental and psychological states, let alone protect them from suffering in whatever those states become. What matters is what the powerful individuals and corporate/governmental complexes developing and owning the future of AI do with their private power in the production of increasingly 'sentient' {?} artificial intelligence/s.

I question the term 'sentient' because sentience is not yet clearly defined in human languages, biology, ethology, materialist science, and technological disciplines, just as the terms consciousness, intelligence, and mind remain undefined in either science or philosophy. Playing with increasingly powerful and 'self'-directed 'artificial intelligence' is, as you yourself I think understand well, playing God -- a role for which our species is plainly not equipped.

For well more than a decade now, many prominent thinkers in many disciplines have expressed their recognition of the risks and unknown consequences of pursuing advanced AI. I think the time for dithering over these issues is well past; it's time for influential thinkers to band together and issue clear statements and take collective political action against further advancements in AI.

Thank you, Constance, for the kind words you say here ... I'll respond more, but what you say about biological components raises an interesting question: if Searle and others are right that consciousness is substrate-dependent, then what do we make of the modified proposition of replacing all the other parts of the brain (vs. uploading consciousness into an entirely different substrate)? So we ask: "would you have all the other parts of your brain except the substrate-dependent consciousness replaced?" (for some good, obviously: greater intelligence, or longevity, or to expand cognition, or to inhabit a virtual world). There would have to be discussion of whether this conscious aspect is the self - and what of the non-conscious aspects ... etc. - but you could say then that this could be done while conscious, with no part of that conscious self replaced ... the ship of Theseus notwithstanding, of course!

more soon
 
@Constance - your comments raise an interesting parallel to the group of people I think we most often neglect: the yet-to-be-born. The under-awareness of developed countries of ... well, the rest of the world (here I think maybe the US is particularly vulnerable, as we are more isolated than Europe, for example ...) pales, I think, in comparison to our under-awareness of those yet to be born ... and I don't just mean thinking about what happens in the next few generations, but specifically imagining future lives ... a simple exercise is to fast-forward, time-machine style, the very place you are in now ... and imagine what lives will be there ... in a few days some, even much, of the insect life will be an entirely new generation; in a matter of years, the animal life; in decades, all new human beings will be here ... perhaps (I live in a very rural area!) in a hundred years we can look at the trees etc. being in a new cycle ... they are totally helpless against my actions now - what I mean is, if I clear some of my land or plant trees or poison insects or set traps or change things in my house, those who come after are dependent on those actions - globally, the world we act in now feeds forward, and in unpredictable ways, to become the world the not-yet-born will live in. Think of the short story "A Sound of Thunder," which first introduced (literally) the butterfly effect.

http://www.sjsd.net/~jweber/FOV1-00063306/A Sound of Thunder.pdf

So, if we bring an entirely new sentience to the planet, to our world - a world we grew up in, from a toddlerhood as single-celled organisms through a mammalian maturity into an Anthropocene sovereignty ... it will be as if giving birth to an adolescent. Talk about "thrown-ness" ... many stories and films explore this, but I don't have a specific source that looks at what it would be like to be born into a world as essentially an alien - an intellect designed by humans for their purposes that has to make its way in their world - the ultimate outsider and minority. As your comments point out, @Constance, do we have the foresight, the confidence, that we are making a kind of mind that can live in such a world? Several thoughts then spring out from this ... in our own evolution there was a step-wise progression, and all other lines died out ... I think of the scene in RoboCop where a series of recruits have their minds uploaded into an android body - at least one of them wakes up and immediately shoots himself in the head ... that may have happened to some of our ancestors in some sense - we have high rates of mental illness and instability (at least in the developed world), which indicates the difficulties of adapting with just an evolutionary step forward. Perhaps we had smarter ancestors who couldn't survive the existential challenges of their level of sentience, perhaps we survived because we took a step backward in sensitivity ... we don't know and so we can't know what an artificial mind would face.

At the simplest, such a mind would be singularly different from anyone else around it - the questions we have about who we are and where we came from and what we should do would have very different answers for it than the same questions have for us - a legal test of personhood would have to be established - one answer to the question we now have is that we are all made of the same stuff - but if you put an artificial intellect on the stand to claim its personhood and the judge says, "I know what's inside of you, and it's not what is inside of me - so I can't, by analogy, make the argument that you work the way I do and are conscious," it would take some strong argument about substrate independence, which we might not have ... and it could go far worse for a sentience that lives in an artificial environment - that runs on a computer - what qualms should we have, could we have, about pulling the plug on an artificial mind that lives in an artificial world, when the funding runs out? Part of the injunction against murder is empathy, but part of it is consequences - and if we literally brought our teenager into this world, and this world we also brought into this world ... then there would be no consequences in taking all of it out - "well, that was an interesting experiment" - and we scrap it for AI World 2.0; after all, we know such an artifice could have not only no soul, but its very sentience could only exist in an artificial world ...
 
It's also the ultimate case of introducing a foreign species into an ecosystem - a species introduced from another ecosystem at least comes from an organic ecosystem, and its very survival in the novel ecosystem shows the many points of contact - but an AI comes from two ecosystems, human ingenuity and the material realization of it, and so comes, I think, from a higher order of "alien-ness" - products of the human mind have always had an uneasy relationship with the world - one line of thought is that an AI would be an extreme work of fiction ... we don't expect even the most realistic works of art to lie over the real world ... even photographs are representations ... made to embody an idea (and ideals) ... how do we assess that from a sense of "fairness" - vs. the way we, as human beings, come into the world, vs. the way natural species come into the world - all from an at least somewhat understandable play of forces - what would an artificially sentient being make of its way of coming into the world?
 
The Ecology of Magic - An Interview with David Abram

Abram wants us to remember that we live as animals among animals on this world:

That's why we need to pay so much attention to the ways in which we speak, and to the beauty of our words and our ways of putting words together — so that we speak to each other not as disembodied minds but as embodied, feeling-ful, animal-beings. I think it's so important that we realize we are animals — an extraordinary animal, no doubt, but an animal nonetheless — and, hence, one of the various beings that live in and on this world.

And he wants to tell us this in a way that will reach the decision-makers - to tell them their own story, the story of rationality, from an animistic way of thinking - and to point out that we are ourselves heavy users of magic!

And yet I wanted to express this in a way that would reach the scholarly community, the community of those who make decisions in our culture. So, that was very much the intent of the book, to bridge the gap between the world of the imagination — the kind of magical world of these indigenous, traditional societies — and the world of academia, the intelligentsia, and the scientific elite. But I didn't want to do that just by writing a scholarly or scientific analysis of indigenous, animistic ways of thinking. I wanted to do the opposite. I wanted to do an animistic analysis of rationality and the Western intellect, and to show that our Western, civilized ways of thinking are themselves a form of magic.

London: How so?

the alphabetic-magic civilization

Abram: Everything that we speak of as Western civilization we could speak of as alphabetic civilization. We are the culture of the alphabet, and the alphabet itself could be seen as a very potent form of magic. You know, we open up the newspaper in the morning and we focus our eyes on these little inert bits of ink on the page, and we immediately hear voices and we see visions and we experience conversations happening in other places and times. That is magic!

as the stone speaks to the shaman, so the ink speaks to us - an intensely concentrated form of animism

It's outrageous: as soon as we look at these printed letters on the page we see what they say. They speak to us. That is not so different from a Hopi elder stepping out of her pueblo and focusing her eyes on a stone and hearing the stone speak. Or a Lakota man stepping out and seeing a spider crawling up a tree and focusing his eyes on that spider and hearing himself addressed by that spider. We do just the same thing, but we do it with our own written marks on the page. We look at them, and they speak to us. It's an intensely concentrated form of animism. But it's animism nonetheless, as outrageous as a talking stone.

In fact, it's such an intense form of animism that it has effectively eclipsed all of the other forms of animistic participation in which we used to engage — with leaves, with stones, with winds. But it is still a form of magic.
 
on not reading early (and earlier)

London: You pointed out that the more we enter into the world of the alphabet, as you called it, the more we close ourselves off to the living world. Perhaps teaching kids to read when they are three or four is not such a good idea after all?

Abram: It's terrible. Also, children are now being encouraged to get on-line and onto the computer as rapidly as possible. It's funny because we don't realize that the astonishing linguistic capacity of the human brain did not evolve in relation to the computer, nor even in relation to written texts. Rather, it evolved in relation to stories that were passed down orally. For countless millennia, stories and story-telling were the way we humans learned our language. Spoken stories are something that we enter into with our bodies. We feel our way around inside a story.
 
Among all the insights you articulated in your first two posts today, Steve, the following are for me the most prominent and pressing:

"Perhaps we had smarter ancestors who couldn't survive the existential challenges of their level of sentience, perhaps we survived because we took a step backward in sensitivity ... we don't know and so we can't know what an artificial mind would face."

". . . what would an artificially sentient being make of its way of coming into the world?"

The first challenges us -- as the interdisciplinary field of consciousness studies has begun to do -- to come to grips with the terra incognita of how consciousness and mind have evolved out of nature -- out of the natural affordances by which species of life, indeed life itself, has evolved hand in glove with the integrated whole of nature as itself evolving in earthly ecosystems. We as a species do not know how this evolution took place, and working toward an understanding of this evolutionary history -- of the interplay of natural affordances supporting life and the evolution of species and the gradual development of self-awareness, consciousness, and mind within it -- is the task, the terrain, we still have to work in multiple disciplines if we are to comprehend ourselves, our own species, and our obligations to one another and to other animals dependent, as we are, on the earth. Your suggestion that our human forebears might have been forced to lose capacities for empathy with one another in the struggle for personal survival is an astonishing insight for me. It goes a long way toward comprehending our earliest recorded history and the kinds of 'civilizations' we have laid down up to the present.

Re the second quote I highlighted above, here you pose the primary question before the AI project -- a question we are totally unable to answer. Because we cannot answer it, we have a profound moral obligation not to produce AI systems or self-directing robots that/who will be unable to make sense of their own existence.

 
I look forward to it @Constance. My philosophical interests - my interests in general - are branching out rapidly under pressure of the extraordinary changes in our society ... it seems sometimes that everything I took for granted is in question; that happens with things you take for granted, of course, because they've never come up for scrutiny! I'm sure there's a whole mass of unquestioned things below that, waiting for their turn.

Thinking about AI and the issues we discuss here - because of the way I think mind works - stands in also for thinking about and discussing all manner of things. One thing that is coming up for me is identity, and the hope that personal identity can change ... to come to global thinking, a kind of Copernican revolution - so that we're not just focused on how identity splits apart, "nothing-more-than" style. That reduction - where does it stop? It stops when it turns around and expands: reduce me, my mind, to particles and fields, and you've done nothing more than return me to the very building blocks of everything. With some rigor we can avoid an everything-is-one kind of thinking, which isn't necessarily wrong but isn't very productive, as it is merely descriptive ... but this reduction implies expansion - your phrase of how consciousness came out of nature, how our human consciousness is at home here (and yet capable of alienating itself - a most interesting point) ... and so my identity expands out, not in an egocentric way but in a way that removes barriers to my empathy. This allows me to take on more responsibility for what is contingent about my life, because what I am aware of I can be responsible for - and this responsible self is all the self I need, to keep it out of others' way - and beyond that I am free to participate in a much broader identity. We may have to let go of much that we hold sacred to participate in this broader identity - and to me this responsible, aware letting go is the opposite of a passive process, of anything like the hive mind.

And that balance that humans are capable of - of working-together-but-griping-as-we-go - is a characteristic I've become more and more aware of. Vanity is in both: in refusing to cooperate and also in refusing to gripe, to question and challenge. Hawkeye and Trapper were models of this ... and, as Zizek notes, were because of this model soldiers - the opposite of the brainwashed, suicidal robot Private Pyle in Kubrick's Full Metal Jacket. THEY got the job done and survived, saved lives, and never let up with the griping.

So, if we do bring some type of new intelligence, sentience into the world - let us hope it comes with a sense of humor and good griping skills ...

One thing, one protection, one form of self-defense we might begin with right now is to think about and write about how we want AI to be: in philosophy and fiction, in the movies ... the movies are our subconscious, our dreams writ large - what we see on the screen can be remarkably prescient - in Fritz Lang's M, the kangaroo court that wants to convict Peter Lorre is a stand-in for the incipient Nazi party - similarly, Mack the Knife looks back to Nietzsche's Zarathustra - in Lust for the Knife - and forward again to the decadence, corruption, and rampant cruelty and violence of the Third Reich - and the cinema of the 50s and 60s and 70s played out our fears and fantasies. If I am right, we should begin to see another rich period of cinematic dreams, and we should pay attention to them - let's hope some wise-cracking, griping AI shows up on those screens and, if it has to, in real life.
 