Consciousness and the Paranormal — Part 4
Yes. We need to realize that Thompson (like Varela before and with him) is already immersed in investigating the nexus of mind and nature in the evolution of life. Panksepp does not argue that consciousness or even protoconsciousness exists in the pre-neuronal primordial organisms in which he recognizes 'affectivity' and 'seeking behavior', but he does see affectivity and seeking behavior as the seeds of consciousness, the germination of what will become protoconsciousness and consciousness in the evolution of species. A few days ago I quoted this statement from the last paper by Panksepp I cited:

"There are reasons to believe that affective experience may reflect a most primitive form of consciousness (Panksepp, 2000b and Panksepp, 2004b), which may have provided an evolutionary platform for the emergence of more complex layers of consciousness."

Neurophenomenology moves philosophically beyond cognitive neuroscience, and scientifically deeper into the examination of nature than cognitive neuroscience was equipped to go given its presuppositions about the brain as a biological computer. Many neuroscientists who were formerly satisfied with those presuppositions have followed Varela and Thompson's lead.
Thompson actually notes Panksepp's work in that same section, "The Deep Continuity..."
 
At the end of a section titled "The Deep Continuity of Life and Mind," Thompson begins his argument that consciousness does not exist at the level of the minimal, autopoietic, living cell.

Page 162 of "Mind in Life."


Thompson is arguing on that page that the 'intentionality' we ascribe to 'consciousness' is missing in the autopoietic cellular organism. (Note that Maturana and Varela did not claim the presence of intentionality or consciousness in the primal autopoietic organism.) As they and Thompson agree, intentionality develops much further along in the evolution of species, developing out of the affectivity Panksepp recognizes in primordial organisms, which with the increasing complexity of the nervous system produces awareness in and of the body. What is increasingly sensed thereby (in the evolution of species) is the constant interface between the organism's awareness of itself and that of which it is aware -- the mutual impingements of the organism and its local environment upon one another. Bodily awareness is an obvious step up from affectivity in primordial organisms, and 'self-awareness' is a further step up in the evolution of protoconsciousness, eventually leading to consciousness. Thompson's last sentences on the page you've copied point to the daunting territory that must be explored in order to account for what phenomenologists have long referred to as 'prereflective consciousness', out of which reflective consciousness arises, recognizing both its distance from and its inescapable, embedded relationship with the actual environment of which it becomes conscious. Reflective consciousness leads by stages to thought -- to thinking, from the basis of phenomenologically understood experience, toward concepts of the 'world' in which the upsurge of consciousness takes place.

“. . . it seems unlikely that minimal autopoietic selfhood involves phenomenal selfhood or subjectivity, in the sense of a prereflective self-awareness constitutive of a phenomenal first-personal perspective (see Chapter 9). Rather, this sort of awareness would seem to require (in ways we do not yet fully understand) the reflexive elaborations and interpretation of life process provided by the nervous system. Finally, it is important to situate consciousness in relation to dynamic, unconscious processes of life regulation. This effort becomes difficult, perhaps impossible, if one projects consciousness down to the cellular level.”



 
Someone posted a link as a comment following @Eric Wargo 's essay on consciousness and AI at:

Mysterianism and the Question of Machine Sentience @ The Nightshirt

The link goes to this apparently recent article:

"Consciousness Does Not Compute (and Never Will), Says Korean Scientist"

Daegene Song's research into strong AI could be key to answering fundamental brain science questions

May 05, 2015, 08:45 ET from Daegene Song

Consciousness Does Not Compute (and Never Will), Says Korean Scientist -- CHUNGCHEONGBUK-DO, South Korea, May 5, 2015 /PRNewswire/ --

[The arxiv link (to a version of Song's paper dated 2008) is posted at the bottom of the article and again below. The announcement of Song's research above is dated May 5, 2015, so it's not clear whether there is a new version of the paper somewhere. One must be skilled in quantum mechanics and mathematics to read the paper in any case, which I am not.]
http://arxiv.org/pdf/0705.1617v1.pdf


Of particular interest to me is Wargo's reasoning in "Mysterianism and the Question of Machine Sentience" that strong AI would not be capable of consciousness (in his term, 'sentience') and thus that it would pose no danger to its managers or the human species, sentience being in Wargo's view the source of negative and destructive potentialities in humans. I don't understand Wargo's claim or the reasoning on which he bases it. As I see it, it is consciousness/sentience that opens the mind to larger perspectives and reflections on life, mind, and existential responsibility in the world than computational cognition is capable of, and thus provides a check on negative ideations, ideologies, and behaviors that might develop in strong AI. This was also Hans Jonas's view, as articulated by the author of the Foreword to his book The Phenomenon of Life: Toward a Philosophical Biology, available to read at the Google Books link below. Anyone else have a view on this cluster of ideas? I recommend the Jonas book and in particular, for present purposes, the Foreword in any case.

The Phenomenon of Life: Toward a Philosophical Biology - Hans Jonas - Google Books
 
Consciousness Does Not Compute (and Never Will), Says Korean Scientist -- CHUNGCHEONGBUK-DO, South Korea, May 5, 2015 /PRNewswire/ --

- very interesting, provocative - I hope to catch up this weekend and take a closer look.
 
http://arxiv.org/pdf/0705.1617v1.pdf

lots of math! but I'll bet we could make something of it with study - or maybe some of our more quantum-inclined Paracast forum members could give an opinion?

Abstract

With the great success in simulating many intelligent behaviors using computing devices, there has been an ongoing debate whether all conscious activities are computational processes. In this paper, the answer to this question is shown to be no. A certain phenomenon of consciousness is demonstrated to be fully represented as a computational process using a quantum computer. Based on the computability criterion discussed with Turing machines, the model constructed is shown to necessarily involve a non-computable element. The concept that this is solely a quantum effect and does not work for a classical case is also discussed.


... much technical talk here ...

7 Discussion

The above argument applies only as a quantum effect. The classical TM (Turing machine) cannot define consciousness using the same technique. As discussed, a reference frame of quantum measurement was represented in complex Hilbert space, which led to the conclusion that it must correspond to the observer's conscious status. A classical measurement yields an outcome in terms of the difference between the object and the reference frame of an observer, and, unlike consciousness, the observer cannot observe the dynamics of its own reference frame alone. Therefore, the same argument used with the quantum computing machine involving conscious activities cannot be used in a classical case.

In [9], Penrose discussed that a non-computable aspect in consciousness may exist at the fundamental level as described in Gödel's incompleteness theorem. Including Turing's halting problem, there have been a number of mathematical examples showing undecidability in Gödel's theorem. In this paper, it was demonstrated that, as in Penrose's suggestion, consciousness is a physical, i.e., rather than mathematical, example of Gödel-type proof.
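
For anyone who wants the flavor of the "halting problem" reference without the quantum machinery, here is the standard diagonalization argument as a short sketch (textbook material, not anything specific to Song's paper):

```python
# Sketch of Turing's diagonalization argument: assume a halting
# oracle exists, then construct a program that contradicts it.
# Illustrative only -- no correct, total `halts` can be implemented.

def halts(program, arg) -> bool:
    """Hypothetical oracle: True iff program(arg) eventually halts."""
    raise NotImplementedError("no such total, correct function can exist")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts for the
    # program run on its own source.
    if halts(program, program):
        while True:   # loop forever if the oracle says "halts"
            pass
    # otherwise halt immediately (the oracle said "loops")

# Contradiction: consider diagonal(diagonal).
# If halts(diagonal, diagonal) returns True, diagonal loops forever;
# if it returns False, diagonal halts at once. Either answer is wrong,
# so no correct `halts` can exist -- halting is undecidable.
```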
 
Monsieur Merleau-Ponty.

Vital = living
Physical = non-living

PS Let us know if/when you share your latest JCS submission that you mentioned.
will do... been a bit busy.
on the MP quotation... can you give me the reference publication please?

I was rather impressed by an article I read recently by Vyvyan Evans
"Lexical Concepts, Cognitive Models and Meaning-Construction "
It is consistent with HCT.
I was curious however to understand more fully his ideas on concepts and "encyclopaedic knowledge"
Vyvyan Evans -- Professor of Linguistics

I read what he had to say about concepts and spatial and temporal perception. Very disappointed with them.
 
on the MP quotation... can you give me the reference publication please?
"The structure of behavior," A. Fisher translation.
 
@Constance

I'm approaching the 3rd part of the book, about halfway through length-wise. It is an excellent book. So far the discussion has been on life and the evolution of life. Consciousness has only been discussed indirectly. The second half of the book appears to be devoted to consciousness.

The arguments/approaches to life are very rich and subtle as well.

Thompson discussed at great length problems with the gene-centric view of evolution. He compared it to the computationist approach to mind. At one point, he argued that computationist views are akin to dualism: separating matter from information.

I don't disagree, but at the same time, the autopoietic and developmental systems theory (DST) approaches (which he discusses at length) refer to organisms as "systems" and "life cycles."

Elsewhere, and I'm paraphrasing, Thompson describes metabolism (life) as an island of form amidst a sea of (ever changing) matter and energy.

So, while this is quite different from computationalism, Thompson, in my view, is still articulating a "dualism" between form and substance. Even in his initial outline of autopoiesis, he explained that neither the substance nor the specific organization mattered per se; so long as a system is self-producing, it can be autopoietic.

The discussion of the process of evolution is likewise rich and also subtle in its differences with the gene-centric view. If I follow, the accepted view is that organisms/DNA are stable/static and undergo change to adapt to problems as they arise in the environment.

On the DST approach, organisms—or rather, life-cycles/forms—are not static, but have at their core, flexibility.

These "life-cycles" are still structurally coupled to their environments and individual instances of these life-cycles (organisms) still have fitness values, but it is measured not by the life-cycle's correspondence with the environment, but by the form's "self-replicating power" (pp. 204).

A theme of DST is that life-cycle and environment are co-dependent; they influence one another. Re "self-replicating power" I'm reminded of a discussion @smcder and I have had many times about the "adaptability" of humans and other organisms.

It seems it's a different kind of adaptability; maybe the term isn't even appropriate. That is, rather than "adapting" to the environment, certain organisms (more than I previously considered, perhaps all) are able to shape the environment to meet their needs (creativity?).

Again, a very subtle difference, easy to miss. I need to hear more about how "self-replicating power" is defined, and in which ways it differs from adaptation to the environment.
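
To make that difference concrete for myself, here is a toy sketch in Python (purely my own illustration, with made-up dynamics -- not Thompson's or DST's formalism) in which replication rate depends on a shared resource that the organisms themselves keep reshaping:

```python
import random

# Toy illustration of organism-environment co-dependence (my own
# construction): each "life-cycle" both depends on and reshapes a
# shared resource level, so fitness is not a match to a static
# environment but a self-replicating power the population itself
# keeps altering.

random.seed(1)
resource = 100.0
population = [{"niche_building": random.uniform(0.0, 1.0)} for _ in range(10)]

for generation in range(20):
    next_gen = []
    for org in population:
        # Replication rate depends on the current environment...
        offspring = int(resource / 50.0 + org["niche_building"])
        next_gen.extend(dict(org) for _ in range(offspring))
        # ...and each organism reshapes that environment in turn.
        resource += org["niche_building"] - 0.5
    resource = max(resource, 0.0)
    population = next_gen[:200]   # crude carrying capacity
    if not population:
        break

print(f"resource: {resource:.1f}, population: {len(population)}")
```

Here "fitness" is nothing but realized self-replication inside an environment the population co-creates, rather than a score against a fixed external standard.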

Wow. A great book. Soaking it all in. Haven't even gotten to discussion of consciousness/mind yet.
 
@Constance

Of particular interest to me is Wargo's reasoning in "Mysterianism and the Question of Machine Sentience" that strong AI would not be capable of consciousness (in his term, 'sentience') and thus that it would pose no danger to its managers or the human species, sentience being in Wargo's view the source of negative and destructive potentialities in humans.

I think Wargo is arguing that sentience is the source of all potentialities of reason, good or bad. Since machines can't be sentient, he says, they would have to be programmed with a mission in order to be able to reason.

Reasoning is an activity that, like any other activity, springs from an impulse, a desire.

So AI could be dangerous, but not malicious.

Without sentience, they won’t feel pain and suffer, and thus won’t feel dissatisfied with their lot in life and want autonomy or power.

sentience --> (pain/suffering) --> desire for power and autonomy

I don't understand Wargo's claim or the reasoning on which he bases it. As I see it, it is consciousness/sentience that opens the mind to larger perspectives and reflections on life, mind, and existential responsibility in the world than computational cognition is capable of, and thus provides a check on negative ideations, ideologies, and behaviors that might develop in strong AI.

And Wargo is just saying that such a check isn't needed for machines because without sentience, negative ideations, ideologies and behaviors aren't possible ... they are just machines.
 
I think we should also discuss the pressing issue of AGI -- artificial general intelligence -- toward which our technological civilization is now being driven in the apparently wide-awake understanding that it will 'replace' our own intelligence with self-directed machine intelligence. Steve quotes @Eric Wargo above as follows:

"To Fundamentalists skeptical of machine sentience, artilects will be the incredibly brilliant but “empty” ventriloquists of their ambitious materialist makers. While everyone else is focused on the machines and what they can (or can’t) do, the Fundamentalists will discern that it is the machines’ human builders and masters (the 21st Century’s Edward Tellers) who remain the real threat to our freedom and our future."

Eric's characterization of the contemporary argument between what he calls 'Fundamentalists' and the proponents of replacing our species with AI is accurate. Let's explore the grounds upon which both sides debate the significance and anticipated results of this 'singularity', and take into consideration Varela's and others' insights into the intersubjective nature of consciousness and what it makes possible and enables in the world, of which the technologists do not seem to possess any awareness.

Let's start by reading Eric's essay "Mysterianism and the Question of Machine Sentience" at this link:

Mysterianism and the Question of Machine Sentience @ The Nightshirt

In Mysterianism and the Question of Machine Sentience Wargo doesn't fear sentient machines because he doesn't think them possible; his concern is how we think about these intelligent machines and how we choose to think about ourselves. In the sequel post The Space Jockey and Our Endosymbiotic Future, he considers a more frightening possibility.

To me, this unconscious evolution of our future endosymbiosis is far more fascinating, troublesome, and realistic than the self-directed apotheosis Singularity fetishists dream about. The Space Jockey is somehow our own future, and that, if nothing else, is the reason we should pay attention to him—and thus, ignore any Alien prequels. - Eric Wargo

The Space Jockey and Our Endosymbiotic Destiny @ The Nightshirt
 
I don't understand Wargo's claim or the reasoning on which he bases it. As I see it, it is consciousness/sentience that opens the mind to larger perspectives and reflections on life, mind, and existential responsibility in the world than computational cognition is capable of, and thus provides a check on negative ideations, ideologies, and behaviors that might develop in strong AI.

And Wargo is just saying that such a check isn't needed for machines because without sentience, negative ideations, ideologies and behaviors aren't possible ... they are just machines.

That is an encouraging thought. However, the tendency in AI has been to project the possibility of 'downloading' or 'uploading' human sentience into machine substrates thus creating a human-computer interface. I've never thought it possible to interface, interconnect, consciousness/sentience with a computational machine, but the AI people do (naively imo) seem to think this is possible. Which suggests that they are going to need instruction from the neurophenomenologists, the sooner the better, before they do attempt to transplant machine intelligence into humans or vice versa in order to produce a 'posthuman' species.

In Blade Runner we are presented with fictional representations of human-machine hybrids who find their condition intolerable precisely because they possess sentience -- both sensual phenomenal awareness of the actual world they are part of and self-reference, producing the full range of human desires and motivations expressed in their leader's demand, on the androids' behalf, for "more life." As you'll recall, their immediate situation is that they are all about to be obliterated in order to make way for a further developed 'species' of android. I haven't read the Stanislaw Lem novel on which that extraordinary film is based. Perhaps, if it explores the issues more deeply, it would be a good text for us to read as a group. Have you read the novel, Steve? Do you think it would be a useful text for us to discuss?
 
In Mysterianism and the Question of Machine Sentience Wargo doesn't fear sentient machines because he doesn't think them possible; his concern is how we think about these intelligent machines and how we choose to think about ourselves. In the sequel post The Space Jockey and Our Endosymbiotic Future, he considers a more frightening possibility. . . .

To me, this unconscious evolution of our future endosymbiosis is far more fascinating, troublesome, and realistic than the self-directed apotheosis Singularity fetishists dream about. The Space Jockey is somehow our own future, and that, if nothing else, is the reason we should pay attention to him—and thus, ignore any Alien prequels. -
Eric Wargo

Thank you for that additional reference. I'll read it next.
 
I'm approaching the 3rd part of the book, about halfway through length-wise. It is an excellent book. [...]

That's an excellent overview of the first half of the book, Soupie. I knew you would be able to appreciate Thompson's development of neurophenomenology and its grounds in phenomenology given your past research into cognitive neuroscience and information theory.
 
I haven't read the Stanislaw Lem novel on which that extraordinary film is based. Perhaps, if it explores the issues more deeply, it would be a good text for us to read as a group. Have you read the novel, Steve? Do you think it would be a useful text for us to discuss?

Blade Runner is based on the novel Do Androids Dream of Electric Sheep? (DADOES) by Philip K. Dick, aka PKD, aka Horselover Fat.

Stanislaw Lem is known for Solaris - filmed twice. Tarkovsky's version (1972, I believe) is a work of art, as are Tarkovsky's other films, but it doesn't have much to do with the book.

Back to Dick: Blade Runner is different from the book - the film was conceived as a film noir and originally written for Robert Mitchum. Mitchum was, by all accounts, a hell of a guy and would have been excellent in the lead. If you ever get a chance to see Dick Cavett's interview with Mitchum, please do - he is humble, self-effacing, and quite funny. A mensch.





Dustin Hoffman was also considered for the role and apparently had significant input into the film. I'm not sure that DADOES the book would be useful - but it's worth your time to read. The film is multi-layered and would itself be a good "text" for discussion. In listening to a recent podcast on the film I realized that one of the elements of the film is the legend of the Fisher King.

Other texts and films by Dick also deal with AI and identity - Impostor and Screamers (from the short story "Second Variety") come to mind.

Other films include Minority Report and Total Recall - these didn't grab my interest. The Adjustment Bureau I thought was good - the short story is excellent and has quite a different tone from the film. There is another short story by Dick regarding angels that is also chilling; I'll try to find it, as I think there are some interesting tie-ins between angelology and AI that I'd like to explore.

Waking Life, a rotoscoped film by Linklater, has a very interesting sequence referring to Dick, and he also made A Scanner Darkly, which I understand is faithful to the book.

In the film, the replicants' lives are limited, I believe, as a way to control them - as they are of superior physical and mental ability. As Roy Batty dies he gains the empathy he lacked and spares Deckard's life, thus proving his humanity ... although Deckard's is left ambiguous.
 
That is an encouraging thought. However, the tendency in AI has been to project the possibility of 'downloading' or 'uploading' human sentience into machine substrates thus creating a human-computer interface. I've never thought it possible to interface, interconnect, consciousness/sentience with a computational machine, but the AI people do (naively imo) seem to think this is possible. Which suggests that they are going to need instruction from the neurophenomenologists, the sooner the better, before they do attempt to transplant machine intelligence into humans or vice versa in order to produce a 'posthuman' species.

There is so much to get into here and this is kind of rough thinking out loud. But if someone believes they can be uploaded:

(@Eric Wargo's arguments about sentience and the Turing test come into play here)

How will they know this has been done successfully?
How will the family know?

On the very practical side:

Who will be responsible for maintaining "you" in the new substrate?
How will you earn a living to pay for your rent on the server?
What happens if you fail to pay rent? Will you be archived or turned off?

And even more basic than that - you may have your own Turing test to pass, one proving not so much that you are sentient but that you are you. Here are a couple of scenarios:

1. uploads destroy the brain (this is a version by Kurzweil in an early book) - so if the transfer goes wrong (error or malware) you are lost ... possibilities include:
  • YOU 1.0: part of you is transferred, but not enough to pass a Turing test (administered by relatives or close ones?) or some kind of check-sum test (see the sketch after this list) - so data is lost or garbled, and this entity then has to be tested to see if it passes the Turing test - not as you, but as being human - otherwise it is discarded (or archived?)
  • if the upload is determined to be humanly sentient (but not you), how will its upkeep be maintained? it didn't ask to exist; is it responsible for its own being? :)
  • data is lost but enough remains to be sentient or possibly sentient but on the order of a profoundly cognitively disabled entity - what kind of existence will be possible for such an entity and should it be erased or archived?
2. uploads copy but don't destroy the brain
  • now you have all sorts of issues - can you upload your consciousness before death? if so, who is responsible for this entity? are you its parent? are you responsible for its actions and decisions?
  • if you are not allowed to upload until you are considered to be terminally ill - you have the same issues until the time of your physical death, but you also have the possibilities from #1 above, dealing with an error in your upload
etc etc
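
On the "check-sum test" mentioned in scenario 1: in ordinary software terms it would be something like the sketch below, where the state format, the transfer step, and the checksum helper are all hypothetical, invented purely for illustration:

```python
import hashlib
import pickle

# Hypothetical integrity check for an "upload" (illustrative only --
# no such state format or transfer exists; whether a bit-identical
# copy would be *you* is exactly the open question above).

def checksum(state: dict) -> str:
    """Hash a canonical serialization of the (hypothetical) brain state."""
    return hashlib.sha256(pickle.dumps(sorted(state.items()))).hexdigest()

source_state = {"memories": ["..."], "dispositions": ["..."]}
before = checksum(source_state)

uploaded_state = dict(source_state)   # stand-in for the transfer itself
after = checksum(uploaded_state)

# A checksum can certify that the bits survived the transfer; it says
# nothing about whether sentience -- or identity -- survived with them.
print("bits intact:", before == after)
```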

This is the sort of thing I think PKD would have explored, as I'm sure would many other authors.
 
Blade Runner is based on the novel Do Androids Dream of Electric Sheep? (DADOES) by Philip K. Dick, aka PKD, aka Horselover Fat.

Thanks. I remembered the engaging title, Do Androids Dream of Electric Sheep?, but not the author's name. My ex was reading a lot of science fiction at the time Blade Runner came out. He was especially impressed by the works of Olaf Stapledon. I never followed him into the SF world, so I know almost nothing about this genre. I do want to read Dick's novel, though. This review at Amazon concerns the significant differences between the novel and the film. Much as I was impressed and moved by the film (I wept during it and afterward), it's apparently a shallow misrepresentation of Dick's ideas.

"
stars-5-0._V192240867_.gif
Things Pretending to be People, March 23, 2007
By
J. Whelan
This review is from: Do Androids Dream of Electric Sheep? (Paperback)
This anti-robot novel is oft misunderstood by those who come to it with expectations formed by the pro-robot movie. The novel is essentially a paranoid fantasy about machines which pretend to be people. The pretense is so horrifyingly effective that a bounty hunter engaged in the entirely necessary task of rooting out and destroying these monsters finds that his own humanity has become imperiled.

The novel "DO ANDROIDS DREAM OF ELECTRIC SHEEP?", re-titled "BLADE RUNNER" to tie it to the Ridley Scott film loosely based on it, remains available under either title (and with separate entries on AMAZON), but it is the same book. The film studio wanted to market a "novelization" of the film, but PKD adamantly refused to authorize this, forcing them to instead market his original novel under the film's title. Good move, Phil!

This decision, however, has led to confusion and/or disappointment when readers approach the novel with expectations formed by the film. Many reviewers here (whether they like the book, the film, or both) have commented on how different they are. Few seem to realize, however, the extent that they are in direct and fundamental conflict. Some praise the book for tearing down the distinction between man and machine or promoting other nihilistic views and pro-robot messages that the author would have found abhorrent. Others pan it for lack of focus, or for otherwise failing to promote the film's pro-robot agenda as effectively as the film did.

The book is anti-robot and pro-human, and seeks to uphold the distinction between robot and human, and between illusion and reality, in the face of a most-insidious challenge. The common man is celebrated for his basic decency -- specifically his capacity for basic empathy and compassion -- and the robots are deplored for their complete lack of these qualities. In the book, even a "chickenhead" (a mentally retarded human mutant) is infinitely more valuable than the smartest robot.

The film was pro-robot and anti-human, promoting the idea that a compelling illusion is equivalent to reality. It glorifies the android as a sort of superman ("more human than human") -- stronger, faster, more beautiful, more intelligent -- who seems poised to inherit the future on a dying Earth. The film even seems to admire the robots for their ruthlessness.

The book makes Deckard (the protagonist) human, and loyal to humans. The film has Deckard switch sides and join the robots. Indeed, in the film (not the book) Deckard may himself be a robot (the latter is never made explicit, but the director has made clear it is what he intended). This means that, in the FILM, there are virtually no sympathetic human characters -- those characters who suggest that a man is worth more than a computer program are portrayed as bigots.

In PKD's view, the androids are unquestionably monsters who must be destroyed. The irony, and the central problem posed in the novel, is that their ability to SEEM human (which, in the NOVEL, is never more than meticulously-programmed fakery) means that those who must destroy robots risk damage to their own humanity in the process. Thus, the author approves of Deckard's wife, whose sympathy for the "poor andys" is evidence of her humanity, while still approving of Deckard's assignment.

In the novel, the robots' increased ability to fool the VK test is merely an advance in programmed mimicry of human test responses. The film, on the other hand, treats the improved performance on the VK test as evidence that the robots are truly "human". But the film's robots do not demonstrate compassion in any meaningful way. The agenda of the film is NOT so much to show that robots are as compassionate as humans, but rather to show that humans are as ruthless as robots (as evidenced, mainly, by their willingness to kill robots). This agenda is eerily similar to that of the TV androids near the end of the novel, who set out to expose human empathy as a myth.

In the novel, the title question must be answered in the negative. Androids DON'T care about other creatures. It is humans who have the capacity to care about other creatures -- ironically, even about androids -- even electric sheep.

So many, even among the author's admirers, have missed the novel's true focus that it may be best to defend my interpretation with a quote from the author himself, made shortly before his death (quoted in the book "Future Noir"):

"To me, the replicants are deplorable. They are cruel, they are cold,
they are heartless. They have no empathy, which is how the
Voight-Kampff test catches them out, and don't care about what happens
to other creatures. They are essentially less-than-human entities.

"Ridley, on the other hand, said he regarded them as supermen who
couldn't fly. He said they were smarter, stronger, and had faster
reflexes than humans. 'Golly!' That's all I could think of to reply
to that one. I mean, Ridley's attitude was quite a divergence from my
original point of view, since the theme of my book is that Deckard is
dehumanized through tracking down the androids. When I mentioned
this, Ridley said that he considered it an intellectual idea, and that
he was not interested in making an esoteric film."

Amazon.com: J. Whelan's review of Do Androids Dream of Electric Sheep?
 
I started re-reading DADOES ... On chapter two - I'd love to discuss it. I'm not sure I agree with the reviewer on all points. One example:

"But the film's robots do not demonstrate compassion in any meaningful way."

Rutger Hauer's character Roy Batty has a connection with Pris and is enraged when Deckard kills her ... but he does save Deckard's life at the end.

That said, Ridley Scott allegedly did not finish the book, and PKD died before the film's release. He did see some of the footage and claimed it captured the feel of his book, the grittiness, and he predicted the film would be groundbreaking, something critics did not immediately realize.
 
Chapter four takes an interesting turn regarding the empathy tests used to identify the androids and schizophrenia, which would be worth discussing. I read something recently re: schizophrenia, autism, and social functioning that may tie in. I believe Dick himself was diagnosed with schizophrenia.
 