Thompson actually notes Panksepp's work in that same section, "The Deep Continuity..." Yes. We need to realize that Thompson (and Varela before and with him) are already immersed in investigating the nexus of mind and nature in the evolution of life. Panksepp does not argue that consciousness or even protoconsciousness exists in the pre-neuronal primordial organisms in which he recognizes 'affectivity' and 'seeking behavior', but he does see affectivity and seeking behavior as the seeds of consciousness, the germination of what will become protoconsciousness and consciousness in the evolution of species. A few days ago I quoted this statement from the last paper by Panksepp I cited:
"There are reasons to believe that affective experience may reflect a most primitive form of consciousness (Panksepp, 2000b and Panksepp, 2004b), which may have provided an evolutionary platform for the emergence of more complex layers of consciousness."
Neurophenomenology moves philosophically beyond cognitive neuroscience, and scientifically deeper into the examination of nature than cognitive neuroscience was equipped to go, given its presuppositions about the brain as a biological computer. Many neuroscientists who were formerly satisfied with those presuppositions have followed Varela and Thompson's lead.
Someone posted a link as a comment following @Eric Wargo's essay on consciousness and AI at:
Mysterianism and the Question of Machine Sentience @ The Nightshirt
The link goes to this apparently recent article:
"Consciousness Does Not Compute (and Never Will), Says Korean Scientist"
Daegene Song's research into strong AI could be key to answering fundamental brain science questions
May 05, 2015, 08:45 ET from Daegene Song
Consciousness Does Not Compute (and Never Will), Says Korean Scientist -- CHUNGCHEONGBUK-DO, South Korea, May 5, 2015 /PRNewswire/ --
[The arXiv link (to a version of Song's paper dated 2008) is posted at the bottom of the article and again below. The announcement of Song's research above is dated May 5, 2015, so it's not clear whether there is a new version of the paper somewhere. One must be skilled in quantum mechanics and mathematics to be able to read the paper in any case, which I am not.]
http://arxiv.org/pdf/0705.1617v1.pdf
Of particular interest to me is Wargo's reasoning in "Mysterianism and the Question of Machine Sentience" that strong AI would not be capable of consciousness (in his term, 'sentience') and thus would pose no danger to its managers or the human species, sentience being in Wargo's view the source of negative and destructive potentialities in humans. I don't understand Wargo's claim or the reasoning on which he bases it. As I see it, it is consciousness/sentience that opens the mind to larger perspectives and reflections on life, mind, and existential responsibility in the world than computational cognition is capable of, and thus provides a check on negative ideations, ideologies, and behaviors that might develop in strong AI. This was also Hans Jonas's view, as articulated by the author of the Foreword to his book The Phenomenon of Life: Toward a Philosophical Biology, available to read at the Google Books link below. Anyone else have a view on this cluster of ideas? I recommend the Jonas book and in particular, for present purposes, the Foreword.
The Phenomenon of Life: Toward a Philosophical Biology - Hans Jonas - Google Books
will do... been a bit busy.

Monsieur Merleau-Ponty.
Vital = living
Physical = non-living
PS Let us know if/when you share your latest JCS submission that you mentioned.
"The structure of behavior," A. Fisher translation.will do... been a bit busy.
On the MP quotation... can you give me the reference publication, please?
I was rather impressed by an article I read recently by Vyvyan Evans
"Lexical Concepts, Cognitive Models and Meaning-Construction "
It is consistent with HCT.
I was curious, however, to understand more fully his ideas on concepts and "encyclopaedic knowledge":
Vyvyan Evans -- Professor of Linguistics
I read what he had to say about concepts and spatial and temporal perception. Very disappointed with them.
"
I think we should also discuss the pressing issue of AGI -- artificial general intelligence -- toward which our technological civilization is now being driven in the apparently wide-awake understanding that it will 'replace' our own intelligence with self-directed machine intelligence. Steve quotes @Eric Wargo above as follows:
"To Fundamentalists skeptical of machine sentience, artilects will be the incredibly brilliant but “empty” ventriloquists of their ambitious materialist makers. While everyone else is focused on the machines and what they can (or can’t) do, the Fundamentalists will discern that it is the machines’ human builders and masters (the 21st Century’s Edward Tellers) who remain the real threat to our freedom and our future."
Eric's characterization of the contemporary argument between what he calls 'Fundamentalists' and the proponents of our species' replacement by AI is accurate. Let's explore the grounds upon which both sides debate the significance and anticipated results of this 'singularity', and take into consideration Varela's and others' insights into the intersubjective nature of consciousness and what it makes possible and enables in the world, of which the technologists seem to possess no awareness.
Let's start by reading Eric's essay "Mysterianism and the Question of Machine Sentience" at this link:
Mysterianism and the Question of Machine Sentience @ The Nightshirt
I don't understand Wargo's claim or the reasoning on which he bases it. As I see it, it is consciousness/sentience that opens the mind to larger perspectives and reflections on life, mind, and existential responsibility in the world than computational cognition is capable of, and thus provides a check on negative ideations, ideologies, and behaviors that might develop in strong AI.
And Wargo is just saying that such a check isn't needed for machines because without sentience, negative ideations, ideologies and behaviors aren't possible ... they are just machines.
In "Mysterianism and the Question of Machine Sentience," Wargo doesn't fear sentient machines because he doesn't think them possible; his concern is how we think about these intelligent machines and how we choose to think about ourselves. In the sequel post, "The Space Jockey and Our Endosymbiotic Future," he considers a more frightening possibility. . . .
To me, this unconscious evolution of our future endosymbiosis is far more fascinating, troublesome, and realistic than the self-directed apotheosis Singularity fetishists dream about. The Space Jockey is somehow our own future, and that, if nothing else, is the reason we should pay attention to him—and thus, ignore any Alien prequels.
Eric Wargo
@Constance
I'm approaching the 3rd part of the book, about halfway through length-wise. It is an excellent book. So far the discussion has been on life and the evolution of life. Consciousness has only been discussed indirectly. The second half of the book appears to be devoted to consciousness.
The arguments/approaches to life are very rich and subtle as well.
Thompson discussed at great length problems with the gene-centric view of evolution. He compared it to the computationalist approach to mind. At one point, he argued that computationalist views are akin to dualism in that they separate matter and information.
I don't disagree, but at the same time, the autopoietic and developmental systems theory approaches (which he discusses at length) refer to organisms as "systems" and "life cycles."
Elsewhere, and I'm paraphrasing, Thompson describes metabolism (life) as an island of form amidst a sea of (ever changing) matter and energy.
So, while this is quite different from computationalism, Thompson, in my view, is still articulating a "dualism" between form and substance. Even in his initial outline of autopoiesis, he explained that neither the substance nor the specific organization mattered per se; so long as a system is self-producing, it can be autopoietic.
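To make that substrate-independence point concrete, here is a toy sketch of my own (not Thompson's formalism): call a system "self-producing" if every component is the output of at least one process whose inputs lie entirely inside the system. All the component names and "reactions" below are invented for illustration.

```python
# Toy illustration of substrate-independence in self-production.
# A system is a set of components plus processes (inputs -> outputs).
# It counts as "self-producing" if every component is regenerated by
# some process that runs entirely on the system's own components.

def is_self_producing(components, processes):
    produced = set()
    for inputs, outputs in processes:
        if inputs <= components:       # the process can run internally
            produced |= outputs
    return components <= produced      # everything gets regenerated

# The same closed organization realized in two different "substrates":
chemical = (
    {"membrane", "enzyme", "metabolite"},
    [({"metabolite", "enzyme"}, {"membrane"}),
     ({"membrane", "metabolite"}, {"enzyme"}),
     ({"enzyme"}, {"metabolite"})],
)
symbolic = (
    {"A", "B", "C"},
    [({"C", "B"}, {"A"}),
     ({"A", "C"}, {"B"}),
     ({"B"}, {"C"})],
)

for name, (comps, procs) in [("chemical", chemical), ("symbolic", symbolic)]:
    print(name, is_self_producing(comps, procs))   # True for both
```

Both systems pass the check because the check only sees the closed loop of production, never what the components are made of; as I read Thompson, that is exactly why autopoiesis is defined over organization rather than substance.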
The discussion of the process of evolution is likewise rich and also subtle in its differences with the gene-centric view. If I follow, the accepted view is that organisms/DNA are stable/static and undergo change to adapt to problems as they arise in the environment.
On the DST approach, organisms—or rather, life-cycles/forms—are not static, but have, at their core, flexibility.
These "life-cycles" are still structurally coupled to their environments and individual instances of these life-cycles (organisms) still have fitness values, but it is measured not by the life-cycle's correspondence with the environment, but by the form's "self-replicating power" (pp. 204).
A theme of DST is that life-cycle and environment are co-dependent; they influence one another. Re "self-replicating power" I'm reminded of a discussion @smcder and I have had many times about the "adaptability" of humans and other organisms.
It seems it's a different kind of adaptability; maybe the term isn't even appropriate. That is, rather than "adapting" to the environment, certain organisms (more than I previously considered, perhaps all) are able to shape the environment to meet their needs (creativity?).
Again, a very subtle difference, easy to miss. I need to hear more about how "self-replicating power" is defined, and in which ways it differs from adaptation to the environment.
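For what it's worth, here is a toy simulation I put together (not from the book; all dynamics and numbers are invented) contrasting the two strategies: a lineage that changes its trait to track a drifting environment versus one that pulls the environment toward its trait. Measured by growth, i.e., something like "self-replicating power," the two come out the same; only the direction of the causal arrow differs, which may be why the distinction is so easy to miss.

```python
import random

def run(generations=50, niche_constructor=False):
    rng = random.Random(1)        # same drift sequence for both strategies
    env, trait = 0.0, 0.5         # environmental state and heritable trait
    population = 10.0
    for _ in range(generations):
        env += rng.uniform(-0.2, 0.2)       # environment drifts on its own
        if niche_constructor:
            env += 0.3 * (trait - env)      # lineage reshapes the environment
        else:
            trait += 0.3 * (env - trait)    # lineage adapts to the environment
        mismatch = abs(trait - env)
        population *= 1.1 - 0.2 * mismatch  # growth net of mismatch cost
    return population

print("adapter:          ", round(run(niche_constructor=False), 2))
print("niche constructor:", round(run(niche_constructor=True), 2))
# Both print the same number: the fitness bookkeeping cannot tell
# which side of the organism-environment coupling did the moving.
```

The symmetry is the point of the sketch: a fitness measure defined purely over correspondence with the environment is blind to whether the organism changed itself or changed its world, whereas "self-replicating power" is agnostic about the mechanism by which the form sustains its own replication.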
Wow. A great book. Soaking it all in. Haven't even gotten to discussion of consciousness/mind yet.
That is an encouraging thought. However, the tendency in AI has been to project the possibility of 'downloading' or 'uploading' human sentience into machine substrates thus creating a human-computer interface. I've never thought it possible to interface, interconnect, consciousness/sentience with a computational machine, but the AI people do (naively imo) seem to think this is possible. Which suggests that they are going to need instruction from the neurophenomenologists, the sooner the better, before they do attempt to transplant machine intelligence into humans or vice versa in order to produce a 'posthuman' species.
In Blade Runner we are presented with fictional representations of human-machine hybrids who find their condition intolerable precisely because they possess sentience -- both sensual phenomenal awareness of the actual world they are part of and self-reference, producing the full range of human desires and motivations expressed in their leader's demand, on the androids' behalf, for "more life." As you'll recall, their immediate situation is that they are all about to be obliterated in order to make way for a further-developed 'species' of android. I haven't read the Stanislaw Lem novel on which that extraordinary film is based. Perhaps, if it explores the issues more deeply, it would be a good text for us to read as a group. Have you read the novel, Steve? Do you think it would be a useful text for us to discuss?
Blade Runner is based on the novel Do Androids Dream of Electric Sheep? (DADOES) by Philip K. Dick, aka PKD, aka Horselover Fat.