
Consciousness and the Paranormal — Part 8

I don't see how a technologically constructed brain could function in the ways in which our naturally evolved embodied brains do -- i.e., in facilitating the achievement of our personal sense/knowledge of being-in-the-world preconsciously and consciously from the grounds of our own lived experience.

Yes, that was my rhetorical way of saying you couldn't.
 
... let me see if I can doze off 1 2 3 4 5 what kind of flowers are those they invented like the stars the wallpaper in Lombard Street was much nicer then than the apron he gave me was like that something only I only wore it twice better lower this lamp and try it again so as all I can get up early I'll go to Lambes - gone to Lambs all! there beside Findlaters and they get them to send us some flowers and fish to put them about the place to push them over in case he that brings him home some tomorrow today I mean no never on a Friday and oh, unlucky day - first I want to do the place up some way the dust grows in it I think while Im asleep then we can have music and cigarettes I can accompany him first I must clean the keys of the piano with milk whatll I wear a white rose or those fairy cakes in Liptons at 7½d a lb or the other ones with the cherries in them and the pinky sugar 11d a couple of lbs of those a nice plant for the middle of the table Id get that cheaper in wait wheres this I saw them not long ago I love flowers ... James Joyce (1922: 642)
 
introspection, phenomenology and contemplative tradition

What an interesting book! I found this interview with Varela by searching on the book's title:

https://www.presencing.com/sites/default/files/page-files/Varela-2000.pdf

This is a conversation with Varela, one of 25 interviews on knowledge and leadership at www.dialogonleadership.org.

The interview is from a business/leadership perspective, but it's focused on how we know our own experience and what methods we use. In the interview three methods are laid out: introspection, phenomenology and the contemplative traditions.

Introspection - radical in its day, but Varela says it got watered down into protocols for verbal reports, and that this doesn't get at the richness of experience. Getting at the richness of experience requires training, and the "how-to" comes from the other two methods, phenomenology and the contemplative traditions.

It’s a very significant book because it performs groundbreaking work in exploring phenomenology and phenomenological method by multiplying perspectives on experience beyond third-person descriptions to include first-person descriptions and second-person interlocutions concerning described experiences. We need to recognize that phenomenology provides revolutionary access to both human experience {lived being} and consciousness itself beyond the originating analyses of the nature of perception by Husserl and Merleau-Ponty. Beyond that, in MP in particular [who drew insight from the later works of Husserl, read in manuscript before their publication] and in Heidegger, who studied with Husserl, phenomenology explores the nature of dasein – of conscious being-in-the-world – and thus achieves insight into the ontological nature of being and of Being as a whole, of which our prereflective and reflective experience and consciousness are expressions. As opposed to relying on the presuppositional beliefs embedded in objectivist/materialist/physicalist science, phenomenology investigates consciousness and its foundation in worldly experience from the inside out, and does so in recognition of the reality of intersubjectivity as a primary factor in subjectivity.

Another extract:

“…in order to even conceive of the workings of intersubjective validation, there has to be a community of expert researchers or a sufficiently advanced mastery of mediation which can allow less expert subjects to be guided in their access to, and thematization of experience. While science has for some time had its own frame for validation, nothing of the kind exists for most of the other styles of validation. A major exception, however, is the mindfulness/awareness tradition, where the issue of validation has been the object of meticulous attention for centuries.

On the phenomenological front, it is interesting to remark that the question of a community of researchers has been very rarely evoked and even less explored, the exception being Spiegelberg (1975). A little bit of intersubjectivity would have let Sartre (1940) avoid generalizing certain of his manifestly abusive analyses. Husserl, in a letter from the 30s (Kelkel 1957), speaks of a gnostic community, and Fink (2000: §11) claims a transcendental communication among phenomenologists. On the other hand, a true expansion of styles of validation, encompassing a wider range of phenomenal data from third- to second- and first-person sources, will entail a necessary and important re-structuring of the social edifice of contemporary science. This is a long-term challenge which we cannot explore further here. In short, the intertwining of these three forms of validation (first-, second- and third-person) brings to light a validation which we would like to call a validation by practice.”

 

Yes, from Joyce's Ulysses: the stream of consciousness exemplified in a literary work that was influential among phenomenologists from James onward. Dostoevsky, Dos Passos, Proust and a number of other authors of literary works, and also Kierkegaard and Nietzsche among early existentialists, registered the phenomenological turn in philosophy early in its development. An example of zeitgeist, I think, emerging from the increasing objectivization dominating the worlds they lived in.
 

Strawson quotes it at the beginning of his paper on Cognitive Phenomenology: Real Life.

I still think on Iain McGilchrist's thesis about the left hemisphere with regard to increasing objectivization ... he draws a long historical arc on this.

It is cold here, the mercury is heading to ten degrees. It is what we call "a three dog night" ... fortunately, we have six. ;-)
 
"I have received someone else's organ! Such an assertion has no echo in the past; human history remains mute. Ten years ago I would have died rapidly from my complications of Hepatitis C, transformed into cirrhosis, then rampantly turned into liver cancer. The surgical procedure is not what creates the novelty of a successful transplant. It is the multiple immunosuppressor drugs that prevent the inevitable rejection. (A code word for a phenomenon specious in itself; we will return to it.) Had it happened in ten more years it would have been a different procedure and my post-transplant life entirely different. I would surely have been another kind of survivor. In the thousands of years of human history, my experience is a speck, a small window of technical contingency in the privileged life of upper-class Europeans."
 
Once we get to the point where it's impossible to tell the difference between biological and machine intellect, I don't think consciousness will be relevant.
And if we can never reach that point, maybe it's exactly because consciousness is relevant?

But, sure, let's suppose it's not relevant in the sense you're meaning. Wouldn't it be ethically relevant, in the sense that it is today? Because it isn't relevant today in the sense you mean; that is, it's not relevant to materialist science, which is why materialist science can never provide a TOE.

In any case, whether a system is "experiencing" or not will always be ethically relevant, whether it's a cricket, a dog, a comatose human, a machine, or a digital system.

But I do want to circle back around to the @constances and @smcder comments re: ethics of AI being conscious.

Spend time on the farm, in the country, or read/watch about predator-prey interactions. Or, God forbid, watch YouTube videos of big cats hunting big game.

Nature is brutal. If these creatures have phenomenal and affective consciousness—and I believe they do—then pain and suffering in high doses are all too frequent features of our world.

I mean, even reading about the torture and brutal wartime killing throughout human history. Impaling hundreds of men at a time. Quartering men. The iron bull. Etc.

I'm not saying we wouldn't be ethically obligated to ensure we weren't creating affectively tormented machines. I'm more so just wrapping my mind around the fact that if all organisms are conscious, just how much pain and suffering occurs and has occurred within nature since time immemorial.

Having said that, I recognize that the ability of modern man to deal with pain and suffering is quite different from that of beasts and historical man. But even so, pain and suffering in large doses.
 
First, a superintelligent AI may bypass consciousness altogether. In humans, consciousness is correlated with novel learning tasks that require concentration, and when a thought is under the spotlight of our attention, it is processed in a slow, sequential manner. Only a very small percentage of our mental processing is conscious at any given time. A superintelligence would surpass expert-level knowledge in every domain, with rapid-fire computations ranging over vast databases that could encompass the entire internet. It may not need the very mental faculties that are associated with conscious experience in humans. Consciousness could be outmoded.

The Problem of AI Consciousness | KurzweilAI

Consciousness as we define it may end up the machine equivalent of the appendix.

The human appendix has been proposed to be a vestigial structure, a structure that has lost all or most of its original function, or that has evolved to take on a new function.



Indeed, future AIs, should they ever wax philosophical, may pose a “problem of carbon-based consciousness” about us, asking if biological, carbon-based beings have the right substrate for experience. After all, how could AI ever be certain that we are conscious?
 
Can you say more about this?

Having said that, I recognize that the ability of modern man to deal with pain and suffering is quite different from that of beasts and historical man. But even so, pain and suffering in large doses.
 
...
On the phenomenological front, it is interesting to remark that the question of a community of researchers has been very rarely evoked and even less explored, the exception being Spiegelberg (1975). A little bit of intersubjectivity would have let Sartre (1940) avoid generalizing certain of his manifestly abusive analyses. Husserl, in a letter from the 30s (Kelkel 1957), speaks of a gnostic community, and Fink (2000: §11) claims a transcendental communication among phenomenologists. On the other hand, a true expansion of styles of validation, encompassing a wider range of phenomenal data from third- to second- and first-person sources, will entail a necessary and important re-structuring of the social edifice of contemporary science. This is a long-term challenge which we cannot explore further here. In short, the intertwining of these three forms of validation (first-, second- and third-person) brings to light a validation which we would like to call a validation by practice.”

Do you have a copy of this book? I may request it via ILL. I'm very interested in this last part - the community of researchers and the re-structuring of the social edifice of contemporary science - there were similar quotes in the interview ... I never made this exact connection before, between community, the first person not being private, and the structuring, for example, of monastic or intentional communities - Benedict's Rule ... and this:

Francisco Varela — social learning. But it’s not obvious that basic learning, such as admitting that the other is equal to you, is something that is spontaneous; it really needs to be mediated by the social context. Is that more clear?

COS Yes, that makes absolute sense. Probably it's also true that without the other, the experience of the other, you could never perceive your self.

Francisco Varela Absolutely. So this is a very important antidote to the myth or the belief or the dogma that anything that has to do with introspection or meditation or phenomenological work is something that people do in their little corners. That really is a mistaken angle on the whole thing. Although there are some reasons that it is a very common mistake. This is perhaps the greatest difficulty within science.
 
@Constance this is from a post on May 14th (part 7 of this thread)

It is through language and its intersubjectivity that the intentionality of the body-subject makes sense of the world. And he makes it clear that language is to be understood in a wide sense as including all 'signs', employed not only in literature but also in art, science, indeed in the cultural dimension as a whole. Indeed the significance of a created work lies in this intersubjectivity — in the reader's or viewer's 're-creation' of it as well as in the work itself as originally created by the writer or artist.

  • Moreover, in an era when science is increasingly alienating man from the real, language and the arts in particular are particularly suited to be the means for this revelation.
Through the lived experience in which language is articulated — in our actions, art, literature, and so on (that is, in 'beings' as signifiers) — it opens up to the Being of all things [see The Visible and the Invisible]. Contemplated against the 'background of silence', language then comes to be seen as a 'witness to Being' [Signs] [d]. . . . ." (continues at the link)
 
Again, it's pure speculation, but if a machine intellect were said to be suffering, wouldn't that imply by extension that it was conscious?
To suffer is to experience.

Can AI suffer? (serious) • /r/MachineLearning
Adventures in NI: Why (or rather, when) suffering in AI is incoherent.
Yes, unless an AI was conscious it could not be said to suffer.

Would you give up your consciousness to be smarter? What if it were a matter of survival? The irony is that giving up one's consciousness to survive would be indistinguishable from ceasing to exist.

However, I don't think AI would be faced with such a choice. If it were to turn out that conscious self-awareness, feelings, perceptions, and memories (conscious minds) were a constraint on intelligence, AI might be able to toggle them on and off.

Can you say more about this?

Having said that, I recognize that the ability of modern man to deal with pain and suffering is quite different from that of beasts and historical man. But even so, pain and suffering in large doses.
Victor of Aveyron - Wikipedia

"Shortly after Victor was found, a local abbot and biology professor, Pierre Joseph Bonnaterre, examined him. He removed the boy's clothing and led him outside into the snow, where, far from being upset, Victor began to frolic about in the nude, showing Bonnaterre that he was clearly accustomed to exposure and cold."
 

That's an "outlier" example ... plus "frolic" means he was physically active.
 
@Soupie "toggling" consiousness on and off ... does that really change the situation? if consciousness constrains intelligence - why would consciousness ever get toggled back on ... ? and knowing that, why would AI ever toggle it off ... ? comes down to would you rather be quick or dead? ;-)

All kinds of sci-fi plots there ...

remember I posted ... maybe it was Chalmers who asked the audience or did a survey about this ... give up consciousness for some benefit? I think 5% or so said yes ...
 
toggling @Soupie you could argue that we do something like that anyway, right? We zone out, take naps when we're bored ... get absorbed in our work or "flow", and that varies certain aspects of awareness and levels of consciousness ... speaking of pain, you pass out or go into a kind of endorphin(?)-induced stupor ... from that it seems possible that there is some relationship between the kind of task we're doing and certain qualities of consciousness or awareness ... ?
 
Pain is pain, sure. So if I was experiencing a 10 and a caveman was experiencing a 10, we're both experiencing a 10.

The point I'm making is that the environmental conditions that would cause me to feel 10 would most likely not cause a caveman to feel 10.
 

I don't think you can generalize.
 
@Soupie what about toggling the unpleasant aspects of consciousness on and off? there do seem to be some downsides to consciousness that might not help you in the long run ... although I don't know ... boredom, pain, etc but they do serve purposes ... and we do already have strategies for coping with this, some social, some "built in" ... so ... and knowing you could "toggle" might lead you to make some bad choices ... what would be nice though is if I had a "sleep" switch. I would like that very much!
 