Substrate-independent minds

Hey, I just asked the next logical question following Soupie's summary of what Tononi-Koch presented. There was no subtext in my question, so why are you addressing this response to me?

Why did I address my response to you? Because you are the one who posted the question, and it wasn't directed at anyone in particular. Therefore it was open to the floor, so to speak. More simply: Hey, you asked. So why would I not quote you?
 
I'm hopelessly behind in tracking the voluminous flow of ideas here. Scanning posts has triggered a sort of tongue-in-cheek anecdote from one of Pickover's speculative books on the subject of the mind-body problem and substrate independence. To paraphrase: Scientists of the future have finally succeeded in transferring a human mind into a highly advanced computer. The computer's first comment was to the effect of: "Get me out of here! I feel like I'm drowning!"

Cognitive dissonance caused by awareness of the mind-body problem would seem an unavoidable aspect of higher cognitive function. Perhaps awareness by all humans that they will one day die drives us, in subtle and sometimes not so subtle ways, to live a kind of functional insanity.

Re the hive mind: Who's to say we are not already part of such? This subject has been broached here before. But humans interact, like neurons, according to sets of both innate and socially learned (and, like neurons, changeable) rules as they constantly interact in pairs and in groups of varied sizes. Memes are generated and dissolved, promoted and suppressed, by a kind of dynamic consensus that goes largely unseen from the individual viewpoint. Some may flash and circulate, like electrical signals between neurons, throughout societies, until a kind of social action potential is achieved and collective action taken. If consciousness is an emergence arising from the interaction of neurons, might it also emerge unseen from the rule-bound interactions of individual humans? Would the larger consciousness be aware of the human individual, but individuals not be aware of the conscious emergence they unwittingly produce?

Even putting the question of self-awareness aside, we can at least say with confidence that complex processes not achievable by any one person are achieved by the collective. IMO, it would not be a stretch to say that human populations comprise a kind of hive intelligence.
 

The Splintered Mind: Is Crazyism Obvious?
see in particular "Chinese Nation Functionalism"

China brain - Wikipedia, the free encyclopedia

In the philosophy of mind, the China brain thought experiment (also known as the Chinese Nation or Chinese Gym) considers what would happen if each member of the Chinese nation were asked to simulate the action of one neuron in the brain, using telephones or walkie-talkies to simulate the axons and dendrites that connect neurons. Would this arrangement have a mind or consciousness in the same way that brains do?
Early versions of this argument were put forward in 1974 by Lawrence Davis[1] and again in 1978 by Ned Block.[2] Block argues that the China brain would not have a mind, whereas Daniel Dennett argues that it would.[3]

More generally, the term hive mind (or, if you are a musician, jive mind) covers a range of ideas ...

A hive mind or group mind may refer to a number of uses or concepts, ranging from positive to neutral and pejorative.

So we could easily talk at cross purposes ... from positive to neutral to pejorative. I'm not sure I've seen a definition here in the thread?

I'm not sure how @mike feels about it - but I'm assuming no one likes the idea of being a unit in the Borg collective? Although the Cybermen seem to be very similar and I think Mike's said he is OK with that conception.
 

Interesting links ( as usual for you :) ).

In the China Brain thought experiment we can simply replace the humans with switches, and that's essentially what we already have now with large networks. Do these large networks have their own independent consciousness? I don't think so because I'm not convinced that consciousness is simply a matter of having enough switches. The brain is made up of specialized regions with specialized functions that communicate with each other in very specific ways. But even if we could get that all happening, we don't know that consciousness emerges solely from the arrangement and operation of assigned switches.

Maybe our sense of experience is dependent on the way the switches themselves work, and arises as a byproduct of the functioning of the system. In other words, substrate independence may require more than replicating the neuronal switches, but also the fields and other properties generated by biological cells, and it might not be possible to do that with anything other than biological cells arranged in a precise way. Or maybe it has something to do with the beliefs found in the land of woo. I don't know ( though I wouldn't personally place any bets there ).
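
To make the "switches" picture concrete, here's a purely illustrative toy sketch (my own, not anything from the paper; the unit count, random weights, and threshold are all made up) of what "a network of switches" amounts to at this level of description: binary units whose next state depends only on a weighted sum of the others' current states.

    import random

    # Toy "network of switches": each unit is a binary switch whose next state
    # depends only on a weighted sum of the other units' current states.
    # Nothing here claims to produce experience; it just shows the level of
    # description at which "replace the neurons/humans with switches" operates.
    N = 8
    weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
    state = [random.randint(0, 1) for _ in range(N)]

    def step(state):
        # A switch turns on iff its weighted input exceeds zero.
        return [1 if sum(w * s for w, s in zip(weights[i], state)) > 0 else 0
                for i in range(N)]

    for t in range(5):
        state = step(state)
        print(t, state)

Whether scaling something like this up, or wiring it into specialized interconnected regions, ever yields experience is exactly the open question.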
 
. . . Maybe our sense of experience is dependent on the way the switches themselves work, and arises as a byproduct of the functioning of the system. In other words, substrate independence may require more than replicating the neuronal switches, but also the fields and other properties generated by biological cells, and it might not be possible to do that with anything other than biological cells arranged in a precise way. Or maybe it has something to do with the beliefs found in the land of woo. I don't know ( though I wouldn't personally place any bets there ).

I take that as a modification/moderation of what you referred to as a 'straw man' [without identifying the straw man] in your response last night to my earlier question: "So the next question is what is meant by 'artificial neurons'?" If so, I think you're now on the right track in approaching an understanding of what Tononi-Koch are saying and doing with IIT.3.

In the earlier two versions of IIT it was presupposed that human consciousness (on the basis of which mind develops) could be explained in terms of quantities of 'information' that somehow produced the qualities of information we recognize in our own consciousnesses and those of others. Tononi and Koch make it very clear in the paper we've been referring to in this thread that they now recognize that qualitative experience is what calls for explanation in consciousness studies and philosophy of mind. And so they now begin with characteristics of human consciousness previously recognized through phenomenological descriptions of human responses to the encountered environments in which we and other animals live.

Somewhere along the way in their attempt to build an informational theory of consciousness quantified in terms of sufficient 'integration' of information, they have recognized (perhaps with help from others working in consciousness studies and philosophy of mind) that the 'integration' they postulate arises in the interaction of subjective and objective poles of experience already elaborated in phenomenological philosophy (and implicit as well in Jaak Panksepp's identification of 'affectivity' of primitive organisms leading to proto-consciousness and ultimately to consciousness as we experience it).

If their original goal was to explain/account for how consciousness arises in a material world described in solely 'objectivist' terms, it appears that they are no longer comfortable working within those terms, within that presupposition. IIT.3 thus now pursues an inquiry distinguished from the question whether 'artificial intelligences' can be expected to experience qualitative consciousness as it shows up in the behaviors of animals and humans interacting in and with their physical environments in time.

Yesterday Soupie linked (in the C&P thread) another paper by Tononi-Koch, also published in May of this year, in which they elaborate the differences between IIT 1.0/2.0 and IIT 3.0 for a technically knowledgeable audience of researchers who have been attempting to use the earlier versions of IIT in their own research:

http://www.ploscompbiol.org/article/fetchObject.action?uri=info:doi/10.1371/journal.pcbi.1003588&representation=PDF

That paper is daunting for ordinary readers, but its concluding paragraphs are not difficult to understand:

". . . the primary aim of IIT 3.0 is simply to begin characterizing, in a self-consistent and explicit manner, the fundamental properties of consciousness and of the physical systems that can support it. Hopefully, heuristic measures and experimental approaches inspired by this theoretical framework will make it possible to test some of the predictions of the theory [14,69]. Deriving bounded approximations to the explicit formalism of IIT 3.0 is also crucial for establishing in more complex networks how some of the properties described here scale with system size and as a function of system architecture.

The above formulation of IIT 3.0 is also incomplete:

i) We did not discuss the relationship between MICS and specific aspects of phenomenology, such as the clustering into modalities and submodalities, and the characteristic "feel" of different aspects of experience (space, shape, color and so on; but see [4–6,18]).

ii) In the examples above, we assumed that the "micro" spatio-temporal grain size of elementary logic gates updating every time step was optimal. In general, however, for any given system the optimal grain size needs to be established by examining at which spatio-temporal level integrated information reaches a maximum [20]. In terms of integrated information, then, the macro may emerge over the micro, just like the whole may emerge above the parts.

iii) While emphasizing that meaning is always internal to a complex (it is self-generated and self-referential), we did not discuss in any detail how meaning originates through the nesting of concepts within MICS (its holistic nature).

iv) In IIT, the relationship between the MICS generated by a complex of mechanisms, such as a brain, and the environment to which it is adapted, is not one of "information processing", but rather one of "matching" between internal and external causal structures [4,6]. Matching can be quantified as the distance between the set of MICS generated when a system interacts with its typical environment and those generated when it is exposed to a structureless ("scrambled") version of it [6,70]. The notion of matching, and the prediction that adaptation to an environment should lead to an increase in matching and thereby to an increase in consciousness, will be investigated in future work, both by evolving simulated agents in virtual environments ("animats" [71–73]), and through neurophysiological experiments.

v) IIT 3.0 explicitly treats integrated information and causation as one and the same thing, but the many implications of this approach need to be explored in depth in future work. For example, IIT implies that each individual consciousness is a local maximum of causal power. Hence, if having causal power is a requirement for existence, then consciousness is maximally real. Moreover, it is real in and of itself – from its own intrinsic perspective – without the need for an external observer to come into being."
 
I take that as a modification/moderation of what you referred to as a 'straw man' [without identifying the straw man] in your response last night to my earlier question: "So the next question is what is meant by 'artificial neurons'?"
The straw man was the use of a star simulation as a rationale for why a brain simulation would not be comparable to a biological brain. However, a star is not a brain, and therefore the star is the straw man in the argument. On the other hand, a computer is a kind of brain. Whether or not one can be engineered and programmed to produce experiences similar to those of our biological brains has yet to be determined. Nor do those experiences have to be identical in every respect to a human brain's in order to count as experiential.

For example it would be true that a virtual Sally or Joe would not be a biological Sally or Joe ( the originals ), but that doesn't mean that Joe and Sally's virtual counterparts are necessarily devoid of experience. They might very well be their own conscious selves and possess many similarities to their biological originals. Yet they would not be the originals, and this is what I was getting at in our past discussions on reincarnation and such. Simply transferring things like memory and other traits into another support system ( whatever that may be ) does not constitute continuity of consciousness or personhood.

If so, I think you're now on the right track in approaching an understanding of what Tononi-Koch are saying and doing with IIT.3. . . .
Yes. All very interesting. Thank you for introducing these researchers into the discussion.
 
I think you misunderstood what T-K were saying in pointing out the difference between an object in nature and a simulated object. They were drawing an analogy to a difference they recognize between actual embodied consciousness as it is experienced by living organisms and 'virtual consciousness', which yet remains a concept that might not be realized to any significant extent, as you recognize. The difference matters in a situation where well-funded AI engineers and their corporate partners are attempting to replace human experiential consciousness with a machine replica that likely cannot experience and therefore think about the actual world -- the actual conditions within which our and other species live. If AI cannot experience the actualities of biological existence, it cannot be expected to understand or empathise with biological beings or manage the resources of the planet in terms of their benefit and survival.

If you think I've misunderstood something, it would be more helpful to explain why my analysis of what the authors are saying is flawed than to explain what you think the authors mean. You seem to understand what they are saying just fine, but also seem to be missing the points I was making as to why what they are saying isn't coherent. Maybe that's my fault.

To clarify my position further, there is a profound difference between a star simulated by a computer and a human brain simulated by a computer. A human brain simulated by a computer is essentially an electronic brain configured to process signals like a human brain, while a star is something else altogether. A "computer simulation of a giant star will not bend spacetime around the machine". However, a computer simulation of a human brain might very well see the space around itself if hooked up to a camera. So although simulating a human brain doesn't make the computer a human being, that point isn't relevant to the essential question: might it possess consciousness?

To elaborate, their argument goes like this: First they ask, "Could such a digital simulacrum ever be conscious?" Then they require, for such consciousness to be granted, that a separate question be answered affirmatively when it is not possible to do so: "Why should we not grant to this simulacrum the same consciousness we grant to a fellow human?" If we blindly accept that the second question is valid with respect to the first, it automatically sets up the first question for failure.


Their mistake is that the second question has no relevance to the first. It's not necessary for the simulated brain to have the "same consciousness" as its biological counterpart; it's only necessary for it to be conscious, period. So any argument against consciousness based on their assumption of sameness is irrelevant. Their second question should be, "Why should we not assume this simulacrum possesses its own consciousness?" Or, better still, they could have simply done away with the second question altogether.

So in addition to the straw man (a star is not a brain), it's a logical flaw to assume that because two different brains are not the same (one electronic, the other biological), only one of them can possess consciousness. Perhaps both can; we don't know. It's not an easy question to answer. The truth of one's own consciousness appears to be available only to the ones who possess it. This point was illustrated in the movie Transcendence, a clip from which is included below again for your convenience:



 
If you think I've misunderstood something, what you need to do rather than explaining what the authors are saying, is to explain why my analysis of what they are saying is flawed. So far, you haven't done that.

No, I don't need to do that. If you don't like what Tononi-Koch have written, I suggest you take it up with them.

 
No, I don't need to do that. If you don't like what Tononi-Koch have written, I suggest you take it up with them.
OK, I've reworded the original post so that it might reduce your propensity for taking what I say personally. You're the one who was claiming I misunderstand something; therefore it's up to you to explain why you think that, not the authors. The authors can't explain why you think one thing or another any better than I can. And BTW, not liking what someone says is entirely different from saying that what they are saying isn't coherent. I like the article. It's interesting. However, it's also flawed. Whether you like that analysis isn't relevant; but if you have reasons for thinking my reasoning is flawed, that would be relevant, and I would be interested in hearing them.
 
The only way to cut this off is to delete the comment that upset you, so I've just done that. Have a nice night.
You're assuming I'm upset. I'm not. I'm simply trying to figure out why you think I misunderstood the section of the paper we were discussing :).
 
Did you find the email from Kurzweil?
I've edited out my name and email address. My name was/became the subject line of the email, so it's blurred out too. I sent an email to the general info email address, an underling received it and forwarded it on to Kurzweil, who then responded back via the underling I believe.

[Attachments: image1.JPG, image2.JPG, image3.JPG (screenshots of the email exchange)]
 

:-)
 
Review of Bostrom's Simulation Argument

Brian Eggleston

In “Are you living in a computer simulation?”, Nick Bostrom presents a probabilistic analysis of the possibility that we might all be living in a computer simulation.

He concludes that it is not only possible, but rather probable that we are living in a computer simulation.

This argument, originally published in 2001, shook up the field of philosophical ontology, and forced the philosophical community to rethink the way it conceptualizes “natural” laws and our own intuitions regarding our existence. Is it possible that all of our ideas about the world in which we live are false, and are simply the result of our own desire to believe that we are “real”? Even more troubling, if we are living in a computer simulation, is it possible that the simulation might be shut off at any moment?

In this paper, I plan to do two things:

First, I hope to consider what conclusions we might draw from Bostrom’s argument, and what implications this might have for how we affect our lives.

Second, I plan to discuss a possible objection to Bostrom’s argument, and how this might affect our personal probability for the possibility that we are living in a computer simulation.
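
For what it's worth, the quantitative core of Bostrom's argument can be sketched in a few lines. This is my own paraphrase of the formula as I understand it (not something taken from the Eggleston review), and it assumes each simulated history contains roughly as many observers as a real one:

    # f_p: fraction of human-level civilizations that reach a posthuman stage
    # n_sims: average number of ancestor-simulations such a civilization runs
    # Returns the fraction of all human-type observers who live in simulations.
    def fraction_simulated(f_p, n_sims):
        return (f_p * n_sims) / (f_p * n_sims + 1)

    print(fraction_simulated(0.01, 1000))  # ~0.91 even with a pessimistic f_p
    print(fraction_simulated(0.0, 1000))   # 0.0 if no civilization ever gets there

The force of the argument is that unless f_p or the number of simulations run is effectively zero, simulated observers swamp the real ones.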
 
Cognitive dissonance caused by awareness of the mind-body problem would seem an unavoidable aspect of higher cognitive function.
Wow. Good stuff.

It does seem that once a system is able to form concepts and is suitably aware, it will inevitably become aware of itself and thus form a concept of itself.

I don't know what you had in mind (ahem) regarding the mind-body problem, but perhaps the most perplexing aspect of the problem is that according to mainstream models of reality, mind is epiphenomenal.

Many in the field of AI are seeking to create autonomous systems. One assumes that autonomous systems would be conscious, i.e., have experiences, and one might even assume that conscious systems would be self-aware.

But if mind is epiphenomenal, why should intelligent systems need it to be autonomous, i.e., intelligent? If mind is causally impotent and epiphenomenal, why are we concerning ourselves with uploading our minds onto servers? It seems ironic that physical systems (humans) would attempt to create artificial physical systems (AI) in an effort to generate things (minds) that at best are causally impotent or at worst don't really exist at all.
 
We have comparable data regarding the mind-body problem.

Total locked-in syndrome

Locked-in syndrome - Wikipedia, the free encyclopedia

In some rare cases they are unable to see, feel, or hear, but they are nonetheless conscious.

Ironically enough, a lot of BCI research is aimed at helping these people.

...Nevertheless, research has shown that it is possible to detect the consciousness in CLiS with an auditory BCI (Schnakers et al, 2009), so BCIs are still promising devices for this group of people...

Hayrettin Gürkök, et al. Brain–Computer Interfaces for Multimodal Interaction:
Detecting consciousness in a total locked-in syndrome: An active event-related paradigm


Total—Total immobility and inability to communicate, with full consciousness.
Locked-in syndrome


Braingate Frees Trapped Minds | Singularity HUB
 
" This might entail several things. Assuming that we don’t want the simulation to be turned off (as this would cause us to cease to exist), we should do everything in our power to keep whoever is simulating us interested in the simulation. This might cause us to pursue actions that are more likely to cause very dramatic events to happen. Also, if we believe that our simulators are willing to punish/reward people for certain behavior within the simulation, we should try to figure out what behavior they are going to reward and act on that. Thus, knowing that we are very probably living in a computer simulation should have a profound effect on the way we lead our lives."

Pascal's Wager redux




 
It's a plot device that's been used in a few examples.

The replicants in Blade Runner were machine consciousnesses (albeit running on biological machines). Giving them mind files that simulated a personal history was deemed a way to improve them.

Likewise, in Total Recall, the "abort" sequence was, if you knew where to look for the clues, part of the Recall program, designed to make the experience seem seamless.

In a scenario where you were uploading the consciousness of persons who were not aware this was being done, it would make sense to include a virtual experience that looks and feels like real life, with the simulation gradually teased to make them aware of what's happening.

In what is a real twist, our interest in this subject and reading up on it might itself be part of that simulation, with the "reveal" coming only after you have become acclimatised to the idea.

Are we already dead? A consciousness being put through this process would think it was still alive during that process.
 
" This might entail several things. Assuming that we don’t want the simulation to be turned off (as this would cause us to cease to exist), we should do everything in our power to keep whoever is simulating us interested in the simulation. This might cause us to pursue actions that are more likely to cause very dramatic events to happen. Also, if we believe that our simulators are willing to punish/reward people for certain behavior within the simulation, we should try to figure out what behavior they are going to reward and act on that. Thus, knowing that we are very probably living in a computer simulation should have a profound effect on the way we lead our lives."

At last, a purpose in existence.
 