

Substrate-independent minds


How do we get people to accept the hive mind, consciousness uploads or cybernetization?

I think it just happens naturally, the way it does with all technology. Children will grow up speaking to post-biological grandparents in much the same way they do now via Skype.

And while this is satirical in nature, I think it illustrates what we might see.

 
You addressed that question to Steve, but I would also like to respond to it. What would a totalized hive mind operating on an artificial computer substrate be able to 'adapt to'? Apparently not to changing conditions in the physical, natural world since, as Tononi and Koch have also now recognized, a computer intelligence would be capable of almost no experience of the world.


I think not only would it be able to experience the world, its experience would surpass ours.

Our experience is simply sensory input. Some people are already using artificial input as part of their experience matrix. Is their experience invalid because they use artificial inputs?

In addition, a synthetic intellect might borrow, via wireless BCIs, the input of not just one individual but a series of them all at once.

Finally, there is a strong overlap between current control theory of very complex systems and the role played by a conscious mind. A fruitful approach could be the study of artificial consciousness as a kind of extended control loop (Chella, Gaglio et al., 2001; Sanz, 2005; Bongard, Zykov et al., 2006).
There have also been proposals that AI systems may be well-suited or even necessary for the specification of the contents of consciousness (synthetic phenomenology), which is notoriously difficult to do with natural language (Chrisley, 1995).
One line of thought (Dennett, 1991; McDermott, 2001; Sloman, 2003) sees the primary task in explaining consciousness to be the explanation of consciousness talk, or representations of oneself and others as conscious. On such a view, the key to developing artificial consciousness is to develop an agent that, perhaps due to its own complexity combined with a need to self-monitor, finds a use for thinking of itself (or others) as having experiential states.

Consciousness and Artificial Intelligence
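To make the "extended control loop" and self-monitoring ideas quoted above a bit more concrete, here is a minimal toy sketch in Python. It is not any published model from Chella, Sanz, Dennett, or Sloman; the class, fields, and numbers are all invented for illustration. The only point is the structure: an ordinary control loop plus a second loop in which the agent represents its own performance to itself.

```python
# A minimal toy sketch (not any published model) of the "extended control loop"
# idea: an agent whose control loop includes a second loop that monitors the
# agent's own internal state and represents it explicitly.
from dataclasses import dataclass, field

@dataclass
class ToyAgent:
    target: float                      # setpoint the agent tries to maintain
    state: float = 0.0                 # the external variable it controls
    self_model: dict = field(default_factory=dict)  # its report about itself

    def control_step(self, disturbance: float) -> None:
        # Ordinary control loop: nudge the state toward the target.
        error = self.target - self.state
        self.state += 0.5 * error + disturbance

        # Extended, self-monitoring loop: the agent keeps a representation of
        # how well *it* is doing, i.e. it models itself as a system that
        # "notices" large errors -- the kind of self-representation the
        # Dennett/McDermott/Sloman line treats as the key ingredient.
        self.self_model["last_error"] = error
        self.self_model["attending"] = abs(error) > 0.1

agent = ToyAgent(target=1.0)
for d in (0.0, 0.3, -0.2, 0.0):
    agent.control_step(d)
    print(agent.state, agent.self_model)
```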
 
. . . would such a mindclone be alive? Rothblatt thinks so. She cited one definition of life as a self-replicating code that maintains itself against disorder. Some critics have shunned what Rothblatt called "spooky Cartesian dualism," arguing that the mind must be embedded in biology. On the contrary, software and hardware are as good as wetware, or biological materials, she argued.

Of course "Rothblatt thinks so." She has a product to sell, which she wants us to buy without seeing evidence that it works.

She described how the mind clones are created from a "mindfile," a sort of online repository of our personalities, which she argued humans already have (in the form of Facebook, for example).

Seriously?
 
You have said that by the 2030s, people will have blood cell-sized computing devices in their bloodstreams and brains that connect directly to off-site computer data servers. What makes you think that?
We already have computerized devices that are placed inside the body and even connected into the brain, such as neural implants for Parkinson’s disease and cochlear implants for the deaf. These devices can already wirelessly download new software from the cloud. Technology is shrinking at an exponential rate, which I’ve measured at about 100 in 3D volume per decade. At that rate, we will be able to introduce blood cell-sized devices that are robotic and have computers that can communicate wirelessly by the 2030s.
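Taking the quoted figure at face value, a factor of about 100 reduction in 3D volume per decade works out to roughly a 4.6x reduction per decade in linear size (100^(1/3)). A quick back-of-the-envelope sketch, with a purely illustrative starting size:

```python
# Back-of-the-envelope reading of the quoted figure: a ~100x reduction in 3D
# volume per decade implies a ~100**(1/3) ≈ 4.6x reduction per decade in
# linear size. The starting size below is an illustrative assumption only.
linear_factor = 100 ** (1 / 3)
size_mm = 10.0                      # assumed linear size of a current implant
for decade in range(1, 4):
    size_mm /= linear_factor
    print(f"after {decade} decade(s): ~{size_mm:.3f} mm")
# For comparison, a red blood cell is roughly 0.008 mm across.
```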

Theodore Berger, a neural engineer at the University of Southern California in Los Angeles, is taking BCIs to a new level by developing a memory prosthesis. Berger aims to replace part of the brain's hippocampus, the region that converts short-term memories into long-term ones, with a BCI. The device records the electrical activity that encodes a simple short-term memory (such as pushing a button) and converts it to a digital signal. That signal is passed into a computer where it is mathematically transformed and then fed back into the brain, where it gets sealed in as a long-term memory. He has successfully tested the device in rats and monkeys, and is now working with human patients.
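As a rough sketch of the record → digitize → transform → stimulate flow described there (not Berger's actual model, which is a multi-input multi-output system fitted to hippocampal recordings), the following toy pipeline captures the shape of the idea. The channel counts and the linear "transform" are illustrative assumptions only.

```python
import numpy as np

# Schematic sketch of the record -> digitize -> transform -> stimulate flow.
# The "transform" here is only a placeholder (a least-squares linear map),
# and all names and shapes are illustrative assumptions.
rng = np.random.default_rng(0)

def record_spike_counts(n_channels=16, n_bins=50):
    # Stand-in for digitized electrode activity during a short-term memory task.
    return rng.poisson(lam=2.0, size=(n_channels, n_bins))

def fit_transform(inputs, outputs):
    # Placeholder for the mathematical model mapping "input" region activity
    # to the pattern that should be written back.
    return outputs @ np.linalg.pinv(inputs)

def stimulate(pattern):
    # Stand-in for driving the output electrodes with the predicted pattern.
    print("stimulation pattern shape:", pattern.shape)

ca3_like = record_spike_counts()
ca1_like = record_spike_counts()          # pretend paired recordings exist
W = fit_transform(ca3_like, ca1_like)
stimulate(W @ record_spike_counts())      # new input -> predicted output pattern
```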

I don't think some sort of offsite storage for memories is too far-fetched. I would personally subscribe to a mindfile service if I could.
 
When you're blind, being able to see even the basics of light, movement and shape can make a big difference. Both the Argus II Retinal Prosthesis, currently in FDA trials, and a system being developed by Harvard Research Fellow Dr. John Pezaris record basic visual information via camera, process it into electronic signals and send it wirelessly to implanted electrodes. The Argus II uses electrodes implanted in the eye, which could help people who've lost some of their retinal function. Dr. Pezaris' system, still in the early stages of research, would bypass the eyes entirely, sending visual data straight to the brain. Both systems will work best with people who could once see because their brains will already know how to process the information. "The visual brain depends on visual experience to develop normally," Pezaris explained.
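The camera-to-electrode flow described there can be sketched crudely as downsampling a frame onto a coarse stimulation grid. The grid size and threshold below are illustrative assumptions, not the specifications of the Argus II or of Pezaris' system:

```python
import numpy as np

# Rough sketch of the camera -> processing -> electrode-array flow. Electrode
# counts and thresholds are made up for illustration.
def frame_to_electrode_pattern(frame, grid=(6, 10), threshold=0.5):
    """Downsample a grayscale camera frame to a coarse on/off stimulation grid."""
    h, w = frame.shape
    gh, gw = grid
    pattern = np.zeros(grid, dtype=bool)
    for i in range(gh):
        for j in range(gw):
            block = frame[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
            pattern[i, j] = block.mean() > threshold   # bright region -> stimulate
    return pattern

camera_frame = np.random.default_rng(1).random((120, 160))  # fake 120x160 frame
print(frame_to_electrode_pattern(camera_frame).astype(int))
```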
 
Overview

The continuing development of implantable neural prostheses signals a new era in bioengineering and neuroscience research. This collection of essays outlines current advances in research on the intracranial implantation of devices that can communicate with the brain in order to restore sensory, motor, or cognitive functions. The contributors explore the creation of biologically realistic mathematical models of brain function, the production of microchips that incorporate those models, and the integration of microchip and brain function through neuron-silicon interfaces. Recent developments in understanding the computational and cognitive properties of the brain and rapid advances in biomedical and computer engineering both contribute to this cutting-edge research.

The book first examines the development of sensory system prostheses—cochlear, retinal, and visual implants—as the best foundation for considering the extension of neural prostheses to the central brain region. The book then turns to the complexity of neural representations, offering, among other approaches to the topic, one of the few existing theoretical frameworks for modeling the hierarchical organization of neural systems. Next, it examines the challenges of designing and controlling the interface between neurons and silicon, considering the necessity for bidirectional communication and for multiyear duration of the implant. Finally, the book looks at hardware implementations and explores possible ways to achieve the complexity of neural function in hardware, including the use of VLSI and photonic technologies.

Toward Replacement Parts for the Brain | The MIT Press
 
... Tononi and Koch have also now recognized, a computer intelligence would be capable of almost no experience of the world ...

It's not quite that simple: Summary From the Paper by Tononi, Albantakis, & Oizumi ( source here ):

"Integrated information theory (IIT) approaches the relationship between consciousness and its physical substrate by first identifying the fundamental properties of experience itself: existence, composition, information, integration, and exclusion. IIT then postulates that the physical substrate of consciousness must satisfy these very properties. We develop a detailed mathematical framework in which composition, information, integration, and exclusion are defined precisely and made operational. This allows us to establish to what extent simple systems of mechanisms, such as logic gates or neuron-like elements, can form complexes that can account for the fundamental properties of consciousness. Based on this principled approach, we show that IIT can explain many known facts about consciousness and the brain, leads to specific predictions, and allows us to infer, at least in principle, both the quantity and quality of consciousness for systems whose causal structure is known. For example, we show that some simple systems can be minimally conscious, some complicated systems can be unconscious, and two different systems can be functionally equivalent, yet one is conscious and the other one is not."
Their assumption is that consciousness ( and consequently experience ) arises out of specific types of sufficiently complex relationships within an integrated system. The most important three words in the piece are, "... the substrate of consciousness ..." because the assumption is that consciousness arises out of some sort of substrate, whatever that may be ( biological or otherwise ). It doesn't assume that consciousness exists independently like some sort of disembodied sense of awareness that can somehow attach and detach itself from the substrate. Quite the opposite. It assumes some sort of substrate is required.
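For anyone who wants a feel for the "integration" intuition (though emphatically not IIT's actual Φ calculation, which is far richer), here is a toy measure on a tiny network of logic gates: the mutual information between the next states of two parts of the system, under a uniform distribution over current states. The update rules are arbitrary examples.

```python
import itertools
from collections import Counter
from math import log2

# Arbitrary deterministic update rules for a 3-node binary network:
# A' = B OR C, B' = A AND C, C' = A XOR B.
def step(state):
    a, b, c = state
    return (b | c, a & c, a ^ b)

# A uniform distribution over the 8 current states induces a distribution
# over next states.
next_states = [step(s) for s in itertools.product((0, 1), repeat=3)]
n = len(next_states)

def entropy(counts):
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Split the next state into part A' and part (B', C') and compare entropies.
h_a = entropy(Counter(s[0] for s in next_states))
h_bc = entropy(Counter(s[1:] for s in next_states))
h_joint = entropy(Counter(next_states))

# I(A'; B'C') = H(A') + H(B'C') - H(A'B'C'): a crude proxy for how much the
# parts of the system are informationally tied together -- NOT IIT's Phi.
print("integration proxy:", h_a + h_bc - h_joint, "bits")
```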

I tend to agree that because minds are processing systems they therefore require processors, and in turn such processors must exist someplace. Therefore, if for the sake of discussion we assume that minds are capable of floating about independent of any of the processors we currently believe are responsible for them, then all that means is that there is some other set of processors someplace else that is giving rise to them. Perhaps such processors are the same ones that give rise to everything else in the universe too. I don't know. But when reflecting on that possibility, we find that we start to run into problems really fast, because there is so much evidence in favor of the hypothesis that our minds are indeed the result of physical processes taking place in the brain.
 
Constance said:
"You addressed that question to Steve, but I would also like to respond to it. What would a totalized hive mind operating on an artificial computer substrate be able to 'adapt to'? Apparently not to changing conditions in the physical, natural world since, as Tononi and Koch have also now recognized, a computer intelligence would be capable of almost no experience of the world."


I think not only would it be able to experience the world, its experience would surpass ours.

Our experience is simply sensory input. . .

'Input' from what? Biologists and most neuroscientists recognize that sensory input involves experience of an actual world.


. . . some people are already using artificial input as part of their experience matrix. Is their experience invalid because they use artificial inputs?

That depends on the extent to which the 'world' they experience, for example in represented 'virtual realities', is an artificial world rather than the actual world we humans live in now.



In addition, a synthetic intellect might borrow, via wireless BCIs, the input of not just one individual but a series of them all at once.

What are "wireless BCIs." Also, how would 'individuals' continue to exist within a 'hive mind'?
 
What are "wireless BCIs."

We already have computerized devices that are placed inside the body and even connected into the brain, such as neural implants for Parkinson’s disease and cochlear implants for the deaf. These devices can already wirelessly download new software from the cloud. Technology is shrinking at an exponential rate, which I’ve measured at about 100 in 3D volume per decade. At that rate, we will be able to introduce blood cell-sized devices that are robotic and have computers that can communicate wirelessly by the 2030s.

Also, how would 'individuals' continue to exist within a 'hive mind'?

In much the same way they do now, I imagine. The individual nodes could use software, much as we do now in terms of firewalls and antivirus programs, to prevent intrusion and theft of or manipulation of mindfile data.

Data sharing protocols could be set much as we do privacy settings on Facebook etc.

If the whole point of the project is to preserve the experience set of an individual, then this will be an intrinsic value in the resulting upload.

Privacy and individuality can be maintained; it's the data sharing aspect that's enhanced.

For example, my wife's 5th birthday is a special one for her, but of course I wasn't there. She might choose to share that experience set with me, load the data into my mind so I could see for myself what it was like, or someone who was there might share their "memories" of the event.

And these can also be subject to privacy settings. It might just be the base sensory data, what they saw/heard/smelt etc. Or it might also include the data relating to their own emotional responses to the event.
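A toy sketch of what such per-memory privacy settings might look like as a data structure. The names and layers are hypothetical; the point is only that sensory and emotional components of a stored experience could carry separate permissions, much like privacy settings do today.

```python
from dataclasses import dataclass, field
from enum import Flag, auto

# Toy sketch of per-memory sharing settings. All names are hypothetical.
class Layer(Flag):
    SENSORY = auto()      # what was seen / heard / smelt
    EMOTIONAL = auto()    # the owner's emotional responses

@dataclass
class MemoryRecord:
    owner: str
    description: str
    shared_with: dict = field(default_factory=dict)  # person -> Layer flags

    def share(self, person: str, layers: Layer) -> None:
        self.shared_with[person] = layers

    def can_access(self, person: str, layer: Layer) -> bool:
        return layer in self.shared_with.get(person, Layer(0))

birthday = MemoryRecord(owner="wife", description="5th birthday party")
birthday.share("me", Layer.SENSORY)                  # sights and sounds only
print(birthday.can_access("me", Layer.SENSORY))      # True
print(birthday.can_access("me", Layer.EMOTIONAL))    # False
```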
 
It's not quite that simple: Summary From the Paper by Tononi, Albantakis, & Oizumi ( source here ):

"Integrated information theory (IIT) approaches the relationship between consciousness and its physical substrate by first identifying the fundamental properties of experience itself: existence, composition, information, integration, and exclusion. IIT then postulates that the physical substrate of consciousness must satisfy these very properties. We develop a detailed mathematical framework in which composition, information, integration, and exclusion are defined precisely and made operational. This allows us to establish to what extent simple systems of mechanisms, such as logic gates or neuron-like elements, can form complexes that can account for the fundamental properties of consciousness. Based on this principled approach, we show that IIT can explain many known facts about consciousness and the brain, leads to specific predictions, and allows us to infer, at least in principle, both the quantity and quality of consciousness for systems whose causal structure is known. For example, we show that some simple systems can be minimally conscious, some complicated systems can be unconscious, and two different systems can be functionally equivalent, yet one is conscious and the other one is not."
Their assumption is that consciousness ( and consequently experience ) arises out of specific types of sufficiently complex relationships between various types of networks within an integrated system. The most important three words in the piece are, "... the substrate of consciousness ..." because the assumption is that consciousness arises out of some sort of substrate, whatever that may be ( biological or otherwise ). It doesn't assume that consciousness exists independently like some sort of disembodied sense of awareness that can somehow attach and detach itself from the substrate. Quite the opposite. It assumes some sort of substrate is required.

I don't know where you got the idea that anyone is claiming that consciousness is "some sort of disembodied sense of awareness that can somehow attach and detach itself from the substrate" -- unless you're reacting to discussions of NDEs, OBEs, and other paranormal experiences that are yet unexplained. I think there's no doubt that consciousness is evolved within biological substrates (see the papers by Jaak Panksepp referred to and linked in C&P Part II). That doesn't mean that all aspects and aptitudes of consciousness can be explained, accounted for, by those biological substrates, or, much less, by computationalist theories centered in 'information'.

I tend to agree that because minds are processing systems they therefore require processors, and in turn such processors must exist someplace. Therefore, if for the sake of discussion we assume that minds are capable of floating about independent of any of the processors we currently believe are responsible for them, then all that means is that there is some other set of processors someplace else that is giving rise to them. Perhaps such processors are the same ones that give rise to everything else in the universe too. I don't know.

You're opening the door slightly from your former presuppositions, but not far enough to contemplate that "some other set of processors someplace else" might not be constituted in the same way we currently understand computational 'processors'. If all experiences of the actual world are gathered along with records of all forms of physical evolution in the universe we exist in presently, as many quantum theorists believe, we might never understand the nature of the reality we experience through 'normal' and 'para-normal' means.


But when reflecting on that possibility, we find that we start to run into problems really fast, because there is so much evidence in favor of the hypothesis that our minds are indeed the result of physical processes taking place in the brain.

We run into problems to the extent that we insist on imposing that which we can understand on that which we do not yet understand.
 
I don't know where you got the idea that anyone is claiming that consciousness is "some sort of disembodied sense of awareness that can somehow attach and detach itself from the substrate" -- unless you're reacting to discussions of NDEs, OBEs, and other paranormal experiences that are yet unexplained. I think there's no doubt that consciousness is evolved within biological substrates (see the papers by Jaak Panksepp referred to and linked in C&P Part II). That doesn't mean that all aspects and aptitudes of consciousness can be explained, accounted for, by those biological substrates, or, much less, by computationalist theories centered in 'information'.
The idea is alluded to as a general part of the discussion for sake of keeping the concepts clear in my own head ( if not anyone else's ).
You're opening the door slightly from your former presuppositions, but not far enough to contemplate that "some other set of processors someplace else" might not be constituted in the same way we currently understand computational 'processors'.
I think the idea was implied, even if not stated.
If all experiences of the actual world are gathered along with records of all forms of physical evolution in the universe we exist in presently, as many quantum theorists believe, we might never understand the nature of the reality we experience through 'normal' and 'para-normal' means.
We have a pretty good idea about our subjective realities. But we'll be lucky to figure out the nature of reality with respect to our own cosmos ( observable universe, spacetime continuum, whatever you call this realm ), let alone the realities that lie beyond it ( if any ). But at the same time, those particular issues aren't relevant to the questions we're pondering. We don't need to know all the details ( like how the processors are constituted ). We can deduce based on assumptions what the situation must be given basic variables, and from that determine whether beliefs based on those situations make any sense or not.

We run into problems to the extent that we insist on imposing that which we can understand on that which we do not yet understand.
Sure we run into problems, but that's also how progress is made. Nobody ever said it would be easy.
 
What kinds of 'checks and balances'?
Checks and balances regarding hive minds might simply be an ability to disconnect from the "hive" whenever one wants. There may even be laws to support this "right." And trying to forcibly connect with another might also be criminalized.

There would be clear advantages to hive minding. I can just imagine hive mind sporting events. However, there would be clear disadvantages as well. Protecting the young and innocent would be important. The list goes on and on.

To go fully and willingly hive would be something different altogether.

You addressed that question to Steve, but I would also like to respond to it. What would a totalized hive mind operating on an artificial computer substrate be able to 'adapt to'? Apparently not to changing conditions in the physical, natural world since, as Tononi and Koch have also now recognized, a computer intelligence would be capable of almost no experience of the world.
I think of a transhuman as essentially a cyborg: a biological human whose body (and brain) have been heavily augmented with non-organic parts. Conceivably, this might even involve a whole-body prosthesis.

I can't say when the "tipping point" would be, but once these transhumans departed too far in form from the current human body/brain, they would no longer be transhuman, they would be posthuman. Thus, if a mind were to exist purely in a virtual environment, I would not consider it human in any sense.

If a digital mind had no interaction with the external world, then no, it could not adapt to the external, physical world. However, it would be able to adapt to changes in the virtual environment in which it existed.

Are you postulating that 'transhumans' would just plug in to the superior 'hive mind' at times, for one reason or another, but also maintain their biological substrates and continue to feel and think independently, live in and procreate within nature, and maintain a stake in the actualities of lived reality on the planet shared with other humans and animals? This doesn't sound like what Kurzweil and Mike have been talking about. Also what "other, perhaps more robust, ways to adapt" do you anticipate for all-in members of the hive mind?
As noted above, I make a distinction between a transhuman and a posthuman. Yes, a transhuman might be able to connect/disconnect with the hive at their choosing, not unlike accessing the internet today. Yes, transhumans would continue to live in and interact with the physical, palpable world. They may continue to tweak their bodies and brains into forms far departed from the current human body-brain, and thus may no longer be humans.

Completely digital minds — existing solely on servers — would no longer be human imo. Minds fully and irreversibly merged with a hive would no longer be human either imo.

Re robust ways to adapt: transhumans would conceivably be able to augment their bodies and brains at a rapid pace. If they could conceive it, they could engineer it. Posthumans would have even more flexibility. Consider a posthuman whose body consisted of a swarm of nanobots. A posthuman could conceivably possess a "body" capable of travel at will through interstellar space. Etc.
 
Remarkably, consciousness does not seem to require many of the things we associate most deeply with being human: emotions, memory, self-reflection, language, sensing the world, and acting in it. Let's start with sensory input and motor output: being conscious requires neither. We humans are generally aware of what goes on around us and occasionally of what goes on within our own bodies. It's only natural to infer that consciousness is linked to our interaction with the world and with ourselves.

Yet when we dream, for instance, we are virtually disconnected from the environment--we acknowledge almost nothing of what happens around us, and our muscles are largely paralyzed. Nevertheless, we are conscious, sometimes vividly and grippingly so. This mental activity is reflected in electrical recordings of the dreaming brain showing that the corticothalamic system, intimately involved with sensory perception, continues to function more or less as it does in wakefulness.

Neurological evidence points to the same conclusion. People who have lost their eyesight can both imagine and dream in images, provided they had sight earlier in their lives. Patients with locked-in syndrome, which renders them almost completely paralyzed, are just as conscious as healthy subjects. Following a debilitating stroke, the French editor Jean-Dominique Bauby dictated his memoir, The Diving Bell and the Butterfly, by blinking his left eye. Stephen Hawking is a world-renowned physicist, best-selling author, and occasional guest star on "The Simpsons," despite being immobilized from a degenerative neurological disorder.

So although being conscious depends on brain activity, it does not require any interaction with the environment.

Can Machines Be Conscious? - IEEE Spectrum
 
Of course "Rothblatt thinks so." She has a product to sell, which she wants us to buy without seeing evidence that it works.



Seriously?

The reference to Facebook represents a rudimentary version:
Artificial intelligence project exClone wants to map the DNA of your Mind (DNAM).
Although the terminology sounds original, DNAM is actually not a new concept. For example, tracking and profiling Facebook users based on their “likes” is a rudimentary form of DNAM.

What Is the DNA of Your Mind? - The Epoch Times
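As a toy illustration of that kind of likes-based profiling (made-up topics and people, and only a crude cosine similarity, nothing resembling exClone's actual methods):

```python
from math import sqrt

# Toy illustration of likes-based profiling: represent each person as a vector
# of liked topics and compare profiles by cosine similarity. Topics and people
# are invented for illustration.
topics = ["astronomy", "jazz", "hiking", "chess", "cooking"]
likes = {
    "alice": [1, 0, 1, 1, 0],
    "bob":   [1, 0, 1, 0, 1],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

print("profile similarity:", round(cosine(likes["alice"], likes["bob"]), 3))
```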

exClone

Dr. Riza Berkan, an entrepreneur, AI expert, physicist and not-so-mad professor, is the creator and principal scientist behind the exClone Project. Berkan explained his team’s concept is to make it possible for anyone to create an “online clone” of themselves, simply by entering details of their personality, memories and expertise into a system, which in turn creates a digital copy of their person – their “essence” if you will.
Then, once your exClone is ‘born’, it (he or she?) commences on a never-ending mission to further educate itself through conversations with people, and reading material on the web based on the interests of its creator (you). The exClone team label this “cloning expertise and experience”, which is a novel and interesting way of categorizing what is essentially ‘eternal life’.

exClones are built using technologies including machine learning algorithms, cognitive science, fuzzy logic and semantics, which combine to form the essential characteristics of artificial intelligence (AI). While there’s no shortage of technology startups touting and leveraging AI to highlight their wares, this particular iteration seems, somehow, more invigorating and interesting, maybe even feasible.

The exClone Project: Now even you can become immortal | SiliconANGLE


“exClones have several basic intelligence functions,” said Dr. Berkan. “They have consciousness, whereby it knows its performance and identity; they have curiosity, they detect unknown names, and investigate them through Wikipedia, or through any other information channel you have; and they have learning, which is very important as they can learn from social conversations.”
This concept of ‘social learning’ is the exClone’s most important ability, Berkan explained.
exClones are able to talk to people and remember those conversations, thereby gaining more and more knowledge.
“Absolutely the most important thing in artificial intelligence is social learning. Interaction is everything,” he said. “Intelligence can only be created by social learning. Any system claiming to have AI, not having this function, is a canned system, a database disguised as an AI system.”
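A stdlib-only toy sketch of the three functions Berkan lists (self-identity and performance tracking, curiosity about unknown names, and learning from conversation). It is not the exClone system; every behaviour here is invented for illustration.

```python
# Toy sketch of the described loop: know yourself, flag unknown names to look
# up later, and retain what you hear. Naive on purpose; not the exClone system.
class ToyClone:
    def __init__(self, name):
        self.name = name                 # rudimentary self-identity
        self.knowledge = {}              # facts retained from conversations
        self.to_investigate = set()      # "curiosity" queue of unknown terms
        self.turns = 0                   # rudimentary performance tracking

    def hear(self, speaker, sentence):
        self.turns += 1
        for word in sentence.split():
            token = word.strip(".,?!")
            # Naively flag capitalised words it hasn't seen for later look-up
            # (the real system reportedly consults Wikipedia and other sources).
            if token.istitle() and token not in self.knowledge:
                self.to_investigate.add(token)
        self.knowledge[f"turn {self.turns}"] = (speaker, sentence)

    def status(self):
        return f"{self.name}: {self.turns} turns, unknown terms: {sorted(self.to_investigate)}"

clone = ToyClone("demo-clone")
clone.hear("visitor", "Have you read anything about Fuzzy Logic lately?")
print(clone.status())
```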


 


exClone is a human-like dialogue system that captures a person’s expertise via digital cloning. Once the digital cloning is complete, an exClone is born and it has an independent, autonomous, and self-conscious life. It can learn from social conversations to improve its knowledge. It can also keep reading new material to get better, following the directions and personality traits of its creator.
 