
Substrate-independent minds


The 'Singularity' would be the suicide of our species, Mike, not its 'survival'. Did you see the abstract I posted above of the most recent paper by Tononi and Koch? I highlighted the last part, in which they reject what Tononi's Integrated Information Theory had formerly argued -- i.e., that a sufficiently computationally integrated machine substrate would be as conscious as a biological one. He's learned, probably from some of the same sources we've been reading and citing in the C&P thread, that AI would "experience nothing at all." And experience is the ground out of which humans and other animals develop consciousness -- an embodied consciousness -- and mind.

Why can't experience be simulated?

The simplest use of brain-in-a-vat scenarios is as an argument for philosophical skepticism and solipsism. A simple version of this runs as follows: Since the brain in a vat gives and receives exactly the same impulses as it would if it were in a skull, and since these are its only way of interacting with its environment, then it is not possible to tell, from the perspective of that brain, whether it is in a skull or a vat.

(Image: "brain in a vat" illustration, Wikipedia)


And I think you need to be careful of phrases such as

"He's learned, that AI would "experience nothing at all."

He can theorize that possibility, but until he has a real AI to study he can't learn anything about it.

And again, history is full of scientific experts who said that's impossible:

“Radio has no future. Heavier-than-air flying machines are impossible. X-rays will prove to be a hoax.” — William Thomson, Lord Kelvin, British scientist, 1899

“There is not the slightest indication that nuclear energy will ever be obtainable. It would mean that the atom would have to be shattered at will.” — Albert Einstein, 1932
 
The 'Singularity' would be the suicide of our species, Mike, not its 'survival'.

I take the opposite view.

If the singularity does happen and minds can be uploaded to BSIMs, how would that be suicide? The biosphere is a fragile thing.
I could make a case that if methane levels got so bad that biological life was no longer viable on Earth, BSIMs would be the only ones to survive.

Upgrading the substrate from a flaky, error-, accident- and disease-prone biological platform to a durable, self-repairing, non-biological one is a logical expression of survival of the fittest. If the new substrate is better than the old, more fit for purpose, if it addresses and fixes the shortcomings of our native substrate, then it must be fitter than the original.

Your claim is akin to saying libraries would be the suicide of creative writing: who would bother to write books when there are whole buildings full of them?
 
I believe the answer to that would be yes; that was, after all, the whole point of the exercise: to preserve their consciousness when the biological body broke down.

The Cybermen are a fictional race of cyborgs who are among the most persistent enemies of the Doctor in the British science fiction television programme, Doctor Who. All but the third, eighth, war, and ninth incarnations of the Doctor faced them. Cybermen were originally a wholly organic species of humanoids originating on Earth's twin planet Mondas that began to implant more and more artificial parts into their bodies as a means of self-preservation. This led to the race becoming coldly logical and calculating, with every emotion deleted from their minds.


Where the problem seems to arise with these characters is that some of the supporting software that is also loaded is flawed. The problem isn't the self-preservation; it's in the implementation. Something I would hope we would get right, should we do the same.

To expand on this:

The conference took a surreal turn when Martine Rothblatt — a lawyer, author and entrepreneur, and CEO of biotech company United Therapeutics Corp. — took the stage. Even the title of Rothblatt's talk was provocative: "The Purpose of Biotechnology is the End of Death."
Rothblatt introduced the concept of "mindclones" — digital versions of humans that can live forever. She described how the mind clones are created from a "mindfile," a sort of online repository of our personalities, which she argued humans already have (in the form of Facebook, for example). This mindfile would be run on "mindware," a kind of software for consciousness. "The first company that develops mindware will have [as much success as] a thousand Googles," Rothblatt said.
But would such a mindclone be alive? Rothblatt thinks so. She cited one definition of life as a self-replicating code that maintains itself against disorder. Some critics have shunned what Rothblatt called "spooky Cartesian dualism," arguing that the mind must be embedded in biology. On the contrary, software and hardware are as good as wet ware, or biological materials, she argued.
Rothblatt went on to discuss the implications of creating mindclones. Continuity of the self is one issue, because your persona would no longer inhabit just a biological body. Then, there are mind-clone civil rights, which would be the "cause célèbre" for the 21st century, Rothblatt said. Even mindclone procreation and reanimation after death were mentioned.

'Mind Uploading' & Digital Immortality May Be Reality By 2045, Futurists Say
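Just to picture what a "mindfile" might amount to as data, here is a purely hypothetical sketch based only on the description above (a repository of posts, preferences and memories). None of this is Rothblatt's actual design; every name and field in it is made up for illustration.

```python
# Hypothetical sketch of a "mindfile" as a plain data structure.
# Not Rothblatt's design; all names and fields are invented for illustration.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MindfileEntry:
    timestamp: str
    source: str            # e.g. "social media post", "journal", "interview"
    content: str

@dataclass
class Mindfile:
    owner: str
    entries: List[MindfileEntry] = field(default_factory=list)
    traits: Dict[str, float] = field(default_factory=dict)   # e.g. personality scores

    def add(self, timestamp, source, content):
        self.entries.append(MindfileEntry(timestamp, source, content))

mf = Mindfile(owner="example person")
mf.add("2014-06-01", "social media post", "Saw a great documentary tonight.")
print(len(mf.entries), "entry recorded for", mf.owner)
```

The "mindware" in the article would then be whatever software reads such a repository and does something mind-like with it, which is of course the hard part.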


"mindware," a kind of software for consciousness"., thats the problem the Cyberman character has its Mindware in the plot has been coded in the extreem
It would be hoped that we do a better job, that the mindware code is unobtrusive as subroutines go, much like our current autonomic systems
 
And experience is the ground out of which humans and other animals develop consciousness -- an embodied consciousness -- and mind.

Which is really not an issue in the gradual-replacement scenario: we simply replace biological sensory devices with artificial ones.
The cochlear implant is a good example. Are people with these implants no longer conscious?


Gradual replacement
Scanning might also occur in the form of gradual replacement, as piece after piece of the brain is replaced by an artificial neural system interfacing with the brain and maintaining the same functional interactions as the lost pieces. Eventually only the artificial system remains.

as per Soong's idea:

"In a perfect universe i would create nanomachines that would replace my organic brain cells one by one. Duplicating their function and memory content.I'd notice no change in my conciousness during the process in the process of the change. And then one day all of the organic cells would be replaced.And all that would remain would be the synthetic brain."
The question does seem to resolve itself when looked at this way. If this ever becomes possible it would be neither transfer or copy. The substrate simply gets replaced.

The cochlear implant is just one step, and the recipient is still conscious. At what percentage of artificial substrate does that change?
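To make the gradual-replacement idea concrete, here is a toy sketch of my own in Python. It has nothing to do with any real neural interface; the "neurons" are just threshold functions. Replace the units one at a time with artificial copies that compute the same function, and the network's behaviour never changes.

```python
# Toy illustration of gradual substrate replacement (my own sketch, not a model
# of any real system): each "neuron" is a simple threshold function; artificial
# replacements reproduce the same input/output map, so the whole network's
# behaviour is unchanged at every step.
import random

def make_unit(weights, kind):
    # "kind" is only a label ("bio" or "artificial"); the computation is identical
    return {"kind": kind, "weights": weights}

def unit_output(unit, inputs):
    total = sum(w * x for w, x in zip(unit["weights"], inputs))
    return 1.0 if total > 0 else 0.0        # simple threshold unit

def network_output(units, inputs):
    # a fixed readout: just the sum of the unit outputs
    return sum(unit_output(u, inputs) for u in units)

random.seed(0)
units = [make_unit([random.uniform(-1, 1) for _ in range(4)], "bio")
         for _ in range(10)]
probe = [0.2, -0.5, 0.9, 0.1]               # a fixed test input
baseline = network_output(units, probe)

# Replace one unit per "operation" with an artificial copy of the same weights;
# the network's input/output behaviour is unchanged at every step.
for i in range(len(units)):
    units[i] = make_unit(list(units[i]["weights"]), "artificial")
    assert network_output(units, probe) == baseline

print("all units artificial:", all(u["kind"] == "artificial" for u in units))
print("output unchanged:", network_output(units, probe) == baseline)
```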
 
The cochlear implant is just one step, and the recipient is still conscious. At what percentage of artificial substrate does that change?

I'm not arguing against cochlear implants or any other technological device that can enhance life and well-being for disabled or injured people. I'm arguing against the notion that the human species should be phased out of existence and replaced by the 'hive mind' of an exponentially linked artificial intelligence in order to 'survive', because our species and others won't survive that way. If computational substrates cannot experience this world and the life in it, they cannot be expected to be responsive to the needs of living beings on this planet -- they cannot be expected, in short, to be responsible controllers of this planetary world. I realize that Kurzweil's fantasy world has inspired you with hope, but everything I've read about this tells me it's a false hope.
 

"Upgrading the substrate from a flaky error,accident and disease prone biological platform to a durable self repairing non biological one is a logical expression of surival of the fittest. If the new substrate is better than othe old, more fit for purpose, If it addresses and fixes the shortcoming of our native substrates then it must be fitter than the "

flaky error disease prone ... I'll have to think about that ... I see the "biological platform" itself as continuous with the ecosystem, of dependent origination not as a self contained individual - taking in from the environment and ultimately returning to it and that is what survives ... the entire ecosystem ... and that system is durable and self repairing, what we call disease is part and parcel as I suspect is what we call error ...

Another question: what non-biological materials would you use?


Sent from my iPhone using Tapatalk
 
I'm not arguing against cochlear implants or any other technological device that can enhance life and well-being for disabled or injured people. I'm arguing against the notion that the human species should be phased out of existence and replaced by the 'hive mind' of an exponentially linked artificial intelligence in order to 'survive', because our species and others won't survive that way. If computational substrates cannot experience this world and the life in it, they cannot be expected to be responsive to the needs of living beings on this planet -- they cannot be expected, in short, to be responsible controllers of this planetary world. I realize that Kurzweil's fantasy world has inspired you with hope, but everything I've read about this tells me it's a false hope.

How can you be sure they can't experience the world? It's just input, as the cochlear implant example demonstrates.

Does a brain fed by a cochlear implant and a bionic eye

Shedding a Light on Blinds: Bionic Eyes are Ready to Use. | Scientific Wizard

experience the world?

Responsiveness and responsibility are relative. Are we responsible controllers of the planetary world?

Their responsibility will be to their survival; are we any different?
 
"Upgrading the substrate from a flaky error,accident and disease prone biological platform to a durable self repairing non biological one is a logical expression of surival of the fittest. If the new substrate is better than othe old, more fit for purpose, If it addresses and fixes the shortcoming of our native substrates then it must be fitter than the "

flaky error disease prone ... I'll have to think about that ... I see the "biological platform" itself as continuous with the ecosystem, of dependent origination not as a self contained individual - taking in from the environment and ultimately returning to it and that is what survives ... the entire ecosystem ... and that system is durable and self repairing, what we call disease is part and parcel as I suspect is what we call error ...

Another question: what non-biological materials would you use?


Sent from my iPhone using Tapatalk


Brain Diseases

Hardly optimal.

Given the choice of installing a cheap, non-durable part in your car, or a more durable, more reliable part, before setting off on a long journey, which do you pick?
 
Brain Diseases

Hardly optimal.

Given the choice of installing a cheap, non-durable part in your car, or a more durable, more reliable part, before setting off on a long journey, which do you pick?

You're excluding the middle.

Transviruses ... ? Precedent for complex inorganic systems ... computer viruses.

You're basically just saying you'll build a perfect system and that would be great if it's possible ... it's unprecedented ... do Transhumanists discuss any pitfalls? Why should I expect a more complex more intelligent system not to have more complex disorders?

Also, what non-organic materials will you use? Evolution uses carbon because it's plentiful and combines in many useful forms with other plentiful materials ... water is a near-universal solvent for organic chemistry, with unique properties ... I can see the excitement, but what are the details, or even a broad outline, of how it's possible?




Sent from my iPhone using Tapatalk
 
Brain Diseases

Hardly optimal.

Given the choice of installing a cheap, non-durable part in your car, or a more durable, more reliable part, before setting off on a long journey, which do you pick?

Again, when looked at from an ecosystem or higher level, this IS what evolution does pick, and life is robust and adaptive, yes, at the expense of the individual.

But I'm not sure: if the individual is put above "higher levels" of organization, what happens to the system itself?

But one difference is that I don't want to live forever. In some forms of Transhumanism I see religious and psychological elements I'm not comfortable with.


Sent from my iPhone using Tapatalk
 
See discussion in Death! thread ... An immortal self repairing platform can find itself in some pretty horrific situations ... fates worse than death.

If it can't be destroyed how does the collective isolate a rogue element?

Uploading your consciousness as opposed to repairing a car ... Of possible infinite consequence ... But assuming all of what you hope for here under the best possible conditions ... I'm not buying today.

Isn't it basically an eternity or thousands of years of experience ... Yes, assume a superior mind: it may take 80,000 years to become bored ... but it will subjectively feel like the 80 years of a human life ... There is an exact analogy to the Devas in Buddhist scripture.

And, my personal finitude is of great value to me.

Sent from my iPhone using Tapatalk
 
what non-organic materials will you use

The answer isn't quite that simple. To quote Chalmers:

What about computers? Although Searle (1990) talks about what it takes for something to be a "digital computer", I have talked only about computations and eschewed reference to computers. This is deliberate, as it seems to me that computation is the more fundamental notion, and certainly the one that is important for AI and cognitive science. AI and cognitive science certainly do not require that cognitive systems be computers, unless we stipulate that all it takes to be a computer is to implement some computation, in which case the definition is vacuous.
What does it take for something to be a computer? Presumably, a computer cannot merely implement a single computation. It must be capable of implementing many computations - that is, it must be programmable. In the extreme case, a computer will be universal, capable of being programmed to compute any recursively enumerable function. Perhaps universality is not required of a computer, but programmability certainly is. To bring computers within the scope of the theory of implementation above, we could require that a computer be a CSA with certain parameters, such that depending on how these parameters are set, a number of different CSAs can be implemented. A universal Turing machine could be seen in this light, for instance, where the parameters correspond to the "program" symbols on the tape. In any case, such a theory of computers is not required for the study of cognition.
Is the brain a computer in this sense? Arguably. For a start, the brain can be "programmed" to implement various computations by the laborious means of conscious serial rule-following; but this is a fairly incidental ability. On a different level, it might be argued that learning provides a certain kind of programmability and parameter-setting, but this is a sufficiently indirect kind of parameter-setting that it might be argued that it does not qualify. In any case, the question is quite unimportant for our purposes. What counts is that the brain implements various complex computations, not that it is a computer.

A Computational Foundation for the Study of Cognition


For other theorists (e.g., functionalists), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, will instantiate the same mental states, including consciousness (Putnam, 1967).
Chalmers' argument for artificial consciousness
One of the most explicit arguments for the plausibility of AC comes from David Chalmers. His proposal, found within his manuscript A Computational Foundation for the Study of Cognition, is roughly that computers perform computations and the right kinds of computations are sufficient for the possession of a conscious mind. In outline, he defends his claim thus: Computers perform computations. Computations can capture other systems’ abstract causal organization. Mental properties are nothing over and above abstract causal organization. Therefore, computers running the right kind of computations will instantiate mental properties.
The most controversial part of Chalmers’ proposal is that mental properties are “organizationally invariant;” i.e., nothing over and above abstract causal organization. His rough argument for which is the following. Mental properties are of two kinds, psychological and phenomenological. Psychological properties, such as belief and perception, are those that are “characterized by their causal role” within an overall causal system. He adverts to the work of Armstrong (1968) and Lewis (1972) in claiming that “systems with the same causal topology…will share their psychological properties.”

Artificial consciousness - Wikipedia, the free encyclopedia
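To make "organizational invariance" a bit more concrete, here is a minimal sketch of my own (not Chalmers's CSA formalism, just an illustration): two physically different implementations of the same abstract state-transition structure trace out identical state sequences, so at this level of description the "substrate" drops out.

```python
# A minimal sketch (my own, not Chalmers's formalism) of the idea that what
# matters is the abstract state-transition structure, not how it is realised:
# two different implementations of the same little machine produce identical
# state trajectories for every input sequence.

# Implementation 1: a lookup table. States A/B/C, inputs 0/1.
TABLE = {
    ("A", 0): "A", ("A", 1): "B",
    ("B", 0): "C", ("B", 1): "A",
    ("C", 0): "B", ("C", 1): "C",
}

def step_table(state, symbol):
    return TABLE[(state, symbol)]

# Implementation 2: branching code with a different internal encoding
# (integers instead of letters) but the same causal topology.
ENCODE = {"A": 0, "B": 1, "C": 2}
DECODE = {v: k for k, v in ENCODE.items()}

def step_branchy(state, symbol):
    s = ENCODE[state]
    if s == 0:
        s = 0 if symbol == 0 else 1
    elif s == 1:
        s = 2 if symbol == 0 else 0
    else:
        s = 1 if symbol == 0 else 2
    return DECODE[s]

def run(step, inputs, state="A"):
    trace = [state]
    for symbol in inputs:
        state = step(state, symbol)
        trace.append(state)
    return trace

inputs = [1, 1, 0, 1, 0, 0, 1]
assert run(step_table, inputs) == run(step_branchy, inputs)
print(run(step_table, inputs))   # identical trajectories, different "substrate"
```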
 

I'm familiar with Chalmers - discussed in C&P.

My question is, given org invariance, what (inorganic) physical materials will be used?


Sent from my iPhone using Tapatalk
 
Recent improvements
Computational devices have been created in CMOS, for both biophysical simulation and neuromorphic computing. More recent efforts show promise for creating nanodevices[10] for very large scale principal components analyses and convolution. If successful, these efforts could usher in a new era of neural computing[11] that is a step beyond digital computing, because it depends on learning rather than programming and because it is fundamentally analog rather than digital even though the first instantiations may in fact be with CMOS digital devices.
Between 2009 and 2012, the recurrent neural networks and deep feedforward neural networks developed in the research group of Jürgen Schmidhuber at the Swiss AI Lab IDSIA have won eight international competitions in pattern recognition and machine learning.[12] For example, multi-dimensional long short term memory (LSTM)[13][14] won three competitions in connected handwriting recognition at the 2009 International Conference on Document Analysis and Recognition (ICDAR), without any prior knowledge about the three different languages to be learned.
Variants of the back-propagation algorithm as well as unsupervised methods by Geoff Hinton and colleagues at the University of Toronto[15][16] can be used to train deep, highly nonlinear neural architectures similar to the 1980 Neocognitron by Kunihiko Fukushima,[17] and the "standard architecture of vision",[18] inspired by the simple and complex cells identified by David H. Hubel and Torsten Wiesel in the primary visual cortex.
Deep learning feedforward networks, such as convolutional neural networks, alternate convolutional layers and max-pooling layers, topped by several pure classification layers. Fast GPU-based implementations of this approach have won several pattern recognition contests, including the IJCNN 2011 Traffic Sign Recognition Competition[19] and the ISBI 2012 Segmentation of Neuronal Structures in Electron Microscopy Stacks challenge.[20] Such neural networks also were the first artificial pattern recognizers to achieve human-competitive or even superhuman performance[21] on benchmarks such as traffic sign recognition (IJCNN 2012), or the MNIST handwritten digits problem of Yann LeCun and colleagues at NYU.

Artificial neural network - Wikipedia, the free encyclopedia
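For anyone wondering what "training by backpropagation" actually amounts to, here is a toy example in plain Python with numpy. It is nothing like the LSTM or convolutional systems in the quote above, just the bare learning principle applied to the XOR problem, with settings I made up for the sketch.

```python
# A toy feedforward network trained by backpropagation; only the bare learning
# principle, not anything resembling the large systems in the quoted article.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR targets

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)    # input -> hidden
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)    # hidden -> output
lr = 1.0                                               # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradient of squared error through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent updates
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))   # should move toward [[0], [1], [1], [0]]
```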
 
I'm familiar with Chalmers - discussed in C&P.

My question is, given org invariance, what (inorganic) physical materials will be used?


Sent from my iPhone using Tapatalk

The current research is using digital computers:

Reconstructing the brain piece by piece and building a virtual brain in a supercomputer—these are some of the goals of the Blue Brain Project. The virtual brain will be an exceptional tool giving neuroscientists a new understanding of the brain and a better understanding of neurological diseases.

The Blue Brain project began in 2005 with an agreement between the EPFL and IBM, which supplied the BlueGene/L supercomputer acquired by EPFL to build the virtual brain.
In brief | EPFL

But as the other links I've posted suggest, the sky's the limit: nanotech, gas manipulated at the molecular level, etc.
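As a crude illustration of what simulating neural tissue on a digital computer involves at the smallest possible scale, here is a toy leaky integrate-and-fire neuron. Blue Brain uses far more detailed multi-compartment models; this is only my own sketch with arbitrary parameters.

```python
# A single leaky integrate-and-fire neuron, the simplest possible example of
# simulating a neuron digitally. Parameters are arbitrary illustrative values,
# nothing to do with Blue Brain's detailed models.
def simulate_lif(input_current, t_max=0.5, dt=0.001, tau=0.02,
                 v_rest=-65.0, v_reset=-65.0, v_threshold=-50.0, r=10.0):
    """Return spike times (seconds) for a constant input current (arbitrary units)."""
    v = v_rest
    spikes = []
    for step in range(int(t_max / dt)):
        # membrane potential leaks toward rest and is driven by the input
        v += (-(v - v_rest) + r * input_current) * (dt / tau)
        if v >= v_threshold:
            spikes.append(step * dt)
            v = v_reset                     # fire and reset
    return spikes

print(len(simulate_lif(2.0)), "spikes in 0.5 s")   # stronger input ...
print(len(simulate_lif(4.0)), "spikes in 0.5 s")   # ... fires more often
```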
 
The current research is using digital computers:

Reconstructing the brain piece by piece and building a virtual brain in a supercomputer—these are some of the goals of the Blue Brain Project. The virtual brain will be an exceptional tool giving neuroscientists a new understanding of the brain and a better understanding of neurological diseases.

The Blue Brain project began in 2005 with an agreement between the EPFL and IBM, which supplied the BlueGene/L supercomputer acquired by EPFL to build the virtual brain.
In brief | EPFL

But as the other links I've posted suggest, the sky's the limit: nanotech, gas manipulated at the molecular level, etc.

They do suggest this in a very general way. But is there also a discussion of how AI will avoid Artificial Viruses and Artificial Mental Illnesses? Generally, how does a complex system avoid complex problems? Complex problems seem to be a part of complex systems ... can you provide a counterexample?

Yes, scientists have said things are impossible that have, or may yet, come true, but the opposite is just as true: from Leibniz's "Gentlemen, let us calculate" over three hundred years ago, to GOFAI's promises of AI, to annual promises of perfected versions of last year's technology ... to which there are always economic barriers as well as technological ones. Development of perfect technology would only go forward under already perfect human oversight. An old sci-fi/utopian trope.

An iPhone that self-repairs and lasts a thousand years? Will your transhuman platform come with a warranty and maintenance agreement?

:-)

The sky is also the limit as to problems ... Do we leave these for the AI to deal with being smarter than we are?

Again, no specifics on the materials ... what properties will the nano-assembled materials or gas clouds need to have that are superior to carbon and water? Will the form be humanoid? Is your consciousness a plastic form composed of raw experience that can be folded into any container, or is it structured by your anthropoid embodiment, and if the latter, how will it function in an alien form or body?

Back to my finitude ... What are the odds this will be ready to go and in my price range before my current platform wears out or is beyond repair?

If it's available and within my reach I might consider it ... But right now the safe money is that I'm going to die. If I can come to terms with that, find peace and even meaning in it, I'll be happier, and having prepared for it I can put it away and enjoy the time I do have and value the experiences I can pursue with effort in real life ... as opposed to dialing in by virtual reality.


Sent from my iPhone using Tapatalk
 
The sky is also the limit as to problems ... Do we leave these for the AI to deal with being smarter than we are?

Funny you should ask that. I was just reading about a recent transhumanism conference in which a new and vocal light in the field suggested that what we should do, once AI has reached the 'singularity', is to ask it to figure out 'what we would have most wanted it to do with the world had we been able to figure it out'.
 
Funny you should ask that. I was just reading about a recent transhumanism conference in which a new and vocal light in the field suggested that what we should do, once AI has reached the 'singularity', is to ask it to figure out 'what we would have most wanted it to do with the world had we been able to figure it out'.

Ask it politely! :-)

It reminds me of Clifford Pickover's question:

"If an alien comes to you and asks, "What is the most important question we can ask humanity and what is the best possible answer you can give?" the safest reply is,

"You have just asked the most important question you can ask humanity, and I'm giving you the best possible answer."







Sent from my iPhone using Tapatalk
 
Again, no specifics on the materials ... what properties will the nano-assembled materials or gas clouds need to have that are superior to carbon and water? Will the form be humanoid?

Yes, any new system will bring with it new, system-specific problems.

Let me ask you this: which is better suited to storing and retrieving, word for word, the entire works of Shakespeare?

The biological data bank, or the artificial one?

Purpose-designed systems are by their inherent nature improvements on biological ones: birds fly; we fly at hypersonic speeds over longer distances.

We may even use DNA itself as the storage medium, or cultured biocells may work better than silicon storage; the optimum might even, in the short term, be a combination of both.
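To make the "word for word" point concrete, here is a trivial sketch: digital storage is lossless and checkable in a way biological memory is not. The file name is made up for illustration.

```python
# Trivial sketch of verbatim digital storage and retrieval: the copy can be
# verified bit-for-bit against a fingerprint. The file name is made up.
import hashlib

def store(text, path):
    data = text.encode("utf-8")
    with open(path, "wb") as f:
        f.write(data)
    return hashlib.sha256(data).hexdigest()    # fingerprint of the exact bytes

def retrieve(path, expected_digest):
    with open(path, "rb") as f:
        data = f.read()
    # confirm the copy is bit-for-bit identical before trusting it
    assert hashlib.sha256(data).hexdigest() == expected_digest
    return data.decode("utf-8")

works = "The complete works would go here; any text survives verbatim."
digest = store(works, "complete_works.txt")
print(retrieve("complete_works.txt", digest) == works)   # True: every word identical
```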

“We are now in a situation like molecular biology was a few years ago when people started to map the human genome and make the data available,” Meier says. “Our colleagues are recording data from neural tissues describing the neurons and synapses and their connectivity. This is being done almost on an industrial scale, recording data from many, many neural cells and putting them in databases.”

Brain On A Chip? -- ScienceDaily

Canadian scientists have successfully connected brain cells to a silicon chip to "hear" conversation between brain tissue.

Human brain on a microchip nearly ready - The Hindu

I created this thread to bookmark emergent technologies in this field because I suspect it's related to the UFO enigma.

If I'm right, a hundred years from now readers will say that Mike guy was ahead of his time.

If I'm wrong, I'm wrong.

I'm not here to change anyone's mind on this topic; if you don't think it's possible, that's fine.

As I have shown, history is replete with scientists who said it can't be done, only to be ignored by other scientists who did it anyway.

The difference is, those who say it can't be done can't prove their negative.

The research being done in this area presents a path to testing the idea in a way that is empirical, something the naysayers can't do.

My personal view is that everything in the physical universe can be deconstructed, understood and replicated. If consciousness doesn't just reside in the brain (though I think it does), then we will chase that down too, deconstruct the mechanism and replicate that too.

Just my opinion; I'm not trying to force that worldview on anyone else.

But I think the methodology of trying, and either succeeding or failing, beats deciding in advance that it can't be done and not even trying.
 