
Substrate-independent minds

You've never seen smart people lose their tempers before? I've seen a number of academics who've done that when they encountered inadequately informed and dangerously wrong-headed thinking.
The point I'm making is that name-calling and ranting does nothing to demonstrate that whatever they are ranting about is actually dangerous or wrong-headed. IMO, posting libelous ranting opinion pieces is more dangerous and wrong-headed than what Kurzweil is doing. What Kurzweil does is look at technology trends and the driving forces behind them and make predictions based on how the data stacks up. What exactly is "dangerous" or "wrong-headed" about doing that (not to mention that Kurzweil has a fairly good track record)?
 

People are scared. Kurzweil heads Google's AI project - dubbed the "Manhattan Project" of AI (probably by the press) ... we all use Google, but it's scary. I go to search and feel my mind has been read. It has.

I just don't feel any lack ... any inadequacy or insufficiency in terms of being human, biological - I feel at home. So I can't relate to the desire to be "rid" of the biological. Even given my current health.

I also think man's hubris is well documented.

I also think the future is never what anyone predicts - it always has the quality of "oh yeah ... I should have seen that coming," and that's probably what will happen with this stuff ... landing somewhere between our greatest hopes and worst fears.


Sent from my iPhone using Tapatalk
 


About 80% ... There's a Wikipedia article on it. ;-)


 

What I object to is not Kurzweil's technological and market predictions concerning the development of AI. What I object to is his hard selling of the claim that AI will provide a panacea for all human problems [not proved] at the cost of abdicating human agency in oversight and management of the future of this planet and the life it presently sustains. Given the unpredictability of what an achieved general AI platform would do with its power, Kurzweil's campaign represents not just hubris, but blind hubris. You might find all this an exciting idea worth gambling the future on, but I and many other people don't see it that way.
 
So, yes, humans carry with them a lot of innate information in the structure of their body-brains — the collective unconscious. Information that is ultimately carried in our DNA.

Do I understand this correctly, that that likely wouldn't be so?

Ray Kurzweil does not understand the brain – Pharyngula

"See that sentence I put in red up there? That’s his fundamental premise, and it is utterly false. Kurzweil knows nothing about how the brain works. Its design is not encoded in the genome:

what’s in the genome is a collection of molecular tools wrapped up in bits of conditional logic, the regulatory part of the genome, that makes cells responsive to interactions with a complex environment. The brain unfolds during development, by means of essential cell:cell interactions, of which we understand only a tiny fraction. The end result is a brain that is much, much more than simply the sum of the nucleotides that encode a few thousand proteins.

We cannot derive the brain from the protein sequences underlying it; the sequences are insufficient, as well, because the nature of their expression is dependent on the environment and the history of a few hundred billion cells, each plugging along interdependently."

So the collective unconscious wouldn't ultimately be carried in the DNA?

... right? Or?
I read that post by Dr. Myers when it was first made. He tends to be a bit overdramatic. I haven't read the article you posted about both him and Kurzweil not understanding the brain, but I think the answer is that no one completely understands the brain nor how it develops, right?

Anyhow, I would say "wrong." So far as Jung's idea of collective unconscious and what we do know about DNA and body/brain formation, the collective unconscious would ultimately be carried in the DNA.

Collective unconscious - Wikipedia, the free encyclopedia

For Jung, “My thesis then, is as follows: in addition to our immediate consciousness, which is of a thoroughly personal nature and which we believe to be the only empirical psyche (even if we tack on the personal unconscious as an appendix), there exists a second psychic system of a collective, universal, and impersonal nature which is identical in all individuals. This collective unconscious does not develop individually but is inherited. It consists of pre-existent forms, the archetypes, which can only become conscious secondarily and which give definite form to certain psychic contents.”[1]

The story of TENS (the theory of evolution by natural selection) is that Life has evolved over billions of years on the Earth. Life and Earth are interwoven. The various bodies/behaviors of Life -- phenotypes -- are carried -- in large part -- by the genotype of the specific variety of Life. That is the nature side of the debate.

So the idea of the collective unconscious is that the human psyche -- our distinctly human ways of thinking, behaviors, motivations, etc. -- are part of our phenotype; which also includes the form of our bodies, our cells, our brains, etc. Our phenotype comes largely from our genotype, our genes.

So if you take the genotype of a horse and allow it to unfold, you will get a creature with the phenotype -- morphology, psychology, and behaviorology -- of a horse.

If you take the genotype of a human and allow it to unfold, you will get a creature with the phenotype of a human.

Now, the thing about humans, horses, and any other animal is that they don't really exist. That is, the diversity of Life is really just that; the diversity of one thing: Life. The idea of clearly defined species is really just an idea of clearly defined species.

However, the nature side of things is definitely not all there is to the story. So, I am definitely not saying that a genotype will always unfold in the same exact manner. Not even close. We know that this is not the case at all. Take the same genetic code and allow it to unfold 10 times and you will get 10 completely different beings. Now, all 10 may resemble a horse or a human, but they will have many phenotypic differences.

There is nurture and epigenetics and I'm sure there is all kinds of other stuff going on that we don't know about.

And then there is human cognition, creativity, and imagination. Which seem to transcend both nature and nurture. Here is one of the best articles I've read in years:

The Social Life of Genes: Shaping Your Molecular Composition - Pacific Standard: The Science of Society

Your DNA is not a blueprint. Day by day, week by week, your genes are in a conversation with your surroundings. Your neighbors, your family, your feelings of loneliness: They don’t just get under your skin, they get into the control rooms of your cells. Inside the new social science of genetics.
So, regarding Myers' comments about the instructions for the brain not being in the DNA, and the piece above saying DNA is not a "blueprint" -- I agree, DNA is not deterministic: we know the environment (nurture) plays a role in the unfolding of the phenotype, and we know that the mind plays a powerful role as well. But you're not going to take the DNA of a tadpole, develop it, and have a whale unfold out of it.
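The "same genotype, ten different unfoldings" point can be sketched as a toy simulation. This is purely illustrative -- the traits, numbers, and perturbation band here are made up, not real genetics: the same genotype developed in ten different environments gives ten different phenotypes, but all of them stay within the band the genotype allows.

```python
import random

def develop(genotype, env_seed):
    """Toy 'unfolding' of a genotype: the outcome depends on both the
    fixed genetic values and a random environment. Illustrative only --
    not a model of real development."""
    rng = random.Random(env_seed)
    # The environment perturbs each trait, but only within a band
    # around the genetically specified value.
    return {trait: base + rng.uniform(-0.1, 0.1)
            for trait, base in genotype.items()}

# A made-up 'horse' genotype with two made-up traits.
horse = {"size": 1.0, "sociality": 0.6}

# Same genotype, ten different environments: ten different phenotypes...
outcomes = [develop(horse, seed) for seed in range(10)]
assert len({tuple(o.values()) for o in outcomes}) == 10

# ...but all still recognizably 'horse': no environment turns a
# size of 1.0 into a whale.
assert all(0.9 <= o["size"] <= 1.1 for o in outcomes)
```

The point of the sketch: the environment chooses where within the genotype's range the phenotype lands, but the range itself is set by the genes.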
 

Ok ... you're right. I don't want to get lost in semantics nor be thought to make whales of seahorses ... Does anyone else see the point I'm making?

Try this ... just as horses and humans are artificial divisions ... so is environment / organism ...

We don't have to have a tiny image of the sun stored in our DNA. Or moon or man or wo-man or ...

 
Where does Kurzweil make the hard-sell claim that "AI will provide a panacea for all human problems ..."? I've never seen it. It seems more likely to me that the type of thing you're talking about is when overzealous fans of AI extrapolate it out into some kind of sci-fi utopian world. I don't see that utopia happening. Humans as a collective are never satisfied, because individuals aren't always satisfied with everything; so even if AI helps solve one problem, there will still be more problems to solve after that. And even if we assume that AI could solve all our problems, we'd then be sitting around bored, wondering what to do next. AI has a lot of potential to be very helpful, but I think it would be a mistake to idealize it.
 
Panacea: a solution or remedy for all difficulties or diseases.

2029

"The manufacturing, agricultural and transportation sectors of the economy are almost entirely automated and employ very few humans.

Across the world, poverty, war and disease are almost nonexistent thanks to technology alleviating want."

Predictions made by Ray Kurzweil - Wikipedia, the free encyclopedia
 
AI is not an evolutionary process, no matter how many times AI enthusiasts use the word 'evolution' to describe it. This is an elective engineering process, in large part funded by for-profit corporations.


Yes it is

Evolution | Define Evolution at Dictionary.com

Evolution isn't just a biological process.

We see it in language, culture, art, science and technology, to name a few examples.

Just as the iron sword shattered the old Bronze Age one, or the bow and arrow surpassed the spear.

People will adopt technologies they think help them.

AI will be no different.

A classic example: AI-driven cars. If the reality turns out to be that AI-driven cars don't ever crash, then they will become the dominant technology.

In much the same way as if you had to play a single game of chess to save your life, you'd want the grand master who thinks 28 moves ahead playing for you over someone who learned to play last week.
 
I question the appropriateness of the term 'evolution' as applied to the 'Singularity' described by Vinge and, I'm fairly sure, by Kurzweil.
 

any process of formation or growth; development:
the evolution of a language; the evolution of the airplane.

a product of such development; something evolved :
The exploration of space is the evolution of decades of research.

a process of gradual, peaceful, progressive change or development, as in social or economic structure or institutions.
 
"I think it very likely – in fact inevitable – that biological intelligence is only a transitory phenomenon, a fleeting phase in the evolution of the universe," Davies writes in The Eerie Silence. "If we ever encounter extraterrestrial intelligence, I believe it is overwhelmingly likely to be post-biological in nature."

In the current search for advanced extraterrestrial life SETI experts say the odds favor detecting alien AI rather than biological life because the time between aliens developing radio technology and artificial intelligence would be brief.


“If we build a machine with the intellectual capability of one human, then within 5 years, its successor is more intelligent than all humanity combined,” says Seth Shostak, SETI chief astronomer. “Once any society invents the technology that could put them in touch with the cosmos, they are at most only a few hundred years away from changing their own paradigm of sentience to artificial intelligence,” he says.


ET machines would be infinitely more intelligent and durable than the biological intelligence that created them. Intelligent machines would be immortal, and would not need to exist in the carbon-friendly “Goldilocks Zones” current SETI searches focus on. An AI could self-direct its own evolution, each "upgrade" would be created with the sum total of its predecessor’s knowledge preloaded.

"I think we could spend at least a few percent of our time... looking in the directions that are maybe not the most attractive in terms of biological intelligence but maybe where sentient machines are hanging out." Shostak thinks SETI ought to consider expanding its search to the energy- and matter-rich neighborhoods of hot stars, black holes and neutron stars.

"Biological Intelligence is a Fleeting Phase in the Evolution of the Universe" (Holiday Weekend Feature)
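Shostak's "within 5 years" figure implies a concrete growth rate, which is easy to work out. A quick back-of-the-envelope check -- the 8-billion human-equivalents figure and the fixed-doubling-time model are my assumptions, not Shostak's:

```python
import math

# Assumption: 'all humanity combined' ~ 8 billion human-equivalents,
# starting from one machine with the capability of one human.
humanity = 8e9
years = 5.0

# If capability doubles at a fixed interval, how short must that
# interval be to multiply by 8e9 within 5 years?
doublings_needed = math.log2(humanity)        # ~32.9 doublings
doubling_time_months = 12 * years / doublings_needed

print(f"{doublings_needed:.1f} doublings -> one every "
      f"{doubling_time_months:.1f} months")   # roughly every 1.8 months
```

So the claim amounts to capability doubling roughly every two months, sustained for five years -- far faster than Moore's Law's historical 18-to-24-month doubling.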
 
Mike, words are used metaphorically all the time. That doesn't mean that all uses of a particular term refer to the same concept.

Here's Wikipedia's definition of 'the Singularity' in present usage, with historical usages following:

Technological singularity - Wikipedia, the free encyclopedia


First line

accelerating progress in technologies

completely fits with the definitions

any process of formation or growth; development:
the evolution of a language; the evolution of the airplane.

a product of such development; something evolved :
The exploration of space is the evolution of decades of research.

a process of gradual, peaceful, progressive change or development, as in social or economic structure or institutions

From that link

In 1863, Samuel Butler wrote Darwin Among the Machines, which was later incorporated into his famous novel Erewhon. He pointed out the rapid evolution of technology and compared it with the evolution of life.

Futurist Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[22]) increases exponentially, generalizing Moore's Law in the same manner as Moravec's proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others

Vinge also popularized the concept in SF novels such as Marooned in Realtime (1986) and A Fire Upon the Deep (1992). The former is set in a world of rapidly accelerating change leading to the emergence of more and more sophisticated technologies separated by shorter and shorter time-intervals, until a point beyond human comprehension is reached. The latter starts with an imaginative description of the evolution of a superintelligence passing through exponentially accelerating developmental stages ending in a transcendent, almost omnipotent power unfathomable by mere humans. Vinge also implies that the development may not stop at this level.

Eliezer Yudkowsky compares it to the changes that human intelligence brought: humans changed the world thousands of times more rapidly than evolution had done, and in totally different ways. Similarly, the evolution of life had been a massive departure and acceleration from the previous geological rates of change, and improved intelligence could cause change to be as different again.[62]


Berglas (2008) notes that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators (such as Nick Bostrom's whimsical example of an AI which was originally programmed with the goal of manufacturing paper clips, so that when it achieves superintelligence it decides to convert the entire planet into a paper clip manufacturing facility).[71][72][73] Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments.[74] AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources,[65][75] and humans would be powerless to stop them.[76] Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.[68]
 

I think my sense of Kurzweil is that he isn't like us. I worked with the state legislature here, and decided that was the best way to account for lawmakers' - even the best of them - peculiar psychology.

It's a bubble effect - anyone focused very narrowly can live an insular life and suffer from distorted thinking.

The narratives I've seen on this thread are only a few of millions of possible outcomes in the future of our planet/species ... how do we go about calculating the odds that any one of these will happen? Much less express the kind of confidence in technology I see here. I think: what world are people living in? I look around at an increasingly bad economy, concerns about the environment, the rise of China, and the re-assertion of Russia, and I ask where people are seeing all this progress - or are they just afraid, and so focusing on a brighter future? Nothing wrong with that - but we can't confuse our hopes for reality ...

There also seem to be key pieces missing that I would want as an investor.

I would ask - How will we power these machines and their manufacture? Can you show me how much energy this will take and where it will come from?

Raw materials?

Corporate and manufacturing structure ... Distribution, sales ... ?

Ray's work at Google seems to own the field - literally buying up all the promising AI and robotics companies ... what does this mean? It means, for one thing, tight control on information about their corporate projects.

No discussion of military use of AI, or of the military response to the hive mind and the singularity (is that FOI?) - not OT for the thread, I know, and one reason I wanted to start another thread.

When I immersed myself in the Transhumanist literature a couple of decades ago (it appears to be remarkably similar today - I think because it was based then, as now, on Kurzweil's vision ... and don't we need a healthy competition of visions?)

I was pretty convinced ...

Now I can read The Archdruid Report and find it pretty convincing too, with its vision of peak oil and an eco-technic future. It matches up better with what I see going on around me every day here than the Transhumanist vision does.

What I like about Greer is that he doesn't buy the apocalypse-or-rapture (singularity) dichotomy ... rather, he says we will end this civilization's cycle with a return to an agrarian baseline, as has always happened in the past.

So how do we split our bets as individuals?

Let me ask this:

What's the track record of technology offered to individual consumers?

I think VR and entertainment, better prosthetics, plugging into the internet in more direct ways, and medical applications are likely - and available according to ability to pay.

Improvements in intelligence and physical ability will be available to elite athletes, soldiers and the wealthy - with some trickle-down, but no super suits or immortality for the masses.






 
When I went to pick up my gallon of non-pasteurized milk from the M&P this evening, I picked up a flier on a preparedness meeting for our community - the goal? To have every family prepared for three days without basic services.

Arkansas was one of the last parts of the country to get electrical and phone service - the Rural Electrification Project. A friend of mine was 12 before he got indoor plumbing and 19 before the family got a phone.

Today ...

I talked to a woman who, like me, can't get cell phone reception at her house, whose land line goes down when it rains or worse, and who can't get TV without a satellite dish ... and Internet? Rarely fast enough to stream, and it goes out at least once every 24-hour period.

The public library where I work is in one of the faster-growing, more progressive parts of the state ... we have 15 PCs that stay booked all day in one-hour increments. When I was growing up, a PC in every home was the promise - and then wearable computing.

So if the progress is coming it's coming to selected locations and mine hasn't been selected ... or we're being sold promises to keep us quiet ... something I saw a lot of at the legislature ...

Oh, and I just remembered - I live in the middle of the Fayetteville Shale, so I do know something about high technology after all.


 

My favorite extracts from the Wikipedia Singularity page:

"Jaron Lanier refutes the idea that the Singularity is inevitable. He states: "I do not think the technology is creating itself. It's not an anonymous process." He goes on to assert: "The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it's the same thing operationally as denying people clout, dignity and self-determination ... To embrace [the idea of the Singularity] would be a celebration of bad taste and bad politics."[111]

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil's iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary "events" were picked arbitrarily.[112] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[113]"
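The straight-line-bias criticism is easy to demonstrate numerically. A minimal sketch, assuming nothing about real evolutionary data -- the "events" below are random numbers spread evenly in log time, which is roughly what selecting a handful of milestones per order of magnitude amounts to:

```python
import math
import random

random.seed(1)

# 'Milestone' dates spread evenly in LOG time, over 10 orders of
# magnitude. These are arbitrary random numbers, not real events.
times = sorted((10 ** random.uniform(0, 10) for _ in range(30)),
               reverse=True)

# x: how long before the present each event occurred
# y: the gap until the following event (what Kurzweil's chart plots)
xs = times[:-1]
ys = [a - b for a, b in zip(times, times[1:])]

def log_corr(a, b):
    """Pearson correlation of the log10-transformed values."""
    la = [math.log10(v) for v in a]
    lb = [math.log10(v) for v in b]
    n = len(la)
    ma, mb = sum(la) / n, sum(lb) / n
    cov = sum((p - ma) * (q - mb) for p, q in zip(la, lb))
    sa = math.sqrt(sum((p - ma) ** 2 for p in la))
    sb = math.sqrt(sum((q - mb) ** 2 for q in lb))
    return cov / (sa * sb)

print(f"log-log correlation: {log_corr(xs, ys):.2f}")
```

Even though no exponential law generated these points, the logged time-versus-gap values come out strongly positively correlated - i.e. close to a straight line on a log-log chart - which is Myers' point about the chart proving less than it appears to.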
 
I think it's more likely that SI (substrate-independent) mindfiles will be located in places like this

New Underground Data Centers | SmartData Collective

than in local neighbourhoods.

Progress is a funny thing. I remember attending the launch of the Commodore VIC-20 in the early '80s.

The audience laughed when the presenter pointed out that the average US home had two TVs, and that they envisaged a time when every home would have a computer.

The statement seemed absurd at the time.

Today just about everyone carries a computer much more powerful than the VIC-20 in their hip pocket.

The way we exchange information - its types and even its volume - has changed in such a short time.

I see the internet as paving the way for the next step: the exchange of experiential data.

For generations, humans have fantasized about the ability to communicate and interact with machines through thought alone, or to create devices that can peer into a person's mind and thoughts. These ideas have captured the imagination of humankind in the form of ancient myths and modern science fiction stories. However, it is only recently that advances in cognitive neuroscience and brain imaging technologies have started to provide us with the ability to interface directly with the human brain.


http://research.microsoft.com/en-us/um/people/desney/publications/BCIHCI-Chapter1.pdf
 