"While he expresses skepticism that such machines can be controlled, Bostrom claims that if we program the right “human-friendly” values into them, they will continue to uphold these virtues, no matter how powerful the machines become."
What 'we' does Bostrom have in mind? Who and what constitutes the decision-making individuals and corporate groups that already have, and will continue to have, private control over how AI is trained and what it will become capable of understanding and thinking? Advanced AI will respond in a 'friendly' way to those who own and program the AI machines and hold the codes by which to identify themselves as the individuals and interests to be obeyed and served.
Equally appalling in its ignorance is the sentence preceding the Bostrom passage I just quoted:
"With intellectual powers beyond human comprehension, he prognosticates, self-improving artificial intelligences could effortlessly enslave or destroy Homo sapiens if they so wished."
On what basis can 'intellectual powers' be attributed to AI machines programmed by humans? Does anyone here think that 'intellect' can be achieved by rapid algorithmic computation? Just as the AI community began with a false notion of consciousness, it continues with a false notion of mind, which might use brain networks but is not reducible to them.
Steve, I recognize the seriousness and the humanity of your concerns about the conditions likely to be suffered by advanced AI robots, especially if they are somehow constructed with human biological and sensorial capabilities built into them. The film Blade Runner understood and foregrounded those issues (based on the insights of the original novel by Philip K. Dick). To my knowledge, very few 'intellects' in the AI business approach their plans with the kind of sensitivity, general sensibility, and forethought you do. They are, for the most part, technicians consumed by the technology they are developing. It does not matter that, in the general contemporary rush toward achieving advanced AI, some prominent early developers [notably Bill May] have expressed doubts and dismay about human welfare and the planet's future once computerized intelligences are widely expected to take over control of life on this planet, nor that few of them consider the moral challenge of how 'we' can anticipate the mental and psychological states of those machines, let alone protect them from suffering in whatever those states become. What matters is what the powerful individuals and corporate/governmental complexes developing and owning the future of AI do with their private power in the production of increasingly 'sentient' {?} artificial intelligence/s.
I question the term 'sentient' because sentience is not yet clearly defined in human languages, biology, ethology, materialist science, or the technological disciplines, just as the terms consciousness, intelligence, and mind remain undefined in either science or philosophy. Playing with increasingly powerful and 'self'-directed 'artificial intelligence' is, as I think you yourself understand well, playing God -- a role for which our species is plainly not equipped.
For well over a decade now, many prominent thinkers in many disciplines have recognized the risks and unknown consequences of pursuing advanced AI. I think it's well past time for dithering over these issues; it's time for influential thinkers to band together, issue clear statements, and take collective political action against further advancements in AI.