smcder
I've finished re-reading Being and Nothingness as well as Nausea, which is actually one of my favourite books.
And while I dearly love his writing, I'm left feeling more than a little like I just listened to the Cure's Disintegration album one too many times; it's clearly coloured by his feeling of angst about the whole matter.
And on existentialism, Heidegger actually criticized him along the following lines, as did other existentialists:
"Existentialism says existence precedes essence. In this statement he is taking existentia and essentia according to their metaphysical meaning, which, from Plato's time on, has said that essentia precedes existentia. Sartre reverses this statement. But the reversal of a metaphysical statement remains a metaphysical statement. With it, he stays with metaphysics, in oblivion of the truth of Being."And, at any rate, I utterly and completely fail to see what this has to do with consciousness itself, or how to recreate it, or how it's generated by the brain. How he deals with authenticity and concepts of free will, however, I have always found to be fascinating:
Bad faith (existentialism) - Wikipedia, the free encyclopedia
His struggle with "two modes of consciousness" I have always found to be far, far more simply explained by thinking about consciousness as a parallel, multithreaded activity. One thread only becomes aware of the other when signalled.
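To make the analogy concrete, here's a minimal sketch (my own illustration, not anything from Sartre or the literature; the thread names are just hypothetical labels for his two modes): two threads run independently, and one only becomes aware of the other at the moment it is signalled.

```python
import threading
import time

# The Event is the only channel by which one thread becomes "aware" of the other.
signal = threading.Event()

def reflective_thread():
    # Absorbed in its own activity; notices the other thread only when signalled.
    while not signal.wait(timeout=0.1):
        pass
    print("B: I just became aware that A did something.")

def pre_reflective_thread():
    time.sleep(0.3)   # busy with its own activity, oblivious to B
    signal.set()      # the signal is the moment of "awareness" for the other thread
    print("A: signalled the other thread.")

a = threading.Thread(target=pre_reflective_thread)
b = threading.Thread(target=reflective_thread)
b.start(); a.start()
a.join(); b.join()
```

Until `signal.set()` is called, each thread carries on as if the other didn't exist, which is roughly the point of the analogy.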
This also, of course, gives rise to far less angst, and far fewer Morrissey albums, so it isn't as cool to talk about while wearing black turtlenecks and smoking clove cigarettes over glasses of absinthe.
I actually anticipate working with AIs, hopefully within my lifetime, that contemplate existence at a far higher plane of understanding than we humans currently possess.
What will these AIs look like?
Here are some questions about AI that I have, which may be very naïve:
Embodied cognition tells us that the world we live in, and how we think and communicate, comes from the kind of bodies we have - so if a robot is going to live in our world, make sense of it, and communicate intelligibly with us, it's going to have to be a lot like us - if not exactly like us.
We say dolphins and chimps are smart, but we haven't had much luck ... now maybe that's a bad example, but human intelligence is what it is because it is in a human body. A dolphin is never going to appreciate chess or a slam dunk. And I can't imagine having sonar. As for the chimp, how long could you make it in their society? I bet you'd break all kinds of rules ....
So, if it isn't physically like us, it could be inscrutable to us - truly alien, with no basis for communication and only basic goals in common (survival, reproduction, resources) ... if it's wheeled, or has six legs or four eyes or radar, it isn't going to think like us; if it doesn't feel pain like we do, doesn't have moods, doesn't sleep or enjoy food or go to the bathroom or get embarrassed ... then how will we work with such creatures? What will we have in common? Their world - this world, currently our world - will look very different to such beings.
Another reason it has to at least look like us and behave like us: if it's too weird, we aren't going to put up with it, and vice versa. We don't do such a good job putting up with one another as it is. So suppose we do create some new kind of AI with emotions and everything, but very different from us - or very similar but superior physically, mentally, and emotionally - and suppose its response to us is disgust? Or, perhaps worse, pity. Empathy will be hard to come by the more difference there is.
If it's AI in a box, OK - you simulate the human platform, a virtual world - but you'd better never let it figure out it's in a box you built ... Chalmers hits that note with the no red pills clause. Good luck, if it's as smart as or smarter than us ... prisoners escape because they have all day to think of one thing; my dogs get out of my locked house for the same reason. Now you've got something with computing resources and unlimited time that is just like us, only smarter ...
And speaking of resources, if it's going to be like us and live in our world, be the same size and weight as us - it's going to be made of the same materials ... if AI is going to be cheap and plentiful, it will be made of the same cheap and plentiful elements we are ... which means it will compete for our resources.
People say in a few years robots will be doing all the mundane tasks - cleaning, cooking and even dirty and dangerous jobs. But to do those things it takes a human level of intelligence - and not just close, either.
Take waiting tables: sure, "anyone" can do it, but it takes skill and intelligence (and patience, tact, wit, thinking on your feet, physical stamina, a good attitude, communication skills, memory, etc.) to be good ... so we're going to put a human-like AI made of human-like materials with human-level intelligence to work waiting tab- ... you see where I am going.
Here's where I am going: Why? I read that humans consume about the same energy as a 60-watt light bulb; humans fit in any human-sized space; they access energy the way humans do (eat); they can withstand tremendous compression forces on their structure (bones, etc.) - porters in Greece traditionally carried loads of several hundred pounds on a daily basis; muscular strength and efficiency have been optimized over evolutionary time in terms of the materials they are made of, so making more effective actuators will be a challenge; humans interface with humans in a familiar format (speech, writing) ... and for many jobs they will work for minimum wage and much less.
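A quick back-of-envelope check of that power figure (my own arithmetic, with an assumed ~2,000 kcal/day intake, not a number from the post): it comes out to roughly 100 W of average power, which is indeed in the ballpark of an old incandescent bulb.

```python
# Rough average power of a human on an assumed ~2,000 kcal/day diet.
kcal_per_day = 2000
joules_per_day = kcal_per_day * 4184      # 1 kcal = 4184 J
seconds_per_day = 24 * 60 * 60
watts = joules_per_day / seconds_per_day
print(f"{watts:.0f} W")                   # ~97 W, light-bulb scale
```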
Given the above, what kind of machine would you design to do these jobs .... fireman, policeman, soldier, doctor, psychologist? I bet it would look an awful lot like a person.
Aha! But let's let AI worry about all that: we make AI, and it makes AI+, solving all these problems (assuming the AI accepts the task we give it) ... we're assuming a better mousetrap can be made ... the constraints still stand in terms of energy and materials. Try to make a better 70 kg bipedal system with the available materials - not just one from titanium or special alloys, but hundreds and thousands and millions to serve man - then you'll have to use plentiful materials - then you'll have increased, effectively, the biomass of the planet ...
OR, more likely the AI says, "screw that" and nano-engineers fabulous materials from ... what? From what's around it, what's plentiful, from anything it wants to ... including the kind of stuff we are made of - or us ourselves, 7 billion people, that's a lot of carbon and calcium and water ... free for the taking by a superior force. And why stop there? Animals, plants, water ... but then, isn't that just evolution?
Believe me, I'm open to a much more optimistic vision!