Randall
J. Randall Murphy
There is more mental nutrition here than I can digest, but it still fascinates. I would like to pose a couple of loose questions in hopes of gleaning viewpoints and better understanding.
Student raises his hand from the back of the classroom. He has read but dismally failed to comprehend Roger Penrose's "The Emperor's New Mind". He hopes his questions have not already been answered and he has simply missed them:
-Can the emergent process of human consciousness be deconstructed as no more than a kind of awareness-producing algorithm following an essentially hierarchical set of (self-modifying or externally modified) codes? Does the 'algorithm only' view violate the notion (often attributed to Kurt Gödel, and the math is gibberish to said student) that any sufficiently powerful formal system, or algorithm, contains true statements that cannot be proved within it but only within a larger system, ad infinitum?
-If consciousness is indeed no more than an algorithm operating in the macro rather than the quantum world, does that mean it is substrate independent? Can it be mapped onto silicon-based as well as carbon-based systems?
Are the above questions evidence the student has entered a psychologically dissociative state and should have signed up for "Turfgrass Management 101" instead?
As a former landscape lead hand, I appreciate your humor more than the average Joe would. I think Mike and I will probably have different answers, but I'll take a stab at what you seem to be getting at. What Penrose appears to be doing is applying the principles of quantum physics to information processing in a way that suggests consciousness (the mind) as we experience it cannot be algorithmically modeled. The assumption is that a computational model requires all algorithms to resolve in a predictable manner, which is contrary to the behavior of quantum processes. An example of that contrast would be the difference between a hardware random number generator and a Rube Goldberg machine.
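To make that contrast concrete, here's a rough Python sketch of the distinction I have in mind; it's just my own illustration, not anything out of Penrose. A seeded pseudorandom generator plays the Rube Goldberg role: the same starting state always unwinds into the same chain of outcomes. Bytes pulled from the operating system's entropy pool (often backed by hardware noise) can't be re-derived from the program's own state:

import os
import random

# Seeded pseudorandom generator: like a Rube Goldberg machine, the same
# starting state always produces the same chain of outcomes.
rng = random.Random(42)
print([rng.randint(0, 9) for _ in range(5)])   # identical on every run

# OS entropy pool (often hardware-backed): the program cannot reproduce
# these bytes from its own state alone.
print(list(os.urandom(5)))                     # different on every run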
So Penrose presumes that because our brains include quantum processes, those unpredictable processes are in part responsible for giving rise to our consciousness, and that due to this inherent unpredictability, consciousness cannot be duplicated by sheer calculating power alone. I would suggest that there are a couple of ways around this. First of all, does all consciousness have to be identical to that produced by a human brain to qualify as consciousness? I don't think so. Secondly, how closely can a computational system model the quantum behavior of a human brain?
It turns out that the math of quantum physics has already been used to model particle behavior, and future computers based on the principles of quantum computing are predicted to model particle interactions, including those we see at CERN, with great accuracy. Combine that processing power with the architecture of the human brain and there's no reason to assume that a reasonable AI equivalent won't emerge. In fact, I would suggest that it is already evolving and is currently at a stage analogous to the first digital media: highly pixelated and limited in resolution, a sort of proto-consciousness that, as resolution and real-time processing improve, will at some point become self-aware.
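As a crude illustration of what I mean about the math being computable, here's a toy Python/NumPy sketch of a single qubit's state-vector math running on an ordinary classical machine. It's only a cartoon of the idea (and nothing to do with brains specifically), but it's the same basic recipe that scales up, at exponentially growing cost, when simulating larger quantum systems:

import numpy as np

# Toy state-vector simulation of a single qubit on a classical machine.
ket0 = np.array([1.0, 0.0])                    # |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate (a unitary matrix)
state = H @ ket0                               # equal superposition of |0> and |1>
probs = np.abs(state) ** 2                     # Born rule: probability = |amplitude|^2

# Sample 1000 simulated measurements; expect roughly a 50/50 split.
samples = np.random.choice([0, 1], size=1000, p=probs)
print(probs)                 # [0.5 0.5]
print(np.bincount(samples))  # e.g. [493 507]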
I'm sure you've heard of Kurzweil's book The Age of Spiritual Machines. Although somewhat dated now, it still provides some thought-provoking counterpoint to Penrose.