Not 'woo':
Why Computers Will Never Be Truly Conscious
By Subhash Kak, Oklahoma State University, October 16, 2019
My usage of the word "woo" had nothing to do with consciousness in current computers.
A good article and PDF. Thanks. However, in reading and searching through them, I didn't find the specific sections you cite. Regardless, there are a couple of problems with the quote. Firstly,
experiences themselves aren't stored neurophysiologically. Nobody knows how experiences are created yet. At best neurophysiology is causal, but as you have pointed out in the past, at present we only have correlation.
By contrast, a computer records data in short-term and long-term memory blocks. That difference means the brain's information handling must also be different from how computers work.
Biological memory also has short-term and long-term "modules", but in any case, it's not that sort of information handling that seems relevant to me. Rather, it's the difference between the physical construction of the systems. In other words, no amount of neuron modelling by electronic circuits can make the model into actual neurons. I suspect (though it's yet to be proven) that something about actual brain materials and functions is responsible for consciousness, and those conditions might not arise the way they need to in current electronic designs.
The mind actively explores the environment to find elements that guide the performance of one action or another. Perception is not directly related to the sensory data: A person can
identify a table from many different angles, without having to consciously interpret the data and then ask its memory if that pattern could be created by alternate views of an item identified some time earlier.
The above also has little relevance because, so far as we know, computers don't "consciously interpret the data" either. Additionally, human perception actually is directly related to sensory data. Perhaps it's not always real-time sensory data, but then again, pattern recognition in computers also works with a combination of real-time and stored data.
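To make that last point concrete, here's a minimal sketch of recognition as comparing live input against stored patterns. It's my own toy illustration, not anything from the article: the feature vectors and labels are made up, and a real system would of course be far more elaborate.

```python
# Toy nearest-neighbour match: real-time input compared with stored patterns.
stored = {
    "table": [1.0, 0.9, 0.1],
    "chair": [0.2, 0.8, 0.9],
}

def classify(sensed):
    """Return the stored label whose pattern is closest to the input."""
    def dist(a, b):
        # squared Euclidean distance between two feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(stored, key=lambda label: dist(stored[label], sensed))

# classify([0.9, 1.0, 0.0]) matches live data against stored data -> "table"
```

The point is only that a mix of real-time data (the `sensed` vector) and stored data (the `stored` dictionary) is exactly how machine pattern recognition already works.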
Another perspective on this is that the most mundane memory tasks are associated with
multiple areas of the brain — some of which are quite large. Skill learning and expertise involve
reorganization and physical changes, such as changing the strengths of connections between neurons. Those transformations cannot be replicated fully in a computer with a fixed architecture. . . ."
Computer memory changes its configuration as needed in order to do what it needs to do; it's not a "fixed architecture". Brains can grow new cells, but that's not practically different from simply accessing unused memory, or installing more memory as needed. For these reasons, while I think the writer is intuitively correct, the specific differences mentioned aren't necessarily at the root of the question, whereas the materials and design are.
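As an illustration of why "fixed architecture" isn't the obstacle: in a software neural model, "connection strengths" are just mutable numbers. A toy Hebbian-style update (my own sketch, not from the article; the values are arbitrary):

```python
# "Connection strengths" in software are mutable; nothing fixes them.
weights = [0.1, 0.5, -0.2]   # connection strengths
inputs  = [1.0, 0.0, 1.0]    # which inputs were active
rate    = 0.1                # learning rate

# Hebbian-style sketch: strengthen connections whose inputs were active.
weights = [w + rate * x for w, x in zip(weights, inputs)]
# weights is now [0.2, 0.5, -0.1]
```

So the reorganisation the article describes is routinely modelled on hardware whose physical wiring never changes; whatever the real barrier is, it isn't that.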
". . .
Computation and awareness
In my own recent work, I've highlighted some
additional reasons that consciousness is not computable.
Are you saying you are Subhash Kak? Or are you still quoting someone else?
A conscious person is aware of what they're thinking, and has the ability to stop thinking about one thing and start thinking about another — no matter where they were in the initial train of thought. But that's impossible for a computer to do. More than 80 years ago, pioneering British computer scientist Alan Turing showed that there was no way ever to prove that any particular
computer program could stop on its own — and yet that ability is central to consciousness. . . ."
The above implies a leap in logic, in that its premise applies to situations different from those it sets out. It also assumes that persons can simply stop thinking on command. I see no evidence for this. Humans can change their minds, or shoot themselves in the head, but they cannot simply switch their brains off. At best we can only fall asleep, and even that isn't always guaranteed. Even then, sleep isn't a true off state.
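For reference, what Turing actually showed is narrower than the quote suggests: no single program can correctly decide, for every program, whether it halts. The classic self-referential trick can be sketched as follows (my own illustration; `paradox_maker` and the toy "deciders" are made-up names):

```python
def paradox_maker(halts):
    """Given any claimed halting decider, build a program it gets wrong."""
    def paradox():
        if halts(paradox):
            # The decider says we stop, so loop forever instead.
            while True:
                pass
        # The decider says we loop forever, so stop immediately.
    return paradox

# A naive "decider" that claims every program halts:
always_yes = lambda f: True
p = paradox_maker(always_yes)
# always_yes(p) is True, yet p() would actually loop forever.
# The same construction defeats any candidate decider.
```

Note this is a statement about the limits of one program analysing another, not a claim that a machine (or a person) lacks the ability to stop what it's doing.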
In contrast, computers have advanced to the point where they can do rudimentary self-programming and adapt to environmental conditions. Attach a light sensor to a computer and it can dim, brighten, or turn itself off in response to changing lighting conditions, without any human intervention. A lot more is also possible. Eventually computers will reach the point where they are no longer designed, programmed, or built by us.
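The light-sensor behaviour amounts to a simple feedback rule. A minimal sketch, assuming a hypothetical ambient-light reading in lux (the thresholds and the mapping are invented for illustration):

```python
def adjust_display(lux):
    """Map ambient light (in lux) to a screen brightness in [0, 100]."""
    if lux < 1:          # effectively dark: switch the display off
        return 0
    if lux > 10000:      # full daylight: maximum brightness
        return 100
    # simple proportional mapping in between
    return int(100 * lux / 10000)

# e.g. adjust_display(5000) -> 50; in practice a driver loop would call
# this with readings from the actual sensor.
```

No human intervenes in the loop; the machine's behaviour is driven entirely by its environment.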
When computers evolve to that point, I have little doubt that if they select an option to turn themselves off, they'll be able to do so, but whether or not they will
experience anything in the process is another question altogether.