The answer isn't quite that simple. To quote Chalmers:
What about computers? Although Searle (1990) talks about what it takes for something to be a "digital computer", I have talked only about computations and eschewed reference to computers. This is deliberate, as it seems to me that computation is the more fundamental notion, and certainly the one that is important for AI and cognitive science. AI and cognitive science certainly do not require that cognitive systems be computers, unless we stipulate that all it takes to be a computer is to implement some computation, in which case the definition is vacuous.
What does it take for something to be a computer? Presumably, a computer cannot merely implement a single computation. It must be capable of implementing many computations - that is, it must be programmable. In the extreme case, a computer will be universal, capable of being programmed to compute any recursively enumerable function. Perhaps universality is not required of a computer, but programmability certainly is. To bring computers within the scope of the theory of implementation above, we could require that a computer be a CSA with certain parameters, such that depending on how these parameters are set, a number of different CSAs can be implemented. A universal Turing machine could be seen in this light, for instance, where the parameters correspond to the "program" symbols on the tape. In any case, such a theory of computers is not required for the study of cognition.
Is the brain a computer in this sense? Arguably. For a start, the brain can be "programmed" to implement various computations by the laborious means of conscious serial rule-following; but this is a fairly incidental ability. On a different level, it might be argued that learning provides a certain kind of programmability and parameter-setting, but this is a sufficiently indirect kind of parameter-setting that it might be argued that it does not qualify. In any case, the question is quite unimportant for our purposes. What counts is that the brain implements various complex computations, not that it is a computer.
(Chalmers, "A Computational Foundation for the Study of Cognition")
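To make the parameter-setting idea concrete, here is a minimal sketch in Python. The class ParametrizedAutomaton and the two example "programs" are my own illustrative inventions, not Chalmers' formalism; the point is only that one fixed mechanism implements different computations depending on how its parameters are set.

```python
# A minimal sketch of the parameter-setting idea: one fixed mechanism that
# implements different computations depending on how its "program" parameter
# is set. The names here are illustrative inventions, not Chalmers' own.

class ParametrizedAutomaton:
    """A toy state machine whose transition table is fixed by a 'program'
    parameter, loosely analogous to the program symbols on the tape of a
    universal Turing machine."""

    def __init__(self, program):
        # program maps (state, input_symbol) -> (next_state, output_symbol)
        self.program = program

    def run(self, inputs, state="q0"):
        outputs = []
        for symbol in inputs:
            state, out = self.program[(state, symbol)]
            outputs.append(out)
        return outputs

# Two parameter settings make the same mechanism implement two different
# computations: one echoes each input bit, the other inverts it.
echo   = {("q0", b): ("q0", b)     for b in (0, 1)}
invert = {("q0", b): ("q0", 1 - b) for b in (0, 1)}

print(ParametrizedAutomaton(echo).run([0, 1, 1]))    # [0, 1, 1]
print(ParametrizedAutomaton(invert).run([0, 1, 1]))  # [1, 0, 0]
```

On this picture, what makes something a computer is not any single computation it performs, but the fact that different parameter settings make it implement different computations.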
For some theorists (e.g., functionalists), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, will instantiate the same mental states, including consciousness (Putnam, 1967).
Chalmers' argument for artificial consciousness
One of the most explicit arguments for the plausibility of artificial consciousness comes from David Chalmers. His proposal, found in his manuscript "A Computational Foundation for the Study of Cognition", is roughly that computers perform computations, and the right kinds of computations are sufficient for the possession of a conscious mind. In outline, he defends his claim thus: Computers perform computations. Computations can capture other systems' abstract causal organization. Mental properties are nothing over and above abstract causal organization. Therefore, computers running the right kind of computations will instantiate mental properties.
The most controversial part of Chalmers' proposal is that mental properties are "organizationally invariant", i.e., nothing over and above abstract causal organization. His rough argument for this claim is as follows. Mental properties are of two kinds, psychological and phenomenological. Psychological properties, such as belief and perception, are those that are "characterized by their causal role" within an overall causal system. He adverts to the work of Armstrong (1968) and Lewis (1972) in claiming that "systems with the same causal topology…will share their psychological properties."
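As a toy illustration of organizational invariance (my own construction, not from the article or from Chalmers), the sketch below realizes one abstract causal topology in two physically different ways: the realizations differ in "substrate" (a lookup table over named states versus bitwise arithmetic over integers) but share their transition structure, so they behave identically on every input stream.

```python
# A toy illustration of "organizational invariance": two different
# realizations of the same abstract causal topology, and therefore the
# same input/output behavior. All names here are illustrative.

# The shared abstract organization: a two-state system that toggles its
# state on input 1, stays put on input 0, and reports its current state.
TRANSITIONS = {("A", 0): "A", ("A", 1): "B", ("B", 0): "B", ("B", 1): "A"}

def realization_table(inputs):
    """Realization 1: states as strings, transitions as a lookup table."""
    state, trace = "A", []
    for i in inputs:
        state = TRANSITIONS[(state, i)]
        trace.append(state)
    return trace

def realization_arith(inputs):
    """Realization 2: states as integers, transitions as arithmetic.
    A different 'substrate' with the same causal topology."""
    state, trace = 0, []  # 0 plays the role of "A", 1 the role of "B"
    for i in inputs:
        state = state ^ i  # toggle on 1, stay put on 0
        trace.append("AB"[state])
    return trace

stream = [1, 0, 1, 1]
# Same causal organization, same behavior, different physical makeup:
assert realization_table(stream) == realization_arith(stream)
```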