AI and Consciousness: Theoretical foundations and current approaches
In the last ten years there has been growing interest in the field of artificial consciousness. Several researchers, including some from traditional Artificial Intelligence, have addressed the possibility of designing and implementing models of artificial consciousness (sometimes referred to as machine consciousness or synthetic consciousness): on the one hand, there is the hope of being able to design a model of consciousness; on the other, actual implementations of such models could be helpful for understanding consciousness (Baars, 1988; Minsky, 1991; McCarthy, 1995; Edelman and Tononi, 2000; Jennings, 2000; Aleksander, 2001; Baars, 2002; Franklin, 2003; Kuipers, 2005; Adami, 2006; Minsky, 2006; Chella and Manzotti, 2007).
The traditional field of Artificial Intelligence is thus flanked by the emerging field of artificial consciousness (sometimes called machine or synthetic consciousness), aimed at reproducing the relevant features of consciousness using non-biological components. According to Ricardo Sanz, there are three motivations for pursuing artificial consciousness (Sanz, 2005):
1) designing and implementing machines resembling human beings (cognitive robotics);
2) understanding the nature of consciousness (cognitive science);
3) designing and implementing more efficient control systems.
The current generation of systems for man-machine interaction shows impressive performance in the mechanics and control of movement; see, for example, the anthropomorphic robots produced by Japanese companies and universities. However, even these state-of-the-art robots have only limited capabilities for perception, reasoning, and action in novel and unstructured environments. Moreover, their capabilities for user-robot interaction are standardized and rigidly predefined.
A new generation of robots and softbots aimed at interacting with humans in unconstrained environments will need a better awareness of their surroundings and of the relevant events, objects, and agents. In short, this new generation of robots and softbots will need some form of "artificial consciousness".
Epigenetic robotics and synthetic approaches to robotics based on psychological and biological models have highlighted many of the differences between artificial and mental studies of consciousness, and have pointed out the importance of the interaction among the brain, the body, and the surrounding environment (Chrisley, 2003; Rockwell, 2005; Chella and Manzotti, 2007; Manzotti, 2007).
In the field of artificial intelligence there has been considerable interest in consciousness. Marvin Minsky was one of the first to point out that "some machines are already potentially more conscious than are people, and that further enhancements would be relatively easy to make. However, this does not imply that those machines would thereby, automatically, become much more intelligent. This is because it is one thing to have access to data, but another thing to know how to make good use of it." (Minsky, 1991)
Researchers involved in recent work on artificial consciousness pursue a twofold target: the nature of phenomenal consciousness (the so-called hard problem) and the active role of consciousness in controlling and planning the behaviour of an agent. It is not yet known whether these two aspects can be addressed separately.
The goal of the workshop is to examine the theoretical foundations of artificial consciousness as well as to analyze current approaches to artificial consciousness.
According to Owen Holland (Holland, 2003) and following Searle's distinction between Weak and Strong AI, it is possible to distinguish between Weak Artificial Consciousness and Strong Artificial Consciousness:
- Weak Artificial Consciousness: design and construction of machines that simulate consciousness or cognitive processes usually correlated with consciousness.
- Strong Artificial Consciousness: design and construction of conscious machines.
Most of the people currently working in the field of Artificial Consciousness would embrace the former definition. In any case, the boundary between the two is not always easy to draw. For instance, if a machine could exhibit all the behaviours normally associated with a conscious being, could we reasonably deny it the status of conscious machine? And if a machine exhibited all such behaviours, is it really possible that it might nonetheless lack subjective consciousness?
Most mammals, and human beings in particular, seem to show some kind of consciousness. Therefore, it is highly probable that the kind of cognitive architecture responsible for consciousness confers some evolutionary advantage. Although it is still difficult to single out a precise functional role for consciousness, many believe that consciousness affords more robust autonomy, higher resilience, more general problem-solving capability, reflexivity, and self-awareness (Atkinson, Thomas et al., 2000; McDermott, 2001; Franklin, 2003; Bongard, Zykov et al., 2006).
Consciousness and Artificial Intelligence