Found a good (imo) article today:
(i) The human brain is a machine.
(ii) We will have the capacity to emulate this machine (before long).
(iii) If we emulate this machine, there will be AI.
—————-
(iv) Absent defeaters, there will be AI (before long).
The first premise is suggested by what we know of biology (and indeed by what we know of physics). Every organ of the body appears to be a machine: that is, a complex system comprised of law-governed parts interacting in a law-governed way. The brain is no exception. The second premise follows from the claims that microphysical processes can be simulated arbitrarily closely and that any machine can be emulated by simulating microphysical processes arbitrarily closely.
It is also suggested by the progress of science and technology more generally: we are gradually increasing our understanding of biological machines and increasing our capacity to simulate them, and there do not seem to be limits to progress here. The third premise follows from the definitional claim that if we emulate the brain, this will replicate approximate patterns of human behaviour, along with the claim that such replication will result in AI. The conclusion follows from the premises along with the definitional claim that absent defeaters, systems will manifest their relevant capacities.
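The "absent defeaters" bridging claim is doing real work here. Just to check the logical form, here is a minimal sketch in Lean 4 (the proposition names are my own shorthand, not anything from the paper) showing that the conclusion follows from premises (i)-(iii) plus the claim that capacities manifest absent defeaters:

-- A minimal sketch; the proposition names below are my own shorthand
-- for Chalmers' claims, not from the paper.
section EmulationArgument
variable (BrainIsMachine CapacityToEmulate Emulated AI NoDefeaters : Prop)

theorem there_will_be_ai
    (h_machine  : BrainIsMachine)                              -- premise (i)
    (h_capacity : BrainIsMachine → CapacityToEmulate)          -- premise (ii), read as resting on (i): any machine can be emulated
    (h_manifest : CapacityToEmulate → NoDefeaters → Emulated)  -- bridging claim: capacities manifest absent defeaters
    (h_ai       : Emulated → AI) :                             -- premise (iii)
    NoDefeaters → AI :=                                        -- conclusion (iv)
  fun nd => h_ai (h_manifest (h_capacity h_machine) nd)
end EmulationArgument

Nothing deep, but it makes clear that "absent defeaters" gets carried all the way into the conclusion rather than being a throwaway qualifier.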
One might resist the argument in various ways. One could argue that the brain is more than a machine; one could argue that we will never have the capacity to emulate it; and one could argue that emulating it need not produce AI. Various existing forms of resistance to AI take each of these forms. For example, J.R. Lucas (1961) has argued that for reasons tied to Gödel's theorem, humans are more sophisticated than any machine. Hubert Dreyfus (1972) and Roger Penrose (1994) have argued that human cognitive activity can never be emulated by any computational machine. John Searle (1980) and Ned Block (1981) have argued that even if we can emulate the human brain, it does not follow that the emulation itself has a mind or is intelligent.
I have argued elsewhere that all of these objections fail.
But for present purposes, we can set many of them to one side. To reply to the Lucas, Penrose, and Dreyfus objections, we can note that nothing in the singularity idea requires that an AI be a classical computational system or even that it be a computational system at all. For example, Penrose (like Lucas) holds that the brain is not an algorithmic system in the ordinary sense, but he allows that it is a mechanical system that relies on certain nonalgorithmic quantum processes. Dreyfus holds that the brain is not a rule-following symbolic system, but he allows that it may nevertheless be a mechanical system that relies on subsymbolic processes (for example, connectionist processes). If so, then these arguments give us no reason to deny that we can build artificial systems that exploit the relevant nonalgorithmic quantum processes, or the relevant subsymbolic processes, and that thereby allow us to simulate the human brain.
http://consc.net/papers/singularity.pdf
Papers on AI and Computation (David Chalmers)
He makes (imo) a good point here:
Another argument for premise 1 is the evolutionary argument, which runs as follows:
(i) Evolution produced human-level intelligence.
(ii) If evolution produced human-level intelligence, then we can produce AI (before long).
—————-
(iii) Absent defeaters, there will be AI (before long).
Here, the thought is that since evolution produced human-level intelligence, this sort of intelligence is not entirely unattainable. Furthermore, evolution operates without requiring any antecedent intelligence or forethought. If evolution can produce something in this unintelligent manner, then in principle humans should be able to produce it much faster, by using our intelligence.
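Same exercise as before, for what it's worth: here evolution's having actually produced intelligence is what grounds the capacity claim, and the bridging step does the rest (names again my own shorthand):

-- Mirrors the sketch above; proposition names are again my own shorthand.
section EvolutionArgument
variable (EvolutionProducedIntelligence CanProduceAI NoDefeaters AI : Prop)

theorem there_will_be_ai'
    (h_evo      : EvolutionProducedIntelligence)                 -- premise (i)
    (h_can      : EvolutionProducedIntelligence → CanProduceAI)  -- premise (ii)
    (h_manifest : CanProduceAI → NoDefeaters → AI) :             -- bridging claim again
    NoDefeaters → AI :=                                          -- conclusion (iii)
  fun nd => h_manifest (h_can h_evo) nd
end EvolutionArgument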