"In theory, an artificial general intelligence could carry out any task a human could, and likely many that a human couldn't. At the very least, an AGI would be able to combine human-like, flexible thinking and reasoning with computational advantages." – https://www.zdnet.com/article/what-is-artificial-general-intelligence/ , viewed on March 6th 2019
"The ability to achieve complex goals in complex environments using limited computational resources." – https://intelligence.org/2013/08/11/what-is-agi/ , viewed on March 6th 2019
https://wiki.lesswrong.com/wiki/AI_takeoff
https://wiki.lesswrong.com/wiki/Friendly_AI
https://wiki.lesswrong.com/wiki/Intelligence_explosion
The basic argument here was set out by the statistician I. J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”:
"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make." – as quoted in http://consc.net/papers/singularity.pdf
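As a rough illustration of the dynamics Good describes, the toy sketch below (a constructed illustration, not from Good or Chalmers) treats intelligence as a single number and assumes each machine can design a successor a fixed fraction smarter than itself; under those assumptions the designers' intelligence compounds generation after generation.

```python
# Toy illustration of Good's "intelligence explosion" argument.
# Assumptions (not the source's): intelligence is a single scalar, and each
# machine can design a successor that is `gain` (e.g. 10%) smarter than itself.

def explosion(start: float = 1.0, gain: float = 0.10, generations: int = 20) -> list[float]:
    """Intelligence level of each successive designer under the toy assumptions."""
    levels = [start]
    for _ in range(generations):
        # Each machine designs a slightly more intelligent successor.
        levels.append(levels[-1] * (1.0 + gain))
    return levels

if __name__ == "__main__":
    for n, level in enumerate(explosion()):
        print(f"generation {n}: intelligence {level:.2f}")
```

The philosophical work in the argument below is to justify the assumption this sketch simply builds in: that each generation really can create a significantly more intelligent successor, absent defeaters.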
Philosophically: The singularity raises many important philosophical questions. The basic argument for an intelligence explosion is philosophically interesting in itself, and forces us to think hard about the nature of intelligence and about the mental capacities of artificial machines. The potential consequences of an intelligence explosion force us to think hard about values and morality and about consciousness and personal identity. In effect, the singularity brings up some of the hardest traditional questions in philosophy and raises some new philosophical questions as well.
The Argument for a Singularity:
Basic Argument:
Here AI is artificial intelligence of human level, AI+ is AI of greater than human level, and AI++ is AI of far greater than human level (superintelligence). The argument runs:
1. There will be AI (before long, absent defeaters).
2. If there is AI, there will be AI+ (soon after, absent defeaters).
3. If there is AI+, there will be AI++ (soon after, absent defeaters).
—————-
4. There will be AI++ (before too long, absent defeaters).
As for defeaters: I will stipulate that these are anything that prevents intelligent systems (human or artificial) from manifesting their capacities to create intelligent systems.
We can think of the three premises as an equivalence premise (there will be AI at least equivalent to our own intelligence), an extension premise (AI will soon be extended to AI+), and an amplification premise (AI+ will soon be greatly amplified to AI++). Why believe the premises? I will take them in order.
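Schematically, the three premises chain by two applications of modus ponens. A minimal rendering (notation introduced here, not Chalmers', with the "before long / soon after, absent defeaters" qualifiers suppressed), writing A, A+ and A++ for the claims that AI, AI+ and AI++ will exist:

```latex
% Schematic form of the basic argument (qualifiers suppressed; notation assumed here).
\[
  A, \qquad A \rightarrow A^{+}, \qquad A^{+} \rightarrow A^{++}
  \;\;\vdash\;\; A^{++}
\]
```

All of the interest lies in the suppressed qualifiers and in the case to be made for each premise, taken in order below.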
Premise 1: There will be AI (before long, absent defeaters).
(i) The human brain is a machine.
(ii) We will have the capacity to emulate this machine (before long).
(iii) If we emulate this machine, there will be AI.
—————-
(iv) Absent defeaters, there will be AI (before long).
Another argument for premise 1 is the evolutionary argument, which runs as follows.
(i) Evolution produced human-level intelligence mechanically and nonmiraculously.
(ii) If evolution produced human-level intelligence, then we can produce AI (before long).
—————-
(iii) Absent defeaters, there will be AI (before long).
Premise 2: If there is AI, then there will be AI+ (soon after, absent defeaters).
(i) If there is AI, AI will be produced by an extendible method.
(ii) If AI is produced by an extendible method, we will have the capacity to extend the method (soon after).
(iii) Extending the method that produces an AI will yield an AI+.
—————-
(iv) Absent defeaters, if there is AI, there will (soon after) be AI+.
(Here an extendible method is one that can readily be improved, yielding more intelligent systems.)
Premise 3: If there is AI+, there will be AI++ (soon after, absent defeaters).
The case for the amplification premise is essentially the argument from I. J. Good given above. We might lay it out as follows. Suppose there exists an AI+. Let us stipulate that AI_1 is the first AI+, and that AI_0 is its (human or artificial) creator. (If there is no sharp borderline between non-AI+ and AI+ systems, we can let AI_1 be any AI+ that is more intelligent than its creator.) Let us stipulate that δ is the difference in intelligence between AI_1 and AI_0, and that one system is significantly more intelligent than another if there is a difference of at least δ between them. Let us stipulate that for n ≥ 1, an AI_{n+1} is an AI that is created by an AI_n and is significantly more intelligent than its creator.
(i) If there exists AI+, then there exists an AI_1.
(ii) For all n > 0, if an AI_n exists, then absent defeaters, there will be an AI_{n+1}.
(iii) If for all n there exists an AI_n, there will be AI++.
—————-
(iv) If there is AI+, then absent defeaters, there will be AI++.
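To see why premise (ii) here delivers AI++ rather than just a long chain of modest improvements, it helps to track the stipulated gap δ. A small worked step (notation introduced here; it assumes, as the definition of δ already does, that intelligence can be scored numerically, with I(x) the score of system x):

```latex
% Worked consequence of the stipulations about delta and AI_n (notation assumed here).
% I(x) is the intelligence of system x, and delta = I(AI_1) - I(AI_0) > 0.
\[
  I(\mathrm{AI}_{n+1}) \;\ge\; I(\mathrm{AI}_{n}) + \delta
  \quad\Longrightarrow\quad
  I(\mathrm{AI}_{n}) \;\ge\; I(\mathrm{AI}_{0}) + n\,\delta .
\]
```

So if every AI_n in fact creates an AI_{n+1} (premise (ii)), the intelligence levels grow without bound, which is what licenses premise (iii)'s step to AI++.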
The arguments so far have depended on an uncritical acceptance of the assumption that there is such a thing as intelligence and that it can be measured.
So it would be good to be able to formulate the key theses and arguments without assuming the notion of intelligence. I think that this can be done. We can rely instead on the general notion of a cognitive capacity: some specific capacity that can be compared between systems. All we need for the purpose of the argument is
(i) a self-amplifying cognitive capacity G: a capacity such that increases in that capacity go along with proportionate (or greater) increases in the ability to create systems with that capacity
(ii) the thesis that we can create systems whose capacity G is greater than our own, and
(iii) a correlated cognitive capacity H that we care about, such that certain small increases in H can always be produced by large enough increases in G.
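A short worked sketch of how the three conditions combine (again notation introduced here, treating G as a numeric score, which is a simplification the capacity-based formulation itself does not require): if, per (i) and (ii), each system can build a successor whose G is larger by at least some fixed proportion ε > 0, then

```latex
% Sketch of the capacity-based version of the argument (numeric scoring of G assumed).
\[
  G_{n+1} \;\ge\; (1+\varepsilon)\, G_{n}
  \quad\Longrightarrow\quad
  G_{n} \;\ge\; (1+\varepsilon)^{n}\, G_{0} , \qquad \varepsilon > 0 ,
\]
```

so G can be amplified indefinitely across generations, and by (iii) each sufficiently large jump in G buys a further increase in the capacity H we actually care about.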
Obstacles to the Singularity
http://intelligence.org/files/IE-EI.pdf