Artificial Intelligence as a defined concept dates to 1950 and Alan Turing's paper "Computing Machinery and Intelligence." That paper introduced what is now called the "Turing Test" (Turing initially called it the Imitation Game), which is intended to determine whether a computer can think as a human does. As a broader cultural idea, though, the notion arguably goes back at least to 1939 and The Wizard of Oz, with its heartless Tin Man.
Moving from Turing's idea to reality has been, and continues to be, a challenge. In Turing's time, the problem was a machine's ability to store, or remember, its decisions. A machine could calculate, but it could not store that information, an ability fundamentally required for a computer to think like a human.
Much later, in 1995, Stuart Russell and Peter Norvig published their book Artificial Intelligence: A Modern Approach, now in its 4th edition (2020). Russell and Norvig worked to clarify the term Artificial Intelligence. Because many competing definitions exist, they organized them along two dimensions: thinking vs. acting, and human-likeness vs. rationality.
- The first two definitions measure systems against human performance:
  - systems that think like humans
  - systems that act like humans
- The other two measure systems against rationality:
  - systems that think rationally
  - systems that act rationally
The first category encompasses systems that can think like a human. If a system can learn and solve problems as a human can, it fits here. Haugeland defined these in 1985 as "machines with minds," and Bellman in 1978 described this category as the automation of "activities that we associate with human thinking."
Separately, some systems can act humanly; this is the category the Turing Test measures. If a system can act like a human (communicate successfully in English, understand what someone says to it, respond, and form new conclusions), it fits this category. Kurzweil defined this category as "the art of creating machines that perform functions that require intelligence when performed by people."
The second pair of categories measures AI systems against their ability to perform rationally; this is distinct from human behavior because people are simply not rational at times. Again, there are two ways to approach AI here: systems can think rationally or act rationally.
Charniak and McDermott described a system that can think rationally in 1985 as "the study of mental faculties through the use of computational models." This is considered the “laws of thought” approach.
Aristotle was the first to attempt to codify what he called "right thinking," or irrefutable reasoning processes. The example given by Russell and Norvig is the syllogism "Socrates is a man; all men are mortal; therefore, Socrates is mortal."
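The "laws of thought" approach can be illustrated with a small program. The sketch below encodes the syllogism above as facts and a rule, then derives new facts by forward chaining. The representation (predicate/subject tuples and the `forward_chain` helper) is invented here purely for illustration; it is not taken from Russell and Norvig.

```python
# Illustrative sketch of rule-based logical inference ("laws of thought").
# Facts are (predicate, subject) tuples; a rule says: if the premise
# predicate holds for a subject, conclude the conclusion predicate for it.

facts = {("man", "Socrates")}            # Socrates is a man
rules = [("man", "mortal")]              # all men are mortal

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

result = forward_chain(facts, rules)
print(("mortal", "Socrates") in result)  # True: therefore, Socrates is mortal
```

The loop runs until a fixed point, so chains of rules (e.g., mortal implies finite) would also resolve, which is the essence of the irrefutable, mechanical reasoning Aristotle described.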
It is also possible for a system to act rationally, exhibiting the kinds of skills the Turing Test probes for. Poole described what must be done to create an AI system that can act rationally as follows: "computational intelligence is the study of the design of intelligent agents."
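A rational agent, in Poole's sense, maps what it perceives to the action that best advances its goal. The toy agent below makes that concrete: it perceives a room temperature and picks the action that moves the room toward a target. The thermostat scenario and every name in it are invented for this sketch; it is a minimal illustration of the agent idea, not an implementation from any textbook.

```python
# Illustrative sketch of a simple rational agent: given a percept
# (the current temperature), choose the action that best achieves
# the goal (keep the room near the target temperature).

def thermostat_agent(temperature: float, target: float = 20.0) -> str:
    """Map a temperature percept to the goal-advancing action."""
    if temperature < target - 1:
        return "heat"    # too cold: heating moves us toward the goal
    if temperature > target + 1:
        return "cool"    # too hot: cooling moves us toward the goal
    return "off"         # close enough: doing nothing is the rational choice

print(thermostat_agent(15.0))  # heat
```

Even this trivial agent is "rational" in the technical sense: for each percept it selects the action expected to do best by its goal, which is the standard Poole and Russell-Norvig use, rather than any requirement that it think the way a human does.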
The terms weak and strong are another way to distinguish types of AI systems. Weak AI is probably more appropriately referred to as Narrow AI, or Artificial Narrow Intelligence: AI that focuses on performing specific tasks, such as Apple's Siri, Amazon's Alexa, or a Google self-driving car.
Strong AI comprises two types of AI: Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). AGI is a self-aware system with consciousness; it can solve problems and even plan for the future. ASI is a system that surpasses human capabilities. No example of ASI exists yet, unless you look to the movies: 2001: A Space Odyssey featured a computer system called HAL, and if you remember HAL, you have an idea of what an ASI system would be.