Turing Test - Tutorials
Read this first: B.J. Copeland, who has written extensively about Alan Turing, has a nice summary of the Turing Test. Then take a look at Turing's article "Computing Machinery and Intelligence," in which Turing first describes what it means for a computer to be considered intelligent. Don't worry about understanding the deep mathematics; just focus on the ideas.
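The test Turing proposes in that article is usually described as the imitation game: an interrogator converses by text alone with a hidden human and a hidden machine and must decide which is which, and the machine "passes" if the interrogator cannot reliably tell them apart. The sketch below is purely illustrative, not a real implementation: the respondents and the judge are trivial stand-ins invented for this page, and the only point is the shape of the protocol.

```python
import random

def machine_respondent(question: str) -> str:
    # Stand-in for the program under test; any chat program could go here.
    return "That's an interesting question; let me think about it."

def human_respondent(question: str) -> str:
    # Stand-in for the hidden human participant.
    return "Hmm, I'd say it depends on the context."

def imitation_game(questions, judge):
    """Run one round: the judge sees transcripts from two anonymous
    respondents (labelled A and B) and must name the machine."""
    # Randomize which label hides the machine, as Turing's setup requires.
    if random.random() < 0.5:
        respondents = {"A": machine_respondent, "B": human_respondent}
    else:
        respondents = {"A": human_respondent, "B": machine_respondent}
    transcripts = {label: [(q, reply(q)) for q in questions]
                   for label, reply in respondents.items()}
    guess = judge(transcripts)        # the judge returns "A" or "B"
    truth = "A" if respondents["A"] is machine_respondent else "B"
    return guess == truth             # True if the machine was identified

# A judge that guesses at random catches the machine only half the time;
# a machine does well when even a careful judge can do little better.
naive_judge = lambda transcripts: random.choice(["A", "B"])
caught = sum(imitation_game(["What is a sonnet?"], naive_judge) for _ in range(1000))
print(f"machine correctly identified in {caught} of 1000 rounds")
```

Everything the judge ever sees is the transcript; Turing restricts the exchange to text precisely so that appearance and voice cannot give the machine away for reasons unrelated to its ability to converse.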
What is artificial intelligence (AI)? Copeland defines it in human terms: learning, reasoning, problem solving, perception, and language understanding. Copeland's entire web site on AI is well worth reading as an introductory overview. Another very good overview comes from AI researcher John McCarthy.

Turing Test: 50 Years Later gives a very good overview of the history of the Turing test and how it has evolved since Turing first proposed it in 1950. Another view of that evolution comes from cognitive scientist Daniel Dennett. Much of what there is to know about the Turing test is on the ultimate Turing Test website, which includes many scholarly articles on the test itself plus a number of good ones on AI. Browse whatever looks interesting; there is far too much to consume in a short period of time.

If we did create an artificial intelligence, how would we know? Does the intelligence have to be human-like, or can it be something else? Some of the members of the AI Lab at MIT outline many of the necessary criteria for achieving a human intelligence. Marvin Minsky, one of the pioneers of AI at MIT, believes that there are certain absolutes that any intelligence should understand.

Whether the intelligence is human-like or not, once created, such an intelligence will soon exceed human intelligence. Some people believe that this will lead to a technological singularity, a term coined by Vernor Vinge. Others point to the slow progress in the field of artificial intelligence, making the reasonable argument that intelligence is more complex than what we currently understand. Minsky, of course, does not see this as an intractable problem. He believes that the mind is a collection of hundreds of agents that each do a specific job; presumably, by assembling the right kinds of agents, a human-level (or above) AI entity could be created.

When could the first such entity be created? Estimates have varied wildly, starting with Turing himself, who guessed in a 1952 radio discussion that it would take at least 100 years (i.e., 2052). More optimistic is Ray Kurzweil. Writing in The Age of Spiritual Machines, Kurzweil argues that the memory and processing power of computers in 2020 will equal that of the human mind, and that AI will arise within the following decade. (A back-of-envelope version of Kurzweil's capacity estimate appears at the end of this page.)

What might be expected from AI in the next 50 years? Here are a few opinions from leading researchers who attended the Dartmouth Artificial Intelligence Conference in 2006:

Ray Solomonoff (London): Machine Learning - Past and Future. Solomonoff was one of the original pioneers of AI.
Nils Nilsson (Stanford): Routes to the Summit
Pat Langley (Stanford): Intelligent Behavior in Humans and Machines
Ray Kurzweil: Why we can be confident of Turing Test capability within a quarter century
J. Storrs Hall: Self-improving AI: an analysis
Eric Steinhart: Survival as a Digital Ghost

Eliezer Yudkowsky, co-founder of the Singularity Institute, has concerned himself with the question of whether a created AI will be friendly or hostile. This is an issue that must be resolved before a fully aware AI wakes up, because after that it will be much harder to stop. Yudkowsky is optimistic that AI entities can be created that are friendly and stay that way. He is far from the only person to consider this scenario. For an account of other efforts to instill ethics in AIs, this article by Billings is a good introduction. For a more comprehensive and scholarly treatment, An Approach to Computing Ethics is a good read.
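As promised above, here is the back-of-envelope arithmetic behind Kurzweil-style capacity claims. The constants are the approximations commonly attributed to The Age of Spiritual Machines (on the order of 10^11 neurons, about 1,000 connections per neuron, and roughly 200 calculations per second per connection); treat them as assumptions rather than settled neuroscience, and the snippet as an illustration rather than Kurzweil's own model.

```python
# Rough estimate of the brain's raw processing rate, in the spirit of the
# argument in The Age of Spiritual Machines. The constants are commonly
# cited approximations, not measured values.
neurons = 100e9                      # ~10^11 neurons in the human brain
connections_per_neuron = 1_000       # rough average synaptic fan-out
calcs_per_connection_per_sec = 200   # rough update rate per connection

brain_calcs_per_sec = neurons * connections_per_neuron * calcs_per_connection_per_sec
print(f"~{brain_calcs_per_sec:.0e} calculations per second")  # ~2e+16
```

The argument is then that commodity hardware reaches this order of magnitude around 2020, with the software of intelligence following within the next decade; whether raw operation counts are the right yardstick for "equal to the human mind" is precisely what the skeptics mentioned above dispute.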