Friendly artificial intelligence
A Friendly artificial intelligence (often abbreviated FAI) is an artificial intelligence designed around the principles of Friendliness theory, a specific model for creating safe and moral AI advanced by researcher Eliezer Yudkowsky and the Singularity Institute.
The word "Friendliness", with a capital F, is a technical term distinct from the ordinary English word "friendliness". It is capitalized in technical usage both to mark this distinction and to allow others to continue using "friendliness" as they always have, whether or not they are aware of the technical sense.
Friendliness theory starts from the supposition that, in the future, AIs with intellectual and practical abilities vastly superior to humans will be created, or rather will create themselves from seed AI (a belief common in transhumanism and Singularitarianism). The problem then becomes how these AIs will interact with human beings, and whether their morality (if any) will resemble ours. Many futurologists speculate that the difference in power between AIs and humans will be so great that, in Yudkowsky's words, "if the AI stops wanting to be Friendly, you've already lost." Some believe the AIs' ability to reprogram themselves will quickly outpace the humans' ability to control them. Advocates of Friendly AI argue that unless AIs actively desire to remain benevolent to humans (mere indifference is not enough), the possibility that they may become malevolent is an unacceptable risk.
Friendly AI is an enormously complicated subject, a fact that often annoys those looking for simple descriptions and explanations of how AI can be made Friendly. The Singularity Institute offers a wide range of introductory materials (http://singinst.org/friendly/) on their website. The most extensive description of Friendliness theory is available in Yudkowsky's publication Creating Friendly AI (http://www.singinst.org/CFAI/).
One of the most recent significant advancements in Friendliness theory is the Collective Volition model. In Yudkowsky's words, "our collective volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together". Yudkowsky believes that the initial dynamic of the seed AI should be to determine and act upon the collective volition of the human race.
Criticisms
One notable critic of Friendly AI is Bill Hibbard, author of the book Super-Intelligent Machines, who believes there should be more political process involved in the creation of AI morality. He also believes that seed AI can initially be created only by large corporations (a view not shared by Yudkowsky) and that such corporations will not be motivated to implement Friendly AI principles.
External links
- What is Friendly AI? (http://singinst.org/friendly/whatis.html) -- A brief explanation from the Singularity Institute
- Creating Friendly AI (http://www.singinst.org/CFAI/) -- A book-length publication explaining in detail the problems that Friendly AI seeks to answer
- Critique of the SIAI Guidelines on Friendly AI (http://www.ssec.wisc.edu/~billh/g/SIAI_critique.html) -- A critique of Friendly AI by Bill Hibbard
- SIAI's Guidelines for building 'Friendly' AI (http://www.optimal.org/peter/siai_guidelines.htm) -- Commentary from Peter Voss
- 3 Laws Unsafe (http://www.asimovlaws.com/) -- An Internet project to increase awareness of AI morality from the Singularity Institute