SOAR
SOAR (also spelled Soar) is a symbolic cognitive architecture, created by John Laird, Allen Newell, and Paul Rosenbloom at Carnegie Mellon University. It is both a view of what cognition is and an implementation of that view as a computer programming architecture for Artificial Intelligence (AI). Since its beginnings in 1983 and its presentation in a 1987 paper, it has been widely used by AI researchers to model different aspects of human behavior.
The main goal of the SOAR project is for it to be able to handle the full range of capabilities of an intelligent agent, from highly routine tasks to extremely difficult, open-ended problems. In order for that to happen, according to the view underlying SOAR, it needs to be able to create representations and use appropriate forms of knowledge (such as procedural, declarative, episodic, and possibly iconic knowledge). SOAR should therefore address a collection of mechanisms of the mind. Also underlying the SOAR architecture is the view that a symbolic system is necessary and sufficient for general intelligence (see the brief comment on neats versus scruffies). This is known as the physical symbol system hypothesis. The view of cognition underlying SOAR is tied to the psychological theory expressed in Allen Newell's Unified Theories of Cognition.
Although SOAR's ultimate goal is to achieve general intelligence, there is no claim that this goal has already been reached. Advocates of the system recognize that SOAR is still missing some important aspects of intelligence, including a deliberate long-term planning facility, the ability to create new representations on its own, the ability to 'unlearn' something it has learned, and better interaction with the world in real time.
SOAR is based on a production system, i.e. it uses explicit production rules to govern its behavior (these are roughly of the form "if... then...", as also used in expert systems). Problem solving can be roughly described as a search through a problem space (the collection of different states which can be reached by the system at a particular time) for a goal state (which represents the solution to the problem). This is implemented by searching for the states which bring the system gradually closer to its goal. Each move consists of a decision cycle, which has an elaboration phase (in which a variety of pieces of knowledge bearing on the problem are brought into SOAR's working memory) and a decision procedure (which weighs what was found in the elaboration phase and assigns preferences in order to decide the action to be taken).
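For illustration only, the following Python sketch shows a minimal decision cycle in this style: an elaboration phase that fires every matching rule to collect operator proposals, followed by a decision procedure that weighs the resulting preferences. The names used here (Rule, decision_cycle, and so on) are invented for this example and are not part of Soar itself.

```python
# Illustrative sketch (not Soar's actual engine): a tiny production system
# with an elaboration phase and a decision procedure.

class Rule:
    def __init__(self, name, condition, action):
        self.name = name
        self.condition = condition  # predicate over working memory
        self.action = action        # returns (operator, preference) proposals

def decision_cycle(working_memory, rules):
    # Elaboration phase: fire every rule whose "if" part matches working
    # memory, collecting proposed operators and their preferences.
    proposals = []
    for rule in rules:
        if rule.condition(working_memory):
            proposals.extend(rule.action(working_memory))
    # Decision procedure: weigh the preferences and pick a single operator.
    if not proposals:
        return None  # an impasse: no rule proposed anything
    best = max(proposals, key=lambda p: p[1])
    tied = [op for op, pref in proposals if pref == best[1]]
    if len(tied) > 1:
        return None  # an impasse: preferences do not single out one operator
    return best[0]

# Example usage: one rule that proposes moving toward the goal state.
wm = {"state": 2, "goal": 5}
rules = [
    Rule("propose-increment",
         condition=lambda wm: wm["state"] < wm["goal"],
         action=lambda wm: [("increment", 1.0)]),
]
print(decision_cycle(wm, rules))  # -> "increment"
```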
If the decision procedure just described is not able to determine a unique course of action, SOAR may use different strategies, known as weak methods, to resolve the impasse. These methods are appropriate to situations in which knowledge is not abundant. Some examples are means-ends analysis (which may calculate the difference between each available option and the goal state) and a type of hill-climbing. When a solution is found by one of these methods, SOAR uses a learning technique called chunking to transform the course of action taken into a new rule. The new rule can then be applied whenever SOAR encounters the same situation again (that is, there will no longer be an impasse).
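As a rough sketch under stated assumptions (the function names and the numeric "distance" measure are invented for this example and are not Soar's API), the following Python fragment shows an impasse being resolved by a simple hill-climbing weak method, with the resulting decision then "chunked" into a stored rule so the same situation no longer causes an impasse.

```python
# Illustrative sketch (assumed names): resolving an impasse with a
# hill-climbing weak method, then chunking the result into a new rule.

def hill_climb(state, goal, options):
    # Weak method: choose the option whose result lies closest to the goal.
    # The numeric gap stands in for the difference measure that
    # means-ends analysis would compute.
    return min(options, key=lambda op: abs(op(state) - goal))

def chunk(state, chosen_op):
    # Learning by chunking: turn the situation/decision pair into a rule
    # that fires directly next time, bypassing the impasse.
    return {"if_state": state, "then_apply": chosen_op}

learned_rules = {}

def solve(state, goal, options):
    if state in learned_rules:                 # no impasse: a learned chunk applies
        return learned_rules[state]["then_apply"]
    chosen = hill_climb(state, goal, options)  # impasse: fall back to a weak method
    learned_rules[state] = chunk(state, chosen)
    return chosen

options = [lambda s: s + 1, lambda s: s - 1]
op = solve(3, 10, options)    # impasse resolved by hill climbing, chunk stored
print(op(3))                  # -> 4
op2 = solve(3, 10, options)   # second time: the learned chunk is used directly
print(op2(3))                 # -> 4
```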
ACT (most recently ACT-R), a cognitive architecture by John R. Anderson, operates on similar principles. Other cognitive architectures include DUAL, Psi, Copycat, and subsumption architectures.
External Links
- Soar Homepage (http://sitemaker.umich.edu/soar)
- Soar: Frequently Asked Questions List (http://www.nottingham.ac.uk/pub/soar/nottingham/soar-faq.html)
References
- Franklin, Stan (1997), Artificial Minds, MIT Press/Bradford books, 464 pages, ISBN 0262561093 (paperback).
- Laird, John, Newell, Allen and Rosenbloom, Paul (1987). "SOAR: An Architecture for General Intelligence". Artificial Intelligence, 33: 1-64.