Artificial life
Artificial life, also known as alife or a-life, is the study of life through the use of human-made analogs of living systems. Computer scientist Christopher Langton coined the term when he organized the first "International Conference on the Synthesis and Simulation of Living Systems" (otherwise known as Artificial Life I) at the Los Alamos National Laboratory in 1987.
Nature of the field
Although the study of artificial life does have some significant overlap with the study of artificial intelligence (AI), the two fields are very distinct in their history and approach. Organized AI research began early in the history of digital computers, and was often characterized in those years by a "top-down" approach based on complicated networks of rules. Students of alife did not have an organized field at all until the 1980s, and often worked in isolation, unaware of others doing similar work. Where they concerned themselves with intelligence at all, researchers tended to focus on the "bottom-up" nature of emergent behaviors.
Artificial life researchers have often been divided into two main groups (although other groupings are possible):
- The strong alife position states that "life is a process which can be abstracted away from any particular medium" (John von Neumann). Notably, Tom Ray declared that his program Tierra was not simulating life in a computer but synthesizing it.
- The weak alife position denies the possibility of generating a "living process" outside of a carbon-based chemical solution. Its researchers try instead to mimic life processes in order to understand the appearance of individual phenomena. The usual approach is an agent-based model, which typically gives a minimally complex possible explanation. That is: "we don't know what in nature generates this phenomenon, but it could be something as simple as..."
The field is characterized by the extensive use of computer programs and computer simulations which include evolutionary algorithms (EA), genetic algorithms (GA), genetic programming (GP), artificial chemistries (AC), agent-based models, and cellular automata (CA).
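To give a flavour of these techniques, the following is a minimal genetic algorithm in Python that evolves a population of strings toward a fixed target. The target string, population size, and mutation rate are arbitrary choices for the example, not parameters of any particular alife system:

```python
import random

TARGET = "ARTIFICIAL LIFE"                   # illustrative goal string
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
POP_SIZE, MUTATION_RATE = 100, 0.05          # illustrative parameters

def fitness(genome):
    """Count characters matching the target -- higher is fitter."""
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome):
    """Randomly replace each character with small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in genome)

def crossover(a, b):
    """Single-point crossover between two parent genomes."""
    point = random.randrange(len(TARGET))
    return a[:point] + b[point:]

population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(POP_SIZE)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if fitness(best) == len(TARGET):
        break
    parents = population[:POP_SIZE // 5]     # truncation selection: keep the top fifth
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

print(f"generation {generation}: {best}")
```

Selection, crossover, and mutation are the three operators common to virtually all genetic algorithms; everything else here (the string encoding, truncation selection) is one simple choice among many.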
Artificial life is a meeting point for people from many more traditional fields such as linguistics, physics, mathematics, philosophy, computer science, biology, anthropology and sociology, in which unusual computational and theoretical approaches that would be controversial within their home discipline can be discussed. As a field, it has had a controversial history; John Maynard Smith criticized certain artificial life work in 1995 as "fact-free science", and it has not generally received much attention from biologists. However, the recent publication (http://myxo.css.msu.edu/cgi-bin/lenski/prefman.pl?group=al) of artificial life articles in widely read journals such as Science and Nature is evidence that artificial life techniques are becoming more accepted in the mainstream, at least as a method of studying evolution.
History and contributions
Pre-computer
A few inventions of the pre-digital era were early heralds of humankind's fascination with artificial life. Most famous was an artificial duck, with thousands of moving parts, created by Jacques de Vaucanson. The duck could reportedly eat and digest, drink, quack, and splash in a pool. It was exhibited all over Europe until it fell into disrepair. [1] (http://www.nyu.edu/pages/linguistics/courses/v610051/gelmanr/ling.html)
1950s-1970s
One of the earliest thinkers of the modern age to postulate the potential of artificial life, separate from artificial intelligence, was the mathematical and computing prodigy John von Neumann. At the Hixon Symposium, hosted by Linus Pauling in Pasadena, California in the late 1940s, von Neumann delivered a lecture titled "The General and Logical Theory of Automata." He defined an "automaton" as any machine whose behavior proceeded logically from step to step by combining information from the environment and its own programming, and said that natural organisms would in the end be found to follow similar simple rules. He also spoke about the idea of self-replicating machines. He postulated a machine -- a kinematic automaton -- made up of a control computer, a construction arm, and a long series of instructions, floating in a lake of parts. By following the instructions that were part of its own body, it could create an identical machine. He followed this idea by creating (with Stanislaw Ulam) a purely logic-based automaton, not requiring a physical body but based on the changing states of the cells in an infinite grid -- the first cellular automaton. It was extraordinarily complicated compared to later CAs, having hundreds of thousands of cells which could each exist in one of twenty-nine states, but von Neumann felt he needed the complexity in order for it to function not just as a self-replicating "machine", but also as a universal computer as defined by Alan Turing. This "universal constructor" read from a tape of instructions and wrote out a series of cells that could then be made active to leave a fully functional copy of the original machine and its tape. Von Neumann worked on his automata theory intensively right up to his death, and considered it his most important work.
Homer Jacobson illustrated basic self-replication in the 1950s with a model train set -- a seed "organism" consisting of a "head" and "tail" boxcar could use the simple rules of the system to consistently create new "organisms" identical to itself, so long as there was a random pool of new boxcars to draw from. Edward F. Moore proposed "Artificial Living Plants": floating factories that could create copies of themselves, and that could be programmed to perform some function (extracting fresh water, harvesting minerals from seawater) for an investment that would be relatively small compared to the huge returns from the exponentially growing numbers of factories. Freeman Dyson also studied the idea, envisioning self-replicating machines sent to explore and exploit other planets and moons, and a NASA group called the Self-Replicating Systems Concept Team performed a 1980 study on the feasibility of a self-building lunar factory.
University of Cambridge mathematician John Conway invented the most famous cellular automaton in the late 1960s. He called it the Game of Life, and publicized it through Martin Gardner's column in Scientific American magazine.
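The rules of Life are simple enough to state in a few lines of code. The following Python sketch applies them on a small wrap-around grid; the wrap-around edges are an implementation convenience, not part of Conway's original infinite grid:

```python
def life_step(grid):
    """One generation of Conway's Game of Life.
    grid is a list of lists of 0s and 1s; the edges wrap around."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            live = sum(grid[(r + dr) % rows][(c + dc) % cols]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
            # Birth on exactly 3 live neighbours; survival on 2 or 3.
            new[r][c] = 1 if live == 3 or (grid[r][c] and live == 2) else 0
    return new

# A "glider", the best-known moving pattern, on a 10x10 grid:
grid = [[0] * 10 for _ in range(10)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
for _ in range(4):   # after four generations the glider has shifted one cell diagonally
    grid = life_step(grid)
```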
1970s-1980s
Philosophy scholar Arthur Burks, who had worked with von Neumann (and indeed organized his papers after his death), headed the Logic of Computers Group at the University of Michigan. He brought the overlooked views of the 19th-century American thinker Charles S. Peirce, a strong believer that all of nature's workings were based on logic, into the modern age. The Michigan group was one of the few still interested in alife and CAs in the early 1970s; one of its students, Tommaso Toffoli, argued in his PhD thesis that the field should not be overlooked as a mathematical curiosity, because its results were so powerful in explaining the simple rules that underlay complex effects in nature. Toffoli later provided a key proof that CAs could be made reversible, just as the physical universe is considered to be.
Christopher Langton was an unconventional researcher, with an undistinguished academic career that led him to a job programming DEC mainframes for a hospital. He became enthralled by Conway's Game of Life, and began pursuing the idea that the computer could emulate living creatures. After years of study (and a near-fatal hang-gliding accident), he began attempting to actualize von Neumann's CA and the work of E.F. Codd, who had simplified von Neumann's original twenty-nine-state monster to one with only eight states. He succeeded in creating the first self-replicating computer organism in October 1979, using only an Apple II desktop computer. He entered Burks' graduate program at the Logic of Computers Group in 1982, at the age of 33, and helped to found a new discipline.
Langton's official conference announcement of Artificial Life I was the earliest description of a field which had previously barely existed:
Artificial life is the study of artificial systems that exhibit behavior characteristic of natural living systems. It is the quest to explain life in any of its possible manifestations, without restriction to the particular examples that have evolved on earth. This includes biological and chemical experiments, computer simulations, and purely theoretical endeavors. Processes occurring on molecular, social, and evolutionary scales are subject to investigation. The ultimate goal is to extract the logical form of living systems.
Microelectronic technology and genetic engineering will soon give us the capability to create new life forms in silico as well as in vitro. This capacity will present humanity with the most far-reaching technical, theoretical and ethical challenges it has ever confronted. The time seems appropriate for a gathering of those involved in attempts to simulate or synthesize aspects of living systems.
Ed Fredkin founded the Information Mechanics Group at MIT, which united Toffoli, Norman Margolus, Gerard Vichniac, and Charles Bennett. This group created a computer especially designed to execute cellular automata, eventually reducing it to the size of a single circuit board. This "cellular automata machine" allowed an explosion of alife research among scientists who could not otherwise afford sophisticated computers.
In 1982, physicist Stephen Wolfram turned his attention to cellular automata. He explored and categorized the types of complexity displayed by one-dimensional CAs, and showed how they applied to natural phenomena such as the patterns of seashells and the nature of plant growth. Norman Packard, who worked with Wolfram at the Institute for Advanced Study, used CAs to simulate the growth of snowflakes, following very basic rules.
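One-dimensional CAs of the kind Wolfram classified are simple to implement. In the Python sketch below, each cell's next state is looked up from the bits of the CA's 8-bit rule number; rule 30, one of the rules Wolfram found to produce chaotic behavior, is used as the example, and the wrap-around boundary is an implementation convenience:

```python
def eca_step(cells, rule=30):
    """One step of an elementary (two-state, radius-1) cellular automaton.
    The (left, centre, right) neighbourhood of each cell forms a 3-bit
    index into the 8-bit rule number; the row's edges wrap around."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

# Grow the classic rule-30 triangle from a single live cell.
cells = [0] * 31
cells[15] = 1
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = eca_step(cells)
```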
Computer animator Craig Reynolds similarly used three simple rules (separation, alignment, and cohesion) to create recognizable flocking behaviour in groups of computer-drawn "boids" in 1987. With no top-down programming at all, the boids produced life-like solutions to evading obstacles placed in their path. Computer animation has continued to be a key commercial driver of alife research as the creators of movies attempt to find more realistic and inexpensive ways to animate natural forms such as plant life, animal movement, hair growth, and complicated organic textures.
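Reynolds's three rules can be sketched in a few dozen lines of Python. The weights and radii below are illustrative placeholders, and the perception angles and obstacle avoidance of the original boids are omitted:

```python
import random

class Boid:
    """A point agent with a position and a velocity."""
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(flock, radius=10.0):
    """Apply the three steering rules, then move every boid.
    Updates are in place for brevity; a double-buffered update would be
    more faithful to a synchronous simulation."""
    for b in flock:
        near = [o for o in flock if o is not b
                and (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < radius ** 2]
        if not near:
            continue
        n = len(near)
        # Cohesion: steer toward the centre of nearby flockmates.
        cx, cy = sum(o.x for o in near) / n, sum(o.y for o in near) / n
        b.vx += 0.01 * (cx - b.x)
        b.vy += 0.01 * (cy - b.y)
        # Alignment: steer toward the average heading of neighbours.
        avx, avy = sum(o.vx for o in near) / n, sum(o.vy for o in near) / n
        b.vx += 0.05 * (avx - b.vx)
        b.vy += 0.05 * (avy - b.vy)
        # Separation: steer away from neighbours that crowd too close.
        for o in near:
            if (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < 4.0:
                b.vx -= 0.05 * (o.x - b.x)
                b.vy -= 0.05 * (o.y - b.y)
    for b in flock:
        b.x, b.y = b.x + b.vx, b.y + b.vy

flock = [Boid() for _ in range(50)]
for _ in range(100):
    step(flock)
```

Nothing in the code describes a flock; flocking emerges from the interaction of the three local rules, which is the "bottom-up" character of alife work described above.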
The Unit of Theoretical Behavioural Ecology at the Free University of Brussels applied the self-organization theories of Ilya Prigogine and the work of entomologist E.O. Wilson to research the behavior of social insects, particularly allelomimesis, in which an individual's actions are dictated by those of a neighbor. They wrote a script describing the behavior of termites, then modified the environment and watched the way that the simulated, script-driven insects reacted. They then compared that to the reaction of real termites to identical changes in laboratory colonies, and refined their theories about the rules which underlay the behavior.
James Doyne Farmer was a key figure in tying artificial life research to the emerging field of complex adaptive systems, working at the Center for Nonlinear Studies (a basic research section of Los Alamos National Laboratory), just as its star chaos theorist Mitchell Feigenbaum was leaving. Farmer and Norman Packard chaired a conference in May 1985 called "Evolution, Games, and Learning", which was to presage many of the topics of later alife conferences.
Other figures:
- Stuart Kauffman
- Stanley Miller & Harold Urey
- Steen Rasmussen
- James Crutchfield
- Gerald Joyce
- John Henry Holland, inventor of genetic algorithms
- David Jefferson
- Richard Dawkins
- John Koza
- Danny Hillis
- Karl Sims
- Thomas Ray
- Steve Grand, creator of Creatures
Intelligent artificial life
Artificial individuals can be made intelligent by giving them an artificial brain, constructed for example by means of an artificial neural network or a continuum calculator.
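As an illustration, the Python sketch below wires a tiny feedforward neural network from an agent's sensor readings to two motor outputs. The layer sizes, sigmoid activation, and random weights are placeholders for the example; in an alife setting such weights would typically be evolved or trained rather than left random:

```python
import math, random

def make_layer(n_in, n_out):
    """Random weight matrix plus biases for one fully connected layer."""
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [random.uniform(-1, 1) for _ in range(n_out)])

def forward(layer, inputs):
    """Weighted sum of the inputs passed through a sigmoid activation."""
    weights, biases = layer
    return [1 / (1 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
            for row, b in zip(weights, biases)]

# Hypothetical agent brain: 4 sensors -> 6 hidden units -> 2 motor outputs.
hidden, output = make_layer(4, 6), make_layer(6, 2)
sensors = [0.9, 0.1, 0.0, 0.4]               # e.g. food and obstacle detectors
left_motor, right_motor = forward(output, forward(hidden, sensors))
```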
See also
- Artificial consciousness
- Artificial chemistry
- Artificial neural network
- Continuum calculator
- Biologically-inspired computing
- Digital organisms
- Evolutionary art
- Clanking replicator
- Carbon chauvinism
- Robotics
- Systems biology
- Wet alife
- Lindenmayer systems
- Game theory
Open problems
- "What is life?"
- "When can we say that a system, or a subsystem, is alive?"
- "What is the smallest system that we can consider alive?"
- "Why is nature able to achieve an open-ended evolutionary system, while all human models seem to fall short of it?"
- "How can we measure evolution?"
- "How can we measure emergence?"
References
- Levy, Steven (1992). Artificial Life: A Report from the Frontier Where Computers Meet Biology. New York: Vintage Books. ISBN 0-679-74389-8
- Rucker, Rudy (1993). Artificial Life Lab. Waite Group Press. ISBN 1-878739-48-4
External links
- International Society for Artificial Life (ISAL) (http://www.alife.org/)
- Artificial Life (journal) (http://mitpress.mit.edu/catalog/item/default.asp?tid=41&ttype=4)
- Introduction to Artificial Life (http://www.rennard.org/alife/english/)