Newcomb's paradox
Newcomb's Paradox, also referred to as Newcomb's Problem, is a thought experiment involving a game between two players, one of whom purports to be able to predict the future. Whether or not the problem is actually a paradox is disputed.
Newcomb's paradox was created by William Newcomb of the University of California's Lawrence Livermore Laboratory. It was first analyzed in a 1969 philosophy paper by Robert Nozick, which spread it through the philosophical community, and it appeared in Martin Gardner's Scientific American column in 1974. Today it is a much-debated problem in the philosophical branch of decision theory but has received little attention from the mathematical side.
The problem
There are two players named Predictor and Chooser. Chooser is presented with two boxes: an open box containing $1,000, and a closed box that contains either $1,000,000 or $0 (he doesn't know which). Chooser must decide whether he wants to be given the contents of both boxes, or just the contents of the closed box.
The complication is that the day prior, Predictor predicts how Chooser will choose. If he predicts that Chooser will take only the closed box, then he will put $1,000,000 in the closed box. If he predicts that Chooser will take both boxes, he will leave the closed box empty. Chooser knows this rule of Predictor's behavior, but he does not know Predictor's actual prediction.
The question is: should Chooser take just the closed box or take both boxes?
If Predictor is 100% accurate and Chooser takes only the closed box, he will get $1,000,000. If Chooser takes both boxes, the closed box will be empty and Chooser gets only $1,000. Even if Predictor is only mostly accurate, Chooser may still not want to risk walking away with just $1,000. By this reasoning, Chooser should take only the closed box.
But at the time Chooser walks up to the boxes, the contents have already been set. The closed box is either empty or full; it is too late for the contents to change. Chooser might as well take whatever is in both boxes: whether the closed box is empty or full, he will make $1,000 more by taking both boxes than by taking just one. By this reasoning, Chooser should always take both boxes.
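The arithmetic behind the first argument can be made explicit. The following is a minimal sketch in Python (the function name and the sample accuracies are illustrative assumptions) comparing the expected payoff of each strategy when Predictor is correct with probability p:

```python
def expected_payoffs(p, closed=1_000_000, open_box=1_000):
    """Return (one-box EV, two-box EV) for a Predictor of accuracy p."""
    # One-boxing collects the closed box only when the prediction was right.
    one_box = p * closed
    # Two-boxing always collects the open box, plus the closed box
    # when Predictor wrongly expected one-boxing.
    two_box = open_box + (1 - p) * closed
    return one_box, two_box

for p in (1.0, 0.9, 0.6, 0.5):
    one, two = expected_payoffs(p)
    print(f"accuracy {p:.0%}: one-box ${one:,.0f}, two-box ${two:,.0f}")
```

On these numbers, one-boxing has the higher expectation whenever Predictor is right more than 50.05% of the time, which is why even a mostly accurate Predictor pushes Chooser toward the closed box.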
In his 1969 article, Nozick noted that "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly."
Thoughts on the paradox
Some argue that Newcomb's Problem is a paradox because it leads logically to self-contradiction. Reverse causation is defined into the problem, so logically there can be no free will. However, free will is also defined into the problem; otherwise Chooser is not really making a choice.
Other philosophers have proposed solutions to the problem, many of which eliminate its seemingly paradoxical nature:
Some have suggested that a rational person will choose both boxes and an irrational person will choose only the closed one, and that rational people therefore do better at this game (since an accurate Predictor cannot actually exist). Others have suggested that an irrational person will do better than a rational one, and interpret the paradox as showing how people can be punished for making rational decisions.
Time machines and free will
Others have suggested that in a world with perfect predictors (or time machines because a time machine could be the mechanism for making the prediction) causation can go backwards. If a person truly knows the future, and that knowledge affects his actions, then events in the future will be causing effects in the past. Chooser's choice will have already caused Predictor's action. Some have concluded that if time machines or perfect predictors can exist, then there can be no free will and Chooser will do whatever he's fated to do. Others conclude that the paradox shows that it is impossible to ever know the future. Taken together, the paradox is a restatement of the old contention that free will and determinism are incompatible, since perfect predictors require determinism.
Some philosophers have argued that this paradox is equivalent to the grandfather paradox, in which a person travels back in time and thereby sets off a chain of events that prevents the journey from ever happening.
An analysis from the perspective of the Copenhagen interpretation of quantum mechanics sidesteps the incompatibility of free will and reverse causation by putting the closed box in a state of superposition until the actual choice is made. The box is simultaneously empty and full.
A many-worlds cosmologist will conclude that Predictor's action results in two parallel time streams: one in which he has filled the box and one in which he has left it empty. The many-worlds theory generally leads to the conclusion that both free will and causation are illusory artifacts of the matching of consciousness to a particular memory of the time stream.
Glass box
Newcomb's Problem has been extended with the question of how behavior would change if the closed box were made of glass. Now what should Chooser do?
If he sees $1,000,000 in the closed box, then he might as well choose both boxes, and get both the $1,000,000 and the $1,000. If he sees the closed box is empty, he might be angry at being deprived of a chance at the big prize and so choose just the one box to demonstrate that the game is a fraud. Either way, his actions will be the opposite of what was predicted, which contradicts the premise that the prediction is always right.
Some philosophers take the glass box version of Newcomb's paradox as proof that at least one of the following must be true:
- It is impossible to know the future.
- Knowledge of the future is only possible in cases where the knowledge itself won't prevent that future.
- The universe will conspire to prevent self-contradictory causal loops (via the Novikov self-consistency principle, for example).
- Chooser might accidentally make the wrong selection, or he might misunderstand the rules, or the time machine/prediction engine might break.
Predictor has no special knowledge of the future
Suppose Predictor does not have special knowledge of the future and Chooser knows this. A game theory analysis for the case of multiple rounds with memory is straightforward.
If Chooser wants to maximize profit and Predictor wants to maximize the accuracy of his predictions, Chooser should consistently choose only the closed box. If Chooser defects from that strategy and takes both boxes, he will benefit in that round, but Predictor will have been wrong and will probably retaliate. Nash equilibria (strategy pairs from which neither player benefits by deviating) exist both where Chooser always takes both boxes and Predictor always predicts that both boxes will be taken (a payout of $1,000 and a perfect prediction every round), and where Chooser always takes only the closed box and Predictor always predicts as much (a payout of $1,000,000 and a perfect prediction every round). An intelligent Chooser would attempt to move from the first equilibrium to the second.
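This retaliation dynamic can be simulated directly. Below is a minimal sketch in Python; the Predictor rule (predict whatever Chooser did last round), the ten-round horizon, and the strategy names are illustrative assumptions rather than part of the problem:

```python
CLOSED, OPEN = 1_000_000, 1_000

def payoff(choice, prediction):
    """Chooser's payoff for one round; choice/prediction are 1 or 2 boxes."""
    closed_contents = CLOSED if prediction == 1 else 0
    return closed_contents + (OPEN if choice == 2 else 0)

def play(chooser_strategy, rounds=10):
    """Total payoff against a Predictor who copies Chooser's last choice."""
    prediction = 1                       # assume Predictor starts trusting
    total = 0
    for r in range(rounds):
        choice = chooser_strategy(r)
        total += payoff(choice, prediction)
        prediction = choice              # retaliation: copy the last choice
    return total

always_one  = lambda r: 1                        # always one-box
always_two  = lambda r: 2                        # always two-box
defect_once = lambda r: 2 if r == 5 else 1       # one-box except round 5

for name, strategy in [("always one-box", always_one),
                       ("always two-box", always_two),
                       ("defect once", defect_once)]:
    print(f"{name:>15}: ${play(strategy):,}")
```

Under this assumed rule, ten rounds of consistent one-boxing pay $10,000,000, consistent two-boxing pays $1,010,000, and a single defection pays $9,001,000: the defector gains $1,000 in the round he defects but forfeits $1,000,000 in the round after, which is the retaliation that sustains the one-box equilibrium.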
Now consider a different case: Predictor does not have special knowledge of the future, but Chooser believes he does. Readers of the Scientific American article responded to the paradox in approximately a 5 to 2 ratio in favor of choosing only the closed box. A Predictor working from that data point (and assuming Chooser is himself a Scientific American reader) would believe that he could achieve about 71% accuracy (5 out of every 7 predictions) by always predicting that Chooser will take the closed box.
In this case, the problem rapidly devolves into an analysis of statistical preferences for risk avoidance and tolerance. This can be seen more easily if the dollar values are changed. For example, if the amount in the open box is reduced to $1, essentially all Choosers will select the closed box - the incremental value of the dollar does not justify the risk. On the other hand, almost all Choosers will select both boxes if the amount in the open box is raised to $900,000.
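The threshold effect can be made precise for a risk-neutral Chooser who maximizes expected value (real risk preferences would shift these numbers). Setting the two expected payoffs equal gives the minimum Predictor accuracy at which one-boxing pays: p > 1/2 + open/(2 × closed). A minimal sketch in Python, under that risk-neutrality assumption:

```python
def required_accuracy(open_box, closed=1_000_000):
    """Minimum Predictor accuracy at which one-boxing has the higher
    expected value: p * closed > open_box + (1 - p) * closed."""
    return 0.5 + open_box / (2 * closed)

for open_box in (1, 1_000, 900_000):
    p = required_accuracy(open_box)
    print(f"open box ${open_box:>9,}: one-box if accuracy > {p:.7f}")
```

With $1 in the open box, any accuracy meaningfully above 50% favors the closed box; at $900,000, Predictor must be right at least 95% of the time, matching the intuitions above.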
References
- Nozick, Robert (1969), "Newcomb's Problem and Two Principles of Choice," in Essays in Honor of Carl G. Hempel, ed. Nicholas Rescher, Synthese Library (Dordrecht, the Netherlands: D. Reidel), p. 115.
- Gardner, Martin (1974), "Mathematical Games," Scientific American, March 1974, p. 102; reprinted with an addendum and annotated bibliography in his book The Colossal Book of Mathematics (ISBN 0-393-02023-1)
- Campbell, Richmond and Lanning Sowden, ed. (1985), Paradoxes of Rationality and Cooperation: Prisoners' Dilemma and Newcomb's Problem, Vancouver: University of British Columbia Press. (an anthology discussing Newcomb's Problem, with an extensive bibliography)
- Levi, Isaac (1982), "A Note on Newcombmania," Journal of Philosophy 79 (1982): 337-42. (a paper discussing the popularity of Newcomb's Problem)
- Collins, John (2001), "Newcomb's Problem," in International Encyclopedia of the Social and Behavioral Sciences, ed. Neil Smelser and Paul Baltes, Elsevier Science. (http://collins.philo.columbia.edu/econphil/newcomb.pdf)
External links
- Newcomb's Paradox (http://members.aol.com/kiekeben/newcomb.html) by Franz Kiekeben
- Thinking Inside the Boxes (http://slate.msn.com/?id=2061419) by Jim Holt