Thursday, June 5, 2008

Gambler's fallacy

The gambler's fallacy, also known as the Monte Carlo fallacy or the fallacy of the maturity of chances, is the false belief that the probability of an event in a random sequence is dependent on preceding events, its probability increasing with each successive occasion on which it fails to occur. If a fair coin is tossed repeatedly and tails comes up many times in a row, a gambler may believe, incorrectly, that heads is more likely on the following toss.[1] Such an event may be referred to as "due". This is an informal fallacy.
The inverse gambler's fallacy is the belief that an unlikely outcome of a random process (such as rolling double sixes on a pair of dice) implies that the process is likely to have occurred many times before reaching that outcome.


An example: coin-tossing
The gambler's fallacy can be illustrated by considering the repeated toss of a fair coin: the chance of getting heads on any toss is exactly 0.5 (one in two), the chance of heads twice in a row is 0.5 × 0.5 = 0.25 (one in four), the probability of three heads in a row is 0.5 × 0.5 × 0.5 = 0.125 (one in eight), and so on.
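These figures are easy to verify directly; a one-loop sketch in Python:

    # Probability of a run of n heads with a fair coin is 0.5 ** n.
    for n in range(1, 6):
        p = 0.5 ** n
        print(f"P({n} heads in a row) = {p} (1 in {round(1 / p)})")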
Now suppose that we have just tossed four heads in a row. A believer in the gambler's fallacy might say, "If the next coin flipped were to come up heads, it would generate a run of five successive heads. The probability of a run of five successive heads is (1/2)^5 = 1/32; therefore, the next coin flipped only has a 1 in 32 chance of coming up heads."
This is the fallacious step in the argument. If the coin is fair, then by definition the probability of heads and the probability of tails are each always 0.5, never more, never less. While the probability of a run of five heads is only 1 in 32 (0.03125), that is its probability before the coin is first tossed. After the first four tosses, the results are known and contribute no further uncertainty. The probability of five consecutive heads is the same as that of four heads followed by one tail: both are 1 in 32, so tails is no more likely than heads on the fifth toss. Indeed, the calculation of the 1 in 32 probability relied on the assumption that heads and tails are equally likely at every step. Each of the two possible outcomes has equal probability no matter how many times the coin has been flipped previously and no matter what the results were. Reasoning that a tail is more likely than a head on the next toss because of the past tosses is the fallacy: the idea that a run of luck in the past somehow influences the odds of a bet in the future. This kind of reasoning would apply only if we had to predict the results of all the tosses before any were carried out.
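A minimal simulation sketch makes this concrete, estimating how often heads follows a run of four heads (the toss count and seed are arbitrary choices):

    import random

    # Estimate the frequency of heads on the toss immediately
    # following a run of four (or more) heads with a fair coin.
    random.seed(42)
    samples = 0            # tosses preceded by a run of four heads
    heads_after_run = 0    # how many of those tosses came up heads
    streak = 0             # current run of consecutive heads
    for _ in range(1_000_000):
        flip_is_heads = random.random() < 0.5
        if streak >= 4:    # the previous four tosses were all heads
            samples += 1
            if flip_is_heads:
                heads_after_run += 1
        streak = streak + 1 if flip_is_heads else 0
    print(heads_after_run / samples)  # ~0.5, not 1/32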
As an example, the popular doubling strategy of the Martingale betting system (where a gambler starts with a bet of $1 and doubles the stake after each loss until a win) is flawed. Situations like these are investigated in the mathematical theory of random walks. This and similar strategies either trade many small wins for a few huge losses (as in this case) or vice versa. Only with an infinite amount of working capital would one come out ahead using this strategy; with finite capital, one is better off betting a constant amount, if only because that makes it easier to estimate how much one stands to lose in an hour or day of play.
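A short simulation sketch of this trade-off follows; the $1,000 bankroll, 200-round session length, and even-money odds are illustrative assumptions rather than anything fixed by the strategy itself:

    import random

    # Simulate Martingale sessions: start at $1, double the stake
    # after each loss, reset to $1 after each win.
    def martingale_session(bankroll=1000, base_stake=1, rounds=200):
        stake = base_stake
        for _ in range(rounds):
            stake = min(stake, bankroll)   # cannot bet more than we have
            if random.random() < 0.5:      # even-money win
                bankroll += stake
                stake = base_stake
            else:                          # loss: double up
                bankroll -= stake
                stake *= 2
            if bankroll <= 0:
                break                      # ruined
        return bankroll

    random.seed(1)
    results = [martingale_session() for _ in range(10_000)]
    print(sum(results) / len(results))  # mean stays near 1000: no edge
    print(min(results))                 # occasional ruin: rare huge losses

Most simulated sessions end slightly ahead, but the average gain is zero because the rare busts are large: many small wins traded for a few huge losses.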

Psychology behind the fallacy
Amos Tversky and Daniel Kahneman proposed that the gambler's fallacy is a cognitive bias produced by a psychological heuristic called the representativeness heuristic.[2][3] According to this view, "after observing a long run of red on the roulette wheel, for example, most people erroneously believe that black will result in a more representative sequence than the occurrence of an additional red,"[4] so people expect that a short run of random outcomes should share the properties of a longer run; specifically, that deviations from the average should balance out. When asked to make up a random-looking sequence of coin tosses, people tend to produce sequences in which the proportion of heads to tails stays closer to 0.5 in any short segment than chance alone would predict;[5] Kahneman and Tversky interpret this to mean that people believe short sequences of random events should be representative of longer ones.[6]
The representativeness heuristic is also cited as an explanation for the related clustering illusion, in which people see streaks in random events as non-random, even though such streaks are far more likely to occur in small samples than people expect.[7]

Other examples
The probability of flipping 21 heads in a row with a fair coin is 1 in 2,097,152, but the probability of flipping a head after having already flipped 20 heads in a row is simply 0.5, because the tosses are independent. This follows directly from the definition of conditional probability.
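Since 2^21 = 2,097,152 sequences is small enough to enumerate, both figures can be checked exactly by brute force (a sketch; encoding tosses as "H"/"T" tuples is just an implementation choice):

    from itertools import product

    # Enumerate all 2**21 equally likely sequences of 21 tosses.
    HEADS_20 = ("H",) * 20
    total = start_with_20_heads = all_21_heads = 0
    for seq in product("HT", repeat=21):
        total += 1
        if seq[:20] == HEADS_20:        # first 20 tosses all heads
            start_with_20_heads += 1
            if seq[20] == "H":
                all_21_heads += 1
    print(total)                        # 2097152
    print(all_21_heads, "/", total)     # 1 / 2097152: P(21 heads in a row)
    print(all_21_heads / start_with_20_heads)  # 0.5: P(heads | 20 heads so far)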
Some lottery players choose the same numbers every time, while others intentionally change their numbers; both strategies are equally likely to win any individual lottery draw. Even copying the numbers that won the previous draw gives the same probability, although a rational gambler might attempt to predict other players' choices and then deliberately avoid those numbers, for fear of having to split the jackpot with them.
A joke told among mathematicians demonstrates the nature of the fallacy. A man decides always to bring a bomb with him when flying on an airplane. "The chances of an airplane having a bomb on it are very small," he reasons, "and certainly the chances of having two are almost none!"
A similar example appears in the film The World According to Garp, when the hero Garp decides to buy a house moments after a small plane crashes into it, reasoning that the chances of another plane hitting the house have just dropped to zero.

Non-examples of the fallacy
There are many scenarios where the gambler's fallacy might superficially seem to apply but does not, including:
When the events are not independent, the probability of future events can change based on the outcome of past events; formally, the system is said to have memory. An example of this is cards drawn without replacement. Once a jack is removed from the deck, the next draw is less likely to be a jack and more likely to be of another rank: the probability of drawing a jack, assuming that it was the first card drawn and that there are no jokers, decreases from 4/52 (7.69%) to 3/51 (5.88%), while the probability of drawing a card of each other rank increases from 4/52 (7.69%) to 4/51 (7.84%). This effect is the basis of card counting in blackjack (the first sketch after this list computes these numbers exactly).
When the probability of each event is not known, such as with a loaded die or an unbalanced coin. As a run of heads (or, e.g., reds on a roulette wheel) gets longer and longer, the chance that the coin or wheel is loaded increases. If one flips heads 21 times in a row, the probability of the next flip being heads may indeed be higher, because the run itself is evidence that the coin may be biased (the second sketch after this list works through this updating).
The outcome of future events can be affected if external factors are allowed to change the probability of the events (e.g., changes in the rules of a game affecting a sports team's performance levels). Additionally, an inexperienced player's success may decrease after opposing teams discover his or her weaknesses and exploit them; the player must then attempt to compensate and randomize his or her strategy (see game theory).
Many riddles, such as the Monty Hall problem, trick the reader into believing that they are examples of the gambler's fallacy when they are not.
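A small sketch of the card-drawing arithmetic above, using exact fractions (the specific rank, jack, comes from the example):

    from fractions import Fraction

    # Probability of drawing a jack from a full 52-card deck (no jokers).
    p_jack_first = Fraction(4, 52)
    print(p_jack_first, float(p_jack_first))    # 1/13, about 0.0769

    # After one jack is drawn, 51 cards remain, 3 of them jacks.
    p_jack_second = Fraction(3, 51)
    p_other_rank = Fraction(4, 51)              # any other specific rank
    print(p_jack_second, float(p_jack_second))  # 1/17, about 0.0588
    print(p_other_rank, float(p_other_rank))    # 4/51, about 0.0784

And a sketch of why a long run of heads is evidence of a loaded coin; the 1% prior and the 0.9 heads-bias are illustrative assumptions, not values from the text:

    # Bayesian update: how strongly do 21 heads in a row suggest a
    # biased coin, given a 1% prior that the coin is loaded?
    prior_loaded = 0.01
    p_heads_loaded = 0.9   # assumed bias of a loaded coin
    p_heads_fair = 0.5
    n = 21                 # observed heads in a row

    likelihood_loaded = p_heads_loaded ** n
    likelihood_fair = p_heads_fair ** n
    posterior_loaded = (likelihood_loaded * prior_loaded) / (
        likelihood_loaded * prior_loaded + likelihood_fair * (1 - prior_loaded)
    )
    print(posterior_loaded)  # about 0.9996: almost certainly loaded

    # The predicted chance of heads on the next toss blends both hypotheses:
    p_next = posterior_loaded * p_heads_loaded + (1 - posterior_loaded) * p_heads_fair
    print(p_next)            # about 0.9, well above 0.5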

References
^ Colman, Andrew (2001). "Gambler's Fallacy". A Dictionary of Psychology. Oxford University Press. Retrieved 2007-11-26.
^ Tversky, Amos; Kahneman, Daniel (1974). "Judgment under uncertainty: Heuristics and biases". Science 185: 1124-1131.
^ Tversky, Amos; Kahneman, Daniel (1971). "Belief in the law of small numbers". Psychological Bulletin 76 (2): 105-110.
^ Tversky & Kahneman, 1974.
^ Tune, G. S. (1964). "Response preferences: A review of some relevant literature". Psychological Bulletin 61: 286-302.
^ Tversky & Kahneman, 1971.
^ Gilovich, Thomas (1991). How We Know What Isn't So. New York: The Free Press, 16-19. ISBN 0-02-911706-2.
