The Allais Paradox and Misunderstanding Probability and Randomness

The Allais Paradox:

Suppose you are given two gambles to choose from, 1A and 1B:

1A: 1 million with 100% certainty
1B: 89% chance of 1 million, 1% chance of nothing, and 10% chance of 5 million

Later, you are given two more gambles to choose from, this time 2A and 2B:

2A: 1 million with 11% probability, nothing with 89% probability
2B: 5 million with 10% probability, nothing with 90% probability

Which combination do you choose? The “paradox” is not a logical contradiction; it just highlights a quirk in human reasoning. Most people choose the combination 1A-2B. I can see why – I have to fight the urge to pick it myself – but it is completely inconsistent.

One of the “axioms” of decision theory, or of rationality generally, is that when each alternative in a set of options changes in some identical way, you shouldn’t change your choice – otherwise you’re being inconsistent. Suppose you’re at a dinner party and the host decides to play a game. They flip a coin, and if it lands heads, you get your pick of either ice cream or cake for dessert. Otherwise you get nothing.

At this point, the host rushes to the kitchen and returns with good news – they do in fact have some strawberries, so if the coin lands heads, you get strawberries as well! Logical decision theory dictates that this should not change your preference between ice cream and cake. Adding strawberries to the heads branch of both options is no reason to suddenly start preferring ice cream over cake; whatever was driving your decision of one over the other has not changed.

So, similarly, what if choice 1A were instead written as an 89% chance of getting 1 million along with an 11% chance of getting 1 million? Written that way, 1A and 1B share a common piece: an 89% chance of 1 million. Now, if that common 89% chance of 1 million is replaced with an 89% chance of getting nothing in both 1A and 1B, they transform exactly into choices 2A and 2B. So choosing 1A and 2B, however attractive it may feel, really reflects an inconsistent way of viewing the world.
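To make that bookkeeping concrete, here is a minimal Python sketch (my own, not taken from any of the papers) that writes each gamble as a dictionary of outcome → probability:

```python
def expected_value(gamble):
    """Expected payoff of a gamble given as {outcome: probability}."""
    return sum(outcome * prob for outcome, prob in gamble.items())

# 1A, written in decomposed form: an 89% chance of 1M plus an 11% chance of 1M.
g1A = {1_000_000: 0.89 + 0.11}
g1B = {1_000_000: 0.89, 0: 0.01, 5_000_000: 0.10}

# Swap the shared 89% chance of 1M for an 89% chance of nothing in both:
g2A = {1_000_000: 0.11, 0: 0.89}
g2B = {0: 0.90, 5_000_000: 0.10}

for name, g in [("1A", g1A), ("1B", g1B), ("2A", g2A), ("2B", g2B)]:
    print(name, expected_value(g))
# 1A 1000000.0   1B 1390000.0   2A 110000.0   2B 500000.0
```

By raw expected value, 1B beats 1A and 2B beats 2A. The axiom doesn’t care which letter you prefer – it just demands that a consistent chooser pair A with A or B with B.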

There’s an interesting trait of human psychology closely related to this, called “zero-risk bias”: people prefer to reduce a small risk to zero rather than achieve a much larger reduction in a bigger risk. We seem to be irrationally drawn to certainty, in a way that a simple expected-value calculation can’t capture. For example, tests indicated that the highest sure payment a person would accept in place of a 50% chance of 1,000 dollars was somewhere between 300 and 400 dollars. In other words, people equated about 350 dollars of certain money with a 50% chance of getting 1,000 dollars, whereas an expected-value calculation would equate a 100% chance of 350 with a 35% chance of 1,000 dollars (0.35 × 1000 = 350), and a 50% chance of 1,000 with a “sure thing” of 500 dollars. Similarly, people are drawn to choice 1A because of its cozy-sounding certainty, whereas in 1B that small 1% chance of walking away with nothing looms far larger than it should.
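A quick check of that arithmetic in Python – note that the u(x) = x^0.66 utility at the end is my own illustrative fit for the ~350-dollar figure, not something taken from the experiments:

```python
# Checking the arithmetic in the paragraph above.
p_win, prize = 0.5, 1000
print(p_win * prize)   # 500.0 - the risk-neutral "fair" sure thing
print(0.35 * prize)    # 350.0 - what a 35% shot at $1000 is worth on average

# A concave utility u(x) = x**alpha can reproduce a ~$350 certainty
# equivalent; alpha = 0.66 is a hypothetical fit, not a measured value.
alpha = 0.66
certainty_equivalent = (p_win * prize**alpha) ** (1 / alpha)
print(round(certainty_equivalent))  # ~350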

At the same time, the utility curve is concave for gains and convex for losses. To see what that means, imagine getting 10 dollars five times. Your subjective gain is lower on the fifth time than on the first; in economics terms, the marginal utility decreases with each subsequent 10-dollar gain. Losses mirror this: each additional 10-dollar loss stings a little less than the one before, so the dissatisfaction of losing 50 dollars at once is actually less than 5 times the sting of losing 10 dollars. What makes losses special is that the curve is also steeper on that side – losing 10 dollars hurts roughly twice as much as gaining 10 dollars feels good.
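A sketch of that value function, using the α = β = 0.88 and λ = 2.25 estimates from Tversky and Kahneman’s 1992 follow-up paper (treat the numbers as illustrative, not gospel):

```python
# Prospect-theory value function: concave for gains, convex for losses,
# and steeper for losses (loss aversion). Parameters are the 1992
# Tversky-Kahneman estimates.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha          # diminishing sensitivity to gains
    return -lam * (-x) ** beta     # losses loom larger, but also flatten out

print(5 * value(-10))          # ~ -85.4: five separate $10 losses
print(value(-50))              # ~ -70.4: one combined $50 loss hurts less
print(value(10), value(-10))   # ~7.6 vs ~-17.1: losses hurt ~2x as much
```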

Amos Tversky and Daniel Kahneman dubbed these decision-making patterns “prospect theory” – a descriptive decision theory, one that models how real (irrational) decision-makers actually behave rather than prescribing how they should. It corresponds roughly with the philosophy of bounded rationality. Tversky and Kahneman found a number of other interesting results along these lines.

I think that there are a couple of things at work here. For one, the word “probability” has a lot of definitions – we can take the frequentist/statistical definition of “do enough trials and it will happen x% of the time” or the inductive/Bayesian definition of “a measure of my uncertainty about the event.” Either way, it rests on random events, which are intuitively difficult to grasp.

If you flip a coin thirty times, it is more likely than not that there will be a run of 5 consecutive heads or 5 consecutive tails. Yet if you asked a person to simulate 30 coin tosses by writing a string of H-T-T-H-H…, they would almost certainly not include a run of length 5, for fear of “cheating.” Strings produced by humans also switch between H and T much more often than a real coin does. In a real sequence of tosses, the probability that the next toss differs from the previous one is 0.5, so across 30 tosses you would expect the outcome to change about 15 times (H-T-T-H contains 2 changes, H-T and T-H). A typical human-simulated string, however, contains closer to 19 or 20 changes.

It’s the same mental algorithm that gives us the gambler’s fallacy – if you have a 1-in-30 chance of winning at roulette on a given turn and you haven’t won in 300 turns, you still have exactly the same chance of winning now as you did on your first turn. The universe does not keep track of past results! Yet they ask essentially this question on the AP Biology exam (“If a woman has 6 male children, what’s the probability the next one is female?” – although maybe there is some confounding variable in there).
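Both of those claims about coin runs are easy to check with a quick Monte Carlo simulation (my sketch, in Python):

```python
# In 30 fair flips, a run of 5+ identical outcomes is more likely than not,
# and consecutive flips differ on average 29 * 0.5 = 14.5 times.
import random

def longest_run(flips):
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

trials = 100_000
runs_of_5 = total_changes = 0
for _ in range(trials):
    flips = [random.choice("HT") for _ in range(30)]
    runs_of_5 += longest_run(flips) >= 5
    total_changes += sum(a != b for a, b in zip(flips, flips[1:]))

print(runs_of_5 / trials)      # ~0.63: a run of 5+ appears most of the time
print(total_changes / trials)  # ~14.5 changes, vs. the ~19-20 humans write
```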

There’s more evidence of the human struggle to grasp probability – the long-running discussion of randomness in iPod shuffling is a good example, with genuinely random shuffles striking listeners as suspiciously patterned.

Tversky and Kahneman also proposed a “pi-function” modeling the subjective weight people attach to stated probabilities. Its essential features: small probabilities are over-weighted, moderate and high ones are under-weighted, and the curve misbehaves near the endpoints – a 5% chance gets either dismissed outright or blown out of proportion, and the step from a 90% or 95% chance to “practically certain” 100% feels far bigger than any 5% step should.
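For a feel of the curve’s shape, here is a small sketch. The 1979 paper draws the pi-function rather than giving a formula, so I’m borrowing the closed form and the γ = 0.61 estimate from the 1992 cumulative-prospect-theory paper – one illustrative fit, not the definitive shape:

```python
# Probability-weighting curve, Tversky-Kahneman 1992 form (gains).
def weight(p, gamma=0.61):
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in [0.01, 0.05, 0.10, 0.50, 0.90, 0.95, 0.99]:
    print(f"{p:.2f} -> {weight(p):.3f}")
# 0.01 -> ~0.055 (overweighted), 0.99 -> ~0.911 (underweighted): small
# chances loom too large, near-certainties get shortchanged until they
# actually hit 100%.
```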

I should definitely look more into this misunderstanding of randomness. Something in our brains must really be wired to struggle with it – or maybe we just haven’t defined it properly, or at least haven’t popularized a good definition. The Bayesian definition in terms of “uncertainty” is quite appealing, since it’s more intuitive and more realistic as well – we can’t just go running tons of trials for everything to figure out the probabilities… but I need to research it further.

Further Reading:

Kahneman, Daniel, and Amos Tversky (1979), “Prospect Theory: An Analysis of Decision under Risk,” Econometrica 47(2): 263–291.

Tversky, Amos, and Daniel Kahneman (1992), “Advances in Prospect Theory: Cumulative Representation of Uncertainty,” Journal of Risk and Uncertainty 5(4): 297–323.
