Black Swans, the “Ludic Fallacy”, and Bayesian inference

A few days ago, I finished reading The Black Swan by Nassim Nicholas Taleb, which goes in-depth on topics such as judgment under uncertainty and the problems with unrealistic models that deliberately ignore unlikely but potentially highly influential phenomena in order to stay simple. Taleb emphatically argues against the use of the Gaussian bell curve, or the GIF (“great intellectual fraud”), as he likes to call it, pointing out that it forecasts events several standard deviations from the mean as vanishingly unlikely. An event 4 SD from the mean is roughly twice as likely as one 4.15 SD away, and, as he puts it, “the precipitous decline in odds of encountering something is what allows you to ignore outliers. Only one curve can deliver this decline, and it is the bell curve (and its nonscalable siblings).”

Taleb instead champions scalable “Mandelbrotian” curves, which, like all things Mandelbrotian, are fractal: the odds decline at a constant rate as you move away from the mean, rather than faster and faster. So the odds of having a net worth of over 8 million pounds are 1 in 4,000; over 16 million pounds, 1 in 16,000; over 32 million, 1 in 64,000; and so on, quartering with every doubling. Not only does the Mandelbrotian curve give more weight to outliers (“Black Swans” – unpredictable but highly influential events, such as stock market crashes, that Gaussian models miss), but any small portion of the graph resembles the larger curve in a fractal sort of way.
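To make the contrast concrete, here is a minimal sketch in Python (standard library only). The power-law exponent of 2 is my own inference from the quadrupling odds above, not a figure quoted from the book:

    # Tail odds under a Gaussian vs. a scalable (power-law) curve.
    from math import erfc, sqrt

    def gaussian_tail(z):
        # P(X > z) for a standard normal, via the complementary error function.
        return 0.5 * erfc(z / sqrt(2))

    # Gaussian: the odds roughly halve between 4 and 4.15 standard deviations.
    print(gaussian_tail(4.0) / gaussian_tail(4.15))  # ~1.9

    # "Mandelbrotian" tail with exponent 2: P(X > x) = C / x**2, so doubling
    # the threshold always quarters the odds. C is calibrated so that
    # P(net worth > 8 million) = 1/4,000, matching the example above.
    def pareto_tail(x, C=1.6e10):
        return C / x**2

    for wealth in (8e6, 16e6, 32e6):
        print(f"P(net worth > {wealth / 1e6:.0f}M) = 1 in {1 / pareto_tail(wealth):,.0f}")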

To explain this better, consider human male heights, which really are well described by the standard bell curve. If you plot the heights of only those people taller than 7 feet, the result no longer resembles a bell curve (it just looks like its tapered edge). On the other hand, whether you look at the wealth of everyone or only of those worth over 1 million pounds, the Mandelbrotian curve resembles itself. This is not to say that the Mandelbrotian curve should be used everywhere – the bell curve is perfectly adequate to describe human height. However, it falls short when applied ubiquitously to financial forecasting, which is far less predictable and far more susceptible to outliers.
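Here is a small sketch of that self-similarity, again in stdlib Python, with an illustrative Pareto exponent of 2 (my choice, matching the wealth example above):

    from math import erfc, sqrt

    def gaussian_tail(z):
        # P(X > z) for a standard normal.
        return 0.5 * erfc(z / sqrt(2))

    def pareto_tail(x, alpha=2.0):
        # P(X > x) for a Pareto distribution with minimum 1 and exponent alpha.
        return x ** -alpha

    # Conditional odds of doubling again, P(X > 2t | X > t):
    for t in (2.0, 4.0, 8.0):
        print(f"t={t}: Pareto {pareto_tail(2 * t) / pareto_tail(t):.3f}, "
              f"Gaussian {gaussian_tail(2 * t) / gaussian_tail(t):.2e}")
    # The Pareto ratio is 0.25 at every scale (self-similar); the Gaussian
    # ratio collapses the further out you start.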

Now onto the Ludic Fallacy, a term coined by Taleb himself. It pertains mainly to what Taleb deems “nerds” – people who work with models and are only capable of thinking “inside the box” – and the mistakes they make when they impose the parameters of theoretical games and models onto the real world. Kind of like economists assuming that “everyone is rational” in order to make their models work – BIG MISTAKE. (If Taleb had his way, economists would have been out of business long ago.) I don’t want to write out the entire explanation of the Ludic Fallacy, but it can be found here. Read it before going on.

This is where Bayesian inference comes in. Bayesian inference provides a probabilistic framework for the scientific method – for deciding whether or not to accept a hypothesis given data. Its fundamental move is updating your beliefs in light of new evidence: you start out with a prior belief and revise it as evidence arrives. An important aspect of that prior belief is your degree of confidence in it.
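For reference, the update rule underneath all of this is Bayes’ theorem (standard textbook material, not something specific to Taleb’s book):

    P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

where H is the hypothesis (say, “the coin is fair”), E is the evidence, P(H) is your prior, and P(H | E) is your updated, posterior belief.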

The coin-tossing example is great for this. Since you know nothing about the coin, you start by assuming it’s fair – heads 50% of the time. Now you have to state a degree of confidence in that estimate, which can be done fairly naturally: if you’re only a little confident, your prior could be the assumption that the coin has already been tossed 10 times and come up heads 5 times. If you’re more confident, you can go in assuming it’s been tossed 20, or 50, or even 100 times and come up heads half the time. Then, with every subsequent throw, you revise – if your internal model started with 5 heads and 5 tails, you would update it to 6 heads, 7 heads, 8 heads… with each head you observe. So even if you started with a high degree of confidence, say 100 prior fair tosses, 99 consecutive heads would still convince you that the coin wasn’t fair (a small sketch of this updating appears after the next paragraph).

There’s a saying I think I’ve mentioned before: “A scientist is just a normal person outside the lab.” It DOESN’T have to be that way. Even if a “street-smart” person doesn’t consciously reach for Bayesian probability theory in social situations, they at least understand that human interaction is highly nuanced and has its own peculiar logic.
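Here is that sketch. The “imagined prior tosses” scheme above is what statisticians call a Beta prior updated by counting; this is a minimal stdlib rendering of it, not code from the post:

    def posterior_prob_heads(prior_heads, prior_tails, heads_seen, tails_seen):
        # Posterior mean of P(heads) when the prior is encoded as pseudo-counts.
        total = prior_heads + prior_tails + heads_seen + tails_seen
        return (prior_heads + heads_seen) / total

    # Weak prior (10 imagined fair tosses) vs. strong prior (100 imagined
    # tosses), both confronted with 99 heads in a row:
    print(posterior_prob_heads(5, 5, 99, 0))    # ~0.954: belief in fairness is gone
    print(posterior_prob_heads(50, 50, 99, 0))  # ~0.749: even the confident prior gives way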

A similar question came up on an old AP Biology exam, and although it was presumably written to teach the opposite lesson – and its “correct” answer might look wrong at first glance – Bayesian statistics sorts it all out. The question reads something like: “A woman has given birth to 5 boys, and is expecting a 6th child. What is the probability that the 6th child is also a boy?” Unsuspecting “Fat Tonys” are supposedly tricked into choosing D) 5/6, whereas the “Dr. Johns” know that the answer is, truly, B) 1/2. The intended lesson is that the next outcome isn’t affected by the previous ones, but in light of the Ludic Fallacy and Bayesian inference, I think the answer is a bit more nuanced than that (though 1/2 is still the correct answer).

The reason is this: 5 boys in a row isn’t THAT unlikely – 1/2^5, or 1/32. Moreover, your prior belief about the likelihood of a boy had better be about 51% (slightly higher than 1/2, since the human sex ratio at birth skews slightly male). And as your prior “coin-toss model,” you have this outcome played out literally billions of times. No matter how many boys this woman has, it isn’t going to significantly alter your confidence that roughly half of human babies are boys.
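In the same pseudo-count style as the coin sketch above – with a deliberately made-up prior of a billion recorded births at 51% boys, purely for illustration:

    # Hypothetical pseudo-counts standing in for "billions of prior births";
    # these are illustrative numbers, not census data.
    prior_boys, prior_girls = 510_000_000, 490_000_000

    def prob_next_boy(boys_seen, girls_seen):
        total = prior_boys + prior_girls + boys_seen + girls_seen
        return (prior_boys + boys_seen) / total

    print(prob_next_boy(0, 0))  # 0.51 before this family's data
    print(prob_next_boy(5, 0))  # still ~0.51 after five boys in a row

Five observations against a prior that strong change essentially nothing – which is exactly why 1/2 (well, ~0.51) survives as the answer.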

You might object: are some women more likely to have baby boys? Fair point. I think that’s going a little too in-depth for multiple-choice question 4 on the 1998 Biology exam or whatever, but just for fun I looked into it – my “research” proved… inconclusive. The moral of the story is that prior outcomes don’t affect future outcomes, only your beliefs about them – and in this case, prior outcomes shouldn’t even affect your beliefs about future outcomes.

Further Reading:

An Intuitive Explanation of Bayes’ Theorem – Eliezer Yudkowsky (shorter version here)

A Technical Explanation of Technical Explanation – Eliezer Yudkowsky

N.B. – The first link is an introduction to Bayes’ Theorem, the second is a demonstration of how it is applied to scientific thinking and decision-making in general.

The Black Swan – Nassim Nicholas Taleb (this book was the inspiration for this post)
