Must Read: Nassim Taleb on the Earthquake in Japan

I do not remember who sent me this article, but I loved it. Everyone is now talking about Black Swans with regard to the uprisings in the Middle East and the horrific earthquake in Japan.

Taleb, who is an expert on Black Swans, is not surprised by these events. He wrote (before the quake) about how Black Swans are becoming much more frequent. As I mention in this article, I recommend reading all of Taleb’s books: The Black Swan, Fooled by Randomness, and The Bed of Procrustes.

At the end of 2010, when everyone was making their “2011 predictions,” I did not join the chorus. Everyone was wrong! Not one person predicted this earthquake in Japan. I stated many times that no one knows what events the year will bring, and of course I did not predict any of these events either. I merely predicted that events would happen that no one foresaw at the time but that, in hindsight, everyone would call obvious; that is the definition of a Black Swan event. Taleb sums it up beautifully below, but I will add one more word of caution to investors: never say never. The next Black Swan could come in one minute, one day, or one year, and very likely within the next few years. Always keep the possibility of a Black Swan in mind; you will not be able to even guess what the event will be, but be prepared!

With that, I leave you with Taleb’s comments:

(I’ve received close to 600 requests for interviews on the “Black Swan” of Japan. Refused all (except for one). I think for a living & write books, not interviews. This is what I have to say.)

The Japanese Nuclear Commission had the following goals set in 2003: “The mean value of acute fatality risk by radiation exposure resultant from an accident of a nuclear installation to individuals of the public, who live in the vicinity of the site boundary of the nuclear installation, should not exceed the probability of about 1×10^-6 per year (that is, at most 1 per million years)”.

That policy was designed only 8 years ago. Their one-in-a-million-year accident occurred about 8 years later. We are clearly in the Fourth Quadrant there.

I spent the last two decades explaining (mostly to finance imbeciles, but also to anyone who would listen to me) why we should not talk about small probabilities in any domain. Science cannot deal with them. It is irresponsible to talk about small probabilities and make people rely on them, except for natural systems that have been standing for 3 billion years (not manmade ones for which the probabilities are derived theoretically, such as the nuclear field for which the effective track record is only 60 years).

1) Small probabilities tend to be incomputable; the smaller the probability, the less computable. (Forget the junk about “Knightian” uncertainty; all small probabilities are incomputable.) (See TBS, 2nd Ed., or Douady and Taleb, Statistical undecidability, 2011.)
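
As a back-of-the-envelope illustration of that incomputability (my own sketch, not part of Taleb’s note): even in the best case of pure frequency estimation, the relative error of an estimated probability p from n observations scales like 1/sqrt(n·p), so the data requirement explodes as p shrinks.

```python
import math

def samples_needed(p, rel_err=0.10):
    """Observations needed so that the relative standard error of a
    frequency estimate of p is about rel_err.  The standard error of
    p_hat is sqrt(p*(1-p)/n), i.e. roughly 1/sqrt(n*p) relative to p."""
    return math.ceil((1 - p) / (p * rel_err ** 2))

for p in (1e-2, 1e-4, 1e-6):
    print(f"p = {p:.0e}: need ~{samples_needed(p):.1e} observations")
# A one-in-a-million annual risk takes on the order of 1e8
# observation-years to pin down to within 10%; the nuclear field,
# per the note above, has an effective track record of ~60 years.
```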

2) Model error causes the underestimation of small probabilities & their contribution (on balance, because of convexity effects). Any model error, just as any uncertainty about flying time, causes the expected arrival to be delayed (you rarely land 4 hours early, but more often 4 hours late, on a transatlantic flight, so “unforeseen” disturbances tend to delay you). See my argument about second-order effects in my paper. [INTUITION: uncertainty about the model used for the calculation of random effects causes a second layer of randomness, causing small probabilities to rise on balance.]
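
A minimal numerical sketch of those convexity effects (my illustration, using an assumed Gaussian tail and made-up parameter values, not Taleb’s own example): because the tail probability is convex in the model’s scale parameter, symmetric uncertainty about that parameter inflates the rare-event probability on balance.

```python
from statistics import NormalDist

def tail(k, sigma):
    """P(X > k) for X ~ Normal(0, sigma)."""
    return 1.0 - NormalDist(0.0, sigma).cdf(k)

k = 5.0              # a "5-sigma" event under the base model
base = tail(k, 1.0)  # tail probability with sigma known exactly
# Same average sigma, but uncertain: 0.8 or 1.2 with equal odds.
mixed = 0.5 * tail(k, 0.8) + 0.5 * tail(k, 1.2)

print(f"sigma known exactly: {base:.2e}")
print(f"sigma uncertain    : {mixed:.2e} ({mixed / base:.0f}x larger)")
# The second layer of randomness raises the tail probability by a
# factor of roughly 27 here, even though the average sigma is unchanged.
```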

3) The problem is more acute in Extremistan, particularly the manmade part. The probabilities are underestimated, but the consequences are much, much more underestimated.

4) As I wrote, because of globalization, the costs of natural catastrophes are increasing in a nonlinear way.

5) Casanova problem (survivorship bias in probability): If you compute the frequency of a rare event and your survival depends on such an event not taking place (as with nuclear events), then you underestimated that probability. See the revised note 93 on ??????.
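
A small Monte Carlo sketch of the Casanova effect (the parameters are hypothetical, chosen purely for illustration): observers exist only along the histories in which no event proved fatal, so the event frequency they record sits well below the true one.

```python
import random

random.seed(42)

P_EVENT = 0.05   # true annual probability of a catastrophic event
P_FATAL = 0.5    # chance that any given event leaves no observer
YEARS = 100
TRIALS = 100_000

rates = []  # per-year frequencies recorded by surviving observers
for _ in range(TRIALS):
    events, alive = 0, True
    for _ in range(YEARS):
        if random.random() < P_EVENT:
            events += 1
            if random.random() < P_FATAL:
                alive = False  # this history leaves no record
                break
    if alive:
        rates.append(events / YEARS)

print(f"true event probability : {P_EVENT}")
print(f"survivors' estimate    : {sum(rates) / len(rates):.4f}")
# Survivors record about 0.026, roughly half the true 0.05, because
# event-heavy histories rarely leave anyone around to do the counting.
```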

6) Semi-technical example, to illustrate the point (how models are Procrustean beds):

Take, for example, the binomial distribution B[N, p], where p is the per-trial probability of success (avoidance of failure) and N = 50. When p moves from 96% to 99%, the probability of avoiding failure in all 50 trials more than quadruples, from roughly 13% to roughly 61%. So a small imprecision in the probability of success (error in its computation, uncertainty about how we computed it) leads to enormous ranges in the total result. This shows that there is no such thing as “measurable risk” in the tails, no matter what model we use.
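
As a quick check on that arithmetic (a three-line sketch, not Taleb’s own computation):

```python
# Probability of avoiding failure in all N = 50 trials is simply p**N.
N = 50
for p in (0.96, 0.99):
    print(f"p = {p:.2f}: P(no failure in {N} trials) = {p ** N:.3f}")
# 0.96**50 is about 0.130 and 0.99**50 is about 0.605: a ~4.7x swing
# in the aggregate outcome from a 3-point change per trial.
```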

[Figure: Black Swan chart]
