Revisiting “a bird as rare upon the earth as a black swan”
But, are black swans really rare? If you're unfamiliar with "stopping times" and "the big jump principle", then beware of Black Swan events.
Black Swan: An unpredictable, high-consequence event that seems to be easily explained in hindsight.
It has been nearly 20 years since Nassim Taleb’s “The Black Swan: The Impact of the Highly Improbable” spent 36 weeks atop the New York Times Best Seller list. Taleb’s overly sensational but important message asserts that extreme-impact, highly improbable (negative Black Swan) events often have an oversimplified postmortem … fueling a mistaken belief that these events could have been predicted and thus prevented.
Rather than racing down the risk analysis rabbit hole trying to better predict Black Swans, Taleb insightfully promotes “robustness” … safeguarding valuable assets by redressing their vulnerabilities to change.
Let’s take a look at the properties of Black Swan events and why robustness (and not risk analysis) is an essential strategy for dealing with them.
So what is risk, anyway? Don’t be taken in by the technical jargon. “Risk” and “risk analysis” are exactly what you think they are. Risk is a measure of YOUR certainty about the value of something. Risk analysis is the process of quantifying (putting numbers on) risk.
It is that simple.
So, for example, you might do some risk analysis when deciding whether or not to buy a comprehensive insurance policy on a car you own outright.
That is, you want to understand your certainty about the value of an insurance policy that will cover any damage … not just traffic accidents.
Of course, the policy’s value depends on what happens to your car and when. For example, if your car suffers no damage during the life of the policy, you will be out the premiums. If, however, your insured car is destroyed in a flood, you will be ahead if its replacement value exceeds the sum of previously paid premiums. So the value of the policy is determined by the difference between what the policy actually pays out and how much you actually spend on premiums over time. Obviously, the value of the policy is uncertain, because you are not clairvoyant.
The point of this example is that risk measures your certainty about (policy) value. So, how is certainty measured?
Without getting too far into the weeds, “certainty” is measured with probability (all other measures of certainty/uncertainty are, well, not certain).
Risk analysis requires that, on the basis of presently available information, you put numbers on “R(v)” … the probability that the policy will pay out a value “V” greater than any specific number “v” that you want to talk about.
A graph of R(v) is always non-increasing and bounded between 0 and 1. The fact that risk is represented by a graph immediately tells you that risk is a function and not one number … a fact lost on many risk analysis pundits.
So, risk analysis boils down to constructing a risk graph (i.e., plotting R(v) vs. v) using only information that is available to you right now.
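For the numerically inclined, here is a minimal sketch of what an empirical risk graph looks like. The premium, claim probability, and payout distribution below are invented purely for illustration; real risk analysis would have to justify every one of them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-year policy: $1,200 in premiums, a 5% chance of a
# covered claim, and claim payouts drawn from a lognormal distribution.
# Every number here is an assumption, made up for illustration only.
n_scenarios = 100_000
premiums = 1_200.0
claim_occurs = rng.random(n_scenarios) < 0.05
payouts = np.where(claim_occurs,
                   rng.lognormal(mean=9.0, sigma=1.0, size=n_scenarios),
                   0.0)

# Policy value in each scenario: what the policy pays out minus what
# you paid in premiums.
policy_value = payouts - premiums

def R(values, v):
    """Empirical risk graph: the fraction of scenarios where value V > v."""
    return float(np.mean(values > v))

for v in [-1_200, 0, 5_000, 20_000]:
    print(f"R({v:>7,}) = {R(policy_value, v):.4f}")
```

Running this prints a handful of points on the risk graph, and you can see directly that R(v) never increases and stays between 0 and 1.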
A bit of focused thought reveals that constructing the risk graph (even for our simple insurance policy thought exercise) is pretty tricky.
First of all, since risk analysis is based on presently available information (i.e., the best of our knowledge), the policy value V is necessarily the policy’s net present value … which, unless you are clairvoyant, is always uncertain in practice, because you must discount to the present all imaginable future policy payout scenarios in order to produce a risk graph.
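Again purely for illustration, here is a hedged sketch of discounting random payout scenarios to the present. The premium, discount rate, claim likelihood, and payout distribution are all assumed numbers, not estimates of anything real:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical five-year policy: $100/month premiums, a 4% annual
# discount rate, and at most one random claim. All numbers are made up.
months = 60
monthly_rate = 0.04 / 12
discount = 1.0 / (1.0 + monthly_rate) ** np.arange(months)

def npv_one_scenario():
    """Discount one imagined future's cash flows back to the present."""
    cash = -100.0 * np.ones(months)          # premiums flow out monthly
    if rng.random() < 0.10:                  # assume 10% chance of a claim
        cash[rng.integers(months)] += rng.lognormal(mean=9.5, sigma=0.8)
    return float(np.sum(cash * discount))

npvs = np.array([npv_one_scenario() for _ in range(50_000)])
print(f"mean NPV: ${npvs.mean():,.0f};  P(NPV > 0) = {(npvs > 0).mean():.3f}")
```

Even this toy version makes the point: the policy’s net present value is a random variable, and the risk graph summarizes your certainty about it.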
The long and short is this: Something as simple as risk analysis for an auto-insurance policy’s net present value is extremely complicated, because the net present value is a random variable whose payout scenarios always unfold over time and future circumstances. Which brings us to “stopping times”.
“Stopping time” is a term with roots in gambling theory that is now applied to more general situations. Specific events occur at unpredictable (i.e., random) times … for example, vehicle damage that would qualify for an insurance claim occurs at random times.
When contemplating the value of a comprehensive auto insurance policy, you would identify many possible claim scenarios and then assess the likelihood of when and how often payouts would be issued according to your particular circumstances. A random time is called a stopping time if, at any point in history, you can determine whether or not the random time has yet occurred.
For instance, you would receive a claim payout should your car be struck by a meteorite. Further, your personal history will always reveal whether or not your car has yet been struck by a meteorite.
But, not all random times are stopping times.
Consider, for instance, the final time that your car is struck by a meteorite. It is impossible to look at history and say for sure that you will never again have a vehicle struck by a meteorite. The bottom line is this: Stopping times always admit probability measures … not so with non-stopping times.
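The meteorite example translates directly into code. In this minimal sketch (with a made-up strike probability), the “first strike” question is answerable from past history alone, while the “final strike” question is not:

```python
import numpy as np

rng = np.random.default_rng(2)

# A simulated 50-year history: 1 means "car struck by a meteorite that
# year". The 2%-per-year strike probability is a made-up number.
history = (rng.random(50) < 0.02).astype(int)

def first_strike_has_occurred(t, history):
    """A stopping time: using only history up to and including year t,
    you can always decide whether the *first* strike has happened yet."""
    return bool(history[: t + 1].any())

def was_final_strike(t, history):
    """NOT a stopping time: deciding whether year t held the *final*
    strike requires peeking at the future, history[t + 1:]."""
    return bool(history[t] == 1 and not history[t + 1:].any())

t = 10
print(first_strike_has_occurred(t, history))   # decidable at time t
print(was_final_strike(t, history))            # needs the future to answer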
Hence, events that do not occur at a stopping time can come out of the blue, because you lack the information needed to probabilistically assess the likelihood of their time of occurrence. Recall that Black Swan events are not predictable … i.e., your history does not inform the likelihood of their occurrence. So, Black Swans do not occur at stopping times.
Interestingly, history will always tell you whether or not your car has been struck by meteorites at least n times … no matter how big the n. So, even though they are exceedingly rare and catastrophic (to you) occurrences, meteorite strikes on your vehicle are not Black Swan events.
Suppose now that you are an Uber driver working in an idyllic community where floods, windstorms, and earthquakes are unheard of, and the large population of wealthy senior citizens relies heavily on ride-share transportation. Like all Uber drivers, your profit margin is narrow. You have done a thorough risk analysis and have concluded that the likelihood of non-traffic-related damage to your vehicle is extremely low and that you will forego a comprehensive insurance policy … even though loss of your car for any reason would surely bankrupt you.
After retrieving an elderly lady from the airport, you are stopped by police. You and the elderly lady are ordered to exit the vehicle, at which time she tosses what appears to be a cosmetic compact into your back seat. The authorities immediately rush everyone away from your car and arrest the elderly lady on suspicion of being a Russian assassin. Authorities fear that the compact contains botulinum toxin. Your uninsured car is now quarantined and is a total loss. Didn’t see that one coming, huh?
You’ve never even heard of botulinum toxin; hence, this first poisoning of your Uber vehicle did not occur at a stopping time. Interestingly, a second such incident would occur at a stopping time, since this Black Swan is now a part of history.
How frequently do Black Swans occur? No one can say.
It is impossible to talk about the probability of occurrence of things that you have not yet imagined. Black Swans do not admit probabilities … because you can’t put a probability on the time that they will occur.
So, would you take the word of someone who would tell you that risk analysis can predict the likelihood of a Black Swan occurrence?
The big jump principle has been around for a long time, yet risk analysts rarely mention it. Also called the catastrophe principle, it is closely associated with Black Swan events. By definition, all Black Swan events are high-consequence (i.e., high-value). So, what constitutes “high value”? Without getting bogged down in the details of utility theory, common sense tells us that the value of anything depends on whom you ask and when. Nearly everyone gauges value with respect to their total personal wealth. For example, $40,000 is extremely valuable to an Uber driver needing to replace a car. But to Jeff Bezos, $40,000 falls into the decimal dust of his immense wealth. The question facing everyone (Uber driver or Bezos) assessing Black Swan events is the same: “What is the likelihood that a Black Swan will have super-extreme consequences?” … whatever “super-extreme” means to you personally. Here is where the big jump principle comes into play.
The big jump principle (a provable mathematical fact) applies when extreme loss is driven by just one among many independent loss sources … which happens when the losses are heavy-tailed, i.e., sub-exponential. Suppose that the number “x” represents an extremely large loss to you. Exponential loss is memoryless: the likelihood of a loss greater than “x + u”, given that the loss will be greater than “x”, is the same as the likelihood that a loss is greater than “u”. The arithmetic for exponential loss is pretty straightforward. Suppose, for example, that x = $200,000 would be an extreme loss for you. Then the likelihood that you will lose more than $400,000, given that you are going to lose at least $200,000, is the same as the likelihood of losing more than $200,000 in the first place. Sub-exponential losses are even worse … once a loss exceeds “x”, exceeding “x + u” is more likely than that arithmetic suggests: a super-extreme scenario. So, when the big jump principle is in play, extreme losses have an uncomfortable tendency to escalate to super-extreme losses.
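A small Monte Carlo sketch makes the contrast concrete. The exponential scale and the Pareto tail index below are assumptions chosen only to illustrate memoryless versus heavy-tailed behavior:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
x = u = 200_000.0

# Exponential (light-tailed) losses are memoryless:
#   P(L > x + u | L > x) == P(L > u).
exp_losses = rng.exponential(scale=100_000.0, size=n)

# Sub-exponential (heavy-tailed) losses, here a classical Pareto with
# an assumed tail index of 1.5: having already exceeded x makes
# exceeding x + u *more* likely than the unconditional P(L > u).
pareto_losses = 50_000.0 * (1.0 + rng.pareto(1.5, size=n))

for name, L in [("exponential", exp_losses), ("pareto", pareto_losses)]:
    conditional = np.mean(L[L > x] > x + u)   # P(L > x+u | L > x)
    unconditional = np.mean(L > u)            # P(L > u)
    print(f"{name:11s}: P(L > x+u | L > x) = {conditional:.3f},  "
          f"P(L > u) = {unconditional:.3f}")
```

With these made-up parameters, the two probabilities agree for the exponential case (roughly 0.135 each), while for the Pareto case the conditional probability comes out markedly larger than the unconditional one … the super-extreme scenario in miniature.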
So, when is the big jump principle in play? Returning to the “Uber transporting an elderly assassin” thought exercise, the Black Swan car poisoning is only one among the many events in which passengers cost the driver money. In fact, every passenger brings costs (including fuel, cleaning, general maintenance, etc.). It is reasonable to treat passenger rides as mutually independent events with similar cost likelihoods. It also stands to reason that the huge negative value of the assassin’s ride dominates the costs of all other passengers combined. Black Swan events often induce super-extreme loss that is not the cumulative cost over all loss sources but the work of a single big jump. In this example, the costs of cleaning up a botulinum toxin release would far exceed the Uber driver’s own loss. The point of this thought exercise is that while Black Swans are, by definition, extreme-consequence events, the likelihood of super-extreme losses can be just as great as the likelihood of losses considered merely extreme … as the simulation sketch below suggests.
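Here is a minimal simulation of the Uber thought exercise, with invented per-ride cost numbers. It conditions on an extreme total loss and asks where that loss came from:

```python
import numpy as np

rng = np.random.default_rng(4)
n_sims, n_rides = 50_000, 200

# Each ride imposes a small, independent, heavy-tailed cost on the
# driver (classical Pareto, assumed tail index 1.5, assumed minimum
# cost of $10 per ride). All numbers are illustrative.
costs = 10.0 * (1.0 + rng.pareto(1.5, size=(n_sims, n_rides)))
totals = costs.sum(axis=1)
maxima = costs.max(axis=1)

# Condition on an extreme total loss and ask how much of it came from
# the single worst ride. The big jump principle says: most of it.
threshold = np.quantile(totals, 0.999)
extreme = totals > threshold
share = maxima[extreme] / totals[extreme]
print(f"In the worst 0.1% of simulated careers, the single biggest ride "
      f"accounts for {share.mean():.0%} of the total loss on average.")
```

In runs like this, the worst simulated careers are not the ones where every ride went a bit badly; they are the ones containing a single catastrophic ride … the big jump.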
There you have it. Black Swan events cannot be predicted (i.e., they don’t yield to risk analysis), and when they occur, there is a reasonable likelihood that losses will go off the charts. How does one defend against the consequences of Black Swan events? Recall that Taleb suggests designing robustness into systems.

Robustness is a familiar engineering concept. Since the dawn of the profession, safety-conscious engineers have designed nearly everything to operate within margins of safety. From an engineering design perspective, robustness and safety margin are synonyms. But protecting valuable assets from the ravages of Black Swans through safety margins requires a serious commitment of financial resources. Enhanced safety does not come for free.
I’m not a Taleb disciple, but I agree with his observation that pundits often find overly simplistic after-the-fact explanations for Black Swan events. This smug reasoning has led to overconfidence in modern risk analysis methods … especially those that promise to forecast both the frequency and the consequences of rare catastrophes … so much so that risk analyses are rapidly supplanting robust design in critical circumstances. Overconfidence is being supercharged by media narratives touting advanced data science, artificial intelligence, and the coming availability of quantum computing. But, as Taleb correctly understands (and tries to explain in non-analytical language), risk analysis is necessarily analytically detached from how information in the real world unfolds over time, guaranteeing that Black Swans will never yield to risk analysis. Committing the resources necessary for identifying and mitigating system vulnerabilities is the essential feature of robust design. Only then can you hope to defend against the big jump principle.
To read more of Marty’s writing and research, go to ORCiD.
Cover photo by tooheys on Freeimages.com


