I use the word “risk” often … and so do you.
But, dictionary definitions do not do the word justice because the real meaning of “risk” is personal and contextual. For example,
cheating on your taxes,
insulting a street gang leader,
flying on a Boeing 737 Max,
buying cattle futures, and
telling your mother a lie
each evokes its own perception of risk. And, all perceptions are grounded in personal experiences.
While cheating on taxes and lying to your mother are both nefarious activities, they are framed within different life experiences and difficult to calibrate. Seemingly simple risk comparisons are often difficult. “Which do you prefer: flying on a 737 or buying cattle futures?” It’s hard to say, because information like detailed airplane safety records and the price of a futures contract are dissimilar and typically beyond the experience of most of us.
Yet, in this era of AI and Big Data, we are confronted daily with institutionally quantified risk, pushing out numbers on everything from weather predictions to life insurance policies to investment planning. Institutions are ubiquitously crunching data and telling us about our risk using their numbers.
But, are institutions really on the same page with you? Let’s be honest. Anytime someone tells you how much risk YOU are taking, hang onto your wallet … literally.
How can an algorithm possibly compute a number for your perception of risk … like the risk of offending a street gang leader … when you have never actually insulted one?
This essay is not about how you should perceive your risk; rather, it’s all about why you should never accept institutional risk calculations … from government … from private enterprises … or from anyone … ever. You must always be an informed skeptic, because risk always involves YOUR money.
So, what does it take to be an informed skeptic of institutional risk calculations? First off, you should keep in mind that “risk” is simply information … information used to support making a decision.
Risk ALWAYS informs a decision.
So, you can’t really talk about risk without first talking about the decision it supports. Obviously, the decision is what is really important; risk is just information that helps you select a decision alternative. So, maybe we should begin by talking about decisions.
Decisions are something you deal with every day. And, decision making is actually pretty straightforward. As you know, it goes like this:
Decision: When presented with a collection of two or more alternatives, choose that alternative that you most prefer.
But, all decisions share some features that are worth pointing out.
Decision epochs (arrival times) are unpredictable. That is, you can’t really say for certain when you will next be required to make a decision.
All decisions are made at the present time. Think about it. You choose an alternative now, based on information you have collected before now, hoping that your choice will result in the best future consequence.
You can never avoid a decision. Proclaiming that you are “not going to choose” is obviously choosing the null alternative … which is always among the set of decision alternatives.
At least one of the alternatives has consequences that cannot be predicted with complete certainty. Again, think about it. If you know with complete certainty the consequence of every decision alternative, there is nothing to decide, since you already know for certain which alternative will give the best consequence. This is where risk enters the picture, and it is characteristic of all decisions.
Risk: The characterization of your uncertainty about the value of a particular decision alternative.
There are MANY definitions for risk floating around in the engineering, finance, and public policy communities. Often, these definitions are given without being explicitly connected to any decision scenario.
In principle, there is nothing wrong with such definitions so long as risk is implicitly tied to some decision alternative. The truth is, all credible risk definitions derive from the very intuitive definition we use above.
When someone says, “Cheating on your income taxes is risky,” you intuitively understand that there is a decision implicitly tied to this statement.
Clearly, there is a decision having two alternatives from which to choose: (1) file an honest income tax return, or (2) do not file an honest return.
Alternative (1) presents no risk since the value of submitting an honest return has a predictable outcome. On the other hand, choosing alternative (2) is risk encumbered since it carries the possibility of multiple outcomes. You might not be audited and get away clean. Or, you might get busted by the IRS … you can’t know for certain which outcome will occur at the time you choose alternative (2).
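The structure of this two-alternative decision can be sketched in a few lines of code. The probabilities and dollar amounts below are entirely made up for illustration (the essay gives no numbers), and summarizing an uncertain alternative by its expected value is only one possible calibration of its risk … different decision makers may summarize it differently.

```python
# Hypothetical numbers -- assumptions for illustration only,
# not drawn from any real audit statistics.
audit_probability = 0.05   # assumed chance the return gets audited
tax_saved = 10_000         # assumed dollars saved by cheating
penalty = 50_000           # assumed fine if caught

# Alternative (1): file an honest return.
# One certain outcome, so its value (relative to baseline) is known: 0.
value_honest = 0.0

# Alternative (2): file a dishonest return.
# Two possible outcomes, so its value is uncertain; here we summarize
# that uncertainty with an expected value -- just one way to calibrate it.
value_cheat = (1 - audit_probability) * tax_saved \
              + audit_probability * (-penalty)

print(value_honest)  # 0.0
print(value_cheat)   # 0.95 * 10000 - 0.05 * 50000 = 7000.0
```

Note that even if the expected value of cheating comes out positive, a decision maker who finds cheating morally reprehensible, or who cannot absorb a $50,000 loss, may still prefer alternative (1): the number alone does not settle the decision.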
We see immediately that “off the wall” questions like, “Which is riskier, cheating on your taxes or lying to your mother?” are a bit ridiculous because you must first concoct a hypothetical decision scenario where the available decision alternatives include both cheating on your taxes and lying to your mother.
Sometimes putting value on a decision alternative is challenging (but never impossible) … like lying to your mom. Whereas in the “tax cheating” decision scenario, the value of a decision alternative is obviously measured in dollars. In any situation, choosing from among decision alternatives will influence your wealth.
It is safe to say that a rational decision maker seeks to maximize their wealth. But, we cannot dismiss the fact that two wealth-maximizing decision makers, when facing exactly the same decision, might not choose the same alternative. How could this be? It comes down to how different decision makers calibrate risk.
The fact is, rational decision makers always choose the alternative having the most preferred risk. So, we intuitively calibrate risk all the time and translate these calibrations into a rank-ordering among the available alternatives … for the vast majority of decisions we face, risk calibration is second nature.
Only in rare circumstances do we feel compelled to step beyond our usual intuitive calibration … obviously, this is where things become more complicated. The compulsion to more carefully calibrate risk almost always arises when a decision involves lots of money (remember, under certain circumstances, lying to your mom might be VERY costly). And, what defines lots of money very much depends on the decision maker. For example, $100,000 is a lot of money to me.
But, having an additional $100,000 to Bill Gates is about like me having an additional $4. Clearly, Bill and I calibrate risk differently.
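This intuition … that an extra dollar matters less the wealthier you are … is the classic idea of diminishing (for instance, logarithmic) utility of wealth. A minimal sketch, using wealth figures that are purely my assumptions (neither mine nor Bill Gates’s actual net worth):

```python
import math

# Assumed wealth figures -- illustrative only.
my_wealth = 4_000_000            # suppose my net worth is $4 million
gates_wealth = 100_000_000_000   # suppose Gates's is $100 billion

def felt_gain(wealth: float, gain: float) -> float:
    """Change in log-utility: how big an extra 'gain' dollars feels
    to someone whose current wealth is 'wealth'."""
    return math.log(wealth + gain) - math.log(wealth)

# An extra $100,000 to the $100B decision maker ...
big_fish = felt_gain(gates_wealth, 100_000)
# ... feels about like an extra $4 to the $4M decision maker,
# because 100_000 / 100e9 == 4 / 4e6 (both one millionth of wealth).
small_fish = felt_gain(my_wealth, 4)

print(big_fish, small_fish)  # nearly identical values
```

Under this assumed model, the two “felt gains” come out essentially equal, which is exactly why Bill and I would calibrate the risk of the same $100,000 bet very differently.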
But, overall wealth is not the only thing driving a decision maker’s calibration of risk. Returning to the tax-cheating decision scenario, suppose two wealth-maximizers face exactly the same decision scenario. One decision maker feels that any type of cheating is morally reprehensible, while the other decision maker believes tax-cheating is perfectly acceptable personal behavior. You would, therefore, expect them to calibrate risk differently.
All this said, it is now quite obvious that when an agent (private or government) steps forward to calibrate risk on your behalf, you should definitely hold onto your wallet. An experienced skeptic immediately understands that the agent frames decisions, decision alternatives, and the calibration of alternative risk in a manner that may not reflect your best interests.
In Part 2 (our next installment in this series of Beyond Harm posts on risk) we will consider naive risk … a ubiquitous and biased (not in your favor, by the way) method of calibrating risk. We will explain why you don’t need to be a math whiz to see where the paint is thin on naive risk calibration methods and why its bias is costing you serious money.
Armed only with your decision making experience, intuition, and common sense skepticism, you can know far more about risk than any risk analyst can tell you.