Nature's nature is not necessarily nice
risk, technology, the limits of knowledge, and protections against challenges of nature
Reports that say something hasn’t happened are always interesting to me because as we know, there are known knowns: there are things we know we know. We also know there are known unknowns: that is to say we know there are some things [we know] we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult one.1
At its most basic level, risk brings the possibility that one or more scenarios from our imagination, triggered by a person (even oneself), a machine, or nature, may play out to something like a catastrophic conclusion.
Names for risk (but it is just probability)
As a teenager I remember the times my brother and I would sneak out on the dock at my great aunt's house on dark, moonless nights. Once there, we would dive into the black void of water we of course knew well to be deep and clear of obstructions, having dived and swum in it many times in daylight. But standing there at night, just before diving into pure blackness, that tingling feeling in the pit of my stomach said BEWARE. It seems the unknown possibilities held within the void of the water's blackness, regardless of the certain prior knowledge we held, brought risk to the game; a fear of disaster was there.
A significant effort expended by humankind over the millennia has been to overcome, or at least escape from, the various dangers nature presents by creating technological systems at various levels of complexity. Winter cold requires shelter and energy; animals and insects attack crops; drought and unseasonal rain destroy harvests; getting goods across mountains, valleys, and rivers requires transportation infrastructure; medicines must be developed to fight disease and mend broken limbs. Unfortunately, such technological systems inevitably trigger new scenarios that, although intended to be less frequent and less deadly than those they are meant to overcome, can nevertheless cause harm.
Scenarios that come from technological systems developed by humans, even though they may result in the same kinds of harm as those posed by nature, hold a different meaning than those from nature alone. Nature's scenarios, such as floods, earthquakes, tornadoes, volcanic eruptions, and diseases, carry the meaning of unavoidable and uncontrollable 'acts of God', whereas those associated with technological systems carry a meaning closer to negligent injury, which brings with it the notion of liability. An example of the way meanings attached to harmful scenarios are connected among nature, technological systems, and liability is the 1947 case decided by Judge Learned Hand.2 Although the scenario in Hand's case did not result in physical injury to humans, financial loss occurred, and the principles about liability laid down in the case now apply to many harmful scenarios, including human injuries.
Hand's case involved a scenario triggered by nature: a north wind and adverse tidal forces broke the barge 'Anna C' loose and, after she struck a tanker's propeller, she sank. Nature triggered the scenario that progressed to the point that the Anna C sank with its cargo, but the court case centered on the question of responsibility for the damage: who should pay (who is liable) for the Anna C's ruined cargo and her recovery. In this case, nature caused the loss but escaped blameless, a subtle demonstration of how different meanings are assigned to scenarios involving technological systems and scenarios involving nature alone. A final note about Hand's ruling is that he introduced the notion of probability, perhaps for the first time in this setting, to work out the potential for liability as a calculation:
Since there are occasions when every vessel will break from her moorings, and since, if she does, she becomes a menace to those about her; the owner's duty, as in other similar situations, to provide against resulting injuries is a function of three variables: (1) The probability that she will break away; (2) the gravity of the resulting injury, if she does; (3) the burden of adequate precautions. Possibly it serves to bring this notion into relief to state it in algebraic terms: if the probability be called P; the injury, L; and the burden, B; liability depends upon whether B is less than L multiplied by P: i. e., whether B < PL. Applied to the situation at bar, the likelihood that a barge will break from her fasts and the damage she will do, vary with the place and time; for example, if a storm threatens, the danger is greater; so it is, if she is in a crowded harbor where moored barges are constantly being shifted about.3
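Hand's inequality can be worked out directly. The short sketch below is only an illustration, not drawn from the case record: the function name and the dollar and probability figures are hypothetical, chosen simply to show how the burden B is weighed against the expected loss PL.

```python
# A minimal sketch of Judge Hand's B < PL comparison, using hypothetical numbers.
def hand_test(burden_b: float, probability_p: float, injury_l: float) -> bool:
    """Return True when the burden of the precaution is less than the expected
    loss (B < PL), the condition under which Hand's formula assigns liability
    to the party who skipped the precaution."""
    return burden_b < probability_p * injury_l

# Hypothetical figures: keeping a bargee aboard costs $50 a day (B), the chance
# of the barge breaking loose on a stormy day is 2% (P), and the damage if she
# does is $25,000 (L). Expected loss PL = $500, so liability would attach.
print(hand_test(burden_b=50.0, probability_p=0.02, injury_l=25_000.0))  # True
```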
Naming probability
At my great aunt's house, in addition to the dock on the lake, she had an old clock with two dials made by the Ithaca Calendar Clock Company. The top dial had the minute and hour hands; the lower dial had one hand that pointed to the date of the month. Two cylinders rolled in small windows showing the month and the day of the week, and the clock chimed the hour and half hour. As children, we were never allowed to touch the clock, but it simply fascinated us; we wanted to know about all the gears and workings that made it possible for the clock to know all that information. My sister and her cousin once tried to stay up all night to see when the day changed. Epistemic uncertainty is like this: we knew the clock worked to keep all the information right, we could observe that; but we didn't know how all the gears, springs, escapements, and cams were connected and pushed on each other to make it work. Although a trivial example, this partial or complete unknowing of how something works, of the details of the inner physics controlling its processes (from a child's point of view), is the subject of epistemology (the unknown unknowns). As Tribus points out in his book 'Rational Descriptions, Decisions and Designs':
No design is ever carried forward with full knowledge of the facts either as regards the uses to which the product will be put or the technical properties of the materials and subsystems which will be incorporated in it. For this reason, a designer must be able to cope with uncertainty.4
The performance of individual pieces of equipment is often evaluated using lifetime testing: several units are started up and run in a controlled test environment until they all fail. By noting the time of each failure, the on-average performance in service can be estimated. The variability that comes out of the data collected in lifetime testing is another kind of uncertainty, called aleatoric, although using the word this way is particular to engineering reliability. That is, aleatoric uncertainty is normally associated with a process more akin to rolling a fair die, where the probability can be known from first principles.
The reason equipment must be tested to estimate failure time is that the time at which a particular part wears out cannot be obtained from first principles or the physical sciences with exactitude; the inner workings of the gears of nature are so complex that the engineer cannot include them in the design calculation. This 'aleatoric' uncertainty is much like Donald Rumsfeld's known unknowns. That is, by learning something about how things behave in an experiment repeated many times, we can get an idea of how we can expect them to behave in the future. Regardless of whether uncertainty is epistemic or aleatoric, it leaves the possibility that a scenario may end in harm no matter how carefully the engineer designs protections.
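To make the lifetime-testing idea concrete, here is a minimal sketch under a stated assumption: the hypothetical parts fail with exponentially distributed lifetimes, a common simplification in reliability work but by no means a universal truth. Running a batch of units to failure and averaging the observed times gives an estimate of the mean time to failure; the scatter in those times is the aleatoric part of the uncertainty.

```python
# A minimal sketch of a lifetime test, assuming (hypothetically) exponentially
# distributed failure times. The scatter across units is aleatoric uncertainty;
# the estimate of mean time to failure sharpens only as more units are tested.
import random

random.seed(1)
TRUE_MTTF_HOURS = 5_000.0   # unknown to the engineer; used here only to simulate
N_UNITS = 30                # units run to failure in the test rig

failure_times = [random.expovariate(1.0 / TRUE_MTTF_HOURS) for _ in range(N_UNITS)]
estimated_mttf = sum(failure_times) / N_UNITS

print(f"Estimated mean time to failure: {estimated_mttf:,.0f} h "
      f"from {N_UNITS} units (true value {TRUE_MTTF_HOURS:,.0f} h)")
```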
Engineering principles of protection are applied to scenarios arising from technological systems that are imagined to end in harm, even catastrophe, in an attempt to terminate those scenarios before they can progress that far. Engineers want protection to be certain and are loath to accept the possibility of design failures that cause loss of life or endanger health in some way, let alone to assign a numerical probability to failures of protection; but the concept of uncertainty emerges in the development of any technological system design of reasonable complexity.
Protections and regulations
Technological systems designed to overcome nature's challenges have protective systems put in place to prevent harms by terminating imagined scenarios before they reach a harmful end. In his article 'Liability for Harm versus Regulation of Safety', Shavell recommends that social welfare be measured ...
... to equal the benefits parties derive from engaging in their activities, less the sum of precautions, the harms done, and the administrative expenses associated with the means of social control.5
In the same article, Shavell makes an important point regarding the courts and regulation: the amount of a party's assets must be weighed against the level of harm the party may cause. Where potential harms exceed assets, regulation would be preferable to the courts, which would likely assign liability following the guidance in Judge Hand's decision. The viability of technological systems is therefore connected to protections, liability for harms, regulation and, in a very complex way, to probability.
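Shavell's welfare measure, and his point about limited assets, can be put as a back-of-the-envelope calculation. The sketch below uses entirely made-up numbers: welfare is taken as benefits less precaution costs, expected harm, and administrative expenses, and the harm a party actually internalizes under a liability rule is capped at its assets, which is why regulation becomes attractive when potential harms dwarf what a party could ever be made to pay.

```python
# A back-of-the-envelope sketch of Shavell's measure, with hypothetical numbers.
def social_welfare(benefits, precaution_cost, expected_harm, admin_cost):
    """Benefits of the activity less precaution costs, harms done, and the
    administrative expenses of the means of social control."""
    return benefits - precaution_cost - expected_harm - admin_cost

def internalized_harm(expected_harm, assets):
    """Under a liability rule a party can only be made to pay up to its assets,
    so that is all the harm it internalizes when deciding on precautions."""
    return min(expected_harm, assets)

expected_harm = 2_000_000   # hypothetical expected harm from the activity
assets = 500_000            # the party's total assets

print(internalized_harm(expected_harm, assets))  # 500000: only a quarter of the
# expected harm enters the owner's own calculation, so the owner under-invests
# in protection and ex ante regulation looks preferable to after-the-fact liability.
print(social_welfare(benefits=3_000_000, precaution_cost=250_000,
                     expected_harm=expected_harm, admin_cost=100_000))  # 650000
```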
Regulations flow from laws enacted by politicians; they are the mechanism that defines and ensures the required efficacy of protection through inspection and enforcement. Regulations prescribe protections regardless of whether harm ever occurs, and so they reduce profit margins for the owners and investors of the applicable technological system. For example, most fire protection systems in hotels are never needed; nevertheless, the owners and investors of the hotel must pay for the protections, as well as inspection fees, over the hotel's lifetime.
It is important to understand the philosophical backdrop within which technological systems operate. The owner-operator inevitably acts as an egoist who must maximize profit; because of their cost, protective systems, unless prescribed by regulation, are only installed when they act to maintain or increase profit margin. Because they have different objectives, a tension arises between the regulator, who enforces laws in the interest of anyone in the public who could be harmed, and the owner/investor, who acts in the interest of profit maximization, that is, cost reduction, revenue enhancement, or perhaps greater production volume.
The citizen’s responsibility
In a democracy with a capitalistic economy, citizens bear a great responsibility to assess the risk of technologies deployed among them and to elect representatives to Congress who will create laws requiring adequate protections. Only by knowing the meaning of risk, and the methods by which risk can be assessed, can citizens make good choices of the elected officials who will create laws, manage the regulations that flow from them, and properly adjudicate enforcement.
As stated above, citizens may elect representatives who create laws and regulations that the citizens judge acceptable but that, on the other hand, add significant cost to the products created by a particular business owner. The owners and investors in a business are profit maximizers, which means they will continually seek to increase the profit margin they realize on the goods and services they provide.
Exporting risk — economics of protection
For a variety of complex reasons, different countries impose different levels of risk mitigation in the form of more or less regulatory oversight, and an interesting business model built around 'risk shifting' emerges from such differentials in protection requirements. The implication of risk shifting is that the differential regulatory structures across a global marketplace can be exploited to increase profit margin. What this means is that an 'exploited country', one with a lax regulatory structure, can create goods and services at lower cost than they can be had in the 'exploiting country'; the exploiting country's citizens thus get access to goods and services at a lower cost through risk shifting. In addition, owners and investors can realize greater profit margins while, at the same time, producing goods and services at a lower price point than their competitors.
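The margin arithmetic behind risk shifting is simple enough to write down. The numbers below are invented purely for illustration: the same good, sold at the same price in the importing market, produced once under a strict protection regime and once under a lax one.

```python
# A hypothetical illustration of risk shifting: identical goods sold at the same
# price, produced under regimes with very different protection requirements.
SELLING_PRICE = 100.0

strict_cost = 60.0 + 25.0   # base production cost + mandated protections
lax_cost = 55.0 + 5.0       # cheaper base cost + far thinner protections

print(f"Margin under the strict regime: {SELLING_PRICE - strict_cost:.0f}")  # 15
print(f"Margin under the lax regime:    {SELLING_PRICE - lax_cost:.0f}")     # 40
# The difference is profit realized by shifting the risk, and the potential
# harms, onto the country with the weaker protections.
```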
This concept of risk shifting creates a deeper moral overlay on invoking protections than would exist in a particular country absent global engagements. All decisions are moral judgements, and decision-making about protections is of course included, regardless of where the harm is caused.
Let’s take a leap …
When jumping into the unknown, when the inner gearworks are not fully understood and some unpleasant scenarios can be imagined, it is important that the citizen be armed with a rational framework that can inform the best possible outcomes. There is a deep connection to the moral fiber of the setting in which protections are invoked. Assessing hazards, setting the political landscape, paying for protections, and keeping true to the setting and the protections required (no moving harms around as in a shell game) are the citizen's responsibility. It is a massive thing to undertake, but it must be taken on.
To read more of Ernie's writing and research, go to Ernie's newsletter or ORCiD.
Donald Rumsfeld, 2002. See “Rumsfeld Papers” for his context.
United States v. Carroll Towing Co., 159 F.2d 169 (2d Cir. 1947).
ibid.
Tribus, Myron. Rational Descriptions, Decisions and Designs: Pergamon Unified Engineering Series. Elsevier, 2013.
Shavell, Steven. "Liability for harm versus regulation of safety." The Journal of Legal Studies 13, no. 2 (1984): 357-374.