I recently attended the International Conference on System Reliability and Safety … a forum for presenting and discussing emerging research focused on the subject matter of the conference title. Promoting Probabilistic Risk Assessment (PRA) was a major theme in both Keynote Speeches and research presentations. For example, one Keynote address abstract read:
“Our next-generation leaders must begin to think more creatively, using risk-informed solutions to ensure safe, resilient, sustainable, and socially responsible technological advancements. It is time-critical to focus on risk-informed analysis of advanced nuclear energy systems to expedite their safe and commercially viable deployment. Probabilistic Risk Assessment has been utilized for nuclear power systems to consider potential uncertainties and support risk-informed design to balance safety and cost, risk-informed licensing to assure public safety, risk-informed construction to reduce unnecessary delays, and risk-informed operation and maintenance to enhance operational flexibility and profitability while maintaining safety. This talk highlights ongoing research studies that focus on Probabilistic Risk Assessment (PRA) methodology development to enable risk-informed analysis of advanced nuclear energy systems.”
This keynote speech abstract reflects the playbook of the Nuclear Energy Institute (NEI), the nuclear industry's trade association. But the truth is this:
PRA computes a single (always optimistically biased) number predicting the long-run proportion of operational anomalies that will go on to cause a nuclear reactor core meltdown. Then, multiplying the long-run rate of occurrence of operational anomalies by this proportion gives what the NRC calls core damage frequency.
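The arithmetic behind that computation is just multiplication. A minimal sketch, with both inputs invented purely for illustration (neither comes from any actual PRA or NRC figure):

```python
# Hypothetical illustration of the core damage frequency arithmetic.
# Both numbers below are assumptions made up for clarity, not real data.

anomaly_rate = 1.0                # operational anomalies per reactor-year (assumed)
p_meltdown_given_anomaly = 1e-5   # PRA's predicted long-run proportion of
                                  # anomalies that escalate to core damage (assumed)

# "Core damage frequency" is simply the product of the two.
core_damage_frequency = anomaly_rate * p_meltdown_given_anomaly
print(core_damage_frequency)  # 1e-05 core damage events per reactor-year
```

The entire argument that follows is about why that second factor, the long-run proportion, cannot be trusted.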
You don’t need a Ph.D. in Nuclear Science to see through the Keynote Speaker’s NEI-inspired hyperbole. Obviously, PRA is neither probabilistic nor an assessment of risk … i.e., “proportion” is never synonymous with “uncertainty”, and we’re all dead in the long run. A simple thought exercise will convince you that PRA can only produce uninformative numbers that are almost always optimistically biased.
Suppose you are asked to predict the long-run proportion of golfers who will ever score a hole-in-one. That is, divide the number of golfers who ever score a hole-in-one by the total number of people to ever play golf. Of course, this computation is purely hypothetical, because you can’t really know how many people will ever play golf, let alone how many of those will ever score a hole-in-one. You can, at best, provide some estimate of this proportion based on your experience. A bit of careful thought reveals some issues that make creating a useful proportion-estimate tricky. First of all, since all experience is historical, your estimate is necessarily grounded in available historical information … clearly, because you are not clairvoyant. The estimate is unable to capture future golf club design innovations or future changes to the rules of golf.
So, for example, if the USGA (United States Golf Association) at some future time arbitrarily decides to reduce the cup diameter by half, your present day prediction of long-run proportion of hole-in-one golfers will be optimistically high. And, even if you anticipate the possibility of some arbitrary rule change, there is no particular historical experience to calibrate the likelihood of this rule change ever occurring.
The problem with predicting long-run proportions and rates is clear: Long-run predictions assume a convergence of information … i.e., looking beyond your experience, there can be no possibility of any future event that you have not anticipated.
But, in reality the long-run can be a VERY long time. And, if there remains any possibility of unanticipated events that might impact the long-run proportion of hole-in-one golfers, your prediction will suffer because of your lack of clairvoyance.
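The hole-in-one thought experiment can be sketched as a tiny simulation. Both probabilities below are invented for illustration, and the "rule change" stands in for any unanticipated future event:

```python
import random

random.seed(42)

# Hypothetical probabilities -- both invented for illustration only.
p_ace_historical = 0.12  # chance a golfer ever aces under today's rules (assumed)
p_ace_small_cup = 0.03   # chance after an unanticipated cup-size change (assumed)

# An estimate built solely from historical experience: golfers who
# played before the rule change.
history = [random.random() < p_ace_historical for _ in range(100_000)]
estimate = sum(history) / len(history)

# The actual future: the USGA halves the cup diameter, so future golfers
# ace at a lower rate -- something no historical data could have revealed.
future = [random.random() < p_ace_small_cup for _ in range(100_000)]
actual = sum(future) / len(future)

print(f"historical estimate: {estimate:.3f}")
print(f"actual future rate:  {actual:.3f}")
```

However large the historical sample, the estimate lands near the old rate and overstates the post-change rate … the estimate is optimistically biased, not because the arithmetic is wrong, but because the data cannot contain the unanticipated event.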
The Keynote seemed to overlook these limitations. It admonished decision makers to wise up and expedite licensing of advanced nuclear systems by employing PRA, while choosing to assume that advanced nuclear systems will never face an unanticipated accident scenario … even though these technologies are brand new.
This is a ridiculously bold assumption. Incident investigations, and what we have learned from complex engineering systems, tell us that industrial accidents are often caused by previously unanticipated operational scenarios. And, since unanticipated accident scenarios cannot possibly be included in any long-run proportion predictions (because they are beyond our present imagination), PRA is always an optimistically biased computation because there are no clairvoyants.
It is not simply the optimistic bias of PRA’s core damage frequency calculation that is troubling. Of far more concern is why anyone would believe that core damage frequency (a mathematical constant derived from impractical assumptions) is a useful metric for nuclear safety regulation.
Again, one does not need a nuclear engineering degree to understand that there can be at most one core damage event for any given reactor … a meltdown ends the reactor’s life. So, in practice, there is really no such thing as a core damage “frequency”.
Rather, core damage frequency is a mathematical abstraction in which statisticians imagine an infinite supply of identical, fictitious copies of a specific nuclear reactor. A decomposition of the assumption looks like this:
One of these copies is placed in service where it operates a random time until it experiences a meltdown.
It is then immediately replaced by one of the identical reactors, which in turn operates a different random time until meltdown … and so on, forever.
All reactors are assumed to face identical stochastically stationary operating environments.
Core damage frequency is the hypothetical long-run number of meltdowns per year (after many, many meltdowns), where failed reactors are replaced like lightbulbs.
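The lightbulb-replacement abstraction above is what statisticians call a renewal process, and it can be sketched in a few lines. The mean time to meltdown and the exponential lifetime model below are invented assumptions for illustration, not figures from any real PRA:

```python
import random

random.seed(0)

# A toy renewal-process sketch of "core damage frequency": an endless
# supply of identical reactors, each replaced like a lightbulb the moment
# it melts down. The lifetime model is a hypothetical assumption.
MEAN_YEARS_TO_MELTDOWN = 10_000.0  # invented; assumed exponential lifetimes
N_MELTDOWNS = 100_000              # "after many, many meltdowns"

# Total reactor-years elapsed across all the fictitious replacements.
total_years = sum(
    random.expovariate(1.0 / MEAN_YEARS_TO_MELTDOWN)
    for _ in range(N_MELTDOWNS)
)

# Long-run meltdowns per reactor-year -- this is the abstraction the
# "core damage frequency" label refers to.
cdf = N_MELTDOWNS / total_years
print(f"{cdf:.6f}")  # converges toward 1 / 10000 = 0.0001
```

Note what the simulation requires to converge: an infinite supply of identical reactors facing identical, stochastically stationary operating environments … exactly the assumptions the list above spells out, and exactly what no real reactor fleet provides.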
Pretty far-fetched scenario, huh?
So why would one ever wish to study core damage frequency ... an abstraction that is so distant from reality and says nothing about the probabilistic likelihood (i.e., risk) of an imminent reactor meltdown? Why not simply estimate the risk of an imminent meltdown, instead? Good question.
The answer is simple: Because there is never enough engineering information to accurately estimate the probabilistic risk of an imminent meltdown. So, proponents of PRA seem to be willing to accept an optimistically biased core damage frequency estimate as satisfactory evidence of reactor safety.
Why do Congress, the Department of Energy (DOE), and the NRC buy into PRA? The answer is that government is responding to the special interest lobbying of the U.S. commercial nuclear industry. PRA offers industry an opportunity to sidestep the NRC general design criteria and its prescriptive design basis rules that industry believes are too expensive and unnecessarily conservative.
In other words, the commercial nuclear industry wants to be allowed to deploy advanced reactor designs while taking more risk, and garnering greater profits, than prescriptive regulations would allow (i.e., lowering their costs by passing greater accident risk to the general public). And PRA is the analytical instrument used to make the case for greater industry risk taking. Be mindful that, in the United States, the costs of nuclear accidents are backstopped by the federal government through the Price-Anderson Nuclear Industries Indemnity Act … a topic for a future post. Consequently, the U.S. commercial nuclear industry has invested billions of dollars in marketing, lobbying, and political campaign contributions in its efforts to groom government and public opinion to accept more safety risk.