Statistically assessing the risks of commercial nuclear energy

As we approach the 5th anniversary of the Fukushima emergency and the 30th anniversary of the Chernobyl disaster, Spencer Wheatley, Prof Benjamin Sovacool and Prof Didier Sornette argue that the risks of another major nuclear accident are much greater than the industry believes.
 

Article from SGR Newsletter no.44; online publication: 7 March 2016
 

 

We recently performed a statistical study of the risk of accidents and incidents (events) occurring in nuclear power plants across the world. [1] [2] Here we discuss some of the findings of this study, along with references it contains, as they pertain to current and near-term risks.
 

Gathering nuclear accident data

The accident at Fukushima in 2011 – which is expected to cost at least 170 billion US dollars – brought the safety of nuclear power generation back to public attention. Unsatisfied with the perceived overly optimistic risk assessments provided by the industry, both academia and the media attempted to provide assessments of their own. A main obstacle in such studies is that the industry regulator (the International Atomic Energy Agency, IAEA) does not publish data on historical accidents.

Further, when the IAEA does publish a measure of the size of an event, it does so with the crude International Nuclear Event Scale (INES), a discrete seven-point scale. To overcome this, we scoured the academic literature, news reports and industry publications to compile a dataset of 184 events (from 1950 to 2014), with severity defined as the total resulting loss in inflation-adjusted US dollars. This measure enables the holistic comparison of a variety of different types of events (a simple sketch of the inflation adjustment is given below). To enable robust future studies of the risk of nuclear power, the dataset has been published online, and the public is encouraged to review it and recommend improvements. [3]
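
As an aside on how such a common severity measure can be constructed, the sketch below (in Python, with placeholder price-index values rather than the study's own data) shows the basic inflation adjustment: a nominal loss is rescaled to a common reference year by the ratio of price indices.

```python
# Minimal sketch of the inflation adjustment behind the severity measure:
# a nominal loss from a given year is rescaled to a common reference year
# using a price index. The CPI values are placeholders, not the study's data.
CPI = {1986: 109.6, 2011: 224.9, 2013: 233.0}   # illustrative US CPI levels

def to_reference_usd(nominal_loss: float, year: int, ref_year: int = 2013) -> float:
    """Convert a nominal-dollar loss into reference-year dollars."""
    return nominal_loss * CPI[ref_year] / CPI[year]

# e.g. a hypothetical loss of 15,000 million USD reported in 1986 dollars
print(f"{to_reference_usd(15_000, 1986):,.0f} million 2013 USD")
```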
 

How likely is another Chernobyl or Fukushima?

We performed a statistical analysis of these data (namely the events with severity in excess of 20 million US dollars), which is summarized in Figure 1 (see reference 1 for the full account). The left panel of the figure concerns the frequency of events per reactor per year. For this, the observed rate of events was calculated running both backwards and forwards from the Chernobyl accident in 1986. The main observation here is that the frequency of accidents dropped substantially after Chernobyl and has remained relatively constant since. [4] This drop was likely due to improvements in both technology and practices. The right panel of the figure concerns the severity of events. More specifically, the events with losses in excess of 20 million US dollars (USD) before and after the accident at Three Mile Island (TMI) in 1979 are plotted according to their complementary cumulative distribution function (CCDF) – the function giving the probability that an event exceeds a given size.
 

Download pdf of Figure 1 [0.1MB]

Figure 1. Left panel: the rate of events per reactor per year, where the estimates were taken running forward and backward from Chernobyl and are bounded by one Poisson standard error. Right panel: the damage/consequences of events in the pre- and post-Three Mile Island periods (light and dark, respectively), plotted according to their complementary cumulative distribution functions (CCDF).
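
As a rough illustration of the two quantities shown in Figure 1, the following sketch (in Python, with made-up numbers rather than the study's dataset) estimates a Poisson event rate per reactor-year with its standard error, and builds an empirical CCDF from a list of loss values.

```python
import numpy as np

# Hypothetical inputs -- not the dataset analysed in the study.
n_events = 60            # events above the 20 million USD threshold in a period
reactor_years = 10_000   # total reactor-years of operation in that period

# Poisson rate per reactor-year and its standard error (sqrt(count)/exposure),
# as used to bound the estimates in the left panel.
rate = n_events / reactor_years
rate_se = np.sqrt(n_events) / reactor_years
print(f"events per reactor-year: {rate:.4f} +/- {rate_se:.4f}")

# Empirical CCDF, as in the right panel: the fraction of events at least as
# large as each observed loss (losses here are illustrative, in millions USD).
losses = np.array([25, 40, 90, 300, 1_200, 170_000], dtype=float)
x = np.sort(losses)
ccdf = 1.0 - (np.arange(1, len(x) + 1) - 1) / len(x)   # P(loss >= x)
for xi, ci in zip(x, ccdf):
    print(f"P(loss >= {xi:>9,.0f} MUSD) = {ci:.2f}")
```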
 

The main observation is that, since TMI, moderate to large events have been less common. This is good news, but not an adequate improvement: the post-TMI distribution is so heavy-tailed that the expected severity is mathematically infinite. This is reflected in the fact that the severity of Fukushima alone exceeds the sum of all the remaining events. This point cannot be emphasized enough, as it implies that, if one wants to reduce the total risk level, one must effectively exclude the possibility of the most extreme events. Put simply, we need to move to a situation where major nuclear accidents are virtually impossible.
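
To see what an infinite expected severity means in practice, the following sketch simulates losses from a Pareto (power-law) distribution with a tail exponent below one; the parameter values are purely illustrative, not the study's fitted ones. The sample mean never settles down as more events are drawn, because ever-larger extremes keep dominating the total.

```python
import numpy as np

rng = np.random.default_rng(42)

# A Pareto (power-law) severity with tail exponent alpha <= 1 has an infinite
# theoretical mean, so the sample mean never stabilises: it keeps climbing as
# more events are observed, driven by ever-larger extremes. Values below are
# purely illustrative.
alpha, x_min = 0.6, 20.0                      # tail exponent, threshold (MUSD)
u = rng.random(10**6)
losses = x_min * (1.0 - u) ** (-1.0 / alpha)  # inverse-transform Pareto samples

for n in (10**2, 10**3, 10**4, 10**5, 10**6):
    print(f"n = {n:>9,}: running sample mean = {losses[:n].mean():>12,.0f} MUSD, "
          f"largest event so far = {losses[:n].max():>14,.0f} MUSD")
```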

Given the statistical analysis of the historical data, we provide a characterization of the current (and near-term) risk level that is valid as long as the operational fleet of reactors is well represented by the current fleet. Our first result is that one should expect about one event per year causing damage in excess of 20 million USD. Next, to compute expected annual losses, we must assume a finite maximum loss. If we accept that the Fukushima event represents the largest possible damage, then the mean yearly loss is approximately 1.5 billion USD with a standard error of 8 billion USD. This brackets the construction cost of a large nuclear plant, suggesting that, on average, roughly the value of one full nuclear power plant could be lost each year.
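
A minimal Monte Carlo sketch of this kind of calculation is given below, using a Poisson number of events per year and a Pareto severity capped at an assumed maximum loss. The rate, tail exponent and cap are placeholders chosen to be of the same order as the figures quoted above; they are not the fitted model from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo sketch of the expected yearly loss when the maximum possible
# damage is capped. Parameters are illustrative stand-ins, not the study's
# fitted model.
rate_per_year = 1.0        # ~one event above 20 million USD per year
alpha, x_min = 0.6, 20.0   # Pareto tail exponent and threshold (MUSD)
max_loss = 170_000.0       # assumed cap, roughly a Fukushima-scale damage (MUSD)

def simulate_yearly_losses(n_years: int) -> np.ndarray:
    """Total simulated damage per year, in millions of USD."""
    totals = np.zeros(n_years)
    for i, k in enumerate(rng.poisson(rate_per_year, size=n_years)):
        u = rng.random(k)
        severities = x_min * (1.0 - u) ** (-1.0 / alpha)
        totals[i] = np.minimum(severities, max_loss).sum()
    return totals

sims = simulate_yearly_losses(100_000)
print(f"mean yearly loss ~ {sims.mean():,.0f} MUSD, "
      f"spread (std. dev.) ~ {sims.std():,.0f} MUSD")
```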

If we are less optimistic and assume that the largest possible damage is about 10 times the estimated damage of Fukushima, then the average yearly loss is about 5.5 billion USD with a very large dispersion of 55 billion USD. Concerning the probability of the most extreme accidents, we have computed the 50% probability return periods for such events. [5] On this basis, we estimate that there is at least a 50% probability of a Chernobyl-type event (causing about 32 billion USD in damage) happening in the next 30-60 years, and at least a 50% probability of a Fukushima-type event (170 billion USD) happening in the next 65-150 years. [6] With a standard error of about 50%, these estimates are highly uncertain, but what is certain is that they are much larger than industry estimates would suggest.
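
For concreteness, the sketch below implements the 50% probability return period defined in note [5]: assuming events above the 20 million USD threshold arrive as a Poisson process and their severities follow a Pareto tail, the return period for losses exceeding a given size x is ln(2) divided by the rate of such exceedances. The numerical values are again placeholders, not the study's fitted parameters, so the output should not be read as reproducing the estimates quoted above.

```python
import math

# The 50% probability return period: the horizon T such that the chance of at
# least one event exceeding size x within T years is one half. Assuming events
# above the 20 million USD threshold arrive as a Poisson process with rate lam
# per year and severities have a Pareto tail,
#   P(at least one event > x in T years) = 1 - exp(-lam * p(x) * T) = 0.5
#   =>  T = ln(2) / (lam * p(x)),   where  p(x) = (x / x_min) ** (-alpha).
# The parameter values are placeholders, not the study's fitted model.
lam, alpha, x_min = 1.0, 0.6, 20.0   # events/year, tail exponent, threshold (MUSD)

def return_period_years(x_musd: float) -> float:
    tail_prob = (x_musd / x_min) ** (-alpha)
    return math.log(2.0) / (lam * tail_prob)

for label, x in (("Chernobyl-scale", 32_000.0), ("Fukushima-scale", 170_000.0)):
    print(f"50% return period for {label} losses (> {x:,.0f} MUSD): "
          f"~{return_period_years(x):,.0f} years")
```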
 

Reducing the risks

Given the high risk level, and the insufficient effectiveness of past improvements, changes that effectively truncate the risk of extreme events are necessary. Responses following the Fukushima event may have some impact, but this remains to be seen. Further, the implementation of passive safety systems is certainly a step in the right direction. However, given the current risk level, the importance of low-carbon energy sources, and the fact that we are already committed to the stewardship of five decades’ worth of slowly decaying nuclear waste, it is clear that a significantly increased effort is needed to improve the state of nuclear technology. [7] We also strongly suggest that the industry publish a public dataset of nuclear accidents, using a variety of precise and objective scientific measures such as radiation released and property damage caused. This would enable the best possible assessment of the risk, and better informed and more confident decision-making about energy policy.
 

Spencer Wheatley is a PhD student, and Prof. Didier Sornette is his supervisor and holds the Chair of Entrepreneurial Risks at the Department of Management, Technology and Economics, ETH Zurich, Switzerland. Prof. Benjamin K. Sovacool is Professor of Business and Social Sciences at Aarhus University, Denmark, as well as Professor of Energy Policy at the Science Policy Research Unit (SPRU) at the University of Sussex, United Kingdom.
 

Notes and references



[1]. Wheatley S, Sovacool B, Sornette D (2015). Of Disasters and Dragon Kings: A Statistical Analysis of Nuclear Power Incidents and Accidents. arXiv preprint arXiv:1504.02380

[2]. Wheatley S, Sovacool B, Sornette D (2016). Of Disasters and Dragon Kings: A Statistical Analysis of Nuclear Power Incidents and Accidents. Risk Analysis (in press).

[4]. Not visible in the figure is that the rate also dropped from the 1960s until Chernobyl. The high observed frequency in the 1960s and 1970s, when relatively few reactors were operating, has little influence on the cumulative estimate.

[5]. The time period such that the probability of observing at least one event in excess of the given size is 0.5.

[6]. The range of estimates is given for parameter values ranging from moderately conservative to optimistic.

[7]. Sornette D (2015). A civil super-Apollo project in nuclear research for a safer and prosperous world. Energy Research and Social Science, vol.8, pp.60-65.
