
Business-focused approach to molten salt reactors

by Rod Adams

I’ve been listening to an evangelical group of molten salt reactor enthusiasts for several years. Their pitch is attractive and they often make good arguments about the value of rethinking the light water reactor technology model, but most of the participants are unrealistic about the economic, material, technical, and regulatory barriers that their concepts must overcome before they can serve market needs.

Recently, I recognized that some companies interested in molten salt reactors have a better-than-expected chance of success. They are led by hard-nosed, experienced businessmen who balance entrepreneurial optimism with a firm grasp of commercial technology requirements and sound financial strategy.

One example is Terrestrial Energy, Inc. (TEI), a start-up company founded in 2013 and headquartered in Ontario, Canada. The officers and board of directors have the kind of heft and broad industry experience that reassures investors.

David LeBlanc, the chief technology officer and inventor of the firm’s basic technology, understands the need to take measured steps that take advantage of new ideas while using as much existing supply infrastructure as possible.

One of the key attractions of molten salt reactors over traditional water-cooled reactors is the ability to operate the radioactive portions of the system at atmospheric pressure. The fissionable material is dissolved in a chemical salt with a boiling point in the range of 1400 °C, so the fuel circulates as an atmospheric-pressure liquid with a substantial margin while operating at temperatures that can provide steam temperatures of 550–600 °C.

In contrast to reactors whose fuel is composed of solid oxide pellets sealed into corrosion-resistant cladding, molten salt reactors can be designed to allow fission product poisons to migrate out of the areas of high neutron flux, allowing a larger share of the neutrons to convert fertile materials into fissile isotopes and improve fuel economy.

The liquid fuel form allows a substantially higher burnup before reaching a condition where the core can no longer be used to produce heat; fuel pin swelling and cladding pressure are no longer operational concerns.

The integral molten salt reactor (IMSR) that LeBlanc has developed includes several key features that set it apart from some of the fanciful reactors that enthusiasts promise will extract 50–200 times more energy per unit mass of fuel using thorium “superfuel” than is possible using the conventional light water reactor fuel cycle.

One key feature is that TEI’s IMSR uses low enriched uranium. Here is the logical explanation for that choice, quoted from TEI’s web site:

Other MSR development programs, including the extensive original U.S. program from the 1950s to 1970s, are generally focused on two key objectives: i) to use thorium-based fuels, and; ii) to “breed” fuel in an MSR-Breeder reactor.

Terrestrial Energy intentionally avoids these two objectives, and their additional technical and regulatory complexities, for the following reasons. Thorium is not currently licensed as a fuel. Liquid thorium fuels are the nuclear fuel equivalent of wet wood. Wet wood cannot be lit with a match; it requires a large torch. That large torch must come in the form of, for example, highly enriched uranium (HEU). Such a torch has no regulatory precedent in civilian nuclear power.

Furthermore, the use of proposed thorium fuel with HEU additive leads to valid criticisms of the proposed reactor’s proliferation and commercial credentials. The thorium fuel cycle would require its own involved regulatory process to become licensed for use on a wide commercial basis. The liquid uranium fuel of an IMSR can be lit easily, it is dry tinder.

Another key design decision was based on LeBlanc’s desire to avoid the complications of repairing systems or components that have been contaminated by direct exposure to molten fuel salts. The reactor, primary salt pumps, and primary heat exchangers are sealed into a single tank. There are redundant components inside the sealed boundary; replacement, rather than repair, is the planned strategy.

Each reactor is designed to last for seven years of full power operation, but the reactor container has little in common with the thick-walled pressure vessels common in water-cooled reactors. The IMSR core is more like a single-use, replaceable fuel cartridge that is inserted into a shielded cavity designed into the power plant. A second cavity remains empty during initial startup; after the initial core has completed its cycle, a replacement core unit is placed in that adjacent cavity. Secondary coolant lines and power production are then switched over to the new unit. The original unit thus has seven years of cooling before being moved to long-term storage to make way for a third core unit.

Refueling operations will be similar to those currently conducted. Instead of lifting individual fuel bundles, the whole core will be removed as a single unit. Instead of putting used fuel into deep pools of water, the sealed core units will be placed into shielded, cooled cavities.

As a consequence of the molten salt core, the same basic design can be arranged to produce a variety of power levels without redesigning the fuel or changing the fuel manufacturing tooling. The initially planned product lineup will include three reactor sizes scaled to produce between 29 and 290 MWe.

Steam conditions available from a higher temperature reactor enable the use of compact, efficient superheated steam turbines instead of the larger saturated steam turbines more common in nuclear applications.
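
The efficiency advantage of those higher steam temperatures can be sketched with the ideal (Carnot) limit. This is an illustrative calculation only: the LWR comparison temperature and the heat sink temperature below are my assumptions, not TEI figures, and real turbine plants fall well short of the Carnot limit.

```python
# Illustrative Carnot-limit comparison; assumed temperatures, not TEI data.

def carnot_efficiency(t_hot_c: float, t_cold_c: float = 30.0) -> float:
    """Ideal (Carnot) efficiency for source/sink temperatures given in deg C."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

# Typical LWR saturated steam (~285 deg C, assumed) vs. molten salt
# superheated steam (~575 deg C, mid-range of the article's 550-600 deg C).
lwr = carnot_efficiency(285.0)
msr = carnot_efficiency(575.0)
print(f"LWR Carnot limit: {lwr:.1%}")
print(f"MSR Carnot limit: {msr:.1%}")
```

The roughly 46 percent versus 64 percent ideal limits explain why actual higher-temperature plants can reach thermal efficiencies in the mid-40s while water-cooled plants are limited to the low-to-mid 30s.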

TEI investigated several possible headquarters locations before selecting Ontario, Canada, as having the best combination of available expertise, sound manufacturing infrastructure, and a well-qualified nuclear regulator whose performance-based licensing system offers a quicker approval path for an innovative design than is available in the United States.

TEI has successfully passed through two phases of development and capital raising. Its second round of funding was significantly over-subscribed, attesting to the high level of interest in the technology and the recognized competence of the company’s management.

There is every reason to be skeptical about the chances of success for any new nuclear technology. Many readers here have heard dozens of similar stories and often refer gushing salespeople to Rickover’s document on paper reactors versus real reactors. LeBlanc and his team, however, appear to have done their homework before becoming actively public; their innovations seem well-informed, realistic, and well-timed.

It took me several meetings and a good bit of additional reading about both the company and the technology to reach the point at which I was willing to share its story with a moderate endorsement. I’m now confident that there is no risk to my reputation in saying that Terrestrial Energy, Inc. is a company with an intriguing plan that is worth a look and a listen.

A version of the above article first appeared in the September 11, 2014, issue of Fuel Cycle Week. It is republished here with permission.

Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

Surface storage of used nuclear fuel – safe, cost-effective, and flexible

by Rod Adams

In August 2014, the U.S. Nuclear Regulatory Commission approved NUREG-2157, Generic Environmental Impact Statement for Continued Storage of Spent Nuclear Fuel. That action was the end result of several years’ worth of detailed analysis of the known and uncertain impacts of storing used nuclear fuel on the earth’s surface in licensed and monitored facilities.

As summarized in section 8 of the document, the staff determined that the environmental impact under expected conditions is small and acceptable even for an indefinite period. The analysis even considered a complete societal breakdown and loss of institutional control; the staff found that such a situation would have an uncertain effect on the safety and security of used nuclear fuel, but judged it very unlikely that society will falter that much.

NUREG-2157 both eliminates the hold that was placed on issuing new or renewed nuclear facility licenses and provides the technical basis supporting a decision to stop working on a geologic repository. If storing used material on the surface is acceptably safe, environmentally sound, and cost-effective for the foreseeable future, it would be a waste of resources to attempt to develop a facility using today’s technology. It is likely that technology will improve in the future. It is inevitable that the material of interest will become easier to handle as the shorter-lived, more active components decay at a rate established by physical laws.

NRC Chairman Allison Macfarlane wrote the following perceptive statement in her comments about her vote on the rule:

In essence, the GEIS concludes that unavoidable adverse environmental impacts are “small” for the short-term, long-term, and indefinite time frames for storage of spent nuclear fuel. The proverbial “elephant in the room” is this: if the environmental impacts of storing waste indefinitely on the surface are essentially small, then is it necessary to have a deep geologic disposal option?

Almost exactly right! We should ask hard questions of those who maintain that “deep geologic disposal is necessary” because “a majority of the public, industry, academia, and regulators” say it is. Here are some questions worth asking:

  • Why do you think a mined deep geologic repository is required?
  • What makes it so important?
  • Where is the recorded vote on which you base your claim that it is the majority opinion?
  • If there was a vote, when was that vote taken?
  • Have there been any changes in circumstances that challenge the validity of that determination?
  • Should options besides a mined deep geologic repository be reconsidered?
  • How much will it cost each year to simply defer action into the indeterminate future?
  • From an accounting perspective, aren’t costs that are deferred far into the future worth less, not more, if they are recalculated into today’s dollars?

Those who have read Macfarlane’s full comment should recognize that she is not only the source of the “elephant in the room” statement above, but she is also the source of the assertions that the United States must continue pursuing a mined geologic repository because we have a “long-established responsibility to site a repository for the permanent disposal of spent nuclear fuel,” and she wants to make sure that the NRC’s determination that continued surface storage represents a small environmental impact for the indefinite future does not enable “avoiding this necessary task.”

Last week, I had the opportunity to ask Chairman Macfarlane if she thought that the NRC had a role in deciding U.S. policy on long-term nuclear waste storage. She explained that the only role for the NRC would be to review the license application submitted for any specific facility. The responsibility for planning and developing that facility and obtaining the funds necessary would be under the purview of a different agency.

I asked what the NRC’s role should be if no organization submits an application for a facility. She admitted that its only role in that case would be to continue monitoring existing facilities and approving license renewals or new licenses.

Congress can, and should, determine that the plan for nuclear waste for the indefinite future is to continue safely storing used material. It should remove the responsibility for permanent disposal of nuclear waste from the Department of Energy and put it into industry’s hands to solve. Of course, the industry will remain under the watchful eye of the already established federal regulator, using procedures and processes that are already in place and continually being refined. Industry should make use of existing products and services, continue improving those offerings, and consolidate facilities as that makes economic sense.

Macfarlane and I also agree about when we would begin to believe that the United States can site, license, build, and operate a mined deep geologic repository. As she said:

I will have confidence in the timing when a renewed national consensus emerges on a repository for spent nuclear fuel.

(Emphasis added.)

There is no reason to suspect that a sufficiently bulletproof consensus will ever exist. Recent history has proven that it takes just a handful of people elected or appointed into the right positions to derail even the best-laid plans made with strong support throughout the rest of the country.

Though Macfarlane seems concerned about the potential impact of a “loss of institutional control,” the controls required to ensure continued safety and environmental protection from used nuclear fuel are simple and easily implemented. As long as we do not believe that future generations will forget how to read, we can be sure enough that they will remember how to keep used nuclear fuel safely isolated.

Many people in Chairman Macfarlane’s generation—which is also my generation—have probably absorbed at least some of the many entertainment products depicting an inevitable future dystopia. Those fictional predictions might make for good reading or viewing, but they are as useful a decision tool as any other wild fiction. Even if such a fanciful dystopia becomes reality, used nuclear fuel will be low on the prioritized list of risks.

Macfarlane has expressed some concerns about the financial responsibility associated with continued storage of used nuclear fuel. Establishing bonds or other forms of continued financial surety is a common business practice. Radioactive materials are not uniquely hazardous or even uniquely long-lived compared to other elements and compounds in common industrial service. We have learned to live with them. We have proven that we know how to protect the public from any harm. There is no reason to expect that society will forget the lessons it has already learned.

A simple financial solution would be to have nuclear plant owners establish a used fuel fund that would be as isolated from their normal finances as their decommissioning funds. The experience that we have with the current Nuclear Waste Fund shows that a tiny fee on each unit of nuclear electricity will grow into a very sizable fund if undisturbed over time. We should stop stealing the capital accumulated by such a fee to pay for other continuing government expenses and we should not fritter it away by conducting geologic studies of the depths under any region that has the proven potential to produce politically powerful majority leaders. (Nearly every state in the union has that potential given the longevity of any proposed repository program.)
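
The arithmetic behind that claim is straightforward compound growth. In the sketch below, the 0.1-cent-per-kWh fee matches the historical Nuclear Waste Fund fee, while the plant size, capacity factor, and real rate of return are my own illustrative assumptions.

```python
# Back-of-the-envelope fund accumulation for one plant.
# Fee matches the historical 1 mill/kWh Nuclear Waste Fund fee;
# plant size, capacity factor, and return rate are assumptions.

FEE_PER_KWH = 0.001      # dollars (0.1 cents, i.e., 1 mill)
PLANT_MWE = 1000         # assumed plant size
CAPACITY_FACTOR = 0.90   # assumed
REAL_RETURN = 0.03       # assumed annual real return on the fund

annual_kwh = PLANT_MWE * 1000 * CAPACITY_FACTOR * 8760
annual_fee = annual_kwh * FEE_PER_KWH   # dollars collected per year

fund = 0.0
for year in range(1, 41):                # 40 years of plant operation
    fund = fund * (1 + REAL_RETURN) + annual_fee

print(f"Annual collection: ${annual_fee / 1e6:.1f} million")
print(f"Fund after 40 years: ${fund / 1e6:.0f} million")
```

Under these assumptions a single large reactor contributes nearly $8 million per year, and the undisturbed fund grows to several hundred million dollars over a 40-year operating life, which is the point of the paragraph above: small fees compound into very sizable funds.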

In the conclusion of her seven-page comment, Macfarlane included the following statement:

Finally, I note that at least one commenter has suggested that development of a repository in the U.S. has developed into a Sisyphean task. I agree that much in the national management of spent fuel and development of a geologic repository over the past decades fits this analogy.

Once again, I agree with Macfarlane’s description of the current situation associated with attempting to site a single geologic repository in the United States.

Americans must remember that we are not subjects of Greek gods condemned to the frustratingly impossible task of pushing a rock uphill every day, only to have it roll back down each evening. We are free members of a society that has the ability to make choices and to change its mind to adapt to new situations or new information. The cancellation of Yucca Mountain through the actions of a tiny group of people shows that successfully siting a repository in the United States, with its multiple interest groups and arcane procedural rules, is not possible.

The good news is that we don’t need a repository in order to operate nuclear power plants safely and to store the created residues in a way that produces negligible environmental impacts. We don’t need a government program that can be milked for assets and jobs for decades before being derailed. We don’t need to have the federal government—which means us, as taxpayers—pay the costs of continued storage; the costs are predictable and can be paid with a small fee on each unit of power generation.

Making the choice to quit now and spend our limited resources on something more useful must not be judged as unfair to future generations. Used nuclear fuel has potential value, and we can create savings accounts now that can enable a different long-term solution in the distant future when there is more general agreement that constipating nuclear energy would be a suicidal course of action for society.

As technology improves, assets build up in the coffers of responsible parties, nuclear power plant sites continue to be developed or occasionally become repurposed, and the demand for nuclear fuel changes, future societies can change their minds. Nothing in the above plan precludes any future choice; the key action needed today is to stop digging a hole that currently offers no possibility of escape.

Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

Motives for pushing a no-threshold dose radiation risk model (LNT) in 1955-56

by Rod Adams

Dr. Edward Calabrese recently published a paper titled The Genetics Panel of the NAS BEAR I Committee (1956): epistolary evidence suggests self‐interest may have prompted an exaggeration of radiation risks that led to the adoption of the LNT cancer risk assessment model.

Abstract: This paper extends a series of historical papers which demonstrated that the linear-no-threshold (LNT) model for cancer risk assessment was founded on ideological-based scientific deceptions by key radiation genetics leaders. Based on an assessment of recently uncovered personal correspondence, it is shown that some members of the United States (US) National Academy of Sciences (NAS) Biological effects of Atomic Radiation I (BEAR I) Genetics Panel were motivated by self-interest to exaggerate risks to promote their science and personal/professional agenda. Such activities have profound implications for public policy and may have had a significant impact on the adoption of the LNT model for cancer risk assessment.

This new work was inspired when Calabrese found a 2007 history-of-science dissertation by Michael W. Seltzer titled The technological infrastructure of science. One facet of the dissertation explains how self-interest can create biases that affect scientific conclusions, policy setting, and public communications. Identical measurements and observations can be used to support dramatically different reports, depending on what the scientists are attempting to accomplish.

That is especially true when there is difficulty at the margins of measurement, where it is not easy to discern “signal” from “noise.” The risk of agenda-driven conclusions has grown as the scientific profession has expanded far beyond sporadically funded idealists motivated by a pure search for knowledge into an occupation that provides “good jobs” with career progression, regular travel opportunities, political influence, and good salaries.

On the other hand, their efforts on the committee illustrate one component of the technological infrastructure of genetics outside of the laboratory: the increasing significance of large-scale laboratories, federal funding agencies, policy-making committees, and government regulatory bodies as critical components of the technological infrastructure of science. Clearly, how the science of genetics was to advance into the future would have much to do with traditionally non-epistemic factors, in addition to epistemic ones.

Finally, in considering all these themes together, it is difficult to conclude that there is any sharp separation between the practice of science and the practice of politics (in the Foucauldian sense of power/knowledge). Rouse’s view of the intra-twining of epistemology and power, his view of epistemic politics, is pertinent here. The practice of science was at times the playing of politically epistemic games, whether at the level of argumentation in the contestable theoretical disputes of population genetics, at the level of science policy-making, as with the various organizations and committees responding to the scientific and political controversies surrounding the efforts to establish exposure guidelines in the light of concerns over fallout from atomic testing, or with the planning of the future infrastructure of experimentation based on funding opportunities.

(Seltzer 2007, p. 307–308)

Admittedly, the quote above uses historians’ jargon, but my translation is that Seltzer found ample evidence that the majority of geneticists on the BEAR I Genetics Panel were more concerned with fitting into a political narrative than with answering the questions they were ostensibly assembled to answer. Their tasking was to provide political decision-makers with scientifically supportable answers about the genetic effects of the radiation exposure that might be expected as a result of atomic weapons testing. They decided, however, to complete a different task.

Some members of the committee had an agenda: to assert the zero-threshold dose response desired by politically active members of the scientific community. They knew that answer—whether or not it was the truth—would assist their scientific colleagues in raising concerns about fallout to a fever pitch. Fallout fear was their agreed-upon lever for gaining public support for halting nuclear weapons testing.

Other members of the committee were more concerned about obtaining financial support for a long-term program of general genetics research. That program was only tangentially related to determining the effect of the tiny, chronic, and largely unavoidable radiation exposures of human populations from highly dispersed atmospheric weapons testing fallout.

(Warning: If you are interested in the history of how the no-threshold dose assumption was imposed and you are pressed for time, please do not download Seltzer’s paper and begin reading it. It is full of intriguing information, but it is 450 pages long including footnotes. The section on radiation health effects controversies is 112 pages long.)

Here is a quote from Calabrese’s paper that does an excellent job of summarizing the important take-aways from Seltzer’s historical research for people who are mainly interested in encouraging a new look at radiation protection assumptions and regulations:

Seltzer provided evidence that members of the Genetics Panel clearly saw their role in the NAS BEAR I committee to be a vehicle to advocate and/or lobby for funding for radiation genetics (p. 285 footnote 208). Moreover, it was hoped that the committee, which would exist continuously over many years, would influence the direction and priorities for future research funding. According to Seltzer (2007), such hoped for funding possibilities for radiation geneticists can be seen in letter correspondence between Beadle, Dobzhansky, Muller and Demerec.

Demerec responded by saying that “I, myself, have a hard time keeping a straight face when there is talk about genetic deaths and the tremendous dangers of irradiation. I know that a number of very prominent geneticists, and people whose opinions you value highly, agree with me” (Demerec to Dobzhansky 1957). Dobzhansky to Demerec (1957b) responded by saying “let us be honest with ourselves—we are both interested in genetics research, and for the sake of it, we are willing to stretch a point when necessary. But let us not stretch it to the breaking point! Overstatements are sometimes dangerous since they result in their opposites when they approach the levels of absurdity.

Now, the business of genetic effects of atomic energy has produced a public scare, and a consequent interest in and recognition of (the) importance of genetics. This is to the good, since it will make some people read up on genetics who would not have done so otherwise, and it may lead to the powers-that-be giving money for genetic research which they would not give otherwise.” (Dobzhansky to Demerec 1957b)

Calabrese goes on to tie this newly uncovered history-of-science work to several other papers that he has recently published regarding his own excavation work digging through the collected papers of major players in the drama associated with using fears of radiation to slow and then stop nuclear weapons testing.

In retrospect, therefore, a historical assessment of the LNT reflects the so-called “perfect toxicological storm”: Muller receiving the Nobel Prize within 1.5 years after the atomic bomb blasts in Japan, the deliberate deceptions of Muller on the LNT during his Nobel Prize lecture (Calabrese 2011a, 2012), the series of stealth-like manuscript manipulations and deceptions by Stern to generate scientific support for the LNT and to prevent Muller’s Nobel lecture deceptions from being discovered (Calabrese 2011b), the series of subsequent false written statements by Muller to support Stern’s papers and to protect his own reputation (Calabrese 2013), the misdirection and manipulation of the NAS Genetics Panel by the actions of Muller and Stern (Calabrese 2013), and now evidence of subversive self-interest within the membership of the Genetics Panel to exaggerate risk for personal gain. This series of Muller/Stern-directed actions inflamed societal fear of ionizing radiation following the bombings of Japan and during the extreme tensions of the cold war with its concomitant environmental contamination with radionuclides from atmospheric testing of nuclear weapons, and led to the acceptance of the LNT model for cancer risk assessment by a human population that had become extremely fearful of radiation, even at very low doses.

(Calabrese 2014 p. 3)

Though the scientist-led antinuclear weapons movement saw fear of fallout as one way of inciting public action to limit atmospheric weapons testing and its uncontrolled releases, other people might have had less admirable motives. There are many solid financial reasons to encourage people to fear all sources of ionizing radiation, especially the doses that members of the public could possibly receive from nuclear energy production.

After all, even in the 1950s, the fossil fuel industry was one of the largest and most important businesses in the world and the source of a number of enormous fortunes. That industry has always been interested in avoiding the unprofitably low prices that result when there are more energy options and the total supply of available energy exceeds the immediate need.

When I spoke to Dr. Calabrese for Atomic Show #218, he indicated that he had not done much to find out where the BEAR I committee members thought they would be obtaining the funds that might be made available if they exaggerated the dangers of low dose radiation. Modern scientists often assume that basic scientific research funding comes from a government agency, but that is something that developed gradually after World War II. Before then, nearly all funding for science came from private sources.

A 1987 biography of Warren Weaver published by the National Academy of Sciences described the genesis of the NAS study of radiation, which started in 1955.

Paraphrasing the description on pages 506–507: in the United States, one of the largest funders of basic science was the Rockefeller Foundation. In 1954, numerous articles in the press indicated that the public was confused about the effects of radiation. At a Rockefeller Foundation board meeting, attendees asked Detlev W. Bronk, who was both a foundation board member and the NAS president, if there was a way to produce some definitive answers.

The NAS proposed forming six committees to investigate various aspects of the issue and the Rockefeller Foundation agreed to provide the necessary funds to produce the reports. Warren Weaver served as the chairman of the Genetics Committee for the first BEAR reports. Of the other members of the committee, at least four (George W. Beadle, M. Demerec, H. J. Muller, and A. H. Sturtevant) had been recipients of Rockefeller Foundation grants before 1956 and several continued receiving substantial grants well after their work on the committee.

The NAS biography described Weaver’s successful committee chairmanship:

The first committee was chaired by Weaver, who successfully mediated the opposing positions of the two groups of geneticists who were members of the committee and prepared a report that had their unanimous support. After the first summary report was published in 1956, there was virtual editorial unanimity in the nation’s newspapers that the “report should be read in its entirety to be appreciated” and that it deserved the close attention of all concerned citizens.

(Pages 506–507)

In the June 13, 1956, edition of the New York Times, the news of the committee’s report occupied the entire far right column of the front page from top to bottom. Here is the top portion of the article:

Peril to future of man

Below that scary, attention-grabbing headline, the article’s lead was designed to shock and raise serious concerns:

Washington, June 12 — A committee of outstanding scientists reported today that atomic radiation, no matter how small the dose, harms not only the person receiving it but also all of his descendents [sic].

The article continued:

The six committees studied the radiation problem in the fields of genetics, pathology, meteorology, oceanography and fisheries, agriculture and food supplies, and disposal and dispersal of radioactive wastes.

Overshadowing all others because of its implication for mankind was the report of the genetics panel. This was headed by Dr. Warren Weaver of the Rockefeller Foundation. It was this foundation that provided the funds for the year-long survey.

It is important to understand that the primary data that the genetics committee had available to review were from experiments using X-rays on fruit flies, most of which were conducted by foundation grantees and members of the committee.

It is also worth noting that Warren Weaver served as director for Natural Sciences for the Rockefeller Foundation from 1932 to 1959. During that period, the program he directed provided more than $90 million in grants for experimental biology (NAS biography, p. 504). He had a distinguished career, received many awards, and had a major influence in selecting the science that was funded for molecular biology, radiation health effects, and genetics.

Weaver was a mathematician by education with a lifelong interest in statistics and Lewis Carroll’s Alice in Wonderland. According to his obituary, he had the world’s largest collection of various editions of the book. Upon his death, the collection was given to the University of Texas.

The Rockefeller Foundation was, and remains, interested in maintaining the dominance of oil and natural gas in our energy supply system. Those fuels were the source of the largess that the foundation has been able to give for more than 100 years.


Note: A version of this article appeared on Atomic Insights on July 19, 2014, under the headline Selfish motives for LNT assumption by geneticists on NAS BEAR I. At the time, I was not aware that the Rockefeller Foundation provided grants supporting all of the Biological Effects of Atomic Radiation committees from 1955 to 1962.




Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

Correcting History Can Be an Uphill Battle

By Rod Adams

In April 2014, ANS Nuclear Cafe published a valuable historical account and analysis of the Three Mile Island accident titled TMI operators did what they were trained to do.

As author Mike Derivan explained in great detail, the operators on duty at TMI-2 during the early morning hours of March 28, 1979, took exactly the actions that they were trained to take when provided indications of low primary plant pressure, combined with pressurizer water level indication that was “pegged high.” That level indicator told operators that the pressurizer was full of water. There were no other direct indications of water level provided.

Derivan has a unique perspective on this historical event; he was the shift supervisor on duty at the Davis-Besse Nuclear Power Plant on September 24, 1977, when that plant experienced an event that was virtually identical to the one that initiated the TMI accident. Derivan and his crew initially responded like the crew at TMI; their indications were the same, and both crews had been trained to make the same diagnosis and perform the same actions.

The primary reason that the event at Davis-Besse turned out to be a historical footnote, while the event at TMI resulted in billions of dollars’ worth of equipment damage, worldwide attention, and changes throughout the nuclear industry, was that Derivan recognized that he had been trained to make a wrong diagnosis, which led to incorrect actions.

About 20 minutes after his event began, he took a new look at the symptoms his indications were describing and revised his overall diagnosis. That led him to recognize that his plant was experiencing a loss of coolant from the steam space of the pressurizer. He realized that the response of available indicators to that event was unlike the indicator response for loss-of-coolant accidents from any other part of the reactor coolant system. He directed his crew to shut the valves that isolated the stuck power-operated relief valve and to restore water flow from the high pressure injection system into the primary coolant system.

Derivan participated in the required post-event analysis and reporting, so he knew what the Nuclear Regulatory Commission and the plant vendor B&W were told. He was thus uniquely affected by the TMI accident, especially once technical explanations of the accident sequence were available. He spent a considerable amount of time during the subsequent 35 years reviewing the available reports on TMI, trying to understand why the lesson he and his crew had learned in September 1977 had not been absorbed by the operators at TMI.

His conclusion is that the operators never had a chance to absorb and incorporate the lessons he had learned because they were never told that his event happened and never informed how to revise their procedures and training to enable a safer response. Despite all the effort that was put into various commissions and internal lessons learned efforts (Kemeny, Rogovin, the NRC task force that wrote NUREG-0585, etc.), none of the documents clearly state that the specific root cause of the sequence of events that melted 25–40 percent of the fuel material at TMI-2 was that almost everyone associated with pressurized water reactor design and operation misunderstood how the system would respond to a leak in the steam space of the pressurizer.

Unlike any other leak location, a steam leak would provide indications of falling system pressure and rising indicated level in the pressurizer. Since designers, regulators, and trainers assumed that all loss-of-coolant accidents would cause both pressure and pressurizer level to fall, that is what the available training materials—including the computerized simulators—taught operators to expect.

From the start of the accident, operators at TMI were thus placed into a situation that almost no one expected; they did not have an emergency operating procedure to follow. They had strong warnings about not overfilling the pressurizer, so they stopped pumping water into the plant when the level indication showed that the pressurizer was already more full than it should be. That was not an error on their part; it was an error in system response understanding that carried through to all training materials and operating procedures.

It was also an error in processes for sharing operational experience; at the time, the NRC was the only agency that received all reports from operators, so it was the only one that could distribute those reports back out to others that might need the information.

Unfortunately for the hard-working people who chose to become plant operators, the court of public and industry opinion blamed “operator error” as a primary cause of the accident. An excerpt from the history page of the Professional Reactor Operator Society (PROS) provides an operator perspective on how this misapplied responsibility affected members of the elite community of commercial reactor operators.

Remember Three-Mile Island (TMI)? Even if you weren’t in the nuclear business in March of 1979, you couldn’t have missed all the references since. As markers for change go, the event itself will not soon be forgotten, but more important are the lessons learned on all fronts.

Life was not very pleasant for the nuclear plant operators in the early eighties. The Three Mile Island accident started a chain of reforms in the industry that to a large extent were directed at operators. The basis for change was reported to be that the accident was caused by operator error. That announcement was made to the public almost immediately after the accident began and, as the core was uncovering, every special interest group in the nuclear industry was racing to protect its image.

In the days that followed, a media picture of incompetence in the TMI control room emerged. As we operators picked up the bits and pieces of information, it became clear that the picture was somewhat distorted. The TMI operators were being held accountable for deficiencies that legions of engineers, designers, trainers, and regulators had failed to recognize. Operators everywhere began to imagine themselves in a similar situation and realized that the results would probably be the same.

During the next few years, the industry was deluged with solutions to the “problem” of operator incompetence. The solutions ranged from threats of jail sentences to mandatory college degrees for all nuclear power plant operators. Few thought it was necessary to ask operators what tools they needed to help them operate the plants.

In addition to writing his story for distribution and creating an informative website—Nuke Knews—with his collected wisdom about the TMI event, Derivan recently took one more step in his quest for an improved understanding of why TMI happened and who should bear the responsibility.

He wrote a letter to NRC Chairman Macfarlane asking her to remove “operator error” as the root cause of the accident. His letter and supporting documentation can be found in the NRC ADAMS document database with a search for accession number ML14167A165. The NRC’s official response was provided on July 21, 2014, by Thomas R. Wellock, the agency’s historian. It is available in the same place with accession number ML14197A635.

Here is a quote from that response letter:

… none of the five major investigations of the accident commissioned by the NRC, Congress, and President Jimmy Carter claimed that operator error was “the” root cause of the accident.

Virtually all of the studies I reviewed agree with your analysis that while the operators committed errors, the chief culprits behind the accident were industry-wide and regulatory flaws. These included a poor understanding of PWR plant response to loss-of-coolant accidents (LOCA), a failure to circulate information about several precursor events at plants in the United States and Europe, flawed operator training and plant procedures, and inadequate control-room design. While you argue that these reports implicitly blame the operators as a “default position,” my reading of them indicates they were careful to avoid such a conclusion, and some pointedly challenged the thesis that operator-error caused the accident.

It may be correct, as you argue, that more should have been made of the industry’s poor understanding of plant response during a LOCA in the pressurizer steam space, but as you know, the NRC and industry addressed this issue with numerous reforms in training, reporting requirements, and event analysis. In fact, learning from precursor events may be the most important history lesson from TMI. On the 25th anniversary of the accident, NRC Historian J. Samuel Walker published Three Mile Island: A Nuclear Crisis in Historical Perspective. The Davis-Besse event, Walker shows, was a critical missed lesson: “Neither Babcock and Wilcox nor the NRC had taken effective action to draw lessons from Davis-Besse or provide warnings to other plant operators that ‘could have prevented the accident’ at TMI-2.”

In sum, the official reports and NRC histories have been and continue to be in substantial agreement with your overall analysis as to the causes of the accident. Like you, they place the errors committed by the TMI operators in the context of general industry and regulatory failings regarding human factors.

That letter comes close to the pardon that Derivan is seeking, but it might have been better if he had asked for “operator error” to be removed as “a” root cause rather than as “the” root cause.

In the years since the accident, plant designers and regulators have made substantial improvements in their system understanding and in the processes that they use to share lessons learned and operating experience. However, it is still worthwhile to remind everyone, especially as newly designed systems and whole new technologies are introduced, that there is no replacement for a questioning attitude and careful incorporation of operating experience to enable continuous improvement.

There is a useful paragraph on page 2-3 of NUREG-0585 that can serve as a conclusion to remember:

In the Naval Nuclear Propulsion Program, Admiral Rickover has insisted that there be acceptance of personal responsibility throughout the program and that the designer, draftsman, or workman and their supervisor and managers are responsible for their work and, if a mistake is made, it is necessary that those responsible acknowledge it and take corrective action to prevent recurrence. This concept applies equally to the commercial nuclear power program, but it has not yet been achieved.







Research Reactor License Renewal Challenges

By Rod Adams

The process for renewing research and test reactor (RTR) licenses in the United States has been subject to lengthy delays and periodic backlogs since the early 1980s. Despite the apparent time invested in improvement efforts, the process does not seem to be getting better very fast. The difficulty, schedule uncertainty, and cost of renewing research reactor licenses add to the burden of owning and operating research reactors. The scale of the challenge may contribute to regrettable institutional conclusions that maintaining operable facilities is not worth the trouble.

Here is the background that led me to those conclusions:

A couple of weeks ago, one of the email lists I read provided an intriguing press release announcing the renewal of the license for Dow Chemical Co.’s TRIGA research reactor located in Midland, Mich. The intriguing part of the story was that Dow had initially filed its application to renew the license in April 2009, and the 20-year extension was awarded on June 18, 2014, more than five years later. One of the more frequent contributors to the list had the following reaction:

Seriously? It took more than five years to renew a TRIGA license? That in itself might be an interesting story.

I followed up with a request for information to the Nuclear Regulatory Commission’s public affairs office. Scott Burnell replied promptly with the following information:

The background on the staff’s ongoing effort to improve RTR license renewal goes back quite a ways. Here’s a relevant SECY and other material: (March 2012 Commission meeting transcript) (March 2012 Commission meeting staff slides) (regulatory basis for rulemaking to improve process)

I’ll check with the staff Monday on what information’s available re: staff hours on the Dow RTR renewal review.

Burnell sent the staff-hour estimate for renewing the Dow TRIGA reactor license. Not including hours spent by contractors, the NRC staff took 1,600 hours to review the renewal application. Since Dow is a for-profit company, it was charged $272 per hour, for a total of roughly $435,000 plus whatever contractor costs were involved. That amount covers only the cost of regulator time, not the salaries and contracts paid directly by Dow to prepare the license application, respond to requests for additional information (RAIs), and engage in other communications associated with the application.
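As a back-of-the-envelope check, the fee works out directly from the figures the NRC provided (the rounding to $435,000 is in the reporting, not the arithmetic):

```python
# NRC review fee for the Dow TRIGA license renewal, using the
# hours and hourly rate quoted above (contractor costs excluded).
staff_hours = 1600        # NRC staff hours spent on the review
hourly_rate = 272         # dollars per hour billed to for-profit licensees
total_fee = staff_hours * hourly_rate
print(f"${total_fee:,}")  # → $435,200, reported as roughly $435,000
```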

Based on the cover letter for the issued license, Dow sent 19 letters to the NRC related to Dow’s application during the five-year process.

The references supplied by Burnell provided additional information about the process that is well known within the small community that specializes in research reactor operations, maintenance, and licensing.

For example, the renewal application for the Rensselaer Critical Facility, a 100-watt open-tank reactor originally licensed in 1965, was submitted in November 2002, and the renewed license was not issued until June 27, 2011, nearly nine years later. The NRC did not send Rensselaer its first RAI until three years after the application was submitted.

University of Missouri Research Reactor (MURR)

In a second example, the University of Missouri-Columbia Research Reactor (MURR) submitted its most recent license application in August 2006. The NRC sent its first set of RAIs in July 2009 and followed up with at least five more sets of RAIs that included a total of 201 questions of varying complexity. According to the NRC’s listing of research reactors currently undergoing licensing review, the renewed MURR license has not yet been issued.

A third example is the Armed Forces Radiobiology Research Institute TRIGA reactor. Its license renewal application was submitted in July 2004 and is still under review. In 2012, AFRRI estimated that it would be spending at least $1 million for its share of the license review process, not including expenditures by the NRC. Since AFRRI is a government organization, the NRC does not bill it for fees. Burnell indicated that the staff hours expended on that project could be 6,000 or more. It is sadly amusing to review the brief provided by the AFRRI to the NRC in 2012 about the process. (See pages 52–65 of the linked document.) The following quote is a sample that indicates the briefer’s level of frustration.

Question: Once the licensee demonstrates that the reactor does not pose a risk to the health and safety of the public, what is the benefit provided to the public by the expenditure of $1M to answer the additional 142 RAIs?

In a quirk of fate, research reactor license renewals have repeatedly come due just as NRC priorities were being reordered by external events. Research reactors receive 20-year licenses, and numerous facilities were constructed in the late 1950s and early 1960s, so dozens of renewals came due or were already under review in April 1979, when the Three Mile Island accident and its recovery became the NRC’s highest priorities.

About 20 years after that backlog got worked off, the 9/11-inspired security upgrades pushed everything else down on the priority list.

TRIGA at Oregon State University

The research reactor office has experienced staffing shortages, often exacerbated by the small pool of people with sufficient knowledge and experience in the field. When the NRC hunts for talent, it is drawing from the same pool of people that staffs the plants and is responsible for filing the applications for license amendments and renewals.

One aspect of the law that eases the potential disruption of licensing delays is a provision that allows continued facility operation as long as the renewal application was submitted in a timely manner. That provision, however, has often resulted in a lower priority being assigned to fixing the staffing shortages and simplifying the complex license application process.

The facility owners don’t want to complain too loudly about the amount of time that their application is taking, since they are not prohibited from operating due to an expired license. NRC budgeters and human resource personnel have not been pressured to make investments in improving their service level; not only do the customers have no other choice, but they have not squeaked very loudly. Here is a quote from a brief provided to the NRC by the chairman of the National Organization for Test, Research and Training Reactors (TRTR).

Position on License Renewal

  • TRTR recognizes the unique challenges imposed on NRC during RTR relicensing in the past decade (staffing issues, 9/11, etc.).
  • TRTR appreciates the efforts made by the Commission to alleviate the relicensing backlog.
  • TRTR appreciates the efforts of the NRC RTR group to update guidance for future relicensing efforts and the opportunity to participate in the update process via public meetings.

Generic Suggestions for Streamlining Relicensing

  • The process has become excessively complex compared to 20 years ago, with no quantifiable improvement to safety.
  • Consider the development of generic thermal hydraulic analysis models for TRIGA and plate-type fueled RTRs (1 MW or less).
  • Similarly for the Maximum Hypothetical Accident analysis.
  • Develop a systematic way outside of the RAI process to correct typographical and editing errors.
  • Develop a generic decommissioning cost analysis based on previous experiences, indexed to power level, and inflation.
  • Endorse the use of ANSI/ANS Standards in Regulatory Guidance.

(Pages 26–28 of the linked PDF document containing several briefs, each with its own slide numbering sequence.)

Once the high-priority responses have died down and the backlog of license reviews in progress has exceeded 50 percent of the total number of research reactors in operation, the NRC has stepped in and directed improvement efforts. The staff has attempted to improve the process by issuing more guidance, but those attempts have often complicated and delayed the applications that were already under review.

The Interim Staff Guidance (ISG) issued in June 2009 appears to still be active; it is difficult to tell how much progress has been made on the long-range plan that the ISG outlined. Once again, external events have changed the NRC’s priorities, as most available resources during the past three years have been shifted to deal with the events that took place in Japan in 2011 and the effort to come up with some kind of waste confidence determination.

There are no easy solutions, but repairing the process will require focused and sustained management attention.

TRIGA at University of California, Davis





Nuclear professionals: Establish standing now to improve operational radiation limits

By Rod Adams

On August 3, 2014, the window will close on a rare opportunity to use the political process to strongly support the use of science in establishing radiation protection regulations. Though it is not terribly difficult for existing light water reactors and fuel cycle facilities to meet the existing limits from 40 CFR 190 regarding doses to the general public and annual release rate limits for specific isotopes, there is no scientific basis for the current limits. If they are maintained, they will hinder the deployment of many potentially valuable technologies that could help humanity achieve growing prosperity while substantially reducing air pollution and persistent greenhouse gases like CO2.

In January 2014, the U.S. Environmental Protection Agency issued an Advance Notice of Proposed Rulemaking (ANPR) to solicit comments from the general public and affected stakeholders about 40 CFR 190, Environmental Radiation Protection Standards for Nuclear Power Operations.

The ANPR page has links to summary webinars provided to the public during the spring of 2014, including presentation slides, presentation audio, and questions and answers. This is an important opportunity for members of the public, nuclear energy professionals, nuclear technical societies, and companies involved in various aspects of the nuclear fuel cycle to provide comments about the current regulations and recommendations for improvements. Providing comments now, in the information-gathering phase of a potential rulemaking process, is a critical component of establishing standing to continue participating in the process.

It also avoids a situation where an onerous rule could be issued and enforced under the regulator’s principle that “we provided an opportunity for comment, but no one complained then.”

The existing version of 40 CFR 190—issued on January 13, 1977, during the last week of the Gerald Ford administration—established a limit of 0.25 mSv/year whole body dose and 0.75 mSv/year to the thyroid for any member of the general public from radiation coming from any part of the nuclear fuel cycle, with the exception of uranium mining and long-term waste disposal. Those two activities are covered under different regulations. Naturally occurring radioactive material is not covered by 40 CFR 190, nor are exposures from medical procedures.

40 CFR 190 also specifies annual emissions limits for the entire fuel cycle for three specific radionuclides for each gigawatt-year of nuclear generated electricity: krypton-85 (50,000 curies), iodine-129 (5 millicuries), and Pu-239 and other alpha emitters with longer than one year half-life (0.5 millicuries).

It is important to clarify the way that the U.S. federal government assigns responsibilities for radiation protection standards. The Nuclear Regulatory Commission has the responsibility for regulating individual facilities and for establishing radiation protection standards for workers, but the EPA has a role and an office of radiation protection as well.

The Atomic Energy Act of 1954 initially assigned all regulation relating to nuclear energy and radiation to the Atomic Energy Commission (AEC). However, as part of the President’s Reorganization Plan No. 3 of October 1970, President Nixon transferred responsibility for establishing generally applicable environmental radiation protection standards from the AEC to the newly formed EPA:

…to the extent that such functions of the Commission consist of establishing generally applicable environmental standards for the protection of the general environment from radioactive material. As used herein, standards mean limits on radiation exposures or levels or concentrations or quantities of radioactive material, in the general environment outside the boundaries of locations under the control of persons possessing or using radioactive material.

(Final Environmental Impact Statement, Environmental Radiation Protection Requirements for Normal Operations of Activities in the Uranium Fuel Cycle, p. 18.)

Before the transfer of environmental radiation responsibilities from the AEC to the EPA, and until the EPA issued the new rule in 1977, the annual radiation dose limit for a member of the general public from nuclear fuel cycle operations was 5 mSv—20 times higher than the EPA’s limit.

The AEC had conservatively assigned a limit of 1/10th of the 50 mSv/year applied to occupational radiation workers, which it had, in turn, conservatively chosen to provide a high level of worker protection from the potential negative health effects of atomic radiation.

The AEC’s occupational limit of 50 mSv was less than 1/10th of the previously applied “tolerance dose” of 2 mSv/day, which worked out to an annual limit of approximately 700 mSv. That daily limit recognized the observed effect that damage resulting from radiation doses was routinely repaired by normal physiological healing mechanisms.

Aside: After more than 100 years of human experience working with radiation and radioactive materials, there is still no data that prove negative health effects for people whose exposures have been maintained within the above tolerance dose, initially established for radiology workers in 1934. End Aside.

From the 1934 tolerance dose to the EPA limit specified in 1977 (and still in effect), requirements were tightened by a factor of 2800. The claimed basis for that large conservatism was a lack of data at low doses, leading to uncertainty about radiation health effects on humans. Based on reports from the National Academy of Sciences subcommittee on the Biological Effect of Ionizing Radiation (BEIR), the EPA rule writers simply assumed that every dose of radiation was hazardous to human health.
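The arithmetic behind that factor of 2800 can be checked directly; the calculation below uses the rounded ~700 mSv/year annual figure cited in the text (2 mSv/day works out to about 730 mSv/year before rounding):

```python
# Factor by which public dose limits tightened between the 1934
# "tolerance dose" and the 1977 EPA rule, using the text's rounded figures.
daily_tolerance = 2.0                     # mSv/day, the 1934 tolerance dose
annual_tolerance = daily_tolerance * 365  # ≈ 730 mSv/year, rounded to ~700 in the text
epa_limit = 0.25                          # mSv/year, 40 CFR 190 whole-body limit
print(round(700 / epa_limit))             # → 2800, the factor cited above
```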

The EPA used that assumption to justify setting limits that were quite low, but could be met by the existing technology if it was maintained in a like-new condition for its entire operating life. Since the rule writers assumed that they were establishing a standard that would protect the public from an actual harm, they did not worry about the amount of effort that would be expended in surveys and monitoring to prove compliance. As gleaned from the public webinar questions and answers, EPA representatives do not even ask about compliance costs, because they are only given the responsibility of establishing the general rule; the NRC is responsible for inspections and monitoring enforcement of the standard.

The primary measured human health effects used by the BEIR committee in formulating their regulatory recommendations were determined based on epidemiological studies of atomic bomb survivors. That unique population was exposed to almost instantaneous doses greater than 100 mSv. Based on their interpretation of data from the Life Span Study of atomic bomb victims, which supported a linear relationship between dose and effect in the dose regions available, the BEIR committee recommended a conservative assumption that the linear relationship continued all the way down to a zero dose, zero effect origin.

For the radionuclide emissions limits, the EPA chose numbers that stretch the linear no-threshold dose assumption by applying it to extremely small doses spread to a very large population.

The Kr-85 standard is illustrative of this stretching. It took several hours of digging through the 240-page final environmental impact statement and the nearly 400-page collection of comments and responses to determine exactly what dose the EPA was seeking to limit decades ago, and how much it thought the industry should spend to achieve that protection.

The EPA determined that allowing the industry to continue its then-established practice of venting Kr-85 and allowing that inert gas to disperse posed an unacceptable risk to the world’s population.

It calculated that if no effort was made to contain Kr-85, and the U.S. industry grew to a projected 1000 GW of electricity production by 2000, an industry with full recycling would release enough radioactive Kr-85 gas to cause about 100 cases of cancer each year.

The EPA’s calculation was based on a world population of 5 billion people exposed to an average of 0.0004 mSv/year per individual.
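The collective-dose arithmetic can be reconstructed as a sketch. The 5-billion population and the 0.0004 mSv/year average individual dose come from the text; the risk coefficient of roughly 5 percent per person-sievert is my illustrative assumption for the LNT coefficient the EPA would have applied, not a figure from the original documents:

```python
# Reconstructing the EPA's Kr-85 collective-dose estimate under LNT.
# Population and average dose are from the text; the risk coefficient
# is an assumed nominal LNT value used here only for illustration.
population = 5_000_000_000
avg_dose_sv = 0.0004 / 1000                   # 0.0004 mSv/yr expressed in Sv/yr
collective_dose = population * avg_dose_sv    # person-Sv per year ≈ 2000
risk_per_person_sv = 0.05                     # assumed ~5% fatal cancers per person-Sv
print(round(collective_dose * risk_per_person_sv))  # → 100 cases per year
```

The exercise shows how LNT converts a dose far below measurement thresholds for any individual into a seemingly concrete casualty count once multiplied across billions of people.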

At the time that this analysis was performed, the Barnwell nuclear fuel reprocessing facility was under construction and nearly complete. It had not been designed to contain Kr-85. The facility owners provided an estimate to the EPA that retrofitting a cryogenic capture and storage capability for Kr-85 would cost $44.6 million.

The EPA finessed this exceedingly large cost for tiny assumed benefit by saying that the estimated cost for the Barnwell facility was not representative of what it would cost other facilities that were designed to optimize the cost of Kr-85 capture. It based that assertion on the fact that Exxon Nuclear Fuels was in a conceptual design phase for a reprocessing facility and had determined that it might be able to include Kr-85 capture for less than half of the Barnwell estimate.

GE, the company that built the Midwest Fuel Recovery Plant in Morris, Illinois, provided several comments to the EPA, including one about the low cost-benefit ratio of attempting to impose controls on Kr-85:

Comment: The model used to determine the total population dose should have a cutoff point (generally considered to be less than 0.01 mSv/year) below which the radiation dose to individuals is small enough to be ignored.

In particular, holdup of krypton-85 is not justified since the average total body dose rate by the year 2000 is expected to be only 0.0004 mSv/year.

Response: Radiation doses caused by man’s activities are additive to the natural radiation background of about 0.8–1.0 mSv/year [note: the generally accepted range of background radiation in the mid 1970s, as indicated by other parts of the documents, was 0.6–3.0 mSv/yr] whole-body dose to which everyone is exposed. It is extremely unlikely that there is an abrupt discontinuity in the dose-effect relationship, whatever its shape or slope, at the dose level represented by the natural background that would be required to justify a conclusion that some small additional radiation dose caused by man’s activities can be considered harmless and may be reasonably ignored.

For this reason, it is appropriate to sum small doses delivered to large population groups to determine the integrated population dose. The integrated population dose may then be used to calculate potential health effects to assist in making judgements on the risk resulting from radioactive effluent releases from uranium fuel cycle facilities, and the reasonableness of costs that would be incurred to mitigate this risk.

Existing Kr-85 rules are thus based on collective doses and on a calculation of risk that both national (NCRP) and international (ICRP) radiation protection bodies now specifically discourage. They are also based on the assumption of a full-recycle fuel system and on 10 times as much nuclear generating capacity as exists in the United States today.

Since the level specified is applied to the entire nuclear fuel cycle industry in the United States, the 40 CFR 190 ANPR asks the public to comment about the implications of attempting to apply limits to individual facilities. This portion of the discussion is important for molten salt reactor technology that does not include fuel cladding to seal fission product gases, and for fuel cycles that envision on-site recycling using a technology like pyroprocessing instead of transporting used fuel to a centralized facility for recycling.

There are many more facets of the existing rule that are worthy of comment, but one more worth particular attention is the concluding paragraph from the underlying policy for radiation protection, which is found on the last page of the final environmental impact statement:

The linear hypothesis by itself precludes the development of acceptable levels of risk based solely on health considerations. Therefore, in establishing radiation protection positions, the Agency will weigh not only the health impact, but also social, economic, and other considerations associated with the activities addressed.

In 1977, there was no consideration given to the fact that any power that was not generated using a uranium or thorium fuel cycle had a good chance of being generated by a power source producing a much higher level of carbon dioxide. In fact, the EPA in 1977 had not even begun to consider that CO2 was a problem. That “other consideration” must now play a role in any future decision-making about radiation limits or emission limits for radioactive noble gases.

If EPA bureaucrats are constrained to use the recommendations of a duly constituted body of scientists as the basis for writing its regulations, the least they could do before rewriting the rules is to ask the scientific community to determine if the linear no-threshold (LNT) dose response model is still valid. The last BEIR committee report is now close to 10 years old. The studies on which it was based were conducted during an era in which it was nearly impossible to conduct detailed studies of DNA, but that limitation has now been overcome by advances in biotechnology. There is also a well-developed community of specialists in dose response studies that have produced a growing body of evidence supporting the conclusion that the LNT is not “conservative”—it is simply incorrect.

Note: Dose rates from the original documents have been converted into SI units.




Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

Accepting the Science of Biological Effects of Low Level Radiation

By Rod Adams

A group of past presidents and fellows of the American Nuclear Society has composed an important open letter to ANS on a topic that has been the subject of controversy since before I first joined the society in 1994. The subject line of that letter is “Resolving the issue of the science of biological effects of low level radiation.” The letter is currently the only item on a new web site that has been created in memory of Ted Rockwell, one of the pioneers of ANS and the namesake of its award for lifetime achievement.

LNT and “no safe dose”

Ted was a strong science supporter who argued for many years that we needed to stop accepting an assumption created in the 1950s without data as the basis for our radiation protection regulations. That assumption, which most insiders call the “LNT”—linear no-threshold dose response—says that risk from radiation is linearly proportional to dose all the way to the origin of zero risk, zero dose.
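In functional terms the LNT is just a straight line through the origin: risk equals slope times dose. A minimal sketch (the slope and threshold values are arbitrary, chosen only for illustration) shows how it differs from a threshold model:

```python
def lnt_risk(dose_msv, slope=1e-5):
    """LNT assumption: risk is proportional to dose all the way down to zero.
    The slope is an arbitrary illustrative value, not a regulatory figure."""
    return slope * dose_msv

def threshold_risk(dose_msv, threshold_msv=100.0, slope=1e-5):
    """A threshold model: zero attributed risk below the (assumed) threshold."""
    return slope * max(0.0, dose_msv - threshold_msv)

# Under LNT even a trivial 0.01 mSv dose carries a nonzero computed risk;
# a threshold model attributes no risk to it at all.
print(lnt_risk(0.01), threshold_risk(0.01))
```

Because the LNT line reaches zero risk only at exactly zero dose, any nonzero dose produces a nonzero computed risk; that is the mathematical root of the "no safe dose" slogan.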

Many people who support the continued use of this assumption as the basis for regulation plug their ears and cover their eyes to the fact that those who oppose the use of nuclear energy, food irradiation, or medical treatments that take advantage of radiation’s useful properties translate our mathematically neutral term into something far more fear-inspiring: They loudly and frequently proclaim that the scientific consensus is that there is “no safe dose” of radiation.

Some people who support the use of nuclear energy and who are nuclear professionals help turn up the volume of this repeated cry:

Delvan Neville, lead author of the study and a graduate research assistant in the Department of Nuclear Engineering and Radiation Health Physics at Oregon State University, told the Statesman Journal Apr. 28, “You can’t say there is absolutely zero risk because any radiation is assumed to carry at least some small risk.”

While most scientists and engineers understand that the LNT assumption means that tiny doses have tiny risks that disappear into the noise of daily living, the people who scream “no safe dose” want their listeners to believe it means that all radiation is dangerous. They see no need to complicate the conversation with trivial matters like measurements and units (I am being ironic here).

Scientists and engineers almost immediately ask “how much” before starting to get worried; but others can be spurred into action simply by hearing that there is “radiation” or “contamination” and it is coming to get them and their children. When it comes to radiation and radiation dose rates, we nuclear professionals have not made it easy for ourselves or for the public: we use a complicated set of units, and in the United States we remain stubbornly “American” by refusing to convert to the international standard.

Aside: There is no good reason for our failure to accept the international radiation-related measurement units of sieverts, becquerels, and grays. Laziness and “it’s always been that way” are lousy reasons. I’m going to make a new pledge right now—I will use International System of Units (SI) units exclusively and no longer use rem, curies, or rad. After experiencing the communications confusion created by incompatible units during and after the Fukushima event, the Health Physics Society adopted a position statement specifying exclusive use of SI units when talking or writing about radiation; perhaps ANS should adopt it as well. End Aside.
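The conversions themselves are simple fixed factors, which makes sticking with dual units even harder to defend. A quick sketch (the helper-function name is mine, not from any standard library):

```python
# Exact conversion factors between traditional and SI radiation units.
SV_PER_REM = 0.01      # 1 rem  = 0.01 sievert
GY_PER_RAD = 0.01      # 1 rad  = 0.01 gray
BQ_PER_CI  = 3.7e10    # 1 curie = 3.7e10 becquerel (exact, by definition)

def rem_to_msv(rem):
    """Convert a dose in rem to millisieverts."""
    return rem * SV_PER_REM * 1000.0

# The familiar 5 rem/yr U.S. occupational dose limit expressed in SI units:
print(rem_to_msv(5), "mSv")
```

Since every factor is a fixed power of ten (or, for the curie, a single defined constant), there is no technical obstacle to making the switch.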

Physics or biology?

Leaving aside the propaganda value associated with the cry of “no safe dose,” an important reason to give high priority to the effort to resolve the biological effects of low-level radiation is that the LNT rests on the wrong science altogether.

The LNT assumption was created by persons who viewed the world through the lens of physics. When dealing with inanimate physical objects all the way down to the tiniest particles like neutrons, protons, mesons, and baryons, statistics and uncertainty principles work well to predict the outcome of each event. An atom that fissions or decays into a new isotope has no mechanism that works to reverse that change. A radiation response assumption that applies in physics, however, is an inadequate assumption when the target is a living organism that has inherent repair mechanisms. Biology is the right science to use here.

At the time that the LNT was accepted, decision-makers had an excuse. Molecular biology was a brand new science and there were few tools available for measuring the effects that various doses of radiation have on living organisms.

The assumption itself, however, has since inhibited a major tool used by biologists and those who study the efficacy of medical treatments: Since all radiation was assumed to be damaging and could only be used in medicine in cases where there was an existing condition that might be improved, it was considered unethical to set up well-designed randomized controlled trials exposing healthy people to carefully measured doses of radiation while maintaining an unexposed control group.

Instead, health effects studies involving humans have normally relied on the less precise observational methods of the case-control or cohort variety, using occupationally or accidentally exposed persons. The nature of the exposures in those studies often introduces large measurement uncertainty, and there are complicating factors that are often difficult to address in an observational study.

Science marches on, but will LNT?

Molecular biology and its available tools have progressed dramatically since the LNT was adopted in the wake of the National Academy of Sciences’ 1956 BEAR I (Biological Effects of Atomic Radiation) committee report. It is now possible to measure effects, both short-term and long-term, and to watch the response and repair mechanisms actually at work. One of the key findings that biologists have uncovered in recent years is that the number of radiation-induced DNA events at modest radiation dose rates is dwarfed, by several orders of magnitude, by essentially identical events caused by “ordinary” oxidative stress.

This area of research (and others) could lead to a far better understanding of the biological effects of low-level radiation. Unfortunately, the pace of the research effort has slowed down in the United States because the Department of Energy’s low dose research program was defunded in 2011 for unexplained reasons.

It is past time to replace the LNT assumption with a model that uses the correct scientific discipline—biology, rather than physics—to predict biological effects of low-level radiation. I’ll conclude by quoting the final paragraph of the ANS past presidents’ open letter, which I encourage all ANS members, both past and present, to read, understand, and sign:

The LNT model has been long-embedded into our thinking about radiation risk and nuclear energy to the point of near unquestioned acceptance. Because of strict adherence to this hypothesis, untold psychological damage has resulted from the Fukushima accident—a situation in which no person has received a sufficient radiation dose to cause a significant health issue—yet thousands have had their lives unnecessarily and intolerably uprooted. The proposed actions will spark controversy because it could very well dislodge long-held beliefs. But as a community of science-minded professionals, it is our responsibility to provide leadership. We ask that our Society serve in this capacity.

Additional reading

Yes Vermont Yankee (June 23, 2014)  “No Safe Dose” is Bad Science. Updated. Guest Post by Howard Shaffer

Atomic Insights (June 21, 2014) Resolving the issue of the science of biological effects of low level radiation






Spent fuel pool fire risk goes to zero a few months after reactor shutdown

By Rod Adams

It’s time to stop worrying about the risk of a spent fuel pool fire at decommissioned nuclear reactors. Even at operating reactors, there is good reason to put the risk quite low on any table that prioritizes items worth fretting over.

According to modern analysis using up-to-date data and physically representative models—with appropriately conservative assumptions—the staff at the U.S. Nuclear Regulatory Commission has reached a series of important conclusions:

1. Spent nuclear fuel storage pools are strong, robust structures that are highly likely to survive even the strongest of potentially damaging events. That is true even for the most limiting case, the elevated pools used in Mark I boiling water reactors:

The staff first evaluated whether a severe, though unlikely, earthquake would damage the spent fuel pool to the point of leaking. In order to assess the consequences that might result from a spent fuel pool leak, the study assumed seismic forces greater than the maximum earthquake reasonably expected to occur at the reference plant location. The NRC expects that the ground motion used in this study is more challenging for the spent fuel pool structure than that experienced at the Fukushima Daiichi nuclear power plant from the earthquake that occurred off the coast of Japan on March 11, 2011. That earthquake did not result in any spent fuel pool leaks.

(Emphasis added.)

2. If an event even more powerful than the already extreme assumption occurs and a pool is damaged enough to cause a significant leak, it is almost certain that the used fuel inside the pool will remain intact. The only period in which there is any doubt about that statement is during the first few months after the most recently operating fuel has been put into the pool:

In the unlikely situation that a leak occurs, this study shows that for the scenarios and spent fuel pool studied, spent fuel is only susceptible to a radiological release within a few months after the fuel is moved from the reactor into the spent fuel pool. After that time, the spent fuel is coolable by air for at least 72 hours. This study shows the likelihood of a radiological release from the spent fuel after the analyzed severe earthquake at the reference plant to be about one time in 10 million years or lower.

(Emphasis added.)

3. Even if all else fails, and—somehow—there is an event that both empties the pool and causes the protective cladding on the fuel to catastrophically fail, the chance of anyone being exposed to enough radioactive material to cause a dose that would have any health impact is vanishingly tiny:

If a leak and radiological release were to occur, this study shows that the individual cancer fatality risk for a member of the public is several orders of magnitude lower than the Commission’s Quantitative Health Objective of two in one million (2×10⁻⁶ per year). For such a radiological release, this study shows public and environmental effects are generally the same or smaller than earlier studies.

(Note: The quoted statements come from pages iii and iv of Consequence Study of a Beyond-Design-Basis Earthquake Affecting the Spent Fuel Pool for a U.S. Mark I Boiling Water Reactor dated October 2013. The numbered statements are my interpretation of what the analysis results mean to the rest of us.)

The staff at the NRC did not reach these conclusions lightly. Even though the NRC and its predecessor agency have been encouraged to study this area in excruciating detail for more than 40 years, the publicity surrounding the events at Fukushima and the mistaken belief that there were leaks from the spent fuel pools at that plant caused the agency to initiate yet another study, the results of which are quoted above.

That study was not a minor effort. After reviewing the document, which runs to 416 pages of material, including detailed responses to comments, I contacted the NRC public affairs office and asked the following question: “How much NRC time and money was invested in the production of the document titled ‘Consequence Study of a Beyond-Design-Basis Earthquake Affecting the Spent Fuel Pool for a U.S. Mark I Boiling Water Reactor’ dated October 2013?”

Here is the answer I received from Scott Burnell, NRC Office of Public Affairs:

The staff was able to provide the following information. In FY2011, 11 staff worked a total of 275.75 hours on the Spent Fuel Pool Study. In FY2012, 24 staff worked a total of 4,623.25 hours on the project. In FY2013, 22 staff worked a total of 6,253.75 hours on the project. And for the portion of FY2014 until the report was submitted, eight staff worked a total of 378.5 hours on the project.

That makes a total of 11,531.25 hours. That is more than 5 person-years, but it involved at least 24 separate individuals. The current professional staff-hour rate for the US NRC is $272, so the NRC staff time associated with the study cost licensees $3,136,500. There are many other costs associated with a study like this that are not included in that total.
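Recomputing the totals directly from the figures in the NRC response (the $272 staff-hour rate is the one cited above):

```python
# Staff hours reported by the NRC for the spent fuel pool study, by fiscal year.
hours = {"FY2011": 275.75, "FY2012": 4623.25, "FY2013": 6253.75, "FY2014": 378.5}
RATE_PER_HOUR = 272   # current NRC professional staff-hour rate, dollars

total_hours = sum(hours.values())       # total staff hours across all four years
staff_cost = total_hours * RATE_PER_HOUR   # staff cost billed to licensees

print(f"{total_hours} hours -> ${staff_cost:,.0f}")   # 11531.25 hours -> $3,136,500
```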

It is no wonder to me that four of the five NRC commissioners—who have been appointed and confirmed as independent, knowledgeable professionals charged with ensuring that all activities associated with using radioactive material in the United States are adequately safe—voted to move on and stop studying the non-issue of storing used fuel in licensed spent fuel pools.

That decision was recorded on May 23, 2014, and documented in COMSECY-13-00030:

The Commission has approved the Staff’s recommendation that this Tier 3 Japan lessons-learned activity be closed and that no further generic assessments be pursued related to possible regulatory actions to require the expedited transfer of spent fuel to dry cask storage.

The only vote against closing the issue, and in favor of continued study and analysis, came from NRC Chairman Macfarlane. She is not only an academic with the common academic belief that it is worthwhile to keep studying interesting issues, even those of seemingly little importance, but also someone with a long publication history questioning virtually all spent fuel storage options.

It is also no surprise to me that Senators Boxer and Markey and their staffs have failed to get the message, and continue to insist that yet more expenditures be devoted to this issue, even though it has no impact on safety.

Finally, it is not surprising that professional nuclear energy skeptics such as the Natural Resources Defense Council and the Union of Concerned Scientists insist that the NRC analysis was incomplete, that there were scenarios that were not studied, and that there were hidden consequences that were not included. After all, interfering in all possible ways with any activity designed to provide temporary or permanent answers to the question: “What do you do with the waste?” has been a plank of the antinuclear platform for at least 40 years.

The effort to constipate nuclear energy by focusing on “the waste issue”, including pushing an expensive effort to move used fuel to dry casks as soon as possible, has been ongoing ever since Ralph Nader gathered a disparate collection of local activist groups in the summer of 1974 under the Critical Mass Energy Project banner.

There is no reason to keep expending money on this particular aspect of the waste issue. Dry cask storage is acceptably safe for an indefinite period of time. However, wet storage in licensed spent fuel pools is also acceptably safe for an indefinite period of time, even in spent fuel pools whose storage capacity has been computed, based on experience and careful study, to be substantially larger than was initially assumed when the pools were first designed in the 1960s and early 1970s.






Save Vermont Yankee. If not you, who? If not now, when?

By Rod Adams

I told some friends the other day that I often feel like a time traveler from the Age of Reason who sees questionable behavior and is forced by training to ask, “Why?”

Although I have already written a couple of articles on this particular topic, it is time for one more post intended to provoke thoughts and discussions aimed at finding a way to prevent an action that we all know is wrong and shortsighted. I’m writing about the pending closure of the Vermont Yankee nuclear power plant, a 620-MWe nuclear power plant located on the Vermont side of the Vermont/New Hampshire border (also known as the Connecticut River) and only a dozen or so miles from the Massachusetts border.

It is a safe, reliable, zero-emission nuclear power plant with a low, predictable fuel cost and a moderately generous, but predictable payroll. It has recently been extensively refurbished as part of a power uprate program; it has an operating license that is good until 2032 and may be able to be extended; and it has a brand new emergency diesel engine.

It is in a region of the United States where the reliable generating capacity is suddenly so tight that the total auction price for capacity has recently tripled from $1 billion in 2013 to more than $3 billion in the most recent auction.

Aside: It’s probably worth mentioning that if Vermont Yankee had bid into that auction, the prices would have settled at a far lower level. That is the nature of the response in an underdamped system that is in a delicate balance; wild swings can result from the imposition of minor disturbances. It is not at all surprising that companies with generating facilities participating in the New England capacity auction did not approach Entergy about purchasing Vermont Yankee. Nor is it a shock to find out that 100 percent of the companies approached as logical candidates with complementary assets politely declined to make any bids after a due diligence presentation. End Aside.

Vermont Yankee is also in a region of the country with a growing dependence on natural gas for both electricity and heat, but a pipeline network that was not sized to carry enough gas for both types of customers.

Here is a recent quote from Leo Denault, Entergy Corporation chief executive officer and chairman, about the power situation in New England:

“If we continue to see Northeast power markets drive what should be economical units to retire prematurely and not fairly reward generators for the attributes they provide—including fuel supply diversity and reliability, as well as environmental benefits—what was a volatile outlier this winter… could become a recurring situation.”  Denault also noted the harsh winter’s ability to expose pipeline deficiencies that constrained certain resources during periods of high demand: “There is simply not enough natural gas pipeline capacity in New England to serve both heating demand and natural gas-fired power plants during extreme cold.”

(SNL Energy’s Power Daily — April 25, 2014)

Any industrial customers that are left in the region are left out in the cold, and it can get quite cold in New England, especially during a polar vortex.

The state of Vermont bears a large portion of the responsibility for the pending closure; in fact, there are politicians in the state who have bragged about their success in getting rid of a reliable, low cost, clean energy source (of course, they may slant their claims a bit).

Peter Shumlin—both as senator and then as governor—and his allies made life uncomfortable for Entergy during the 12 years that the company owned the facility. Their efforts added substantial costs to the total operations and maintenance costs and they demanded several different kinds of tribute in return for “allowing” the plant to keep operating.

It is understandable that there are many people on the plant staff who are sad that they are losing their jobs, but conflicted about leaving a state that did not value their contributions anyway.

Unfortunately, nuclear professionals did not do all they could to help the valiant efforts of Meredith and George Angwin, Howard Shaffer, Robert Hargraves, and others who worked hard to counter the FUD (fear, uncertainty and doubt) spread by the professional fear mongers like Arnie Gundersen, or the actions of professional nuclear energy industry critics like Mark Cooper and Peter Bradford.

So far, the antinuclear forces seem to have won the day.

Entergy has announced that no one wanted the plant. I will take the company at its word, but I have to ask: what kind of effort did it invest in marketing the facility? It is almost like getting up one day and finding out that your neighbor, who owns a house that you always liked and thought would be a great place for your son or daughter to use to raise your grandchildren, had decided to tear down the house and leave a vacant lawn because that was easier than paying the upkeep after he retired to Florida.

He tells you that “everyone” knew the place was for sale and also knew that he planned to tear it down if no one came up with a reasonable offer. Somehow, you never noticed the little “For Sale” sign tucked in the bottom right hand corner of a front window. Perhaps it was because there was an overgrown plant out in front covering the sign.

At any rate, my little allegory would have a happy ending if you just happened to wake up and get your paper early enough on the day that the dumpsters were being delivered to stop your neighbor and halt the destruction before it started.

In the case of Vermont Yankee, there are potentially interested investors who never knew that the plant was for sale. There are also plenty of technically qualified people who could be formed, in short order, into a management team to own and operate a nuclear plant that has already done all of the hard work of establishing procedures, schedules, required programs like QA and RP, and the host of other things that would need to be done for any new facility.

The reactions I have received from some very bright people when I describe the current plan can be summarized by the quote I received—second hand—from a correspondent who knows Nathan Myhrvold, the CEO of Intellectual Ventures and a partner with Bill Gates in TerraPower. My correspondent asked Myhrvold if he had any ideas about saving the plant. This is the response he received:

Not really…. It is an insane decision to shut it, but that is what nuclear has become…

Perhaps I am just a little odd, but I just don’t see how people can stand idly by and watch while a small group of people take actions that will harm a much larger group of people over a long time to come. If the action is, indeed, insane, the question is why should we allow it to happen?

Who is going to point out the insanity? When?

Back to the headline, which was the motto over one of the doorways at my alma mater.

“If not you, who? If not now, when?”

I guess that—for now—it’s going to be me and a few diehards who are still working hard in Vermont. With any luck, in a short period of time it will be me, those few diehards, and a dedicated team of well-resourced professionals who recognize that shutting down a well-operated nuclear plant is a betrayal of the people who have worked so hard to try to make the United States less dependent on foreign supplies of energy.

Some might ask who I am to question the analysis and decisions of a big company like Entergy; surely the people working there know more about the situation than I do and should be trusted to have made the right call. As one of my many heroes famously advised: “Trust, but verify.” After I see the numbers, I might make a different call, but all of the publicly available numbers are pointing me in a different direction.

I may just be a guy who spends a good bit of his day blogging on the Internet, yes sometimes in my PJs. However, I’m also a guy who has been doing that for a long time while also holding down responsible positions in the US Navy and at a respected nuclear power plant design firm.

If you’re fortunate enough to have had the assignments I have had and you are any good at all, you end up meeting a few credible people who respect your ability. I even have a few friends in finance, some from my days at the Naval Academy and some from my sustained but eventually failed efforts to raise capital for Adams Atomic Engines, Inc.

BTW—did you know that the New England power grid burned diesel and jet fuel to supply 4 percent of its winter power this past year, and that on some days generators burning distillate petroleum products represented fully 25 percent of the electrical power supply? And those figures happened even WITH Vermont Yankee and Brayton Point supplying reliable power…






TMI operators did what they were trained to do

Note by Rod Adams:  This post has a deep background story. The author, Mike Derivan, was the shift supervisor at the Davis Besse nuclear power plant (DBNPP) on September 24, 1977, when it experienced an event that started out almost exactly like the event at Three Mile Island on March 28, 1979.

The event began with a loss of feedwater to the steam generator. The rapid halt of heat removal resulted in a primary loop temperature increase, primary coolant expansion, and primary system pressure exceeding the set point for a pilot-operated relief valve in the steam space of the pressurizer. As at TMI, that relief valve stayed open after system pressure was lowered, resulting in a continuing loss of coolant. For the first 20 minutes, the plant and operator responses at Davis Besse were virtually identical to those at TMI.

After that initial similarity, Derivan had an “Ah-ha” moment and took actions that made the event at Davis Besse turn into a historical footnote instead of a multi-billion dollar accident.

When Three Mile Island happened and details of the event emerged from the fog of initial coverage, Mike was more personally struck than almost anyone else. He has spent a good deal of time during the past 35 years trying to answer questions about the event, some that nagged and others that burned more intensely.

In order to more fully understand the narrative below, please review Derivan’s presentation describing the events at Davis Besse, complete with annotated system drawings to show how the event progressed.

This story is a little longer and more technical than most of the posts on ANS Nuclear Cafe or Atomic Insights (where this post originally appeared). It is intended to be a significant contribution to historical understanding of an important event from a man with a unique perspective on that event. If you are intensely curious about nuclear energy and its history, this story is worth the effort it requires.

The rest of this post is Mike’s story and his analysis, told in his own words.


By Mike Derivan

My first real introduction to the Three Mile Island-2 (TMI) accident happened on Saturday, March 31, 1979, a few days after the accident. TMI-2 was a Babcock and Wilcox (B&W) pressurized water reactor plant.

At the Davis Besse nuclear power plant (DBNPP) in Ohio where I worked, we initially heard something serious had happened at TMI-2 as early as the day of the event, March 28, and interest was high because TMI was our sister plant. DBNPP also is a B&W PWR plant.

Actual details were sketchy for the next couple of days, but mainly by watching the nightly TV news it became clear to me that something serious was going on, and that conflicting information was being reported. Some reports indicated there had been radiation releases, while the plant owner reported that there had been none.

I even remember hearing the words “core damage” mentioned for the first time. It was in a Saturday TV news report that I saw the first explanation, using pictures of the system, of the suspected sequence of events, and it became clear to me that the pilot-operated relief valve had stuck open.

My reaction was gut-wrenching and I was also in disbelief that TMI did not know what had happened at Davis Besse. That evening I watched the Walter Cronkite news report. I sat there with total disbelief as he discussed potential core meltdown. Disbelief because if you were a trained reactor operator in those days it was pretty much embedded in your head that a core meltdown was not even possible; and here that possibility was staring me right in the face.

Cronkite’s report was also my first exposure to the infamous hydrogen bubble story. I had enough loss of coolant accident (LOCA) training to understand that some hydrogen could be generated during LOCAs; after all, we had containment vessel hydrogen concentration monitoring and control systems installed at our plant. But the actual described scenario at TMI seemed incredible, except that it had apparently happened.

I would expect that my reaction was the same as that of many nuclear plant operators at the time. The difference was that the apparent initiating scenario had actually happened to me 18 months earlier at Davis Besse, and I just couldn’t get the question out of my mind: “Why didn’t they know?”

The real root cause of the TMI accident

Since the time of the TMI accident, virtually hundreds of people have stuck their noses into the root cause of the accident. Both the Kemeny and Rogovin investigations identified a lot of programmatic “stuff” that needed to be fixed, and I agree with most of it.

I feel, however, that both of them skirted one important issue by using different flavors of “weasel words” in the discussion of operator error. The two reports handled that specific topic a bit differently, but the discussions got couched with side topics of contributing factors. The general consensus of all the current discussion summaries I read is that TMI was caused by operator error.

The TMI operators did make some operator errors, and I am not denying that. But my contention is that all the errors they made came after they got outside of the design-basis understanding of PWRs at that time. It is no surprise to anyone that when a machine this complicated gets outside of its design basis, anything might happen. You basically hope for the best, but you are going to have to take what you get.

Fukushima proves that, and everyone knows why/how Fukushima got outside of its design basis. The how/why that TMI operators got outside of their design basis is going to be the focus of my discussion. I will also discuss the fact that I think this was understood at the time of the investigations, but it was consciously decided not to pursue it.

My whole point of contention is that turning off the high pressure injection flow early in the event, in response to the increasing pressurizer level, is the crux of the whole operator error argument. All discussions say that if the operators hadn’t done that, the TMI event would have been a no-never-mind. And I agree.

But nobody really wants to believe that they were told to do that for the symptoms they saw.

In other words, they were told to do that by their training, compounded by tunnel-visioned, bad procedure guidance. I have believed this since the day I understood what happened at TMI. Furthermore, the TMI operators were trying to defend their actions from a position of weakness; their core was melted and nobody wanted to believe them.

I am not in a position of weakness on this issue; my event came out okay at DBNPP, so I have no reason not to be totally honest and objective about it. During the precursor event at DBNPP, we also turned off high pressure injection early in the event in response to the symptoms that we saw, and for the same reason the TMI operators did it 18 months later: we were told to do it that way.

This fact is apparently a hard pill to swallow. But if it is hard for you to accept, just imagine how I felt watching TMI unfold in real-time.

And right there is the crux of the issue. Once those high pressure injection pumps were off, both plants were then outside the design-basis understanding for that particular small break LOCA.

So you hope for the best, but take what you get. But still, obviously an error has been made if not taking that action would have made the event a no-never-mind.

So who exactly made the error? Both the Kemeny and Rogovin reports discuss the problems with the B&W simulator training for the operators. The important point that they both apparently missed (or didn’t want to deal with, which I prefer as the explanation) is that this is really an independent two-part problem.

I will refer to controlling high pressure injection during a small break LOCA as part A of the problem, and to the actual physical PWR plant response to a small break LOCA during a leak in the pressurizer steam space as part B of the problem.

It really is that simple. B&W was training correctly for high pressure injection control (part A) for small break LOCAs in the water space of their PWRs. But neither they nor Westinghouse understood the actual plant response to a small break LOCA in the pressurizer steam space.

By omission they were not training correctly for a small break LOCA in the pressurizer steam space (part B). To make matters worse, B&W was overstressing in training the importance of the part A “rules”, to the extent that an operator would fail a B&W administered operator certification exam for failure to correctly implement the part A rules.

Thus, as fate would have it, when the two occurrences (part A and part B) combined in the real world, where the plant responds per the rules of Mother Nature, the B&W training and procedures ended up leading the operators to actions that put them outside the actual design basis, not the falsely perceived (and trained-upon) design basis.
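The part B phenomenon is counterintuitive enough that a toy sketch may help. The Python below is purely illustrative, with made-up numbers (it is not a thermal-hydraulic model): during a leak in the pressurizer steam space, reactor coolant system pressure and actual coolant inventory fall, while steam voids forming in the loops push water into the pressurizer, so the indicated level rises. An operator trained only on the part A rule sees the rising level and throttles high pressure injection at exactly the moment inventory is being lost.

```python
# Purely illustrative sketch of the "part B" phenomenon; NOT a
# thermal-hydraulic model. All numbers are arbitrary and chosen only
# to show the trend during a pressurizer steam-space leak (e.g., a
# stuck-open relief valve).

def steam_space_leak(steps=5):
    pressure = 2155.0    # psig, nominal RCS pressure (illustrative)
    level = 220.0        # inches, indicated pressurizer level (illustrative)
    inventory = 100.0    # percent of normal coolant mass
    history = []
    for _ in range(steps):
        pressure -= 150.0    # coolant escaping through the stuck-open valve
        inventory -= 2.0     # mass actually leaving the system
        level += 25.0        # loop voids push water up into the pressurizer
        history.append((pressure, level, inventory))
    return history

for p, lvl, m in steam_space_leak():
    print(f"pressure={p:6.0f} psig  level={lvl:4.0f} in  inventory={m:5.1f} %")
```

The only point of the sketch is the divergence: indicated pressurizer level trending up while pressure and inventory trend down, the opposite of the same-direction assumption the operators were trained on.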

Up until very recently my argument has been one using just simple logic and sheer numbers of operators involved. In Davis Besse’s September 1977 event, there were five licensed operators involved in that decision, either by direct action or complacent compliance. In other words, all five agreed that it was the right thing to do. Of course, it wasn’t the right thing to do, but nobody objected because it was the correct part A thing to do and nobody understood the part B of the problem.

Eighteen months later at TMI, in March 1979, an additional number of operators (just how many depends on the time line) repeated the same initial wrong actions. So we have about a dozen operators, at two separate plants 18 months apart, all doing the same thing and all convinced that they were doing the right thing.

Is it even conceivable to think that they did not all believe they did the right thing according to part A? I just don’t believe so; of course, we are all arguing from a position of weakness. It is the wrong thing to do for part A and part B combined, so nobody really wants to believe that we were trained to do it.

But as I explained, it is really the two-part problem that created the issue. My point can be further emphasized by the fact that the Nuclear Regulatory Commission’s Region III had heartburn over the report that DBNPP submitted for its event. The NRC did not like the fact that the report did not say that the operators made an error turning off high pressure injection.

I know why that happened. The person most responsible for writing the report narrative was actually in the control room during the event. He did not believe the action was wrong based on his same training relative to part A of the problem. So why would he put that statement in the report? He was so convinced that his own (complacent) agreement was correct that saying otherwise would be a false statement.

Just recently, new information came to my attention that absolutely confirms my belief that B&W was in fact totally emphasizing high pressure injection control in their training based solely on their understanding of the part A problem, with no understanding on B&W’s part of the part B problem or its effect when combined with the part A problem.

My understanding comes directly from seeing the whole infamous Walters response memo of November 10, 1977, to the original Kelly memo of November 1, 1977. It is absolutely remarkable to me that 35+ years after the DBNPP event, and almost the same amount of time after TMI, a totally unrelated Google search turns up a complete version of the Walters memo.

After half a lifetime of studying all the TMI reports, I had only seen one “cherry picked” excerpt from the Walters memo, basically saying that he agreed with the operators’ response at DBNPP. The whole memo in context basically confirms that the operator claims of “we were trained to do it” are correct.

The original Kelly memo also confirms that Kelly still didn’t grasp the significance of the part B problem as it related to the DBNPP event; or if he did, he didn’t relate it thoroughly and clearly in his memo. Both memos are presented and discussed below; draw your own conclusions. (The source document is here.)

The Kelly memo

Kelly Memo


The referenced source document is basically a critique of these memos by textual communications experts. Here’s a summary: First, Kelly is talking “uphill” in the organization, so he couches his memo with that in mind. He asks no one for a decision, but basically asks for “thoughts.” And he makes a non-emphatic recommendation for “guidelines.”

My personal additional notations are that he dilutes the importance of and possibly adds confusion to the recommendation by adding “LPI” to the discussion, but most importantly he totally misses any part B problem discussion. He does say “the operator stopped High Pressure Injection when Pressurizer level began to recover, without regard to primary pressure.”

But there is no mention of the fact that the system response was not as expected, e.g., that the pressurizer level went up drastically in response to the reactor coolant system boiling. He never articulates that the operators’ reluctance to re-initiate high pressure injection, even after we understood the cause of the off-scale pressurizer level indication, was based solely on that indicated pressurizer level and our training. Thus, the memo totally misses the part B point that the system response was not as expected by anybody, which was crucial to getting the guidance fixed.

The other thing I notice is that the memo is not addressed to Walters. I’ve also “been there, done that” in a large organization. I can easily understand how the recipient (Walters’ boss), upon receiving this memo with no specific articulation of a new problem (part B), would pass it to Walters with a “handle it, handle it… make it go away.” I also note that N.S. Elliott is on the distribution. He was the B&W Training Department manager, so B&W training was directly in the loop on this issue as well.

The Walters response memo

Note that the original Walters response memo to Kelly was handwritten, so it has apparently been typed someplace along the line. This is how it appears in the reference source, typos and all.

Walters Memo


I’m omitting the communications expert’s comments, because they are in the reference. Here are my comments: In simple operator lingo, this response is a “smart ass slap down” of Kelly, including all the accompanying sarcasm. But there are some very important admissions revealed here. First, there is an admission, backed by Walters’ discussion with the B&W Training Department, that we responded in the correct manner considering how we were trained, including the bases behind our training.

This is what we operators had been claiming all along, but nobody wanted to believe it. Second, Walters clearly states, both as his personal assumption and the B&W Training Department’s assumption, that reactor coolant pressure and pressurizer level will trend in the same direction during a LOCA. Bingo. He has just admitted that they still don’t get the specific part B contribution to the problem.

So they are in fact training wrong for this event because they don’t understand part B. Further, this discussion is happening after the DBNPP event, as a result of the Kelly concerns, and well before TMI. Third, the tone of Walters’ sarcastic comments about a “hydro” (hydrostatic pressure test) of the reactor coolant system every time high pressure injection is initiated shows the disproportionate emphasis that the B&W training was placing on “never let High Pressure Injection pump you solid.” Again, something that the operators were claiming that nobody wanted to believe.

My conclusion, and it hasn’t changed in 35 years, is that the root cause of the TMI accident was that the B&W simulator training and inadequate procedures put the TMI operators in a box, outside of their design-basis understanding for that specific small break loss of coolant accident. A contributing cause is that B&W itself didn’t understand the actual plant response to that steam space loss of coolant event, because it had never been analyzed correctly. Then, they also missed the warning that the Davis Besse event provided.

For a long time I wondered why both the Kemeny and Rogovin investigations didn’t reach the same specific conclusion that I have. After all, both investigations involved some very smart people, and both looked at the same evidence. My thinking today is that they did reach that same conclusion. But I don’t actually know what they may have seen as the bottom-line purpose of their investigations, either.

If you consider that no investigation report was going to change the condition of TMI, it may have been as simple as this: there was enough wrong that needed fundamental changing, so let’s just get those changes done and move forward. So neither group saw a need to identify the actual bottom-line root cause; rather, they just gave recommendations for prevention of another TMI-type accident.

Further, by the time those two reports were published, it was well understood that there was going to be a lawsuit between GPU and B&W. If one of those reports had specifically identified B&W as bearing partial liability for the root cause, that conclusion, along with the report that made it, would inevitably have been dragged into the lawsuit.

I have no doubt that this was actually discussed at the time. And I will further speculate that it was decided there was no reason to identify the actual, true, single root cause in the reports, because the lawsuit itself would decide the liability issue independently of the reports. My problem with that is that the lawsuit, which started in 1982, never really settled the liability issue; it was mutually “settled” in 1983 before a conclusion was reached.

Another thing that I think was actually discussed at that time was the fact that if the reports stated that the root cause was that the B&W training put the operators outside of the design-basis understanding for that event (because the event wasn’t understood by B&W), it would open Pandora’s Box. They didn’t want to deal with “What else do you have wrong?” when there was well over $100 billion worth of these nuclear power plants still operating.

This conclusion is strongly reinforced for me by the Kemeny Report section “Causes of the Accident”. This section of the report lists a “fundamental cause” as operator error, and specifically lists turning off high pressure injection early in the event. And then the report lists several “Contributing Factors” including B&W missing the warning provided by the Davis Besse event.

If you read the contributing factors listed, there is a screaming omission; it is never stated that B&W (actually the whole PWR industry, if you consider the precursors) did not understand the actual plant response to a leak in the pressurizer steam space (what I refer to here as part B of the problem). And that is why B&W and the NRC both missed the DBNPP warning. Virtually nothing will ever convince me that all those smart people did not put that truth together.

Thus, it was both their fear of opening Pandora’s Box and a conscious decision that there was no need to implicate B&W with any partial liability that ruled the process. By doing that, they collectively decided to throw the TMI operators under the bus as the default position.

My conclusion on the missing contributing factor is an Occam’s razor solution: it is not “missing” because they didn’t “get it”; it was a decision not to include it. After all, if that contributing factor had been included, who on earth would believe it was operator error when the operators simply did what they were told to do in that situation? So they just did not want to deal with the real issue: who made the error?

A simple analogy

For years I struggled to find a simple analogy to explain the position that the TMI operators were placed in by their training, one that could be understood through common, everyday knowledge rather than the technical detail that requires understanding the complications of nuke plant operations. One of the reasons it was difficult is that it required a “phenomenon” that is commonly understood today but was not understood at all at the time of the training. This is the best that I can come up with.

Suppose in learning to drive a car you are being trained to respond to the car veering to the left. It’s simple enough, simply turn the steering wheel to the right to recover. It is also what your basic instinct would lead you to do, so there is no mental conflict in believing it.

It is also reinforced and practiced during actual driver training on a curvy road. That response is soon embedded as the right thing to do. Now suppose your driver training also includes training on a car simulator. That is where you learn and practice emergency-situation driving. After all, nobody is going to do those emergency things in an actual car on the road.

Here’s where it gets complicated. Assume virtually no one yet understands that when the car skids to the left on ice (because of loss of front-wheel steering traction), the correct response is to turn the steering wheel into the skid direction, or to the left. This is just the opposite of the non-ice response. And to make matters worse, because no one understands this yet, including the people who built the car simulator, the simulator has been programmed so that the wrong response works correctly on the simulator.

So in your emergency driver training you practice it this way: the simulator responds incorrectly to the actual phenomenon, but it shows a successful result and you recover control. Since this probably also agrees with your instinct, and you see success on the simulator, this action is also embedded as the right thing to do. One additional point: if you don’t take this wrong action, you will flunk your simulator driver training test.

You know where this is going: now you are out driving on an icy road for the first time and the car skids to the left. You respond exactly as you were instructed and exactly as the simulator showed was successful, and you have an accident, because the car responds to the real-world rules of Mother Nature.

An investigation is obviously necessary because, I forgot to tell you, the car cost $4 billion and you don’t own it. During the subsequent investigation everything is uncovered: the unknown phenomenon is finally correctly understood, the incorrect simulator programming is discovered, and it comes out that the previously unknown phenomenon had been identified before your accident and that your accident had even been predicted as possible.

But the investigation results are published, and the finding is that the accident was caused by your error of turning the steering wheel the wrong way on the ice. Nobody but you is found to have made an error in the stated conclusions; it is simply a case of driver error. Do you feel you have been wronged? This is what happened to the TMI operators.

For everybody out there who doesn’t like my conclusions, I’ll just say that many of the principals of the investigations are still alive, but choose not to talk. So, simply ask them, especially the principals in the GPU vs. B&W lawsuit that should have determined any liability issues. Ask them why it didn’t happen. My idea of justice involves getting the truth, the whole truth, and nothing but the truth exposed. That process is still unfinished.


Small Modular Reactors—US Capabilities and the Global Market

By Rod Adams

On March 31–April 1, Nuclear Energy Insider held its 4th Annual Small Modular Reactor (SMR) conference in Charlotte, NC (following on the 2nd ANS SMR Conference in November 2013—for notes and report from that embedded topical meeting, see here).

You can find a report of the first day of talks, presentations, and hallway conversations at SMRs—Why Not Now? Then When? That first day was focused almost exclusively on the US domestic market—the second day included some talks about US capabilities, but it was mainly focused on information useful to people interested in developing non-US markets.

Before I describe the specifics, I want to take the opportunity to compliment Nuclear Energy Insider for its well-organized meeting. Siobhan O’Meara did an admirable job putting together an informative agenda with capable speakers and keeping the event on schedule.


Westinghouse SMR

Robin Rickman, director of the SMR Project Office for Westinghouse Electric Company, provided a brief update on his company’s SMR effort and the status of its development. He then focused much of his talk on describing the mutual challenges faced by the SMR industry and the incredible array of commercial opportunities that he sees developing if the industry successfully addresses those challenges together.

In early February, Danny Roderick, chief executive officer of Westinghouse, announced that his company was shifting engineering and licensing resources away from SMR development toward providing enhanced support for efforts to refine and complete the eight AP1000 construction projects in progress around the world.

Rickman explained this decision and its overall impact on SMR development. He told us that Westinghouse remains committed to the SMR industry and to resolving the mutual challenges that currently inhibit SMR development. His project office has retained a core group of licensing experts and design engineers and is fully supporting all industry efforts. The SMR design is at a stage of completion that enables the company to continue to engage with both customers and regulators based on a mature conceptual design.

The company, however, does not want to get ahead of potential customers and invest hundreds of millions of dollars into completing a design certification if there are no committed customers. Rickman didn’t say it, but Westinghouse has a corporate memory from the AP600 project of completing the process of getting a design certification in January 1999 without ever building a single unit. It’s not an experience that they have any desire to repeat.

Westinghouse determined that its resources could be best invested in making sure that the AP1000 is successful and enables others to succeed in attracting financing and additional interest in nuclear energy.

For SMRs, Westinghouse has a business model that indicates a need for a minimum order book of 30–50 units before it would make financial sense to invest in the detailed design and the modular manufacturing infrastructure required to build a competitive product. Rickman emphasized that all of the plant modules must be assembled in a factory and delivered to the site ready to be joined together in order to achieve the capital cost and delivery schedule needed to make SMRs competitive.

That model requires a substantial investment in the factories that will produce the components and the various modules that make up the completed plant. He told us that the state of Missouri is already investing in creating such an infrastructure with the support of all of its major universities, every electricity supplier, a large contingent of qualified manufacturing enterprises, both political parties, and the governor’s office.

He told the audience that Missouri’s efforts are not limited to supporting a single reactor vendor; it is building an infrastructure that will be able to support all of the proposed light water reactor designs including NuScale, mPower, and Holtec.

Rickman included a heartfelt plea for everyone to recognize the importance of creating a new clean energy alternative in a world where billions of people do not have access to light at the flip of a switch or clean water by opening a simple tap.

In what was a surprise to most attendees, the FBI had a table in the expo hall and gave a talk about its interest in the safety and security of nuclear materials. I will reveal my own skepticism about the notion that nuclear power plants are especially vulnerable or attractive targets for people with nefarious intent. It is hard to imagine anyone making off with nuclear fuel assemblies or being able to do anything especially dangerous with them in the highly unlikely event that they did manage to figure out how to get them out of a facility.

Bryan Hernadez, a refreshingly young engineer, gave an excellent presentation about the super heavy forging capabilities available in the United States at Lehigh Heavy Forge in Bethlehem, Pa. That facility is a legacy of what formerly was the Bethlehem Steel Corporation’s massive integrated steel mill. It has the capacity to forge essentially every component that would be required to produce any of the proposed light water SMR designs.

The presentation included a number of photos that must have warmed the heart of anyone in the audience who likes learning about massive equipment designed to produce high quality goods with tight tolerances that weigh several hundred tons.

In a presentation that would have pleased several of my former bosses, Dr. Ben Amaba, a worldwide sales executive from IBM, talked about the importance of approaching complex designs with a system engineering approach and modern information tools capable of managing interrelated requirements. That is especially important in a highly regulated environment with a globally integrated supply chain.

Jonathan Hinze, senior vice president of Ux Consulting, provided an overview of both national and international markets and described those places that his company believes have the most pressing interest in machines with the characteristics being designed into SMRs.

He reminded the audience that US suppliers are not the only players in the market, and that they are not even the current market leaders. He noted that Russia is installing two KLT-40 power plants (light water reactors derived from established icebreaker power plants) onto a barge, and that those reactors should be operating within a couple of years. He pointed to the Chinese HTR-PM, a power plant in which two helium-cooled pebble bed reactors, each producing 250 MW of thermal power, supply steam to a common 210-MWe steam turbine. He also mentioned that Argentina had recently announced that it had broken ground on a 25-MWe CAREM light water reactor.
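As a quick sanity check on the HTR-PM figures quoted above, the two 250-MWt modules feeding one 210-MWe turbine imply a net thermal efficiency of 42 percent:

```python
# Net thermal efficiency implied by the HTR-PM figures quoted above.
thermal_mw = 2 * 250   # two pebble bed reactor modules, MW thermal
electric_mw = 210      # shared steam turbine output, MW electric
efficiency = electric_mw / thermal_mw
print(f"{efficiency:.0%}")  # → 42%
```

That is notably higher than the roughly one-third efficiency typical of light water reactor steam cycles, which is one of the selling points of high-temperature gas reactors.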

Douglass Miller, acting director of New Major Facilities Division of the Canadian Nuclear Safety Commission, described his organization’s performance-based approach to nuclear plant licensing. He noted that the commission does not have a design certification process and that each project needs to develop its safety case individually to present to the regulator. It appears that the process is not as prescribed or as time-consuming as the existing process in the United States.

Tony Irwin, technical director for SMR Nuclear Technology Pty Ltd, told us that Australia is moving ever closer to accepting the idea that nuclear energy could play a role in its energy supply system. Currently, the only reactor operating in Australia is a research and isotope production reactor built by INVAP of Argentina. He described the large power requirements for mining operations in places not served by the grid and the fact that his country has widely distributed settlements that are not well-integrated in a large power grid. He believes that SMRs are well suited to meeting Australia’s needs.

Unfortunately, I had to get on the road to avoid traffic and get home at a reasonable hour, so I missed the last two presentations of the day. I probably should have stayed to hear about the cost benefits of advanced, non-light water reactors and about Sweden’s efforts to develop a 3-MWe lead–cooled fast reactor for deployment to Canadian arctic communities.

As I was finalizing this post, I noted that Marv Fertel has just published a guest post at NEI Nuclear Notes titled Why DOE Should Back SMR Development. I recommend that anyone interested in SMRs go and read Fertel’s thoughts on the important role that SMRs can play in meeting future energy needs.


SMR on trailer – courtesy NuScale Power




Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

What Did We Learn From Three Mile Island?

By Rod Adams

Thirty-five years ago this week, a nuclear reactor located on an island in the Susquehanna River near Harrisburg, Pennsylvania, suffered a partial core melt.

On some levels, the accident that became known as TMI (Three Mile Island) was a wake-up call and an expensive learning opportunity for both the nuclear industry and the society it was attempting to serve. Some people woke up, some considered the event a nightmare that they would do anything to avoid repeating, and some hard lessons were properly identified and absorbed. Unfortunately, some people learned the wrong lessons and some of the available lessons were never properly interpreted or assimilated.

The melted fuel remained inside the TMI unit 2 pressure vessel, nearly all the volatile and water-soluble fission products remained inside the reactor containment, and there were no public health impacts. The plant was a total loss after just three months of commercial operation, the plant buildings required a clean-up effort that took 14 years, the plant owner went bankrupt, and the utility customers paid dearly for the accident.

The other unit on the same site, TMI-1, continues to operate well today under a different owner.

Although the orders for new nuclear power plants had already stopped several years before the accident, and there were already people writing off the nuclear industry’s chances for a recovery, the TMI accident’s emotional and financial impacts added another obstacle to new plant project development.

In the United States, it took more than 30 years to finally begin building new nuclear power plants. These plants incorporate some of the most important lessons in their design and operational concepts from the beginning of the project development process. During the new plant construction hiatus, the U.S. electricity industry remained as dependent as ever on burning coal and natural gas.

Aside: A description of the sequence of events at TMI is beyond the scope of this post. There is a good backgrounder—with a system sketch—about the event on the Nuclear Regulatory Commission’s web site. Another site with useful information is Inside TMI Three Mile Island Accident: Moment by Moment. End Aside.


The TMI event was the result of a series of human decisions, many of which were made long before the event or in places far from the control room. Of those decisions, there were some that were good, some that were bad, some that were reactions based on little or no information, and many made without taking advantage of readily available information.

One of the best decisions, made long before the event happened, was the industry’s adoption of a defense-in-depth approach to design. From the very beginning of nuclear reactor design, responsible people recognized that bad things could happen, that it was impossible to predict exactly which bad things could happen, and that the public should be protected from excess exposure to radioactive materials through the use of multiple barriers and appropriate reactor siting.

The TMI accident autopsy shows that the basic design of large pressurized water reactors inside sturdy containment buildings was fundamentally sound and adequately safe. As intended by the designers, the defense-in-depth approach and generous engineering margins allowed numerous things to go wrong while still keeping the vast majority of radioactive materials contained away from humans. Here is a quote from the Kemeny Commission report:

We are convinced that if the only problems were equipment problems, this Presidential Commission would never have been created. The equipment was sufficiently good that, except for human failures, the major accident at Three Mile Island would have been a minor incident.

Though it is not well-known, the NRC completed a study called the State-of-the-Art Reactor Consequence Analyses (SOARCA, published as NUREG-1935) that indicated that there would be few, if any, public casualties as the result of a credible accident at a U.S. nuclear power plant, even if there were a failure in the containment system.

One of the most regrettable aspects of TMI was that the heavy investment that the United States had made into the infrastructure for manufacturing components and constructing large nuclear power plants—factories, equipment, and people—was mostly lost, even though the large components and basic design did what they were supposed to do.

There were, however, numerous lessons learned about specific design choices, control systems, human machine interfaces, training programs, and information sharing programs.

Emergency core cooling

The Union of Concerned Scientists and Ralph Nader’s Critical Mass Energy Project had been warning about a hypothetical nuclear reactor accident for several years, though it turns out that they were wrong about why the emergency core cooling system did not work as designed.

The core damage at TMI was not caused by a failure of the cooling system to provide adequate water in the worst case condition of a double-ended shear of a large pipe; it was caused by a slow loss of cooling water that went unnoticed for 2 hours and 20 minutes. The leak, in this case, was a stuck-open relief valve that had initially opened during a loss of feedwater accident.

While the slow leak was in progress, the operators purposely reduced the flow of water from the high pressure injection pumps, preventing them from performing their design task of keeping the primary system full of water when its pressure is low.

It’s worthwhile to understand that the operators did not reduce injection flow by mistake or out of malice. They did what they had been trained to do. Their instructors had carefully taught them to worry about the effects of completely filling the pressurizer with water because that would eliminate its cushioning steam bubble. Their instructors and the regulators that tested them apparently did not emphasize the importance of understanding the relationship between saturation temperature and saturation pressure.

The admonition to avoid “going solid” (filling the pressurizer with water instead of maintaining its normal steam bubble) was a clearly communicated and memorable lesson in both classroom and simulator training sessions. When TMI control room operators saw pressurizer level nearing or exceeding the top of its indicating range, they took action to slow the inflow of water. At the time, they had still not recognized that cooling water was leaving the system via the stuck-open relief valve.

The physical system had responded as it had been designed to, but the designers had neglected to ensure that their training department fully understood the system’s response to the various conditions that might be expected to occur. It’s possible that, at the time they designed the system, the designers did not know that a pressurizer steam-space leak could cause pressure to fall while pressurizer level rises. There was not yet much operating experience; the large plants being built in the 1960s and 1970s could not be fully tested at scale, and computer models have always had their limitations, especially at a time when processing power was many orders of magnitude lower than it is today.
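The saturation relationship at the heart of the misdiagnosis can be shown with a few lines of arithmetic. The sketch below is illustrative rather than plant-specific; it uses textbook Antoine-equation constants for water, not data from TMI:

```python
# Illustrative only: Antoine-equation fit for water, constants valid
# roughly 99-374 deg C (textbook values, not plant data).
A, B, C = 8.14019, 1810.94, 244.485   # P in mmHg, T in deg C

def psat_bar(t_celsius):
    """Approximate saturation pressure of water, in bar."""
    p_mmhg = 10 ** (A - B / (t_celsius + C))
    return p_mmhg * 0.00133322        # mmHg -> bar

# A PWR primary loop runs near 155 bar with coolant around 290-325 deg C.
# Saturation pressure at a roughly 315 deg C hot-leg temperature:
print(f"Psat at 315 C: about {psat_bar(315):.0f} bar")
# If a steam-space leak drags primary pressure below that value while
# temperature holds steady, the coolant flashes to steam. The resulting
# voids displace water into the pressurizer, so level RISES even as
# inventory drains away -- exactly the indication that misled the crew.
```

At normal operating pressure, the coolant is subcooled by a wide margin; that margin shrinks as pressure falls, not as level changes, which is why the saturation relationship matters more than pressurizer level alone.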

There was also a generally accepted assumption that safety analysis could be simplified by focusing on the worst case accident. If the system could be proven to respond safely to the worst case conditions, the assumption was that less challenging conditions would also be handled safely. The focus on worst case scenarios, emphasized by very public emergency core cooling system hearings, took some attention away from analyzing other possible scenarios.

Lessons learned

  • Following the TMI accident, there was a belated push to complete the loss of flow and loss of coolant testing program that the Atomic Energy Commission had initiated in the early 1960s. For a variety of political, financial, and managerial reasons, that program had received low priority and was chronically underfunded and behind schedule.
  • Today’s plant designs undergo far more rigorous testing programs and have better, more completely validated computer models.
  • Far more attention has been focused on the possible impact of events like “small break” loss of cooling accidents.
  • All new operators at pressurized water reactors learn to understand the importance of the relationship between saturation pressure and saturation temperature.

At the time of the accident, there was no defined system for sharing experiences gained during reactor plant operation with all the right people. TMI might have been a minor event if information about a similar event that happened in September 1977 at Davis-Besse—a similar, but not identical, plant—had made it to the control room staff at TMI-2.

Certain sections of the NRC knew about the Davis-Besse event, engineers at the reactor supplier knew about it, and even the Advisory Committee on Reactor Safeguards was aware of the event, but there was no established process for sharing the information with other operating units.

Lesson learned: After the accident, the industry invested a great deal of effort into a sustained program to share operating experience.

The plant designers also did not do their operators any favors in the design and layout of the control room. Key indicators were haphazardly arranged, there were thousands of different parameters that could cause an alarm if out of their normal range, and there was no prioritization of alarming conditions.

Lesson learned: After the accident, an extensive effort was made to improve the control rooms for existing plants and to devise regulations that increased the attention paid to human factors, man-machine interfaces, and other facets of control room design. All plants now have their own simulators that are designed to mimic the particular plant and are provided with the same operating procedures used in the actual plant. Operators are on a shift routine that puts them in the simulator for a week at a time every four to six weeks.

The initiating failures that started the whole sequence took place in the steam plant, a portion of the power plant that was not subject to as much regulatory or design scrutiny as the portions that were more closely associated with the nuclear reactor and its direct cooling systems.

Lesson still being learned: An increased level of attention is now paid to structures, systems, and components that are not directly related to a reactor, but there is still a confusing, expensive, and potentially vulnerable system that attempts to classify systems and give them an appropriate level of attention.

For at least 10 years prior to March 28, 1979, there had been an increasingly active movement opposed to the use of nuclear energy. At the same time, the industry was expanding near many major media markets and was one of the fastest-growing sources of employment, especially for people interested in technical fields. The technology was often in the spotlight, with the opposition claiming grave safety concerns and the industry—rather arrogantly, quite frankly—pointing to what had been a relatively unblemished record.

The industry did not do enough in the way of public outreach or routine advertising to explain the value of its product. It rarely compared the characteristics of nuclear energy against other possible electricity sources—mainly because there are no purely nuclear companies. In addition, the electric utility industry has a long tradition of preferring to be quiet and left alone.

The accident at TMI developed slowly over several days, but it became a major news story by mid-morning on the first day. Not only was it a “man bites dog” unusual event, but it was an event that the nuclear industry, the general public, the government, and the news media had been conditioned to take very seriously. Although nuclear experts from around the United States sprang into action to assist where they could at the plant itself, there was no established group of communications experts who could help reporters understand what was happening.

No reporter on a deadline is motivated or willing to wait for information to be gathered, evaluated, and verified. In the absence of real experts willing to talk, they turned to activists with impressive sounding credentials who were quite willing to speculate and spin tall tales designed to generate public interest and concern.

Lesson not yet learned: Although most decision makers in the nuclear industry understand the importance of planned maintenance systems to keep their equipment in top condition and the importance of a systematic approach to training to keep their employees performing at the top of their game, they have not yet implemented an effective, adequately resourced, planned communications program that helps to ensure that the public and the media understand the importance of a strong nuclear energy sector.

Planned communications efforts have a lot in common with planned maintenance systems. They might appear to be expensive with little immediate return on investment, but repairing a broken public image is almost as challenging and expensive as repairing a major plant component that failed due to a decision to reuse a gasket or postpone an oil change. As the guy in the commercial says, “You can pay me now or pay me later.”

That is probably the most tragic part of the TMI event. Despite being the subject of several expensively researched and documented studies, countless articles, thousands of documented training events, and more than a handful of books, the event could have—and should have—made the established nuclear industry stronger and the electric power generation system around the world cleaner and safer.

So far, however, TMI Unit 2’s destruction remains a sacrifice made partially in vain to the harsh master of human experience.

Note: I have purposely decided to avoid attempting to discuss the performance of the NRC or to judge their implementation of the lessons that were available to be learned. That effort would require a post at least twice as long as this one.

Additional Reading

General Public Utilities, Three Mile Island: One Year Later, March 28, 1980

Gray, Mike, and Ira Rosen, The Warning: Accident at Three Mile Island: A Nuclear Omen for the Age of Terror, W. W. Norton, 1982

Ford, Daniel, Three Mile Island: Thirty Minutes to Meltdown, Penguin Books, 1981

Hampton, Wilborn, Meltdown: A Race Against Disaster at Three Mile Island: A Reporter’s Story, Candlewick Press, 2001

Report of the President’s Commission on the Accident at Three Mile Island: The Need for Change: The Legacy of TMI, October 1979

Three Mile Island: A Report to the Commissioners and to the Public, January 1980





Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

Three years of available lessons from Fukushima

By Rod Adams

During the three years since March 11, 2011, the world has had the opportunity to learn a number of challenging but necessary lessons about the commercial use of nuclear energy. Without diminishing the seriousness of the events in any way, Fukushima should also be considered a teachable moment that continues to be open for thought and consideration.

As a long time member of the learning community of nuclear professionals, I thought it would be worthwhile to start a conversation that will allow us to document some of the “take-aways” from the accident and the costly efforts to begin the recovery process.

Since there are many people who are more qualified than I am to discuss the specific design details of the reactors that were destroyed and the specific site on which they were installed, I will shy away from those topics. Feel free, however, to add your expert views in the comment thread.

Before Fukushima

The overriding lesson for me is a recognition that people who favor the use of nuclear technology were quite unprepared for an event like Fukushima. Our technology had been working so well, for so long, that we had become complacent perfectionists.

In some ways, we were collectively similar to perennial honor roll students who prefer doing homework to engaging in risky sports. We have been “grinds” who studied hard, followed the rules, became the teachers’ pets, scored high marks on all of the routine tests, and were utterly devastated the first time we moved to a new level and encountered a test so difficult that our first attempt to pass resulted in a D-.

Many of us—and I will freely include myself in this category—had become so confident in our ability to earn outstanding grades that we did not pay attention to the boundaries of the box in which our confidence was justified.

We confidently accepted the fact that our technology was safe, had numerous layers of defense-in-depth, and was designed to be able to withstand external events, but we forgot that those statements were only true within a certain set of bounding parameters we normally call the “design basis.” Because we had only rarely approached those boundaries, we had no real concept for what might happen once we found ourselves outside of our expected conditions without most of the expected supporting tools.

An extended period of exceptional performance not only made us over-confident, it raised expectations to an unsustainable level. Corporate executives, the media, and government leaders played roles similar to the parents, teachers, and administrators associated with precocious straight A students. They were used to dealing with serious mistakes and outright failures among the rest of the student body, but were surprised and flustered when one of us let them down.

We also failed to understand that we were in the same vulnerable and unpopular position as the geeks who continuously break the curve and make others look bad, year after year. As the excellent report cards kept coming, we did not pay attention to the effect those high grades were having on our peers. We did not see other students gathering into groups after the grades were posted. We did not sense their anger or overhear their plans to be ready to take advantage the first time we gave them an opportunity.

We had no similar plans prepared in case we failed; we expected we would keep performing exceptionally well.

The Fukushima test

When the nearly impossible test came, our technology performed as designed, but that was not good enough. Our technology was not designed to match a natural disaster that destroyed all available sources of electrical power. The loss of vital power at a large, multi-unit facility interfered with the ability to understand plant conditions and to put water into the places that desperately needed it.

Aside: That is not to say that it could not have been designed to handle the imposed conditions. As the performance of Onagawa and Fukushima Daini demonstrates, it is possible—through better design or more fortuitous operational decisions—to improve the chances of avoiding the consequences seen at Fukushima Daiichi, but there is never a guarantee of perfection. End Aside.

Without water flow, the rate of heating inside the cores was determined by inescapable laws of physics. As nuclear energy and materials experts have been predicting for nearly 50 years, once the temperatures inside the water-cooled cores reached a certain point, the zirconium cladding of the fuel rods began reacting with the water (H2O) to chemically capture the oxygen and release the hydrogen.
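The scale of that hydrogen source follows directly from the stoichiometry of the zirconium–steam reaction, Zr + 2H2O → ZrO2 + 2H2. The back-of-the-envelope sketch below uses standard molar masses; the figures are illustrative, not taken from the accident analyses:

```python
# Zr + 2 H2O -> ZrO2 + 2 H2 : each mole of zirconium oxidized by steam
# frees two moles of hydrogen gas (and releases substantial heat).
M_ZR, M_H2 = 91.224, 2.016          # molar masses, g/mol
STP_MOLAR_VOL = 22.414              # L/mol of ideal gas at 0 C, 1 atm

def hydrogen_from_zirconium(kg_zr):
    """Hydrogen produced (kg, and liters at STP) when kg_zr of
    cladding is fully oxidized by steam."""
    mol_zr = kg_zr * 1000.0 / M_ZR
    mol_h2 = 2.0 * mol_zr
    return mol_h2 * M_H2 / 1000.0, mol_h2 * STP_MOLAR_VOL

h2_kg, h2_liters = hydrogen_from_zirconium(1.0)
print(f"1 kg Zr -> {h2_kg * 1000:.0f} g H2 ({h2_liters:.0f} L at STP)")
# A reactor core holds tens of tonnes of zirconium cladding, so even
# partial oxidation yields hundreds of kilograms of explosive gas.
```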

Fukushima Daiichi plant designers expected that human operators would pay attention to the pressure building inside the primary containment and release some of the steam before breaking the containment. They apparently neglected to consider that operators would not be able to monitor pressure using their installed systems without any available electrical power.

For valid reasons, the designers did not make containment relief an automatic function or even an easy process. They probably did not expect that the operators would wait for a politician located at the end of a tenuous communications link to make the decision to release that pressure, that the operators might feel the need to wait for a report that evacuations had been completed, or that the resulting delay could allow pressure to rise so high that it would be almost impossible to open the necessary valves.

The operators performed their tasks with dedication and tenacity, but their efforts fell a little short of the heroically successful similar efforts at Fukushima Daini. It’s worth mentioning one particular example of unfortunate timing: the Daiichi operators invested dozens of back-breaking man-hours to install a mobile generator and run heavy cables across 200 obstacle-filled meters in order to provide emergency power. They completed the hookup at 1530 on March 12. At 1536, the first hydrogen explosion injured five workers, spread contamination, and damaged the just-installed equipment enough to prevent it from functioning. (See pages 8–9 of the INPO Special Report on the Nuclear Accident at the Fukushima Daiichi Nuclear Power Station.)

The excessive pressures in the primary containments did what excessive pressure almost always does: they eventually found weak points that opened to release the pressure. The separated hydrogen left the containments, found some available oxygen, and did what comes naturally: it exploded, further complicating the event and providing a terrific visual tool for the jealous competitors who were ready to take advantage of our failure.

The lessons available from that sequence of events were not design-specific. More foresight in the design process, a solid understanding of basic materials and thermodynamic principles, and, if all else fails, empowered operators with the ability to resist political pressure can all further reduce the potential for core damage and radioactive material release.

Once one of us encountered a test we could not pass, we were dazed and confused, obviously unsure what to do next. That period of uncertainty provided a wonderful opening for the opponents and competitors to take charge of the narrative, emphasize our failure under our own mantra of “an accident anywhere is an accident everywhere” and spread the word that we should not be allowed to get up anytime soon. They reminded formerly disinterested observers that we had fallen far short of our claimed perfection, took the opportunity to land a few blows while we were down, and made arrangements to ensure that our recovery was as difficult and expensive as possible.

Fears of radiation

As a group, nuclear technologists have often emphasized our cleanliness, our ability to operate reliably, and our improving cost structure.

We overlooked the efforts over the years by opponents and competitors to raise special fears about the materials that might be released in the event of an accident that breaks our multiple barriers. Though we all recognize that exposure to radioactive material at certain doses is dangerous, our opponents—sometimes aided by our own perfectionist tendencies—have instilled the myth that exposure to the tiniest quantities also carries unacceptable risk.

We had become so good at keeping those materials tightly locked up that we accepted ever-tightening standards, because they were easy enough to meet under routine conditions. Even under the “beyond design basis” conditions at Fukushima, our multiple barriers did a good enough job of retaining dangerous materials so that there were no immediate radiation-related injuries or deaths, but that isn’t good enough.

There were dangerous radiation levels on site; workers avoided injuries and fatalities only by paying attention and minimizing exposure times. The myth of “no safe dose,” and the reality that any possible effects may not appear until the distant future, have continued to feed fears that the consequences are uncertain and will probably get worse.

The no-safe-dose assumption has made us terribly vulnerable to an effort to force us to continue meeting the expectation of zero discharges. Our stuff does “stink” on occasion; in this case, if we try to hold it all in, we are going to eventually suffer severe distress. The tank farm at Fukushima, with its millions of gallons of tritiated water, cannot expand forever, but our opponents will prevent controlled releases as long as they can to make the pain as large as possible.

It’s worth quoting the International Atomic Energy Agency’s recent report about its late 2013 visit to Japan to provide an independent peer review of recovery actions. This passage comes in the context of a carefully-phrased “advisory point” that strongly recommends that Japan prepare to discharge water where most isotopes other than tritium have been removed.

… the IAEA team encourages the Government of Japan, TEPCO and the NRA to hold constructive discussions with the relevant stakeholders on the implications of such authorized discharges, taking into account that they could involve tritiated water. Because tritium in tritiated water (HTO) is practically not accumulated by marine biota and shows a very low dose conversion factor, it therefore has an almost negligible contribution to radiation exposure to individuals.

Reliability and perfection

Not only did the accident destroy the ability of four units to ever operate again, it has reminded us that reliability is not just a matter of technology and operational excellence. If the powers-that-be refuse permission to operate, the best technology in the world will fail at the task of providing reliable power. Our competitors are perfectly content to take over the markets that we are failing to serve, and the longer they perform, the easier it is for people to assert that we are not needed.

We have also been taught that we have no real control over cost. The aftermath of Fukushima has shown that it’s possible to establish conditions in which even the most dire prediction of economic cost is an underestimate. There is no upper bound under conditions where perfection is the only available standard.

If we do not learn how to occasionally fail, how to make reasonable peace with our powerful opposition, and how to help everyone understand that searching for perfection does not mean achieving it is actually possible, nuclear energy does not have much hope for rapid growth in the near future.

That would be a tragic situation for the long term health and prosperity of humanity. The wealthy portions of our current world population can probably do okay for a while without much nuclear fission power. However, that choice would harm the underpowered people who are already living and innumerable future generations who will not live as well as they could if we shy away from improving and using nuclear fission technology.

Fission technology is not perfect and poses a certain level of risk, but it is pretty darned good and the risks are well within the range of those that we accept for many other technologies that can perform similar tasks.


INPO 11-005 Special Report on the Nuclear Accident at the Fukushima Daiichi Nuclear Power Station






Is St. Lucie next on the antinuclear movement target list?

By Rod Adams

The most informative paragraph in a lengthy article titled Cooling tubes at FPL St. Lucie nuke plant show significant wear published in the Saturday, February 22, 2014, edition of the Tampa Bay Times is buried after the 33rd paragraph:

In answers to questions from the Tampa Bay Times, the NRC said the plant has no safety issues and operates within established guidelines. That includes holding up under “postulated accident conditions.”

Unfortunately, that statement comes after a number of paragraphs intended to cause fear, uncertainty, and doubt in the minds of Floridians about the safety of one of the state’s largest sources of electricity. St. Lucie is not only a major source of electricity, but it is also one of the few power plants in the state that is not dependent on the tenuous supply of natural gas that fuels about 60 percent of Florida’s electrical generation.

In March 2013, at the height of the political battle about the continued operation of the San Onofre Nuclear Generating Station—a battle that ended with the decision to retire both of San Onofre’s units—Southern California Edison issued a press release that contained words of warning for the rest of the nuclear industry.

The Nuclear Energy Institute’s Scott Peterson called the Friends of the Earth claims “ideological rhetoric from activists who move from plant to plant with the goal of shutting them down.” He goes on to say: “Not providing proper context for these statements incorrectly changes the meaning and intent of engineering and industry practices cited in the report, and it misleads the public and policymakers.”

In San Onofre’s case, the context of the public discussion should have included a widespread understanding that the decision to shut down the plant was based on a single steam generator tube leak that was calculated to be one-half of the allowable operating limit. That leak was detected by an alarm on a radiation sensing device sensitive enough to alarm with a leak that might have exposed someone to a maximum of 5.2 × 10⁻⁵ millirem.

The antinuclear movement has a long history of using steam generator material conditions as a way to force nuclear plants to shut down. Most nuclear energy professionals will freely admit that the devices have been problematic since the beginning of the industry. There was a period of acrimonious litigation when the utilities sued the vendors because the devices did not last as long as initially expected. However, with an extensive replacement program, focused research, attention to detailed operating procedures, and material improvements, steam generators are more reliable today than they were 25 or even 15 years ago.

It is also worth understanding that steam generator leaks do not cause a public health issue. Operating history shows that essentially all of the leaks have been modest in size and resulted in tiny releases of radioactive material outside of the plant boundaries. U-tubes are part of the primary coolant boundary and are thus classified as “safety-related.” Their integrity is important to reliable plant operation, but the 30 percent of the plants operating in the United States that are boiling water reactors don’t even try to keep radioactive coolant out of the steam plant.

The Tampa Bay Times feature article, written by Ivan Penn, included quotes from some of the same players involved in the—unfortunately—successful effort to close down San Onofre. Their words have that familiar ring of “ideological rhetoric,” indicating that St. Lucie might be high on the target list for the activists who move from plant to plant.

Arnie Gundersen, whom Penn correctly identified as a frequent nuclear critic, provided a fairly explicit quote supporting the guess that the antinuclear movement has selected its next campaign victim: “St. Lucie is the outlier of all the active plants.” Later in the article, he stated that St. Lucie’s steam generators have a hundred times as many “dents” as the industry average. That might be true, but that is mainly because the industry average is in the single digits. The important measure is not the number of wear spots, but their depth.

Daniel Hirsch, described as a “nuclear policy lecturer” from the University of California at Santa Cruz, used more colorful language, “The damn thing is grinding down. They must be terrified internally. They’ve got steam generators that are now just falling apart.” Like Gundersen, Hirsch has fought against nuclear energy for several decades.

David Lochbaum, from the Union of Concerned Scientists, indicated that he thought that the plant owners were gambling, even though their engineering analysis, which was supported by the Nuclear Regulatory Commission, indicates that the plant has no safety issues and is operating within its design parameters.

Those quotes from the usual suspects, spread throughout the article, are balanced by quotes explaining or supporting FPL’s selected course of action to continue operating and to continue conducting frequent inspections to ensure that conditions do not approach limits that would require additional action.

Here is an example from Michael Waldron, a spokesman for FPL, that appears near the end of the article:

“We have very detailed, sophisticated engineering analysis that allow us to predict the rate of wear, and we are actually seeing the rate of wear slow significantly.”

Even though it is balanced with an almost equal number of pro and con quotes, Ivan Penn’s article includes a number of phrases that appear to be carefully selected to increase public uncertainty and worry about St. Lucie’s continued operation. It is also possible to attribute the words to the author’s desire to add drama and emotion to attract additional readers; that can be difficult to do while maintaining accuracy. Unfortunately for people who love drama, nuclear power plants are quite boring. The vast majority of the time, they simply keep working.

Here is an example of the type of rhetorical enhancement that frustrates people who value the accurate use of words:

Worst case: A tube bursts and spews radioactive fluid. That’s what happened at the San Onofre plant in California two years ago.

As stated above, the tube at San Onofre did not “burst” and it did not “spew” radioactive fluid. A tube developed a small, 75–85 gallon-per-day leak from the primary system into the secondary steam system. The installed equipment provided an immediate indication of a problem and the operators promptly took a very conservative course of action to shut down the plant.

While the responsible engineers were performing their detailed investigations and drafting their recommendations, the activists and the politicians took charge of the public communications and worked hard to ensure that San Onofre never restarted. Their focused misinformation offensive resulted in the early retirement of an emission-free power plant that reliably provided 2200 MW of electricity at a key node in the California power grid.

Today, local residents in California are not safer, the air is not cleaner, and the wholesale price of power has already increased by more than 50 percent. Several large-scale infrastructure investments are being planned to restore resiliency to California’s grid. The primary beneficiaries of the antinuclear actions are the people who sell the 300–400 million cubic feet of natural gas needed every day to make up for the loss of San Onofre.
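That 300–400 million cubic feet figure is consistent with simple replacement arithmetic. The sketch below assumes a typical combined-cycle heat rate and pipeline-quality gas heating value; neither number comes from the article:

```python
# Replacing San Onofre's 2,200 MWe of round-the-clock output with
# gas-fired generation, using typical combined-cycle assumptions.
PLANT_MW = 2200
HEAT_RATE_BTU_PER_KWH = 7000        # efficient combined-cycle plant (assumed)
GAS_BTU_PER_SCF = 1030              # typical pipeline-quality gas (assumed)

kwh_per_day = PLANT_MW * 1000 * 24                      # daily generation
btu_per_day = kwh_per_day * HEAT_RATE_BTU_PER_KWH       # daily fuel heat input
mmcf_per_day = btu_per_day / GAS_BTU_PER_SCF / 1e6      # million cubic feet

print(f"About {mmcf_per_day:.0f} million cubic feet of gas per day")
```

Less efficient replacement units, with higher heat rates, push the figure toward the top of the quoted range.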

Let’s hope that the regulators and the politicians do a better job of finding sound technical advice, and that the responsible experts do a better job of helping people to understand that St. Lucie is safe, even if its steam generator tubes have more wear marks than anyone wants.





Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

How can we stop premature nuclear plant closures?

By Rod Adams

During an earnings call on February 6, 2014, Exelon Corporation indicated that it may decide to shut down two or more of its nuclear reactors because of poor economic return. Exelon spokespeople have been warning about the effects of negative electricity prices for several years.

On February 8, 2013, almost exactly a year ago, the Chicago Tribune published a story titled "Exelon chief: Wind-power subsidies could threaten nuclear plants." The Tribune noted that Christopher Crane, Exelon's CEO, was concerned about the continued operation of some of the units in the company's large fleet of reactors:

“What worries me is if we continue to build an excessive amount of wind and subsidize wind, the unintended consequence could be that it leads to shutting down plants,” Crane said in an interview.

Crane said states that have helped to subsidize wind development in order to create jobs might find themselves losing jobs if nuclear plants shut down.

The Chicago-based company doesn’t have any immediate plans to mothball nuclear plants, although at least one analyst has predicted that could occur as soon as 2015.

“We continue to believe that our assets are some of the lowest-cost, most-dispatchable baseload assets and don’t have any plans at this point of early shutdown on them,” Crane said.

If the discussed nuclear reactor shutdowns occur, they would be numbers six and seven in the count of prematurely closed nuclear power plants in the United States since the beginning of 2013. Though there are certainly antinuclear activists and analysts who will point to this record with a delighted “We told you so,” this is not a trend that bodes well for the economic stability of the United States or for the continued effort of the US to reduce its dependence on hydrocarbon fuel sources.

It is also a trend that puts a number of nuclear professionals at risk of a significant economic setback and life-altering job loss, despite their part in an exceptional, sustained record of performance improvement.

During a recent industry gathering hosted by Platts, Dr. Pete Lyons pointed to the trend of shutting down well-maintained and licensed nuclear power plants as something that is worrying the current Administration, especially because it will make it difficult to achieve progress in reducing CO2 emissions.

Jim Conca, writing for Forbes, noticed Exelon’s announcement and wondered about its effect on a number of important attributes of energy production. He reminds his readers that nuclear plants represent a large fraction of the emission free electricity produced in the United States each year. He also points out that the longer nuclear plants run and produce revenue, the better. Construction costs are already sunk, the plants already have stored inventories of spent fuel, and they already require some form of decommissioning. The costs and pollution associated with all of those features should be spread over as many kilowatt hours of generation and revenue as possible.

There are several things that nuclear energy advocates can do that might help to eliminate the pressures that have been encouraging nuclear plant operating companies to either shut down or consider shutting down useful assets.

  1. Learn enough about the natural gas market to discuss it with your friends and colleagues
  2. Advocate policies that put a fair value on generating clean electricity
  3. Advocate policies that reward generating sources for reliability
  4. Cheer efforts to market electricity to restore growth in demand

During the winter of 2013-2014, there have been a number of examples of the risks of concentrating heating, industrial uses, and electricity production on natural gas simply because it has been accepted as "clean" and has seemed abundant and cheap since 2008, which is apparently a long time ago in the memory of some market observers and decision makers. The Nuclear Energy Institute continues to produce excellent materials and testimony about the importance of fuel diversity; it needs as much assistance as it can get in spreading the message.

This winter there have been reported shortages and price spikes that have exceeded $100 per MMBTU. That is roughly equivalent to oil prices hitting $580 per barrel, since every barrel of oil contains 5.8 MMBTU of heat energy. Natural gas price spikes have not been limited to the northeast; spikes exceeding $20 per MMBTU (five times the pre-winter price) have occurred in the mid-Atlantic, the Pacific Northwest, the Chicago area, southern California and even Texas. Last week, a price spike of $8.00 per MMBTU even showed up at Henry Hub, at the intersection of several prime US gas production areas.
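The oil-equivalence comparison above is a straight unit conversion. Here is a minimal sketch of that arithmetic, using the 5.8 MMBTU-per-barrel factor quoted in the text (function name and sample prices are illustrative):

```python
# Energy-equivalence check for the gas prices quoted above.
# 1 barrel of crude oil contains roughly 5.8 MMBTU of heat energy,
# the conversion factor used in the article.

MMBTU_PER_BARREL = 5.8

def oil_equivalent_price(gas_price_per_mmbtu: float) -> float:
    """Return the crude oil price ($/barrel) with the same cost per unit of heat."""
    return gas_price_per_mmbtu * MMBTU_PER_BARREL

# A $100/MMBTU gas spike equals $580/barrel oil on an energy basis
print(oil_equivalent_price(100))  # 580.0
# The $20/MMBTU regional spikes equal $116/barrel oil
print(oil_equivalent_price(20))   # 116.0
```

Seen this way, even the "moderate" $20/MMBTU regional spikes priced natural gas above crude oil on a per-unit-of-heat basis.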

Henry Hub spot prices for week ending Feb 5, 2014

When gas prices reach the levels seen this winter, many customers stop buying, even if they have no alternative fuel source available. If they are operating an industrial facility that needs the gas to run, they stop operating. If they are operating a household that needs the gas to stay warm, they put on more sweaters. If they are operating a school system, they shut the doors and tell the children to stay home.

In markets where wholesale electricity prices have been deregulated, gas-fired generators are usually the marginal price setters, so the spikes in natural gas prices have directly affected electricity prices at times of peak demand, driving them to rarely seen levels. It remains to be seen how this winter's electricity price spikes have affected revenues at generating companies, but they are unlikely to have harmed the bottom line. Unfortunately, brief spells of profitability may not be enough to encourage nuclear plant operators to keep running their plants if wholesale prices quickly return to loss-making levels for much of the year.

Though many of us value the fact that nuclear plants do not produce any greenhouse gases or other air or water pollutants, that feature does not produce any additional revenue for plant owners. For the past twenty years, every alternative to fossil fuel except nuclear and large hydroelectric dams has been given direct subsidies, preferential tax treatment, and quotas. Fossil fuel generators have not been charged for their use of our common atmosphere as a waste disposal site. It is time to put pressure on our representatives to pass legislation that establishes a price on carbon so that investors are encouraged to fairly value clean generation.

My personal favorite proposal is James Hansen's fee-and-dividend approach, in which all hydrocarbon fuels pay a fee based on their carbon content and the public receives an equal share of the revenue. People who are careful and do not use much fuel will see a net increase in their income; people who use more than average will see a net cost. Investors will recognize that it is worth their effort to identify technologies that do not emit CO2.
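The fee-and-dividend mechanism can be sketched in a few lines. The fee rate and household carbon footprints below are illustrative assumptions, not figures from Hansen's proposal:

```python
# Sketch of fee-and-dividend: fees collected on carbon content,
# revenue returned as an equal per-household dividend.
FEE_PER_TON_CO2 = 30.0  # hypothetical carbon fee, $ per ton of CO2

# Hypothetical annual household CO2 footprints, in tons
footprints = {"frugal": 8.0, "average": 16.0, "heavy": 30.0}

# All fee revenue is returned as an equal per-household dividend
total_fees = FEE_PER_TON_CO2 * sum(footprints.values())
dividend = total_fees / len(footprints)

# Net effect per household: dividend received minus fees paid
net_effect = {name: dividend - FEE_PER_TON_CO2 * tons
              for name, tons in footprints.items()}
print(net_effect)  # below-average users come out ahead; heavy users pay
```

Because every dollar collected is paid back out, the scheme is revenue-neutral by construction; it simply transfers money from above-average emitters to below-average ones, which is exactly the price signal described above.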

We should also advocate policies that reward generators for their ability to produce reliable electricity. Reliability is a valuable service that helps ensure the grid has an adequate capacity margin, and that we avoid the kind of volatility that was seen this past winter and that nearly bankrupted California in 2001.

Finally, we should seek to reverse the reluctance to tout the product we produce. Electricity is a wonderful tool that makes life better. It can be produced using a variety of fuels, though most readers here would probably agree that uranium and thorium are the best available electricity generation fuels. It’s time to recognize that the energy business is competitive. Like all competitive enterprises, it rewards people who fight for market share by producing a better product and by taking effective action to ensure that people know they are producing a better product.

While traveling through the southeastern US last week, I heard an advertisement that made me smile: Alabama Power was offering to give people water heaters if they switched from gas to electric water heating. Why have we allowed competing energy producers to steal markets for so many years without fighting back?

I encourage people in the electricity production business to download a copy of the January/February 2014 issue of EnergyBiz and read the article titled "Gas Competes with Power: A New Foundation Fuel, New Business Channels." While you are at it, you might also enjoy reading the challenge that NRG Energy's David Crane lays down for the traditional business of generating and distributing electricity in his guest opinion piece titled "Keep Digging: What Lethal Threat?"


Exelon’s Clinton Power Station
