
TMI operators did what they were trained to do

Note by Rod Adams:  This post has a deep background story. The author, Mike Derivan, was the shift supervisor at the Davis Besse nuclear power plant (DBNPP) on September 24, 1977, when it experienced an event that started out almost exactly like the event at Three Mile Island on March 28, 1979.

The event began with a loss of feedwater to the steam generator. The rapid halt of heat removal resulted in a primary loop temperature increase, primary coolant expansion, and primary system pressure exceeding the set point for a pilot-operated relief valve in the steam space of the pressurizer. As at TMI, that relief valve stayed open after system pressure was lowered, resulting in a continuing loss of coolant. For the first 20 minutes, the plant and operator response at Davis Besse were virtually identical to those at TMI.

After that initial similarity, Derivan had an “Ah-ha” moment and took actions that made the event at Davis Besse turn into a historical footnote instead of a multi-billion dollar accident.

When Three Mile Island happened and details of the event emerged from the fog of initial coverage, Mike was more personally struck than almost anyone else. He has spent a good deal of time during the past 35 years trying to answer questions about the event, some that nagged and others that burned more intensely.

In order to more fully understand the narrative below, please review Derivan’s presentation describing the events at Davis Besse, complete with annotated system drawings to show how the event progressed.

This story is a little longer and more technical than most of the posts on ANS Nuclear Cafe or Atomic Insights (where this post originally appeared). It is intended to be a significant contribution to historical understanding of an important event from a man with a unique perspective on that event. If you are intensely curious about nuclear energy and its history, this story is worth the effort it requires.

The rest of this post is Mike’s story and his analysis, told in his own words.


By Mike Derivan

My first real introduction to the Three Mile Island-2 (TMI) accident happened on Saturday, March 31, 1979, a few days after the accident. TMI-2 was a Babcock and Wilcox (B&W) pressurized water reactor plant.

At the Davis Besse nuclear power plant (DBNPP) in Ohio where I worked, we initially heard something serious had happened at TMI-2 as early as the day of the event, March 28, and interest was high because TMI was our sister plant. DBNPP also is a B&W PWR plant.

Actual details were sketchy for the next couple of days, and mainly by watching the nightly TV news it became clear to me that something serious was going on. It was also clear that conflicting information was being reported: some reports indicated there had been radiation releases, while the plant owner claimed there had been none.

I even remember hearing the words “core damage” mentioned for the first time. It was on a Saturday TV news report that I saw the first explanation, using pictures of the system, of the suspected sequence of events, and it became clear to me that the pilot-operated relief valve had stuck open.

My reaction was gut-wrenching and I was also in disbelief that TMI did not know what had happened at Davis Besse. That evening I watched the Walter Cronkite news report. I sat there with total disbelief as he discussed potential core meltdown. Disbelief because if you were a trained reactor operator in those days it was pretty much embedded in your head that a core meltdown was not even possible; and here that possibility was staring me right in the face.

Cronkite’s report was also my first exposure to the infamous hydrogen bubble story. I had enough loss of coolant accident (LOCA) training to understand that some hydrogen could be generated during LOCAs; after all we had containment vessel hydrogen concentration monitoring and control systems installed at our plant. But the actual described scenario at TMI seemed incredible, except that it had apparently happened.

I would expect that my reaction was the same as many nuclear plant operators at that time. The exception was that the apparent initiating scenario had actually happened to me 18 months earlier at Davis Besse and I just couldn’t get the question out of my mind: “Why didn’t they know?”

The real root cause of the TMI accident

Since the time of the TMI accident, virtually hundreds of people have stuck their noses into its root cause. Both the Kemeny and Rogovin investigations identified a lot of programmatic “stuff” that needed to be fixed, and I agree with most of it.

I feel, however, that both of them skirted one important issue by using different flavors of “weasel words” in the discussion of operator error. The two reports handled that specific topic a bit differently, but the discussions got couched with side topics of contributing factors. The general consensus of all the current discussion summaries I read is that TMI was caused by operator error.

The TMI operators did make some operator errors and I am not denying that. But my contention is that all the errors they made came after they got outside of the design-basis understanding of PWRs at that time. It is no surprise to anyone that when a machine this complicated gets outside of its design basis, anything might happen. You basically hope for the best, but you are going to have to take what you get.

Fukushima proves that, and everyone knows why/how Fukushima got outside of its design basis. The how/why that TMI operators got outside of their design basis is going to be the focus of my discussion. I will also discuss the fact that I think this was understood at the time of the investigations, but it was consciously decided not to pursue it.

My whole point of contention is that turning off the high pressure injection flow early in the event, in response to the increasing pressurizer level, is the crux of the whole operator error argument. All discussions say that if the operators hadn’t done that, the TMI event would have been a no-never-mind. And I agree.

But nobody really wants to believe that they were told to do that for the symptoms they saw.

In other words, they were told to do that by their training, compounded by bad, tunnel-vision procedure guidance. I have believed this since the day I understood what happened at TMI. Furthermore, the TMI operators were trying to defend their actions from a position of weakness; their core was melted, and nobody wanted to believe them.

I am not in a position of weakness on this issue; my event came out okay at DBNPP, so I have no reason not to be totally honest and objective. During the precursor event at DBNPP, we also turned off high pressure injection early in the event in response to the symptoms that we saw, and for the same reason the TMI operators did it 18 months later: we were told to do it that way.

This fact is apparently a hard pill to swallow. But if it is hard for you to accept, just imagine how I felt watching TMI unfold in real-time.

And right there is the crux of the issue. Once those high pressure injection pumps were off, both plants were then outside the design-basis understanding for that particular small break LOCA.

So you hope for the best, but take what you get. But still, obviously an error has been made if not taking that action would have made the event a no-never-mind.

So who exactly made the error? Both the Kemeny and Rogovin reports discuss the problems with the B&W simulator training for the operators. The important point that they both apparently missed (or didn’t want to deal with, which I prefer as the explanation) is that this is really an independent two-part problem.

I will refer to controlling high pressure injection during a small break LOCA as part A of the problem, and to the actual physical PWR plant response to a small break LOCA during a leak in the pressurizer steam space as part B of the problem.

It really is that simple. B&W was training correctly for high pressure injection control (part A) for small break LOCAs in the water space of their PWR. But neither they nor Westinghouse correctly understood the correct plant response for a small break LOCA in the pressurizer steam space.

By omission they were not training correctly for a small break LOCA in the pressurizer steam space (part B). To make matters worse, B&W was overstressing in training the importance of the part A “rules”, to the extent that an operator would fail a B&W administered operator certification exam for failure to correctly implement the part A rules.

Thus, when fate would have it and the two occurrences (part A and part B) combined in the real world, where the plant responds per the rules of Mother Nature, the B&W training and procedures ended up leading the operators to actions that put them outside the actual design basis, not the falsely perceived (and trained upon) design basis.

Up until very recently my argument has been one using just simple logic and sheer numbers of operators involved. In Davis Besse’s September 1977 event, there were five licensed operators involved in that decision, either by direct action or complacent compliance. In other words, all five agreed that it was the right thing to do. Of course, it wasn’t the right thing to do, but nobody objected because it was the correct part A thing to do and nobody understood the part B of the problem.

Eighteen months later at TMI, in March 1979, an additional number of operators (just how many depends on the time line) repeated the same initial wrong actions. So we have about a dozen operators, at two separate plants 18 months apart, all doing the same thing and all convinced that they were doing the right thing.

Is it even conceivable to think that they did not all believe they did the right thing according to part A? I just don’t believe so; of course, we are all arguing from a position of weakness. It is the wrong thing to do for part A and part B combined, so nobody really wants to believe that we were trained to do it.

But as I explained, it is really the two-part problem that created the issue. My point can be further emphasized by the fact that the Nuclear Regulatory Commission’s Region III had heartburn over the report that DBNPP submitted for its event. The NRC did not like the fact that the report did not say that the operators made an error turning off high pressure injection.

I know why that happened. The person most responsible for writing the report narrative was actually in the control room during the event. He did not believe the action was wrong based on his same training relative to part A of the problem. So why would he put that statement in the report? He was so convinced that his own (complacent) agreement was correct that saying otherwise would be a false statement.

Just recently new information came to my attention that absolutely confirms my belief that B&W was in fact totally emphasizing high pressure injection control in their training based solely on their understanding of the part A problem, with no understanding on B&W’s part of the part B problem or its effect when combined with the part A problem.

My understanding comes directly from seeing the whole infamous Walters’ response memo of November 10, 1977, to the original Kelly memo of November 1, 1977. It is absolutely remarkable to me that 35+ years after the DBNPP event and almost the same amount of time after TMI that a totally unrelated Google search turns up a complete version of the Walters memo.

After half a lifetime of studying all the TMI reports, I had only seen one “cherry picked” excerpt from the Walters memo, basically saying that he agreed with the operators’ response at DBNPP. The whole memo in context basically confirms that the operator claims of “we were trained to do it” are correct.

The original Kelly memo also confirms that Kelly still didn’t grasp the significance of the part B problem as related to the DBNPP event; or if he did, he didn’t relate it thoroughly and clearly in his memo. Both memos are presented and discussed below; draw your own conclusions. (The source document is here.)

The Kelly memo

Kelly Memo

The referenced source document is basically a critique of these memos by textual communications experts. Here’s a summary: First, Kelly is talking “uphill” in the organization, so he couches his memo with that in mind. He asks no one for a decision, but basically asks for “thoughts.” And he makes a non-emphatic recommendation for “guidelines.”

My personal additional observations are that he dilutes the importance of, and possibly adds confusion to, the recommendation by adding “LPI” to the discussion, but most importantly he totally misses any part B problem discussion. He does say “the operator stopped High Pressure Injection when Pressurizer level began to recover, without regard to primary pressure.”

But there is no mention of the fact that the system response was not as expected, i.e., that the pressurizer level went up drastically in response to the reactor cooling system boiling. He never articulates that the operators’ reluctance to re-initiate high pressure injection, even after we understood the cause of the off-scale pressurizer level indication, was based solely on that indicated pressurizer level and our training. Thus, the memo totally misses addressing the part B problem point that the system response was not as expected by anybody, which was crucial to getting the guidance fixed.
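The part B phenomenon can be sketched numerically. The following toy calculation (all numbers are invented for illustration; this is not a plant model or plant data) shows how indicated pressurizer level can rise even while the primary system is steadily losing inventory, once falling pressure lets the hot coolant flash to steam in the loop and displace water up into the pressurizer:

```python
# Toy illustration of a stuck-open relief valve in the pressurizer steam space.
# All numbers are made up for illustration only.

SAT_PRESSURE = 1500.0  # psia at which the hot coolant starts to boil (assumed)

pressure = 2200.0      # psia, falling because the relief valve is stuck open
inventory = 100.0      # % of normal primary coolant mass
level = 50.0           # % indicated pressurizer level

for minute in range(10):
    pressure -= 100.0   # coolant keeps escaping out the open valve
    inventory -= 1.0    # mass is steadily being lost the whole time
    if pressure < SAT_PRESSURE:
        # Coolant flashes to steam in the loop, displacing water
        # UP into the pressurizer: indicated level RISES while
        # actual inventory keeps FALLING.
        level += 5.0
    print(f"t={minute:2d} min  pressure={pressure:6.0f} psia  "
          f"inventory={inventory:5.1f}%  indicated level={level:5.1f}%")
```

The point of the sketch is exactly the assumption Walters later admitted to: training assumed pressure and pressurizer level would trend in the same direction during a LOCA, but for a steam-space leak they diverge, so a rising level reading tells the operator nothing reassuring about coolant inventory.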

The other thing I notice is that the memo is not addressed to Walters. I’ve also “been there, done that” in a large organization. I can easily understand how the recipient (Walters’ boss) upon receiving this memo, with no specific articulation of a new problem (part B), would pass it to Walters with a “handle it, handle it… make it go away.” I also note that N.S. Elliott is on the distribution. He was the B&W Training Department manager, thus B&W training was directly in the loop on this issue also.

The Walters response memo

Note that the original Walters’ response memo to Kelly was hand written, so it has been apparently typed someplace along the line. This is how it appears in the reference source, typos and all.

Walters Memo

I’m omitting the communications expert’s comments, because they are in the reference. Here are my comments: In simple operator lingo, this response is a “smart ass slap down” to Kelly, including all the accompanying sarcasm. But there are some very important admissions revealed here. First, an admission, including Walters’ discussion with the B&W Training Department, that we responded in the correct manner considering how we were trained, and also including the bases behind our training.

This is what we operators had been claiming all along, but nobody wanted to believe it. Second, Walters clearly states both as his personal assumption and the B&W Training Department assumption that reactor coolant pressure and pressurizer level will trend in the same direction during a LOCA. Bingo. He has just admitted that they don’t get, still, the specific part B contribution to the problem.

So they are in fact training wrong for this event because they don’t understand part B. Further, this discussion is happening after the DBNPP event, as a result of the Kelly concerns, and well before TMI. Third, the tone of Walters’ sarcastic comments about a “hydro” (hydrostatic pressure testing) of the reactor coolant system every time high pressure injection is initiated shows the disproportional emphasis that the B&W training was placing on “never let High Pressure Injection pump you solid.” Again, something that the operators were claiming that nobody wanted to believe.

My conclusion, and it hasn’t changed in 35 years, is that the root cause of the TMI accident was that the B&W simulator training and inadequate procedures put the TMI operators in a box, outside of their design-basis understanding for that specific small break loss of coolant. And a contributing cause is B&W itself didn’t understand the actual plant response to that steam space loss of coolant event because it was never analyzed correctly. Then, they also missed the warning that the Davis Besse event provided.

For a long time I wondered why both the Kemeny and Rogovin investigations didn’t reach the same specific conclusion as I have. After all, both investigations had some very smart people involved in both processes, and they both looked at the same evidence. My thinking today is that they did reach that same conclusion. But I don’t actually know what they may have seen as the bottom line purpose for their investigations either.

If you consider that no investigation report was going to change the condition of TMI, it may have been as simple as there is enough wrong that needs fundamental changing, so let’s just get those changes done and move forward. So neither group saw a need to identify the actual bottom line root cause, rather they just gave recommendations for prevention of another TMI–type accident.

Further, by the time those two reports were published, it was well understood that there was going to be a lawsuit between GPU and B&W. If one of those reports had specifically identified B&W with partial liability for the root cause, that conclusion along with the report that made it, would be inherently dragged into the lawsuit.

I have no doubt that this was actually discussed at the time. And I will further speculate that it was actually decided that there was no reason to identify the actual true single root cause in the reports because the lawsuit itself would decide that liability issue independently of the reports. My problem with that is the lawsuit, which started in 1982, never really settled the liability issue as it was mutually “settled” in 1983 before a conclusion was reached.

Another thing that I think was actually discussed at that time was the fact that if the reports stated that the root cause was that the B&W training put the operators outside of the design-basis understanding for that event (because the event wasn’t understood by B&W), it would open Pandora’s Box. They didn’t want to deal with “What else do you have wrong?” and there was well over $100 billion worth of these nuclear power plants still operating.

This conclusion is strongly reinforced for me by the Kemeny Report section “Causes of the Accident”. This section of the report lists a “fundamental cause” as operator error, and specifically lists turning off high pressure injection early in the event. And then the report lists several “Contributing Factors” including B&W missing the warning provided by the Davis Besse event.

If you read the contributing factors listed, there is a screaming omission; it is never stated that B&W (actually the whole PWR industry if you consider the precursors) did not understand the actual plant response to a leak in the pressurizer steam space (what I refer to here as part B of the problem). And that is why B&W and the NRC both missed the DBNPP warning. Virtually nothing will ever convince me that all those smart people did not put that truth together.

Thus, it was both their fear of opening Pandora’s Box and a conscious decision that there was no need to implicate B&W with any partial liability that ruled the process. By doing that, they collectively decided to throw the TMI operators under the bus as the default position.

My conclusion on the missing Contributing Factor problem is an Occam’s razor solution: it is not “missing” at all in the sense that they didn’t “get it”; it was a decision not to include it. After all, if that Contributing Factor had been included, who on earth would believe it was an operator error when the operators simply did what they were told to do in that situation? So they just simply did not want to deal with the real issue: who made the error?

A simple analogy

For years I struggled with finding a simple analogy to explain the position that the TMI operators were placed in by their training, one that could be understood by common everyday knowledge that everyone was familiar with (and not the technical detail that required understanding the complications of nuke plant operations). One of the reasons that it was difficult was that it required a “phenomena” that is commonly understood today, but was not understood at all at the time of the training. This is the best that I can come up with.

Suppose in learning to drive a car you are being trained to respond to the car veering to the left. It’s simple enough, simply turn the steering wheel to the right to recover. It is also what your basic instinct would lead you to do, so there is no mental conflict in believing it.

It is also actually reinforced and practiced during actual driver training on a curvy road. That response is soon embedded as the right thing to do. Now suppose your driver training also includes training on a car simulator. It is where you learn and practice emergency situation driving. After all, nobody is going to do those emergency things in an actual car on the road.

Here’s where it gets complicated. Assume virtually no one yet understands that when the car skids to the left on ice (because of loss of front wheel steering traction), the correct response is to turn the steering wheel into the skid direction, or to the left. This is just the opposite of the non-ice response. And to make matters worse, because no one understands it yet, including the guy who built the car simulator, the car simulator has been programmed to make this wrong response work correctly on the simulator.

So in your emergency driver training you practice it this way, the simulator responds wrong to the actual phenomena, but it shows the successful result and you recover control. Since this probably also agrees with your instinct, and you see success on the simulator, this action is also embedded as the right thing to do. One additional point, if you don’t do this wrong action, you will flunk your simulator driver training test.

You know where this is going, now you are out driving on an icy road for the first time and the car skids to the left. You respond exactly as you were instructed to do and exactly as the simulator showed was successful, and you have an accident because the car responds to the real world rules of Mother Nature.

An investigation is obviously necessary because, I forgot to tell you, the car cost $4 billion and you don’t own it. During the subsequent investigation everything is uncovered; the unknown phenomenon is finally correctly understood, the simulator incorrect programming is discovered, it is uncovered that the previously unknown phenomenon had been discovered before your accident, and your accident was even predicted as possible.

But the investigation results are published and the finding is that the accident was caused by your error of turning the steering wheel the wrong way on the ice. Nobody else is found to have made an error in the stated conclusions but you; it is simply a case of driver error. Do you feel you have been wronged? This is what happened to the TMI operators.

For everybody out there who doesn’t like my conclusions, I’ll just say that many of the principals of the investigations are still alive, but choose not to talk. So, simply ask them, especially the principals in the GPU vs. B&W lawsuit that should have determined any liability issues. Ask them why it didn’t happen. My idea of justice involves getting the truth, the whole truth, and nothing but the truth exposed. That process is still unfinished.


Small Modular Reactors—US Capabilities and the Global Market

By Rod Adams

On March 31–April 1, Nuclear Energy Insider held its 4th Annual Small Modular Reactor (SMR) conference in Charlotte, NC (following on the 2nd ANS SMR Conference in November 2013—for notes and report from that embedded topical meeting, see here).

You can find a report of the first day of talks, presentations, and hallway conversations at SMRs—Why Not Now? Then When? That first day was focused almost exclusively on the US domestic market—the second day included some talks about US capabilities, but it was mainly focused on information useful to people interested in developing non-US markets.

Before I describe the specifics, I want to take the opportunity to compliment Nuclear Energy Insider for its well-organized meeting. Siobhan O’Meara did an admirable job putting together an informative agenda with capable speakers and keeping the event on schedule.

Westinghouse SMR

Robin Rickman, director of the SMR Project Office for Westinghouse Electric Company, provided a brief update on his company’s SMR effort and the status of its development. He then focused much of his talk on describing the mutual challenges faced by the SMR industry and the incredible array of commercial opportunities that he sees developing if the industry successfully addresses the challenges together.

In early February, Danny Roderick, chief executive officer of Westinghouse, announced that his company was shifting engineering and licensing resources away from SMR development toward providing enhanced support for efforts to refine and complete the eight AP1000 construction projects in progress around the world.

Rickman explained this decision and its overall impact on SMR development. He told us that Westinghouse remains committed to the SMR industry and to resolving the mutual challenges that currently inhibit SMR development. His project office has retained a core group of licensing experts and design engineers and is fully supporting all industry efforts. The SMR design is at a stage of completion that enables the company to continue to engage with both customers and regulators based on a mature conceptual design.

The company, however, does not want to get ahead of potential customers and invest hundreds of millions of dollars into completing a design certification if there are no committed customers. Rickman didn’t say it, but Westinghouse has a corporate memory from the AP600 project of completing the process of getting a design certification in January 1999 without ever building a single unit. It’s not an experience that they have any desire to repeat.

Westinghouse determined that its resources could be best invested in making sure that the AP1000 is successful and enables others to succeed in attracting financing and additional interest in nuclear energy.

For SMRs, Westinghouse has a business model that indicates a need for a minimum order book of 30–50 units before it would make financial sense to invest in the detailed design and the modular manufacturing infrastructure required to build a competitive product. Rickman emphasized that all of the plant modules must be assembled in a factory and delivered to the site ready to be joined together in order to achieve the capital cost and delivery schedule needed to make SMRs competitive.

That model requires a substantial investment in the factories that will produce the components and the various modules that make up the completed plant. He told us that the state of Missouri is already investing in creating such an infrastructure with the support of all of its major universities, every electricity supplier, a large contingent of qualified manufacturing enterprises, both political parties, and the governor’s office.

He told the audience that Missouri’s efforts are not limited to supporting a single reactor vendor; it is building an infrastructure that will be able to support all of the proposed light water reactor designs including NuScale, mPower, and Holtec.

Rickman included a heartfelt plea for everyone to recognize the importance of creating a new clean energy alternative in a world where billions of people do not have access to light at the flip of a switch or clean water by opening a simple tap.

In what was a surprise to most attendees, the FBI had a table in the expo hall and gave a talk about its interest in the safety and security of nuclear materials. I will reveal my own skepticism about the notion that nuclear power plants are especially vulnerable or attractive targets for people with nefarious intent. It is hard to imagine anyone making off with nuclear fuel assemblies or being able to do anything especially dangerous with them in the highly unlikely event that they did manage to figure out how to get them out of a facility.

Bryan Hernadez, a refreshingly young engineer, gave an excellent presentation about the super heavy forging capabilities available in the United States at Lehigh Heavy Forge in Bethlehem, Pa. That facility is a legacy of what formerly was the Bethlehem Steel Corporation’s massive integrated steel mill. It has the capacity to forge essentially every component that would be required to produce any of the proposed light water SMR designs.

The presentation included a number of photos that must have warmed the heart of anyone in the audience who likes learning about massive equipment designed to produce high quality goods with tight tolerances that weigh several hundred tons.

In a presentation that would have pleased several of my former bosses, Dr. Ben Amaba, a worldwide sales executive from IBM, talked about the importance of approaching complex designs with a system engineering approach and modern information tools capable of managing interrelated requirements. That is especially important in a highly regulated environment with a globally integrated supply chain.

Jonathan Hinze, senior vice president of Ux Consulting, provided an overview of both national and international markets and described those places that his company believes have the most pressing interest in machines with the characteristics being designed into SMRs.

He reminded the audience that US suppliers are not the only players in the market and that they are not even the current market leaders. He noted that Russia is installing two KLT-40 power plants (light water reactors derived from established icebreaker power plants) onto a barge and that those reactors should be operating in a couple of years. He pointed to the Chinese HTR-PM, a power plant with two helium-cooled pebble bed reactors, each producing 250 MW of thermal power, that together produce steam feeding a common 210-MWe steam turbine power plant. He also mentioned that Argentina had recently announced that it had broken ground on a 25-MWe CAREM light water reactor.

Douglass Miller, acting director of the New Major Facilities Division of the Canadian Nuclear Safety Commission, described his organization’s performance-based approach to nuclear plant licensing. He noted that the commission does not have a design certification process and that each project needs to develop its safety case individually to present to the regulator. It appears that the process is not as prescriptive or as time-consuming as the existing process in the United States.

Tony Irwin, technical director for SMR Nuclear Technology Pty Ltd, told us that Australia is moving ever closer to accepting the idea that nuclear energy could play a role in its energy supply system. Currently, the only reactor operating in Australia is a research and isotope production reactor built by INVAP of Argentina. He described the large power requirements for mining operations in places not served by the grid and the fact that his country has widely distributed settlements that are not well-integrated in a large power grid. He believes that SMRs are well suited to meeting Australia’s needs.

Unfortunately, I had to get on the road to avoid traffic and get home at a reasonable hour, so I missed the last two presentations of the day. I probably should have stayed to hear about the cost benefits of advanced, non-light water reactors and about Sweden’s efforts to develop a 3-MWe lead–cooled fast reactor for deployment to Canadian arctic communities.

As I was finalizing this post, I noted that Marv Fertel has just published a guest post at NEI Nuclear Notes titled Why DOE Should Back SMR Development. I recommend that anyone interested in SMRs go and read Fertel’s thoughts on the important role that SMRs can play in meeting future energy needs.

SMR on trailer courtesy NuScale Power





Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

What Did We Learn From Three Mile Island?

By Rod Adams

Thirty-five years ago this week, a nuclear reactor located on an island in the Susquehanna River near Harrisburg, Pennsylvania, suffered a partial core melt.

On some levels, the accident that became known as TMI (Three Mile Island) was a wake-up call and an expensive learning opportunity for both the nuclear industry and the society it was attempting to serve. Some people woke up, some considered the event a nightmare that they would do anything to avoid repeating, and some hard lessons were properly identified and absorbed. Unfortunately, some people learned the wrong lessons and some of the available lessons were never properly interpreted or assimilated.

The melted fuel remained inside the TMI unit 2 pressure vessel, nearly all the volatile and water-soluble fission products remained inside the reactor containment, and there were no public health impacts. The plant was a total loss after just three months of commercial operation, the plant buildings required a clean-up effort that took 14 years, the plant owner went bankrupt, and the utility customers paid dearly for the accident.

The other unit on the same site, TMI-1, continues to operate well today under a different owner.

Although the orders for new nuclear power plants had already stopped several years before the accident, and there were already people writing off the nuclear industry’s chances for a recovery, the TMI accident’s emotional and financial impacts added another obstacle to new plant project development.

In the United States, it took more than 30 years to begin building new nuclear power plants again. These plants incorporate some of the most important lessons from TMI into their designs and operating concepts from the beginning of the project development process. During the new plant construction hiatus, the U.S. electricity industry remained as dependent as ever on burning coal and natural gas.

Aside: A description of the sequence of events at TMI is beyond the scope of this post. There is a good backgrounder—with a system sketch—about the event on the Nuclear Regulatory Commission’s web site. Another site with useful information is Inside TMI Three Mile Island Accident: Moment by Moment. End Aside.


The TMI event was the result of a series of human decisions, many of which were made long before the event or in places far from the control room. Of those decisions, there were some that were good, some that were bad, some that were reactions based on little or no information, and many made without taking advantage of readily available information.

One of the best decisions, made long before the event happened, was the industry’s adoption of a defense-in-depth approach to design. From the very beginning of nuclear reactor design, responsible people recognized that bad things could happen, that it was impossible to predict exactly which bad things could happen, and that the public should be protected from excess exposure to radioactive materials through the use of multiple barriers and appropriate reactor siting.

The TMI accident autopsy shows that the basic design of large pressurized water reactors inside sturdy containment buildings was fundamentally sound and adequately safe. As intended by the designers, the defense-in-depth approach and generous engineering margins allowed numerous things to go wrong while still keeping the vast majority of radioactive materials contained away from humans. Here is a quote from the Kemeny Commission report:

We are convinced that if the only problems were equipment problems, this Presidential Commission would never have been created. The equipment was sufficiently good that, except for human failures, the major accident at Three Mile Island would have been a minor incident.

Though it is not well known, the NRC completed a study called the State of the Art Reactor Consequences Analysis (SOARCA, published as NUREG-1935) that indicated that there would be few, if any, public casualties as the result of a credible accident at a U.S. nuclear power plant, even if there were a failure in the containment system.

One of the most regrettable aspects of TMI was that the heavy investment the United States had made in the infrastructure for manufacturing components and constructing large nuclear power plants—factories, equipment, and people—was mostly lost, even though the large components and basic design did what they were supposed to do.

There were, however, numerous lessons learned about specific design choices, control systems, human machine interfaces, training programs, and information sharing programs.

Emergency core cooling

The Union of Concerned Scientists and Ralph Nader’s Critical Mass Energy Project had been warning about a hypothetical nuclear reactor accident for several years, though it turns out that they were wrong about why the emergency core cooling system did not work as designed.

The core damage at TMI was not caused by a failure of the cooling system to provide adequate water in the worst case condition of a double-ended shear of a large pipe; it was caused by a slow loss of cooling water that went unnoticed for 2 hours and 20 minutes. The leak, in this case, was a stuck-open relief valve that had initially opened during a loss of feedwater accident.

While the slow leak was in progress, the operators purposely reduced the flow of water from the high pressure injection pumps, preventing them from performing their design task of keeping the primary system full of water when its pressure is low.

It’s worthwhile to understand that the operators did not reduce injection flow by mistake or out of malice. They did what they had been trained to do. Their instructors had carefully taught them to worry about the effects of completely filling the pressurizer with water because that would eliminate its cushioning steam bubble. Their instructors and the regulators that tested them apparently did not emphasize the importance of understanding the relationship between saturation temperature and saturation pressure.

The admonition to avoid “going solid” (filling the pressurizer with water instead of maintaining its normal steam bubble) was a clearly communicated and memorable lesson in both classroom and simulator training sessions. When TMI control room operators saw pressurizer level nearing or exceeding the top of its indicating range, they took action to slow the inflow of water. At the time, they had still not recognized that cooling water was leaving the system via the stuck-open relief valve.

The physical system had responded as it had been designed, but the designers had neglected to ensure that their training department fully understood the system response to various conditions that might be expected to occur. It’s possible that the designers did not know that a pressurizer steam space leak could cause pressure to fall and the pressurizer level to rise at the time that they designed the system. There was not yet much operating experience; the large plants being built in the 1960s and 1970s could not be fully tested at scale, and computer models have always had their limitations, especially at a time when processing power was many orders of magnitude lower than it is today.

There was also a generally accepted assumption that safety analysis could be simplified by focusing on the worst case accident.  If the system could be proven to respond safely to the worst case conditions, the assumption was that less challenging conditions would also be handled safely. The focus on worst case scenarios, emphasized by very public emergency core cooling system hearings, took some attention away from analyzing other possible scenarios.

Lessons learned

  • Following the TMI accident, there was a belated push to complete the loss of flow and loss of coolant testing program that the Atomic Energy Commission had initiated in the early 1960s. For a variety of political, financial, and managerial reasons, that program had received low priority and was chronically underfunded and behind schedule.
  • Today’s plant designs undergo far more rigorous testing programs and have better, more completely validated computer models.
  • Far more attention has been focused on the possible impact of events like “small break” loss of cooling accidents.
  • All new operators at pressurized water reactors learn to understand the importance of the relationship between saturation pressure and saturation temperature.
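The saturation relationship highlighted in that last lesson is easy to illustrate numerically. The sketch below is a minimal Python illustration, not plant software: it uses a handful of rounded steam-table points with linear interpolation, and the example temperatures and pressures are round representative numbers rather than actual TMI plant data.

```python
# Saturation temperature of water vs. pressure, from standard steam tables.
# Pairs are (pressure in MPa, temperature in deg C); values rounded.
SAT_TABLE = [(1.0, 179.9), (5.0, 263.9), (7.0, 285.8),
             (10.0, 311.0), (13.0, 330.9), (15.5, 344.8)]

def t_sat(p_mpa):
    """Linearly interpolate the saturation temperature for a given pressure."""
    if p_mpa <= SAT_TABLE[0][0]:
        return SAT_TABLE[0][1]
    for (p0, t0), (p1, t1) in zip(SAT_TABLE, SAT_TABLE[1:]):
        if p_mpa <= p1:
            return t0 + (t1 - t0) * (p_mpa - p0) / (p1 - p0)
    return SAT_TABLE[-1][1]

def subcooling_margin(t_coolant_c, p_mpa):
    """Degrees C below saturation; negative means the coolant is boiling."""
    return t_sat(p_mpa) - t_coolant_c

# Roughly normal PWR conditions: ~15.5 MPa with a hot leg near 320 C.
print(subcooling_margin(320.0, 15.5))   # comfortably subcooled
# Depressurized to ~7 MPa with the coolant still near 300 C.
print(subcooling_margin(300.0, 7.0))    # negative: steam forms in the loop
```

Steam voids forming in the loop displace water into the pressurizer, which is why level can rise even as pressure falls. Post-TMI requirements added dedicated subcooling margin monitors to PWR control rooms so that exactly this comparison is continuously visible to operators.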

At the time of the accident, there was no defined system for sharing operating experience with all the right people. TMI might have remained a minor event if information about the September 1977 event at Davis-Besse, a similar but not identical plant, had made it to the control room staff at TMI-2.

Certain sections of the NRC knew about the Davis-Besse event, engineers at the reactor supplier knew about it, and even the Advisory Committee on Reactor Safeguards was aware of it, but there was no established process for sharing the information with other operating units.

Lesson learned: After the accident, the industry invested a great deal of effort into a sustained program to share operating experience.

The plant designers also did not do their operators any favors in the design and layout of the control room. Key indicators were haphazardly arranged, there were thousands of different parameters that could cause an alarm if out of their normal range, and there was no prioritization of alarming conditions.

Lesson learned: After the accident, an extensive effort was made to improve the control rooms for existing plants and to devise regulations that increased the attention paid to human factors, man-machine interfaces, and other facets of control room design. All plants now have their own simulators that are designed to mimic the particular plant and are provided with the same operating procedures used in the actual plant. Operators are on a shift routine that puts them in the simulator for a week at a time every four to six weeks.

The initiating failures that started the whole sequence took place in the steam plant, a portion of the power plant that was not subject to as much regulatory or design scrutiny as the portions that were more closely associated with the nuclear reactor and its direct cooling systems.

Lesson still being learned: An increased level of attention is now paid to structures, systems, and components that are not directly related to a reactor, but there is still a confusing, expensive, and potentially vulnerable system that attempts to classify systems and give them an appropriate level of attention.

For at least 10 years prior to March 28, 1979, there had been an increasingly active movement focused on opposing the use of nuclear energy, while at the same time the industry was expanding near many major media markets and was one of the fastest growing employment opportunities, especially for people interested in technical fields. The technology was often in the spotlight, with the opposition claiming grave safety concerns and the industry—rather arrogantly, quite frankly—pointing to what had been a relatively unblemished record.

The industry did not do enough in the way of public outreach or routine advertising to explain the value of their product. They rarely compared the characteristics of nuclear energy against other possible electricity sources—mainly because there are no purely nuclear companies. In addition, the electric utility industry has a long tradition of preferring to be quiet and left alone.

The accident at TMI developed slowly over several days, but it became a major news story by mid-morning on the first day. Not only was it a “man bites dog” unusual event, but it was an event that the nuclear industry, the general public, the government, and the news media had been conditioned to take very seriously. Although nuclear experts from around the United States sprang into action to assist where they could at the plant itself, there was no established group of communications experts who could help reporters understand what was happening.

No reporter on a deadline is motivated or willing to wait for information to be gathered, evaluated, and verified. In the absence of real experts willing to talk, they turned to activists with impressive sounding credentials who were quite willing to speculate and spin tall tales designed to generate public interest and concern.

Lesson not yet learned: Most decision makers in the nuclear industry understand the importance of planned maintenance systems for keeping their equipment in top condition and of a systematic approach to training for keeping their employees performing at the top of their game. They have not yet implemented an effective, adequately resourced, planned communications program that helps ensure that the public and the media understand the importance of a strong nuclear energy sector.

Planned communications efforts have a lot in common with planned maintenance systems. They might appear to be expensive with little immediate return on investment, but repairing a broken public image is almost as challenging and expensive as repairing a major plant component that failed due to a decision to reuse a gasket or postpone an oil change. As the guy in the commercial says, “You can pay me now or pay me later.”

That is probably the most tragic part of the TMI event. Despite being the subject of several expensively researched and documented studies, countless articles, thousands of documented training events, and more than a handful of books, the event could have—and should have—made the established nuclear industry stronger and the electric power generation system around the world cleaner and safer.

So far, however, TMI Unit 2’s destruction remains a sacrifice made partially in vain to the harsh master of human experience.

Note: I have purposely decided to avoid attempting to discuss the performance of the NRC or to judge their implementation of the lessons that were available to be learned. That effort would require a post at least twice as long as this one.

Additional Reading

General Public Utilities. Three Mile Island: One Year Later. March 28, 1980.

Gray, Mike, and Ira Rosen. The Warning: Accident at Three Mile Island: A Nuclear Omen for the Age of Terror. W. W. Norton, 1982.

Ford, Daniel. Three Mile Island: Thirty Minutes to Meltdown. Penguin Books, 1981.

Hampton, Wilborn. Meltdown: A Race Against Disaster at Three Mile Island: A Reporter’s Story. Candlewick Press, 2001.

Report of the President’s Commission on the Accident at Three Mile Island: The Need for Change: The Legacy of TMI. October 1979.

Three Mile Island: A Report to the Commissioners and to the Public. January 1980.





Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

Three years of available lessons from Fukushima

By Rod Adams

During the three years since March 11, 2011, the world has had the opportunity to learn a number of challenging but necessary lessons about the commercial use of nuclear energy. Without diminishing the seriousness of the events in any way, Fukushima should also be considered a teachable moment that continues to be open for thought and consideration.

As a long time member of the learning community of nuclear professionals, I thought it would be worthwhile to start a conversation that will allow us to document some of the “take-aways” from the accident and the costly efforts to begin the recovery process.

Since there are many people who are more qualified than I am to discuss the specific design details of the reactors that were destroyed and the specific site on which they were installed, I will shy away from those topics. Feel free, however, to add your expert views in the comment thread.

Before Fukushima

The overriding lesson for me is a recognition that people who favor the use of nuclear technology were quite unprepared for an event like Fukushima. Our technology had been working so well, for so long, that we had become complacent perfectionists.

In some ways, we were collectively similar to perennial honor roll students who prefer doing homework to engaging in risky sports. We have been “grinds” who studied hard, followed the rules, became the teachers’ pets, scored high marks on all of the routine tests, and were utterly devastated the first time we moved to a new level and encountered a test so difficult that our first attempt to pass resulted in a D-.

Many of us—and I will freely include myself in this category—had become so confident in our ability to earn outstanding grades that we did not pay attention to the boundaries of the box in which our confidence was justified.

We confidently accepted the fact that our technology was safe, had numerous layers of defense-in-depth, and was designed to be able to withstand external events, but we forgot that those statements were only true within a certain set of bounding parameters we normally call the “design basis.” Because we had only rarely approached those boundaries, we had no real concept for what might happen once we found ourselves outside of our expected conditions without most of the expected supporting tools.

An extended period of exceptional performance not only made us over-confident, it raised expectations to an unsustainable level. Corporate executives, the media, and government leaders played roles similar to the parents, teachers, and administrators associated with precocious straight A students. They were used to dealing with serious mistakes and outright failures among the rest of the student body, but were surprised and flustered when one of us let them down.

We also failed to understand that we were in the same vulnerable and unpopular position as the geeks who continuously break the curve and make others look bad, year after year. As the excellent report cards kept coming, we did not pay attention to the effect those high grades were having on our peers. We did not see other students gathering into groups after the grades were posted. We did not sense their anger or overhear their plans to be ready to take advantage the first time we gave them an opportunity.

We had no similar plans prepared in case we failed; we expected we would keep performing exceptionally well.

The Fukushima test

When the nearly impossible test came, our technology performed as designed, but that was not good enough. Our technology was not designed to match a natural disaster that destroyed all available sources of electrical power. The loss of vital power at a large, multi-unit facility interfered with the ability to understand plant conditions and to put water into the places that desperately needed it.

Aside: That is not to say that it could not have been designed to handle the imposed conditions. As the performance of Onagawa and Fukushima Daini demonstrates, better design or more fortuitous operational decisions can improve the chances of avoiding the consequences seen at Fukushima Daiichi, but there is never a guarantee of perfection. End Aside.

Without water flow, the rate of heating inside the cores was determined by inescapable laws of physics. As nuclear energy and materials experts have been predicting for nearly 50 years, once the temperatures inside the water-cooled cores reached a certain point, the zirconium cladding of the fuel rods began reacting with the water (H2O) to chemically capture the oxygen and release the hydrogen.
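The zirconium-water reaction (Zr + 2 H2O → ZrO2 + 2 H2) yields a predictable amount of hydrogen: two moles of gas for every mole of metal oxidized. A back-of-the-envelope stoichiometry sketch in Python (rounded molar masses; an illustration of the chemistry, not a model of the Daiichi cores):

```python
# Zr + 2 H2O -> ZrO2 + 2 H2: two moles of hydrogen per mole of zirconium.
M_ZR = 91.22      # g/mol, zirconium
M_H2 = 2.016      # g/mol, hydrogen gas
MOLAR_VOL = 22.4  # L/mol for an ideal gas at 0 C and 1 atm

def h2_from_zr(zr_kg):
    """Mass (kg) and volume at standard conditions (m^3) of hydrogen
    released by fully oxidizing a given mass of zirconium cladding."""
    mol_zr = zr_kg * 1000.0 / M_ZR
    mol_h2 = 2.0 * mol_zr
    return mol_h2 * M_H2 / 1000.0, mol_h2 * MOLAR_VOL / 1000.0

mass_kg, vol_m3 = h2_from_zr(1.0)
print(mass_kg, vol_m3)  # roughly 0.044 kg and 0.49 m^3 of H2 per kg of Zr
```

With tens of tonnes of cladding in a large power reactor core, even partial oxidation can generate hundreds of kilograms of hydrogen, along with substantial reaction heat that accelerates the process, which is why the explosions described below became possible once the gas escaped containment and found oxygen.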

Fukushima Daiichi plant designers expected that human operators would pay attention to the pressure building inside the primary containment and release some of the steam before breaking the containment. They apparently neglected to consider that operators would not be able to monitor pressure using their installed systems without any available electrical power.

For valid reasons, the designers did not make containment relief an automatic function or even an easy process. They probably did not expect that the operators would wait for a politician at the end of a tenuous communications link to make the decision to release that pressure, that the operators might feel the need to wait for a report that evacuations had been completed, or that the resulting delay could allow pressure to rise so high that it would be almost impossible to open the necessary valves.

The operators performed their tasks with dedication and tenacity, but their efforts fell a little short of the heroically successful similar efforts at Fukushima Daini. One example of unfortunate timing is worth mentioning: the Daiichi operators invested dozens of back-breaking man-hours to install a mobile generator and run heavy cables across 200 obstacle-filled meters in order to provide emergency power. They completed the hook-up at 1530 on March 12. At 1536, the first hydrogen explosion injured five workers, spread contamination, and damaged the just-installed equipment enough to prevent it from functioning. (See pages 8-9 of the INPO Special Report on the Nuclear Accident at the Fukushima Daiichi Nuclear Power Station.)

The excessive pressures in the primary containments did what excessive pressure almost always does: they eventually found weak points that opened to release the pressure. The separated hydrogen left the containments, found some available oxygen, and did what hydrogen naturally does: it exploded, further complicating the event and providing a terrific visual tool for the jealous competitors who were ready to take advantage of our failure.

The lessons available from that sequence of events were not design-specific. More foresight in the design process, a solid understanding of basic materials and thermodynamic principles, and, if all else fails, operators empowered to resist political pressure can further reduce the potential for core damage and radioactive material release.

Once one of us encountered a test we could not pass, we were dazed and confused, obviously unsure what to do next. That period of uncertainty provided a wonderful opening for the opponents and competitors to take charge of the narrative, emphasize our failure under our own mantra of “an accident anywhere is an accident everywhere” and spread the word that we should not be allowed to get up anytime soon. They reminded formerly disinterested observers that we had fallen far short of our claimed perfection, took the opportunity to land a few blows while we were down, and made arrangements to ensure that our recovery was as difficult and expensive as possible.

Fears of radiation

As a group, nuclear technologists have often emphasized our cleanliness, our ability to operate reliably, and our improving cost structure.

We overlooked the efforts over the years by opponents and competitors to raise special fears about the materials that might be released in the event of an accident that breaks our multiple barriers. Though we all recognize that exposure to radioactive material at certain doses is dangerous, our opponents—sometimes aided by our own perfectionist tendencies—have instilled the myth that exposure to the tiniest quantities also carries unacceptable risk.

We had become so good at keeping those materials tightly locked up that we accepted ever-tightening standards, because they were easy enough to meet under routine conditions. Even under the “beyond design basis” conditions at Fukushima, our multiple barriers did a good enough job of retaining dangerous materials so that there were no immediate radiation-related injuries or deaths, but that isn’t good enough.

There were dangerous radiation levels on site; workers avoided injuries and fatalities only by paying attention and minimizing exposure times. The myth of “no safe dose” and the reality that any possible effects may occur in the distant future have continued to feed fears that the effects are uncertain and will probably get worse.

The no-safe-dose assumption has made us terribly vulnerable to efforts to force us to keep meeting the expectation of zero discharges. Our stuff does “stink” on occasion; if we try to hold it all in, we will eventually suffer severe distress. The tank farm at Fukushima, with its millions of gallons of tritiated water, cannot expand forever, but our opponents will prevent controlled releases for as long as they can to make the pain as large as possible.

It’s worth quoting the International Atomic Energy Agency’s recent report about its late 2013 visit to Japan to provide an independent peer review of recovery actions. This passage comes in the context of a carefully-phrased “advisory point” that strongly recommends that Japan prepare to discharge water where most isotopes other than tritium have been removed.

… the IAEA team encourages the Government of Japan, TEPCO and the NRA to hold constructive discussions with the relevant stakeholders on the implications of such authorized discharges, taking into account that they could involve tritiated water. Because tritium in tritiated water (HTO) is practically not accumulated by marine biota and shows a very low dose conversion factor, it therefore has an almost negligible contribution to radiation exposure to individuals.

Reliability and perfection

Not only did the accident destroy the ability of four units to ever operate again, it reminded us that reliability is not just a matter of technology and operational excellence. If the powers that be refuse permission to operate, the best technology in the world will fail at the task of providing reliable power. Our competitors are perfectly content to take over the markets that we are failing to serve. The longer they perform, the easier it is for people to assert that we are not needed.

We have also been taught that we have no real control over cost. The aftermath of Fukushima has shown that it’s possible to establish conditions in which even the most dire prediction of economic cost is an underestimate. There is no upper bound under conditions where perfection is the only available standard.

If we do not learn how to occasionally fail, how to make reasonable peace with our powerful opposition, and how to keep helping everyone understand that a search for perfection does not mean that its achievement is actually possible, nuclear energy does not have much hope for rapid growth in the near future.

That would be a tragic situation for the long term health and prosperity of humanity. The wealthy portions of our current world population can probably do okay for a while without much nuclear fission power. However, that choice would harm the underpowered people who are already living and innumerable future generations who will not live as well as they could if we shy away from improving and using nuclear fission technology.

Fission technology is not perfect and poses a certain level of risk, but it is pretty darned good and the risks are well within the range of those that we accept for many other technologies that can perform similar tasks.


INPO 11-005 Special Report on the Nuclear Accident at the Fukushima Daiichi Nuclear Power Station





Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

Is St. Lucie next on the antinuclear movement target list?

By Rod Adams

The most informative paragraph in a lengthy article titled Cooling tubes at FPL St. Lucie nuke plant show significant wear published in the Saturday, February 22, 2014, edition of the Tampa Bay Times is buried after the 33rd paragraph:

In answers to questions from the Tampa Bay Times, the NRC said the plant has no safety issues and operates within established guidelines. That includes holding up under “postulated accident conditions.”

Unfortunately, that statement comes after a number of paragraphs intended to cause fear, uncertainty, and doubt in the minds of Floridians about the safety of one of the state’s largest sources of electricity. St. Lucie is not only a major source of electricity, but it is also one of the few power plants in the state that is not dependent on the tenuous supply of natural gas that fuels about 60 percent of Florida’s electrical generation.

In March 2013, at the height of the political battle about the continued operation of the San Onofre Nuclear Generating Station—a battle that ended with the decision to retire both of San Onofre’s units—Southern California Edison issued a press release that contained words of warning for the rest of the nuclear industry.

The Nuclear Energy Institute’s Scott Peterson called the Friends of the Earth claims “ideological rhetoric from activists who move from plant to plant with the goal of shutting them down.” He goes on to say: “Not providing proper context for these statements incorrectly changes the meaning and intent of engineering and industry practices cited in the report, and it misleads the public and policymakers.”

In San Onofre’s case, the context of the public discussion should have included a widespread understanding that the decision to shut down the plant was based on a single steam generator tube leak that was calculated to be one-half of the allowable operating limit. That leak was detected by an alarm on a radiation sensing device sensitive enough to alarm with a leak that might have exposed someone to a maximum of 5.2 × 10⁻⁵ millirem.

The antinuclear movement has a long history of using steam generator material conditions as a way to force nuclear plants to shut down. Most nuclear energy professionals will freely admit that the devices have been problematic since the beginning of the industry. There was a period of acrimonious litigation when the utilities sued the vendors because the devices did not last as long as initially expected. However, with an extensive replacement program, focused research, attention to detailed operating procedures, and material improvements, steam generators are more reliable today than they were 25 or even 15 years ago.

It is also worth understanding that steam generator leaks do not cause a public health issue. Operating history shows that essentially all of the leaks have been modest in size and resulted in tiny releases of radioactive material outside of the plant boundaries. U-tubes are part of the primary coolant boundary and are thus classified as “safety-related.” Their integrity is important to reliable plant operation, but the 30 percent of the plants operating in the United States that are boiling water reactors don’t even try to keep radioactive coolant out of the steam plant.

The Tampa Bay Times feature article, written by Ivan Penn, included quotes from some of the same players involved in the—unfortunately—successful effort to close down San Onofre. Their words have that familiar ring of “ideological rhetoric,” indicating that St. Lucie might be high on the target list for the activists who move from plant to plant.

Arnie Gundersen, whom Penn correctly identified as a frequent nuclear critic, provided a fairly explicit quote supporting the guess that the antinuclear movement has selected its next campaign victim. “St. Lucie is the outlier of all the active plants.” Later in the article, he stated that St. Lucie’s steam generators have a hundred times as many “dents” as the industry average. That might be true, but that is mainly because the industry average is in the single digits. The important measure is not the number of wear spots, but their depth.

Daniel Hirsch, described as a “nuclear policy lecturer” from the University of California at Santa Cruz, used more colorful language, “The damn thing is grinding down. They must be terrified internally. They’ve got steam generators that are now just falling apart.” Like Gundersen, Hirsch has fought against nuclear energy for several decades.

David Lochbaum, from the Union of Concerned Scientists, indicated that he thought that the plant owners were gambling, even though their engineering analysis, which was supported by the Nuclear Regulatory Commission, indicates that the plant has no safety issues and is operating within its design parameters.

Those quotes from the usual suspects, spread throughout the article, are balanced by quotes explaining or supporting FPL’s selected course of action to continue operating and to continue conducting frequent inspections to ensure that conditions do not approach limits that would require additional action.

Here is an example from Michael Waldron, a spokesman for FPL, that appears near the end of the article:

“We have very detailed, sophisticated engineering analysis that allow us to predict the rate of wear, and we are actually seeing the rate of wear slow significantly.”

Even though it is balanced with an almost equal number of pro and con quotes, Ivan Penn’s article includes a number of phrases that appear to be carefully selected to increase public uncertainty and worry about St. Lucie’s continued operation. It is also possible to attribute those words to the author’s desire to add drama and emotion to attract additional readers; that can be difficult to do while maintaining accuracy. Unfortunately for people who love drama, nuclear power plants are quite boring. The vast majority of the time they simply keep working.

Here is an example of the type of rhetorical enhancement that frustrates people who value the accurate use of words:

Worst case: A tube bursts and spews radioactive fluid. That’s what happened at the San Onofre plant in California two years ago.

As stated above, the tube at San Onofre did not “burst” and it did not “spew” radioactive fluid. A tube developed a small, 75–85 gallon-per-day leak from the primary system into the secondary steam system. The installed equipment provided an immediate indication of a problem and the operators promptly took a very conservative course of action to shut down the plant.

While the responsible engineers were performing their detailed investigations and drafting their recommendations, the activists and the politicians took charge of the public communications and worked hard to ensure that San Onofre never restarted. Their focused misinformation offensive resulted in the early retirement of an emission-free power plant that reliably provided 2200 MW of electricity at a key node in the California power grid.

Today, local residents in California are not safer, the air is not cleaner, and the wholesale price of power has already increased by more than 50 percent. Several large-scale infrastructure investments are being planned to restore resiliency to California’s grid. The primary beneficiaries of the antinuclear actions are the people who sell the 300–400 million cubic feet of natural gas needed every day to make up for the loss of San Onofre.

Let’s hope that the regulators and the politicians do a better job of finding sound technical advice, and that the responsible experts do a better job of helping people to understand that St. Lucie is safe, even if its steam generator tubes have more wear marks than anyone wants.





Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

How can we stop premature nuclear plant closures?

By Rod Adams

During an earnings call on February 6, 2014, Exelon Corporation indicated that it may decide to shut down two or more of its nuclear reactors because of poor economic return. Exelon spokespeople have been warning about the effects of negative electricity prices for several years.

On February 8, 2013, almost exactly a year ago, the Chicago Tribune published a story titled Exelon chief: Wind-power subsidies could threaten nuclear plants. The Tribune noted that Christopher Crane, Exelon’s CEO, was concerned about the continued operation of some of the units in the company’s large fleet of reactors:

“What worries me is if we continue to build an excessive amount of wind and subsidize wind, the unintended consequence could be that it leads to shutting down plants,” Crane said in an interview.

Crane said states that have helped to subsidize wind development in order to create jobs might find themselves losing jobs if nuclear plants shut down.

The Chicago-based company doesn’t have any immediate plans to mothball nuclear plants, although at least one analyst has predicted that could occur as soon as 2015.

“We continue to believe that our assets are some of the lowest-cost, most-dispatchable baseload assets and don’t have any plans at this point of early shutdown on them,” Crane said.

If the discussed nuclear reactor shutdowns occur, they would be numbers six and seven in the count of prematurely closed nuclear power plants in the United States since the beginning of 2013. Though there are certainly antinuclear activists and analysts who will point to this record with a delighted “We told you so,” this is not a trend that bodes well for the economic stability of the United States or for the continued effort of the US to reduce its dependence on hydrocarbon fuel sources.

It is also a trend that puts a number of nuclear professionals at risk of suffering a significant economic setback and life-altering job loss, despite having participated in an exceptional example of continued performance improvements over a sustained period of time.

During a recent industry gathering hosted by Platts, Dr. Pete Lyons pointed to the trend of shutting down well-maintained and licensed nuclear power plants as something that is worrying the current Administration, especially because it will make it difficult to achieve progress in reducing CO2 emissions.

Jim Conca, writing for Forbes, noticed Exelon’s announcement and wondered about its effect on a number of important attributes of energy production. He reminds his readers that nuclear plants represent a large fraction of the emission free electricity produced in the United States each year. He also points out that the longer nuclear plants run and produce revenue, the better. Construction costs are already sunk, the plants already have stored inventories of spent fuel, and they already require some form of decommissioning. The costs and pollution associated with all of those features should be spread over as many kilowatt hours of generation and revenue as possible.

There are several things that nuclear energy advocates can do that might help to eliminate the pressures that have been encouraging nuclear plant operating companies to either shut down or consider shutting down useful assets.

  1. Learn enough about the natural gas market to discuss it with your friends and colleagues
  2. Advocate policies that put a fair value on generating clean electricity
  3. Advocate policies that reward generating sources for reliability
  4. Cheer efforts to market electricity to restore growth in demand

During the winter of 2013-2014, there have been a number of examples of the risks associated with concentrating heating, industrial uses and electricity production on natural gas, just because it has been accepted as “clean” and seems to have become abundant and cheap—ever since 2008—which is apparently a long time ago in the memory of some market observers and decision makers. The Nuclear Energy Institute continues to produce excellent materials and testimony about the importance of fuel diversity; they need as much assistance as they can get in spreading the message.

This winter there have been reported shortages and price spikes that have exceeded $100 per MMBTU. That is roughly equivalent to oil prices hitting $580 per barrel, since every barrel of oil contains 5.8 MMBTU of heat energy. Natural gas price spikes have not been limited to the northeast; spikes exceeding $20 per MMBTU (five times the pre-winter price) have occurred in the mid-Atlantic, the Pacific Northwest, the Chicago area, southern California and even Texas. Last week, a price spike of $8.00 per MMBTU even showed up at Henry Hub, at the intersection of several prime US gas production areas.
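The oil-price equivalence quoted above is simple arithmetic that readers can verify for themselves. A minimal sketch (my own illustration, not from the article), using the 5.8 MMBTU of heat energy per barrel of crude oil cited in the text:

```python
# Sanity check: convert a natural gas price in $/MMBTU into the oil price
# ($/barrel) that carries the same cost per unit of heat energy, using
# the ~5.8 MMBTU per barrel of crude oil cited in the text.
MMBTU_PER_BARREL = 5.8

def oil_equivalent_price(gas_price_per_mmbtu: float) -> float:
    """Oil price ($/barrel) with the same cost per MMBTU as the gas price."""
    return gas_price_per_mmbtu * MMBTU_PER_BARREL

print(round(oil_equivalent_price(100.0)))  # the $100/MMBTU spike -> 580
print(round(oil_equivalent_price(20.0)))   # a $20/MMBTU spike -> 116
```

Even the "moderate" $20 spikes, in other words, priced gas heat like oil at well over $100 per barrel.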

Henry Hub spot prices for week ending Feb 5, 2014 (as of Feb 10, 2014)

When gas prices reach the levels seen this winter, many customers stop buying, even if they have no alternative fuel source available. If they are operating an industrial facility that needs the gas to run, they stop operating. If they are operating a household that needs the gas to stay warm, they put on more sweaters. If they are operating a school system, they shut the doors and tell the children to stay home.

In markets where wholesale electricity prices have been deregulated, gas fired generators are usually the marginal price setters, so the spikes in natural gas prices have directly affected electricity prices at times of peak demand, driving them to infrequently seen levels. It remains to be seen how the electricity price spikes this winter have affected revenues at generating companies, but the spikes are unlikely to have harmed their bottom lines. Unfortunately, brief spells of profitability may not be enough to encourage nuclear plant operators to keep running their plants if wholesale prices quickly return to loss-making levels for much of the year.

Though many of us value the fact that nuclear plants do not produce any greenhouse gases or other air or water pollutants, that feature does not produce any additional revenue for plant owners. For the past twenty years, every alternative to fossil fuel except nuclear and large hydroelectric dams has been given direct subsidies, preferential tax treatment and quotas. Fossil fuel generators have not been charged for their use of our common atmosphere as a waste disposal site. It is time to put pressure on our representatives to pass legislation that establishes a price on carbon so that investors are encouraged to fairly value clean generation.

My personal favorite proposal is James Hansen’s fee and dividend approach, where all hydrocarbon fuels pay a fee based on their carbon content and the public receives an equal share of the revenue. People who are careful and do not use much fuel will see a net increase in their income; people who use more than average will see a net cost. Investors will recognize that it is worth their effort to identify technologies that do not emit CO2.
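To make the fee-and-dividend mechanics concrete, here is a toy calculation. The fee level and the four personal carbon footprints are entirely hypothetical numbers chosen only to show the redistribution, not figures from Hansen's proposal:

```python
# Toy model of a fee-and-dividend scheme: each person pays a fee in
# proportion to carbon use, and the total revenue is returned to
# everyone in equal shares. All numbers are hypothetical.
FEE_PER_TONNE_CO2 = 25.0             # hypothetical fee
emissions_tonnes = [5, 10, 20, 45]   # hypothetical per-person footprints

revenue = FEE_PER_TONNE_CO2 * sum(emissions_tonnes)
dividend = revenue / len(emissions_tonnes)   # equal share per person

# Net effect: below-average users gain income, above-average users pay.
for tonnes in emissions_tonnes:
    net = dividend - FEE_PER_TONNE_CO2 * tonnes
    print(f"{tonnes:>3} tonnes -> net {net:+.2f}")
```

The person at the average footprint breaks even; everyone below it comes out ahead, which is the incentive structure the proposal relies on.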

We also should advocate policies that reward generators for their ability to produce reliable electricity. It is a valuable service that helps to ensure that the grid is adequately served with a sufficient margin, and that we avoid the kind of volatility seen this past winter and that nearly bankrupted California in 2001.

Finally, we should seek to reverse the reluctance to tout the product we produce. Electricity is a wonderful tool that makes life better. It can be produced using a variety of fuels, though most readers here would probably agree that uranium and thorium are the best available electricity generation fuels. It’s time to recognize that the energy business is competitive. Like all competitive enterprises, it rewards people who fight for market share by producing a better product and by taking effective action to ensure that people know they are producing a better product.

While traveling through the southeast US last week, I heard an advertisement that made me smile. Alabama Power was offering to give people water heaters as long as they were shifting from gas heaters to electric heaters. Why have we allowed competitive energy producers to steal markets for so many years without fighting back?

I encourage people in the electricity production business to download a copy of the Jan/Feb 2014 issue of EnergyBiz and read the article titled Gas Competes with Power; A New Foundation Fuel, New Business Channels. While you are at it, you might also enjoy reading the challenge that NRG Energy’s David Crane lays down for the traditional business of generating and distributing electricity in his guest opinion piece titled Keep Digging: What Lethal Threat?

Exelon’s Clinton Power Station





The Value of Energy Diversity (Especially In A Polar Vortex)

By Rod Adams

Since the natural gas price collapse that started in summer 2008, many observers have become accustomed to using the adjective “cheap” when talking about natural gas. Like the word “clean,” another adjective often applied to methane, “cheap” is a relative term. It is also a term whose applicability depends on time and location. As I wrote in a recent post on Atomic Insights, gas is only really cheap if nobody needs it. When demand increases due to some kind of perfectly natural phenomenon—like a winter with near normal temperatures—demand can exceed deliverability by a large margin.

When that happens, the only way that markets can match demand to supply is to allow the price to climb to a level high enough to destroy some of the demand. Because the infrastructure for extracting, storing, and delivering gas cannot be rapidly altered, suppliers are unable to bring additional supplies to market in time to provide relief.

Late last week, the price of natural gas at three major trading locations—New England, New York, and Mid-Atlantic—exceeded $70.00 per MMBTU. It is worth seeing the table for yourself.

Daily natural gas prices January 22, 2014

Those prices are, of course, spot market prices that do not apply to customers that have signed long-term supply contracts; but since long-term contracts are often priced at a level that is substantially higher than the short-term spot market, many customers have been loath to buy the protection offered. Home heating delivery companies are generally seen as utilities that supply a vital need, so they have traditionally signed long-term contracts with priority delivery clauses. Most merchant power generators have taken the risk associated with short-term contracts.

When gas prices get too high, those merchant generation companies have a simple choice: they stop buying fuel and stop generating power.

During last week’s brutal cold weather in New England there was a day when 75 percent of the region’s natural gas-fired power generators were unable to operate, presumably because there was an insufficient amount of gas to supply both heating demands and power demands.

Even with the delivery-related demand destruction, withdrawals from working gas-in-storage reservoirs have been running at a higher pace than at any time during the past five years, resulting in a current gas-in-storage inventory that is about 14 percent below the five year average for this time of year. Natural gas analysts are starting to speculate about the ability to maintain a sufficient storage buffer to complete the winter.

The total working gas in storage in the United States for the week ending January 17 is 2.4 trillion cubic feet (TCF). To put that number in perspective, average daily use in January has been running at 97 billion cubic feet per day for a monthly total of 3 trillion cubic feet. Traders are starting to pay attention, and long-term pricing at the main delivery hubs is starting to climb rather steeply.
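Those storage numbers can be checked with quick arithmetic. The sketch below uses only the figures quoted above; it is a scale check, not a drawdown forecast, since ongoing production continues to refill storage:

```python
# Scale check on the storage figures quoted in the text: 2.4 TCF of
# working gas in storage against January consumption of ~97 Bcf/day.
# Ignores ongoing production, so it only illustrates relative magnitudes.
storage_bcf = 2400    # 2.4 trillion cubic feet, week ending January 17
daily_use_bcf = 97    # January average daily consumption

monthly_use_tcf = daily_use_bcf * 31 / 1000
months_of_cover = storage_bcf / (daily_use_bcf * 31)

print(round(monthly_use_tcf, 1))   # ~3.0 TCF per month, matching the text
print(round(months_of_cover, 2))   # under one month if storage stood alone
```

A month of January-level consumption exceeds the entire working inventory, which is why traders watch the weekly withdrawal reports so closely.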

Natural gas prices at Henry Hub Jan 2012 - Jan 2014

To maintain grid stability, the New England independent system operator resorted to using combustion turbines supplied by diesel or jet fuel. Though distillate oil is normally a premium fuel best reserved for transportation, it has an advantage over gas in times of high demand. Because it is more readily stored, it can be staged in advance so that it is ready to run when demand soars—at least until the tanks run dry.

It has not yet made the news, but there are probably quite a few New Englanders who are happy that they still have heating oil in tanks on their own property. The oil heat advocates at American Energy Coalition would certainly like to spread the word that gas may not always be the best source of winter heat.

Fortunately, the US power grid has not yet arrived at the state that seems to be the goal of the natural gas marketing departments and their allies in the media. Not only are there a number of coal- and oil-fired power plants still capable of running, there are also 100 operable nuclear power plants that thrive on colder weather.

Though there have been one or two operational issues, the monthly nuclear power plant performance report for December 2013 showed a total generation of more than 71 billion kilowatt hours for an average capacity factor of 97.6 percent.

So far in January, nuclear plant performance remains impressive, with some days reaching average capacity factors in excess of 97 percent. Much of this performance comes from well-executed maintenance strategies and adverse weather plans. Those preparations allow operators to take timely action to minimize the probability of weather-related outages.

Nuclear plants have a reliability advantage over their fossil fuel competitors; they usually enter high demand, bad weather seasons with “fuel tanks” that contain many months’ worth of accessible fuel. All other competitors can run into fuel-related problems when deep cold persists for too long. Coal piles have been known to become solid blocks of ice, gas lines can freeze, and even diesel fuel can get syrupy if not properly stored.

Nuclear power plant operators also benefit from fuel prices that do not change as a result of high demand periods—the average cost of commercial nuclear fuel in the United States remains steady at between $0.50 and $0.60 per MMBTU. For merchant power plant operators, the cold weather is providing a great opportunity to bank some terrific returns. If you look at the daily spot market price table above, you can see that electricity prices were very robust, especially for companies that operate generating plants with an average operating and maintenance cost of $24 per MW-hr.
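The fuel-cost advantage can be made concrete by converting $/MMBTU into $/MWh of electricity. The heat rates below are assumed typical values (roughly 10,400 BTU/kWh for a light-water reactor's steam cycle and 7,000 BTU/kWh for a modern gas combined-cycle plant), not figures from the article:

```python
# Illustrative conversion from a fuel price in $/MMBTU to a fuel cost
# per MWh of electricity. The heat rates passed in below are assumed
# typical values, not figures from the article.
def fuel_cost_per_mwh(price_per_mmbtu: float, heat_rate_btu_per_kwh: float) -> float:
    mmbtu_per_mwh = heat_rate_btu_per_kwh * 1000 / 1e6   # 1 MWh = 1,000 kWh
    return price_per_mmbtu * mmbtu_per_mwh

print(round(fuel_cost_per_mwh(0.55, 10_400), 2))  # nuclear at $0.55/MMBTU -> 5.72
print(round(fuel_cost_per_mwh(20.0, 7_000), 2))   # gas at a $20/MMBTU spike -> 140.0
```

Under these assumptions, a nuclear plant's fuel cost stays in the single digits per MWh while a gas plant buying at spike prices pays more than its competitors can charge for power in normal times.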

It would be terrific if the operators that benefit from selling their output at those generous prices stashed some of the money away for those balmy spring days when few people need gas for heat. Gas still is a cheap and relatively clean fuel when the demand is low. There will again be times in the near future when gas-fired generators sell their output at prices that are not profitable for many others on the grid.

Maybe one lesson worth learning this winter is that an electric grid supplied by integrated power utilities operating under rate regulation with an obligation to serve is not such a bad arrangement after all. Electricity is too important for the rest of the economy to allow its price and availability to be so dependent on the whims of the weather.

There is another lesson that is specifically applicable to the state of Vermont. Vermonters, you still have a licensed and operating nuclear power plant that supplies power to your regional grid that is equivalent to 85 percent of your total consumption. For political reasons, you elected a governor and representatives who made that plant feel so unwelcome that the owners have decided to shut down the plant instead of refueling it and continuing to operate for the rest of its licensed life.

It’s not too late to take note of the way weather has been affecting your regional grid this year and consider how bad things might get if Vermont Yankee gets shut down as currently scheduled. Take a look at the possible impacts of following through with the proposed Total Energy Study.

Once you have imagined that scenario, pick up the phone and call some of your government leaders. Tell them that you want them to ask Entergy to keep the plant running. Tell your representatives that they have your permission to beg for forgiveness if necessary.





Nuclear energy is built on an actinide foundation

By Rod Adams

During the past several years, I have been following the progress of a strange situation in my adopted state of Virginia. Despite the state’s long history of mining and mineral extraction, we have a law in place that forbids mining one specific element—uranium. The law is technically just a temporary moratorium, put in place to give the state’s regulators time to draft effective regulations, but it was enacted more than 30 years ago.

At this point, it is rather difficult to consider it just a temporary measure, especially since progress toward drafting the required regulations has stalled. Work had been under way since 2008, but it recently hit a pretty substantial barrier.

About a week after his election party ended, governor-elect Terry McAuliffe stated that he would veto any legislation that ended the moratorium. Since he expects no change in the moratorium while he is governor, he said he would oppose any effort to begin drafting rules as a waste of time and money. The governor-elect made that decision after a strong sales effort by people who did not like the idea of allowing uranium to be mined in the state.

I’ve spent some time on the phone with Ben Davenport, the leader of one of the main opposition groups. He told me that he and his group are strongly pronuclear and believe that nuclear energy is the cleanest and best way to produce electricity. However, Mr. Davenport and his group believe that mining uranium is the dirty end of the business that should be done somewhere else.

I believe that the established nuclear energy interests in the state have missed a good opportunity to build an effective coalition that would take advantage of a teachable period to help people understand more about nuclear energy, the basic materials that enable it to function, the measurably minor health and environmental impacts associated with modern mining, and the economic benefits that result from materials extraction from the earth.

The people who are already in the nuclear industry are the ones who are most likely to understand that it is a safe, clean, and productive industry with the ability to provide great benefits to society. We need to practice our ability to more clearly communicate those aspects of our business to a public that has been subjected to many negative perspectives, often from people with economic interests in spreading fear, uncertainty, and doubt about our technology.

One aspect of economic development that seems to elude most people who do not live in Texas, Oklahoma, or Alaska is that businesses that pull valuable materials out of the earth are essentially finding money that makes the resource pie larger for all of us. Though mining opponents claim that there is plenty of uranium available on the world market—and they are correct under current conditions—they fail to understand that the money spent to purchase that uranium from somewhere else goes to the supplier region and is spent there.

Money used to purchase uranium in Virginia, on the other hand, stays in the United States. It ends up in the pockets of people who shop locally, dine in local restaurants, buy propane from local distributors, pay mortgages to local banks, and send their children to local schools. The valuable material adds wealth and capability, especially compared to simply leaving the material resting in the ground.

Because the nuclear industry in the United States grew up after the Cold War weapons program, we ended up with a geographically dispersed industry that prevents the kinds of sensible concentrations that yield substantial scale benefits to most other industries. Virginia-based companies have the opportunity to streamline the nuclear fuel fabrication supply chain and to take advantage of synergies that result when there are several different employers looking for people with similar skill sets in a geographic area.

There are potential political and public acceptance benefits for concentrating a complete industry supply chain in a defined geographic area. That is especially true when the industry is something disruptive that has complex or unique features that require knowledgeable communicators who can help the public and the politicians understand the impact of their decisions on the industry.

Many of the elements of this kind of concentration exist in southern Virginia, where there are nuclear power plant vendors, nuclear fuel suppliers, a nuclear power plant operator, a nuclear capable shipyard, nuclear engineering programs at regional universities, and a number of nuclear-powered ships. Unfortunately, many of these elements are not allowed to talk to each other and have a long history of maintaining “radio silence” among their neighbors and friends.

Here is a vision that I would love to see being pursued—I’d like to see the companies that are already engaged in the business of creating finished actinide fuel components and the machines that use those finished assemblies talk with the people who own a large uranium deposit with a potential worth of $7 billion. The same people own several thousand acres of land surrounding that deposit.

The discussions need to include businessmen who are already operating successful enterprises and are devoted to improving the foundations of the local economy. I’d like them all to think and talk about the possibility of siting additional training facilities or laboratories related to fuel conversion, enrichment, and fabrication on the site. The site might even be suitable for demonstration and test reactors that can serve as long term training facilities.

There is already usable railroad infrastructure in the area that is connected to one of the most capable ports in the world. The manufactured parts required for small modular reactors need to be produced somewhere; why not in some of the places in southern Virginia with a long history of manufacturing, with skilled populations that know how to work with their hands? Craftsmen can learn to master the demanding quality assurance requirements for nuclear parts; those skills have wide applications and are not easy to outsource.

Some of the people who have opposed the uranium mining make it very clear that they are opposed because they believe that the perceived risks outweigh the benefits. So far, those benefits have been described to them as the potential for several hundred good jobs sometime in the uncertain future, after all of the licensing and permitting work is complete.

The good, practically minded people in the area know that job promises do not provide any meals, do not help educate any children, and do not increase the customer flows at any local businesses. Perhaps, by applying some creative thinking and vision, good jobs can start more quickly and lay the groundwork for a sustainable industry that will enable long term prosperity and resilience.

The site of the Alliance for Progress in Southern Virginia has some rotating photos that include one with a beautiful rolling pasture, complete with a few dispersed bales of hay. I enjoy bucolic scenery, and believe strongly that appropriate nuclear energy facilities can fit into that scenery quite nicely. However, from an economic development point of view, there are few land uses that are less progressive or economically important than growing hay.

Uranium ore





Do oil and gas suppliers worry about nuclear energy development?

By Rod Adams

The world oil market is not a free market. Prices are manipulated by a small number of producers that adjust production rates to achieve desired prices that are high enough to provide maximum profits, without being high enough to encourage customers to aggressively pursue alternative energy sources.

That is the most important takeaway for attendees at the OPEC Embargo +40 summit held in Washington DC on October 16. Unfortunately, the meeting sponsors avoided acknowledging that nuclear energy is the alternative energy source that most worries established hydrocarbon suppliers. Nuclear has held that position since the early 1960s, when General Electric first won a head-to-head competition against coal to sell the Oyster Creek nuclear power plant.

Nuclear energy is reliable, virtually emission-free, and uses a widely distributed, abundant fuel source that is no longer subject to influence by the same producers that manipulate other fuel prices. Its cheap, clean heat can help turn coal, natural gas, and plants (vegetation) into liquid fuels that can be drop-in replacements for petroleum-based fuels.

A glittering cast of American energy pundits gathered in Washington DC for the summit held on the 40th anniversary of the 1973 OPEC oil embargo. Natural gas was the celebrity invitee everyone wanted to fawn over, while nuclear energy was an uninvited guest disrespected by almost all of the speakers whenever it was brought up.

The event was hosted by a group of retired large company executives and military flag officers who have served in roles in which they should have learned about the vital role that energy plays in our economy and in our politics.

That organization, Securing America’s Future Energy (SAFE), recently produced a document titled A National Strategy for Energy Security: Harnessing America’s Resources and Innovation 2013. There are only three uses of the word nuclear in that 125-page document. Two of those appearances are in the legends of graphs about energy sources; one is followed by the word “physics” in a list of education focus areas.

People who want to sell uranium, fabricate fuel, build and operate new plants, and stop a dramatic shift of leadership in technical innovation to other countries (e.g., Korea and China) must recognize that it’s past time to take action to force ourselves into the conversation, even if our technology makes some people uncomfortable.

During the summit, negative words about nuclear energy came from people representing numerous points in the political spectrum. Doubters included a man who had served as both chairman of the Atomic Energy Commission and as Secretary of Energy, a woman who had been the Secretary of State, a man who is the chief executive officer of a large ship operating company, and a man who is the CEO of one of the world’s pioneering nuclear power plant vendors.

Madeleine Albright, the former Secretary of State, described the Atoms for Peace program as a mistake that led to too many unsolved “unintended consequences.” Meanwhile, according to James Schlesinger (former AEC chairman and one-time energy secretary), cheap natural gas has killed the nuclear renaissance and no utility CEO is going to consider proposing a new nuclear plant to his board of directors.

But you asked a question about nuclear. Madeleine (Albright) mentioned the unintended consequences [of the Atoms for Peace speech]. There are unanticipated consequences. What we have seen as a result of shale oil development and shale gas development is natural gas so cheap now that nobody, no utility, is going to build a nuclear plant unless very heavily subsidized, and we are not seeing that. Philosophically we may be more interested in having more nuclear plants but as a practical matter, we’re just not going to see them. There is no nuclear renaissance coming.
(See SAFE video titled Insight from the Oval Office. Schlesinger’s comment dismissing nuclear energy starts at 24:55)

Adam Goldstein was asked if his Royal Caribbean Cruise Lines would be interested in nuclear power, a technology that has been replacing oil on large ships for more than 50 years. He chuckled uncomfortably—along with the audience—and stated that those ships do not have to carry passengers into Australia. He said that the costs would be prohibitive. He appeared unaware that his huge passenger ships are a tempting “early adopter” market for smaller reactor vendors; they operate baseload power plants running on low sulfur diesel fuel that costs more than $25 per MMBTU.
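Aside:  A rough fuel-cost comparison illustrates why a diesel-burning cruise ship could be an attractive early adopter. The $25 per MMBTU diesel price is from the text above; the nuclear fuel-cycle cost and plant heat rate below are illustrative assumptions of mine, not figures from the summit.

```python
# Fuel cost per MWh of electricity for two heat sources.
# Assumed heat rate: 10,000 BTU per kWh (~34% thermal efficiency).

DIESEL_USD_PER_MMBTU = 25.0       # from the article
NUCLEAR_FUEL_USD_PER_MMBTU = 0.7  # assumed typical LWR fuel-cycle cost
HEAT_RATE_BTU_PER_KWH = 10_000    # assumed

def fuel_cost_per_mwh(usd_per_mmbtu: float) -> float:
    """Fuel cost in $/MWh at the assumed heat rate."""
    btu_per_mwh = HEAT_RATE_BTU_PER_KWH * 1000  # 1 MWh = 1,000 kWh
    return usd_per_mmbtu * btu_per_mwh / 1e6    # 1 MMBTU = 1e6 BTU

print(f"diesel:  ${fuel_cost_per_mwh(DIESEL_USD_PER_MMBTU):.0f}/MWh")
print(f"nuclear: ${fuel_cost_per_mwh(NUCLEAR_FUEL_USD_PER_MMBTU):.0f}/MWh")
```

Under these assumptions, diesel fuel alone costs around $250 per MWh of electricity versus single-digit dollars for nuclear fuel, a gap wide enough to absorb substantial reactor capital cost. End Aside.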

Jeff Immelt described GE’s new jet engine, which improves fuel economy by 15 percent, as his company’s most innovative technology for reducing oil dependence. When pressed about nuclear energy, he said that his company is going to keep their nuclear energy division on life support because his “successor’s successor” might be grateful to have that option available. He never mentioned the ABWR, the PRISM, or the ESBWR.

Nuclear energy received a few positive mentions; most of the best came from Fred Smith, the founder and CEO of Federal Express, a world-wide logistics company founded in 1971, just two years before the OPEC embargo. Smith fundamentally understands the importance of a reliable supply of fuel for his trucks, planes, and delivery vehicles.

He is also well aware of the fact—through repeated experience—that apparent abundance can rapidly turn into price-spiking shortage. He knows what that shift means to his company’s profits and what it means to the profits of companies that sell oil or alternative energy equipment. He noted the ongoing nuclear renaissance in China and his interest in what he called “pocket nukes” that are receiving investments from Bill Gates and Babcock & Wilcox.

Aside:  SAFE recently posted A Conversation with Jeff Immelt and Fred Smith on YouTube. Immelt repeatedly sings the praises of natural gas and explains how his company is involved in the industry. His comments about the most innovative technologies come in response to a question that Becky Quick asked starting at 23:46. The discussion about nuclear energy begins with a question from Quick starting at 28:35. End Aside.

Carol Browner, who served as the Environmental Protection Agency administrator in a Democratic administration, insisted that nuclear energy has an important role to play in reducing fossil fuel dependence and reducing CO2 emissions.

Those examples show that the most receptive audiences for the nuclear energy alternative are people who buy a lot of fuel without selling any, and people who are deeply concerned about air pollution and climate change. The former understand that having additional supplies of reliable power will mean more competition to provide more stable and lower prices. The latter group knows that we cannot continue to dump CO2 into the atmosphere at an ever-increasing rate without unexpected consequences.

It’s time to get more aggressive in nuclear energy marketing. The uranium industry should teach people how heat is fungible in order to excite its potential supporters and capture attention from energy pundits.

Nuclear fission heat has already reduced the world’s dependence on oil; there is plenty of remaining opportunity. Nuclear energy pushed oil out of the electricity market in most of the developed world. Fission has replaced oil combustion in some large ships, but most ships still burn oil. Nuclear-generated electricity has replaced oil burned for locomotives, city trolleys, and space heat, but there is room for substantial growth in these markets. Uranium producers should be influential members of the coalitions that are working to electrify transportation systems. Fission heat, especially from higher temperature reactors, can replace oil heat in industrial processes, including those well-proven processes that can turn coal, natural gas, and biomass into liquid fuels.

Fission can also reduce oil use by pushing gas out of the power generation business, thus freeing up more natural gas for other uses. As the gas promoters love to point out, methane is a flexible and clean burning fuel. It is important to remind their customers that fuel burned in power plants is not available for any other use.

There should no longer be meetings in Washington in which serious energy observers hold sessions on reducing oil dependence without discussing uranium’s important role in achieving that goal. Nor should there be another meeting in DC about how natural gas is going to reduce our dependence on petroleum without any apparent recognition that gas and oil are closely related hydrocarbons that come from essentially the same places in the earth’s crust, are supplied by essentially the same multinational conglomerates, and are delivered to customers using very similar types of pipes, ships, and trucks.


Note: An abbreviated version of this article first appeared in the November 7, 2013 issue of Fuel Cycle Week.



Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

Excitement about U-235 as coal competitor – circa 1939 & 1940

By Rod Adams

Conventional wisdom says that the general public was introduced to atomic energy by the explosions at Hiroshima and Nagasaki. According to that version of history, the introduction instilled a strong dose of fear that remains to be overcome.

Some observers who like to paint nuclear energy in a negative light have stated that the program to build nuclear power plants grew from a desire to find a civilian use for a technology developed solely from a desire to create weapons.

Accounts of the early days after the discovery of the fission chain reaction, however, show that physicists who were engaged in the study of the atomic nucleus and the use of neutrons to produce artificial radioactivity were keenly interested in producing useful power. They were motivated not only by a scientific desire to gain a better understanding of the fundamental structure of the atom, but also by a desire to provide the world with a new power source to compete with coal and oil. The stories also show, however, that writers who covered the scientific advances often asked questions indicating that they envisioned weapons or doomsday scenarios.

As a digital subscriber to the New York Times, widely referred to as “the paper of record,” I recently performed an archive search using the term “chain reaction” and a date range starting on 01/01/1938 and ending on 01/01/1944. The results of that search confirmed my suspicion that the atomic pioneers were primarily interested in power production—though, when pressed, they acknowledged the possibility of explosive energy release.

The search returned 10 articles published between February 1939 and March 1941, with no additional results after that date. Even before the Manhattan Project started, scientists apparently stopped discussing chain reactions in public. Some of the 10 pieces discovered were short inclusions in a regular column titled Science in the News. Here are sample quotes from those pieces showing atomic energy optimism:

Frederic Joliot, co-winner of the 1935 Nobel Prize for chemistry, is trying to find a way to make a $2 pound of uranium give up as much heat or power as is now obtained from burning $10,000 worth of coal.

Uranium atoms will do the firecracker trick under certain restrictions. If scientists can find practical means to set up uranium chain reactions, then it is estimated that it may be possible to obtain from one pound of uranium as much energy as is at present obtained from 1,250 tons of coal.

(Associated Press, Uranium as a Coal Substitute, New York Times, June 19, 1939)
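Aside:  The AP’s 1,250-ton figure holds up reasonably well against modern constants. The sketch below is mine, not the article’s; the energy per fission, coal heating value, and short-ton conversion are standard textbook values, assumed for illustration.

```python
# Back-of-envelope check of the 1939 claim that one pound of uranium,
# if fully fissioned, matches on the order of 1,250 tons of coal.
# All constants are modern textbook values (my assumptions).

MEV_TO_J = 1.602e-13            # joules per MeV
AVOGADRO = 6.022e23             # atoms per mole
ENERGY_PER_FISSION_MEV = 200.0  # approx. energy released per U-235 fission
U235_MOLAR_MASS_G = 235.0       # grams per mole
GRAMS_PER_LB = 453.6
COAL_J_PER_KG = 29e6            # ~29 MJ/kg, typical bituminous coal
KG_PER_SHORT_TON = 907.2

# Energy released by completely fissioning one pound of U-235
atoms_per_lb = (GRAMS_PER_LB / U235_MOLAR_MASS_G) * AVOGADRO
joules_per_lb_u235 = atoms_per_lb * ENERGY_PER_FISSION_MEV * MEV_TO_J

# Equivalent mass of coal
joules_per_ton_coal = COAL_J_PER_KG * KG_PER_SHORT_TON
tons_coal_equivalent = joules_per_lb_u235 / joules_per_ton_coal

print(f"{tons_coal_equivalent:,.0f} tons of coal per pound of U-235")
```

Depending on the coal grade assumed, the result lands between roughly 1,200 and 1,500 tons—the same order of magnitude as the 1939 estimate. End Aside.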

Roberts and Kuper agree that “a chain reaction cannot be ruled out definitely for either slow or fast neutrons,” but decide that “there is no evidence of any kind that such a reaction will really occur.” They throw more cold water over dreamers by showing that uranium has not very great economic advantage over coal even if it could be used. “Uranium oxide (96 per cent pure) sells for approximately $2 a pound, which is roughly equal to the price of a ton of coal at the mine. In terms of energy dollar—uranium is cheaper by a factor of 8.5.”

Though this may look good to a financier, Roberts and Kuper point out that as the demand for uranium increases so does the price. In the end further refinement would be necessary and the limited supply of high-grade ore would soon be exhausted. “If uranium were to replace 500,000,000 tons of coal used annually in this country,” argue these skeptics, “the amount of uranium consumed would increase 15,000 per cent.”

(Kaempffert, Waldemar, Atomic Energy From Uranium, The New York Times, October 22, 1939)

There was also a lengthy front-page article titled Vast Power Source In Atomic Energy Opened by Science published on May 5, 1940. That article documented a high level of public interest in the new discoveries and described an optimistic attitude among both academic and industrial researchers. That article provided technical information that I had previously thought was a closely-guarded, Manhattan Project secret.

A natural substance found abundantly in many parts of the earth, now separated for the first time in pure form, has been found in pioneer experiments at the Physics Department of Columbia University to be capable of yielding such energy that one pound of it is equal in power output to 5,000,000 pounds of coal or 3,000,000 pounds of gasoline, it became known yesterday.

The discovery was announced in the current issue of The Physical Review, official publication of American physicists and one of the leading scientific journals of its kind in the world.

Professor John R. Dunning, Columbia physicist, who headed the scientific team whose research led to the experimental proof of the vast power in the newly isolated substance, told a colleague, it was learned, that improvement in the methods of extraction of the substance was the only step that remained to be solved for its introduction as a new source of power. Other leading physicists agreed with him.

A chunk of five to ten pounds of the new substance, a close relative of uranium and known as U-235, would drive an ocean liner or an ocean-going submarine for an indefinite period around the oceans of the world without refueling, it was said. For such a chunk would possess the power-output of 25,000,000 to 50,000,000 pounds of coal, or 15,000,000 to 30,000,000 pounds of gasoline.

Uranium ore, in which the U-235 also is present, is found in the Belgian Congo, Canada, Colorado, England and Germany, in relatively large amounts. It is 1,000,000 times more abundant than radium, with which it is associated in pitchblende ores.

(Laurence, William L., Vast Power Source in Atomic Energy Opened by Science, New York Times, May 5, 1940, P. 1)

The article continues on page 51 to provide a number of details that show a rather remarkable pace of advancement in understanding, considering the fact that only 18 months had passed since the initial recognition that neutrons could cause uranium to split into two pieces.

Not only is the energy-liberating process automatic and self-regenerating, it was explained, but it also is self-regulating. The energy liberated from the atoms heats up the water so that it turns into steam. When all the water supplied has been turned into steam, there is nothing left to slow down the fast-traveling neutrons, and fast neutrons just go through the uranium without breaking up its atoms and releasing its energy. This brings the whole process to a stop until more cool water is supplied.

As one leading physicist explained it, “the colder the water the better the reaction. The reaction is self-limiting because heat (generated by the split atoms) speeds up the neutrons and the faster the neutrons the less the reaction.”

“The faster you feed in the cold water,” the scientist added, “the faster the water will come out hot on the other side, because more neutrons will be slowed down and thus more atoms split and more energy is liberated. Thus the process is admirably suited for power generation.”

Because of the nature of the neutrons, even the slow-traveling ones, it was explained further, it is necessary to have a mass of at least five pounds, and possibly as high as twenty, to make the process work on a practical scale. In a smaller amount even low energy neutrons would escape into the open without splitting the initial “trigger-atom” that sets off the process. To start the process it is necessary for the neutron to remain inside the mass, so that it would enter the nucleus of an atom to start the splitting process.

One of the scientists explained the process of the energy-liberation from U-235 by comparing it to the burning of coal. Whereas coal uses oxygen to liberate its energy, he explained, the U-235 uses slow neutrons for the same purpose. The process of combustion in the case of the U-235, he added, is, atom for atom, 100,000,000 times as effective as is the case in the combustion of coal. However, as the atomic weight of the uranium is 235, compared with 16 for the oxygen and 12 for the carbon, there are fewer uranium atoms for a given weight than there are oxygen and carbon atoms. This reduces the energy relations of the U-235, compared with coal, to a ratio of 5,000,000 to 1.

There are several new methods being considered for increasing the yield of the new substance to large-scale amounts. But as to this, scientists greet the questioner with a profound silence.

(Laurence, William L., Vast Power Source in Atomic Energy Opened by Science, New York Times, May 5, 1940, P. 51)
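Aside:  The 5,000,000-to-1 figure in the passage above follows directly from the article’s own round numbers, as a quick check shows.

```python
# Reproducing the 1940 article's weight-for-weight arithmetic:
# fission beats combustion ~100,000,000-fold atom for atom, but a
# U-235 atom outweighs a carbon atom 235-to-12, so a pound of uranium
# contains correspondingly fewer atoms than a pound of carbon.

PER_ATOM_RATIO = 100_000_000    # fission vs. combustion, atom for atom
CARBON_ATOMIC_WEIGHT = 12
U235_ATOMIC_WEIGHT = 235

per_pound_ratio = PER_ATOM_RATIO * CARBON_ATOMIC_WEIGHT / U235_ATOMIC_WEIGHT
print(f"{per_pound_ratio:,.0f} to 1")
```

That works out to roughly 5.1 million to 1, which the Times rounded to 5,000,000 to 1. End Aside.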

On May 12, 1940, the New York Times Science in the News column written by Waldemar Kaempffert, its longtime science editor, included a section titled Atomic Power—Not Yet. That piece, published just one week later, had a completely different tone and expressed a sense of impossibility for the near term development of the technology:

Last week’s hullabaloo about atomic power naturally prompted this department to look into the possibility of dispensing with coal and oil. It is our sad duty to report that the prospect is not bright. If there is any thought of Germany’s making use of the work done at the universities of Columbia and Minnesota, and the General Electric Company’s laboratories, it must be dismissed. Yet physicists never were so near to doing away with coal and oil as sources of energy and turning to ordinary matter as they are now.

It takes about 100 hours to make one microgram of uranium-235 or 1,000,000 hours or over a century to make one gram. About 100 grams (a little more than three ounces) would be required to make serious experiments in generating energy on a small scale. At least five pounds would be required to drive an ocean liner. It may be that a more rapid means of producing U-235 than that now available may be evolved. But the prospect of using U-235 in the present war is zero.

As matters stand we are not likely to spend centuries in accumulating the necessary uranium-235. By the time we had it so much would be known about the structure of matter that easier means of developing power from the atom would have been discovered. Accordingly, this department has decided to place the usual order for coal to be shot into the cellar, and preparing itself for the usual task of shoveling expensive black lumps into a hungry furnace.

(Kaempffert, Waldemar, Science in the News: Atomic Power—Not Yet, The New York Times, May 12, 1940)

I was immensely curious about this abrupt turnaround from the same publication in such a short period of time. The mystery was solved when I learned that Germany’s push west into the Low Countries and France began on May 10, 1940. Given the expressed concern that Germany might be actively pursuing the technology, it’s possible that the discouraging tone was motivated by something other than a desire to tell the complete truth.

It seems quite apparent that if the fission chain reaction had been discovered just a few years earlier or later, nuclear energy history would not have been defined by explosives—but by steady, controllable, non-coal power produced in simple piles, designed to turn heat into useful power in ways similar to those used to turn coal combustion heat into useful power.






Realistic look at Small Modular Reactors in Idaho

By Rod Adams

From October 30 through November 1, 2013, a group of about 150 people with questioning attitudes about small, modular reactors (SMRs) met in Idaho Falls, Idaho.  They were treated to a number of presentations that described the technical progress that has been made so far and also provided a realistic, sobering look at the long, challenging development path that must be traversed to allow the technology to begin contributing to the world’s energy security.

A wide variety of organizations sponsored the meeting; there were reactor vendors, several supplier companies, and a couple of focused development organizations from Missouri.  (Come to think of it, the active involvement from the “Show Me” state might have had something to do with the fact that the meeting addressed a lot of hard questions with open-ended answers rather than being dominated by optimistic sales pitches.)

Though the Idaho National Laboratory (INL) was not a conference sponsor, it was an active participant.  On the first day of the event, the Lab provided a tour of several of its historical and operating facilities, including the EBR-I, the Advanced Test Reactor, and the Hot Fuels Examination Facility.

SMR tour in the EBR-1 control room – first nuclear reactor to generate electricity, in 1951

First devices powered by electricity from nuclear were four 200-watt light bulbs

The INL facilities tour also included several in-town labs in Idaho Falls that perform research that does not require the isolation of INL’s desert facilities.  One of the most impressive facilities on that tour was the Human Systems Simulation Laboratory (HSSL).  It is a fully reconfigurable, digital representation of a nuclear power plant control room with impressive fidelity.

According to the technicians supporting the tour, it is possible to shift the HSSL from one plant’s control room to another in approximately 30 minutes.  As a national lab, INL has been able to develop agreements and relationships with a number of different simulator vendors and utilities.  INL is a trusted agent that has shown that it can help distribute important operating experience that should be shared and protect intellectual property that should not be shared.

Our tour guides “took the fifth” with a chuckle on a question about the ability of the HSSL system to display commercial high-definition TV (there was a World Series game scheduled on the day of our visit).

John Grossenbacher, the Director of the Idaho National Laboratory, gave a talk that identified several important contributions that the national labs, his own in particular, can make to the development of small modular reactor technology.  He reminded the attendees that there is plenty of space within the 860 square miles of lab property to site first-of-a-kind reactors if needed.  Several INL scientists participated in the conference, including Dr. Piyush Sabharwall, who was recently featured in an ANS Nuclear Cafe post about his selection as the 2013 Young Member Excellence Award recipient.

Jeff Sayer, the Director of the Idaho Department of Commerce and Chairman of the Leadership in Nuclear Energy Commission 2.0, served as the master of ceremonies for the conference.  Throughout the event, he reiterated his home state’s long history in nuclear energy development, its record of having been the site for more than 50 first-of-a-kind small reactors, and its interest in continued involvement in nuclear energy development.

Brad Little, the Lieutenant Governor of Idaho, provided a luncheon address that reinforced what Mr. Sayer had been telling us.  Unlike many politicians invited to technical conferences, he attended the entire day’s sessions and incorporated some of what he heard that morning into his enthusiastic and engaging talk.

Based on the number of references by other speakers after she gave her talk, Andrea Jennetta, the publisher of Fuel Cycle Week, certainly made a lasting impression.  Her talk was titled “Industry Observer, Provocateur – Uranium Saves Lives… And Other Shocking Truths about the Science and Politics of Nuclear Power.”  Among her many memorable points was an admonition to nuclear technology promoters to remember that there is “no ‘R’ in safe.”  That is, she asked people to stop trying to sell their systems based on the idea that they are “safer” than the existing systems, which have not exposed anyone to dangerous radiation doses in 50 years.

Aside:  Jennetta has several more people to convince, including the NRC and the scientists that recently wrote a pronuclear letter titled To Those Influencing Environmental Policy But Opposed to Nuclear Power. End Aside.

She also made the bold statement that nuclear energy’s ONLY obstacle was POLITICS.  Several later speakers stated that they believed that economics was an equally important obstacle, but Andrea insisted that most of the most difficult economic challenges have been imposed by political processes.

Paul Genoa, Senior Director of Policy Development for the Nuclear Energy Institute, described the importance of improving the dysfunctional markets that have resulted in the recent decisions to close two relatively small existing reactors.  He agreed with many of the motivations for building smaller, simpler, factory-produced power plants, while also warning that there might not be a market if we do not all work together.  He recommended action to fix the way that existing market rules place little or no monetary value on important characteristics like voltage support, steady baseload output, and ultra-low emissions, all of which are strengths of nuclear energy.

Finis Southworth, the Chief Technical Officer for AREVA, described his company’s expertise in supplying a wide variety of nuclear fuel for existing power plants and offered the somewhat surprising fact that qualifying a slightly modified light water reactor fuel might cost $100 to $200 million, while qualifying a brand new fuel for a different kind of coolant might require $1 billion and at least 10-15 years’ worth of lead time.  That explains why all of the SMR projects that are planning to have commercial offerings before 2025 are light water reactors using only slightly modified fuel.

Newport News Shipbuilding (NNS) had three representatives at the event.  Bob Granata, Vice President, Operations and Technology Development, informed the power plant vendors that shipbuilders have been manufacturing and assembling modular nuclear systems for many years.  He described how the current process for building Virginia class submarines has some modules of the ship being made by Electric Boat Company in New England and others being manufactured by NNS in Virginia.  The key to the program’s success is design and processes that ensure that those modules fit together.  The shipyard is ready for orders to “bend metal” whenever the vendors have finished their designs and found power plant customers.

Mike McGough, Chief Commercial Officer of NuScale Power, described his company’s history and unique technology.  The NuScale concept of building a 540 MWe power plant from a collection of twelve identical, independently contained, natural circulation 45 MWe reactors, each with its own power turbine, is quite different from any of the other proposed systems.  As McGough reminded everyone, NuScale opened up its initial licensing dialog with the NRC in 2008.  McGough claimed to have been happy that NuScale was later joined in the race to commercialization by B&W and Westinghouse as they each recognized the potential value of the smaller reactor market.

Throughout the event, it was apparent that the state of Missouri is very interested in the potential of SMRs as a statewide development effort.  It was difficult to join any small group conversation without it including someone from a Missouri organization; there were representatives there from the state economic development office, from several universities, from Ameren, and from several potential suppliers.

Missouri has formed a strong, bipartisan coalition with those groups plus support from a Republican legislature, a Democratic governor, and the public power cooperatives.  The state has selected Westinghouse as its partnering vendor; everyone I talked to is eagerly awaiting the announcement of the selection for the second Department of Energy SMR Funding Opportunity Announcement (FOA), believing they have made a very strong case.

One of the best things about the event was the opportunity to engage in frank discussions with experienced people who understand that major new developments do not happen quickly in the nuclear energy industry, but who also understand the importance of making steady progress.  The vendors all acknowledged that their systems will be tough sells in the US at current natural gas prices, but a number of attendees reminded everyone that no one really knows what natural gas prices will be in the 2022 to 2025 time frame, when the first SMRs will begin commercial operations.  Even more importantly, no one knows what the prices will be during an SMR’s 60-year lifetime.

As some speakers pointed out, natural gas prices in Europe, parts of South America, and the Far East are already high enough to encourage a reasonably high level of excitement about SMR development.  With ongoing concern about climate change, it is always worthwhile to invest in a zero emission power source that can compete with methane (aka natural gas).  That fuel’s climate-related boast is that it is… only half as dirty as coal.





European renewable energy subsidies under fire from major power generators

By Rod Adams

The leaders of electric power companies owning half of Europe’s generating capacity have joined together to inform the European Union that its policies are leading to a dangerously unstable power grid. According to GDF Suez CEO Gerard Mestrallet,

“The risk of black-outs has never been higher.”

That is a pretty strong statement of concern. In addition to worrying about grid stability, the power suppliers are also concerned that their continent is not on a path to achieve its CO2 emissions targets and they are worried about the response of customers that continue to see their electricity bills rise at the same time that they read about ever lower wholesale prices.

The problems stem from a series of decisions that have been made with the expressed intent of achieving three goals – improved energy security, reduced greenhouse gas emissions, and reduced energy prices. However, the EU’s decisions to subsidize selected technologies, to flood the market for carbon emissions credits, and to discourage less popular, ultra-low emission technologies — like nuclear energy — have increased prices for energy users, slowed CO2 emissions reductions, and reduced grid stability.

Not surprisingly, outside observers have not yet noticed the grid stability risks. Most parts of the European grid, especially in the anchor countries of Germany and France, have experienced fewer power outages than in North America, but the people who are intimately involved in supplying the grid understand the importance of anticipation and early action as margins get thinner. Instability happens in complex systems at unexpected times when they operate close to their capacity limits.

Europe retains abundant electricity generating capacity, but more and more of its nameplate capacity is in the form of unreliable wind and solar systems that can only generate electricity when nature decides to supply the motive force. Too much of Europe’s capacity cannot be scheduled by either humans or their automated systems. Renewable power systems have production rates that are only coincidentally related to electricity demand; production is often too much, too little or not in the right place.

In response to frequent periods of low or negative wholesale prices and lack of compensation for providing on-demand capacity, generators are mothballing unprofitable generating plants. According to an October 12, 2013 article in The Economist titled How to lose half a trillion euros, European power suppliers have shuttered more than 30 GWe of modern gas-fired generating capacity and are considering shutting down even more.

Electricity generating companies, formerly predictable investments suitable for widows and orphans, are losing money and investor interest. Their market capitalization has fallen by more than a third from its peak, with the worst performances occurring in German utilities like E.ON, whose share price has dropped by 75%.

In contrast to the situation in North America, natural gas is not cheap in Europe; prices are about three times as high as they are in the US. In 2012, natural gas in the US was so cheap that it captured a substantial portion of the electricity fuel market from coal. That situation directly affected the European market.

Since coal is much easier to transport than natural gas and since the Federal Energy Regulatory Commission does not have to approve coal export permits — as long as the exports can pass through existing terminals — the coal industry’s natural response was to market its surplus coal production capacity outside the US.

Though the EU puts a price on carbon dioxide dumping through a carbon emissions trading system (ETS), it issued so many permits that the price has fallen to just 5 euros per ton. Part of the reason for the drop in price is the weakness of the European economy and the associated weakness in electricity sales. At current carbon emission permit prices, it is often cheaper to buy permits and burn coal imported from the US than to burn natural gas imported from Russia or the Middle East. In 2012, the European market accounted for 45% of a record US coal export volume of 114 million metric tons. CO2 emissions in Europe continued to fall in 2012 compared to 2011, but the rate was less than expected due to the increased use of coal.

Partly as a result of the various subsidies and mandates provided to selected technologies (primarily wind and solar, but also biomass), there is a growing gap between the wholesale trading prices published for electricity sales and the prices that consumers pay for electricity. For example, in Germany, peak wholesale prices averaged 38 euros per MW-hr while retail electricity prices averaged 285 euros per MW-hr. While some of that price differential is due to the cost of transmission and distribution, it also covers the above-market feed-in tariffs paid to wind and solar generators. That kind of price differential makes the electricity generators nervous about a customer backlash; consumers read about how electricity is getting cheaper, yet they see their own bills increasing.
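The arithmetic behind that gap is worth making explicit. The sketch below uses only the two German averages quoted above; the point it illustrates is how large a share of the retail price is not explained by the wholesale market at all.

```python
# Rough arithmetic on the German wholesale/retail gap described above.
# Both figures are the averages quoted in the post, in euros per MW-hr.
wholesale_peak = 38.0   # average peak wholesale price, EUR/MW-hr
retail = 285.0          # average retail price, EUR/MW-hr

gap = retail - wholesale_peak           # portion not set by the wholesale market
gap_share = gap / retail                # as a fraction of the retail price

print(f"Gap: {gap:.0f} EUR/MW-hr ({gap_share:.0%} of the retail price)")
```

At these figures the gap is 247 euros per MW-hr, roughly 87 percent of the retail price, which is what has to cover transmission, distribution, and the feed-in tariff surcharges.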

There is a risk that the EU policy makers will respond to the challenges illuminated by the generating companies with an effort to patch up the existing system. They are under pressure from environmental groups and renewable energy system suppliers to continue to provide preferential treatment to the currently popular, but unreliable power sources.

The generating company CEOs would prefer different solutions; they want a technology-neutral climate goal and a halt to special subsidies and set-asides for favored technologies. They recommend implementing capacity payments for reliable generation systems, especially if the preferential grid access for renewable power systems remains in place. They believe they are well positioned to make sound technical choices that would provide the most reliable service at the lowest overall cost, if they are given clear, fair rules.

Though the articles I’ve read so far on the topic do not say it in so many words, my guess is that the electricity generating company leaders recognize that nuclear energy would compete quite well under their preferred regulatory and incentive regime.





Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.


How painful will the coming spike in natural gas prices be?

By Rod Adams

There is a good reason for American nuclear energy professionals to learn more about the dynamics of the natural gas market. We have been told numerous times that cheap natural gas is making our technology less and less viable in the competitive marketplace. Natural gas (composed mostly of methane) is a terrific product, but it has been promoted as being capable of supplying a much larger portion of our overall energy demand. That promotional effort is putting us all at risk of a severe hangover when the low price bubble bursts.

I freely admit it; I am a contrarian who believes that the more the crowd pushes in one direction, the more beneficial it will be for me to move in the opposite direction. It is becoming more and more fashionable for casual observers of the North American energy market to make assertions about a long future of low natural gas prices that will benefit consumers and give energy intensive industries a competitive advantage in the world market.

In contrast, I am increasingly worried that there is going to be a painful spike in North American natural gas prices that will remind everyone that gas is a volatile commodity in both physical form and market price. Producers with supply that is not committed to long-term contracts will benefit enormously; consumers will suffer, independent power producers will suffer, and industrial customers will suffer, especially if they have recently made investments under an assumption that gas prices will remain low.

There are a number of factors in the multi-term differential equation that governs the balance between supply and demand in the gas market that are aligning to create an increasingly tight market.

  • Multinational companies like Sasol and Shell are planning or building gas-to-liquids (GTL) plants in the United States.
  • Drilling companies are scaling back drilling, especially in gas-rich areas.
  • The Department of Energy continues to approve export permits for liquefied natural gas (LNG).
  • The Environmental Protection Agency has proposed CO2 emissions limits on new power plants that cannot be met with the best available coal burning technology.
  • Five existing nuclear power plant units, with a combined total capacity of more than 4,000 MWe, have either been permanently shut down or have announced an imminent closure.
  • Pipeline gas exports to Mexico have doubled in the past five years. There are projects underway that will result in another doubling in the rate of export to Mexico as our neighbor’s production capacity falls.
  • Canada is planning several west coast LNG export facilities.

Though increasing natural gas prices might seem to be a potential boon for nuclear energy development, there will be negative economic effects whose overall impact is unpredictable. History shows that a dramatically higher energy price reduces or eliminates energy demand growth, leads to inflationary pressures, and contributes to the risk of increased interest rates. Each of those effects puts new nuclear power plant projects at risk. The high prices may not last long; those effects tend to work to eventually bring markets back into balance.

United States citizens are often surprisingly unaware of events and market trends in other portions of the world. Even within my circle of colleagues that are working in the energy business, few realize that cheap natural gas is an almost purely North American phenomenon. European prices are approximately 2.5-3 times higher than current US prices, while Asian LNG buyers are often paying 4 or 5 times as much per unit energy as consumers in the United States.

That helps to explain why so many other countries are still planning a significant increase in their nuclear electricity production capacity. Outside of the United States, the nuclear renaissance is still moving forward, but that is not necessarily helping the nuclear professionals who like living and working inside the United States. (I am one of those people; with three young grandchildren, I am not interested in living overseas.)

All of the above data points tell me that nuclear power plant owners should be much more reluctant to shut down their operating reactors, especially if they are making that decision based on an assumption that natural gas prices are going to remain low for many more years into the future. While it can sometimes require more patience than is common in corporate boardrooms, a permanent decision to destroy a generating asset that meets all possible emission standards and does not burn natural gas seems to be a very short-term decision. Observing those kinds of decisions makes my brain replay a refrain from a Jimmy Buffett song: "It's a permanent reminder of a temporary feeling."

The data also tell me that Southern Company and SCANA are going to be pleased that they made the long-term choice to expand their nuclear energy generating capability at just the right time to take advantage of low interest rates, low energy prices, low wage inflation, and new, passively safe nuclear power plant designs. Even though it seems to be a remote possibility today, someday in the near future their customers are going to be happy that they are served by utilities that did not follow the crowd down the seemingly easy path of increased natural gas dependence.






Anniversary – 80 years ago, Leo Szilard envisioned neutron chain reaction

By Rod Adams

On September 12, 1933, slightly more than 80 years ago, Leo Szilard became the first person to imagine a reasonable mechanism for releasing the vast quantities of energy known to be stored in atomic nuclei. As it turned out, his concept worked the first time it was tried, on December 2, 1942.



Szilard’s inspired thought occurred on a dreary fall afternoon in London at the intersection of Russell Square and Southampton Row. Earlier that day, Szilard had read an article in The Times that described a talk given by Ernest Rutherford about breaking down atomic nuclei using accelerated protons. There was a brief mention in the article about the possibility of using recently discovered neutrons to transmute nuclei, but the article gave the impression that Rutherford thought that fast-moving protons were a better option because they could be accelerated with reasonably achievable voltages due to their positive charges. (Note: James Chadwick announced his discovery of neutrons on February 27, 1932, in a letter to the British science journal Nature.)

According to the newspaper account, Rutherford dismissed any possibility that the process of bombarding atomic nuclei would result in a net energy output, even if each individual reaction produced densely concentrated energy. The total amount of energy required to get the protons up to the required velocity would be substantially more than the amount of energy released when the nucleus broke apart. According to Rutherford, anyone who believed that nuclear reactions would be a potent source of useful energy was talking “moonshine”.

Not only did Szilard have a natural tendency to regard such assertions as a challenge, but he also had many motivations for thinking about ways to liberate atomic energy. He had been engaged for some time in thoughts about releasing the energy stored in atomic nuclei as a means of producing the power required to travel into space. Those thoughts had been inspired by conversations with Otto Mandl about ways to save mankind from itself by heroically succeeding in developing the means to leave our home planet.

He had read a novel by H. G. Wells titled The World Set Free that described a world in which atomic energy had been liberated in the service of mankind, but Szilard claimed that he considered that story as mere fiction and did not credit it as any part of his inspiration. Coincidentally, there is a line in Wells’s book, which was published in 1914, that predicted that someone would solve the puzzle of releasing atomic energy as early as 1933 with a combination of “induction, intuition and luck”.

The problem which was already being mooted by such scientific men as Ramsay, Rutherford, and Soddy, in the very beginning of the twentieth century, the problem of inducing radio-activity in the heavier elements and so tapping the internal energy of atoms, was solved by a wonderful combination of induction, intuition, and luck by Holsten so soon as the year 1933.

Finally, Szilard was a recently emigrated refugee from Nazi Germany. He was a native Hungarian, but had been living and working in Germany since the end of the First World War. He had been thinking deeply about the implications of the Nazis developing weaponry based on some of the nuclear physics concepts that he and his colleagues had just begun to recognize experimentally.

As Szilard later recounted the story, when he reached the intersection of Southampton Row and Russell Square, a red light caused him to pause, giving time for his fertile imagination to engage. Then the idea struck him: if a neutron entered an atomic nucleus, and the subsequent reaction released two neutrons, it would be possible to produce a chain reaction. Since neutrons have no charge, each of those newly released neutrons would be able to travel freely through matter until it struck another nucleus.

If there was a sufficiently large mass, with a sufficient purity of the material whose nuclei released two neutrons every time it was hit with one neutron, Szilard realized that there was a distinct potential for industrial-scale power sources. He recognized immediately that there was also a possibility that the reactions could be produced in a manner that was rapid enough to cause an explosion of great force before the material was scattered and the reaction stopped.

At the time of his creative thought, Szilard had no idea what kind of experiments would be needed to find the right material or who would be willing to fund the experiments. He did not have a job, did not have a laboratory, and did not have much experience in developing experiments. All he had was enough money saved up from previous work to support himself for about a year while living in a London hotel, taking long baths, keeping up with published papers and eating out at restaurants. He spent the next few months after September 1933 thinking, reading, and occasionally writing down his thoughts. This process was similar to that which Szilard had followed when he earned his PhD less than a year after starting his focused study of physics.

On March 12, 1934, Szilard applied for a patent that was eventually merged with several other patents into Improvements in or relating to the Transmutation of Chemical Elements. The key improvement that Szilard proposed over the work done by people like Rutherford and the Joliot-Curies was using neutrons and chain reactions instead of protons or alpha particles.

It is unfortunate that Szilard’s contribution to the improvement of the human condition has been too often overlooked.

Note: Much of the above is adapted and summarized from Richard Rhodes, "The Making of the Atomic Bomb."






Why don’t we “mothball” shutdown nuclear plants?

By Rod Adams

In May 2013, the United States lost a perfectly functional and well-maintained nuclear power plant, the Kewaunee Nuclear Power Plant. Last week, Entergy announced that it would be shutting down a second such plant, Vermont Yankee, after its current fuel load has been consumed. In both cases, the owners indicated that the plants were no longer economical due to market conditions; namely, the low price of natural gas, the presence of subsidized renewable energy suppliers that can pay the grid to take their power and still receive revenue for every kilowatt-hour generated, and an insufficient market demand for electricity in the markets where the plants were attempting to sell their output.

Vermont Yankee Nuclear Power Plant

Under similar market conditions, conventional power plant owners might decide to shut down a plant but make provisions to ensure that it could be restored to service if needed, or if market conditions change through increased revenue opportunities, lower operating costs, or both. However, in each of the nuclear power plant cases under discussion, the owners decided that their best course of action was to announce a permanent shutdown with the concurrent action of giving up the plant operating license. In both cases, the plant operating licenses had been recently extended for an additional 20 years.

Giving up an operating license for a nuclear power plant in the United States is a permanent choice with implications that run into the many billions of dollars; there has never been a situation where a plant owner gave up an operating license and was subsequently granted another license to operate that plant.

The closest precedent available is the Tennessee Valley Authority's Browns Ferry. All three units were shut down in 1985, and each was later restored to operating status (in 1991, 1995, and 2007, respectively). The difference at Browns Ferry was that the owner (TVA) never gave up the operating licenses.

Unfortunately, there are several aspects of current rules that discourage nuclear plant owners from choosing to mothball plants.

There are only two license choices available to the owner of a nuclear power plant. The owner can maintain an operating license, which costs a minimum of $4.4 million per year in fees to the Nuclear Regulatory Commission, or can exchange the operating license for a "possession only" license. That costs just $231,000 per year, plus the cost of any additional regulatory services, which are billed to licensees at a rate of $274 per staff hour. (Note: Some operating licensees pay more than the minimum because they have special conditions that require additional regulatory services; those services are billed at the same $274 per staff hour rate.)
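To put those two fee regimes side by side, the sketch below computes the base annual savings from switching licenses and the number of billed staff hours that would erase it. The fee figures come from the post; the break-even framing is my own illustration, not an NRC calculation.

```python
# Back-of-the-envelope comparison of the two NRC license fee regimes
# described above. Figures are those quoted in the post.
operating_fee = 4_400_000    # minimum annual fee, operating license (USD)
possession_fee = 231_000     # annual fee, "possession only" license (USD)
staff_hour_rate = 274        # USD per NRC staff hour for additional services

base_savings = operating_fee - possession_fee

# Billed regulatory staff hours per year that would consume the fee savings:
breakeven_hours = base_savings / staff_hour_rate

print(f"Base annual fee savings: ${base_savings:,}")
print(f"Break-even: about {breakeven_hours:,.0f} staff hours per year")
```

The fee difference alone is a bit over $4 million per year, which, as argued below, is small next to the replacement cost of the plant itself.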

In addition to the annual operating license fee, a company that seeks to maintain an operating license must maintain a certain level of staff proficiency and must maintain a security force sized to prevent a design basis threat from gaining control of the facility and causing the plant to release radioactive material. Of course, a plant that is in a state of semi-permanent shutdown could probably make a successful case for maintaining a substantially reduced staff complement; there might already be a reduced staffing precedent available from the long-term shutdown and eventual restoration of TVA's Browns Ferry.

The owners of a plant that is being held in a semi-permanent shutdown state could also make a good case to the NRC that they should be allowed to defer any required investments in new capabilities until such time as they decide that they are going to restart the plant. A semi-permanently shutdown plant would not need to purchase any new fuel or pay any additional contributions to the nuclear waste fund; those contributions are based on the amount of nuclear electricity generation.

However, during any period of semi-permanent shutdown, a nuclear plant will be consuming days of potential operation; nuclear plant operating licenses are issued on a strict calendar basis with no ability to reclaim days. Even if there is no stress or strain put on any plant components because the plant is shut down and cooled down, the calendar keeps turning pages. Owners are logically reluctant to keep up the spending on a plant that might only have a few years of life remaining after the market finally turns around.

Without access to the detailed financial analysis used by Dominion and Entergy to determine that the best course of action was to permanently shut down Kewaunee and Vermont Yankee, I have to make an educated guess about the considerations that drove their decision. It seems highly unlikely that the operating license fee difference was enough to cause utilities to give up an asset whose replacement cost would be at least $3 billion–$5 billion. The ongoing personnel costs might have been high enough to tip the balance, but I doubt it.

I got a hint in a Bloomberg article about Entergy’s decision to shut down Vermont Yankee.

The reactor was expected to break even this year, with earnings declining in future years, the company said. Closing it will increase cash flow by about $150 million to $200 million through 2017.

(Emphasis added.)

That’s right. Entergy has determined, and announced to the investment community, that closing down a production facility that produces about 4.8 billion kilowatt hours of electricity each year using fuel that costs just 0.7 cents per kilowatt hour will result in a substantial improvement in their cash flow. That is true even though the plant will not be producing any product and even though the company will incur some transition costs.
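A rough check on those numbers is straightforward. The generation and fuel-cost figures below come from the paragraph above; the wholesale price is an assumption of mine for illustration, since the post does not quote one.

```python
# Sanity check on the Vermont Yankee figures quoted above.
generation_kwh = 4.8e9       # annual output in kWh (from the post)
fuel_cost_per_kwh = 0.007    # 0.7 cents per kWh (from the post)

annual_fuel_cost = generation_kwh * fuel_cost_per_kwh

# ASSUMPTION: an illustrative wholesale price of 4 cents/kWh, not a figure
# from the post, to show the scale of revenue the plant walks away from.
assumed_price = 0.04
gross_revenue = generation_kwh * assumed_price

print(f"Annual fuel cost: ${annual_fuel_cost/1e6:.1f} million")
print(f"Gross revenue at 4 cents/kWh: ${gross_revenue/1e6:.0f} million")
```

Even with fuel costing only about $34 million per year, a break-even plant in a depressed wholesale market generates little margin, which is why the cash-flow argument in the Bloomberg quote could carry the day.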

The jewel for Entergy is that the owner of a plant in a decommissioning status has access to the decommissioning fund that was set aside at the time that the plant was built and received additional funds over the years that the plant operated. In the case of Vermont Yankee, the decommissioning fund balance is $582 million. Tapping that fund will allow the company to book more revenue.

There is one more factor that is probably more important for Entergy than it was for Dominion. Removing production facilities in a market that is suffering from low prices as a result of insufficient market demand is a tried and true strategy for commodity suppliers. If enough production facilities stop producing the oversupplied product, it will enable the remaining facilities to raise prices to a more profitable level.

Since Entergy has a number of other facilities that sell into the Northeast U.S. electricity market, it will benefit when those price increases happen. Since Dominion’s Kewaunee was its only facility in the Midwest, it is hard to see any direct benefit to Dominion in the form of increased market prices.

I hope that your reaction to reading this explanation is to start thinking about ways to change the situation, before we lose any more emission-free, reliable, low-cost nuclear electricity production facilities.

Kewaunee Power Station



