
Research Reactor License Renewal Challenges

By Rod Adams

The process for renewing research and test reactor (RTR) licenses in the United States has been subject to lengthy delays and periodic backlogs since the early 1980s. Despite the time invested in improvement efforts, the process shows little sign of getting better. The difficulty, schedule uncertainty, and cost of renewing research reactor licenses add to the burden of owning and operating research reactors. The scale of the challenge may contribute to regrettable institutional decisions that maintaining operable facilities is not worth the trouble.

Here is the background that led me to those conclusions:

A couple of weeks ago, one of the email lists I read carried an intriguing press release announcing the renewal of the license for Dow Chemical Co.’s TRIGA research reactor located in Midland, Mich. The intriguing part of the story was that Dow had initially filed its application to renew the license in April 2009, and the 20-year extension was awarded on June 18, 2014, more than five years later. One of the more frequent contributors to the list had the following reaction:

Seriously? It took more than five years to renew a TRIGA license? That in itself might be an interesting story.

I followed up with a request for information to the Nuclear Regulatory Commission’s public affairs office. Scott Burnell replied promptly with the following information:

The background on the staff’s ongoing effort to improve RTR license renewal goes back quite a ways. Here’s a relevant SECY and other material:

http://pbadupws.nrc.gov/docs/ML0921/ML092150717.pdf

http://adamswebsearch2.nrc.gov/webSearch2/main.jsp?AccessionNumber=ML120930333 (March 2012 Commission meeting transcript)

http://adamswebsearch2.nrc.gov/webSearch2/main.jsp?AccessionNumber=ML12087A060 (March 2012 Commission meeting staff slides)

http://adamswebsearch2.nrc.gov/webSearch2/main.jsp?AccessionNumber=ML12240A677 (regulatory basis for rulemaking to improve process)

I’ll check with the staff Monday on what information’s available re: staff hours on the Dow RTR renewal review.

Burnell sent the staff hour estimate for renewing the Dow TRIGA reactor license. Not including hours spent by contractors, the NRC staff took 1,600 hours to review the renewal application. Since Dow is a for-profit company, it was charged $272 per hour, for a total of approximately $435,000 plus whatever contractor costs were involved. That amount covers only the cost of regulator time, not the salaries and contracts paid directly by Dow to prepare the license application, respond to requests for additional information (RAI), and engage in other communications associated with the application.
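As a quick sanity check, here is a minimal sketch of the fee arithmetic described above; the hour count and hourly rate are the figures quoted in this post, and the variable names are mine.

```python
# Rough check of the NRC review fee for the Dow TRIGA renewal,
# using the figures quoted above (1,600 staff hours at $272/hour).
staff_hours = 1600
hourly_rate_usd = 272  # NRC professional staff-hour rate for fee-billable licensees

nrc_fee = staff_hours * hourly_rate_usd
print(f"NRC staff-time fee: ${nrc_fee:,}")  # -> $435,200, roughly the $435,000 cited
```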

Based on the cover letter for the issued license, Dow sent 19 letters to the NRC related to Dow’s application during the five-year process.

The references supplied by Burnell provided additional information about the process that is well known within the small community that specializes in research reactor operations, maintenance, and licensing.

For example, the most recent license renewal application for the Rensselaer Critical Facility, a 100-watt open tank reactor originally licensed in 1965, was submitted in November 2002; the renewed license was issued on June 27, 2011, nearly nine years later. The NRC did not send Rensselaer an RAI until three years after it had submitted its renewal application.

University of Missouri Research Reactor (MURR)

In a second example, the University of Missouri-Columbia Research Reactor (MURR) submitted its most recent license renewal application in August 2006. The NRC sent its first set of RAIs in July 2009 and followed up with at least five more sets of RAIs that included a total of 201 questions of varying complexity. According to the NRC’s listing of research reactors currently undergoing licensing review, the MURR license has not yet been issued.

A third example is the Armed Forces Radiobiology Research Institute (AFRRI) TRIGA reactor. Its license renewal application was submitted in July 2004 and is still under review. In 2012, AFRRI estimated that it would be spending at least $1 million for its share of the license review process, not including expenditures by the NRC. Since AFRRI is a government organization, the NRC does not bill it for fees. Burnell indicated that the staff hours expended on that project could be 6,000 or more. It is sadly amusing to review the brief provided by AFRRI to the NRC in 2012 about the process. (See pages 52–65 of the linked document.) The following quote is a sample that indicates the briefer’s level of frustration.

Question: Once the licensee demonstrates that the reactor does not pose a risk to the health and safety of the public, what is the benefit provided to the public by the expenditure of $1M to answer the additional 142 RAIs?

In a quirk of fate, research reactor license renewals have often come due just as NRC priorities were being reordered by external events. Research reactors receive 20-year licenses, and many facilities were constructed in the late 1950s and early 1960s. Dozens of renewals came due or were already under review in April 1979, when the Three Mile Island accident and its recovery became the NRC’s highest priority.

About 20 years after that backlog got worked off, the 9/11-inspired security upgrades pushed everything else down on the priority list.

TRIGA at Oregon State University

The research reactor office has experienced staffing shortages, often exacerbated by the small pool of people with sufficient knowledge and experience in the field. When the NRC hunts for talent, it is drawing from the same pool of people that staffs the plants and is responsible for filing the applications for license amendments and renewals.

One aspect of the law that eases the potential disruption of licensing delays is a provision that allows continued facility operation as long as the renewal application was submitted in a timely manner. That provision, however, has often resulted in a lower priority being assigned to fixing the staffing shortages and simplifying the complex license application process.

The facility owners don’t want to complain too loudly about the amount of time that their application is taking, since they are not prohibited from operating due to an expired license. NRC budgeters and human resource personnel have not been pressured to make investments in improving their service level; not only do the customers have no other choice, but they have not squeaked very loudly. Here is a quote from a brief provided to the NRC by the chairman of the National Organization for Test, Research and Training Reactors (TRTR).

Position on License Renewal

  • TRTR recognizes the unique challenges imposed on NRC during RTR relicensing in the past decade (staffing issues, 9/11, etc.).
  • TRTR appreciates the efforts made by the Commission to alleviate the relicensing backlog.
  • TRTR appreciates the efforts of the NRC RTR group to update guidance for future relicensing efforts and the opportunity to participate in the update process via public meetings.

Generic Suggestions for Streamlining Relicensing

  • The process has become excessively complex compared to 20 years ago, with no quantifiable improvement to safety.
  • Consider the development of generic thermal hydraulic analysis models for TRIGA and plate-type fueled RTRs (1 MW or less).
  • Similarly for the Maximum Hypothetical Accident analysis.
  • Develop a systematic way outside of the RAI process to correct typographical and editing errors.
  • Develop a generic decommissioning cost analysis based on previous experiences, indexed to power level, and inflation.
  • Endorse the use of ANSI/ANS Standards in Regulatory Guidance.

(Pages 26–28 of the linked PDF document containing several briefs, each with its own slide numbering sequence.)

Once the high-priority responses have died down and the backlog of license reviews in progress has exceeded 50 percent of the total number of research reactors in operation, the NRC has stepped in and directed improvement efforts. The staff has attempted to improve the process by issuing more guidance, but those attempts have often complicated and delayed the applications already under review.

The Interim Staff Guidance (ISG) issued in June 2009 appears to still be active; it is difficult to tell how much progress has been made on the long-range plan that the ISG outlined. Once again, external events have changed the NRC’s priorities, as most available resources during the past three years have been shifted to deal with the events that took place in Japan in 2011 and the effort to come up with some kind of waste confidence determination.

There are no easy solutions, but repairing the process will require focused and sustained management attention.

TRIGA at University of California, Davis

______________________

Adams

Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

Nuclear professionals: Establish standing now to improve operational radiation limits

By Rod Adams

On August 3, 2014, the window will close on a rare opportunity to use the political process to strongly support the use of science in establishing radiation protection regulations. Though it is not terribly difficult for existing light water reactors and fuel cycle facilities to meet the existing limits from 40 CFR 190 regarding doses to the general public and annual release rate limits for specific isotopes, there is no scientific basis for the current limits. If they are maintained, they will hinder the deployment of many potentially valuable technologies that could help humanity achieve a growing level of prosperity while substantially reducing air pollution and emissions of persistent greenhouse gases like CO2.

In January 2014, the U.S. Environmental Protection Agency issued an Advance Notice of Proposed Rulemaking (ANPR) to solicit comments from the general public and affected stakeholders about 40 CFR 190, Environmental Radiation Protection Standards for Nuclear Power Operations.

The ANPR page has links to summary webinars provided to the public during the spring of 2014, including presentation slides, presentation audio, and questions and answers. This is an important opportunity for members of the public, nuclear energy professionals, nuclear technical societies, and companies involved in various aspects of the nuclear fuel cycle to provide comments about the current regulations and recommendations for improvements. Providing comments now, in the information-gathering phase of a potential rulemaking process, is a critical component of establishing standing to continue participating in the process.

It also avoids a situation where an onerous rule could be issued and enforced under the regulator’s principle that “we provided an opportunity for comment, but no one complained then.”

The existing version of 40 CFR 190—issued on January 13, 1977, during the last week of the Gerald Ford administration—established a limit of 0.25 mSv/year whole body dose and 0.75 mSv/year to the thyroid for any member of the general public from radiation coming from any part of the nuclear fuel cycle, with the exception of uranium mining and long-term waste disposal. Those two activities are covered under different regulations. Naturally occurring radioactive material is not covered by 40 CFR 190, nor are exposures from medical procedures.

40 CFR 190 also specifies annual emissions limits for the entire fuel cycle for three specific radionuclides for each gigawatt-year of nuclear generated electricity: krypton-85 (50,000 curies), iodine-129 (5 millicuries), and Pu-239 and other alpha emitters with longer than one year half-life (0.5 millicuries).
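For readers who want the numbers collected in one place, here is a minimal sketch restating the 40 CFR 190 limits quoted above, with doses in SI as elsewhere in this post; the dictionary layout and names are mine, not the EPA’s.

```python
# Dose and fuel-cycle emission limits from 40 CFR 190, as summarized above.
# Values restated from the text; units converted where noted.

ANNUAL_PUBLIC_DOSE_LIMITS_mSv = {
    "whole_body": 0.25,
    "thyroid": 0.75,
}

# Emission limits per gigawatt-year of nuclear-generated electricity,
# applied to the entire fuel cycle (curies).
EMISSION_LIMITS_Ci_PER_GWyr = {
    "Kr-85": 50_000,
    "I-129": 0.005,                          # 5 millicuries
    "alpha_emitters_half_life_gt_1yr": 0.0005,  # 0.5 millicuries (Pu-239 and similar)
}
```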

It is important to clarify the way that the U.S. federal government assigns responsibilities for radiation protection standards. The Nuclear Regulatory Commission has the responsibility for regulating individual facilities and for establishing radiation protection standards for workers, but the EPA has a role and an office of radiation protection as well.

The Atomic Energy Act of 1954 initially assigned all regulation relating to nuclear energy and radiation to the Atomic Energy Commission (AEC). However, as part of the President’s Reorganization Plan No. 3 of October 1970, President Nixon transferred responsibility for establishing generally applicable environmental radiation protection standards from the AEC to the newly formed EPA:

…to the extent that such functions of the Commission consist of establishing generally applicable environmental standards for the protection of the general environment from radioactive material. As used herein, standards mean limits on radiation exposures or levels or concentrations or quantities of radioactive material, in the general environment outside the boundaries of locations under the control of persons possessing or using radioactive material.

(Final Environmental Impact Statement, Environmental Radiation Protection Requirements for Normal Operations of Activities in the Uranium Fuel Cycle, p. 18.)

Before the transfer of environmental radiation responsibilities from the AEC to the EPA, and until the EPA issued the new rule in 1977, the annual radiation dose limit for a member of the general public from nuclear fuel cycle operations was 5 mSv—20 times higher than the EPA’s limit.

The AEC had conservatively assigned a limit of 1/10th of the 50 mSv/year applied to occupational radiation workers, which it had, in turn, conservatively chosen to provide a high level of worker protection from the potential negative health effects of atomic radiation.

The AEC’s occupational limit of 50 mSv was less than 1/10th of the previously applied “tolerance dose” of 2 mSv/day, which worked out to an annual limit of approximately 700 mSv/year. That daily limit recognized the observed effect that damage resulting from radiation doses was routinely repaired by normal physiological healing mechanisms.

Aside: After more than 100 years of human experience working with radiation and radioactive materials, there are still no data that prove negative health effects for people whose exposures have been kept within the above tolerance dose, initially established for radiology workers in 1934. End Aside.

From the 1934 tolerance dose to the EPA limit specified in 1977 (and still in effect), requirements were tightened by a factor of 2,800. The claimed basis for that large conservatism was a lack of data at low doses, leading to uncertainty about radiation health effects on humans. Based on reports from the National Academy of Sciences committee on the Biological Effects of Ionizing Radiation (BEIR), the EPA rule writers simply assumed that every dose of radiation was hazardous to human health.
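The factor of 2,800 follows directly from the figures above; here is a minimal worked check, using the roughly 700 mSv/year annualized tolerance dose quoted earlier.

```python
# How the "factor of 2800" tightening is obtained from the figures above.
tolerance_dose_mSv_per_day = 2                            # 1934 "tolerance dose"
annual_tolerance_mSv = tolerance_dose_mSv_per_day * 365   # ~730, rounded to ~700 in the text

epa_public_limit_mSv = 0.25                               # 40 CFR 190 whole-body limit (1977)

tightening_factor = 700 / epa_public_limit_mSv
print(round(annual_tolerance_mSv), round(tightening_factor))  # ~730 mSv/yr, factor of 2800
```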

The EPA used that assumption to justify setting limits that were quite low, but could be met by the existing technology if it was maintained in a like-new condition for its entire operating life. Since the rule writers assumed that they were establishing a standard that would protect the public from an actual harm, they did not worry about the amount of effort that would be expended in surveys and monitoring to prove compliance. As gleaned from the public webinar questions and answers, EPA representatives do not even ask about compliance costs, because they are only given the responsibility of establishing the general rule; the NRC is responsible for inspections and monitoring enforcement of the standard.

The primary measured human health effects used by the BEIR committee in formulating its regulatory recommendations were derived from epidemiological studies of atomic bomb survivors, a unique population exposed to almost instantaneous doses greater than 100 mSv. Based on its interpretation of data from the Life Span Study of those survivors, which supported a linear relationship between dose and effect in the dose regions available, the BEIR committee recommended a conservative assumption that the linear relationship continued all the way down to a zero dose, zero effect origin.

For the radionuclide emissions limits, the EPA chose numbers that stretch the linear no-threshold dose assumption by applying it to extremely small doses spread to a very large population.

The Kr-85 standard is illustrative of this stretching. It took several hours of digging through the 240-page final environmental impact statement and the nearly 400-page collection of comments and responses to determine exactly what dose the EPA was seeking to limit decades ago, and how much it thought the industry should spend to achieve that protection.

The EPA determined that allowing the industry to continue its then-established practice of venting Kr-85 and allowing that inert gas to disperse posed an unacceptable risk to the world’s population.

It calculated that if no effort was made to contain Kr-85, and the U.S. industry grew to a projected 1000 GW of electricity production by 2000, an industry with full recycling would release enough radioactive Kr-85 gas to cause about 100 cases of cancer each year.

The EPA’s calculation was based on a world population of 5 billion people exposed to an average of 0.0004 mSv/year per individual.
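To see how that projection works, here is a minimal sketch of the collective-dose arithmetic implied by the figures above; the risk coefficient is my illustrative assumption of an LNT-style value of about 5 percent per person-sievert, not a number quoted from the EPA documents.

```python
# Collective-dose arithmetic behind the Kr-85 estimate described above.
population = 5_000_000_000          # world population assumed in the EPA analysis
avg_dose_mSv_per_year = 0.0004      # average individual dose cited above

collective_dose_person_Sv = population * avg_dose_mSv_per_year / 1000
print(collective_dose_person_Sv)    # 2,000 person-Sv per year

# Illustrative LNT-style risk coefficient (an assumption, ~5% per person-Sv);
# multiplying a tiny per-person dose by a huge population yields a number
# on the order of the ~100 cases per year the EPA projected.
risk_per_person_Sv = 0.05
print(collective_dose_person_Sv * risk_per_person_Sv)  # ~100 projected cases per year
```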

At the time that this analysis was performed, the Barnwell nuclear fuel reprocessing facility was under construction and nearly complete. It had not been designed to contain Kr-85. The facility owners provided an estimate to the EPA that retrofitting a cryogenic capture and storage capability for Kr-85 would cost $44.6 million.

The EPA finessed this exceedingly large cost for tiny assumed benefit by saying that the estimated cost for the Barnwell facility was not representative of what it would cost other facilities that were designed to optimize the cost of Kr-85 capture. It based that assertion on the fact that Exxon Nuclear Fuels was in a conceptual design phase for a reprocessing facility and had determined that it might be able to include Kr-85 capture for less than half of the Barnwell estimate.

GE, the company that built the Midwest Fuel Recovery Plant in Morris, Illinois, provided several comments to the EPA, including one about the low cost-benefit ratio of attempting to impose controls on Kr-85:

Comment: The model used to determine the total population dose should have a cutoff point (generally considered to be less than 0.01 mSv/year) below which the radiation dose to individuals is small enough to be ignored.

In particular, holdup of krypton-85 is not justified since the average total body dose rate by the year 2000 is expected to be only 0.0004 mSv/year.

Response: Radiation doses caused by man’s activities are additive to the natural radiation background of about 0.8-1.0 mSv/year [note: the generally accepted range of background radiation in the mid 1970s, as indicated by other parts of the documents, was 0.6 - 3.0 mSv/yr] whole-body dose to which everyone is exposed. It is extremely unlikely that there is an abrupt discontinuity in the dose-effect relationship, whatever its shape or slope, at the dose level represented by the natural background, that would be required to justify a conclusion that some small additional radiation dose caused by man’s activities can be considered harmless and may be reasonably ignored.

For this reason, it is appropriate to sum small doses delivered to large population groups to determine the integrated population dose. The integrated population dose may then be used to calculate potential health effects to assist in making judgements on the risk resulting from radioactive effluent releases from uranium fuel cycle facilities, and the reasonableness of costs that would be incurred to mitigate this risk.

Existing Kr-85 rules are thus based on collective doses, and on a calculation of risk from collective dose, an approach that is now specifically discouraged by both national (NCRP) and international (ICRP) radiation protection bodies. The rule is also based on the assumption of a full-recycle fuel system and 10 times as much nuclear power generating capacity as exists in the United States today.

Since the level specified is applied to the entire nuclear fuel cycle industry in the United States, the 40 CFR 190 ANPR asks the public to comment about the implications of attempting to apply limits to individual facilities. This portion of the discussion is important for molten salt reactor technology that does not include fuel cladding to seal fission product gases, and for fuel cycles that envision on-site recycling using a technology like pyroprocessing instead of transporting used fuel to a centralized facility for recycling.

There are many more facets of the existing rule that are worthy of comment, but one more worth particular attention is the concluding paragraph from the underlying policy for radiation protection, which is found on the last page of the final environmental impact statement:

The linear hypothesis by itself precludes the development of acceptable levels of risk based solely on health considerations. Therefore, in establishing radiation protection positions, the Agency will weigh not only the health impact, but also social, economic, and other considerations associated with the activities addressed.

In 1977, there was no consideration given to the fact that any power that was not generated using a uranium or thorium fuel cycle had a good chance of being generated by a power source producing a much higher level of carbon dioxide. In fact, the EPA in 1977 had not even begun to consider that CO2 was a problem. That “other consideration” must now play a role in any future decision-making about radiation limits or emission limits for radioactive noble gases.

If EPA bureaucrats are constrained to use the recommendations of a duly constituted body of scientists as the basis for writing regulations, the least they could do before rewriting the rules is to ask the scientific community to determine whether the linear no-threshold (LNT) dose response model is still valid. The last BEIR committee report is now close to 10 years old. The studies on which it was based were conducted during an era in which it was nearly impossible to conduct detailed studies of DNA, but that limitation has now been overcome by advances in biotechnology. There is also a well-developed community of specialists in dose response studies that has produced a growing body of evidence supporting the conclusion that the LNT is not “conservative”—it is simply incorrect.

Note: Dose rates from the original documents have been converted into SI units.




Accepting the Science of Biological Effects of Low Level Radiation

By Rod Adams

A group of past presidents and fellows of the American Nuclear Society has composed an important open letter to ANS on a topic that has been the subject of controversy since before I first joined the society in 1994. The subject line of that letter is “Resolving the issue of the science of biological effects of low level radiation.” The letter is currently the only item on a new web site that has been created in memory of Ted Rockwell, one of the pioneers of ANS and the namesake of its award for lifetime achievement.

LNT and “no safe dose”

Ted was a strong science supporter who argued for many years that we needed to stop accepting an assumption created in the 1950s without data as the basis for our radiation protection regulations. That assumption, which most insiders call the “LNT”—linear no-threshold dose response—says that risk from radiation is linearly proportional to dose all the way to the origin of zero risk, zero dose.

Many people who support the continued use of this assumption as the basis for regulation plug their ears and cover their eyes to the way our mathematically neutral term is translated into something far more fear-inspiring by those who oppose the use of nuclear energy, food irradiation, or medical treatments that take advantage of radiation’s useful properties. Those opponents loudly and frequently proclaim that the scientific consensus is that there is “no safe dose” of radiation.

Some people who support the use of nuclear energy and who are nuclear professionals help turn up the volume of this repeated cry:

Delvan Neville, lead author of the study and a graduate research assistant in the Department of Nuclear Engineering and Radiation Health Physics at Oregon State University, told the Statesman Journal Apr. 28, “You can’t say there is absolutely zero risk because any radiation is assumed to carry at least some small risk.”

While most scientists and engineers understand that the LNT assumption means that tiny doses have tiny risks that disappear into the noise of daily living, the people who scream “no safe dose” want their listeners to believe it means that all radiation is dangerous. They see no need to complicate the conversation with trivial matters like measurements and units (I am being ironic here).

Scientists and engineers almost immediately ask “how much” before starting to get worried, but others can be spurred into action simply by hearing that there is “radiation” or “contamination” and that it is coming to get them and their children. When it comes to radiation and radiation dose rates, we nuclear professionals have not made it easy for ourselves or for the public: we use a complicated set of units, and in the United States we remain stubbornly “American” by refusing to convert to the international standards.

Aside: There is no good reason for our failure to accept the international radiation-related measurement units of sieverts, becquerels, and grays. Laziness and “it’s always been that way” are lousy reasons. I’m going to make a new pledge right now—I will use International System of Units (SI) units exclusively and no longer use rem, curies, or rad. After experiencing the communications confusion caused by incompatible units during and after the Fukushima event, the Health Physics Society adopted a position statement specifying exclusive use of SI units for talking or writing about radiation; perhaps ANS should adopt it as well. End Aside.
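For anyone making the same pledge, the conversions are straightforward; here is a minimal sketch using the standard factors (1 rem = 10 mSv, 1 rad = 10 mGy, 1 curie = 3.7 × 10^10 Bq). The factors are well established, but the helper names are mine.

```python
# Standard conversions between the older US radiation units and SI units.
def rem_to_mSv(rem: float) -> float:
    return rem * 10.0          # 1 rem = 0.01 Sv = 10 mSv

def rad_to_mGy(rad: float) -> float:
    return rad * 10.0          # 1 rad = 0.01 Gy = 10 mGy

def curie_to_Bq(ci: float) -> float:
    return ci * 3.7e10         # 1 Ci = 3.7e10 disintegrations per second

print(rem_to_mSv(0.025))       # 25 mrem whole-body limit -> 0.25 mSv
print(curie_to_Bq(50_000))     # Kr-85 limit -> 1.85e15 Bq per GW-year
```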

Physics or biology?

Leaving aside the propaganda value associated with the cry of “no safe dose,” an important reason to assign a high priority to resolving the biological effects of low-level radiation is that the LNT rests on the wrong science altogether.

The LNT assumption was created by persons who viewed the world through the lens of physics. When dealing with inanimate physical objects all the way down to the tiniest particles like neutrons, protons, mesons, and baryons, statistics and uncertainty principles work well to predict the outcome of each event. An atom that fissions or decays into a new isotope has no mechanism that works to reverse that change. A radiation response assumption that applies in physics, however, is an inadequate assumption when the target is a living organism that has inherent repair mechanisms. Biology is the right science to use here.

At the time that the LNT was accepted, decision-makers had an excuse. Molecular biology was a brand new science and there were few tools available for measuring the effects that various doses of radiation have on living organisms.

The assumption itself, however, has since inhibited a major tool used by biologists and those who study the efficacy of medical treatments. Because all radiation was assumed to be damaging, and could only be used in medicine where there was an existing condition that might be improved, it was considered unethical to set up well-designed randomized controlled trials that expose healthy people to carefully measured doses of radiation while maintaining an unexposed control group.

Instead, health effects studies involving humans have normally relied on less precise observational methods, such as case-control or cohort studies of occupationally or accidentally exposed persons. The nature of the exposures in those studies often introduces large measurement uncertainty, and there are complicating factors that are difficult to address in an observational study.

Science marches on, but will LNT?

Molecular biology and its available tools have progressed dramatically since the linear no-threshold assumption was endorsed by the National Academy of Sciences’ Biological Effects of Atomic Radiation (BEAR I) committee in 1956. It is now possible to measure effects, both short-term and long-term, and to watch the response and repair mechanisms actually at work. One of the key findings that biologists have uncovered in recent years is that the number of radiation-induced DNA events at modest radiation dose rates is dwarfed, by several orders of magnitude, by essentially identical events caused by “ordinary” oxidative stress.

This area of research (and others) could lead to a far better understanding of the biological effects of low-level radiation. Unfortunately, the pace of the research effort has slowed down in the United States because the Department of Energy’s low dose research program was defunded in 2011 for unexplained reasons.

It is past time to replace the LNT assumption with a model that uses the correct scientific discipline—biology, rather than physics—to predict biological effects of low-level radiation. I’ll conclude by quoting the final paragraph of the ANS past presidents’ open letter, which I encourage all ANS members, both past and present, to read, understand, and sign:

The LNT model has been long-embedded into our thinking about radiation risk and nuclear energy to the point of near unquestioned acceptance. Because of strict adherence to this hypothesis, untold physiological damage has resulted from the Fukushima accident—a situation in which no person has received a sufficient radiation dose to cause a significant health issue—yet thousands have had their lives unnecessarily and intolerably uprooted. The proposed actions will spark controversy because it could very well dislodge long-held beliefs. But as a community of science-minded professionals, it is our responsibility to provide leadership. We ask that our Society serve in this capacity.

Additional reading

Yes Vermont Yankee (June 23, 2014)  “No Safe Dose” is Bad Science. Updated. Guest Post by Howard Shaffer

Atomic Insights (June 21, 2014) Resolving the issue of the science of biological effects of low level radiation


____________________


Spent fuel pool fire risk goes to zero a few months after reactor shutdown

By Rod Adams

It’s time to stop worrying about the risk of a spent fuel pool fire at decommissioned nuclear reactors. Even at operating reactors, there is good reason to put the risk quite low on any table that prioritizes items worth fretting over.

According to modern analysis using up-to-date data and physically representative models—with appropriately conservative assumptions—the staff at the U.S. Nuclear Regulatory Commission has reached a series of important conclusions:

1. Spent nuclear fuel storage pools are strong, robust structures that are highly likely to survive even the strongest of potentially damaging events. That is true even for the most limiting case of elevated pools used in Mark I boiling water reactors:

The staff first evaluated whether a severe, though unlikely, earthquake would damage the spent fuel pool to the point of leaking. In order to assess the consequences that might result from a spent fuel pool leak, the study assumed seismic forces greater than the maximum earthquake reasonably expected to occur at the reference plant location. The NRC expects that the ground motion used in this study is more challenging for the spent fuel pool structure than that experienced at the Fukushima Daiichi nuclear power plant from the earthquake that occurred off the coast of Japan on March 11, 2011. That earthquake did not result in any spent fuel pool leaks.

(Emphasis added.)

2. If an event even more powerful than the already extreme assumption occurs and a pool is damaged enough to cause a significant leak, it is almost certain that the used fuel inside the pool will remain intact. The only period in which there is any doubt about that statement is during the first few months after the most recently operating fuel has been put into the pool:

In the unlikely situation that a leak occurs, this study shows that for the scenarios and spent fuel pool studied, spent fuel is only susceptible to a radiological release within a few months after the fuel is moved from the reactor into the spent fuel pool. After that time, the spent fuel is coolable by air for at least 72 hours. This study shows the likelihood of a radiological release from the spent fuel after the analyzed severe earthquake at the reference plant to be about one time in 10 million years or lower.

(Emphasis added.)

3. Even if all else fails, and—somehow—there is an event that both empties the pool and causes the protective cladding on the fuel to catastrophically fail, the chance of anyone being exposed to enough radioactive material to cause a dose that would have any health impact is vanishingly tiny:

If a leak and radiological release were to occur, this study shows that the individual cancer fatality risk for a member of the public is several orders of magnitude lower than the Commission’s Quantitative Health Objective of two in one million (2×10⁻⁶/year). For such a radiological release, this study shows public and environmental effects are generally the same or smaller than earlier studies.

(Note: The quoted statements come from pages iii and iv of Consequence Study of a Beyond-Design-Basis Earthquake Affecting the Spent Fuel Pool for a U.S. Mark I Boiling Water Reactor dated October 2013. The numbered statements are my interpretation of what the analysis results mean to the rest of us.)

The staff at the NRC did not reach these conclusions lightly. Even though the NRC and its predecessor agency have been encouraged to study this area in excruciating detail for more than 40 years, the publicity surrounding the events at Fukushima and the mistaken belief that there were leaks from the spent fuel pools at that plant caused the agency to initiate yet another study, the results of which are quoted above.

That study was not a minor effort. After reviewing the document, which includes a total of 416 pages of material, including detailed responses to comments, I contacted the NRC public affairs office and asked the following question: “How much NRC time and money was invested in the production of the document titled ‘Consequence Study of a Beyond-Design-Basis Earthquake Affecting the Spent Fuel Pool for a U.S. Mark I Boiling Water Reactor’ dated October 2013?”

Here is the answer I received from Scott Burnell, NRC Office of Public Affairs:

The staff was able to provide the following information. In FY2011, 11 staff worked a total of 275.75 hours on the Spent Fuel Pool Study. In FY2012, 24 staff worked a total of 4,623.25 hours on the project. In FY2013, 22 staff worked a total of 6,253.75 hours on the project. And for the portion of FY2014 until the report was submitted, eight staff worked a total of 378.5 hours on the project.

That makes a total of 11,531.25 hours. That is more than 5 person-years, but it involved at least 24 separate individuals. The current professional staff-hour rate for the U.S. NRC is $272, so the NRC staff time associated with the study cost licensees roughly $3,136,500. There are many other costs associated with a study like this that are not included in that total.
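Here is a minimal sketch of that arithmetic, using the fiscal-year figures from Burnell’s reply and the $272 staff-hour rate; the 2,000 work-hours-per-year conversion is my assumption.

```python
# Totaling the NRC staff hours for the spent fuel pool study and the fee-rate cost.
hours_by_fiscal_year = {
    "FY2011": 275.75,
    "FY2012": 4_623.25,
    "FY2013": 6_253.75,
    "FY2014": 378.5,   # partial year, up to report submission
}

total_hours = sum(hours_by_fiscal_year.values())   # 11,531.25 hours
person_years = total_hours / 2_000                 # assuming ~2,000 work hours per person-year
cost_usd = total_hours * 272                       # NRC professional staff-hour rate

print(f"{total_hours:,.2f} h ~ {person_years:.1f} person-years ~ ${cost_usd:,.0f}")
```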

It is no wonder to me that four of the five NRC commissioners—who have been appointed and confirmed as independent, knowledgeable professionals charged with ensuring that all activities associated with using radioactive material in the United States are adequately safe—voted to move on and stop studying the non-issue of storing used fuel in licensed spent fuel pools.

That decision was recorded on May 23, 2014, and documented in COMSECY-13-00030:

The Commission has approved the Staff’s recommendation that this Tier 3 Japan lessons-learned activity be closed and that no further generic assessments be pursued related to possible regulatory actions to require the expedited transfer of spent fuel to dry cask storage.

The only vote against closing the issue, in favor of continuing with more study and analysis, came from NRC Chairman Macfarlane. She is not only an academic inclined to the common belief that it is worthwhile to keep studying interesting issues, even those of seemingly little importance, but also someone with a long publication history questioning virtually all spent fuel storage options.

It is also no surprise to me that Senators Boxer and Markey and their staffs have failed to get the message, and continue to insist that yet more expenditures be devoted to this issue, even though it has no impact on safety.

Finally, it is not surprising that professional nuclear energy skeptics such as the Natural Resources Defense Council and the Union of Concerned Scientists insist that the NRC analysis was incomplete, that there were scenarios that were not studied, and that there were hidden consequences that were not included. After all, interfering in all possible ways with any activity designed to provide temporary or permanent answers to the question: “What do you do with the waste?” has been a plank of the antinuclear platform for at least 40 years.

The effort to constipate nuclear energy by focusing on “the waste issue”, including pushing an expensive effort to move used fuel to dry casks as soon as possible, has been ongoing ever since Ralph Nader gathered a disparate collection of local activist groups in the summer of 1974 under the Critical Mass Energy Project banner.

There is no reason to keep expending money on this particular aspect of the waste issue. Dry cask storage is acceptably safe for an indefinite period of time. However, wet storage in licensed spent fuel pools is also acceptably safe for an indefinite period of time, even in spent fuel pools whose storage capacity has been computed, based on experience and careful study, to be substantially larger than was initially assumed when the pools were first designed in the 1960s and early 1970s.


_____________________


Save Vermont Yankee. If not you, who? If not now, when?

By Rod Adams

I told some friends the other day that I often feel like a time traveler from the Age of Reason who sees questionable behavior and is forced by training to ask, “Why?”

Although I have already written a couple of articles on this particular topic, it is time for one more post intended to provoke thoughts and discussions aimed at finding a way to prevent an action that we all know is wrong and shortsighted. I’m writing about the pending closure of Vermont Yankee, a 650-MWe nuclear power plant located on the Vermont side of the Vermont/New Hampshire border (also known as the Connecticut River) and only a dozen or so miles from the Massachusetts border.

It is a safe, reliable, zero-emission nuclear power plant with a low, predictable fuel cost and a moderately generous, but predictable payroll. It has recently been extensively refurbished as part of a power uprate program; it has an operating license that is good until 2032 and may be able to be extended; and it has a brand new emergency diesel engine.

It is in a region of the United States where the reliable generating capacity is suddenly so tight that the total auction price for capacity has recently tripled from $1 billion in 2013 to more than $3 billion in the most recent auction.

Aside: It’s probably worth mentioning that if Vermont Yankee had bid into that auction, the prices would have settled at a far lower level. That is the nature of the response in an underdamped system that is in a delicate balance; wild swings can result from the imposition of minor disturbances. It is not at all surprising that companies with generating facilities participating in the New England capacity auction did not approach Entergy about purchasing Vermont Yankee. There is no shock in finding out that 100 percent of the companies approached as logical candidates with complementary assets politely declined to make any bids after a due diligence presentation. End Aside.

Vermont Yankee is also in a region of the country with a growing dependence on natural gas for both electricity and heat, but a pipeline network that was not sized to carry enough gas for both types of customers.

Here is a recent quote from Leo Denault, Entergy Corporation chief executive officer and chairman, about the power situation in New England:

“If we continue to see Northeast power markets drive what should be economical units to retire prematurely and not fairly reward generators for the attributes they provide—including fuel supply diversity and reliability, as well as environmental benefits—what was a volatile outlier this winter… could become a recurring situation.”  Denault also noted the harsh winter’s ability to expose pipeline deficiencies that constrained certain resources during periods of high demand: “There is simply not enough natural gas pipeline capacity in New England to serve both heating demand and natural gas-fired power plants during extreme cold.”

(SNL Energy’s Power Daily — April 25, 2014)

Any industrial customers that are left in the region are left out in the cold, and it can get quite cold in New England, especially during a polar vortex.

The state of Vermont bears a large portion of the responsibility for the pending closure; in fact, there are politicians in the state who have bragged about their success in getting rid of a reliable, low cost, clean energy source (of course, they may slant their claims a bit).

Peter Shumlin—both as senator and then as governor—and his allies made life uncomfortable for Entergy during the 12 years that the company owned the facility. Their efforts added substantial costs to the total operations and maintenance costs and they demanded several different kinds of tribute in return for “allowing” the plant to keep operating.

It is understandable that there are many people on the plant staff who are sad that they are losing their jobs, but conflicted about leaving a state that did not value their contributions anyway.

Unfortunately, nuclear professionals did not do all they could to help the valiant efforts of Meredith and George Angwin, Howard Shaffer, Robert Hargraves, and others who worked hard to counter the FUD (fear, uncertainty and doubt) spread by the professional fear mongers like Arnie Gundersen, or the actions of professional nuclear energy industry critics like Mark Cooper and Peter Bradford.

So far, the antinuclear forces seem to have won the day.

Entergy has announced that no one wanted the plant. I will take the company at its word, but I have to ask what kind of effort it invested in marketing the facility. It is almost like getting up one day and finding out that your neighbor, who owns a house that you always liked and thought would be a great place for your son or daughter to raise your grandchildren, had decided to tear down the house and leave a vacant lawn because that was easier than paying the upkeep after retiring to Florida.

He tells you that “everyone” knew the place was for sale and also knew that he planned to tear it down if no one came up with a reasonable offer. Somehow, you never noticed the little “For Sale” sign tucked in the bottom right hand corner of a front window. Perhaps it was because there was an overgrown plant out in front covering the sign.

At any rate, my little allegory would have a happy ending if you just happened to wake up and get your paper early enough on the day that the dumpsters were being delivered to stop your neighbor and halt the destruction before it started.

In the case of Vermont Yankee, there are potentially interested investors who never knew that the plant was for sale. There are also plenty of technically qualified people who could be formed into a capable management team in short order to own and operate a nuclear plant that has already done all of the hard work of establishing procedures, schedules, required programs like quality assurance (QA) and radiation protection (RP), and the host of other things that would need to be done for any new facility.

The reactions I have received from some very bright people when I describe the current plan can be summarized by the quote I received—second hand—from a correspondent who knows Nathan Myhrvold, the CEO of Intellectual Ventures and a partner with Bill Gates in TerraPower. My correspondent asked Myhrvold if he had any ideas about saving the plant. This is the response he received:

Not really…. It is an insane decision to shut it, but that is what nuclear has become…

Perhaps I am just a little odd, but I just don’t see how people can stand idly by and watch while a small group of people take actions that will harm a much larger group of people over a long time to come. If the action is, indeed, insane, the question is why should we allow it to happen?

Who is going to point out the insanity? When?

Back to the headline, which was the motto over one of the doorways at my alma mater.

“If not you, who? If not now, when?”

I guess that—for now—it’s going to be me and a few diehards who are still working hard in Vermont. With any luck, in a short period of time it will be me, those few diehards, and a dedicated team of well-resourced professionals who recognize that shutting down a well-operated nuclear plant is a betrayal of the people who have worked so hard to try to make the United States less dependent on foreign supplies of energy.

Some might ask who I am to question the analysis and decisions of a big company like Entergy. Surely the people working there know more about the situation than I do and should be trusted to have made the right call. As one of my many heroes famously advised: “Trust, but verify.” If I ever see their numbers, I might make a different call; for now, all of the publicly available numbers point me in a different direction.

I may just be a guy who spends a good bit of his day blogging on the Internet, yes sometimes in my PJs. However, I’m also a guy who has been doing that for a long time while also holding down responsible positions in the US Navy and at a respected nuclear power plant design firm.

If you’re fortunate enough to have had the assignments I have had and you are any good at all, you end up meeting a few credible people who respect your ability. I even have a few friends in finance, some from my days at the Naval Academy and some from my sustained but eventually failed efforts to raise capital for Adams Atomic Engines, Inc.

BTW—did you know that the New England power grid burned diesel and jet fuel to supply 4 percent of its winter power this past year, and that on some days generators burning distillate petroleum products represented fully 25 percent of the electrical power supply? And those figures happened even WITH Vermont Yankee and Brayton Point supplying reliable power…


_______________________


TMI operators did what they were trained to do

Note by Rod Adams:  This post has a deep background story. The author, Mike Derivan, was the shift supervisor at the Davis Besse nuclear power plant (DBNPP) on September 24, 1977, when it experienced an event that started out almost exactly like the event at Three Mile Island on March 28, 1979.

The event began with a loss of feedwater to the steam generator. The rapid halt of heat removal resulted in a primary loop temperature increase, primary coolant expansion, and primary system pressure exceeding the set point for a pilot-operated relief valve in the steam space of the pressurizer. As at TMI, that relief valve stayed open after system pressure was lowered, resulting in a continuing loss of coolant. For the first 20 minutes, the plant and operator response at Davis Besse were virtually identical to those at TMI.

After that initial similarity, Derivan had an “Ah-ha” moment and took actions that made the event at Davis Besse turn into a historical footnote instead of a multi-billion dollar accident.

When Three Mile Island happened and details of the event emerged from the fog of initial coverage, Mike was more personally struck than almost anyone else. He has spent a good deal of time during the past 35 years trying to answer questions about the event, some that nagged and others that burned more intensely.

In order to more fully understand the narrative below, please review Derivan’s presentation describing the events at Davis Besse, complete with annotated system drawings to show how the event progressed.

This story is a little longer and more technical than most of the posts on ANS Nuclear Cafe or Atomic Insights (where this post originally appeared). It is intended to be a significant contribution to historical understanding of an important event from a man with a unique perspective on that event. If you are intensely curious about nuclear energy and its history, this story is worth the effort it requires.

The rest of this post is Mike’s story and his analysis, told in his own words.

______________________

By Mike Derivan

My first real introduction to the Three Mile Island-2 (TMI) accident happened on Saturday, March 31, 1979, a few days after the accident. TMI-2 was a Babcock and Wilcox (B&W) pressurized water reactor plant.

At the Davis Besse nuclear power plant (DBNPP) in Ohio where I worked, we initially heard something serious had happened at TMI-2 as early as the day of the event, March 28, and interest was high because TMI was our sister plant. DBNPP also is a B&W PWR plant.

Actual details were sketchy for the next couple of days, but mainly by watching the nightly TV news it became clear to me that something serious was going on, and that conflicting information was being reported. Some reports indicated there had been radiation releases, while the plant owner reported that there had been none.

I even remember hearing the words “core damage” mentioned for the first time. It was in a Saturday TV news report that I saw the first explanation, using pictures of the system, of the suspected sequence of events, and it became clear to me that the pilot-operated relief valve had stuck open.

My reaction was gut-wrenching and I was also in disbelief that TMI did not know what had happened at Davis Besse. That evening I watched the Walter Cronkite news report. I sat there with total disbelief as he discussed potential core meltdown. Disbelief because if you were a trained reactor operator in those days it was pretty much embedded in your head that a core meltdown was not even possible; and here that possibility was staring me right in the face.

Cronkite’s report was also my first exposure to the infamous hydrogen bubble story. I had enough loss of coolant accident (LOCA) training to understand that some hydrogen could be generated during LOCAs; after all we had containment vessel hydrogen concentration monitoring and control systems installed at our plant. But the actual described scenario at TMI seemed incredible, except that it had apparently happened.

I would expect that my reaction was the same as many nuclear plant operators at that time. The exception was that the apparent initiating scenario had actually happened to me 18 months earlier at Davis Besse and I just couldn’t get the question out of my mind: “Why didn’t they know?”

The real root cause of the TMI accident

Since the time of the TMI accident, hundreds of people have stuck their noses into the root cause of the TMI accident. Both the Kemeny and Rogovin investigations identified a lot of programmatic “stuff” that needed to be fixed, and I agree with most of it.

I feel, however, that both of them skirted one important issue by using different flavors of “weasel words” in the discussion of operator error. The two reports handled that specific topic a bit differently, but the discussions were couched in side topics about contributing factors. The general consensus of all the current discussion summaries I have read is that TMI was caused by operator error.

The TMI operators did make some errors, and I am not denying that. But my contention is that all the errors they made came after they got outside of the design-basis understanding of PWRs at that time. It is no surprise to anyone that when a machine this complicated gets outside of its design basis, anything might happen. You basically hope for the best, but you are going to have to take what you get.

Fukushima proves that, and everyone knows why/how Fukushima got outside of its design basis. The how/why that TMI operators got outside of their design basis is going to be the focus of my discussion. I will also discuss the fact that I think this was understood at the time of the investigations, but it was consciously decided not to pursue it.

My whole point of contention is that turning off the high pressure injection flow early in the event, in response to the increasing pressurizer level, is the crux of the whole operator error argument. All discussions say that if the operators hadn’t done that, the TMI event would have been a no-never-mind. And I agree.

But nobody really wants to believe that they were told to do that for the symptoms they saw.

In other words, they were told to do that by their training, compounded by bad, tunnel-vision procedure guidance. I have believed this since the day I understood what happened at TMI. Furthermore, the TMI operators were trying to defend their actions from a position of weakness; their core had melted, and nobody wanted to believe them.

I am not in a position of weakness on this issue; my event at DBNPP came out okay, so I have no reason not to be totally honest and objective about it. During the precursor event at DBNPP, we also turned off high pressure injection early in the event in response to the symptoms that we saw, and for the same reason the TMI operators did it 18 months later: we were told to do it that way.

This fact is apparently a hard pill to swallow. But if it is hard for you to accept, just imagine how I felt watching TMI unfold in real-time.

And right there is the crux of the issue. Once those high pressure injection pumps were off, both plants were then outside the design-basis understanding for that particular small break LOCA.

So you hope for the best, but take what you get. But still, obviously an error has been made if not taking that action would have made the event a no-never-mind.

So who exactly made the error? Both the Kemeny and Rogovin reports discuss the problems with the B&W simulator training for the operators. The important point that they both apparently missed (or didn’t want to deal with, which I prefer as the explanation) is that this is really an independent two-part problem.

I will refer to controlling high pressure injection during a small break LOCA as part A of the problem, and to the actual physical PWR plant response to a small break LOCA during a leak in the pressurizer steam space as part B of the problem.

It really is that simple. B&W was training correctly for high pressure injection control (part A) for small break LOCAs in the water space of their PWR. But neither B&W nor Westinghouse correctly understood the actual plant response to a small break LOCA in the pressurizer steam space.

By omission, they were not training correctly for a small break LOCA in the pressurizer steam space (part B). To make matters worse, B&W’s training overstressed the importance of the part A “rules” to the point that an operator would fail a B&W-administered certification exam for not implementing them correctly.

Thus, when fate had it that the two parts (part A and part B) combined in the real world, where the plant responds to the rules of Mother Nature, the B&W training and procedures led the operators to actions that put them outside the actual design basis, not the falsely perceived (and trained-upon) design basis.
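To make the two-part conflict concrete, here is a minimal illustrative sketch. It is not B&W’s actual procedure logic; the threshold and variable names are hypothetical. It simply shows how a rule that is perfectly sensible for part A produces exactly the wrong action once the part B physics are in play.

```python
# Illustrative sketch only -- hypothetical threshold, not the real B&W procedure.

def trained_part_a_response(pressurizer_level_percent):
    """The 'part A' rule as trained: protect the pressurizer steam bubble
    by throttling high pressure injection when level climbs toward solid."""
    if pressurizer_level_percent > 90:      # hypothetical trip point for illustration
        return "throttle HPI"
    return "leave HPI running"

# 'Part B' reality during a steam-space leak: the indicated level is rising
# because voids forming in the loops displace water into the pressurizer,
# even though coolant inventory is actually falling.
indications = {"pressurizer_level_percent": 100, "inventory_trend": "falling"}

print(trained_part_a_response(indications["pressurizer_level_percent"]))
# -> "throttle HPI": the trained (part A) answer, which is precisely the
#    wrong action for the unanalyzed (part B) steam-space LOCA.
```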

Up until very recently my argument has been one using just simple logic and sheer numbers of operators involved. In Davis Besse’s September 1977 event, there were five licensed operators involved in that decision, either by direct action or complacent compliance. In other words, all five agreed that it was the right thing to do. Of course, it wasn’t the right thing to do, but nobody objected because it was the correct part A thing to do and nobody understood the part B of the problem.

Eighteen months later at TMI, in March 1979, an additional number of operators (just how many depends on the time line) repeated the same initial wrong actions. So we have about a dozen operators, at two separate plants 18 months apart, all doing the same thing and all convinced that they were doing the right thing.

Is it even conceivable that they did not all believe they were doing the right thing according to part A? I just don’t believe so; of course, we are all arguing from a position of weakness. It was the wrong thing to do for part A and part B combined, so nobody really wants to believe that we were trained to do it.

But as I explained, it is really the two-part problem that created the issue. My point can be further emphasized by the fact that the Nuclear Regulatory Commission’s Region III had heartburn over the report that DBNPP submitted for its event. The NRC did not like the fact that the report did not say that the operators made an error turning off high pressure injection.

I know why that happened. The person most responsible for writing the report narrative was actually in the control room during the event. He did not believe the action was wrong based on his same training relative to part A of the problem. So why would he put that statement in the report? He was so convinced that his own (complacent) agreement was correct that saying otherwise would be a false statement.

Just recently, new information came to my attention that absolutely confirms my belief that B&W was emphasizing high pressure injection control in its training based solely on its understanding of the part A problem, with no understanding on B&W’s part of the part B problem or its effect when combined with the part A problem.

My understanding comes directly from seeing the whole infamous Walters response memo of November 10, 1977, to the original Kelly memo of November 1, 1977. It is absolutely remarkable to me that, 35+ years after the DBNPP event and almost as long after TMI, a totally unrelated Google search turned up a complete version of the Walters memo.

After half a lifetime of studying all the TMI reports, I had only seen one “cherry picked” excerpt from the Walters memo, basically saying that he agreed with the operators’ response at DBNPP. The whole memo in context basically confirms that the operator claims of “we were trained to do it” are correct.

The original Kelly memo also confirms that Kelly still didn’t grasp the significance of the part B problem as related to the DBNPP event; or if he did, he didn’t relate it thoroughly and clearly in his memo. Both memos are presented and discussed below; draw your own conclusions. (The source document is here.)

The Kelly memo

Kelly Memo

The referenced source document is basically a critique of these memos by textual communications experts. Here’s a summary: First, Kelly is talking “uphill” in the organization, so he couches his memo with that in mind. He asks no one for a decision, but basically asks for “thoughts.” And he makes a non-emphatic recommendation for “guidelines.”

My own additional observations are that he dilutes the importance of, and possibly adds confusion to, the recommendation by bringing “LPI” (low pressure injection) into the discussion, and most importantly, he totally misses any discussion of the part B problem. He does say “the operator stopped High Pressure Injection when Pressurizer level began to recover, without regard to primary pressure.”

But there is no mention of the fact that the system response was not as expected, e.g., that the pressurizer level went up drastically in response to the reactor coolant system boiling. He never articulates that the operators’ reluctance to re-initiate high pressure injection, even after we understood the cause of the off-scale pressurizer level indication, was based solely on that indicated pressurizer level and our training. Thus, the memo totally misses the part B point that the system response was not as expected by anybody, which was crucial to getting the guidance fixed.

The other thing I notice is that the memo is not addressed to Walters. I’ve also “been there, done that” in a large organization. I can easily understand how the recipient (Walters’ boss), upon receiving this memo with no specific articulation of a new problem (part B), would pass it to Walters with a “handle it, handle it… make it go away.” I also note that N.S. Elliott is on the distribution. He was the B&W Training Department manager, so B&W training was directly in the loop on this issue as well.

The Walters response memo

Note that the original Walters response memo to Kelly was handwritten, so it has apparently been typed somewhere along the line. This is how it appears in the reference source, typos and all.

Walters Memo

I’m omitting the communications experts’ comments because they are in the reference. Here are my comments: In simple operator lingo, this response is a “smart ass slap down” of Kelly, including all the accompanying sarcasm. But there are some very important admissions revealed here. First, there is an admission, based on Walters’ discussion with the B&W Training Department, that we responded in the correct manner considering how we were trained, including the bases behind our training.

This is what we operators had been claiming all along, but nobody wanted to believe it. Second, Walters clearly states, both as his personal assumption and as the B&W Training Department’s assumption, that reactor coolant pressure and pressurizer level will trend in the same direction during a LOCA. Bingo. He has just admitted that they still don’t get the specific part B contribution to the problem.

So they are in fact training wrong for this event because they don’t understand part B. Further, this discussion is happening after the DBNPP event, as a result of the Kelly concerns, and well before TMI. Third, the tone of Walters’ sarcastic comments about a “hydro” (hydrostatic pressure test) of the reactor coolant system every time high pressure injection is initiated shows the disproportionate emphasis that the B&W training was placing on “never let High Pressure Injection pump you solid.” Again, something the operators had been claiming that nobody wanted to believe.

My conclusion, and it hasn’t changed in 35 years, is that the root cause of the TMI accident was that the B&W simulator training and inadequate procedures put the TMI operators in a box, outside of their design-basis understanding for that specific small break loss of coolant. A contributing cause is that B&W itself didn’t understand the actual plant response to that steam space loss of coolant event, because it had never been analyzed correctly. They then also missed the warning that the Davis Besse event provided.

For a long time I wondered why neither the Kemeny nor the Rogovin investigation reached the same specific conclusion that I have. After all, both investigations involved some very smart people, and both looked at the same evidence. My thinking today is that they did reach that same conclusion. But I don’t actually know what they saw as the bottom-line purpose of their investigations either.

If you consider that no investigation report was going to change the condition of TMI, it may have been as simple as this: there was enough wrong that needed fundamental change, so let’s just get those changes made and move forward. Neither group saw a need to identify the actual bottom-line root cause; instead, they just gave recommendations for preventing another TMI-type accident.

Further, by the time those two reports were published, it was well understood that there was going to be a lawsuit between GPU and B&W. If one of those reports had specifically assigned B&W partial liability for the root cause, that conclusion, along with the report that made it, would inevitably have been dragged into the lawsuit.

I have no doubt that this was actually discussed at the time. And I will further speculate that it was decided there was no reason to identify the actual single root cause in the reports, because the lawsuit itself would decide the liability issue independently of the reports. My problem with that is that the lawsuit, which started in 1982, never really resolved the liability issue; it was mutually “settled” in 1983 before a conclusion was reached.

Another thing that I think was actually discussed at that time was the fact that if the reports had stated that the root cause was that the B&W training put the operators outside of the design-basis understanding for that event (because the event wasn’t understood by B&W), it would open Pandora’s box. They didn’t want to deal with “What else do you have wrong?” when there was well over $100 billion worth of these nuclear power plants still operating.

This conclusion is strongly reinforced for me by the Kemeny Report section “Causes of the Accident”. This section of the report lists a “fundamental cause” as operator error, and specifically lists turning off high pressure injection early in the event. And then the report lists several “Contributing Factors” including B&W missing the warning provided by the Davis Besse event.

If you read the contributing factors listed, there is a screaming omission; it is never stated that B&W (actually the whole PWR industry, if you consider the precursors) did not understand the actual plant response to a leak in the pressurizer steam space (what I refer to here as part B of the problem). And that is why B&W and the NRC both missed the DBNPP warning. Virtually nothing will ever convince me that all those smart people did not put that truth together.

Thus, it was both their fear of opening Pandora’s Box and a conscious decision that there was no need to implicate B&W with any partial liability that ruled the process. By doing that, they collectively decided to throw the TMI operators under the bus as the default position.

My conclusion about the missing contributing factor is an Occam’s razor solution: it is not “missing” because they didn’t get it; it was a decision not to include it. After all, if that contributing factor had been included, who on earth would believe it was an operator error when the operators simply did what they were told to do in that situation? So they just did not want to deal with the real issue: who made the error?

A simple analogy

For years I struggled to find a simple analogy to explain the position that the TMI operators were placed in by their training, one that could be understood from everyday knowledge rather than the technical detail required to understand the complications of nuclear plant operations. One of the reasons it was difficult is that it required a phenomenon that is commonly understood today but was not understood at all at the time of the training. This is the best that I can come up with.

Suppose in learning to drive a car you are being trained to respond to the car veering to the left. It’s simple enough, simply turn the steering wheel to the right to recover. It is also what your basic instinct would lead you to do, so there is no mental conflict in believing it.

It is also reinforced and practiced during actual driver training on a curvy road. That response is soon embedded as the right thing to do. Now suppose your driver training also includes time on a driving simulator. That is where you learn and practice emergency driving. After all, nobody is going to do those emergency things in an actual car on the road.

Here’s where it gets complicated. Assume virtually no one yet understands that when the car skids to the left on ice (because of loss of front wheel steering traction), the correct response is to turn the steering wheel into the skid direction, or to the left. This is just the opposite of the non-ice response. And to make matters worse, because no one understands it yet, including the guy who built the car simulator, the car simulator has been programmed to make this wrong response work correctly on the simulator.

So in your emergency driver training you practice it this way: the simulator responds incorrectly to the actual phenomenon, but it shows a successful result and you recover control. Since this probably also agrees with your instinct, and you see success on the simulator, this action is also embedded as the right thing to do. One additional point: if you don’t take this wrong action, you will flunk your simulator driving test.

You know where this is going: now you are out driving on an icy road for the first time and the car skids to the left. You respond exactly as you were instructed to do and exactly as the simulator showed was successful, and you have an accident because the car responds to the real-world rules of Mother Nature.

An investigation is obviously necessary because, I forgot to tell you, the car cost $4 billion and you don’t own it. During the subsequent investigation everything comes out: the unknown phenomenon is finally understood correctly, the simulator’s incorrect programming is discovered, it emerges that the previously unknown phenomenon had actually been identified before your accident, and your accident had even been predicted as possible.

But the investigation results are published and the finding is that the accident was caused by your error of turning the steering wheel the wrong way on the ice. Nobody else is found to have made an error in the stated conclusions but you; it is simply a case of driver error. Do you feel you have been wronged? This is what happened to the TMI operators.

For everybody out there who doesn’t like my conclusions, I’ll just say that many of the principals of the investigations are still alive, but choose not to talk. So, simply ask them, especially the principals in the GPU vs. B&W lawsuit that should have determined any liability issues. Ask them why it didn’t happen. My idea of justice involves getting the truth, the whole truth, and nothing but the truth exposed. That process is still unfinished.


Small Modular Reactors—US Capabilities and the Global Market

By Rod Adams

On March 31–April 1, Nuclear Energy Insider held its 4th Annual Small Modular Reactor (SMR) conference in Charlotte, NC (following on the 2nd ANS SMR Conference in November 2013—for notes and report from that embedded topical meeting, see here).

You can find a report of the first day of talks, presentations, and hallway conversations at SMRs—Why Not Now? Then When? That first day was focused almost exclusively on the US domestic market—the second day included some talks about US capabilities, but it was mainly focused on information useful to people interested in developing non-US markets.

Before I describe the specifics, I want to take the opportunity to compliment Nuclear Energy Insider for its well-organized meeting. Siobhan O’Meara did an admirable job putting together an informative agenda with capable speakers and keeping the event on schedule.

Westinghouse SMR

Robin Rickman, director of the SMR Project Office for Westinghouse Electric Company, provided a brief update on his company’s SMR effort and the status of its development. He then focused much of his talk on describing the mutual challenges faced by the SMR industry and the incredible array of commercial opportunities that he sees developing if the industry successfully addresses the challenges together.

In early February, Danny Roderick, chief executive officer of Westinghouse, announced that his company was shifting engineering and licensing resources away from SMR development toward providing enhanced support for efforts to refine and complete the eight AP1000 construction projects in progress around the world.

Rickman explained this decision and its overall impact on SMR development. He told us that Westinghouse remains committed to the SMR industry and to resolving the mutual challenges that currently inhibit SMR development. His project office has retained a core group of licensing experts and design engineers and is fully supporting all industry efforts. The SMR design is at a stage of completion that enables the company to continue to engage with both customers and regulators based on a mature conceptual design.

The company, however, does not want to get ahead of potential customers and invest hundreds of millions of dollars into completing a design certification if there are no committed customers. Rickman didn’t say it, but Westinghouse has a corporate memory from the AP600 project of completing the process of getting a design certification in January 1999 without ever building a single unit. It’s not an experience that they have any desire to repeat.

Westinghouse determined that its resources could be best invested in making sure that the AP1000 is successful and enables others to succeed in attracting financing and additional interest in nuclear energy.

For SMRs, Westinghouse has a business model that indicates a need for a minimum order book of 30–50 units before it would make financial sense to invest in the detailed design and the modular manufacturing infrastructure required to build a competitive product. Rickman emphasized that all of the plant modules must be assembled in a factory and delivered to the site ready to be joined together in order to achieve the capital cost and delivery schedule needed to make SMRs competitive.

That model requires a substantial investment in the factories that will produce the components and the various modules that make up the completed plant. He told us that the state of Missouri is already investing in creating such an infrastructure with the support of all of its major universities, every electricity supplier, a large contingent of qualified manufacturing enterprises, both political parties, and the governor’s office.

He told the audience that Missouri’s efforts are not limited to supporting a single reactor vendor; it is building an infrastructure that will be able to support all of the proposed light water reactor designs including NuScale, mPower, and Holtec.

Rickman included a heartfelt plea for everyone to recognize the importance of creating a new clean energy alternative in a world where billions of people do not have access to light at the flip of a switch or clean water by opening a simple tap.

In what was a surprise to most attendees, the FBI had a table in the expo hall and gave a talk about its interest in the safety and security of nuclear materials. I will reveal my own skepticism about the notion that nuclear power plants are especially vulnerable or attractive targets for people with nefarious intent. It is hard to imagine anyone making off with nuclear fuel assemblies or being able to do anything especially dangerous with them in the highly unlikely event that they did manage to figure out how to get them out of a facility.

Bryan Hernadez, a refreshingly young engineer, gave an excellent presentation about the super heavy forging capabilities available in the United States at Lehigh Heavy Forge in Bethlehem, Pa. That facility is a legacy of what formerly was the Bethlehem Steel Corporation’s massive integrated steel mill. It has the capacity to forge essentially every component that would be required to produce any of the proposed light water SMR designs.

The presentation included a number of photos that must have warmed the heart of anyone in the audience who likes learning about massive equipment designed to produce high quality goods with tight tolerances that weigh several hundred tons.

In a presentation that would have pleased several of my former bosses, Dr. Ben Amaba, a worldwide sales executive from IBM, talked about the importance of approaching complex designs with a system engineering approach and modern information tools capable of managing interrelated requirements. That is especially important in a highly regulated environment with a globally integrated supply chain.

Jonathan Hinze, senior vice president of Ux Consulting, provided an overview of both national and international markets and described those places that his company believes have the most pressing interest in machines with the characteristics being designed into SMRs.

He reminded the audience that US suppliers are not the only players in the market and that they are not even the current market leaders. He noted that Russia is installing two KLT-40 reactors (light water reactors derived from established icebreaker power plants) onto a barge and that those reactors should be operating in a couple of years. He pointed to the Chinese HTR-PM, a power plant with two helium-cooled pebble bed reactors, each producing 250 MW of thermal power, supplying steam to a common 210-MWe turbine. He also mentioned that Argentina had recently announced that it had broken ground on a 25-MWe CAREM light water reactor.
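For context, the numbers quoted for the HTR-PM imply a notably high thermal efficiency. A quick back-of-the-envelope check (my arithmetic, not a figure from the talk):

```python
# Back-of-the-envelope check of the HTR-PM figures quoted above.
thermal_power_mwt = 2 * 250      # two pebble bed modules, 250 MWt each
electrical_output_mwe = 210      # shared steam turbine

efficiency = electrical_output_mwe / thermal_power_mwt
print(f"Implied thermal efficiency: {efficiency:.0%}")   # ~42%
# Roughly 42 percent, versus the 33-35 percent typical of light water
# reactors -- a consequence of the higher steam temperatures available
# from a helium-cooled pebble bed core.
```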

Douglass Miller, acting director of the New Major Facilities Division of the Canadian Nuclear Safety Commission, described his organization’s performance-based approach to nuclear plant licensing. He noted that the commission does not have a design certification process and that each project needs to develop its safety case individually to present to the regulator. It appears that the process is not as prescriptive or as time-consuming as the existing process in the United States.

Tony Irwin, technical director for SMR Nuclear Technology Pty Ltd, told us that Australia is moving ever closer to accepting the idea that nuclear energy could play a role in its energy supply system. Currently, the only reactor operating in Australia is a research and isotope production reactor built by INVAP of Argentina. He described the large power requirements for mining operations in places not served by the grid and the fact that his country has widely distributed settlements that are not well-integrated in a large power grid. He believes that SMRs are well suited to meeting Australia’s needs.

Unfortunately, I had to get on the road to avoid traffic and get home at a reasonable hour, so I missed the last two presentations of the day. I probably should have stayed to hear about the cost benefits of advanced, non-light water reactors and about Sweden’s efforts to develop a 3-MWe lead–cooled fast reactor for deployment to Canadian arctic communities.

As I was finalizing this post, I noted that Marv Fertel has just published a guest post at NEI Nuclear Notes titled Why DOE Should Back SMR Development. I recommend that anyone interested in SMRs go and read Fertel’s thoughts on the important role that SMRs can play in meeting future energy needs.

SMR on trailer – courtesy NuScale Power

____________________________

Adams

Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

What Did We Learn From Three Mile Island?

By Rod Adams

Thirty-five years ago this week, a nuclear reactor located on an island in the Susquehanna River near Harrisburg, Pennsylvania, suffered a partial core melt.

On some levels, the accident that became known as TMI (Three Mile Island) was a wake-up call and an expensive learning opportunity for both the nuclear industry and the society it was attempting to serve. Some people woke up, some considered the event a nightmare that they would do anything to avoid repeating, and some hard lessons were properly identified and absorbed. Unfortunately, some people learned the wrong lessons and some of the available lessons were never properly interpreted or assimilated.

The melted fuel remained inside the TMI unit 2 pressure vessel, nearly all the volatile and water-soluble fission products remained inside the reactor containment, and there were no public health impacts. The plant was a total loss after just three months of commercial operation, the plant buildings required a clean-up effort that took 14 years, the plant owner went bankrupt, and the utility customers paid dearly for the accident.

The other unit on the same site, TMI-1, continues to operate well today under a different owner.

Although the orders for new nuclear power plants had already stopped several years before the accident, and there were already people writing off the nuclear industry’s chances for a recovery, the TMI accident’s emotional and financial impacts added another obstacle to new plant project development.

In the United States, it took more than 30 years to finally begin building new nuclear power plants. These plants incorporate some of the most important lessons in their design and operational concepts from the beginning of the project development process. During the new plant construction hiatus, the U.S. electricity industry remained as dependent as ever on burning coal and burning natural gas.

Aside: A description of the sequence of events at TMI is beyond the scope of this post. There is a good backgrounder—with a system sketch—about the event on the Nuclear Regulatory Commission’s web site. Another site with useful information is Inside TMI Three Mile Island Accident: Moment by Moment. End Aside.

Decisions

The TMI event was the result of a series of human decisions, many of which were made long before the event or in places far from the control room. Of those decisions, there were some that were good, some that were bad, some that were reactions based on little or no information, and many made without taking advantage of readily available information.

One of the best decisions, made long before the event happened, was the industry’s adoption of a defense-in-depth approach to design. From the very beginning of nuclear reactor design, responsible people recognized that bad things could happen, that it was impossible to predict exactly which bad things could happen, and that the public should be protected from excess exposure to radioactive materials through the use of multiple barriers and appropriate reactor siting.

The TMI accident autopsy shows that the basic design of large pressurized water reactors inside sturdy containment buildings was fundamentally sound and adequately safe. As intended by the designers, the defense-in-depth approach and generous engineering margins allowed numerous things to go wrong while still keeping the vast majority of radioactive materials contained away from humans. Here is a quote from the Kemeny Commission report:

We are convinced that if the only problems were equipment problems, this Presidential Commission would never have been created. The equipment was sufficiently good that, except for human failures, the major accident at Three Mile Island would have been a minor incident.

Though it is not well known, the NRC completed a study called the State-of-the-Art Reactor Consequence Analyses (SOARCA, published as NUREG-1935) that indicated that there would be few, if any, public casualties as the result of a credible accident at a U.S. nuclear power plant, even if there were a failure in the containment system.

One of the most regrettable aspects of TMI was that the heavy investment that the United States had made into the infrastructure for manufacturing components and constructing large nuclear power plants—factories, equipment, and people—was mostly lost, even though the large components and basic design did what they were supposed to do.

There were, however, numerous lessons learned about specific design choices, control systems, human machine interfaces, training programs, and information sharing programs.

Emergency core cooling

The Union of Concerned Scientists and Ralph Nader’s Critical Mass Energy Project had been warning about a hypothetical nuclear reactor accident for several years, though it turns out that they were wrong about why the emergency core cooling system did not work as designed.

The core damage at TMI was not caused by a failure of the cooling system to provide adequate water in the worst case condition of a double-ended shear of a large pipe; it was caused by a slow loss of cooling water that went unnoticed for 2 hours and 20 minutes. The leak, in this case, was a stuck-open relief valve that had initially opened during a loss of feedwater accident.

While the slow leak was in progress, the operators purposely reduced the flow of water from the high pressure injection pumps, preventing them from performing their design task of keeping the primary system full of water when its pressure is low.

It’s worthwhile to understand that the operators did not reduce injection flow by mistake or out of malice. They did what they had been trained to do. Their instructors had carefully taught them to worry about the effects of completely filling the pressurizer with water because that would eliminate its cushioning steam bubble. Their instructors and the regulators that tested them apparently did not emphasize the importance of understanding the relationship between saturation temperature and saturation pressure.

The admonition to avoid “going solid” (filling the pressurizer with water instead of maintaining its normal steam bubble) was a clearly communicated and memorable lesson in both classroom and simulator training sessions. When TMI control room operators saw pressurizer level nearing or exceeding the top of its indicating range, they took action to slow the inflow of water. At the time, they had still not recognized that cooling water was leaving the system via the stuck open relief valve.

The physical system had responded as it had been designed, but the designers had neglected to ensure that their training department fully understood the system response to various conditions that might be expected to occur. It’s possible that the designers did not know that a pressurizer steam space leak could cause pressure to fall and the pressurizer level to rise at the time that they designed the system. There was not yet much operating experience; the large plants being built in the 1960s and 1970s could not be fully tested at scale, and computer models have always had their limitations, especially at a time when processing power was many orders of magnitude lower than it is today.
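A rough numerical sketch helps show why the indications were so misleading. The saturation values below are rounded steam-table points, and the temperature and pressure chosen are only representative, not the actual TMI-2 trace:

```python
# Rounded steam-table points: saturation pressure (psia) at a given
# coolant temperature (degF). Approximate values, for illustration only.
saturation_pressure_psia = {550: 1045, 600: 1543, 650: 2208}

hot_leg_temp_f = 600        # representative PWR hot-leg temperature
rcs_pressure_psia = 1200    # representative pressure after the stuck-open
                            # relief valve has been leaking for a while

if rcs_pressure_psia < saturation_pressure_psia[hot_leg_temp_f]:
    # The coolant flashes to steam. The voids forming in the loops push
    # water up into the pressurizer, so indicated level RISES even though
    # inventory is leaving through the relief valve -- pressure and level
    # trend in opposite directions, contrary to what the training assumed.
    print("Subcooling lost: level rises while inventory falls.")
```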

There was also a generally accepted assumption that safety analysis could be simplified by focusing on the worst case accident.  If the system could be proven to respond safely to the worst case conditions, the assumption was that less challenging conditions would also be handled safely. The focus on worst case scenarios, emphasized by very public emergency core cooling system hearings, took some attention away from analyzing other possible scenarios.

Lessons learned

  • Following the TMI accident, there was a belated push to complete the loss of flow and loss of coolant testing program that the Atomic Energy Commission had initiated in the early 1960s. For a variety of political, financial, and managerial reasons, that program had received low priority and was chronically underfunded and behind schedule.
  • Today’s plant designs undergo far more rigorous testing programs and have better, more completely validated computer models.
  • Far more attention has been focused on the possible impact of events like “small break” loss of cooling accidents.
  • All new operators at pressurized water reactors learn to understand the importance of the relationship between saturation pressure and saturation temperature.

At the time of the accident, there was no defined system of sharing experiences gained during reactor plant operation with all the right people. TMI might have been a minor event if information about a similar event at Davis-Besse, a similar but not identical plant, that happened in September 1977 had made it to the control room staff at TMI-2.

Certain sections of the NRC knew about the Davis-Besse event, engineers at the reactor supplier knew about it, and even the Advisory Committee on Reactor Safeguards was aware of the event, but there was no established process for sharing the information with other operating units.

Lesson learned: After the accident, the industry invested a great deal of effort into a sustained program to share operating experience.

The plant designers also did not do their operators any favors in the design and layout of the control room. Key indicators were haphazardly arranged, there were thousands of different parameters that could cause an alarm if out of their normal range, and there was no prioritization of alarming conditions.

Lesson learned: After the accident, an extensive effort was made to improve the control rooms for existing plants and to devise regulations that increased the attention paid to human factors, man-machine interfaces, and other facets of control room design. All plants now have their own simulators that are designed to mimic the particular plant and are provided with the same operating procedures used in the actual plant. Operators are on a shift routine that puts them in the simulator for a week at a time every four to six weeks.

The initiating failures that started the whole sequence took place in the steam plant, a portion of the power plant that was not subject to as much regulatory or design scrutiny as the portions that were more closely associated with the nuclear reactor and its direct cooling systems.

Lesson still being learned: An increased level of attention is now paid to structures, systems, and components that are not directly related to the reactor, but there is still a confusing, expensive, and potentially vulnerable classification scheme that attempts to sort systems and give each an appropriate level of attention.

For at least 10 years prior to March 28, 1979, there had been an increasingly active movement focused on opposing the use of nuclear energy, while at the same time the industry was expanding near many major media markets and was one of the fastest growing employment opportunities, especially for people interested in technical fields. The technology was often in the spotlight, with the opposition claiming grave safety concerns and the industry—rather arrogantly, quite frankly—pointing to what had been a relatively unblemished record.

The industry did not do enough in the way of public outreach or routine advertising to explain the value of their product. They rarely compared the characteristics of nuclear energy against other possible electricity sources—mainly because there are no purely nuclear companies. In addition, the electric utility industry has a long tradition of preferring to be quiet and left alone.

The accident at TMI developed slowly over several days, but it became a major news story by mid-morning on the first day. Not only was it a “man bites dog” unusual event, but it was an event that the nuclear industry, the general public, the government, and the news media had been conditioned to take very seriously. Although nuclear experts from around the United States sprang into action to assist where they could at the plant itself, there was no established group of communications experts who could help reporters understand what was happening.

No reporter on a deadline is motivated or willing to wait for information to be gathered, evaluated, and verified. In the absence of real experts willing to talk, they turned to activists with impressive sounding credentials who were quite willing to speculate and spin tall tales designed to generate public interest and concern.

Lesson not yet learned: Although most decision makers in the nuclear industry understand the importance of planned maintenance systems to keep their equipment in top condition and the importance of a systematic approach to training to keep their employees performing at the top of their game, they have not yet implemented an effective, adequately resourced, planned communications program that helps to ensure that the public and the media understand the importance of a strong nuclear energy sector.

Planned communications efforts have a lot in common with planned maintenance systems. They might appear to be expensive with little immediate return on investment, but repairing a broken public image is almost as challenging and expensive as repairing a major plant component that failed due to a decision to reuse a gasket or postpone an oil change. As the guy in the commercial says, “You can pay me now or pay me later.”

That is probably the most tragic part of the TMI event. Despite being the subject of several expensively researched and documented studies, countless articles, thousands of documented training events, and more than a handful of books, the event has not yet delivered all that it could have, and should have: a stronger established nuclear industry and a cleaner, safer electric power generation system around the world.

So far, however, TMI Unit 2’s destruction remains a sacrifice made partially in vain to the harsh master of human experience.

Note: I have purposely decided to avoid attempting to discuss the performance of the NRC or to judge their implementation of the lessons that were available to be learned. That effort would require a post at least twice as long as this one.

Additional Reading

General Public Utilities. Three Mile Island: One Year Later. March 28, 1980.

Gray, Mike, and Ira Rosen. The Warning: Accident at Three Mile Island: A Nuclear Omen for the Age of Terror. W. W. Norton, 1982.

Ford, Daniel. Three Mile Island: Thirty Minutes to Meltdown. Penguin Books, 1981.

Hampton, Wilborn. Meltdown: A Race Against Disaster at Three Mile Island: A Reporter’s Story. Candlewick Press, 2001.

Report of the President’s Commission on the Accident at Three Mile Island: The Need for Change: The Legacy of TMI. October 1979.

Three Mile Island: A Report to the Commissioners and to the Public (Rogovin Report). January 1980.

Three Mile Island

____________________________________________

Adams

Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

Three years of available lessons from Fukushima

By Rod Adams

During the three years since March 11, 2011, the world has had the opportunity to learn a number of challenging but necessary lessons about the commercial use of nuclear energy. Without diminishing the seriousness of the events in any way, Fukushima should also be considered a teachable moment that continues to be open for thought and consideration.

As a long time member of the learning community of nuclear professionals, I thought it would be worthwhile to start a conversation that will allow us to document some of the “take-aways” from the accident and the costly efforts to begin the recovery process.

Since there are many people who are more qualified than I am to discuss the specific design details of the reactors that were destroyed and the specific site on which they were installed, I will shy away from those topics. Feel free, however, to add your expert views in the comment thread.

Before Fukushima

The overriding lesson for me is a recognition that people who favor the use of nuclear technology were quite unprepared for an event like Fukushima. Our technology had been working so well, for so long, that we had become complacent perfectionists.

In some ways, we were collectively similar to perennial honor roll students who prefer doing homework to engaging in risky sports. We have been “grinds” who studied hard, followed the rules, became the teachers’ pets, scored high marks on all of the routine tests, and were utterly devastated the first time we moved to a new level and encountered a test so difficult that our first attempt to pass resulted in a D-.

Many of us—and I will freely include myself in this category—had become so confident in our ability to earn outstanding grades that we did not pay attention to the boundaries of the box in which our confidence was justified.

We confidently accepted the fact that our technology was safe, had numerous layers of defense-in-depth, and was designed to be able to withstand external events, but we forgot that those statements were only true within a certain set of bounding parameters we normally call the “design basis.” Because we had only rarely approached those boundaries, we had no real concept for what might happen once we found ourselves outside of our expected conditions without most of the expected supporting tools.

An extended period of exceptional performance not only made us over-confident, it raised expectations to an unsustainable level. Corporate executives, the media, and government leaders played roles similar to the parents, teachers, and administrators associated with precocious straight A students. They were used to dealing with serious mistakes and outright failures among the rest of the student body, but were surprised and flustered when one of us let them down.

We also failed to understand that we were in the same vulnerable and unpopular position as the geeks who continuously break the curve and make others look bad, year after year. As the excellent report cards kept coming, we did not pay attention to the effect those high grades were having on our peers. We did not see other students gathering into groups after the grades were posted. We did not sense their anger or overhear their plans to be ready to take advantage the first time we gave them an opportunity.

We had no similar plans prepared in case we failed; we expected we would keep performing exceptionally well.

The Fukushima test

When the nearly impossible test came, our technology performed as designed, but that was not good enough. Our technology was not designed to match a natural disaster that destroyed all available sources of electrical power. The loss of vital power at a large, multi-unit facility interfered with the ability to understand plant conditions and to put water into the places that desperately needed it.

Aside: That is not to say that it could not have been designed to handle the imposed conditions. As the performance of Onagawa and Fukushima Daini demonstrates, it is possible, through better design or more fortuitous operational decisions, to improve the chances of avoiding the consequences seen at Fukushima Daiichi, but there is never a guarantee of perfection. End Aside.

Without water flow, the rate of heating inside the cores was determined by inescapable laws of physics. As nuclear energy and materials experts have been predicting for nearly 50 years, once the temperatures inside the water-cooled cores reached a certain point, the zirconium cladding of the fuel rods began reacting with the water (H2O) to chemically capture the oxygen and release the hydrogen.
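The chemistry referred to above is the standard zirconium-steam reaction. It is strongly exothermic (commonly cited as roughly 6 MJ per kilogram of zirconium oxidized, an approximation rather than a value taken from the Fukushima analyses) and it accelerates as temperature rises:

```latex
% Zirconium-steam reaction: oxygen is captured in the oxide, hydrogen is released.
% Strongly exothermic, so once it starts it adds to the heat-up of the core.
\[
  \mathrm{Zr} + 2\,\mathrm{H_2O} \longrightarrow \mathrm{ZrO_2} + 2\,\mathrm{H_2}
\]
```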

Fukushima Daiichi plant designers expected that human operators would pay attention to the pressure building inside the primary containment and release some of the steam before breaking the containment. They apparently neglected to consider that operators would not be able to monitor pressure using their installed systems without any available electrical power.

For valid reasons, the designers did not make containment relief an automatic function or even an easy process. They probably did not expect that the operators would wait for a politician located at the end of a tenuous communications link to make the decision to release that pressure, that they might feel the need to wait for a report that evacuations had been completed, or that the resulting time delay could allow pressure to rise so high that it would be almost impossible to open the necessary valves.

The operators performed their tasks with dedication and tenacity, but their efforts fell a little short of the heroically successful similar efforts at Fukushima Daini. It’s worth mentioning one particular example of unfortunate timing; the Daiichi operators invested dozens of back-breaking man hours to install a mobile generator and run heavy cables across 200 obstacle-filled meters in order to provide emergency power. They completed the hook up at 1530 on March 12. At 1536, the first hydrogen explosion injured five workers, spread contamination, and damaged the just-installed equipment enough to prevent it from functioning. (See page 8-9 of INPO Special Report on the Nuclear Accident at the Fukushima Daiichi Nuclear Power Station.)

The excessive pressures in the primary containments did what excessive pressure almost always does: they eventually found weak points that opened to release the pressure. The separated hydrogen left the containments, found some available oxygen, and did what comes naturally; it exploded, further complicating the event and providing a terrific visual tool for the jealous competitors who were ready to take advantage of our failure.

The lessons available from that sequence of events were not design-specific. More foresight in the design process, a solid understanding of basic materials and thermodynamic principles, and, if all else fails, empowered operators with the ability to resist political pressure can further reduce the potential for core damage and radioactive material release.

Once one of us encountered a test we could not pass, we were dazed and confused, obviously unsure what to do next. That period of uncertainty provided a wonderful opening for the opponents and competitors to take charge of the narrative, emphasize our failure under our own mantra of “an accident anywhere is an accident everywhere” and spread the word that we should not be allowed to get up anytime soon. They reminded formerly disinterested observers that we had fallen far short of our claimed perfection, took the opportunity to land a few blows while we were down, and made arrangements to ensure that our recovery was as difficult and expensive as possible.

Fears of radiation

As a group, nuclear technologists have often emphasized our cleanliness, our ability to operate reliably, and our improving cost structure.

We overlooked the efforts over the years by opponents and competitors to raise special fears about the materials that might be released in the event of an accident that breaks our multiple barriers. Though we all recognize that exposure to radioactive material at certain doses is dangerous, our opponents—sometimes aided by our own perfectionist tendencies—have instilled the myth that exposure to the tiniest quantities also carries unacceptable risk.

We had become so good at keeping those materials tightly locked up that we accepted ever-tightening standards, because they were easy enough to meet under routine conditions. Even under the “beyond design basis” conditions at Fukushima, our multiple barriers did a good enough job of retaining dangerous materials so that there were no immediate radiation-related injuries or deaths, but that isn’t good enough.

There were dangerous radiation levels on site; workers avoided injuries and fatalities only by paying attention and minimizing exposure times. The myth of “no safe dose” and the reality that any possible effects may occur in the distant future have continued to result in fear that effects are uncertain and will probably get worse.

The no-safe-dose assumption has made us terribly vulnerable to an effort to force us to continue meeting the expectation of zero discharges. Our stuff does “stink” on occasion; in this case, if we try to hold it all in, we are going to eventually suffer severe distress. The tank farm at Fukushima, with its millions of gallons of tritiated water, cannot expand forever, but our opponents will prevent controlled releases as long as they can to make the pain as large as possible.

It’s worth quoting the International Atomic Energy Agency’s recent report about its late 2013 visit to Japan to provide an independent peer review of recovery actions. This passage comes in the context of a carefully-phrased “advisory point” that strongly recommends that Japan prepare to discharge water where most isotopes other than tritium have been removed.

… the IAEA team encourages the Government of Japan, TEPCO and the NRA to hold constructive discussions with the relevant stakeholders on the implications of such authorized discharges, taking into account that they could involve tritiated water. Because tritium in tritiated water (HTO) is practically not accumulated by marine biota and shows a very low dose conversion factor, it therefore has an almost negligible contribution to radiation exposure to individuals.

Reliability and perfection

Not only did the accident destroy the ability of four plants to ever operate again, it has reminded us that reliability is not just a matter of technology and operational excellence. If the powers-that-be refuse permission to operate, the best technology in the world will fail at the task of providing reliable power. Our competitors are perfectly content to take over the markets that we are failing to serve. The longer they perform, the easier it is for people to assert that we are not needed.

We have also been taught that we have no real control over cost. The aftermath of Fukushima has shown that it’s possible to establish conditions in which even the most dire prediction of economic cost is an underestimate. There is no upper bound under conditions where perfection is the only available standard.

If we do not learn how to occasionally fail, how to make reasonable peace with our powerful opposition, and continue to help everyone understand that a search for perfection does not mean that its achievement is actually possible, nuclear energy does not have much hope for rapid growth in the near future.

That would be a tragic situation for the long term health and prosperity of humanity. The wealthy portions of our current world population can probably do okay for a while without much nuclear fission power. However, that choice would harm the underpowered people who are already living and innumerable future generations who will not live as well as they could if we shy away from improving and using nuclear fission technology.

Fission technology is not perfect and poses a certain level of risk, but it is pretty darned good and the risks are well within the range of those that we accept for many other technologies that can perform similar tasks.

References:

INPO 11-005 Special Report on the Nuclear Accident at the Fukushima Daiichi Nuclear Power Station

IAEA, International Peer Review Mission on Mid-and-Long-Term Roadmap towards the Decommissioning of TEPCO’s Fukushima Daiichi Nuclear Power Station Units 1–4 (Second Mission), Tokyo and Fukushima Prefecture, Japan, 25 November – 4 December 2013

__________________

Adams

Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

Is St. Lucie next on the antinuclear movement target list?

By Rod Adams

The most informative paragraph in a lengthy article titled Cooling tubes at FPL St. Lucie nuke plant show significant wear published in the Saturday, February 22, 2014, edition of the Tampa Bay Times is buried after the 33rd paragraph:

In answers to questions from the Tampa Bay Times, the NRC said the plant has no safety issues and operates within established guidelines. That includes holding up under “postulated accident conditions.”

Unfortunately, that statement comes after a number of paragraphs intended to cause fear, uncertainty, and doubt in the minds of Floridians about the safety of one of the state’s largest sources of electricity. St. Lucie is not only a major source of electricity, but it is also one of the few power plants in the state that is not dependent on the tenuous supply of natural gas that fuels about 60 percent of Florida’s electrical generation.

In March 2013, at the height of the political battle about the continued operation of the San Onofre Nuclear Generating Station—a battle that ended with the decision to retire both of San Onofre’s units—Southern California Edison issued a press release that contained words of warning for the rest of the nuclear industry.

The Nuclear Energy Institute’s Scott Peterson called the Friends of the Earth claims “ideological rhetoric from activists who move from plant to plant with the goal of shutting them down.” He goes on to say: “Not providing proper context for these statements incorrectly changes the meaning and intent of engineering and industry practices cited in the report, and it misleads the public and policymakers.”

In San Onofre’s case, the context of the public discussion should have included a widespread understanding that the decision to shut down the plant was based on a single steam generator tube leak that was calculated to be one-half of the allowable operating limit. That leak was detected by a radiation monitor sensitive enough to alarm on a leak that might have exposed someone to a maximum of 5.2 x 10^-5 (0.000052) millirem.
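To put that number in perspective, here is a quick comparison. The arithmetic is mine, and the roughly 300 millirem per year figure for natural background is a commonly cited U.S. average, not a number from the article:

```python
# Rough perspective on the maximum postulated exposure from the leak.
max_exposure_mrem = 5.2e-5               # maximum exposure cited for the leak
natural_background_mrem_per_year = 300   # commonly cited U.S. average (approx.)

ratio = natural_background_mrem_per_year / max_exposure_mrem
print(f"Natural background is roughly {ratio:,.0f} times larger per year")
# -> several million times larger; the alarm that led to the shutdown was
#    responding to a radiologically trivial release.
```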

The antinuclear movement has a long history of using steam generator material conditions as a way to force nuclear plants to shut down. Most nuclear energy professionals will freely admit that the devices have been problematic since the beginning of the industry. There was a period of acrimonious litigation when the utilities sued the vendors because the devices did not last as long as initially expected. However, with an extensive replacement program, focused research, attention to detailed operating procedures, and material improvements, steam generators are more reliable today than they were 25 or even 15 years ago.

It is also worth understanding that steam generator leaks do not cause a public health issue. Operating history shows that essentially all of the leaks have been modest in size and resulted in tiny releases of radioactive material outside of the plant boundaries. U-tubes are part of the primary coolant boundary and are thus classified as “safety-related.” Their integrity is important to reliable plant operation, but the 30 percent of the plants operating in the United States that are boiling water reactors don’t even try to keep radioactive coolant out of the steam plant.

The Tampa Bay Times feature article, written by Ivan Penn, included quotes from some of the same players involved in the—unfortunately—successful effort to close down San Onofre. Their words have that familiar ring of “ideological rhetoric,” indicating that St. Lucie might be high on the target list for the activists who move from plant to plant.

Arnie Gundersen, whom Penn correctly identified as a frequent nuclear critic, provided a fairly explicit quote supporting the guess that the antinuclear movement has selected its next campaign victim: “St. Lucie is the outlier of all the active plants.” Later in the article, he stated that St. Lucie’s steam generators have a hundred times as many “dents” as the industry average. That might be true, but that is mainly because the industry average is in the single digits. The important measure is not the number of wear spots, but their depth.

Daniel Hirsch, described as a “nuclear policy lecturer” from the University of California at Santa Cruz, used more colorful language: “The damn thing is grinding down. They must be terrified internally. They’ve got steam generators that are now just falling apart.” Like Gundersen, Hirsch has fought against nuclear energy for several decades.

David Lochbaum, from the Union of Concerned Scientists, indicated that he thought that the plant owners were gambling, even though their engineering analysis, which was supported by the Nuclear Regulatory Commission, indicates that the plant has no safety issues and is operating within its design parameters.

Those quotes from the usual suspects, spread throughout the article, are balanced by quotes explaining or supporting FPL’s selected course of action to continue operating and to continue conducting frequent inspections to ensure that conditions do not approach limits that would require additional action.

Here is an example from Michael Waldron, a spokesman for FPL, that appears near the end of the article:

“We have very detailed, sophisticated engineering analysis that allow us to predict the rate of wear, and we are actually seeing the rate of wear slow significantly.”

Even though it is balanced with an almost equal number of pro and con quotes, Ivan Penn’s article includes a number of phrases that appear to be carefully selected to increase public uncertainty and worry about St. Lucie’s continued operation. It is also possible to attribute those word choices to the author’s desire to add drama and emotion in order to attract additional readers, though that is difficult to do while maintaining accuracy. Unfortunately for people who love drama, nuclear power plants are quite boring. The vast majority of the time they simply keep working.

Here is an example of the type of rhetorical enhancement that frustrates people who value the accurate use of words:

Worst case: A tube bursts and spews radioactive fluid. That’s what happened at the San Onofre plant in California two years ago.

As stated above, the tube at San Onofre did not “burst” and it did not “spew” radioactive fluid. A tube developed a small, 75–85 gallon-per-day leak from the primary system into the secondary steam system. The installed equipment provided an immediate indication of a problem and the operators promptly took a very conservative course of action to shut down the plant.

While the responsible engineers were performing their detailed investigations and drafting their recommendations, the activists and the politicians took charge of the public communications and worked hard to ensure that San Onofre never restarted. Their focused misinformation offensive resulted in the early retirement of an emission-free power plant that reliably provided 2200 MW of electricity at a key node in the California power grid.

Today, local residents in California are not safer, the air is not cleaner, and the wholesale price of power has already increased by more than 50 percent. Several large-scale infrastructure investments are being planned to restore resiliency to California’s grid. The primary beneficiaries of the antinuclear actions are the people who sell the 300–400 million cubic feet of natural gas needed every day to make up for the loss of San Onofre.
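
The 300–400 million cubic feet per day figure is easy to sanity check. Here is a minimal back-of-the-envelope sketch in Python; the 2,200 MW output comes from the paragraph above, while the capacity factor, combined-cycle heat rate, and gas heat content are assumed round numbers rather than values from this article.

```python
# Rough estimate of the gas needed to replace San Onofre's generation.
# Only the 2,200 MW figure comes from the text; the rest are assumptions.
plant_mw = 2200                # San Onofre's combined output
capacity_factor = 0.90         # assumed typical capacity factor for the units
heat_rate_btu_per_kwh = 7000   # assumed efficient combined-cycle heat rate
gas_btu_per_cubic_foot = 1030  # assumed heat content of pipeline gas

kwh_per_day = plant_mw * 1000 * 24 * capacity_factor
gas_cubic_feet_per_day = kwh_per_day * heat_rate_btu_per_kwh / gas_btu_per_cubic_foot
print(f"{gas_cubic_feet_per_day / 1e6:.0f} million cubic feet per day")
# Prints ~323 million cubic feet per day, inside the 300-400 range quoted above.
```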

Let’s hope that the regulators and the politicians do a better job of finding sound technical advice, and that the responsible experts do a better job of helping people to understand that St. Lucie is safe, even if its steam generator tubes have more wear marks than anyone wants.

St. Lucie Nuclear Power Plant

__________________________

Adams

Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

How can we stop premature nuclear plant closures?

By Rod Adams

During an earnings call on February 6, 2014, Exelon Corporation indicated that it may decide to shut down two or more of its nuclear reactors because of poor economic return. Exelon spokespeople have been warning about the effects of negative electricity prices for several years.

On February 8, 2013, almost exactly a year ago, the Chicago Tribune published a story titled Exelon chief: Wind-power subsidies could threaten nuclear plants. The Tribune noted that Christopher Crane, Exelon’s CEO, was concerned about the continued operation of some of the units in the company’s large fleet of reactors:

“What worries me is if we continue to build an excessive amount of wind and subsidize wind, the unintended consequence could be that it leads to shutting down plants,” Crane said in an interview.

Crane said states that have helped to subsidize wind development in order to create jobs might find themselves losing jobs if nuclear plants shut down.

The Chicago-based company doesn’t have any immediate plans to mothball nuclear plants, although at least one analyst has predicted that could occur as soon as 2015.

“We continue to believe that our assets are some of the lowest-cost, most-dispatchable baseload assets and don’t have any plans at this point of early shutdown on them,” Crane said.

If the discussed nuclear reactor shutdowns occur, they would be numbers six and seven in the count of prematurely closed nuclear power plants in the United States since the beginning of 2013. Though there are certainly antinuclear activists and analysts who will point to this record with a delighted “We told you so,” this is not a trend that bodes well for the economic stability of the United States or for the continued effort of the US to reduce its dependence on hydrocarbon fuel sources.

It is also a trend that puts a number of nuclear professionals at risk of suffering a significant economic setback and life-altering job loss, despite having participated in an exceptional example of continued performance improvements over a sustained period of time.

During a recent industry gathering hosted by Platts, Dr. Pete Lyons pointed to the trend of shutting down well-maintained and licensed nuclear power plants as something that is worrying the current Administration, especially because it will make it difficult to achieve progress in reducing CO2 emissions.

Jim Conca, writing for Forbes, noticed Exelon’s announcement and wondered about its effect on a number of important attributes of energy production. He reminds his readers that nuclear plants represent a large fraction of the emission-free electricity produced in the United States each year. He also points out that the longer nuclear plants run and produce revenue, the better. Construction costs are already sunk, the plants already have stored inventories of spent fuel, and they already require some form of decommissioning. The costs and pollution associated with all of those features should be spread over as many kilowatt-hours of generation and revenue as possible.

There are several things that nuclear energy advocates can do that might help to eliminate the pressures that have been encouraging nuclear plant operating companies to either shut down or consider shutting down useful assets.

  1. Learn enough about the natural gas market to discuss it with your friends and colleagues
  2. Advocate policies that put a fair value on generating clean electricity
  3. Advocate policies that reward generating sources for reliability
  4. Cheer efforts to market electricity to restore growth in demand

During the winter of 2013-2014, there have been a number of examples of the risks of concentrating heating, industrial uses, and electricity production on natural gas simply because it has been accepted as “clean” and has seemed abundant and cheap since 2008, which is apparently a long time ago in the memory of some market observers and decision makers. The Nuclear Energy Institute continues to produce excellent materials and testimony about the importance of fuel diversity; it needs as much assistance as it can get in spreading the message.

This winter has brought reported shortages and price spikes exceeding $100 per MMBTU. That is roughly equivalent to oil hitting $580 per barrel, since every barrel of oil contains about 5.8 MMBTU of heat energy. Natural gas price spikes have not been limited to the Northeast; spikes exceeding $20 per MMBTU (five times the pre-winter price) have occurred in the mid-Atlantic, the Pacific Northwest, the Chicago area, southern California, and even Texas. Last week, a price spike of $8.00 per MMBTU even showed up at Henry Hub, at the intersection of several prime US gas production areas.
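
To make the comparison concrete, here is a minimal sketch in Python of the heat-equivalence arithmetic; the 5.8 MMBTU-per-barrel conversion and the price points are the ones cited in this paragraph.

```python
# Convert natural gas prices ($/MMBTU) into an oil-equivalent price ($/barrel)
# using the ~5.8 MMBTU of heat energy contained in a barrel of oil.
MMBTU_PER_BARREL = 5.8

def oil_equivalent_price(gas_price_per_mmbtu):
    """Oil price per barrel that costs the same per unit of heat energy."""
    return gas_price_per_mmbtu * MMBTU_PER_BARREL

for gas_price in (100, 20, 8):  # $/MMBTU levels mentioned above
    print(f"${gas_price}/MMBTU gas is heat-equivalent to "
          f"${oil_equivalent_price(gas_price):.0f}/barrel oil")
# $100 -> $580, $20 -> $116, $8 -> $46 (rounded) per barrel
```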

Henry Hub spot prices for the week ending February 5, 2014

When gas prices reach the levels seen this winter, many customers stop buying, even if they have no alternative fuel source available. If they are operating an industrial facility that needs the gas to run, they stop operating. If they are operating a household that needs the gas to stay warm, they put on more sweaters. If they are operating a school system, they shut the doors and tell the children to stay home.

In markets where wholesale electricity prices have been deregulated, gas-fired generators are usually the marginal price setters, so the spikes in natural gas prices have directly affected electricity prices at times of peak demand, driving them to rarely seen levels. It remains to be seen how this winter’s electricity price spikes have affected revenues at generating companies, but they are unlikely to have harmed the bottom line. Unfortunately, brief spells of profitability may not be enough to encourage nuclear plant operators to keep running their plants if wholesale prices return quickly to loss-making levels for much of the year.

Though many of us value the fact that nuclear plants do not produce any greenhouse gases or other air or water pollutants, that feature does not produce any additional revenue for plant owners. For the past twenty years, every alternative to fossil fuel except nuclear and large hydroelectric dams has been given direct subsidies, preferential tax treatment, and quotas. Fossil fuel generators have not been charged for their use of our common atmosphere as a waste disposal site. It is time to put pressure on our representatives to pass legislation that establishes a price on carbon so that investors are encouraged to fairly value clean generation.

My personal favorite proposal is James Hansen’s fee-and-dividend approach, in which all hydrocarbon fuels pay a fee based on their carbon content and the public receives an equal per-person share of the revenue. People who are careful and do not use much fuel will see a net increase in their income; people who use more than average will see a net cost. Investors will recognize that it is worth their effort to identify technologies that do not emit CO2.
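
A minimal sketch of that mechanism, with purely illustrative numbers (the fee level and per-capita emissions below are assumptions, not figures from this article), looks like this:

```python
# Illustrative fee-and-dividend arithmetic. Every value here is an assumption
# chosen only to show how below-average fuel users come out ahead.
fee_per_ton_co2 = 25.0      # assumed carbon fee, dollars per ton of CO2
avg_emissions_tons = 17.0   # assumed average per-capita emissions, tons per year

# The dividend is the total fee revenue divided equally among the public,
# which works out to the fee times the average per-capita emissions.
dividend_per_person = fee_per_ton_co2 * avg_emissions_tons

def net_impact(personal_emissions_tons):
    """Dividend received minus the fees embedded in what a person buys."""
    return dividend_per_person - fee_per_ton_co2 * personal_emissions_tons

print(f"Below-average user (10 tons/yr): net {net_impact(10):+.0f} dollars per year")
print(f"Above-average user (25 tons/yr): net {net_impact(25):+.0f} dollars per year")
# Prints +175 for the careful user and -200 for the heavy user.
```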

We also should advocate policies that reward generators for their ability to produce reliable electricity. Reliability is a valuable service that helps ensure the grid is adequately supplied with a sufficient margin, and that we avoid the kind of volatility seen this past winter and the kind that nearly bankrupted California in 2001.

Finally, we should seek to reverse the reluctance to tout the product we produce. Electricity is a wonderful tool that makes life better. It can be produced using a variety of fuels, though most readers here would probably agree that uranium and thorium are the best available electricity generation fuels. It’s time to recognize that the energy business is competitive. Like all competitive enterprises, it rewards people who fight for market share by producing a better product and by taking effective action to ensure that people know they are producing a better product.

While traveling through the southeast US last week, I heard an advertisement that made me smile. Alabama Power was offering to give people water heaters as long as they were shifting from gas heaters to electric heaters. Why have we allowed competitive energy producers to steal markets for so many years without fighting back?

I encourage people in the electricity production business to download a copy of the Jan/Feb 2014 issue of EnergyBiz and read the article titled Gas Competes with Power; A New Foundation Fuel, New Business Channels. While you are at it, you might also enjoy reading the challenge that NRG Energy’s David Crane lays down for the traditional business of generating and distributing electricity in his guest opinion piece titled Keep Digging: What Lethal Threat?

Exelon’s Clinton Power Station

______________________

Adams

Adams

Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

The Value of Energy Diversity (Especially In A Polar Vortex)

By Rod Adams

Since the natural gas price collapse that started in summer 2008, many observers have become accustomed to using the adjective “cheap” when talking about natural gas. Like the word “clean,” another adjective often applied to methane, “cheap” is a relative term. It is also a term whose applicability depends on time and location. As I wrote in a recent post on Atomic Insights, gas is only really cheap if nobody needs it. When demand increases due to some perfectly natural phenomenon—like a winter with near-normal temperatures—it can exceed deliverability by a large margin.

When that happens, the only way that markets can match demand to supply is to allow the price to climb to a level high enough to destroy some of the demand. Because the infrastructure for extracting, storing, and delivering gas cannot be rapidly altered, suppliers are unable to bring additional supplies to market in time to provide relief.

Late last week, the price of natural gas at three major trading locations—New England, New York, and Mid-Atlantic—exceeded $70.00 per MMBTU. It is worth seeing the table for yourself.

Daily natural gas prices, January 22, 2014

Those prices are, of course, spot market prices that do not apply to customers that have signed long-term supply contracts; but since long-term contracts are often priced at a level that is substantially higher than the short-term spot market, many customers have been loath to buy the protection offered. Home heating delivery companies are generally seen as utilities that supply a vital need, so they have traditionally signed long-term contracts with priority delivery clauses. Most merchant power generators have taken the risk associated with short-term contracts.

When gas prices get too high, those merchant generation companies have a simple choice: they stop buying fuel and stop generating power.

During last week’s brutal cold weather in New England there was a day when 75 percent of the region’s natural gas-fired power generators were unable to operate, presumably because there was an insufficient amount of gas to supply both heating demands and power demands.

Even with the delivery-related demand destruction, withdrawals from working gas-in-storage reservoirs have been running at a higher pace than at any time during the past five years, resulting in a current gas-in-storage inventory that is about 14 percent below the five-year average for this time of year. Natural gas analysts are starting to speculate about the ability to maintain a sufficient storage buffer to complete the winter.

The total working gas in storage in the United States for the week ending January 17 is 2.4 trillion cubic feet (TCF). To put that number in perspective, average daily use in January has been running at 97 billion cubic feet per day for a monthly total of 3 trillion cubic feet. Traders are starting to pay attention, and long-term pricing at the main delivery hubs is starting to climb rather steeply.
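
Those two numbers are worth putting side by side. Here is a minimal sketch in Python using only the figures quoted above (2.4 TCF in storage and 97 billion cubic feet per day of January demand):

```python
# Compare working gas in storage with the pace of January consumption.
storage_bcf = 2400       # 2.4 trillion cubic feet, week ending January 17
daily_use_bcf = 97       # average daily use in January
days_in_january = 31

monthly_use_tcf = daily_use_bcf * days_in_january / 1000
print(f"January consumption: about {monthly_use_tcf:.1f} TCF")   # ~3.0 TCF
print(f"Storage alone covers about {storage_bcf / daily_use_bcf:.0f} days of demand")
# Ignoring ongoing production, storage would cover only ~25 days at the
# January consumption rate, which is why traders are paying attention.
```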

Natural gas prices at Henry Hub, January 2012 - January 2014

To maintain grid stability, the New England independent system operator resorted to using combustion turbines supplied by diesel or jet fuel. Though distillate oil is normally a premium fuel best reserved for transportation, it has an advantage over gas in times of high demand. Because it is more readily stored, it can be staged in advance so that it is ready to run when demand soars—at least until the tanks run dry.

It has not yet made the news, but there are probably quite a few New Englanders who are happy that they still have heating oil in tanks on their own property. The oil heat advocates at American Energy Coalition would certainly like to spread the word that gas may not always be the best source of winter heat.

Fortunately, the US power grid has not yet arrived at the state that seems to be the goal of the natural gas marketing departments and their allies in the media. Not only are there a number of coal- and oil-fired power plants still capable of running, there are also 100 operable nuclear power plants that thrive on colder weather.

Though there have been one or two operational issues, the monthly nuclear power plant performance report for December 2013 showed a total generation of more than 71 billion kilowatt hours for an average capacity factor of 97.6 percent.
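
For readers who want to see where a number like 97.6 percent comes from, here is a minimal sketch of the capacity factor arithmetic. The 71 billion kilowatt-hour generation figure is the one quoted above; the installed capacity is an assumed round number for the roughly 100-unit US fleet, not a figure from this article.

```python
# Fleet-average capacity factor = actual generation / maximum possible generation.
generation_kwh = 71e9          # December 2013 generation, from the report cited above
hours_in_december = 31 * 24    # 744 hours
installed_capacity_kw = 98e6   # assumed ~98 GW of net US nuclear capacity

capacity_factor = generation_kwh / (installed_capacity_kw * hours_in_december)
print(f"Capacity factor: {capacity_factor:.1%}")
# ~97% with these assumptions, consistent with the reported 97.6 percent.
```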

So far in January, nuclear plant performance remains impressive, with some days reaching average capacity factors in excess of 97 percent. Much of this performance comes from well-executed maintenance strategies and adverse weather plans. Those preparations allow operators to take timely action to minimize the probability of weather-related outages.

Nuclear plants have a reliability advantage over their fossil fuel competitors; they usually enter high demand, bad weather seasons with “fuel tanks” that contain many months’ worth of accessible fuel. All other competitors can run into fuel-related problems when deep cold persists for too long. Coal piles have been known to become solid blocks of ice, gas lines can freeze, and even diesel fuel can get syrupy if not properly stored.

Nuclear power plant operators also benefit from fuel prices that do not change as a result of high demand periods—the average cost of commercial nuclear fuel in the United States remains steady at between $0.50 and $0.60 per MMBTU. For merchant power plant operators, the cold weather is providing a great opportunity to bank some terrific returns. If you look at the daily spot market price table above, you can see that electricity prices were very robust, especially for companies that operate generating plants with an average operating and maintenance cost of $24 per MW-hr.
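
To see what a fuel price quoted per MMBTU of heat means per megawatt-hour of electricity, here is a minimal sketch. The fuel cost range and the $24 per MW-hr figure come from the paragraph above; the heat rate (roughly 33 percent thermal efficiency) is an assumption.

```python
# Convert nuclear fuel cost per MMBTU of heat into cost per MWh of electricity.
fuel_cost_per_mmbtu = 0.55      # midpoint of the $0.50-0.60 range quoted above
heat_rate_mmbtu_per_mwh = 10.4  # assumed heat rate, ~33% thermal efficiency
om_cost_per_mwh = 24            # operating and maintenance cost from the text

fuel_cost_per_mwh = fuel_cost_per_mmbtu * heat_rate_mmbtu_per_mwh
print(f"Fuel: about ${fuel_cost_per_mwh:.2f} per MWh; O&M: ${om_cost_per_mwh} per MWh")
# Fuel works out to roughly $6 per MWh of electricity with these assumptions.
```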

It would be terrific if the operators that benefit from selling their output at those generous prices stashed some of the money away for those balmy spring days when few people need gas for heat. Gas is still a cheap and relatively clean fuel when demand is low. There will again be times in the near future when gas-fired generators sell their output at prices that are not profitable for many others on the grid.

Maybe one lesson worth learning this winter is that an electric grid supplied by integrated power utilities operating under rate regulation with an obligation to serve is not such a bad arrangement after all. Electricity is too important for the rest of the economy to allow its price and availability to be so dependent on the whims of the weather.

There is another lesson that is specifically applicable to the state of Vermont. Vermonters, you still have a licensed and operating nuclear power plant that supplies your regional grid with power equivalent to 85 percent of your total consumption. For political reasons, you elected a governor and representatives who made that plant feel so unwelcome that the owners have decided to shut it down instead of refueling it and continuing to operate for the rest of its licensed life.

It’s not too late to take note of the way weather has been affecting your regional grid this year and consider how bad things might get if Vermont Yankee gets shut down as currently scheduled. Take a look at the possible impacts of following through with the proposed Total Energy Study.

Once you have imagined that scenario, pick up the phone and call some of your government leaders. Tell them that you want them to ask Entergy to keep the plant running. Tell your representatives that they have your permission to beg for forgiveness if necessary.

____________________________

Adams

Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

Nuclear energy is built on an actinide foundation

By Rod Adams

During the past several years, I have been following the progress of a strange situation in my adopted state of Virginia. Despite being a state with a long history of mining and mineral extraction, we have a law in place that forbids mining one specific element—uranium. The law is technically just a temporary moratorium put in place in order to give the state’s regulators time to draft effective regulations, but the law enacting the moratorium was put into place more than 30 years ago.

At this point, it is rather difficult to consider it just a temporary measure, especially since progress toward drafting the required regulations has stalled. Work had been under way since 2008, but it recently hit a pretty substantial barrier.

The governor-elect, Terry McAuliffe, announced about a week after his election that he would veto any legislation ending the moratorium. Since he expects no change in the moratorium while he is governor, he said he would oppose any effort to begin drafting rules as a waste of time and money. He made that decision after a strong sales effort by people who did not like the idea of allowing uranium to be mined in the state.

I’ve spent some time on the phone with Ben Davenport, the leader of one of the main opposition groups. He told me that he and his group are strongly pronuclear and believe that nuclear energy is the cleanest and best way to produce electricity. However, Mr. Davenport and his group believe that mining uranium is the dirty end of the business that should be done somewhere else.

I believe that the established nuclear energy interests in the state have missed a good opportunity to build an effective coalition that would take advantage of a teachable period to help people understand more about nuclear energy, the basic materials that enable it to function, the measurably minor health and environmental impacts associated with modern mining, and the economic benefits that result from materials extraction from the earth.

The people who are already in the nuclear industry are the ones most likely to understand that it is a safe, clean, and productive industry with the ability to provide great benefits to society. We need to practice our ability to communicate those aspects of our business more clearly to a public that has been subjected to many negative perspectives, often from people with economic incentives to spread fear, uncertainty, and doubt about our technology.

One aspect of economic development that seems to elude most people who do not live in Texas, Oklahoma, or Alaska is that businesses that pull valuable materials out of the earth are essentially finding money that makes the resource pie larger for all of us. Though mining opponents claim that there is plenty of uranium available on the world market—and they are correct under current conditions—they fail to understand that the money spent to purchase that uranium from somewhere else goes to the supplier region and is spent there.

Money used to purchase uranium in Virginia, on the other hand, stays in the United States. It ends up in the pockets of people who shop locally, dine in local restaurants, buy propane from local distributors, pay mortgages to local banks, and send their children to local schools. The valuable material adds wealth and capability, especially compared to simply leaving the material resting in the ground.

Because the nuclear industry in the United States grew up after the Cold War weapons program, we ended up with a geographically dispersed industry that prevents the kinds of sensible concentrations that yield substantial scale benefits to most other industries. Virginia-based companies have the opportunity to streamline the nuclear fuel fabrication supply chain and to take advantage of synergies that result when there are several different employers looking for people with similar skill sets in a geographic area.

There are potential political and public acceptance benefits for concentrating a complete industry supply chain in a defined geographic area. That is especially true when the industry is something disruptive that has complex or unique features that require knowledgeable communicators who can help the public and the politicians understand the impact of their decisions on the industry.

Many of the elements of this kind of concentration exist in southern Virginia, where there are nuclear power plant vendors, nuclear fuel suppliers, a nuclear power plant operator, a nuclear capable shipyard, nuclear engineering programs at regional universities, and a number of nuclear-powered ships. Unfortunately, many of these elements are not allowed to talk to each other and have a long history of maintaining “radio silence” among their neighbors and friends.

Here is a vision that I would love to see being pursued—I’d like to see the companies that are already engaged in the business of creating finished actinide fuel components and the machines that use those finished assemblies talk with the people who own a large uranium deposit with a potential worth of $7 billion. The same people own several thousand acres of land surrounding that deposit.

The discussions need to include businessmen who are already operating successful enterprises and are devoted to improving the foundations of the local economy. I’d like them all to think and talk about the possibility of siting additional training facilities or laboratories related to fuel conversion, enrichment, and fabrication on the site. The site might even be suitable for demonstration and test reactors that can serve as long term training facilities.

There is already usable railroad infrastructure in the area that is connected to one of the most capable ports in the world. The manufactured parts required for small modular reactors need to be produced somewhere; why not in some of the places in southern Virginia with a long history of manufacturing and with skilled populations that know how to work with their hands? Craftsmen can learn to master the demanding quality assurance requirements for nuclear parts; those skills have wide applications and are not easy to outsource.

Some of the people who have opposed the uranium mining make it very clear that they are opposed because they believe that the perceived risks outweigh the benefits. So far, those benefits have been described to them as the potential for several hundred good jobs sometime in the uncertain future, after all of the licensing and permitting work is complete.

The good, practically minded people in the area know that job promises do not provide any meals, do not help educate any children, and do not increase the customer flows at any local businesses. Perhaps, by applying some creative thinking and vision, good jobs can start more quickly and lay the groundwork for a sustainable industry that will enable long term prosperity and resilience.

The site of the Alliance for Progress in Southern Virginia has some rotating photos that include one with a beautiful rolling pasture, complete with a few dispersed bales of hay. I enjoy bucolic scenery, and believe strongly that appropriate nuclear energy facilities can fit into that scenery quite nicely. However, from an economic development point of view, there are few land uses that are less progressive or economically important than growing hay.

Uranium ore

_____________________

Adams

Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

Do oil and gas suppliers worry about nuclear energy development?

By Rod Adams

The world oil market is not a free market. Prices are manipulated by a small number of producers that adjust production rates to achieve desired prices that are high enough to provide maximum profits, without being high enough to encourage customers to aggressively pursue alternative energy sources.

That is the most important takeaway for attendees at the OPEC Embargo +40 summit held in Washington DC on October 16. Unfortunately, the meeting sponsors avoided acknowledging that nuclear energy is the alternative energy source that most worries established hydrocarbon suppliers. Nuclear has held that position since the early 1960s, when General Electric first won a head-to-head competition against coal to sell the Oyster Creek nuclear power plant.

Nuclear energy is reliable, virtually emission-free, and uses a widely distributed, abundant fuel source that is no longer subject to influence by the same producers that manipulate other fuel prices. Its cheap, clean heat can help turn coal, natural gas, and plants (vegetation) into liquid fuels that can be drop-in replacements for petroleum-based fuels.

A glittering cast of American energy pundits gathered in Washington DC for the summit held on the 40th anniversary of the 1973 OPEC oil embargo. Natural gas was the celebrity invitee everyone wanted to fawn over, while nuclear energy was an uninvited guest disrespected by almost all of the speakers whenever it was brought up.

The event was hosted by a group of retired large company executives and military flag officers who have served in roles in which they should have learned about the vital role that energy plays in our economy and in our politics.

That organization, Securing America’s Future Energy (SAFE), recently produced a document titled A National Strategy for Energy Security: Harnessing America’s Resources and Innovation 2013. There are only three uses of the word nuclear in that 125-page document. Two of those appearances are in the legends of graphs about energy sources; one is followed by the word “physics” in a list of education focus areas.

People who want to sell uranium, fabricate fuel, build and operate new plants, and stop a dramatic shift of leadership in technical innovation to other countries (e.g., Korea and China) must recognize that it’s past time to take action to force ourselves into the conversation, even if our technology makes some people uncomfortable.

During the summit, negative words about nuclear energy came from people representing numerous points in the political spectrum. Doubters included a man who had served as both chairman of the Atomic Energy Commission and as Secretary of Energy, a woman who had been the Secretary of State, a man who is the chief executive officer of a large ship operating company, and a man who is the CEO of one of the world’s pioneering nuclear power plant vendors.

Madeleine Albright, the former Secretary of State, described the Atoms for Peace program as a mistake that led to too many unsolved “unintended consequences.” Meanwhile, according to James Schlesinger (former AEC chairman and one-time energy secretary), cheap natural gas has killed the nuclear renaissance and no utility CEO is going to consider proposing a new nuclear plant to his board of directors.

But you asked a question about nuclear. Madeline (Albright) mentioned the unintended consequences [of the Atoms for Peace speech]. There are unanticipated consequences. What we have seen as a result of shale oil development and shale gas development is natural gas so cheap now that nobody, no utility, is going to build a nuclear plant unless very heavily subsidized, and we are not seeing that. Philosophically we may be more interested in having more nuclear plants but as a practical matter, we’re just not going to see them. There is no nuclear renaissance coming.
(See SAFE video titled Insight from the Oval Office. Schlesinger’s comment dismissing nuclear energy starts at 24:55)

Adam Goldstein was asked whether his Royal Caribbean Cruise Lines would be interested in nuclear power, a technology that has replaced oil on large ships for more than 50 years. He chuckled uncomfortably—along with the audience—and stated that those ships do not have to carry passengers into Australia. He said the costs would be prohibitive. He appeared unaware that his huge passenger ships are a tempting “early adopter” market for smaller reactor vendors; they operate baseload power plants running on low-sulfur diesel fuel that costs more than $25 per MMBTU.

Jeff Immelt described GE’s new jet engine, which improves fuel economy by 15 percent, as his company’s most innovative technology for reducing oil dependence. When pressed about nuclear energy, he said that his company is going to keep their nuclear energy division on life support because his “successor’s successor” might be grateful to have that option available. He never mentioned the ABWR, the PRISM, or the ESBWR.

Nuclear energy received a few positive mentions; most of the best came from Fred Smith, the founder and CEO of Federal Express, a world-wide logistics company founded in 1971, just two years before the OPEC embargo. Smith fundamentally understands the importance of a reliable supply of fuel for his trucks, planes, and delivery vehicles.

He is also well aware of the fact—through repeated experience—that apparent abundance can rapidly turn into price-spiking shortage. He knows what that shift means to his company’s profits and what it means to the profits of companies that sell oil or alternative energy equipment. He noted the ongoing nuclear renaissance in China and his interest in what he called “pocket nukes” that are receiving investments from Bill Gates and Babcock & Wilcox.

Aside: SAFE recently posted A Conversation with Jeff Immelt and Fred Smith on YouTube. Immelt repeatedly sings the praises of natural gas and explains how his company is involved in the industry. His comments about the most innovative technologies come in response to a question that Becky Quick asked starting at 23:46. Their discussion about nuclear energy begins with a question from Becky starting at 28:35. End Aside.

Carol Browner, who served as the Environmental Protection Agency administrator in a Democratic administration, insisted that nuclear energy has an important role to play in reducing fossil fuel dependence and reducing CO2 emissions.

Those examples show that the most receptive audiences for the nuclear energy alternative are people who buy a lot of fuel without selling any, and people who are deeply concerned about air pollution and climate change. The former understand that having additional supplies of reliable power will mean more competition to provide more stable and lower prices. The latter group knows that we cannot continue to dump CO2 into the atmosphere at an ever-increasing rate without unexpected consequences.

It’s time to get more aggressive in nuclear energy marketing. The uranium industry should teach people how heat is fungible in order to excite its potential supporters and capture attention from energy pundits.

Nuclear fission heat has already reduced the world’s dependence on oil; there is plenty of remaining opportunity. Nuclear energy pushed oil out of the electricity market in most of the developed world. Fission has replaced oil combustion in larger ships, but most others still burn oil. Nuclear-generated electricity has replaced oil burned for locomotives, city trolleys, and space heat, but there is room for substantial growth in these markets. Uranium producers should be influential members in the coalitions that are working to electrify transportation systems. Fission heat, especially with higher temperature reactors, can replace oil heat in industrial processes, including those well-proven processes that can turn coal, natural gas, and biomass into liquid fuels.

Fission can also reduce oil use by pushing gas out of the power generation business, thus freeing up more natural gas for other uses. As the gas promoters love to point out, methane is a flexible and clean-burning fuel. It is worth reminding their customers that fuel burned in power plants is not available for any other use.

There should no longer be meetings in Washington in which serious energy observers can hold sessions about reducing oil dependence without discussing uranium’s important role in achieving that goal. There should also not be another meeting in DC discussing how natural gas is going to reduce our dependence on petroleum without any apparent recognition that gas and oil are almost identical chemicals that come from essentially the same places in the earth’s crust, are supplied by essentially the same multinational conglomerates, and are delivered to customers using very similar types of pipes, ships, and trucks.

Natural gas plant

Note: An abbreviated version of this article first appeared in the November 7, 2013 issue of Fuel Cycle Week.
_______________________

Adams

Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

Excitement about U-235 as coal competitor–circa 1939 & 1940

By Rod Adams

Conventional wisdom says that the general public was introduced to atomic energy by the explosions at Hiroshima and Nagasaki. According to that version of history, the introduction instilled a strong dose of fear that remains to be overcome.

Some observers who like to paint nuclear energy in a negative light have stated that the program to build nuclear power plants grew from a desire to find a civilian use for a technology developed solely from a desire to create weapons.

Accounts of the early days after the discovery of the fission chain reaction, however, show that physicists who were engaged in the study of the atomic nucleus and the use of neutrons to produce artificial radioactivity were keenly interested in producing useful power. They were motivated not only by a scientific desire to gain a better understanding of the fundamental structure of the atom, but also by a desire to provide the world with a new power source to compete with coal and oil. The stories also show, however, that writers who covered the scientific advances often asked questions indicating that they envisioned weapons or doomsday scenarios.

As a digital subscriber to the New York Times, widely referred to as “the paper of record,” I recently performed an archive search using the term “chain reaction” and a date range starting on 01/01/1938 and ending on 01/01/1944. The results of that search confirmed my suspicion that the atomic pioneers were primarily interested in power production—though, when pressed, they acknowledged the possibility of explosive energy release.

The search returned 10 articles published between February 1939 and March 1941, with no additional results after that date. Even before the Manhattan Project started, scientists apparently stopped discussing chain reactions in public. Some of the 10 pieces discovered were short inclusions in a regular column titled Science in the News. Here are sample quotes from those pieces showing atomic energy optimism:

Frederic Joliot, co-winner of the 1935 Nobel Prize for chemistry, is trying to find a way to make a $2 pound of uranium give up as much heat or power as is now obtained from burning $10,000 worth of coal.

Uranium atoms will do the firecracker trick under certain restrictions. If scientists can find practical means to set up uranium chain reactions, then it is estimated that it may be possible to obtain from one pound of uranium as much energy as is at present obtained from 1,250 tons of coal.

(Associated Press, Uranium as a Coal Substitute, New York Times, June 19, 1939)

Roberts and Kuper agree that “a chain reaction cannot be ruled out definitely for either slow or fast neutrons,” but decide that “there is no evidence of any kind that such a reaction will really occur.” They throw more cold water over dreamers by showing that uranium has not very great economic advantage over coal even if it could be used. “Uranium oxide (96 per cent pure) sells for approximately $2 a pound, which is roughly equal to the price of a ton of coal at the mine. In terms of energy dollar—uranium is cheaper by a factor of 8.5.”

Though this may look good to a financier, Roberts and Kuper point out that as the demand for uranium increases so does the price. In the end further refinement would be necessary and the limited supply of high-grade ore would soon be exhausted. “If uranium were to replace 500,000,000 tons of coal used annually in this country,” argue these skeptics, “the amount of uranium consumed would increase 15,000 per cent.”

(Kaempffert, Waldemar, Atomic Energy From Uranium, The New York Times, October 22, 1939)

There was also a lengthy front-page article titled Vast Power Source In Atomic Energy Opened by Science published on May 5, 1940. That article documented a high level of public interest in the new discoveries and described an optimistic attitude among both academic and industrial researchers. That article provided technical information that I had previously thought was a closely-guarded, Manhattan Project secret.

A natural substance found abundantly in many parts of the earth, now separated for the first time in pure form, has been found in pioneer experiments at the Physics Department of Columbia University to be capable of yielding such energy that one pound of it is equal in power output to 5,000,000 pounds of coal or 3,000,000 pounds of gasoline, it became known yesterday.

The discovery was announced in the current issue of The Physical Review, official publication of American physicists and one of the leading scientific journals of its kind in the world.

Professor John R. Dunning, Columbia physicist, who headed the scientific team whose research led to the experimental proof of the vast power in the newly isolated substance, told a colleague, it was learned, that improvement in the methods of extraction of the substance was the only step that remained to be solved for its introduction as a new source of power. Other leading physicists agreed with him.

A chunk of five to ten pounds of the new substance, a close relative of uranium and known as U-235, would drive an ocean liner or an ocean-going submarine for an indefinite period around the oceans of the world without refueling, it was said. For such a chunk would possess the power-output of 25,000,000 to 50,000,000 pounds of coal, or 15,000,000 to 30,000,000 pounds of gasoline.

Uranium ore, in which the U-235 also is present, is found in the Belgian Congo, Canada, Colorado, England and Germany, in relatively large amounts. It is 1,000,000 times more abundant than radium, with which it is associated in pitchblende ores.

(Laurence, William L., Vast Power Source in Atomic Energy Opened by Science, New York Times, May 7, 1940, P. 1)

The article continues on page 51 to provide a number of details that show a rather remarkable pace of advancement in understanding, considering the fact that only 18 months had passed since the initial recognition that neutrons could cause uranium to split into two pieces.

Not only is the energy-liberating process automatic and self-regenerating, it was explained, but it also is self-regulating. The energy liberated from the atoms heats up the water so that it turns into steam. When all the water supplied has been turned into steam, there is nothing left to slow down the fast-traveling neutrons, and fast neutrons just go through the uranium without breaking up its atoms and releasing its energy. This brings the whole process to a stop until more cool water is supplied.

As one leading physicist explained it, “the colder the water the better the reaction. The reaction is self-limiting because heat (generated by the split atoms) speeds up the neutrons and the faster the neutrons the less the reaction.”

“The faster you feed in the cold water,” the scientist added, “the faster the water will come out hot on the other side, because more neutrons will be slowed down and thus more atoms split and more energy is liberated. Thus the process is admirably suited for power generation.”

Because of the nature of the neutrons, even the slow-traveling ones, it was explained further, it is necessary to have a mass of at least five pounds, and possibly as high as twenty, to make the process work on a practical scale. In a smaller amount even low energy neutrons would escape into the open without splitting the initial “trigger-atom” that sets off the process. To start the process it is necessary for the neutron to remain inside the mass, so that it would enter the nucleus of an atom to start the splitting process.

One of the scientists explained the process of the energy-liberation from U-235 by comparing it to the burning of coal. Whereas coal uses oxygen to liberate its energy, he explained, the U-235 uses slow neutrons for the same purpose. The process of combustion in the case of the U-235, he added, is, atom for atom, 100,000,000 times as effective as is the case in the combustion of coal. However, as the atomic weight of the uranium is 235, compared with 16 for the oxygen and 12 for the carbon, there are fewer uranium atoms for a given weight than there are oxygen and carbon atoms. This reduces the energy relations of the U-235, compared with coal, to a ratio of 5,000,000 to 1.

There are several new methods being considered for increasing the yield of the new substance to large-scale amounts. But as to this, scientists greet the questioner with a profound silence.

(Laurence, William L., Vast Power Source in Atomic Energy Opened by Science, New York Times, May 7, 1940, P. 51)

On May 12, 1940, the New York Times Science in the News column written by Waldemar Kaempffert, its longtime science editor, included a section titled Atomic Power—Not Yet. That piece, published just one week later, had a completely different tone and expressed a sense of impossibility for the near term development of the technology:

Last week’s hullabaloo about atomic power naturally prompted this department to look into the possibility of dispensing with coal and oil. It is our sad duty to report that the prospect is not bright. If there is any thought of Germany’s making use of the work done at the universities of Columbia and Minnesota, and the General Electric Company’s laboratories, it must be dismissed. Yet physicists never were so near to doing away with coal and oil as sources of energy and turning to ordinary matter as they are now.

It takes about 100 hours to make one microgram of uranium-235 or 1,000,000 hours or over a century to make one gram. About 100 grams (a little more than three ounces) would be required to make serious experiments in generating energy on a small scale. At least five pounds would be required to drive an ocean liner. It may be that a more rapid means of producing U-235 than that now available may be evolved. But the prospect of using U-235 in the present war is zero.

As matters stand we are not likely to spend centuries in accumulating the necessary uranium-235. By the time we had it so much would be known about the structure of matter that easier means of developing power from the atom would have been discovered. Accordingly, this department has decided to place the usual order for coal to be shot into the cellar, and preparing itself for the usual task of shoveling expensive black lumps into a hungry furnace.

(Kaempffert, Waldemar, Science in the News: Atomic Power—Not Yet, The New York Times, May 12, 1940)

I was immensely curious about the abrupt turnaround from the same publication in such a short period of time. The mystery was solved when I found out that Germany’s push west into the Low Countries and France started on May 10, 1940. Based on the expressed concerns that Germany might be actively pursuing the technology, it’s possible that the discouraging tone was motivated by something other than a desire to tell the complete truth.

It seems quite apparent that if the fission chain reaction had been discovered just a few years earlier or later, nuclear energy history would not have been defined by explosives—but by steady, controllable, non-coal power produced in simple piles designed to turn fission heat into useful power, much as coal combustion heat is turned into useful power today.

U-235

____________________

Adams

Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.