
Motives for pushing a no-threshold dose radiation risk model (LNT) in 1955-56

By Rod Adams

Dr. Edward Calabrese recently published a paper titled The Genetics Panel of the NAS BEAR I Committee (1956): epistolary evidence suggests self‐interest may have prompted an exaggeration of radiation risks that led to the adoption of the LNT cancer risk assessment model.

Abstract: This paper extends a series of historical papers which demonstrated that the linear-no-threshold (LNT) model for cancer risk assessment was founded on ideological-based scientific deceptions by key radiation genetics leaders. Based on an assessment of recently uncovered personal correspondence, it is shown that some members of the United States (US) National Academy of Sciences (NAS) Biological effects of Atomic Radiation I (BEAR I) Genetics Panel were motivated by self-interest to exaggerate risks to promote their science and personal/professional agenda. Such activities have profound implications for public policy and may have had a significant impact on the adoption of the LNT model for cancer risk assessment.

This new work was inspired when Calabrese found a 2007 history of science dissertation by Michael W. Seltzer titled The technological infrastructure of science. One facet of the dissertation is its explanation of how self-interest can create biases that affect scientific conclusions, policy setting, and public communications. Identical measurements and observations can be used to support dramatically different reports depending on what the scientists are attempting to accomplish.

That is especially true when there is difficulty at the margins of measurement where it is not easy to discern “signal” from “noise.” The risk of agenda-driven conclusions has become greater as the scientific profession has expanded far beyond the sporadically funded idealists motivated by a pure search for knowledge, and into an occupation that provides “good jobs” with career progression, regular travel opportunities, political influence, and good salaries.

On the other hand, their efforts on the committee illustrate one component of the technological infrastructure of genetics outside of the laboratory: the increasing significance of large-scale laboratories, federal funding agencies, policy-making committees, and government regulatory bodies as critical components of the technological infrastructure of science. Clearly, how the science of genetics was to advance into the future would have much to do with traditionally non-epistemic factors, in addition to epistemic ones.

Finally, in considering all these themes together, it is difficult to conclude that there is any sharp separation between the practice of science and the practice of politics (in the Foucauldian sense of power/knowledge). Rouse’s view of the intra-twining of epistemology and power, his view of epistemic politics, is pertinent here. The practice of science was at times the playing of politically epistemic games, whether at the level of argumentation in the contestable theoretical disputes of population genetics, at the level of science policy-making, as with the various organizations and committees responding to the scientific and political controversies surrounding the efforts to establish exposure guidelines in the light of concerns over fallout from atomic testing, or with the planning of the future infrastructure of experimentation based on funding opportunities.

(Seltzer 2007, p. 307–308)

Admittedly, the quote above uses jargon from the historians' field, but my translation is that Seltzer found ample evidence to support the assertion that the majority of geneticists on the BEAR I Genetics Panel were more concerned with fitting into a political narrative than with answering the questions they were ostensibly assembled to answer. Their tasking was to provide political decision-makers with scientifically supportable answers about the genetic effects of the radiation exposure that might be expected as a result of atomic weapons testing. However, they decided to complete a different task.

Some members of the committee had an agenda: to promote the zero-threshold dose response conclusion desired by politically active members of the scientific community. They knew that answer—whether or not it was the truth—would assist their scientific colleagues in their efforts to raise concerns about fallout to a fever pitch. Fallout fear was their agreed-upon lever for gaining public support for their efforts to halt nuclear weapons testing.

Other members of the committee were more concerned about obtaining financial support for a long-term research program in general genetics research. That desired research program could be only tangentially related to determining the effect of the tiny, but chronic and largely unavoidable, radiation exposures delivered to human populations by highly dispersed fallout from atmospheric weapons testing.

(Warning: If you are interested in the history of how the no-threshold dose assumption was imposed and you are pressed for time, please do not download Seltzer’s paper and begin reading it. It is full of intriguing information, but it is 450 pages long including footnotes. The section on radiation health effects controversies is 112 pages long.)

Here is a quote from Calabrese’s paper that does an excellent job of summarizing the important take-aways from Seltzer’s historical research for people who are mainly interested in encouraging a new look at radiation protection assumptions and regulations:

Seltzer provided evidence that members of the Genetics Panel clearly saw their role in the NAS BEAR I committee to be a vehicle to advocate and/or lobby for funding for radiation genetics (p. 285 footnote 208). Moreover, it was hoped that the committee, which would exist continuously over many years, would influence the direction and priorities for future research funding. According to Seltzer (2007), such hoped for funding possibilities for radiation geneticists can be seen in letter correspondence between Beadle, Dobzhansky, Muller and Demerec.

Demerec responded by saying that “I, myself, have a hard time keeping a straight face when there is talk about genetic deaths and the tremendous dangers of irradiation. I know that a number of very prominent geneticists, and people whose opinions you value highly, agree with me” (Demerec to Dobzhansky 1957). Dobzhansky to Demerec (1957b) responded by saying “let us be honest with ourselves—we are both interested in genetics research, and for the sake of it, we are willing to stretch a point when necessary. But let us not stretch it to the breaking point! Overstatements are sometimes dangerous since they result in their opposites when they approach the levels of absurdity.

Now, the business of genetic effects of atomic energy has produced a public scare, and a consequent interest in and recognition of (the) importance of genetics. This is to the good, since it will make some people read up on genetics who would not have done so otherwise, and it may lead to the powers-that-be giving money for genetic research which they would not give otherwise.” (Dobzhansky to Demerec 1957b)

Calabrese goes on to tie this newly uncovered history-of-science work to several other papers that he has recently published regarding his own excavation work digging through the collected papers of major players in the drama associated with using fears of radiation to slow and then stop nuclear weapons testing.

In retrospect, therefore, a historical assessment of the LNT reflects the so-called “perfect toxicological storm”: Muller receiving the Nobel Prize within 1.5 years after the atomic bomb blasts in Japan, the deliberate deceptions of Muller on the LNT during his Nobel Prize lecture (Calabrese 2011a, 2012), the series of stealth-like manuscript manipulations and deceptions by Stern to generate scientific support for the LNT and to prevent Muller’s Nobel lecture deceptions from being discovered (Calabrese 2011b), the series of subsequent false written statements by Muller to support Stern’s papers and to protect his own reputation (Calabrese 2013), the misdirection and manipulation of the NAS Genetics Panel by the actions of Muller and Stern (Calabrese 2013), and now evidence of subversive self-interest within the membership of the Genetics Panel to exaggerate risk for personal gain. This series of Muller/Stern-directed actions inflamed societal fear of ionizing radiation following the bombings of Japan and during the extreme tensions of the cold war with its concomitant environmental contamination with radionuclides from atmospheric testing of nuclear weapons, and led to the acceptance of the LNT model for cancer risk assessment by a human population that had become extremely fearful of radiation, even at very low doses.

(Calabrese 2014 p. 3)

Though the scientist-led antinuclear weapons movement saw fear of fallout as one way of inciting public action to limit atmospheric weapons testing and its uncontrolled releases, other people might have had less admirable motives. There are many solid financial reasons to encourage people to fear all sources of ionizing radiation, especially the doses that members of the public could possibly receive from nuclear energy production.

After all, even in the 1950s, the fossil fuel industry was one of the largest and most important businesses in the world and was the source of a number of enormous fortunes. The industry has always been interested in avoiding the unprofitably low prices that result when there are more energy options and when the total supply of available energy is greater than the immediate need.

When I spoke to Dr. Calabrese for Atomic Show #218, he indicated that he had not done much to find out where the BEAR I committee members thought they would be obtaining the funds that might be made available if they exaggerated the dangers of low dose radiation. Modern scientists often assume that basic scientific research funding comes from a government agency, but that is something that developed gradually after World War II. Before then, nearly all funding for science came from private sources.

A 1987 biography of Warren Weaver published by the National Academy of Sciences described the genesis of the NAS study of radiation that started in 1955.

Paraphrasing the description on pages 506–507, in the United States one of the largest basic science funders was the Rockefeller Foundation. In 1954, there were numerous articles in the press indicating that the public was confused about the effects of radiation. At a Rockefeller Foundation board meeting, attendees asked Detlev W. Bronk, who was both a Rockefeller Foundation board member and the NAS president, if there was a way to produce some definitive answers.

The NAS proposed forming six committees to investigate various aspects of the issue and the Rockefeller Foundation agreed to provide the necessary funds to produce the reports. Warren Weaver served as the chairman of the Genetics Committee for the first BEAR reports. Of the other members of the committee, at least four (George W. Beadle, M. Demerec, H. J. Muller, and A. H. Sturtevant) had been recipients of Rockefeller Foundation grants before 1956 and several continued receiving substantial grants well after their work on the committee.

The NAS biography described Weaver’s successful committee chairmanship:

The first committee was chaired by Weaver, who successfully mediated the opposing positions of the two groups of geneticists who were members of the committee and prepared a report that had their unanimous support. After the first summary report was published in 1956, there was virtual editorial unanimity in the nation’s newspapers that the “report should be read in its entirety to be appreciated” and that it deserved the close attention of all concerned citizens.

(pp. 506–507)

In the June 13, 1956, edition of the New York Times, the news of the committee’s report occupied the entire far right column of the front page from top to bottom. Here is the top portion of the article:

Peril to future of man

Below those scary, attention-grabbing phrases, the article’s lead was designed to shock and raise serious concerns:

Washington, June 12 — A committee of outstanding scientists reported today that atomic radiation, no matter how small the dose, harms not only the person receiving it but also all of his descendents [sic].

The article continued:

The six committees studied the radiation problem in the fields of genetics, pathology, meteorology, oceanography and fisheries, agriculture and food supplies, and disposal and dispersal of radioactive wastes.

Overshadowing all others because of its implication for mankind was the report of the genetics panel. This was headed by Dr. Warren Weaver of the Rockefeller Foundation. It was this foundation that provided the funds for the year-long survey.

It is important to understand that the primary data that the genetics committee had available to review were from experiments using X-rays on fruit flies, most of which were conducted by foundation grantees and members of the committee.

It is also worth noting that Warren Weaver served as director for Natural Sciences for the Rockefeller Foundation from 1932–1959. During that period the program that he directed provided more than $90 million in grants for experimental biology (NAS biography, p. 504). He had a distinguished career, received many awards, and had a major influence in selecting the science that was funded for molecular biology, radiation health effects, and genetics.

Weaver was a mathematician by education with a lifelong interest in statistics and Lewis Carroll’s Alice in Wonderland. According to his obituary, he had the world’s largest collection of various editions of the book. Upon his death, the collection was given to the University of Texas.

The Rockefeller Foundation was, and remains, interested in maintaining the dominance of oil and natural gas in our energy supply system. Those fuels were the source of the largess that the foundation has been able to give for more than 100 years.


Note: A version of this article appeared on Atomic Insights on July 19, 2014, under the headline Selfish motives for LNT assumption by geneticists on NAS BEAR I. At the time, I was not aware that the Rockefeller Foundation provided grants supporting all of the Biological Effects of Atomic Radiation committees from 1955 to 1962.




Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

Proposed Revisions to Nuclear Plant Release/Public Exposure Regulations: ANS Response to EPA

By Jim Hopf

DC Perspectives

In January, the U.S. Environmental Protection Agency issued an Advanced Notice of Proposed Rulemaking (ANPR) concerning 40 CFR 190—the regulations that govern public exposure and releases of radioactive materials resulting from normal nuclear power plant operations (they do not pertain to nuclear accidents). The public comment period for the proposed rulemaking ended on August 3.

On August 1, the American Nuclear Society submitted a formal comment to the EPA. I also submitted a comment, personally.


In the ANPR, the EPA did not make any proposed changes to the regulations. Instead, the ANPR was a proactive solicitation of public input. The EPA asked if 40 CFR 190, which was issued in 1977, should be revised or updated. It also asked for public input on six specific issues or questions:

  1. Should the 40 CFR 190 public exposure limits be expressed in terms of (individual) dose or health risk?
  2. If dose limits are used, should the dose calculation methodologies be updated, and if so how?
  3. Should release limits for specific isotopes be retained (in addition to public dose limits) and should release limits be applied industry-wide or to individual facilities?
  4. Should a separate groundwater standard be added?
  5. Should specific rules pertaining to spent fuel and waste storage be added?
  6. Should revised or new standards be added to address new or emerging technologies (such as new reactor types or fuel cycle technologies)?

Details about the ANPR in general can be found in the EPA notice. More details about the six issues that the EPA sought public comment on can be found in this EPA slide presentation. Also, more information can be found in a July 15 ANS Cafe post by Rod Adams on the EPA ANPR.

ANS response

ANS submitted a response to the ANPR in an August 1 letter. ANS made some general comments, as well as specific comments on each of the six issues listed above. ANS’s responses are summarized below:


ANS stated that the EPA should move forward with a comprehensive rewrite of 40 CFR 190, due to the substantial advances that have occurred since 1977 in the understanding of the health effects of ionizing radiation, particularly in the area of low-level exposure.

ANS also stated that other things have changed since 1977 with respect to the overall environmental and health context that applies to radiation standards. Public doses from air travel and medical procedures have increased dramatically since then (with medical procedures alone increasing the average public exposure to ionizing radiation by 200 mrem/year), and no detectable public health impacts have resulted from that increase in exposure. Also, as the negative public health and environmental impacts from fossil-fueled power generation have become clearer, there is more of a consensus that nuclear power has significant environmental benefits that may offset any negative impacts from public radiation exposures.

ANS also stated that while 40 CFR 190 specifically applies to the nuclear power industry, the risk modeling methodologies that form the bases of any requirements or limits should be consistent with those used to regulate other (non-nuclear-industry) sources of public radiation exposure.

Issue 1

ANS stated that an individual, total effective dose limit should be applied, as opposed to any kind of health risk limit.

Issue 2

ANS stated that dosimetry methodologies should be based on “effective dose” and urged the EPA to use standards and methodologies that are consistent with other agencies, such as the U.S. Nuclear Regulatory Commission. ANS also suggested using the effective dose definition used in ICRP Publication 103 (in its response to Issue 1), that document being one of the methodologies suggested by the EPA in its Issue 2 question.

Issue 3

ANS strongly recommended that the EPA revise 40 CFR 190 to discard any radionuclide release limits, as they are “duplicative, unnecessary and inconsistent with international practice.” ANS stated that limits on overall individual dose are sufficient to protect public health.

The reason for the radionuclide release limits currently in 40 CFR 190 was that in 1977, large-scale reprocessing was anticipated and there were concerns about long-term buildup (in the environment) from routine radionuclide releases from reprocessing facilities. This issue is far less significant now, given that the United States has not pursued reprocessing. The limits were also based on an extreme application of the linear no-threshold (LNT) theory, with very small doses to very large populations being used to predict significant health impact—something that is now considered questionable scientific practice by most experts.

Issue 4

ANS argued against having any separate regulations or dose criteria for specific public exposure pathways, such as a separate groundwater standard. Instead, limiting total effective dose to an individual, from all pathways, is the best approach for protecting public health.

Issue 5

ANS stated that there should be no specific EPA regulations related to storage of spent fuel and other forms of radioactive waste. Spent fuel and waste storage operations are already rigorously regulated and monitored by the NRC, making EPA involvement unnecessary. Any releases into the environment from storage operations would be covered by limits on overall public exposure (from all nuclear plant operations).

Issue 6

With respect to potential new reactor and/or fuel cycle technologies, ANS reiterated its position that a limit on overall exposure (total effective dose) for individual members of the public is the most rational and effective approach for protecting public health. After all, any health impacts will be a function of dose, regardless of the source of that dose. It is clear that any limits on public exposure should be technology-neutral.

My own response

I submitted my own response to the EPA ANPR. My response concurred with ANS positions, and made many of the same points, with a few exceptions.

It is clear that any limits should be on public exposure (dose), and regulations should not distinguish between specific isotopes, pathways, or technologies. While there may be some disagreement over the health risk from a given amount of radiation exposure (rem), there is almost complete agreement that any health impacts from radiation are solely a function of dose (in the case of long-term exposure, at least). The science of dose determination is very well-developed, with the radiological and biological half-lives, and the chemical/biological behavior of various isotopes within the body, being fully accounted for in dose calculations. Dose is dose.

Therefore, it is clear that it is dose, and only dose, that should be controlled. To support the determination of any isotope-specific release limits, the EPA would have to do extensive pathway calculations to equate a given release (of a given isotope) with some predicted dose to a member of the public. That would be duplicative, as plant operators are already required to perform extensive environmental monitoring around the plant sites. This is necessary to determine public doses to comply with EPA and NRC public dose limits. Also, how would any EPA analyses account for differences between various sites (whereas plant operator monitoring and dose calculations are already site-specific)? Limiting dose, as opposed to releases of specific isotopes, maximizes flexibility and places the focus where it should be, i.e., on controlling the maximum overall exposure to members of the public.

As for long-term environmental buildup being a justification for isotope-specific release limits, it seems to me that this problem would be a uniquely small one for the nuclear industry, given the fact that radionuclides decay away (with most of the significant isotopes having relatively short half-lives). Meanwhile, other industries, whose pollutants often do not decay away at all, don’t seem to be asked the same questions (mercury from coal plant emissions being one possible example). Instead, the focus seems to be based solely on immediate (present day) health impacts from their pollution, as determined by various epidemiological studies. In the context of Issue 3, I asked the EPA why this question is seemingly only being asked of the nuclear industry.

Where I differed from ANS

While I agree that any regulations should be based on dose, I didn’t entirely agree with ANS’s position that limits should be placed on individual dose (to some most-exposed member of the public). To be fair, the EPA essentially asked responders to choose between a limit on individual dose or a limit on allowable individual health risk. Given that choice, I would pick a limit on dose, as did ANS. However, I also recommended different, even better, bases for regulations, which were not suggested by the EPA.

Limits on collective dose

Many nuclear professionals believe that repudiating the LNT theory (on low-level exposure health effects) would be key to rationalizing dose (or release) regulations. I’ve often argued that all we need to do is point out that LNT is being selectively applied (to the nuclear power/weapons industry only).

Current public individual dose limits are determined by using LNT to argue that there is some health risk even at very low doses, and then applying an absurdly low limit on allowable health risk (e.g., a 10⁻⁴ or 10⁻⁶ lifetime cancer risk). This process results in very low limits on individual exposure that are applied only to nuclear industry–related exposures. Much larger doses from other sources, such as natural background, radon, medical procedures, and air travel, are simply ignored (not regulated).
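The derivation just described—pick an allowable lifetime risk, assume LNT, and back out a dose limit—is simple enough to reproduce. In this sketch, the risk cap and exposure period are hypothetical illustration values, and the risk coefficient is the one-death-per-~2,500-man-rem LNT estimate cited later in this article; none of these are the regulatory values actually used by the EPA:

```python
# Illustrative sketch: deriving an individual dose limit from LNT plus
# a cap on allowable lifetime cancer risk. All inputs are assumptions
# chosen for illustration, not regulatory values.

LNT_RISK_PER_REM = 1 / 2500       # lifetime fatal-cancer risk per rem (assumed LNT slope)
ALLOWED_LIFETIME_RISK = 1e-4      # hypothetical 1-in-10,000 lifetime risk cap
LIFETIME_YEARS = 70               # assumed exposure period

# Risk cap divided by the LNT slope gives an allowable lifetime dose
lifetime_dose_limit_rem = ALLOWED_LIFETIME_RISK / LNT_RISK_PER_REM

# Spread over a lifetime to get an annual limit, converted to mrem
annual_limit_mrem = lifetime_dose_limit_rem * 1000 / LIFETIME_YEARS

print(f"lifetime dose limit: {lifetime_dose_limit_rem:.2f} rem")
print(f"annual dose limit: ~{annual_limit_mrem:.1f} mrem/year")
```

With these assumed inputs, the cap works out to a few mrem per year—a small fraction of natural background, which is exactly the disproportion the paragraph above describes.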

The problem with this “logic” is that if you assume LNT, and that the dose response is truly linear all the way down to zero, it then follows (purely mathematically) that total health impact (i.e., cancers or deaths) scales directly with collective exposure, in man-Rem. As I argued to the EPA, the concept of limiting maximum individual risk is not even meaningful. At the end of the day, you either die (from radiation-induced disease) or you don’t, and the number of deaths (which is what you’re really trying to avoid) scales directly with collective exposure (man-Rem). Thus, it is hard to justify placing limits on exposure to a (most exposed) individual, as opposed to limiting overall collective public exposure. The only downside to limits on collective exposure is that it may be somewhat harder to determine (or estimate) than maximum individual exposures.
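The "purely mathematical" point can be shown in a few lines: under a strictly linear, no-threshold model, expected deaths depend only on collective dose, not on how that dose is distributed across people. The two scenarios below are hypothetical, and the risk coefficient is the one-death-per-~2,500-man-rem LNT estimate cited in this article:

```python
# Sketch of the mathematical consequence of LNT argued above: expected
# deaths are a function of collective dose (man-rem) alone, regardless
# of how the dose is spread across individuals. Scenarios are invented
# for illustration.

LNT_DEATHS_PER_MAN_REM = 1 / 2500  # assumed LNT slope (cited in the text)

def expected_deaths(individual_doses_rem):
    """Expected deaths under LNT for a list of individual doses in rem."""
    collective_dose = sum(individual_doses_rem)  # man-rem
    return collective_dose * LNT_DEATHS_PER_MAN_REM

# Same collective dose (1,000 man-rem), very different distributions:
concentrated = [1.0] * 1_000        # 1,000 people at 1 rem each
dispersed = [0.001] * 1_000_000     # 1 million people at 1 mrem each

print(round(expected_deaths(concentrated), 6))  # 0.4
print(round(expected_deaths(dispersed), 6))     # 0.4
```

A maximum-individual-dose limit treats these two scenarios very differently, even though LNT predicts identical total harm—which is the inconsistency argued above.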

Limits on individual exposure, as opposed to collective exposure, work against nuclear, since any pollution that nuclear plants release (under normal operations or in an accident) tends to stay localized, whereas many forms of pollution from many other industries drift far and wide. I believe that this is one reason why nuclear plant limits are a small fraction of natural background (far too small to have any measurable public health impact) while fossil fuel generators are still allowed to cause ~10,000 deaths in the United States annually (according to the EPA itself).

Also, the other sources of radiation exposure I listed earlier affect most or all the U.S. population, whereas any nuclear plant releases would affect only a handful of local residents. This results in differences in collective exposure that are even more vast than the differences in individual exposure (between nuclear power sources and other sources). The collective exposure that U.S. residents get annually from radon is far larger than the total public collective exposure that will result from the Fukushima accident, yet nothing is done about it. Such exposures are unregulated. Public exposures from U.S. nuclear plants, under normal operation, are about a million times smaller than the public exposures from these other, unregulated sources.

Based on the above reasoning, I asked the EPA to consider limiting collective public exposure from U.S. nuclear plants, as opposed to limiting the exposure to a maximally exposed individual. I also asked the EPA to put any proposed limits on collective exposure in the context of the collective exposures the U.S. public gets from other sources. I essentially asked how they could apply strict controls limiting nuclear operations to tiny public collective exposures while completely ignoring other sources of collective exposure that are a million times larger.

Cost-benefit analysis

The EPA currently performs cost-benefit analyses to justify most of its proposed regulations in most industries. In fact, the EPA even uses a published dollars-per-life-saved figure of ~$10 million per life as the basis for its regulations. This makes sense (to me) as the basis for any regulations, as one shouldn’t arbitrarily apply limits on doses, or health risks, regardless of the cost. Such policies allow society’s limited public health and safety resources to be applied where they will have the most impact.

Thus, I suggested to the EPA that they go one step further than limiting collective public exposures (man-Rem). I suggested that the best policy of all would be to establish a criterion for how much plant operators should have to spend per public man-Rem avoided. This would be similar to industry ALARA (As Low As Reasonably Achievable) policies currently in place for limiting exposures to plant personnel. If the EPA does not want to leave it up to operators to perform such cost estimates, then, at a minimum, the EPA should keep the $10-million-per-life-saved criterion in mind when determining limits on public collective exposures from plant operations. $10 million per life saved corresponds to a spending requirement of ~$4,000 per man-Rem avoided (based on current LNT estimates of one death per ~2,500 man-Rem). The EPA could consider industry input when determining what limits on public collective exposure would correspond to a cost of ~$4,000 per man-Rem.
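The arithmetic behind the ~$4,000-per-man-rem figure above is a one-line division; the helper function is a hypothetical illustration of how such a spending criterion might be applied, not an actual EPA or ALARA procedure:

```python
# Arithmetic from the text: ~$10 million per statistical life saved,
# combined with the LNT estimate of one death per ~2,500 man-rem,
# yields the spending criterion per man-rem avoided.

DOLLARS_PER_LIFE_SAVED = 10_000_000
MAN_REM_PER_DEATH = 2500  # LNT estimate cited in the text

dollars_per_man_rem = DOLLARS_PER_LIFE_SAVED / MAN_REM_PER_DEATH
print(dollars_per_man_rem)  # 4000.0

def worth_requiring(cost_dollars, man_rem_avoided):
    """Hypothetical test: is a mitigation measure cost-effective under
    the ~$4,000-per-man-rem criterion?"""
    return cost_dollars / man_rem_avoided <= dollars_per_man_rem

# A $1M measure avoiding 500 man-rem costs $2,000/man-rem: justified.
# The same measure avoiding only 100 man-rem costs $10,000/man-rem: not.
print(worth_requiring(1_000_000, 500))  # True
print(worth_requiring(1_000_000, 100))  # False
```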

I also (again) asked the EPA why nothing at all is being spent to reduce all the other, vastly larger sources of public collective exposure, and inquired about what other practices it should mandate (e.g., radon abatement) that could be performed for $4,000/man-Rem or less.

Distinction between different sources of exposure

ANS alluded to how nuclear industry–related sources of public exposure are treated differently than non-nuclear industry sources when it said that “the risk modeling methodologies that underlie them must be consistent with those used in EPA’s regulatory involvement (or lack thereof) pertaining to all other pathways of public exposure to ionizing radiation.”

I was more direct. I stated that “with the possible exception of medical exposures (that have an offsetting health benefit), all public exposures should be treated equally by regulations, regardless of source.”

It is indefensible to arbitrarily apply strict regulations to some sources of public exposure while ignoring much larger sources of public (collective) exposure. Given this fact, dose limits that are a small fraction of natural background (which ranges up to ~1,000 mrem/year in many places) are hard to justify. When considering collective (as opposed to maximum individual) exposures, strict limits on localized exposures in the vicinity of a nuclear plant are even harder to justify.

Although it is outside the scope of 40 CFR 190, this argument is even more important with respect to setting exposure limits in the event of nuclear accidents. Given the relatively small number of affected people (on the order of 100,000, based on the Fukushima experience), the assumption of LNT should allow individual exposure limits of several Rem/year, as that would still result in overall collective exposures that are smaller than those received routinely by the overall population. Expensive cleanup operations (e.g., to get doses down to 100 mrem/year, as Japan is considering) are hard to justify, given that far larger reductions in overall public collective exposure could be achieved at far lower cost in other areas (such as radon abatement or reducing unnecessary medical exposures).





Jim Hopf is a senior nuclear engineer with more than 20 years of experience in shielding and criticality analysis and design for spent fuel dry storage and transportation systems. He has been involved in nuclear advocacy for 10+ years, and is a member of the ANS Public Information Committee. He is a regular contributor to the ANS Nuclear Cafe.

Nuclear professionals: Establish standing now to improve operational radiation limits

By Rod Adams

On August 3, 2014, the window will close on a rare opportunity to use the political process to strongly support the use of science to establish radiation protection regulations. Though it is not terribly difficult for existing light water reactors and fuel cycle facilities to meet the existing limits from 40 CFR 190 regarding doses to the general public and annual release rate limits for specific isotopes, there is no scientific basis for the current limits. If they are maintained, they would hinder the deployment of many potentially valuable technologies that could help humanity achieve a growing level of prosperity while substantially reducing air pollution and persistent greenhouse gases like CO2.

In January 2014, the U.S. Environmental Protection Agency issued an Advanced Notice of Proposed Rulemaking (ANPR) to solicit comments from the general public and affected stakeholders about 40 CFR 190, Environmental Radiation Protection Standards for Nuclear Power Operations.

The ANPR page has links to summary webinars provided to the public during the spring of 2014, including presentation slides, presentation audio, and questions and answers. This is an important opportunity for members of the public, nuclear energy professionals, nuclear technical societies, and companies involved in various aspects of the nuclear fuel cycle to provide comments about the current regulations and recommendations for improvements. Providing comments now, in the information-gathering phase of a potential rulemaking process, is a critical component of establishing standing to continue participating in the process.

It also avoids a situation where an onerous rule could be issued and enforced under the regulator’s principle that “we provided an opportunity for comment, but no one complained then.”

The existing version of 40 CFR 190—issued on January 13, 1977, during the last week of the Gerald Ford administration—established a limit of 0.25 mSv/year whole body dose and 0.75 mSv/year to the thyroid for any member of the general public from radiation coming from any part of the nuclear fuel cycle, with the exception of uranium mining and long-term waste disposal. Those two activities are covered under different regulations. Naturally occurring radioactive material is not covered by 40 CFR 190, nor are exposures from medical procedures.

40 CFR 190 also specifies annual emissions limits for the entire fuel cycle for three specific radionuclides for each gigawatt-year of nuclear generated electricity: krypton-85 (50,000 curies), iodine-129 (5 millicuries), and Pu-239 and other alpha emitters with longer than one year half-life (0.5 millicuries).

It is important to clarify the way that the U.S. federal government assigns responsibilities for radiation protection standards. The Nuclear Regulatory Commission has the responsibility for regulating individual facilities and for establishing radiation protection standards for workers, but the EPA has a role and an office of radiation protection as well.

The Atomic Energy Act of 1954 initially assigned all regulation relating to nuclear energy and radiation to the Atomic Energy Commission (AEC). However, as part of the President’s Reorganization Plan No. 3 of October 1970, President Nixon transferred responsibility for establishing generally applicable environmental radiation protection standards from the AEC to the newly formed EPA:

…to the extent that such functions of the Commission consist of establishing generally applicable environmental standards for the protection of the general environment from radioactive material. As used herein, standards mean limits on radiation exposures or levels or concentrations or quantities of radioactive material, in the general environment outside the boundaries of locations under the control of persons possessing or using radioactive material.

(Final Environmental Impact Statement, Environmental Radiation Protection Requirements for Normal Operations of Activities in the Uranium Fuel Cycle, p. 18.)

Before the transfer of environmental radiation responsibilities from the AEC to the EPA, and until the EPA issued the new rule in 1977, the annual radiation dose limit for a member of the general public from nuclear fuel cycle operations was 5 mSv—20 times higher than the EPA’s limit.

The AEC had conservatively assigned a limit of 1/10th of the 50 mSv/year applied to occupational radiation workers, which it had, in turn, conservatively chosen to provide a high level of worker protection from the potential negative health effects of atomic radiation.

The AEC’s occupational limit of 50 mSv was less than 1/10th of the previously applied “tolerance dose” of 2 mSv/day, which worked out to an annual limit of approximately 700 mSv/year. That daily limit recognized the observed effect that damage resulting from radiation doses was routinely repaired by normal physiological healing mechanisms.

Aside: After more than 100 years of human experience working with radiation and radioactive materials, there is still no data that prove negative health effects for people whose exposures have been maintained within the above tolerance dose, initially established for radiology workers in 1934. End Aside.

From the 1934 tolerance dose to the EPA limit specified in 1977 (and still in effect), requirements were tightened by a factor of 2800. The claimed basis for that large conservatism was a lack of data at low doses, leading to uncertainty about radiation health effects on humans. Based on reports from the National Academy of Sciences subcommittee on the Biological Effect of Ionizing Radiation (BEIR), the EPA rule writers simply assumed that every dose of radiation was hazardous to human health.
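The factor-of-2800 arithmetic can be traced step by step; a quick sketch using the limits quoted above (the article rounds 2 mSv/day to roughly 700 mSv/year):

```python
# Tracing the tightening of public dose limits from 1934 to 1977.
tolerance_msv_day = 2                         # 1934 "tolerance dose" for radiology workers
tolerance_msv_year = tolerance_msv_day * 365  # 730 mSv/year; the article rounds to ~700
occupational_msv = 50                         # AEC occupational limit
public_msv = 5                                # AEC public limit, 1/10 of occupational
epa_msv = 0.25                                # 1977 EPA limit in 40 CFR 190

print(tolerance_msv_year / epa_msv)  # prints 2920.0, i.e. roughly the factor of 2800 cited
```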

The EPA used that assumption to justify setting limits that were quite low, but could be met by the existing technology if it was maintained in a like-new condition for its entire operating life. Since the rule writers assumed that they were establishing a standard that would protect the public from an actual harm, they did not worry about the amount of effort that would be expended in surveys and monitoring to prove compliance. As gleaned from the public webinar questions and answers, EPA representatives do not even ask about compliance costs, because they are only given the responsibility of establishing the general rule; the NRC is responsible for inspections and monitoring enforcement of the standard.

The primary measured human health effects used by the BEIR committee in formulating their regulatory recommendations were determined based on epidemiological studies of atomic bomb survivors. That unique population was exposed to almost instantaneous doses greater than 100 mSv. Based on their interpretation of data from the Life Span Study of atomic bomb victims, which supported a linear relationship between dose and effect in the dose regions available, the BEIR committee recommended a conservative assumption that the linear relationship continued all the way down to a zero dose, zero effect origin.

For the radionuclide emissions limits, the EPA chose numbers that stretch the linear no-threshold dose assumption by applying it to extremely small doses spread to a very large population.

The Kr-85 standard is illustrative of this stretching. It took several hours of digging through the 240-page final environmental impact statement and the nearly 400-page collection of comments and responses to determine exactly what dose the EPA was seeking to limit decades ago, and how much it thought the industry should spend to achieve that protection.

The EPA determined that allowing the industry to continue its then-established practice of venting Kr-85 and allowing that inert gas to disperse posed an unacceptable risk to the world’s population.

It calculated that if no effort was made to contain Kr-85, and the U.S. industry grew to a projected 1000 GW of electricity production by 2000, an industry with full recycling would release enough radioactive Kr-85 gas to cause about 100 cases of cancer each year.

The EPA’s calculation was based on a world population of 5 billion people exposed to an average of 0.0004 mSv/year per individual.
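Those numbers reproduce the roughly 100 cancers/year figure only when an LNT risk coefficient is applied to the summed collective dose. A sketch, assuming a fatal-cancer coefficient of 5% per person-sievert (an assumption consistent with later ICRP practice, not a value quoted in the 1977 documents):

```python
# Reproducing the EPA's Kr-85 collective-dose arithmetic under LNT.
world_population = 5_000_000_000
avg_dose_sv = 0.0004 / 1000          # 0.0004 mSv/year per person, converted to Sv

collective_dose = world_population * avg_dose_sv   # 2000 person-Sv/year
risk_per_person_sv = 0.05                          # assumed LNT fatal-cancer coefficient

cancers_per_year = collective_dose * risk_per_person_sv
print(cancers_per_year)  # prints 100.0, matching the ~100 cases/year in the text
```

The point of the example is how sensitive the result is to summing vanishingly small individual doses over billions of people, a practice the NCRP and ICRP now discourage.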

At the time that this analysis was performed, the Barnwell nuclear fuel reprocessing facility was under construction and nearly complete. It had not been designed to contain Kr-85. The facility owners provided an estimate to the EPA that retrofitting a cryogenic capture and storage capability for Kr-85 would cost $44.6 million.

The EPA finessed this exceedingly large cost for tiny assumed benefit by saying that the estimated cost for the Barnwell facility was not representative of what it would cost other facilities that were designed to optimize the cost of Kr-85 capture. It based that assertion on the fact that Exxon Nuclear Fuels was in a conceptual design phase for a reprocessing facility and had determined that it might be able to include Kr-85 capture for less than half of the Barnwell estimate.

GE, the company that built the Midwest Fuel Recovery Plant in Morris, Illinois, provided several comments to the EPA, including one about the low cost-benefit ratio of attempting to impose controls on Kr-85:

Comment: The model used to determine the total population dose should have a cutoff point (generally considered to be less than 0.01 mSv/year) below which the radiation dose to individuals is small enough to be ignored.

In particular, holdup of krypton-85 is not justified since the average total body dose rate by the year 2000 is expected to be only 0.0004 mSv/year.

Response: Radiation doses caused by man’s activities are additive to the natural radiation background of about 0.8-1.0 mSv/year [note: the generally accepted range of background radiation in the mid 1970s, as indicated by other parts of the documents, was 0.6-3.0 mSv/yr] whole-body dose to which everyone is exposed. It is extremely unlikely that there is an abrupt discontinuity in the dose-effect relationship, whatever its shape or slope, at the dose level represented by the natural background that would be required to justify a conclusion that some small additional radiation dose caused by man’s activities can be considered harmless and may be reasonably ignored.

For this reason, it is appropriate to sum small doses delivered to large population groups to determine the integrated population dose. The integrated population dose may then be used to calculate potential health effects to assist in making judgements on the risk resulting from radioactive effluent releases from uranium fuel cycle facilities, and the reasonableness of costs that would be incurred to mitigate this risk.

Existing Kr-85 rules are thus based on collective doses and a calculation of risk that is now specifically discouraged by both national (NCRP) and international (ICRP) radiation protection bodies. They are also based on the assumption of a full-recycle fuel system and 10 times as much nuclear power generating capacity as exists in the United States today.

Since the level specified is applied to the entire nuclear fuel cycle industry in the United States, the 40 CFR 190 ANPR asks the public to comment about the implications of attempting to apply limits to individual facilities. This portion of the discussion is important for molten salt reactor technology that does not include fuel cladding to seal fission product gases, and for fuel cycles that envision on-site recycling using a technology like pyroprocessing instead of transporting used fuel to a centralized facility for recycling.

There are many more facets of the existing rule that are worthy of comment, but one more worth particular attention is the concluding paragraph from the underlying policy for radiation protection, which is found on the last page of the final environmental impact statement:

The linear hypothesis by itself precludes the development of acceptable levels of risk based solely on health considerations. Therefore, in establishing radiation protection positions, the Agency will weigh not only the health impact, but also social, economic, and other considerations associated with the activities addressed.

In 1977, there was no consideration given to the fact that any power that was not generated using a uranium or thorium fuel cycle had a good chance of being generated by a power source producing a much higher level of carbon dioxide. In fact, the EPA in 1977 had not even begun to consider that CO2 was a problem. That “other consideration” must now play a role in any future decision-making about radiation limits or emission limits for radioactive noble gases.

If EPA bureaucrats are constrained to use the recommendations of a duly constituted body of scientists as the basis for writing its regulations, the least they could do before rewriting the rules is to ask the scientific community to determine if the linear no-threshold (LNT) dose response model is still valid. The last BEIR committee report is now close to 10 years old. The studies on which it was based were conducted during an era in which it was nearly impossible to conduct detailed studies of DNA, but that limitation has now been overcome by advances in biotechnology. There is also a well-developed community of specialists in dose response studies that have produced a growing body of evidence supporting the conclusion that the LNT is not “conservative”—it is simply incorrect.

Note: Dose rates from the original documents have been converted into SI units.




Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

Accepting the Science of Biological Effects of Low Level Radiation

By Rod Adams

A group of past presidents and fellows of the American Nuclear Society has composed an important open letter to ANS on a topic that has been the subject of controversy since before I first joined the society in 1994. The subject line of that letter is “Resolving the issue of the science of biological effects of low level radiation.” The letter is currently the only item on a new web site that has been created in memory of Ted Rockwell, one of the pioneers of ANS and the namesake of its award for lifetime achievement.

LNT and “no safe dose”

Ted was a strong science supporter who argued for many years that we needed to stop accepting an assumption created in the 1950s without data as the basis for our radiation protection regulations. That assumption, which most insiders call the “LNT”—linear no-threshold dose response—says that risk from radiation is linearly proportional to dose all the way to the origin of zero risk, zero dose.

Many people who support the continued use of this assumption as the basis for regulation plug their ears and cover their eyes to the fact that those who oppose the use of nuclear energy, food irradiation, or medical treatments that take advantage of radiation’s useful properties translate our mathematically neutral term into something far more fear-inspiring: They loudly and frequently proclaim that the scientific consensus is that there is “no safe dose” of radiation.

Some people who support the use of nuclear energy and who are nuclear professionals help turn up the volume of this repeated cry:

Delvan Neville, lead author of the study and a graduate research assistant in the Department of Nuclear Engineering and Radiation Health Physics at Oregon State University, told the Statesman Journal Apr. 28, “You can’t say there is absolutely zero risk because any radiation is assumed to carry at least some small risk.”

While most scientists and engineers understand that the LNT assumption means that tiny doses have tiny risks that disappear into the noise of daily living, the people who scream “no safe dose” want their listeners to believe it means that all radiation is dangerous. They see no need to complicate the conversation with trivial matters like measurements and units (I am being ironic here).

Scientists and engineers almost immediately ask “how much” before starting to get worried; but others can be spurred into action simply by hearing that there is “radiation” or “contamination” and it is coming to get them and their children. When it comes to radiation and radiation dose rates, we nuclear professionals have not made it easy for ourselves or for the public, using a complicated set of units, and in the United States remaining stubbornly “American” by refusing to convert to the international standards.

Aside: There is no good reason for our failure to accept the international radiation-related measurement units of Sieverts, Becquerels, and Grays. Laziness and “it’s always been that way” are lousy reasons. I’m going to make a new pledge right now—I will use International System of Units (SI) units exclusively and no longer use Rem, Curies, or Rad. After experiencing the communications confusion complicated by incompatible units during and after the Fukushima event, the Health Physics Society adopted a position statement specifying exclusive use of SI units for talking or writing about radiation, and perhaps ANS should adopt it as well. End Aside.

Physics or biology?

Leaving aside the propaganda value associated with the cry of “no safe dose,” an important factor that supports a high priority to the effort to resolve the biological effects of low-level radiation is the fact that the LNT uses the wrong science altogether.

The LNT assumption was created by persons who viewed the world through the lens of physics. When dealing with inanimate physical objects all the way down to the tiniest particles like neutrons, protons, mesons, and baryons, statistics and uncertainty principles work well to predict the outcome of each event. An atom that fissions or decays into a new isotope has no mechanism that works to reverse that change. A radiation response assumption that applies in physics, however, is an inadequate assumption when the target is a living organism that has inherent repair mechanisms. Biology is the right science to use here.

At the time that the LNT was accepted, decision-makers had an excuse. Molecular biology was a brand new science and there were few tools available for measuring the effects that various doses of radiation have on living organisms.

The assumption itself, however, has since inhibited a major tool used by biologists and those who study the efficacy of medical treatments: Since all radiation was assumed to be damaging and could only be used in medicine in cases where there was an existing condition that might be improved, it was considered unethical to set up well-designed randomized controlled trials to expose healthy people to carefully measured doses of radiation while having a controlled, unexposed group.

Instead, health effects studies involving humans have normally used the less precise observational methods of the case-control or cohort variety, with occupationally or accidentally exposed persons. The nature of the exposures in those studies often introduces large measurement uncertainty, and there are complicating factors that are often difficult to address in an observational study.

Science marches on, but will LNT?

Molecular biology and its available tools have progressed dramatically since the LNT was adopted by BEIR I (Committee on the Biological Effects of Ionizing Radiation) in 1956. It is now possible to measure effects, both short-term and long-term, and to watch the response and repair mechanisms actually at work. One of the key findings that biologists have uncovered in recent years is the fact that the number of radiation-induced DNA events at modest radiation dose rates is dwarfed, by several orders of magnitude, by essentially identical events caused by “ordinary” oxidative stress.

This area of research (and others) could lead to a far better understanding of the biological effects of low-level radiation. Unfortunately, the pace of the research effort has slowed down in the United States because the Department of Energy’s low dose research program was defunded in 2011 for unexplained reasons.

It is past time to replace the LNT assumption with a model that uses the correct scientific discipline—biology, rather than physics—to predict biological effects of low-level radiation. I’ll conclude by quoting the final paragraph of the ANS past presidents’ open letter, which I encourage all ANS members, both past and present, to read, understand, and sign:

The LNT model has been long-embedded into our thinking about radiation risk and nuclear energy to the point of near unquestioned acceptance. Because of strict adherence to this hypothesis, untold psychological damage has resulted from the Fukushima accident—a situation in which no person has received a sufficient radiation dose to cause a significant health issue—yet thousands have had their lives unnecessarily and intolerably uprooted. The proposed actions will spark controversy because it could very well dislodge long-held beliefs. But as a community of science-minded professionals, it is our responsibility to provide leadership. We ask that our Society serve in this capacity.

Additional reading

Yes Vermont Yankee (June 23, 2014)  “No Safe Dose” is Bad Science. Updated. Guest Post by Howard Shaffer

Atomic Insights (June 21, 2014) Resolving the issue of the science of biological effects of low level radiation





Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

Food Irradiation Can Save Thousands of Lives Each Year

By Lenka Kollar

The Centers for Disease Control and Prevention (CDC) estimates that 1 in 6 people get food poisoning each year in the United States and that 3000 die from foodborne illness. Food irradiation can drastically decrease these numbers by killing harmful bacteria such as E. coli and Salmonella in meat and produce. The U.S. government endorses the use of food irradiation, but does not educate the public about its benefits. Food irradiation has not caught on in the United States because consumers fear that radiation will mutate the food. The U.S. Food and Drug Administration (FDA) requires a label (pictured below) for any food that has been irradiated.


Food irradiation works by bombarding food with gamma rays, electron beams, or x-rays. Radioactive elements, such as cobalt-60 and cesium-137, emit high-energy photons or gamma rays that penetrate food. This type of radiation technology has been used routinely for more than 30 years to sterilize medical, dental, and household products, and it is also used for radiation treatment of cancer. Because the elements used do not emit neutrons, they do not make anything around them radioactive. Electron beams, or e-beams, are a stream of high-energy electrons propelled out of an electron gun and have been used as medical sterilizers for at least 15 years. X-ray irradiation can also be used for food irradiation and is a more powerful version of the machines used in many hospitals and dental offices to take X-ray pictures.

Irradiation does not change the nutritional value of the food, nor does it make it radioactive or dangerous to eat. This has been proven by numerous studies by the FDA and other national and international organizations. In fact, it is very difficult to distinguish if a food product has been irradiated or not. The high-energy particles kill bacteria, but do not alter the vitamin or nutritional content of the food. It still tastes and cooks the same and can even have a longer shelf life. Food irradiation can also be referred to as “cold pasteurization” because it kills bacteria through the use of radiation instead of heat as in traditional or hot pasteurization. Learn more about the process of irradiation and its effect on food on the CDC website and on Nuclear Connect.

In a recent article in the Washington Post, Michael T. Osterholm, director of the Center for Infectious Disease Research and Policy at the University of Minnesota, blames an “anti-science movement” for public resistance to food irradiation. Osterholm says that, “Not using irradiation is the single greatest public health failure of the last part of the 20th century in America.”

As mentioned before, the United States already uses irradiation to clean medical equipment and other consumer products. Spices are commonly irradiated and the practice is growing for imported fruits and vegetables. Americans are already eating much more irradiated food than they realize because irradiated ingredients in processed foods do not need to be labeled.

Irradiation advocates have fought to remove the label because it does not change the food, while other treatment processes such as chemical washes for chickens and fumigation for strawberries do not require labels. The word “irradiation” scares consumers because they are unfamiliar with the technology.

The United Nations Food and Agriculture Organization estimates that 25 percent of the world’s food supply is lost every year due to pests and bacteria while people die of hunger. Hundreds of millions of people worldwide are affected by diseases caused by contaminated food. Irradiation using radioisotopes has proved effective in controlling pathogenic bacteria and parasites in food products and can make our food safer and last longer.


Lenka Kollar is the owner & editor of Nuclear Undone, a blog and consulting company focusing on educating the public about nuclear energy and nonproliferation issues. She is an active ANS member, serving on the Nuclear Nonproliferation Technical Group Executive Committee, Student Sections Committee, and Professional Women in ANS Committee. Connect with Lenka on LinkedIn and Twitter.

Robotics, Remote Systems, and Radiation


By Reid L. Kress

This discussion is targeted at the robotics or remote systems professional who is interested in using his or her commercial system in a nuclear application, or who is beginning the design of a new system for deployment in a nuclear environment, as well as anyone interested in robotics and remote systems in nuclear environments.

Such a person will need an understanding of the radiation levels and radiation dose rates that might be encountered in these environments. A typical robotics/remote systems engineer will be unfamiliar with nuclear applications, since the robotics field draws on a diverse set of backgrounds; namely, mechanical engineering, electrical engineering, and computer science. To provide a general idea of the radiation environment that might be encountered, we will present a few example applications from past experience.

A primer on radiation units

The first step for an engineer aspiring to apply his/her existing or planned robotic or remote device in a nuclear environment is to have a clear understanding of the basic units associated with common nuclear environments. These units and definitions are the following. (Note that many of these are summarized from references [1] and [2].)

Becquerel (Bq): SI unit of radioactivity defined as: 1 Bq = 1 decay per second.

Curie (Ci): Unit of radioactivity in the conventional system (non SI) defined as: 1 Ci = 3.7×10¹⁰ decays per second = 3.7×10¹⁰ Bq. A curie is approximately the activity of one gram of radium.

Erg (erg): Centimeter-Gram-Second (CGS) derived unit of work defined as: 1 erg = 1 dyne·cm = 1 g·cm²/s² = 10⁻⁷ J. (The erg appears in the definition of the rad.)

Gray (Gy): SI unit of absorbed dose of ionizing radiation defined as: 1 Gy = 1 J/kg = 1 m²/s² = 100 rad. Dose in Sv = (dose in Gy) × (radiation weighting factor), where the radiation weighting factor (WR) is an approximation of the relative biological effectiveness value as a function of linear energy transfer. Therefore, when discussing x-rays and gamma rays (WR = 1), one gray is one sievert; whereas for alpha particles, which have a weighting factor (or quality factor) of 20, one gray is 20 sieverts. Related absorbed-dose quantities include “imparted specific energy” and “kerma.”

Kerma: Acronym for Kinetic Energy Released in Material. The sum of initial kinetic energy of charged particles released within a material per unit of mass of the material.

Rad (rad): Unit of absorbed dose in the conventional system (non SI) defined as: 1 rad = absorption of 100 ergs per gram. 1 rad = 0.01 Gy.

Rem (rem): Acronym for Roentgen Equivalent Man. Conventional unit of dose-equivalent radiation: 1 rem = 0.01 Sv. Rem is a unit that is generally used in the United States only.

Roentgen (R): Unit of radiation exposure in the conventional system (non SI) equivalent to 2.58×10⁻⁴ C/kg (where C is a coulomb). One roentgen is equal to the production of one electrostatic unit of charge in one cubic centimeter of air. One R of gamma or x-ray exposure will produce approximately 0.01 Gy (1 rad) in tissue, and because gamma and x-rays have radiation weighting factors of one (WR = 1), this is equivalent to a 0.01 Sv (10 mSv) dose.

Sievert (Sv): SI unit of dose-equivalent radiation equal to 100 rem. The sievert is used in a biological context: although 1 Sv = 1 J/kg = 1 m²/s², the sievert quantifies the effects of radiation on biological tissue, while the gray should be used when discussing dose in any material. Related equivalent-dose quantities include effective dose and committed dose.

X: Unit of radiation exposure in the SI system defined as the production of 1 Coulomb of charge in one kg of air. 1 X = 3,876 R.

Think of these units in this manner:

1) To measure radioactivity: in SI use Becquerel (Bq); in conventional use Curie (Ci).

2) To measure absorbed dose: in SI use Gray (Gy); in conventional use rad.

3) To measure equivalent dose: in SI use Sievert (Sv); in conventional use rem.

4) To measure radiation exposure: in SI use X; in conventional use Roentgen (R).
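As a quick sanity check, the conversions above can be captured in a few helper functions (a minimal sketch following the definitions in this primer; the function names are mine):

```python
# Converters for the unit pairs listed above, per the definitions in this primer.
def ci_to_bq(ci):
    return ci * 3.7e10          # activity: 1 Ci = 3.7e10 Bq

def rad_to_gy(rad):
    return rad * 0.01           # absorbed dose: 1 rad = 0.01 Gy

def rem_to_sv(rem):
    return rem * 0.01           # equivalent dose: 1 rem = 0.01 Sv

def gy_to_sv(gy, wr=1.0):
    return gy * wr              # apply radiation weighting factor (WR = 1 for gamma/x-ray)

print(rad_to_gy(100))           # prints 1.0 (Gy)
print(rem_to_sv(100))           # prints 1.0 (Sv)
print(gy_to_sv(5, wr=20))       # prints 100.0 (Sv), the alpha-particle case from above
```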

Example applications

Next, the engineer must consider his/her particular application and define the radiation environment expected to be present. If the engineer is designing a general system for multiple applications, or does not know the radiation levels present, the radiation environments found in some past applications should provide useful guidance.

Three Mile Island

The first example application examined is from cleanup work performed at the Three Mile Island (TMI) power plant. A remotely driven mobile robot called the Remote Reconnaissance Vehicle (RRV) was designed to carry various tools and a manipulator at TMI [3]. A typical task used the RRV and other remote equipment to leach strontium and cesium contamination from the concrete block wall that surrounded the containment building’s elevator and stairwell structures. The task comprised several remote operations, the most complex of which was injecting water into the cavities in the center of the blocks. Injection required boring water injection holes and inserting a water injection nozzle. Wall sources were in the range of 2 Gy/h to 3 Gy/h (200 to 300 R/h) gamma. Another task was the removal of sludge and contamination from the basement, which included cleaning the floor and using a pressure washer to clean walls. General area radiation readings on the RRV used for this task were 4 mSv (400 mrem) gamma and 0.02 Gy/h beta (2 rad/h beta). General area radiation levels on the hydraulic manipulator mounted on the RRV were 6 mSv (600 mrem) gamma and 0.02 Gy/h beta (2 rad/h beta).

Spent fuel pool

A second application area involves robotic and remote systems operating at a reactor site or support facility. For example, a system deployed in a spent fuel pool at a reactor site working on tasks such as inspection and characterization might encounter dose levels in the following ranges. On vertical walls, dose rate is generally less than 1 Gy/h. On the bottom of the pool not directly below the fuel racks, dose rates are below 5 Gy/h. On the bottom of the pool under the storage racks without the fuel rods in place dose rate is less than 50 Gy/h.  Under racks with fuel rods in place the dose rate can exceed 200 Gy/h.

Dry storage, reactor pressure vessel

Another area is a dry storage cask. In this case, the radiation levels internal to the cask have been reported up to 100 Gy/h, containing both gamma and neutron radiation. For the pressure vessel annulus of the reactor, levels of 100 Gy/h (gamma) and 300 Gy/h (neutron) have been observed. For the lower and upper intervals of the pressure vessel, gamma levels of 0.2 Gy/h and 0.07 Gy/h, respectively, have been observed. Systems operating in a light water reactor waste disposal facility could see dose rates of 2 Gy/h to 3 Gy/h (200 to 300 R/h) ranging upwards to 300 Gy/h (30,000 R/h).


At Fukushima, many values have been published for the possible radiation environment, depending upon which reference one examines. One recent publication from the Tokyo Electric Power Company stated that radiation levels inside the Fukushima Dai-ichi reactors have been measured at 30 to 73 Sv/h [4]. Regardless of the reference chosen, these dose rates are well within the ranges seen in other applications (e.g., the bottom of a spent fuel pool at < 5 Gy/h, which corresponds to 100 Sv/h if one is discussing alpha particles in tissue).
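The Gy-to-Sv comparison in the parenthetical above is just absorbed dose multiplied by a radiation weighting factor. A minimal sketch of that arithmetic follows; the weighting factors are the commonly used ICRP values (an assumption here), and the function and dictionary names are illustrative:

```python
# Equivalent dose (Sv) = absorbed dose (Gy) x radiation weighting factor w_R.
# Weighting factors follow commonly used ICRP values (an assumption):
# photons and electrons 1, alpha particles 20.
RADIATION_WEIGHTING = {"gamma": 1, "beta": 1, "alpha": 20}

def equivalent_dose_rate(absorbed_gy_per_h, radiation):
    """Convert an absorbed dose rate (Gy/h) to an equivalent dose rate (Sv/h)."""
    return absorbed_gy_per_h * RADIATION_WEIGHTING[radiation]

# Spent fuel pool floor example from the text: < 5 Gy/h absorbed dose.
print(equivalent_dose_rate(5, "gamma"))  # 5   (Sv/h for gamma)
print(equivalent_dose_rate(5, "alpha"))  # 100 (Sv/h if delivered by alpha in tissue)
```

This is why a 30 to 73 Sv/h figure is not outside the envelope of environments remote systems have already handled.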

Medical isotopes, sterilization, irradiation

During the production of medical isotopes, exposure rates ranging from 0.014 Gy/h (1,400 mR/h) immediately following target bombardment down to a steady exposure rate of approximately 0.0035 Gy/h (350 mR/h) five to seven days after bombardment have been observed [5]. Consequently, remote target handling equipment needs to be designed to support these exposure rates over the life of the facility, which is especially challenging whenever increased demand necessitates increased throughput and higher system availability. A typical dose for a food and medical products sterilization facility is 25 kGy gamma for a medical product [6, 7] and 4 kGy for a food product [7]. The dose rate in an irradiator found in a research facility might range as high as 20 kGy/h. In an industrial irradiation facility (consider one that might contain 3 MCi of cobalt-60), the dose rate might range up to 100 kGy/h near the source; however, it is generally around 10 kGy/h [7].

Other applications

Other applications and dose ranges are: medical diagnosis 10–100 mGy, medical therapy 1–10 Gy, industrial food and agriculture processing 0.1–10 kGy, industrial sterilization 10–30 kGy, and industrial materials modification 50–100 kGy [7]. Systems designed to handle these products are expected to work for long periods with high reliability, but depending upon the design and use requirements, they can sometimes be located in a shielded area while the highest exposures are present.

Material damage thresholds

Radiation damage thresholds on selected metals, in Gy, from reference [2] are: aluminum 5×10^11, 300 series stainless steel (SS) 1×10^11, 400 series SS 5×10^10, copper 2×10^10, and nickel 1×10^10. Radiation damage thresholds on selected ceramics, in Gy, from reference [2] are: alumina 5×10^10, quartz 2×10^7, flint glass 2.5×10^5, and borosilicate glass 1×10^5. Coatings (e.g., vinyl and epoxy) tend to fail around 10^6 to 10^7 Gy [2], and adhesives and mineral oils tend to fail around 10^6 Gy [2]. Off-the-shelf electronics components can absorb approximately 100 Gy. More radiation-resistant components that can handle up to 1,000 Gy are available and are more costly (cost increases by approximately a factor of 10). Dose rate is also important to the degradation of materials. For example, a dose rate of 500 Gy/h (50 krad/h) is of concern for semiconductor materials. Off-the-shelf CCD cameras can handle dose rates on the order of 100 to 250 Gy/h (10^4 to 2.5×10^4 rad/h) or a total dose of 250 to 1,000 Gy (2.5×10^4 to 10^5 rad), depending upon the manufacturer.
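For design screening, the thresholds above reduce to a simple accumulated-dose margin check: dose rate times mission time, compared against the component damage threshold. A hypothetical sketch follows, using threshold values quoted above; the component names, data structure, and safety margin are illustrative assumptions, not a published design method:

```python
# Damage thresholds in Gy, taken from the reference values quoted above.
DAMAGE_THRESHOLD_GY = {
    "aluminum": 5e11,
    "ss_300": 1e11,
    "copper": 2e10,
    "cots_electronics": 100.0,
    "rad_tolerant_electronics": 1000.0,
}

def mission_dose(dose_rate_gy_per_h, hours):
    """Total absorbed dose accumulated over a mission."""
    return dose_rate_gy_per_h * hours

def survives(component, dose_rate_gy_per_h, hours, margin=10.0):
    """True if accumulated dose stays below the damage threshold divided by a
    design safety margin (the margin value is an illustrative choice)."""
    return mission_dose(dose_rate_gy_per_h, hours) < DAMAGE_THRESHOLD_GY[component] / margin

# A 100-hour task near a 3 Gy/h wall source (the TMI example above) gives
# 300 Gy total: far beyond unshielded off-the-shelf electronics,
# trivial for structural metals.
print(survives("cots_electronics", 3.0, 100))  # False
print(survives("ss_300", 3.0, 100))            # True
```

A check like this is what drives the choice between shielding the electronics, swapping in radiation-tolerant parts, or scheduling planned replacement.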


The radiation levels and dose rates in these example applications are often large enough to justify robotics and remote operations, yet they are well within the range of engineering acceptability for the design and fielding of viable, reliable systems. By selecting components with attention to radiation resistance, by designing appropriate shielding (either direct shielding, or indirect shielding of critical components provided by other, more radiation-resistant components), and by providing adequate maintenance with planned replacement of degraded parts, the remote systems engineer can field remotely operated devices that accomplish the required tasks with appropriate reliability in these and similar nuclear environments.



[1] Lamarsh, J.R. and Baratta, A.J., “Introduction to Nuclear Engineering, 3rd Edition,” Prentice Hall, Upper Saddle River, New Jersey, 2001.

[2] Houssay, L.P., “Robotics and Radiation Hardening in the Nuclear Industry,” M.S. Thesis, University of Florida, Gainesville, 2000.

[3] Bzorgi, F.M., “Summary of Remotely Operated Systems Designed for Inspection, Decontamination, and Decommissioning,” Bechtel National Inc. Oak Ridge, TN, 1996.

[4] Inajima, T., “Tepco Detects High Radiation Levels Inside Fukushima Reactor,” Bloomberg online, March 27, 2012.

[5] Boothe T.E. and McLeod, T.F., “Radiation Safety Aspects of Production of Commercial Levels of Medical Radioisotopes,” Nuclear Instruments and Methods in Physics Research B79, pp. 945-948, 1993.

[6] Eastman Specialty Products, “Sterilization of Medical Devices & Packaging,” Eastman Chemical Company, Kingsport, TN, 2010.

[7] International Atomic Energy Agency, “Gamma Irradiators for Radiation Processing,” Vienna, Austria.


The mission of the Robotics and Remote Systems Division of the American Nuclear Society is to promote the development and application of robotic and remote systems for hazardous environments for the purpose of reducing hazardous exposure to individuals, reducing environmental hazards, and reducing the cost of performing work.



Reid Kress, PhD, PE, is senior technical advisor at the Y-12 National Security Complex, and adjunct professor in the Department of Industrial Engineering at the University of Tennessee, Knoxville. He is chair of the American Nuclear Society Robotics and Remote Systems Division.

What’s Your Radiation Dose?

Yes, indeed, you do have one. It’s rather surprising that many people simply don’t realize that radiation exists naturally all around us, and is part of our everyday lives—whether we are aware or not.

So become aware! Click the image below to visit the quick and easy ANS Interactive Radiation Dose Chart.


The average radiation dose for a person living in the United States is about 6.2 millisieverts (mSv) per year. About half comes from natural background sources, and about half from medical diagnostics and treatments.

For a quick primer on the basics of radiation, see the Radiation page at NuclearConnect. The U.S. Environmental Protection Agency has a highly informative page on Radiation Doses In Perspective. And for more perspective, also recommended is a recent post Get a grip on radiation, people at idigumining.

This chart from xkcd is a very interesting graphical comparison of radiation doses from many sources. Also, this chart (along with a click of the magnification button) from Information Is Beautiful is, well, beautifully portrayed, although do keep in mind it’s logarithmic so it can fit on one page.


New EPA Guidelines for Response to Radioactivity Releases

By Jim Hopf

DC Perspective

The U.S. Environmental Protection Agency just released a draft Protective Action Guide (PAG) that sets standards and makes recommendations for the response to a large release of radioactive material into the environment (e.g., from a nuclear plant accident or a dirty bomb attack). The draft report is now out for public comment (comments are due by July 15).

PAG recommendations

The PAG sets a public dose threshold of 2,000 mrem in the first year and 500 mrem in subsequent years, above which the areas in question should be evacuated (see Table 1-1 of the PAG). The PAG is not clear as to whether those same limits apply to resettlement of areas previously evacuated (i.e., whether people can resettle areas after their exposure levels drop back below 500 mrem/yr). Section 3.8 of the PAG suggests that “re-entry” is allowed if annual exposure is kept under 500 mrem, but appears to say that this applies only to temporary stays (to accomplish specific tasks). It’s unclear why permanent residence (resettling) would not then be permitted if full (annual) occupancy would not yield a dose over 500 mrem.

Apparently, the above evacuation guidelines (thresholds) are no different from the current guidelines, which were based on a 1992 PAG. The differences lie in the area of long-term cleanup standards and (perhaps) standards for resettlement or reuse. Currently, the only guidance or precedent for such standards is the set of extremely strict standards that apply to EPA Superfund sites and nuclear plant decommissioning, which are based on allowable lifetime cancer incidences (for a hypothetical, most exposed individual) ranging from 10^-4 to 10^-6. For radiation, these standards led to dose rate limits as low as 10–25 mrem/year (i.e., far below natural background levels).
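The arithmetic connecting a lifetime risk criterion to an annual dose limit under LNT can be sketched directly. The nominal risk coefficient (~5×10^-2 per Sv) and the 70-year exposure period below are illustrative assumptions, not values taken from the PAG:

```python
# Under LNT: lifetime risk = annual dose (Sv) x years of exposure x risk coefficient.
RISK_PER_SV = 5e-2       # assumed nominal LNT risk coefficient (~5% per Sv)
EXPOSURE_YEARS = 70      # assumed lifetime at a constant annual dose

def annual_dose_limit_mrem(acceptable_lifetime_risk):
    """Annual dose (mrem/yr) whose constant lifetime exposure yields the given risk."""
    annual_sv = acceptable_lifetime_risk / (EXPOSURE_YEARS * RISK_PER_SV)
    return annual_sv * 100_000  # 1 Sv = 100 rem = 100,000 mrem

# A 1e-4 lifetime risk criterion works out to a few mrem/yr under these
# assumptions, the same order as the 10-25 mrem/yr limits discussed above.
print(round(annual_dose_limit_mrem(1e-4), 2))  # 2.86
```

With the 10^-6 end of the criterion, the same formula yields hundredths of a mrem per year, which is why such limits land far below natural background.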

The new PAG does not appear to give any specific recommended dose thresholds for long-term cleanup. In Section 4.1.3, it makes reference to the old 10^-4 to 10^-6 acceptable lifetime risk criteria, but goes on to suggest that in the case of a large-scale release of radioactivity (e.g., following a severe plant accident), attaining such cleanup goals may be impractical. It then states that cleanup level (and perhaps resettlement) decisions should be made on a case-by-case basis, with input from local authorities and various other stakeholders, based on the principle of “maximizing overall human welfare.” In Section 4.1.4, it suggests that resettlement may be possible before the long-term cleanup goals are met, because those goals will be met in subsequent years, resulting in acceptable lifetime exposures.

While the EPA PAG does not give specific dose numbers for cleanup standards/goals, a related National Council on Radiation Protection (NCRP) report does discuss such values. It considers raising the allowable dose rates (for resettlement, and possibly long-term cleanup goals) to anywhere from 100–2,000 mrem/yr. That is in contrast to the existing EPA standards for nuclear plant decommissioning, which are on the order of 10–25 mrem/yr (and are based on a constant lifetime dose at those levels and an acceptable lifetime cancer risk of 10^-4 to 10^-6). It does, however, go on to recommend continued cleanup efforts even after the attainment of the (100–2,000 mrem) annual dose goal and subsequent resettlement.

Political reaction

The EPA PAG and NCRP report have provoked a strong reaction from anti-nuclear groups, who characterize them as an enormous relaxation of radiation standards (i.e., a huge increase in allowable dose rates). In a New York Times article, however, the authors of the PAG and NCRP report insisted that they are not changing the cleanup standards or allowable dose, but are just using more accurate estimates of lifetime doses that people will receive, based on the Fukushima experience, and expected cleanup activities that will continue to occur.

I’m not entirely sure what they mean by more accurately calculating doses, when the subject is the setting of dose limits. I think that the authors are referring to what was discussed in Section 4.1.4 of the EPA PAG, where people can resettle in areas with a somewhat higher annual dose rate, while still “meeting” the old lifetime cancer risk criteria, due to an assumption that dose rates will fall off, significantly, due to decay, natural dispersion, and ongoing cleanup efforts.

Changes do not go far enough

All of these (EPA/NRC) policies and supporting analyses are based on the linear no-threshold (LNT) assumption, i.e., that cancer risk is directly proportional to radiation dose, for doses all the way down to zero. Many scientists outright disagree with this, and even most of those who do support LNT don’t really believe that the risk is truly linear, all the way down to extremely low doses (that are a small fraction of natural background). They just believe that it is a practical and conservative radiation protection policy, and that there is no better practical alternative.

It’s obvious that anyone who does not believe the LNT assumption, and believes that dose rates within the range of natural background have no health impact, will find these EPA/NRC policies to be completely absurd. I will not question or debate LNT here, however. For the reasons I discuss below, current policies—and even those suggested by the PAG—are clearly unwise, indefensible, and utterly hypocritical, even if one completely accepts LNT.

Man-made vs. natural radiation dose

My position has always been that the issue is not LNT per se, but the fact that it is selectively applied/enforced. While LNT is debatable, there is no debate among experts that a given dose has the same health impact, whether it comes from a natural or man-made source (or isotope). And yet, there is a complete black-and-white distinction between naturally caused doses and man-made doses (specifically, those from the nuclear power or weapons industries), in terms of dose limits. Government agencies assume LNT, and then apply an extremely low (and arbitrary) allowable cancer risk criterion, to arrive at extremely low allowable radiation doses. They then apply those low limits ONLY to nuclear-industry-related activities (and isotopes). Doses from natural and other sources that are orders of magnitude larger are not regulated or responded to.

How could it be that government agencies are saying that “contaminated areas” should remain off limits, and require expensive cleanup efforts, even though the overall exposure levels in those areas are lower than the natural background exposure levels in many regions of the earth (where millions currently live, with no apparent health impacts)? Under that logic, we should be spending billions to reduce doses in high natural background dose areas (e.g., Denver), or permanently evacuate those areas.

Those natural sources are responsible for annual collective exposures that are thousands of times higher than those caused by even worst-case accidents like Fukushima, let alone the nuclear industry in general. Even the individual exposures are orders of magnitude larger than those that would be allowed by the 10-4 to 10-6 lifetime risk criterion (radon exposes hundreds of millions of Americans to a lifetime cancer risk on the order of ~1%). Many of those natural doses (such as radon) would also be orders of magnitude less expensive to reduce (in terms of dollars per man-Rem avoided).

For these reasons, annual dose limits that are a small fraction of natural background, which only apply to nuclear-industry-related sources, are clearly indefensible. The policy solution to this is obvious. Government agencies need to be told that they are no longer allowed to apply policies or regulations that distinguish in any way between different sources of radiation (e.g., natural vs. man-made, etc.). Dose is dose, period. They need to establish what safe dose levels are, regardless of source, for normal (long-term) and accident/event (short-term) conditions. The only possible exception to that may be medical exposures, under the argument that they have an offsetting health benefit.

I can possibly understand the desire to set very low exposure limits (far below the level that poses any significant health risk) for normal nuclear industry operations, based on a “good industrial practice” philosophy. Routine releases really are unnecessary and easy to avoid, and we may want to avoid long-term buildup of man-made isotopes in the environment. However, unless the above reasoning is clearly explained to the public, such policies may be counterproductive. The public will (understandably) tend to think that doses above the limits represent a significant health threat. In the event of an accident, the government will have to apply much higher limits, and then will have to explain that those higher doses are not really a significant health threat. This will result in a loss of public trust. A better stance would be to establish higher “public health and safety” dose rate limits around the top of the natural range (i.e., on the order of a Rem/year), but then state that much lower limits will be applied for normal operations, since there simply is no reason why any significant releases are necessary or should be allowed.

Cost vs. benefit

These extremely strict dose limits are yet another example of society spending enormous sums of money to reduce or eliminate tiny risks in one area, while ignoring vastly larger, and cheaper to reduce, sources of risk in other areas. This may be true of the (chemical toxin) EPA Superfund cleanup requirements, as well as the nuclear-related requirements.

The PAG and NCRP reports, and their authors, discuss the 10^-4 to 10^-6 “acceptable” lifetime cancer risk criteria and how they will be maintained. To me, something seems odd about such stringent requirements in a world where ~25 percent of people die of cancer. Clearly, there are much larger sources of risk that these regulatory bodies are failing to protect us against. That is, there are many industries and aspects of life where these strict risk standards are clearly NOT being applied. (Automobile exhaust, coal plant emissions, and the fact that coal ash is still not classified as a toxic material come to mind.) It seems clear that this is yet another case of selective application/enforcement of overly strict requirements; another double standard.

My understanding is that the government has general public safety policies (for industrial projects/activities, building codes, etc.) that call for roughly $5–10 million to be spent per (expected) life saved. These same policies should apply to the cleanup and resettlement of nuclear-contaminated areas. At some point (dose level), the cost of continued cleanup effort will exceed $5–10 million per life saved (even assuming LNT). At that point, cleanup efforts should stop.
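Under LNT, the stopping-point comparison described above is straightforward to compute. A hypothetical sketch follows; the risk coefficient, cleanup cost, and averted collective dose are all illustrative assumptions:

```python
# Cost-effectiveness of a cleanup step under LNT:
# expected lives saved = averted collective dose (person-Sv) x risk coefficient.
RISK_PER_SV = 5e-2  # assumed nominal LNT risk coefficient (~5% per Sv)

def cost_per_life_saved(cost_dollars, averted_person_sv):
    """Dollars spent per expected life saved, assuming LNT."""
    return cost_dollars / (averted_person_sv * RISK_PER_SV)

# A hypothetical $1 billion cleanup step that averts 500 person-Sv
# (50,000 man-Rem) works out to $40 million per expected life saved,
# well past a $5-10 million cutoff.
print(round(cost_per_life_saved(1e9, 500)))  # 40000000
```

Running this comparison at each successive dose level is exactly the decision rule the paragraph above proposes: continue cleanup while the ratio stays below the threshold, stop once it exceeds it.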

This is especially true given that there are many ways to save lives that cost far less than $5–10 million per life saved. According to this article, the EPA’s proposed soot rule would only cost ~$5,000 per life saved. Also, according to my calculations, radon abatement (in a large fraction of U.S. homes) would cost only ~$100,000 per life saved (again, if you believe LNT).

Collective exposure vs. maximum individual risk

If one believes that there is a dose threshold (below which no health impacts occur), it may be logical to establish limits on dose rate (or annual dose) for individuals that are near that threshold. However, if one truly believes in LNT, limits on individual exposure have no logical basis. A simple mathematical result of the assumption that health risk scales linearly with dose is that the total health impact (i.e., the number of sicknesses or deaths) scales directly with the collective exposure (in man-Rem). At the end of the day, the number of cancers is all that matters; individual risk, and whether or not it is “acceptable,” is almost meaningless.

Current limits invoke LNT (as they are far below the levels at which any health impacts are seen), but then establish extremely low limits on maximum individual risk (i.e., 10^-4 to 10^-6), as opposed to limits on collective exposure (in man-Rem). The way these current limits work, spreading the risk (pollution) out (e.g., via tall smoke stacks) helps one comply with the limits, even though LNT (the very basis of those low limits) holds that spreading the risk out does not reduce the impact at all. It is fallacies like this that make it possible for extremely low dose limits to apply to localized decommissioning or Superfund sites that are having negligible impact, while fossil fuel air pollution (cars and coal plants) is causing tens of thousands of deaths every single year.

If LNT is to be the basis, the correct policy would be to place limits on collective exposure for any given industrial activity. For cleanup operations (or pollution prevention, for that matter), a certain amount of money per man-Rem avoided should be required. Such policies would direct attention away from localized sites and toward more widespread pollutants that are actually having far larger health impacts. One thing is clear: these extremely low (10^-4 to 10^-6) limits on maximally exposed individual dose have no logical basis and are completely indefensible.
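The linearity argument above can be demonstrated directly: under LNT, the expected number of cancers depends only on collective exposure, not on how that exposure is distributed among people. A minimal sketch (the ~5×10^-2 per Sv nominal risk coefficient is an assumed value for illustration):

```python
RISK_PER_SV = 5e-2  # assumed nominal LNT risk coefficient (~5% per Sv)

def expected_cancers(doses_sv):
    """Expected cancers under LNT: risk scales linearly with each person's dose."""
    return sum(d * RISK_PER_SV for d in doses_sv)

# Concentrated exposure: 1,000 people at 0.1 Sv each.
concentrated = [0.1] * 1_000
# Spread out (the tall-smoke-stack case): 100,000 people at 0.001 Sv each.
spread = [0.001] * 100_000

# Same collective dose (100 person-Sv), same expected harm under LNT --
# even though the spread case satisfies any maximum-individual-dose limit.
print(round(expected_cancers(concentrated), 6))  # 5.0
print(round(expected_cancers(spread), 6))        # 5.0
```

This is the sense in which a maximum-individual-risk limit is inconsistent with the model used to justify it: diluting the exposure changes compliance without changing the predicted harm.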

Call to action

The draft EPA PAG is open for public comment until July 15. I urge American Nuclear Society members to respond. My personal view is that conducting expensive cleanup operations, or barring resettlement, in areas with annual doses within the natural range (i.e., under ~1,000 mrem/year) is neither rational nor defensible. It wastes limited resources on a small to negligible public health benefit, and it inflicts needless suffering on the local population.




Jim Hopf is a senior nuclear engineer with more than 20 years of experience in shielding and criticality analysis and design for spent fuel dry storage and transportation systems. He has been involved in nuclear advocacy for 10+ years, and is a member of the ANS Public Information Committee. He is a regular contributor to the ANS Nuclear Cafe.


Fukushima Two Years Later

by Will Davis

At about a quarter to three in the afternoon on March 11, 2011, a gigantic and unprecedented earthquake struck just over 110 miles off the coast of Fukushima Prefecture in Japan. The quake was followed, just over 40 minutes later, by the first of several rounds of tsunami, which inundated enormous areas and eradicated entire towns and villages. Over 19,000 people were killed or are still missing, and over 6,000 survivors were injured.

Central to most narratives on this cataclysmic natural disaster has been the story of the Fukushima Daiichi nuclear accident. While no deaths have been attributed to the nuclear accident itself, or to radioactive contamination released from the plant, and while deaths at the Fukushima Daiichi nuclear site proper have been very few (three persons were killed on the day of the earthquake and tsunami—one by falling from a crane, two by drowning), the story of the nuclear accident continues to dominate press worldwide.

As we approach the two-year anniversary of these events, it’s important to look back and ask some honest and direct questions about the nuclear accident and how it relates to us here in the United States. What do we know now that we didn’t in the early days? Can we say for sure what was happening, both on a large and on a minute scale? Could the accident have been prevented? What are we doing to ensure something similar never happens again? What about the radiation exposure to the public? We will try to answer these and other important questions as we look back at two years’ worth of study and analysis, recovery and cleanup, and planning and preparing.

(Above, Fukushima Daiichi nuclear power station under construction in 1971. To the left of the photo, Units 1 and 2 can be seen complete while Unit 3 is under construction; Unit 4 has not yet been started. Nearer the camera is the construction site for Units 5 and 6. Photo courtesy Will Davis collection.)

The Great Tohoku Earthquake and Tsunami … and what we now know

As already described, the earthquake struck at 2:46 PM local time, and at that moment the three operating reactors at Fukushima Daiichi—Units 1, 2, and 3—detected the earthquake and were immediately shut down on a seismic scram signal. (The other units—4, 5, and 6—were shut down for maintenance.) Simultaneous with this event was a LOOP (loss of offsite power), caused by the electric distribution system outside the plant being damaged by the earthquake. At the Fukushima Daiichi station, the emergency diesel generators started as designed, and provided power to begin cooling down the three reactors that had been operating.

There has been speculation in some quarters that the earthquake caused damage to the plants and that this helped lead to the accident. In fact, all indications are that plant operations were nominal from the point of the seismic shutdown, LOOP event, and commencement of shutdown cooling at the three operating plants. As late as last November, presentations by the Tokyo Electric Power Company at the American Nuclear Society Winter Meeting revealed no suspicion of material failures at the plants prior to the tsunami’s arrival, as corroborated by recorded plant parameters and operator statements.

Of course, the actual triggering event of the accident was the tsunami-driven inundation of the plant 40 minutes after the earthquake, which, because of the pressure of the violent inrush of water, caused more physical damage than an equivalent-depth slow flooding event would have. The tsunami flooded the plant because the protection was inadequate; the protection guarded against a tsunami of nearly 20 feet, while the actual event was almost 50 feet. It should be noted, though, that an unanticipated factor in the event was that the coastline actually dropped several feet, negating a percentage of the tsunami protection.

The inundation of the plants meant that both the (mostly below-ground) diesel generators and the near-grade electric distribution equipment were rendered inoperable. This is the situation called SBO (station blackout), in which no AC power is available at all. Generators were called for and shipped from outside the plant, but the sheer damage to the site made bringing them in and moving them around exceedingly difficult. In addition, procedures for their use did not really exist. The total loss of AC power meant that only DC power, to operate some valves and instruments, was available—and even this was limited not only by the time until the batteries discharged, but also by damage. At that point, the plant was crippled by loss of power, serious physical damage, confusion on site due to communication problems (and continued aftershocks), and lack of solid emergency operating procedures for such events. This led to a loss of cooling for the Unit 1, 2, and 3 reactor cores, ultimately resulting in severe core damage. Failure of the containment function of the reactor buildings led to the release of radioactive material to the environment.

At the ANS 2012 Winter Meeting, Akira Kawano of TEPCO stated that spare seawater pumps (both portable pumps, and replacements for built-in or installed pumps destroyed by the tsunami), spare sources of electric power (of all three ranges—high voltage AC, low voltage AC, and DC—used at the plant) and spare pressure cylinders to allow operation of valves after loss of electric power would have been exceedingly helpful in the hours after the tsunami. TEPCO has gone far beyond provision of these items, though, in its plan for tsunami protection at nuclear plants in the future.

It is important to point out that Units 5 and 6 did not experience a long-term blackout because one of the above-ground, air-cooled diesel generators installed at that northern section of the site remained fully operable. This diesel was at Unit 6, but power was later patched in from it to Unit 5. Air-cooled diesels did exist in the area of Units 1 through 4, but the destruction of the electric distribution network inside the plants by water, coupled with the loss of fuel tanks, rendered these useless. (In this case, “air cooled” means that the diesels used conventional radiators to dissipate waste heat to the air, unlike the large emergency diesel generators that required seawater systems to be operable in order to dissipate engine heat.)

Regarding this tsunami damage and its implications, TEPCO has addressed its future commitment to safety at its nuclear plants by designating three courses of action. First, it will take what it calls “Thorough Tsunami Countermeasures,” which means large seawall protection, protection of buildings inside the seawall should the seawall be breached, and also provision of multiple backup power sources. Second of the triad is “Securing Functions by Adopting Flexible Countermeasures,” by which it is meant that many varied backup power sources and sources of site assistance will be spread among many other sites. Finally, under “Mitigation of the Impact after Reactor Core Damage,” TEPCO plans to make serious preparations to control events even should the first two steps fail. This includes, but is not limited to, installation of hardened, filtered containment vents that can be operated remotely even under accident conditions. Click here to see a brief TEPCO synopsis of its accident analysis report that contains these three steps.

Eventually, all operators of nuclear plants in Japan will take serious measures like those described above, and more, to prepare the sites and personnel against future events like this. Some have already begun; click here to see a detailed account of preparations at two different sites in Japan. These efforts are enormous; Chubu Electric Power has stated that it will invest 140 billion yen (about US$1.47 billion) in its Hamaoka nuclear plant upgrades.

At left, view of Fukushima Daiichi Units 1 through 4 after the accident. Photo courtesy Japanese Maritime Self Defense Force.

Two of the reactor buildings at Fukushima Daiichi were severely damaged, and another partly damaged, by explosions of hydrogen gas that was generated by the damaged fuel while in contact with steam. This hydrogen got into the reactor buildings, built up in concentration, and later (quite famously, for both explosions were filmed from a distance) caused explosions in the Unit 1 and Unit 3 reactor buildings. Evidence delivered by TEPCO at the ANS 2012 Winter Meeting now shows that the probable leakage point of the hydrogen into the primary containments and into the reactor buildings (after first getting out of the damaged reactor vessels) was through the drywell head flange at Unit 1, and also possibly at Unit 3. (Other papers delivered at that meeting hinted at other possible leak points; none can be assured until the plants are decommissioned.) Unit 4 experienced a hydrogen burn event as well; this is now known to have occurred because PCV (primary containment vessel) venting at Unit 3 allowed hydrogen to enter a common exhaust stack and flow not only out the stack but into Unit 4’s reactor building. Delayed and/or difficult venting of the containments is the key factor in this portion of the accident; venting would have prevented overpressurization of the primary containments, allowing them to retain physical integrity.

Containment vents have become a major topic of discussion after the accident. At the ANS Winter Meeting, Sang-Won Lee, a representative of Korea Hydro and Nuclear Power (KHNP), stated that all of its OPR1000 and APR1400 nuclear plants will have filtered containment vents installed by the year 2015, since KHNP considers this the “final means to prevent an uncontrolled release of radionuclides to the atmosphere.” (Interestingly, all South Korean nuclear plants will fit or backfit seismic trip equipment as well.) Here in the United States, hardened vents, perhaps filtered, will eventually be fitted to all boiling water reactor plants with Mk I and Mk II containments; click here to see some detailed background on the decision-making process and on filtered vent systems at reactors in other countries. For more background on decision-making regarding filtered vents, click here.

Do we know all of the things that were going on at Fukushima Daiichi?

The answer to this question is a qualified “yes.” In the time since the accident, many reports have been developed by TEPCO (and many other bodies) to attempt to explain the accident progression. Each subsequent report has benefited from more and better detailed information on the actual minute-to-minute actions taken by operators on site, and from more detailed records that have been released. As of November 2012, when TEPCO made presentations on the accident at the ANS Winter Meeting, there were no new announcements about operator actions, equipment failures, or records—and TEPCO representatives stated on several occasions that the full range of operator actions is thought to be as well known now as it will ever be.

In terms of what was happening mechanically throughout the accident, the truth is less certain. The loss of most plant instrumentation and the inability to access parts of the reactor buildings (even today) mean that the exact progression of events once serious core damage began isn't known. It will not be better known until the plants are more accessible (during defueling, years away), and it will not be fully known until the plants are decommissioned and dismantled. It must be added that while these findings will eventually add significantly to our storehouse of knowledge, they are not essential to setting up procedures and equipment to prevent any such accident in the future.

For such detailed reports as mentioned above, you can click here to see the Institute of Nuclear Power Operations report on the accident; you can click here to see a massive 500 page report on the accident by TEPCO; you can also find the American Nuclear Society’s Fukushima Committee report here.

Could the Fukushima Daiichi accident have been prevented?

We could say “yes” at some, or many, points along the way. For example, we might say (getting into details) that had the hydrogen explosion not occurred at Unit 1, there might not have been serious core damage at the other units, given the site-wide problems that explosion caused. That is cherry picking, though; the best answer to the question is “yes: had the site been properly prepared for a tsunami of the size actually experienced, and, even if not, had it been prepared to respond both from inside the site and from outside to such a natural disaster.” I've provided a link earlier showing what's being done in Japan to prevent such events; a clearly defined path for US nuclear plants to increase safety can be found in a document that the Nuclear Energy Institute calls “The Way Forward.”

Our first modern wake-up call in the United States to such events was 9/11, in the sense that that experience was applied to nuclear plants here; afterward, what are called “B.5.b” enhancements to US nuclear power stations provided numerous pieces of equipment to help combat site emergencies involving physical damage. Since the Fukushima Daiichi accident, much more has been developed. The industry response to the accident is called FLEX, and it provides essentially the same sort of mobile backup responses that the Japanese are beginning to implement (for stations that will restart). The FLEX response is by now well known; you can click here to see details of its implementation and progress. There are also multiple documents available at NEI's Safety First website, found here.

So, the answer to “could this accident have been prevented” is “yes,” which means that future occurrences can also be prevented. The important provisions are spelled out clearly in the FLEX plans, and in the largely parallel plans being pursued by the Japanese: prevent the loss of all AC power (station blackout), prevent the loss of the ultimate heat sink (where heat from the reactors and spent fuel is ultimately deposited, be it water or even the atmosphere), and thereby prevent core damage.

What about the radiation dose received by citizens off site?

The World Health Organization has just released a report that tells us that the dose received by persons not on the site was actually not dangerous—in fact, according to WHO, most persons in Fukushima Prefecture received no more than 10 mSv, although some received as much as 50 mSv effective dose. You can read the entire report by clicking here.

This is not to say that the trauma experienced by those evacuated from the prefecture is not real; it is. It is important to understand that preventing future events like the Fukushima Daiichi accident will also prevent massive evacuations of people from their homes. What the WHO figures do mean is that the exposure received by most people is comparable to what many people receive in the normal course of daily living and travel over a year or a few years. Click here to calculate the dose rate where you live, in order to compare it to the figures in the WHO report.
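To put the WHO numbers in rough perspective, here is a back-of-the-envelope comparison. The reference doses below (chest CT scan, long flight, average annual US exposure) are commonly cited approximations and are my assumptions for illustration only; they do not come from the WHO report.

```python
# Rough comparison of the WHO-reported Fukushima doses to everyday exposures.
# All reference values are commonly cited approximations (assumptions), in mSv.

chest_ct_msv = 7.0        # ~7 mSv per chest CT scan (approximate)
flight_msv = 0.1          # ~0.1 mSv per long transcontinental flight (approximate)
us_avg_annual_msv = 6.2   # ~6.2 mSv/yr US average, including medical (approximate)

who_typical_msv = 10.0    # "most persons ... no more than 10 mSv" (WHO report)
who_max_msv = 50.0        # upper effective dose cited in the WHO report

for dose in (who_typical_msv, who_max_msv):
    print(f"{dose} mSv is roughly {dose / chest_ct_msv:.1f} chest CTs, "
          f"{dose / flight_msv:.0f} long flights, or "
          f"{dose / us_avg_annual_msv:.1f} years of average US exposure")
```

On these assumed figures, even the upper 50 mSv effective dose corresponds to several chest CT scans' worth of exposure, which helps explain the WHO conclusion.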

The Fukushima Daiichi accident has been given the same INES scale rating as the Chernobyl accident—a rating of 7, or “Major Accident.” This is because both accidents resulted in a release of radionuclides to the environment concurrent with reactor fuel damage. However, the release from Fukushima Daiichi was only about 10 percent that of Chernobyl; thus, the equivalent rating on the INES scale doesn’t tell quite the whole story.

Where do we go from here?

In terms of the Fukushima Daiichi site, the planned decontamination and decommissioning of the whole site might take as long as 40 years, according to TEPCO’s road map for site decommissioning. In the meantime, TEPCO will be performing a great deal of research on how to safely dismantle the nuclear plants, very likely with international cooperation.

Worldwide, each nation that has nuclear plants, or aspires to have them, has made some hard decisions. A few, like Germany, have decided to abandon nuclear plants entirely; Bulgaria recently decided not to build a nuclear plant as well. In most nations, though, reviews and reports on “lessons learned” from the Fukushima Daiichi accident have evolved into robust plans for action; this applies to the United States, South Korea, and China, three of the foremost proponents of nuclear energy. Many nations that did not have nuclear power before the accident but wished to have it remain on course to build nuclear plants; perhaps the best known of these efforts is underway in the United Arab Emirates. Many nations recognize the need for electricity for a more productive and safer society, and in a number of cases nuclear is the leading choice. (Also notable for entering into nuclear energy programs are Kenya, Vietnam, Turkey, and Kazakhstan.)

Indeed, it would seem that the greatly increased public dialogue and involvement after the accident on many varied aspects of nuclear energy (not just safety) has not led to widespread fear, as shown by favorable poll numbers in the United States. Even as time goes on, the polls in favor of nuclear power hold up.

This has allowed the present-day general discussion about greenhouse gases and varied energy generating sources to, for the most part, include nuclear energy on an intelligent and rational basis. Much of that basis centers on the passive safety features of new nuclear plants such as the Westinghouse AP1000, which is designed to endure station blackout (SBO) events for 72 hours with no operator action whatsoever, and after that time, with some operator action to transfer water, can maintain core and containment cooling indefinitely. The plant is also designed so that even in the event of a severe accident the core will remain inside the reactor vessel, an important step in preventing the release of radioactive material to the environment.

Nuclear plant operators and government regulators worldwide have responded to the Fukushima Daiichi accident with still-increasing vigilance, inspection, research, and action. It’s clear that such an accident must never be allowed to happen again—and by the actions being taken at least in the United States, it would appear that we are well on our way to ensuring that we can meet any and every challenge that future severe events might bring, for the safety of both the plant operators and the citizens they serve.


Will Davis is a consultant to, and writer for, the American Nuclear Society. In addition to this, he is a contributing author for Fuel Cycle Week, and also writes his own blog Atomic Power Review. Davis is a former US Navy Reactor Operator, qualified on S8G and S5W plants.

Friday Nuclear Matinee: Types Of Radiation

All ages are welcome at the Friday Nuclear Matinee for this short video from the UK. Kids get in free! Also, adults who enjoy British accents, cartoons, and the funny word “aluminium.”

Alpha particles, beta particles, gamma rays… what are they and how do they differ? This helpful video aimed at the younger set explains some of the basics. Enjoy!

Note: The visible glow from potassium uranyl nitrate (or sulfate) is not actually caused by radioactivity; see this post at ANS Nuclear Cafe for a more thorough, but equally entertaining, exploration of Becquerel's discovery of radioactivity.

Thanks to Virtual School UK.

2012 ~ The year that was in nuclear energy

Plus a few pointers to what’s in store for 2013

By Dan Yurman

Former NRC Chairman Gregory Jaczko

On a global scale the nuclear industry had its share of pluses and minuses in 2012. Japan’s Fukushima crisis continues to dominate any list of the top ten nuclear energy issues for the year. (See more below on Japan’s mighty mission at Fukushima.)

In the United States, while the first new nuclear reactor licenses in three decades were issued for four reactors, the regulatory agency that approved them had a management meltdown that resulted in the noisy departure of Gregory Jaczko, its presidentially appointed chairman. His erratic tenure at the Nuclear Regulatory Commission cast doubt on the agency's effectiveness and tarnished its reputation as one of the best places to work in the federal government.

Iran continues its uranium enrichment efforts

The year started with another bang, and not the good kind, as new attacks on nuclear scientists in Iran brought death by car bomb. In July, western powers enacted new sanctions on Iran over its uranium enrichment program. Since 2011, economic sanctions have reduced Iran's oil exports by 40 percent, according to the U.S. Energy Information Administration.

In late November, the U.S. Senate approved a measure expanding the economic sanctions that have reduced Iran’s export earnings from oil production. Despite the renewed effort to convince Iran to stop its uranium enrichment effort, the country is pressing ahead with it. Talks between Iran and the United States and western European nations have not made any progress.

Nukes on Mars

NASA’s Mars Curiosity Rover is a scientific and engineering triumph.

Peaceful uses of the atom were highlighted by NASA’s Mars Curiosity Rover, which executed a flawless landing on the red planet in August with a nuclear heartbeat to power its science mission. Data sent to Earth from its travels across the red planet will help determine whether or not Mars ever had conditions that would support life.

SMRs are us

The U.S. government dangled an opportunity for funding of innovative small modular reactors (SMRs), i.e., reactors with electrical power ratings of less than 300 MW. Despite vigorous competition, only one vendor, B&W, succeeded in grabbing a brass ring worth up to $452 million over five years.

The firm immediately demonstrated the economic value of the government cost-sharing partnership by placing an order for long lead time components. Lehigh Heavy Forge and B&W plan to jointly participate in the fabrication and qualification of large forgings for nuclear reactor components that are intended to be used in the manufacture of B&W mPower SMRs.

Lehigh Forge at work

The Department of Energy said that it might offer a second funding round, but given the federal government's overall dire financial condition, the agency may have trouble even meeting its commitments in the first round.

As of December 1, negotiations between the White House and Congress over the so-called “fiscal cliff” were deadlocked. Congress created this mess, so one would expect that they could fix it.

The Congressional Budget Office has warned that if Congress doesn’t avert the fiscal cliff, the economy might slip into recession next year and boost the unemployment rate to 9.1 percent in the fourth quarter of 2013, compared with 7.9 percent now. Even record low natural gas prices and a boom in oil production won’t make much of a difference if there is no agreement by January 1, 2013.

Japan’s mighty mission at Fukushima

Japan's major challenges are unprecedented for a democratically elected government. It must decontaminate and decommission the Fukushima site, home to six nuclear reactors, four of which suffered catastrophic internal and external damage from a giant tsunami and a record-shattering earthquake. The technical challenges of cleanup are daunting, and the price tag, already in the range of tens of billions of dollars, keeps rising, with a completion date now at least several decades in the future.

Map of radiation releases from Fukushima reported in April 2011

  • Japan is mobilizing a new nuclear regulatory agency that has the responsibility to say whether the rest of Japan’s nuclear fleet can be restarted safely. While the government appointed highly regarded technical specialists to lead the effort, about 400 staff came over from the old Nuclear Industry Safety Agency that was found to be deficient as a deeply compromised oversight body. The new agency will struggle to prove itself an independent and effective regulator of nuclear safety.
  • Japan has restarted two reactors and approved continued construction work at several more that are partially complete. Local politics will weigh heavily on the outlook for each power station, with the “pro” forces emphasizing jobs and tax base and the anti-nuclear factions encouraged by widespread public distrust of the government and of the nation's nuclear utilities.
  • Despite calls for a phase out of all nuclear reactors in Japan, the country will continue to generate electric power from them for at least the next 30–40 years.
  • Like the United States, Japan has no deep geologic repository for spent fuel. Unlike the United States, Japan has been attempting to build and operate a spent fuel reprocessing facility. Plagued by technical missteps and rising costs, Japan may consider offers from the United Kingdom and France to reprocess its spent fuel, thereby relieving itself of the plutonium it contains.

U.S. nuclear renaissance stops at six

The pretty picture of a favorable future for the nuclear fuel cycle in 2007 turned to hard reality in 2012.

In 2007, the combined value of more than two dozen license applications for new nuclear reactors weighed in at an estimated $120 billion-plus. By 2012, just six reactors were under construction. Few will soon follow in their footsteps, due to record low natural gas prices and the lingering effects of one of the nation's deepest and longest economic recessions.

The NRC approved licenses for two new reactors at Southern's Vogtle site in Georgia and two more at Scana's V.C. Summer station in South Carolina. Both utilities chose the Westinghouse AP1000 design and will benefit from lessons learned by the vendor, which is building four of them in China. In late November, Southern's contractors, which are building the plants, said that both reactors would enter revenue service a year late. For its part, Southern said that it has not agreed to a new schedule.

The Tennessee Valley Authority recalibrated its efforts to complete Watts Bar Unit 2, adding a three-year delay and over $2 billion in cost escalation. TVA's board told the utility's executives that construction work to complete Unit 1 at the Bellefonte site cannot begin until fuel is loaded at Watts Bar.

The huge increase in the supply of natural gas, resulting in record low prices for it in the United States, led Exelon Chairman John Rowe to state that it would be “inconceivable” for a nuclear utility in a deregulated state to build new reactors.

Four reactors in dire straits

In January, Southern California Edison (SCE) safely shut down two 1,100-MW reactors at its San Onofre Nuclear Generating Station (SONGS) due to excessive wear found in the nearly new steam generators at both reactors.

SCE submitted a restart plan for Unit 2 to the NRC in November. The review, according to the agency, could take months. SCE removed the fuel from Unit 3 last August, a signal that the restart of that reactor lies farther in the future, owing to the greater extent of the damage to the tubes in its steam generators.

The NRC said that a key cause of the damage to the tubes was a faulty computer program used by Mitsubishi, the steam generator vendor, in its design of the units. Steam flow rate, pressure, and water content were key factors, along with the design and placement of the brackets that hold the tubes in place.

Flood waters surround Ft. Calhoun NPP June 2011

Elsewhere, in Nebraska the flood stricken Ft. Calhoun reactor owned and operated by the Omaha Public Power District (OPPD), postponed its restart to sometime in 2013.

It had shut down in April 2011 for a scheduled refueling outage. Rising flood waters along the Missouri River in June damaged the plant site, though the reactor and switchyard remained dry.

The Ft. Calhoun plant must fulfill a long list of safety requirements before the NRC will let it power back up. To speed things along, OPPD hired Exelon to operate the plant. In February 2012, OPPD cancelled plans for a power uprate, also citing the multiple safety issues facing the plant.

In Florida, the newly merged Duke and Progress Energy firm wrestled with a big decision about what to do with the shutdown Crystal River reactor. Repairing the damaged containment structure could cost half again as much as an entirely new reactor. With license renewal coming up in 2016, Florida’s Public Counsel thinks that Duke will decommission the unit and replace it with a combined cycle natural gas plant. Separately, Duke Chairman Jim Rogers said that he will resign at the end of 2013.

China restarts nuclear construction

After a long reconsideration (following the Fukushima crisis) of its aggressive plans to build new nuclear reactors, China’s top level government officials agreed to allow new construction starts, but only with Gen III+ designs.

China has about two dozen Gen II reactors under construction; it will be 40–60 years before the older technology is off the grid. China also reduced its outlook for completed reactors from an estimated 80 GWe by 2020 to about 55–60 GWe. Plans for a massive $26-billion nuclear energy IPO (initial public offering) still have not made it to the Shanghai Stock Exchange. No reason for the delay has been made public.

India advances at Kudankulam

India loaded fuel at Kudankulam, where two Russian-built 1000-MW VVER reactors are ready for revenue service. The Indian government overcame widespread political protests in its southern state of Tamil Nadu. India's Prime Minister Singh blamed the protests on international NGOs (non-governmental organizations).

One of the key factors that helped the government overcome the political opposition is that Nuclear Power Corporation of India Limited told the state government that it could allocate half of all the electricity generated by the plants to local ratepayers. Officials in Tamil Nadu will decide who gets the power. India suffered two massive electrical blackouts in 2012, the second of which stranded over 600 million people without electricity for up to a week.

Also, India said that it would proceed with construction of two 1600-MW Areva EPRs at Jaitapur on its west coast south of Mumbai and launched efforts for construction of up to 20 GWe of domestic reactors.

India’s draconian supplier liability law continues to be an effective firewall in keeping American firms out of its nuclear market.

UK has new builder at Horizon

The United Kingdom suffered a setback in its nuclear new build as two German utilities backed out of the construction of up to 6 GWe of new reactors at two sites. Japan's Hitachi successfully bid to take over the project. A plan for a Chinese state-owned firm to bid on the Horizon project in collaboration with Areva never materialized.

Also in the UK, General Electric pursued an encouraging dialogue with the Nuclear Decommissioning Authority to build two of its 300-MW PRISM fast reactors to burn off surplus plutonium stocks at Sellafield. The PRISM design benefits from the technical legacy of the Integral Fast Reactor developed at Argonne-West in Idaho.

You can’t make this stuff up

In July, three anti-war activists breached multiple high-tech security barriers at the National Nuclear Security Administration's Y-12 highly enriched uranium facility in Tennessee. The elderly trio, two men on the dark side of 55 and a woman in her 80s, were equipped with ordinary wire cutters and flashlights.

Y-12 Signs state the obvious

The intruders roamed the site undetected for several hours in the darkness of the early morning and spray painted political slogans on the side of one of the buildings. They were looking for new artistic venues when a lone security guard finally stopped their travels through the plant.

The government said that the unprecedented security breach was no laughing matter, firing the guards on duty at the time and the contractor they worked for. Several civil servants “retired.” The activists, if convicted, face serious jail time.

None of the HEU stored at the site was compromised, but subsequent investigations by the Department of Energy found a lack of security awareness, broken equipment, and an unsettling version of the “it can't happen here” attitude among the guards, who initially mistook the intruders for construction workers.

The protest effort brought publicity to the activists’ cause far beyond their wildest dreams and produced the predictable uproar in Congress. The DOE’s civilian fig leaf covering the nation’s nuclear weapons program was once again in tatters.

So long Chu

Given the incident at Y-12, Energy Secretary Steven Chu, who came to government from the quiet life of scientific inquiry, must have asked himself once again why he ever accepted the job in Washington in the first place.

DOE Energy Secretary Steven Chu

Chu is expected to leave Washington. That he has lasted this long is something of a miracle, since the Obama White House tried to give him the heave-ho this time last year after the Solyndra loan guarantee debacle, in which charges of political influence peddling by White House aides colored a half-billion-dollar default on a DOE loan by a California solar energy company.

The predictable upswing in rumors about who might be appointed to replace him oozed into the energy trade press and the political saloons of the nation's capital.

Leading candidates are former members of Congress, former governors, or just about anyone with the experience and political know-how to take on the job of running one of the federal government's biggest cabinet agencies. It's a short list of people who can really do the job and a long list of wannabes. With shale gas and oil production on the rise, a background in fossil fuels will likely help prospective candidates.


Dan Yurman published the nuclear energy blog Idaho Samizdat from 2007–2012.

ANS Nuclear Cafe Matinee: The Radioactive Orchestra

With the 2012 American Nuclear Society Winter Meeting wrapping up in San Diego, a musical tribute to the nuclear sciences and technologies—and now, the nuclear arts!— is in order.

In this excellent TEDx presentation, media artist Kristofer Hagbard talks about making a musical connection to the world of atoms, and demos the orchestra’s first live musical instrument powered by radioactivity. His introductory explanation of the complexity, and yes, the beauty of the hidden world of radioactivity, as seen through the lens of the artist, is quite profound. Highly recommended viewing.

Visit The Radioactive Orchestra's website for much more information, more videos, and an album of the orchestra's works.

Challenging scientific organizations to adhere to scientific methods

By Rod Adams


For more than two years, I have been privileged to be included in correspondence about a battle for truth led by Ted Rockwell, one of the pioneers of nuclear energy and radiation protection. He continues to seek the support of nuclear energy and radiation protection professionals in an effort to encourage the New York Academy of Sciences (NYAS) to do something that is apparently difficult for any large organization: apologize and take effective action to correct a continuing mistake.

NYAS book on Chernobyl effects rejects the scientific method

Here is a brief background of the error. It will be followed by a call to action.

The work selected as the December 2009 edition of The Annals of the New York Academy of Sciences (NYAS) was an expansion of a report originally published in Russian and translated into English under the sponsorship of Greenpeace International. The NYAS book, titled Chernobyl: Consequences of the Catastrophe for People and the Environment, comes to conclusions about the effects of the accident that are in stark opposition to those reached by the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR).

Where the UNSCEAR report indicates that the total number of deaths caused by the accident through 2006 was less than 50, the book that the NYAS selected as its December 2009 Annals edition claims that there were 985,000 deaths attributable to the accident. It is difficult to comprehend the possibility that two scientific studies of the same event could differ by a factor of 19,700.
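For what it's worth, the factor quoted above follows directly from the two headline numbers:

```python
# The gulf between the two estimates, expressed as a simple ratio.
unscear_deaths = 50          # UNSCEAR: fewer than 50 deaths through 2006
nyas_book_deaths = 985_000   # Chernobyl Consequences claim

ratio = nyas_book_deaths / unscear_deaths
print(ratio)  # 19700.0
```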

Fortunately, the authors of Chernobyl Consequences provide a reasonable explanation for the vast gulf between their conclusions and those reached by the scientific organizations that studied the accident's effects. I am paraphrasing here, but the bottom line is that the authors, publishing sponsors, and editors involved in the project had no intention of performing any scientific or statistical analysis. Instead, they spent their time compiling as many anecdotes as they could find to support their preexisting mission.

Here are some quotes from Chernobyl Consequences that support my summary of their goals and methods:

(Causal thesis)
We believe it is unreasonable to attribute the increased occurrence of disease in the contaminated territories to screening or socioeconomic factors because the only variable is radioactive loading. Among the terrible consequences of Chernobyl radiation are malignant neoplasms and brain damage, especially during intrauterine development. (p. 2)

(Rejection of correlation requirements)
Why are the assessments of experts so different?
There are several reasons, including that some experts believe that any conclusions about radiation-based disease requires a correlation between an illness and the received dose of radioactivity. We believe this is an impossibility because no measurements were taken in the first few days. Initial levels could have been a thousand times higher than the ones ultimately measured several weeks and months later. (p. 2)

(Rejection of impact of other variables)
In independent investigations scientists have compared the health of individuals in various territories that are identical in terms of ethnic, social, and economic characteristics and differ only in the intensity of their exposure to radiation. It is scientifically valid to compare specific groups over time (a longitudinal study), and such comparisons have unequivocally attributed differences in health outcomes to Chernobyl fallout. (p. 3)

(Anecdote collection method)
The scientific literature on the consequences of the catastrophe now includes more than 30,000 publications, mainly in Slavic languages. Millions of documents/materials exist in various Internet information systems—descriptions, memoirs, maps, photos, etc. For example in GOOGLE there are 14.5 million; in YANDEX, 1.87 million; and in RAMBLER, 1.25 million citations. There are many special Chernobyl Internet portals, especially numerous for “Children of Chernobyl” and for the Chernobyl Cleanup Workers (“Liquidators so called”) organizations. The Chernobyl Digest—scientific abstract collections—was published in Minsk with the participation of many Byelorussian and Russian scientific institutes and includes several thousand annotated publications dating to 1990. At the same time the IAEA/WHO “Chernobyl Forum” Report (2005), advertised by WHO and IAEA as “the fullest and objective review” of the consequences of the Chernobyl accident, mentions only 350 mainly English publications. (Preface p. xi)

(Rejection of statistical methodology)
It is methodologically incorrect to combine imprecisely defined ionizing radiation exposure levels for individuals or groups with the much more accurately determined impacts on health (increases in morbidity and mortality) and to demand a “statistically significant correlation” as conclusive evidence of the deleterious effects from Chernobyl. More and more cases are coming to light in which the calculated radiation dose does not correlate with observable impacts on health that are obviously due to radiation.

(Emphasis added.)

Though Greenpeace International and its favored authors are free to print any material they want, and people are free to read that material to reinforce their existing belief that radiation at any level is harmful, it is the responsibility of the scientific community to provide accurate information and to submit its work for independent peer review. The normal process of challenging assumptions, correlating causes and effects, performing valid statistical analysis, and accounting for confounding variables is what allows reasonably correct decision making.
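As a minimal sketch of what that normal process looks like in practice, consider a simple two-proportion significance test of the kind the book's authors explicitly rejected. All case counts below are invented for illustration; the point is that an apparent excess in disease incidence can easily fall within ordinary statistical noise.

```python
# Hypothetical example: does an exposed population show a statistically
# significant excess of disease compared with an unexposed population?
# Uses a standard two-proportion z-test (normal approximation).
import math

def two_proportion_z(cases_a, n_a, cases_b, n_b):
    """Return (z, two-sided p-value) for the difference in proportions."""
    p_a, p_b = cases_a / n_a, cases_b / n_b
    pooled = (cases_a + cases_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented counts: 120 cases among 10,000 exposed vs 100 among 10,000 unexposed,
# i.e. a nominal 20% excess in the exposed group.
z, p = two_proportion_z(120, 10_000, 100, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # not significant at the 0.05 level
```

A 20-percent excess sounds alarming in an anecdote, yet in this sample it is indistinguishable from chance; that is exactly why demanding statistical significance, rather than collecting anecdotes, matters.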

Electronic version of NYAS book available for download

Though the decision to publish Chernobyl Consequences took place more than three years ago, it should not be relegated to the category of old news. The NYAS might have stopped printing the paper-bound book, but the electronic version remains readily available for purchase or download by NYAS members. The publication website contains links to several reviews and responses that are available only to people with academic subscription services, or to people who care enough about the issue to lay out $39.95 per letter to the editor. Just one of the linked responses is available to the public without additional fees; it is a devastating review written by M. I. Balonov of the Institute of Radiation Hygiene in St. Petersburg, Russia.

I purchased the response from Yablokov and Nesterenko to the criticism of S. V. Jargin so you would not have to. It provides more fodder for my assertion that the authors have specifically challenged the notion that the scientific method is important, and it includes a veiled accusation that should offend nuclear energy professionals.

In the Foreword, the Introduction and in Chapter II, it is mentioned that obliteration of those publications is not acceptable both from a moral and an ethical (note that in general, medical practitioners could only add short statements about their studies in numerous scientific and practical conferences) but also from a methodological point of view (when the sample number is very large, there is no necessity to use statistical methods developed for a small number of samples).

In this respect, criticizing us with the fact that our conclusions are in disagreement with those of IAEA (2006) and the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR 2000) cannot but be surprising. The book itself was written as a counterpart to reports of official experts that may be connected to nuclear industry.

(Emphasis added.)

In response to Ted Rockwell's sustained pressure, the staff of the Annals of the NYAS made some adjustments to the site hosting the book. They published what they described as a disclaimer, making it clear that the NYAS did not commission the book and that the opinions and conclusions are the responsibility of the authors, not the NYAS. However, the “disclaimer” also states that the book falls into the category of work deemed “scientifically valid by the general scientific community.”

Annals of the New York Academy of Sciences issue “Chernobyl: Consequences of the Catastrophe for People and the Environment”, therefore, does not present new, unpublished work, nor is it a work commissioned by the New York Academy of Sciences. The expressed views of the authors, or by advocacy groups or individuals with specific opinions about the Chernobyl volume, are their own. Although the New York Academy of Sciences believes it has a responsibility to provide open forums for discussion of scientific questions, the Academy has no intent to influence legislation by providing such forums. The Academy is committed to publishing content deemed scientifically valid by the general scientific community, from whom the Academy carefully monitors feedback.

That phrase “has no intent to influence legislation by providing such forums” was apparently selected to protect the tax-exempt status of the NYAS, but it has no meaning in this instance. There is no pending legislation that could be remotely influenced by an honest discussion that evaluates the scientific merit of the December 2009 edition of the Annals of the New York Academy of Sciences. The discussion and resulting evaluation, however, would partially restore the scientific integrity of the organization as one that acknowledges that everyone is entitled to their own opinion, but not their own set of facts.

Challenge to integrity of scientific and technical professionals

There are many correct ways to do good science, but there is a generally accepted method and process for gleaning truth, gathering evidence, and evaluating causation. It is the responsibility of everyone who has a professional interest in properly informing the public about their subject to challenge those who seek to portray fiction as fact. It is especially damaging to the truth to allow anyone to publish direct challenges to science, and to the professional integrity of thousands of people, under the imprint of an organization like the New York Academy of Sciences.

Quiet pressure from a long-time member of the NYAS has not resulted in any effective action. Perhaps individual letters to the NYAS leadership sent by dozens of qualified professionals will have more impact.



Rod Adams is a nuclear advocate with extensive small nuclear plant operating experience. Adams is a former engineer officer, USS Von Steuben. He is the host and producer of The Atomic Show Podcast. Adams has been an ANS member since 2005. He writes about nuclear technology at his own blog, Atomic Insights.

A Salute to Medical Ionizing Radiation During Breast Cancer Awareness Month

By Bryan Bednarz

As a cancer researcher, I am constantly reminded of the horrific impact that breast cancer has on women and their families. This past week I received notification from my boss informing me and others that a work colleague’s daughter had recently passed away from breast cancer at the age of 40—certainly this reminder was much closer to home than usual. It is difficult to imagine the pain and suffering my colleague and his wife are now experiencing, adding to what I am sure was a nerve-racking and exhausting period of consultations for the family and treatments for his daughter.

Upon hearing this news, it was hard not to feel a slight sense of guilt that we, as a cancer research community, have not yet eradicated this horrible disease. In the year 2011 alone, breast cancer impacted the lives of an estimated 230,480 women in the United States. It is also difficult not to feel frustrated with the limited amount of money that our nation spends on cancer research, and even more so by the fact that this budget is stagnant and constantly threatened.

The National Cancer Institute (NCI), our country’s preeminent funding agency for cancer research, has an annual budget of about $5 billion. When compared to, for example, the $12 billion spent by the United States every month in 2008 on Middle East conflicts, NCI’s budget and the war on cancer seem like an extremely small fish in a very big pond. Cancer research will continue to be held hostage at this year’s congressional budget meeting, while patients and their families hold on to their hope for a cure.

I too share this hope. Along with increased awareness programs, such as Breast Cancer Awareness Month, new scientific achievements in screening and therapy are improving survival rates amongst breast cancer patients and are paving the way for a cure to this disease. This improvement is largely due to advances in the ability to harness the beneficial power of ionizing radiation to detect and treat breast cancer.

Lesions smaller than a millimeter can now be detected with mammography. A mammography unit fires a beam of x-rays through the breast, taking a snapshot of the tissue under the skin. Because cancerous tissue is often denser than normal breast tissue, it casts a shadow, making it visible to the naked eye on a monitor screen. Currently, the American Cancer Society recommends yearly mammograms for women age 40 and older, and a clinical breast exam every three years for women in their 20s and 30s. While it is common for women to be concerned about the risks associated with screening, a mammogram actually delivers a very low dose (approximately 1.5 mSv to the glandular tissue), which corresponds to a minute risk. Certainly the gain from undergoing screening greatly outweighs the risk, as early detection is key to successful treatment outcomes.
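To see roughly why a 1.5 mSv screening dose corresponds to a "minute" risk, one can multiply the dose by the ICRP's nominal linear no-threshold risk coefficient (about 5.5% per sievert, from ICRP Publication 103). The sketch below is a deliberately crude back-of-the-envelope estimate, not a clinical calculation — it treats the glandular dose as if it were a whole-body effective dose, and the LNT coefficient itself is a conservative regulatory assumption rather than a measured risk at low doses:

```python
# Back-of-the-envelope LNT estimate of nominal excess cancer risk
# from a single screening mammogram. Illustrative only: the LNT
# coefficient is a conservative regulatory assumption, and the
# glandular dose is treated here as a whole-body effective dose.

ICRP_RISK_PER_SV = 0.055    # ICRP 103 nominal risk coefficient (~5.5% per Sv)
MAMMOGRAM_DOSE_SV = 1.5e-3  # ~1.5 mSv mean glandular dose per exam

excess_risk = MAMMOGRAM_DOSE_SV * ICRP_RISK_PER_SV
print(f"Nominal LNT excess risk per exam: {excess_risk:.1e}")
print(f"That is roughly 1 chance in {round(1 / excess_risk):,}")
```

Even under the conservative LNT assumption, the result is on the order of one chance in ten thousand per exam — small compared with a woman's roughly 1-in-8 lifetime probability of developing breast cancer, which is the risk that screening is designed to reduce.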

We are also making tremendous strides in our ability to treat breast cancers with radiation. Many patients each year are treated with radiation produced by a medical linear accelerator, a sophisticated machine that generates high-energy x-rays or electrons directed at cancerous tissue. Metal collimating blocks within the accelerator shape the treatment beam so that it conforms to the cancerous region of the breast while sparing the surrounding healthy tissue from irradiation.

Alternatively, radioactive sources (e.g. Iridium-192) and miniature accelerators the size of the core of a pencil can now be implanted adjacent to tumors to provide even better treatment precision. Even more exciting are therapy interventions that combine ionizing radiation with novel pharmaceuticals to take advantage of the benefits offered by each. Likely, it will be these therapy “cocktails” that will result in a breast cancer cure.

With the help of radiation in medicine, tremendous advances have been made in the detection and treatment of breast cancer, and I remain optimistic that we are on the verge of scientific breakthroughs that will save the lives of thousands more women each year.


Bryan Bednarz is assistant professor of medical physics at the University of Wisconsin. His research focuses on solving problems at the interface between physics and biology. He received B.S. and M.S. degrees from the Department of Nuclear Engineering and Radiological Sciences at the University of Michigan, followed by a Ph.D. from the Department of Nuclear Engineering and Engineering Physics at Rensselaer Polytechnic Institute. He has been an active member of ANS since 2001 and serves on the Executive Committee of the ANS Biology and Medicine Division.

In case of atomic bomb, beer still OK

Science historian Alex Wellerstein recently wrote of a series of nuclear weapons tests conducted in 1955 at the Nevada Test Site, known as Operation Teapot. Among the important civil defense questions explored at the time was: What will the survivors drink after a nuclear apocalypse?

There was only one way to find out, at least according to the Atomic Energy Commission (AEC) of the day: Detonate an atomic bomb near some bottles and cans of soda and beer and then examine the evidence.

The results, it turns out, are quite reassuring. Many bottles and cans survived the nuclear blast, even those only a quarter mile away. Further, only those closest to Ground Zero registered radioactivity—but these readings were “well within the permissible limits for emergency use,” according to the AEC.

Nearly as important: “Immediate taste tests indicated that the beverages, both beer and soft drinks, were still of commercial quality.”  (Immediate taste tests?)

Unfortunately, the flavor of the beers very close to Ground Zero was “definitely off,” the AEC concluded. Nonetheless, the ANS Nuclear Cafe concurs that it may be advisable, for readers who are so inclined, to keep a six-pack or two in one’s basement, as part of any sensible emergency preparedness program.

For further research:

Stefani Bishop announces the breaking news at WRKR radio, Kalamazoo, Michigan:

Atomic Beer Testing

Alex Wellerstein’s Restricted Data blog: “Beer and the Apocalypse”

NPR’s Robert Krulwich’s sciencey blog: “U.S. Explodes Atomic Bombs Near Beers To See If They Are Safe To Drink”