Why didn't the Atomic Age happen?

Outline of Atomic Dawn

Part I was going to cover the Henry Wallace administration, and was the only part worked out in detail. The PoD would be that FDR decides to keep Wallace, and the TL starts as the Bomb is being dropped on Japan. Here's the intro, "A Terrible Resolve":

“In the last millisecond of the Earth's existence – the last men will see what we saw.”
-George Kistiakowsky​

Thirty-one thousand feet above Japan, the bomb bay doors of the B-29 slide open. A few seconds later, the Bomb falls free.

The timer ticks down. Fifteen seconds after release, the batteries power up the APS-13 RADARs, and weapon control passes to the barometric altimeter. Air pressure gently warps the altimeter's metal membrane, until the Bomb reaches 6,000 feet and the RADARs begin to fire, probing for the surface below.

Forty-four seconds after drop and one thousand, nine hundred and eighty-six feet above the Earth, the RADARs register the correct return, and the Bomb fires. Three BuOrd Mk15 Mod 1 primers ignite two pounds of cordite, the explosion slamming nine rings of highly-enriched uranium down the gun tube at one thousand feet per second. Ten milliseconds later the projectile reaches its target, slipping neatly onto six uranium target rings, and the chain reaction begins.

A neutron splits an atom of uranium-235. Two neutrons are released in the fission, which split two atoms, and then four atoms, eight, sixteen, the reaction building in a fraction of a fraction of an instant. The incandescent heat of the fission fragments begins to blow the assembly apart, the depleted uranium tamper holding it in place for an instant longer, the inertia of the heavy metal restraining a ball of plasma hotter than the sun and allowing the reaction to race for a few moments more, until it can hold no longer and the Bomb becomes a rapidly expanding sphere of white-hot vapor, swelling into a fireball hundreds of feet wide.

The infrared pulse slams into the ocean, flashing the water's surface into steam. The blast wave follows, churning waves that gently rock the handful of boats still moored in the harbor. And throughout the city, shattered and burnt by weeks of firebombing, eyes turn to see the first stirring of a new kind of fire, of a sun brought to Earth in the sky above Tokyo Bay...

-Bering, Erik. The History of the Atomic Bomb.

“Today the United States Army Air Forces demonstrated the first of a new type of weapon against the crumbling Japanese Empire: an atomic weapon, a weapon drawing its force and power from reactions in the nucleus of the atom rather than from chemical energy. This weapon was delivered by a single B-29 bomber, and detonated in the air over Tokyo Bay, creating a fireball half a mile wide.

“No nation can stand against the power of this weapon. A single airplane armed with a single atomic bomb may destroy a great and mighty city – with a single bomb. If the Japanese government does not surrender unconditionally within ninety-six hours, I have ordered the Army Air Forces to employ this weapon against Japanese cities.

“Japan now faces the prospect of obliteration from the air...”

-Speeches of Henry Wallace.

There are many within the government who feel the weapon should have been used, not demonstrated. We have spent two billion dollars on the atomic bomb, and we have only two; more will come, but they will not be ready for weeks. Besides, for all the terror of the atom bomb, it is no new horror that would be visited upon the Japanese. We killed a quarter of a million people in one night when the B-29s hit Tokyo with firebombs. The atomic bomb is merely a more efficient form of destruction. The purpose of the atomic bomb is to bring the war to an end; a demonstration is more likely to inure the Japanese against the shock than to force them to the table. Nonetheless, I ordered the first bomb used as a warning, not a weapon.

I did this for one reason. There can be no turning back from this discovery, no returning to the age of atomic innocence – and war between men will not end when the present conflict is over. If war should come again, the atom will surely be used, and perhaps even more terrible weapons, that might mean the end of man on this Earth. Even as the present conflict draws to a close, we must focus all our efforts to ensure this will not happen. That must surely mean international control and comity between the powers. By demonstrating but not employing the bomb, the United States both shows the seriousness of the matter, and shows that we can be trusted with this new and awesome form of energy. Even if that should mean the war continues, it is the choice between a few millions of lives now, and hundreds of millions ten years hence.

The birth of atomic energy has changed the world irrevocably. After this, there can be no more secret alliances, no more arms races, and no more wars, for a war with atomic weapons will surely destroy us. A just and amicable peace, enforced if necessary by military force at the direction of the United Nations, is the only chance for the survival of our civilization. I can only hope that others realize that in time.

-Wallace, Henry. Presidential Diaries, Vol. I: 1945-1946.

Atomic Dawn

There Can Be No Turning Back

So Wallace uses the Bomb as a demonstration, and Japan surrenders before it's used again. Part I begins with Wallace's efforts to negotiate international control of atomic energy with the Soviets. This happened IOTL too, under Truman. That obviously didn't succeed, but Wallace is going to try a lot harder than Truman did.

I have a somewhat more sympathetic read of Wallace than, I think, most people on this board. I see him as fundamentally a scientist, not a politician. That's both his greatest weakness and his greatest strength. He trusts people he shouldn't trust, and one of them is Stalin. He's also thought through the Bomb a lot more than Truman had - he's had more time to, since he learned of it as VP before 1944, and had little to do as VP besides think about it. Fundamentally, I don't think it was impossible for the US and Russia to reach a negotiated settlement on arms control in 1946 if we had been willing to trust each other. Wallace is willing to trust Stalin, but Stalin isn't willing to trust Wallace, whom he views as a fool. But he's willing to play him for a while.

The other thing about Wallace, though, is that he's not going to be fooled forever. One of the things we forget about him is that, IOTL, he turned against Stalin after the Korean War broke out, and even endorsed Eisenhower in 1952. We forget that because it doesn't fit anyone's narrative: it suits both those who like him and those who hate him to regard him as basically a pacifist. But Wallace was no pacifist: he had, after all, been VP during the deadliest war in human history. And he has both the courage to admit he's wrong - that's part of his strength as a scientist instead of a politician - and a real Manichean streak to his character. Once he realizes Stalin has been playing him, he will turn on him, utterly and completely, and he will begin preparing the United States for war.

Because Wallace believes that - in the absence of international arms control and, eventually, a world government - war is inevitable. And he has thought deeply about what that implies. He will prepare the United States not to deter such a war, but to fight it. He won't start it - he's not a monster - but he will try to get us ready... Only by this point he's already alienated everyone due to his seeming appeasement of Stalin, and is widely regarded as a likely Communist traitor, despite his turn towards militarism.

Timeline-wise, my plan was that Washington has already turned on Wallace by 1948, but that his seeming success with Stalin and his better handling of the post-war demobilization means he wins reelection anyway. But the bottom falls out soon after when the Korean War or some analogous situation occurs. Wallace makes his pivot, but only succeeds in alienating most of his remaining supporters, who are die-hard peaceniks. And atomic weapons are used in Korea - I hadn't quite figured out how or by whom, but they are. I considered both having Wallace use them, and having the Russians employ one as a demonstration when it looks like US forces might completely overrun North Korea.

Eventually Eisenhower wins in 1952, and at first it seems like history is returning to its original course after this strange blip. But there are some things going on under the surface that will ultimately change history completely:
  • The relationship between the Atomic Energy Commission and the military is much more distant and much colder. Historically, the AEC was formed to ensure civilian control over the atomic arsenal, and there were a few years where they took that obligation very seriously. That continues ITTL, as Wallace comes to believe - not without reason - that the military cannot be trusted with atomic weapons.
  • The Scientists' Movement - which IOTL was active in the late 40s to support civilian control of atomic energy and international arms control, but which disappeared after Korea - continues to be a significant force in politics. By the end, the AEC and the Scientists' Movement are basically Wallace's only remaining political allies.
  • The AEC has been ordered to set its own production targets.
  • The AEC is being much more careful with radiation.
  • International arms control is thoroughly discredited. By 1950, it is clear that Wallace has done everything that anyone could possibly do short of instigating a Communist revolution, and Stalin has repaid him very poorly.
  • The AEC has been given support of civil defense as a core task.
  • A major research program on the long-term health effects of radiation has begun, which is not limited to study of the Hiroshima and Nagasaki victims, as it was IOTL.
  • The atomic bomb has been used in combat after WW2.
  • Thanks to butterflies, Brien McMahon does not get cancer.
After that, my plans get more vague. I was probably going to skip the 1950s, because while a lot of stuff would be happening under the surface, it would mostly seem very similar to OTL. Part 2, Atomic Skies, begins in 1960, when Brien McMahon - chairman of the Joint Committee on Atomic Energy - is elected president on basically a platform of an atomic technotopia. Moving forward, my plans were:
  • A major public debate in the late 50s on radiation safety, as the technotopians of McMahon regard the AEC as far too cautious and safety-bound. The initial findings of the long-term radiation health effects study play into this, and ultimately lead to a public consensus to loosen the reins on safety. Critically, this is happening because the public - with McMahon's instigation - wants it, essentially a reversal of what happened IOTL.
  • A successful Aircraft Nuclear Propulsion program, leading to widespread use of atomic energy in the Air Force, and eventually civilian atomic airplanes in the 70s.
  • A massive civil defense program in the US.
  • The establishment of a National Atomic Power Authority in 1961, basically an atomic TVA covering the entire country.
  • The use of high-temperature gas-cooled reactors based on ANP for civilian power.
  • A seemingly successful Project Plowshare - though when you dig into the details, it's really more about publicity stunts than economics.
  • Slowly escalating funding and public support for science and technology. NASA isn't going to Mars in the 70s, but they are building a moonbase.
  • Missile defense.
  • The replacement of the libertarians in public consciousness by Technocracy, Inc. IOTL, Technocracy faded into an irrelevant personality cult after WW2, following a failed coup d'etat in the late 40s that split the group. The splitters subsequently disappeared pretty quickly. ITTL, thanks to a charismatic leader with no counterpart in OTL's historical record, the coup succeeds, and Technocracy, Inc. goes on to play the same role that Ayn Rand played IOTL. Part of why the TL never ended up being written is that I found out more about the technocrats and got too disgusted with Technocracy, Inc. to let them do that.
  • No Vietnam War.
  • Lest people think this is a utopia: nuclear proliferation on a mass scale.
Part 3 begins some time in the 70s. Following a terror attack using a stolen nuclear weapon that almost leads to WW3, America wakes up from the technotopian dream and realizes that, yes, they have cheap electricity and flying aircraft carriers and a moonbase... But they have also spread weapons-grade nuclear material so widely that it is impossible to put the genie back in the bottle, and new countries are going nuclear on a yearly basis. Part 3, Atomic Empire, covers the attempt to put the genie back in anyway, including establishing something like an overt American empire in the western hemisphere, and eventually a reconciliation with Russia.
 

Archibald

Wow, this is... great. I always liked the Tokyo Bay test option (do you know Kim Stanley Robinson's alt-Hiroshima novelette, The Lucky Strike?)
Does NERVA go forward?

civilian atomic airplanes in the 70s

Me like this.

A seemingly successful Project Plowshare - though when you dig into the details, it's really more about publicity stunts than economics.
My favorite Plowshare: Edward Teller's crazy scheme of detonating H-bombs in underground lakes to drive steam turbines and voilà, civilian nuclear fusion (who needs tokamaks or ITER when an H-bomb can do the job?) :p

When you think about it, Project Plowshare wasn't unlike this

With Edward Teller in the role of Homer Simpson, and nukes instead of fireworks.

"It's gonna take a lot of fireworks - nuclear weapons - to clean this place up."
 
Wow, this is... great. I always liked the Tokyo Bay test option (do you know Kim Stanley Robinson's alt-Hiroshima novelette, The Lucky Strike?)

I don't, no.

Does NERVA go forward?

https://www.alternatehistory.com/forum/threads/to-infinity-and-beyond-–-a-space-vignette.396359/

since the nuclear genie is out of the bottle, and proliferation is already rampant, I can see Orion and nuclear pulse in service.

I never really figured out what tech they were going to use for space travel. I wanted to avoid Orion, because it's kind of cliché, and because I think it has the wrong feel - it feels more heavy metal than atompunk. I really wanted to use NERVA for surface launch, but everything I read said that's ridiculously impossible - at most it might be possible to use it as a reusable upper stage, but that would probably require tech that wouldn't exist until long after the TL is finished, even with huge amounts of financing going into it.

... please make this :biggrin:

Sadly, I just don't have the time these days to do the research, even if I could sort out the remaining problems in the premise (like the issues with the technocrats). If I have time to do a TL, I'm going to finish Toxic Stars.
 

Archibald

I never really figured out what tech they were going to use for space travel. I wanted to avoid Orion, because it's kind of cliché, and because I think it has the wrong feel - it feels more heavy metal than atompunk. I really wanted to use NERVA for surface launch, but everything I read said that's ridiculously impossible - at most it might be possible to use it as a reusable upper stage, but that would probably require tech that wouldn't exist until long after the TL is finished, even with huge amounts of financing going into it.

I see your point, Orion is more brute-force than high-tech. Still, you should really go for it, if only because it matches pretty well with the proliferation issues and "atom madness" your world faces.
 
Eloquent post. Thank you. I may steal it for another discussion on nuclear power.
Well, thank you! But if I am to believe asnys, and he seems eminently trustworthy, my core argument (that fissionable fuels are inherently very expensive to acquire) seems to be wildly off base. The fuel is just 10 percent of an OTL power plant's lifetime cycle operation cost (including initial construction of the plant, I gather--or if not, it is even less). So griping about the expense of processing uranium is like arguing different rocket designs on the basis of propellant cost; per e of pi and others this is marginal, since propellant costs are a very minor part of the price of a rocket launch.

I suppose I have been misled by some 60s-80s antinuclear polemics that argued that nuke plants put out less energy than the energy inputs to them. This admittedly seemed screwy to me, and evidently was, but wondering how it could be that we nevertheless have the power plants led me to the weapons-program funding-recapture theory, which per asnys was pure hogwash. Nations with weapons programs probably do indeed in effect subsidize nuke power plants, in ways asnys has already himself outlined--by subsidizing the development and even construction of nuclear processing plants, for instance.
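The fuel-share argument is easy to sanity-check with a toy breakdown. Every dollar figure below is an illustrative assumption chosen so that fuel lands near the 10 percent figure asnys cited, not sourced cost data:

```python
# Toy lifetime-cost breakdown for a nuclear plant.
# All dollar figures are illustrative assumptions, not sourced costs.
construction = 6.0e9      # up-front capital
operations = 4.0e9        # operations and maintenance over a ~40-year life
fuel = 1.2e9              # mined, enriched, and fabricated fuel over that life
decommissioning = 0.8e9   # retirement and mothballing

total = construction + operations + fuel + decommissioning
fuel_share = fuel / total
print(f"fuel share of lifetime cost: {fuel_share:.0%}")  # → 10%
```

With these assumptions, even halving the cost of uranium processing would shave only about 5 percent off the lifetime bill, which is the sense in which griping about fuel cost resembles griping about rocket propellant cost.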

I was wrong. Evidently the high cost of nuclear power OTL mainly boils down to the high cost of building, maintaining, and retiring the power plants themselves. The future history/ATL question is, how much cheaper can these plants be?

Note that it would be possible to enter the "Atomic Age" of the OP without fully accounting for all costs. The cost of waste disposal, including safely mothballing the end-of-life power plants, is quite likely to be grossly underestimated, and it is entirely possible then that there might be a binge of nuke plant building and even cheap power, then when the bill comes due for waste disposal bad things happen. Either the true price of a kilowatt has to be adjusted upward to pay for disposal, causing major economic shock, social dislocation and likely severe political repercussions, or the governmental/industrial complex starts juggling to cover their mistake, shortchanging the safe disposal process and obfuscating generally, with the outcome of major contaminant leaks that might themselves be covered up to a maximum extent. This also would have severe consequences in all the categories above. Watergate and Vietnam rolled into one on steroids; who knows, we might get a successful Hippie Revolution in the '90s!

The hellish thing is that even if an angry public under a new regime with a radical environmental agenda (much more populist and deep grass roots than OTL, due to mass suffering from major releases of toxic wastes) resolves (perhaps not entirely wisely) to mothball the entire nuclear industry forthwith, they are still stuck with the wastes, for thousands of years to come. Hence the likelihood of a rival "stay the course" movement that punishes particular scapegoats for mismanagement but remains committed to the nuclear juggernaut. Call it rationality or call it sunk cost fallacy--God knows the sunk liability would be tremendous enough.

So, let's suppose that an alternate reactor design could beat coal in the cost department. ...
Now, although the LWR was the one that won, there were a number of other designs floating around in the 50s, including many that had small-scale prototypes built, and even a few commercial demonstration plants. These included aqueous homogeneous reactors, several different kinds of gas-cooled reactors, oil-cooled reactors, and others. Of these, the one that I think had the most potential over the same time frame as the LWR is the high-temperature gas-cooled reactor. Others may have more long-term potential - such as the Molten Salt Reactor of fame and story - but will require considerably more research to turn into commercial hardware.
Wasn't the core mission statement of General Atomics, the operation that hired Freeman Dyson and Ted Taylor in the late '50s, to develop civil uses of nuclear power? This is why I am left scratching my head as to why apparently none of these alternative designs, or yet others, were investigated as alternatives to LWR, and why the prospect (if it seemed to exist for any of them) of a much cheaper plant structure didn't attract any serious investment.

I could speculate thus:
1) despite the ideology of capitalism that assures us that progress happens via private capitalists taking risks in order to get an edge on competition, and this is the basis of profit, the fact is big establishments tend to be risk averse and more interested in trying to control known variables. They tend to leave the risky business of blue-sky research to government agencies or at any rate undertake it only with government funding. In the 50's USA (and USSR) this meant the military mainly.
2) the government establishments concerned with nuclear power were an interlocking directorate of Defense and the Atomic Energy Commission, which both regulated and promoted nuclear power. The latter should have been funding speculative research, and developing promising alternative modes of extracting fission power, but...
3) the contractors the US Navy hired to develop and build their pressurized LWR systems had a deep vested interest in that particular mode of power generation and equated it with "fission" as such. Were the AEC to present an alternative model unsuited to submarine installation but cheaper to build and operate than LWR, the prospect of leveraging the Naval contracts into a strong commercial presence as well would be thwarted; these firms would have no advantage over competitors in making the civil power plants and would have to divide capital investment between supporting the Navy and attempting to compete in the civil market, instead of synergistically using the same capital for both. Therefore it was not in their interest to have the AEC or any other state funded agency come up with a promising alternative. I don't have to suppose they were so cynical as to openly seek to suppress alternatives, merely that their strong influence in the interlocking directorate (via contacts at the Pentagon, long-established relationships there, political influence via Congress and Presidential electoral politics) would suggest to the AEC and Congressional committees that the established LWR program had everything well in hand and there was little need to "squander" large taxpayer funds on considering alternatives.

The question still arises why, in what was in the '50s and early 60's admittedly just a handful of parallel establishments, in Britain, France and the USSR, there wasn't some chance of alternatives being explored. But obviously they would all have parallel considerations, they too would hit on LWR early, and develop their own vested interests in it. And all of these rival national establishments had far less funds available than the USA potentially did. The Soviet system, driven by close to desperation (the high Kremlin officials knew, though didn't want to put out too loudly, that the West, mainly the USA, was ahead of them in every respect and feared it was all too likely we'd preemptively attack if we knew just how weak they were) and with a command society that could allocate admittedly scarce and inefficiently used resources lavishly on regime priorities, would be the strongest alternative. Poor per capita though it was, the Soviet bloc did command a very high population with a higher level of technical knowledge than mere comparison of standard of living would suggest, and had a rather futurist ideology too. But they had to prioritize vital defense, and if they had something in hand that seemed workable they'd tend to stick with it.
...And, unlike the other designs, I think there is a potential route to giving gas-cooled reactors the same kind of investment that LWRs got: the Aircraft Nuclear Propulsion Program.
Interservice rivalry! this addresses point 3 pretty directly! A bunch of scientists who think some other way than LWR is more promising are voices crying in the wilderness--unless the Air Force needs one of them, and now that alternative versus the Navy's favored system has a voice that will be heard in the interlocking directorate!
...ANP was an enormous project at the time, spending the equivalent of about $20 billion in today's money over ten years. Not quite as big as the submarine project, but still big. And it produced working hardware, including three nuclear turbojets that were static-tested in Idaho. Although two designs were considered, I want to focus on the direct-cycle option by GE. In its final incarnation, this consisted of an air-cooled, beryllium oxide-moderated reactor with uranium oxide fuel elements. Air would enter the turbojet, be ducted to the reactor, be heated by direct contact with the fuel elements, and then be ducted back to the turbojet. Now, it's not quite as simple as replacing the air with a closed helium loop and the beryllium oxide with graphite, but this is not too different from something that could be a very good power reactor.
I want to point out that if you disdain the Soviet RBMK, "upgrading" the gas cooled loop with graphite moderator is adopting the single feature that turned Chernobyl into a radioactive abattoir. Sure, everything is fine as long as it is helium circulating, but what happens if air gets into the loop?

I suppose a design derived from something developed for an airplane engine would not be nearly as massive as the RBMK cores to be sure. There might be relatively little graphite to burn, leaving the rest of the core to "simply" melt down or emit vapors thermally, not via combustion. Still, let's not condemn Ivan too harshly if graphite moderator seems like a fine idea to you!
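Worth noting: the direct-cycle engine quoted above is, thermodynamically, an ordinary Brayton cycle with the reactor core standing in for the combustor. A quick ideal-cycle calculation gives a feel for the efficiency at stake; the pressure ratio here is an assumed illustrative value, not a figure from the GE design:

```python
# Ideal Brayton-cycle thermal efficiency: eta = 1 - r**(-(gamma-1)/gamma),
# where r is the compressor pressure ratio. Swapping reactor heat for
# combustion heat leaves the cycle math unchanged.
gamma = 1.4            # ratio of specific heats for air
pressure_ratio = 12.0  # assumed; real engines of the era varied widely

eta = 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)
print(f"ideal thermal efficiency at r = {pressure_ratio:g}: {eta:.1%}")
```

Real direct-cycle efficiency would come out far lower, since turbine-inlet temperature was capped by what the fuel elements could survive, which is part of why the resulting airplane ends up "big, expensive, and slow."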
Now, for powering airplanes, this has some serious problems, even leaving aside the whole "crashing" thing. First, it's not going to be fast. It's just not.
I figure this is for 3 reasons. One, the weakest in itself, is sheer mass. A jet engine is pretty light for the tremendous power it develops. Not so a nuclear core obviously, especially if it is shielded to give crew a chance of survival. But this is offset by the lack of fuel, to an extent.
Two, air intake temperatures. An air-based heat engine needs to pressurize the air to be efficient, to be sure. Up to a certain speed, pretty high up in the supersonic regime, the ram air still needs further compression (and thus further heating) to reach adequate pressure levels for efficient power generation. So this point seems kind of shaky, but I suppose a specific design of reactor might have a rather low maximum intake temperature, and flying too fast would exceed it.
Three, at a given thrust level, power rises with speed. If we have a plant adequate for cruising at Mach 0.75, and wish to go at Mach 2 with the same thrust instead, we might need to nearly triple the power output. For a fission reactor that means tripling the core mass, and tripling the radiation output. The former is actually dependent on design; we could have designs that "burn hot" and consume fissionables at a higher rate and get more power out of the same mass--bringing it to depletion and end of life sooner in inverse proportion, of course. But radiation is a hard correlation, so much gamma-ray and neutron flux per kilowatt (thermal kilowatts, not necessarily useful output) in a fixed proportion. Also I believe supersonic aircraft require more thrust.
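Both the "nearly triple" figure and the intake-temperature worry can be checked with standard relations: propulsive power at constant thrust is P = thrust × velocity, and ram heating follows the stagnation-temperature formula. The ambient temperature below is an assumed high-altitude value:

```python
# Two back-of-envelope checks on the flight-physics points above.
GAMMA = 1.4        # ratio of specific heats for air
T_AMBIENT = 220.0  # K; assumed stratospheric static temperature

def stagnation_temperature(mach: float, t_static: float = T_AMBIENT) -> float:
    """Total temperature of ram air: T0 = T * (1 + (gamma-1)/2 * M^2)."""
    return t_static * (1.0 + (GAMMA - 1.0) / 2.0 * mach ** 2)

# At constant thrust, P = thrust * velocity scales linearly with speed,
# so Mach 2 versus Mach 0.75 at equal thrust needs:
power_ratio = 2.0 / 0.75
print(f"power ratio at equal thrust: {power_ratio:.2f}x")  # → 2.67x
print(f"ram air total temperature at Mach 0.75: {stagnation_temperature(0.75):.0f} K")
print(f"ram air total temperature at Mach 2.00: {stagnation_temperature(2.0):.0f} K")
```

So "nearly triple" checks out, before even counting the extra thrust supersonic flight demands, and the ram air at Mach 2 arrives some 150 K hotter than at cruise, eating into the temperature margin a reactor-heated engine can add.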
What it can do is stay aloft for a couple of weeks - its endurance is limited by maintenance and the crew's sanity, not by fuel. That could still be really useful, for things like missile carriers and command planes.

Unfortunately, that's not what the Pentagon wanted - they wanted a fast, high-altitude bomber, basically the XB-70 Valkyrie (which at one point was going to be nuclear-powered). This led to regular oscillations in the program's support, as it was alternately scaled up and cut back, which wasted a huge amount of money and time. Despite that, they still managed to produce a few turbojets, and by the time the program was cancelled in 1961, they basically knew how to build a nuclear airplane. It would be big, expensive, and slow, but it would fly, and it would not be completely useless.
Um, the thing is, desiring the air-cooled version probably does relate to desiring supersonic speeds. If we settle for subsonic cruise, in principle we could develop a version of LWR to drive propellers. The Soviet/Russian Tupolev "Bear" family of bombers was the USSR's answer to the US B-47 and B-52. The Americans used jets, later upgraded to turbofans, but the Russians used contra props driven by a turboprop engine. Their outcome was higher observability (including sheer noise, which I'm told could be heard by submerged submarines) and a bigger radar signature, but in terms of speed the Bears are competitive with the Buffs, which are after all pushing sonic limits anyway, and I believe they got significant endurance and range benefits with the more efficient turboprop setup.

Given subsonic cruise, I've seen a number of speculative options for nuclear powered planes. I was able to access a RAND Corporation study done IIRC in the mid-80s comparing concepts for a big transport, much larger than a C-5, that considered many options for propulsion systems, including alternative fuels such as methane and ammonia as well as synthetic kerosene (making that version a mere size extrapolation of the C-5), hydrogen, and nuclear power. The study meant to outline options for a future in which fossil-derived kerosene would no longer be available.

The nuclear option was not at all a direct air cycle; rather it was molten metal (a sodium and something else mix, IIRC) with a double loop: the interior reactor-core loop would, through a heat exchanger, transfer its heat to a second loop (to isolate radio-activation of the primary loop substance), which ran outside the shielded reactor core to the heat-input part of a more or less standard turbojet, where the liquid metal would be cooled in lieu of fuel combustion there; the heated air would first drive the turbine to drive the turbo compressor and possibly, likely actually, a fan, with the exhaust then completing the thrust, as in a normal turbofan. In fact, as a safety measure this study proposed that this engine be designed bimodal, using ordinary jet fuel (presumably synthesized per the study's premise) for takeoff and landing, only cruising on nuclear power with the kerosene flow turned off. The reactor core would supposedly be robust enough to survive a crash without cracking! The jet fuel reserve for a landing would also serve as part of the reactor shielding. The layout was unusual, with the jet engines, four of them powered by a single reactor, mounted on top of the fuselage above the reactor, placed in the center of mass of the airplane pretty much at the wing roots. Otherwise its layout was conventional for a big subsonic transport of the post-707 era.

Yet other schemes involve some sort of closed cycle turbine engines, hot gas or possibly steam, driven by a reactor to drive fans serving as the jets.

Whereas of course one rather infamous and well-known alternative here was the unmanned "Pluto" cruise missile, which would IIRC carry a number of short-range stand-off ballistic missiles to be fired at various targets on its trajectory (the thing being quite huge), propelled by a nuclear reactor in the form of an air breathing ramjet. IIRC Pluto would be quite supersonic, and it was anticipated the reactor core would leak quite a lot, spewing radioactive dust in its exhaust!

So really I'm not quite so sure why we are agreeing that a nuke powered jet could not be supersonic. Perhaps because Pluto or any other version running hot enough to produce necessary supersonic thrust on any scale would necessarily be dirty, and also much too "hot" in its core radiations for flight crew to survive on it or operate it on anything but a kamikaze basis? Clearly it would be possible for the intake air to be handled even at supersonic speeds. Clearly, barring some ATL breakthrough in making very small reactor cores, it would have to be scaled up to a massive airplane and making a plane both of unprecedented size and capable of supersonic cruise might indeed be too daunting a task, especially if one had any intention of ever safely landing it!

But accepting the stipulation that it must be subsonic, goals such as keeping it clean (its core neither decaying nor leaking), keeping radiation hazards within bounds so a flight crew could contemplate serving on it and hope to live normal lives, enabling safe landing, and keeping the size within the bounds of leaps forward that would be deemed possible in the late 50s, are all more sanely attainable, I guess.

But under those conditions, as I say, would the rational designer aim for direct heating of air by running it right through the reactor core? The idea of air as the medium, with its random contaminants and occasional ingested bird, its steady 20 percent oxygen, and variable amounts of water in various states, gives me the willies. Even granting that a gas-cooled reactor points to lighter structure and probably more efficient power output, I think I'd suggest from the get-go that it be a controlled gas, using ambient airflow to cool it back down, perhaps via a liquid intermediary heat exchanger or even active heat-pumping "air conditioning" using Freon, ammonia or some such, in a closed cycle, and that basically the turbine would be a turboshaft, driving ducted fans or even open propellers (pushers à la the B-36, or contraprops à la the Bear). After all, using props instead of heating the air for a jet effect would lower the net power demand, which would offset efficiency losses due to extra intermediary steps and the probably serious power drain of actively pumping heat exchangers to cool the working gas for another cycle. I would probably aim for helium as that working gas immediately, but might be discouraged by helium's way of leaking away while costing quite a bit to replace. Nitrogen might be good enough, and very similar in general thermodynamics to air (air being 80 percent nitrogen after all), but without the oxidation hazard or the unpredictability of air's tertiary components. Carbon dioxide was used in early-generation gas-cooled reactors such as those developed in Britain--though I should correct my own recall here: the Soviet RBMK shared graphite moderation with those British designs, but it was actually water-cooled, not gas-cooled.
(Advanced gas-cooled reactors, such as you are suggesting here, were already an Air Force project in the 50s. They differ from the British designs in that the British used the hot CO2 to boil water and run ordinary steam turbines, whereas in the advanced form the gas flows directly into gas turbines. Steam turbines are limited in their efficiency by the upper temperature at which our materials can handle pressurized steam, which sets a Carnot upper limit on efficiency well under 50 percent in practice, whereas gas turbines can handle considerably higher peak temperatures and thus become overall much more efficient, with theoretical ceilings in the 60-80 percent range versus 40 or less.)
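The temperature argument in the parenthetical can be sketched numerically. A minimal check, with peak temperatures that are my own illustrative round numbers rather than figures from any actual design study:

```python
def carnot_limit(t_hot_k: float, t_cold_k: float) -> float:
    """Thermodynamic ceiling: no heat engine between these temperatures can do better."""
    return 1.0 - t_cold_k / t_hot_k

T_COLD = 300.0  # K, roughly ambient heat rejection (assumed)

# Assumed peak temperatures, illustrative only:
steam_peak = 820.0   # K, ~550 C, near the limit for pressurized steam piping
gas_peak = 1500.0    # K, a plausible advanced gas-turbine inlet temperature

for name, t_hot in [("steam", steam_peak), ("gas turbine", gas_peak)]:
    print(f"{name}: Carnot ceiling {carnot_limit(t_hot, T_COLD):.0%}")
```

Real cycles reach only a fraction of the Carnot ceiling, which is why practical steam plants sit nearer 35-45 percent while hotter gas cycles can do considerably better.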

As a working fluid helium has the advantages of being strongly unlikely to be transmuted by neutron bombardment, almost perfectly chemically inert, the second-lightest "molecule" known to science (or, given current theory, possible), and immune to phenomena like dissociation that complicate the thermodynamic calculations. It is as close to an ideal gas as is physically possible! So I'd think it smart to go straight for it for such an application. As I said, the trick, in the context of an airborne power plant, is to cool it again for another cycle. But perhaps, at some sacrifice of efficiency and hence requiring a heavier core and plant, the turbine exhaust temperature can be such that heat is readily drained from it through gas/gas heat exchangers that don't create too much drag or weigh far too much, yet is cool enough to get decent efficiency from the cycle.
Now, I am personally unable to be objective about ANP. I just love it too much. But we don't need the plane to actually fly to do what we need it to do. It just needs to get further than it did historically, to produce technology that can eventually go into a power reactor. Give it more consistent support, maybe even fly a few demo flights - maybe, if you buy the arguments about radiation I'm eventually going to get around to making, have it become a major part of the Air Force fleet. That gets us tech that can eventually become a gas-cooled civilian power reactor.
Someone in the Air Force has to recognize, around the late 50s, that a giant "turboprop" is useful to the Air Force mission. Too bad the glory is aimed at supersonic bombers--though I suspect you do dismiss the supersonic option a bit too definitively. A subsonic plane like the B-52 would clearly seem to have its days numbered and of course a mega-plane is going to be made in smaller numbers, hence become a high priority target for Soviet defenses.

OTL, the Air Force had a bit of a split personality, demanding control of strategic ICBMs--should these be deployed--while wanting to keep priority on manned bombers. Eventually a branch of SAC emerged that was committed to Thor, Atlas, Titan, Minuteman, etc., and strategic missiles were there to stay.

Now suppose that some political and perhaps contractor interaction happens to give the Army the mission of CONUS-based land ICBMs, and perhaps allied-soil-based front-line IRBMs like Redstone (which actually had ranges verging on tactical) and so forth. (Thor would not exist, being essentially a replica of Jupiter capability.) Von Braun's team at ABMA in Huntsville, aka Redstone Arsenal, would be riding high, and possibly Chrysler joins the ranks of Boeing, Convair or Martin as an aerospace contractor (well, they did OTL, thanks to von Braun's relationship with them).

With the Air Force backed into a corner, not having the fallback of owning the ICBMs (say a Congressional clique is duly impressed with the Army's work and awards von Braun the prime ICBM contract, justified by the argument that missiles are basically artillery and not aircraft), some officers come up with a compromise, and suggest that the Air Force, like the Navy, be entrusted with an alternate system that is protected not by silos but by mobility and stealth: air-launched long-range missiles. Much longer range than the stand-off short-range missiles that B-52s were eventually equipped with. The carrier aircraft does not penetrate enemy airspace but stands off in international airspace, maybe as far back as the Canadian Arctic, and the virtue of air-launching a rocket versus ground launch from sea level is emphasized. Now, at subsonic launch speeds those virtues are not tremendous, but they are large enough to justify air-launched satellite systems such as Pegasus OTL. If the plane could reach supersonic speed, it could contribute significantly to launch velocity and thus, given the exponential nature of the rocket equation, save considerable propellant mass for a given throw weight. Recognizing that as an advanced goal, a first-generation system is proposed instead that mainly benefits the rocket by launching from stratospheric altitudes, which cuts down the difference between launch and vacuum conditions and thereby enables more efficient rocket engines. Also, given good navigational techniques, a computer on board the plane, far more massive than anything that could be put on the rocket itself, takes in current coordinates at the time the go code for launch is given, computes a launch trajectory, loads instructions into a simplified inertial guidance system in the missile, and directs the flight crew to an azimuth for launch.
If location is accurately known and the crew can meet a tight specified launch condition, the missile can have high precision on target, a selling point versus early Polaris sub launched missiles since they were solid fueled and harder to steer accurately.
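The rocket-equation argument can be made concrete. A minimal sketch via the Tsiolkovsky equation, where the delta-v budget and exhaust velocity are my own round-number assumptions, not figures from any actual Atlas trade study:

```python
import math

def propellant_fraction(delta_v: float, v_exhaust: float) -> float:
    # Tsiolkovsky rocket equation, rearranged: the fraction of liftoff
    # mass that must be propellant to supply a given delta-v.
    return 1.0 - math.exp(-delta_v / v_exhaust)

V_EX = 2900.0        # m/s, effective exhaust velocity, kerosene/LOX (assumed)
DV_NEEDED = 9200.0   # m/s, total ICBM budget from a sea-level pad (assumed)

for label, head_start in [("ground launch", 0.0),
                          ("subsonic air launch", 300.0),
                          ("Mach 3 air launch", 1000.0)]:
    frac = propellant_fraction(DV_NEEDED - head_start, V_EX)
    print(f"{label}: {frac:.1%} of liftoff mass must be propellant")
```

A 1000 m/s head start trims the propellant fraction only a couple of points, but since everything scales off the mass ratio exp(Δv/v_e), it buys roughly a factor of exp(1000/2900) ≈ 1.4 more dry mass (structure plus warhead) at the same liftoff weight; the altitude benefits mentioned above (better nozzle expansion, less drag) come on top of that.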

I'm thinking the plan is to load one Atlas missile in the fuselage of each plane, horizontally. Upon getting a go code the crew flies toward the launch point at the computed azimuth. Using extra thrust from conventional jet engines, or perhaps rockets installed in the plane, they do a pull-up maneuver to, say, a 45-degree climb. They shut down the auxiliary engines and throttle back the nukes, and coast on a ballistic "Vomit Comet" sort of roller-coaster trajectory. As they approach the target release altitude on the right launch azimuth, they open a hatch in the rear of the plane, and the missile rolls out on rollers in the fuselage, pulled clear by a drogue parachute. Once the missile is clear, the plane does a hard dive--or actually a mushy one, as the air will be quite thin, though not so thin considering that at say 300 m/sec they only have to climb a mile or so to launch the missile. After a suitable delay, when it has gotten hundreds of meters clear and is coasting at the right angle for a suborbital trajectory to target, the missile fires. The plane crew track it to make sure it is flying true and then head for a landing strip, some 120 tonnes lighter.
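The pull-up numbers can be sanity-checked with freshman kinematics. The 300 m/sec entry speed and 45-degree climb are the figures from the paragraph above; drag and residual lift during the coast are ignored, which flatters the climb a bit:

```python
import math

g = 9.81                     # m/s^2
v = 300.0                    # m/s, entry speed at pull-up (from the post)
angle = math.radians(45)     # climb angle (from the post)

v_up = v * math.sin(angle)           # vertical velocity component
apex_gain = v_up ** 2 / (2 * g)      # extra altitude gained on a ballistic coast
time_to_apex = v_up / g

print(f"vertical speed: {v_up:.0f} m/s")
print(f"altitude gained: {apex_gain:.0f} m ({apex_gain / 1609:.1f} mi)")
print(f"time to apex: {time_to_apex:.0f} s")
```

The coast gains roughly 2.3 km, a bit under a mile and a half, in about twenty seconds, which is consistent with the "climb a mile or so" figure in the text.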

Why an Atlas? Because the Air Force was procuring it OTL, and it is so light that it can deliver a warhead to a target while massing all up not much over 100 tonnes. The fragility of the missile is less of an issue since it will be cradled within the fuselage of the plane until launched.

Some other features might be to have LOX in the missile but not kerosene fuel, this being kept in tanks on the plane (serving as extra radiation shielding) until the launch order is given, then rapidly pumped into it. The LOX will boil off, but since we have nuclear engines it is possible to power coolers that reliquefy it and return it to the tank; gradually the fuselage storage bay is chilled down a whole lot. Another feature might be to have the warhead separated, and only mount it upon getting the go code; that way if the plane has an emergency the missile can be dumped without risking releasing the warhead. Since there is only LOX in the missile the explosion risk is lowered considerably.

Similarly, when the plane is about to come in for a landing after a two or three week tour airborne, it is possible to lighten the load considerably by dumping the LOX. Making more LOX and loading it in to the missile probably costs relatively little.

Hey, for that matter might it be possible for such a nuclear plane to include an on board LOX generation facility? Then it could take off with the missile unloaded, and spend a day or two gradually condensing LOX out of the air and loading it in. This would make a certain portion of the Air Force's missile deterrent off line at any given time, as does having to land and change crews anyway, and necessary down time for aircraft maintenance and checking out the missile. In a way that's good though; it guarantees there is always a reserve of weapons that would not be fired in one spasm. Just so long as the operational portion is adequate!

A second-generation launcher plane is also envisioned, something between a B-58 Hustler with supersonic dash capability and a B-70 Valkyrie meant to cruise at Mach 3, scaled up for an over-100-tonne missile load. This version would also cruise around at the high subsonic speed the nuclear engines are designed to maintain, but its roller-coaster climb would be assisted by much more powerful thrust that first shoves it through the sound barrier while also accomplishing a very fast rate of climb, putting it onto a trajectory that reaches a much higher launch speed--one that significantly cuts down the burden the missile must bear to reach a given target, so either the missiles can be lighter or they can carry bigger bombs. Using more aggressive auxiliary thrust than the subsonic cruiser--ramjets, rockets, ejector ramjets, or what have you--when this one goes into its launch climb it also surges up to Mach 3 or so, around 1000 m/sec. Unlike the Valkyrie there is no need to cruise for hours at this speed; it is a quick surge to release a missile or two, then a dive to escape their exhaust plumes, then very temporarily enduring the high speed in the lower atmosphere while braking down to more sustainable speeds. The nuke drive can sustain flight above Mach 1, though it might be poor at pushing through the sound barrier; normally it sustains flight below the speed of sound, avoiding sonic booms and putting minimal strain on the loaded airplane, and the plane only goes supersonic for an attack run.
But, having been thus boosted to very high Mach numbers and past the "sound barrier" (which is a thing, you know--obviously not an absolute barrier, but right at sonic speed, and within 5 percent or so of it on the low side and 15-20 on the high side, drag is high, lift is low and stability is dubious--a plane cruising just below sonic speed would need a surge of thrust to push to and past sonic speed, and would want to get going well beyond it, because it is hell just trying to maintain it!), the plane would keep going at Mach 1.3 or so, damn the sonic booms--there is evidently a war on now, after all! At high speed it would recover from its surge, turn around and head for a base to land at, presumably to be reloaded with another set of missiles--even if it is hoped no more salvos will be launched, it is important to restore the deterrent--presumably the USA has more than one potential enemy.

The extraordinary costs of such an airborne ICBM system are evident enough, so the Air Force would need to stress the advantages:

1) impossible for enemies to preemptively destroy. This might prove unnervingly less obvious after something like STARFISH, revealing the vulnerability to EMP. But in the conceptual stage that might not be anticipated, or it might be and means of shielding against it included in the design. Short of a hemispheric area bomb, how are airplanes flying at semi random locations on racetrack paths hundreds or thousands of miles long in the stratosphere going to be located and targeted independently? Even if the enemy has spy sats that can identify each plane by visual signatures and relay their exact flight paths in real time, what sort of missile can be launched that could locate, track and home in on each plane to get it preemptively? And would not any launch to try that give the US so much warning that the missiles could be launched even if the planes are doomed?

2) Flexible deployment--with their weeks of endurance, the missile bombers can be placed on standby patrol anywhere in the world where the USAF or allies enjoy air supremacy. They cannot be placed where enemy snipers have a good chance of shooting one down, of course, although a certain risk of losing one or two to essentially commando/guerrilla operations can be accepted, with the proviso that the Air Force and allied forces have the political freedom to pound hell out of the ground bases the attackers are observed to come from. So in peacetime one obviously could not ever fly one over enemy borders--no "Indian Country" missions. If you flew one of these over some Soviet client state in Africa, and the ostensible state air force were to attack, we'd have only ourselves to blame for losing the plane. But if it were flown over some territory whose state government is supposedly allied to the USA--say Zaire under Mobutu, or a Latin American state with a military junta put in with CIA help--and it happens the jungles or mountains are lousy with insurgents with ample SAMs, and they take a shot or three, some other USAF planes, or say the Brazilian Air Force, can come blast them with counterinsurgency gear. Naturally the sane thing to do is fly the planes over allied territory where conditions are pretty settled. The open sea is less favored than, say, the Canadian Arctic or the Australian outback, since hostile submarines may lurk in the oceans--and even if one shoots at one of these missile planes, it may not be politically expedient for the USA to shoot back!

But anyway within constraints of political prudence, these missile planes can be sent to orbit around anywhere in the world, for even at subsonic speeds the endurance is so great it can deploy from any base in the world, reach a loiter area somewhere else in the world, then return to the base. In principle one could build just one airfield to handle these monsters, and deploy them anywhere. In practice one would want diverse fields they could operate from so enemies can't cripple the whole system by taking out one target. But all of these can be deep in the heart of CONUS, none need to be "forward" as a matter of giving the system reach. (Distant bases in Australia and so forth are useful in giving a damaged plane options besides ditching!)

So, if the USSR is the threat today, most of the planes can be sent to loiter over Canada. But say the Soviet Union has a political meltdown and fragments into seven pieces, none of which maintain the technical prowess of the unified former state and many of which declare friendship with the USA and seek aid from it... but meanwhile the PRC has built up a substantial threat. It is then possible for the same aircraft within days to be patrolling over Siberia (say that part of it was one of the new friendly states to come out of the Soviet breakdown), Japan (well, maybe not, because of the peace treaty--say in waters just east of Japan), the Philippines, and Australia or maybe Indonesia.

Also, with something over 60 degrees of Earth arc in range, it isn't really necessary to get close to the targets; getting closer merely shortens the enemy's warning time and opens the possibility of heavier throw weights.

Now do I think the USAF could have the subsonic version of these babies deployed before 1965 and the supersonic ones come on line by 1970? I don't know if the actual deployment would happen but anyway perhaps serious R&D money might be spent on them before they finally do get cancelled. And they'll be cancelled only if the Air Force is granted, or settles for, an alternate strategic nuclear role to supplement conventional bombers.

Also, even if the missile carrier version is ultimately judged not cost effective, it could be that nuclear powered airframes that derive from the missile fleet study but are repurposed to other roles may be built and flown--as AWACS, as super-transports, as airborne refueling tankers, conceivably as attack plane carriers (the latter allowing USAF assets to be deployed to distant unprepared theaters and fight for air supremacy there so ground bases can be established).

Let's also add in the idea of Small Modular Reactors sixty years early. Similar ideas were already in the air, mostly related to powering remote military bases and villages, but they never quite gelled into the idea of mass-producing small reactors on assembly lines. But they easily could have.

Combine high-temperature gas-cooled reactors with the idea of the SMR, and what do you have? You might just have a power reactor that can beat coal. We can't know without actually spending a lot of money in the real world. But, for the purposes of fiction, I think it works.

Turbine gas reactors seem likely to have a limit to how low they scale as practical power reactors. We would not be talking about something on a scale that would power, say, a single tank, would we? (Unless said "tank" is a Maus or some other hyperbolic dream from a Japanese cartoon, anyway!) The reactor core might be compact and the turbine rather light (though if it is not air-breathing--which poses the horrible prospect of a malfunctioning reactor injecting its shattered guts directly into the air as dust and vapor--it also needs some kind of heat exchanger to cool the working gas back down). And say the power-generator designs always require a steady supply of cooling water, as from a fair-sized river. In mass the plant may be petite compared to one of OTL, but if it puts out hundreds of megawatts, it can't be called mini, can it?

Of course in principle turbines can be scaled down, but they tend to become more challenging and less efficient doing so, and I think the same is true of reactor cores.

I gather that smaller cores are achieved by "burning hotter," by fissioning a greater percentage of the remaining available nuclei per second. Therefore these cores would reach EOL sooner. I daresay they might burn more efficiently, pushing the envelope of depletion farther before being too depleted to sustain operations. And with smaller cores it is easier to replace them.

Safety is something you'd have to explain very carefully. I notice in later posts you say RBMK had no containment for instance--but it did, it just wasn't a dome like Western PLWRs tend to have. It got blasted apart of course! But would a Western style dome over the core have contained the blast any better? I grant that RBMK appears to have had design features posing risks we would not allow--but really, isn't claiming a nuke plant is safe like claiming a rocket design is sure to launch without incident? Isn't there always a risk of some sort of failure, inherent in making the design capable of the useful performance you ask from it, and isn't it always asking too much to design against every conceivable failure mode?

The whole point of your thesis is that nuclear power is rare in the modern OTL world because the reactor designs are too expensive! Now clearly, if this is the case, safety is in conflict with utility. It may be that PLWRs were the wrong way to go and, had we done something else, we'd have the same degree of safety, at least per kWh produced, at a much lower cost; the nuclear business would be profitable, with lots of reactors of the alternate design putting out much more fission-generated power than OTL. That said, if safety is defined on a per-plant-year basis and we are running far more plants, the same degree of safety translates into a lot more accidents and accidental releases over the decades (not to mention an order of magnitude or more accumulation of atomic waste). To hold public contamination to OTL levels we then need not equivalent but superior standards of safety. And if we have lowered the cost of other aspects of operation, we need to spend a greater and greater proportion of the capital and operating budget on safety! Whereas, human irrationalities being what they are, if the cost is 90 percent safety features, either people will be deterred from using the tech even if you can show them they are objectively worse off with the alternatives, or they will disregard some of the safety measures, since doing so greatly lowers costs and thus creates a huge competitive edge for the scofflaws, whose profits might seem to outweigh any potential risk. Then when something bad happens the scofflaws might be held up as scapegoats, if they didn't die; but the people most responsible probably will not be identified as such, because they will be very rich success stories. It will be their hapless underlings who are blamed.
And serious measures to enforce stringent safety standards would be opposed on the grounds that they impose severe costs; someone will argue the general progress that comes from cheaper power is worth a few hundred thousand more cancer cases, or, when put to it, a few tens of millions of them! Most won't want to put themselves in such a confrontational, Herman Kahn-esque position, but will be quietly pushing to downplay the importance of safety all the same.
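The per-plant versus per-kWh bookkeeping above is simple arithmetic, worth making explicit. A toy calculation, with every number invented purely for illustration:

```python
# If cheap reactors multiply the fleet, per-plant safety must improve
# by the same factor just to hold total releases at OTL levels.

otl_plants = 100             # illustrative OTL fleet size (invented)
otl_accident_rate = 0.0001   # serious releases per plant-year (invented)

growth = 10                  # ATL fleet is 10x larger
atl_plants = otl_plants * growth

# Same per-plant safety means 10x the expected accidents:
expected_otl = otl_plants * otl_accident_rate
expected_atl = atl_plants * otl_accident_rate
print(expected_atl / expected_otl)

# To merely match OTL totals, the ATL per-plant rate must shrink
# by the full growth factor:
required_rate = otl_accident_rate / growth
print(required_rate)
```

The point being: "as safe as OTL per plant" and "as safe as OTL for the public" diverge by exactly the factor the fleet grows.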

The outcome, I think, is inevitably a lot more accidents and a lot more radioactive-materials release than OTL, which is more or less accepted as normal and inevitable. The net outcome might be a severe polarization between the Technophiles and the Luddites, with the stakes much starker.

Naturally much depends on the details of the design of the reactors!

Reading Frederik Pohl's novelization of the Chernobyl disaster, which I believe attempted, within the limits of hard-SF standards of technical accuracy, to properly represent the nature of the reactor explosion and subsequent fire, I get the impression there are inherent resemblances between a fission reaction and a chemical fire such as in a fireplace or bonfire. There are inherent positive feedback effects: a fluctuation in favor of fission releases more neutrons, which tend to expand the scale and speed of the reaction, just as allowing a wood fire to grow hotter accelerates the circulation of air, which brings in more oxygen to extend combustion. And vice versa, attempts to damp out reactions can also involve positive feedback. Fission produces certain isotopes with a strong tendency to absorb neutrons--predominantly xenon-135--and in many designs (including RBMK, and also PLWR) the ambient levels of these isotopes, combined with a commanded reduction in neutron flux, can cause the reactor to plummet below sustainable fission and make the pile impossible to restart until the isotopes involved have decayed away; since in many designs there is no practical way to purge them, one must simply wait. This relates to the scenario in Pohl's novel (titled Chernobyl, after the plant) where supposedly the power ministry was interested in conducting an experiment to examine the possibility of extracting useful power from a shut-down but still hot core--this meant bringing the core to a state of near shutdown, but avoiding actually stopping the reaction completely, since that would terminate the experiment and take that core (one of four at the plant) offline for weeks.
The near-shutdown zone was presented as one of very dangerous instability and therefore safety features which had been designed in and installed to avoid this hazardous zone were ordered turned off in order to conduct the test. (Pohl, in a possible deviation from OTL reality, had a heroic and competent Assistant Plant Manager who did the real work of operating the plant, his nominal superior being a creature of Kremlin politics and dealing with all that, adamantly opposed to the experiment despite clear Ministry demands for it, and therefore had his authority circumvented by conducting the experiment when he was home asleep--under KGB supervision). I believe the parts about the nature of the hazard, the nature of the breakdown (the fission level dropped to near the red line of natural and irrevocable shutdown and in the course of trying to bring the levels up a bit one section surged to such a degree an actual if quite small fission explosion occurred, this cracked the inner containment separating the graphite moderators from air, and then of course the big kaboom as these burnt and blasted the outer containment roof to smithereens, leaving the exposed core to burn uncontrollably) were accurate, as was the part about shutting off the safety controls and attempting something quite ill advised considering the basic design.
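The xenon-poisoning dynamics behind that "offline for weeks" constraint can be sketched from the two half-lives involved, which are standard nuclear data; the shutdown inventories below are arbitrary illustration, since real values depend on power history and flux:

```python
import math

# Half-lives (well-established nuclear data)
T_I = 6.57 * 3600    # s, iodine-135
T_XE = 9.14 * 3600   # s, xenon-135
lam_i = math.log(2) / T_I
lam_xe = math.log(2) / T_XE

def xenon_after_shutdown(i0: float, xe0: float, t: float) -> float:
    """Bateman solution: Xe-135 inventory t seconds after shutdown, given
    shutdown inventories i0 (I-135) and xe0 (Xe-135). Neutron burn-off
    of xenon is zero once the reactor is down."""
    from_iodine = i0 * lam_i / (lam_xe - lam_i) * (
        math.exp(-lam_i * t) - math.exp(-lam_xe * t))
    return xe0 * math.exp(-lam_xe * t) + from_iodine

# Illustrative shutdown inventories, arbitrary units:
i0, xe0 = 10.0, 1.0
for hours in (0, 6, 12, 24, 48, 72):
    print(f"t+{hours:2d} h: Xe-135 = {xenon_after_shutdown(i0, xe0, hours * 3600):.2f}")
```

This is the trap the Pohl scenario turns on: after a near-shutdown, Xe-135 first builds up (fed by iodine decay, no longer burned off by the neutron flux) and only clears over a day or two, so a poisoned core simply has to wait.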

Now I have also read of approaches to controlled reaction that avoid some of these pitfalls. For instance, it is apparently possible to engineer in negative feedback to countervail the inherent positive feedback, whereby if some or all of a core gets too hot, the neutron absorption cross-section of non-fissionable materials rises sharply enough to naturally check the surge; perhaps this also means that, vice versa, if levels go low and the reactor core cools, the reaction is easier to elevate at will. Perhaps some designs can purge the xenon and other gases naturally, so their "poisoning" effect becomes a minor factor--though that is only indirectly a safety hazard. Inherently the poisons make it easier to shut down the reaction with assurance, which is directly safer, though obviously, in view of the Chernobyl scenario, they create a perverse incentive to keep the reactor running at some risk. That incentive exists not only for economic reasons but also, say, in a warship power plant, where a loss of power for weeks can equate to becoming useless in combat at best--and in actual combat situations, a death sentence for the crew and an expensive loss to the Navy.
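How a negative temperature coefficient checks a surge can be shown with a toy point-reactor model. Every constant here is invented for illustration; real reactor kinetics involve delayed-neutron precursors and much more:

```python
def simulate(alpha: float, steps: int = 20000, dt: float = 1e-3):
    """Crude Euler integration of power P and temperature rise T after a
    small step of excess reactivity. alpha is the negative temperature
    feedback coefficient; all units and constants are arbitrary."""
    P, T = 1.0, 0.0
    rho_step = 0.002       # inserted excess reactivity (invented)
    gen_time = 0.1         # effective generation-time proxy, s (invented)
    heat, cool = 1.0, 0.5  # heating from excess power, cooling of the core
    peak = P
    for _ in range(steps):
        rho = rho_step - alpha * T        # hotter core -> less reactivity
        P += (rho / gen_time) * P * dt
        T += (heat * (P - 1.0) - cool * T) * dt
        peak = max(peak, P)
    return P, peak

final_fb, peak_fb = simulate(alpha=0.01)    # with negative feedback
final_run, peak_run = simulate(alpha=0.0)   # no feedback: power just keeps growing
print(f"with feedback: peak {peak_fb:.2f}; without: peak {peak_run:.2f}")
```

With the feedback term the power settles at a new, slightly higher equilibrium; with alpha set to zero the same reactivity step makes power grow exponentially for as long as you run the loop.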

Various other designs you hinted at, such as aqueous reactors (I gather this means putting fuel and moderator into water solution so that it is all dissolved together, with control being a matter of managing the solution and of solid or otherwise concentrated moderator controls), might offer a combination of inherent safety with profitable and compact reactor design, maybe. Such a design would certainly make the job of purging fission daughter products and neutron-activated contaminants more straightforward, at least in principle: let gases bubble out, use chemistry to filter out the worst contaminants. Another design that caught my interest some years ago was a German proposal for forming fuel and moderator into prefabricated small spherical pebbles and dropping them into a hopper; when enough fresh pebbles are on the pile it reaches critical density and the pebbles get hot. The thing was a version of gas-cooled, with helium flowing through the heap of pebbles (being spherical, provided they do not physically crumble or melt, there is a fair amount of volume for the helium to filter through) and, thus heated, driving a gas turbine. The neat thing here is that once running, you get a natural gradation of the pebbles by height, the newest and freshest on top, where the main reaction occurs, the expended ones on the bottom. There is a valve to allow pebbles to be removed, one by one, from the bottom and taken away for reprocessing or disposal, while adding new ones on top sustains the reaction at a desired level. Thus there is no need to periodically take the reactor offline, remove the core and replace it with a fresh one.
Another feature was a metal damper substance that would be liquid at normal reactor temperature, held above the hopper by a diaphragm of higher-melting metal; should the reaction run out of control such that merely emptying the hopper from the bottom is not adequate to check it, temperatures would rise and, at a temperature still below danger to the hopper walls, the safety diaphragm would melt and dump the damper mixture right on top of the pile; presumably the liquid damper would sink in through the interstices and shut down the reaction definitively. This creates, of course, a nasty blob of fuel pebbles in a frozen matrix of damper metal which would have to be removed, broken down carefully, and the damper scraped or otherwise removed from the cold hopper walls; perhaps removing the blob would crack or otherwise undermine the hopper walls and they would need to be replaced. But this is an emergency measure, after all.
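The hopper's continuous-refueling scheme is essentially a first-in, first-out queue, which a few lines of code can illustrate (the pebble counts and the "life units" burnup model are invented for the sketch):

```python
from collections import deque

class PebbleBed:
    """Toy FIFO model of the pebble hopper: fresh pebbles in at the top,
    spent pebbles drawn from the bottom, reactor never taken offline."""

    def __init__(self, pebbles):
        self.bed = deque(pebbles)   # left = bottom (oldest), right = top

    def step(self, burn: int = 1) -> None:
        # Every pebble in the bed depletes a little each step.
        self.bed = deque(p - burn for p in self.bed)
        # Draw expended pebbles from the bottom and add fresh ones on top,
        # keeping the inventory (and hence criticality) constant.
        while self.bed and self.bed[0] <= 0:
            self.bed.popleft()
            self.bed.append(100)    # fresh pebble, arbitrary "life" units

bed = PebbleBed([20, 40, 60, 80, 100])
for _ in range(25):
    bed.step()
print(list(bed.bed))
```

The inventory stays constant and the age gradient (oldest at the bottom, freshest at the top) maintains itself, which is the whole appeal of the design: refueling is a steady trickle rather than a shutdown.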

Given the nature of radioactive decay, I am not sure any gas cycle can guarantee containment of decay or neutron-activated undesirable wastes. When a nucleus undergoes fission, about 1/1000 of its total mass-energy is released, mainly in the form of kinetic energy of the neutrons and the two daughter nuclei (occasionally three, in rare ternary fission), plus some gamma rays. The daughter nuclei are highly charged and, though quite energetic, are moving at much lower speeds than the neutrons. Charged particles interact very strongly with all other charged substances around them, including the components of atoms generally, and so lose their energy fast in a dense environment, and will be completely stopped by a finite amount of material. Thus I suppose most daughter nuclei can be guaranteed to be initially stopped within a solid element designed to catch them, and not get out immediately. However, these core elements, even when refractory enough to remain nominally solid, are going to be quite hot, and some elements, such as I presume the xenon, will subsequently migrate at random; some will reach the outer walls and escape into whatever fluid might be flowing there. Thus in a water-cooled core, this stuff is gradually contaminating the water--which is why PLWRs always use a heat exchanger to confine the core water to the core and recycle it, and use the secondary heated loop, which will have very little radioactive contamination (only whatever does get into the water, mostly a matter of neutron activation of that water), to run conventional steam turbines for power. Gas/gas heat exchangers don't work so well, so their design would be heavy and bulky, which is why the gas-cooled reactors actually built OTL have used the hot gas--CO2 rather than helium--to boil water in a heat exchanger instead.
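The "about 1/1000 of its mass-energy" figure is easy to check against textbook constants:

```python
# Energy released per U-235 fission vs. the nucleus's total rest mass-energy.
MEV_PER_AMU = 931.494   # MeV of rest energy per atomic mass unit
U235_MASS = 235.044     # atomic mass of uranium-235, in amu
E_FISSION = 200.0       # MeV per fission (fragments, neutrons, gammas), the standard round figure

rest_energy = U235_MASS * MEV_PER_AMU   # total mass-energy of one U-235 nucleus
fraction = E_FISSION / rest_energy
print(f"fraction of mass-energy released: {fraction:.5f}")
```

The result lands near 0.0009, so "roughly 1/1000" holds up.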

Clearly the Air Force OTL plan for a direct air cooled reactor would take ambient air, presumably pressurized by turbo compressors as in ordinary jet engines, blast that through the reactor core directly, and exhaust it either through a turbine or directly into jet exhaust, and not have to worry at all about recooling the working fluid for another cycle--this is accomplished for free by the atmosphere, and the hot exhaust is left behind in any case. With this design, neutron activation of air molecules seems inevitable though the exposure is brief, and any seepage of daughter fission nuclei or neutron-transmuted nuclei would wind up in the atmosphere. If that seepage is low I suppose a case can be made it is overall safer than conventional alternatives--Jerry Pournelle used to rant quite a lot that natural radioactive isotopes contained in coal would be released into the air at rates far exceeding what any conventional US nuclear power plant would be allowed to do in normal operation. Presumably the same is true of kerosene so it then becomes a question of which isotopes are being vaporized and released, whether the particular ones released by fission are more dangerous gram for gram, and if so if their levels of release can be reduced in proportion so the real hazard is equivalent. All this assumes of course that no physical erosion of the reactor core surfaces exposed to air flow happens, or if it does it happens at a very low rate, otherwise the engine is dumping radioactive grit at rates that I presume would rapidly exceed any sane safety standard.

A thing to remember when dealing with proposed military nuclear jet or rocket engines is that their deployment is often by design contingent on fighting an all-out war, presumably a nuclear one with a peer power like the USSR or PRC. The enemy plans to dump gigatons of bombs on our territory and those of allies we are defending, killing hundreds of millions at a stroke. In that context, hazards posed by release of radioactive materials and long-term man-years of human life lost to poisoning are set against deterring or foiling such wholesale immediate slaughter. Thus when I read about nuclear rockets like, say, Timberwind, even if I can accept their alleged thrust/weight and other features at face value, I am not at all confident they are designed to contain radiation and radionuclide contamination dangers effectively, since the premise of Timberwind-launched weapons was to enable SDI, presumably shooting down incoming enemy H-bomb warheads before they can reach their targets, thus saving lives by the scores of millions--in that context, radiation releases that could shorten those same lives by decades might be justified as acceptable overall. And if the system deters enemy attack completely, then none of the hazard actually happens, except of course for necessary test launches.

These nuke bombers, both the OTL envisaged nuclear Valkyries and my proposed missile carriers, as well as derivative transports, AWACS, fighter/attack plane carriers, tankers and so on, fall in between. On one hand, even in peacetime they'd have to be operated quite a lot, for training and proficiency practice as well as for operational missions, the missile carriers in particular being useless if they are all kept idle on the runways. They have to be kept flying, otherwise they are vulnerable to preemptive strikes or even sabotage! So their emissions had better be kept at levels acceptable for sustained routine use, which implies they can and should be used for civil purposes too if markets for such big planes emerge. But a society willing to accept this complex as necessary is probably going to be persuaded, by open and fair argument or by subterfuge, to accept contamination at levels we might not be willing to see OTL, in the name of national security.

Evolving air-breathing open cycle nuke jet engines into closed cycle power plants using helium, nitrogen or CO2 for working gases, and cooling those down for further cycles, ought to raise the safety level since the working gas is after all isolated.

But it still leaves open the question: why do you think these engines would on the whole work out cheaper to build and operate than OTL PLWRs, and would that include safety features at least equivalent to theirs and, to keep net contamination levels down, superior if at all possible?

Would the reactor core physics be such that there is more inherent safety against runaway reactions? Or merely equivalent to an LWR, where we've amply seen that runaway is something that has to be actively controlled?

A jet engine by its nature tends to run at a constant power level, since the thrust requirement of an airplane tends to fall within a tight band. Indeed I suspect nuclear jets, or alternatively turbofans or turboprops, would be more limited than normal jet engines. A combustion jet inherently tends to produce more power and thrust at sea level--where thrust is most needed for takeoff and landing--and be naturally "throttled" to lower steady outputs by the thinner air of the stratosphere where it is designed to cruise. There is simply more oxygen available per cubic meter at sea level than up there, although speed and ram effects offset the falling density a bit, and the reaction rate is governed by the available oxygen. Not so a nuke plant--there the power is set separately by a deliberate choice of controls, and the higher power needed at sea level might not correspond closely with the higher cooling rate the denser air provides. Higher power is needed to move that air enough to get the high thrusts required for takeoff or landing, and that higher power comes from higher rates of fission, hence higher temperatures in the core elements, which can only be checked by higher rates of cooling, and even when so damped still involve steeper temperature profiles within the liquid and solid elements involved. The name of the game is to move that higher power flux into the denser air for higher thrust. But it may be quite unwise to attempt to design the reactor elements for this wide power band; instead it might be necessary to restrict power output to levels appropriate for sustained thrust at altitude. At sea level this probably still means higher thrust, because the denser air will absorb the heat at lower temperature; the power available suffers from lower thermodynamic efficiency, but it is moving more mass, and thus thrust is higher--though only moderately so compared to full power at full temperature.
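To put a number on "more oxygen per cubic meter at sea level" (my figures, from the standard-atmosphere tables, not from the thread):

```python
# Standard-atmosphere (ISA) air densities: sea level vs. ~11 km cruise altitude.
# Oxygen fraction is constant, so the density ratio is also the oxygen ratio.
rho_sea_level = 1.225   # kg/m^3 at sea level, 15 C
rho_11km = 0.364        # kg/m^3 at the tropopause (~36,000 ft)

ratio = rho_sea_level / rho_11km
print(f"oxygen per cubic meter, sea level vs 11 km: {ratio:.1f}x")  # ~3.4x
```

So a combustion jet at cruise altitude is naturally "throttled" to roughly a third of its sea-level oxygen supply, before ram effects are counted.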

Fortunately a nuclear-core-heated jet offers another option for bursts of higher thrust--none of the oxygen running through the core is involved in chemical combustion, so all of it is available for combustion in the exhaust from the core. It would be possible to inject fuel at that point and boost the thrust with afterburning. This works well enough on normal jets because they operate fuel-lean in the cores, to limit temperatures there to something feasible materials can handle in the turbine; on a nuke jet the boost is higher, due to more oxygen being available as well as the exhaust being pre-heated by the nuke core. Compared to combustion at higher pressure it is inefficient, so afterburners guzzle fuel, but here it is only needed briefly for takeoff and landing, and possibly for operational bursts of thrust in flight, as with my proposal to launch a missile from a parabolic trajectory--or on a transport or AWACS plane seeking to evade an enemy attack, perhaps. Moderate reserves of fuel for these purposes can also serve as secondary radiation shielding; if and when they are used up, it will be during some kind of vital maneuver that probably foreshadows the plane coming down for a landing soon after, and the essential reserves for landing maintain some of the shielding.

I'd think these cores in airplane jet engines would be designed to burn a lot of fissionable stuff up relatively fast and require frequent replacement, perhaps being designed for one nominal mission, and with each landing the cores are removed, shunted off for refurbishment or disposal, and new ones inserted. Redesigning for closed cycle gas generators might therefore involve heavier cores that burn at a lower percentage rate for say monthly or annual replacement instead. Still, a lot of fresh cores full of highly refined nuclear fuel are going to be circulating around the nation using them, at risk of being hijacked or stolen by chicanery for bad actors to abuse.

The airplane jet engine, by the way, I suppose might be designed somewhat differently than a typical turbojet. In the latter, the plan is: take air in, compress it with a compressor set driven by a turbine, combust and expand the gas to drive that turbine, and exhaust the jet with the remaining heat energy for thrust. In a turbofan or turboprop one uses a more powerful turbine and exhausts considerably less jet thrust for a given air/fuel consumption rate, but uses the extra power beyond what the compressor needs to drive a fan or prop which moves a larger mass flow at a lower exhaust speed; overall the thrust is multiplied, at the cost of a heavier and more complex engine, in particular making more demands on the turbine, which is trying to extract power from a very hot combustion exhaust flow. For the nuke, I suspect a reconfiguration is in order--make no attempt to extract any mechanical power from the reactor exhaust (including, perhaps, any afterburning enhancement); instead use auxiliary cooling loops from the reactor core to drive a turbine that provides the necessary compression. Then again, I suspect the nuclear core exhaust might be cooler than combustion jet exhaust, and perhaps it is more sensible to use a conventional turbine in this stream? But not doing so would eliminate an extra worry about wear and tear on a turbine drawing power from slightly radioactive air--the radioactivity might cause structural weakening of the turbine despite the lower operating temperatures.

Anyhow, I'd be very interested on any remarks you might have as to why the air-cooled or closed-cycle gas cooled reactor cores might have inherently better negative feedback control of fission for steady and controllable (at any rate, reliably shut-down-able) reaction rates.

For airborne use, a light exposed reactor core might seem desirable, but the pesky matter of crash survival is relevant too. With the liquid metal heat exchanger design the RAND study I read proposed for a subsonic turbofan, how that worked was clear enough--the reactor core is essentially a "solid" mass. Not really solid, since the sodium-mix coolant has to be liquid (and how it starts again from a cold shutdown, where the sodium presumably freezes into solid metal, is not so clear--perhaps it is kept from ever freezing, or auxiliary heating elements pre-melt the stuff, or a slow startup cycle can reliably melt it all before any of it needs to circulate, or something). Anyway, it is condensed. Presumably, then, a hard emergency shutdown definitively stopping the chain reaction, combined with a seal-off of the heat exchanger flow and a certain heat-sink margin allowing core temperatures to peak well below the softening point of the shell, with the core all contained in shielding/containment designed to tolerate a very hard impact, turns the whole thing into one dense sphere of essentially solid metal that doesn't crack upon crashing--I suppose it might be concussed or a bit squashed, ruining it as an operational unit, but it might not leak.

What can be done for lightweight air-core reactors to make them too immune to shattering and scattering their hot innards to the four winds?

If the answer is "not much really, without making it so heavy it can't fly" then perhaps this is why the actual nuclear airplanes remain a pipe dream--but one believed in long enough to develop light efficient gas core units for surface work?

What sort of containment would gas core light reactors need? Can they be designed for inherent assured shutdown if the gas loop is breached?

Having seen later comments of yours I have more yet to say, but I'll post this now!
 
Probably nations with weapons programs do indeed in effect subsidize nuke power plants, in ways asnys has already himself outlined--by subsidizing the development and even construction of nuclear processing plants for instance.

Oh, definitely. Economically, though, these days it's mostly nice-to-haves, not needs. For example, a ridiculous proportion of reactor techs in the US got their training from the US Navy. And back in the day, there was a huge R&D subsidy, though these days the numbers are pretty low. Nuclear still gets subsidies, but in the West, today, it's no worse than any other energy industry, and better than most. (Way better than wind and solar, incidentally, which receive the highest subsidies per joule of any energy source in the US.)

On a more subtle level, though, there are definitely non-monetary, political "subsidies" to nuclear in some non-Western countries that want to build up the capability to build nuclear weapons if they ever decide they need them. I'm talking more about political cover and encouragement than dollars, you understand, but it's definitely a thing in some countries (*cough* Iran *cough*).

Note that it would be possible to enter the "Atomic Age" of the OP without fully accounting for all costs. The cost of waste disposal, including safely mothballing the end-of-life power plants, is quite likely to be grossly underestimated, and it is entirely possible then that there might be a binge of nuke plant building and even cheap power, then when the bill comes due for waste disposal bad things happen.

That's a complicated issue. Really complicated.

Right now, the cost in the US for disposal is 0.1 cents/kWh. That price is annoying, but not a major problem. And it could be a lot lower if the US government hadn't wasted literally tens of billions on Yucca Mountain before deciding not to build there. Of course, we aren't really going to know if that's enough until we've built a repository, put waste in it, and sat on it for a few hundred years, but it's a pretty reasonable estimate. However, in a nuclear-heavy world, you're eventually going to want breeders, and at that point the waste problem largely goes away. Let me elaborate.
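For scale, here is what that fee works out to for a single large plant (my illustrative arithmetic; the capacity and capacity factor are typical assumed values, not figures from the post):

```python
# What a 0.1 cents/kWh disposal fee means for one large reactor per year.
capacity_mw = 1000        # assumed: a typical large power reactor
capacity_factor = 0.90    # assumed: typical modern fleet-average availability
hours_per_year = 8766
fee_per_kwh = 0.001       # $0.001 = 0.1 cents

kwh_per_year = capacity_mw * 1000 * hours_per_year * capacity_factor
annual_fee = kwh_per_year * fee_per_kwh
print(f"annual disposal fee: ${annual_fee / 1e6:.1f} million")  # ~$7.9 million
```

A few million dollars a year against a multi-billion-dollar plant: annoying, as the poster says, but not decisive.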

What is nuclear waste actually made of? Roughly speaking - and I'm going by memory here, so I may be a little off - it's about 95% unreacted uranium, 4% fission products, 1% plutonium, and a trace of heavy transuranics. Now, the uranium isn't a big deal: you don't want it in your house, but its radioactivity is low enough that you can safely handle it with gloves, and it's basically just a nasty heavy metal; we know how to deal with that. By comparison, the fission products - which are the fragments left over when a nucleus splits - are extremely nasty. They include gamma-emitters like cesium-137, which can be rapidly fatal at a distance. Fortunately, they also decay relatively fast: after 300 years, all but a small residue is inert, and I believe the residue has lower radioactivity than the original ore, though I may be misremembering. The plutonium and the heavy transuranics are produced when a uranium atom absorbs a neutron instead of fissioning, and they're the stuff that lasts for fricking ever. And, while they're not as intense as the fission products, they remain dangerous long enough to be a serious problem.
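The "300 years" claim tracks with simple half-life arithmetic. A sketch (my numbers), using cesium-137, half-life about 30.17 years, as the representative nasty fission product:

```python
# How much Cs-137 (half-life ~30.17 years) survives 300 years of storage?
HALF_LIFE_YEARS = 30.17
years = 300

# 300 years is just under ten half-lives, so roughly a factor-of-1000 reduction.
remaining = 0.5 ** (years / HALF_LIFE_YEARS)
print(f"fraction remaining after {years} years: {remaining:.4f}")  # ~0.0010
```

Strontium-90 (half-life ~28.8 years) behaves similarly; the truly long-lived fission products are a small fraction with correspondingly low activity.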

However, they are also a solvable problem. The unreacted uranium, plutonium, and heavy transuranics can all, at least in theory, be used as fuel in nuclear reactors. The uranium and plutonium can be used in existing designs. The heavy transuranics require a fast breeder to destroy. If you use fast breeders and fuel reprocessing, you end up with a waste stream that will be basically safe after 300 years. That's a much easier engineering problem than trying to design a repository for millions of years.

So why don't we do that? Especially since the uranium and the plutonium, at least, can be burnt up in existing designs? A couple reasons. First, the chemical processing is expensive enough that it's cheaper to just mine new uranium instead of trying to recycle the waste. It's not so expensive that it could never be cost-effective, especially once we've used up more of the readily-accessible ore - the French do it now, in fact, and the Japanese used to. But it is expensive.

Second, the US really does not want people doing it, because it's one of the two routes to a nuclear weapon. If we're not doing it, it's easier to tell Iran that they can't have one.

Third, although they're a small portion of the total waste, the heavy transuranics are really nasty, and they require a fast breeder to burn up. Fast breeders are expensive and existing designs are unsafe.

However... If we do go nuclear in a big way, either IOTL or IATL, then in another century or so, we're going to want nuclear reprocessing and fast breeders. Uranium will eventually get expensive enough to justify the higher capital costs of fast breeders. And there's nothing intrinsic to fast breeding technology that means it has to be unsafe; we just pursued an unsafe design for reasons that would require a lengthy digression to explain (basically, it seemed like a good idea at the time). So we will eventually develop this technology. And, in the meantime, there's no harm in leaving the waste where it is: if those sites are safe enough for reactors, they're safe enough for waste ponds, which are much more inert. We can leave it there for a century without major problems. Note that, in the civilian nuclear sector, we have had three major reactor accidents - but zero waste storage accidents. (There were a few waste storage accidents in the military sector.)

Nuclear waste is only a serious problem if we don't pursue a nuclear future.

Wasn't the core mission statement of General Atomics, the operation that hired Freeman Dyson and Ted Taylor in the late '50s, to develop civil uses of nuclear power? This is why I am so head-scratching as to why apparently none of these alternative designs, or yet others, were investigated as alternatives to LWR, and why the prospect (if it seemed to exist for any of them) of a much cheaper plant structure didn't attract any serious investment.

You're broadly right in what follows, but I'm going to add a few complicating details. First, alternate designs were investigated, and a few even had commercial-scale demonstration plants built. But at the time it seemed like LWR was going to succeed in outcompeting coal, so there wasn't a compelling business case for pursuing these alternate technologies.

For that matter, two alternative technologies even reached the stage of commercialization: MAGNOX in Britain and Heavy Water Reactors in Canada. MAGNOX proved uneconomical - though a lot of that was probably due to the structure of the British nuclear industry rather than the tech itself. And HWRs are still in use, but they proved to be economically very similar to LWRs - they have marginally higher capital costs and marginally lower fuel and operating costs.

1) despite the ideology of capitalism that assures us that progress happens via private capitalists taking risks in order to get an edge on competition, and this is the basis of profit, the fact is big establishments tend to be risk averse and more interested in trying to control known variables. They tend to leave the risky business of blue-sky research to government agencies or at any rate undertake it only with government funding. In the 50's USA (and USSR) this meant the military mainly.

This was, and is, definitely also a factor.

2) the government establishments concerned with nuclear power were an interlocking directorate of Defense and the Atomic Energy Commission, which both regulated and promoted nuclear power. The latter should have been funding speculative research, and developing promising alternative modes of extracting fission power, but...

Most of the demonstration plants for alternative techs were built with AEC support, in many cases on AEC land.

3) the contractors the US Navy hired to develop and build their pressurized LWR systems had a deep vested interest in that particular mode of power generation and equated it with "fission" as such. Were the AEC to present an alternative model unsuited to submarine installation but cheaper to build and operate than LWR, the prospect of leveraging the Naval contracts into a strong commercial presence as well would be thwarted; these firms would have no advantage over competitors in making the civil power plants and would have to divide capital investment between supporting the Navy and attempting to compete in the civil market, instead of synergistically using the same capital for both. Therefore it was not in their interest to have the AEC or any other state funded agency come up with a promising alternative. I don't have to suppose they were so cynical as to openly seek to suppress alternatives, merely that their strong influence in the interlocking directorate (via contacts at the Pentagon, long-established relationships there, political influence via Congress and Presidential electoral politics) would suggest to the AEC and Congressional committees that the established LWR program had everything well in hand and there was little need to "squander" large taxpayer funds on considering alternatives.

This was almost certainly part of it. The competing designs lacked the sort of major industrial patrons that LWR had.

I want to point out that if you disdain the Soviet RBMK, "upgrading" the gas cooled loop with graphite moderator is adopting the single feature that turned Chernobyl into a radioactive abattoir. Sure, everything is fine as long as it is helium circulating, but what happens if air gets into the loop?

The graphite isn't the problem in the RBMK - nuclear-grade graphite will only burn if you pulverize it first. The problem with graphite in the RBMK is that its combination with water cooling makes the reactor over-moderated. This means that, in some parts of its operating envelope, it has positive temperature and void coefficients of reactivity, meaning it can "run away", like a fire burning out of control.

Also, they put graphite tips on the ends of the control rods. This improved fuel efficiency, because it reduced neutron leakage through the control channel ports. It also, however, meant that the rate of reaction would briefly increase if you inserted the control rods to, e.g., cope with the reactor running away on you.

It was a mind-blowingly awful design.

I figure this is for 3 reasons. One, the weakest in itself, is sheer mass. A jet engine is pretty light for the tremendous power it develops. Not so a nuclear core obviously, especially if it is shielded to give crew a chance of survival. But this is offset by the lack of fuel, to an extent.
Two, air intake temperatures. An air-based heat engine needs to pressurize the air to be efficient, to be sure. Up to a certain speed, pretty high in the supersonic regime, ram air temperatures actually need to be raised further by compression to reach adequate pressure levels for efficient power generation. So this point seems kind of shaky, but I suppose a specific design of reactor might have rather low intake temperature limits, and flying too fast would exceed those.
Three, at a given thrust level, power rises with speed. If we have a plant adequate for cruising at Mach 0.75, and wish to go at Mach 2 with the same thrust instead, we might need to nearly triple the power output. For a fission reactor that means tripling the core mass, and tripling the radiation output. The former is actually dependent on design; we could have designs that "burn hot" and consume fissionables at a higher rate and get more power out of the same mass--bringing it to depletion and end of life sooner in inverse proportion, of course. But radiation is a hard correlation: so much gamma-ray and neutron flux per kilowatt (thermal kilowatts, not necessarily useful output) in a fixed proportion. Also I believe supersonic aircraft require more thrust.
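The "nearly triple" in point three follows from propulsive power being thrust times velocity; a sketch of the arithmetic (speeds cancel, so only the Mach ratio matters at fixed thrust and altitude):

```python
# At constant thrust T, propulsive power scales as P = T * v, so the power
# ratio between two speeds at the same altitude is just the Mach ratio.
mach_cruise = 0.75
mach_dash = 2.0

power_ratio = mach_dash / mach_cruise
print(f"power ratio at constant thrust: {power_ratio:.2f}")  # 2.67, i.e. "nearly triple"
```

And since supersonic flight also needs more thrust than subsonic cruise, the real power multiple is higher still.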

This is basically true, yes. It's not totally impossible to get one of these things supersonic, but it's going to take a lot of work.

Um, the thing is, desiring the air-cooled version probably does relate to desiring supersonic speeds. If we settle for subsonic cruise, in principle we could develop a version of LWR to drive propellers. The Soviet/Russian Tupolev "Bear" family of bombers was the USSR's answer to the US B-47 and B-52. The Americans used jets, later upgraded to turbofans, but the Russians used contra-rotating props driven by turboprop engines. Their outcome was higher observability (including sheer noise, which I'm told could be heard by submerged submarines) and a noisier radar signature, but in terms of speed the Bears are competitive with the Buffs, which are after all pushing sonic limits anyway, and I believe they got significant endurance and range benefits from the more efficient turboprop setup.

You know, I don't really know how the direct-cycle engine would compare to a nuclear-powered turboprop. I know the Navy very briefly looked at them for a flying boat, but got shut down by the Air Force.

The nuclear option was not at all a direct air cycle; rather it was molten metal (a sodium mix, IIRC) in a double loop: the interior reactor core loop would, through a heat exchanger, transfer its heat to a second loop (to isolate radio-activation of the primary loop substance) running outside the shielded reactor core to the heat-input section of a more or less standard turbojet, where the liquid metal would be cooled in lieu of fuel combustion. The heated air would first drive the turbine powering the turbo-compressor and possibly--likely, actually--a fan, with the exhaust then completing the thrust, as in a normal turbofan.

That's the indirect-cycle engine, which was the other competing approach in ANP. GE developed the direct-cycle engine, and Pratt & Whitney did the indirect-cycle. The indirect-cycle has greater long-run potential, because the reactor can be denser and therefore lighter, but it takes more work to get it working. You don't necessarily need to use liquid metals, either - pressurized helium is also a possibility.

In fact, as a safety measure this study proposed that this engine be designed bimodal, using ordinary jet fuel (presumably synthesized per the study's premise) for takeoff and landing, only cruising on nuclear power with the kerosene flow turned off.

That's a standard feature in all of these designs. Not just for takeoff and landing, either: it was going to be used as an afterburner to give extra speed during bombing runs.

The reactor core would supposedly be robust enough to survive a crash without cracking!

NASA actually tested padding for aircraft reactors in the 60s using rocket sleds. I'm not sure I believe their claims that they could survive an impact at supersonic speeds into granite.

So really I'm not quite so sure why we are agreeing that a nuke powered jet could not be supersonic. Perhaps because Pluto or any other version running hot enough to produce necessary supersonic thrust on any scale would necessarily be dirty, and also much too "hot" in its core radiations for flight crew to survive on it or operate it on anything but a kamikaze basis?

The reason PLUTO could reach supersonic speeds was because it didn't need a radiation shield. That's where most of the weight of the reactor is. If you're okay with your airplane being not just unmanned, but dangerous for manned aircraft to approach, a nuclear engine can absolutely go supersonic.

Clearly it would be possible for the intake air to be handled even at supersonic speeds. Clearly, barring some ATL breakthrough in making very small reactor cores, it would have to be scaled up to a massive airplane and making a plane both of unprecedented size and capable of supersonic cruise might indeed be too daunting a task, especially if one had any intention of ever safely landing it!

Even with subsonic cruising speeds, the logic of nuclear engines pushes you towards aircraft weighing 10 million pounds or more, because the thrust/weight ratio gets better the bigger the engine is. This is because the weight of the reactor is dominated by the shielding, which is proportional to the surface area - but the power is proportional to the volume. There's a breakeven point somewhere around a few million pounds where the nuclear engine starts to outperform the conventional one, at least in terms of thrust/weight. The exact weight depends on your scenario assumptions (range, safety margin, etc.).
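The square-cube argument here can be made concrete with a toy scaling model (the constants below are made-up illustrative values, not taken from any study): if shield weight scales with core surface area (r²) while power scales with core volume (r³), the power-to-shield-weight ratio grows linearly with size.

```python
# Toy square-cube scaling for a shielded reactor core.
# k_power and k_shield are arbitrary illustrative constants, chosen only
# to show the trend, not drawn from any real design data.
k_power = 1.0    # power per unit of r^3 (core volume)
k_shield = 10.0  # shield weight per unit of r^2 (core surface area)

for r in (1, 2, 4, 8):
    power = k_power * r**3       # scales with volume
    shield = k_shield * r**2     # scales with surface area
    print(f"r={r}: power/shield-weight = {power / shield:.2f}")
# Doubling the core scale doubles the power-to-shield-weight ratio,
# which is why the logic pushes toward enormous aircraft.
```

Somewhere along that curve the nuclear engine's thrust/weight crosses the (roughly scale-independent) thrust/weight of a chemical engine plus its fuel load; where exactly depends on the assumptions, as the post says.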

Of course, rebuilding airports so they can handle 10-20 million pound monster planes is going to be quite the construction project. But if the government is run by atomic technocrats who love ginormous infrastructure projects anyway... We really, really can't trust the cost figures on these studies, given that they're describing hardware that is utterly dissimilar to anything that has ever been built, but some of them claimed that a big enough atomic airplane could compete on cost with trucks, at least over a long enough distance.

But under those conditions as I say, would the rational designer aim for direct heating of air by running it right through the reactor core?

GE went for the direct-cycle because they could build it fast(er), and they wanted to get something flying before the project lost its political support. Literally everyone else exclusively considered indirect-cycle approaches, for precisely the reasons you outline, plus the fact that they could have higher thrust/weight due to having denser cores, and therefore lighter radiation shields (less surface area).

Someone in the Air Force has to recognize, around the late 50s, that a giant "turboprop" is useful to the Air Force mission. Too bad the glory is aimed at supersonic bombers--though I suspect you do dismiss the supersonic option a bit too definitively. A subsonic plane like the B-52 would clearly seem to have its days numbered and of course a mega-plane is going to be made in smaller numbers, hence become a high priority target for Soviet defenses.

There were people in the Air Force who realized that. The problem was that there weren't enough of them, and they were in the wrong offices. But I think it could definitely have happened, given the right circumstances.

With the Air Force backed into a corner, not having the fall back of owning the ICBMs (say a Congressional clique is duly impressed with the Army's work and authorizes von Braun to develop the prime ICBM contract, justified by the argument that missiles are basically artillery and not aircraft) some officers come up with a compromise, and suggest that the Air Force, like the Navy, be entrusted with an alternate system that is protected not by silos but by mobility and stealth--air launched long range missiles.

Now that is an interesting idea.

Why an Atlas? Because the Air Force was procuring it OTL, and it is so light that it can deliver a warhead to a target while massing all up not much over 100 tonnes. The fragility of the missile is less of an issue since it will be cradled within the fuselage of the plane until launched.

I'd suggest a Skybolt or something else with solid fuel. Liquid fuel is difficult enough on the ground.

Hey, for that matter might it be possible for such a nuclear plane to include an on board LOX generation facility?

That I don't know - how heavy would that be?

1) impossible for enemies to preemptively destroy. This might prove unnervingly less obvious after something like STARFISH, revealing the vulnerability to EMP. But in the conceptual stage that might not be anticipated, or it might be and means of shielding against it included in the design. Short of a hemispheric area bomb, how are airplanes flying at semi random locations on racetrack paths hundreds or thousands of miles long in the stratosphere going to be located and targeted independently? Even if the enemy has spy sats that can identify each plane by visual signatures and relay their exact flight paths in real time, what sort of missile can be launched that could locate, track and home in on each plane to get it preemptively? And would not any launch to try that give the US so much warning that the missiles could be launched even if the planes are doomed?

One thing I was considering for the TL was to have the Soviets be seen deploying an analogue to our SOSUS network - call it SUSOSUS. It probably doesn't actually give them the capability to sink all our boomers, but it will give the Air Force the political cover to claim it will, which should help enormously with procurement.

Now, do I think the USAF could have the subsonic version of these babies deployed before 1965, with the supersonic ones coming on line by 1970? I don't know if the actual deployment would happen, but serious R&D money might be spent on them before they finally do get cancelled. And they'll be cancelled only if the Air Force is granted, or settles for, an alternate strategic nuclear role to supplement conventional bombers.

In the TL, my plan was to have a nuclear X-plane flying by 1960, which I think is reasonable - we basically knew how to build one by 1960 IOTL, but we had soured on the idea by then. The first nuclear-powered squadron is declared operational some time around 1965.

Also, even if the missile carrier version is ultimately judged not cost effective, it could be that nuclear powered airframes that derive from the missile fleet study but are repurposed to other roles may be built and flown--as AWACS, as super-transports, as airborne refueling tankers, conceivably as attack plane carriers (the latter allowing USAF assets to be deployed to distant unprepared theaters and fight for air supremacy there so ground bases can be established).

This is precisely what I had in mind. And one role I would suggest for such aircraft carriers, one that would appeal ITTL, with its nuke-happy attitude, much more than IOTL: carrying interceptor aircraft for air patrols over the Canadian Arctic. Much harder to take out than a fixed airbase.

Gas-turbine reactors seem likely to have a limit on how small they can scale as practical power reactors. We would not be talking about something that could power, say, a single tank, would we? (Unless said "tank" is a Maus or some other hyperbolic dream from a Japanese cartoon, anyway!)

Probably not. The power/weight ratio gets really bad when you're that small, because of the weight of the shielding.

I gather that smaller cores are achieved by "burning hotter," by fissioning a greater percentage of the remaining available nuclei per second. Therefore these cores would reach EOL sooner. I daresay they might burn more efficiently, pushing the envelope of depletion farther before being too depleted to sustain operations. And with smaller cores it is easier to replace them.

If you're referring to SMRs, the current plan is that they will have roughly the same power density - or lower - as existing Gen-III designs. An AP1000 - the best Gen-III reactor on the market, at least judging by its sales record - puts out about 1,000 MWe. An SMR will typically put out about 100 MWe, but ten of them will be cheaper than the AP1000, because you can mass-produce them on a factory line. That's the hope, anyway; who knows if it will work.

Safety is something you'd have to explain very carefully. I notice in later posts you say RBMK had no containment for instance--but it did, it just wasn't a dome like Western PLWRs tend to have. It got blasted apart of course! But would a Western style dome over the core have contained the blast any better?

I grant that RBMK appears to have had design features posing risks we would not allow--but really, isn't claiming a nuke plant is safe like claiming a rocket design is sure to launch without incident? Isn't there always a risk of some sort of failure, inherent in making the design capable of the useful performance you ask from it, and isn't it always asking too much to design against every conceivable failure mode?

It's true that absolute, 100% safety is not a thing that can ever be achieved in this world. But I think that - if we continue to pursue this technology - we will eventually get to the point where the probability of an accident is low enough that it will not occur in the lifespan of Earth. The way we'll get there is by iterating through designs, getting a bit safer at each generation, until eventually we reach a point that really is safe for any reasonable definition of safe.

In particular, there are designs that appear to be immune to loss-of-cooling accidents. That is the great achilles heel of the LWR: the fuel continues emitting heat for some time even after the reactor is shut off, due to radioactive decay from the fission products, and if that heat is not removed, the core will melt. That's what happened at Fukushima. But it is possible to design reactor cores where natural forces will disperse the heat safely - such reactors exist, they're used for training reactor operators. The problem is that that forces other design choices that make it hard to make the reactor economical - none of those training reactors produce enough energy to be worth bothering with as power reactors. But, in principle, there's no law of physics that says it can't be done, and some of the SMR designers claim they've done it... But, at least for some of them, their business case rests on the claim that the NRC won't require as much paperwork if their design is intrinsically safer, so they can accept a lower power output. I don't believe the NRC will go along with that, because politically, they have no incentive to, and lots of incentive not to.
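To put rough numbers on that decay heat, here's a quick sketch using the old Way-Wigner rule of thumb -- an approximation good to maybe a factor of two, and the 3,000 MW-thermal core is just an assumed round number for a big LWR:

```python
# Way-Wigner approximation for decay heat after shutdown:
#   P/P0 ~ 0.0622 * (t^-0.2 - (t + T)^-0.2)
# where t = seconds since shutdown and T = seconds of prior full-power operation.

def decay_heat_fraction(t_s, operating_s=3.15e7):  # default: ~1 year at power
    return 0.0622 * (t_s ** -0.2 - (t_s + operating_s) ** -0.2)

P0 = 3000.0  # MW thermal, a big LWR (roughly 1,000 MWe)
for label, t in [("1 minute", 60.0), ("1 hour", 3600.0), ("1 day", 86400.0)]:
    f = decay_heat_fraction(t)
    print(f"{label:>8} after shutdown: {f:6.2%} of full power = {P0 * f:5.0f} MW")
```

Even a day after shutdown the core is still pushing out megawatts of heat, which is why the cooling has to keep running long after the chain reaction stops.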

That said, if safety is defined on a per-plant-year basis, and we are running far more plants, the same degree of safety translates to a lot more accidents and accidental releases over the decades. (Not to mention an order of magnitude or more of accumulated atomic waste.) To keep public contamination down to OTL levels, then, we need not equivalent but superior standards of safety.

At some point I am going to post my explanation for why the ATL can, on a scientific level - as opposed to a political level - accept lower safety standards. And why, if we're really worried about climate change, we should be willing to at least consider it - short version is that we don't actually know how dangerous atomic accidents are, and there's a good argument that they're not nearly as dangerous as we think. But this post is already too long.

were accurate, as was the part about shutting off the safety controls and attempting something quite ill advised considering the basic design.

I've not read Pohl's book, but this is basically accurate as far as I know.

Now, I have also read of approaches to controlled reaction that avoid some of these pitfalls. For instance, it is apparently possible to engineer in negative feedback to counter the inherent positive feedback: if some or all of a core gets too hot, the neutron absorption cross-section of non-fissionable materials rises sharply enough to naturally check the surge. Perhaps this also means that, vice versa, if power levels drop and the core cools, the reaction is easier to raise at will.

This negative feedback isn't just possible: it is why what happened at Chernobyl really can't happen to a light water reactor. A light water reactor is self-stabilizing: if the reaction runs out of control, the water heats up and expands. Since the water molecules are further apart, this reduces the moderation of the reactor, which slows the reaction back down. This is considered so fundamental to reactor design in the West that the NRC will not approve any reactor that does not have this, or an equivalent, feature. And this is why LWRs were regarded as safer back in the 50s - it's possible to engineer this into other designs, but it takes a bit of work, whereas the LWR gets it "for free".

Whereas in the RBMK, the combination of graphite and water meant the reactor was over-moderated. So when the water heated up, and the moderation reduced, the reaction increased... And the rest is very painful history.
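The difference between the two feedback signs can be shown with a toy iteration. This is purely illustrative -- not real reactor kinetics, and the coefficients and time constants are made-up numbers -- but it captures why a negative coolant feedback coefficient is self-stabilizing and a positive one runs away:

```python
# Toy power/temperature feedback loop (arbitrary units, nominal power = 1.0).

def run(coeff, steps=60):
    """Iterate a crude feedback loop; all constants are arbitrary."""
    power, temp = 1.2, 1.2  # start 20% above nominal
    for _ in range(steps):
        power += coeff * (temp - 1.0) * power * 0.1  # reactivity feedback
        temp += (power - temp) * 0.5                 # coolant temperature lags power
        if power > 100.0:                            # call it a runaway and stop
            break
    return power

print(f"negative coefficient (LWR-like):  {run(-1.0):.3f}")  # settles back near 1.0
print(f"positive coefficient (RBMK-like): {run(+1.0):.1f}")  # grows without bound
```

Same disturbance, same loop; the only difference is the sign of the feedback coefficient.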

Our reactors have other problems - the LoCA, primarily - but they really are no comparison to the RBMK in terms of safety. Fukushima, Three Mile Island - those can legitimately be held against the Western nuclear industry. Chernobyl can't.

Various other designs you hinted at, such as aqueous reactors (I gather this means dissolving fuel and moderator together in water solution, so that control is a matter of managing the solution and of solid or otherwise concentrated moderator controls), might offer a combination of inherent safety with profitable and compact reactor design, maybe. Such a design would certainly make the job of purging fission daughter products and neutron-activated contaminants more straightforward, at least in principle--let gases bubble out, use chemistry to filter out the worst contaminants.

Yeah, an aqueous reactor is basically a giant pot full of water with uranium salts dissolved in it. They're extremely stable - like light water reactors but more so. But they have a bad tendency to eat giant holes in the reactor vessel - one prototype at Los Alamos had to be literally gold-plated.

These days, interest in liquid-fuel designs focuses more on molten salt fuels, where the reactor is a giant pot full of a molten fluoride salt containing uranium. On paper, these have enormous potential for better safety and economics. They're extremely self-stabilizing - like LWRs but more so - and they allow you to use various tricks that you can't use in solid-fuel reactors, like dumping the fuel into a holding tank if something goes wrong, where it's easier to keep cool. Most of all, though, you can use online reprocessing: remove the fission products and other radioactive nastiness from the fuel mix as it's created, so there's just less nasty stuff in there to get out if something does go wrong.

The downside is that they're very different from anything currently in operation. A few prototypes were built back in the day, but the last shut down more than forty years ago. There's a lot of exotic chemistry involved, and it's going to take a huge amount of work to commercialize these - even more than other alternative designs - and because they're so different from anything we've built lately, we just don't have a good handle on the risks, economic or safety.

Another design that caught my interest some years ago was a German proposal for forming fuel and moderator into small prefabricated spherical pebbles and dropping them into a hopper; when enough fresh pebbles are on the pile, it reaches critical density and the pebbles get hot. The thing was a version of gas cooling, with helium flowing through the heap of pebbles (being spherical, if they do not physically crumble or melt, there is a fair amount of volume for the helium to filter through), getting heated, and driving a gas turbine.

The neat thing here is that once running, you get a natural gradation of the pebbles by height, the newest and freshest being on top, where the main reaction occurs, the expended ones on the bottom. A valve allows pebbles to be removed, one by one, from the bottom and taken away for reprocessing or disposal, while adding new ones on top sustains the reaction at the desired level. Thus there is no need to periodically take the reactor offline to remove the core and replace it with a fresh one.

Another feature was a metal damper substance, liquid at normal reactor temperature, held above the hopper by a diaphragm of a higher-melting metal. Should the reaction run out of control such that merely emptying the hopper from the bottom is not adequate to check it, temperatures would rise and, at a point still below danger to the hopper walls, the safety diaphragm would melt and dump the damper right on top of the pile; presumably the liquid damper would sink in through the interstices and shut down the reaction definitively. This creates, of course, a nasty blob of fuel pebbles in a frozen matrix of damper metal, which would have to be removed and broken down carefully, and the damper scraped or otherwise removed from the cold hopper walls; perhaps removing the blob would crack or otherwise undermine the hopper walls and they would need to be replaced. But this is an emergency measure, after all.

Unfortunately, the Pebble Bed Reactors seem to have faded. I gather they discovered that the pebble bed had a bad tendency to get clogged so the helium couldn't properly circulate.

Clearly the Air Force's OTL plan for a direct air-cooled reactor would take ambient air, presumably pressurized by turbocompressors as in ordinary jet engines, blast it through the reactor core directly, and exhaust it either through a turbine or directly as jet exhaust, with no need to worry about recooling the working fluid for another cycle--that is accomplished for free by the atmosphere, and the hot exhaust is left behind in any case. With this design, neutron activation of air molecules seems inevitable even though the exposure is brief, and any seepage of fission daughter nuclei or neutron-transmuted nuclei would wind up in the atmosphere.

Fortunately, it's hard to neutron-activate air. Dust would be a problem, though, as would fission product leakage from the fuel elements. But I'll discuss this if I ever get around to my post on radiation safety. For a civilian power reactor, though, you definitely don't want to use air cooling.

But it still leaves open the question--why do you think these engines would on the whole work out cheaper to make and operate than OTL PLWRs, and would this include safety features at least equivalent to theirs and, to keep net contamination levels down, if at all possible superior?

We don't know for sure. Safety-wise, I think gas-cooled reactors - assuming we're talking about civilian power reactors with closed loops - should be at least as safe as LWRs, possibly safer. They're more resistant to loss-of-cooling accidents because all that graphite absorbs a lot of heat, giving you more time to restore cooling. And they can be engineered to be self-stabilizing, though it's not as easy as with LWRs. Overall, though, I think they should be roughly similar.

Economically, I think they have a few advantages. Although the first generation will probably use a secondary steam cycle, they can eventually be engineered to use a gas turbine, which should cut the capital cost by a lot. And they run hotter, so they have better thermal efficiency. Plus the SMR concept, though that's not unique to gas-cooled reactors.

Finally, I was also going to work on the structure of the nuclear industry in the US. I think there's a lot of things that could be done to make nuclear power cheaper just by changing how the industry is organized - and I'm not talking about loosening safety regulations, though I was going to do that, too.

Still, though, there's really no way to know for sure without building a dozen of them. But it's plausible enough for fiction.

I'd think these cores in airplane jet engines would be designed to burn up a lot of fissionable material relatively fast and require frequent replacement, perhaps being designed for one nominal mission, with the cores removed after each landing, shunted off for refurbishment or disposal, and new ones inserted. Redesigning for closed-cycle gas generators might therefore mean heavier cores that burn at a lower percentage rate, for monthly or annual replacement instead. Still, a lot of fresh cores full of highly refined nuclear fuel are going to be circulating around any nation using them, at risk of being hijacked or stolen through chicanery for bad actors to abuse.

Oh, definitely. The civilian power reactors would use low- or mid-enriched fuel, but the Air Force jets run on weapons-grade. Which is a terrible idea in the real world, but would make for good fiction.

And it should be noted that even then, it's really hard to turn one of these things into a bomb. It can be done, but it takes a lot of chemistry, and therefore time and equipment. Not something you can do in a garage.

What can be done for lightweight aircraft reactor cores to make them, too, immune to shattering and scattering their hot innards to the four winds?

Not a whole heck of a lot. The Air Force basically planned to just not fly them over cities - different times and all that. The Air Force wasn't even planning to use full shields - the reactor would only be shielded in the direction of the crew, to save weight, so flying too close to one of these planes while it's running would give you a very nasty sunburn.

With the indirect-cycle designs, there's a lot more that can be done. It's still hellishly risky compared to anything we'd allow IOTL, though. But I'll cover that if I ever get to the post on radiation I keep promising.

And, to be honest, I'm just incapable of being objective on the subject of atomic airplanes. I love them too much.

What sort of containment would gas-cooled power reactors need? Can they be designed for inherent, assured shutdown if the gas loop is breached?

IIRC - and it's been a while since I looked at this, so I may be misremembering - the thickness of a gas-cooled civilian power reactor's containment is roughly the same as an LWR's of equivalent size. They're both running under high pressure.
 
I'd like to absolve the author of any obligation to use Orion nuclear pulse drives if he is not so inclined. Usually people object to Orion on the grounds that the radioactive fallout released is just too irresponsible, and the gung-ho types come back and say we are being wimps. Otherwise both sides tend to assume it is a done deal, a sure thing. While I've often been attracted to the idea of an Orion launch vehicle, the fact is, we don't know for sure that it was ever going to work! We can write a TL on the presumption that it would have, but we'd be guessing. There remain a lot of unanswered questions about whether a feasible plate design would in fact survive a series of blasts powerful and numerous enough to put something the mass of a destroyer or larger into orbit. The problem of reliably placing the charges at a rate of several times a second, or even once every several seconds (the latter low frequency really pushing the limits of feasibility for other reasons), is also rather speculative. Are the charges isotropic, producing a blast symmetrically over a sphere? Then the process is somewhat inefficient, requiring bigger blasts and lowering the practical Isp we estimate quite a bit. Do we use shaped charges for better efficiency? Now we have to not only put the bombs in the right place, but also point them just the right way!

Quite aside from the fallout issue, which I take quite seriously, then, we have the question of whether it will work in the first place!

Regarding nuclear thermal rockets--there is an interesting compromise called LANTR to consider. In general, the higher the Isp of a rocket, the less propellant mass we need to achieve a given delta-V. What is often forgotten is that it is harder to do this: the faster the exhaust (Isp is an index of that), the more kinetic energy each kilogram of exhaust carries, and that goes as the square of velocity. It works out that for a given thrust, total power required goes linearly with the Isp. So if we have a nuclear thermal engine that exhausts hydrogen with a vacuum Isp of 1200 seconds (that is, almost 12 kilometers per second) and produces the same thrust as 3 Space Shuttle Main Engines (to pick an example) in vacuum--6510 kilonewtons, or a bit over 664 metric tonnes of thrust--it will need about 2 2/3 times the power imparted to the exhaust, versus the useful energy released by combustion in the SSME combustion chambers. Both actually produce still more thermal power that cannot be harnessed usefully, due to fundamental thermodynamics and also non-ideal design features. But rocket engines tend to be pretty efficient as heat engines go, so the extra thermal flux actually needed will probably be a relatively small multiplier; multiplying thrust by exhaust speed gives a good index of exhaust kinetic power (it is actually twice the true jet power, since kinetic energy goes as half mv squared, but the factor of two is the same for every engine), especially since the multiplier would be consistent across all rocket designs in a given state of the art.
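To make the power comparison concrete, here is the thrust-times-exhaust-speed index worked out in a few lines. Remember it is twice the actual jet kinetic power, but consistently so for every engine, so the ratios are what matter:

```python
G0 = 9.81  # m/s^2, standard gravity

def power_index_gw(thrust_n, isp_s):
    # thrust * exhaust velocity; twice the true jet power, but a consistent index
    return thrust_n * isp_s * G0 / 1e9

THRUST = 6.51e6  # N, roughly three SSMEs in vacuum

print(f"chemical, Isp  451 s: {power_index_gw(THRUST, 451):.1f} GW")
print(f"nuclear,  Isp 1200 s: {power_index_gw(THRUST, 1200):.1f} GW")
print(f"power ratio: {1200 / 451:.2f}")
```

That ratio, about 2 2/3, is where the "same thrust needs 2 2/3 the power" figure comes from.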

Thus, the chemical power released by the three SSMEs will be in the ballpark of 29 gigawatts, whereas the hydrogen thermal rocket at Isp 1200 would be over 76 GW. That's an index of the thermal power output of the nuke plant, and also an index of the neutron and gamma radiation flux it puts out. The chemical rocket of course puts out no radiation, and its energy is released in the combustion chamber; the NTR needs a chamber of sorts too, after the nuclear core, to guide the flow into an ideal sonic-speed nozzle throat. The nuke also needs the core itself and shielding for it, and these will be pretty massive, so a huge strike against NTR, at least for launch purposes, is low thrust-to-weight ratio. And it is operating at nearly 3 times the power density, hence three times the temperature!

This, by the way, is why we don't want to use anything other than hydrogen for the working fluid. The NTR core transfers heat in accordance with gas laws--using the ideal gas law as an approximation, at the same rate for a given molar density. Every working fluid other than molecular hydrogen masses more per particle (monatomic hydrogen aside, but that only exists in bizarre circumstances, such as in a very hot engine--and there useful power is lost to dissociation energy, so ignore it), with the runner-up, the monatomic helium atom, being twice the mass. It follows that each heavier molecule picking up the same energy moves at the inverse square root of its molar mass relative to H2's speed: helium would be going at 70 percent of the speed, an Isp of 848.5 sec. It would also put out 40 percent more thrust at the same power, in accordance with the inverse relationship between Isp and thrust for a given power level. But in terms of maximizing Isp, and hence minimizing the reaction mass needed for a given impulse, we are losing.

If helium stored much more densely than hydrogen and were not far more expensive to acquire and difficult to handle, we might consider compromising on it, but the only substance more of a pain to work with than liquid hydrogen would be liquid helium! And it is bloody expensive. Going up the periodic table, we quickly find few molecules less massive than water, which stores pretty densely and is otherwise very convenient to work with; but using water instead of hydrogen would knock the Isp down by a factor of 3, to 400 seconds, inferior to existing hydrogen-oxygen chemical engines. We do get convenient storage and 3 times the thrust for a given power output--in this case we could get by with a reactor of 26 or so GW for the same thrust--but that reactor is running just as hot at the same pressures. All this ignores any chemical reactions water might exhibit at these temperatures, too. So, hydrogen it is.
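The propellant comparison is just an inverse-square-root scaling with molar mass. A two-liner, using round mass numbers (2, 4, 18) as in the text:

```python
from math import sqrt

ISP_H2 = 1200.0  # s, the assumed hydrogen NTR vacuum Isp from above

def isp_for(mass_number, h2=2.0):
    # At a given core temperature, exhaust speed scales as 1/sqrt(molar mass)
    return ISP_H2 * sqrt(h2 / mass_number)

for name, m in [("hydrogen (H2)", 2.0), ("helium (He)", 4.0), ("water (H2O)", 18.0)]:
    print(f"{name:14} Isp = {isp_for(m):6.1f} s")
```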

Unfortunately, liquid hydrogen is quite difficult to work with, with a density so low that 14 cubic meters are required to store a tonne of it. Consider the Space Shuttle External Tank. It massed a bit under 30 tonnes dry (later versions got that down by 4 tonnes or more, but it is in that ballpark anyway) and contained about 724 tonnes of propellant, oxygen and hydrogen in a 6:1 mass ratio. In volume terms, though, the ratio ran the other way, at about 1:2.68, with almost 3/4 of the volume devoted to hydrogen. If we were to fill both compartments of the tank with hydrogen alone, we'd divide the whole mass contained by 5.1, reducing it to 142 tonnes. The tank in this case might be lightened quite a bit by removing the intertank structure, which would also free up more volume, but I suspect that would also weaken it structurally and we might need to beef it up elsewhere, so let's stick with the same mass and volume. Thus on the Shuttle the ratio of tank mass to propellant contained was 4.14 percent (which is very good, considering we are containing so much hydrogen), but filled with pure hydrogen it rises to 21.13 percent--pretty dismal.
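The tankage arithmetic spelled out (same round figures as above; the 5.1 divisor is the bulk-density penalty for filling the whole tank with hydrogen):

```python
TANK_DRY_T = 30.0   # tonnes, Shuttle ET dry mass as quoted above
PROP_T = 724.0      # tonnes of LOX + LH2 at roughly 6:1 by mass

mixed_fraction = TANK_DRY_T / PROP_T   # tank mass / propellant mass, as flown
h2_only_t = PROP_T / 5.1               # same volume, filled with pure hydrogen
h2_fraction = TANK_DRY_T / h2_only_t

print(f"LOX/LH2 load: tank is {mixed_fraction:.2%} of propellant")
print(f"pure-LH2 load ({h2_only_t:.0f} t): tank is {h2_fraction:.2%}")
```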

Now let us imagine we have made an NTR engine that can match the thrust of 3 SSMEs at sea level and in vacuum. I have little idea how massive such an engine must be, nor what in absolute terms its radiation output and hence necessary shielding mass would be, and I am not sure whether we could accomplish this by pumping the hydrogen to the same pressure as in the SSME, which was bloody high at around 200 atmospheres. The SSME needed that pressure to achieve decent sea-level performance; if it had been designed to burn only at high altitude, it could have gotten away with much lower pressures. Since the hydrogen has 1/9 the molar mass of the SSMEs' water exhaust, I'm not sure even 200 atm would be sufficient for a similar ratio between vacuum and sea-level performance; for the historic SSME it was about 5000 kN at sea level versus 6510 in vacuum, or 77 percent, with Isp reduced in proportion to 346.4 sec--which is still better than kerosene rockets in vacuum, to be sure. Anyway, say we've got it and it performs that well: the same thrust as 3 SSMEs and a sea-level Isp of 921.7 sec.

I did an exercise where I compared what the Shuttle historically did put into orbit--a 30 tonne dry tank and a 125 tonne Orbiter--versus what an ATL NTR Shuttle with the same solid boosters ought to be able to put up. I won't show all the work; it is all based on what total initial mass (not at launch, but at SRB shutdown) a given mass of reaction material can put through the same velocity change that the OTL Orbiter-tank combo accomplished, which I estimate at 6740 m/sec. For the Orbiter with SSMEs it is of course 155 tonnes, the Orbiter plus tank. For an ATL Orbiter sporting an equal-thrust NTR using pure hydrogen, I find that the all-up mass at launch is low enough that we don't need to ground-light the nuclear engine for its 500-tonne thrust, so sea-level thrust and Isp are academic; we can design the engine for low-pressure performance and light it only after SRB separation. With that assumption we have 142 tonnes of hydrogen for a 1200 sec Isp engine, which means it can mass 326 tonnes at NTR ignition and thus place 184 tonnes into orbit, 30 of that the tank. We gain some 30 tonnes for the Orbiter, and face the question of whether the total mass of the NTR is more than 40 tonnes, which would leave us with the same mass for the rest of the Orbiter and cargo. Thus, despite a tremendous reduction of mass on the pad and 30 more tonnes available for the Orbiter, it is six of one, half a dozen of the other versus the legacy 1981 STS system--unless the NTR is in fact much lighter than 40 tonnes all up (3 SSMEs massing about 10 tonnes).
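For anyone who wants to check those figures, the Tsiolkovsky arithmetic (the 6740 m/s and the 142 tonnes of hydrogen are the estimates from above, not gospel):

```python
from math import exp

G0 = 9.81
DELTA_V = 6740.0  # m/s needed after SRB separation, as estimated above

def mass_at_ignition(prop_t, isp_s):
    """Initial mass m0 such that burning prop_t tonnes yields DELTA_V.
    Tsiolkovsky: m0/m1 = exp(dv / (Isp * g0)), with m1 = m0 - prop_t."""
    r = exp(DELTA_V / (isp_s * G0))
    return prop_t * r / (r - 1.0)

m0 = mass_at_ignition(142.0, 1200.0)  # hydrogen-filled ET, Isp-1200 NTR
to_orbit = m0 - 142.0
print(f"mass at NTR ignition: {m0:.0f} t")
print(f"delivered to orbit:   {to_orbit:.0f} t ({to_orbit - 30.0:.0f} t beyond the tank)")
```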

But now look. NASA has considered the option of taking a hydrogen-emitting NTR and running that exhaust, with all the energy added by the nuclear core, into a second chamber where the super-hot hydrogen is combusted with oxygen. The outcome is about half the Isp, but with mass flow greatly increased by the oxygen. Suppose for the moment the oxygen is in the same 6:1 ratio as with the SSMEs: we'd combine 3/4 of the hydrogen with oxygen to get water molecules (conceptually), with the remaining hydrogen left free. Generally speaking, hydrogen-oxygen chemical engines do burn quite fuel-rich; the highest theoretical Isp is achieved at a 4:1 ratio, half what we'd want to burn all the hydrogen completely (a stoichiometric burn). No one does 4:1 because the tanks would be very bulky. I suspect with a LANTR we'd want to go closer to stoichiometric, but leaving it at 6:1 lets us compare using the same Shuttle ET. At that ratio, for every 4 kg of hydrogen the nuclear engine puts out, we get 3*9+1 kg of exhaust, or 28 kg--7 times the mass. It comes out at about half the speed the NTR alone could manage, an Isp of 600. Note that this is substantially better than the SSME's 451! With the same hydrogen flow from the same reactor, but combined with oxygen and the chemical energy released thereby, we have 3.5 times the thrust at half the Isp, because we are expending 1.75 times the power. In this case my estimate of mass to orbit is modified, as with the real-life STS, to have the LANTR engine fired on the ground, because the mass on the pad is now higher than STS, due to greater payload, and we need some extra thrust. The nuclear/chemical engine must produce some 650 instead of 500 tonnes of thrust on the ground. But since the NTR we wanted for all-nuclear operation would have its thrust multiplied by a factor of 3.5, and we only want a 1.3 increase, we scale the whole thing down by a factor of 2.7, to 37 percent.

The mass flow during the 120-second boost from the full STS ET is now 1.438 tonnes per second, about the same as for the chemical STS, so at separation of the boosters there are only 551.5 tonnes left in the tank. Based on that tonnage of propellant and the Isp of 600 sec, the total mass at separation is 803 tonnes, and 252 arrive in orbit. Take away 30 for the tank and we have a payload-plus-engine mass of 222 tonnes. We come out ahead some 68 tonnes versus the pure-hydrogen, pure-nuclear option; on the pad the mass was 2156 tonnes, about 106 more than a chemical STS. At 222 tonnes left after tank ejection, we also orbit 97 more tonnes than chemical STS, close to a doubling--nearly 78 percent added mass.
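Running the same Tsiolkovsky arithmetic for the LANTR case (this lands a few tonnes above the figures quoted above -- rounding along the way, most likely -- but the shape of the result is the same):

```python
from math import exp

G0 = 9.81
DELTA_V = 6740.0  # m/s after SRB separation, as before

prop_left = 724.0 - 120.0 * 1.438   # tonnes left after the 120 s SRB phase
r = exp(DELTA_V / (600.0 * G0))     # mass ratio at Isp 600 s
m_sep = prop_left * r / (r - 1.0)   # total mass at SRB separation
m_orbit = m_sep - prop_left

print(f"propellant at separation: {prop_left:.1f} t")
print(f"mass at separation:       {m_sep:.0f} t")
print(f"to orbit: {m_orbit:.0f} t ({m_orbit - 30.0:.0f} t beyond the tank)")
```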

We'd be pretty idiotic to configure it just like STS, though. As with the Shuttle program, we definitely want to deorbit the nuclear engine and recover it--if not necessarily for reuse, then for security and environmental protection. Just having it burn up in the atmosphere with the propellant tank is environmentally nyet kulturny, to say the least; nor do we want the core to splash down intact and simply sink to the bottom of the Indian Ocean, because someone might find it worthwhile to retrieve it for evil purposes. So it is far more urgent than in the case of STS to retrieve the engine core and sequester it safely. An alternative might be to leave it some extra propellant and crash it into the Moon, I suppose, but that would really cut into the payload! Under these circumstances I think someone would stumble on the idea I had for reconfiguring STS tech to make it more cost-effective: make a separate capsule or recovery vehicle for the engine and have it return to Earth after a once-around orbit, leaving the remaining mass launched to go its separate way in orbit.

Now we need to estimate a reasonable mass for the two types of engine. But notice this--compared to the pure-hydrogen, all-nuclear engine at its highest Isp, the hydrogen flow we need, even though we had to kick the thrust up some 30 percent, is just over a third. That means whatever scale of atomic rocket we needed for 500 tonnes of thrust at launch, now we need 241 tonnes' worth. The power of the nuclear plant is down to 28.5 GW. But we're getting 650 tonnes of thrust instead. Much of the power is chemical, after all.

Compared to the pure nuke we also produce less than two-fifths of the radiation flux.

Project Timberwind, a secret program under SDI, claimed to be able to develop NTRs with a thrust-to-weight ratio of 30:1. At that rate, a 6510 kN engine equivalent to a set of three SSMEs would mass about 22 tonnes. Going back to the NTR example above: if it arrives in orbit with 154 tonnes beyond the tank, we'd have 132 beyond the engines, or compared to STS, a gain of about 17 tonnes. For the LANTR version, however, the core masses just under 8,200 kg! To be sure, that is hardly all. We want 1.3 times the thrust of the 3-SSME set, and the chemical portion operates at about 1.77 times the power density, so I'd scale the mass up, relative to a nominal 10 tonnes for the SSMEs, by 1.3*1.8 or roughly 7/3, for 23 tonnes--or with the core, about 31.6. Note this may be an overestimate, since only the combustion chamber and nozzle actually need to be beefed up--the pumps are moving essentially the same mass and require essentially the same power! And we might simplify them quite a bit by tapping a feed from either the nuclear core or the combustion chamber to drive a single turbopump instead of the SSME's very complicated two-stage pump system. Then again, maybe not, if we want to achieve the high pressures the SSMEs required, and perhaps we need even more. But compared to the high total mass in orbit, 222 tonnes beyond the tank, even deducting 32 tonnes all up we come out with 190, compared to 115 for STS minus its engines--75 tonnes improved.
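The Timberwind claim cashes out like this, taking the 30:1 figure at face value (shielding very much not included):

```python
G0 = 9.81
THRUST_N = 6.51e6  # N, the three-SSME-class engine from above

def engine_mass_t(thrust_n, tw_ratio):
    # thrust-to-weight ratio -> engine mass in tonnes
    return thrust_n / (tw_ratio * G0) / 1000.0

ntr_t = engine_mass_t(THRUST_N, 30.0)
print(f"30:1 pure NTR:           {ntr_t:.1f} t")
print(f"LANTR core at 37% scale: {ntr_t * 0.37 * 1000:.0f} kg")
```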

Now I'm not sure how seriously to take Timberwind; it was a Teller-sponsored Livermore project for SDI, and one can never tell how credible those claims are. Also, as I said in the prior post, the intention was to launch SDI weapons to win a nuclear war, so the designers might have felt justified in running without much shielding and taking few precautions against release of the core materials into the environment: if they succeeded they'd be preventing Soviet H-bombs from blowing up their targets, so they'd set the fallout from a spent or failed Timberwind launch against the fallout they prevented from half-megaton bombs going off. I think we have to assume that these 30:1 thrust ratios do not include any shielding.

If my advice were followed in laying out a Shuttle-derived LANTR rocket, we'd put the 650-ton-thrust-at-sea-level nuke engine on the bottom of a modified STS-sized tank, with the two SRBs on the sides as usual, and the tank modified to put the payload on top, for an in-line SDV. That puts the payload, including any crew on a manned launch, a good distance from the nuclear engine, with (at launch) 724 tonnes of hydrogen-oxygen propellant in between.

Then the LANTR would be designed with upside-down flow--the turbomachinery and six chemical combustion chamber/nozzle sets would be set above a plug shield, with plumbing running around it to feed hydrogen into the reactor, where it would flow down through the core and then back up through six feed pipes to the separate combustion chambers, which would also have the oxygen pumped to them from the side. Actually it is worse than that: first we need to send all the hydrogen to cool the six chambers, then down to the reactor, then back to the chambers. But in so doing, most of the non-nuclear machinery except for the nozzles (which would be smaller than SSME nozzles, there being six of them) would be behind the plug shield.

This engine is mounted inside a conical capsule that can be sealed up after boost to form a sort of Apollo-CM-shaped capsule for entry. Unfortunately this puts the reactor at the top of the cone, but we can't have everything. Anyway, the ablative heat shield on the engine capsule would form yet more shielding, as would the engine set packed between them. The whole thing is mounted on the bottom of a fuel tank, with lines feeding hydrogen and oxygen into the low-pressure pumps through the sides of the engine capsule. Above this, the tank, which is disposable, and its propellant further absorb the radiation that gets through or scatters past the shielding (off the nozzles, mainly).
As soon as this engine has placed the whole stack, minus SRBs, into low Earth orbit, the payload on top separates, removing it from the close vicinity of the still very radioactive core. Upon reaching a suitable point to deorbit, either small reserves in the main fuel tank provide propellant for a brief burst of main-engine activity, or auxiliary rockets (whose propellant can serve as yet more shielding) fire to do it. The engine capsule blows all the feed lines off, separates from the tank, folds the nozzles in, and covers vulnerable parts with TPS to protect against reentry heating.

What would it cost to boost the tank and main engine set onto a path meant to collide with the Moon instead? Essentially we'd need some 3100 m/sec delta-v, on a mass that includes I don't know how much shielding. Say we need another 20 tonnes with the LANTR, which would mean the payload is cut by that amount--but that still leaves 172 tonnes for payload.

The tank/engine set would then mass 77 tonnes, and we'd thus have to reserve some 50 tonnes of propellant. That is worse than just taking 50 tonnes from payload: the total mass to orbit would be reduced by 10 percent, or 25 tonnes, and then the 50 deducted on top of that, for a 75-tonne deduction leaving just 97 for payload. In this case we need not design a return capsule for the engine, but we'd probably still have to use mass for the shielding.
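The 50-tonne propellant reservation can be sanity-checked with the rocket equation. A sketch, with the caveat that the effective Isp for this burn is an assumption of mine--the thread's ~50 tonnes implies something between pure-hydrogen and full-LANTR performance:

```python
import math

G0 = 9.80665  # m/s^2

def propellant_needed_t(dv_ms, isp_s, dry_mass_t):
    """Tsiolkovsky: propellant (tonnes) to give a dry mass the stated delta-v."""
    mass_ratio = math.exp(dv_ms / (isp_s * G0))
    return dry_mass_t * (mass_ratio - 1.0)

# 3100 m/s imparted to the 77-tonne tank/engine set:
pure_h = propellant_needed_t(3100, 900, 77)  # ~32 t at an assumed pure-H Isp
lantr = propellant_needed_t(3100, 450, 77)   # ~78 t at an assumed LANTR Isp
```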

If we have to return 50 tonnes to Earth, how much more capsule mass do we need? Well, whatever other mass we add, we need to add some 20 percent for heat shielding. Say we need 10 tonnes beyond the minimal engine-and-shield structure of 50, so 72 tonnes all up. We'd want a heat shield of about 144 square meters' area, so a disk 13.5 meters in diameter at the base should do it. Unfortunately the Shuttle fuel tank was only 8.41 meters in diameter, so damn. I suppose with thicker TPS it might be squeezed down to that diameter.
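The capsule sizing above is just percentages and disk geometry; a quick check, using only the numbers already in the paragraph:

```python
import math

# 20 percent heat-shield allowance on the 50 t structure plus 10 t of additions:
all_up_t = (50 + 10) * 1.2       # 72 t, matching the figure above

def disk_diameter_m(area_m2):
    """Diameter of a circular disk of the given area."""
    return 2.0 * math.sqrt(area_m2 / math.pi)

diameter = disk_diameter_m(144)  # ~13.5 m
```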

Taking 72 tonnes for the engine capsule from 222 tonnes leaves 150--which is still a heck of a lot more than a Space Shuttle orbiter fully loaded, whether with its engines removed or with the SSMEs included. We could design a supersize orbiter with no launch engines, a la a bigger Buran. But we don't have to; we can just use the 150 tonnes as payload, exceeding the Saturn V!

Or we could scale down. A lot! The SRBs were made with 4 segments, every one of which added to the total thrust for the same endurance. What if we made a 1-segment booster? It would look goofy, being very squat, but if we scale everything down by a factor of 4, we still wind up with a 37.5 tonne payload and an engine capsule massing 18 tonnes, with of course a nuclear core cut down in power output by that same factor of 4.

I'd like to get away from the solids completely but that is another rant entirely.

The point here is that LANTR is much better than a straight hydrogen nuclear rocket, for two reasons: we get more thrust from less nuke, and we get a better ratio of useful payload, thanks to the higher storage density of LANTR's propellant mix compared to pure hydrogen.
 
Ok, in a cultural sense it did, but what components were missing in OTL which together prevented the scientific revolution that so many popular mechanics magazines tried to foretell in the late 40s and early 50s?

If you read 40s and 50s SF you quickly realise the technology divides into the trivially impossible ("better insulator than vacuum"), the harder-than-you-think ("robot maid"), and the since-achieved ("video phones").
Where the stories fail badly is in the cultural and social changes since they were written, which often made their ideas impractical or downright bad.
There have been flying cars for decades (I seem to recall one on Tomorrow's World in the 1970s) but they are completely impractical in a "getting to work" situation (takeoff, safety, landing, parking etc.), and if you have to travel where there are no roads a small aircraft is more practical. It is possible this may change, but the AI/remote-controlled commuter aircraft would be a very different beast.
 

Archibald

Banned
Project Timberwind, a secret program under SDI, claimed as in this example to be able to develop NTRs capable of a thrust weight ratio of 30:1. At that rate, a 6510 kN engine equivalent to a set of three SSMEs would mass just over 22 tonnes. Thus going back to the NTR example above, if it arrives in orbit with 154 tonnes beyond the tank, we'd have 132 beyond the engines, or compared to STS, a gain of about 17 tonnes.

Timberwind was insane and would never work.
 
Basically, unforeseen technical problems arose: safety factors for dealing with nuclear waste; piloting difficulties, logistics, and cost for flying cars; and fuel safety and endurance issues for jet packs.
 
Timberwind was insane and would never work.

I certainly did qualify it--Teller, Livermore, SDI, gung-ho crazy and mendacious into the bargain.

That said, is it possible to make a hydrogen thermal engine with a thrust/weight ratio as high as 30 to 1?

It seems dubious to me, if only because even with a very weight efficient core one needs shielding. Even for an unmanned launch, you want to protect the payload from intense neutron and gamma flux. There is no way to clean up the fission reaction itself, by its nature it produces these radiations--and an ultralight core would if anything make it worse by failing to intercept these itself.

Note that by "shielding" I mean just plug shielding, protecting the spacecraft itself within a narrow cone of maybe 10, perhaps 15 degrees half-angle, with the rest of the spherical region around the core being unshielded. For a launch from the surface this exposes not only those on the ground in a big circular area around the launch site but also any aircraft operating in line of sight. I would think the atmosphere would absorb a lot of it (with some neutron activation of dust particles). But in space, one aspect of the NTR orbital-ferry concepts of the late 60s was that the hazard range for bystander spacecraft exposed directly to the core outside the narrow shielded cone was given as many hundreds of miles, perhaps thousands. And that is for a less powerful reactor than the pure-hydrogen "option" would require for launch from Earth!

So anyway, I threw in doubling the Timberwind mass for extra shielding--but I take it you mean more than that shielding needs to be added: that the core design itself was "insane and would never work."

So, what is a realistic thrust/weight ratio for a functional and adequately shielded NTR? 10 to one? 5 to one? 1 to one?

Obviously, the heavier the NTR, the more marginal the alternative of using nuclear-thermal high Isp for launch from Earth versus chemical methods.

To just summarize: it takes little reduction of the ratio below 30:1 to make the pure-NTR option worthless. OTL there was some thought given to nuclear third stages on the Saturn V, which would raise the payload considerably--but the same sorts of engines would be counterproductive on lower stages. A third stage only works because the upper stage is already very near full orbital velocity, so its effective weight is low.

LANTR buys us some useful margin, by slashing the power requirement for the nuclear element to 2/7 of that for pure hydrogen thrust, and with it the mass of the core. Thrust ratios as low as 3:1 or lower appear to be the norm for most proposed spacecraft, including the above-mentioned Saturn upper stage! These might be feasible for deep-space orbital injections from LEO--barely. They are useless on a launch engine, even one embedded in a LANTR.
----------------
The major purpose of my exercise was not to show that a nuclear launch system is feasible--these considerations help us understand why such systems are marginal, even before we consider issues like routine safety (protection from expected core radiation, for crew, ship structure, and bystanders for tens or hundreds of miles around) and emergency safety (core crashes, from a botched launch or from high orbit).

It was rather to show that LANTR gives superior results to pure-hydrogen NTR, despite an Isp slashed in half, thanks to 5.1 times the propellant storage density--and does so with a nuclear core 2/7 the mass that the same thrust from a pure-hydrogen system would require, at least in launch missions.
 
Alright, let's talk about radiation.

First, units. There are six or seven different units out there for this, but for this post I'm going to use milliSieverts (mSv), even though rads sound more atompunk, because I'm more used to them. The milliSievert is a unit of ionizing radiation dose, proportional to the amount of energy absorbed by your body's cells from ionizing radiation, and therefore proportional to the number of DNA breakages that occur as a result. The average human being absorbs about 3 mSv per year from background radiation, primarily from radon escaping from uranium-bearing rocks.

So how dangerous is a milliSievert?

Let's start with what we know for sure. If you absorb 1,000 mSv in a short time period - as in, a couple days - you have a risk of developing radiation sickness. This is when you suffer acute symptoms, and it's what we're all familiar with: nausea, weakness, hair falling out, etc. etc. The more you absorb, the more likely you are to get it, and the more severe it will be. At 1,000 mSv, relatively few people get it, and most survive. By the time you hit 5,000 mSv, basically no one survives.

But: this is a red herring. Literally no member of the public has ever suffered from acute radiation sickness as a result of the Western civilian nuclear industry. In fact, no one at all suffered from ARS as a result of Fukushima or TMI. To absorb that much radiation, even after a very severe accident, you need to be on the grounds of the reactor itself - any further away and the radioactive material is diluted enough that you are not going to absorb that big a dose that fast. And the post-disaster response at Fukushima was handled well enough, and the release at TMI was small enough, that not even members of the on-site crew absorbed that much radiation.

The real danger to the public from a radiation accident is cancer. (Theoretically, everyone thinks it probably causes birth defects as well, but the evidence on that is surprisingly murky - short version, there probably is an increase, but it's probably negligible compared to the increase in cancer.)

Now, our most reliable data on radiation and cancer comes from studies of the survivors of the atomic bombings at Hiroshima and Nagasaki. The radiation they absorbed was almost entirely the initial pulse from the explosion itself (since they were low-yield airbursts, the fallout afterwards was negligible). Since we know how big the explosions were, and we know how far each person was from the epicenter when the bomb detonated and what the geometry involved was, we can work out with decent precision how much radiation each person absorbed. We then compare that to their post-attack medical history, to work out the increased risk. This works out to a 1% increase in lifetime cancer risk per 100 mSv absorbed. So if you absorb 500 mSv, your lifetime risk of cancer increases by 5%.

The assumption used by regulators in the US, and most other countries, is that this linear relationship holds at any dose. To find the number of casualties, you take the average radiation dose to the exposed population, multiply by 1% per 100 mSv, and then multiply by the population. Now, the average radiation dose in an accident is very, very small--but enormous numbers of people are exposed. If you expose 1,000,000 people to 10 mSv of radiation, then:

1,000,000 * 0.1% = 1,000

people are going to get cancer, of whom about half will die of it. So the individual risk is minuscule--but the collective cost is huge. And, of course, the public perception of this is thrown off by the fact that 40% or so of those 1,000,000 people were going to get cancer anyway from some other cause, and all of those 400,000 people are going to assume it was caused by the radiation.
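That collective-dose arithmetic is worth seeing as a one-liner; the 1% per 100 mSv slope is the figure from the bomb-survivor studies quoted above:

```python
def excess_cancers_lnt(population, avg_dose_msv, risk_per_msv=0.01 / 100):
    """Expected excess cancer cases under the linear no-threshold assumption."""
    return population * avg_dose_msv * risk_per_msv

cases = excess_cancers_lnt(1_000_000, 10)  # ~1,000 excess cases
deaths = cases / 2                         # of whom about half die of it
```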

But, here's the thing: we don't actually know that the risk from 10 mSv is 0.1%. It might be less. It might be a lot less. It might be zero.

There's always some noise in any statistics, some random variation, and for a small enough increase in risk and a small enough sample size, that noise will drown out the signal. And there are no studies with a sample size big enough to detect a 0.1% increase in cancer risk. There are no reliable studies of the casualties of Fukushima - or public, as opposed to on-site, casualties of Chernobyl - based on epidemiology. All reliable casualty studies are based on modeling how much radiation was released and where it went, and then using the 100 mSv = 1% rule to infer how many people died of cancer as a result. Nobody knows if those studies are really accurate.

The sample size from Hiroshima and Nagasaki is big enough that we can be confident in our calculations down to about 100 mSv = 1% risk. For doses below that level, we just don't know - the signal is washed out by the noise. Out of an abundance of caution, modern nuclear regulators assume that the linear relationship continues below 100 mSv, on the grounds that your cancer risk should be proportional to the number of DNA breakages that occur, which is proportional to the dose. This is called the Linear No-Threshold (LNT) hypothesis.

But there's a competing hypothesis. We know the body has some ability to repair DNA damage. The competing hypothesis argues that there's some threshold dose, and below that dose, the body can handle the radiation without a problem. This is called the Linear Threshold (LT) hypothesis. Where the threshold lies depends on who you ask, but typical numbers are anywhere between 50 mSv and 100 mSv.

Critically, if the LT hypothesis is correct, Fukushima radiation has killed and will kill no one. A scant handful of on-site workers absorbed enough radiation to breach the threshold, but their numbers are few enough that it's unlikely anyone will get sick from it. If LNT is correct, somewhere between one and ten thousand people will ultimately die as a result.

If the LT hypothesis is correct, then almost all of the exclusion zones around Chernobyl and Fukushima can be reoccupied immediately. If the LT hypothesis is correct, then a meltdown is basically just a major industrial accident, no worse than any other accident involving toxic chemicals. If the LT hypothesis is correct, atomic airplanes are not obviously insane. And that is why all those people back in the 50s believed in things like atomic airplanes and Project Orion: they were working on the assumption that LT is correct. What changed wasn't really our knowledge of radiation's dangers--it was our appetite for risk.
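The two hypotheses are easiest to compare as dose-response curves. A sketch; the above-threshold shape chosen for LT here is one illustrative option, since (as noted) the threshold and form vary by author:

```python
def excess_risk(dose_msv, model="LNT", threshold_msv=100.0):
    """Fractional excess lifetime cancer risk under two competing models.

    LNT: linear all the way down to zero dose.
    LT : no excess risk below the threshold; linear above it (one common form).
    """
    slope = 0.01 / 100.0  # 1% per 100 mSv, from the bomb-survivor data
    if model == "LNT":
        return dose_msv * slope
    return max(0.0, dose_msv - threshold_msv) * slope

lnt_10 = excess_risk(10, "LNT")  # 0.001, i.e. 0.1%
lt_10 = excess_risk(10, "LT")    # 0.0: below the assumed threshold
```

Below 100 mSv the two models give answers differing not by degree but in kind--which is the whole policy argument in miniature.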

Unfortunately, there is simply no way to tell which hypothesis is right. You simply can't get studies with big enough sample sizes. Even if you could, you can't really tell how much radiation people are really exposed to, unless you manage to convince all those people to wear dosimeters throughout their daily lives - radiation rates vary a lot over the world. (That's why the studies of the Hiroshima and Nagasaki bombing victims are so valuable - we can calculate exactly how much radiation they absorbed.) There are a few studies trying to do it anyway - and many of them support LT - but none of them are strong enough to support a change in regulations, given the consequences of being wrong. The only way we'll ever be able to tell which is right is by developing a better understanding of the actual mechanics within the cell and how it responds to radiation, which is not going to happen overnight.

In the real world, I think we should stick with LNT. (Most of the time, anyway - there are occasions when I get worried enough about climate change that I'm willing to support drastic action to fight it.) But we aren't talking about the real world - we're talking about fiction. In fiction, I'm willing to play things a little riskier than I am in real life. All it takes is that the public needs to be willing to accept a little more risk - and to trust the people asking them to accept it.

That last word, trust, I think that's the real challenge. Even if we discovered tomorrow that hey, LT really is right, we can prove it, how much of the public would believe it? More than would have in the 80s, but still not enough. The science won't make a difference if people don't trust the scientists. But that is a subject for another post.
 

Puzzle

Donor
but they are completely impractical in a "getting to work" situation (takeoff, safety, landing, parking etc.) and if you have to travel where there are no roads a small aircraft is more practical. It is possible this may change but the AI/remote-controlled commuter aircraft would be a very different beast.
Well there was this thing, the Williams X-Jet, which basically had the pilot standing on top of a cruise-missile jet engine and steering by leaning. They claimed a sixty-mile range and an hour of flight time; when I was in Houston I would have liked one.
 
I certainly did qualify it--Teller, Livermore, SDI, gung-ho crazy and mendacious into the bargain.

This, I think, is being very unfair to Teller and Livermore. Teller is another of those historical characters like Wallace - someone who did not exist, and who it was therefore necessary to invent. It suits the purposes of both left and right to pretend that he was this stereotype of the nuclear mad scientist, just as it suits the purposes of both left and right to pretend Wallace was a pacifist. This is true of Herman Kahn as well. The truth is far more complicated. Teller was a deeply, deeply flawed human being, but I think he was fooling himself far more than he ever fooled anyone else, and the Strangelovian parody of the genocidal nuclear weaponeer is simply untrue.

Did you know, for example, that Teller was one of the very few voices within the AEC in the 40s and 50s calling for much stricter safety controls? It's true. He got the Reactor Safeguard Committee nicknamed the Reactor Prevention Committee. He also had some very pungent - but fair - things to say about the Aircraft Nuclear Propulsion Program.

But the thing about Teller was that... Well, he'd basically lived a movie plot. He had been the Lone Maverick Scientist who persisted in spite of the naysayers and the doubters and everyone, and who was proved right. And I think he saw everything else in his life through that prism - and, maybe, was trying to recapture a little bit of the glory of those days in the early 50s, when he'd (felt like he'd) been proven right about everything.
 
This, I think, is being very unfair to Teller and Livermore.

OK, fine, I'm mean to Teller! Poor fellow meant well, and in his perception was doing the right thing.

The question is, how much could other people trust him and his acolytes?

Edit--funny thing, I read this response to me before reading your post on the unknowns of radio-medicine. Which ends:

...That last word, trust, I think that's the real challenge. ...The science won't make a difference if people don't trust the scientists. But that is a subject for another post.

There are scientists I would trust.

That Teller believed he was justified, that he would risk harm only if he judged it necessary--that much is only fair to assume. But given his mercurial track record, and his association with politicians who were not above lying about their ends and means alike (as when Reagan's "Peace Shield" TV ads sought to mislead the public about the nature and capabilities of SDI, a project Teller was very much behind and associated with)--should we trust him, either to have his shit together about some projected technology's capabilities, or to deal straight with a public he believes is too fuzzy-headed to face what he perceives as political reality? Or, for that matter, to trust his perceptions of political reality?

Teller was the sort of scientist who tends to corrode people's trust in scientists in general, unfortunately. That he meant well when he did it is not the point.
 
Regarding radio-medicine generally:

I now have the impression that one reason discussions of radiation hazards go off the rails is that people talk past each other with different units.

As a (former, long-ago) physics student I can readily understand a unit that boils down to particles per unit volume, or particles crossing a unit area. But then it becomes plain that this simple datum is not enough, even if I have a device that can measure it: I also want to know how energetic those particles are. Are we talking iron nuclei at 99.99998 percent of the speed of light, or a neutrino? Energy flux might be the more appropriate unit!

But in turn, two fluxes of the same total energy--say 1000 neutrons going at 10 million meters a second each, versus 100,000 neutrons going at 1 million meters a second--are equivalent in energy terms, but of course might have very different effects on my body.

I also know this--a beam of protons (which are practically the same mass as neutrons) and a beam of neutrons with the same particle density and average speed have very different effects on me too. Short answer: I'd choose the neutrons over the protons, unless I calculated the protons were going slowly enough that my outer skin cells would stop them, in which case I'd choose them instead. Uncharged particles such as neutrons or gamma rays interact only with particles they happen to come very close to, and are scattered and absorbed by these. They are attenuated exponentially by passing through a given thickness of a given material, but by that same token you can never get them all: if you have a shield that cuts the flux in half by absorbing half of them, adding another layer of the same stuff at the same thickness won't cut them to zero; it will cut them to 1/4.
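That half-value-layer behavior is just exponential attenuation, and is trivial to express:

```python
def transmitted_fraction(n_half_value_layers):
    """Fraction of a neutral-particle beam surviving n half-value layers."""
    return 0.5 ** n_half_value_layers

one_layer = transmitted_fraction(1)   # 0.5
two_layers = transmitted_fraction(2)  # 0.25: the second layer halves it again
```

No finite stack of layers ever reaches zero; each identical layer only halves what remains.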

Charged particles, on the other hand, reach out with their charge and interact with every other charged particle in range, lose their energy quickly to those near their path, and come to a complete stop within a finite length of a given absorber material. By that same token, a charged particle deposits all its energy in the material along that path, most of it near the end of the path.

Thus a neutron might have an interaction cross-section with my arm of about 1 percent, so most of the neutrons in that beam pass right through doing nothing to my flesh at all, while some are diverted or slowed a bit and a few happen to be stopped completely--the latter do a lot of damage where they hit, but such incidents are few, and the interaction spots are not common. A proton beam of the same energy and density will drill into me a certain distance and terminate, delivering all its energy along that path, messing with zillions of my cellular atoms all around it, producing secondary bremsstrahlung EM radiation to spread the grief around, and generally wreaking havoc. But if I put a wet gel pack on my arm, the proton beam will be intercepted by that; I only get X-rays or whatever from the bremsstrahlung, and that too can be largely absorbed by the gel, so little of it reaches me. With the neutrons, the same gel pack just attenuates the beam a bit, and it passes through me pretty much the same as if I had put nothing there.

So--what I need to know is not how much radiation I absorbed in some integral unit such as particle count or even energy flux. I need to know what kind of particles went through me--EM like gamma or X-rays? Neutrons? Fast nuclei like protons or alpha particles (and if so, which--what atomic number are we talking about)? Electrons, aka "beta particles"? And then I need to know the energy spectrum of each type.

Knowing all that, it would be possible to adjust each one to a unit such as the Sievert, for clearly different fluxes of different kinds of particles could deposit the same Sievert dose into a test beaker full of water. For that matter it probably matters whether the radiation hits water or some other substance--if we are measuring breakage of genetic-material bonds, we need some chemical in there with similar bond strength, so we can measure how many were broken--ideally something that won't break down if left alone, but if broken won't recombine to restore some equilibrium either. Or, if it does recombine, puts out a signature--a photon is emitted, say. Then we might make some sort of goop of this phosphorescent, DNA-similar (in bond strength) molecule and count how many of those distinctive recombination photons it puts out, and that's a meter that measures Sievert doses directly. It would not tell me whether it was gamma rays that broke the bonds, or neutrons, or alpha particles--only the biologically relevant dosage, in terms of a particular type of damage done!

If I don't have such a beaker of goop, I instead need a bunch of instruments that measure both particle flux and energy flux and let me deduce which particles were what. Each type of particle in each separate energy range (a radio wave will not affect me the way a gamma ray of the same power will) is then adjusted, using calculations or empirical look-up tables, to tell me how many Sieverts each subcategory deposits; then I multiply by the flux of each subcategory and integrate it all to get a calculated Sievert dose. That again is what I medically want to know, but an instrument like a Geiger counter just gives me a particle count without the other information. To know my dosage in Sieverts I need at least to guess which types of particles, in which energy bands, those counts were; then I can estimate. I know exactly what the particle flux was (within the resolution of the counter, anyway) but not how much harm it did me, unless I can tell a proton from a neutron.
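This bookkeeping is roughly what effective-dose units do in practice: weight each particle type's absorbed energy by a factor for its biological effectiveness and sum. A sketch in the spirit of the ICRP weighting factors; the numbers below are typical published values quoted from memory, the real neutron factor is strongly energy-dependent, so treat them as illustrative:

```python
# Illustrative radiation weighting factors (dimensionless), not authoritative:
WEIGHTING = {
    "gamma": 1.0,     # photons
    "beta": 1.0,      # electrons
    "proton": 2.0,
    "neutron": 10.0,  # roughly; varies with energy, ~2.5-20
    "alpha": 20.0,
}

def effective_dose_msv(absorbed_mgy):
    """Weight each particle type's absorbed dose (mGy) and sum to get mSv."""
    return sum(WEIGHTING[p] * d for p, d in absorbed_mgy.items())

# 1 mGy of gamma plus 0.5 mGy of neutrons:
dose = effective_dose_msv({"gamma": 1.0, "neutron": 0.5})  # 6.0 mSv
```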

Does the photographic film in dosimeters react in a way that is linearly analogous to how cell nuclear material gets broken, so that 10 times the fogging of the film implies 10 times the cell damage? It might, for typical ranges of particle types and energies anyway.

---With nuclear power, when standing near an exposed core or watching a nuclear thermal rocket with an unshielded core ascending, the main things irradiating us are neutrons and gamma rays; both are neutral, and I believe that means we can be exposed to higher fluxes of these and get the same Sievert dose as from a much less energetic and dense beam of, say, alpha particles.

When we worry about wastes or plutonium or whatever being released into the environment, we are worrying about those nuclei being taken up by respiration, or by drinking or eating contaminated food, the atom then being metabolized and deposited somewhere in the body--and then undergoing radioactive decay, which can mean it emits a gamma ray, a proton, an electron, a neutron, or an alpha particle. In roughly that order they do less or more damage, but since the decay happens inside my body, odds are the energy winds up deposited there, and odds are that when it is, it breaks bonds in proportion to whatever energy it had. So the worst would be the most energetic emission, whether it leaves a long trail of damage or a concentrated blob with the same count of broken bonds. Or the right kind--probably a neutron or gamma in the right range--might get out of my body with most of its energy intact, so the damage it does is determined not by its total energy but by the fraction it leaves behind. The real point here is that a far lower flux, with far lower energy, does the same damage as something coming from outside that mostly passes through me. Since the decaying nucleus is in my body, it is sure to damage something, with whatever energy it has.

The main thing here is--if some isotope of potassium or radon or whatever is outside my body and emits its characteristic decay particle, there's a good chance air or something else will degrade its energy and perhaps absorb it completely; and if it is a charged particle, which does more damage with less energy and numbers, there remains a good chance my skin will stop it. But if I have ingested it, its whole flight path, long or short, is within the body, and every electron-volt of energy it has (minus any it might exit with) does some number of Sieverts' damage to me.

So the sort of damage we get by ingesting radioactive nuclei is different--basically, clearly worse--than what we get from equal fluxes at equal energies coming from outside. Another feature of ingested material is that the damage a given gram of it does is not instant and done; rather, each atom will either at some point be excreted, and we dodge that bullet, or it will emit its characteristic particle first and do its damage then. The Sievert count might be estimated by knowing the mass, and hence the atom count, of the dose, but it will be delivered in the future. Unless we can manipulate body chemistry to flush that particular element out before too much of it decays (and we have no way to target just the radioactive isotope--if it is some kind of potassium we have to flush all the potassium), and the material will naturally linger in the body long enough to span a good many half-lives of the stuff, we'll get the damage from the dose for sure eventually.
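The excretion-versus-decay race described here has a standard health-physics formula: the effective half-life combines the radioactive and biological half-lives the way parallel resistors combine. A sketch with made-up round numbers:

```python
def effective_half_life(t_radioactive, t_biological):
    """Half-life of an ingested nuclide's presence in the body.

    The atom disappears by decay or by excretion, whichever comes first,
    so the rates add: 1/T_eff = 1/T_rad + 1/T_bio.
    """
    return 1.0 / (1.0 / t_radioactive + 1.0 / t_biological)

# Illustrative: a 30-day radioactive half-life with a 60-day biological one:
t_eff = effective_half_life(30.0, 60.0)  # 20.0 days
```

The effective half-life is always shorter than either input, but unless excretion is much faster than decay, a large fraction of the ingested atoms will decay inside the body.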

Also, some nuclei have chemical properties such that they tend to congregate in particular sensitive locations, such as bone marrow--thus their danger is compounded beyond the Sievert level by their strategic placement.

Note that your essay on radio medicine is mostly about ingested nuclei that will emit a damaging particle eventually.

Clearly, in terms of numbers of atoms or total energy exposure, we want limits for ingested material lower than for external sources, especially for anything that tends to concentrate near a particular type of tissue, since that tissue will be selectively damaged instead of a random sample of diverse tissues.
 