alternatehistory.com

The creation of the Atomic Energy Commission and the establishment of the national lab system beginning in the late 1940s marked the beginning of a new epoch in what would later be known as high-energy physics research. Henceforth, rather than being the province of itinerant dabblers mostly depending on private funding to construct devices in university laboratories, it would be shaped by massive government funding and massive government facilities, backdropped by massive government expectations. Previously, high-energy physicists had given the world the atomic bomb and the nuclear reactor, both of which were quickly becoming indispensable to the military; who knew what they might come up with next?

For particle physicists, in particular, two of the AEC’s many national labs stood out. The first was E.O. Lawrence’s famous Radiation Laboratory at Berkeley, California, founded even before the war and the home of the cyclotron, the most powerful type of particle accelerator in the world in the 1930s and 1940s. Despite the foundation of Lawrence Livermore, a classified nuclear design shop similar to Los Alamos, the original Radiation Laboratory remained part of the AEC system, though with a focus on unclassified physics research instead of the nuclear-related work it had performed during the war. The second was an entirely new laboratory named Brookhaven, located near the town of the same name on Long Island, on the site of a former Army base. Besides doubling the number of federally supported scientists and engineers, thereby creating additional jobs and expanding the pool of physics talent, having a second site would hopefully mean that the failures or foibles of one would not impact the ever-onwards advancement of science. Flawed or failed proposals could be critiqued by a competitor, while the pressure of Berkeley and Brookhaven would drive Brookhaven and Berkeley to higher and presumably more productive heights of discovery. Despite the additional expense, then, building Brookhaven appealed to the AEC. It also appealed to many within the particle physics community, for whom a site in the northeast, relatively proximate to the many major research universities located in the region, would be far more convenient than distant California.

For more than a decade this arrangement worked perfectly, with both labs alternately competing and cooperating to advance the state of the art ever farther. When the AEC was founded, in 1946, the highest-energy accelerators in the world, at Berkeley, could reach about 700 MeV, or millions of electronvolts, a unit of energy commonly used in particle physics and equivalent to the energy gained by an electron moving through a one-volt potential. That is, Berkeley's machines could, in theory, raise an electron to the energy it would gain by passing through a 700,000,000-volt potential. Over the next fifteen years, however, a series of new machines and new machine designs with tongue-twisting names like the synchrocyclotron pushed the state of the art far beyond its 1946 level, culminating in 1960 with Brookhaven's Alternating Gradient Synchrotron, able to accelerate particles to 33 billion electronvolts, almost fifty times the energy of Lawrence's old cyclotron. Besides being a technological tour de force, the rapid advancement in machine capability had enabled a string of discoveries, unveiling new particles at a rate too fast for the theorists to keep up.

By then, however, Berkeley was starting to run into significant trouble, for two reasons. First, they were beginning to literally run out of room. Already in the 1940s Lawrence's laboratory had relocated from the actual campus to the hills above in order to build ever-larger cyclotrons, and by the early 1960s the increasing size of state-of-the-art accelerators was causing them to run out of hilltop. In and of itself this problem was surmountable, for the wide-open Central Valley of California beckoned beyond the Coast Ranges where Berkeley lived, but the second problem was, ultimately, more significant. Unlike Brookhaven, Berkeley was disliked among the so-called "users," the physicists who actually designed and built experiments and as a result turned the machine from a mere money-sucking toy into a major scientific instrument. Rightly or wrongly, they viewed Berkeley as giving more credence to the needs of its own staff than to those of visiting physicists, a perception exacerbated by the failure of an insurgent group of Midwestern physicists to obtain AEC support for the construction of a 12 GeV machine at the University of Wisconsin and the resulting sense that the system was dominated by the coasts at the expense of the growing middle.

Perhaps these issues might have been worked out had the management of the AEC's laboratory system and particle accelerators remained as informal as it had been in the 1950s, let alone the 1940s, but with the rapid growth in machine energy came an equally rapid growth in machine costs. The agreement to split development between Brookhaven and Berkeley had only ever been a gentleman's agreement, and with the stakes so much higher than they had been before, gentlemen were in short supply. At first it seemed that Berkeley might win through in any case, as Presidential commissions and expert panels supported their bid for the next big accelerator, but in 1965 the AEC for the first time ever asked the opinion of the National Academy of Sciences on its location, and the next year opted to form a new laboratory near Chicago, the National Accelerator Laboratory, in order to host it. Berkeley never built another major accelerator.

Brookhaven, meanwhile, was promised the next accelerator after NAL's, and in any case was not directly touched by the debate. After all, what did it matter if it was competing with California or Illinois? Some of the staff, however, saw that the elevation of the National Accelerator Laboratory, later and better known as Fermilab, set a dangerous precedent; if Berkeley could be dethroned, why not Brookhaven? And the director of NAL, Robert R. Wilson, was certainly the sort of person who might do just that, with a reputation for an autocratic and hard-charging leadership style. They organized, and pushed the laboratory's management board to form a committee, the so-called Fitch panel, to study the laboratory's accelerator program. At first, this committee merely elaborated on the details of the machine that would naturally succeed the one under construction at NAL, following the old progression of designs, but soon enough other concerns--both technical and political--began to drive more blue-sky thinking. Not only were existing machine designs apparently approaching a limit in terms of their physics results, but Brookhaven was being torn apart by increasing disputes between its users and its machine builders, and an ambitious new machine seemed to be just the thing to draw them back together.

Such a decision was facilitated by a series of major developments over the course of the 1960s that seemed to promise--and require--a revolution in accelerator design as significant as the development of the synchrotron or, before it, the cyclotron. The first was the trend, well-known by that time, for accelerators to increase in peak energy exponentially, similar to the Moore's Law of greater and more recent fame. Every few years a new accelerator would come on-line and increase achievable energy by a factor of ten. However, again much like Moore's Law, maintaining this rapid growth in energy required constant innovation in accelerator designs. Electrostatic generators and linear accelerators had been supplanted by cyclotrons; cyclotrons, by synchrotrons; synchrotrons, by...well, what? One option, under investigation at the time, was the particle collider, a device that would smash two beams of particles into each other, unlike the conventional particle accelerator, which simply propelled a particle beam into a fixed target of some sort. Due to the peculiar rules of special relativity, a collision between two fast-moving particles, like the ones in a collider, would be many times more energetic than one between one fast-moving and one virtually stationary particle, allowing another big jump in peak energy. Indeed, thanks to this effect a collider using two beams of relatively low energy, say 200 GeV, could actually produce much higher-energy collisions than a conventional accelerator using a single very powerful beam.
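The kinematic advantage of colliders follows from a one-line relativistic calculation, sketched below in Python (the 200 GeV figure is from the text; the function names and the approximate proton mass are my own illustration):

```python
import math

M_P = 0.938  # proton rest mass-energy in GeV (approximate)

def cm_energy_collider(e_beam):
    """Center-of-mass energy (GeV) for two identical beams of energy
    e_beam (GeV) colliding head-on: the energies simply add."""
    return 2 * e_beam

def cm_energy_fixed_target(e_beam):
    """Center-of-mass energy (GeV) for a proton beam of energy e_beam
    striking a stationary proton, from the relativistic invariant
    s = 2*m^2 + 2*E*m (natural units, beam and target both protons)."""
    return math.sqrt(2 * M_P**2 + 2 * e_beam * M_P)

print(cm_energy_collider(200))       # 400 GeV from two 200 GeV beams
print(cm_energy_fixed_target(200))   # only ~19.4 GeV from the same beam on a fixed target

# Beam energy a fixed-target machine would need to match 400 GeV in the CM:
e_needed = (400**2 - 2 * M_P**2) / (2 * M_P)
print(e_needed / 1000)               # ~85 TeV, the same order as the footnote's figure
```

The square root in the fixed-target case is the whole story: doubling a fixed-target machine's beam energy buys only about 40% more center-of-mass energy, while doubling a collider's beam energy doubles it outright.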

However, while most people know that a higher-energy accelerator is a better accelerator, prompted by news articles that trumpet particle energy, the truth is that energy is only half of the equation for physicists interested in an accelerator's performance. The other half is luminosity, the rate at which particles flow through a given area--in other words, how many collisions, and therefore how much physics, actually occur. To a certain extent, energy and luminosity can trade off for each other, since higher energy generally makes it more likely for a given interaction to produce a particular physics outcome, while higher luminosity makes it more likely for an outcome rare at a given energy to occur at least once. Both energy and luminosity also require expensive equipment to realize their potential, such as the sophisticated magnets and beam control systems needed to achieve high energies, or the complex detector designs needed to disentangle the flood of particles produced by frequent collisions. In the end, however, there is a minimum luminosity needed for any accelerator to be useful, no matter how high its energy, and a minimum energy needed for an accelerator to probe new physics questions, no matter its luminosity. At some point, either there would just not be enough energy in the particles to do anything new, or the rate of interactions would be too small to collect useful data in any reasonable amount of time.
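The tradeoff can be made concrete: the rate of a given process is just luminosity times cross section. A minimal sketch, using the conventional definition of the barn and otherwise hypothetical numbers:

```python
BARN = 1e-24              # 1 barn = 1e-24 cm^2, the standard cross-section unit
NANOBARN = 1e-9 * BARN    # 1 nb = 1e-33 cm^2

def event_rate(luminosity, cross_section):
    """Events per second for a process with the given cross section (cm^2)
    at a machine with the given luminosity (cm^-2 s^-1)."""
    return luminosity * cross_section

# A hypothetical rare process with a 1 nb cross section, at a machine
# delivering 5*10^33 cm^-2 s^-1 (the sort of figure discussed below):
rate = event_rate(5e33, NANOBARN)   # 5 events per second
year = 3.15e7                       # roughly one year of running, in seconds
print(rate, rate * year)
```

Since cross sections for interesting processes can be many orders of magnitude below a nanobarn, it is easy to see how an otherwise impressive machine could fail to collect a statistically useful sample in any reasonable time.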

And this was the problem with colliders. While colliders offered a big step forwards in energy, their impact on luminosity--at least for protons--was far less beneficial. Right off the bat, the lower density of a particle beam compared to a solid target meant that luminosity would necessarily be lower in a collider than in a conventional accelerator, all else being equal. And all else was decidedly not equal, as colliders also had problems with so-called injection--the process of adding particles to their beams--that could exacerbate beam instability, in which beams spread out and essentially evaporate over time. These issues might further lower luminosity, to the point that a collider would produce practically no actual physics, instead being a mere exercise in engineering. Moreover, colliders had the additional problem of only allowing a few experiments to use the beam at a time, while also making it more difficult and costly to operate a program of experiments involving multiple kinds of particles. For all these reasons, accelerator physicists tended to view colliders as exciting challenges, while particle physicists tended to view them as expensive and potentially dangerous experiments. By the time Brookhaven was studying its next big accelerator, however, CERN had actually built a proton collider, the Intersecting Storage Rings, under the influence of its former Director General, Victor Weisskopf. Contrary to the fears of many physicists, the ISR showed that the problems of injection and beam stability could be solved, and was able to easily reach a satisfactory luminosity, although its physics results were compromised by what proved to be poor detector design.
Hence, when Brookhaven began to consider its future options in the late 1960s, the idea of building a collider rather than a high-energy conventional accelerator seemed far more practical than it had earlier in the decade, and certainly a far better option to continue Blewett's Law of exponential accelerator energy growth.

Augmenting this reasoning was the developing technology of superconducting magnets, which promised to further increase the potential energies of particle accelerators. While superconductivity had been discovered a half-century earlier, early superconducting materials--so-called Type I superconductors--could not tolerate significant magnetic fields or large currents before a small part of the material would lose its superconductivity, triggering a quench, the abrupt reversion of a superconductor to its normal state. The rapid increase in temperature resulting from the virtually instantaneous increase in electrical resistance can then trigger various undesirable events, including explosions from the boil-off of the cryogenic coolant needed to maintain superconducting temperatures. Unfortunately, the magnets of a particle accelerator need to be able to tolerate both significant magnetic fields and powerful currents, so superconductors were not, at the time, capable of use in this role. In the 1950s, however, a new material, an alloy of niobium and tin, was discovered to exhibit superconductivity, perhaps unsurprisingly as both niobium and tin are superconductors on their own. Far more surprising was the discovery in 1961 that niobium-tin could support large magnetic fields and high current densities, unlike every other known superconductor, and the development by RCA in 1962 of methods for fabricating conducting ribbons of niobium-tin in commercial quantities.

Together, these led to a rapid increase in research into superconducting magnets at the national labs, now that such magnets seemed technically and even commercially viable. Although the research was aimed at many possible applications, including fusion reactors, particle accelerators place particularly stringent demands on the magnets that bend and focus their beams, and superconducting magnets offered a particularly attractive route to increased performance. A very considerable amount of work had been carried out at Brookhaven during the 1960s on developing superconducting magnets, with an increasing focus later in the decade on the particular demands of accelerator magnets, which must be compact and able to rapidly and precisely raise and lower their magnetic field in time with the beam, all while resisting the major stresses caused by those fields themselves. By 1968 Brookhaven had made a number of breakthroughs in this area, in particular coming up with a method of distributing the conductors around the central bore of an accelerator to generate the proper field inside with as little as possible "wasted" on fields outside the accelerator itself, later called the "cosine" method (since the current in the conductors would be proportional to the cosine of their angle from the horizontal plane). Still, their magnets required a great deal of further research before they could possibly be used in an accelerator, although the laboratory's Accelerator Department was confident the remaining problems could be worked out in time for the next machine.
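The "cosine" principle can be illustrated numerically: line currents arranged on a circle, each carrying current proportional to the cosine of its angle from the midplane, produce a nearly uniform dipole field everywhere inside the bore. A toy sketch in Python (the bore radius, wire count, and peak current are arbitrary illustrative values, not Brookhaven's actual parameters):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def cos_theta_field(point, radius=0.05, n_wires=360, i_peak=1000.0):
    """Field (Bx, By) in tesla at `point` (meters) produced by n_wires
    infinite straight wires on a circle of `radius`, each carrying
    i_peak*cos(theta) amps along z, theta measured from the midplane."""
    px, py = point
    bx = by = 0.0
    for k in range(n_wires):
        theta = 2 * math.pi * k / n_wires
        cur = i_peak * math.cos(theta)
        dx = px - radius * math.cos(theta)
        dy = py - radius * math.sin(theta)
        r2 = dx * dx + dy * dy
        # 2D Biot-Savart: an infinite wire gives B = mu0*I/(2*pi*r), azimuthal
        bx += -MU0 * cur * dy / (2 * math.pi * r2)
        by += MU0 * cur * dx / (2 * math.pi * r2)
    return bx, by

center = cos_theta_field((0.0, 0.0))
offset = cos_theta_field((0.015, 0.01))  # a point well inside the bore
print(center)  # ~(0, -0.72): a purely vertical dipole field
print(offset)  # nearly identical--the interior field is uniform
```

For a truly continuous cosine distribution the interior field is exactly uniform; with discrete wires the residual nonuniformity is negligible, which is precisely what makes the layout attractive for bending magnets that must not distort the beam.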

That year, however, Peter Smith of the Rutherford Laboratories in Britain dropped a bombshell when he announced that they had discovered a method of completely eliminating two of the most persistent problems in the superconducting magnet business: flux jumping and AC losses, both caused by currents induced in the superconductor by changing magnetic fields, and both involving the discharge of heat into the superconducting material. If the superconductor was unable to shed this heat quickly enough, it could quench, losing its superconductive properties. By using extremely fine conducting filaments, so that the eddy currents involved in flux jumping simply could not deposit much energy, and by twisting the wires to minimize the effective area exposed to the induced fields involved in AC losses, both problems could be virtually eliminated. The only problem was how to use such fine filaments to form a magnet, and on this matter the two sides of the Atlantic quickly developed different approaches. The Rutherford group proposed weaving a kind of superconducting cloth from the filaments, similar in structure to a hair braid. In theory, this approach had several advantages; the woven structure, for example, would firmly interlock the filaments without external intervention, lending the conductor a certain degree of mechanical strength, and it would allow simpler construction of certain kinds of magnet. In particular, the cosine magnet designed by Brookhaven would only need a single layer of conductor with a Rutherford weave, thereby making it simpler and potentially easier to mass-produce while protecting against quenches, always a serious threat to superconducting magnets. Rutherford began experimenting with a series of prototype magnets intended for CERN or even American accelerators, which seemed to bear out the advantages of the woven layout in practice.

Brookhaven, however, was not fully convinced. In the words of its then-director of superconducting magnet development, William Sampson, "When I saw the Rutherford weave I thought, 'That's a very complex design. What happens if one of the filaments quenches and shorts? What happens if a filament breaks in the weaving process?' In the Accelerator Department [the group responsible for designing and building the laboratory's accelerators], we liked to stick to Thoreau's words: 'Simplify, simplify.' And the Rutherford weave didn't seem very simple." Brookhaven's designers instead plumped for a cable, twisting a small number of filaments together to form larger strands, which were then squeezed together to form a rectangular cable that could be stacked for magnet construction. This seemed mechanically simpler and easier to manufacture than Rutherford's weave, and made more efficient use of space while maintaining a near-optimum ratio of superconductor to supporting copper used to further suppress flux jumping. However, the cable required careful production to ensure that the strands would hold together, and would require a more complex multi-layer design for cosine-type magnets, potentially harming their manufacturability and hence the ability of the Department to deliver the number of magnets needed for a new accelerator. The Department, however, thought that it could develop manufacturing techniques that would allow efficient production, and that the greater simplicity of the conductor could compensate to some extent for the greater complexity of the magnet.[1]

Thus, by 1971 the technology needed for the next generation of superconducting, high-energy particle colliders was in place, or at least under development. By itself, however, this would not have been enough to encourage Brookhaven's management to pursue this new technology had physicists not developed a hunger for precisely the type of high-energy, high-luminosity accelerator that these technological advances promised. Besides the perennial interest in what might be uncovered with more events at higher energies, there was a specific target that was drawing the attention of particle physicists at that time: the W boson, believed to be the "carrier" of the weak interaction in the same way that the photon is the carrier of the electromagnetic interaction or the gluon the carrier of the strong interaction. The W boson had originally been proposed in the early 1960s to resolve inconsistencies in the theory of weak interactions at higher energies, but a number of ingenious searches had puzzlingly failed to turn up any trace of the W at all, a problem explained in 1966 by a quartet of Japanese and American physicists as being due to previously unanticipated corrective factors that greatly reduced the probability of producing the W compared to previous assessments. However, their work, and the work of physicists investigating the collision of high-energy electrons with protons, showed by 1971 that an accelerator able to produce a center-of-mass energy of 400 GeV and a luminosity of 5*10^33 cm^-2 s^-1 or greater, about ten times that of the Intersecting Storage Rings at CERN, would be all but certain to either find the W or rule out its existence, in either case making a great discovery.

Such an accelerator could not be practically built as a fixed-target machine due to the energy requirement[2], nor could it be built using conventional magnets due to the luminosity target. Only a superconducting collider could meet these design goals, and by late 1971 members of the Fitch panel were already openly discussing what a collider intended to meet these parameters would look like. In their final report, delivered in early 1972, they explicitly called for the construction of a W machine--called ISABELLE, for Intersecting Storage Accelerator + "Belle," beauty--as the next major goal for Brookhaven, a goal which was quickly embraced by the laboratory's management and made the centerpiece of the laboratory's plan for the next decade.

[1]: This is the PoD. In reality, the approaches were precisely the opposite of what I have indicated; Rutherford favored a cable, whereas Brookhaven favored a braid. This would have serious ramifications later...

The Sampson quote is fictional, as you might have guessed from the above, but he was a real person who really was a director of superconducting magnet development at Brookhaven in 1971 (the structure of the laboratory was complex), and who really did express a preference for simplicity when it came to superconducting magnets through the words of Thoreau.

[2]: Achieving the same center-of-mass energy with a fixed target would require a 100 TeV beam, far beyond what anyone at the time--or today, for that matter!--could possibly have generated.

---

Wow, this has been a long time coming! I first had the idea for this timeline way back in 2012, at the same time I first learned about ISABELLE, but for obvious reasons (aka Eyes Turned Skywards) never got around to actually writing it until last year, during my long break from the forums. Maybe it's just because I'm a particle physicist myself, but the ISABELLE project struck a chord with me when I heard about it, and it seemed so very ripe for some AH love...

Many thanks to e of pi and Asnys for reading and commenting on earlier drafts; any errors remaining (or questionable decisions made) are, of course, my own. I hope you all enjoy this little timeline very much.