Eyes Turned Skywards

Part III, Post 4: Grumman Aerospace and the X-40 "Starcat" program
  • So, this week we're returning to something mentioned in Part II, but which, as you'll see, made its biggest impact in the decade covered by Part III. This post was a lot of fun to write, not least thanks to the assistance of our very own Polish Eagle, whom I'd like to thank for his advice on Grumman and Long Island history and activities. And thus, without further ado, let us consider these ancient words:

    "What goes up must come down."
    "Once the rockets are up, who cares where they come down? That's not my department," says Wernher von Braun.


    Meditate upon this wisdom we will.

    Eyes Turned Skyward, Part III: Post #4

    The end of Apollo had resulted in abrupt changes for almost every NASA supplier, major and minor. Only a few, like Rockwell (manufacturer of the Command and Service Module), were able to weather it without serious upheaval. Some, like Boeing and McDonnell, managed to spin their losses of large Saturn V contracts into other contracts like Saturn IC’s first and second stages, remaining critical parts of the post-Apollo programs. Others, however, fell on harder times. The best example, representative of the hundreds of smaller contractors, was Grumman Aerospace Corporation. Smaller than most of the contractors who had vied for a piece of Apollo, the company had nonetheless managed through hard work to seize the Lunar Module contract and then worked to make that vehicle one of the most reliable and successful of the program. With the beginning of the station program, though, the funding squeeze NASA was passing through made continued lunar surface operations, let alone the development of any of the many proposed expanded operations variants, financially impossible, leading to the quick termination of Grumman’s Lunar Module contract. Moreover, Grumman had hoped to leverage its aerospace experience into bidding on the NASA Space Shuttle program. When that program, too, fell to the budget axe during the refocus on stations, Grumman was left completely adrift. Even the company’s successful history of naval fighters was up in the air, as ongoing issues with the company’s F-14 Tomcat were straining its relationship with the Department of Defense.

    The Hubble Space Telescope provided one of the only outlets for the company’s successes in the 70s, with its 1979 selection as lead contractor for the spacecraft portion of the vehicle (a joint venture of Eastman Kodak and Itek would provide the optical train, including the main mirror). Grumman had long experience with the OAO series of orbiting astronomical observatories and limited involvement with the Skylab Apollo Telescope Mount, experience it leaned on heavily in a “bet the company” move to save its space division. Luckily, the gamble paid off, and though the program was not without problems (Grumman could not escape its history of rather chaotic program startups, nor the threat of budget cuts that loomed over all of NASA in the early 1980s), Grumman’s space division managed to weather the 1980s, and the flawless start to Hubble operations reflected well on the company in spite of a series of development problems. Moreover (at long last), the F-14’s problems had largely settled down, and the fighter’s performance had finally started to ease some of the tensions in Grumman’s relationship with the Department of Defense. As a result, Grumman was able to reach out for another high-profile program, something of a return to form.

    Under the auspices of Reagan’s Strategic Defense Initiative Organization, the Department of Defense was calling for the development of the necessary cheap spacelift capability via two prototype spacecraft. One, the X-30, was to be a “spaceplane” of the classic mold, featuring advanced scramjet engines to carry it to altitudes and speeds nearly high and fast enough to put it in orbit. The other, the X-40, was a vertical-takeoff-and-landing vehicle testing a simpler reusable vehicle along the lines of existing stages. The X-30 drew more attention from most contractors, as it promised a large contract with extensive development. However, Grumman, with its legacy of vertical rocket landings on Apollo and a leaner, hungrier eye, cannily put its focus on the less attractive prize, reasoning that it would have a better chance with a maximum-effort proposal for the X-40 than with the X-30. This approach paid off, and Grumman was selected to design, build, and operate the X-40 in coordination with SDIO and the Air Force. While the new experience of working with hydrogen and cryogenic fuels took the usual Grumman learning curve, the headaches were overshadowed by the much larger hassles that the X-30 developers were encountering during the extensive basic research needed to even begin detailed design. After design work on the X-40 concluded in 1987, construction and the associated initial qualifications began. While the main engineering, along with subsystem assembly such as avionics, fuel systems, and shrouds, would happen at Grumman’s Bethpage, Long Island headquarters, the final assembly and some of the larger titanium work would take place at the Calverton plant established for Tomcat production. In line with conventional flight test protocol, the program was to involve the construction of two complete airframes and a complete set of flight spares.
In 1990, fresh off yet another review of the lack of significant progress with the X-30’s advanced engines and headaches with finding suitable thermal protection systems, SDIO officials arrived for the Customer Acceptance Readiness Review on the first spaceframe of what Grumman had internally nicknamed the “Starcat.” As with most such handovers, the list of open faults was extensive, but many were largely perfunctory, and by the end of nearly two full days of reviews, all had been accepted or closed. Finally, the first of the two X-40 “Starcats” was carefully wrapped up in plastic and loaded onto one of the same Super Guppies that had once carried Lunar Modules for its journey to White Sands Missile Range, leaving its twin to take over its place on the final assembly stands.

    Under the New Mexico sun, support hardware had already been prepared, and once Starcat Alpha arrived, work began to check out the fueling and support equipment. Since one of the intentions of the X-40 program was to test simplification of launch operations, the site was fairly primitive, with a single hangar/checkout building, a control trailer, and two basic concrete launch/landing pads for the vehicle, separated by 500 meters for planned testing of horizontal translation in-flight. To eliminate the need for a launch mount, the X-40 would take off from its own retractable landing gear, and was intended to be serviced on the pad with a simple scissor lift or cherry-picker crane truck, as opposed to a dedicated service tower. April 1990 saw the first static test firing of the X-40’s engines, the four clustered RL-10s held to throttle settings too low to lift off. A week of further review of the data was conducted, then, with nerves running high, the X-40 once again lit off, and made its first free flight. Under the command of onboard computers, the Starcat lifted to a height of several hundred feet, hovered, then descended to land safely. Onlookers marveled at the smooth takeoff and landing—“Just like Buck Rogers,” one was heard to remark. It was an auspicious start, but the testing would only get more challenging. The envelope was pushed once again on the second flight in May, which was intended to test the entire duration of the X-40’s design goals. Reaching an apogee of roughly 3 km and spending around 140 seconds in the air, Starcat Alpha demonstrated that it was everything the X-40 program demanded it be.

    The next flights got increasingly ambitious, spaced weekly to allow full review of data from each. Flight three was the first to translate in flight, moving 150 feet off the pad center, then diverting back to land once again, a feat flight four repeated. Flight five was intended to demonstrate the ability to “stick the landing,” the program’s internal jargon for a landing where instead of settling slowly down with a thrust-to-weight ratio of less than one, the vehicle would simply nearly shut down its engines and fall towards the pad. At the precisely calculated moment, the engines would flare to full power, decelerating the vehicle to a stop precisely as it reached the pad. By making a faster landing, the “sticking” method would allow more fuel-efficient landings, preserving more of the vehicle’s capability for the aerial acrobatics planned to test its aerodynamic and thruster flight controls. However, while almost all went well in the flight, the moment the engines picked to reignite was not quite correct, and the vehicle was still moving at slightly less than 8.3 m/s when its footpads made contact with the ground. The legs’ hydraulics could not fully absorb the shock; instead, pre-designed sacrificial crumple points in the legs and structure deformed permanently to absorb the blow and save the rest of the structure. Nevertheless, the post-flight inspections and repairs Starcat Alpha would require to verify that the system had indeed protected the vehicle’s key systems from damage would exceed the capabilities of the White Sands facility. X-40/01 would have to be returned to the manufacturing facility at Calverton for repairs and inspection. Fortunately, Starcat Bravo was completing checkout, allowing the program to resume—or, at least, for investigations of the causes of the failure to be carried out in parallel with repairs to the damaged spaceframe.
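The timing problem behind a "stuck" landing can be sketched with simple constant-acceleration kinematics. The numbers below are illustrative assumptions, not program data (A_NET, the net deceleration at full thrust, is a guessed figure); the point is only how sharply the touchdown speed depends on knowing the true altitude:

```python
import math

G = 9.81      # m/s^2, gravitational acceleration
A_NET = 7.5   # m/s^2, assumed net deceleration at full thrust (T/m - g); illustrative only

def ignition_altitude(drop_height):
    """Altitude at which to relight the engines so a vehicle falling from
    rest at drop_height stops exactly at ground level.
    Energy balance: 2*G*(H - h) = 2*A_NET*h  =>  h = G*H / (A_NET + G)."""
    return G * drop_height / (A_NET + G)

def touchdown_speed(altitude_error):
    """Residual speed if the braking burn is cut short because the ground
    is altitude_error meters higher than the guidance believed."""
    return math.sqrt(2.0 * A_NET * altitude_error)

h_ign = ignition_altitude(1000.0)   # ignition point for a 1 km drop
v_err = touchdown_speed(4.6)        # effect of a roughly 15-foot altitude error
print(f"ignite at {h_ign:.0f} m; a 15 ft nav error means touchdown at {v_err:.1f} m/s")
```

With these assumed figures, a navigation error of only about fifteen feet is enough to leave the vehicle moving at roughly 8 m/s at contact, which is why the method demanded such precise knowledge of altitude.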
The same Super Guppy that carried X-40/01 back to Calverton in late June returned bearing X-40/02. Starcat Bravo became the target for inspections of the avionics, in parallel with experiments with the “Iron Bird” version of the software in servers on the ground at Bethpage’s engineering headquarters. The investigations discovered a mis-calibration in the conversion of the X-40’s computers from the flight software for conventional landings to that needed for “stuck” landings, which had caused the IMU to “drift”: failing to correctly correlate data from the onboard GPS and radar systems, the software overestimated the vehicle’s altitude during the ascent, leaving it convinced it was further from the ground than it actually was. If the software’s picture had matched reality, the vehicle could have touched down gently; as it was, the real ground lay a bit less than 15 feet above where the vehicle thought it was. The software was corrected, and Starcat Bravo made its first flight in August, successfully demonstrating the “stuck” landing.

    At roughly the same time back on Long Island, the inspection of damage to Alpha concluded—the sacrificial legs and crumple zones had functioned better than predicted, and vehicle X-40/01 turned out to have sustained almost no serious damage in spite of maximum deceleration exceeding 20 Gs. One of the engineering team joked that in light of landing (mostly) safely in spite of the G-load, “Add a tail hook, and the damn thing would almost be carrier qualified.” In the morning, the repair engineering review team returned to find that second-shift workers had improvised the “missing” tail hook out of cardboard and aluminum foil, and fitted it with tape to the vehicle, along with a paper Navy roundel. It was a reflection of the high morale of the project—the team had weathered a major setback, and was pressing forward regardless. Alpha had survived and was beginning rework; meanwhile, the second flight of X-40/02 (the seventh of the program overall) continued to push the envelope, combining a translation in-flight with a stuck landing on the same pad. The success was the first preparation for the next major challenge—testing rapid turnaround. On the next flight, taking place in early September, Starcat Bravo lifted off, pointed its nose east, and translated to the second pad, touching down safely. Overnight and all through the morning and afternoon, engineers and technicians converged on the vehicle. Just before sunset, the vehicle lifted off again, demonstrating a 28-hour turnaround as it returned once more to its original pad. However, in the rapid turnaround, a fuel line on the Number Three engine had been opened for purging but then improperly sealed as the task was handed over to another technician. In flight, leaks from the purge point let the engine bay fill with hydrogen gas, which ignited from exhaust backblast from the pad as the vehicle touched down. Even as the vehicle settled onto the pad, the inspection panels of the engine bay blew out from the resulting explosion.
Testing was halted for the year, and the vehicle had to return to Calverton to take up its place in the repair/assembly bay that Starcat Alpha, fully repaired, had vacated only the week before.

    Unfortunately, X-40/02’s damage was much more severe than the relatively minor issues suffered by Alpha. The fireball inside the engine bay had charred wiring harnesses, blown out insulation, deformed panels, and completely incinerated the management computers on each of the engines. The engines would have to be removed and returned to Pratt & Whitney for repair and recertification, while the Grumman team tore the entire lower vehicle apart searching out the extent of the flame’s damage. At the same time, the engineering staff and SDIO were carrying out a thorough review of the X-40 program’s goals, pacing, and handling procedures along with the staff from White Sands, who were brought back to Bethpage. Suntans were not the only things they brought with them—complaints about the ground support equipment, funding, staffing requirements, and cavalier expectations from Bethpage about flight rates were aired, and it wasn’t just the weather around the Bethpage plant that was frosty all winter. However, with the spring, work in New Mexico had begun to rectify some of the worst complaints, and Grumman’s Calverton staff was able to offer some good news: the certification that X-40/02’s frame was not permanently damaged, nor would its engines require more than an overhaul. Starcat Alpha’s engine bay was retrofitted to try to avoid a repeat of the incident, and then the vehicle was packaged and shipped to White Sands.

    The 1991 testing campaign had a more successful beginning than the previous year. Between April and mid-June, Starcat Alpha made a total of five successful flights, bringing the program total to 14 flights in less than a year and a half—close to what Grumman’s cost analysis indicated could be break-even for a reusable first stage, and in spite of the two major failures. On the fifth flight, though, one landing leg failed to lock in place during deployment, and the vehicle toppled. Fortunately, the Grumman “build them durable” tradition and the review of potential combustion hazards the previous winter made it nothing more than an embarrassment, and the vehicle was just sidelined in the hangar for inspection. After ten months of teardown, inspection, overhaul, and reintegration, X-40/02 was once again shipped from Long Island to White Sands in July to take up the slack, marking the first instance of both vehicles being present at the test site. The twin Starcats only shared a hangar for a few days, though, before Bravo was towed out and erected on the pad for its first flight since the engine bay explosion. A full static fire of the engines was conducted and then on July 3rd, X-40/02 once again took to the sky. With its successful flight, the program moved to examining the so-called “swan dive” necessary to put the aerodynamic controls into use, demonstrating the vehicle’s ability to pitch over its nose far enough to bring the control surfaces to bear, then rotate once more vertical before landing on propulsion. The first swan dive flight was over the primary pad, only demonstrating the ability to pitch over into and out of the correct attitude, but the second in August once again translated to the secondary pad in a “swoop” controlled only aerodynamically by the fins before pulling the nose up vertically to land. 
However, the flight revealed that some of the aerodynamic control sequences were less than graceful, and the vehicle was lifted off its gear and towed back to the hangar to join Alpha while Bethpage engineers reworked the control code, a process that ended up taking the rest of the year as aerodynamic models were re-checked in wind tunnels and primitive CFD.

    In February 1992, the test program began again, this time with X-40/01 bearing the results of a winter of code overhauls at Bethpage uploaded into its computers. The flight demonstrated transition into and then once more out of the swan dive attitude three times in the second-longest flight of the program (only slightly shorter than Alpha’s second flight, which had demonstrated the maximum design duration of the vehicle’s flight capability). However, circumstances caught up with the vehicle—a small crack in one of the inner laminae of the composite aeroshell was stressed by the unusually strong heating of the extended flight, and as the heat on it cycled with the vehicle nosing into the swan dive and out again, the crack grew. During the next flight, which repeated Bravo’s August flight to the auxiliary pad on aerodynamic controls, the crack reached a critical length, compromising a portion of the aeroshell near the Number Two engine access port. On touchdown, the shock was enough to shake loose a piece of the aeroshell about a foot square. Both vehicles were returned to Calverton. X-40/01’s entire aeroshell was removed and inspected, then replaced from spares, while X-40/02’s was removed, found to be intact, and reinstalled. Both the Bethpage and White Sands teams took advantage of the stand-down to incorporate overhauls to the vehicles and support systems, which led to an early end to testing for the year.

    By 1993, Starcat operations had become fairly routine: X-40/02 was shipped to White Sands and made four flights, expanding the swan dive’s use and successfully demonstrating the rapid turnaround originally attempted three years before. However, on the fourth flight, it suffered a leak in the oxygen tank, which led to a small fire onboard the vehicle during descent. In spite of the nominal landing, memories of the premature end of the 1992 season caused by Alpha’s aeroshell led to Bravo being shipped back to Long Island for thorough inspection. The issue was traced to an inadequate weld in the liquid oxygen tank, which through a combination of thermal and mechanical stresses had opened a pinpoint leak. The entire weld was redone, while X-40/01, checked and found clear of the issue, was shipped to White Sands to pick up the program. However, during the airframe’s thirteenth flight in mid-June, the 24th of the program overall, Starcat Alpha’s Number One engine suffered a partial failure, forcing it to abort the nominal mission and go for an early landing. With both vehicles temporarily out of commission, the program’s goals were examined—almost every objective the testing had set out to perform had been completed, essentially exhausting the potential of the Starcat design. Any further testing would likely require design of a new, larger vehicle closer to the program concept’s fully reusable first stage--an expense which the post-Cold War (and rapidly contracting) SDIO could not afford to fund. Moreover, there had been a major change at Grumman headquarters in 1992 which affected the desire to continue with the program.

    Grumman’s finances had always been shaky, the company essentially living from contract to contract, and the discontinuation of production of the F-14 Tomcat had put its future into doubt. While management felt they had good odds of securing some of the contracts in Project Constellation, one or two space contracts couldn’t keep the entire company afloat without some of the fighter contracts the company had always relied on. When Grumman’s designs were not selected as a finalist for the Advanced Tactical Fighter competition, company management began to consider whether it might be necessary to seek a merger with another company to survive in the post-Cold War market. In fact, their experience was highly desired by another company which had also failed in the Advanced Tactical Fighter contest, losing out to the eventual winner, the Northrop F-23. For decades, Boeing had been an outside competitor for Air Force and Navy fighter and bomber contracts, hoping to expand from its traditional strengths of transport and commercial aircraft into the lucrative arena of combat aircraft. Despite its success with legends like the B-17, B-29, B-47, and B-52, and the potential of designs such as the XF8B, Boeing had had little success in winning such contracts, failing time and time again to break into the market. Once again, with the Advanced Tactical Fighter, Boeing had stumbled. With only one other fighter competition, the Joint Strike Fighter, on the near horizon, Boeing was determined to do whatever it took to secure the contract. Grumman’s history in fighter design, especially naval fighters, offered Boeing a chance to gain experienced and talented engineering staff for the forthcoming JSF competition, while Grumman’s recent experience with Starcat offered opportunities in another, unexpected, arena. Grumman’s non-aviation businesses were also potentially valuable assets, whether sold to provide cash or retained for ongoing profit.
After considering the total possible value of Grumman to their future, Boeing made an attractive merger offer in late 1992, which Grumman’s management considered carefully, and eventually accepted.

    Thus, in 1993, when Starcat’s future was being debated, it was by a team under new management and with altered goals. Throughout ’91 and ’92, Grumman engineers had been studying potential applications of Starcat, including high-altitude hops using the current vehicles on higher-efficiency flight profiles, the addition of a small (perhaps also reusable) upper stage to boost research payloads above the Kármán line, and the development of the always-intended larger derivative to be operated commercially. Boeing, however, was more interested in making use of the Starcat team’s experience in its bid for the Constellation lander contract, and thus did not fight hard to counter SDIO’s intention to terminate the program. Some of the team saw the lunar contract bid and the potential to return to Grumman’s spaceflight roots as an intriguing challenge, and were happy to accept the transfer. However, some of the core Starcat devotees, both in engineering and operations, were put off by what they saw as the abandonment of a design of tremendous potential. Several key members of the team thus left Grumman behind in search of others who might be interested in following the trail that Starcat had blazed. In the shutdown, the airframes (which were technically Air Force property) were reclaimed. Starcat Alpha eventually took up residence in the Smithsonian, while Starcat Bravo was transported to Wright-Patterson Air Force Base in Dayton, Ohio, and placed on display in the Research and Development Hangar of the National Museum of the United States Air Force. After years warehoused against further disposition, the remaining flight spares and portions of the damaged Alpha aeroshell were acquired by the Cradle of Aviation Museum on Long Island, where (with help from volunteers from the Starcat team) they were assembled with dummy replica RL-10s to create a display replica, the so-called “Starcat Gamma.”
     
    Part III, Post 5: The International Solar Polar Mission and Odysseus' and Telemachus' flight to the Sun
  • Well, it's that time once again. This week, we're once more following up on something from Part II--but something which we'd actually planned to include in Part II. This is a post that has seen a lot of slips (thanks in large part to the NTRS nonsense earlier this year), but it's finally here. And if you think that's a roundabout path, you should see the probes it covers...

    On a production note, this post was only finally completed last night due to those same issues, so it may be slightly rougher than normal. Please feel free to point out grammar, spelling, or continuity errors for correction. Thanks!

    Eyes Turned Skyward, Part III: Post #5

    As with every other astronomical object, the dawning of the space age marked a new era in the study of the Sun. Given its tremendous importance to life on Earth, understanding its internal processes had long been a major scientific goal, one that, as it proved, could not be achieved without observations impossible from Earth’s surface. Moreover, as the nearest star to Earth, the Sun offers an important testbed not only for theories of stellar behavior per se but also for theories predicting that stars might have significant effects on the space around them, such as general relativity. The most obvious method of using spaceflight to investigate the Sun, after space-based solar telescopes, is to simply send a probe to pass very near it, just as probes are sent to the planets or minor bodies. While difficult, to the point where jokes are told about how such a mission ought to be sent at night, it is nevertheless possible. Although the near-Sun environment is thermally and radiatively punishing, and special measures would have to be taken both to protect such a probe and to operate it through its encounter, it would be possible to build a probe capable of surviving the near-solar environment, and the idea was, from time to time, subjected to close scrutiny and attention, not only from NASA but also from ESA and the Soviet program. All of these analyses, however, foundered on the extreme cost of the mission; the special preparations required meant that even a simple probe would cost hundreds of millions of dollars, a large sum compared to probes of similar scientific value sent to easier targets. Whenever the idea of a solar probe was revived, the cost issue tended to quickly send it back into hibernation.

    However, the phrase “space-based solar telescopes” contains a great deal of complexity which had not been completely explored by merely basing telescopes in Earth orbit, as had been done for the OAO and Skylab programs. It may seem almost too obvious to be worth mentioning, but the Sun, of course, is round, and at any given time only the half facing the Earth is visible from it, or from telescopes in orbit around it. Moreover, the ecliptic plane, in which the Earth’s orbit about the Sun lies, is nearly in the same plane as the Sun’s equator, hindering Earth’s view of the solar poles. Both of these factors mean that telescopes on or around Earth can only see a limited fraction of the Sun at any given time, yet activity on the far side of the Sun or at the Sun’s poles can have a significant effect on solar behavior and ultimately on Earth. Observatories placed into orbits passing over the solar poles or around the “back” of the Sun could not only fill this gap, but could also easily be fitted with particle and fields instruments to provide more data on the solar wind and related phenomena than possible from an Earth-centered perspective. The idea of a probe to observe the Sun’s poles and the solar wind at high solar latitudes, in particular, had been seized upon early in the space program and given a distinctive name: the Out-of-Ecliptic Probe, or OOE probe. The difficulty with an OOE probe was that any existing booster, even the mighty Saturn V, even the Saturn V augmented with a high-energy Centaur fourth stage, could not put a probe of any reasonable size into an orbit inclined more than about 45 to 50 degrees to the ecliptic, far less than solar scientists desired. As such, the OOE probe seemed doomed to fade into obscurity, a clever and scientifically interesting but impractical proposal.
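The scale of the booster problem follows from the standard plane-change relation: tilting a circular heliocentric orbit at Earth's distance by an angle i, without changing its speed, requires a velocity change of 2·V·sin(i/2), where V ≈ 29.8 km/s is Earth's orbital speed. A quick illustrative sketch (ignoring the details of escaping Earth itself):

```python
import math

V_EARTH = 29.78  # km/s, Earth's mean heliocentric orbital speed

def plane_change_vinf(inclination_deg):
    """Hyperbolic excess speed (km/s) needed to tilt a circular 1 AU solar
    orbit by the given angle while keeping the orbital speed unchanged."""
    return 2.0 * V_EARTH * math.sin(math.radians(inclination_deg) / 2.0)

# Even modest inclinations demand enormous departure energies (C3 = v_inf^2),
# far beyond any chemical launch vehicle of the era.
for i in (30, 45, 60, 90):
    v = plane_change_vinf(i)
    print(f"{i:2d} deg: v_inf ~ {v:4.1f} km/s, C3 ~ {v*v:6.0f} km^2/s^2")
```

By this rough measure, a true polar solar orbit demands a departure speed larger than Earth's entire orbital velocity, which is why even a Saturn V with a Centaur on top ran out of performance well short of the inclinations solar scientists wanted.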

    Fortunately for the future of the OOE probe, astrodynamicists were about to find a way out of this dilemma. As part of the same series of analyses that led to the discovery of the famous Grand Tour of the outer planets, scientists discovered that Jupiter could massively change the trajectory of an incoming probe. Not only could a probe be accelerated to reach other planets, but its trajectory could be bent away from the ecliptic, even folded back in on itself to drop the probe directly into the Sun. Scientists quickly proposed sending a spare Galactic Jupiter Probe, a sister to the Pioneer 10 and 11 spacecraft, to follow the proposed trajectory and pass over the Sun’s poles, but the limited scientific suite of the spacecraft, the cost of doing so, and the limited budgets of a NASA struggling with two other major robotic probe programs and several human spaceflight projects almost as quickly killed the idea. In the end, the spare probe was donated to the National Air and Space Museum to represent its siblings, bound out of the solar system. Nevertheless, only the quick and dirty proposal represented by the so-called “Pioneer H” mission died in the face of NASA’s budgetary difficulties, not the underlying concept, and low-level work continued at several NASA centers.

    Meanwhile, with the recent formation of ESA and its active programs in astronomy and planetary science, European solar scientists were beginning to consider the idea of an OOE probe themselves. Lacking the budget and technology base of the United States, however, ESA could contemplate neither launching such a probe on a titanic booster directly into an inclined solar orbit nor building a probe that could survive the radiation and cold of Jupiter to be slung back into an inclined trajectory on its own. Instead, they planned on using the increased efficiency and steady thrust of a spacecraft equipped with ion thrusters to drive it into a severely, although not totally, inclined orbit without the giganticism of a Saturn V-Centaur or a long and difficult voyage past Jupiter. Such a plan had its own faults, however, starting with the poor development state of ion thrusters at the time, and, like the American plans, the apparent cost and development time needed for the probe drove the idea into dormancy.

    There the idea of an OOE probe remained on both sides of the Atlantic, until the scientific arms of NASA and ESA began to grow their contacts in the late 1970s. Solar scientists from both agencies discovered that their counterparts, too, had had the idea of the OOE probe, and gradually the idea of a possible joint mission became current in both circles. Such a mission could be both more scientifically productive and less expensive than a spacecraft built and operated by just one side of the partnership, perhaps allowing an OOE probe to be launched after all. Further work by both sides showed that rather than a joint probe, a joint mission involving two spacecraft would be even better; although more expensive, it would also be much more scientifically productive, by allowing simultaneous observations of both poles of the Sun. Together with Earth-based and Earth-orbiting telescopes, most of the Sun could then be observed during the probes’ flybys, allowing a detailed global look at solar behavior that would previously have been completely impossible.

    As solar scientists were meeting in Washington and Paris to discuss collaboration, the agencies they needed to fund and build the spacecraft were coming into conflict over the seat distribution of the first several Spacelab missions, the so-called “Seat Wars”. In this climate of conflict, proponents of the dual OOE mission were quick to sell their mission as one that could bridge that divide, both uniting ESA and NASA in a single mission while allowing them to remain largely separate in the actual details of construction and even operation. Although the “Seat Wars” were resolved by the development of the Block III+ upgrade program while what had become known as the “International Solar Polar Mission,” or ISPM, was still winding its way through budgetary approval, a round of fence-mending seemed to be in order, and relatively simple and inexpensive scientific probes--where Europeans and Americans had been collaborating for many years--in turn appeared an attractive place to start. In parallel with the Kirchhoff/Newton cometary probe, work on ISPM began in 1979, with launch planned for early 1985, two years after the Galileo mission and a few months after Kirchhoff/Newton.

    Unfortunately for ISPM, rough budgetary seas still lay ahead. Like the rest of the American scientific probe program, it was an early target for budget cutters in the Reagan administration, and although the intervention of Carl Sagan and the international character of the mission spared it from substantial cuts in 1981, instead merely delaying launch a year, there were rumblings of an American-driven downscope of the mission, or even a unilateral American pullout, in the works for 1982’s budget. Mission management went into overdrive attempting to protect the mission from further cuts, only to be surprised, as was the rest of America, by the Vulkan Panic. Although the budget was no longer so stressed as it had been, NASA’s solar science division was not a glamorous front line against the Soviet program, as its human spaceflight and planetary exploration programs were, but instead a rather mundane scientific program with some useful but not (yet) especially economically important results. As such, the solar science budget, unlike the total NASA budget, did not see double-digit year-over-year increases, although it was still tacitly expected to produce spectaculars that would advance the unspoken mission of beating the Soviets...somehow.

    At the same time that NASA’s portion of the program was suffering from stagnation, the European half was struggling just to survive. Since the approval of ISPM, ESA had undergone nearly continuous expansion, engaging in more science missions, more international collaboration, and more technological development. Although its budget had mostly expanded in sync with these increased demands on its material, managerial, and human resources, many of the new programs also had specific national backers--although they were (theoretically) ESA programs, most of the actual funding and development needed by the new spacecraft, new rockets, and new capsules would be provided by one or another of ESA’s member nations. For example, the French would manage most of the collaboration with the Soviets, the Italians would lead the Piazzi asteroid probe, and so on. In each case, this meant that those programs had a strong backer at the highest levels of ESA management, in the form of industrial and scientific ministers from the countries involved who would step up for “their” program to ensure a fair industrial return. Almost alone among ESA’s major programs, ISPM had no such ministerial advocate, instead utilizing the older system of distributing each program evenly over multiple countries. Components for the probe were to be manufactured at a number of locations across the continent, while many of “ESA’s” contributions were actually coming from universities, again located in several countries, rather than from the agency itself.

    All of these factors combined set vultures interested in controlling ESA’s expansion circling the apparently attractive target of ISPM. If Europe unilaterally downscoped its probe, or even completely pulled out--as the United States would surely not agree to reducing the scope of the program, not after the Vulkan Panic--it would open up funds for other programs. And given the relatively small number of jobs and small amount of European-level funding provided by ISPM to any individual country, there was every chance that any new or expanded program would actually provide a greater return than ISPM would, making the prospect of a cancellation an attractive bet to a certain sort of person in upper-level management. Whispers that it would be cancelled soon trailed after the program like a particularly unwanted groupie, following it as it slowly moved from design to hardware construction. Almost to the day of launch, rumors could be found in the right places that Europe would soon give up on the program, despite obvious continued progress and the ever larger resources that had already been sunk into it.

    Ultimately, the failure of the vultures to defund ISPM can be traced to the mission’s historical and political context. Given that ESA was simultaneously extending and deepening its traditional links with the American and Soviet programs, as well as forging new connections with the rapidly rising Japanese space program, unilaterally abandoning a joint program would have seriously damaged ESA’s ongoing program of development by destroying its credibility and trustworthiness as an international partner. Moreover, ESA’s upper management had just a few years earlier been protesting similarly high-handed and arbitrary actions by NASA--the “Seat Wars” that had spurred ISPM’s approval to begin with! The hypocrisy of protesting NASA’s actions on the one hand and then going out and copying them on the other would, again, have badly dented ESA’s ability to actually perform its mission of promoting European space development and collaboration, both within and without the continent. It did not help that, when pressed to find cost savings, ISPM’s management were clever enough to see this problem and tended to present options that, while technically possible, were terrifically unpalatable to upper management, much as city and county governments in the United States will present budget projections that cut extremely popular services such as police officers or firefighters when faced with the possibility of a tax cut. In the end, these factors were enough to ensure ISPM’s survival in Europe, although at the cost of diverting time and energy away from the people who were supposed to be building the probe so that they could defend it, forcing them to rely more and more heavily on their better-funded American partners for support.

    Fortunately, these measures were enough to ensure that the ESA probe--now named Odysseus after the famous hero of Homer’s Odyssey, who spent two decades away from home fighting in the Trojan War and then journeying back to Ithaca--was able to meet its schedule. In late 1985, it departed Europe for integration with its NASA counterpart Telemachus--named after the hero’s son, who had grown up without his father before leaving home to discover who Odysseus had been, eventually returning and assisting his father in reclaiming Ithaca--at Cape Canaveral. Together, they were checked out and underwent final preparations for flight, including the installation of their RTGs, before being stacked together atop the final Saturn IC-Centaur. Despite the enormous power of Jupiter to alter their trajectories, the probes still required a huge amount of launch energy to complete their mission, more than any previously launched spacecraft, and more than even the Saturn-Centaur could provide. To allow ISPM to go forward, an additional fourth, solid stage was mounted on the Centaur transjovian injection stage, enough to give the probes the final boost needed to carry out their voyage.

    Launch went smoothly, and once the final stage fell silent the two probes were firmly bound for Jupiter, traveling away from Earth faster than any previous spacecraft. With launch complete, the two probes maneuvered apart to take care of the last few adjustments needed to put them on their separate courses and began the lengthy process of commissioning: activating cruise instruments, deploying booms, and ensuring all systems were functioning properly. Once that work finished, the two spacecraft began, for the first time, to explore their environment. As probes not of any particular planet but instead of the Sun and the interplanetary environment, they could do just as much scientific work while waiting to encounter Jupiter as they could at any other point in their journey, and in fact their proximity to one another up to the Jupiter encounter offered its own unique opportunity for solar research. For the first time, scientists could study not just how the solar wind and interplanetary medium varied over space, as they had with the Pioneer spacecraft of the 1960s and early 1970s, or with more modern solar observatories, but how they varied over time, especially on short timescales, as first one and then the other spacecraft passed through any given point in space.

    Just over a year after launch, this quiet but steady routine was disrupted by their approach to Jupiter. Despite launching two years after Galileo, Odysseus and Telemachus would reach the planet a month before the Jupiter orbiter and its probe thanks to their extremely high speed while leaving Earth. As they approached the giant planet, their instruments switched from cruise into Jovian mode; some experiments were shut off to protect them from Jupiter’s intense radiation belts, while others were switched on and set to record data from the circum-Jovian environment. Despite six previous flybys by other spacecraft, Odysseus and Telemachus were well positioned to extend the Pioneer and Voyager observations of Jupiter’s surrounding environment, not only flying past the planet at higher latitudes than any previous or planned mission, but also passing through the dusk hemisphere. During their separate flybys, some two days apart, Odysseus and Telemachus discovered significant amounts of material from both the Sun and Io throughout circum-Jovian space, showing that despite the planet’s powerful magnetic field the solar wind is able to penetrate deep within the Jupiter system, while in turn the volcanoes of Io feed material farther out than previously thought likely. In addition, they discovered substantial flows of highly energetic particles at high Jovian latitudes, likely related to the planet’s auroras, and showed that much of the population of energetic electrons within interplanetary space probably originated from around Jupiter. As had been predicted, the dusk hemisphere--where the highly compressed magnetic field lines and rich particle environment of the Sun-facing hemisphere are allowed to expand into Jupiter’s enormous magnetotail--proved to be enormously dynamic, with rapid changes both during and between flybys evident in the data.
Although merely a secondary objective, Odysseus and Telemachus had made significant contributions to Jovian science and scientific understanding of Jupiter’s interactions with the Sun, something that had scientists excited for the next phase of the mission as the two probes began their slow climbs away from the ecliptic and towards aphelion.

    For the next year and a half, Odysseus and Telemachus returned to their cruise state, waiting and watching the Sun as they traveled towards its poles. Gradually, more and more of the solar polar regions were revealed to the instruments aboard the spacecraft, and as they began to return to the warmth of the inner solar system more and more of their instruments were brought up to full power to drink in the data streaming outwards. In early March 1989, as Odysseus and Telemachus began to return to the inner solar system, the two probes, together with various Earth-based and Earth-orbiting instruments, observed a large coronal mass ejection, just a few days after a major flare they had also seen. Together, they remotely monitored the CME until it hit Earth some four days later, triggering auroras as far south as Texas and flooding much of near-Earth space with radiation, causing many problems for satellite operators. Satcom-D2-East, RCA’s major distributor satellite for its NBC Satellite service east of the Mississippi River, was permanently knocked offline by radiation-induced faults, while other geosynchronous and low orbit satellites suffered less severe, although sometimes still permanent damage. For some time afterwards, astronauts aboard Freedom and Mir had to withdraw into protective shelters against unusually high particle doses while passing through the South Atlantic Anomaly and other areas more exposed to particle radiation than most of low Earth orbit. More memorably for many residents of Quebec, magnetic field fluctuations related to the CME induced severe currents within distribution lines for Hydro-Quebec, the province’s main electric utility, knocking them offline within seconds. As a result of a disturbance on the Sun most had never heard of, six million customers had just lost power in the depths of a spring frost, while thousands more found themselves trapped in stuck elevators or plunged into darkness beneath Montreal’s streets. 
It took twelve hours for Hydro-Quebec to restore power, preventing the Montreal metro system from operating during the morning rush hour and forcing many businesses and schools to close for the day. While previous incidents early in the century had caused similarly dramatic interruptions to telegraph service, no large disruptions had taken place since the beginning of the space age, and any risk posed by solar activity had largely been relegated to the concern of airy astronomers and perhaps a few specialized businesses. Now that it was clear that ordinary people could be affected by the Sun, interest in understanding--and hopefully predicting--the Sun’s behavior surged, leading to discussions between Europe, Japan, and the United States about possible avenues of further solar research.

    For Odysseus and Telemachus, however, all of this was far away and of little importance in the here and now. Even before the official start of solar polar operations in the middle of the year, together with Earth-based observatories, the probes were able to see enough of the Sun that (besides a narrow equatorial strip antipodal to the Sun’s subearth point) virtually the whole Sun could be continuously observed by their instruments, affording an unprecedented whole-globe three-dimensional view of the Sun’s behavior, from the activity of the corona (imaged by Telemachus’ visible-light coronagraph) to the finest details of the photosphere and even further to the composition, speed, and direction of the solar wind at three widely separated points. As first Odysseus and then Telemachus sped through their polar passes less than a month apart, their data made it abundantly clear that many of the specific features scientists had expected to see simply did not exist, and many of their predictions for the characteristics of the Sun and the solar wind at high latitudes had been completely wrong. Where scientists had expected differences from equatorial behavior, there were often none at all, or differences of an entirely unexpected kind. For example, researchers had thought that as the spacecraft approached the poles, they would observe a smooth increase in the speed of the solar wind with increasing latitude. Instead, they saw an abrupt jump from slow, high-density outflows at low latitudes to fast, low-density fluxes at mid to high latitudes, with speeds in both regions remaining relatively constant outside of the transition latitudes. While they had correctly predicted that the polar wind would be faster than the equatorial gusts, they had totally missed the mark on the details of the relationship between the two and the spatial structure of the wind, besides granting the equatorial portion a greater importance than it really deserved in solar dynamics.

    Although their beliefs about the solar wind were partially correct, the same could not be said about their predictions for the shape and strength of the Sun’s magnetic field, nor their predictions of increased cosmic ray penetration towards the Sun’s poles. Prior to the mission, scientists had believed that the solar magnetic field was similar to that of a simple dipole magnet, with an increased density of magnetic field lines around the poles compared to the equator. Because of the Sun’s powerful solar wind, these lines would be dragged out into space, where they would in turn be wrapped up into a relatively simple spiral shape by the Sun’s slow but steady rotation. Putting this all together, they then expected that cosmic rays entering the solar system in the direction of the Sun’s poles would be able to penetrate much more deeply than those entering along the ecliptic, increasing the rate at which the two spacecraft would observe cosmic rays. In reality, as it turned out, the magnetic and cosmic ray fluxes detected by the spacecraft were essentially identical to those in the ecliptic, while the structure of the polar magnetic fields was far more complex than a simple spiral. Altogether, solar scientists now had a long, hard period of thought ahead of them to try to create new models or reconcile old ones with the new data provided by Odysseus and Telemachus. Even the minor dust experiment aboard Odysseus, practically a secondary payload, showed a much greater flux of interstellar dust into the solar system than had been expected. More positively, compositional analysis of the solar wind was showing that certain heavy ions, most notably magnesium, were much more common than others, such as oxygen, in the “slow” component of the solar wind, and vice-versa for the “fast” component. 
Combined with other data collected by the spacecraft, this seemed to indicate that the two streams originated from different areas within the solar atmosphere, and in particular that material from the so-called “chromosphere”--the lower atmosphere of the Sun, just above the brilliant photosphere--must have significant influence on the corona, the Sun’s outer atmosphere, with significant material transfers between the two. Moreover, the actual scale and scope of this interaction varied from place to place as a result of differences in the coronal temperature, all facts which had previously been unknown and unexpected.

    These results were further buttressed just under a year later, during mid-1990, as Telemachus and Odysseus exchanged poles, the one dipping under the south pole while the other headed north. By comparing the data from this pair of passes with the previous year’s, scientists could extend from simply having a newly three-dimensional picture of the Sun to having a now four-dimensional image, with the ability to observe changes over time as well as variations over its surface. As the two spacecraft began their long trek back towards the outer solar system, it took little deliberation for NASA and ESA to agree to an extended mission covering at least the next pair of polar passes, expected in 1995 and 1996. Both Odysseus and Telemachus were in good shape, and it would cost little to continue operating them for another few years. Scientifically, the timing was excellent; while 1989 and 1990 had been near solar maximum, 1995 and 1996 would be years of solar minimum, allowing comparisons between the Sun at its most and least active.

    Despite their journey away from the Sun, the spacecraft were hardly slowing their scientific work. Although observations of the Sun continued at a lower pace as they moved towards aphelion, their efforts in non-solar studies were ramping up. Since launch, both probes had been used for a series of experiments intended to detect gravitational waves, one of the last great predictions of general relativity not to have been directly observed in the decades since the theory had been published. When they passed through the solar system, the waves would cause slight changes in the space between the Earth and the two probes, in turn slightly altering the apparent frequency of radio transmissions from the probes to Earth (and vice versa). In theory, this small change could be detected and used as proof of gravitational waves, although it would be a very difficult experiment to carry out. Because of the shift to extended mission operations and the probes’ increasing distance from the Sun (and therefore Earth), the cruise period between the first and second solar polar passes seemed particularly ripe for a lengthier and more in-depth search than had been possible during the first cruise period (when the spacecraft had not yet reached Jupiter) or the second (while they were approaching the Sun). Unfortunately for physicists and astronomers eager to start measuring and using gravitational waves, these efforts failed to return any indication of gravitational waves, leaving them to cast about for new methods of detecting the still-theoretical ripples in spacetime.

    Fortunately, the other major non-solar experiment the two spacecraft were carrying out was proving far more successful. Only a few years before the OOE probe concept had first been mooted, American satellites intended to detect secret nuclear tests had started picking up strange bursts of gamma radiation from space, unlike any known gamma source. As Pioneer H was being cancelled, this work was declassified and published by a Los Alamos research team, instantly sparking a great deal of scientific discourse. The greatest question of all, of course, was where the bursts were coming from; it was known that they were not from Earth or the Sun, but what sort of astronomical object might be producing them was completely unknown. Investigations were further hindered by difficulties in pinpointing the location of bursts on the celestial sphere, preventing astronomers from definitively saying whether the bursts were generated within or without the Milky Way, or from attempting to observe the source object once a burst was detected. As ISPM had begun to develop, astronomers connected with the project realized that it could help solve this difficult problem: because of the great distance Odysseus and Telemachus would travel from the Earth, and their very large angular separation both from each other and from the Earth, the combination of gamma ray detectors on both spacecraft with those already orbiting Earth could allow far more precise determinations of gamma ray burst positions than was possible for Earth-orbiting detectors alone. By the mid-1990s, towards the end of the first extended mission phase, this increased precision had allowed the discovery of optical afterglows associated with the bursts, proving that the events were associated not with nearby processes in the Milky Way (as had been suspected in the 1970s and 1980s) but with incredibly distant galaxies.

    The return of Odysseus and Telemachus to the solar poles in 1995 and 1996 offered scientists the first opportunity to compare their behavior during a solar minimum period to their appearance during solar maximum. Although little revolutionary was learned in comparison to the first phase of the mission, the additional data returned by the two spacecraft was nevertheless useful to solar scientists and valuable for showing how the Sun evolved over time from a truly global perspective. With the probes still in decent shape and proving invaluable for astronomical studies of gamma ray bursts, NASA and ESA went ahead with a second extended mission, the “Solar Polar Evolution Mission,” intended to last until after the next pair of flybys in 2001 and 2002. Like the first set of polar passes, these would take place at the height of the solar maximum, affording the two probes the opportunity to observe the Sun in all of its phases during their mission. During the extended voyage out and back, the spacecraft once again returned to quiescence, expending little power and returning comparatively little data. As they returned to the Sun for the third time, things warmed up slightly, but not as much as they had either of the previous two times. This time, the Sun was a known quantity, and this time Odysseus and Telemachus could offer little but refinements to existing knowledge, not the wholesale breakthroughs they had offered before.

    Moreover, the two spacecraft were getting old; after fifteen years in space, their RTGs, crucial for keeping the probes powered and warm in the outer solar system, were running down, with their nuclear fuel producing less heat and their thermocouples converting less of that into electricity. Already, Telemachus’ sun-pointing telescope and its power-hungry despun platform had had to be disabled to save power; there was every indication that both spacecraft would need to start instrument power sharing soon as well, greatly limiting scientific operations. Other parts had begun to fail as well, including certain components of the communications systems of both spacecraft. Although redundancies had allowed them to continue operating, further failures could easily cut either off from Earth. And to cap everything off, the resumption of American lunar exploration had drastically increased demands on the Deep Space Network, especially the large dishes needed to communicate with Odysseus and Telemachus. Ceasing communications with the two probes would open up a useful amount of capacity for human missions and a new generation of robots. Even the astronomers no longer needed the two, as advancements in Earth-orbiting telescopes and techniques for rapidly acquiring the x-ray and optical afterglows of gamma ray bursts meant they no longer required the extreme precision offered by Odysseus and Telemachus in locating them.

    In June 2003, after months of slow decommissioning work, both spacecraft were commanded to shut down entirely. Unlike the heroes from whom their names were drawn, they would never be able to return home, fated instead to continue orbiting the Sun until, by chance, their orbits intersected Jupiter again, and they were swallowed up by the king of the gods or thrown into entirely new and unforeseen paths. Nevertheless, they had not existed in vain, and their mission had ended in a voyage of adventure--a fitting legacy for the most famous wanderer of the classical world and his son.
     
    Part III, Interlude #3: The Quiet Years
  • Salutations, everyone! I am the Brainbin, and I come to you today with yet another interlude, exploring the popular culture in the world (and beyond!) of Eyes Turned Skywards, this time in those disaffected, cynical, post-modern, ennui-laden years known as the Nineties. I’ve been graciously invited by e of pi and Workable Goblin to continue picking up on some of the plot strands I began weaving in my two previous posts, though I warn you now that this update is just about half again as long as those two combined, and easily the longest thing I’ve ever written for a single posting. Given its length and complexity, I could not have written this largely by myself as I did the two previous posts, and fortunately I didn’t have to - Google Drive is a wonderful tool. Many thanks to e of pi, Workable Goblin, and nixonshead for their very active input. Also, you may note that several plot threads are left hanging; many of these will be picked up in the second guest post I will be writing for Part III. So, without further delay, allow me to present…

    Eyes Turned Skyward, Interlude #3: The Quiet Years

    The Cold War was finally over, and in a way that no one who had lived through it could possibly have expected: instead of going hot, and very probably nuclear, as everyone had feared, it had ended in a gentle thaw, as the Second World collapsed in upon itself like a house of cards. The Autumn of Nations in 1989, which had resulted in the fall of the Iron Curtain and the reunification of Germany, was not put down by the Soviet Union as Hungary had been in 1956, nor as Czechoslovakia was in 1968. The era of two superpowers and opposing blocs was over; the United States was the last one standing. This shockingly abrupt and non-belligerent shift in the geopolitical situation left many combatants of the Cold War feeling alienated, perhaps even disappointed. It was the kind of anti-climax that could only happen in real life; the peace that everyone had said they wanted, but which nobody had honestly expected. The USSR was no more - the ancient enemy of the Western Democracies had not even lasted for 75 years, just barely the length of an average lifetime. The Cold War was even shorter - carrying on for just four decades in total. But it had seemed so much longer. The Presidency of Ronald Reagan, who had ended the era of détente in order to escalate the antagonistic situation with the Soviet Union, had only just ended when the Berlin Wall fell; the military spending initiatives he had pledged during his term in office had included a 600-ship Navy and the Strategic Defense Initiative, which seemed to be all for naught.

    That driving force, that carefully steered, steady-as-she-goes direction which had led all of Western culture was gone. Millions were left adrift. Defense budgets were trimmed. Battleships were put back into mothballs. Nuclear arsenals were scaled back. The palpable physical threat of hundreds of missiles with atomic payloads, pointed at all the major cities and installations belonging to the other side, was eliminated; but it made for a poignant metaphor. There was nothing to attack now; nothing to defend against. Humanity had always thrived when faced with challenges, with resistance from an opposing force. Now there was no opposing force either. The veterans returning home from the battlefields of World War II had reported alienation, and difficulty re-acclimating themselves to their peacetime surroundings; the Cold War, which had been far more pervasively a culture war than a military one, counted everyone as its combatants. What they felt was certainly far less traumatic than what the veterans had experienced, less physically and emotionally scarring, but it did leave a mark. Everything had changed. The years which immediately followed the Cold War came to be known as “The Quiet Years”. [1] Many critics, particularly cultural conservatives, would instead describe them as disquieting. The 1980s had been an era of warm-and-fuzzy family sitcoms like The Cosby Show and Family Ties. Only at the tail end of that decade did more cynical, topical programming emerge, primarily as a reaction to this complacency, and this would itself become a dominant trend in the early 1990s, the first era in which Generation X, the generation which followed the Baby Boomers, made its cultural influence known.

    One of the earliest examples was Seinfeld, which starred observational comedian Jerry Seinfeld (playing a fictionalized version of himself), co-created by him and his former roommate and comedy writer Larry David in 1988. Though it had a direct antecedent in the cable program It’s Garry Shandling’s Show, it would reach a much larger audience from its network berth on NBC. It was oft-described as the “show about nothing” and epitomized television during the Quiet Years; plots were low-concept to the point of mundane. Characters would argue about trivialities, starting with which button was the most important on a shirt, and move nowhere from there. In addition, the core foursome - two of whom were based on Seinfeld and David, the others being based on their friend Kenny Kramer and a composite of their various ex-girlfriends - were unabashedly unsympathetic, both selfish and self-absorbed. [2] From there, they gradually evolved into gleefully amoral - as a direct reaction to the moralistic programming of the 1980s, there would be “no hugging, no learning” on Seinfeld. Seinfeld’s character in particular took a strangely vindictive pleasure in his continuing amorality, and it was telling that the most likeable (and compassionate) character was the stock “wacky neighbour”. Although the characters (and the actors who portrayed them) were late baby-boomers, the show had a more Generation-X mindset: the previous generation had fought for what they had seen as noble ideals, but these characters, to the extent that they fought for anything, sought to vindicate their own self-importance. It was telling of the times that viewers identified with them anyway.

    Serving as a distaff counterpart to Seinfeld - in more ways than one - was Murphy Brown, which (like Seinfeld) had technically premiered shortly before the end of the Cold War, in 1988 (but while glasnost and perestroika were in full swing). Unabashedly topical, like the Norman Lear sitcoms of the generation prior, but lacking all of their warmth and sincerity, Murphy Brown was a work-based sitcom set at a television news-magazine, which allowed for political satire and the blending of reality and fiction (as real television journalists and politicos were often mentioned and made frequent appearances). The titular character was another sign of the times: a single, mature career woman, played by veteran actress Candice Bergen. Essentially, Murphy Brown took the sketch-comedy approach to sitcom writing, the show often resembling the “Weekend Update” feature on Saturday Night Live far more than even other work-based sitcoms of the era. The controversy which came to define the show, however, would not emerge until the 1991-92 season, when a pregnancy storyline was written into the show. Bergen, 45 years old at the time, was not herself pregnant, but the decision was made in order to highlight the issue of single motherhood. This attracted the ire of Vice-President Dan Quayle, who felt that her pregnancy - and decision to raise the baby alone (Maude had already handled abortion, after all) - trivialized the importance of fathers and their role in the family. He made this statement during an election year, the day after the episode in which Murphy had delivered her child (a daughter, Kelly), and it attracted instant press attention. [3] Given the reality-meets-fiction tenor of the series, it responded in the two-part season premiere by reacting as if Quayle had condemned the character of Brown herself (which was to say, a real person), as opposed to the show on which her character appeared. 
It was a smash success and attracted truckloads of critical plaudits. The great weight given to this fairly insignificant hullabaloo (Brown had been far from the first single mother on television - Norman Lear had, once again, beaten her to the punch with One Day at a Time, for example) seemed almost laughable compared to the world-changing events that had dominated the earliest seasons of the show, and yet ratings peaked during the two seasons detailing Murphy’s pregnancy and her newfound single motherhood. It was, however, emblematic of the decline which faced real news and what it chose to cover during this era - a shift from geopolitics to celebrity gossip. People didn’t seem to care about events and ideals so much as they did about other people.

    The aesthetic of the sweeping epics of yesteryear did survive in one curious genre, however: science-fiction. In another event which would come to fruition in 1988, the third instalment of the Odyssey series was published, with Arthur C. Clarke deciding to elaborate on the Vulkan Panic which had been prevalent earlier in the decade (and which had informed the film adaptation of 2010). [4] He sought direct inspiration from the Galileo probe, which arrived at Jupiter in September 1987, and the findings it returned to Earth concerning the moons that shared a namesake with the probe itself. 2020: Odyssey Three was released late the following year, to brisk sales (even by Clarke’s standards). Hollywood was interested, because 2010 had done well at the box-office despite the lack of Stanley Kubrick’s singular, uncompromising vision, and because the success of 2020 had domino effects for science-fiction in other media; bringing the novel to the big screen still seemed unlikely, however, until the ascent of a most improbable champion: Tom Hanks. [5] The all-American everyman actor, primarily known for comedic roles, had gained critical plaudits for his dramatic role in Big, by far the most well-received of a spate of body-swap pictures released in the era. Hanks, a longtime fan of Clarke’s work, wanted very much to play the lead role of Commander Graves, the captain of the Discovery Two, and now he finally had the cachet to make it happen. [6] However, by the time 2020 was finally produced and released to theatres, the book itself had set into motion a whole new wave of science-fiction, starting on the small screen.

    J. Michael Straczynski, the one-time showrunner for the popular and well-received cartoon adaptation of the smash-hit Ghostbusters film (entitled The Real Ghostbusters), was left unfulfilled by his work on that program, seeing it as a mere stepping-stone toward his dream project, that which he was sure would become his magnum opus. The 1980s had seen dramatic series embracing serialization to unprecedented levels, extending beyond soap operas such as Dallas or Dynasty into procedurals like Hill Street Blues and L.A. Law. Straczynski wanted to extend the reach of serialization further, into science-fiction television. Star Trek: The New Voyages had experimented with arc-based storytelling, only for the notion to meet widespread resistance among viewers (resulting in writers approaching serialization on a piecemeal basis). Straczynski wanted to bring this half-hearted tendency to full bloom, creating an exemplar of the fabled television novel - with a clearly defined beginning, middle, and end - in the process. After many years developing and refining the story he believed most worthy of bringing to the small screen, he began pitching it to production companies. The epic scope of his planned story alienated many of them, but Straczynski - whose showrunner experience in an expensive format, animation, gave him some knowledge of how rapidly (and unexpectedly) costs could accumulate - promised that his show could be produced on time and on budget. The amount of control he intended to exercise was singularly ambitious, in that virtually all dramatic programming in the United States was written by committee (the “Writers’ Room” being the central nexus of any series), whereas Straczynski intended to script most episodes himself, having already developed most of the running story arcs he had in mind - an approach far more in the British tradition.

    One key advantage of the setting in terms of keeping costs down was that, unlike Star Trek, Straczynski’s series (which he called Babylon 5) was set on a space station. [7] This would allow for the construction of dedicated sets, with no need to incur costs on building, installing, and then demolishing swing sets. However, given the station’s stated purpose of serving as something of a galactic melting pot, alien races would be depicted, and in large numbers, as humanity was but a small fish in a great big sea of interstellar species (a marked contrast to the prominent role played by humans in the United Federation of Planets on Star Trek), with other alien species forming power blocs which regularly threatened the fragile Earth Alliance. In fact, it was a long and bloody war with one of these powers, the Minbari, which had spurred the creation of the Babylon 5 station, in an echo of the diplomatic organizations that had emerged from each World War in the 20th century. Given that Babylon 5 was the fifth such attempt, it was clear that the Babylon program in general owed much more to the failed League of Nations than to the UN.

    By this time, Star Trek: The New Voyages had been off the air for four years, and there was a growing demand by science-fiction fans of the era for another small-screen outing in the genre to replace it. [8] None of the networks, not even the nascent FOX, were interested in Babylon 5, however. Straczynski and his production company, Warner Bros., were forced to resort to selling the series into first-run syndication, a market which had supported original programming in substantial numbers in the 1980s. [9] From syndication, individual stations (including network affiliates) could choose to buy the series to air in any of the over 200 markets throughout the country, just as though it were a rerun of an already-aired show. Many stations were understandably nervous at the potential scope of Babylon 5, however, and thus a pilot movie, The Gathering, was aired on Monday, February 6, 1989, in over 150 markets throughout the United States (including all twenty of the largest) in order to test the waters. The lead character was Commander Jeffrey Sinclair, commanding officer of the Babylon 5 station. Lieutenant Commander Laurel Takashima served as Executive Officer. [10] The two leads were well-received by critics and audiences, as was the telefilm in general, leading Warners to greenlight production on a series proper, which had just enough time to begin preparation for a September premiere in the 1989-90 season. Still, no network was interested, though many individual affiliates were, and so it too would air in syndication - which Straczynski handled as diplomatically as he could. “Going up against Wheel of Fortune can be a double-edged sword,” he would remark, years later; Wheel was the highest-rated program in first-run syndication at the time, and had been for several years. [11]

    Many of the visual effects originally created for the pilot movie were reused countless times for the series proper. Their design, including that of the station itself, was overseen by Visual Effects Supervisor Steven Begg. Because computer-generated imagery was still in its infancy at the dawn of the 1990s (prior to its proliferation through the ensuing decade), practical effects were primarily used, including extensive model shots, matte paintings, and stop-motion photography. Inspired by the work done at Industrial Light & Magic, the Lucasfilm special effects division, over the previous decade, the work done by Begg and his team was some of the most impressive - and cost-effective - ever made for television. [12] The only Emmy Awards won by Babylon 5 throughout its run were for the visual effects, though it was also nominated in other (mostly technical) categories.

    The complexity of Babylon 5 was beyond even the most ambitious shows seen on network television at the time. The overarching storyline entailed constant growth and development of the characters throughout all five seasons. The “Shadow War” served as the backdrop for an in-depth exploration of the astropolitical situation throughout the conflict, which included ties to historical events. The Babylon 5 station itself - something between a melting pot and a mosaic, despite the precedent set by the four failed stations before it - served as a touchstone and a constant through the tumult depicted in the series. The scope and focus of the storyline were, occasionally, derided as an inferior ripoff of The Lord of the Rings, especially given the strong focus on mystical elements (fairly unusual for the technologically-oriented genre of science-fiction). However, the extreme complexity and attention required of the average viewer proved a deterrent and a particular thorn in the side of executives, who constantly challenged Straczynski’s creative control. Ratings were never terribly strong, and the threat of cancellation loomed throughout. Nevertheless, the show would run for a full five seasons, concluding with a bang in 1994 - which (for this and other reasons) would become known as the “Summer of Space”. [13]

    As counterpoint, that franchise which had inspired confidence and optimism for the future of mankind in one of the darkest hours for the United States - the late 1960s - would see a revival in the early 1990s, for the 25th anniversary of Star Trek. Heading this project was the showrunner from the later seasons of The New Voyages in the 1980s, Harve Bennett. Though his entire career up to that point had been in television, he had an understanding of and appreciation for Star Trek which made him ideal for the position, not to mention that it allowed Paramount to pay lip service to “properly shepherding the franchise forward”. Most importantly, Bennett had a reputation for completing projects under budget and on schedule. That sort of prudence was worthy of a promotion to the big screen from the small one, in the opinion of many studio executives. [14] Gene Roddenberry, the creator of Star Trek, who had been effectively ousted from production of The New Voyages in 1979 and had virtually nothing to do with the franchise since, would have no involvement whatsoever with this film project; Paramount wanted nothing to do with him, and his health was in decline, to the point where he could not actively participate even if he had wanted to (and he had wanted to, though he certainly would never admit it). Bennett had been nursing an idea since the New Voyages days, which he would finally put into practice here: a flashback to Kirk and Spock’s days at Starfleet Academy. [15] Although the resultant film would turn out differently from how he had conceptualized it, the kernel of the plot was good enough to be green-lit for a release in the summer of 1991. The film itself was to be named Star Trek: Starfleet Academy.

    Star Trek: Starfleet Academy would star the central character of the franchise, James Tiberius Kirk, along with the man who was still considered runner-up for that position even after seven years of infrequent guest appearances on The New Voyages: Spock. Though the film was called Starfleet Academy, Kirk and Spock (who were contemporaries, having been born in the same year) were actually not undergraduates in the film (the opening scene depicted their graduation ceremony), but rather were invited to become part of a pilot project called the Accelerated Learning Program, in which recent graduates were invited to reinforce and apply their knowledge through teaching it to incoming students. Kirk, as the top student in the Command Division, and Spock, as the top student in the Sciences Division, were naturally considered prime candidates to start “climbing the ALPs” in the autumn; in the meantime, Ensign Kirk accepted a temporary assignment aboard the USS Republic, on the recommendation of one of his favourite instructors, Lt. Ben Finney, with whom he had grown so close that the senior officer had named his daughter “Jamie” in his honour. However, after the Republic was attacked by a pirate ship in the obligatory action prologue scene, Ben Finney was distracted from his engineering duties, neglectfully leaving a key circuit open to the atomic matter piles; the next officer on the shift, Ensign Kirk, fortunately caught this error in the nick of time. In logging it, he doomed his friend and mentor, Finney, to career stagnation; the senior officer was in fact recalled to his teaching position at Starfleet Academy alongside Kirk, as the opening titles of the movie finally began to play after the lengthy prologue.

    The newly-promoted Lt. JG Kirk, in recognition of his actions on the Republic and in order to reflect his special status “climbing the ALPs”, immediately got to work teaching command-level classes. Spock, in the meantime, had stayed on Earth over the summer, working in a research facility and making the acquaintance of Janet Wallace, a graduate-level biomedical researcher (perhaps better known as a “little blonde lab technician” by the more lecherous among the male student body). [16] The lab at which Spock had worked was technically a Federation facility, not belonging to Starfleet, making his “assignment” more of an internship. However, he and Wallace remained on cordial terms even as the term began. Kirk, meanwhile, immediately found himself heading a “clique” of students, including the charming and cocksure Cadet Gary Mitchell and the mature student, Dr. Leonard McCoy, who had joined Starfleet to get away from his ex-wife (who had won full custody of his daughter, Joanna) after their messy divorce. [17]

    James T. Kirk was played by Kiefer Sutherland who, like William Shatner, was Canadian-born. In fact, Sutherland had strong family connections in the Great White North, being the son of actors Donald Sutherland and Shirley Douglas, the latter of whom was the daughter of legendary Father of Medicare Tommy Douglas. Sutherland was known for his intensity and bad-boy image in past performances - he had mostly played villains prior to Starfleet Academy - and was chosen largely because producers felt that he had the range and ability as an actor to branch out beyond the “boy-scout” depicted in this movie into more complex portrayals down the line. He did not imitate the notorious vocal patterns used by Shatner, which was widely regarded as the right choice to make. It was the casting of Spock which was considered a risk - and a revelation: Keanu Reeves, up until that point best known as Ted Logan from the film Bill & Ted’s Excellent Adventure. In fact, a potential sequel to that film was scuttled when Reeves declined the opportunity to appear in it in favour of Starfleet Academy. [18] Audiences knew Reeves only as a goofy stoner character, which made his stoic, reserved, and brilliantly internalized portrayal of Spock [19] all the more striking; it won some of the heaviest plaudits of the film - other than those for the actor chosen to play Dr. Leonard McCoy. Gary Sinise was a dead-ringer for DeForest Kelley physically - Kelley famously joked “it’s like looking in a mirror” when the two posed for photographs together at the film’s premiere in San Francisco - and worked to get the right accent with a dialogue coach, as well as with Kelley himself, who of all the original cast was the one most actively involved in the film’s production. It helped that Sinise was ten years older than Sutherland and Reeves - the same age difference that separated Kelley from Shatner and Nimoy. Gary Mitchell was played by C. Thomas Howell, a finalist for the role of Kirk who was ultimately felt to lack the “presence” that Sutherland brought to the table.

    The success of Starfleet Academy upon its release in July 1991, just in time for the 25th anniversary of the wider franchise that September - a success which surprised many commentators, who had not imagined audiences paying to see what they normally got for free - inspired a rash of other space-focused projects. [20] Development on 2020 continued apace (the film finally being formally green-lit and entering production that autumn) and, in addition, a dramatization of the Apollo 13 incident (based on the Jim Lovell memoir, Lost Moon) was green-lit at the same time, although the studio’s preferred choice for the role of Capt. Lovell, Tom Hanks, declined the part to star in 2020 instead. [21] Lovell’s personal choice for the role, Kevin Costner, then accepted the part. Lovell’s two crewmates, Fred Haise and Jack Swigert, were played by Bill Paxton and Kevin Bacon, respectively. In contrast to the space opera and technobabble associated with science-fiction, this historical chronicle was intended to be strictly accurate in depicting the events at hand, with the screenwriters interviewing all of the principals and touring NASA facilities extensively, where they were briefed on the history and development of the Apollo missions. It was during these tours that the mindset of those at mission control - “failure is not an option” - found itself taking on a whole new life in the pages of the draft scripts. When production commenced, the decision was made to compose the film entirely of original footage, not reusing a single sound or image from the extensive chronicles of the actual operation. Taking advantage of the lavish budget available to them, the producers opted to, as best as possible, recreate the conditions of the Apollo 13 mission through practical means: sets were built to the exact specifications of the original Apollo craft, with the exception of having certain parts removable (as with the “segments” of the bridge on the original Star Trek) for ease of filming. 
Real NASA pilots, including those contemporary to the Apollo era, put the actors through basic training. This included time in zero-g conditions on the high-altitude “Vomit Comet” aircraft, which would later extend to the filming of scenes in zero-g on the same vehicle with the help of specially constructed sets - hundreds of flights were conducted, given that much of the movie took place in zero-g conditions, and that each zero-g period lasted for less than half a minute. NASA technicians were so impressed with the sets that they “requisitioned” them for internal training use once principal photography was completed.

    Naturally, and in stark contrast to the strict scientific and historical accuracy aimed for by the makers of Apollo 13, Starfleet Academy itself was followed by a direct sequel, Eternal Conflict, released in 1993. Harve Bennett, who, as the primary creative force behind the franchise, continued to serve as Executive Producer for the second film, decided to take the opportunity to double down. Had the film continued with a straight take on the early adventures of Kirk and Spock, it would have run into the problem of telling the same stories that had already been seen, or at least far more directly and concretely alluded to than the “backstory blender” that was Starfleet Academy. The centrepiece of this conundrum was “The Cage”, the original pilot of Star Trek, which had been rejected by NBC (though the network had then taken the unusual step of commissioning a second pilot, which was successful). Due to a shortage of scripts in the first season, “The Cage” was repackaged into a framed flashback episode called “The Menagerie”, which described the “Cage”-era footage as being several years in the past. Doing the math, “The Cage” would have been set very shortly after Spock had climbed the ALPs and begun his assignment on the Enterprise under Captain Pike, meaning that the prequel films had already run out of material before they would be forced to repeat the stories that had already been told. But not for nothing was Harve Bennett known for his ability to pull a rabbit out of his hat, taking advantage of a stray plot thread which he could now pull and watch unravel…

    “The Cage” had never aired in its original, unaltered form until 1986, having been presumed lost up until then. [22] Just in time for the twentieth anniversary of Star Trek, however, the original negatives were discovered in a Paramount vault, and a special ninety-minute event was broadcast around the footage from the original episode - 63 minutes all told, which would add up to the standard 75 minutes of programming (25 for each half-hour, with the rest devoted to advertising space) when combined with twelve minutes of commentary from some of the principals involved, including Leonard Nimoy, Robert Justman, and - surprisingly - Gene Roddenberry himself. All of the commentary preceded the showing of the uncut episode, though commercial breaks were rather arbitrarily chosen at various points therein. Nevertheless, Bennett - who remained the nominal chief executive of the Star Trek franchise despite The New Voyages having ended in 1984, but who had declined to participate in “The Cage” special so as not to seem as though he were trying to retroactively insert himself into the history of Star Trek - took note of a strange phenomenon as he was watching the opening moments of the episode proper. The Enterprise had encountered an unusual disturbance in the midst of space immediately prior to the reception of a distress signal. Though the episode had implied that the two were connected, it struck Bennett as strange that an old-style radio wave would be capable of causing what appeared to be highly unusual subspace distortions. [23] So he devised an alternative idea: those distortions were instead the doing of some interdimensional phenomenon which was capable of creating “ripples” in spacetime - the source of this anomaly would thus be able to duplicate realities and send them along divergent paths through history. 
This theoretical being would be present in whichever “reality” he had created most recently; the implication would be that it had left an “original” timeline to create this new “alternate” one. This bizarre phenomenon could also potentially explain the existence of the parallel universe seen in the original series episode “Mirror, Mirror”, which would in turn allow for the possibility of infinite dimensions, and for characters to “cross over” between them. Bennett had always exhibited a gift for developing narrative hooks.

    The film opened with a prologue depicting the event which was only mentioned in “The Cage” (and “The Menagerie”): the battle fought by the crew of the Enterprise, including Captain Pike, on Rigel VII. In keeping with the continuity established in that episode, Pike was ambushed by an alien warrior, both Spock and Tyler were injured, and Pike’s personal yeoman was killed, making for a thrilling and harrowing action prologue. After the opening credits, the beginning of “The Cage” was lovingly re-created, only for Spock to detect strange new readings that the Spock of the original episode did not… a prelude to the arrival of the alien phenomenon which would serve as the primary antagonistic force of the film. [24] After bombarding the Enterprise, the phenomenon vanished as quickly as it had appeared, and pursuing it would drive the rest of the story, gradually establishing that the film was set in a different reality from the TOV/TAV/TNV continuity. When it was clear that the Enterprise was out of danger, she continued on to the Vega colony, as per her original orders - thus sidestepping the distress signal from Talos IV which spurred the plot of “The Cage”, and irrevocably altering history as fans had known it. From Vega, the Enterprise was ordered to return to the nearest starbase, where Spock would report his findings to the Admiralty, who would then decide on a course of action.

    The action then shifted to the USS Farragut, the first long-term assignment of Lieutenant James T. Kirk, several days later. Unlike the Enterprise, which was on the outermost fringes of Federation space, the Farragut was well within the interior; it therefore caused some alarm when the Farragut, too, was beset by an anomaly which produced identical readings to the one which had intercepted the Enterprise. However, it disrupted the Farragut with much more force, severely damaging the vessel and killing many on board, including the commanding officer, Captain Garrovick. [25] This plunged Lt. Kirk, the ship’s navigator, into a guilt spiral, as he felt that he had not noticed the phenomenon or attempted to counteract its effects rapidly enough - despite receiving praise for his adroit handling of the situation from the First Officer and Acting Captain, Commander Matt Decker (played by Richard Hatch, who had played the same character’s son Will on TNV). [26] This angst would affect Kirk’s character for the rest of the film. Decker, meanwhile, informed Starfleet Command of their encounter before setting the ship on a course to the nearest starbase - where, upon arriving, they encountered the Enterprise. Admiral Morrow, Starfleet Commander, ordered the two ships, both of which had readings on the anomaly and crews familiar with it, to find the source before another “attack” - perhaps of even greater intensity than before, and even closer to the major systems of the Federation - could take place. [27] The relative positions of the two ships at the time they each encountered the phenomenon allowed them to triangulate on a position near the galactic core as the origin.

    However, the first order of business was to replenish the badly diminished roster of the Farragut, and the new crewmembers, mostly recent graduates from Starfleet Academy, included some familiar faces: Dr. Leonard McCoy on the medical staff, and Ensign Gary Mitchell as the new helmsman, both on Kirk’s recommendation (reuniting most of his “clique” from the previous film). Decker was promoted to Captain and formally assigned command of the Farragut; Kirk was also promoted, to Lieutenant Commander, and Decker offered Kirk his vacated position of First Officer, which he reluctantly accepted (partly at the urgings of McCoy and Mitchell), despite his continuing misgivings about his self-perceived failings as a navigator. Decker, working to build a rapport with his new second-in-command, tried his best to assuage these misgivings with a key piece of advice: “Remember, Jim, these feelings never go away. Everyone has to fight their own doubts and fears in the struggle to become a better person. It’s an eternal conflict.”

    The new crewmembers bolstered the Farragut roster in more ways than one, given that - like Kirk - most of the hands who had survived the incident were plagued with fear and doubt about what awaited them. As assignments were handed out and repairs on the ship were completed, a delegation from the Enterprise made a rendezvous with the command crew of the Farragut to coordinate their mission. This was led by Lt. Spock, as the science officer previously aboard the Farragut had not survived the initial attack, and the stoic Vulcan remarked upon the “disquieting emotionalism” which had swept through the Farragut. Once the preliminary work was completed, the two ships proceeded in tandem toward the conjectured source of the anomaly. En route, it did indeed return as predicted, but had seemingly anticipated their planned defences: eschewing structural damage of any kind, it somehow deactivated the warp drives of both ships, leaving them centuries away from their destination at sublight speed. This served to further ratchet the tension on the Farragut and even the Enterprise: Captain Pike, who continued to be shaken by his recent experiences on Rigel, finally confided his doubts about continuing his career as a Starfleet Captain to the ship’s doctor, Boyce (in a conversation largely lifted from “The Cage”). They were forced to retreat to a long-abandoned dilithium cracking outpost, which had become overrun by the savage native life of the planet in the intervening years, necessitating the beam-down of ample security teams, led by the officers Pike, Kirk, Spock, Scotty, and Bones, along with Pike’s yeoman, Colt, who was Kirk’s love interest for the film (replacing Dr. Janet Wallace, who had remained on Earth). [28] The planet was dangerous, and the landing parties could not be guaranteed a safe return, which further demoralized the already shaken crews; this helped to recreate the tense “powder keg on a tin can” atmosphere often seen in the previous series. 
Though the entire security detail [29] was tragically killed in action, all of the named characters miraculously survived, returning to their ships with the necessary cargo to continue their journey at warp speed, closing rapidly on the origin point. Despite continued setbacks, the ships were unyielding in their mission… even if their respective crews sometimes seemed to be hanging by a thread.

    The anomaly apparently perceived this precariousness, and thus the final barrage was not in any way physical or targeting the ships, but was instead emotional, targeting their crews, striking them with a quite literal case of what Spock might call “flagrant emotionalism”. Only a few people - not least of all, Spock himself, along with Kirk and Bones - seemed able to resist the effects to any significant degree. Captain Pike on the Enterprise fell into an unshakeable malaise. Dr. Boyce, who was considerably older than McCoy, found himself suffering early-onset senility as a result of the anomaly, thus leaving Bones in charge of finding a cure. In the interests of inter-ship unity (especially since both crews combined could barely muster a medical research team), Mr. Spock co-headed the team with him, allowing for some classic Spock-McCoy interaction which rivaled the high points between them in TOV (which was, in turn, sadly absent from TNV, given Spock’s infrequent appearances - and seldom with Bones). Men and women alike were warped into twisted parodies of themselves: Scotty became a grotesque and obnoxious “funny drunk” [30]; Colt succumbed to what was described in “The Cage” as her “unusually strong female drives”; Mitchell became dangerously antisocial, as he had done in “Where No Man Has Gone Before”, though (fortunately) without the addition of psychic powers; and Decker became crazed and paranoid, as in “The Doomsday Machine”, forcing McCoy to relieve him of command (which he famously failed to do in the original episode), granting Kirk the status of Acting Captain. Once again, Kirk was highly ambivalent, despite assuming command of the vessel (the place where audiences knew he belonged more than anywhere else). 
Kirk was not in command of the fleet, as the First Officer of the Enterprise, known only as Number One, remained in control of her faculties, and given her experience (though only a Lieutenant Commander like Kirk, she had held the rank for much longer), she served as the de facto task force commander, ensuring that the two ships didn’t find themselves in even worse trouble than that which they had already faced (admittedly, a very tall order). [31] Fortunately, although Spock and Bones mixed like oil and water (leading Kirk, who monitored them bemusedly, to ask if they, too, hadn’t been affected), they made a great team and eventually cured the malaise - by concocting a “laughing cure”, as in such TOV episodes as “Wolf in the Fold” and “Day of the Dove”. It was tested on Scotty, who served as their guinea pig; given his condition, it was hard to tell at first whether the cure had worked. By the time they were sure it was successful, they had already arrived at the source of the anomalies: a seemingly inconspicuous deep-space outpost marked as GOTHOS STATION, located just outside the gravity well of the supermassive black hole at the galactic centre.

    The sole occupant of the station (even though no life signs had been detected by either ship’s sensors) hailed them, and introduced himself as Trelane, the very same entity who had been a one-time opponent of the Enterprise crew in TOV (played by his original performer, William Campbell); he invited them for an audience with him, the “humble stationmaster of Gothos”. Number One chose to remain aboard the Enterprise in order to supervise the administering of the cure to the afflicted crewmembers, including the other medical staff as well as the two Captains, Pike and Decker. In the meantime, Kirk was sent down to represent the task force. Now a young adult, Trelane had fully mastered the ability, possessed by all members of his species, to travel through time and space across all dimensions. Though he had previously encountered other incarnations of Kirk in his own subjective past, this one did not know him, exactly. Trelane explained that he had visited many parallel universes, some of which were of his own creation (with the famous “mirror universe” from “Mirror, Mirror” being implied as one of them). However, he was not quite the petulant brat of his youth; he was more an inquisitive (if reckless) college student, conducting “experiments” to better his knowledge of the multiverse. [32] The Enterprise and the Farragut functioned as his own private laboratories, with all the people aboard as his own collection of lab rats. With all the instrumentation at his command, Trelane seemed unstoppable, but eventually, Kirk was able to muster his resolve and appeal to Trelane in a way that had failed even for his older, more seasoned and experienced alternate self: reasoning with him, and pointing out that they had overcome every obstacle that he had thrown their way. 
Trelane - given his newfound pretensions toward intellectualism, which he had not possessed as a “child” - decided in his benevolence that Kirk and his comrades had “potential” - something he had not yet been “enlightened” enough to see in his previous encounter with Kirk (in “another time, another place, another universe” - firmly establishing that the audience was now observing the adventures of a parallel crew). With that, Kirk was returned to the Farragut, and the outpost disappeared into the black hole, seemingly bound for whole new universes. The task force, armed with this wealth of sensor data, and having finally recovered from their emotional distress, headed back to Federation space, ready for new adventures, come what may; Kirk, for his part, had overcome his demons and found himself one step closer to his legendary Captaincy (with the once-again-lucid Decker remarking that he “wouldn’t be surprised if Admiral Morrow put you up for another promotion”). [33]

    The film was set largely on the bridges of the two ships in the task force (which were actually a single set, lightly re-dressed to play either the Enterprise or the Farragut), and this cost-saving innovation would prove an inspiration for future endeavours within the franchise, though not exactly on a cinematic scale. Although a third film in the franchise immediately went into pre-production, a new television series had been on the table as early as 1991, following the success of Starfleet Academy, with the success of Eternal Conflict reinforcing these plans. This time, Paramount would follow through on their plans (entertained, but ultimately abandoned, in the mid-1970s) to create a new network called the Paramount Television Network, or PTN, on which the new Star Trek series would serve as the flagship show. The show (and network) were scheduled to premiere in September of 1994, a date which (just as it had been seventeen years before) proved remarkably serendipitous due to the timing of events which took place over the summer - the “Summer of Space”, as it were. [34] PTN would beat a rival “new network” (the success of FOX had inspired many imitators) founded by Warner Bros., which would launch later that season, in early 1995; given that Warner had produced Babylon 5, they green-lit a spinoff program to air as part of the launch schedule on their new network, in direct competition to Star Trek. But before either of those spinoffs of established franchises could come to fruition, a plucky newcomer entered the fray in the form of a summer mini-series called Exodus, which aired in July of 1994 on FOX. [35]

    Exodus was far more a symbol of the zeitgeist than the Star Trek revival had been, and indeed it came into being largely as a deconstructive response to that venerable franchise, which was generally regarded as optimistic and idealistic almost to the point of delusion, even though the history of mankind in the centuries between the present and the far-future setting in which Star Trek took place had apparently entailed race wars between humans and genetic supermen, nuclear apocalypse, and bloody conflicts with other galactic powers. If the Federation was a relatively peaceful galactic superpower, it had won that status through no little amount of blood, sweat, and tears. Nonetheless, the creator of Exodus, Ira Steven Behr, seemed to have a chip on his shoulder regarding Star Trek - especially the original Roddenberry vision thereof (something which was itself continually evolving, it had to be said). He spoke more highly of the pragmatism exhibited under the Bennett regime, but (like Straczynski before him) believed that it had not gone far enough - so he decided to approach the future of mankind from the opposite direction. It helped that the early-1990s were a period of exceptional environmental hyper-awareness, with many scientists predicting runaway global warming and extinction events unless immediate corrective steps were taken all over the world to create sustainable development. For this reason, Exodus was set within a colony of Martian evacuees, the titular Exodus having taken place in an attempt to flee an apocalyptic asteroidal collision with Earth (a plot point inspired by the predicted - and realized - collision of Comet Galileo with the planet Jupiter in 1994, just in time for the mini-series to air). [36] Behr worked with a talented assemblage of writers, including Robert Hewitt Wolfe, Hans Beimler, and Chris Carter, in crafting the lore of Exodus.
Despite the otherwise cynical premise, all of the writers favoured the inclusion of a mythical element (again like Straczynski), which would focus on the colonists discovering mounting evidence that Mars itself was once an Earthlike planet, on which an intelligent civilization had resided. The question of what might have happened to these people became a running plotline, serving as the backdrop to the daily challenges of running this last bastion of humanity (other refugee colonies on other worlds were occasionally mentioned, but left unseen). The Apocalypse, caused by an asteroid hitting the Earth, was clearly allegorical for the man-made habitat destruction protested by environmentalists; a War of the Worlds in reverse. The message of a need for careful stewardship of Earth’s available resources with a focus on sustainability could be read into the colony’s struggle to survive within the boundaries of the tube with only the supplies and technology on hand. However, this remained a more subliminal theme within the context of the show (which focused more on the overarching storyline of the ancient alien species, with the day-to-day survival of the colony fading into the background), though one which was popular in the fandom. In an era when shows with the blatant messaging of Captain Planet were on the air, it was difficult not to seem subtle by comparison.

    The characters who fled the Earth found themselves settling in a preexisting geological research colony which was based in a lava tube, akin to the “underground cities” featured in science-fiction and fantasy works since time immemorial. The refugees far, far outnumbered the minuscule population of the base personnel, whose commander, played by Tim Matheson, was already undergoing a midlife crisis (common to many people in the aging Boomer generation) prior to their arrival. After he had left behind his life as a career soldier on the Earth to indulge his love for geology and scientific exploration at a quiet base in the peace and tranquility of Mars, his commission was reactivated, making him the unwilling de facto governor of what had now become a colony of evacuees; the refugees decamped in the tube, chosen for its potential to someday support a settlement of such sheer size, though it was plainly unable to do so at present. The leader of the new arrivals, played by Nana Tucker [37], was a staunch survivalist, far more self-centred and driven by the needs of the moment than focused on the big picture. Rash, impulsive, and insensitive, her character contrasted - and clashed - with the world-weary Matheson character. As the main focus of the original miniseries was indeed survival, the conflicts that all sides faced drove the plot far more than challenges in the new environment would have done alone. The “settlers” were further balanced against the “natives” (none of whom were actually born on Mars) with the inclusion of a scientist character played by Bill Mumy [38], who had failed to detect the asteroid in time to stop the Apocalypse, driving a massive guilt complex (as did constant blame from certain other corners of the mission, including from Tucker’s character, who did not make friends easily).
His redemption came with his continued value as a researcher and engineer for the growing colony and, in the series proper which resulted from the miniseries, when he found what he believed to be evidence that Mars had previously been inhabited by a highly advanced alien species in the distant past. This formed the backdrop to the story arcs of the three central characters: Tucker emerging as a competent leader, Mumy being redeemed from his previous mistakes (as “the man who doomed Earth”), and Matheson managing to once again find the strength to stand as a leader in spite of his past, his time on Mars having given him new purpose in his original vocation as commanding officer of the research - or colonial - base. These redemptive character arcs were introduced more formally into the series proper; the mini-series left the characters less developed and relatable than they would become in the program that followed, though without hindering the obvious storytelling potential for them and their relationships.

    And finally, after six years of waiting, there was the film version of 2020, also released during the “Summer of Space” in 1994. Given that the novel had helped to launch the present wave of science-fiction, it seemed only fitting that the adaptation was able to reap some of those rewards. Clarke wrote the screenplay himself, as he had done for 2001 (though not 2010), as the producers had sought to take advantage of changes in the geopolitical landscape since 1988 (which, in one fell swoop, had severely dated both 2001 and 2010) while maintaining the legitimacy and gravitas of connecting them to the original author. Clarke agreed to “update” the plot and setting for the post-Cold War environment, while at the same time taking advantage of the continuing discoveries made by the Galileo probe in the several years since it had arrived at Jupiter.

    2020 was a story of pure exploration, largely inspired by these Galileo discoveries. The book depicted an American-Soviet joint research mission sent to the newly stellar Jupiter to investigate the “planets” (formerly moons) which were in orbit about the dwarf star. However, the two ships (the Soviet Leonov, which had saved the day in 2010, and the American Discovery II) became only one (Discovery II) when the novel was adapted to film in 1994, partly as one of the many changes made to take the collapse of the Soviet Union into account and partly because Star Trek: Eternal Conflict had starred two ships (the Enterprise and the Farragut). Indeed, a Leonov model was designed and even partially built before it was discarded. In both novel and film, the expedition arrived at Jupiter and explored the outer planets - starting with the outermost, Callisto, which remained frigid and blanketed in ice, a situation which the Russian (in the film, the word “Soviet” was never once mentioned or seen, allowing for a quiet retcon of the USSR’s continued existence, as depicted in 2001 and 2010) observers compared to Siberia in their native homeland. Ganymede, the largest Jovian planet (formerly the largest moon in the Solar system), now had temperatures comparable to those of Earth, and the formerly massive ice deposits were rapidly melting into large freshwater seas when the astronauts surveyed it. The inner moon of Europa, which was shown to have life even before Jupiter became a star, was declared off-limits to the Earthlings by the enigmatic star-children. The team obeyed the letter of this imposition, but not the spirit, surveying Europa remotely (and as discreetly as possible), noting traces of mostly simple organisms in a primordial soup. The planet enjoyed tropical temperatures, prime for the continuing development of life. This left Io, the innermost of the major planets, which had already been volcanically active.
It had seemingly emerged as a hell-world even more frighteningly hostile to human life than Venus, its atmosphere full of noxious gases, its seas composed of liquid sulfur, and its ground too unstable for even short-term landings. However, the expedition deemed the substantial risk worth it, due to the discovery of a gargantuan diamond “shard” (as tall as a mountain) on one of the innermost planet’s basalt plateaus. [39] It had been ejected from the core of the former planet Jupiter once it had been turned into a star, and though the entire mass could not be retrieved, a “small” sample (on the order of a dozen kilograms) was successfully harvested by the crew to bring back to Earth before the hazardous environment of Io compelled the landing party back to the Discovery II.

    Pre-production was a time-consuming process (though many of the props and set blueprints from the filming of 2010 survived), as was post-production - given the reputation of 2001 as a trailblazer in visual effects, 2020 was expected to continue that tradition, and that involved making use of computer-generated imagery, ludicrously expensive and laborious to produce at the time, which helped to explain why it took so long to make it to theatres; long enough to have direct competition in Apollo 13 (also released during the Summer of Space), which enjoyed the overwhelming support of critics despite being only moderately more popular with audiences than 2020 - although many defenders of the latter film argued that it was a case of the crowd-pleasing, unchallenging Apollo 13 vs. the “cerebral” and “avant-garde” (read: “trippy”) 2020. Apollo 13 was nominated for Best Picture of 1994 at the Academy Awards, whereas 2020 received only token nods in the technical categories. [40] In fact, in what was surely a bitter pill for the 2020 cast and crew to swallow, Apollo 13 won Best Visual Effects and Best Sound, both over 2020 (along with Best Film Editing). [41] However, in a demonstration of one of the other predominant cultural forces of the early-1990s, Pulp Fiction, a curiously pleasing combination of Generation X self-awareness and irony with throwback 1970s exploitation, took the award for Best Picture, Best Director, and Best Original Screenplay. [42]

    But at the end of the day, all of these films and particularly all of these series, even those which attempted to reflect the newly-cynical atmosphere of the Quiet Years, were escapist by their very nature. The Quiet Years came after “the end of history” - the conclusion of the Cold War and what by all appearances was the rise of a Pax Americana. But the dawn of the Cold War had coincided with the rise of television as a medium; by 1989, thousands upon thousands of hours of programming had been widely syndicated to American audiences, with series dating back to the 1950s remaining very much a part of the here and now in a way that only a handful of books and films could match. For many people, broadcast history was all-encompassing. Lucy Ricardo and Ralph Kramden lived in a world where freedom defined itself in opposition to the Commies - so did Gilligan, Rob and Laura Petrie, and every character on The Twilight Zone. Archie Bunker had railed against “commie pinkos”. It seemed to unite everyone, even on television. It was a medium defined by a single, looming antagonist throughout its history, but times had changed enough to paint a very different picture than the black-and-white of years past.

    But such things did not always happen overnight, and it was in the highest echelons of power where change seemed to take effect most gradually. This was likely why the incumbent President, George Bush, who as part of the Reagan administration was a living symbol of the “old guard”, entered the opening stages of his 1992 campaign for re-election seemingly invulnerable; he had shepherded the nation through the reunification of Germany and the collapse of the Soviet Union, and claimed the first military victory for the United States since World War II - a far cry from where he had been four years earlier, as the milquetoast, uninspiring heir apparent. Bush had won what many political commentators described as “Reagan’s third term” (the 22nd Amendment had prevented the Gipper himself from running again) largely because the Democratic candidate, Massachusetts Governor Michael Dukakis, was a horrendous campaigner who could not effectively package his left-wing politics (describing himself as a “proud liberal”) against the onslaught of attacks from Vice-President Bush, who could (quite reasonably, based on the popularity of President Reagan) paint the American electorate as conservative, although Dukakis had been leading in the polls through much of 1988. However, Bush won decisively - performing better in the popular vote than Reagan had done in 1980 - and since then, had presided over the fall of the Berlin Wall, the Autumn of Nations, the collapse of the Soviet Union, and - most importantly - the singular triumph of the Gulf War. Perhaps Bush’s greatest weakness was his running-mate, Vice-President Dan Quayle, the laughably incompetent, blue-blooded nonentity whom Bush had chosen for strategic purposes. Though Quayle hailed from solidly Republican Indiana, his Midwestern origins were intended to bolster the ticket in the neighbouring swing states of Ohio, Michigan, and Illinois, though it is questionable how much impact he personally had in any of them.
Nonetheless, despite many Americans - even Republicans - urging President Bush to drop his running-mate from the ticket in 1992, he declined to do so, perhaps reasoning that he could afford an albatross in a cakewalk election, and that dumping him would probably result in far more negative press than keeping him on the ticket.

    As a matter of fact, it was President Bush himself who had planted the seeds of his own downfall, in trying to be everything to everyone, promising a “kinder, gentler America” in almost the same breath as his vow not to introduce any new taxes; even Reagan had been forced to raise taxes, and Bush was no Reagan. Sure enough, in came the new taxes, and once the Cold War came to an end and defence spending plummeted, the loss of jobs and the low levels of disposable income resulted in a major recession. Perhaps even more so than in 1980, the 1992 election would hinge on a perceived need for radical new solutions to radical new economic and financial difficulties, akin to the FDR landslide of sixty years before. Enter billionaire H. Ross Perot, a quixotic Texan mogul, whose platform of fiscal responsibility struck an instant chord with much of the American population - particularly those who leaned conservative (though many Democrats also favoured Perot). It was like the 1912 election of eighty years earlier, all over again.

    The third component of this new three-way split was Albert Arnold Gore, Jr., better known as simply “Al Gore”, the junior Senator from Tennessee. He had previously served in the House of Representatives before being elected to the upper chamber in 1984. Like so many other prominent lawmakers, and like his opponent President Bush, he was a political scion; his father, Albert Gore, Sr., had represented the other Senate seat in the Volunteer State from 1953 to 1971. Like many Southern Democrats, Gore was moderate-to-conservative within his party on many issues, though there were exceptions: Gore was a technocrat, also known as an “Atari Democrat”. The term came from the dominant video game system in use from the late-1970s until the mid-1980s, at the dawn of personal computing and the information technology industry in earnest. Gore was an advocate of using information technology to facilitate telecommunications by opening the ARPANET, then available only to the military and government agencies, to the wider world (ultimately achieved through a successor network, known simply as the Internet). His High Performance Computing and Communication Act of 1991, known as the “Gore Bill” during legislative debates, would lay the foundation for the proliferation of the internet for use among the general population, and his later claim of having “taken the initiative in creating the Internet” would forever tie him to this issue in the public imagination long after the term “Atari Democrat” had fallen into disuse.

    Gore was also known for his environmentalism, his affiliation with the movement dating all the way back to its first wave in the 1960s, after he had read the seminal Silent Spring in high school. From the very beginning of his legislative career, he focused on global warming, toxic waste, greenhouse gases, and the ozone layer, coming to strongly oppose fossil-fuel based energy sources, deforestation, and unsustainable industrialization. The early-1990s marked a turning point. The Second World had collapsed, with its carefully planned economies giving way to free-market influences where profit would be the primary concern for any venture capitalists. The Third World, now that it was no longer divided between the two superpowers into cultural or geographical spheres, was also open to investment from all sides, and it more than anywhere else in the world was primed for rapid industrialization. Gore saw in this the potential for major problems. This combination of ideologies, along with otherwise relatively conservative social policies, had served him well in his 1988 run for the Democratic nomination for President, where he had finished third - behind the eventual winner, the liberal Massachusetts Governor, Michael Dukakis, and the first runner-up, the Rev. Jesse Jackson, who had consolidated the African-American vote behind him, just as in 1984. Gore had secured the endorsement of the 1984 Vice-Presidential candidate, Senator John Glenn, winning 15% of the vote in the primaries and more than a half-dozen states.

    But the 1992 primaries were not expected to be competitive. All of the A-listers for the party had passed on what was widely expected to be a Bush cakewalk, but Gore (who had won reelection to the Senate in 1990) decided to make a second attempt. [43] Gore emerged quickly as the only major Southern and centrist candidate in contention; his only real rival for either title, Arkansas Governor Bill Clinton, doomed by joint revelations about personal financial malfeasance and marital infidelity, withdrew from the primaries and endorsed Gore. His main competition was the liberal former Governor of California, Jerry Brown, who emerged late in the campaign, winning eight states (including his home state of California) and nearly a quarter of the primary electorate. Gore took 30 states (including the entire Old Confederacy), and about 40% of the vote; enough to clinch the nomination before the convention. After nominating two very liberal candidates for the Presidency, the Democrats chose a moderate (a “raging moderate”, in his own words) for the nomination, more in the vein of Jimmy Carter. However, because Gore was known for his opposition to federal funding for abortion, and for his overall socially conservative record, there was a strong desire to shore up support with female voters and the left-wing base of the party, who were lukewarm about his candidacy.

    For that reason, Texas Governor Ann Richards, who had wowed Democratic insiders with her 1988 keynote speech at the DNC (when she was merely State Treasurer), was chosen as his running-mate. [44] Richards was the second Democratic choice for VP from Texas in a row, following Sen. Lloyd Bentsen. Though relatively inexperienced, she had a down-to-earth, folksy southern charm which had completely eluded the wonkish Gore (despite his own Tennessean heritage), while as a self-described “sensible progressive”, Richards (though still a relative moderate, by Democratic standards) was largely to Gore’s left on many key issues, including abortion. Her selection - the first of a woman by either major party in American history - drew international attention, and indeed she was by far the most frequently discussed of all six candidates on the three respective tickets for the White House in late 1992 (Perot was a distant second). “Vote for Richards - and that other guy” was a commonly-seen campaign sign on the trail, although Richards drew just as much opposition as she did support. Perhaps her most impressive feat was drawing more attention than the legendarily gaffe-prone Dan Quayle, particularly in her utter domination of the lone Vice-Presidential debate. [45] This helped to compensate for the relatively lackluster performances by Sen. Gore at the Presidential debates - at the urging of his advisors, he focused as much as possible on the economy and foreign policy despite his singular passion for issues that were more peripheral to the campaign, and this made him more vulnerable to Perot (on the economy) and Bush (on foreign policy). One of his core “pet issues”, the environment, came up largely in the context of discussions about energy policy - Gore favoured renewable sources over fossil fuels.
However, he did impress audiences with his emphasis on “the proud tradition of American ingenuity” through the use of technological advances to solve the new problems faced in the United States and around the world. This broad appeal reached many of those who could not have cared less about “the internet” and who paid only lip service to his environmentalist causes.

    In the end, many observers concluded that the three-way race allowed Gore (the most fiscally liberal candidate) to come up the middle between his two more fiscally conservative rivals - that Perot was better at poaching votes from Bush (who, despite his loss, maintained good approval ratings through the end of his term) than from Gore. Perot, for his part, did not receive any electoral votes whatsoever, despite winning nearly 20% of the popular vote, the highest-ever tally for any candidate who won no electoral votes; he came closest in Maine, with over a third of the vote statewide, less than five points behind Gore; as Maine divided its electoral votes by congressional district, Perot lost the chance at a single electoral vote in Maine’s more rural second district by just a few thousand votes (only a point behind Gore, at 36-35). [46] He finished second in three other states: Alaska, Utah, and Idaho, all behind Bush and all with well over one-quarter of the vote. He performed worst in the South, whose voters were more willing to back a favourite son (Gore) or their stronger ideological ally (Bush). Given the three-way-race conditions, the electoral map was rather peculiar in contrast to past races. The Democrats dominated New England, including the longtime Republican stronghold of Vermont (not won by the Democrats since 1964), but the GOP held New Hampshire by a razor-thin margin. Gore also did far worse in the South than Jimmy Carter had done in 1976, losing every state in the Deep South except for Louisiana, despite hailing from Tennessee. Bush won his home state of Texas easily over the Gore/Richards ticket, the second time that a Democratic running mate from the Lone Star State utterly failed to make a dent in the Republican advantage there on the Presidential level (though Richards did have more success influencing down-ballot races).
However, the Democrats won every Midwestern state except for Indiana (Vice-President Quayle’s home state, which had not voted for the Democrats since 1964) and, in the closest margin of any state in the Union, Ohio (a classic bellwether without which the Republicans had never managed to take the White House). [47]

    When the votes were counted, Gore won about 41% of the popular vote, compared to 39% for Bush - the first time one of the major parties had fallen below two-fifths of the vote since George McGovern in 1972. Indeed, Gore won the electoral vote with the same share of the popular vote that Jimmy Carter had achieved in losing to Reagan in 1980, and little better than Walter Mondale had managed in his landslide defeat four years later. [48] Gore’s famous (and alliterative) pledge in his victory speech early in the morning of November 4, 1992 - that he would “put public policy over petty politics” - would effectively foreshadow the tenor of his administration in the years to come. Political strategists for the Gore campaign had tried desperately to polish the “policy wonk” into a slick political operative, but the veneer did not last into his term of office. Gore was saddled with a reputation as (at best) a dull and steady pair of hands and (at worst) a bore. Political cartoonists, satirists, and comedians made “Gore the Bore” into a household name, with mockeries both lighthearted and cruel. Gore pushed Congress for tougher environmental restrictions, which resulted in a far more robust EPA mandate; energy policy was, as ever, a tightrope, since nuclear was both efficient and viable, but was heavily campaigned against by many within the environmentalist movement, so Gore advocated massive investment in solar and wind power (which would not become cost-effective for many more years). The primary social issue which Gore chose to tackle was anti-poverty initiatives; these trumped even gun control and health care, two topics favoured by the Democratic base. Nonetheless, with a friendly House and Senate, most of the Gore-proposed legislation passed during the honeymoon period for his administration - though by its very nature, this idyllic state of affairs would not last forever.
Ann Richards, for her part, was proving the polar opposite of her predecessor, Dan Quayle, bringing a far more dynamic and vivacious character than Gore’s to the famously impotent office of Vice-President and doing much to bolster his policies, especially since the President himself naturally proved a lightning rod for criticism and opposition to his administration, and his “policy over politics” mantra could occasionally backfire in his dealings with the media.

    President Gore and his earnestness were certainly not reflective of the Quiet Years and their unrelenting cynicism, but as would also prove the case with Generation X and their twenty-something disaffectedness, people would soon be forced to reassess their attitudes, just as society would be forced to reflect on whether it truly had arrived at the “End of History”, or if there indeed remained so much more that had yet to be written…

    ---

    [1] Although the term “the End of History” is occasionally used for the post-Cold War period IOTL (which lines up very nicely with the cultural 1990s: 1989-2001), “The Quiet Years” is a term original to TTL, and which will become more meaningful once future events are brought to light.

    [2] Those of you who have seen the original pilot of Seinfeld may recall a waitress character who was replaced by Elaine on the series proper. ITTL, the waitress (Claire) was retained (because she was played by a different actress) but was given many of Elaine’s personality traits.

    [3] Murphy delivered a son IOTL, who was named Avery in memory of his grandmother (played by Colleen Dewhurst, who passed away in 1991). This son was eventually played by Haley Joel Osment, but he was rarely mentioned and seldom seen after the Dan Quayle hullabaloo died down.

    [4] Given that the Galileo probe did not arrive at Jupiter until much later IOTL, and that Clarke had deadlines to meet, he went ahead and wrote 2061: Odyssey Three anyway (with a plotline instead inspired by the Halley’s Comet hysteria), and this film was never adapted to the big screen (as, unlike ITTL, the 2010 film was less successful).

    [5] Hanks sought to exercise his clout to bring 2061 to the big screen IOTL as well, but it never came to pass.

    [6] There is no equivalent to Graves in 2061.

    [7] Babylon 5, of course, would not air until 1993-94 IOTL. However, Straczynski had been developing the plot and its characters since at least the 1980s.

    [8] As IOTL, Straczynski attempted to sell B5 to Paramount, but to no avail. And ITTL, there are no obvious “shenanigans” with the subsequent development of a suspiciously similar rival series under the Star Trek banner; Star Trek was considered dormant (at least on the small screen) through the 1980s.

    [9] Babylon 5 was IOTL part of the Prime Time Entertainment Network, or PTEN, an ad hoc quasi-network that was in essence a glorified syndication package, operated by Warner Bros. PTEN survived for only five years in the mid-1990s, and only two shows lasted for the entirety of its existence: Babylon 5 was one of them (Kung Fu: The Legend Continues, a spinoff of the classic 1970s series, was the other). PTEN never gets off the ground ITTL, leading Warners to devote more of their care, attention, and resources to the launch of the WB network in the mid-1990s.

    [10] Takashima, played IOTL by Tamlyn Tomita, lasted only for the pilot movie, The Gathering, before she chose to depart for other opportunities and was replaced by Susan Ivanova, played by Claudia Christian. ITTL, another actress more willing to see the show through for the long haul is cast as Takashima, which has a dramatic effect (or, more accurately, does not have an effect) on Straczynski’s plans for the character.

    [11] IOTL, at this time, Star Trek: The Next Generation was the highest-rated show in first-run syndication (though Wheel was still a powerhouse, and has held the title unchallenged ever since Deep Space Nine ended in 1999). That program, obviously, does not exist ITTL.

    [12] Babylon 5 pioneered the use of CGI IOTL, using it exclusively for visual effects. This has, unfortunately, resulted in its visuals becoming very dated - contemporary Star Trek productions (which, until the late-1990s, relied largely on model work, compositing, and other practical effects) have aged much better. ITTL, so will Babylon 5.

    [13] It wasn’t as near-run a thing as IOTL - Babylon 5 got its fifth season order early enough that not all of the story elements intended for it had to be crammed into the fourth season instead. This gives the later seasons an overall slower pace, which can be a double-edged sword.

    [14] By this time IOTL, Bennett had been ousted from the franchise, having been made the scapegoat for the relative failure of The Final Frontier at the box-office (not to mention its negative critical reception).

    [15] Bennett had planned a film depicting Kirk and Spock’s time at Starfleet Academy IOTL, for the 25th anniversary of the franchise, prior to his ouster. The basic idea was of course recycled for the reboot film released in 2009.

    [16] Janet Wallace appeared in early drafts for the screenplay that eventually became The Wrath of Khan IOTL, which would have eliminated all doubt that she was the “little blonde lab tech” mentioned by Gary Mitchell in “Where No Man Has Gone Before”, before her character was replaced (and thus eclipsed) by Carol Marcus in later drafts.

    [17] This background for McCoy’s character had been written as early as the original series, but never appeared onscreen until the reboot film IOTL.

    [18] Yes, that means no Bill & Ted’s Bogus Journey ITTL. I’m sure you’re all just devastated (well, I know Alex Winter must be, anyway).

    [19] I’m describing Reeves as critics of the time (who were enamoured with his… peculiar acting style) often described him; in truth, he is playing Spock largely as he played every role in his career after Ted. Reeves is chosen at least in part for racial considerations: given the absence of Sulu and Uhura from the cast, and the presence of many white characters to replace them, it was felt that someone “ethnic” should play Spock, just as Leonard Nimoy looked sufficiently “ethnic” (he and William Shatner actually have the exact same ancestry: Ukrainian Ashkenazi) that he didn’t “have” to be played by a white actor.

    [20] September 8, 1991, was quite fortuitously a Sunday, so Paramount pushed the film into a wider release for that weekend and renewed their ad campaigns, encouraging Trekkies to “celebrate” the silver anniversary of the franchise in a packed theatre. It worked: the film returned to #1 at the box-office for that weekend (which is, to be fair, usually a dead-zone for movie releases anyway).

    [21] Hanks also declined to star in Forrest Gump at the same time. Costner was indeed Lovell’s first choice to play him (and the two do resemble each other physically, certainly much more than Hanks), and in accepting this role he does not appear in the notorious flop, Waterworld (the most expensive film ever made at the time of its release) - this is likely to extend his A-list status for several more years.

    [22] “The Cage” was not discovered by archivists until 1987 IOTL, too late for the twentieth anniversary. It aired in 1988 as part of a two-hour special containing clips from the series, the films, and The Next Generation, as well as interviews with numerous individuals whose history with the franchise had no connection to “The Cage”.

    [23] The “ripples” first appear at 1:03 in “The Cage”, and at 27:23 in “The Menagerie, Part I” (with an explanation by Spock at 28:41).

    [24] The POD is at 1:57 in “The Cage”, immediately before the Communications Officer proclaims “It’s a radio wave, sir”. Everything from that point forward is divergent.

    [25] The death of Garrovick echoes the circumstances of his death that were mentioned in the episode “Obsession”, though this attack is a few years ahead of schedule (the events of “The Cage” and the original attack on the Farragut are traditionally dated three years apart).

    [26] Hatch is made up to more closely resemble the actor who played his character’s father in “The Doomsday Machine”, William Windom. He was 47 at the time of filming, compared to Windom who was 43 (and playing the character more than ten years older than Hatch does here).

    [27] Morrow, of course, appeared IOTL in The Search for Spock, principally written by Harve Bennett.

    [28] In “The Cage”, Colt was quite obviously interested in Captain Pike (one of Gene Roddenberry’s directives was a romance between the Captain and his Yeoman, which carried forward into the series proper with the interactions between Kirk and Rand before finally being abandoned). In this film, on the other hand, the decision is made to abandon the Kirk/Wallace relationship in much the same way as it was implied to have ended in “The Deadly Years”.

    [29] All of whom wore red, of course. Anachronistic uniforms (they should have matched the beige ones worn in “The Cage” and “Where No Man Has Gone Before” but instead much more closely resembled those of the series proper due to their far more iconic appearance) allowed for these redshirts to make their valiant but completely anonymous sacrifice to prove that the situation was serious.

    [30] Think Dudley Moore from Arthur, only not played for laughs.

    [31] Number One, who is given no proper name in the film (just as in “The Cage”), is identified as a Lieutenant Commander and the senior-most officer other than Pike aboard the ship. In “The Cage” she was only a Lieutenant (as was, apparently, every officer aboard other than Pike), but this was deemed unworkable for the film (especially after Kirk was promoted to Lieutenant Commander), so she was made senior (in rank and/or tenure) to every officer in the task force save Pike and Decker. Boyce is also identified as a Lieutenant Commander, and Tyler (whose rank is ambiguous in the episode) is stated to be an Ensign; “Cadet Tyler”, played by a different actor, appeared in Starfleet Academy.

    [32] Trelane has a stereotypical “Generation X college student” mentality, essentially, as opposed to the “spoiled Baby Boomer kid” of “The Squire of Gothos”.

    [33] Bear in mind that, at the conclusion of this film, Kirk is barely two years out of Starfleet Academy and is already a Lieutenant Commander and the First Officer of a starship. That’s a leg-up on the OTL Prime!Kirk (still a mere Lieutenant as late as 2257) though (notoriously) not the OTL Reboot!Kirk (from Cadet to Captain in one fell swoop).

    [34] IOTL, the United Paramount Network, or UPN (jointly owned by Paramount and boat manufacturers Chris-Craft) did not premiere until early 1995, with Star Trek: Voyager as their inaugural broadcast (and their flagship show, through the end of the 20th century).

    [35] Exodus has no OTL equivalent, though much of its talent was culled from Star Trek: Deep Space Nine and The X-Files.

    [36] Quite literally, in fact. Galileo (which, you will recall, was IOTL known as Comet Shoemaker-Levy 9) collided with Jupiter over the course of July 16-22, 1994; Exodus began airing on July 18, 1994 (a Monday). As the date of the impact was known well ahead of time, this was no coincidence, and it paid off in terms of a ready-made audience.

    [37] Born Nana Tucker, she achieved professional recognition IOTL under the name Visitor, primarily as the female lead in Star Trek: Deep Space Nine.

    [38] Mumy appears here instead of on Babylon 5 as the character of Lennier.

    [39] A prevalent theory at the time was that the core of Jupiter - and the other gas giants - was indeed made of diamond (which is to say, highly pressurized carbon), as can be seen in this contemporary article, and so Clarke could not resist the opportunity to exploit this, IOTL or ITTL. This is also among the biggest changes from the book of 2020, in which the Leonov was irreparably damaged in its attempt to retrieve these “samples”. It’s a near-run thing in the movie version (especially as it’s the climactic action sequence), but the Discovery II (lacking a spare) gets away just in the nick of time.

    [40] Apollo 13 takes the slot for the Best Picture nomination held by Forrest Gump, but all other nominees are as IOTL: Pulp Fiction, Four Weddings and a Funeral, The Shawshank Redemption, and Quiz Show.

    [41] Apollo 13 won all three of those awards at the following year’s Academy Awards ceremony IOTL.

    [42] Forrest Gump won for Picture and Director IOTL; without it, Pulp Fiction takes both awards easily, resulting in a true coronation for Tarantino rather than “mere” veneration by the “in” crowd. Shawshank, for all the plaudits it has received in the years since, is too earnest and straightforward a film to have won the big prize in that climate.

    [43] ITTL, the car accident that severely injured his son and led him to drop out of that race was butterflied; his success gave him the platform he needed to try again in 1992.

    [44] Funnily enough, Richards wowed the party brass at the very same DNC at which a certain other politician bored audiences to tears… Bill Clinton.

    [45] Comparisons to another “upside-down” ticket - the Dukakis-Bentsen tandem of just four years before - abound throughout the campaign, given the perceived dullard leaning on a charismatic Texan for support, and that same Texan steamrolling Quayle in the VP debates.

    [46] Although Maine was also Perot’s best state IOTL, he did not come nearly as close to winning a single electoral vote from that state, ending up over five points behind Clinton in the second congressional district.

    [47] Bush won New Hampshire, Ohio, Georgia, Montana, and Nevada in addition to all the states he carried ITTL.

    [48] IOTL, Clinton won with 43%, to 37.5% for Bush and 18.9% for Perot. Clinton received 370 electoral votes, carrying 32 states and the District of Columbia, whereas Bush received 168 electoral votes and carried 18 states.
     
    Part III, Post 6: The Gore administration, chaos at NASA, and the Richards-Davis Report
  • Good afternoon, everyone! It's that time once again, and this week we'll be looking at what the incoming Gore administration thinks about where to go in space.

    Eyes Turned Skyward, Part III: Post #6

    When Gore was inaugurated as President in January 1993, he had three major goals for the space program. First, with the end of the Cold War, he aimed to reap the “peace dividend” with a drawdown in defense spending. While he foresaw a hard sell on the Hill for any cuts to the military-industrial complex, he recalled the fight that funding Constellation had required there, and anticipated that checking the year-to-year increases to NASA’s budget could be popular with the Republican House--and a test case for cuts to the more traditional military-industrial complex. At the same time, however, he recognized that spaceflight leadership had been a key part of US soft power for more than a quarter century, and that the diplomatic and scientific initiatives it represented might be even more useful for maintaining American influence with the end of the Cold War and the slimming of the conventional military. Thus, his second vision was for a continuation of NASA’s pioneering efforts in spaceflight--specifically Constellation, Freedom, and scientific missions--while adding new emphasis on technology development and research more applicable to life on Earth, particularly for technologies like satellite television, satellite data relays, and GPS, which were beginning to flourish commercially. Finally, Gore wanted to promote co-operation to tie the world together, both with traditional allies like the NATO nations and with newer potential partners like the Chinese and the Russians. In Gore’s eyes the space program had already proved a valuable way to build ties, as the ongoing participation of ESA, Japan, and others in Freedom demonstrated, and he wished to continue this, establishing a degree of global cooperation in space: an alliance of space-faring nations with the USA at its head, working peacefully in orbit and beyond as an example for those back home on Earth.

    Given how these goals asked NASA to accomplish more with less, while working more closely with other agencies on new programs (a recipe for confusion and failure if poorly implemented), Gore would need an Administrator he could trust to share and advocate for his vision as much in the halls of NASA HQ and the various centers as on the Hill. While Bush had aimed to change just the scope of NASA’s reach, Gore’s plans aimed to change the way the agency would operate; he’d need a strong advocate on the inside if he wanted to overcome decades of inertia. Thus, in spite of his respect for Administrator Schmitt’s service under Bush, when Schmitt tendered the traditional resignation at the start of the new administration Gore accepted it, and took the chance to make his own selection, nominating Lloyd Davis, a relative nobody from NASA HQ whom Gore had met through his work in the Senate. Though the joke on the Hill was that Gore had made the pick so that there would be one person in the executive branch with less charisma than himself, Davis’ selection was in fact the first volley of Gore’s attempt to recast the space program along his intended lines. A native of northeastern Ohio, Davis had been fascinated by spaceflight from an early age. Excelling academically, he had studied aerospace engineering at Purdue, receiving his bachelor’s degree and then returning to his home state to work at NASA Lewis in electric propulsion system research. However, after a few years, Davis was headhunted into industry, accepting a position with Aerojet. He would spend almost a decade there, gaining an insider’s view of the industry side of aerospace as he moved from engineering to management before returning to government work, this time at NASA HQ. At headquarters, Davis’ job had included dealing with strategic visions and their intersection with the budgetary realities enforced by the Office of Management and Budget. 
While he had never lost his passion for ambition in space, his time in industry and at the intersection of policy and budget had left him with a fine grasp of the practical realities of space exploration. Moreover, Davis was a shrewd engineer--capable of maintaining a broad situational picture in his head, and more than willing to pick at the threads of detail in an answer to a question or a suggested solution to a problem until it either unraveled completely as unviable or revealed the core value of the concept. While rather withdrawn in person, he had a reputation for letting loose in forceful memos and dramatic conference calls when his patience was stretched by attempts to dodge his points. With Davis already having a firm grasp of the general picture of Constellation, Gore wanted him to put this keen understanding to work on every aspect of the agency, reviewing it from the bottom up in line with his new objectives--a task he wanted done in the first two months after the inauguration. As was traditional, the role of the White House’s main face on the review fell to the Vice President, a task Ann Richards embraced, and the end document presented to the President in April, the “Interim 60-day Progress Report on the State of the National Aeronautics and Space Administration”, quickly became known as the “Richards-Davis Report”.

    The agency that the report profiled was in a state of near-schizophrenic action. The Ares and Artemis program offices were in the middle of receiving and dissecting the results from the first major rounds of Constellation Phase A study contracts, with almost every topic imaginable under review. For Artemis, concepts for landers of every design imaginable were under consideration, from LMs on steroids, to “crasher” designs that would use larger hydrogen stages to brake a lander most of the way down to the surface, to single-stage reusable landers, to landers that would use multiple thrust axes or land on their “sides”, along with virtually every combination of fuels ever proposed for use in spacecraft, from hydrogen to methane to hypergolics to--in one memorable study from Langley--a solid-fuel ascent stage for increased reliability after long periods on the surface. The process of getting that lander and its crew to the moon was also in flux, with studies considering Earth orbit rendezvous, lunar orbit rendezvous, rendezvous at various Lagrange points, and any and all combinations thereof. The architectures mostly examined hydrogen departure stages, but of many and varied sizes and configurations, ranging from huge new monolithic stages launched fully fueled aboard Saturn Heavies to clustered Centaurs, either separately launched and assembled in orbit, or launched empty and filled by additional flights. The mission capabilities were similarly varied, as were the durations, though most studies had quickly converged on a crew size of four. Almost all, however, assumed that some kind of base would follow on the initial sorties, despite Congressional rejection of a definite commitment to such permanent outposts, and aimed at systems that could serve both roles. 
Many studies even looked directly at applying lessons from Apollo and Freedom to a long-term plan for operations of a potential permanent base, harnessing local resources to supplement supplies and fuel from Earth. The Ares program office’s studies were--incredibly--even more varied, as without the immediate time pressure of Artemis they had even greater flexibility to dream about technology and architectures. Some proposed Zubrin-esque single-launch monster missions, while others favored more von Braun-style flotillas of spacecraft, built up in Earth orbit to fly to Mars as a convoy. Especially in conjunction with the latter, there were multiple proposals for propellant depots, pre-positioning fuel caches in LEO, at the Lagrange points, and potentially even in Mars orbit. The proposed sources varied as much as the depots’ locations; besides the mundane option of launching the fuel from Earth, mining oxygen from the lunar regolith or cracking it, together with hydrogen, from the ice deposits hinted at by the Lunar Reconnaissance Pioneer were proposed to fill the tanks of future Mars-bound spacecraft. Even more speculatively, the potential ice content of Phobos or Deimos could be mined in the same way to produce fuel around Mars itself, to say nothing of Zubrin’s proposal for producing fuel on the Martian surface.

    This plethora of studies and analyses had done nothing, however, to help the agency actually choose an architecture and an approach for Artemis, let alone Ares. Instead, they left the agency struggling to weigh the advantages and disadvantages of each proposal. Should it opt for an architecture minimizing ongoing operational costs, to protect the program as its objectives were achieved, or one that minimized development costs, increasing the likelihood that it would survive any future political struggles to reach those objectives? How much should it involve international partners, including the unknown possibilities of Russia, China, and India? What balance should it strike between technical risk and possible performance? Rather than provide the information needed to make informed decisions in all of these areas to present to the Administration and to Congress, the studies were instead paralyzing NASA with an excess of attractive options, forcing ever more analysis to try to narrow down its choices, all the while accomplishing little of real import.

    Falling under the goals of all three manned program offices, and thus answering to all while directed by none, the Advanced Crew Vehicle (ACV) program was a microcosm of Constellation’s problems. Originally conceived during the late 80s as a program to develop a next-gen crew capsule to finally replace the venerable Apollo with something more capable and modern, ACV was incredibly open in scope, and in the flush of money after Bush’s incorporation of the existing conceptual research into Constellation the number of contractors and NASA engineers involved had exploded. Almost every major US contractor had at least one proposal, while large ones like Lockheed and Boeing had several parallel programs. Other concepts and studies were being added by NASA centers, research universities, and even small startup companies. Vehicles proposed ranged from scaled-up capsules resembling Apollo or Minotaur (aiming to include more volume and equipment in a returnable and reusable core capsule) to more exotic aeromaneuvering configurations, including spaceplanes, lifting bodies, biconic capsules, and others. A third camp advocated stripped-down vehicles intended to reduce costs per flight by allowing crew rotations in tighter conditions aboard commercial launchers like Lockheed Titans, McDonnell Deltas, or even (in the smallest proposals) ALS Carracks. Most designs aimed at switching to land-landing, with precision touchdowns of one form or another, and many also called for at least some degree of reusability. However, the needs of the ongoing Freedom program, the near-term Artemis, and the longer-term Ares program offices clashed as to what the ACV was expected to do, when, for how long, and with what crew and cargo aboard, with almost no configuration able to answer every goal. Moreover, few of the designs were expected to be able to enter service before the year 2000, and some even later. 
Thus, the Richards-Davis report highlighted ACV as a prime target for budget reductions. After all, with Apollo doing such yeoman’s duty for Freedom, and with such versatility, why spend billions of dollars on a replacement that, although cheaper over an extended planning period stretching into the 2010s, would be more expensive in the next decade, while Freedom and Artemis were actually taking place and while Gore was in office? Instead of followup studies or hardware contracts, most of the original partners found their funding eliminated, while ACV was folded down to a smaller office looking exclusively at potential development of Apollo to meet current and near-future needs.

    The same pattern was repeated throughout Constellation’s offices--while Freedom’s more tangible and largely underway efforts escaped serious cuts, Ares was gutted--manned Mars was off the table, as were more expensive robotic precursors like a Mars sample return mission. The Mars Traverse Rovers were to remain the main focus, plus some of the more budget-friendly planetary science missions like the international collaboration on Fobos Together. Indeed, the Ares Office was so stripped that the remaining manned planning was mostly folded in with long-term planning in the Artemis office, which was in turn renamed simply the Exploration Office, though the lunar program itself would retain the Artemis name. The unmanned operations of what had been Ares were instead spun off into the arms of the Planetary Science Directorate. While the Artemis-cum-Exploration Office made out much better than Ares, it still saw a serious cutback in the scope of studies approved. The message was clear--Gore wanted to see more progress made considering the amount of money and time that had already been spent. Most importantly, Gore wanted the critical mode decision made, settling the question of how Artemis would go to the moon. While no Kennedy-esque deadline had been set for Artemis, Gore made it known through Davis that a goal of “before 1999” (and the 30th anniversary of Apollo 11’s landing) would be preferred--and that meant moving now. Gore also wanted to see more of the United States’ allies in space brought onboard in more meaningful roles--both as a way of putting his co-operative vision for space exploration into practice and as a way of spreading the costs of precursors and communications elements to reduce the program’s budget requirements--and Lloyd Davis would run the Exploration Office ragged with a narrow focus on the initial sorties: either to see the mission done or shown to be impossible--and Davis knew it wasn’t impossible.

    The money saved on Ares and Artemis research wasn’t cut from NASA’s budget entirely, though. Some was lumped into Artemis’ operational budget, aiming to help with the Herculean task of moving the scheduled landing to meet the 1999 goal, pushing the program off its comfortable status quo of building castles (and moon bases) in the sky and towards results. However, other elements went to another of Gore’s pet projects. Given the flowering of the commercial space market in the 80s, Gore found NASA’s role in enforcing single-source monopolies with the Multibody, Delta, Apollo, and more to be contrary to what he believed the agency’s goals for the US spaceflight industry ought to be--that instead of monopolism, NASA should be working to develop technologies to foster innovation in the commercial space field. The new Technology Development Incubation program was almost hypocritical--the same kind of kaleidoscopic array of contracts that had made up Ares and Artemis’ analysis paralysis, distinguished only by the fact that most of them had near-term deliverables. Aimed at fostering innovation in the US launch market, the program included contracts for all sorts of projects: advanced hydrogen/oxygen engines, including the altitude-adjusting aerospike so fondly regarded by SSTO advocates; US-built high-pressure staged-combustion kerosene/LOX engines similar to Russian designs; advanced reusable TPS; “dumb” mass-produced expendable stages using composite tanks; ion-drive tugs making reusable trips between low Earth orbit and geosynchronous orbit to reduce the launch mass required for GEO satellites; and a new examination of storable propellant combinations like hydrogen peroxide/kerosene for use on spacecraft and satellites. The program was to culminate in the development of a testbed vehicle putting the best concepts in reusability into effect in a near-space suborbital single-stage demonstrator.

    Finally, Gore proposed a new international initiative, extending the international aspects of Freedom’s operations to a new potential partner--the Russians. Since the first launches of Skylab and Salyut, Russian and American stations had shared the skies. Now Gore proposed that, in a leadup to co-operation on more distant missions, Russian cosmonauts and American astronauts should conduct exchange missions, like the ASTP I and II flights. Unlike the earlier missions, though, these would be true exchanges, not just meetings in space. American astronauts would travel to Mir via Baikonur-launched TKS, spending time participating in operations aboard the station for a full rotation, while Russian cosmonauts would have the chance to fly aboard Apollo and do the same aboard Freedom. It was intended as a way of comparing operational practices, and of laying the groundwork for more extensive peaceful co-operation with the thawing of the Cold War--both in orbit and on the ground. More cynically, it was also a way of funneling US money into supporting the Russian program, preventing Russian rocket engineers and technicians from being headhunted by rogue states to build missiles that might pose a threat to the United States. In the end, while Gore’s eye for the practical cut ambition in some areas of the long-term space program, he hoped that by focusing on near-term efforts like Artemis, Freedom, the commercial space market, and co-operative missions he could enable the kind of peaceful, US-led joint future he envisioned in space.
     
    Part III, Post 7: The Artemis lunar program in detail
  • Well, everyone, despite a truly hell week on the part of both of the authors, it's that time once again. Last week, we reviewed the changes of policy at NASA resulting from the incoming Gore-Richards administration, in particular the elimination of the active pursuit of near-term Mars landings from NASA's goals but a renewed and tightened focus on the lunar return mission. This week, we're going to be looking at what that focus means for the mission itself.

    Eyes Turned Skyward, Part III: Post #7

    Although the Richards-Davis report largely spared the Artemis Program the gutting suffered by the Ares Program, it by no means recommended continuing “business as usual” at the Artemis Program Office (now the Exploration Office). Expressing strong dissatisfaction with the pace of NASA’s decision-making, it emphasized the need to quickly begin developing hardware and mission profiles for the sortie missions Gore wanted to see, relegating base development to the future, if NASA performed favorably and budget realities allowed. Although couched in formal language, dense technical tables, and “sand charts” of budget projections, the message was clear to everyone in NASA Headquarters, Johnson, and Marshall: get a move on, or else.

    However, to be fair to NASA, the questions it had been struggling with since the beginning of Constellation were not easy ones for an agency aware of its status as a secondary or even tertiary budget priority and trying to maximize the survivability of its programs in a hostile environment, nor did they have simple technical answers. Even the so-called “mode question,” a parallel to the debates of thirty years earlier that had led to the selection of lunar orbital rendezvous, contained a great deal of complexity if examined closely: How many launches to use for each mission? How to divide the necessary components of the mission between the launches? Where to bring those components together and, if necessary, to take them back apart? How many supplies to provide for each mission? Whether to take those supplies with the astronauts at each step or separate some of them out? None of these questions had an obvious best answer, and, even worse, which answers seemed better than the others depended partially on whether one saw the Artemis program primarily as a series of brief sorties to the Moon for scientific and prestige purposes or as the beginnings of a base-building effort to parallel Freedom. Given the division within NASA between those favoring the shorter-term approach, often in centers or parts of centers closely involved with Freedom operations, and those favoring a more expansive vision of the program, it was no surprise that the agency had deadlocked on such essentially political decisions. With Gore’s support clearly behind the former faction, the impasse had already started breaking down even while the Richards-Davis report was being prepared.

    Some ground rule assumptions and requirements had already become clear even before Gore’s election. Although the Saturn Heavy was a powerful, capable rocket, it was still considerably less capable and powerful than the Saturn V, which had been only just able to carry out lunar missions itself. Combined with the evolution of safety requirements since the 1960s, an implicit desire to do more than just Apollo redux at the agency, and the unspoken assumption that no new launcher development could be funded, it was obvious that multiple launches would be required for any reasonable mission plan. This, in turn, implied that some location would be needed for bringing together the payloads launched on those multiple rockets and gathering them to form a “stack” capable of landing on the Moon and returning safely to Earth. Given the success of the lunar orbital rendezvous mode in the Apollo missions, it was generally assumed that the lander and return vehicle would be separate, with only the former landing on the Moon while the latter remained in some safe staging area nearby. Finally, a crew of four had been chosen as the default assumption for most studies, with only a few examining larger or smaller teams. With advances in automation since the 1960s, it was no longer considered problematic to allow the entire crew to descend to the surface, leaving the CSM untended. In turn, by adding an additional crew member, every astronaut would have a “buddy” for EVA or other operations, allowing a greater operational tempo than the Apollo missions.

    Together, these three assumptions had their own consequences. First and foremost, two Saturn Heavy launches simply could not support a meaningful mission by four people to the lunar surface. At best, using a low lunar orbital rendezvous mode, they could spend no more than a few days on the surface, barely better than the Apollo missions. At worst, if a Lagrange point rendezvous location was selected, the crew might not be able to spend even one full day on the surface. In both cases, little more would be achieved by any lunar mission than had been done on a given Apollo mission, leading to the obvious question of why billions of dollars were being spent to recreate missions from thirty years earlier. The minimum number of launches needed for a mission was therefore three. Since the Kennedy Space Center had only two pads capable of supporting Saturn Heavy launches, at least one of those launches would need to take place a few weeks before the others. In fact, to best fit in with the center’s processing flow and minimize the amount of extraordinary effort needed to ready pads in quick succession, it would be better if it took place several months before the other two launches. In consequence, the payload launched on this first flight would need to be something that could tolerate several months in space--ruling out cryogenic liquid hydrogen or liquid oxygen, which made up the bulk of the launched weight--and which could easily be separated from other mission elements that would have to launch just before the mission itself, such as the Earth departure stage or the crew. The obvious answer was to launch the supplies needed for the desired longer missions on a separate lander, reducing the crew lander to little more than a lightweight taxi for transiting to and from the lunar surface, able to be launched on a Heavy with the crew vehicle and carry out a “two-Heavy” mission with a separately launched Earth departure stage. 
Since a logistics lander would be needed for a permanent base, to land supplies without the expense of a human flight and to transport large base modules and equipment, this plan gained immediate support from the pro-base contingent of NASA’s personnel. Although the pro-sortie club was more reluctant to follow, eventually they, too, conceded that it was at least acceptable, and this general plan had already started to become the default before Gore’s election.
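The mass logic behind the “taxi plus logistics lander” split can be sketched with the rocket equation. All figures below are illustrative assumptions (round delta-v, Isp, and mass numbers chosen for the example), not program values:

```python
import math

def propellant_needed(dry_mass_t, payload_t, delta_v_ms, isp_s):
    """Propellant (tonnes) to push dry mass plus payload through delta_v (Tsiolkovsky)."""
    ve = isp_s * 9.80665                      # effective exhaust velocity, m/s
    mass_ratio = math.exp(delta_v_ms / ve)    # initial mass / final mass
    final_mass = dry_mass_t + payload_t
    return final_mass * (mass_ratio - 1.0)

# Illustrative figures only: ~5.0 km/s round trip (descent + ascent), Isp 450 s LH2/LOX
DV, ISP = 5000.0, 450.0

taxi = propellant_needed(dry_mass_t=4.0, payload_t=1.0, delta_v_ms=DV, isp_s=ISP)
laden = propellant_needed(dry_mass_t=4.0, payload_t=6.0, delta_v_ms=DV, isp_s=ISP)
print(f"crew 'taxi' lander:           {taxi:.1f} t propellant")
print(f"lander also hauling supplies: {laden:.1f} t propellant")
```

Because propellant scales with everything carried, moving the supplies to a separately launched cargo lander roughly halves the crew lander's propellant load in this toy case, keeping the crewed stack within a "two-Heavy" launch.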

    It proved more difficult to resolve the question of where to stage from “nearby” the Moon. The Apollo missions, of course, had had their lander and return spacecraft separate and eventually rendezvous in low lunar orbit, and at first most mission plans followed suit, happy to trust the judgement of the men who had actually landed on the Moon. As more in-depth analysis took place, however, problems in the low lunar orbit profile began to appear. Modern mission planners wanted access to the entire Moon, not just a narrow band of sites near the equator, especially in the wake of the Lunar Reconnaissance Pioneer’s apparent discovery of large deposits of water ice near the poles and the presence of a gigantic impact basin of scientific interest near the South Pole on the far side. Communications with astronauts on the far side could easily be handled by relay satellites inserted into lunar orbit, but the equatorial orbit used by the Apollo missions could not reach many of the more interesting sites. An increase in the delta-V budget could allow choosing an arbitrary orbit passing over any part of the Moon, but this itself led to further problems. Since the 1960s, safety standards had become more stringent as more had become known about the dangers of space, and certain parts of the agency desired that on any future Moon mission the astronauts be able to choose to abort their mission at any time and return to Earth, a capability which became known as “anytime return”. It quickly became apparent that orbital mechanics meant that providing this capability was going to require a substantial amount of delta-V on the return vehicle, on top of the already large amount needed merely for escaping lunar orbit in the first place. 
Since the return vehicle was supposed to be at most a variant of the spacecraft used for crew transport to Freedom, and since these requirements were much larger than needed for the low Earth orbit maneuvers needed for that role, designers were left with the unpleasant dilemma of either accepting a mass and cost penalty for low Earth orbit missions because of a larger, more expensive service module than needed, or accepting the expense of designing and manufacturing two different service modules, one for lunar and one for Earth orbital missions.

    However, while studying possible communications relay satellite locations, a Langley astrodynamicist had stumbled upon an interesting observation--the issues with adding “anytime return” for low lunar orbit wouldn’t apply to a vehicle staged out of the second Earth-Moon Lagrange point, or EML-2, a region where satellites could remain hovering over the farside with relatively small stationkeeping requirements. Exploring trajectories to and around halo orbits around EML-2 for farside communications using work by Robert Farquhar in the late 1960s, Abe Lewis observed that a hyperbolic trajectory to these halo orbits consumed only slightly more delta-v than the trans-lunar injections of Apollo, while the fixed position of EML-2 relative to the moon and the much easier trans-Earth injections essentially “baked in” anytime return with a much smaller delta-v requirement, especially on the return spacecraft. This solved in a single step the dichotomy that had been facing mission planners between the performance required by the Earth orbital missions and that required by the lunar missions. The tradeoff was that the lander would require more performance, both on the descent and on the ascent, and thus a heavier lander would be required to place payloads onto the lunar surface. However, Lewis calculated that these increases were not enough to outweigh the benefits of the EML-2 trajectories, demonstrating as much in an impressively exhaustive series of head-to-head comparisons of notional missions, pitting his conceptual designs against other NASA design reference missions for the moon. In these analyses, another benefit emerged: the large descent stage needed for the EML-2 mode was well suited to be a logistics lander, provided the necessary electronics and equipment were baked into it rather than located on the ascent stage, turning a potential drawback into something of an advantage. 
As with Houbolt and lunar orbit rendezvous in Apollo, others had been considering EML rendezvous before Lewis began his work, and the influence of one man in a bureaucracy as large as NASA can be hard to judge. Nevertheless, EML-2 rendezvous gained much attention, and studies similar to Lewis’ side-by-side comparisons soon emerged from the main Artemis Office. Within months, EML-2 staging had begun to dominate Artemis reference missions.
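Side-by-side comparisons of this kind boil down to summing leg-by-leg delta-v budgets for each mode. The numbers below are rough, generic figures chosen only to illustrate the shape of the tradeoff (a smaller return-vehicle budget at EML-2, a larger lander budget), not values from any actual study:

```python
# Illustrative delta-v budgets in km/s -- rough textbook-style figures, not timeline data.
modes = {
    "low lunar orbit": {
        "return vehicle": {"orbit insertion": 0.9, "plane change margin": 0.5,
                           "trans-Earth injection": 0.9},
        "lander": {"descent": 2.0, "ascent": 1.9},
    },
    "EML-2 halo": {
        "return vehicle": {"halo insertion": 0.35, "Earth departure": 0.35},
        "lander": {"descent": 2.6, "ascent": 2.6},
    },
}

# Sum each vehicle's legs to get its total required delta-v per mode
for mode, vehicles in modes.items():
    totals = {vehicle: sum(legs.values()) for vehicle, legs in vehicles.items()}
    print(mode, totals)
```

Even with these made-up numbers, the pattern Lewis exploited is visible: the return vehicle's burden shrinks dramatically at EML-2 (letting it stay close to the Earth-orbital Apollo service module), while the lander absorbs the extra performance.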

    Thus, the final Artemis architecture emerged. A three-launch mission would occur, with the first launch sending a logistics lander via a Saturn Heavy directly to the landing site. Several weeks later, with the cargo lander confirmed to be safely on the surface, a pair of Saturn Heavies would carry aloft the crew portion of the mission: one with a large hydrogen/oxygen departure stage, the other with the Block V Lunar Apollo and crew lander. These would meet in LEO, with the departure stage expended to put the stack into a path to EML-2. From there, the crew would descend to the surface in the lander, using supplies from the cargo lander for stays lasting up to 14 days, then ascend back to EML and return to Earth aboard their Apollo. Originally, 8 lunar flights were planned, requiring three new hardware elements: the lander, the new lunar Apollo, and the large EDS (named internally the Exploration Cryogenic Upper Stage). Each mission was to cost around $1.5 billion, with development costs and surface hardware bringing the Artemis initial sortie program to about $20 billion. Flights would begin in 1999 and continue at a pace of one per year until 2007, NASA’s bid to both smooth out budgetary requirements and allow a building of support for permanent bases. These plans were reflected in the budget recommendations Lloyd Davis brought to President Gore in late 1992 for the FY 1993 budget process. However, it has been said that no plan survives contact with the enemy, and in order to be approved, these recommendations would have to pass through the halls of the United States Congress.

    Roughly speaking, Congress broke into four groups on the matter of spaceflight. One could be termed the “hawks”--largely interested in seeing the US space program continued in full force. Not coincidentally, these tended to be representatives from Florida, Alabama, and other states with large vested monetary interests in the US space program, but the memory of the Vulkan Panic, which had arrived after post-Apollo cuts to US space spending, still hung in the minds of a few other members concerned about the growing Chinese program. The second group, for a variety of reasons, saw the space budget as a massive target--either to shrink the government overall, or to be redirected to a member’s preferred programs. The third group was essentially a mix of both--worried about the United States losing its place in spaceflight (both manned and commercial) to Russian, Chinese, or European competition, but conscious of the price tag associated with the endeavor in an era focused on “reaping the peace dividend” and shrinking spending. The fourth group, and by far the largest, honestly cared only about the topline numbers, and was led by whichever messages emerged from the most influential of the other three groups--particularly the third. Gore’s proposed plans, as encapsulated in the Richard-Davis Report, had therefore been calculated to appeal to this group--in his time in the Senate, Gore had plenty of experience in the way things worked, as Davis himself had in NASA dealing with the OMB. 
In order to reassure the more hawkish members, Davis’ advocacy of the new plan on the Hill focused on selling the budget savings of cutting Ares and of co-operation with international partners on the precursor missions, the benefits of the station crew exchange program in keeping Russian rocket engineers working for Russia and not rogue states, the potential benefits of Gore’s commercial initiatives for assuring continued US success in the commercial market even in the face of Chinese, Russian, and European competition, and the newly enhanced focus of Artemis ensuring that the money spent would produce results. Broadly, the sales pitch was effective, as the general outline of the new direction was approved in the new Authorization bill, while Appropriations roughly followed suit. However, there were sacrifices that had to be made. To appease the budget cutters, the final two Artemis missions were cut to bring the program lifetime cost down to just $17 billion, shortening the initial sorties to just six flights ending in 2005. Additionally, to secure approval for Gore’s forward-looking commercial development with a little precautionary protectionism, new teeth were granted to export controls of “defense technologies,” which were expanded to include launch vehicle and satellite technologies. While not actively preventing such exports, the new approvals required to export such technologies (which would include, not coincidentally, launching US satellites on foreign vehicles) were intended to discourage and otherwise limit such activities.

    With the missions approved and money flowing, the contracts for the three major hardware elements could be let. Rockwell’s receipt of the “Lunar Crew Vehicle” contract for the uprated lunar Apollo was almost a formality--the mission plan’s preference for an Apollo closely related to Block IV was well known in the industry. Essentially, the final proposal would mate a Block IV Apollo CM to an SM based closely on the existing Block II Aardvark SM, allowing more room for fuel, together with a “lightweight” pressurized module to provide additional space and services--most prominently a proper toilet--during the flight to and especially from the Moon. The largest change would be overhauling the power system--for the near month of total operations expected of Artemis-model Apollos, batteries would be impractical. Instead, the Block V would introduce much smaller batteries, kept charged by solar arrays. The spectacular improvements in solar cell efficiency since the 1960s had made the conversion an “also-ran” on every new block of Apollo since the 1970s, and the lunar mission requirements finally pushed solar panels ahead of simply maintaining the proven and effective battery system. Given this and the intention to roll the conversion out across both lunar and Earth-orbital Apollos, the Rockwell contract (at $400 million) was slightly more expensive than might have been expected for simply “another Apollo,” but the process was both smooth and cheap compared to the contracts for the ECUS and the lander.

    The lead competitors for the ECUS contract were mostly confined to companies already constructing hydrogen stages, namely McDonnell, builder of the S-IVB/C family, and Northrop, which had acquired the Centaur along with General Dynamics. While other companies including Lockheed and Boeing submitted bids, the experience of these firms was enough to push their proposals into the lead. Both stages were planned to use the same engine cluster--six RL-10s--and to use common bulkhead designs to minimize dry weight. However, the designs differed in the key detail of diameter. The Northrop design was set at 5.5 m diameter, essentially replicating the S-IV stage of the 1960s with an improved mass fraction and higher overall fuel load. McDonnell, on the other hand, set about encapsulating the ~70 tons of propellant in a 6.6-meter tank based on the proven S-IVB derivatives they had developed. Building a small enough LOX tank then required flipping the common bulkhead’s dome to nest “into” the aft LOX dome--a major revision to the common bulkhead design, requiring new structural analysis, a slightly heavier common bulkhead dome, and substantial engineering costs. Compared to this, the new tooling required for Northrop’s overgrown Centaur was judged less technically risky, and Northrop’s bid cost ended up being slightly lower. In the end, it was a deciding difference--McDonnell's contributions to Artemis would be limited to Earth orbit with their S-IVC on the Saturn Heavy. Northrop’s design, which they saw as giving “wings” to the Artemis program, was named “Pegasus” after the winged horse of mythology. Northrop’s contract for the development of the stage was set at $1.2 billion, and was a major win--a chance to gain NASA funding to build their own large-stage tooling.
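The diameter problem McDonnell faced can be seen with simple tank geometry. The densities and the 5.5:1 oxidizer-to-fuel ratio below are generic assumptions for LH2/LOX, not contract figures; the point is that at a 6.6 m diameter the LOX fits in a cylindrical column barely a meter and a half tall, which is why nesting the common bulkhead dome into the aft dome became attractive:

```python
import math

# Illustrative sizing for ~70 t of LH2/LOX propellant (generic figures)
PROP_T, MIX_RATIO = 70.0, 5.5          # total propellant (t), O/F by mass
RHO_LOX, RHO_LH2 = 1141.0, 71.0        # propellant densities, kg/m^3

lox_t = PROP_T * MIX_RATIO / (1 + MIX_RATIO)
lh2_t = PROP_T - lox_t
vol_lox = lox_t * 1000 / RHO_LOX       # m^3
vol_lh2 = lh2_t * 1000 / RHO_LH2       # m^3

for dia in (5.5, 6.6):
    area = math.pi * (dia / 2) ** 2
    # naive all-cylindrical column heights, ignoring dome volumes
    print(f"{dia} m dia: LOX column {vol_lox/area:.1f} m, LH2 column {vol_lh2/area:.1f} m")
```

The huge disparity between the dense, compact LOX load and the voluminous hydrogen is what drives the bulkhead design: almost all the tank length is hydrogen, and a wider stage leaves the LOX tank dominated by its domes.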

    [Image: EDS designs]


    The lander competition was equally fierce--while the product was less commercially applicable than a large hydrogen stage, the lander was viewed as more prestigious. However, experience in lander technologies was less widespread, putting most proposals on more equal footing, with one major standout. With the experience brought in by their new Bethpage division and Starcat, Boeing had very recent history with a vertically-landing hydrogen vehicle. Moreover, the institutional memory of Grumman on the Apollo Lunar Module gave them a base on which to build this more recent experience. Their entry (1) was far more “conventional” than many put in by other companies, consisting of stacked ascent and descent stages. However, this created an issue of reaching the surface--the porch of the ascent stage would be more than 6 meters off the ground, requiring quite a bit more than “one small step.” Other entries were more creative in order to eliminate this issue. Several turned the lander’s launch axis horizontal. Some simply mounted the engines perpendicular to the launch axis (2), while other variants on this concept used separate descent and landing engines, with the main descent performed by a larger engine mounted along the axis, then smaller engines for final descent--thus avoiding the issue of deep throttling for the main engines (3). Others used a sort of “crasher” design (4), with the descent stage doing most of the work of landing, but the ascent stage then actually landing separately, performing final descent as well as ascent, eliminating any need to climb down the descent stage to the surface and any need for equipment such as landing gear on the main descent stage. In spite of this creativity, Boeing’s Grumman experience bolstered the technical maturity of its design and NASA’s judgement of its risks, enough to win the company the $5 billion Lunar Crew and Logistics Module (LCLM) contract.

    [Image: Lander options]


    With congressional approval secured and contracts settled, the doldrums that had gripped Artemis were largely dispelled. Most shocks to the program caused by the cuts and the re-arranging of the Artemis and Ares Offices into the Exploration Office were eased by the focus on Artemis that Davis brought, and by the measurable progress made in 1993. Across the country, work on Artemis was shifting into gear. From nothing but a distant possibility a few years earlier, a return to the Moon now seemed to be drawing ever nearer for American astronauts.
     
    Part III, Post 8: The Russian space program and its international partnerships
  • Hello, everyone! It's that time once again, and having thoroughly covered the crystallization of American lunar plans for the last two weeks, this week we're turning our attention to the other side of the fallen iron curtain. This week, we're looking at the state of the Sov--er, Russian space program in the shadow of the collapse of the USSR. I hope everyone enjoys it!

    Eyes Turned Skywards, Part III: Post #8

    With the end of the Cold War in Russia also came the end of the reliable political support and massive budgets for the Soviet space program. For Vladimir Chelomei, his dream of being Chief Designer of the program, achieved at long last, was rapidly becoming a nightmare. When he had assumed control of the program following the death of Glushko, Chelomei had hoped to be able to build on Glushko’s achievements in space with his own, a series of mixed-fuel airbreathing single-stage spaceplanes that would enable cheap and simple development of space-based infrastructure, in turn enabling mighty space stations and far-flung expeditions even Korolev and Glushko would have been envious of. It was an idea that Chelomei had harbored for many years, but it was doomed to remain nothing more. Even by 1989, the state of the Soviet Union was dire; the Politburo had little interest in increasing funding for the space program to pursue such imaginations (even if they might be technically achievable) and indeed was more interested in asking pointed questions of Chelomei about how the program’s budget could be further trimmed with “minimal” effects on the political value of the program. With the final implosion of the Soviet Union, Chelomei found the new Russian leadership even more insistent--now, the question was how much could be cut without “critical” effects. It was readily apparent even to Chelomei that in order to enable the space program he had spent much of his life building to survive, he would have to find alternate revenue sources.


    At 76 years old, Chelomei was no spring chicken, and had lived his entire adult life among the enormous battling design bureaus of the Soviet Union, an environment where vast political maneuvering and horse-trading was the fuel that powered development programs. Perhaps, then, it is unsurprising that, at least initially, Chelomei’s efforts to build a new revenue stream focused not on the commercial spaceflight industry that had begun to spring up, but instead on similar “great moves”. The concept of a ‘lite’ version of the Vulkan, based around its RD-160 second-stage engine, had been in the air almost since the Vulkan’s inception as a way to reduce the costs of sustaining R-7 and Vulkan production while offering greater flexibility. The Indian space program had reached out to Chelomei in the early days of his time as Chief Designer, but caught in the transition (both of his career and of the rapidly changing landscape of the Soviet Union’s politics) Chelomei had had no time for their offers. However, now two years later in 1991, he saw the chance to forge a strategic design alliance that could complete the vehicle design, now called Neva after the short but powerful river that flows through the heart of St. Petersburg, keep at work engineers he would need for his spaceplanes, and secure the construction cost savings he desperately needed to balance his budget. While such a program was more than the Indians were initially looking for, he was willing to sweeten the pot with licensed production deals, as well as flights of Indian cosmonauts to Mir--critical for ensuring the funding needed to keep Russian cosmonauts flying there as well and preventing the station from falling into disuse from which it might be unrecoverable. 
He then built off of this by securing an alliance with the Chinese, to provide technical support to Chinese launcher and capsule design work and access to Mir in exchange for straight cash he needed to keep his programs running. It was perhaps a worse deal than he could have made, but Yuri Gagarin’s flight had been one of the great successes of the program he was trying to safeguard, and the burning of Gagarin’s Start at Baikonur had recently brought home the financial difficulties he was struggling with. To see an icon of history, and not just Russian history, burn, to see something his ancient rival Korolev had been responsible for even while he himself had been struggling to dominate the Soviet space program go up in flames, weighed heavily on the man, perhaps driving him to search farther than he otherwise would have for whatever money he could dig up. Having made these Faustian bargains to sell access to Russia’s hard-earned spaceflight knowledge for the cold, hard cash needed to keep his rockets flying, it was perhaps only inevitable that Chelomei would eventually authorize similar sales to the West--the very same opponent whose competition had spurred on the very development of the technology, back when he had been the young upstart. Needing to go to the West for help was not easy, but within Chelomei’s mindset of grand bargains, it was the only way to ensure the survival of the program.


    The Western world, with the exception of the relative latecomer Japan, had developed a suite of reliable, if relatively low performing rocket engines during the 1950s and 1960s through painstaking labor and testing. As a result, the development of new engines, however large a benefit they promised over the already developed motors, seemed almost too painful to bear, given the common assumption that any such development would need similar amounts of testing--and quite probably similar numbers of expensive flight failures--to become equally reliable. Instead of continuously developing and introducing new engines utilizing improved design features, Western designers chose instead to incrementally upgrade their existing designs, introducing new materials, increasing chamber pressures, and a host of other tweaks to push performance as far as possible. And therein lay the rub, as the underlying designs were fundamentally low-performance, and could only be pushed so far. To get around these inherent limitations, Western engineers turned towards augmenting the perhaps unimpressive core vehicles with a wide variety of additional stages and modifications. For example, rather than relying purely on thrust from the core, a rocket might use strap-on boosters, whether liquid or solid, to lift its bulk into the sky, increasing the payload carried. Alternatively, upper stages using solids, storables, or kerosene as fuels could be replaced by far more efficient high-energy stages using hydrogen and oxygen, a difficult propellant combination that had nevertheless been greatly developed by the United States military during the "Suntan" spy plane program and the later Centaur upper stage project. Taken separately, they could yield important gains to the performance of the underlying vehicle; taken together, however, they could turn a previously mediocre vehicle into an outstanding performer, as in the case of the Europa 3. 
Most of the performance gain of this workhorse of ESA over the initial Europa 2 came not from the improvements, however significant and difficult, that Rolls-Royce made to the core's RZ.2A engines relative to the older RZ.2, nor from the large increase made in the size of the first stage now that it no longer needed to be largely a copy of Blue Streak. Instead, it gained from the use of a capable new French hydrogen-oxygen upper stage in place of the older hodgepodge of storable French and German stages and the ability to use solid and liquid boosters to increase takeoff thrust. This combination lifted the vehicle from matching the Delta, barely, to seeing eye to eye with the mighty Titan III in terms of payload capacity. By the early 1990s, virtually every Western rocket used some combination of boosters and high-energy upper stages to boost performance, with most of the exceptions being launchers where other concerns, such as politics or cost, dominated over raw performance.


    In contrast, the Soviets had preferred a stable of relatively simple vehicles specialized to their particular use, and, due to the absence of a significant technology base in solid rockets and the presence of a rare concentration of liquid engine design talent, relied almost exclusively on liquid propellants for thrust, even in military applications where the Western world quickly developed solids. Moreover, as a consequence of the peculiarities of character of their chief designers, the Soviets were skeptical, even dismissive, of very high energy but hard to handle cryogenic propellants, famously expressed in the battle over what propellants should be used in the Soviet moon-landing efforts. This battle, waged between Glushko, an engine designer who favored storable propellants, and Korolev, a systems designer who favored the mildly cryogenic pair of liquid oxygen and kerosene, lasted through much of the 1960s. Although Glushko reconciled himself to cryogenics by the 1970s, when his Vulkan was designed to use exclusively kerosene and liquid oxygen, and the Soviets began using high performance hydrogen-oxygen stages in the 1980s, they never completely lost their aversion to cryogenics, with the Blok R high-energy stage mainly being used for a select set of planetary and very high orbit spacecraft, where nothing short of hydrogen would do. In fact, one of the very first things NPO Lavochkin did after becoming an independent firm following the fall of the Soviet Union was try to sell a derivative of the storable propulsion system they had developed for the latest block of Soviet planetary probes as a reliable upper stage for the Soyuz launch vehicle, achieving some success in the process. 
To compensate for the inherently lower performance of kerosene and storables as propellants, the Soviets had developed highly sophisticated metallurgy and engine design practices allowing them to run their engines as high-pressure staged combustion engines, offering far better specific impulse and thrust for a given propellant than the simpler, mostly low-pressure gas-generator engines dominant in the West. Between this mastery of a number of highly sophisticated technologies and design methods and a willingness to "just build a bigger booster" if that proved necessary, the Soviets were able, just prior to their collapse, to field a set of launchers, from the small Tsyklon and Cosmos to the reliable workhorse Soyuz to Vulkan and on up to the mighty Vulkan-Atlas, just as capable as any booster in the West, if less flexible on a vehicle-by-vehicle basis.
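The payoff of high-pressure staged combustion over a simple gas-generator cycle falls straight out of the rocket equation: for the same stage mass ratio, delta-v scales linearly with specific impulse. The Isp values below are generic textbook-level numbers for kerolox engine cycles, not those of any specific engine in the timeline:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def stage_delta_v(isp_s, mass_ratio):
    """Ideal delta-v (m/s) of a stage with the given Isp and full/empty mass ratio."""
    return isp_s * G0 * math.log(mass_ratio)

MR = 8.0  # full/empty mass ratio of a notional first stage (illustrative)
for name, isp in [("gas generator, ~300 s", 300.0), ("staged combustion, ~335 s", 335.0)]:
    print(f"{name}: {stage_delta_v(isp, MR) / 1000:.2f} km/s")
```

A roughly 12% Isp advantage buys the same fraction more delta-v from an identical airframe, which compounds into substantially more payload once upper stages are accounted for; this is the margin Western firms were eyeing in Russian engine technology.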


    With the collapse of the Soviet Union in 1991 and the resulting elimination of many barriers on trade and travel between the West and the newly-formed Russian Federation, particularly restrictions on discussions of Russian and Western rocket hardware, came the discovery of these advanced capabilities by Western rocket engineers. Except for Mitsubishi, which was engaged in pursuing a completely different and independent route to high rocket performance levels, the major Western rocket engine development firms quickly began salivating over the potential offered by these technologies, especially given the low cost of acquiring the fundamentals from a Russia in the throes of significant economic restructuring and in desperate need of hard cash. All of them proposed to their respective governments that Russian technology be incorporated into new engines that would dramatically outperform existing designs. Rocketdyne and Rolls-Royce had in many respects the most conservative proposal, under which they would form an international partnership, International Engines, to apply Russian design principles to their existing (and dominant) engines. The goal would be to replicate as closely as possible the key characteristics of the engines, such as thrust and physical size, so that only minimal changes would need to be made in existing stages, while still reaping the benefits of dramatically improved specific impulse and thrust compared to their existing, more conventional engines. By contrast, Pratt and Whitney had the most radical proposal, under which they would partner with the Russian company NPO EnergoMash to sell their engines directly in the United States. 
Although significant amounts of development work would need to be undertaken to replace existing boosters, which were largely incompatible with the Russian designs, noises about "Third Generation Boosters" (where the 1950s and 1960s boosters were "First Generation" and the products of ELVRP "Second Generation") in the US and the Europa 5 program in Europe perhaps encouraged Pratt and Whitney to believe that such a replacement was inevitable anyways.


    Aerojet, the fourth major Western engine manufacturer, had a completely different approach to the prospect of incorporating Russian technology than either Rocketdyne/Rolls-Royce or Pratt and Whitney. Rather than upgrade or use existing engines, Aerojet proposed that an entirely new engine be designed to take maximum advantage of the new technology. By properly sizing the engine--Aerojet estimated that one with about half a million pounds (or 2,200 kilonewtons) of thrust would be ideal--and giving it the ability to throttle deeply, a single engine could replace all existing first-stage engines in all Western launch vehicles (subject to the necessary redesigns, of course). Everything from Europa to Delta could be powered by the same engines, allowing enormous economies of scale. Of course, the Europeans were unlikely to agree to dismantling the independent infrastructure they had constructed over the past three decades for the benefit of an American firm, but even if only the United States adopted its proposal, there could be substantial advances not only in performance but also in economy.


    Meanwhile, on the Russian side, Chelomei’s grand bargains had at least achieved much of their task of keeping the program the Russians had inherited from the Soviets alive through to the approach of the mid-90s. However, new forces in the political and technical realms were beginning to make themselves heard, arguing that the mindset Chelomei was operating with was a false constraint in the new, capitalistic, commercial world that Russia was now a part of. In this world, it wasn’t grand alliances that ultimately were the real money source, it was putting payloads on rockets (or passengers in capsules) and flying them to space. Moreover, these payloads and passengers weren’t just a side project to fund the massive projects of space exploration, they would have to be the bread and butter--the program’s main reason to be. In the view of those within the Russian government and space program who had begun to grasp this fact by watching the operations of their competitors like Lockheed, ESA, ALS, and McDonnell-Douglas, Chelomei’s attitude towards developing a base for selling Russian rocket flights to foreign customers was unacceptably lax--by 1994, not a single foreign payload had flown on a Vulkan or Soyuz rocket in spite of the dramatically lower costs of Russian rockets made possible by the state of the Russian economy and its lower labor costs, and the results of his other partnerships had also been less than might have been hoped.


    In many ways, this blame was undeserved--getting insurance coverage, technical contacts, launch support and pricing structures in place was a colossal task, and even if payloads had not been designed from the ground up for a specific launch vehicle, it usually took years to negotiate and finalize LV contracts. This was exacerbated by the sheer scale of Vulkan compared to other commercial vehicles--its payload both to low Earth orbit and to the more commercially relevant geosynchronous transfer orbit was substantially larger than that of its largest competitor, the Europa 44u, and several times larger than those of the commercially dominant Lockheed Titan IIIE and Europa 42u. While Vulkan was cheaper per kilogram of payload than any of its competitors in theory, this advantage only applied if its payload capacity was fully exploited, not if it was allowed to fly partially empty. However, since most commercial satellites fell well short of Vulkan’s lifting ability, “fully exploiting” its capacity meant lifting two or more satellites on a single launch, a complicated and difficult proposition to arrange. If merely one satellite was launched, Vulkan would be no cheaper and less convenient for the mostly Western firms that were seeking to launch satellites than its competitors at the Cape and Kourou. Even the best salesman would struggle to obtain contracts under such conditions, and the environment of the Soviet Union, where Chelomei and his top lieutenants had had to do little but focus on research and development, meant that they were far from the best salesmen in Russia.
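    The economics at work here can be illustrated with a back-of-the-envelope calculation. All of the figures below are invented purely for illustration--they are not actual timeline prices or payload capacities--but they show the shape of the problem: a heavy lifter is only cheap per kilogram if its capacity is actually used.

```python
# Hypothetical launch economics sketch. A heavy launcher's cost-per-kilogram
# advantage evaporates when it flies partially empty. All numbers are
# invented for illustration only.

def cost_per_kg(launch_price, payload_mass_kg):
    """Effective price per kilogram actually delivered to orbit."""
    return launch_price / payload_mass_kg

# Invented figures: a Vulkan-class heavy lifter vs. a smaller competitor.
VULKAN_PRICE, VULKAN_CAPACITY = 90e6, 8000.0   # $ and kg to GTO (hypothetical)
RIVAL_PRICE, RIVAL_CAPACITY = 60e6, 4000.0     # $ and kg to GTO (hypothetical)

# Fully dual-manifested, the heavy lifter undercuts the competitor...
full = cost_per_kg(VULKAN_PRICE, VULKAN_CAPACITY)    # 11,250 $/kg
rival = cost_per_kg(RIVAL_PRICE, RIVAL_CAPACITY)     # 15,000 $/kg

# ...but carrying only a single 4,000 kg satellite, the same rocket at the
# same price is markedly more expensive per kilogram than its rival.
single = cost_per_kg(VULKAN_PRICE, 4000.0)           # 22,500 $/kg
```

Hence the emphasis on dual-manifesting: without a second co-passenger on every flight, the heavy vehicle's theoretical price advantage inverts into a disadvantage.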


    Moreover, the extensive co-operative programs Chelomei had offered as a path forward had been progressing more slowly than had been promised to the international development partners. India had initially been promised that development of the Russian designs for the Neva/Polar Satellite Launch Vehicle would be complete by 1995. Given that the core was to be based on Soyuz tankage and Vulkan-derived engines, the goal had looked initially achievable. However, the combination of limited budgets and unanticipated challenges in adapting hardware to produce Neva pushed these schedules back. Originally, India had hoped to have its PSLV by 1994, but had allowed a slip to 1995 as an acceptable alternative given the potential of the Russian stage, extending the use of its Augmented Satellite Launch Vehicle in the meantime. However, every slip of the Russian development program brought implications for the Indian program; as delays accumulated and began to push the introduction of the vehicle into the latter half of the decade, many Indian program managers began to express impatience and frustration, even to the point of suggesting that it might be just as well for India to cancel the co-operation and instead build their own natively-designed stage. While Neva’s engineering team managed to largely assuage those impulses (in part with arrangements to pay to fly some of the PSLV-only payloads on Vulkan in the meantime), they were an ominous and discouraging sign for the future of the Russian-Indian partnership. Perhaps the only areas relatively immune from delay were those simply involving flights to Mir--including flights by Indian, Chinese, and American astronauts. On the commercial passenger side, where Russian companies had begun attempting to sell the concept of a “tourism” flight to Mir, there had been interest even at the prices needed to help subsidize Mir operations, but none of that interest had yet translated into the cold, hard cash the program needed.


    These difficulties provided strong evidence that Chelomei’s worldview of grand moves and massive projects was incompatible with the efforts needed to win the steady stream of mundane commercial payloads on which the program’s future depended, and that, given the strain already present on the cash-strapped program, he had over-extended. Finally, in 1995, Chelomei was outmaneuvered in his own game--the last of the great Chief Designers had made one wrong move too many, and he was informed he was being offered a well-deserved, richly compensated, and quite compulsory retirement in honor of his stewardship of the program as Chief Designer and years of dedicated service beforehand. His replacements would focus on the large commercial potential of the assets he had managed, however clumsily, to preserve from the glory days: Vulkan, TKS, Mir, as well as cooperative efforts on Neva with India and with the Americans in LEO and beyond.
     
    Part III, Post 9: The Cassini Probe program
  • Well, everyone, it's that time once again. Last week, we touched on the position of the Russian space program (and their alliances with other programs) as the commercial and international market became the key for the survival of their program. This week, we're taking a jaunt out to what the American program is up to at the same time as we track down the latest in their outer planet exploration missions.

    Eyes Turned Skywards, Part III: Post #9

    From Earth, Saturn is perhaps the most intriguing of the giant planets. While its own complex atmospheric systems are virtually invisible to ground-based observatories, unlike the glorious belts, storms, and zones of Jupiter, it more than compensates with its famous rings, the only set of giant planet rings easily visible from Earth. As with Jupiter, NASA planning for advanced missions to Saturn and its system of rings and moons, to follow flybys like the Pioneers or Voyagers, began early, almost before the planetary exploration program itself, although due to the greater challenges involved in exploring Saturn these missions tended to be granted a lower priority than missions to Jupiter or the inner planets. By the mid-1970s, these plans had coalesced into a family of “super-Voyagers” or “super-Pioneers,” beefed up with extra propellant tanks to handle orbit insertion and modification, and a modified instrument suite to better address questions specific to each planet. By using the Saturn-Centaur, these probes could be dispatched directly to Jupiter or Saturn; indeed Jovian probes could carry additional scientific equipment, such as an atmospheric probe. Alternatively, the Titan IIIE--the Titan-Centaur--with an additional solid “kick stage” could be used, although this would limit probe capabilities and require, for Saturn orbiters, the use of gravity assists to reach the destination. It would, however, be available earlier than the Saturn-Centaur, and possibly be cheaper as well. Although the major focus during these studies was exploration of Jupiter and the Jovian system, some attention was paid to the possibility of Saturn orbiters at a later date, a pattern that would repeat through the 1970s and into the 1980s; although Saturn was a decidedly lower priority than Jupiter, it would nevertheless benefit from the attention paid to the latter.

    This was apparent in the next round of Saturn orbiter analyses, started after the approval of the Galileo Jupiter orbiter missions in 1976. Now, instead of being based on the Voyagers or Pioneers, Saturn orbiters would be based on the more capable but heavier Galileo platform, carrying an array of instruments and probes to explore not only the planet, but also its moons. Since the previous round of studies, observations of Saturn’s moons, especially the largest moon, Titan, had revealed them to be, as with Jupiter’s moons, more interesting than previously thought. In particular, evidence seemed to indicate that Titan might have an atmosphere, probably thicker and more dynamic than the Martian atmosphere which had been explored in detail by Viking, making it the only moon in the solar system with an appreciable atmosphere (although Voyager 4 later showed that Triton had a thin but perceptible atmosphere). Interest only grew after Pioneer 11 and Voyagers 1 and 2 flew past Saturn and its moons, revealing many additional scientific questions just waiting to be answered and confirming the thick and dynamic nature of Titan’s atmosphere. Although the greater mass meant that even Saturn-Centaur would require a kick stage to send the “Saturn Orbiter with Probes,” or SOWP, to Saturn, the trade-off was felt to be worth it in the additional scientific return possible.

    As these studies began to sharpen up the details of the notional Saturn probe, new opportunities began to emerge for the tentative SOWP program. While a few foreigners, particularly French scientists associated with CNES and its balloon programs, had been involved in discussion of possible Saturn and Titan missions, most of the discussion to date had taken place at Ames or JPL, with little involvement from non-Americans. As part of their program to further develop a common European space science program, the European Space Agency encouraged a series of meetings between members of the National Academy of Sciences and the European Science Foundation in the early 1980s, largely before the Vulkan Panic, to discuss possible future areas of cooperation between the scientific programs of the European Space Agency and NASA. Given their previous and ongoing collaborations for Hubble, Helios-Encke, and Kirchhoff-Newton, much of the discussion focused on possible alliances in astronomy and planetary science, although sharing of data and possible joint missions for Earth science and helioscience programs were also discussed extensively. The growth of European planetary science over the past decade, coupled with significant interest from the Americans in involving international collaborators in SOWP (if for no more noble reason than protecting SOWP against any reappearance of budget-cutting enthusiasm within the OMB), led to substantial interest from European scientists in participating in SOWP. There were a number of components where European industry could clearly and easily make useful contributions, while European scientists had unique experience and advantages in certain possible instruments. Although no specific agreements were made, the consensus was clear that any future Saturn mission would be a joint mission--led by NASA, true, but including ESA as a critical partner.

    At first, the Vulkan Panic and subsequent infusion of funds into NASA changed very little about the design of SOWP. Despite the advantages of significant budgets and improved performance from the Multibody, it was still much too early in planning to proceed to formal approval and the start of detailed design and manufacture. Instead, JPL continued formal studies into spacecraft configuration and design, while inviting ESA to participate more directly in defining SOWP, now tentatively named “Cassini” after the Franco-Italian astronomer Giovanni Domenico Cassini, who had discovered four of Saturn’s moons and the Cassini Division within the planet’s rings, besides a number of other contributions to science. Over the next several years, Cassini’s design became more and more well defined until, in 1985, it was finally submitted to Congress for a new start. Despite the years that had passed since the initial furor of the Vulkan Panic, Cassini fairly sailed through Congressional approval, with the costs balanced out by arguments about the need to maintain the unique American capability of exploring the outer planets, something which otherwise would atrophy and decay after Galileo’s end.

    The Cassini Saturn System Mission approved by Congress would be a behemoth of a mission, the “cornerstone to end all cornerstones” as detractors said. The orbiter alone, equipped with an expansive scientific suite including a cloud-penetrating radar for mapping Titan, an improved version of the Galileo imaging system, and other modifications, would mass as much as the complete Galileo spacecraft--orbiter, probe, fuel, and all--even when unfueled. Furthermore, it would carry two parasite probes, one that would be released before reaching Saturn and penetrate the atmosphere of the planet like Galileo’s probe, and another which would be released later to explore Titan. Altogether, and including the propellant needed for Saturn Orbit Insertion and other critical maneuvers, Cassini would set a new record for probe mass, at over six and a half metric tons at launch. In fact, Cassini was so heavy that even Saturn-Centaur with a substantial kick stage could not propel it directly to Saturn; instead, it would need to take a complicated path using multiple flybys of Earth and Venus before being able to speed on to the ringed planet, something which would increase the complexity of Cassini relative to Galileo still further. Consideration had been given to instead using a Heavy-Centaur, which would be capable of directly injecting the probe onto a trans-Saturn trajectory, but although this would not significantly affect the lifetime cost of the probe, peak costs would be higher--too high for the science budget to support, especially given the already high projected cost of the program. Even then, substantial components, including the spacecraft’s entire propulsion system and the Titan probe--now nicknamed “Huygens” after the Dutch astronomer, mathematician, and physicist Christiaan Huygens--needed to be produced in Europe to prevent exceeding projected budgets.

    With the program defined and budgetary authorization in hand, development quickly began. Although design and manufacturing would need to be relatively quick to meet the planned 1992 launch date, generous budgets, and the fact that the problem was less one of entirely new development and more one of integrating existing technologies--like the advanced radioisotope thermoelectric generators developed for Galileo or the thermal protection material invented for Galileo’s atmospheric probe--into a single, coherent whole, meant that scientists, engineers, and mission planners were optimistic about their ability to meet deadlines. As with virtually all large aerospace projects, however, these early assessments quickly proved inaccurate. No previous spacecraft had had to endure thermal and radiation environments ranging from the fury of the Sun around Venus to the cool and quiet of Saturn orbit. None had had to support so many parasite craft during such a long voyage from launch to probe delivery. None had needed such an endurance merely to complete their primary mission. Increasingly, as JPL and ESA engineers confronted these problems, it looked like Cassini’s launch might slip from 1992 to 1994, the next possible date when Venus could be used for a flyby.

    In response, NASA returned to Congress asking for more money for the probe, hoping to throw enough resources at the spacecraft to complete it on time despite the difficulties. As the rapid budget growth that had characterized the agency’s funding through most of the 1980s was coming to an end, obtaining this supplementary funding proved more difficult than agency officials had anticipated. Despite failing to obtain these additional monetary resources, JPL leadership was still officially aiming for launch in 1992, hoping to simply push its existing personnel and technical resources harder to make up the difference. With the scientists, engineers, and technicians involved in the efforts to prepare the probe slowly coming to a consensus that the probe could not possibly be ready by that time, morale began a slow-motion collapse, strained by the disconnect between management and the workers actually in charge of implementing the program, further slowing Cassini development.

    In the wake of Bush’s “constellations of exploration” speech, Cassini gained prominent billing as the largest and one of the most important NASA planetary exploration missions planned for the next decade. Increased funding followed increased attention, but by the time additional resources began to flow into the program, it could not realistically be ready by 1992. As 1990 wore on, management was finally forced to face this fact, officially delaying launch from 1992 to 1994. With two more years to build and test the spacecraft, more funding and resources flowing into Cassini accounts, and increased support by upper-level management, morale recovered and the program began to get back on schedule. Even when Gore was elected, his budget-cutting instincts and a more budget-conscious Congress found a riper target in the as-yet inchoate Ares Program than in the more concrete and nearly ready Cassini, sparing it significant pain. By mid-1994, the probe had been completed and shipped to Kennedy for final systems integration and mating with its booster, and in early September was rolled out to the launch pad atop a Saturn M02-Centaur. Launch went smoothly, easily inserting Cassini onto its planned trans-Venusian trajectory.

    With launch behind it, Cassini commenced its voyage to Saturn. Although few of its instruments could usefully operate during the voyage except in an engineering capacity, those few which could, such as particle and fields instruments, were left running to gather what data they could, while the others were periodically tested to assure their continuing functionality. During its decade-long voyage to Saturn, Cassini slumbered as it flew by Venus, Earth, and then Earth again before finally being slung into the outer solar system. Bypassing Jupiter because of its intense, deadly radiation, which it had not been designed to resist, it was not until August 2004 that the probe finally stirred itself for its arrival at Saturn, jettisoning its atmospheric probe onto a Saturn-bound trajectory and then making a short rocket burn to prevent the main spacecraft from impacting the planet. As Saturn swelled ahead of the probe, more and more instruments were activated, checked out, and set to work collecting early data, until finally, just before Christmas, the spacecraft’s two parts arrived at the ringed planet simultaneously.

    Much like the Galileo probe before it, Cassini’s atmospheric probe slammed into Saturn’s atmosphere traveling tens of thousands of kilometers per hour, far above hypersonic speeds even in the thin hydrogen-helium upper atmosphere of the planet. Instantly enveloped in a plasma sheath stretching for kilometers, the probe was subjected to decelerations of hundreds of gees as it slowed to a more palatable speed. Once it had slowed enough that a parachute would not be ripped apart instantly on deployment, it fired a drogue chute through its backshell; moments later, the backshell and drogue detached and the main parachute spread itself, slowing the probe even further. Freed of its heat shield, the probe was now able to look around itself, exploring its surroundings with a variety of scientific instruments. Unlike its sibling, however, what greeted it as it began peering at Saturn from within was not a turbulent storm but the relatively calm southern midlatitudes of Saturn’s atmosphere. Although a thin haze surrounded it, and the ubiquitous and powerful jet stream winds were bearing the probe along, little else disturbed the probe’s descent through the atmosphere. A few minutes after opening up, it passed through a thin, high-level layer of clouds, before emerging once again into the open sky. As it fell, it constantly sampled Saturn’s atmosphere, probing its composition in great detail. Much like Jupiter’s, it was made mostly of hydrogen and helium--but that was not what most interested scientists. Like Galileo’s probe, what they were after was heavier, less common stuff: carbon, oxygen, argon, and other massive volatiles.
Surprisingly, given its position farther away from the Sun, in the more volatile-rich outer Solar System, Saturn’s atmosphere proved to have fewer volatiles than Galileo’s probe had indicated for Jupiter’s--although whether this was a real difference between the two planets or an artifact of the very different situations the two probes had found themselves in as they entered their respective atmospheres instantly became an ongoing point of scientific debate and argument.

    As the probe continued to fall, though, those debates were months in the future. The data needed to write the papers and create the conference presentations that would spur them on was still being transmitted to Cassini high above Saturn, not enlivening the memories of computer systems back on Earth. In the moment, the probe was still falling through the atmosphere of Saturn, slowed by its main parachute. Saturn’s lower density, and consequently lower gravity, compared to Jupiter meant that the probe was falling more slowly than Galileo’s probe had done while passing through similar pressure levels, even though for the same reason those pressure levels were located deeper within Saturn than they had been on Jupiter. If the probe continued to fall at the same stately speed, it would run out of batteries, terminating further data collection, long before it reached the deeper areas of most interest to scientists. As it passed through the one bar pressure level, roughly equivalent to sea level pressure on Earth, the solution Ames engineers had developed to this conundrum made itself known with the detonation of pyrotechnic devices around each of the risers connecting the probe with the main parachute, severing them in a single explosive action. No longer burdened by the parachute, the probe plunged away, deeper into the atmosphere, diving into the patchy but deep water ice cloud layer. Nearly an hour after it first entered the atmosphere, it ceased to transmit data, just as it had begun to indicate the tell-tale signs of a third cloud bank, composed of a water-ammonia mixture. The probe itself, like its Jovian counterpart, continued to sink into the planet until it eventually melted, then vaporized. With the probe’s signals cut off, Cassini turned away from Saturn and prepared itself for the most critical part of its mission yet: Saturn Orbit Insertion. As it passed through perikrone, its main engine, largely silent since launch, ignited.
After burning for more than an hour, it shut down again, having placed Cassini into a highly elliptical Saturn orbit. At last, more than a decade after launch, and almost two since the program had started, Cassini was ready to begin its mission.
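    The descent-speed argument above can be sketched numerically. Under its parachute the probe quickly settles at terminal velocity, where drag balances weight: v = sqrt(2mg / (rho * Cd * A)). Using rough textbook values for the two planets' gravity and one-bar temperatures--and an entirely hypothetical probe mass and parachute drag area, invented only for illustration--Saturn's weaker gravity (and its colder, hence denser, one-bar air) both act to slow the fall:

```python
import math

# Terminal velocity under parachute: v = sqrt(2 m g / (rho * Cd * A)).
# Planetary gravity and 1-bar temperatures are rough textbook values;
# the probe mass and parachute drag area are invented for illustration.

R_H2_HE = 3600.0    # J/(kg*K), approx. specific gas constant for an H2/He mix
CD_A = 15.0         # m^2, hypothetical parachute drag coefficient * area
MASS = 200.0        # kg, hypothetical probe mass
P = 1.0e5           # Pa, the one-bar pressure level

def terminal_velocity(g, temp_k):
    rho = P / (R_H2_HE * temp_k)          # ideal-gas density at 1 bar
    return math.sqrt(2 * MASS * g / (rho * CD_A))

v_jupiter = terminal_velocity(24.8, 165.0)   # near the 1-bar level on Jupiter
v_saturn = terminal_velocity(10.4, 134.0)    # near the 1-bar level on Saturn

# v_saturn comes out well below v_jupiter: the probe drifts down more
# slowly at Saturn -- hence the decision to cut the parachute away.
```

With these (made-up) probe numbers the Saturn descent speed comes out roughly 40 percent slower than the Jovian case, which is the battery-life problem the parachute-severing pyrotechnics were designed to solve.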

    High above the ringed planet, the spacecraft’s electronic eyes had a grand perspective from which to observe the changing, fickle nature of the second gas giant. Just as Galileo had shown Jupiter to be a world of vast, rapidly changing weather interleaved with longer and slower climatic cycles, so too did Cassini show that Saturn imitated its larger sibling. Around the north pole, a vast and curious hexagonal pattern surrounded a great and endless storm, fodder for endless speculation about alien lifeforms somehow manipulating the planet (although scientists quickly determined it was most likely merely a result of some strange fluid dynamics). In the south, a gigantic hurricane, complete with the first eyewall seen outside of Earth’s own atmosphere, occupied the pole, churning away endlessly, fueled by the planet’s rotation. Away from the permanent storms of the poles, other atmospheric disturbances rose, stormed (often to the accompaniment of powerful lightning bolts), and died away, none greater than one that struck nearly halfway through the probe’s mission. Quickly growing to enormous proportions, the “Great White Spot” wrapped itself around the planet’s northern hemisphere, attaining a behemoth span far greater than any other storm ever witnessed on any other planet, even the famous planet-spanning Martian dust storms--which this one would have swallowed whole a dozen or more times over. As Cassini watched, the planet’s temperatures and prevailing winds shifted with the seasons, just like the inner planets or Jupiter. Even though too little data could be collected to definitively explore every aspect of Saturn’s climate, what was collected was enough for a hundred theses and more papers, fueling academic investigation for years.

    Although exploring Saturn’s weather and climate was an important part of its mission, it was not the only or even perhaps the most important subject of Cassini’s explorations. After all, Cassini was in space, and from space only a vanishingly thin outer layer of the planet could be observed; even its probe could only penetrate into a single tiny region of the planet. Like the other giant planets, however, Saturn has a vast collection of moons, ranging from tiny specks of dust floating in its famous rings to the gigantic Titan, the second largest moon in the solar system. Comparatively open to Cassini’s observations, these, not the planet itself, had been the primary focus of Cassini’s mission since early planning on SOWP had begun. Again like the other giant planets, these moons proved to be far more varied and active than astronomers in the middle of the twentieth century, before they could be observed from close range, had thought, and, despite the revelations of the Voyager probes, even more than had been suspected only a decade or two earlier.

    Chief among the moons which Cassini was targeting was mighty Titan, by far the largest of the planet’s collection. The only moon in the solar system to possess an atmosphere of any significant thickness--indeed, thicker than Earth’s--Titan had also been a primary target of Voyager 1’s Saturn flyby, but had frustrated the probe’s observations with a thick layer of virtually opaque haze enveloping the entire globe. Scientists, although disappointed, had not given up their interest in the moon, and Cassini came prepared to pierce the haze by three methods. First, spectral analysis through a variety of methods had shown there were very narrow “gaps” in the haze at certain frequencies of infrared light, which Cassini’s optical instruments were sensitive to. By imaging the moon at those frequencies, pictures could be taken of deeper regions of the atmosphere, even the surface, a technique demonstrated by the Hubble Space Telescope in the late 1980s and early 1990s. Second, like the Venus orbiter VOIR, Cassini carried a radar capable of ignoring the moon’s clouds and hazes altogether to image the surface directly. This had been one of the highest-priority instruments for a Saturn mission since the discovery of the frustratingly opaque haze layer, despite its significant weight and power consumption, and its presence aboard Cassini had been a given since the very earliest SOWP design concepts. Finally and most dramatically, Cassini was not alone in its mission to explore Titan. It carried not just the single probe it had dropped into Saturn’s upper atmosphere, but a second, designed and built by ESA, intended for Titan and Titan alone.

    When ESA took on this task in the 1986 Memorandum of Understanding which finalized the exact arrangement of American and European contributions to Cassini, they were confronting one of the trickiest tasks ever faced by the designers of a planetary entry probe. Despite observations from Earth, the Hubble Space Telescope, and Voyagers 1 and 2, very little was known about Titan’s surface. The most intriguing observations were those of methane and ethane, light hydrocarbons that would rapidly break down in Titan’s upper atmosphere. If they were being seen, that meant that there had to be some kind of source at the surface. Some models suggested that this source might be alien volcanoes, erupting liquid methane or even water from within the surface, while others indicated that the moon might be englobed in a vast, cold ocean of methane and ethane, a strange and unusual sea--but the first to be found beyond Earth. Lacking certainty, ESA designed the probe against any eventuality. Huygens would be able to float on alien seas, survive landing on alien soils, and endure atmospheric pressures half again or more as great as those at Earth’s surface, in cold or heat, transmitting useful data from them all. Building on experience from the Mars Surface Elements they had built for the Soviet Mars 12/13 missions and information from NASA Ames, which had constructed the Galileo entry probe and was constructing the Cassini Saturn entry probe, ESA quickly went to work on the spacecraft. Like the rest of Cassini, they quickly ran into problems. Titan, after all, was a considerably different environment from the surface of Mars or the atmosphere of Saturn, and many of the necessary requirements had little in common with the areas they had drawn experience from. Moreover, Huygens was intended to last not for the months of the Mars Surface Elements but for mere hours, perhaps transmitting some data from the surface after its dramatic plunge through the atmosphere.
Although the Europeans had some experience with short-lived, battery-powered spacecraft, ensuring the necessary performance under Titanian conditions and after more than a decade in space was something else, and ESA welcomed the postponement of the launch with its own sigh of relief.

    Several months after entering Saturn orbit, Cassini ejected its remaining probe onto a Titan-crossing trajectory, then, a few days later, carried out a brief burn to remove itself from the danger of encountering the moon personally. A few weeks later, Huygens hit Titan, screaming into its atmosphere at thousands of kilometers per hour. Though it was traveling much more slowly than its sibling had at Saturn, a kilometers-long streamer of shocked plasma nonetheless burst into existence around the probe as it hit Titan’s atmosphere, trailing away behind it as it rapidly decelerated in the thin upper air. Within minutes, it had slowed enough for first the drogue and then the main parachute to deploy, allowing it to eject the rapidly cooling heat shield and begin collecting data. As it fell through thin haze, haze that would pervade the entire visible atmosphere throughout its mission, it detected winds rivaling those of the great Martian dust storms, though far from Saturn’s fury. Chemical samplers, greedily sucking up the Titanian atmosphere and putting it through complex equipment, found complex organic molecules throughout the atmosphere, already known from remote sensing but now sampled in greater detail. A thin ionosphere was detected at lower levels, probably the result of galactic cosmic rays hitting Titan’s atmosphere. As Huygens continued to fall, these instruments built up vertical profiles of wind speeds, atmospheric composition, and more, all the while radioing the data back to Cassini and to sensitive radio telescopes back on Earth, which tracked Huygens’ signals to provide a back-up wind measurement and determine its position.

    Nearly two hours after entry, as it neared the surface, its descent camera was finally able to penetrate the haze. Hindered not only by the haze but by Titan’s dim sunlight as it drifted downwards, it was nevertheless able to return the first actual pictures of Titan’s landscape ever seen on Earth. In its images were gently rolling, brightly-colored hills, etched with channels that appeared to have been carved by flowing liquids. Nowhere in sight were the extensive dark-colored areas that scientists had suspected of being lakes or seas, a disappointment for those who wanted a definite answer to their composition and state, nor any indication of liquid actually flowing through the channels Huygens was seeing, at least while it was descending. As it neared the surface, the photographs and surface data it was radioing to Cassini became ever more detailed and informative, culminating in a set of final images transmitted just as it was about to touch down, showing the Titanian surface in magnificent detail. Unfortunately, moments after touchdown, Cassini and Earth lost contact with Huygens, with no trace of the signal being detected between the projected touchdown time and when the orbiter would finally have descended below the horizon as seen from the landing site. A joint NASA-ESA board of inquiry determined that the most likely cause of the failure was inadequate rad hardening on the primary and backup radio transmitters during assembly, coupled with errors in the transmitter firmware that was supposed to oversee the transition from descent to landing operations. Both had been largely copied from the earlier Mars Surface Element probes, then modified to meet the different conditions that would be encountered at Titan in order to save money during Huygens’ development and assembly. The modifications, however, had not in fact fully insulated those systems against errors caused by the vastly different conditions Huygens encountered during cruise and operations. Although the inability of the board to examine flight hardware meant that this could only ever be a provisional finding, the fact that both transmitters had behaved erratically during descent supported their conclusions.

    Despite the disappointment of Huygens’ failure on landing, however, the data it returned, together with the other data returned by Cassini during its many flybys of the moon, proved a vast and valuable source of information on Titan, greatly refining scientific knowledge of the body. And besides, much as with Saturn itself, Cassini was intended to do more than just explore the largest of the planet’s many moons. All of the major moons of Saturn received their own flybys, from Mimas, at the outer edge of Saturn’s main rings, to Iapetus, the most distant and, after Titan, probably the most famous of Saturn’s large moons. Of these flybys and the discoveries they represented, the most unexpected was the observation of great geysers of water erupting from certain areas around the south pole of Enceladus. This small, icy moon, previously thought to be of little interest, suddenly found itself catapulted nearly to the top of the shortlist of planets and moons thought most likely to harbor life, behind only Mars and Jupiter’s moon Europa. Although it had elicited relatively little interest in pre-mission planning, now scientists talked about a possible future mission dedicated solely to exploring the moon, perhaps even returning samples from its geysers to Earth. For the moment, with the Artemis program and other probes occupying the agency, this amounted to little more than idle discussion, but, then again, without idle discussion at some point no space mission would ever have been launched. In the meantime, Cassini continued to explore the Saturn system, its mission repeatedly extended to allow it to spend ever more time harvesting yet more data on the entire system--the planet, its captivating and beautiful ring systems, and its multitude of moons. As it entered its second decade of operation, Cassini had not only a storied career behind it, but much to look forward to.
     
    Part III, Post 10: European launcher development of the 1990s
  • Good afternoon everyone! It's that time once again, and I think this week's post should interest people. Two weeks ago, we looked at the state of the Russian space program, with a particular focus on the transition from national service to competing in a commercial world. This week, we follow up on that thread a few hundred miles west as we check in on the European launcher program. Unlike IOTL, it's neither unchallenged nor dominant by default, and the result is a sharp question: how to position for the new millennium?

    Eyes Turned Skyward, Part III: Post #10

    The dawn of the 1990s arrived disconcertingly for the European Space Agency. At the beginning of the European international program, the partners had gathered to lay out a roadmap to develop a substantial program, capable of matching even the superpowers. On the manned side, a long-running and highly successful, if sometimes contentious, partnership with NASA had been key, while unmanned astronomy and planetary science missions had depended on partnerships not just with the Americans, but with Japan and Russia as well. However, the keystone of the endeavor had been the ongoing development of a string of commercially-aimed launch vehicles: Europa. Despite their troubled birth, by the introduction of the Europa 4 series in the late 1980s, ESA had forged Europa into a very successful family of rockets. Covering a wide range of capabilities, Europa was well-suited to European needs and enjoyed considerable success for European governmental and commercial payloads. Despite the success of the rockets in serving the European market, the core of commercial satellite development remained in the United States, where a plethora of launch options, many covering similar payload capacities as Europa, were offered by competing manufacturers. With little differentiation between Europa and American rockets on price--indeed, the new ALS Carracks promised to be cheaper than the lighter Europas whose payloads they matched--it was a challenge to win American launch contracts, and with very few exceptions American payloads flew on American rockets while European payloads flew on Europas. The battleground was thus markets like India, South Korea, Vietnam, and the Middle East--all growing economies looking to build satellite networks, but as yet unable or unwilling to build their own launch vehicles.
Without a launcher development program for the first time in its history, ESA had already been debating how to position itself to build market share in these areas when the fall of the Soviet Union sent shockwaves through the industry. The end of the Cold War meant that Russian manufacturers, desperately hungry for hard cash, were now free to bring their low costs and proven track records to Western markets--indeed, they were virtually required to by the chaos spreading throughout Russia. At the same time, the Chinese were also opening their own low-cost industry up to foreign payloads, opening yet another front that European manufacturers would need to hold.

    To compete in this newly complicated marketplace, Europa would need to change. However, unlike in the past, where the requirement had usually been for a more capable launcher to keep up with growing payloads, here the requirement was for a more competitive rocket, able to hold its own against a flood of low-cost competitors from the East and high-tech ones from the West. Unfortunately, this was not a problem solvable with more development funds and engineering--in fact, sometimes quite the opposite. The problems of Europa were largely administrative and logistical, requiring solutions that no combination of strap-ons could provide. Despite the strength that ESA’s financial support gave it, Europa’s operations were limited by this same support. For one, the necessity of having vehicles available to meet ESA members’ scientific and national defense needs meant that Europa’s schedule was somewhat constrained. In addition, the direct control of Europa’s operations budget by the ESA members meant that flexibility to invest in launch site improvements or to make modifications to the vehicles to reduce operating costs was lacking, replaced with extremely formal contracting and byzantine governmental budgeting processes. Finally, the structure of ESA’s logistics train meant high built-in costs, which in turn only further discouraged potential customers from selecting Europa for their launch needs.

    With the need to compete internationally clearly requiring a shakeup, the British and French, exercising new muscle thanks to reduced German funding of the combined agency, proposed a potential solution: instead of the government directly supporting and operating Europa, Europa’s suppliers would instead be grouped together under a new semi-private company, which the ESA member governments would hold stakes in. This separation would give the new company greater freedom to alter its operations to minimize costs and build commercial market share while still ensuring continued ESA access to native launchers. This new consortium--announced in March 1991 as EuropaSpace--set about its improvements with gusto. However, given the ongoing changes in the market and the potential improved technologies available as new Russian technologies were being examined and replicated by European companies like Rolls-Royce, the company initially focused not on new launchers, but on streamlining contracts and the supply chain for their existing families. With moves thus underway to reduce overhead and trim costs to lower the price at which commercial payloads could be offered, the main limiting factor on selling slots was the facilities at Kourou. With only a single launch pad available for the Europa 4 family, the maximum number of launches that could be performed of the type was roughly 8 per year. Thus, a 4 ton and a 2 ton pair of satellites (the most common sizes for satellites--“full” and “half” sized busses, respectively) launched independently would require nearly a quarter of a year’s launches on a Europa 44 and Europa 42 respectively. In the past, this had been somewhat typical, as launches were allocated as flights were sold. However, EuropaSpace, following the trail blazed by commercial Titans, moved to instead pair such payloads into larger dual-launched pairs aboard the larger Europas: 42u and 44u.
A “full” and a “half” would fit reasonably well on a 42u, while slightly heavier pairings could be slotted in on the until-then almost unused 44u configuration originally planned for manned Minotaur or future space laboratories and probe missions. In doing so, not only could twice as many commercial payloads be launched per year, but the cost of each would decrease. While a 44 and 42 would have required a total of two Griffin cores, six Blue Streak boosters, and a pair of Aurore upper stages, dual-launching on a 42u would cut that to just one, two, and two respectively--a substantial savings in hardware costs with only minor launch timing changes to the customer.

    However, even as the benefits of this focus on dual-launching were being reaped, the future of European launchers was being discussed at the highest levels, both by the ESA partner governments and by representatives of EuropaSpace. The Europa family, built on a legacy going back to the 1950s missile programs, was still something of a “second generation” launcher, with large amounts of craft work and extensive analogue steps involved in production thanks to tooling and facilities that had been in service since before the information revolution--some of the jigs, stands, and metalworking techniques used in assembling RZ.2 engines for Blue Streaks and Griffins (not to mention the engines themselves) had originally been designed and constructed based on calculations carried out by slide rule, with tubes for the regenerative nozzles being bent and welded by hand. In an era when automotive and aerospace engineering was increasingly making use of the benefits of automation and electronic controls, it was an anachronism. Thus, like the Americans and Russians, the Europeans too were looking towards the future and the potential for an overhauled “ELV3” third generation to incorporate the latest launch technologies and production improvements. The question, then, was what this third generation would look like. In the summer of 1993, this question came to a head in a series of technical conferences sponsored by ESA.

    The lead entry, supported by EuropaSpace, was a continuation of the past successes with expendable rockets, but updated to use the latest in manufacturing techniques and engine technologies. The proposals for such “Europa 5” concepts resembled in some ways the multicore families of the Russians and Americans--instead of a large-diameter core and separately designed boosters, the new family would instead be based on a single lower stage which would be clustered to meet the required payload capacities. EuropaSpace proposed this class to meet the existing Europa 4 ability to dual-launch current satellites, while also future-proofing against the growing number of 6 mT “supersized” busses by designing for an upper payload to GEO of no less than 10 mT--enough to dual-launch a pair of 4 mT full busses, or potentially a 4 and 6 ton pairing. With new staged-combustion engines, improved production techniques, and pad updates, EuropaSpace promised that the new vehicle would be able to match or beat American launch providers on cost, while meeting the capabilities of all competing launchers (even commercial Vulkan). (EuropaSpace, meanwhile, would receive ESA funding for all new tooling, pad improvements, and more--funding it would not receive for any less ambitious modifications, and thus a further benefit for the company.) However, while this was popular with the French and British governments, it was less so with Germany. The Germans had already suffered trims to the Minotaur program as a result of their reduced funding caused by diversion of resources to rebuilding the old East German territories. Now the new Europa 5 proposals promised to shift even more of the money involved in Europa--already limited mostly to the Astris third stage--to France and Britain in the name of “minimizing overhead.” However, the reasons for this were hard to argue against: Britain and France were the nations with the largest existing foundation for any new hydrogen or kerosene expendable rocket. Thus, German support ended up going to less “conventional” proposals, mostly involving some degree of reuse--particularly types divergent enough from conventional expendables that German aerospace manufacturers would be no less advanced than French or British companies.

    The main German support was behind their own native Sanger II project, which had examined a fully reusable two-stage-to-orbit system since the mid 1980s. The first stage was to have been a turboramjet-powered aircraft, which would have lifted the second stage up to Mach 6 and nearly 25 km before dropping it and returning to base. This would then have enabled the Horus second stage, a reusable delta-wing spaceplane, to continue on to orbit on hydrogen/oxygen rockets carrying a reasonable payload. The designs--both with the spaceplane upper stage and an expendable higher-payload version--had reached advanced conceptual stages, and the proposed turbo-rocket cycle for the carrier plane had seen initial demonstration on the ground in 1991. With such a design, Germany argued, Europe would be able not just to compete on a level playing field against the Americans, but to beat them by a wide margin--perhaps even beat the Russians and Chinese to make Europe the leader in spaceflight. However, support for such ambitious proposals didn’t break down entirely along national lines--there had been low-level French studies continuing on from the rejected spaceplane designs for the European cargo vehicle, while the UK had seen a program run by Rolls-Royce on a cooled turborocket of their own for use in a single-stage spaceplane called HOTOL. However, neither set of proposals had gained much traction in their native countries, and the Germans provided a strong backer for these programs, which otherwise were nearing abandonment.

    However, such revolutionary vehicles would require substantial investment to show any results at all, while also carrying huge development risks. Many of the proposals had very limited payload margins, meaning any overruns risked preventing them from making orbit, while the technical readiness of such hypersonic aerospace vehicles was much, much lower than that of conventional rockets or even existing supersonic aircraft--limited to computer simulations and sounding rocket tests serving as pathfinders for basic information. Given these constraints, even under the most optimistic development timelines, such a vehicle could not be in service before the mid 2000s. Thus, despite Germany’s fierce advocacy and the interest expressed by many individuals within the main ESA and EuropaSpace leadership and rank-and-file engineers, in the end Europa 5 was given the go-ahead, with a targeted entry-into-service of 1999. However, since risk reduction was much less expensive than full-scale development, Germany was able to secure a roughly 6:1 ratio of funding for Europa 5 development to development for an “X-plane” program under the Sanger name. This would be devoted to a subscale demonstrator of a Mach 6 turborocket vehicle called the Hypersonic Engine Demonstrator (HED) to prove out the carrier aircraft’s systems, and to development of a “stagelike” spaceplane similar to a subscale Sanger Horus upper stage. This could be tested with subsonic captive carry and drop testing, and potentially even supersonic carry and drop from a Concorde-derived carrier aircraft. While Germany had not achieved the full program they might have dreamed of, even this was enough to make them the center of European RLV development--a more than satisfactory outcome.

    The Europa 5 program proceeded fairly rapidly once approved. The Aurore upper stages of earlier Europas would be retained, though the HM-7B would see an improved vacuum extension and an overhaul to reduce part count and minimize manual assembly steps. Similar process improvements were applied to the stage structure--the number of separate welding operations in the assembly of the domes and barrels was reduced, and the remaining welds confined to fewer specifications to minimize reset times--reducing production costs and enabling a higher throughput of stages if needed. The major element, though, was the new first stage. Built using new 3.5m tooling, it would be based on a pair of the new Rolls-Royce staged-combustion kerosene engines, the RZ.4. This engine, designed with the benefit of Russian insight into staged-combustion cycle design, was roughly the size and form factor of the existing RZ.2 but produced substantially more thrust and had significant improvements in specific impulse. With three cores clustered into a Europa 53u configuration, it would be able to launch more than 8 mT to GTO, allowing for either a 4-and-4 configuration or a 6-and-2. Single core and 5-core configurations would enable it to support both the old Europa 4 range and expand the upper end to match the capabilities of the single-core Saturn Multibody and Vulkan. With conceptual design complete in 1994 and the final design approved, work began to bend metal. In order to avoid introducing hassles into the carefully streamlined launch operations at Kourou, EuropaSpace was able to secure funding for an entirely new integration and launch complex. This facility would enable Europa 5 to be brought online in parallel with Europa 4’s final launches, but was to be located so that once Europa 4 was retired, the old Europa 4 site could be re-activated to support Europa 5 as well if needed, opening up even more launch slots for sale at the launcher’s internationally competitive prices.

    Meanwhile, the Sanger program was proceeding along parallel tracks. The first was the construction of the Hypersonic Engine Demonstrator, a vehicle designed to be air-dropped from the back of a custom-modified A340 at altitude. It would then ignite its engine for a brief demonstration of hypersonic controls and thrust before burning out and falling into the sea. A successful series of these flights (using, of course, multiple vehicles) would demonstrate the basic principles behind the Sanger design in flight, a critical first step for justifying the approval of a full-scale vehicle. At the same time, work was proceeding on a “flight like” mockup of the orbiter, which was scheduled for a series of captive carry tests and drop gliding tests to demonstrate vehicle control and verify weight projections, as well as on other associated required technology such as the new higher specific impulse Vulcain upper stage engine from Snecma, the same firm building the then-current HM-7B. Conceptual design work on the two vehicles was completed in early 1995, and metal began to be bent on the four HEDs and the mockup Horus second stage vehicle. By the start of 1997, the first carry flights of the HED were beginning and construction of the Horus was nearly complete. However, the start of HED flight testing put the entire program in jeopardy. The first firing of a HED resulted in a partial success, though a seriously qualified one. While the drop was nominal, the engine lit, and the initial burn went as planned, after roughly a minute of flight the vehicle lost communications with the carrier and chase planes. After months of reviewing the data from the thousands of sensors onboard the HED, the issue was traced to a faulty seal between the engine and the vehicle’s exhaust path, which had failed to hold up under the loads of the engine at full power, venting combustion gasses into the body of the vehicle.
Not designed to withstand high-pressure gasses at hundreds of degrees Celsius, the avionics had melted moments before the fuel tank gave way and the entire vehicle ignited. Even as the Horus mockup began captive carry testing, the next few HED tests proved no more successful, with control issues and a compressor stall, respectively, dooming the vehicles to a watery grave before they could complete successful extended flights.

    With only one HED remaining and generally negative results thus far, 1998 saw a sharp re-evaluation of the Sanger program. The hypersonic carrier vehicle had proved a weak link, while the orbiter was beginning to look more plausible. The question was whether the carrier could be replaced, and suggestions abounded. The simplest would be a subsonic drop from the same modified A340 that had carried the drop-test vehicle--with such a launch, the vehicle would be capable of making orbit, though with virtually no payload, making it little more than a technology demonstrator. Alternately, the orbiter could be used to replace the Aurore second stage on Europa 5, which would enable downmass capability and could offer an alternative to man-rating the existing Minotaur for crew transport to station. In a third option, a supersonic carrier, potentially Concorde-derived, would enable a meager but potentially worthwhile payload with full reusability. However, modifications to the Concorde in order to enable it to carry a large external payload would be technically demanding given the age and sophistication of the type, and the expense could easily run to billions of dollars, leaving the existing Sanger budget far in the rear-view mirror. Even though most of the benefits and funding would ultimately flow back to British and French companies, the dominant Anglo-French coalition was cool to the cost, and the idea withered on the vine. Another option was the ambitious proposal of the British Rolls-Royce team, which had moved from HOTOL to Sanger, to start largely from scratch on a new HOTOL-like design that would be single-stage-to-orbit capable, completely eliminating the cost of the carrier vehicle, albeit at the cost of increased expense on the orbital portion. However, their proposed engine was still in very early development, and key areas including the heat exchangers would require advances well beyond the state of the art to reach even bench-testing.
While work had proceeded far enough that giving up entirely, especially given the potential of the technology and the competition from across the Atlantic, seemed not to be an option, the exact trajectory of the Sanger program into the new millennium--and into reality--was very much up in the air.
     
    Part III, Post 11: Commercial satellite communications from 1965 to the end of the Cold War
  • Good afternoon everyone! It's that time once again, and once again we're here with this week's installment of Eyes Turned Skyward. We've touched on several of the major commercial operators ITTL--ALS, Lockheed, EuropaSpace, and the Russians. This week, we're looking at the commercial market those launch vehicles serve--satellite communications--and how the growth of cheaper providers and past successes are leading people to speculate on new uses for the future. So, without further ado, let's get into position to beam down...

    Eyes Turned Skyward, Part III: Post #11

    For all practical purposes, the beginnings of commercial satellite communications can be dated quite precisely to the 6th of April, 1965, when the first satellite designed and built for Intelsat, the International Telecommunications Satellite Organization, was launched. While experiments had certainly taken place earlier, such as the well-known Telstar or the less-known Syncom, they had been just that, experiments, and not intended for real operational use. By contrast, Intelsat I--or Early Bird, as it was nicknamed--was designed to solve a real problem facing the eleven founding countries of what was at first known as the “Inter-Governmental Organization”: a serious lack of capacity in transoceanic communications. Before the development of the communications satellite, the only possible methods to transmit messages across oceans were the century-old technology of submarine cables or the more recently developed technology of radio, bypassing line-of-sight limitations by bouncing signals off of the ionosphere or the Moon.

    While both were serviceable enough, both also had serious problems given the technology of the time. Building and laying submarine cables, especially lengthy ones, is a slow, expensive business; it is difficult to maintain or upgrade a cable which may be miles underwater; and the copper-cored electrical wires then in use had a very limited transmission capacity. By way of example, by 1965 four different telephone cables had been run across the Atlantic Ocean, from stations in New Jersey and Newfoundland to France and the United Kingdom. Despite spending nearly a decade building the system, and despite many decades of experience with submarine telegraph cables, the four TAT lines could handle only about 500 simultaneous voice circuits, sharply limiting access to transatlantic telephony. Ionospheric or lunar relay radio had fewer problems with construction times, but suffered more from unpredictable day-to-day fluctuations in ionospheric conditions; one day one might be able to reliably connect halfway around the world, the next be hardly able to transmit even slightly farther than line-of-sight. Additionally, while cable capacities could theoretically be upgraded almost without limit by simply building more cables, and had been increasing on a per-cable basis (the third TAT had quadruple TAT-1’s capacity when laid), the amount of bandwidth available for radio transmissions was fixed by nature, and could never be expanded past a certain point without having to deal with excessive noise.

    From the point of view of the Intelsat nations, then, the capacities of the communications satellite were revolutionary. By itself, Early Bird was able to carry some 240 simultaneous voice circuits, increasing transatlantic telephone transmission capacity by nearly 50% in one fell swoop, while by the end of 1967 three more second-generation Intelsats--with the same capacity but twice the expected lifetime--had joined it in orbit, expanding Early Bird’s transatlantic service to transpacific and transindian routes as well. However, that was just the barest taste of what was to come, as the third-generation satellites, launched beginning in 1968, could carry 1,500 voice circuits each--roughly three times as many as all the transatlantic cables ever laid put together. Although in 1970 cable operators added a fifth cable, able to carry more than 800 voice circuits, Intelsat had already launched eight third-generation satellites, and in 1971 began launching a fourth generation--now able to carry 4,000 voice circuits and two television channels (while even the original Early Bird had been able to carry television signals, as shown by its use in the broadcast Our World, doing so required tying up dedicated telephone circuits). By 1974, Intelsat’s network could carry up to 20,000 phone calls and five television channels simultaneously, an exponential increase on what had been possible only a few years before, and the beginning of the vast increase in international communications which would continue for the rest of the century.

    While decried in later years as a bloated bureaucratic mire of a socialist organization, in truth Intelsat was responding quickly and with aplomb to what its customers wanted. It was just that its customers were not, at first, individual telephone users, or even large businesses, but instead entire national telephone networks: American Telephone and Telegraph, Post Office Telecommunications, Postes, Télégraphes et Téléphones, and more. As government-regulated monopolies or nationalized firms, their concerns were less those of individual customers, and more those of maintaining a solid, cheap-to-maintain network--certainly attributes that would benefit their customers, but not ones those same customers directly cared about. It was not until the deployment of the fourth-generation Intelsats, with their television transmission capabilities, that Intelsat started to really address large businesses directly, and even then its customer base was numerically dominated by large nationalized European television networks, with many of the same issues of corporate interest. At the same time, the number of countries involved in the Intelsat consortium had nearly octupled from its founding by its tenth anniversary, vastly increasing the number of “stakeholders,” as a later age would put it, and increasing the difficulty of deploying new systems and introducing new technology. While the first seven years of Intelsat’s existence had seen the development and then deployment of four distinct generations of satellite, each a significant improvement over its predecessor, the next nine saw only an intermediate IVA generation, providing 50% more voice circuit capacity per satellite than its fourth-generation predecessor, and then a fifth generation, which doubled simultaneous call capacity again. While not insignificant upgrades, they paled in comparison to the rapid rate of improvement of the earlier era.

    With Intelsat at once a government-mandated monopoly and stagnating in its own success, real interest--and money--turned during the 1970s to using the technology developed for the now “solved” problem of transoceanic communications to address more specialized communications issues. Shipping firms, oil and gas corporations, and other companies whose business depended on spending long periods of time away from fixed communications links were interested in smaller, more mobile earth stations, able to be mounted on a ship or easily moved by a truck to wherever communications might be needed. Large countries, like Canada, Australia, or Brazil, with huge areas of thinly populated land where building conventional wired or microwave links would be prohibitively difficult, were interested in using satellites to bring modern telecommunications to their most remote populations. Other countries, like Indonesia or India, with little existing telecommunications infrastructure, saw satellites as a cheap method of bypassing the time-consuming and expensive need to build conventional links. There was growing interest from firms involved in nationwide or international business in dedicated satellite links, offering potentially improved security against hostile eavesdropping or spying attempts and increased speed and reliability compared to conventional communications. RCA was beginning to develop the first broadcast satellite television system, heralding a wave of copycats to come in the next decade. And, of course, there was the American government, and especially the military, always interested in new, faster, and more reliable methods of linking together their ever-growing systems of airplanes, tanks, headquarters, satellites, and more into a single network.

    All this activity, even if it was mostly on behalf of government customers, drove rapid growth in the satellite business. Major satellite construction firms, like Hughes, General Motors, Ford, and General Electric, expanded their satellite production lines to accommodate rapidly growing demand, while launch vehicle manufacturers like McDonnell Douglas and Martin Marietta saw increased demand for their products. And, of course, a variety of new firms were founded to try to capitalize on this expanding business, both in building satellites and in providing satellite services. While attempts to break into the launch vehicle and satellite construction businesses were mostly unsuccessful, with the notable exception of American Launch Services, Inc. (which did not even attempt to address the communications business at the time), attempts to build new businesses addressing these new needs with new customers were far more rewarding. In the United States, especially, the chinks opening in AT&T’s long-held monopoly on long-distance communications opened a wealth of business opportunities for those cunning enough to seize them. By the mid-1980s, even Intelsat found itself suddenly faced with competition in the international market from American firms aiming at the most lucrative of satellite communication markets, while Intelsat itself had slowly taken aim at many of these new markets, offering specialized domestic and business services to new customers.

    The rapid churn and bustle of the industry through the decade raised hopes that the last twenty-five years of rapid growth in the business could continue virtually indefinitely. Although threatened by the recent deployment of high-capacity fiber optic links on domestic and international routes, which promised to erode the traditionally huge cost-per-unit-capacity advantage satellites had over conventional links, satellite communications was still far cheaper to roll out nationwide than any cable network, and seemed to have great promise in broadcasting, as with NBC Satellite and its copycats, and in cheaply connecting burgeoning markets in developing countries. More than this, though, the revival of an idea from nearly the dawn of the space age promised a vast new market to manufacturers and launchers alike, totaling as many as several hundred satellites over the next several decades.

    While the first serious proposal for satellite communication, by Clarke in the 1940s, was based on geostationary platforms, by the time work began on actually building such a network in the early 1960s, it faced competition from a newer AT&T proposal. Rather than large satellites in geostationary orbit, AT&T argued, a system of satellites based in low Earth orbit--like Telstar, a prototype funded and sponsored by AT&T--ought to be used for global satellite communications. While Telstar proved successful enough, AT&T’s monopoly position and the increased difficulty of coordinating and communicating with a system of rapidly-moving low Earth orbit satellites rather than fixed geostationary satellites led their proposal to be bypassed in favor of Intelsat’s geostationary network, and the idea fell into dormancy. It largely languished until the late 1980s, with new entrants into the business focusing instead on geostationary satellites, which could provide similar coverage at a much smaller overall cost, or, for some users at high latitudes, Molniya orbits to provide improved coverage.

    These advantages, though, came at a cost; more than twenty thousand miles from the Earth, and using relatively low-frequency radio bands, geostationary satellites required large antennas, several meters in diameter, to provide a two-way connection to earth stations. Although of little consequence for network backbone links, building-mounted antennas, or even ship- and aircraft-mounted stations, such a setup was obviously impractical for personal or small vehicle use. If, however, a network of low Earth orbit satellites--a constellation, in industry parlance--was built, with space terminals only a few hundred miles away from earth stations, a much smaller earth station could be built. So small, in fact, that it looked like a reasonable amount of technological development, well within the budget of major electronics and telecommunications firms, could produce a mobile telephone handset that would actually be a very small satellite earth station. In one fell swoop, providers could offer global mobile coverage without the massive investments in fixed infrastructure that would be necessary with a conventional system, potentially mushrooming their customer base and profits. Moreover, as was soon realized, if such a system was developed, it would offer another advantage: shorter latency. Ever since satellites had been introduced into the international communications market, customers had noticed the irritating lag unavoidably introduced by the distance of the satellites from the Earth when they made calls routed over them. Satellites in low Earth orbit, although necessarily more sophisticated and greater in number, would have virtually no lag compared to those in geostationary orbit, and only slightly more than conventional ground-based links. While this would merely improve the quality of voice service, it could be critical to new and quickly growing telecommunications services, perhaps even not-yet-invented ones, offering another market where an entrant could grow big, and one with little competition from any rival satellite firms.
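    The latency argument here is simple geometry: the signal must travel up to the satellite and back down at the speed of light. As a rough back-of-the-envelope sketch (the altitudes below are illustrative assumptions, not figures from the timeline):

```python
# Illustrative one-hop propagation delay: ground -> satellite -> ground,
# assuming the satellite sits directly overhead (the best case).
C_KM_PER_S = 299_792.458  # speed of light in vacuum

def one_hop_delay_ms(altitude_km: float) -> float:
    """Round-trip (up and back down) signal delay in milliseconds."""
    return 2 * altitude_km / C_KM_PER_S * 1000

geo = one_hop_delay_ms(35_786)  # geostationary altitude
leo = one_hop_delay_ms(780)     # an assumed low-Earth-orbit constellation altitude

print(f"GEO one-hop delay: ~{geo:.0f} ms")  # roughly a quarter second
print(f"LEO one-hop delay: ~{leo:.1f} ms")
```

Even before adding switching and routing overhead, the geostationary hop alone costs nearly a quarter of a second each way, which is exactly the lag telephone customers complained about; the low-orbit hop is well under a hundredth of that.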

    Together, it seemed clear that the next big thing in satellite communications--indeed, in the entire field of telecommunications--would be building these constellations. Electronics giant Motorola was already investing heavily in its own in-house satellite telephone project, while a plethora of smaller firms and investors were following close behind. This momentum, in turn, was beginning to trickle down to the launch vehicle business, where the ongoing recession and end of the Cold War were winnowing the field of competitors, leaving only the strongest standing but limiting capacity. With a new age of brilliance on the horizon, though, investors soon forgot the last round of failures and once again began to look skyward.
     
    Part III: Post 12: Lunar exploration and planning in preparation for the Artemis program
  • Good afternoon, everybody! It's that time again here, and I'm very pleased to be bringing you this week's Eyes Turned Skyward post. We've covered a lot of the political wrangling on the manned side of the Artemis program, but we've yet to touch on the unmanned missions that will precede Artemis back to the moon in preparation. That changes this week with another of Workable Goblin's amazing probe posts--I hope you all enjoy it as much as I always enjoy seeing these come together. Anyway, without further ado.....on to the moon!

    Eyes Turned Skyward, Part III: Post #12

    Even before the publication of the Exploration Report at the beginning of 1990, it had become clear that any future human missions to the Moon would be preceded by a wave of robotic explorers. Despite the Moon being the second-most explored world in the Solar System, behind only Earth itself, the American and Soviet missions of the 1960s and 1970s had left many open questions behind them, ripe for answers from new missions, as well as new problems that mission planners of a previous era had never known to confront. While many of these questions and problems could be answered or addressed without any precursor missions, or safely deferred until crewed flights, some loomed as open issues that could delay or derail future missions before they even left the ground.

    Among the most serious of these was the lunar dust issue; while a few scientists had predicted that the lunar surface would be coated with a large amount of dust, they had believed that this would prevent successful soft landings on the Moon, with any spacecraft sinking instantly into an ocean of fine grains. The actual problem of sticky, sharp-edged particles coating all surfaces and damaging seals and joints went unsuspected until the Apollo missions, when astronauts had to confront their spacesuits and spacecraft becoming rapidly fouled by lunar dust. The similarity of the dust to the agents behind diseases like silicosis and black lung disease on Earth raised further questions about the safety of extended habitation on the lunar surface. Besides these looming technical problems waited a scientific question, spurred by observations by several Apollo missions of strange structures--variously referred to as “bands” or “streamers”--around sunrise or sunset. Some scientists had proposed that these odd formations of light could be created by sunlight falling on dust particles levitated from the Moon by electrostatic forces; if so, this effect might also explain a number of other observations, not only by the Apollo and Surveyor programs but perhaps even by earlier astronomers. If lunar dust did levitate and move over the surface of the Moon, this would also have an impact on designing systems to resist the abrasive and damaging effects of the dust, particularly systems that would be expected to be stationary for long periods of time. Therefore, while solving the dust problem would be a matter of engineering, researching the dust question would play a role in that engineering, and determining the exact properties and behavior of the dust would be a useful task prior to any human missions being launched or hardware being built.

    Somewhat smaller in scale loomed the mascon problem. Unlike Earth’s relatively smooth gravitational field, the lunar gravitational field had proved to be “lumpy,” with many areas of higher or lower-than-average field strength. This made low lunar orbits highly unstable, in contrast to their Earthly counterparts, forcing probes to spend more propellant to remain in orbit for a mission of a given length, rather than ending up on the lunar surface. While not ultimately a huge problem, this lumpiness had spelled the doom of several clever concepts involving subsatellites which, with little on-board propulsive capability, would quickly crash into the lunar surface. In any case, a better lunar gravitational anomaly map would help mission planners optimize orbit-keeping requirements, saving precious kilograms of propellant that would otherwise be needed for stabilizing orbits. Such a map would also be valuable to geologists, who could compare the hidden subsurface features revealed by gravitational anomalies to surface maps and compositional data to infer new facts about the lunar interior. As with quantifying the lunar dust environment, producing a high-precision map of the lunar gravitational field would be a valuable input to human missions.
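    Those "precious kilograms" fall straight out of the rocket equation: every meter per second of station-keeping delta-v a better gravity map saves translates directly into propellant mass. As an illustrative sketch (the spacecraft mass, specific impulse, and delta-v budgets below are assumptions for the sake of the example, not figures from the Report):

```python
# Illustrative: propellant cost of orbit maintenance via the Tsiolkovsky
# rocket equation. All numbers here are assumed for illustration.
import math

G0 = 9.80665  # standard gravity, m/s^2

def stationkeeping_propellant_kg(dry_mass_kg: float, dv_m_s: float, isp_s: float) -> float:
    """Propellant mass needed to deliver a given delta-v to a spacecraft."""
    return dry_mass_kg * (math.exp(dv_m_s / (G0 * isp_s)) - 1)

# A hypothetical 1,000 kg orbiter with a 300 s storable-propellant engine:
mapped   = stationkeeping_propellant_kg(1000, 100, 300)  # assumed budget with a good mascon map
unmapped = stationkeeping_propellant_kg(1000, 400, 300)  # assumed budget with a crude one

print(f"~{mapped:.0f} kg vs ~{unmapped:.0f} kg of station-keeping propellant")
```

Under these assumed numbers the difference is over a hundred kilograms of propellant, mass that could otherwise go to instruments or margin, which is why the gravity-mapping precursor earned its place in the plan.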

    New technological developments and new mission designs had created new challenges as well. While planners of the 1960s had largely assumed human involvement throughout mission operations--even in lunar base development scenarios, cargo landers were often assumed to be guided down by a human pilot--the rapid improvement of microelectronics since then had led to a new assumption of significant automation throughout mission operations. Return capsules waiting in orbit would be uncrewed; cargo landers would automatically deliver themselves. Even where there was human involvement, it might be remote and distant, employing workers in office buildings instead of spaceships to teleoperate equipment on the Moon. However, even with the gigantic jumps that had taken place in computer technology over the past two decades, automated systems were still less flexible and responsive to unexpected events than human-controlled ones. If automation was going to be heavily utilized in a return to the Moon, efforts would need to be taken to ensure that these automated systems would never face an unexpected event; that when a lander landed or a rover roved, it would never find a boulder in its landing ellipse or a surprise hill to climb. That the guidance systems of these spacecraft would always be able to find their way to where they needed to be.

    Modern mission planners were also more ambitious than those of previous eras. Where Apollo planners had been content enough to design a system that could land a man on the Moon and return him to Earth, modern planners wanted to do that and maximize scientific return. Missions to the lunar poles, where vast deposits of ice might exist, or to the lunar far side, with its vastly different landscape and unusual topography compared to the near side, posed an entirely new set of challenges, among the greatest of which was communications. During Apollo, communicating with the Earth was, for the spacecraft on the Moon, relatively simple: they needed merely to point an antenna and transmit. For a mission among the cragged mountains and permanent shadows of the poles, however, or on the far side where the Earth never rises, adopting such a solution would leave Earth out of contact with its explorers for weeks, a clearly unacceptable option. What was needed were communications satellites, just like on Earth, orbiting the Moon to provide a relay to the far side or the terrain around the poles. Such satellites could also serve as navigational beacons, helping to improve the precision of celestial navigation for lunar surface explorers and lunar landers.

    These problems were all on the mind of mission planners and engineers as they prepared the Exploration Report, and as a result the Report proposed a series of lunar missions to help resolve outstanding questions and set up the infrastructure needed for sustained exploration. As a follow-on to the Lunar Reconnaissance Pioneer, a pair of orbiters would be sent to the Moon in the mid-1990s. Unlike the LRP, which could only map the near-side gravitational field through careful tracking of its Earth-bound signals, these two would communicate with each other to map the far-side field as well, and at higher resolution. They would also carry cameras to resolve proposed landing sites in just the sort of exquisite detail needed for automated precision landing, and instruments to help resolve the question of whether or not there really was water ice in permanently shadowed craters at the lunar poles. While complete confirmation would have to wait on a geologist or probe actually collecting samples and returning them to Earth for analysis, scientists had dreamed up numerous techniques to increase or decrease confidence in the presence of ice which could be carried by an orbiter, some of which would fly on the proposed spacecraft. Later in the decade, only a year or two before the beginning of Artemis operations, a set of communications satellites would need to be launched to support surface activities. In an early appearance of EML-2 in Artemis planning, the Report suggested that it might make a good position for a communications satellite constellation; only four or five satellites would be needed to achieve complete hemispherical coverage, and station-keeping demands would be less than in low lunar orbit, saving a considerable amount of money in constructing and launching the relay spacecraft.

    The Report was more vague about possible robotic surface operations, suggesting that rovers or sample return missions might be dispatched to some proposed landing sites to investigate whether or not they were suitable for human missions, that fixed landers might carry prototype resource-processing payloads, that they might be used to investigate possible methods of mitigating dust impacts, or that they might be used for certain high-risk missions--one possibility mentioned in passing was a “rock climber” mission that would dangle an instrument package down one of the “skylights” found by LRP to investigate the interior of a lunar lava tube. Ultimately, however, the Report was palpably uncertain about the value of surface precursor missions compared to orbital ones, suggesting idea after idea but then stating that they needed further study before they could be accepted or rejected as part of the final plan.

    As NASA moved from developing a plan to convincing the Bush Administration--not to mention Congress--to support it, other interests beyond the purely technical began to make themselves known in precursor planning. President Bush’s longstanding interest in foreign policy, coupled with the ongoing success of the Freedom collaboration, led to suggestions from the State Department, senior Administration officials, and the President himself that NASA pursue international cooperation in a return to the Moon. Outside of government, a variety of individuals and groups similarly proposed that Constellation include a substantial international component, ranging from Carl Sagan’s optimistic vision of joint American-Soviet missions with perhaps some European and Japanese contributions to more hardline or pessimistic views of mostly American missions with maybe a few instruments or devices from overseas. The Exploration Report itself had suggested that international collaboration be studied, but the Office of Exploration had largely considered such questions as falling beyond its competence, assuming generally that any mission would be basically American with perhaps some token international involvement. Now, the question of what form that involvement would take was rearing its head, and NASA began to reach out to ESA and ISAS to begin to answer it. Tentative contacts were even made with Soviet space authorities, with whom President Bush had some idea of forging agreements to help prevent the spread of advanced weapons technology, but the chaotic environment of the slowly collapsing Soviet state prevented firm agreements from being made.

    Encouraged by their important role in the construction of Space Station Freedom, both Europe and Japan insisted on playing more than a token role in the upcoming lunar missions, going beyond the modest limits set by the Office of Exploration. While neither had much appetite for replacing the most expensive and critical American contributions--the launch vehicle, the transport capsule, the lunar lander--they were more than willing to argue for Mitsubishi building lunar rovers or Zeiss building camera optics, important but relatively simple and cheap elements of the mission. Both also seized on precursor missions as an area where they could possibly make outsized contributions, digging up lunar mission proposals that their own scientists and engineers had made in the past and reworking them to fit in the framework of Project Constellation. A European proposal to build a small ion-propelled spacecraft as a technology demonstrator prior to the operational use of the engines, which had been forestalled by the approval of Piazzi as a major European mission, was resurrected and reworked as a pair of spacecraft for gravity mapping, for example, while a Japanese proposal to send a stretched version of their Halley probes was suggested as one method of investigating the scientific side of the dust question.

    These efforts to secure an important place for its partners at the table intersected with growing discontent in Congress at the scale and cost of the proposed American dual-orbiter mission. Facing a dearth of missions beyond Cassini’s launch in 1994, JPL quickly went to work to secure its position in Project Constellation, trying to quickly set the mission design. The result was a “Christmas tree” of a complex probe with many instruments, able to address virtually every outstanding question possible, albeit at considerable expense. The same impulses that led Congress to reject “Option B” and a commitment to lunar bases in favor of cheaper sorties also led them to reject the inevitably costly JPL dual-orbiter mission. Offers by America’s allies to supply far less costly spacecraft to address some of its roles were a potent weapon in Congressional arguments against NASA’s spending; why, they asked, shouldn’t NASA save $250 million here and $150 million there by taking them up on their offers? Under Congressional pressure, and with little Administration commitment to a particular architecture, NASA crumbled; the JPL orbiter was downscoped to address just two questions, those of the presence of water ice and landing site preparation, while the ESA and Japanese proposals were accepted as part of their contributions to the Artemis Program, much as they had contributed to Freedom.

    Just as this agreement was hashed out, however, events conspired to force even more international work on NASA. The collapse of the Soviet Union had led to worries that, in the chaotic economic state of post-Soviet Russia, the advanced military technologies of the Soviets might be sold to rogue states or terrorists, allowing them to strike with ballistic missiles or even nuclear weaponry. These fears had been stoked by the deals made between the Russian space industry and India and China to provide technical assistance to the space programs of the latter two countries; while the transfer of technology to two nuclear-armed states already in possession of ballistic missile technology posed little risk of proliferation, it seemed an ominous sign to those in Washington and Brussels that the Russian arms industry might be overly morally flexible for their tastes. To forestall this possibility, European and American politicians agreed that they needed to inject their own funds into the Russian weapons industry, keeping scientists and engineers working on dual-use technologies gainfully employed rather than assisting Kim Il-Sung or those of his ilk in building ICBMs and nuclear warheads for them.

    With spaceflight a major nexus of dual-use technologies, one significant arm of this effort was in ensuring the Russian space industry remained focused on satellites and launch vehicles. Despite Gore’s turn away from Mars, the joint Russian-American Fobos Together mission that had been proposed in the last year of the Bush administration as part of the Ares Program was steadily moving forwards, and efforts were made to find other areas of possible cooperation. As an ongoing and not yet entirely defined program, Artemis was the natural choice for the State Department to search for possible areas of cooperation between NASA and Roscosmos. Although a range of ideas was proposed, such as NASA use of the larger Vulkan variants for translunar launches or Russian-built surface habitats or hardware, attention quickly narrowed to simpler, more modest areas where the American and Russian programs could cooperate. One area highlighted by the discussions was communications support; besides the Soviet deep-space communications network, which could be repurposed to support Artemis operations as an additional backup, the Soviet Union had developed and built its own communications satellite industry completely independently of the West, with attractively low costs compared to Western manufacturers. While some modifications would be needed to the equipment being developed for Artemis to allow relay through Russian satellites, given the early stage of design and construction these changes would be relatively straightforward, simple, and, therefore, cheap to implement. Along with the provision of engines for the lander upper stages, the Mesyat communications network, named after a lunar goddess of the pre-Christian Slavic religion, would be one of the largest contributions made by Russia to the Artemis Program, earning them a seat on one of the lunar flights, as with Europe and Japan.

    Even as negotiations between the two former adversaries were moving forwards, so too was development and construction of the precursors. The Richards-Davis report supported the division of labor that NASA had been implementing, finalizing the number of precursor probes at three: JPL’s imaging/ice spacecraft, ESA’s gravity mapper, and ISAS’ dust explorer. Other precursor proposals were discarded and left as little more than historical curiosities for those of a later age to wonder about, any scientific questions they might have addressed left for human missions to address.

    With work on Cassini turning from construction to final launch preparations, JPL’s program hit the ground running with a fully engaged and ready workforce. With what were essentially two separate missions assigned to the same spacecraft and a hard deadline of 1997 for launch, so that the probe’s data could be fully processed and ready before it needed to be used for the actual Artemis missions, JPL was under a great deal of pressure to deliver on time and under budget, something its last several missions had had trouble with. While neither the optical side of the mission--imaging proposed landing sites in considerable detail to detect obstacles and build navigational charts for future landers to use--nor the ice side--integrating several different theoretical methods of detecting water ice to avoid possible bias and error--posed any especially new problems to the laboratory, the pressure-cooker environment and subordination to human spaceflight goals were new, or at least unwelcome reminders of a distant past they had done their best to shed since the 1960s.

    Therefore, even as development proceeded exceptionally smoothly, at least by the standards of planetary exploration, the mood around the lab was tense. As the biggest and most important project keeping JPL afloat, the Lunar Ice Observer, as it was known, occupied pride of place, but unlike its predecessors its position was tenuous and contested. There was constant worry, especially from those not directly involved in the project, that LIO might be a Trojan Horse for tighter central control over the famously independent JPL, that it might represent the first stage in a decline of American planetary science--with the launch of Cassini and the MTRs in 1994, JPL had no independent planetary science missions in planning or development for the first time in decades--or other, more fantastic fears. These fears were further stoked, ironically, by the low-key nature of the technical challenges involved; solar power, minimal delta-V requirements, and short duration (by the standard of most of the lab’s recent missions, at least) provided no opportunity to really show off JPL’s technical prowess and prove that it was still a valuable member of NASA.

    Opportunity, however, was soon to come. Shortly before the demise of LRP in 1993, several of the mission’s scientists proposed a novel and, even better, cheap and quick method of checking whether the mission’s apparent detection of water ice in polar craters had been correct or a misinterpretation of a spurious signal, suggesting that the probe be deliberately targeted on one of the craters in question at the end of its mission. During its impact, it would churn up and vaporize a certain amount of material from the crater surface, among which might be some water ice, which in turn could be spectroscopically detected from telescopes on Earth trained on the Moon’s southern limb. As it would potentially add a significant amount of scientific value to the mission at virtually no extra cost, the mission modification was quickly approved, and LRP was duly crashed into a crater near the lunar south pole. Unfortunately, the results were negative, although supporters of the lunar ice hypothesis were quick to point out that many circumstances could have led to a negative reading; the crater targeted might not have had extensive ice deposits, for example, the deposits might be patchy and by chance the probe had not hit any of them, and so on and so forth. Rather than the conclusive end to the lunar ice debate that planners had hoped for, the experiment became just another datum for scientists to bicker about.

    Nevertheless, LIO designers at JPL took note of the innovative approach, and quickly came up with their own method of using it. Rather than crash their spacecraft, which in any case had a lengthy mission of its own ahead of it, they would crash the transfer stage used to inject the probe onto a translunar trajectory, much like later Apollo missions had done with their S-IVB stages. And, by shaving weight off of the main probe and taking advantage of the extra capabilities of the new Delta 5000, they would be able to include an extra, simple spacecraft on the stack, just enough to follow the transfer stage in and analyze the results from extreme close range before adding its own punch. This could help detect trace or faint signals of water ice in the plume that might otherwise elude Earth-based telescopes, not to mention widening the selection of target craters and improving targeting precision.

    When LIO launched aboard a Delta 5000 in late 1997, this subsidiary probe, now named the Ballistics Lunar Analysis SpacecrafT, or BLAST, tagged along, mounted on the tip of the Centaur transfer stage. After putting LIO on a lunar transfer trajectory, BLAST, together with the Centaur, separated and adjusted their course, looping around the Moon as LIO put itself into orbit to optimize their eventual impact trajectory. A few weeks after launch, after several more gravity assist passages, BLAST separated from the Centaur as it neared the Moon, this time bound not for a fly-by but for impact. As the Centaur itself hit, LIO itself rose up above the lunar horizon to watch as BLAST flew through the plume and then into the Moon itself in a bit of choreography that had been arranged through the multiple lunar flybys to provide the best data possible for scientists on Earth. Unfortunately, that data, again, failed to definitively end the debate. While NASA claimed significant evidence for water in their observations, skeptical outsiders questioned their conclusions, a matter not helped by the smaller than expected plume, which was only just detected by the largest Earth-based telescopes trained on the predicted impact site. With no corroborating data from outside parties, it was up to LIO itself or, as a last resort, the Artemis missions to finally show whether or not ice really exists on the Moon.

    LIO delivered. Besides a powerful built-in SAR array, designed to help overcome the problems with LRP’s bistatic radar experiments, LIO carried an array of particle spectrometers designed to extend and supplement LRP’s observations, particularly by detecting hydrogen, one of the elements that make up water. Since hydrogen is rare in lunar regolith, while oxygen is highly abundant, any areas of concentration would be of interest even if the hydrogen was not bound up in ice deposits, although water ice was the most likely and plausible method of binding large amounts of hydrogen. During repeated passes over shadowed craters identified during LRP’s mission, these spectrometers discovered significant evidence of large concentrations of hydrogen, and therefore water ice, along with very unexpected findings that seemed to indicate a relatively large amount of hydrated, or water-bearing, minerals on the lunar surface, especially around the poles. Concurrent investigations on material from the Apollo missions, especially Apollos 15, 17, and 18, which visited formerly volcanically active areas, showed that previous studies had grossly underestimated how hydrated lunar interior materials could be, providing substantial evidence of volatile presence in ancient glass spheres from lunar fire fountains. In fact, these results seemed to indicate, the Moon’s interior is about as volatile-rich as Earth’s, with expected primordial abundances in the material similar to those found in basalts erupting from Earth’s mid-ocean ridges.

    In parallel with these studies, LIO was producing other useful work, with the radar array also being used to characterize the radar appearance of possible landing areas and other regions for later use, not only in support of landings but also to improve knowledge of the lunar near-surface and its properties under radar illumination, to avoid possible future misinterpretations of radar data. The camera, of course, was providing hugely detailed imagery of great swathes of the surface around possible landing sites and other locations of interest, extending LRP’s imaging of the landing sites of the Apollo, Surveyor, and Luna missions. And with LIO’s other results, these possible landing sites were increasingly clustered around the poles, which had become the top scientific targets for Artemis missions. Indeed, four of the five top scientific objectives of the Artemis missions, on the brink of finally launching, were directly or indirectly related to absolutely confirming and characterizing the lunar ice deposits LIO and LRP had discovered. While JPL was still wary about becoming too involved in the human program, LIO still stood as a significant and very public success even as Cassini continued to wind its way towards Saturn and Liberty continued to hang from its lander-delivery platform on Mars.

    In parallel with JPL’s work on LIO, engineers and scientists in Japan and Europe were developing their own spacecraft. Thanks to ESA’s considerably greater experience in planetary exploration compared to ISAS, the GRavity and Interior Magma Analysis at Long Distance Investigation, or Grimaldi, named for Francesco Maria Grimaldi, one of the originators of the modern system for naming lunar features, was progressing much more smoothly than its Japanese counterpart SELENE, despite the relative technical simplicity of the latter probe. The modifications needed for SELENE’s probe body to survive the considerably different thermal environment of low lunar orbit, not to mention the significant structural changes necessary to support the planned instrument package, were proving more difficult than anticipated, while it was increasingly clear that the Japanese budgetary situation would never again be as free and liberal as it had been during the 1980s. These factors combined to cause repeated problems for the Japanese spacecraft, worrying mission planners in Houston and Washington, D.C., who wanted the information on the lunar dust environment that it would provide to help guide their design of key surface equipment such as space suits and airlocks. Scientists were also worried that it might be delayed, and ultimately unable to sample a relatively pristine, human-free lunar atmosphere, decreasing the utility and scientific value of its results. Nevertheless, the Japanese continued plugging along without significant outside support, diverting resources from other, less critical programs towards SELENE to ensure that it launched on time.

    In any case, the problems the Japanese were having with SELENE paled in comparison to the ones the Russians were having with their communications satellites, probably the most important of all the precursor spacecraft. Unlike the others, these were an absolute requirement for human landings at many of the proposed Artemis mission sites, and would be required before any farside, polar, or limb missions could launch. While Russia certainly had the technical expertise and historical experience to build such spacecraft, the difficult financial state of its space program made it hard to bring that experience and expertise to bear, and NASA was repeatedly forced to beg Congress to appropriate more funds to assist Roscosmos in ensuring the satellites were built on time, at the same time it was being forced to seek increased appropriations for the joint Fobos Together mission. While nonproliferation concerns continued to weigh heavily in Congressional minds, especially after 1994, of greater weight was the simple fact that it was too late for the United States to change course. Having assigned responsibility for the communications network to Russia, it would now be slower and more expensive to begin development of an American alternative than to simply cough up the necessary funds for accelerating Mesyat development.

    Nevertheless, cost overruns in Mesyat had consequences. In an effort to shave expenses, Congress repeatedly considered making up the difference by cutting the budget of other NASA programs, with Fobos Together being a particularly popular target. While these cuts were staved off by narrow margins--in one case, in fact, the Senate failed by only a single vote to pass an amendment which would have canceled the program altogether--they added fuel to the fire of the program’s already troubling issues, contributing significantly to its later problems. In the end, though, Fobos Together, like Mesyat, would continue, Russia too vital a partner and the problems at hand too important to allow temporary concerns to override sound diplomacy.

    As budgetary conflicts and technical issues were challenging two of Artemis’ international partners, though, the third was racing ahead. Only a few months after LIO, the Grimaldi probes left Kourou aboard a Europa 4 in a picture-perfect launch towards the Moon. Quickly settling into a close lunar orbit, they immediately set to work, using a high-precision data link between the two spacecraft to track the tiny changes in distance induced by lunar gravity. As accurate on the far side as the near, within a year Grimaldi had produced a revolutionary new gravity map of the lunar surface, far more detailed than any previously created. This data not only informed mission planners of useful, stable orbits to use during transfers between the lunar surface and EML-2, but also had tremendous scientific value. As on Earth, it could be used to trace the Moon’s subsurface structure, and within months of the release of the first version of the Grimaldi map, it had already begun to help scientists better understand the history of lunar impacts by describing the subsurface structure of major lunar impact basins. As the Moon is a key point of reference for describing conditions across the early inner solar system, this work had implications for research involving Mercury, Venus, Mars, and even the asteroid belt, besides the Moon and Earth themselves.

    Towards the end of the year, Japan’s SELENE, now renamed Kaguya after the well-known moon princess of Japanese legend, joined the growing constellation at the Moon after launch atop a Japanese Mu-IV rocket, a new solid-fuel vehicle growing out of Japan’s efforts to develop a domestic capability to build the boosters used by the H-1 and new H-II rockets. Despite its relatively simple and small instrument loadout, Kaguya would be the last and in some ways the most important piece of the scientific puzzle, addressing the long-standing dust issue. While it could not, of course, measure dust levels at the lunar surface, it could still indirectly answer important questions about dust behavior. Kaguya’s scientific results could also answer important questions about the behavior of the atmosphere and dust of other solar system bodies, particularly those like Mercury, Triton, or many of the other moons of the outer solar system with no more than a very tenuous envelope of gases. It discovered significant levels of charged dust at low altitude near the terminator, explaining certain curious observations by the Apollo astronauts, and also studied the composition of the dust (similar to the surface regolith) and the thin lunar atmosphere. Despite these relatively limited results, the mission still provided information valuable to the manufacturers of the equipment that would be used on the Moon and marked an important step forwards for the Japanese space program as only its second mission beyond Earth orbit.

    Besides these three successes, 1997 had one final piece of good news for the Artemis program; in December, the first set of Mesyat satellites to launch had finally arrived at Baikonur. While check-out and mating with their Vulkan launch vehicle would delay their arrival at the Moon until April of 1998, this was a welcome development for mission planners anxiously awaiting Mesyat’s deployment. By early 1999, the five satellites of the Mesyat network--four primaries in a halo orbit around EML-2 and a fifth on-orbit spare--had been delivered, providing global communications coverage to the lunar surface.

    Altogether, as the millennium wound to a close, a new era of the “armada” was dawning. Unlike the “comet armada” of 1986, or the Mars and Venus concentrations of an earlier era, though, this one had been planned, and each element was part of a greater whole. Russian communications, European gravity observations, Japanese dust research, and American imaging and spectroscopy were all working together, each contributing its piece to the larger effort. And behind them, as the last year of the 20th century began, were humans: building hardware, studying landing sites, and practicing for extended missions on the lunar surface.

    It had been a long time. But they were returning.
     
    Part III, Post 13: Russian, Indian, and Chinese space activity, and operations aboard the Mir and Freedom space stations during the 1990s
  • Well, everyone, it's that time once again and thanks to some assistance from Workable Goblin on getting this hammered into shape over the past week, we're ready with this week's Eyes post. Last post, we looked at the international flotilla of lunar precursor probes preceding Artemis to the moon. This week, we're looking at the international operations situation back in low Earth orbit. I hope you'll find it worth the wait.

    Eyes Turned Skyward, Part III: Post #13

    While some of the effects of the dramatic geopolitical and space policy changes of the late 1980s and early 1990s made themselves felt immediately, many took much longer to begin to impact launch and orbital operations. With design, development, manufacturing, and processing standing between the beginning of any effort and its final realization in orbit, it simply took time for many of the major changes underway to show their effects. In Russia, by 1992 the exchanges and bargains made by Chelomei had finally begun to bear fruit. Indian preparations for Neva’s role in their new Polar Satellite Launch Vehicle (PSLV) had always been out ahead of the Russian development of the first stage/core. By 1993, Indian factories were already gearing up for production to begin as soon as Russian engineers could complete testing of the RD-161 first-stage engine, then underway on test stands. India’s own contributions to the PSLV, native stages based on Russian-provided hypergolic engine designs, were moving ahead apace, with multiple successful full-duration integrated stage firings under their belts. All that was needed now was a stage to lift them to altitude.

    That, however, was proving to be a problem. Chelomei’s engineers, badly paid and worse supplied in the chaotic post-Soviet Russian economy, were running into enormous trouble adapting tooling and designs that had been repeatedly evolved from the R-7 family’s 1950s genesis to the more modern Neva design, and those troubles were rolling down the line and across the Hindu Kush into the subcontinent. A giant new factory was being built at Vikram Sarabhai in Kerala, the home of India’s rocket programs, for license production of the new cores--but it would be little more than an empty shell, the design’s flux preventing the importation or construction of the tooling necessary to actually begin production. The plants that were supposed to produce RD-161s sat in forced idleness, waiting not just for testing to finish but for core production to start. And although the upper stages were finished, they had no rocket to fly on yet. With the core slipping definitively out of 1995 and into the hazy later parts of the decade, there was no hope of a quick resolution of these issues, either.

    In the meantime, a combination of payload slots on Vulkan and Soyuz with continued launches of their native-built ASLV would have to fulfill Indian requirements and, the Russians hoped, soothe their partner’s frustrations enough to prevent them from executing their backup plan--a new solid-fuel core which could replace Neva as a first stage. The design, which had been floating around Indian design bureaus since before Neva had been approved, would have enough thrust and fuel capacity to meet the basic PSLV capacity requirements without the unreliability of its Russian counterpart, a growing concern to Indian program managers. However, given the cost of developing an entirely native core with no Russian input, and the necessarily long time it would take to perfect such a stage, going to an all-Indian design was unpalatable; at best, the PSLV capability they wanted would be delivered slightly later than the most recent Neva schedules planned, and at a far greater cost. Additionally, Neva was being designed to fill the entire gamut of payload capabilities between Soyuz and Vulkan with its multi-core variants. While the Indian version was not intended to use these capabilities, the necessary hardware and design provisions were merely being neglected, not removed, meaning that it would be relatively inexpensive to evolve the PSLV to higher payload capacities if desired at some future time. With Russian assurances of launches to fill the capability gap at minimal cost, the Indians were content to retain only the threat of withdrawing from the project while waiting to see if it would pan out.

    About the only thing on schedule in the Indian-Russian partnership was the launch of the first Indian cosmonaut to the Mir space station. Anil Korrapati, the first Indian to fly into space since an Indian astronaut had visited Salyut 7 in 1984, flew to Mir in November 1992 along with the two Russian pilots of the TKS capsule, joining the existing three-man crew that had been on the station since that April. The revived six-person crew was finally enough to lift Mir out of the near-mothball status that was all a three-man crew could maintain, even with the best of wills. Korrapati was put to work as the cosmonauts tackled deferred maintenance on the life support and computer systems, conducted an EVA to take care of a fault that had developed in the station’s solar power systems, and re-activated lab equipment to make the station a going concern once more. Anil would be followed by several other Indian cosmonauts, with two more flying in 1993 as Indian money secured the station’s ability to operate with something approaching a full crew, and at a level that would enable actual scientific return.

    Back home in India, the launches played well in the news and the returning pilots were duly honored, but the Indian contribution to the station was transient--for the Indian program, the proposal for sending crew to Mir had always been intended as a buy-in to bigger things focused on their more practical satellite communication and reconnaissance needs. With Neva production secured and development begun, its purpose was served. Any future Indian astronauts would, it was heavily preferred, fly on Indian craft--and the multicore nature of the Neva core they were buying from Russia meant they could have the capability to do so. However, India wasn’t the only Russian partner making use of Russian developments for its own good, and there was another partner Chelomei had arranged that was interested in a much more permanent contribution to Mir: China.

    China had begun its association with the Russian space program in a much more advanced position than the Indians--while India’s largest launcher was the ASLV, with a payload of just under half a ton, the Chinese had their ICBM-derived Long March 2 rocket family, with a base payload of over 3.8 tons and (with boosters) a launch capacity of up to 9.5 tons. Similarly, their own Lóngxīng system had already been under development for several years, with detailed design and prototype development underway even before they signed on for Russian assistance. Thus, unlike India, China didn’t need help bootstrapping its spaceflight program into existence, but instead aimed simply to tap Russia’s long-earned knowledge and seize whatever advantages it could--including a place to launch to. The fourth Mir DOS lab, originally the Earth observation lab module Zemlya, was returned to its production cradles in 1992 for refitting to meet Chinese intentions. In many ways, what the Chinese wanted was for the DOS module to be fitted out for independent operations: solar panels added to supplement Mir’s main power supply, crew quarters (the first on Mir, as most Russian crew used sleep stations aboard the FGB of their TKS transports), cargo stowage, and limited lab facilities. Contrary to the more co-operative nature of arrangements aboard Freedom, the Chinese essentially planned to operate the module, renamed by its new owners Tiangong (meaning “heavenly palace”), as a separate space station which simply happened to be docked to, and operated in direct contact with, another nation’s station in which China would have an operating role.

    The Chinese were similarly picky about Russian assistance with their Long March and Lóngxīng vehicles. The Chinese had been experiencing difficulties with their Long March guidance software, as well as more general production headaches, and they wanted their new Russian “allies” to help troubleshoot the issues and assist in resolving them. Similarly, Russian input was sought on the final design of Lóngxīng as the vehicles intended for flight moved towards the pad. By 1994, the insight offered by these “consultations” had begun to bear fruit--the Long March 2F that was to carry Chinese crews to space aboard Lóngxīng was in final testing for its maiden launches, and Lóngxīng itself was preparing for ground testing ahead of its first unmanned test missions. At the same time, the end of the Indian portion of Mir’s international operations freed a slot for the first Chinese cosmonaut, assigned to fly to Mir in July 1994, there to begin working on the procedures involved in space station operations and to prepare for the arrival of the new module at the station in early 1995.

    Meanwhile, aboard Mir’s sister, the American Space Station Freedom, station completion had put scientific operations into full swing. Supplied by American Aardvarks and European Minotaurs, the station’s labs and personnel were busily generating new information about the physical, chemical, and biological effects of microgravity and spaceflight. Perhaps the single largest experiment campaign on the station was the series using the Centrifugal Gravity Lab to test the effects of simulated partial gravity on plants and animals--interesting both for its implications for understanding the human body and as key data for long-duration beyond-Earth spaceflight or habitation. This made the CGL a particularly popular project among the membership of O’Neill’s Lunar Society and Zubrin’s On To Mars, for whom the potential for long-term or indeed permanent inhabitation of low-gravity worlds was of critical importance. Over the two years since its launch, the lab’s rotor had been hard at work, spinning cargoes of rats and small planters at a variety of gravity levels between near-microgravity and roughly 0.45 G (anything higher would have required excessively high spin rates even given the 5.5 m rotor diameter).

    The results after two years were roughly as expected, and encouraging--even minimal gravity levels (as low as 0.1 G) were sufficient to be “noticeable” and appeared to eliminate symptoms of space sickness in the test subjects, but higher levels were necessary to achieve noticeable reductions in the long-term detrimental effects of microgravity like bone density loss and muscular degeneration. Of the two, muscular degeneration was the easier to fight--even lunar gravity was enough to yield substantial reductions (though not enough to totally eliminate reacclimation after lengthy tours of duty), and while Martian gravity was insufficient to eliminate the problem entirely, it came close enough that with some additional exercise, rats returned to Earth aboard Europe’s Minotaur capsule after spending eight months aboard the station showed little difficulty in adapting. However, the problem of bone density loss was more challenging--lunar gravity was only enough to attenuate the decrease by about a quarter, while Martian gravity cut it only by half. While this was enough that a human could easily adapt to permanent life on the Moon or Mars (the lower gravity more than compensating for the potential drop in bone strength), it wasn’t an entirely satisfactory answer for advocates of commercial exploitation of space, who were skeptical that workers would sign on to jobs that might prevent them from returning to Earth. With the basic effects qualified, the CGL’s experiments moved on into other areas of research focusing on acceptable spin rates and adaptation periods at varying levels of microgravity, key criteria for the design of future space habitats that might use a human-scale centrifuge to generate artificial gravity, as seen in science fiction like the Odyssey film series.

    However, unlike their Russian counterparts on Mir, whose limited man-hours meant that their days were filled, even overloaded, with tasks related to station operations, Freedom’s 10-person crew was able to heed the old warning that all work and no play makes for a dull routine. The crew made use of their off hours for a variety of recreation and hobbies. As with Canadian astronaut Doug MacKay, Earth-watching and photography were popular pastimes, with most of the station’s crew indulging at one time or another. For those who found the rather oceanic view offered by the station less than compelling, the station had begun to accumulate a library of books brought up as part of the personal effects allowed in an Apollo’s expendable cargo but not always returned to Earth. The station was also equipped to receive transmissions of television and movies from Earth, with sporting events, including the 1992 Olympics, proving consistent hits with the crew. At the personal request of Star Trek fan (and New Voyages guest star) Peggy Barnes, who was onboard Freedom at the time in what would be her final mission before retirement, the second Star Trek movie received a special airing in space shortly after it hit theaters in 1994, prompting a certain level of ribbing from her crewmates for the rest of her stay. However, the crew didn’t just consume media--some members of the astronaut corps had always been musicians, and Freedom continued a tradition started aboard Spacelab of keeping a variety of musical instruments (including guitar, synth keyboard, and more) aboard for crew use. Given the larger size of the Freedom crew, there were occasionally several musicians onboard at once, and in 1994 an alignment of crew schedules resulted in no fewer than three astronauts on orbit with musical hobbies. 
Led by Expedition 23 commander Maxwell Quick on synthesizer keyboard, with Gerald Mitchell (Expedition 22 commander) also on synthesizer and flight scientist Beverly McDowell on saxophone, the so-called “LEO Trio” practiced regularly throughout their time on-station.

    The LEO Trio wasn’t the only international collaboration coming to fruition in 1994. In addition to the flight of their first cosmonaut to Mir, the Chinese also successfully launched the first unmanned test of Lóngxīng, which made several orbits after launch aboard a Long March rocket before reentering and landing on the empty steppes of Inner Mongolia. Despite several in-flight computer glitches, the flight was generally considered a success and a solid foundation for future Chinese spaceflights even as the Chinese accumulated experience aboard Mir. However, things were going less well for their Tiangong DOS lab headed for Mir, with delays in equipment design, refit dilemmas, and quality control problems forcing a launch slip from early 1995 to late 1995. Within China there were parties who, comparing the situation to the promises made to the Indians on Neva (which had itself slipped another year, to an introduction no earlier than 1997), wondered if it might take longer still, and who, like their Indian counterparts, began to mull backup options if the Russians could no longer deliver. However, threats to pull funding from Mir--a critical element of the ramshackle financial backing for the Russian space agency--were enough to prompt a surge of effort on Tiangong that would hopefully hold it to the new launch date.

    1994 marked not only the launch of the first Chinese cosmonaut, but a more general resurgence in international space operations, beyond the old Cold War-era flights involving astronauts from only one or the other of the ‘blocs’. For the first time since the ASTP II mission in 1978, Russian cosmonauts would fly to an American space station, while for the first time in history Americans would travel to a Russian station. In addition to promoting international unity and allowing both sides to examine each other’s technologies and practices, this exchange would also establish joint operations protocols for Russian cosmonauts if, as had been proposed in exchange for Russian communications support and high-performance Russian hypergolic engines for the lander, they joined Europeans and Japanese in accompanying Americans to the moon aboard Artemis missions. The exchange began with the Freedom 24 expedition of October, in which Andrei Orlov flew fifth-seat to the American station, where he would spend a full six months as a member of the 10-man station crew, operating experiments and conducting repairs at the direction of the American station crew and ground control in Houston. This was a change from the shorter joint operations of ASTP II, in which the Russians aboard the Soyuz docked to Spacelab had operated more independently under Moscow’s control. Similarly, in November, veteran American astronaut Ryan Little, who had been part of Freedom Expedition 2, flew aboard a TKS to Mir as part of the third TKS crew to join the station, as the financial picture finally allowed Russia to return to a nine-person total crew (though only six Russians were on station, the other slots being filled by Ryan and two Chinese cosmonauts).

    The exchanges were generally a success, with the crew members integrating relatively well into their respective stations’ operations. Aboard Freedom, Andrei made friends with the two remaining members of the LEO Trio (Mitchell having departed with the return of Expedition 22 to Earth). As it turned out, he was himself a guitar player, and for the first half of his time on station he joined the other two in a much-publicized collaboration--including a performance at the station’s traditional Thanksgiving meal (a carryover from Spacelab, and a holiday celebration with precursors as early as the Christmas flight of Apollo 8). Aboard the Russian station, Little was encountering more culture shock, being exposed not only to Russian station operations but to Mir’s new Chinese contingent. The Chinese government made a large propaganda push based on the “invitation” of China to this exchange in light of its status as a “rising space power,” as shown by Lóngxīng’s first launch, the upcoming Tiangong, and the inevitable future of native Chinese manned stations and exploration missions. For the moment, however, the fact was that China was still very much a second-tier space power, behind ESA, and much more “present anyway” than “invited” to this reunion of what had, at their last meeting, been the only two superpowers in spaceflight.

    While events reflecting the policy changes of the turn of the decade were reaching their ends, they were not alone--at long last, one of the last policy changes of the late ‘70s and early ‘80s was coming to fruition. The McDonnell-Douglas Delta 4000 had been the less newsworthy of the ELVRP rockets, as its big brother Saturn Multibody and its Soviet cousin the Vulkan had taken up column inches in the press during the Vulkan Panic just as Delta was entering service. While Delta had succeeded in standardizing most of US national security launches onto a single launch vehicle, these launches were by their very nature quite discreet in their purposes. Moreover, the commercial ancillary market that Delta had been quietly aiming at had been quite unexpectedly captured by Lockheed’s aggressively managed and marketed Titan program as satellite buses grew from two tons to more commonly four or even six tons. Worse, to even reach its two-ton maximum geosynchronous transfer orbit payload, a Delta 4000 required no fewer than twelve Castor IV solid rocket boosters--requiring in turn extended pad dwell to prepare the rocket and increasing the likelihood of launch failure to uncomfortable levels.

    However, McDonnell had been pursuing an intermediate solution to both of these problems. To deal with the payload shortfall, McDonnell proposed to replace the existing Centaur-D upper stage of the Delta 4000 with the higher-capacity Centaur-E, re-engined with the latest RL-10 variants for improved fuel efficiency. Additionally, McDonnell proposed to draw upon the latest in solid booster development, replacing Thiokol’s Castor IVs with the same company’s new “Carbon-Composite Motors,” a design that, by combining advanced, lightweight, and strong graphite-epoxy cases with new propellants and a larger case diameter and loaded motor weight, would reduce the number of motors required to achieve maximum payload from twelve to just six, while at the same time boosting that maximum payload to over the commercially desirable four tons to GTO. Funded by the DoD among the various SDI preparations and intended as an “Intermediate Improvement Program” to better the existing expendable launch vehicles while the department worked on the prototype X-30 and X-40 reusable LV demonstrators, Delta IIP had by 1993 not only outlived both programs, but also reached the pad for its first launch. With the company’s main aircraft market under threat from Lockheed and Boeing wide-body aircraft, the Delta 5000 was McDonnell’s belated but best attempt to find some entry into the rapidly growing and lucrative commercial satellite market, projected to remain strong for at least another decade. Throughout 1994, as McDonnell’s marketing teams worked to sell commercial launches, the new Delta variant was beginning to build a solid flight history launching Department of Defense polar payloads out of Vandenberg alongside its Multibody M02 and M22 cousins.

    The introduction of the Delta 5000 was perfectly emblematic of space operations during the Quiet Years. While great events had been and were being set into motion, their effects were slow in coming to public attention, and for the moment the attention of politicians and citizens was largely focused elsewhere, towards more terrestrial hopes of peace and prosperity unshackled by the spectre of nuclear war. Only in the United States, where this growing optimism was reflected in renewed interest in science fiction and enthusiasm over the Artemis program, was space an important part of the national conversation, and even there it was overshadowed by the rapid growth of personal computing and the “Internet”. The peace and quiet that enabled these views, however, was about to be decisively shattered over the lonely Pacific Ocean...
     
    Part III, Post 14: The 1994 Christmas Plot
  • Hello everyone. I apologize for the delay--I walked out of the last final of my college career thinking to myself, "I know there's something I should be doing, but I can't think what" and then dazedly went to get lunch. Anyway, last week, you may recall that I updated the situation in space at the end of the quiet years, then closed by saying that the quiet years were about to see their end. Today, Workable Goblin picks up right where I left off, in the air over the lonely Pacific.

    This one's a little unusual, with a bit of strong language and some potential trigger warnings, so please keep that in mind. Anyway, without further ado, please fasten your seatbelt low and tight across your hips as we move into this week's post.

    Eyes Turned Skyward, Part III: Post #14

    Captain: So, you know, having to work Christmas and all--

    First Officer: Uh-huh.

    Captain: Well, Mary isn’t happy about it, but I was thinking, you know, Chicago’s awful this time of year--

    First Officer: Yeah.

    Captain: So I was going to take her to the Bahamas.

    First Officer: Yeah, y--

    <extremely loud static noise>

    Captain: Fuck!

    First Officer: What the hell was that?


    UAL 882: Oakland Control, this is United eight-eight-two heavy declaring emergency.

    Oakland Control: Roger eight-eight-two heavy, what is your emergency?

    UAL 882: Major pressurization loss, we’re descending at best speed to flight level one-zero-zero and requesting a diversion to Vancouver, that’s Yankee Victor Romeo.

    Oakland Control: Request granted.


    Captain: Jim, can you tell the flight attendants to prepare for emergency landing?

    Flight Engineer: Yeah. Uh, uh, I can’t get in contact with them. Should I go find out what’s going on?

    Captain: Yes.

    <several minutes later>

    Flight Engineer: There’s a huge hole in the fuselage aft of the wing, huge. It must have been a bomb.


    JAL 001: Oakland Control, JAL zero-zero-one heavy declaring emergency, we’re picking up a distress beacon from Clipper eight-five-eight heavy, position...

    KAL 19: Mayday, mayday, mayday Oakland Control, KAL one-nine heavy picking up distress beacon from Delta eight-six-seven heavy...


    NBC News: We interrupt this program to bring you breaking news...NBC News headquarters in New York is getting unconfirmed reports of multiple downings of transpacific airliners, that is multiple airliners over the Pacific dropping out of contact with Air Traffic Control. NBC News is beginning to work on this story, very disturbing if true, and we’re trying to figure out exactly what is happening...


    White House Chief of Staff: I’m sorry to interrupt, Mr. President, but we have a situation...


    NBC News: More information on the possible attack on airliners crossing the Pacific. Our Tokyo bureau is reporting that at about the same time airliners began dropping out of contact, a bomb exploded in an aircraft at Narita International Airport, that’s the Tokyo international airport, killing several maintenance workers. Several of the missing flights had departed Tokyo, so there may be a connection...


    Oakland Control: Oakland Control to all aircraft, ATC Zero conditions in effect. All aircraft divert to nearest available airport, this is an emergency situation.


    UAL 882: Vancouver, United eight-eight-two heavy requesting clearance for runway eight-Lima.

    Vancouver Control: Roger United eight-eight-two heavy, you are cleared for runway eight-Lima.

    UAL 882: Roger Vancouver, we have multiple wounded, make sure ambulances are there.

    Vancouver Control: Roger United eight-eight-two heavy, multiple wounded.


    NBC News: We’re getting live footage from Vancouver, that’s in Canada, of the landing of United Flight 882, which reported a serious in-flight emergency earlier this morning, possibly related to the Tokyo bombing and the disappearance of several other airliners. Just a moment...my God...


    NBC News: Unconfirmed reports of bomb threats phoned in to the Sears Tower, the World Trade Center, and the Empire State Building, which are obviously being taken very seriously in light of this morning...

    (N.B.: It was later determined that all bomb threats made or suspected that day were fake or mistaken)


    NBC News: We’ve just gotten--hang on, the White House is reporting that President Gore will be addressing the nation about this morning’s attacks shortly from an undisclosed location.

    President Albert A. Gore, Jr.: This morning, a terrible and vicious attack was carried out against the United States...

    ...We must take decisive action to ensure such a tragedy never happens again, by working with our friends around the world to improve security, increasing the transparency of our intelligence apparatus, and strengthening our ties globally. Because what our enemies have forgotten is that we are stronger together, and tonight we stand driven by a new resolve...

    ...We will not stop, we will not hesitate, we will not rest until the perpetrators of this heinous crime are found and brought to justice...

    ...I have already directed the Department of Transportation to begin reviewing the nation’s air security, and how the security and safety of air transport could be improved. Together with Congress, in the coming days my Administration will implement measures to protect our skies, our roads, our rails and seaways from further attacks. We will also review the actions taken by our intelligence agencies leading up to this attack, and implement new procedures to make sure they can ferret out any future attacks before they occur...

    ...Finally, I ask that all of you listening or watching direct your prayers and thoughts to the families and friends of the victims, who have had their loved ones suddenly struck down without warning or provocation. We stand with you--we all must stand with you, and stand together in the face of such reckless brutality. Thank you, and God bless America.


    With the ongoing collapse of the Soviet Union and the increasingly direct presence of the United States in the Middle East, many young, wealthy Arab men who had been involved in the Afghanistan struggle and who had been radicalized during it began turning their thoughts to what they perceived as the other great oppressor of Muslims in the world, the United States. A staunch supporter of Israel and many secular regimes in the Middle East, America had also committed the unpardonable sin of deploying heathen Christian and Jewish troops to Saudi Arabia during the Gulf War, defiling, as they saw it, the land of Muhammad with infidels. Already, many of them had joined together to continue the jihad they saw themselves engaging in beyond the limits of Afghanistan; now, they had a clearly defined target for that jihad. The most powerful and influential members of the Arab Afghans joined together to form what they termed “منظمة,” the “Organization,” an informal term for the network of fellow travelers that had germinated in the hard land of Afghanistan that was adopted as a discreet name as they began to look outwards.

    At first, the “Organization” attempted to bring its expertise and skills to Muslims involved in conflicts around the world, and establish a network of sympathizers, contacts, safehouses, and resources for future actions. Organization members fanned out to Bosnia, Somalia, Algeria, and other locations where Islam was, as they saw it, under threat, establishing small but often influential cells promoting an Islamist ideology hostile to the West and especially the United States. Their greatest success, however, was in Southeast Asia, home to one of the largest concentrations of Muslims in the world across Malaysia, Indonesia, and the Philippines. For decades, a series of Islamic insurgencies had plagued the islands of the East Indies, doing little damage but nevertheless persisting despite government efforts to root them out. Now the Organization sought to provide these insurgencies with training, money, and a goal, to turn them from a thorn in the side of governments to a legitimate threat, or even a government themselves.

    The founders of the Organization’s operations in Southeast Asia swiftly created a sophisticated recruitment and operational program to expand their initial small core of Organization operators. Rather than overtly advertising that they were seeking fighters for jihad, something which would undoubtedly attract the attention of the governing authorities, they merely created Islamic charities. Although these charities would provide somewhat questionable sermons along with their orphanages, schools, soup kitchens, and so on, they made sure to stay within the line of what their host governments considered permissible speech. These served as part one of the Organization’s recruitment strategy, identifying potential recruits for the Organization. As the charities worked through areas, employees of several ostensible “recruitment agencies”--part two of the strategy--followed behind, seeking out the candidates identified by the charities and offering well-paying short-term jobs that would require travel to the Middle East. As the Gulf States of Arabia had long been hungry for cheap labor from Southeast Asia, this attracted little attention from the authorities. Instead of working at a construction site or on an oil field, however, Organization recruits were funnelled to a series of training camps in Somalia, Yemen, Pakistan, and Afghanistan, and educated in practical skills like bomb-making, guerrilla warfare tactics, target selection, and so forth, training them to be terrorists. In addition to this practical education, recruits were ideologically instructed as well, turning them into loyal servants of the global jihad.

    Once their training was completed, the recruits were given a handsome bonus payment--fully equal to what they had been promised--and sent home to part three of the recruitment strategy, emplacing the operatives. Some were simply given assurances that their support was vital for the jihad and assisted in finding jobs at home, where they would funnel part of their pay back to the Organization. Others were recruited by Organization businesses, often employing skills similar to those they had been taught while overseas, allowing them to maintain those skills for the future. A third and final group was sent to existing organizations such as the Moro Islamic Liberation Front in the Philippines and the descendants of the Darul Islam movement in Indonesia, where they trained members in the same skills they themselves had been trained in by the Organization. Through the success of the businesses established by the Organization and the remittances of those not employed directly by them, the entire operation was self-sustaining, indeed profitable enough to support less successful Organization networks elsewhere in the world.

    The Organization in Southeast Asia quickly grew to encompass a network of hundreds of fighters, sympathizers, and fellow travelers, building up a loyal, dedicated, and capable cadre of believers who would spread the word among other Muslims. Gradually, existing terrorist organizations found themselves more and more influenced by Organization ideology and propaganda, with cliques of Organization sympathizers often making up their most dedicated--and radical--core. With a recruitment and training organization firmly established by mid-1993, the Organization’s leaders became anxious to start actually carrying out jihad, and prodded the organizations they had been training to begin doing things, rather than simply drain money and men that could be put to other uses. Energized by their newly radicalized members, most set to with a will, targeting people, companies, and organizations they felt were un-Islamic. At first, this took the form of petty crimes: attacks on liquor stores, prostitutes, and banks, the shaming or extrajudicial punishment of those the jihadis felt were immoral or licentious, and so on. While all well and good, this was not precisely the global war against the West the Organization had had in mind, and they pushed their members to find more spectacular and effective methods of attacking the decadent West.

    At this juncture, a young, recently recruited, but quickly advancing member of the Organization’s Southeast Asian operations proposed a bold plot which could not help but pique the interest of the Organization’s senior members. He set forth a complex, multilayered plan which (in his estimation) would strike a great blow against the United States, attacking multiple locations one after another to keep Americans off balance and fearful. The great strike, which he grandly titled “Allah’s Spear,” consisted of three successive phases, each of which would be deadlier than the last and strike closer and closer to America itself. In the first phase, the Organization would carry out a series of attacks against Western targets on the islands of Southeast Asia, especially those the Organization felt were corrupting the people, using planted bombs and trained hit squads of gunmen. With blood flowing overseas, the United States would be wounded, if not yet deeply. During the second phase, the Organization would escalate, targeting the transports carrying Americans to and from their country. In particular, the member proposed, a massive simultaneous bombing attack could be made on American airlines, destroying a dozen or more aircraft on a single day, killing thousands of people, and bringing the air travel system--not just in the United States but at least in the Pacific Rim, if not the entire world--to its knees. Finally, during the third phase the war would move to America itself, targeting famous landmarks and buildings such as the World Trade Center and Empire State Building in New York, the Sears Tower in Chicago, the Pentagon, Capitol Building, and White House in Washington D.C., and other locations around the country for bombing attacks.

    Senior members of the Organization liked the grand scope and aggressive action of Allah’s Spear, but felt that, as proposed, the organization did not have the numbers, finances, or other resources needed to carry out its complex interlocked plan of attacks on the West and the United States, and they prodded the plan’s mastermind to simplify it into a single grand action. After considerable thought, he and they agreed that the attack on airliners best fit the criteria of being both practical for the Organization’s relatively limited abilities and yet extremely visible. With the dimensions of Allah’s Spear set, work began on actually bringing the plan into being. Bombs needed to be designed, couriers and bombmakers recruited, safehouses and targets designated. As 1993 flowed into 1994, Allah’s Spear gradually began to take on a more and more definite form.

    Like any good engineer, the mastermind had planned for a series of tests prior to the main attack, to verify the performance and functionality of the bombs and the effectiveness of the planned infiltration and exfiltration tactics under “real-world” conditions. During 1994, a series of attacks was carried out against minor targets spread across Malaysia, Indonesia, Thailand, and the Philippines, escalating from the bombing of an empty phone booth in Johor to an attack on a brothel in Balikpapan, which killed six prostitutes and their clients and wounded several others. During the course of these attacks, a number of different explosive mixtures and possible trigger mechanisms were tested before the final bomb design was settled on, consisting of a disguised explosive mixture detonated by a timer based on a cheap watch. As a final “dry run,” the plotters decided to test their bomb system on an actual flight, eventually chosen to be Malaysia Airlines Flight 82, Kuala Lumpur-Taipei-Los Angeles. A backup member of the group selected to place the bombs on the aircraft boarded the flight, placed the bomb under a seat in the middle of the aircraft, and deplaned in Taipei, where he boarded a flight to Karachi, Pakistan, the location of an Organization safehouse. Flight 82 continued on from Taipei until the bomb exploded over the mid-Pacific, hundreds of miles away from any land. Out of sight of land-based radar systems, and out of contact with air traffic control or other airliners, the flight simply vanished into thin air. Within hours, the disappearance was noted, and within days a small amount of debris was recovered by search teams, but no evidence that the apparently tragic disappearance of an airliner was anything more than an accident was uncovered until much later.

    By the beginning of November, the Organization was satisfied that everything was in place for the attack, and final preparations began among those who would actually be carrying it out. After a lengthy discussion, it was decided that the bombings would have the maximum impact if they were carried out on Christmas Day. Beginning on the morning of Christmas Eve, members of the Organization began boarding aircraft in Jakarta and Kuala Lumpur on routes which would have taken them to the United States had they continued to their ends. Instead, after “losing” a series of small objects in a variety of hiding places, they deplaned and boarded other flights, likewise terminating in the United States, which also became the beneficiaries of their forgetfulness. When they disembarked from these aircraft, they boarded still other flights, this time ending in Pakistan. By the evening of the 24th of December, at least by their time, all six of the “plane men” were safely heading towards Organization safehouses, their mission complete. Simultaneously, at exactly eight o’clock on the morning of Christmas Day, measured in Pacific Time, some thirty minutes after sunrise along the West Coast, eleven bombs detonated in eleven airliners scattered across the length and breadth of the Pacific Ocean. The United States--the world--would never be the same again.

    Within minutes of the bombs going off, the first responses were being mounted at Narita International Airport, located in Tokyo. One of the aircraft targeted, a United Airlines 747 flying between Jakarta, Taipei, Tokyo, and Los Angeles, had developed mechanical problems on the Taipei-Tokyo leg, after the bomber had disembarked in Taipei to board another flight, and had therefore been removed from service before beginning the Tokyo-Los Angeles portion of the flight. While a small team of maintenance workers was inspecting the aircraft and preparing it for overnight storage, the bomb exploded, killing five and wounding three. Had this taken place later in the day, it would certainly have been the first of the bombings to be reported to the outside world; however, as it occurred early in the morning, Tokyo time, reports on the explosion took several hours to percolate outwards to the major news networks. By the time NBC News and CNN were reporting on the Tokyo bombing, the other attacks were already major news, and coverage focused on whether the bombing was related to the disappearance of many airliners throughout the Pacific basin.

    Shortly after the bombing at Narita, the crew of United Flight 882, the company’s route from Tokyo to Chicago, radioed the Pacific control center at Oakland (responsible for most flights across the Pacific), informing them that they had lost cabin pressurization and were descending to a lower level and diverting to Vancouver, the nearest major airport capable of handling the 747 they were flying. Almost simultaneously with Flight 882’s declaration of an emergency, a number of flights reported picking up distress beacons from other flights along the length and breadth of the Pacific Ocean, along with losing radio contact with other aircraft, sometimes in mid-sentence. Initially, Oakland air controllers were confused by the reports, wondering if some incredibly unlikely coincidence had caused several airliners to drop out of contact at the same time, until Flight 882 checked back in, reporting that they had suffered a bomb attack. At 8:34 AM, just minutes after controllers began to suspect foul play, a member of the controller team excused himself for a smoke break, walked to a nearby phone booth, and called the offices of KPIX-5, KNTV-11, and CNN, informing them that a major air disaster, possibly a terrorist attack, was in progress. Nearly simultaneously, Oakland Control was calling Federal Aviation Administration headquarters in Washington D.C., telling confused headquarters staff that a major disaster was likely in progress, possibly a terrorist attack. Additional calls went out to North American Aerospace Defense Command with the same warning.
However, as all the aircraft had been downed over the Pacific Ocean, well out of range of air traffic control radars, there was a considerable amount of confusion not only at NORAD but also at FAA headquarters and at Oakland Control itself about what, if anything, should or even could be done to address the disaster, a problem compounded by the fact that it was Christmas Day and all three organizations were operating with skeleton crews, many of their experienced staff having taken the day off to enjoy the holidays.

    After more than an hour of confused circular phone calls, the FAA finally settled on a drastic, but logical response: stop the flights. All of them. Because of confusion about which flights had even been affected, let alone the possibility that other flights, perhaps over the Atlantic or the middle of the country instead of the Pacific, had been booby-trapped, it was concluded that nothing less than a shutdown of all American air traffic could contain the threat of further attacks and allow a determination of what, exactly, had happened. Traffic in the air would be allowed, indeed required, to land at the earliest possible time; traffic, especially overseas traffic, still on the ground needed to be prevented from departing. Quickly, calls went out from the United States to Tokyo, Hong Kong, Sydney, Mexico City, Lagos, Madrid--in short, virtually every airport in the world from which aircraft traveled to the United States--to prevent further departures to the United States until further notice and to order aircraft already in the air to turn back before reaching American airspace. Despite a total shutdown of American air traffic having occurred only once before, during Operation Skyshield more than thirty years earlier, the clearance of American airspace went relatively smoothly; within four hours of the order being given, no civil aircraft, whether general aviation or major carrier, were airborne nationwide, while those aircraft which had been traversing the Pacific or Atlantic had either landed according to schedule or been diverted to alternate airfields, depending on which approach would get them on the ground sooner.

    While the skies above the United States were being cleared, news of the incident was also making its way further and further up the chain of command. While NORAD remained stymied by the question of what, exactly, the military’s role in all of this should be, officers at the command and officials at the FAA both came to the same conclusion soon after being told of the attacks: the President needed to know. Soon afterwards, the phones at the Gore residence in Carthage, Tennessee, where the First Family had retreated for the Christmas holiday, began ringing with the news. At the time, shortly after noon, the President and his family were playing Parcheesi together, leaving the White House Chief of Staff to answer the phones. Only minutes after taking the call, he had briefed the President and the head of the Secret Service detachment responsible for his safety. The Gore family were quickly whisked into the Presidential limousine, which proceeded towards nearby Nashville International Airport. Escorted by local police officers, the Presidential motorcade made the fifty mile drive--normally an hour-long effort--in about forty minutes. Once at the airport, they were transferred to SAM 28000, the VC-25 which had transported the Gores to Tennessee, and rapidly lifted into the air. Now that the President was relatively safe, the question arose of where Air Force One should take him; although the VC-25 was equipped for aerial refueling and could theoretically remain in the air virtually indefinitely (limited only by non-fuel consumables needed by the engines, as well as drinking water and food), the threat was terrorism, not nuclear war, and in any case the aircraft was not well equipped to support its passengers (including the President’s ailing 86-year-old father) for long periods of time.
The President favored returning to Washington and the White House, but the Secret Service overruled him, fearing that the attacks on airliners in the morning had been a prelude to attacks on other American targets, including the White House, later in the day. Instead, they argued, the aircraft should transport the President to a secure site, probably a military base, where the Secret Service and the military could protect him against possible follow-on attacks. After a brief argument, Gore acquiesced to their logic, and after a short debate they selected Offutt Air Force Base, Nebraska as their destination. Formerly the home of Strategic Air Command, Offutt was remotely located and heavily defended, with access to significant communication facilities, making it an ideal temporary refuge for the President during an unprecedented attack on the United States.

    As the President was taking off from Nashville International, commentators at CNN, NBC News, and other news networks were beginning to speculate on the unprecedented shutdown of civilian air traffic over the United States. Unaware of the magnitude of the tragedy that had largely unfolded by that time, and with little other than a few anonymous phone calls and short FAA press releases to go on, many criticized the FAA’s decision as being too hasty, and quite possibly out of all proportion to the actual threat. This criticism quickly hardened into consensus, and before the skies had even been cleared completely the television networks had already created an image of the order as an affront to American liberties and values, and a complete overreaction to whatever had actually occurred, which was still unknown. Then, Flight 882 finally made it to Vancouver. Due to the severe damage it had taken during the attack, the United 747 had had to descend to a low altitude and fly slowly, at less than 250 knots, lest its passengers pass out and die from hypoxia or the structure be torn apart by aerodynamic stresses. By the time it was at last on final approach to Vancouver International, the major US and Canadian television networks had long since become aware of the aircraft’s plight and had dispatched news teams to cover its landing. Many were, perhaps not explicitly, considering this the make-or-break moment for the government narrative of events; if there was no evidence that a bomb had gone off, then clearly they, not the FAA, had correctly judged events. In the event, Flight 882 delivered that evidence in spades. As it slowly approached the runway, the omnipresent eyes of television quickly saw that a massive gash, clearly the result of some explosion, had torn open the rear of the aircraft, exposing passenger and cargo levels to the outside.
The hordes of television commentators who had been passing judgement did not even wait for the aircraft to actually touch down (which it did successfully, neither suffering further damage nor causing further injury to the passengers and crew) before reversing direction; now, the government had not gone far enough in merely shutting down US air travel, for were there not other means, methods, and avenues of attack than aircraft bombing? With no claims of responsibility forthcoming, speculation quickly turned to the perpetrators of the attack, and dozens of wild theories proliferated across the airwaves about who might have bombed American airliners that morning, and why. Probably the most popular theory in the immediate aftermath was that the Japanese Red Army, famous for a series of terrorist attacks in the 1970s and ‘80s, often targeting airliners, had organized one last gasp after the fall of the Soviet Union, but everyone from Colombian narcoterrorists to Iranian suicide bombers to the United States government itself was fingered as a possibility.

    While the national media was beginning to consume itself in wild guessing, President Gore had finally taken stock of the situation and was preparing to address the nation. After leaving Carthage, Gore had been essentially out of touch with the nation until he landed at Offutt and was transferred to the base’s secure command complex. Although Air Force One of course had on-board communications capabilities, they were limited compared to the facilities present at Offutt, and not really capable of supporting a television address. Once the First Family and key advisors were safely ensconced in the bunker, he and his chief advisors quickly and unanimously agreed that it was vital he appear quickly to allay possible concern about his health and to assure Americans that their government was aware of and responding to the crisis with an eye for more than just the immediate problem. After a brief pause while they worked out the specifics of what he was going to say, Gore took to the national airwaves late in the afternoon of Christmas Day. Despite the hastily created script, the unfamiliar surroundings, and the sheer magnitude of the disaster that he now had to grapple with, the result was one of the greatest speeches of his career--no mean feat for a man often derided as stiff and wooden in delivery. The essence of the speech was quite simple: a disaster had occurred, but not to worry; the government was on it, and was already taking measures to bring its perpetrators to justice and prevent any future attacks. The fact that no one had any clue who had carried out the attack or why was swept under the carpet, an inconvenient detail in this hour of sorrow.

    Even after Gore’s speech, though, there was still one last act of the tragedy left to play out. Pan Am Flight 822, with the route Kuala Lumpur-Taipei-Seattle-Tacoma, had been in the air at the time of the bombings, and diverted to Vancouver like most other airliners crossing the Pacific. After safely touching down and disgorging its 312 passengers and crew, it had been taxied to a secure area of the airport for later checks by the RCMP, intended to find any unexploded bombs that might exist aboard other aircraft. At eight o’clock in the evening, just before bomb squad members were about to board to begin their sweep, a final bomb, whose timer had been (as the FBI and NTSB later determined) accidentally offset by twelve hours, detonated. As at Narita, this destroyed the aircraft, Clipper Empress of the Skies, but in the process it provided a great deal of valuable forensic information for investigators which otherwise would have been hard to come by. Fortunately, no one was killed by this final blast, and only minor injuries were suffered by police officers readying themselves to board the aircraft.

    A total of 2,984 people were killed Christmas morning by the attacks, making Allah’s Spear, often known as the Christmas Plot, the deadliest single terrorist attack in world history. If those killed earlier, during the dry run attacks, are counted as victims as well, 3,413 people were murdered by Organization agents during the execution of Allah’s Spear, with the passengers and crew of Malaysia Airlines Flight 82 making up the vast majority of the additional 429 victims. About half of the victims of the attacks were American citizens, with the rest a kaleidoscopic mixture of mostly Indonesians, Chinese, Koreans, and Japanese, together with small numbers of people from many other countries.

    As Gore had promised Christmas afternoon, the very next day his administration began actively moving to meet this new threat. Vacations were canceled as cabinet staff began to make their way back to Washington, as did the President. A flurry of executive orders implementing new security measures, from increased screening at federal building entrances to air marshals aboard domestic and international flights, were drafted and issued by the White House over the next few days. Most prominent among these early measures was a temporary shutdown of air travel within the United States, until greater security could be assured to travelers. Given that it was the midst of the busy winter travel season, this order had the greatest impact on ordinary Americans, many of whom were suddenly cast in the position of having to beg or borrow what transportation they could or enjoy a suddenly and unexpectedly extended vacation. Congress, as in many other cases, followed, not starting its “emergency session” until the 29th, the Thursday following the attack. The larger number of senators and representatives to assemble, compared to cabinet officials, and the question of whether the emergency session should be considered part of the 103rd or the 104th Congress delayed the meeting several days while the details were wrangled out; in the end, it was agreed that the 104th Congress should be sworn in six days early to avoid any potential legal complications that might arise from a very short session of the 103rd Congress. The following day, the 30th of December, Gore addressed a special joint session of Congress, largely repeating the themes from his speech Christmas afternoon. After the weekend, Congress reconvened on the 2nd of January, 1995, ready to begin developing and passing anti-terror legislation.

    The broad details of the legislation had been worked out largely by Gore’s staff during the previous week, with later input from the newly elected Speaker of the House and Majority Leader in the Senate. Although it called for a number of measures to harden American targets against terrorist attacks, including the creation of new Air Security and Port Security Administrations within the Department of Transportation, the FBI investigation was already turning up evidence that the attacks had originated outside of the United States, and of course they had taken place in international waters. Therefore, the primary aim of the legislation was to prevent terrorist attacks rather than to reduce the damage from them, largely by improving efforts to gather intelligence on terrorist activities. Besides increases in the ability of the NSA, among others, to spy on suspected terrorists even without a warrant (though, since many terrorists were not American citizens in the first place, warrants were often not strictly required anyway), efforts were put in place to increase intelligence sharing between the various agencies--the FBI, the CIA, the NSA, and more--responsible for identifying and preventing terrorist attacks. A special commission was also created to investigate the attacks in more depth and make more specific recommendations, although its report was not expected for several months at least.

    However, merely protecting American targets against terrorist attacks or identifying terrorist plots would do little to eradicate the problem. Terrorism itself, the conditions that created it, needed to be uprooted and destroyed overseas, preferably with the cooperation of the nations in which terrorism was flourishing, possibly without. In this, practical policy was beginning to intersect with the idealism that had grown in the wake of the collapse of the Soviet Union, during the Quiet Years, when, given the nation’s newfound power and global dominance, many citizens had assumed that the United States military no longer had a significant role to play in ensuring its security. Few, of course, wished to return to the isolationism and small militaries of the 1930s or earlier decades; many, instead, wanted to use the splendid little military that had been created since the Vietnam War in pursuit of further spreading the liberal democracy that seemed everywhere on the march, intervening in African civil wars, the Balkans, and other trouble spots around the world out of a desire to ensure global peace and security, or simply American dominance. Although it was an audacious and idealistic goal, the more realistic Gore Administration had largely eschewed global use of military force outside of existing commitments in Europe, the Middle East, and East Asia. Informed by the President’s own experiences as a Vietnam veteran, the administration continued to reject, even in the wake of the attack, calls for the deployment of American troops to trouble spots, even those linked with terrorism, Islamic in origin or not. Instead, the Gore Administration pursued a more indirect strategy, focusing on assisting nations with their own fights against terrorism rather than having an American white knight dive in and take complete responsibility. 
The nature of this assistance varied widely, from mere expressions of diplomatic support, to covert assistance in obtaining key war materiel (not necessarily from the United States itself; Russia, China, France, and other major arms dealers often saw their negotiations smoothed by American diplomacy in cases where the United States felt it would be embarrassed by direct support), to provision of military intelligence on terrorist cells and leaders, to, in some cases, direct support by American special forces and American air power, including the new and increasingly popular option of the armed drone. While drones had been extensively used by the Air Force since the Vietnam War, and attempts to build drones dated back as far as World War I, the development of satellite-based navigation and communications systems during the 1990s, together with general technological development, had brought uncrewed aircraft to a new level of capability. Although the new generation of drones had been intended as mere replacements or supplements to older piloted reconnaissance aircraft like the U-2 or the SR-71, their utility in gathering intelligence in near real-time quickly led to interest in arming them to strike directly at any targets found. Why call in F/A-18s, say, from a carrier offshore to attack a target, running the risk that the target might be lost before they could arrive, when the drone itself could attack? By 1997, the first Hellfire-armed drones were beginning to reach the skies above trouble spots around the globe, quickly proving their utility in attacking suspected terrorists rapidly and efficiently. If there was, in military jargon, “collateral damage” from time to time, it seemed a small price compared to the security of thousands threatened by these men, and compared to the tens or hundreds of thousands who would suffer or die from more conventional methods of restraining terrorism.

    Besides attempts to prevent future attacks and understand past ones, there was of course the matter of finding, capturing, and hopefully imprisoning or even executing the actual perpetrators and masterminds of the attack. Within hours of the first notice, the FBI had begun what would become the largest criminal investigation in US and perhaps world history, ultimately involving more than thirty police and intelligence agencies worldwide and a massive international manhunt for the eventual suspects. Initially, most of the investigative focus was on the aircraft which had survived to land; before nightfall, the FBI had already liaised with the RCMP and the Tokyo Metropolitan Police to investigate the bombings at Vancouver and Narita, while another group of special agents recovered manifest information for all known lost flights from the targeted airlines. Through painstaking analysis of the flight routes over the following months, they determined that each bombed flight had passed through one or more “node airports” in East Asia connecting two or more of the targeted flights, while equally thorough checks of the passenger and cargo manifests uncovered hundreds of possible leads that needed to be tracked down. Within a few days of the explosions, the FBI was already contacting police agencies around the world for assistance in tracking down possible suspects, the originators of possibly suspect cargo and baggage, and so on.

    The first big break in the investigation came more than a month after the bombings, with the discovery of the wreckage of one of the targeted airliners in deep water between Hawaii and the Aleutians. A joint Navy-JMSDF recovery operation, making extensive use of Japanese and American deep-diving submersibles, was able to retrieve nearly 60% of the airliner’s remains from more than two kilometers of water. Analysis of the wreckage by the FBI showed that, like United 882 and Pan Am 822, the aircraft had been destroyed by a mid-flight explosion in the passenger area, almost certainly a bomb. Combined with forensic analysis of the remains of the bombs from those two flights, this all but proved that all nine lost airliners had been downed by bomb attacks. In the meantime, most of the suspects generated by analysis of passenger, crew, and worker manifests had been eliminated, leaving an ever-narrowing list of possible perpetrators. At the top of the FBI’s interest list were passengers who had disembarked from the doomed airliners prior to their last takeoffs. Of particular interest were those who had immediately afterwards boarded international flights, especially several who had then traveled to countries seemingly tailor-made for frustrating American inquiries into their whereabouts and activities. Suspecting that these might be the perpetrators of the attack, the FBI established contacts with the Pakistani Federal Investigation Agency--Pakistan being where most of the persons of particular interest had fled--and with several other foreign police agencies. At the same time, analysis of the targeted flights had shown a definite pattern pointing to Subang International Airport, in Malaysia, and Soekarno-Hatta International Airport, in Indonesia, as the likely origins of the attacks. Though indirect, the evidence was beginning to point to Islamic terrorists, not the Japanese Red Army or other Cold War remnants, as the perpetrators of the attacks.

    With a better idea of the enemy they were looking for, the dragnet being laid by the FBI and associated agencies began to narrow, focusing on leads in Southeast Asia relating to radical Islam. While the Organization’s activities may have been largely below the radar of local authorities, the global scope of the Christmas Plot investigation made it possible to piece together a trail connecting the plot’s perpetrators to the Organization’s operations, and then to follow leads about the Organization itself. Finally, the leads added up to something concrete. In the early morning hours of June 7th, 1995, some six months after the bombings, agents of the Federal Investigation Agency, assisted by members of the FBI and the Diplomatic Security Service of the American Department of State, raided a safehouse belonging to the Organization, where they believed one or more of the attack’s perpetrators were hiding. Within, they found far more than just two of the attackers; they found a treasure trove of computers, record books, and other information about the group’s leadership and organization, and some limited insight into its future plans. It was the breakthrough the Gore Administration had been waiting for--a cohesive look at the identity of the attackers, and a wealth of actionable intelligence. Gore once more addressed the nation to announce the capture. It was a success that vindicated his less military, more international approach to the disaster: without deploying a single US soldier abroad, linked allied intelligence services--including connections that hadn’t existed before the attack--had been able to ferret out and capture some of the perpetrators and begin to catalogue the organizations and operations that had supported them. 
While action against this list of organizations was being weighed, the most visible result of the capture was the extradition of the two captured terrorists to the United States for trial. While the spectre of radical terror had not disappeared--indeed, in many areas it had hardly abated--the capture of the two terrorists and the lead-up to their trial brought something of a close to the immediate chapter of the Christmas Plot. The changes in attitude it had wrought, however, would not fade so easily--indeed, they would continue to echo around the world for years to come.
     
    Part III, Post 15: Domestic American policy after the Christmas Plot
  • Good afternoon, everyone! It's that time once more, and this week we're taking a look at some of the domestic effects of the Christmas Plot. We'll be covering the more general effects on politics and diplomacy in a future culture interlude, but for the moment we're looking at its effects on one particular part of American policy.

    Eyes Turned Skyward, Part III: Post #15

    As a part of its goal of improving American infrastructure and competitiveness for the 21st century, the Gore administration had proposed shortly after taking office that federal funding be provided for a major upgrade of the American passenger railroad network, including the construction of a number of high-speed rail lines, much as it had proposed funding for telecommunications upgrades, the national highway network, the national electrical distribution network, and other elements of infrastructure considered essential for modern life. At first, like most previous attempts at building an American high-speed rail network, the proposal proceeded slowly, concentrating mostly on paper studies of possible corridors and evaluations of the various trainsets that might be used in the services. By the end of 1994, with a Congress dominated by fiscally conservative Republicans and little progress to date, an observer could be forgiven for thinking that Gore’s proposal, like its predecessors, would slowly wither on the vine, dying for lack of attention. Then came the Christmas Plot, and suddenly passenger rail gained a new lease on life.

    In light of the near-global shutdown of air transport that followed the attacks, millions of would-be air travelers had their holiday celebrations unpleasantly interrupted. Many chose to simply extend their holidays until air travel resumed, but many more scrambled to secure alternate transportation, leading to near-record business for passenger rail, intercity buses, rental cars, and other modes. In the United States, many who had never before ridden a train had their first taste of Amtrak’s service. Although most outside of the Northeast Corridor were less than pleased with the experience, a few fell in love with the idea of traveling by rail, while many on the Northeast Corridor itself were attracted by its relative convenience. Together, these meant that the huge spike in passenger figures Amtrak experienced after the Christmas Plot was not entirely transitory, but was followed by modestly improved ridership system-wide, especially on the relatively higher-quality Northeast Corridor routes.

    As intelligence began to develop about the source of the attacks, it became apparent that much of the funding for the Christmas Plot had had its origins in the oil industry of Saudi Arabia and other Arab countries. Combined with Gore’s interest in environmental matters, this provided the impetus for perhaps the most important policy initiative the Gore administration carried out during its years in office. In an extraordinary speech to a joint session of Congress in early March 1995, President Gore called for a national effort to eliminate the nation’s dependence on foreign supplies of oil, both by reducing energy use and by actively developing alternative and non-fossil energy sources, such as solar, nuclear, and wind power. Passenger rail, as an alternative to both driving and flying, was prominently mentioned in his speech, which called for active development of the American passenger rail network to standards comparable to the systems of Europe and Japan, where passenger rail carried significant shares of intercity traffic.

    In the wake of Gore’s announcement, Amtrak immediately excavated the studies it had been conducting over the past year and a half since his inauguration, identifying the routes which seemed most amenable to high-speed rail. Topping the list, as always, was the heavily trafficked Northeast Corridor, Amtrak’s busiest and most profitable line, and the only one where it owned a significant portion of the physical infrastructure. Following it in the list were a system centered on the Chicago metroplex and serving most of the Midwest, probably the second most densely populated region of the country; a system serving the Texas Triangle; a California system tying together the southern half of the state from San Francisco and Sacramento to Los Angeles and San Diego; a Florida system connecting Tampa, Orlando, and Miami; and a Pacific Northwest system connecting Portland, Seattle, and Vancouver. Upgrades to the Keystone Corridor and Empire Corridor routes connecting Pittsburgh and Buffalo with Philadelphia and New York City, respectively, were also considered. Aside from the Northeast Corridor, all of the proposed routes had the severe disadvantage of having relatively poorly developed passenger infrastructure and requiring significant upgrades to reach high-speed rail status. Even the Northeast Corridor would need major improvements to host Japanese or European quality service.

    The eventual Amtrak strategic plan, outlined in a late 1995 white paper titled America’s 21st Century Passenger Rail System, envisioned not only developing these routes into high-speed rail, but also significant improvements and upgrades to Amtrak’s operations and rolling stock. Building on the abortive Viewliner program of the late 1980s, the remaining “Heritage Fleet” rolling stock used by Amtrak, much of it dating back to the 1950s, would be replaced by greatly improved modern equipment, while a new block of Superliners and Genesis locomotives would be ordered to enable what Amtrak called “Phase I high-speed service”. Modifications would be made to these new vehicles to enable running at up to 110 miles per hour, significantly faster than most Amtrak services could manage but still far short of true high-speed rail. Meanwhile, improvements would be made to the identified non-Northeast Corridor trackage to allow Phase I services to operate by 2010, something that was not only far cheaper than leaping directly to high-speed rail but also beneficial to trains which could never run high-speed, such as the Coast Starlight. At the same time, significant upgrades would be made to the Northeast Corridor. While it often fell short of even Phase I standards, with many grade crossings, low-quality catenary, excessively tight curves, and other problems, it was still ahead of the rest of Amtrak’s network, and the plan was to jump it directly to what Amtrak termed “Phase II” service by 2010, with peak sustained speeds of 150 miles per hour. After 2010, the Phase I corridors would be upgraded to Phase II service, possibly building off the technology and designs developed for the Northeast Corridor, while the Northeast Corridor itself would be upgraded to a notional “Phase III” standard, with peak speeds in excess of 200 miles per hour, making it one of the fastest rail routes on the planet.

    Unfortunately, despite the unusual political conditions created by the Christmas Plot, such a wide-ranging and ambitious plan was doomed to failure. The upgrades needed for the entire plan would require several hundred billion dollars from a Congress dominated by fiscal conservatives who had always been skeptical of the value of a passenger service requiring constant subsidies from the federal government. No matter that the expense would be spread over fifteen or more years, or that many of the improvements projected were actually to freight railroads, which had long been highly profitable; the full plan was simply a non-starter. Nevertheless, the terrorist attacks and Gore’s call for the United States to be energy-independent by 2015 meant that the plan did not simply disappear into the legislative process, but was amended, repeatedly, by those more interested in balanced budgets than passenger rail.

    The resulting allocations in the FY 1996 budget did provide many of the things that Amtrak had asked for. Funding was provided for a Viewliner II block to replace all remaining Heritage Fleet rolling stock; additionally, the Superliner III cars and Genesis II locomotives needed for the expected Phase I developments were paid for.[1] Furthermore, a series of significant upgrades would be made to the Northeast Corridor and surrounding trackage. Most importantly, freight traffic would be completely removed from the Corridor, while infrastructure would be built to divert commuter trains from heavily congested areas like Penn Station.[2] Poor-quality electrical infrastructure would be replaced and the entire route electrified, with the long-term goal of bringing the Corridor to a common 25 kV, 60 Hz electrification standard, allowing a significant reduction in ongoing costs.[3] Finally, grade crossings along the entire route would be eliminated and many curves straightened, allowing higher speeds. However, the more ambitious plans of developing multiple Phase I and Phase II networks were shot down; only California (where the state was not only already paying for comparable upgrades in some areas, but had indicated a willingness to assist in funding further Phase I-level lines) and the Chicago hub area would be developed to Phase I standards, along with small parts of the Empire and Keystone Corridors. Additionally, the Northeast Corridor would be upgraded not to a full Phase II corridor, but to what was called a “Phase I+” corridor, with top speeds of only about 125 miles per hour instead of the previously planned 150.[4] This also meant that no expensive new trainsets would need to be developed to provide “high-speed” service; the existing AEM-7s with Amfleet carriages were perfectly capable of operating at 125 miles per hour and already did so on some stretches of trackage.

    Although far less than Amtrak or railfans had hoped for, the FY 1996 budget did represent a massive sea change from the neglect and sometimes outright hostility displayed towards passenger rail during previous Congresses. While still a red-headed stepchild compared to road or air transport, Amtrak was for the first time in years receiving significant attention, funding, and support to improve its services from the poor-quality mess they previously had been to a high-quality system on par with regular passenger services anywhere in the world.

    [1]: Essentially, because Phase I services target the 100-110 mph peak speed bracket, Amtrak decides that it makes more sense to implement them with rolling stock common with its existing fleet, saving money for the track upgrades, which are both more expensive and more important.

    Note that the OTL California Cars (not quite the same as the Superliner IIIs) and the second block of Genesis locomotives (ordered about this time OTL) are perfectly capable of operating at those speeds.

    [2]: Things like Access to the Region's Core, for instance. The idea is to ensure that most of the NEC is free to just run Amtrak trains, rather than a mix of commuter, Amtrak, and freight. So most of the upgrades to the NEC are for capacity rather than speed per se, although there are some areas where that implies dramatically improved speed limits.

    [3]: Presently, the NEC uses two electrification systems: one, north of New York, of 25 kV 60 Hz AC (the global standard, insofar as such a thing exists), and the other, south of New York, of 12 kV 25 Hz AC. To be fair to Amtrak, the latter system was constructed by the Pennsylvania Railroad in the early part of the century, when it could not have anticipated that 60 Hz power would become dominant and when 25 Hz offered certain technical advantages. Amtrak simply hasn't had the money to upgrade a system that works reasonably well (as shown by Acela), if not quite as well as might be hoped (also shown by Acela: it needs two power cars to pull a relatively short trainset compared to similar services in Europe, not helped by the infamous Tier II crash standards!).

    Here, the additional cash they have means they plan to upgrade the entire corridor, so modifying the distribution system at the same time makes sense; it means they can use the same hardware on all sections of the line and simplify their operations and locomotives.

    [4]: You might note that this is slower than the top speeds on the Corridor IOTL, and in fact is identical to the top speeds achieved in the 1980s. However, the point behind the Phase X corridors is that essentially the whole route is upgraded to permit operations at those speeds, rather than limited stretches (which was and is notably not the case OTL, with severe speed restrictions in some areas--for instance, between Philadelphia and Penn Station, Acela’s average speed OTL is 76 miles per hour). Although obviously some areas cannot reasonably be upgraded to operate at high speeds, where possible they are. So by the time this particular capital improvement plan is complete around 2010, running speeds for the regular trains are more or less comparable to present Acela speeds.
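    As a back-of-the-envelope illustration of why corridor-wide average speed matters more than headline peak speed, here's a quick sketch. Only the 76 mph average comes from the note above; the ~91 route-mile Philadelphia-New York distance and the two faster averages are my own rough, hypothetical assumptions:

    ```python
    # Trip time depends on the *average* speed over the whole route,
    # which is why upgrading the entire corridor beats raising peak
    # speed on a few short stretches.

    def trip_minutes(distance_miles: float, avg_speed_mph: float) -> float:
        """Travel time in minutes at a given average (not peak) speed."""
        return distance_miles / avg_speed_mph * 60

    DISTANCE = 91  # approx. route-miles, Philadelphia to New York Penn Station

    for label, avg in [
        ("OTL Acela average (76 mph)", 76),
        ("Hypothetical Phase I+ average (110 mph)", 110),
        ("Hypothetical Phase II average (140 mph)", 140),
    ]:
        print(f"{label}: {trip_minutes(DISTANCE, avg):.0f} min")
    ```

    At a 76 mph average the run takes about 72 minutes; lifting the corridor-wide average to 110 mph would cut it to roughly 50, a bigger gain than any isolated 150 mph stretch could deliver.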

    Many thanks to Devvy for providing helpful comments and suggestions for this post. Do check out his timeline Amtrak: The Road to Recovery if you have the chance.
     
    Part III, Post 16: Progress and developments in the Chinese space program and in NASA's Artemis program
  • Merry Christmas everybody! I hope you all had a good holiday, but if you're in need of a little break from your relatives, how about a short jaunt to outer space? If that sounds good to you, then you're in luck, since it's that time once again.

    Eyes Turned Skyward, Part III: Post #16

    In the more than three decades since the start of the space age, just two nations had ever launched humans to orbit and returned them to the surface of the Earth in their own vehicles--the two leaders of the Space Race: the United States and the Soviet Union. Although Europe had had ambitions of joining this prestigious and exclusive club in the 1980s, designing its Minotaur logistics capsule to be relatively easily and cheaply converted into a crew transport, the budgetary shortages that opened the next decade had put those plans on the shelf, leaving Europe just outside. By the middle of the decade, however, a new entrant had stepped up for its own shot at the prize--the People’s Republic of China. Over a lengthy development cycle stretching back to the mid-1970s, the Chinese had by 1995 already developed and test-flown their own Longxing capsule uncrewed, in a mission equivalent in every respect to the European Minotaur missions except for docking with an independent space station, and had paid for several of their own cosmonauts to take the journey uphill to the Russian Mir, gaining experience in space operations and space conditions. Combined with their involvement in refurbishing the Tiangong (formerly Zemlya) laboratory for that station, they had a record in human spaceflight virtually equivalent to Europe’s--an impressive pace of advance given the conditions their program had had to labor under. Merely equaling the achievements of Europe wasn’t enough, though; China had its eye on a place among the superpowers.

    Following on from their successful first orbital flight in 1994, the Longxing program quickly moved forward with a second test launch, intended to test not just the basic ability of the craft to safely launch and reenter, but how it stood up to a longer-duration flight. After just over 21 hours in flight and 14 orbits of the Earth, the star dragon returned safely to Chinese territory. There, ground crews recovered not only the capsule but its unrevealed-until-landing passengers: several cages of assorted laboratory animals which had been used to stress-test the life support system on-orbit. Longxing’s passengers would end up in a variety of publicity stills, including their introduction to the next passenger aboard a Longxing: Chinese test pilot Xiaosi Chen. In September of 1995, another Long March rocket hurled the first natively-flown Chinese cosmonaut [1] into orbit aboard the third Longxing flight.

    Longxing 3 was a near-copy of Longxing 2, with Chen flying his capsule through another 14 orbits of the Earth. In addition to replicating the flight of the test animals, Chen put his command through its paces, testing out the maneuvering systems, radars, power systems, and all the other equipment that would be critical to the capsule’s service as a taxi carrying Chinese cosmonauts to Mir. While the capsule encountered many of the usual headaches of any first flight, from balky thrusters to a misaligned primary radar (flight testing had to be completed with the backup system), there were no showstoppers. The Chinese press releases which followed were thus not too far from the truth when they played up the smiling Chen being assisted out of his return capsule by ground crew and pronouncing the capsule a joy to fly, ready for service, and a testament to Chinese technical prowess: a symbol of Chinese soft power that the PRC considered well worth the price. It would be several more months before China would begin arranging its own flights to Mir, and Longxing could match neither the cargo capacity nor the sheer flight rate of the Russian capsules, but the Chinese could clearly hold up their end of the partnership with the Russians. Whether the Russians could say the same was an entirely different question.

    When the Zemlya DOS lab module was returned to its builder’s cradles in 1992 for conversion into Tiangong, initial surveys immediately cast doubt on the planned 1995 launch date in the minds of the Russian engineers who had built it. Structural surveys showed that the basic hull was intact, but the mummification of the module prior to storage had been less thorough than hoped; in the poor funding environment in which Zemlya had been abandoned, its technicians had not done the most careful job of winding down their work and mothballing the module. In particular, its free-flight thruster systems had been in the middle of a pressure test when orders came through to scrap work on the module. While the fuel lines had then been purged and cleaned, the inspection revealed that the wrong solvents had been used, leaving a residue throughout the fuel system. If the debris had accumulated into blockages in the complex plumbing, or the residual solvents had compromised the lines themselves, the module could catastrophically fail after launch. The result was a complex and time-consuming addition to the teardown, one that would have to be finished before work with the Chinese to refit the module to its revised design could even begin. And while it was the most critical of the issues discovered during the inspection, it was far from the only one.

    The doubts of the engineers about the schedule were confirmed as their work proceeded. Despite substantial pressure from above, the Russian engineers and technicians left over from the glory days of Glushko’s grand station had enough pride in their work that corner-cutting was minimized, especially under the watchful eye of the Chinese “technical consultants” sent to supervise every activity on Tiangong, both for future reference and to verify that the work China was paying for was being done as contracted. The fuel system troubles were not as bad as engineers had initially feared--a full teardown was not required, and the critical valves and controls that were removed and inspected needed only minimal additional cleaning to return them to operational condition--a comfort, since the logistics system for spares and replacements had in many cases vanished along with the Soviet Union. However, added to the other delays, it meant a nearly six-month slip of the launch date, enough to push Tiangong’s ride to orbit into 1996.

    At long last, late 1995 saw the module once more shipped to Baikonur for its flight to orbit aboard a Vulkan rocket in June 1996. On orbit, Tiangong spread its solar arrays under the joint command of Chinese and Russian controllers and began its carefully controlled transit to the station. Once translated from its initial axial docking position to its final radial position, Tiangong was boarded for the first time on-orbit by the station’s current two Chinese cosmonauts, who, after verifying that the module had made the transit without damage, saw to the business of activating it and preparing it for operations. In particular, this meant readying the module’s supplemental crew quarters--key, since the minimalist Longxing lacked the massive TKS FGB which served as sleeping quarters for Russian capsules visiting the station.

    With Tiangong on-orbit and Longxing in service in 1997, crew arrangements aboard Mir became even more irregular. Though the core station would retain a crew of six cosmonauts, four Russians and two Chinese, this would occasionally grow to nine with the intermittent arrival of Longxing rotations at Tiangong’s nadir docking port. On such missions, which occurred roughly once a year and lasted between three and six months, Longxing’s flight crew would consist of two Chinese cosmonauts, plus a single Russian-selected cosmonaut flown to the station as part of the Chinese contribution to station operations expenses—barter to somewhat reduce the required Chinese cash payments. Further complicating the tracking of the Russian station complement, the Russians would often take advantage of this additional seat to replace one of their “standard” complement with one of the paying tourists they were finally beginning to host aboard Mir. Compared to the polished regularity of American operations at Freedom, Mir retained much of the ramshackle character it had always had, partially a legacy stretching back to the early Salyut stations, but partially an echo of the dark days of the early ‘90s.

    While China was introducing their own capsule and taking their first steps into manned spaceflight, NASA and its international partners were preparing for their own new steps. The most visible element of the lunar planning was the lunar lander work itself ongoing at Boeing’s ex-Grumman Bethpage Division, but beyond this, work was also ongoing on the vehicles that would get astronauts to the moon, and on the technology and training necessary to make the time on the lunar surface worthwhile. First and foremost in the transport problem was the development of the Block V Apollo capsule.

    Almost exactly 20 years prior, Rockwell engineers had carefully worked to strip from the lunar vehicle every aspect not needed to go to and from orbit, staying only a few days at a time, to create a lighter, more capable taxi to and from the Spacelab station then in development. Now, with the new millennium fast approaching, it was the job of Rockwell’s next-generation engineering team to give back what they had taken, returning Apollo to its full circumlunar capability--but making the fewest changes necessary to achieve this goal. Most importantly, the service module required significant overhauls: despite the theoretical capacity of the Block II AARDV to perform the trans-Earth injection burn that would be the only significant maneuver performed by the capsule itself, it lacked the life support capacity to provide for the crew during their journey there and back, not to mention the long free-flight duration required while they were exploring the lunar surface. The capsule required modification, too, as the heat shield had over two generations been progressively thinned to the level needed for the gentler fire of return from Low Earth Orbit. It would have to be beefed back up if the capsule was to survive the scorching heat of entry from a faster lunar return trajectory. Fortunately, some of the weight increases these enhancements would require were counterbalanced by reductions in the weight of the capsule’s power system. The solar power system added to the service module to allow it to sustain itself for the weeks in transit and waiting patiently at L-2 for the crew’s return from the surface was substantially lighter than the battery system it was replacing--enough that even with all the modifications, the final Apollo Block V CSM would be almost 50 kg lighter overall than the Block IV it replaced.

    The Block V got its first trial in space in March 1996. Flying without any mission module on a relatively unburdened Saturn M02, the first Artemis mission pushed the capsule into a highly eccentric Earth orbit, expending the excess performance in the SIVB stage to put the capsule into a close approximation of a lunar return trajectory. Under computer control, the capsule then deployed its solar arrays and checked out basic flight control, including using its entire Service Propulsion System (SPS) fuel supply to further increase its return speed. The modifications to the service module and life support systems seemed functional, and as the Earth loomed large once again to the cameras placed in window frames to monitor the entry plasma plume, the spent SM was jettisoned. The capsule proceeded alone to test its heat shield and try out a new entry profile.

    Called a “skip re-entry,” this new entry path was a variant on the lifting trajectories used by Apollo since its introduction. In this technique, the entry would not be done in a single pass; instead, the trajectory would be lofted during the initial intersection with the atmosphere to “skip” the capsule back up above the atmosphere, burning off speed before a second, final entry. By modifying the first entry parameters, the initial position and speed of the second entry could be controlled, enabling precise landings even when the final entry came down a substantial distance from the initial skip. Since the movement of the Moon across Earth’s sky would cause significant variation in initial entry positions from month to month, and NASA wanted to continue landing crews in the same area south of Hawaii that they had been using for Freedom and Spacelab crews for nearly twenty years, this had been deemed a very important, though not quite critical, capability for the spacecraft, and there was considerable anxiety in the control room as the maneuver began. However, the careful analysis, plotting, and programming carried out by Apollo’s engineering team was validated as the capsule came back up out of the atmosphere on its skip test precisely on course for its final recovery zone. Even without crew input in the final entry sequence, the flight computer steered the capsule to within just two miles of the planned recovery target, where it was quickly scooped from the water by NASA’s recovery boats, marking the end of a virtually picture-perfect flight.

    The final test for qualifying the Block V Apollo came later in 1996, when a September flight carried a two-person test crew, including the first woman to make it through NASA’s pilot training pipeline, Natalie Duncan. Nat’s first flight, following in the footsteps of Peggy Barnes before her, would be the Public Affairs Office’s dream: the flight not only took the crew to Freedom for a week-long stay to test out the interface between the Block V and the old Block IV MM (a combination intended to shortly replace the Block IV in orbital service), but also took advantage of the volume and mass available on the capsule to carry an IMAX camera system into space. The resulting footage was used as part of a film IMAX was preparing about the current state of spaceflight (inspired by the profits racked up in the Summer of Space and public interest in the Artemis missions), The Dream is Alive, which saw heavy circulation at IMAX theatres in museums around the country. For the first time, audiences on the ground were given a glimpse of the daily operations of the massive orbital complex, and the crew of Freedom Expedition 32 and their part in the preparations for the return to the moon were given substantial attention alongside the groundside testing and preparations for the Artemis flights, with NASA’s first female pilot receiving a large share of the attention as an emblem of the “new NASA” making these lunar flights.

    However, just as critical as testing the landers and capsules was developing and testing the hardware that would be used in the exploration of the surface itself. After all, while NASA had extensive experience with space station flights, it had been more than 25 years since an astronaut had set foot on the lunar surface, and NASA’s surface science teams felt the rust of long disuse. In order to ensure that Artemis missions were successes from a scientific perspective, it would be critical not just to land and return, but to ensure that astronauts had the training, tools, and workspaces to enable their best productivity on the surface. To test these tools, practice techniques, and train astronauts, NASA had been directed even under Administrator Schmidt to begin updated versions of the desert field geology training that he himself had participated in as an Apollo astronaut. The Desert Research and Exploration Analogue Missions (DREAMs) had been in progress near Flagstaff, Arizona since 1992, and consisted both of training astronauts in geological knowledge and field techniques and of testing Earth-based mockups of lunar surface hardware in a simulated environment. As NASA’s surface hardware developers came up with concepts for Artemis, they were mocked up and passed along to the astronauts practicing under the desert sun each summer, and results in turn came back, along with demands for new hardware to meet the geological science needs the astronauts and training staff were working to identify.

    By 1996, the testing had covered everything from airlocks to ziploc baggies and rock drills to rovers, and the selected corps of lunar-bound astronauts were beginning to gel into teams and become reasonably proficient at the practice of field geology. However, the biggest piece of the surface hardware had yet to be finalized: the surface habitat in which the crew would live for two weeks or more on the Moon proved a challenging problem. The habitat was the single most mass-intensive element of the entire mission, and with just 14.5 tons of cargo to be allocated, the nearly 9-ton habitat threatened to cut into the critical mass needed for the surface operations the astronauts would conduct while basing out of it. If the design were built with traditional rigid technologies, the roughly 70 cubic meters considered a minimum for the occupancy planned would barely fit into the available mass budget. The resulting Design Reference Habitat, shown below [2], was quite cramped, requiring extensive multi-use of space and allowing little room for anything beyond basic occupation. However, the development of Kevlar and other woven composites offered another option which could resolve the dilemma.

    [Image: the rigid Design Reference Habitat]


    From almost the earliest days of spaceflight, even before the first space launches, the idea of using inflatable structures had been current in spaceflight circles. Such structures seemed to offer many advantages over more conventional designs resembling metal cans; they could be packed light and compact on a rocket, then expanded into their final shape on orbit, theoretically avoiding the difficulty of free-space assembly and possibly allowing a greater pressurized volume for a given amount of launch mass. However, problems with the available materials, the sheer exoticness of the design in a relatively staid and conservative industry like aerospace engineering, and the fast pace set by the Apollo, Skylab, and Spacelab programs had sent the idea to an early grave. With the beginning of Project Constellation, the idea was revived as an attractive option for longer-duration habitats, as might be needed at a lunar base or on a Mars mission: with the development of new woven composites like Spectra and Kevlar, many of the materials issues that had plagued early designers had vanished, while the advantages of the idea had become even more compelling. The cancellation of much of Constellation’s other BEO plans under President Gore ended up focusing attention on lunar-bound habitats, the only possible near-term application for what work had been done. Branching off on a new tack from the Design Reference Habitat, the team suggested an alternate concept [3]: with the weight saved by reducing the core rigid habitat to just 60 m^3, they believed they could enclose another entire 60 m^3 of volume with a deployable inflatable habitat. Additionally, they suggested that instead of a horizontally-oriented thin cylinder, the remaining rigid habitat be implemented as a wide, squat vertical cylinder, with the inflatable habitat placed as a “loft” on top. Crew quarters and wardroom spaces could be moved into the loft, freeing up room on the “first floor” for fixed installations like a larger galley, expanded hygiene spaces, more stowage, and even an isolated shirt-sleeve geology lab, which they proposed could be used to pre-analyze and screen samples, so that the 200 kg of material planned for return on each Artemis flight could be made up of only the most scientifically valuable samples recovered by the missions.

    [Image: the inflatable “loft” habitat concept]


    When proposed to the main Artemis surface team, the concept gained immediate attention--if it could be proved to work, it could solve the apparently intractable problems of the surface hab. Over the next year, a subscale test unit was developed and tested in the Goddard vacuum chambers, while mockups of the current DRH and the new inflatable concept were tested head-to-head in the summer 1997 DREAMs. The results were conclusive: the astronauts and science teams much preferred the roomier, more capable inflatable habitat, and the ground testing showed that the inflatable design could be relied on even under Earth gravity--it ought to be more than capable of supporting itself on the surface of the moon. As the habitat continued into development toward a final full-scale lunar-bound design, the rest of the DREAMs hardware and training was similarly proving out the other concepts that would be necessary when all the pieces came together to once more put human bootprints on the moon.

    [1] IOTL, the term “taikonauts” as an English-language name for Chinese spaceflyers appears to date from around 1998, and is a bit of an odd combination of Chinese and Greek. ITTL, with Chinese cosmonauts starting out as crew members aboard a Russian station and flying routinely to that station aboard TKS for several years before their own first manned flights, the term “cosmonaut” sticks with them once they switch to their own capsules.

    [2] This design is actually an OTL design for the “minimum reference habitat” from ILC Dover that closely matches the mass and volume parameters we’ve identified as achievable for a rigid Artemis surface habitat.

    [3] This alternate “loft” design is similar to designs studied by university teams for the NASA X-HAB competition, and parameters for the weights and sizes of components were taken from reading several competition papers and applying a little judgement in what was reasonable for Artemis’ planners.
     
    Part III, Post 17: Unmanned Mars exploration, Fobos-Grunt and Phobos
  • Good afternoon everyone! It's that time once again, and today's post should be somewhat timely given the topic of discussion that's popped up overnight, as Workable Goblin takes us on a journey to the Martian moons with the American-Russian collaboration mission Fobos Together.

    Eyes Turned Skyward, Part III: Post #17

    As the Soviet Union crumbled during 1990 and 1991, the Ares Program, part of Project Constellation, was planning a series of missions to Mars during the late 1990s and early 2000s to prove many of the technologies needed for a human flight, as well as return scientific data relevant to the success of a future human mission. With the increasingly open attitude of the Soviets, and eventually the Russians, to international collaboration in space exploration, and a desire by the Bush Administration to diplomatically engage their former opponents, Ares Program management began to consider whether a joint venture with the Soviet Mars program might be usefully incorporated into their plans. The missions to Phobos proposed by the Russians for 1994 and 1998 seemed particularly ripe for outside involvement, especially the sample return mission proposed for the latter opportunity. Although a mission to Phobos was obviously a diversion from Mars itself, it could still prove fruitful in the technology development role, proving a number of technologies and techniques vital for other, more directly important missions, such as autonomous Mars orbital rendezvous, automated Earth return, and long-term operations in cis-Martian space. Moreover, any American assets included in the mission plan could investigate Mars as well as Phobos while in Martian orbit, and it would be possible to carry out other missions in parallel, so that overall Ares Program goals could be achieved while also taking advantage of a historic opportunity for cooperation between two formerly hostile nations. Thus, beginning in late 1991, representatives of Lavochkin, the Russian Academy of Sciences, Johnson Space Center, the Jet Propulsion Laboratory, and the National Academy of Sciences began a series of meetings intended to explore the possibilities of cooperation between the United States and Russia on one or more planetary exploration missions.
In the course of these meetings, proposals were mooted for missions to Venus using the nearly-complete DZhVs-14 hardware, helioscience missions similar to the planned Lomonosov missions, a set of Pluto flybys to follow up Voyager 2, missions to the asteroids, and more, but the subject repeatedly returned to Mars and its nearest moon.

    The tentative mission design that developed over the course of these meetings included significant components from both the United States and Russia. The latter would contribute the launch vehicle, a Vulkan-Blok R, and the Fobos-Grunt (“Phobos soil”) lander and sample-collection vehicle, while the former would contribute a Mars orbiter/return vehicle responsible for collecting the sample container and returning it to Earth, plus a Phobos rover to be landed by Fobos-Grunt to explore the surface of the moon more thoroughly than the stationary lander could manage. In parallel, a Delta 4000 would deliver a pair of stationary landers to Mars to explore each of its poles for a few months. Although this latter was a purely NASA element with no significant contribution by or involvement from the Russian side of the mission, it was nevertheless considered by NASA to be a significant part of the overall Mars/Phobos ‘98 mission concept, kicking off the intense series of missions envisioned by the Ares Program to lay the groundwork for an eventual human flight.

    Gore’s victory in the 1992 election disrupted but did not destroy this tentative program of cooperation. Although Gore was hostile to the massive scope of Project Constellation (and successful in terminating the Ares Program), the nascent Fobos Together mission had the advantage of being a positive diplomatic contact with a new Russia no one was quite sure how to handle yet, and one which could theoretically be leveraged to assist in the significant policy goal of ensuring that Russian technology, especially space and missile technology, did not proliferate and threaten American national security. As such, the mission was reorganized under a different aegis soon after the end of the Ares Program, with a new focus on cooperation with Russia. The mission quickly lost its costly Mars-centered elements, becoming a pure Phobos sample return. Besides eliminating reminders of the old regime, this saved the hundreds of millions of dollars which would have been needed for the lander design and construction, the launch vehicle, and operating the mission itself. The beginning of the Comets and Asteroids Pioneer Program in 1994 further solidified Fobos Together’s place in the American planetary mission canon, justifying it as a valuable precursor mission for the planned capstone comet and asteroid sample return flights. Given the similarity of Phobos’ surface environment and gravity to many asteroids, experience in building systems able to function there would be directly translatable to CAPP missions, while the orbiter’s planned electric propulsion system, more technologically advanced than Piazzi’s or Kirchhoff’s, could be reused for other missions, including those planned for CAPP.

    In the meantime, the initially positive relations between the Russian and American elements of the project were quickly souring, as cultural misunderstandings and technical problems piled up. To the Americans, the Russians seemed sloppy, careless, and unwilling or unable to address serious issues in their spacecraft, with multiple potentially mission-ending problems found during American inspections of spacecraft prototype components. For their part, the Russians viewed the Americans as arrogant and imperious, dictating changes and modifications without consulting their Russian counterparts and without due regard for Russian conditions. Matters came to a head after the disastrous launch of “Grand Tour” in 1996; only minutes after successfully completing its interplanetary injection burn, the spacecraft switched to safe mode, turned its solar arrays away from the Sun, and refused to respond to ground commands, a state it remained in until its batteries expired hours later. The immediate effect on Fobos Together was dramatic, as the NASA contingent, having lost all confidence in the managerial ability of their Russian colleagues, insisted on a thorough examination and review of all components, supervised by themselves, and far more stringent quality control procedures, also managed by the Americans. While the Russians were naturally outraged by these demands, the fact that NASA was in effect providing all of the funding for the mission forced them to accede. The demands drove a wedge into the cracks already opening between the two parties, widening the small gaps into a yawning and irreparable divide. Such a thorough review also forced a slip in the launch date from 1998 to 2001, although difficulties with the Russian manufacturers had been making such a slip look more and more likely in any case.
With American oversight firmly established, the relationship between Russian and American project members became less tense, if still generally unpleasant, and progress on the project became steady if slow. As the new millennium dawned, Fobos Together was clearly on course to launch by the new target date, but the original purpose of the mission was being drowned in a pool of bad blood.

    Nevertheless, it was on course, and in an environment where it had faded into the background as more photogenic opportunities for collaboration and greater concerns over Russian-American friction had arisen this was a powerful asset. Marching forwards, not always steadily, it managed to largely escape critical scrutiny, whether by Congress or the Federal Assembly, maintaining a low but funded profile. In late 2000, a few months after Fobos-Grunt had left the plant near Moscow for Baikonur, the American elements of the spacecraft arrived, ready for final integration into the launch stack. By April of the next year, they were ready, and apparently so was Earth, for a patch of brilliantly clear, cloud-free, cool, and still weather opened only a few days before the beginning of the Mars launch window. In a low-key triumph, overshadowed by the successes of Project Artemis, preparation and countdown proceeded problem-free, and the spacecraft were injected onto a trans-Mars trajectory on the first attempt. It was a far cry from the conditions that had prevailed a decade earlier.

    Once it was confirmed that the spacecraft were on track to reach Mars, they came to life under the guidance of ground controllers, activating the ion drive to assist in the process of reaching Martian orbit, swinging the orbiter’s solar panels to face the Sun, and confirming the good health of the sleeping lander and rover. With all systems checking out, Fobos Together began the long journey to Mars. As usual for spacecraft cruising between the planets, the orbiter’s small suite of scientific instruments was turned towards observations of the Sun and interplanetary space, serving as much as a source of engineering data as of scientific information. Just over six months after launch, in late October of 2001, the stack quietly entered a highly elliptical Martian orbit, gradually braked by its ion engines. Over the coming months, the orbiter slowly lowered itself towards Phobos, gradually circularizing its orbit just under 6,000 kilometers above the Martian surface, allowing it to slowly lap the moon in its path around Mars. As it approached, it imaged the surface of the little moon, collecting compositional data from a few spectroscopes and building on the data returned by Mars 12 and 13 about Phobos. Eventually, it came to a halt just a few kilometers away from the moon’s surface, parking itself at the Mars-Phobos L1 point and releasing the Russian lander to approach the Phobos surface. Gradually, over the course of a day, Fobos-Grunt drifted towards the tiny moon on near-invisible attitude control jets. Without any of the fire and fury of a landing on Mars or the Moon, it finally touched down near the middle of 2002, more than six months after entering Mars orbit.
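The leisurely pace of the orbiter's final circularization makes more sense with numbers attached. As a rough, illustrative sanity check, the sketch below runs real-world values for Mars' gravitational parameter and the radius of Phobos' orbit (assumed constants, not timeline-specific mission data) through the standard circular-orbit formulas:

```python
import math

# Approximate real-world constants (assumptions, not mission data)
GM_MARS = 4.2828e13    # Mars gravitational parameter, m^3/s^2
R_PHOBOS = 9.376e6     # radius of Phobos' orbit from Mars' center, m
                       # (roughly 6,000 km above the surface)

# Circular orbital velocity at Phobos' altitude: v = sqrt(GM / r)
v_circ = math.sqrt(GM_MARS / R_PHOBOS)

# Orbital period at that altitude, in hours
period_h = 2 * math.pi * R_PHOBOS / v_circ / 3600

print(f"{v_circ:.0f} m/s")   # ~2137 m/s
print(f"{period_h:.1f} h")   # ~7.7 h, matching Phobos' real orbital period
```

A period under eight hours means the orbiter and moon circle Mars roughly three times per Martian day, which is why an ion-powered rendezvous could afford to creep up on Phobos over months of lapping.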

    In Russia, it was a minor media sensation. While a Russian had landed on the Moon in 1999, on Artemis 4, that had been three years earlier, and instead of being a mere passenger on another country’s mission (ignoring for the moment the fact that Russian components had been critical to that mission’s success), this time Russia was in the driver’s seat. The mission concept had been a Russian idea, the launch had been on a Russian vehicle, and the lander had been designed, built, and tested in Russia. Never mind that the rover it was carrying was American, that it was carrying instruments from Germany, France, and Italy in addition to its Russian ones, that American quality control had been crucial for ensuring that it actually worked, or that it was only part of the mission, and depended on the NASA orbiter for success; for the moment, all that mattered was that the lander itself was Russian.

    Outside of Russia, and lacking the patriotic and nationalistic overtones inspired there, coverage of the landing was more muted. In Japan, which had no connection to the mission, it was virtually ignored in favor of the country’s own Moon-bound astronauts of Artemis 6 and 7. In the United States and elsewhere it earned a little more coverage, helped by diminishing public interest in the Artemis missions as they came and went, but still not much more than a brief mention on the nightly news, mostly on the strength of the dramatic imagery returned from the surface of the little moon, with the looming disc of Mars overhead covering a vast portion of the celestial sphere, totally unlike anything seen on Earth, or even the Moon.

    Nevertheless, the probe soldiered on, unaware of and unconcerned by the lack of press coverage. After a day of systems check-out, it was ready to take the next step: deploying the rover. A curious creation of JPL, the so-called “rover” resembled its predecessors on Mars or the Moon only in that it was intended to travel across the surface of Phobos to provide a more varied scientific picture of the body, otherwise having almost nothing in common with those spacecraft. The key difference was gravity, or rather, because of Phobos’ small size and low density, its virtual lack thereof. With almost no gravity, there would be virtually no frictional force holding wheels to the surface, turning it into the deceptively rocky, dusty equivalent of an ice sheet. Conventional wheels would be unable to gain traction and would struggle to maintain all but the most modest speeds without spinning out or launching the rover into space.

    Facing this seeming disadvantage, JPL had turned it into the centerpiece of their rover’s movement strategy. Rather than fight the low gravity, the rover, named Sojourner after the abolitionist Sojourner Truth and the fact that it was, as the name said, a wandering traveler, would instead exploit it, using Phobos’ extremely low escape velocity to travel ballistically all over the surface. This could be done using a simple set of springs, compressed using solar power during Phobos’ short days, then released to propel the vehicle across the surface. A set of hydrazine thrusters could be used to adjust the precise trajectory, and an additional set of springs on the sides of the rover, less powerful than the main propulsion ones, allowed it to pop back up into the correct orientation no matter how it landed. All in all, it was a clever design, and one very well suited to moving over Phobos’ surface.
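Just how far a gentle spring release carries a vehicle in Phobos' feeble gravity is easy to illustrate with a back-of-the-envelope calculation. The sketch below uses the real-world figure for Phobos' surface gravity (about 0.0057 m/s², an assumption rather than a number from the mission) and a flat-ground ballistic approximation that ignores the moon's curvature, lumpy gravity field, and rotation, so the results are illustrative only:

```python
import math

# Approximate real-world surface gravity of Phobos, m/s^2 (it varies
# noticeably across the moon's irregular surface)
G_PHOBOS = 0.0057

def hop_range(launch_speed, angle_deg, g=G_PHOBOS):
    """Flat-ground ballistic range for a single spring-launched hop."""
    theta = math.radians(angle_deg)
    return launch_speed ** 2 * math.sin(2 * theta) / g

def hop_time(launch_speed, angle_deg, g=G_PHOBOS):
    """Time of flight for the same hop."""
    theta = math.radians(angle_deg)
    return 2 * launch_speed * math.sin(theta) / g

# Even a walking-pace release of 1 m/s at 45 degrees carries the rover
# well over a hundred meters and keeps it aloft for several minutes.
print(round(hop_range(1.0, 45)))  # ~175 m downrange
print(round(hop_time(1.0, 45)))   # ~248 s aloft
```

The flat-ground formula breaks down for faster hops, since at a few meters per second the trajectory starts to feel the moon's curvature and, near 11 m/s, escapes Phobos entirely; hence the modest spring energies and the trim thrusters.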

    As Sojourner left its storage position on Fobos-Grunt to begin its slow circumnavigation of Phobos, the main lander turned towards its primary mission--extracting samples of the moon for analysis on Earth. Its sampling arms unfolded from their stowed positions and began to delicately poke at the surrounding regolith and rocks, trying to determine which of the several end tools that had been packed would be best suited for sampling the surface. Despite Mars 12’s landing on the moon more than a decade earlier, the physical properties of Phobos’ surface were still relatively unknown. Because of this, it had been deemed unsafe to include just one version of the equipment needed to recover regolith and rock samples from the surface; if the design assumptions that tool had been developed against proved untrue, the entire billion-dollar mission would be an almost complete failure. In the event, the original Russian design proved to be the best suited for the conditions on Phobos’ surface, and after a week of work samples of loose regolith and entire small rocks from all around the lander were neatly tucked away in the sample capsule atop the lander body.

    At the same time, the lander was working on obtaining samples from another area: directly underneath itself. It had long been known that Phobos has an exceptionally low density for an ostensibly rocky body, just 1.8 grams per cubic centimeter; indeed, in 1958, prompted by early observations and estimates which seemed to indicate an even lower density, the Soviet astrophysicist Iosif Shklovsky (probably best known to most readers for his influence on Carl Sagan) proposed that Phobos was actually an enormous hollow artificial body of some sort. While this particular theory fell afoul of better observations, it contained a kernel of truth, as those same observations showed that Phobos must have a considerable amount of so-called “void space,” where the chance accumulation of mutually gravitating fragments had left small gaps and cracks of empty vacuum within the body. The remaining question was what, exactly, the moon was made of, and it was on this question, and this one alone, that the whole Fobos Together mission had been founded, for there were two facts about the moon which seemed to point in entirely contradictory directions.

    First, it was clear from even the most cursory observations that Phobos had an extremely low albedo--that is, that it was extremely dark, nearly as black as fresh asphalt. By itself, this would not be so strange, as many C-type objects, whose composition is associated with the carbonaceous chondrite meteorites, are also quite dark in color, and it is plausible that during the early formation of the solar system such material could have coalesced to form a moon of Mars, whether around the planet itself or elsewhere in prelude to a later capture. The problem arose because spectroscopic observations of Phobos’ surface indicated that it was as dry as the Moon, with almost no water at all. Carbonaceous chondrites, however, contain a great deal of water, leading to a puzzling contradiction with the albedo data, as well as with other lines of investigation pointing towards a carbonaceous chondritic composition. Two theories had arisen to try to resolve this complication. The first proposed that the outer surface of the moon had simply been altered by billions of years of impacts, with whatever water had been locked into hydrated minerals having been driven out by shock-heating, leaving a dry, powdery regolith crust over a wetter interior; the second argued that the moon was actually composed of more typical ordinary chondritic materials and had had its appearance darkened to the color observed by prolonged bombardment. Both of these theories had external support: it took little imagination to see how impacts could gradually drive off water trapped in hydrated minerals from surface material, while the existence of so-called “black chondrites,” transformed in exactly the way the second theory proposed, lent that idea considerable support. However, in both cases the essential information needed to differentiate between them was locked away under the moon’s surface.

    Therefore, from the very beginning of the mission it was considered essential to include a tool capable of digging much deeper under Phobos’ surface than the simple grab tools and sifters of the primary sample collection arms, a core sampler. While space and mass constraints prevented inclusion of a tool able to dig really deep into the moon, it was hoped that even a shallow core could reveal possible gradients in volatile content that could point to the existence of more volatile-rich interior material. Knowledge of whether or not it did would be valuable to the Ares Program; if Phobos was as water-rich on the inside as the first theory predicted, it would have a reserve of potentially billions of tons of extractable water, enough to easily supply an orbital base and decades, if not centuries, of missions to and from the Red Planet. While NASA was not undertaking an active Mars program, nor expecting to in the next few decades, the purpose of the Ares program was still to provide the knowledge needed to plan any such missions, and the presence or absence of such a massive and easily accessible water reserve was certainly something that would be important to determine before any long-term plans were drawn up. When combined with the major technical demonstrations included in the mission, Fobos Together was perhaps the most important overall probe of the Ares Program.

    In any case, despite early problems with the drill motor, Fobos-Grunt spent several weeks digging into Phobos’ surface, obtaining partial samples from up to three meters under the surface and a complete core of the first ten centimeters of regolith (the longest section that could be fit within the sample capsule). With both core and surface samples recovered, only one last step needed to be taken for the lander’s part of the mission, at least, to be a complete success: launch. Fortunately, in the extremely low gravity of Phobos, this was not much of a challenge; with an escape velocity of just 12 meters per second, and a planned rendezvous near the Mars-Phobos L1 point (requiring even less delta-v), a set of springs very much like the ones used on Sojourner was more than sufficient to launch the capsule towards the orbiter, waiting overhead, in late August of 2002. Within minutes the orbiter had locked on to the sample capsule, and on gentle breaths of ion breeze it quickly coaxed the capsule into its final storage position. As soon as the two had connected, the orbiter turned its attention to the long journey home, boosting away from Phobos on its ion drive.
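
    That escape velocity figure is easy to sanity-check from first principles. The mass and mean radius below are published real-world values for Phobos, assumed here for illustration:

```python
import math

# Escape velocity check for Phobos: v_esc = sqrt(2*G*M/r).
# Mass and mean radius are published real-world figures (assumptions here,
# not numbers from the mission description).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 1.066e16       # kg, approximate mass of Phobos
r = 11.1e3         # m, approximate mean radius

v_esc = math.sqrt(2 * G * M / r)
print(f"Phobos escape velocity: {v_esc:.1f} m/s")  # ~11 m/s, consistent with the ~12 m/s quoted
```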

    Even as the orbiter departed, though, the surface elements were still active and returning data to Earth. Sojourner continued to relay data from all over the surface through Fobos-Grunt, while the lander’s own suite of instruments silently collected data from around the landing site, even performing some in-situ analyses of collected material while the bulk returned home. After all, this departure had been planned from the start, and it had taken little effort to make Fobos-Grunt capable of transmitting directly to and from Earth, not just to the orbiter. Indeed, disregarding commanded shutdowns from Earth, the only threats to their continued operation were themselves. Sojourner was the first to cease functioning, running out of vital hydrazine in early 2003, after just over six months of operation. With no way to trim its trajectory, it would have been unable to make a precision return to Fobos-Grunt for updated commands or to relay any recorded data. Emergency instructions for just such a case had been included in the rover’s memory, however, and it is assumed that it performed a nominal shutdown in line with the operations plan uploaded a few weeks earlier. If so, the rover’s hardware is likely, given the vacuum and quiet of the moon’s surface, relatively intact; the electronics may have been damaged by cosmic ray and solar radiation bombardment, but the mechanical systems should still be operational if a future mission travels to the moon.

    With no consumables to exhaust and no need to move, Fobos-Grunt proved much more durable. The Russian lander soldiered on long after Sojourner had given up the ghost, relaying measurements to its Russian controllers. As one of the only active Russian planetary spacecraft, and still scientifically productive, it benefited from concerns of prestige and image that demanded the Russian government continue to provide the relatively paltry sum needed for continued operation. Indeed, Fobos-Grunt would have continued its mission indefinitely had a relay in the power control system not failed in mid-2008, preventing the batteries that powered the lander through the night from charging during the Phobos day. With the moon’s day-night cycle only eight hours long, within a day the lander had permanently expired from loss of power.
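
    That eight-hour day-night cycle follows directly from orbital mechanics: Phobos is tidally locked, so its day equals its orbital period around Mars. The gravitational parameter and semi-major axis below are real-world values, assumed here for the sketch:

```python
import math

# Phobos' day length from Kepler's third law: T = 2*pi*sqrt(a^3 / GM).
# Both input figures are assumed real-world values, not from the text.
GM_mars = 4.283e13   # m^3/s^2, Mars gravitational parameter
a = 9.376e6          # m, Phobos' orbital semi-major axis

period_h = 2 * math.pi * math.sqrt(a**3 / GM_mars) / 3600
print(f"Phobos orbital period: {period_h:.1f} hours")  # just under 8 hours
```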

    While Fobos-Grunt and Sojourner continued their missions, the orbiter was breaking out of Martian orbit on the journey home. Powered by its ion engines, it retraced the trip it had taken just two years earlier to reach Mars, through the rest of the year and into the next. Just two weeks before it returned to Earth, the reentry capsule, into which the sample capsule had been tightly packed after its recovery, separated and headed directly for the tiny blue crescent ahead. Its mission complete, the orbiter began nudging itself away from its home planet, diverting itself into a solar orbit where it would continue to operate as an interplanetary monitoring station and testbed, operating its ion engines until they ran out of propellant or failed.

    Meanwhile, the return capsule plunged into the atmosphere above the Sary Shagan test range in Kazakhstan, less than a thousand miles from where it had departed Earth. An anti-ballistic missile testing range, Sary Shagan was well-equipped to track and follow the trajectory of objects reentering the Earth’s atmosphere, and almost as soon as the capsule entered it was being tracked by the site’s giant radars, a peaceful application of the technology they represented. While the capsule was tracked to its landing just outside and to the northeast of the site’s boundaries, unseasonably poor weather and nighttime conditions prevented immediate recovery. Instead, at dawn the next morning, when the weather front had passed and visibility had returned to normal, a squadron of helicopters departed for the predicted touchdown location. With the aid of the onboard radio beacon, it took less than an hour for the recovery crews to find the reentry vehicle, which was immediately transported to the facility’s airstrip, where a long-range transport was waiting. Sealed in a special container pressurized with pure nitrogen to prevent contamination by ambient air, the whole return capsule was immediately flown to Moscow, where it was carefully disassembled by a Russian team at the Vernadsky Institute of Geochemistry and Analytical Chemistry, or GEOKhI, the central Russian institute for the storage and curation of extraterrestrial materials. After an initial cataloging to precisely record the samples available, and their mass, volume, and general type, a group of Russian and American scientists carefully divided the whole as they had agreed in the original mission planning, reserving 51% of the material for continued storage in Moscow, with the other 49% being transported to the Lunar and Extraterrestrial Sample Laboratory Facility in Houston for American study.

    Naturally, as soon as both laboratories received their final allocations of Phobos material, an intensive process of study began. These analyses, first and foremost, gave qualified support to the carbonaceous chondrite theory of the moon’s composition, showing an overall composition far more similar to that type of meteorite than to “black chondrites”. Nevertheless, there were still puzzles in the data; in particular, the core and deep samples, which had been expected to show at least some hint of increased water content at depth, remained as stubbornly bone-dry as the surface regolith, leading to suggestions that some catastrophic event early in the moon’s history, perhaps during its formation, had desiccated it, driving off all of the water and other volatiles. Further support for this theory came from careful tracking of the orbiter while it was in close proximity to Phobos, which seemed to indicate fairly significant fluctuations in density throughout the body. In particular, there seemed, from the relatively low-resolution data available, to be distinct “nuggets” of higher-density material contained within a “fluffy” low-density core, which itself was overlaid by a relatively dense surface crust. Several theories have arisen to explain this pattern of densities, but the most popular relates it to the relatively energetic collision of several proto-Phobos bodies of different composition in Martian orbit; most of the material would have remained within the Martian gravity well and eventually recoalesced into one or more successor bodies--Phobos, and perhaps Deimos or even other, now lost moons of the planet. Besides mixing materials of several different types, these collisions would have driven off any water that might have been present in the source material, leaving dry and desiccated rock behind. Nevertheless, this theory is not the only contender, and even the continued analysis of samples from the moon has not produced any definitive conclusions.

    As with every space mission, Fobos Together had created new mysteries even while it was discovering new facts, revealing Phobos to be a little world just as worthy of study in its own right as any other. Despite having instigated the mission, however, Russia has no plans to return to Phobos and push the boundaries of our knowledge of the moon further. Instead, buoyed by the unquestionable success of Fobos-Grunt and benefiting from the technical development invested in the mission not only by themselves but also by the United States, Russian planners have drawn up a range of new missions building on its success: Luna-Grunt, Vesta-Grunt, and, the ultimate prize looming as large in the imagination of Mars planners as it has for the past forty years, Mars-Grunt. While Mars-Grunt remains little more than paper plans, Luna-Grunt is scheduled for launch in 2016, and Vesta-Grunt by the end of the decade. With a considerable amount of lunar material already in labs worldwide from the Artemis missions, the purpose of Luna-Grunt is less the mere collection of lunar material and more to show that Russia can, indeed, launch and operate missions, even complex ones, on its own, and its success will be an important step forward for their program.
     
    Part III, Post 18: Boeing-Grumman and the design and testing of the Artemis Altair lander
  • Good afternoon everyone! It's that time once again, so here we are. Last week, we covered the joint Russo-American Phobos sample return mission, Fobos Together. This week, we're moving back towards the efforts aimed at another, closer moon. This post is one that's been a long time coming, but I think next week's may be slightly more hotly anticipated. ;) Anyway, without further teasing, let's be about it...

    Eyes Turned Skyward, Part III: Post #18

    By the mid-1990s, Boeing’s position in the space industry had grown to one that other companies, be they existing competitors or new upstarts, were well justified in envying. With an effective state-sponsored monopoly in large launchers due to ongoing NASA and DoD support for the Saturn Multibody family, and the approaching promise of Artemis flights on the Heavy variant of the booster, Boeing was in the attractive position of having guaranteed and stable profits in its space division, even before it had clinched the Artemis lander contract with the purchase of Grumman and its acquisition of that company’s talent and experience.

    Even the most stable monopoly brings its own challenges, however, and in this respect Boeing’s position was perhaps not quite as desirable as entrepreneurs competing with the emerging Internet boom for venture capital might have wanted to believe. With the Saturn Multibody uncompetitive in the commercial market due to excessive size and relatively high per-launch costs, Boeing-Grumman possessed no entrant in the rapidly growing and, as Lockheed was showing, profitable commercial market. Reliable and reasonably cost-effective for large governmental payloads like Freedom resupply and rotation missions or military spy satellites, even the smallest Saturn variant faced the same problem as Vulkan in competing for commercial dollars, being too large for even the largest commercial satellites. Moreover, unlike the cash-strapped Russian program, to whom selling commercial Vulkan was very nearly a matter of life or death, Boeing’s guaranteed governmental contracts ensured that Saturn would always have a nice, stable cash flow, with a virtual floor of nine flights per year, shooting up to ten or eleven in some years. With Artemis, requiring a further nine cores per year, looming on the horizon, there was even less pressure for Boeing to try to compete; even if they wanted to, their manufacturing operations at Michoud would be near their limit at 18 cores per year, and for all but the most lucrative and long-term contracts the expense of expanding their operations would outweigh the revenue possible from more flights.

    Under these conditions, Boeing’s management was largely content to let the space division run itself, choosing instead to focus attention on the highly competitive airliner market, where they were facing severe pressure from Airbus, Lockheed, and McDonnell Douglas on one side and smaller firms like Bombardier on the other, and on the upcoming Joint Strike Fighter contract, possibly worth a trillion dollars or even more over the next several decades. Indeed, their purchase of Grumman had been largely intended to improve their positioning for this competition, whose winner would likely dominate American tactical fighter production--and therefore export markets for American fighter aircraft--for decades to come. Compared to the serious competition they were facing in both sectors, the stable, profitable, and safe space market seemed worthy of little focus from the greater corporate entity.

    To their customers, of course, Multibody was an important part of their space operations, and whatever shortfalls in attention Saturn might have suffered from Boeing, it certainly lacked none from NASA and the Air Force, especially as Artemis continued to advance as a program. In order to achieve the best possible performance from the Saturn Heavy in its role as the critical Artemis launch vehicle, Boeing was tasked by NASA with performing its own version of the “interim improvements” already undertaken by McDonnell on the Delta 4000 family. Compared to the massive overhaul given to Delta, though, Saturn’s facelift was minor, mainly focusing on production streamlining, the introduction of improved models of the J-2 second stage engine, and the replacement of the aluminum skin of the S-IVB and C upper stages with lighter-weight aluminum-lithium alloys. Altogether, it was enough to push the payload of the Saturn H03 to over 80 tons, an increase eagerly put to work by Artemis’ mission designers.

    Of course, one of the design bureaus benefiting most from the changes was Boeing-Grumman’s own Bethpage spaceflight division, inherited from Grumman, which retained its responsibility for the design of the Artemis lunar lander. The task of the lander design was complicated by the fact that, like the original Apollo lander, it was really not one but two spacecraft: the descent stage and the ascent stage. Even compared to the Lunar Module, however, the new design would require an unusual amount of independence between its two parts, driven by the fact that the two stages had their own crucial and separate missions. Unlike Apollo’s descent stage, Artemis’ would also be used independently for cargo flights, and would therefore require its own attitude control, gyroscopes, radar, and computer systems to allow it to land autonomously on the surface of the moon at sites precisely selected from orbital imagery that the international flotilla of precursor probes would provide. Meanwhile, the ascent stage would have a crucial life-support role through all the stages of the flight, serving not only as the sole transport vehicle for the journey between the Moon and L-2, but also as a key extension of the living space available within the Apollo itself on the voyage to and from the Earth. Both stages would also need to be much more capable as rockets than the original Apollo LM, in order to travel all the way to and from the Apollo capsule waiting patiently at L-2.

    Additional challenges came with the fuels required for the trip. While Bethpage had recent experience with hydrogen-fueled landers, the need to store cryogenic fluids for the entire coast to the moon was a new challenge, requiring the solution of new problems in insulation and thermal management to ensure adequate supplies of these propellants throughout the mission. With the higher specific impulse of the new RL10A-4 engines being key to the mission design, however, these problems had to be solved if Artemis was to succeed; and lurking in the back of the mind were always the similar but far greater challenges posed by Mars missions, even if NASA was not officially pursuing the Red Planet. Three of these engines would be fitted in a line on the bottom of the descent stage, with all three used for the powered descent initiation (PDI) burn, which would slow the lander, bringing it out of its trajectory from Earth or L-2 and setting it on course down to the surface. However, for the final part of the descent to touchdown, firing all three engines would require excessively deep throttling, so the plan was to proceed to touchdown only on the center engine or (if that engine failed to restart) on the paired outer engines. Despite this theoretical redundancy, ensuring that restart would be reliable and guaranteed was a paramount concern during vehicle development and testing. The stage’s propellant tanks would be clustered within its large octagonal structure, which would also provide ground-accessible cargo bays for the mission’s rovers and other surface hardware, along with a wide platform for the other cargo on top of the stage.

    For the ascent stage, fuels and engines were again a concern, though from a different perspective. Even the sophisticated new insulation designs being developed for the descent stage would have trouble keeping cryogenic liquid oxygen and, especially, liquid hydrogen fluid through a two-week lunar surface stay, and after a brief study of alternatives both Boeing-Grumman and NASA had concluded that the tried and tested hypergols used in the original Apollo Lunar Module would have to be used for the new lander’s ascent stage as well. However, since the ascent stage’s fuel was itself cargo for the descent stage, and the extended use of the ascent stage as a mission module placed rather firm minimums on its mass, it was critical that the ascent stage achieve the delta-v it needed on as little fuel as possible. To accomplish this, NASA and Boeing had to look outside the United States--where pressure-fed engines were state-of-the-art for hypergolic fuels--to Russia, where brilliant engineers had rejected the American approach of switching to fuel combinations with a superior specific impulse and had instead pushed hypergolic propellants to their uttermost limits. The resulting closed-cycle engines had specific impulses closer to those of kerosene-liquid oxygen engines, often ten or twenty seconds greater than their American counterparts, yet still used dense and highly storable hypergolic fuels. With an extensive flight history allaying American concerns about the relative reliability of pump-fed closed-cycle engines compared to pressure-fed designs, the engine accepted as the ascent stage’s powerplant was the S5.92, originally designed for the latest generation of Soviet deep-space probes but subsequently used as a mid-performance competitor to the Blok R upper stage and as a performance upgrade for smaller rockets. Three would be clustered at the stage’s base, allowing the stage to return to L-2 even if one of them failed, whether on the surface or during ascent.
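
    The payoff of those extra seconds of specific impulse can be illustrated with the Tsiolkovsky rocket equation. The delta-v and Isp figures below are assumptions for the sketch (roughly 2.5 km/s for an ascent to L-2, ~300 s for a pressure-fed hypergolic engine versus ~320 s for a pump-fed closed-cycle one), not numbers from the program:

```python
import math

# Propellant savings from a higher-Isp engine, via the rocket equation:
# mass ratio = exp(dv / (Isp * g0)). All figures are illustrative assumptions.
g0 = 9.80665   # m/s^2, standard gravity
dv = 2500.0    # m/s, assumed ascent delta-v

def mass_ratio(isp_s):
    """Initial-to-final mass ratio needed for the given delta-v."""
    return math.exp(dv / (isp_s * g0))

# Propellant needed per unit of dry-plus-payload mass:
prop_pressure_fed = mass_ratio(300.0) - 1.0   # assumed pressure-fed Isp
prop_closed_cycle = mass_ratio(320.0) - 1.0   # assumed closed-cycle Isp
savings = 1.0 - prop_closed_cycle / prop_pressure_fed
print(f"Propellant saved per tonne delivered: {savings:.0%}")  # roughly 9%
```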

    The ascent stage design consisted of a rather squat vertical pressure vessel, with the engines clustered at the bottom and an Apollo drogue port at the top. Propellant would be divided into four tanks, two each of nitrogen tetroxide and UDMH. While slightly heavier than the single tank of each that had been provided for the Apollo LM and had given it its distinctive “off-center chipmunk” appearance, Artemis’ higher payload capability meant that trimming every spare ounce of weight wasn’t quite as critical, and stage balance could be more easily achieved in the four-tank design. A side pressure hatch would provide the entrance into the spacecraft’s airlock module, which would be left behind on top of the descent stage when the spacecraft departed the lunar surface. For safety reasons and to take advantage of a proven system, the life support systems of the ascent stage were subcontracted to the same firm that provided systems for the Rockwell Apollo capsules. Carbon dioxide filter systems and other critical spares aboard the surface habitat, the ascent stage, and the Apollo capsule would be interchangeable--providing protection throughout the mission from the kinds of hassles that had complicated the use of the Apollo LM as a lifeboat during Apollo 13’s flight.

    After the lander design reached critical design review in 1995, two years after the awarding of the contract in 1993, work proceeded apace on hardware development and various ground-based testing. Component-level testing of landing gear reactions to the shock of lunar touchdown, breadboard examinations of radar, the construction and programming of the lander’s twin guidance systems, and much more took place throughout 1995 and 1996 while work on the manufacturing of structural demonstrators took place. Finally, in spring 1997, the first structural test vehicles passed initial pressure testing, and integration began on the first complete test vehicles. When these were completed, one was shipped to NASA Glenn’s Plum Brook Station for full-scale testing in the facility’s massive vacuum chamber, as well as aeroacoustic tests. As these tests began, the next vehicle, consisting of a descent stage only, was being finished ahead of its date with space.

    The first flight of the Artemis descent stage came in February 1998 under the mission name Artemis 2. Together with a Pegasus third stage, the vehicle was carried into orbit on a Saturn H03, carrying on its deck a functional (though not furnished) surface habitat. Pegasus had completed its own demonstration flight in October 1997, flying partially-filled as a third stage on a Saturn M02, which placed the depleted stage into heliocentric orbit. On Artemis 2, as on an operational cargo flight, the Pegasus was fired partially during ascent in order to place itself and the payload into orbit. Though the hardware carried by the launch was essentially the same as the planned final cargo lander delivery stack, there was one key difference.

    While cargo flights would launch with their Pegasus departure stage, the heavier crew stack would require a full Pegasus stage for Earth departure, and thus would be too heavy for a single H03--even the uprated IIP H03--to loft. Instead, the Pegasus and the crew stack would have to rendezvous and dock in orbit, which had posed a serious design problem--the development of a docking standard capable of holding the stack together during the departure burn under hundreds of kilonewtons of compressive force. While this force requirement was far beyond the capacity of the CADS docking ring alone, CADS was capable of handling the initial docking loads. Thus, NASA sought to avoid reinventing the wheel by building on the CADS design. The final docking standard developed, the Large Payload Attachment System (LPAS), would consist of a CADS ring augmented by a second, large-diameter mating ring. The CADS docking ring and petals would serve to guide the lander and crew capsule (as the active vehicle) into a docking with the passive Pegasus. Once docked, the retraction of the CADS rings to effect hard dock would also bring together the outer force-transfer rings, which would be rigidized by a set of electrically-driven bolts. Artemis 2 carried a pre-mated version of this hybrid LPAS system between the descent stage and Pegasus, instead of the single-piece fixed truss that was intended for operational cargo flights.
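
    As a rough illustration of why CADS alone could not take the departure loads: the compression through the ring is simply the mass ahead of it times the acceleration of the burn. The 70-ton stack mass here is a purely illustrative assumption; the 0.5 G acceleration is the structural limit the text quotes for the probe-and-drogue system:

```python
# Back-of-the-envelope compressive load on the docking ring during the
# Earth-departure burn. Stack mass is an illustrative assumption; the 0.5 g
# acceleration limit is the figure quoted in the text.
g0 = 9.80665               # m/s^2, standard gravity
stack_mass_kg = 70_000.0   # assumed mass pushed through the ring
accel = 0.5 * g0           # m/s^2, quoted structural limit

force_kN = stack_mass_kg * accel / 1000.0
print(f"Ring compression: ~{force_kN:.0f} kN")  # hundreds of kilonewtons
```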

    Once on orbit, the lander was powered up and its systems checked out and verified as functional. Then, the lander retracted the bolts on the hybrid ring and separated from the Pegasus. Over several days, during which time the temperatures of both hydrogen-fueled stages were monitored, the lander practiced docking to the Pegasus under a variety of lighting conditions, proving that the hybrid system could be relied upon for future crew flights. With the system fully proven, the lander conducted one final docking with the Pegasus, and the Pegasus was fired up to send the lander into a highly elliptical orbit through Earth’s Van Allen belts. With this step completed, the truss attaching the descent stage to its side of the hybrid ring was separated with explosive bolts. Once it was cast loose, Pegasus conducted one final maneuver to lower its perigee to intersect Earth’s atmosphere for disposal. The inflatable “loft” of the surface habitat resting on the descent stage’s deck was deployed, and dosimeters throughout the habitat’s cabin were used to monitor the radiation attenuation at various positions in the cabin, including the loft and the “storm shelter” within the rigid portion of the habitat, confirming that the habitat would be capable of protecting astronauts from excessive radiation doses while on the lunar surface.

    Meanwhile, the NASA operations team and Boeing engineers monitored the performance of the lander’s computers and other systems as it carried the habitat through the belts high above the Earth. Just as designed, the lander’s computers had little trouble dealing with the radiation-filled environment of the belts--qualifying both the computers of the descent stage and the modified versions which would control the ascent stage. With the proving passes complete, the descent stage fired its engines in space for the first time, lowering Artemis 2’s orbit below the belts. A number of additional burns were conducted, altering the mission’s inclination and consuming delta-v without changing the orbital altitude as NASA confirmed that the lander’s engines would reliably relight and that the lander’s computers could handle the problem of guiding the stage. The surface habitat was monitored, watching the pressure inside the loft for thankfully-absent leaks--NASA’s gamble on inflatables was paying off in its first in-space deployment. Finally, after almost a week in space and almost a dozen firings of the engines, which had shown not a single failure to light, Artemis 2 conducted a final burn that sent it on the same track as the Pegasus stage which had carried it to orbit, speeding low into the Earth’s atmosphere before breaking up in a fiery tail of debris. The first Artemis lander flight had been a complete success.

    Due to the use of LC-39’s facilities for Freedom logistics operations and the pace of Bethpage and NASA’s evaluations and tweaks to the lander, it was five months before Artemis 3 would follow in Artemis 2’s path. June 1998 saw the first dual-launch Artemis mission, with an H03 carrying up a crew-configuration lander similarly pre-docked to a Pegasus stage, met in orbit by a lunar-configuration Block V Apollo capsule launched aboard a Saturn M02 the same day. The Artemis 3 crew, led by veteran pilot Jack Bailey (who had also been the first commander of Freedom), consisted of four pilot-trained astronauts, including Chris Valente, an experienced commander in his own right. After several trials duplicating the docking carried out by Artemis 2’s computers, Bailey’s crew fired their Pegasus stage to place themselves on a similar belt-passing trajectory. Unlike the departure burns on Apollo, the Artemis stack would have its crew “eyeballs out” for the trajectory, with Apollo’s nose facing aft. However, because the maximum force that could be passed through the Apollo probe-and-drogue connection limited the stack, for structural reasons, to a mere 0.5 G of acceleration, the Artemis 3 crew experienced little overall discomfort.

    After the burn was complete and the Pegasus had been cast loose for its date with destruction over the ocean, the Artemis 3 crew opened the hatch between the lander and the capsule and began to power up the lander’s ascent and descent stages, testing their systems as well. At perigee, after the vehicle’s trouble-free pass through the belts, the crew used the descent stage to lower the stack’s orbit below the belts, at which point Bailey transferred to the lander while Valente took control of the Apollo. Without the mass of the ascent stage, the Apollo’s 2.25 tons of return propellant gave it 600 m/s of delta-v while retaining margin for landing, so Bailey’s crew aboard the lander performed a number of burns over the next day to “bounce” the lander’s inclination four degrees above the base 27.5-degree inclination, this being within the inclination change that would allow Valente’s Apollo to come after them should they suffer issues. On their next pass over the equator, they fired the descent stage again, swinging back across 27.5 degrees to four degrees below it (23.5 degrees), and then on another pass returned to 27.5 degrees to meet back up with the capsule. In total, it was sufficient to demonstrate the delta-v of nearly a full lunar descent, with the four starts and shutdowns being used to qualify the engine’s start response and burn residuals under a varied set of throttle and ignition conditions. The descent stage was then ejected and placed into a decaying orbit with the last of its fuel, while Bailey repeated the process using the ascent stage’s engines. At the end of the “relay race” Artemis 3 flight, the lander had been tested and qualified as thoroughly as possible, short of actually landing on the moon.
While Valente and his co-pilot waited, “hands-off” but ready to take action in case of an emergency, Bailey and his co-pilot practiced the process of docking the ascent stage back to the Apollo capsule, using the ascent stage as the active vehicle--as would be necessary during the return to the quiescent Apollo capsule after an Artemis flight. With this final task completed, the Artemis crew returned to Earth, once more testing the “skip entry” technique for a pinpoint landing off Hawai’i.
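
    The margins in that “relay race” can be checked with the standard plane-change formula, dv = 2·v·sin(Δi/2). The orbital velocity here is an assumed typical low-Earth-orbit figure, not one given in the text:

```python
import math

# Delta-v cost of a pure four-degree plane change at an assumed circular
# low-Earth-orbit velocity: dv = 2 * v * sin(di / 2).
v_orbit = 7730.0            # m/s, assumed circular LEO velocity
di = math.radians(4.0)      # the four-degree inclination hop described above

dv_plane_change = 2 * v_orbit * math.sin(di / 2)
print(f"4-degree plane change: {dv_plane_change:.0f} m/s")  # ~540 m/s, inside the 600 m/s quoted
```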

    The question of the next test flight had been a topic of hot debate within NASA’s management. In the original plans, a separate test flight of the lander ahead of the first manned mission’s cargo lander had been called for (and budgeted). After all, despite their ventures into extreme-altitude orbits, none of the Apollo test missions so far had even approached the lunar sphere of influence. However, a manned landing test would require an additional two Saturn Heavies, at a cost of almost a billion dollars. While an unmanned touchdown of the cargo lander would achieve a similar goal for just half the cost, it would then mean that another half-billion would have to be spent acquiring a second cargo lander, with the 14-ton payload of the demonstration lander squandered carrying a mass simulator. However, in 1996, a surface hardware group study had kicked around the idea of taking the chance to test the surface habitat on the lunar surface ahead of its first operational use. After all, the cost of the surface habitat was nothing compared to the cost of the unmanned test landing itself, and flying it would provide a valuable chance to test the habitat once more.

    The report’s authors were rather startled to find themselves invited to fly to headquarters to brief none other than Administrator Davis himself, whom their memo had apparently reached. Expecting a lecture about unnecessary costs (Davis’ frugality and unwillingness to suffer fools having become infamous within NASA circles), they were instead surprised to be interrogated not just about how they’d developed their thoughts, but about the potential cost of simply fitting this test habitat out as a full cargo landing mission--after all, weren’t the final EVA suits and other fittings also rather trivial compared to the total mission cost? And, in this case, if the initial landing worked, the payload left on the lunar surface wouldn’t even be a spare test article, but the full first landing site, ready for the crew to join it--saving the half-billion dollars of the first cargo flight, and valuable time against the officially unrecognized but well-understood 30th-anniversary deadline. The rest of the Artemis hardware stood largely ready, with the Mesyat network in place, the Apollo Block V already entering service to replace the Block IV for Freedom logistics, and the surface hardware teams clearly champing at the bit to get their first tests on the surface. Far from receiving a reprimand for thinking wastefully, the report’s writers were told to put together a team to study the question and analyze the savings in comparison to the mission’s odds of success. When the initial Artemis test flights were completed in mid-1998, Lloyd Davis thus came to Boeing-Grumman Bethpage and the Artemis program office with a simple question: were they more than 10% confident the landing would succeed? The combined staff indicated that they were far, far more confident--more like 80% to 90% sure.
This dramatically exceeded the “magic number” that Davis’ informal research had suggested as a minimum break-even point, and thus Davis made his decision, the so-called “banker’s bet”--the next Artemis lander flight would be delayed from the scheduled September flight to the other side of the October Freedom crew rotation, into November. However, Artemis 4 would be going to the moon not as a simple test, but as the first cargo landing of the Artemis manned flights--simultaneously a test and an operational flight. If it succeeded, the manned landing could follow as the next flight. If it failed, then it would have served its function as a test. With all the components ready and a bet on success, the countdown was on to the return to the moon.
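    The logic of the “banker’s bet” can be reconstructed as a simple expected-value calculation, sketched below. The ~$500 million cargo-flight saving is from the text; the $55 million fit-out cost of the real payload risked on a failure is a hypothetical figure, chosen here only to illustrate how a break-even near the quoted 10% threshold could arise.

```python
def expected_saving(p_success, cargo_cost, payload_cost):
    """Expected net saving of flying the test landing as a full cargo
    mission rather than with a mass simulator: success saves a dedicated
    cargo flight; failure writes off the real payload that would
    otherwise not have been risked. Units follow the inputs."""
    return p_success * cargo_cost - (1.0 - p_success) * payload_cost

# Illustrative figures in $M (the $55M fit-out cost is an assumption).
for p in (0.05, 0.10, 0.50, 0.90):
    print(f"p = {p:.2f}: expected net saving = {expected_saving(p, 500.0, 55.0):+7.1f} $M")
```

Under these assumed numbers the bet breaks even near 10% confidence, and at the 80-90% confidence Bethpage reported, the expected saving approaches the full cost of a dedicated cargo flight.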
     
    Part III, Post 19: The growth of mobile satellite communications in the US and Europe
  • All right everyone. After about 7 hours on the road today and no fewer than 5 states, it's finally that time again. :) I'm hoping to still be able to get the next Artemis post done for next week--I've had it percolating most of the day, but it'll depend on when and if I have time to write with the other stuff going on.

    Second, a reminder that we would appreciate people's thoughts on which of Nixonshead's images should be selected to represent his portfolio in the Turtledove nominations. They're all collected here on the wiki if you want a chance to review them all together--we're interested in any thoughts. Also, while on the topic, thanks go to him for providing some technical insight and suggestions that ended up forming the base of a lot of this post.

    Anyway, with that business out of the way, without further ado, let's get into today's post...

    Eyes Turned Skywards, Part III: Post #19

    Even though Motorola was the first to see the potential of a low-orbit constellation of satellites for telecommunications, they were far from the only company to throw their hat into the ring in the first half of the 1990s. Dozens of companies, ranging from giants like TRW and RCA to tiny startups like Teleworld and Starcomm, quickly followed Motorola into the field, with proposals ranging from Teleworld’s giant swarm of hundreds of satellites, intended to provide global high-speed internet service, to more modest systems intended simply to provide regional telephone service. While American companies were taking the lead, boosted by the strong American commercial space sector and the loosening of regulations that had taken place during the 1980s, firms and even sovereign governments from Japan to Brazil were following close behind.

    What all of these promoters shared, however, whether they were a government agency or a private corporation, whether they were based in Tokyo or Los Angeles, was a firm conviction that the market was headed for even more explosive growth than had characterized the satellite business since the 1960s. Not only was there enormous potential in developed countries, where a few dozen satellites could create a nationwide network potentially far more quickly and at a far lower cost than the already-conventional method of raising cellular towers and hooking them up by wire or microwave to existing telephone networks, but the potentially vast market of the developing world loomed on the horizon as a massive incentive. With the fall of the Soviet Union, the world seemed to be on the verge of a vast burst in economic growth, propelled by laws liberalized by the absence of a Communist counterpart, reductions in defense spending, and the opening up of new markets previously closed or nearly closed to Western firms. Visions of potential customer bases increasing from five hundred million to five billion people danced in the heads of promoters as they organized, thought, and planned.

    Added to this simple growth of adding new customers to existing services was the possibility of adding new services, and customers with them. Almost as soon as the technology of computer networking was introduced, satellites had been used in experimental efforts to link them together, efforts that had only been boosted by the aging of satellite networks and the virtual retirement of older satellites, too small and low-capacity to be economically operated in their design role any longer, freeing them up for experimental use. If these were to become more than mere experiments or small-scale commercial applications, however, dedicated fleets of satellites needed to be created, designed around the provision of computer networking rather than telephony or television broadcasting. If such fleets were built, however, the vast amounts of broadband connectivity some visionary pioneers expected to be necessary for demand for services such as on-line video, Internet telephony, and other similar services would become cheap and widely available, allowing other firms to piggyback on the success of the constellation builders. It is no surprise that this possibility attracted the most interest from the emerging class of wealthy “Silicon Valley” pioneers--ignoring for the moment that most of them, and in particular the wildly successful founders of Microsoft, the most wealthy of them all, were from nowhere near central California--with the great-granddaddy of all the broadband systems, Teleworld, obtaining venture capital from Bill Gates, among others, before Paul Allen’s own ventures in the space field diverted further Microsoft interest.

    Similarly, the possibility of global mobile telephone services, the market that had lured Motorola into entering the field to begin with, offered another lucrative opening to would-be constellation builders. While mobile telephony had been around in some form or another for decades, dating back to the era of car phones, the modern form of individually-carried cellular phones had only been commercialized in the 1980s, and coverage was largely limited to dense urban areas where the cost of erecting towers was outweighed by the density of possible subscribers. Many providers believed that extending coverage to suburban and rural areas would drive a significant increase in subscriber numbers, not only because of the new customers located in the additional signal footprint, but because of greater value to potential subscribers located in already-covered areas. Satellite-based provision of mobile service would drive this to its ultimate conclusion, covering not just suburban and rural areas, and not just a single country, but wilderness regions all over the planet, and even the ocean. Sailors, travelers, and those living in countries where no mobile service had yet been built could subscribe to the satellite service and garner the benefits of mobile telephony even though conventional infrastructure might not exist anywhere near them.

    These two potential services, global satellite broadband and global mobile telephony, were the bedrock of all proposed constellation systems. Each and every one of them depended on one or the other as the foundation of their proposed system, and each one needed to capture some fraction, ranging from 5% to 25%, of the global market to make their business case. With more than twenty networks in the proposal stages by 1994, it was obvious, if never publicly mentioned, that some, at least, would fail. And, increasingly, it looked like some would fail without ever building, let alone launching, a satellite, for the easy climate many founders had anticipated immediately after Motorola’s interest was revealed had never truly arrived. With even the simplest networks requiring billions of dollars upfront for the development, construction, and launch of their satellites before turning a single cent of revenue, investors were skittish and concerned about the risks involved. Many investigated the satellite market, then chose to invest in seemingly safer terrestrial ventures; while no cellular network or fiber-optic line could possibly come close to the number of subscribers a satellite system might boast, they were also cheaper and faster to build, offering the glittering possibility of obtaining revenue and even profits within a relatively short period of time, and at a much lower initial capital cost. With inadequate capital and pessimistic market studies coming out, the economic foundation of the constellations was beginning to crack and crumble by late 1994.

    At first, the Christmas Plot seemed to undermine those foundations entirely. Venture capital dried up as spooked investors fled for safer investments, forcing several of the smaller constellations into bankruptcy, while the cascading effects of the sharp, though short, recession that followed did in even more. The most damaging aftereffect of all, however, was the Asian crisis of 1995-1996, in which a combination of slowing capital inflows and reduced demand from their primary overseas markets badly hurt emerging economies in Southeast Asia, dependent on exports and massive overseas capital injections to maintain high growth rates. As some of the wealthier of the so-called “developing countries,” and more tied to Western and especially American markets than many others, the nations of Southeast Asia had been the primary developing-world target for many of the constellations. Others had obtained some degree of venture capital from countries involved in the crisis, mostly Taiwan and South Korea, and, like the other firms no longer able to obtain capital, collapsed into liquidation. Even Motorola’s giant Iridium platform and the smaller though still well-funded Starcomm and Gemini constellations found themselves severely pressured, despite Starcomm actually launching its first satellite late in the year and the other two being well into the construction phase, and for a time it seemed that the whole sector might dissolve before accomplishing anything at all.

    At this juncture, and without any apparent design, the United States government rode to the rescue like a cavalry unit in a Western movie. In the wake of the Christmas Plot, the Federal Aviation Administration, like many of the other government agencies involved, had begun a study of its response to the disaster, both to identify points where it could improve its ability to deal with any future attacks and to head off outside criticism of the administration. One problem the resulting report identified was the primitive state of transoceanic air traffic control. Why, the report asked, in an age of satellite navigation (the Global Positioning System having recently been declared fully operational by the Air Force) and satellite communications (referring not only to Intelsat and Inmarsat, but to several of the new constellations by name), was it acceptable for trans-oceanic flights to have nearly as little control as trans-continental flights in the 1920s or 1930s? The report called for the design and construction of a so-called “virtual” air traffic control system, relying on data relayed from positioning devices aboard aircraft transiting controlled airspace to provide positions to controllers, who could then direct aircraft just as if they were crossing near-shore or overland areas. The relatively low precision offered by GPS was of little concern given the huge airspaces available for error in trans-oceanic flights, and the advantage of controllers knowing from moment to moment what flights were crossing the oceans--hopefully allowing responses in minutes instead of hours should one or more drop off the grid--seemed compelling.
The report even took a step further (and quite out of its mandate) and suggested that such a virtual ATC could replace most of the actual ATC hardware in the United States at a future date, saving on maintenance and operations costs for items like the network of VOR stations blanketing the United States with navigational signals.

    While that particular suggestion was walked back under pressure from smaller domestic operators and general aviation users who feared the costs such hardware might generate, the more specific recommendation of developing a virtual ATC system was not. Indeed, the proposal gained interest from the President himself, and perhaps more importantly from the fledgling constellation industry, which saw in the proposal the possibility of a guaranteed userbase and income stream--heady stuff for an industry that had thought itself on the verge of collapsing only a few months earlier. Although a Department of Defense proposal to build a hardened dual-use (but primarily military) network briefly threatened the private operators, it foundered on FAA and congressional coolness to a scheme which would amount to a nationalized system and would incur considerable expense and delay above and beyond what was really necessary for the civilian part of the system. Whether or not the Air Force ever launched such a system, the FAA, at least, was going to stick to commercial operators.

    By 1997, therefore, the pessimism of a year or two earlier had almost vanished from most of the operators. With the promise of fat government contracts ahead and hardware in many cases either in the factories or actually on the launch pads, a sanguine mood settled over management and investors. Aiding this optimism was the general economic recovery; the 1995 recession had undone some of the weaker firms, and the Asian crisis more, but neither lasted long nor attacked the pillars holding the economy up, and conditions were returning to a more normal state. Indeed, internet usage had recently begun to increase rapidly, fulfilling every desire that promoters of the larger and more complex broadband systems could possibly want. The only storm clouds looming on the horizon came from the progress made by their terrestrial competitors, who had made giant strides in erecting cellular towers and building fiber-optic networks over the past few years, but even they weren't outrunning the leading satellite firms as they began to launch.

    Indeed, the only place where American instigation of what would become known as the TOCNN contracts (for Trans-Oceanic Communications and Navigation Network) was unappreciated was overseas. In Europe particularly, where the French had been studying and developing their own LEO constellation, there was consternation over the new American push to support satellite communications. While the intent of TOCNN could hardly be faulted, and indeed it would perhaps be a good idea for Europe to follow the American lead here, the program had, naturally, focused on contracting to American-based firms and, equally naturally, did not distinguish between foreign and domestic carriers in applying the TOCNN receiver requirements--a virtual ATC would have little value, after all, if it was blind to aircraft from Britain or Japan. Unfortunately for the French, this would give the American firms chosen to service the TOCNN system a huge foothold in the European market, which might, perhaps, be leveraged to sell their more conventional and consumer-oriented products into Europe, preventing the Europeans from entering this important technology sector. Moreover, early reports of Defense interest, even if they ultimately came to nothing, led to further concerns that European firms and governments might become dependent on American-provided capabilities that might be deliberately degraded for foreign users, or even disabled entirely under some circumstances. While President Gore tried to reassure European governments that the American government had no intention of disabling the Global Positioning System, and even signed an executive order in late 1997 ordering the controversial “Selective Availability” capability turned off, these were still powerful arguments for governments wary of too much dependence on any outside power.

    Therefore, when the French proposed at a mid-1997 ESA ministerial meeting to expand their Taos system into a full global navigation and communications network (quickly dubbed a GCNSS, for “Global Communications and Navigation Satellite System”), they received an overwhelmingly positive response from the ministers of the other states, particularly the three other major poles of the ESA collaboration: Britain, Germany, and Italy. Almost immediately afterward, ESA, together with the long-established Eutelsat communications satellite organization, began an in-depth study of the proposal, which in one fell swoop would end European dependence on both GPS and the rapidly growing American systems, especially if the FAA could be persuaded to accept so-called “Taos II” data as equivalent to TOCNN GPS and communication relays. Over the next year, ESA and Eutelsat slowly ground through their analysis, considering possible customer bases, subscriber numbers, launch costs (whether by conventional Europa or the possible Sanger II system), and more. Ultimately, the Phase A study delivered in 1998 described a system which managed to combine the functions of both GPS and communications in a single network, but not efficiently, and not without a cost: for the complete, globally-available network of 24 active satellites, a minimum of 3 billion ECUs, or somewhat less than 3 billion dollars, would be needed for construction, launch, and the first year of operations. Even with the arguments of national security and international competitiveness, most of the member governments blanched at incurring such a cost merely to duplicate existing services, and pushed ESA and Eutelsat to find a cheaper solution.

    The result was the Global Communications and Navigation Enhancement Satellite System, GCNESS--or, as it would shortly become known, Marconi, after the Italian radio pioneer. ESA and Eutelsat had concluded that the most expensive portion of the overall system, not to mention the part least likely to bring in any significant revenue, was the navigation system, demanding highly precise time and orbital measurements and requiring radio transmissions which integrated poorly with the communications portion of the Taos II GCNSS plan. A MEO-based system, Marconi would integrate communications functions with a satellite-based correction system that would improve the precision of GPS measurements without completely replacing the American system. While less ambitious, this did have the virtue of being cheaper and faster to build than the Taos II system would have been, at only about a third the cost and time from launch start to Full Operational Capability. Despite a certain degree of reluctance to abandon the full navigation capability, work on Marconi was approved at the ministerial level in late 1999, with ESA serving as the technical lead manager of the project and Eutelsat as the primary customer and system operator.

    Meanwhile, TOCNN was coming into its own. While the relatively limited Starcomm system had won the first TOCNN contract on an interim and experimental basis, the kind of virtual ATC the FAA envisioned required far greater bandwidth and much more communications capability than its limited system could provide. Iridium, finally in service as the decade closed, could provide that, and quickly won the second TOCNN contract; a fortunate bit of work, as the company (now independent of Motorola) was only days away from having to declare bankruptcy when it learned it had beaten out Gemini for TOCNN 2. The unexpectedly rapid growth of terrestrial systems, combined with the adoption of the European GSM cellular phone standards (allowing roaming from network to network), had badly impacted subscriber growth, a problem not helped in some cases by inept marketing and corporate mismanagement. Having undershot their expected subscriber counts by factors of ten or more, the major firms now needed government contracts to stave off bankruptcy, instead of merely holding them as valuable anchor customers. While Iridium and Starcomm managed to avoid bankruptcy, Gemini and most of the other weaker providers were, like their counterparts a few years earlier, forced into it. Gemini, which had already built a considerable portion of its constellation and launched a few satellites, managed to escape into Chapter 11, continuing as a distinct provider, but few others were so lucky.

    Regardless of the fortunes of the individual providers, however, TOCNN was proving to be a great success. The availability of over-water communications and navigation data, together with more direct control by the major oceanic control centers, was considerably increasing the efficiency of traffic control nearer to major international airports, while airlines were finding the new communication channels useful for their own business operations. Now they could receive up-to-the-minute information from their aircraft no matter where in the world they were located, and could even resell the data and voice connections that the TOCNN contracts required to passengers for hefty fees. The FAA hardly needed to push airlines to install TOCNN equipment as they realized the commercial benefits of doing so; indeed, the agency quickly realized that the legally-mandated rollout completion date of 2005 would likely be beaten by several years. The only thing approaching a dark spot in the whole picture was the foreign airlines, many of whom were waiting on Marconi as their TOCNN provider.

    And if TOCNN was proving to be a crucial lifeline to corporations that had fallen into unexpectedly rough financial waters, it was far from being the only business most of them had. Starcomm’s relatively limited system, for example, was seeing great interest from the oil and gas industry to manage a new generation of more autonomous sensing and monitoring devices, while Iridium and Gemini were finding success, if more limited than hoped for, in a range of markets. While not mandated by federal law, the shipping industry was finding in the new systems many of the same benefits as airlines in allowing speedy communications between a central office and a far-flung fleet of vessels, and passenger operators were exploiting some of the same opportunities as airlines in allowing fee-paying use of the connections. If, admittedly, the usage of satellites for these roles in ships was much older, dating back to the late 1970s, the constellations at least allowed more widespread and lower-cost deployments of the capabilities.

    Similar advantages were being found in the military, whose MEO Advanced Global Communications System, or AGCS, was proving to be as delayed and expensive as the FAA and the airlines had feared. If lacking many of the features of the mil-spec system, Iridium and Gemini were at least available now, and they gained a certain following among the units deployed to fight terrorism by Gore’s administration. Elsewhere, the National Science Foundation was undertaking a major project to provide Iridium data and voice links at the McMurdo and Amundsen-Scott polar bases, which had previously relayed communications through obsolete geostationary satellites that had drifted far enough from an equatorial orbit to be visible from Antarctica. Iridium’s purpose-built network was of course much more reliable, not to mention less expensive for a government no longer required to pay specifically to keep certain otherwise useless satellites available.

    Finally, of course, there were always the bread-and-butter individual customers for whom the networks had been intended. If less successful in the cellular-blanketed United States, Europe, and Japan than had been hoped, particularly as the disadvantages of satellite phones became more apparent to the general population, they were more successful among international business travelers (for whom the convenience of dealing with only one provider was enough to outweigh other problems) and, especially, among those living in underdeveloped countries such as China or much of Latin America--more successful, indeed, than anyone had dared dream. After all, in many of those countries no cellular network yet existed, and owning a mobile phone--particularly an expensive phone, and one that would work anywhere in the world!--was something of a status symbol among the right group of people.

    If they had not been all that was hoped for, as the next century opened a field of competitors still existed, still pushed forward--bloody and battered, perhaps, but there. With three major American networks completed and a European system under construction, it was clear that constellations were now going to permanently be part of the communications satellite landscape. The world had been changed.
     
    Part III, Post 20: The banker's bet and the Artemis 4 cargo lander flight
  • Good afternoon, everyone! It's that time once again, and I know this is a moment a lot of you have waited a while for, so I'll keep this brief. First, if you haven't already voted in the Turtledoves, I'd once again like to say that if you enjoy this TL and the artwork that Nixonshead has brought to it, please support us here and his artwork here. Thanks for all the support you've given this timeline, and without further ado, I hope you enjoy this week's post! :)

    Eyes Turned Skyward, Part III: Post #20

    With the completion of the Artemis 3 test flight and Administrator Davis’ decision to take the “banker’s bet” approach to Artemis 4 in June, the next Saturn Heavy launch became a matter of intense focus for NASA’s mission control staff in Houston, and its launch staff in Florida. For many of the staff whose entry into the program had come close on the heels of the abandonment of Apollo, the day they had waited so long for and had, in some cases, feared would never come was finally at hand. Foremost among these individuals was the mission’s commander, Don Hunt. Joining NASA’s astronaut corps in 1978, he had served alongside veterans of the moon landings even as many of them had been preparing to leave for greener pastures. Through NASA’s years of focus on space stations, Hunt had built a reputation as a smart flyer and a cool operator--perhaps best exemplified by the famous radio calls during the “rough ride” of Spacelab 28. Though others like John Young had more overall seniority, by 1998 Hunt was the most senior astronaut still flying. His selection as the commander of the first Artemis manned landing was a reflection of this extensive experience, though his relatively strong name recognition was also appreciated by the Public Affairs Office. However, his selection was also made with the understanding that this would be his final flight. Just short of turning 50, he was on the verge of losing his flight status, to the Moon or anywhere else for that matter. As it was, he would be the oldest astronaut ever to fly to the Moon, two years older than Alan Shepard on Apollo 14.

    Hunt’s reaction to the knowledge that this was to be his final mission was to throw himself into all aspects of planning--he pushed his chosen crew hard on flight training, encouraged their involvement in the preparation of both the manned and unmanned landers, and threw himself into the geological portions of the DREAM desert training exercises with enthusiasm. The flight crew was filled out by pilot Natalie Duncan, on her second flight. They were joined by the Mission Science Officer, Ed Keeler. The MSO was a position that had evolved on Spacelab and Freedom. In order to coordinate the stations’ scientific operations with ongoing maintenance and flight operations, the most senior flight scientist on-orbit was selected as the Science Officer, with the responsibility of working with the station command and ground engineers to plan work schedules and ensure that the station’s scientific missions did not get overshadowed by operational concerns. The concept was adapted for Artemis, with the MSO having more specialized geological training and essentially serving as the executive officer of the flight, with near-equal responsibilities to the commander while on the surface. While the commander was responsible for seeing that the mission was safe and successful, the MSO was responsible for seeing that it was scientifically productive. The final crew member of the foursome was the Artemis program’s first international partner, cosmonaut-selenologist Luka Seleznev, of Ruscosmos. The symbolism of a Russian accompanying a crew of Americans to the Moon was palpable, an ironic contrast to the fierce competition between the two nations in the (first) Space Race of the 1960s, and also evocative of the entreaties for Russo-American cooperation featured in Arthur C. Clarke’s Odyssey novels.
And things certainly got off on the right foot: as training proceeded, the crew quickly established a rapport--Hunt and Keeler shared a fondness for puns, which contributed to the typical EVA pairings: Seleznev would pair with Hunt while Duncan would accompany Keeler--according to an exasperated Duncan, it was the only way to stop the punsters from filling the radio. The relative jocularity of the crew proved an asset during the long hours of training and the multitude of tasks facing them while the spotlight of public interest focused on Artemis.

    As the hundreds of engineers and technicians involved in the program completed their preparations and reviews, the first Moon-bound Artemis launcher was rolled to the pad on crawlerback on November 18th. Once its impossibly slow journey was complete, pad crews connected the Mobile Launch Platform to ground services, and began the multi-day process of leak checks, wet dress rehearsals, and final payload checks. Meanwhile, the crew assembled at Houston to witness the launch--Hunt was determined to set the precedent that, in spite of being unmanned, Artemis cargo landers would be just as much the responsibility of the crews which would use them as their own Apollo spacecraft were. One example of this was his decision, after consulting with his crew, to provide a callsign for the lander. In the discussions, the crew selected the name Janus, referring to the Roman god of endings, new beginnings, and choices--an apt moniker for a spacecraft with as much riding on it as the “banker’s bet,” the beginning of the Artemis landings, and the end of Hunt’s flying career. On November 23rd, preparations began for the first launch attempt. Ice and frost accumulated on the skin of the oxygen and hydrogen tanks as the massive vehicle was fueled and prepared for flight. However, those at KSC to watch the launch were to be disappointed, as diagnostic telemetry from the Pegasus and lander inside the fairing began to malfunction as the countdown reached T-25 minutes, resulting in intermittent failures to receive data and some indications of temperatures and pressures inside the fairing and the vehicle that were well outside normal limits--and in some cases outside expected physical possibility. In order to fix the issue, the launch attempt was scrubbed, and the count recycled for the alternate date--November 27th.

    In spite of the Thanksgiving holiday, pad crews, launch team members, and support in Houston worked to diagnose and resolve the issue, tracing the problem to a marginal wiring harness in the connection carrying the telemetry from the rocket to the launch tower during the countdown. Overtime during a holiday wasn’t something NASA typically did in the era of Freedom, but lunar launch windows paid no heed to human customs. With the issue resolved and the wiring replaced and retested, the launch team gathered again on the 27th. This time the Saturn Heavy soared into the sky on a fiery plume and a wave of thunder. In stark contrast to the issues on the pad, the launch itself was perfectly nominal from the moment the engines lit and the hold-downs released to the completion of the Pegasus’ contribution to ascent. After a short coast, the stage relit to complete the injection of the Janus lunar module. During the three-day coast to the moon, mission control carefully monitored the temperatures and pressures of the descent stage, adding final confirmation to Artemis 2 and 3’s data on the successful extended storage of cryogenic fuels during the trans-lunar coast. Hunt requested a break in the training schedule to allow his crew to take shifts in Houston’s Mission Control Center, following Janus through its long coast and the trajectory modifications to put it on course for its descent to the lunar surface.

    The landing site for Artemis’s first lunar return had been a topic of heated debate within the program. With just six landings planned in the initial sequence, lunar scientists were determined to maximize the scientific return of Artemis and advocated for a wide range of initial landing sites--many with interesting surface features that, unfortunately, also created tricky landing approaches. Flight planners, on the other hand, were more interested in verifying the correct performance of lander systems during the first flight, implying the selection of a relatively flat and topographically uninteresting landing site which the automatics (and still more the human pilots) would have little trouble with. In turn, scientists opposed the possibility as such sites were also likely to be geologically uninteresting and yield less new data even with the extended stays of Artemis than their preferred sites. Political interests also factored in, as the President was interested in a return to the moon which would highlight American leadership in a post-Christmas Bombing world as an example of unity. Although far from a directive from on high, certain administration officials had inquired about the possibility of mounting a return to one of the Apollo landing sites, hoping to mine nostalgia among the politically influential Baby Boomer class for the period and find a graphic example of American technological leadership, both past and present, to display for the world.

    As leaders of the flight crew, with ultimate responsibility for actually flying the mission, Hunt and Keeler actively participated in these discussions, with Hunt tending to lean on the side of the flight concerns, while Keeler naturally had sympathies for the scientific concerns. However, unlike most of the members of these factions, Hunt and Keeler worked together extensively during their training, and eventually came to see much of each other’s positions--Hunt could see where and why geologists were interested in the Moon, while Keeler’s NASA flight training (a requirement even for non-pilot astronauts) meant he understood the engineering concerns about the first landing. In the end, the pair came to a mutual agreement that they took to the site selection board meetings together and managed to sell--Keeler suggested visiting one of the early Apollo sites, one where the geological potential had not been exhausted by extensive roving EVAs. In particular, the suggestion was to return to the Apollo 12 landing site in the Ocean of Storms. While the site had been explored by Pete Conrad and Al Bean, to say nothing of the earlier Surveyor 3 lander, there were still unanswered selenological questions about the area, many of which had actually developed from Apollo 12’s efforts. Compared to other areas of the near-side, the Apollo 12 site was relatively young, as much as half a billion or more years younger than the Apollo 11 site, and had a number of interesting chemical properties. It had also been the first location on the Moon where KREEP, an unusual combination of potassium (K), rare earth elements (REE), and phosphorus (P), had been discovered, although only in a single sample. As the Lunar Ice Orbiter and Lunar Reconnaissance Pioneer had discovered a significant enrichment of KREEP underneath Procellarum, there was considerable interest in better characterizing the surface abundance of the combination there.
Additionally, the Surveyor and Apollo 12 landing sites themselves could provide an interesting survey site; much as Conrad and Bean’s mission had produced data about the results of years of exposure on the lunar surface, a return to the Ocean of Storms would be able to take observations of the effects of nearly 30 years of continuous exposure to the lunar environment.

    With the deadlock broken, the final site selection was made in early 1998, with maps from the Lunar Ice Orbiter and LRP tapped to chart a final landing site and to program Janus’s flight computers with topography data. In order to minimize effects on the Apollo 12 site, the landing target for Artemis 4 was over a slight rise, several kilometers away--well within roving range, but enough to avoid unnecessary impact to the site, and exposing a new area to easy EVA access. On November 30th, Janus followed along its programmed course, firing its descent engines for the first time to brake from its translunar trajectory. With no need to leave a spacecraft in lunar orbit, no propellant was spared to enter a temporary orbit; instead, Janus fired to drop directly into its final landing trajectory. As they had gathered for the launch, Hunt’s crew gathered at Houston for the landing, watching the telemetry and video from the lander as it began its autonomous descent to the surface. Tension in the MCC was high, and without a crew onboard to relay observations, the descent had more in common with the final descent of the JPL Mars Traverse Rovers in 1995 than the Apollo missions. As it moved through the descent phases, Janus transmitted back codes indicating the status of its internal descent logic, to compare in Houston to the transmitted telemetry. While not as drastic as the 15-minute delay in data from Mars, the two-second light lag was enough that Janus was entirely on its own in piloting its descent.
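That two-second figure falls straight out of the Earth-Moon distance and the speed of light--a quick back-of-the-envelope check, using standard physical constants rather than any figures from the mission itself:

```python
# Round-trip signal delay between Earth and the Moon -- a rough check on the
# "two-second light lag" that forced Janus to land autonomously.
# Values are standard physical constants, not mission-specific figures.

C = 299_792_458            # speed of light, m/s
MOON_DISTANCE = 384_400e3  # mean Earth-Moon distance, m

one_way = MOON_DISTANCE / C  # time for telemetry to reach Houston
round_trip = 2 * one_way     # minimum delay before any ground command could take effect

print(f"one-way: {one_way:.2f} s, round trip: {round_trip:.2f} s")
# one-way: 1.28 s, round trip: 2.56 s
```

By the time controllers saw a problem and a command made it back, more than two and a half seconds would have passed--an eternity in the final phase of a powered descent.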

    Sighs of relief and scattered applause broke across the room as the data confirmed that the lander had acquired the ground with its radar at just over 20 km, then again as the data was matched to its onboard maps and the lander began adjusting its descent to make the minor corrections to steer to the landing site. On the cameras fixed on the descent stage, the Moon loomed large, going from a globe to a rapidly rising surface. As the surface of the Ocean of Storms rose to meet it, Janus cut down its speed, then cut out its outboard engines to continue the burn on the center engine alone. As the fuel burnt off and the speed and altitude dropped still lower, that single engine too had to be throttled to control the descent acceleration, exactly matching the lunar gravity to proceed at a constant rate. Finally, Janus signalled back that it had selected a final landing location, and was descending to it. In the Mission Control Room, the horizontal speeds dropped and nulled out as the lander steadied itself hundreds of meters above the site, and began its terminal descent. A plume of dust obscured the ground as it dropped the last few meters, increasing at the last moment as the lander’s engine fired to kill its vertical speed. At a meter up, probes on the footpads hit the surface, and the engines automatically died as the lander dropped. Seconds later, the MCC staff watched the critical codes come back--Contact! Engine off! Acceleration readings on the stage jumped as it crunched into the lunar soil, then settled--the lander was stationary. As the room broke out in cheering, the grinning guidance controller turned to the flight director. “Platform is stable, and we are down on the moon!” As Hunt joined in the applause, the control loop captured his words as he leaned over to talk to his MSO. “Well, Ed, what do you say? Feel up for a little camping trip next year?”
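The throttling described above is simple statics: to hold a constant descent rate, thrust must exactly cancel the lander’s lunar weight, and as propellant burns off that weight--and so the required throttle setting--keeps falling. A minimal sketch; the lander mass and engine rating below are purely hypothetical illustration values, not figures from the program:

```python
# Thrust for a constant-rate terminal descent: zero net acceleration means the
# engine must exactly balance lunar weight (T = m * g_moon). As propellant burns
# off, mass drops and the engine must be throttled down to match.
# The mass and engine rating here are hypothetical, for illustration only.

G_MOON = 1.62  # lunar surface gravity, m/s^2

def hover_thrust(mass_kg: float) -> float:
    """Thrust (N) that exactly cancels lunar weight, giving zero net acceleration."""
    return mass_kg * G_MOON

def throttle_setting(mass_kg: float, max_thrust_n: float) -> float:
    """Fraction of full engine thrust needed to hold a constant descent rate."""
    return hover_thrust(mass_kg) / max_thrust_n

# A hypothetical 20-tonne lander on a single 60 kN engine:
print(hover_thrust(20_000))              # 32400.0 (N)
print(throttle_setting(20_000, 60_000))  # 0.54 throttle
```

This is also why deep-throttling engines matter for landers: the same engine that braked the fully loaded vehicle must later produce only a fraction of its rated thrust.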

    When the Flight Director was able to restore order to the room, the Mission Control staff began the process of configuring the lander for surface operations, converting it from a spacecraft to a stationary facility. Valves in the descent stage were opened to purge the remaining hydrogen and oxygen, reducing the internal pressure of the propellant tanks and the risk of a rupture. Readings were also taken to pinpoint the final landing site, showing that Janus had steered itself to within 800 m of the center of the targeting ellipse. In addition to this accuracy, the computer’s landing had been more economical than predicted, meaning that there were substantial quantities of residual propellant remaining in the tanks. The Lunar Crew and Logistics Module had been designed to carry 14.5 tons with margin, but now Janus had shown that this margin might not be entirely necessary. Accordingly, Boeing and NASA engineers began analysis on how much extra payload could potentially be carried on future flights if such economy could be replicated. In the meantime, Janus was commanded to spread its solar arrays to catch the light of an early lunar morning and charge its fuel cells for the long, cold lunar night. Over the next few months, it would keep a solitary watch over the future Artemis 4 landing site while Hunt’s crew prepared for their mission and the vehicles that would join it were processed for flight. Administrator Davis’ bet had paid off, and the Artemis lander had passed its final testing hurdle. All that remained was for its crew to join it on the surface of the moon.
     
    Part III, Post 21: Lockheed's problems with an aging Titan III and the X-33 program
  • Well! Good afternoon, everyone, it's that time once again. When we last left off, the Janus cargo lander had just touched down on the lunar surface, and our very own Nixonshead had won a very well-deserved Turtledove for the art he's brought to this thread. However, Janus will have to wait a bit longer for the arrival of Don Hunt and his Artemis 4 crew. This week, we're looking at NASA's other, other major program of the 90s, and checking in on the giant of the commercial launch industry: the Lockheed Titan.

    Eyes Turned Skyward, Part III: Post #21


    For all intents and purposes, Lockheed Astronautics had dominated the modern commercial launch market since that market had come into being. Indeed, along with early ESA attempts to commercialize Europa, Lockheed’s purchase of Martin’s Titan production lines and the subsequent retooling of the vehicle as a commercial launcher had created that market to begin with. Beginning with dual-launches of 2-ton satellites on the Titan IIIC, Lockheed’s other commercial space business, their satellite manufacturing division, had been integral to the development and popularization of the larger and more capable 4-ton bus, to the point where that size had become an industry standard, the so-called “full” bus, while the older 2-ton became the “half” bus. Not content to rest on their laurels, however, Lockheed had immediately jumped into promoting the even larger 6-ton “super” bus, which only the mighty Titan IIIE, with its Centaur upper stage, could lift to geosynchronous orbit. While competition like Europa 4 had then followed the trail Lockheed Astronautics had blazed, Lockheed’s constant innovation had ensured that its market share throughout the 80s never dropped below 50% of the global free launch market.

    However, by the early 90s, the picture was getting worryingly less rosy for Lockheed. Simply put, the Titan program was showing its age. Potential competitors like the Europa 5, the Russian Neva and Vulkan, and the rising Chinese program were on the way, able not only to match Titan’s payload capacities but actually to exceed them, dual-launching payloads that even the Titan IIIE could barely lift. Worse, rising ecological concerns over Titan’s hypergolic propellants were causing Titan’s low operating costs--always the trump card in Lockheed’s competitive prices--to spike upwards, with no end in sight. This one-two punch, and resulting losses of several key contracts, was enough to make Lockheed begin pursuing a path forward for its Astronautics division before they lost any further ground. Early proposals were built around adapting Titan components to new uses, modifying the core to use kerosene and liquid oxygen as the Titan I had done in the early 1960s, or dispensing with the core altogether to use clusters of the big solid boosters practically synonymous with the Titan design. Ultimately, however, no mere tinkering with the venerable Titan formula could solve its problems; the solution would have to come from another source entirely.

    Elsewhere in the aerospace business, McDonnell-Douglas was in the midst of a prolonged struggle for survival. The mid-80s launches of several strong competitors to its widebody DC-10 and narrowbody DC-9 aircraft had been devastating to the company’s bottom line, as it struggled to even hold onto third place in the world airliner market against Boeing, Airbus, and Lockheed. Attempts to drum up interest in new aircraft types had proved less than successful, while the engineering costs associated with these projects had come as even more of a shock to the company’s finances. Only a series of successful military contracts, beginning with the F-15 tactical fighter in the early 1970s, had enabled the company to keep afloat, so the company’s failure in the Advanced Tactical Fighter competition was a massive blow, raising the spectre of bankruptcy before the company’s investors. Thus, in the early 1990s, McDonnell’s board began to reluctantly pursue a partner for a merger or buyout. Lockheed, whose continued successes in the widebody and narrowbody fields had been the straw that broke the camel’s back, was one of the first companies to express interest in purchasing the firm. While counter-proposals from Boeing and Airbus were also heard, Lockheed’s offer was felt by McDonnell management to be the strongest, as well as the most likely to pass regulatory muster.

    In addition to an attractive package offer, Lockheed also offered a chance to position the resulting merged company well in a range of fields. Lockheed’s Tristar and Bistars combined with the DC-10 were already strong players in the widebody field, while the company would also be well-positioned in the growing regional jet market, and in the perfect place to begin the lead-in to the Joint Strike Fighter competition, potentially the largest government contract in history, with an eventual value measured in the trillions of dollars. As with commercial and military aviation, Lockheed Astronautics would synergize well with McDonnell’s launch business. The improved Delta 5000 was, in Lockheed’s eyes, an attractive entrant at the small end of commercial launches, and with streamlining of production and logistics to cut costs could easily gain a significant market share in launch of proposed constellations of small LEO communications satellites, riding cultural similarity and time-to-market to defeat its competitors, both traditional and new. With a 4-ton capacity to GTO, it could also retain a toehold in Lockheed’s traditional geosynchronous business while Delta experience with cryogenic launch vehicles was being brought to bear in replacing Titan. Talks persisted throughout 1994, and in April 1995, Lockheed and McDonnell-Douglas announced their plans to merge into a single corporation under the name Lockheed-McDonnell. Nevertheless, Lockheed’s management was too canny to bet the future of the company, or even a significant division of it, on a single deal, and was already pursuing an alternative path forwards, one riskier but, potentially, more rewarding than any refinement of Titan or Delta.

    From the start, Al Gore’s presidency had been characterized by his enthusiastic backing of new, more advanced technology as the solution to a wide range of policy problems. While the pursuit of alternative energy sources and the so-called “internet” boom were the manifestations most familiar (or infamous) in political circles, Gore’s overhaul of NASA after the Richards-Davis Report had also been infused with some of this characteristic technocratic spirit. In the same 1993 NASA appropriation bill which had cut the Ares program as an excess of expensive studies without immediate practical application, Gore had requested--and received--funding for NASA to begin a major effort to carry out basic research on a variety of new space technologies, centered on a reusable launch vehicle demonstrator intended to follow on from work done in the last decade on the X-30 and X-40 programs, as well as the long-lost promise of the “Space Shuttle,” a dream which had never died in aerospace circles even as NASA had moved ahead with Apollo-serviced stations and now back on to the moon. If it succeeded in demonstrating key technologies, Gore hoped that this new program could develop technologies to make Earth orbit more accessible, and keep American launch companies dominant into a new age of space development. With Artemis proceeding relatively smoothly after Davis’ drastic interventions in the mode decision and contracting, it was this new program which was to prove the primary recipient of Davis’ scrutiny and an outlet for his legendary temper over the following years.

    Lloyd Davis had never had much patience for programs that sprouted studies like weeds and whose budgets grew like kudzu. In his mind, programs should be collected around a single overarching goal, and any new studies or spending should be driven primarily by that work necessary to make that central idea a success. This personal frustration was one reason why the Richards-Davis report had so heavily borne down on the Ares office and Artemis long-term base planning, which had struck Davis as bloated and ill-directed. However, with Gore’s technology development program, Davis found himself ensnared by his boss in a pet project which was almost exactly calculated to drive Davis up a wall, and to dispatch a flurry of his soon-to-be-famous flaming memoranda and electronic mail across Headquarters.

    To begin with, instead of having a singular objective like “land on the moon” or “build a space station,” Gore’s program was actually divided into two, each with its own separate budget line, and each therefore separately subject to Congressional oversight. Of the two, the first and most straightforward was the Launcher Technology Development Office, a catch-all for studies and prototypes investigating technologies ranging from composite tanks and orbital satellite refueling to staged-combustion kerosene engines, hydrogen aerospikes, and peroxide/kerosene hypergolic engines to replace conventional selections for capsule, probe, and comsat maneuvering systems, all aimed at incrementally advancing the current state of the art. In theory, the government-supported technology development taking place at LTDO would serve as a proof-of-concept and incubator for commercial projects; if even a few of them succeeded, the cost of accessing space could fall dramatically, regardless of the success of the other part of the program.

    That other part was nothing less than a reusable suborbital spacecraft capable of flight to near-orbit, a horizontal landing on a runway, followed by a rapid turnaround for further flights, intended both to provide a guaranteed user for the advanced technologies of LTDO and, hopefully, serve as a prototype for the long dreamt-of single-stage-to-orbit shuttle. If it worked--no small “if”--it would enable not just an incremental advance in American spaceflight, but a revolution--one which would assure US leadership in spaceflight for decades to come. While the program’s promise was clear enough to Davis, especially since he’d been intimately involved in fleshing out Gore’s idea from a mere notion into a real program, the real question in his mind was how to stop the two from driving him crazy in the meantime, particularly with the more pressing Artemis program consuming much of his attention. At least for the LTDO, the task was as “simple” as careful contract monitoring and progress reporting, something Davis had no small experience--and reputation--in doing. While his other duties prevented him from devoting very much time to that arm of the effort, he made sure to conduct random “spot-inspections” to keep contractors on their toes and prevent the worst sort of contractual excesses. The demonstrator, however, quickly developed into a problem all its own.

    Bidding for the demonstrator contract had been intense. Alongside a number of fringe bidders, Lockheed, McDonnell, Rockwell, Boeing, and Northrop all tendered serious, well-funded proposals. Surprisingly, Boeing’s entry was eliminated early on, despite their absorption of Grumman, the manufacturers of Starcat, the only previous attempt at developing a reusable launch vehicle to have demonstrated any sort of real-world success. With NASA contract language specifying, in an effort to reduce risk, a lifting, horizontal-landing profile instead of the vertical, rocket-braked Starcat approach, Boeing’s nominal experience advantage had vanished into thin air, forcing them to spend as much effort as anyone else developing their proposal from the ground up. Added to this factor, Boeing was suffering from the teething difficulties of a major merger and the strain of developing and building the Artemis lander, preventing them from putting their full effort towards the X-33 contract. However, while the X-40 experience of Boeing was ill-suited to the task at hand, Lockheed had been lead contractor on the X-30 scramjet space-plane program, a much closer match to the desired profile than Grumman’s experience. In the process of preliminary design of an airframe for the never-built X-30 prototype, Lockheed engineers had worked extensively with modern composite materials and confronted the problem of reusable thermal protection systems head-on, examining the potential of replaceable ablatives, ceramic tiles, and metallic systems. Moreover, the combination of Lockheed’s strength in the commercial launch market and the increasing external pressure on their business had made them keenly interested in any new launcher proposals--and if those proposals were going to be as potentially revolutionary as X-33, and funded partially by NASA besides, then Lockheed wanted to make sure it was going to get in on the ground floor.
Thus, Lockheed had made the X-33 a priority, pairing elements from the X-30 development with earlier proposals dating back to before even the Space Shuttle studies of the late 1960s in an exceptionally strong bid proposal.

    In the end, the technical depth of Lockheed’s bid, along with their evident interest in commercializing the vehicle if successful and willingness to invest corporate funds above and beyond government money, won them the contract, which was assigned the designation X-33. There were three primary technologies which NASA wished to test with the X-33: advanced thermal protection systems, aerospike engines, and lightweight composite propellant tanks. For the thermal protection system, Lockheed proposed to use metallic structures developed originally for X-30--while able to sustain less peak heating than ablatives or ceramics, they had been found to be substantially more durable when Lockheed had tested them, and the X-33’s low mass per unit area was expected to keep heating low enough that the more maintainable system could be used. For the engines, Rocketdyne was subcontracted to develop a linear-aerospike derivative of the venerable J-2, a pair of which would provide propulsion for the X-33. A dilemma emerged, though, with the tanks which would consume much of the volume of the rounded wedge fuselage. In order to make SSTO possible, significant weight advancements over conventional metal tanks would be required. Composite materials had evolved immensely in the past decade, and seemed to hold promise of such weight reductions. However, no structures as complex as the proposed X-33 tanks had ever been constructed, nor had the composite honeycombs Lockheed proposed using been tested with cryogenic fluids. The Lockheed proposal readily admitted that these were the weak spots of the design, and the risk heightened Davis’ concerns. After all, while the vehicle was designed to test all three, if the propellant tanks could not be made to work, the entire vehicle would be grounded, preventing any tests of the engine or thermal protection scheme.
    Therefore, Davis demanded that the reference design include composite construction only for the liquid hydrogen tanks, and that an alternate design using more conventional aluminum-lithium alloy be developed to production-ready state as a backup for early flights if needed. This raised the cost of the program, but given the President’s strong support Davis was able to “rob Peter to pay Paul” and divert funds from the technology development line to the X-33 budget to cover the extra expense.

    As work proceeded through the mid-1990s, the vehicle became known internally as the StarClipper, though technically the name referred to the planned future derivatives which would carry cargo all the way to orbit. However, true to Davis’ worst nightmares, the program provided no end of headaches even reaching demonstration flights. The aerospike engines functioned well, though they had to revert to the gas-generator cycle of the original J-2 as opposed to the combustion tap-off cycle of the simplified modern J-2S. More worryingly, the engine had also grown heavier during design and testing in the mid-90s, as additional coatings had to be added to the centerbody to enable it to withstand the heat. This added weight had to be compensated for by carefully re-designing the rest of the vehicle’s systems, but there were limits to how much it could be trimmed given the lifting body shape, as the center of mass could only be moved so far before the vehicle would become unflyable. However, the largest problem was with the composite tanks. In spite of Lockheed’s experience, the honeycomb tank walls intended for strength, lightness, and insulation had proved a critical design weakness. In testing in 1996 and 1997, the tank’s fabrication process continued to run into problems, and it appeared that the vehicle might not be capable of meeting the planned test schedule. The design of the alternative aluminum tanks had already been completed, and Davis managed to secure additional funding to begin production of these conventional tanks in parallel, along with a promise from Lockheed to match the added cost.

    1998 saw airframe integration commencing while two different sets of hydrogen tanks were undergoing testing. While the aluminum alloy tank was able to pass its early testing with flying colors, the issues which had plagued the composite tank throughout design and manufacture followed it to the test stand. In November, the composite tank’s situation turned critical, as tests showed an alarming tendency to delaminate, allowing cryogenic hydrogen to leak into and fill the honeycomb spacer layer between the layers of the tank walls. While a solution, involving filling this gap with a closed-cell foam, was considered, it would add another half-ton to the tank mass. Given the center-of-mass issues already being caused by the engine’s growth, this would push the vehicle dangerously close to its design limits. Worse, thanks to the complex composite joints at the intersections of the tank’s multiple lobes, the composite tanks were already roughly the same weight as their conventional equivalents. Davis and Lockheed came to a decision: the composite tanks were put on hold while a full review of the design was carried out, examining alternatives. In the meantime, the aluminum tanks would be integrated with the airframe to allow the X-33 to make its first flights in 2000.

    The new millennium saw the StarClipper undergoing final preparations for testing at Edwards Air Force Base in California, where a launch site had already been prepared for it. Like the Starcat launch site at White Sands, the X-33 facility was minimal--a horizontal integration hangar, a combination erector/launch tower, assorted cryogenic storage tanks, and a long runway. The runway was required for the Lockheed-provided Bistar freighter which the company had converted (at its own expense) into a ferry aircraft to retrieve the X-33 from the landing sites hundreds of miles away where it would land on longer flights. It was on the back of this Bistar Ferry that the X-33 made its first flights, starting with captive carry tests to verify ferry configuration, then moving to approach and landing tests, in which the demonstrator was released from the back of the Bistar and guided itself to a gliding landing on the runway. This initial series of tests consumed much of spring and early summer, but by July, the X-33 was ready for its first powered flight. The vehicle provided some belated fireworks on July 7th, lifting off for the first time on a nearly-invisible tower of hydrolox exhaust. On its maiden solo flight, the X-33 reached an apogee of just a few miles and travelled only 50 miles downrange. After apogee, the StarClipper’s onboard computers turned the vehicle, and used its aerodynamics and speed to bring it back to the runways at Edwards. The vehicle performed nominally, touching down almost exactly on the runway centerline before rolling to a halt. Several flights would follow on this profile, which allowed the vehicle to be quickly turned around for another flight. Two pairs of flights were made in August to twice demonstrate a 3-day turnaround, and in September another pair showed off a launch-to-launch turnaround of just over 24 hours--Lockheed had belatedly matched the achievements of the Starcat.

    However, matching Starcat’s flight records wasn’t the X-33’s goal; demonstrating high-altitude horizontal flight was. In order to carry the vehicle to the edges of its performance envelope, it would have to fly higher, faster, and further afield. In October, the X-33 concluded its first year of flight testing with its 9th flight, in which it reached a speed of Mach 4 and an apogee of just under 30 miles before landing 180 miles downrange at Nellis Air Force Base. Over the winter, the vehicle was to be extensively torn down and examined for the effects of the flights to date. In the spring, flight testing would resume with a series of longer, faster, higher flights which would push the StarClipper demonstrator to the very edges of its performance envelope. However, as this was being planned, the program’s future was up in the air. Continuing work on the composite tanks and improvements to the aerospike engine had brought the design closer to the originally promised performance goals, but it was still unable to reach the level necessary for the follow-up orbital SSTO, with margins being simply too tight to allow a go-ahead. Like the Starcat before it, the main effect of the X-33 had been to invalidate another approach to SSTO in the aerospace community. While Lockheed wrestled with the implications of this and its own long road to replace Titan, however, another firm was going all-in on reusable spaceflight in a big way--and with the direct intent of overthrowing Lockheed’s commercial dominance once and for all.
     
    Part III, Post 22: The Leavitt X-Ray Telescope and what's next for astronomy
  • Good afternoon everybody! It's that time again, and you all know what that means. Last week, we turned our attention from the lunar program and operations in space to the question of alternative means for reaching orbit with reusable vehicles, as the X-33 Starclipper demonstrator shattered the sound barrier, but sadly also the hopes of SSTO advocates in the skies over the American Southwest. This week, we're turning our focus outwards--far out to the very reaches of the stars as Workable Goblin sees what astronomers have been up to since we last checked in during Part II. (It might be worth re-reading that post before this one.)

    Eyes Turned Skywards, Part III: Post #22

    Even as Hubble was speeding into the sky atop a Saturn rocket, the attention of certain astronomers was turning from the big telescope, however productive it might eventually turn out to be, towards the question of what would be the next large project to occupy NASA’s astrophysical division. An elaboration of the earlier Einstein Observatory, the Advanced X-Ray Telescope was the logical next step for NASA’s x-ray astronomy program: bigger, better, and farther away. By moving to a Saturn-Centaur as the launch vehicle instead of an Atlas-Centaur, the collecting optics could be significantly increased in size, allowing both higher resolution and imaging of fainter objects, while the telescope itself could be sent farther from Earth. While the advantages of such a distant position were not as significant as they would be for an infrared or optical telescope, greatly reducing the possible impact of the Earth and Moon on telescope operations and simplifying avoidance of the Sun were benefits large enough for even early mission plans to depend on deep-space operations.

    Nevertheless, even a casual perusal of space history will reveal dozens of “logical next steps” which never saw the light of day, ranging from massive human spaceflight extravaganzas to follow the first Moon landings to mundane Earth observation spacecraft, and it is worth asking why AXT in particular was chosen to follow Hubble, instead of any of a range of other worthy programs, such as its eventual successor, the Large Gamma-Ray Observatory. After all, LGO was equally large, and had a nearly equally-sized support base in the astronomical community. It would push the boundaries of technology just as far as AXT, and would even directly respond to the Soviet observatory Gamma, launched a few years before Hubble. And it had the strong support of the largest single concentration of space astronomy talent on Earth, with the backing of the National Institute for Space Astronomy.

    And therein lay the key to its delay. NISA had been founded to operate Hubble, but as the name indicates, some of its creators had grander plans for the center, hoping to make it practically the arm of NASA responsible for all space astronomy missions. This was hardly unknown, even if it was not often publicized, and many astronomers outside of NISA were constantly alert for any transgressions by the center beyond its role as the central science operations center for Hubble. Inevitably, the location of NISA had shaped its perceptions of what the next major space astronomy mission should be, the beating heart of American particle physics transfusing more than a little of that field's worldview into its astronomical counterparts. With the bedrock of NISA support beneath it, and the center's skill and experience in operating Hubble, it was inevitable that if LGO launched, NISA would take it over. With two successful missions under its belt, it would not take much for it to take a third. Then a fourth. And then...

    Inevitably, such a grand plan, or at least the perception of one, attracted an equally grand degree of push-back. With optical and ultraviolet astronomy off the table given Hubble’s ongoing operations, radio astronomy pursuing its own, ground-based projects, and infrared astronomy too immature for a major mission, the only reasonable alternative to LGO in the same size class was AXT, and those who did not want to see NISA controlling American space astronomy quickly coalesced around that program. Besides x-ray astronomers, almost all of whom naturally preferred another x-ray telescope to a gamma-ray observatory, many astronomers involved in smaller programs joined the opposition to LGO. And beyond the astronomical community, Goddard Space Flight Center, the principal repository within NASA of astronomical talent, was vigorously, and at times viciously, opposed to the proposal, correctly seeing it as a threat to its own position and programs.

    Nevertheless, LGO was the next step forward for gamma-ray astronomy, and it did have the support of many in the astronomical community, whether out of simple personal connections or interest in cooperative research, so the coalition of opposing interests was not able to completely derail the juggernaut. Instead, they were only able to delay it, persuading Congress to prioritize AXT over its higher-frequency cousin. Ultimately, the FY 1987 budget opened a new budget line for the Advanced X-Ray Telescope, with the promise of a full start for LGO once AXT was launched and operating. In the meantime, advocates of the gamma-ray observatory could continue low-level research and development of spacecraft components and the building of institutional structures to support future mission operations.

    With the question of which would go first resolved, attention turned towards actually building, launching, and operating AXT. Fortunately, since the concept had been developed in the late 1970s, a considerable amount of work had been done in firming up specific technical details for the telescope, informed not only by the Einstein Observatory’s experience but by continuing balloon and sounding rocket observation campaigns. In conjunction with the Smithsonian Astrophysical Observatory, located less than half-an-hour’s drive from American Science & Engineering’s offices in Billerica, Massachusetts, the masters of x-ray telescopy had carefully adapted their preliminary design to evolving scientific requirements and changing political environments. With the Vulkan Panic opening the funding spigot, this mostly meant adding more. More instruments, more resolving power, more light-gathering area, and, above all else, more altitude.

    Like infrared telescopes, x-ray telescopes could benefit greatly from being farther from Earth. While they carried no cryogenic liquid helium to be boiled away by the Earth’s heat, in low Earth orbit the Earth and Moon would still act as enormous screens, blocking swathes of the sky at once, while the short period of the orbit would permit only relatively brief observations of any single target. While a lack of experience and funds had prevented Hubble or other, more minor missions from being launched beyond Earth orbit, for AXT scientists wanted to take the next step and move into deep space. After a thorough analysis of the possibilities, they had, in fact, settled on one particular location for the new spacecraft: the second Sun-Earth Lagrange point.

    Located about 1.5 million kilometers from Earth, in the direction opposite the Sun, SEL-2 (as it was known) offered many attractions for a telescope. The three main bodies which could interfere with observations would be nearly lined up in the sky at all times, making it easy to plan observations around the resulting no-go regions, while dwell times of hours or even days could be obtained for dim or fluctuating targets, uninterrupted by the fast-moving geometry of a low orbit. With the Saturn-Centaur’s lifting power, there would be no trouble dispatching the telescope to the Lagrange point, either, with only a relatively small increase in cost compared to launching the spacecraft into low Earth orbit. Compared to a heliocentric orbit like that being adopted by the International Infrared Observatory at the same time, SEL-2 offered the advantage of a steady position in the sky for communications and easy coordination between the x-ray telescope and low-orbit or Earth-based facilities, while compared to high Earth orbits it offered a superior arrangement of no-go zones. By the time AXT was formally approved, it had long since been decided that it would be a deep-space mission, operating farther from Earth than any previous telescope.
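    As an aside for readers who want to check that 1.5-million-kilometer figure, the distance to SEL-2 falls out of the standard Hill-sphere approximation, r ≈ a(m/3M)^(1/3); this is a quick sketch using textbook physical constants (the specific values are mine, not from the post), not an exact three-body solution:

    ```python
    # Approximate distance from Earth to the Sun-Earth L2 point using
    # the Hill-sphere formula r ~ a * (m / 3M)^(1/3). Constants are
    # standard textbook values.

    AU_KM = 149_597_871      # Earth-Sun distance (1 AU), km
    M_SUN = 1.989e30         # mass of the Sun, kg
    M_EARTH = 5.972e24       # mass of the Earth, kg

    r_l2 = AU_KM * (M_EARTH / (3 * M_SUN)) ** (1 / 3)
    print(f"SEL-2 is roughly {r_l2:,.0f} km from Earth")  # ~1.5 million km
    ```

    The result lands right at about 1.5 million kilometers, a bit under one percent of the way to the Sun, which is why the Sun, Earth, and Moon all cluster in one small region of the sky as seen from the telescope.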

    Under the leadership of Goddard Space Flight Center, which as with Hubble was serving as the lead center for coordinating the telescope’s construction, American Science & Engineering’s technical talent, the Smithsonian Astrophysical Observatory’s scientific know-how, and the aerospace engineering skills of Boeing (the winner of the contract to build the spacecraft) began to be brought together to make the project a success. The greatest challenge of the project, compared to the Einstein Observatory, was its much greater scale; AXT would be larger, heavier, farther away, and longer-lived than its predecessor, demanding new advances in x-ray optics to minimize degradation over time, electronics designed to function in the harsher environment of interplanetary space, and more redundancy, to protect against even a single failure disabling the spacecraft. In exchange for this complexity, it would have the advantage over Einstein of being able to image dimmer objects at greater resolution, and, through its lengthy projected lifespan, the possibility of following objects over time, witnessing how they changed over a period of a few years. A longer operational lifetime would also increase the probability that AXT would be able to observe rare events, such as supernovae, that might be of interest to x-ray astronomers.

    Nevertheless, that complexity had to be worked through. With seven instruments, a dozen precision-machined mirrors, tons of spacecraft, and a destination a million and a half kilometers from the nearest repair shop, it was obvious that building the spacecraft would be anything but easy. Despite their skill and experience in building x-ray optics, American Science & Engineering was too small to manage a project of this scale, forcing an expansion whose growing pains severely interfered with the project. While experienced in astronomy, the Smithsonian Astrophysical Observatory, whose close ties to AS&E had ensured their participation, had never directed the scientific operation of such a large and complex instrument before, and many of the lessons that had been learned by NISA had to be painfully relearned by Smithsonian scientists. Fortunately, Goddard’s extensive experience in operating spacecraft from Explorer 10 to Hubble, and Boeing’s involvement in the space program from the beginning, made up for their partners’ relative lack of experience, and the program moved apace; perhaps more slowly than had been planned or desired, but it was moving.

    With the spacecraft moving forwards, the time had come to name the telescope. In the past, this had been done by small committees of the astronomers involved in the mission, but in a more publicity-conscious age NASA had begun soliciting public input, using contests to name their spacecraft as a potent tool for publicizing their more esoteric missions. Already, the rovers Liberty and Independence had been named by schoolchildren; now, it was the turn of the second large American space telescope in little more than a decade. High school students across the country wrote and submitted essays selecting an astronomer of their choice and defending the selection by reference to their historical and scientific importance. Perhaps the recent election of Ann Richards to the Vice Presidency had increased their visibility, or perhaps it was an effect of the efforts of feminists over the past twenty-five years to recognize the contributions of women to science, but a surprising number of the submissions named women astronomers, often describing the work they had performed but for which credit had been taken by male astronomers. In particular, perhaps due to the ongoing success of Hubble, one name came up again and again: Henrietta Swan Leavitt, the discoverer of the important period-luminosity relationship for Cepheid variables that had enabled the determination of intergalactic distances, and therefore Hubble’s own work as well as most later galactic research. Leavitt had been overlooked in her own time and overshadowed by her male colleagues; perhaps it was the justness of the designation, its coincidental relationship to Hubble, or a certain degree of pressure from the United States Naval Observatory (the home of the Vice President), but when AXT’s formal name was announced, it became the Henrietta Swan Leavitt Space Telescope, or more commonly just the Leavitt Telescope.
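    For readers curious how Leavitt's relation actually yields distances: a Cepheid's pulsation period gives its absolute magnitude, and comparing that to its apparent brightness gives the distance via the distance modulus. The sketch below uses one commonly cited modern calibration of the period-luminosity law (the coefficients and the example star are illustrative assumptions on my part, not figures from the timeline, and real measurements also correct for dust extinction):

    ```python
    import math

    # Illustrative Cepheid distance estimate via the period-luminosity
    # ("Leavitt law") relation. Coefficients are one published modern
    # calibration, used here purely for illustration.

    def cepheid_abs_magnitude(period_days: float) -> float:
        """Absolute magnitude from the pulsation period, in days."""
        return -2.43 * (math.log10(period_days) - 1.0) - 4.05

    def distance_parsecs(apparent_mag: float, absolute_mag: float) -> float:
        """Distance from the distance modulus: m - M = 5 log10(d / 10 pc)."""
        return 10 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

    # A hypothetical 10-day Cepheid observed at apparent magnitude 24:
    M = cepheid_abs_magnitude(10.0)   # -4.05
    d = distance_parsecs(24.0, M)     # ~4 million parsecs
    print(f"M = {M:.2f}, distance = {d / 1e6:.1f} Mpc")
    ```

    The key point is that the period is observable from light curves alone, which is what made Cepheids usable as "standard candles" for the intergalactic distance ladder Hubble built on.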

    By the time the telescope had been named, in early 1995, Hubble was on the verge of reentering Earth’s atmosphere. At the same time, Leavitt was nearing the launch pad itself, with construction and assembly having been completed in late 1994 and only testing and final launch vehicle integration remaining before launch. Late that year, a Saturn-Centaur bore it aloft from Cape Canaveral, placing it on a trajectory out of the Earth-Moon system towards SEL-2. A few weeks after launch, Leavitt reached the Lagrange point, gently braking itself into an orbit about the point before completing deployment and beginning validation testing. By the end of 1995, Leavitt was ready to begin scientific observations, with all systems and instruments checking out and proper mirror alignment confirmed.

    With Leavitt’s launch, scientists who had been waiting for years for their program to get underway immediately began to clamor for Congress and NASA to live up to their promises in 1986 and begin work on the Large Gamma-Ray Observatory. However, as these astronomers quickly learned, matters were not going to be quite so simple as reminding those controlling the purse strings of what they had said all those years earlier, for new competitors had appeared and moods had changed on the Hill. With the threat of Soviet space competition having completely vanished, yearly budget increases had vanished too, and Administrator Davis was spending an increasing proportion of his time on merely holding the line, trying to keep funding stable or at most keep up with fortunately low levels of inflation. With Artemis development and X-33 funding eating up large amounts of money, massive and equally expensive space telescopes answering esoteric scientific questions were being pushed to the bottom of the priority heap and LGO was having a hard time gaining support.

    Beyond simple budgetary conflict, LGO was confronting new opposition within the astronomical community. Unlike the project’s earlier opponents, this base was more concerned with the scientific value of the telescope than with its political implications; over years of Hubble operations NISA had moderated its views, and no longer seemed to be a threat to other space astronomers. Instead, the discoveries of Hubble and parallel ground-based work with the new generation of giant, computer-controlled telescopes had led to a realignment in the American astronomical community towards an entirely new spacecraft proposal. This Large Infrared Space Telescope, or LIST, building on proposals dating back to the 1970s, would take full advantage of the Saturn’s lifting power to place a large infrared telescope into a deep-space orbit, similar to those of IIO or AXT, thereby reducing the impact of solar, lunar, and terrestrial radiation on its observing program and increasing its useful lifetime. It would be the first infrared telescope of any great size operated by the United States in space, and, perhaps most of all, it would represent the attainment of some of the more ambitious goals set for Hubble more than a decade earlier.

    While Hubble had been the most powerful telescope ever launched into space and, for its time of construction, of fairly average size for a research telescope, it still had not been able to live up to the ambitious goals of its creators. As one of Hubble’s principal goals had been to observe very distant stars and galaxies, its comparative lack of significant infrared capabilities (despite the Long Wavelength/Planetary Camera) had been a major blow to its scientific program. Together with the discovery by IRAS and IIO of significant clouds of interstellar dust, which absorb higher-frequency light and reemit it as infrared, and increasing interest in planetary formation, especially after the first discoveries of extrasolar planets in the middle of the decade, an infrared telescope seemed the more logical successor to Hubble and Leavitt than a gamma-ray observatory. Additionally, unlike the visible bands, infrared light is mostly absorbed by Earth’s atmosphere before reaching the ground, blocking much of it from ground-based telescopes, even those built on mountains. If one was going to build a large, expensive space telescope, the reasoning went, it made more sense to build one which would not face competition from larger and cheaper ground-based telescopes, and more importantly would produce research those instruments were incapable of performing.

    For all these reasons, the idea of building a new Large Optical Space Telescope to succeed Hubble was marginal at best in the astronomical community, with few supporters. Even though large-diameter, high-precision mirrors had been perfected for other purposes, building such a telescope would still be too expensive for the scientific value it would return. Instead, the question of the next major telescope was going to be a competition between LGO, with the backing of much of the space astronomy establishment (in particular NISA, looking for a new project to occupy itself after the conclusion of Hubble’s mission), and LIST, with greater appeal among younger and more Earth-bound astronomers.

    In the end, it wasn’t even close. While LIST was popular, compared to LGO it was too immature and underdeveloped to be a serious competitor for research dollars. If Congress was hesitant about starting a new budget line for LGO, at least it had the comfort that the design had been firmed up, contractors had been virtually pre-selected, and key personnel were ready to go. By contrast, LIST was not much more than a general concept, with such basic necessities as the precise design existing as little more than cocktail-napkin sketches. Little work had been done to ensure that the technical requirements of its infrared detectors, solar shield, or helium dewar could be met within a reasonable period of time and amount of money, and the management structure had yet to be created. In FY 1997, less than a year after Leavitt’s launch, Congress approved a new start for LGO, finally beginning the project nearly twenty years after the concept had been created, while LIST moved into the same “on deck” slot that LGO had occupied for so long.
     