Chapter 28: A Digital Revolution!
Excerpt from: From a Figment to a Reality: The Imagineering Method! by Marty Sklar
The Digital Entertainment Revolution was emerging in 1990, and I’m proud to say that Walt Disney Entertainment was the industry leader. When I first joined Disney back in the 1950s, a “computer” was a giant, buzzing thing that ran on punch cards and vacuum tubes and required a building the size of a gymnasium just to house it. By 1990 a computer that could sit on your desk, running on a chip the size of a postage stamp, could do things that those early programmers couldn’t even conceive of a computer ever doing. And the folks at the Softworks were certainly doing things with computers that we couldn’t have conceived of in 1964, when Mr. Lincoln debuted at the World’s Fair. The controls for Mr. Lincoln required a bank of relays that took up a whole wall just so that he could stand up, sit down, gesture broadly, and talk. By 1990 a small black box could run an advanced Audio-Animatronic through a complex series of fine motor motions without a single shake or a millisecond’s gap in the audio.
Even beyond animatronics, the folks over at the Disney Digital Division, or “3D”, were creating fully computer-generated animation where not a single cel was ever inked or painted and not a single still frame of celluloid ever shot. Lasseter and Keane had produced their first all-digital animation sequence in 1983 with the original Where the Wild Things Are test footage. Not only was it blocked and composited digitally, but it was “inked and painted” digitally. They wanted to continue the all-digital approach on the 1986 film, but back then, even with a CHERNABOG, it was prohibitively expensive, so Where the Wild Things Are was a composite of digital and hand-drawn, the digital used mostly for backgrounds, framing, and compositing to reduce the number of hand-drawn cels required. By 1988 the technology was mature enough that entire animated sequences were being done digitally, such as the journey into the Lilliputian city in A Small World or the turtle-borne Discworld sequences for Mort. They could have made A Small World or Mort entirely digital if they’d been willing to spend an extra $5-10 million[1], but by 1990 computer technology had reached a point where a single experimental “Baby BOG” double-tower could do what four of the old Cray-2-based CHERNABOGs could do. And you could link multiple Baby BOGs together as a single render farm network! More on them later.
Suddenly digitally inked and painted animation was cost-competitive with hand-drawn or hybrid animation. Thus the first all-digitally inked and painted film would be 1991’s Aladdin, whereas 1990’s Mort would be a hybrid of computer and cel.
Aladdin would be created entirely by 3D using the DATA machines, Pixar engines, Disney Imagination Stations, and Baby BOG compilers. Not a single hand-painted cel was made, much to the chagrin of later collectors and archivists (only the original pencil tests!). This digital technology allowed for all kinds of new possibilities: flying carpets that soared through the air and between towers and minarets, a man magically transforming into a monkey, genies that could change shape and form in new and fluid ways. Frankly, the programmers were the real genies. I personally couldn’t make heads or tails of it all. They typed some gibberish into a computer window and suddenly a lifelike drawing emerged. It was the computer equivalent of waving a wand and singing “bibbidi-bobbidi-boo!”
But the real challenge was selling the effects on a screen and making the motion look real and fluid, not like a cheap videogame. Brian [Henson] of course was all over it. He’d learned some coding in college and had picked up much more since returning to Imagineering. Now he, John Lasseter, Ed Catmull, Leo Tramiel, and Steve Jobs brainstormed ways to turn physical motion into digital imagery and vice versa. Waldo C. Graphic had been an interesting proof of concept, but now Steve, Leo, and Brian worked to turn it into a revolution. They could use a waldo input as a shortcut for coding, just as they’d started using waldos to pre-program audio-animatronics for the parks. Interestingly, we’d experimented with just such a concept in the 1960s for Mr. Lincoln: an Imagineer sat in a robot-like rig, a sort of full-body waldo, and made motions to program the Audio-Animatronic figure. We called it an “Animating Apparatus”. A crude relay-based analog system, it lacked the precision and elegance of the modern electronics, so it was largely a clever dead end at the time, but it still made me happy to see an old Walt-era idea resurrected.
Early Disney “Animating Apparatus” waldo technology for programming audio-animatronics, c. 1963 (image source: cyberneticzoo.com)
The advanced real-time Fazakas waldo input was a game changer in this regard. Now Brian or another experienced Muppet performer could guide the flight of a magic carpet or dastardly parrot, or steer the actions of Omar, Aladdin’s best friend turned into a monkey by Jaffir the evil Wazir, all using a waldo rather than having a coder at a keyboard spend hours setting and resetting parameters in a few lines of code, one number at a time, while a director sat patiently over their shoulder. Hours, and thus dollars, were saved simply by having the waldo performer guide a wireframe object through a wireframe world while the computer recorded and optimized the parameters automatically. These could then be loaded into the preset digital “objects” so that the digital creature reproduced the performer’s motions, with just a bit of post-processing cleanup to eliminate any blur, distortions, or goofs.
But the Holy Grail, as it were, was using digital effects in a live-action film, and the first live-action movie to make use of the 3D computer effects in this way would be Spiderman. The webslinger would need some “help” to swing through the streets of New York City, since on-location shots would be a costly challenge, and simply swinging back and forth on a rope in front of a green screen might work for a ’70s TV show, but not for a major motion picture in 1991. We needed to develop new technology and new techniques simply to capture the dynamic, rotating, three-dimensional “comic book” action in a realistic and engaging manner. Imagineering was brought in to work directly with the studio editing and effects people to develop them. It was my first time working directly on movie effects, so I brought in Brian Henson as the Imagineering lead, since he’d come up through film and TV.
And yet we were amazed at what was possible even without special photographic and computer effects. Camera work, specifically zooms, dollies, pans, tilts, lifts, and forced perspective, could achieve a surprising amount of what we needed, and at the recommendation of George Lucas we brought in cinematographer Peter Suschitzky of The Empire Strikes Back fame to help us make things happen. He even consulted with Bill Pope, who’d done cinematography for Raimi on Batman, figuring that even if we didn’t directly copy the tone of the Raimi film, we could quote some of the photography to subconsciously tie viewers to Batman and thus increase viewer acceptance.
Tom and Jim were taking a chance on the script, handing it to a young writer named Joss Whedon, who’d been chomping at the bit for a chance to write the screenplay for Spiderman. He’d already made a name for himself in the company writing for the X-Men and Spiderman cartoon series, even winning an Emmy for the writing on the “Dark Phoenix” crossover saga, and had penned the popular and subversive “reverse slasher” Final Girl. On Spielberg’s suggestion we also brought in Bob Gale to assist with the pacing[2]. Together the two of them put out an amazing script. Jim insisted that, in stark contrast to the dark and brooding vision of Sam Raimi’s Batman and all of the inevitable upcoming “dark and brooding” superhero flicks mimicking it, we needed Spiderman to stand out by being lighter and, in a word, “fun”. With perhaps just a touch of semi-self-aware camp, as Jim put it, but nothing even approaching Adam West territory. Joss came from a scriptwriting dynasty and had a real talent for fun and quirky dialog. His first pass at the script had so much over-the-top action, though, that we needed to tone it down just to keep this from ballooning into a $50 million picture. The dialog, though, was fun, snarky, and borderline self-aware, perfect for the smart-mouthed, quippy Peter Parker.
But the effects were going to be difficult in 1990. Batman could just swing from one building to another on a grappling hook in the dark, but Spiderman had to shoot webs from his wrists and dynamically swing through the city again and again, and in the daytime! We could do a lot of this with practical effects, sets, camera angles, and matte paintings, but the digital effects potential of the 3D machines led us to believe that we could do much better than that. However, processing speeds at the time were limited, so the complex vector graphics needed for a realistic digital effect of a human would be an extreme challenge. CHERNABOGs could help with the vector-data number crunching, but there were only a few of them and their time was hard to schedule.
Thankfully, Steve Jobs and his team at Imagine, Inc., had developed a new piece of hardware. Using the recent advances in computer processor speeds, they took the basic functionality of a CHERNABOG and implemented it in miniature with a twin-tower system that could fit on or under a desk. It was four times faster and more capable than a CHERNABOG, which was the size of a dining table. They called it “Baby BOG” as a working title (they’d come up with a permanent name later), which caused some of the formerly British staff to laugh. Imagine, Inc., then built several Baby BOGs and connected them all together into a single, mutually supporting rendering network! Even the computer nerds were amazed at what the networked Baby BOGs could do when it came to raw number-crunching for vector graphics.
Of course, what works in animation doesn’t always work in live action. Animation is unreal enough that it ironically allows you to veer into the totally unreal, like pets that are far too intelligent or humans whose smiles take up half of their faces, but the seeming reality of live action risks losing the audience’s willing suspension of disbelief if you push the laws of physics too far, even in a film about a teenager bitten by a radioactive spider and thus suddenly able to walk on walls. And an all-digital person, even in a blue and red spider suit, would have stood out as an artificial creation back then.
We also needed to combine a live actor or stuntman with a composite background and make it fluid and realistic. And in 1990 you just didn’t do that…yet. Motion capture technology wasn’t quite up to the task at the time and, as mentioned, digital effects weren’t up to snuff yet either. This is where Imagineering and 3D earned their salt. Brian and Steve came up with the “Body Waldo”, or “Baldo”, a lightweight plastic-and-aluminum exoskeleton covered in green-screen cloth, much like a miniaturized version of the old Animating Apparatus we developed for the ’64 World’s Fair Mr. Lincoln audio-animatronic, but one which could to some degree be hidden behind the actor’s limbs or torso[3]. Building off the techniques we’d developed for pre-programming the movements of a park audio-animatronic using a waldo and other controls in the hands of a skilled Muppet performer, the accelerometers and position indicators built into the Baldo could record the body movements of an actor as vector data for computer input, like digital puppetry on a full-body scale.
The Baldo was combined with a giant, swiveling double C-rig, like a huge gyroscope open on one end, that everyone called the “Christmas Ornament Rig”, or COR. It was made of lightweight plastic and aluminum tubing covered in green-screen material and allowed a trained stunt performer to strap in and perform 360° spins on three axes. Multiple cameras could be used to more easily edit around the Baldo and COR. Accelerometers and position meters placed on each axis of the COR converted the motions into simple position/angle/velocity vectors that could be used as computer input. Then the render farm of Baby BOGs could crunch the massive data down into a wireframe representation of the actor’s movements for insertion over any background image.
Now a moving performer’s complex actions could be converted into a wireframe input to drive effects. If even more realism was desired, the same performer could watch a screen projection of their earlier actions and actively recreate the motion without the rig in a green room, and the engineers could then composite all of that with camera background footage in post. But Brian and the team weren’t finished: they built a smaller “reverse Baldo” animating endoskeleton they called an “Odlab” (naturally), which could take the original recorded vector frames and use them as command inputs to run the small servos of an internal animatronic skeleton, moving an articulated humanoid scale model in a tiny spider suit for distance shots! All of these motions could then be fine-tuned using a simple waldo input.
It was a long and labor-intensive process; it took about 3-5 months of work to produce one minute of quality final effects footage. As such, we limited the number of big, thrilling “swing through the city” effects to a handful of exciting set pieces where they’d have the most impact. In fact, there are only about three minutes of swinging effects total in the whole movie![4] Today, with modern computer effects, you can of course have half the film be such wild effects, but back then you had to be picky.
And ironically, the effects scene that people remember best, where Spidey leaps and flips up the wall and onto the ceiling, was accomplished with a simple “rotating room” set, the exact same trick used in 1951’s Royal Wedding to let Fred Astaire dance on the ceiling. Sometimes the oldest tricks are the best!
Eventually, motion capture technology reached a level of accuracy that rendered the Baldo/COR/Odlab largely obsolete, but for about 5 to 10 years no one but Disney (and soon enough ILM, naturally) could digitally produce this level of accuracy and resolution in motion. Modern motion-tracking suit technology makes it all look quaint and old-fashioned today, but in 1990 we were blowing everyone’s minds.
In 1993 I even brought in effects legend Ray Harryhausen and showed him the Odlab rig, which we used to animate a small skeleton warrior, and I swear that he cried tears of joy!
[1] The first all-digital CAPS-based animation in our timeline was The Rescuers Down Under, produced by an independent Pixar as an experimental side project, which cost $35 million compared to the hybrid Aladdin’s $28 million two years later. Digital animation technology is a little bit ahead of this in this timeline, but the “hybrid” approach is yielding such good and cost-effective results that Disney has delayed their first all-digital movie longer than they needed to. Plus, the “Pixar” folks are part of Disney and employed full-time on Disney TV and feature productions.
[2] Hat tip to @Pyro.
[3] Inspired in part by Figment, Waldo C. Graphic, and some ideas that @Shevek23 came up with. Hat tip!
[4] Our timeline’s Jurassic Park (1993), by comparison, had only 4 minutes of CGI and an additional 10 minutes of practical effects.