27

The 1998 movie "Armageddon" depicts two Space Shuttles launching simultaneously. I read that some expert said, "There isn't enough computing power in the world to launch two shuttles at the same time."

I always wondered if that was literally true.

EDIT: I can't find the quote, but considering I heard it about twenty years ago, it's possible that I misunderstood. Maybe he meant "...enough computing power in one place in the world..." (i.e., NASA couldn't do it alone). Or maybe he said "not enough...to operate [not just launch] two shuttle flights simultaneously".

In any case, the answers below confirm that launching two shuttles and managing their flights would be something NASA could have accomplished in 1998, computer-wise.

18
  • 30
    Have you any idea where you read that "expert" comment? Or was it in the film? Commented Jan 27, 2019 at 23:49
  • 11
    This is not at all true.
    – Ken Gober
    Commented Jan 28, 2019 at 1:06
  • 26
    What were the script writers smoking? Any one of the onboard computers is perfectly capable of putting a shuttle in orbit without any help.
    – Joshua
    Commented Jan 28, 2019 at 3:32
  • 9
    This sounds more like a question for skeptics.stackexchange.com.
    – Chenmunka
    Commented Jan 28, 2019 at 13:10
  • 7
    Armageddon is notoriously inaccurate... I would not use it for anything other than entertainment. This is probably hearsay, but NASA was rumored to use it in 'identify all the errors' scenarios in their training. todayifoundout.com/index.php/2010/01/…
    – mschaef
    Commented Jan 28, 2019 at 14:22

8 Answers

18

There are five consumers of computing power in the shuttle launch.

The first is simulation and testing, which was processing-intensive and at times a major consumer of processing power, but a dual launch does not actually require dual simulation in real time. The unique profile needed for the movie mission might well have required a worldwide pooling of processing power to simulate lots of different profiles and find the best one, but once a profile was chosen, verifying that flight would not be much more of a problem than two ordinary missions.
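
As a minimal sketch of why that search parallelizes so well (everything here is made up for illustration; the real simulations were vastly heavier per run), note that each candidate profile can be scored independently, so the work spreads across as many machines as you have, and all of it happens before launch day:

```python
from concurrent.futures import ProcessPoolExecutor

def score_profile(profile):
    """Hypothetical stand-in for one full trajectory simulation.
    Returns a figure of merit for a candidate launch profile;
    lower is better. A real run would be hours of flight-dynamics
    code, but still independent per profile."""
    pitch_rate, throttle = profile
    return (pitch_rate - 0.75) ** 2 + (throttle - 0.67) ** 2  # toy cost

# Candidate profiles: a coarse grid over two tuning parameters.
candidates = [(p / 100, t / 100)
              for p in range(50, 100) for t in range(50, 100)]

if __name__ == "__main__":
    # Embarrassingly parallel, and entirely pre-launch work:
    # pooling more machines just shortens the planning phase.
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(score_profile, candidates))
    print("best profile:", candidates[scores.index(min(scores))])
```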

The second is actual flight control once the vehicle leaves the pad. By design this is carried by the orbiter itself: self-sufficient all the way to orbit, multiply redundant, and driven by majority vote. So there were multiple duplicates within each shuttle and across the fleet.
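
The "driven by vote" part is computationally almost free. A toy majority-vote sketch (purely illustrative; the shuttle's actual redundant-set voting was implemented quite differently, and the command strings here are invented):

```python
from collections import Counter

def voted_command(outputs):
    """Majority vote over the outputs of redundant computers.
    Illustrates N-modular redundancy: one failed unit is simply
    outvoted. Not the real GPC mechanism, just the principle."""
    value, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise RuntimeError("no majority: redundant set disagrees")
    return value

# Three healthy computers agree; one (simulated fault) does not.
print(voted_command(["pitch +0.5", "pitch +0.5",
                     "pitch +0.5", "pitch -9.9"]))  # pitch +0.5
```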

The third is the launch control hardware in the pad itself. Accounts of the processing here seem to vary over time, but it was less about computation and more about signal conversion and sequencing. There were three of these units. They conducted the final launch sequence autonomously, with the monitoring hardware in launch control acting only as a veto, so once in a final countdown condition they could complete it without needing more external processing power.

The fourth is the actual launch management, which may be where this question started. While the system was intended to support rapid launch sequencing, and did support two shuttles on pads simultaneously, it is unclear whether there was actually the capability to have two simultaneous launches active. The problem here is not processing capacity but having enough interface hardware to monitor the final launch sequence; if the scenario is 'we launch on time or we die', this does not prevent a launch, it just makes it higher risk, since fewer eyeballs will be watching events.

The fifth and final consumer of processing was the ground-based tracking and control loop. Launch tracking would not easily handle two simultaneous launches, but that is more a restriction on the number of tracking hardware sets and on telling the two launches apart. All of this is important for safety and for investigating an accident, but it does not directly prevent a launch. Once in orbit, the tracking processes were already managing multiple objects anyway, so two shuttles up at once is not meaningfully different from one shuttle plus various satellites.

There are several other reasons why a formation launch is awesome fiction but a problematic reality; global processing power is not one of them.

2
  • rescue flight: "within a period of 40 days". Guidance, as they say, is internal. The hard part is the logistics of launching two shuttles, which we might not even have had the hardware for. +1
    – Mazura
    Commented Jan 28, 2019 at 2:35
  • 2
    Personally, I would have had the second one launch about five seconds after the first. I think it would have looked even more dramatic. Commented Jan 30, 2019 at 5:47
46

No, it wasn't even half of NASA's own computing power

The shuttle itself had five AP-101 computers. They were derived from the IBM System/360 architecture, but with 32-bit registers and a 1 MB address space.

Mission control did have many computers (including many mainframes) in Building 30 of Johnson Space Center in Houston. Launch control at Kennedy Space Center and the backup mission control in Huntsville also had computer facilities. Mission simulators had the same hardware as the spacecraft itself, with additional support computers.

All of the above computers were at most mainframes; i.e. a shuttle mission did not rely on supercomputers.


In contrast, consider NASA's Advanced Supercomputing Division at NASA Ames Research Center in Mountain View, California. By 1998, this division had over 20 supercomputers, far more than the entire computing power of the shuttle program. The division performed calculations for fluid dynamics, aircraft design, weather modeling, and ocean current prediction, and also had resources for data visualization and massive data storage. All of this was for research, and none of it was actually needed for a shuttle launch.

Finally, two shuttles (Atlantis and Endeavour) were on launch pads at the same time in 2008, and again in 2009 when Atlantis launched on the final servicing mission for the Hubble Space Telescope. Endeavour was kept ready to fly a rescue mission if something went wrong with Atlantis. Fortunately, no emergency happened, and Endeavour flew its own mission a few weeks after Atlantis landed.

So it's quite possible to launch two shuttles at the same time.

(Photo: Atlantis and Endeavour on adjacent launch pads)

4
  • 8
    Thanks, I’d forgotten the Atlantis/Endeavour pairing! That’s such an amazing photograph. Commented Jan 28, 2019 at 13:23
  • 6
"1 MB address space." It always amazes me how much people could do with so few resources, when essentially a supercomputer is in my pocket.
    – Sidney
    Commented Jan 28, 2019 at 16:50
  • 3
    Technical nitpick: the System/360 architecture had 32-bit registers from the start -- it was pretty much the architecture that introduced the 32-bit word (and byte-addressed memory) to the computing world. So the "but" in your first paragraph looks misplaced. Commented Jan 29, 2019 at 1:20
  • 5
@Sidney To be fair, that supercomputer in your pocket would have a lot more trouble getting the shuttle to space :P But in any case, they did run into the 1 MiB limit all the time - they couldn't load the whole program for the mission at once; they had to load further stages from tapes. And that's pretty much pure code with numerical data - no images, no UI, no text (to execute a command, you had to use the numerical code of that program :)). The shuttle had a lot of code, and unbelievably reliable code at that.
    – Luaan
    Commented Jan 29, 2019 at 7:23
43

Throughout the duration of the Space Shuttle programme, the overwhelming majority of the world’s computers were busy with duties which had absolutely nothing to do with it. So launching one shuttle took far less than half the available computing power in the world, and launching two would also have taken far, far less than the available computing power in the world.

(Imagine if the expert’s premise were somewhat accurate: how would all the computers in the world be set to work on the same problem in the first place?)

Incidentally, although it doesn’t qualify as two simultaneous launches, I do believe NASA was prepared to launch a second shuttle while one was already in flight, in particular for rescue scenarios.

9
  • 12
The quote, however, was about computing power, not computer count. So perhaps NASA has a really, really powerful computer that (a) is by itself more powerful than the rest of the world's computers put together, and (b) is fully consumed launching one shuttle. That satisfies the requirements. But no such thing exists. Or if it does, no-one's telling me about it.
    – dave
    Commented Jan 27, 2019 at 22:10
  • 8
It certainly didn't apply to the shuttle's on-board computing power, which was roughly comparable with four IBM S/360 mainframe processors. By the mid-1960s, about 8,000 S/360s had been installed worldwide.
    – alephzero
    Commented Jan 27, 2019 at 22:34
  • 4
@alephzero Even if they had the world's most powerful computer (maybe), and even if it was, say, 10 times as powerful as an average top-end computer (not likely), there were thousands of these top-end computers installed in the US alone, not to mention worldwide.
    – Raffzahn
    Commented Jan 27, 2019 at 22:38
  • 8
@another-dave Considering that the first Space Shuttle was launched in 1981, whatever computing power was needed for that was obviously available to NASA by that time, while the claim was made in 1998 about a movie scenario taking place in 1998. So the assumption would rather be that NASA had "a really, really powerful computer in 1981 that is by itself more powerful than the rest of the world's computers put together in 1998".
    – Holger
    Commented Jan 28, 2019 at 13:38
  • 3
    That's why movies require "a willing suspension of disbelief" :-)
    – dave
    Commented Jan 28, 2019 at 13:49
16

This is quite a strange assumption. What superior computing power does a launch need? Most of what is done is in control systems. Maybe a supercomputer is required for a few minutes of mission planning ahead of time to calculate the trajectory, but that's all.
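
To put a number on "that's all": a first-cut two-body orbit calculation is a handful of arithmetic operations. A back-of-envelope sketch (circular-orbit approximation only; real ascent targeting is more elaborate, but nowhere near supercomputer territory):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m

def circular_orbit(altitude_m):
    """Speed and period of a circular orbit: a divide, a square
    root, and a couple of multiplies. 1960s hardware sufficed."""
    r = R_EARTH + altitude_m
    v = math.sqrt(MU_EARTH / r)    # orbital speed, m/s
    return v, 2 * math.pi * r / v  # period, s

v, t = circular_orbit(300e3)       # a typical shuttle altitude
print(f"{v:.0f} m/s, period {t / 60:.1f} min")  # ~7730 m/s, ~90 min
```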

Nevertheless, let's take a look at the high-end machines in 1980:

In 1970, US manufacturers alone shipped about 15,000 mainframes a year. By 1980, this number had risen to more than 35,000 units. It does not include any smaller systems like minis - and more importantly, it does not include any European or Japanese manufacturers.

Even if the Shuttle needed a dozen high-end mainframes to be prepared for launch (which I doubt), that's just a droplet compared to the stream of systems installed at the time. Sure, NASA had some impressive machines, but nothing near half the world's computing power.

(Source for some numbers: Computerworld article of May 1977)

8
  • 1
    "What superior computing power does a start need?" - presumably less than doing 2 starts at the same time, but how much less? How much extra computing power do you need to prevent two rockets from colliding when they are launched simultaneously? Commented Jan 28, 2019 at 0:00
  • 2
@BruceAbbott ? Sorry, you lost me there. Mind explaining? Turning it upside down doesn't make any point I can see. If it's about steering the rockets, then everything has to be done locally anyway, as those reactions need to be done in sub-millisecond time frames. Nothing to be done in a remote computing center. So all the power needed must be on board, mustn't it?
    – Raffzahn
    Commented Jan 28, 2019 at 0:08
  • 2
@Raffzahn There is some serious number-crunching to do for any launch to orbit and beyond. It's kind of the difference between tactical and strategic - the local "tactical" work is done by the onboard control systems, but the "strategic" work of setting the course which the onboard systems try to follow is entirely calculated ahead of time by the high-end machines on the ground. Bruce isn't quite right, because each shuttle flies a different trajectory, but it still isn't more than a fraction of a percent of the world's processing resources.
    – Graham
    Commented Jan 28, 2019 at 1:02
  • 1
@Graham The keyword here is "ahead of time", which does allow serialization and thus does not increase the amount of power needed at once, doesn't it?
    – Raffzahn
    Commented Jan 28, 2019 at 8:27
  • 4
    @Graham, the Space Shuttle did all its work as short-duration missions in low-Earth orbit. Under those conditions, you can get away with a two-body approximation to orbital mechanics, and there was sufficient worldwide computing power to do that for two spacecraft at once in the 1950s.
    – Mark
    Commented Jan 28, 2019 at 22:40
6

When John Glenn went into orbit on Friendship 7, there were two completely parallel ground support computers, IBM 7090 mainframes. The idea was that, in the event of a system failure, the alternate system could take over. Even the programming had been done separately, to reduce the chance of simultaneous failure of the two systems. Mercury history

During Gemini, they actually had to support two spacecraft in orbit at the same time, in order to test rendezvous. That involved not only two systems, but also sharing data between them.

It seems unlikely that this dual redundancy would have been abandoned during Apollo or the Space Shuttle launches. That being the case, the necessary computer power could be made available by simply using one of the two systems to support each launch. You lose the redundancy, but that is all you lose.

4
  • 4
    This is only half the picture. You also need all the telemetry to monitor two simultaneous launches (including two mission control rooms and twice as many people). I've never worked on a "shuttle launch" sized system, but I have worked on "one-off" tests that cost > $10m to run. Doubling the computing power would have been fairly trivial, but doubling up the entire data acquisition system would have been a huge (i.e. way over-budget) expense. Of course that's not what the quote from the film actually said, but it's probably the right conclusion made for the wrong reasons.
    – alephzero
    Commented Jan 28, 2019 at 2:04
  • You make a good point. But the question was about computing resources. Commented Jan 28, 2019 at 2:10
  • Good answer - minor quibble - the Gemini launches were not concurrent (as opposed to the premise by OP)
    – NKCampbell
    Commented Jan 28, 2019 at 5:44
  • Yes, that is correct. The launches were timed out so that launch resources didn't have to be shared. As already noted, the control room was probably the critical resource, not computer power. Commented Jan 28, 2019 at 11:31
4

What I'm not seeing the other answers here address is the tracking issue, which seems almost certainly part of what was being talked about.

NASA tracks US satellites (and tracked the shuttles when they were operational) through TDRSS, the Tracking and Data Relay Satellite System, run from a couple of ground stations. The system itself uses relay satellites, many of which were launched by Space Shuttle missions in the 1980s and '90s.

I don't know a lot about the composition of the first TDRSS ground station, but I worked on the system's software for the second one (STGT) from about 1989-92, so I know a bit about that system. It used a set (6-ish, if memory serves) of VAX clusters. Each cluster contained two full-blown VAX/VMS machines and a third, smaller machine. One machine in the cluster would be the master that was actually operating things; the other was a "hot" standby doing all the same work but not transmitting; and the third's only purpose was to detect if the master crashed and help promote the hot standby to master. The clusters themselves had hot-spare clusters in case a whole cluster went down.
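
A rough sketch of that arrangement (names and the timeout value are hypothetical; the real STGT failover logic was of course far more elaborate):

```python
import time

HEARTBEAT_TIMEOUT = 3.0  # seconds; hypothetical threshold

class Watchdog:
    """Toy model of the small third machine in each cluster: its
    only job is to notice a dead master and promote the hot
    standby, which has been shadowing the same work all along."""
    def __init__(self, master="VAX-A", standby="VAX-B"):
        self.master, self.standby = master, standby
        self.last_beat = time.monotonic()

    def on_heartbeat(self):
        self.last_beat = time.monotonic()

    def check(self):
        if time.monotonic() - self.last_beat > HEARTBEAT_TIMEOUT:
            # Master presumed crashed: the standby, already doing
            # all the same computation, starts transmitting.
            self.master, self.standby = self.standby, self.master
            self.last_beat = time.monotonic()
            print(f"failover: {self.master} is now master")

w = Watchdog()
w.on_heartbeat()  # fresh heartbeat from the master
w.check()         # no failover while heartbeats keep arriving
```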

The point of this story, though, is that this wasn't really run by that many computers: low double digits of commercial VAX machines. While these weren't entry-level VAXes, they weren't supercomputers either. DEC sold 400,000 VAXes over the product line's history; scraping together another 12-ish of them at need in 1998 probably wouldn't have been an insurmountable task.

That being said, these stations were designed to track lots of space and near-Earth objects at once. I'm not a shuttle expert, but I don't see why adding one more of those would have broken the system. In fact, if Wikipedia is right, even with one shuttle and the ISS up there simultaneously, most of the bandwidth was still reserved for US military operations*. If the US (and, not so incidentally, the planet it is sitting on) were about to be destroyed, I strongly suspect the US military would happily donate some of it.

* - They provide a quote attributed to Space.com, but their reference link doesn't contain the quote.

2

Some background: I worked as a software developer at the Kennedy Space Center (KSC) from 1989 to 1995. The technical aspects of GremlinWrangler's answer are accurate as far as I can recall, but here are a few other important details:

GremlinWrangler is correct that all processing happened on the ground at KSC up to launch, with the Orbiter's on-board computers taking over at liftoff. The Space Center had four "firing rooms" at the Launch Control Center (LCC). One of these four was used for any given launch; if you have ever seen video of the folks at console computers during a Space Shuttle launch, that is a firing room. At any given time, not necessarily all four firing rooms were available to control a launch (one was originally just a conference room; one was extensively remodeled in 2006), but there were always at least two or three firing rooms "in commission".

Now, all of these computers (including the Orbiter's five computers, or "GPCs", and those in each firing room) were built in the 1970s, and in 1998 they were still in place, using 1970s technology! Each console in the firing room was, if I recall correctly, equipped with just 64 KB of RAM. By comparison, my desktop in 1998 had a few megabytes of RAM. All of the sensor feeds from the pad and the shuttle were fed back into one Common Data Buffer in the firing room, which contained a whopping 128 KB (yes, kilobytes) of RAM (technically configured as 64K 16-bit words).
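
The arithmetic behind those numbers, spelled out in a quick sketch (the 64K-word figure is from above; the buffer layout is purely illustrative):

```python
# 64K 16-bit words = 65,536 words x 2 bytes = 131,072 bytes = 128 KB.
WORDS = 64 * 1024
print(WORDS * 2 // 1024, "KB")  # -> 128 KB

# Every sensor reading from the pad and vehicle shared that one pool,
# e.g. as fixed slots of 16-bit values (hypothetical slot assignment):
import array
cdb = array.array("H", [0] * WORDS)  # unsigned 16-bit words
cdb[0x0010] = 1013                   # e.g. a pressure reading, made up
```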

All this to say: it should be painfully obvious that a rocket, no matter how innovative or unique, designed and built in the 1970s, with a computerized launch system designed and built at the same time, was still ~25-year-old technology by 1998. There's no way even a room full of these 16-bit, 64 KB computers amounted to more than a minuscule fraction of the entire processing power available in the world in 1998.

-3

It is at least plausible.

In theory it is certainly possible to launch two shuttles simultaneously. Additionally, closed-loop immediate guidance can be done by one typical PC of that era; it isn't too different from what was done by the orbiter simulator, which ran on a PC. In theory, one such machine could minimally operate two orbiters.
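
For a sense of how light "closed-loop immediate guidance" is: a textbook PID control step is a few multiplies and adds per cycle, and even at hundreds of updates per second that is a negligible load for a late-1990s PC. A toy sketch (gains and rate are invented):

```python
class PID:
    """Textbook PID loop, the inner workhorse of closed-loop
    control. Gains below are made up for illustration."""
    def __init__(self, kp=2.0, ki=0.1, kd=0.5, dt=0.02):  # 50 Hz
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # About six multiplies/adds per tick; trivially cheap even
        # for two vehicles' worth of control loops on one 1998 PC.
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

pid = PID()
print(pid.step(setpoint=10.0, measured=9.5))  # one 50 Hz tick
```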

However, in reality, we don't know NASA's requirements as far as accuracy and certainty in the orbits selected for the missions. It is certainly plausible that NASA had a significant percentage of the world's computational power at that time (and still does), and that it did use it to plan missions already in progress. In such a case, piloting two shuttles would require more resources than were available to maintain the same level of calculation that was already put into a single shuttle mission.

5
  • 3
    NASA certainly had a significant percentage of the world's computational power at the time, but it was mostly busy doing fluid-dynamics calculations and other aeronautics-related simulations. Planning and executing a space flight takes far less computing power, and could be done on commodity 1960s hardware.
    – Mark
    Commented Jan 28, 2019 at 22:48
  • Can you cite that? The truth is we don't know; none of you have definitively shown it isn't true. I've already said a typical PC can perform immediate guidance; as for long-term LEO, and especially shuttle re-entry, that is not clear. Commented Jan 31, 2019 at 7:34
  • It could be that the early shuttle program did represent a large percentage of the world's computing power. You certainly have not given reasonable persuasion otherwise that it almost certainly did not. Once again, the best answers are beyond the general public's education, to be able to select the best, or correct, answer by vote. S.E. fails as long as we allow it to. Commented Jan 31, 2019 at 7:39
  • Also, shuttle re-entry is a very computationally expensive task. None of you even question it, let alone explain how NASA predicted where the ballistic trajectory would take it, as its wings remain liftless through most of the atmosphere. Commented Jan 31, 2019 at 7:44
  • Computing a ballistic trajectory in a vacuum is trivial: Isaac Newton was doing it in the late 1600s on commodity hardware. Computing a re-entry is also easy: Apollo 11 was able to re-target to avoid bad weather on less than a day's notice. (And before you mention that the Shuttle had wings and Apollo didn't, they both used aerodynamic lift to control re-entry. The Shuttle just had more of it to work with.)
    – Mark
    Commented Jan 31, 2019 at 21:46

