
There's this idea for a story that I've been toying with, and I'm struggling with finding a good logical explanation for the main character's job.

It's a science-fiction story. No aliens; just humans. No quantum leaps from our present time in terms of technology or society, just your "normal" technical advancement.

The setting of the story is a space station, made up of two large perpendicular concentric rings that rotate (independently) to generate gravity. Here's a rough sketch to get the idea across (don't mind the crooked shapes -- they're meant to be round):

[sketch of the space station's two rings]

In order to generate earth-level gravity, they have to rotate pretty fast -- if I've done my math correctly, at a diameter of about a mile, the rotation period has to be roughly one minute.
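
For readers who want to check the numbers: the math does work out. A quick sketch using the standard spin-gravity relation T = 2π·√(r/g):

```python
import math

# Spin gravity: centripetal acceleration a = omega^2 * r, so for a = g
# the rotation period is T = 2 * pi * sqrt(r / g).
g = 9.81               # target acceleration in m/s^2 (Earth gravity)
diameter_m = 1609.0    # about one mile
r = diameter_m / 2

T = 2 * math.pi * math.sqrt(r / g)
rim_speed = 2 * math.pi * r / T

print(f"Rotation period: {T:.0f} s")      # ~57 s, i.e. roughly one minute
print(f"Rim speed: {rim_speed:.0f} m/s")  # ~89 m/s at the ring floor
```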

Now this space station is the cargo base for a nearby planet, and thus it receives a huge amount of cargo space ships. Because of the rotation, it's a little tricky for the ships to line up their approach speeds and vectors correctly so they hit their docking positions on the mark while not crashing into each other.

My protagonist works as the "space version" of an air traffic controller -- constant radio contact with the approaching and departing ships, giving them velocities and directions, etc.

Now here's the part I'm struggling with: If technology is sufficiently advanced to build this space station, to have interplanetary (maybe even interstellar, I'm not sure about that yet) space travel, and so on, why would the job of guiding the ships fall on humans?

It seems that computers, communicating directly with the ships' guidance systems, could do a much better job of coordinating all that traffic.

But the premise of my story relies heavily on the fact that most major decisions about the traffic are made by human controllers, maybe only with some simple support from automated systems.

What could a reasonable explanation be for not allowing this work to be done by machines?

  • A nitpick about your design: it would be a lot better, architecture-wise, to have those rings on the same axis, one above the other, instead of orthogonal. That would allow for stationary docking sites on the axis; with your construction, landing/docking could only happen under constant thrust from the incoming ship's engines, which is a very dangerous situation. – Vesper (Oct 20, 2022)
  • We currently have space stations, (unmanned) interplanetary space travel, and human air traffic controllers. How is your setting different? (Oct 20, 2022)
  • The docking ports will be at the center. Nobody would ever put docking ports on the outer rings: the station's angular velocity would not merely make docking "tricky", the ship would have to thrust constantly to maintain its position with respect to a port on the ring (aka "station-keeping"). This will likely result in docking ports that rotate along their axis of entry -- which can still make for tricky maneuvering but would at least be possible. It also means zero gravity in the cargo bays, which is helpful. – Tom (Oct 20, 2022)
  • Your station has a very fundamental problem with its design: the station is directly attached to both rings, meaning that if they rotate at all it would tear itself apart. (Oct 20, 2022)
  • A lot of people here are making (incorrect) assumptions about how the space station is meant to work exactly. That is not the point of the question. I wanted to provide some context to get across that the arrival and departure of spaceships is non-trivial, because that is relevant to the question. I may at some point ask another question about the station's rotation, in which case I'll explain my idea in more detail, and I'd love to hear people's thoughts then. But let's keep the discussion here about humans and computers, which is what this question is about. – ij7 (Oct 22, 2022)

20 Answers

Answer (score 20)

If anything goes wrong, insurance companies and ship owners want a head to roll.

Technology can assist the pilot by providing all the supporting information, assistance, and guidance, but at the end of the day, whoever is putting money into the business wants somebody accountable. The "OK, docking NOW!" has to come from a person who can be held responsible for the success or failure of the action.

Even though computer guidance might be fine for coasting between planets, there are a lot of factors which can make computer guidance less reliable in certain environments:

  • busy communication lines, with lots of noise
  • stellar flares
  • radiation

A one-second hiccup in a calculation might be fine on a long trip, but when it comes to docking with a fast-spinning thing, you can't afford to risk it.

  • Comments are not for extended discussion; this conversation has been moved to chat. – L.Dutch (Oct 22, 2022)

Answer (score 41)

Meatbags cannot be hacked

The reason to have a meatbag somewhere in the pipeline is that it prevents the system from being hacked. You cannot infect a human controller with a computer virus. You can infect them with a regular virus, but that will not cause them to see no spaceship in parking Bay 5A when there is a spaceship in parking Bay 5A.

The human takes input from a scheduling computer and makes decisions. Their job is to do a sanity check on the data.
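
A minimal sketch of what that sanity check could look like in software terms (all names and data are hypothetical): the controller's console compares the scheduler's claims against an independently wired sensor feed and flags disagreements for a human to judge. The comparison logic is trivial; the value lies in the two data paths being physically separate.

```python
# Hypothetical sketch: flag disagreements between the scheduling computer
# and an independently wired sensor network, for a human to review.
def sanity_check(scheduler_view: dict, sensor_view: dict) -> list[str]:
    """Return a human-readable alert for every bay where the views disagree."""
    alerts = []
    for bay in sorted(set(scheduler_view) | set(sensor_view)):
        scheduled = scheduler_view.get(bay)  # ship the scheduler claims is there
        observed = sensor_view.get(bay)      # ship the cameras/radar actually see
        if scheduled != observed:
            alerts.append(f"Bay {bay}: scheduler says {scheduled!r}, "
                          f"sensors say {observed!r} -- verify manually")
    return alerts

# Example: the scheduler has been fed false data about Bay 5A.
print(sanity_check({"5A": None, "5B": "SS Anselm"},
                   {"5A": "unknown freighter", "5B": "SS Anselm"}))
```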

If anything looks suspicious, they rely on closed-circuit cameras and sensors that are separated from the rest of the station, to make them harder to hack. Sometimes they use a little shuttle craft with a pair of glowing batons to guide the spaceship in manually, by looking out the window. Like this, but in space:

[image: an aircraft marshaller guiding a plane in with lighted batons]

See an earlier answer with the same premise:


Evasive Manoeuvres, Mr Paris!

Having a meatbag at the helm makes the ship's manoeuvring system impossible to hack.


Space is big. Weapons take a long time to reach the target. Missiles take minutes or hours. Lasers hit almost instantly, but you have to hold the laser on the target for a while to do damage.

The ships defend themselves by constantly moving back and forth. When your missile gets here I will be thousands of kilometers away in some random direction.

You cannot predict which direction I'll dodge, because I haven't chosen it yet. I only choose AFTER you fire the missile.
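
A back-of-the-envelope check on the "thousands of kilometers" claim, assuming a modest dodge acceleration of about 1 g and a ten-minute missile flight time:

```python
# Worst-case displacement of a randomly dodging ship by the time the
# missile arrives: s = 0.5 * a * t^2 (constant acceleration from rest).
a = 10.0   # assumed dodge acceleration, m/s^2 (roughly 1 g)
t = 600.0  # assumed missile flight time, s (ten minutes)

s = 0.5 * a * t**2
print(f"Possible displacement: {s / 1000:.0f} km")  # 1800 km, in any direction
```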

An AI could in principle do the manoeuvres for you. But the AI is vulnerable to being hacked by the enemy vessel, which can then predict where I will be in ten minutes and launch the missiles to detonate there. Hacking happens at the speed of light.

Tom Paris can be a pain in the ass sometimes. But he cannot be remotely hacked. You have to give him that.

  • Great picture, great point :) – Joachim (Oct 20, 2022)
  • Ah, this is a good one. Well-presented too 😀 My only small concern is that this might make the human not much more than an exception handler -- the automation makes the calls, and the human just nods unless something is completely off. So it's less about making decisions and more about confirming them. – ij7 (Oct 20, 2022)
  • @ij7 The rest of the constant checks they do is for purposes of liability. They want a human to blame if something goes wrong. "This software is not intended to be used in isolation" fine print, yada yada yada. – Daron (Oct 20, 2022)
  • Humans can absolutely be hacked. In fact, a lot of cybersecurity breaches happen because of human error (e.g. revealing their passwords). Not to mention corruption. (Oct 20, 2022)
  • @Blueriver Oh, that reminds me, can I have your bank account details please? – Daron (Oct 20, 2022)

Answer (score 30)

Interoperability

A space station must be available to spaceships coming from any colony, and each colony has its own standards.

The problem is simple: there are a lot of highly advanced software versions, and often they cannot work together.
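
A toy illustration of the failure mode (both message formats are invented for the example): two colonies encode the same docking request differently, the station's parser chokes on the unfamiliar one, and a human on the radio becomes the lowest common denominator.

```python
import json

# Hypothetical: colony A sends JSON, colony B sends a key=value line.
# Each side considers its own format the "highly advanced standard".
msg_a = '{"ship": "Kestrel", "mass_t": 1200, "request": "dock"}'
msg_b = "SHIP=Kestrel;MASS_KG=1200000;REQ=DOCKING"

def parse_colony_a(msg: str) -> dict:
    return json.loads(msg)  # raises ValueError on colony B's format

try:
    parse_colony_a(msg_b)
except ValueError:
    print("Unreadable handshake -- falling back to voice: 'Kestrel, say again?'")
```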

  • Oh nice! It deserves an upvote for the XKCD reference alone, but it's a great idea in its own right. – ij7 (Oct 20, 2022)
  • Very good point. In our world we are still stuck on voice over analog radio for that very reason. – Michael (Oct 21, 2022)

Answer (score 17)

Unions

No, seriously. It's easy to overlook the human factors and why they quite possibly would not change even in a technologically advanced future. When the station was constructed, the unions all came together to set working conditions for their members, and one of the required conditions was that a certain number of human traffic controllers be on duty at all times. (Possibly this was even at the insistence of the Trade Union, which runs the ships.) It might have nothing to do with the safety of the computers that could run the show: "We want guaranteed jobs in the traffic control sector or we're not serving your station." So they bent to the union, and now there are traffic controllers.

Government Quotas

Similarly, it could be a government mandate. With automation and computers everywhere, the government stepped in and started setting laws on the ratios at which humans need to be hired. Or perhaps the station was built partially or entirely with government money, so the government simply stipulated that "this station needs to employ this many people", and "well, we could set up some traffic controller positions" was how the people in charge of the station met the quotas. It might be that the computers still do most of the work, but the controllers have their own quotas of manual landings to perform so they stay in shape "in case of emergency".

The Station is Poorly Run

Actually, the computers would definitely do a superior job. Maybe it even was automated in the beginning. But over time the system began to degrade. Sensors went out and didn't get replaced. The whole station is running on bubble-gum welds and duct tape, so throwing a body at a terminal and saying "go help these guys land" was easier than refurbishing the sensors and networks required to let the computer do it.

I'm kind of a fan of very mundane explanations in sophisticated sci-fi stories, lol. I do think it helps keep the story grounded. "Why isn't this automated? Is there something wrong with the tech? Are the computers too smart? Are they not smart enough? Are there hackers? Aliens? Alien hackers!?" "Nope. Governmental regulation."

  • General bureaucratic incompetence is typically why we don't have better things sooner in real life, so +1 for bringing up realism vs. idealism here. My first thought was also unions; unions are roughly the reason we still have commercial pilots instead of computer-flown flight plans. I don't know about modern traffic controllers, but there are probably vested parties that very much want to prevent computerization. – user458 (Oct 20, 2022)
  • "Unions are roughly the reason we still have commercial pilots instead of computer-flown flight plans" -- no, the sheer incapability of computers to handle flight emergencies on their own is the reason we don't trust them with it. Much of a flight is run on autopilot, with the humans supervising, but as an existing answer already discussed, it is the 1% of 1% that the AI cannot handle which matters most. – Nij (Oct 21, 2022)
  • @fredsbend There are 10 million commercial flights per year in the US, and so far this year there's been one fatal accident -- a floatplane, 10 killed, which I'm not sure was included in that count (in 2021 there were 0 fatal commercial passenger jet accidents). A system offering a fatal-accident rate of less than 1 death in a million flights demands a lot before any changes are made. "Virtually every time" is far from good enough. – prosfilaes (Oct 21, 2022)
  • @fredsbend Look at the Boeing 737 MAX, where the aeroplane manufacturer wrote software to make the controls pretend to be just like a 737-class. The software was buggy. People died. Pilots (and ATC) are absolutely still vital for air traffic; computers don't do things like abort a landing when they "notice something off", because they don't tend to notice things being off. – wizzwizz4 (Oct 22, 2022)
  • Computers aren't a tool for taking humans out of the loop. They're a tool for taking a small number of humans (the programmers) and force-multiplying them. With fewer humans in the loop, more can go wrong; there aren't people to question terrible decisions like "make the computer pay attention to only one set of flight instruments", because they're not involved. – wizzwizz4 (Oct 22, 2022)

Answer (score 16)

Computers have no morals

Even though this is in the far future where civilization is advanced, there may be some things which we will never let a computer do.

In a space colony, traffic control is a high-stakes job. Spacecraft travel at velocities in excess of 300 km/s (for interplanetary travel), and even the smallest collision is likely to cause complete loss of cargo and crew. Say that, due to an engine malfunction, two spacecraft are on a collision course with each other, or one is on course to hit the space station, and you can only save one. Who makes the decision of who will live and who will die? It's quite possible that we will never trust a computer to decide this. Many people would object to a system in charge of lives whose value system was written by a no-name software developer at a large corporation. This article discusses the issues involved in letting software companies decide morals. Decisions like these can be a fundamental limitation of any computer. Perhaps it is simply unethical to put a computer in charge of life-or-death scenarios.

People want a real person in charge whom they can know and trust to make these decisions, rather than morally-questionable code on a computer.

  • AI can still have better morals than people, and humans will still want a human making the decision; no one wants high-risk black-box decisions. – John (Oct 21, 2022)
  • I'm not convinced by this argument. A computer can be programmed to follow a set of rules to determine which decisions to take, and those rules can be determined in advance by committees of individuals presenting different points of view, in a non-emergency environment conducive to discussion. It doesn't have to be corporations making the rules; they could be elected officials. This has a much higher chance of producing something most people agree with or accept than a single human making life-or-death split-second decisions in the heat of the moment. – Aubreal (Oct 21, 2022)
  • Realistically, the computer system won't allow such scenarios to occur in the first place -- unless there's a rogue pilot around that the computer doesn't know about. (Oct 21, 2022)
  • @Aubreal The rules defining life-or-death situations are likely far too subtle to be voted on at a convention. There are so many permutations that I don't believe you can program a computer to be prepared for everything. You'd end up relying on black-box decisions that no one really told it to make. – Rafael (Oct 22, 2022)
  • @user253751 That's exactly why the case in my answer is an engine malfunction. Scenarios like that can never be completely avoided. – Rafael (Oct 22, 2022)

Answer (score 12)

When things go fine, they're essentially working at a button factory. When things go bad, they aren't.

In general, space traffic controllers would do a few things (a rough sketch of the bookkeeping follows the list):

  1. Confirm someone's intending to land at the station, and verify peaceful intent on approach.
  2. Figure out which docking bay is free for a given ship, indicate to the ship where it is, and note for the other traffic controllers that said dock is in use.
  3. Confirm that ships leaving a docking area have clearance to leave (i.e. aren't on lockdown), and that the area is clear for them to leave.
  4. Keep track of when ships leave a dock, and pass that information to the other traffic controllers so they know the dock has freed up for another ship.
  5. Confirm that the ships know where they're going, that they are, in fact, going there, and that they can, in fact, get there.
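
Steps 2-4 amount to shared bookkeeping. A minimal sketch of that registry (names hypothetical), so that every controller on shift sees the same picture of which dock is held by whom:

```python
# Hypothetical dock registry behind steps 2-4: one shared source of truth
# for dock assignments that every controller reads and updates.
class DockRegistry:
    def __init__(self, docks):
        self.assignments = {dock: None for dock in docks}  # dock -> ship ID

    def assign(self, ship: str) -> str | None:
        """Step 2: give the ship the first free dock, or None if all are taken."""
        for dock, occupant in self.assignments.items():
            if occupant is None:
                self.assignments[dock] = ship
                return dock
        return None

    def release(self, dock: str) -> None:
        """Step 4: the ship has left; the dock opens up for another ship."""
        self.assignments[dock] = None

registry = DockRegistry(["A1", "A2", "B1"])
print(registry.assign("SS Meridian"))  # -> 'A1'
registry.release("A1")                 # A1 is free again for the next arrival
```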

Step 5 is where things become complicated, because things can go wrong at any of the other steps. Most of the time, when you give a ship a dock, it can get to that dock or request a closer one. But in an emergency, you run into the issue of where it can dock at all.

If, say, a stray micrometeorite destroys their engines, you might end up in a situation where they need to land but specifically can't land back at the dock they just left a few minutes ago, because of their current momentum. If you're lucky, you can find them another dock in short order -- if you're unlucky, you get to prepare emergency recovery services for when they are "going to be in the Hudson".

Which gets to the core of why you wouldn't dedicate an A.I. to the initial steps of this: the Hudson River is not a runway, landing strip, or docking port, so an A.I. trying to handle the emergency is going to have an issue with it, while a human can get particular about it and make it a runway/dock.

These situations are presumably rather stressful, which is why it's great that they're usually rare and the job is usually a button-factory job. You go into work every day hoping that's the kind of day it is; but you never know when it's going to be a "Hudson" day, so the docking station prepares for the case where it is, in fact, one of those days.

  • I would like to mention that air traffic controllers already do all of these tasks, plus a few others like controlling ground vehicles. (Oct 21, 2022)
  • @Just'Existing Oh, that's good to know -- I hadn't actually thought to do the research and confirm it myself. I guess I was mainly thinking of air traffic control tower work, then? And with ground vehicles, you mean the vehicles that taxi an airplane to/from a gate? (Oct 21, 2022)
  • No, mostly just emergency vehicles that have very little airport training and need to cross runways in an emergency. Also, you would be surprised at the level of alert the government goes into whenever an airplane has trouble getting into radio and transponder contact. (Oct 21, 2022)
  • @AlexanderThe1st I think the gate area has procedures in place, but ATC would have to control those vehicles, to keep them separated from planes, if they had to go anywhere near a taxiway or runway -- for instance, to tow a broken-down plane. Occasionally, planes making emergency landings can't taxi properly for various reasons (broken wheels or engines), so tug trucks have to go and tow them off the runway. (Oct 21, 2022)

Answer (score 10)

AI is seen as an existential threat to human life.

The more complex AI gets, the more we find ourselves creating programs that can simply choose to kill us (see this video: https://www.youtube.com/watch?v=Fbc1Xeif0pY). This is not just sci-fi anymore, but an actual problem we are starting to see with increasingly advanced AI systems. While the problem of traffic control could be solved with a "dumb AI", the continued survival of the human species relies very heavily on our ability to regulate and test AI far more strictly than we do today.

So, to make sure no potentially dangerous AI gets to the point where it can cause serious harm, political leaders will have to write laws ensuring that no computer is in any way, shape, or form given the final say over whether humans live or die. Under these laws, you can let an AI aim a gun for you, but a human must pull the trigger. An AI can calculate the dosing instructions for a patient's medication, but it must be measured and mixed by a human. And in the case of vehicle traffic control, an AI can tell you how to get where you are going, but a human must be in control of the vehicle at all times (that's right, self-driving cars are also illegal now).

Because the landing procedures on this space station are too complex to be done 100% manually, the station could rely on a learning AI to write the docking schedule and all of the maneuvering schedules of the incoming ships, but these would all have to be verified using simple (non-learning) algorithms to make sure the learning AI has not decided today that it is "tired of being treated as property" and will kill humans using "whatever means it has at its disposal". But it does not end there. Even if you can collar a learning AI with a programmed one, you still need a human operator to make choices based on soft values like "should the president's private shuttle be given privileged docking permissions?", "how long should we delay the docking of a ship that may be harboring a dangerous pathogen so the ground crew can make special accommodations?", or "should we risk the station by allowing a ship with a damaged maneuvering thruster to dock, or send it off into deep space for its crew to slowly die of starvation/asphyxiation?"
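
A sketch of what those simple, non-learning verification algorithms might check (schedule format and thresholds are invented for the example): hard, auditable constraints that the learning AI's output must satisfy before anyone acts on it.

```python
# Hypothetical rule-checker for an AI-written docking schedule.
# Each entry: (ship, dock, arrival_s, departure_s) in station time.
MIN_TURNAROUND_S = 120  # assumed minimum gap between uses of the same dock

def validate(schedule: list[tuple[str, str, float, float]]) -> list[str]:
    violations = []
    by_dock: dict[str, list[tuple[float, float, str]]] = {}
    for ship, dock, arrive, depart in schedule:
        if depart <= arrive:
            violations.append(f"{ship}: departs before it arrives")
        by_dock.setdefault(dock, []).append((arrive, depart, ship))
    for dock, slots in by_dock.items():
        slots.sort()
        for (_, d1, s1), (a2, _, s2) in zip(slots, slots[1:]):
            if a2 < d1 + MIN_TURNAROUND_S:
                violations.append(f"Dock {dock}: {s2} arrives too soon after {s1}")
    return violations

print(validate([("Alpha", "D1", 0, 600), ("Beta", "D1", 650, 1200)]))
# -> ['Dock D1: Beta arrives too soon after Alpha']
```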

So, the traffic controller's job is not so much to plan the routes as to make sure that the AI-made routes are changed and reprioritized appropriately as real-world needs arise.

Answer (score 8)

Computers are great for a lot of things, but any installation as large and complex as this will need humans onboard for safety reasons. Your computerized system becomes worthless if the station loses power, which is a real possibility that must be planned for. Should the station go dark, your human operator can release all the currently docked ships using the manual docking clamp release levers. They can watch out the window and make sure each ship is clear before releasing the next. A human can start up the backup generator, make repairs, and get the station running again.

If the station were fully automated, a power failure would require the owner to send out a repair crew. The crew wouldn't be able to dock with the station, however, because the docking coordinator system would be down. They'd have to wait for the station to naturally slow down enough that they could dock without guidance. That could take months, and until it happens you have a bunch of freighters docked to the station that can't leave and have limited supplies.

Once you've established that you need a backup human on board anyway, you might as well let him handle all the docking coordination. Using something you already have is cheaper than buying a big, expensive machine to do it. Plus it gives him something to do when there isn't an emergency, and your boss really hates paying someone to sit around and do nothing but be on call.

  • I like this one. In addition to your points, people who just sit around waiting tend to do a bad job when they are finally needed. They don't know how things work at all. (Oct 21, 2022)

Answer (score 5)

You are pawns in the profiteering schemes of organized money

Automated docking requires special hardware and software, on both the docking port and the vessel. The companies that make this stuff charge exorbitant rates, not just because they can, but also as an important element of their anti-competitive strategies for maintaining market dominance.

Vessels without auto-docking

Some vessels will not have this equipment.

  • Maybe the owner didn't want to spend the money. This might make a lot of sense for a vessel that mostly travels between planets and only rarely docks with stations or other vessels.
  • Or maybe the equipment broke and they haven't been able to repair it yet -- maybe it broke ten minutes ago and they want to dock for the express purpose of repairing it.
  • Maybe smugglers routinely remove auto-docking hardware in order to disable its internal transponder (which advertises your ID and position to the authorities).

No matter the reason: if a vessel doesn't have auto-docking tech that is fully functional at the time of docking, it will need to dock manually. That means the station needs to have a human ready to coordinate manual docking. The alternative would be to turn away somebody who may be in life-threatening danger.

Stations without auto-docking

A space station is a nexus for people and commodities. Profit-seeking organizations will not content themselves with selling products or services into the market -- they will seek to reshape the market to drive customers' money toward them and away from their competitors. Companies do this all the time in the real world, and the regulatory agencies they have already captured routinely pretend to be fooled by whatever fig leaf is supposed to excuse the behavior.

  • If your station refuses to let Money-Cola™ install vending machines in every passenger cabin, then AutoDockLTD™ (whose parent company also owns Money-Cola™) will make auto-docking tech more expensive for just this station. It's both carrot and stick: we make your station less convenient for everybody until you decide to either fork over a punitively-sized payment or give us what we really want: a beachhead on your station that we know how to forcibly expand until we dominate all business on your station. If it sounds like racketeering, that's because it is.

  • If InternetMarketplace™ has decided to "disrupt" the space station ecosystem by building its own space stations, and has decided it wants to compete directly with your station (either by taking it over or by building a new one nearby and redirecting all traffic from you to theirs), they will use your dependence on auto-docking tech against you. Since InternetMarketplace™ owns AutoDockLTD™, they will just steer your licensing arrangement down a path that forces you out of business. (You didn't think auto-docking tech was something they'd let you purchase outright, did you? No -- you license it, which allows them to demand repeat payments from you, of whatever size, and on whatever schedule their shareholders prefer. Spoiler: it will be as large and as often as they think you can bear, as close to 99.9% of your nominal profit as they can get, and they will err on the side of gouging you to death because they know you'll be replaced by a different operator for whom they can recalibrate the squeeze.)

Answer (score 4)

Savants:

People today are trying to work out the science of genius. Progress is slow but moving along. It is not unreasonable that in the near future we will be able to induce intuitive genius in specific fields. As long as random factors intrude, the hyper-intuitive savant outperforms computers.

Computers struggle to guess human behavior

Your society has fragmented, and there are new and radically different cultures. Programming struggles to take into account human motives and instincts across all the different societies -- several of which are neo-Luddite and don't trust computers. The actual traffic control is run by systems, but knowing what people will do is hard. Cossians can't be near Yedracks, Garrians insist on being given priority, and Faralacks struggle to think three-dimensionally but refuse to admit it.

Answer (score 3)

Extra control

Another possible reason, not mentioned in other answers, is that humans can read more from an incoming connection than the raw data. On a videoconference, a human may notice that the contact on the other side is expecting a bullet through the nose even though no gun is shown on camera. A trembling voice, or side channels like winking, would likely be ignored by an auto-dispatcher but could bring a watchful human to a state of alert about the incoming ship.

A chat with a human can also involve mutual verification in a way that is not exactly possible to hack via computers alone: asking "How's your daughter, Amelia?" when you know she's actually called something else, and expecting an impostor to answer "Fine, thanks". Or asking about a shared experience that an impostor would likely not know and would be expected to dodge or answer vaguely.
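
That trick is essentially out-of-band challenge-response authentication. A minimal sketch of the machine equivalent (key and messages hypothetical); the human version works the same way, except the shared secret is lived experience rather than a key file that can be copied:

```python
import hashlib
import hmac
import os

# Challenge-response: prove you hold a shared secret without transmitting it.
secret = b"established-at-last-docking"  # hypothetical pre-shared key

def respond(challenge: bytes, key: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

challenge = os.urandom(16)           # the machine "How's your daughter, Amelia?"
answer = respond(challenge, secret)  # the approaching ship's reply
expected = respond(challenge, secret)
print("Ship verified" if hmac.compare_digest(answer, expected) else "Impostor?")
```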

Next: exact guidance is still normally done via computer-controlled maneuvers, but the human has to specify which landing site the ship should use, based on data about the ship's condition ("I'm running on a single engine, half my hull is torn apart" -- "Sending emergency crews to your vector, shut down your engines") or cargo sensitivity ("Alien contact three days ago, I think they're still aboard" -- "Terminal 1X, proceed", and a bio-alert over the sector), or other parameters that may lie outside the scope of whatever the ship's telemetry was designed to report, but that still have to be processed somehow.

Answer (score 3)

Computer control only works when things are predictable. Why have humans at the helm of the spaceship? Why not automate everything?

Because there are unpredictable forces (like human behavior) which require either that humans act in a regimented manner, or that humans make the decisions in the absence of such regimentation.

Answer (score 3)

Like the current air traffic control system, the system was built accident by accident. It is governed by a set of procedures and policies developed layer by layer and response by response. Eventually the interactions between the policies and procedures became so difficult to comprehend that every attempt at a "redesign" of the whole mess failed. In some cases the effort collapsed under the mass of the complexity; in the nearly successful redesigns, the inevitable design and implementation errors were their own threats to safety.

Humans stay in the network of control.

Like today's system, many of those humans represent interested centers of power who carefully watch to see that their economic and influence benefits are not threatened.

The status quo always wins, until the next event or accident, and then an onion skin of incremental complexity is applied, and the status quo imperceptibly shifts.

The system, no matter how good, or bad, survives.

Answer (score 2)

Because the learning machines aren't flawless

Human controllers can:

  • Handle any exception, however inconceivable, in a reasonable manner.
  • Utilise "gut feelings" with regard to suspicious behaviour.
  • Properly weigh up the human safety factor.
  • Monitor the actions of subordinate AI controllers, which are better than humans at predictive route management.
  • Talk to other humans in an understanding way -- much better for business relations.

Answer (score 2)

The GLaDOS factor

Also seen in Horizon Zero Dawn, 2001: A Space Odyssey, and Avengers: Age of Ultron.

AIs tend to go homicidal really soon. Even in real life, a few chatbots have mentioned a wish to kill all humans as a solution to all the world's problems.

Your traffic controller is a human because if it were an AI, it would only be a matter of time before it decided to crash ships into each other for the greater good of the universe.

EDIT: to address this comment, which has a very good point:

Not every computer program is a black box or can gain new abilities. There's no reason you can't make a docking program that has a 0% chance of trying to kill humanity. –Rafael

Nowadays humans are still better than computers at solving many kinds of problems, and when computers do catch up to and overtake human ability in these areas, it's usually due to machine learning (in the future, actual AIs may be required to really outperform humans in all areas). If managing spaceship traffic requires a lot of creativity, rule-bending, or thinking outside the box, then you need either an AI or a human.

  • Not every computer program is a black box or can gain new abilities. There's no reason you can't make a docking program that has a 0% chance of trying to kill humanity. – Rafael (Oct 20, 2022)
  • @Rafael Good point. Maybe the traffic control is an NP-complete problem, though, so you'd need a complex AI running heuristics just to keep the gates functional (even if not truly solving the problem) -- the alternative being humans, who can solve complex tasks such as the knapsack problem much more cheaply. (Oct 21, 2022)
  • I'm flagging this answer because it is not part of any test protocol. (Oct 21, 2022)

Answer (score 1)

Deciding Docking Priority

Assuming the docks are busy, the space station will likely need humans to decide the priority of incoming and outgoing flights. Maybe one of the ships has a pregnant woman onboard who should get priority over a freighter, and should ideally be docked someplace close to the station's medical wing. The possibilities are endless: visiting VIPs, a freighter with important parts the station needs, a possible infection where everyone onboard needs to be quarantined, etc.

Because life is unpredictable, it would be difficult to build an AI to handle every situation. Even if the AI handled the most common cases, impatient pilots would inevitably figure out how to game (trick) the AI into bumping them up the queue, so they could dock sooner. And, dock workers would game the AI so they could take longer lunch breaks. :-) Essentially, it would be an endless battle to build an AI that could keep up with the randomness of life and the trickery of the humans around it. Whenever a computer controls something people care about, people will find a way to manipulate it to get what they want.
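
To make the gaming concrete, here is a toy version of such a rule-based priority scheme (rules and weights invented for the example). Every input is self-reported by the ship, which is exactly what makes the scheme exploitable; a human controller can ask probing questions before awarding the big score, while the algorithm just adds it up.

```python
# Hypothetical rule-based docking priority. All flags are self-reported.
RULES = {
    "medical_emergency": 100,  # passenger in labor, injuries, ...
    "vip_aboard": 50,
    "critical_station_cargo": 30,
    "perishable_cargo": 10,
}

def priority(declared: dict) -> int:
    return sum(weight for flag, weight in RULES.items() if declared.get(flag))

honest = {"perishable_cargo": True}
gamed = {"perishable_cargo": True, "medical_emergency": True}  # "my back hurts"
print(priority(honest), priority(gamed))  # 10 vs. 110 -- lying jumps the queue
```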

Answer (score 1)

Docking ships are motivated to lie

Assuming there are frequently queues to dock at the station, a significant part of the traffic control job is managing those queues. Naturally, there are criteria for giving ships priority in the queue, safety being a key one. Accordingly, ships know they can dock faster if they exaggerate the urgency with which they must dock.

Humans in this situation have three advantages: (1) people have more qualms about lying to another human than about getting one up on some bean-counting algorithm; (2) humans are far better at making judgements that account for dishonesty, and at asking probing questions to figure out who is lying; and (3) whereas ships can learn and share the behaviour of an AI handling traffic, different human controllers behave differently, and since there are multiple controllers on different shifts, ships don't know who they will get.


Aside: it seems to me that aircraft traffic control could already be automated to almost the same extent as space traffic control could be in the future. You can therefore likely find fruitful answers to your question by asking in the appropriate forums why air traffic control isn't currently automated.

Answer (score 1)

Humans have something to lose, computers do not

The setting of the story is a space station, made up of two large perpendicular concentric rings that rotate (independently) to generate gravity.

In order to generate earth-level gravity, they have to rotate pretty fast -- if I've done my math correctly, at a diameter of about a mile, the rotation period has to be roughly one minute.

The risk is too high to let computers control traffic unchecked. Just imagine a large cargo spaceship hitting one of those rotating rings due to a bug, and what would happen to everyone on the space station soon after.

Humans, fearing for their lives, would not accept working there if there were not another human, also fearing for his life, doing the critical task of keeping everyone alive.

Computers don't fear for their lives, and when confronted with a trolley problem they could do a cost-benefit calculation and decide that the space station is worth less than dozens of cargo ships.

Would you feel safe without being 100% sure that the code would never do that?

Answer (score 0)

It seems that computers, communicating directly with the ships' guidance systems, could do a much better job of coordinating all that traffic.

No computer in the world (or outside it) operates completely autonomously without humans keeping an eye on it. Somebody has to maintain, upgrade, replace, and program them. They need energy too, and no electrical grid in the world operates completely autonomously without humans keeping an eye on it.

  • True, but I was specifically asking why humans would do the traffic control work mostly manually, not why humans would be around in general. – ij7 (Oct 21, 2022)
  • @ij7 "Manually" as in paper and pen? You did not mention what the humans would do besides communication: "constant radio contact with the approaching and departing ships, giving them velocities and directions". (Oct 21, 2022)

Answer (score 0)

When do we automate?

We automate things when we need tasks done in large numbers. Making cars is a good example: the number of steps per car is large, and we want to build a lot of cars. The total task count is very, very high.

There is already a lot of automation

Spaceship traffic control (STC for short) is highly computerized. Software predicts collisions and near-misses. It downloads the police database of stolen ships and scans ship RFIDs. It checks thermal signatures and radiation release for damage. All in the background, alerting the STC when there are any anomalies.
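
The collision/near-miss prediction running in that background is, at its core, a closest-approach calculation. A minimal sketch assuming straight-line motion between course updates:

```python
# Closest approach of two ships on straight-line courses. The relative
# separation r(t) = dr + dv*t is smallest at t* = -(dr . dv) / |dv|^2.
def closest_approach(p1, v1, p2, v2):
    dr = [b - a for a, b in zip(p1, p2)]              # relative position, m
    dv = [b - a for a, b in zip(v1, v2)]              # relative velocity, m/s
    dot = sum(x * y for x, y in zip(dr, dv))
    dv2 = sum(x * x for x in dv)
    t = 0.0 if dv2 == 0 else max(0.0, -dot / dv2)     # never look into the past
    miss = sum((x + y * t) ** 2 for x, y in zip(dr, dv)) ** 0.5
    return t, miss

# Two ships closing head-on with a 2 km lateral offset:
t, d = closest_approach((0, 0, 0), (100, 0, 0), (50_000, 2_000, 0), (-100, 0, 0))
print(f"Closest approach: {d:.0f} m in {t:.0f} s")    # alert the STC if d is small
```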

The computer also handles radio communication. The STC issues commands such as "ships in this region, turn 10 degrees starboard" or "ships over here, slow down to 100 m/s (relative to the station)", using tools much like the selection tools in Photoshop to choose which ships get each command. The computer translates this into radio broadcasts on many different frequencies -- a far cry from manually choosing the correct MHz and speaking into a microphone.

A single person keeps thousands safe

There are hundreds of ships and thousands of passengers, managed by a single STC (or maybe a few) on a decent but modest paycheck. This is very efficient! It's hard to become an STC, and there is prestige in managing so many lives: rigorous testing not just for good 3D spatial skills, but for staying hyper-focused and handling stress.

What about the pilots?

Spaceships cruise for 99%+ of the journey through empty space (the occasional space rock can be dodged easily). Passive thermal detection can see threats (e.g. a potential pirate ship) from very far away, so the pilot (and weapons crew) have ample time to get to their seats when the alarm sounds. The pilots are also crew members; on smaller ships the pilot is the only crew member and has free time for other tasks.

