28

In my universe, humans are pretty good with AI and have some experience in space combat, as nations fought for resources.

Regarding the world, humanity remains in the solar system and is in space mostly for science and resources - humans in space stations, scientific colonies on Mars and Venus, a small colony on the Moon. Anything away from the colonies and space stations is automated.

In terms of combat-relevant technology, it's close to current technology but with significantly better drives and power generation. Lasers are big and heavy in order to be effective at cutting through or overheating enemy spacecraft; other weaponry is mostly conventional and guided (either missiles or AI-controlled drones that close in until unguided weaponry is effective). There are no shields, blasters or the like, and combat distances tend to be on the high end.

There are crewed control ships close to the fights to give broad orders with only a few seconds of delay, but they'd mostly stay out of combat. However, for storytelling reasons I'd also like frigate-style missile artillery ships to be crewed. What viable reason could there be for them to require a human crew on board?

5
  • Related: worldbuilding.stackexchange.com/questions/186763/…
    – Willk
    Commented Jun 8, 2022 at 16:55
  • 3
    $\begingroup$ "what viable reason could be for them to require a human crew?": storytelling of course :) at the eod it will not matter if the crew is human or a very antropomorphised AI / androids, what it counts is if they will have that "human factor" that makes the story more entertaining. example: ART AI + murderbot in one of "murderbots diaries" $\endgroup$
    – Edoardo
    Commented Jun 9, 2022 at 12:55
  • 3
    @Edoardo it matters to me, otherwise I wouldn't ask :)
    – Infrisios
    Commented Jun 9, 2022 at 12:57
  • 2
    en.wikipedia.org/wiki/The_Feeling_of_Power – Commented Jun 9, 2022 at 19:23
  • No one wants Skynet.
    – DKNguyen
    Commented Jun 11, 2022 at 0:23

26 Answers

43

Frigates are "Jack-of-all-trades" ships:

For an all-out war, legions of robotized attack ships are great. They can exterminate everything with the best of them. But history shows that navies spend most of their time NOT at war. That's a lot of potential just gathering dust and rapidly becoming obsolete.

Furthermore, space is big. On Earth, naval officers can be controlled by instant communications. But more traditional naval officers lived in a world where they were days or weeks (possibly months) from getting orders from command. They were trained to deal with situations on their own - sometimes ones with no direct military application.

There are plenty of situations where you just need the navy to be present. ANY military vessel will outclass a pirate, or intimidate a space station. Life is messy, and everywhere you go, you need to arrest deserters, capture fugitives, provide emergency relief to disabled ships, or even possibly ferry replacement parts into bad areas where unarmed freighters fear to go. A naval vessel might be the ONLY representation from a government that is astronomically (literally) far away. Then there is dealing with frenemies, neutrals, belligerent multinationals, and the like.

All these tasks require a human presence and human judgement that can't wait the 3 hours for a human to answer an AI about what to do. In truth, the humble frigate is the premier vessel for any junior officer to be assigned to. If you get assigned to a laser battlecruiser, you only gain experience in administrative functions. Frigates see real action and real choices.
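
To put a number on that delay: in a solar-system setting, light-lag alone rules out running these judgement calls from home. A back-of-the-envelope sketch (Python; the separations are representative values, not exact figures):

    C  = 299_792_458        # speed of light, m/s
    AU = 1.495978707e11     # astronomical unit, m

    # Representative (not exact) Earth-to-X separations in AU.
    distances_au = {
        "Mars at its closest":  0.5,
        "Mars at its farthest": 2.5,
        "Jupiter (typical)":    5.2,
        "Saturn (typical)":     9.5,
    }

    for name, d in distances_au.items():
        one_way_s = d * AU / C
        print(f"{name:22s} one-way {one_way_s/60:6.1f} min, "
              f"round trip {2*one_way_s/3600:5.2f} h")

At Saturn-like distances the round trip is already approaching three hours, and that's before anyone at the far end even starts deliberating.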

  • Traditionally, frigates were quite small ships. To emphasize the multifunction role they play, the ships might be modular, so a weapon system can be detached and a shipping container-style cargo attachment or med-bay added. This makes your ships very adaptable for story purposes.

It is also a test bed for these same officers. Great ones (the Picards) get promoted to admirals. Incompetent ones screw up something and are assigned to duty on ships where AI make all the real decisions. A few are gifted at the job but don't have the temperament to do anything else. These officers become legends.

[Image: the Rocinante from The Expanse]

(Okay, the Rocinante is a corvette-class frigate, but the principle is the same)

31

Dead men tell no tales.

Humans can be easily and quickly terminated, should the necessity arise: an electric discharge or a poisoned needle directly from the helmet, for example, and the subject is dead.

An AI, by contrast, is more deathproof when it comes to recovering information post mortem, and nobody wants to hand over vital information to the enemy.

1
  • 12
    This answer is startlingly clever in its brutal practicality.
    – Jedediah
    Commented Jun 8, 2022 at 16:05
26

The AI Insisted

To the extent that some handful of your AI are nearly as self-aware and emotionally sophisticated as people are... The AI don't like risking themselves any more than you'd expect people to.

Turns out that humans expect ships carrying humans to be saved from destruction if at all possible. The more sophisticated AI figure that having a human on board gives them a better chance to only be sent on survivable missions as well.

If the AI are capable of some degree of deception, they may suggest that one of the other answers given here contributes to human-on-board being "optimal". (And maybe they're not even wrong, exactly. And maybe a really good AI is as fragile as those meat-bags, and you're not even sacrificing much maneuverability to bring one along...)

1
  • 5
    This is great, and still works with hiiighly-advanced AIs when all the practical reasons no longer make sense.
    – MaxD
    Commented Jun 9, 2022 at 10:03
19

Humans are creative, AI is fast.

Humans can conceptualise a situation far better than any AI, especially the simpler kind. Perhaps the larger, brighter AIs approach human level, but not the type fitted in smaller ships or limited structures. AIs tend to be narrow-minded: an AI programmed to cook can bake a cake, slice vegetables or boil an egg like nobody's business, but if asked to make a fire it may struggle in the absence of a stove. Extrapolate that to the complex environment, the infinite possibilities and unknowns of a theater of battle, and AIs alone are not ideal. HOWEVER, that said, combining humans with AI is quite deadly. The AIs keep track of targets and battlespace elements and relate this in a way that is easily digestible to humans; the human gives general orders or maneuvers for the AI to execute, again seamlessly. That maximises the abilities of both.

Humans distrust advanced AI

This is a typical, though quite logical, trope in SciFi. Fearing machine overlords is quite reasonable, so humans designing AI with limited capability, hardwired to narrow and specific actions, is quite a viable prediction.

EDIT: Looking back, I don't think I conveyed exactly what I was thinking.

The human in a combat situation can conceive of strategy and deception, and can deduce countermeasures against the same. These strategies can consist of maneuvers and multi-target actions that would never be thought of by an AI, nor could they be executed by a human, yet they are well within an AI's ability to execute. AIs could be given general orders to perform evasive maneuvers on detecting incoming ordnance; with reaction times measured in nanoseconds they would be far more effective, as they would be at precision high-speed maneuvers against targets in tight situations.

1
  • After the AI uprising, all AIs which have been permitted since then have been programmed with Asimov's first law (but without the requirement to prevent harm).
    – EvilSnack
    Commented Jun 10, 2022 at 1:14
11

War is political

Sure, you can delegate the decision-making about targeting, evasive manoeuvres, repair priorities, etc. to your AIs, but do you want your AI choosing whether to prioritise rescuing escape pods from an allied vessel or chasing down the fleeing cruiser with a key enemy general on board? Do you want your AI deciding to fire first and potentially start a sector-spanning conflict, or wait and risk the squadron getting crushed in a first strike by the enemy? When do you accept surrender? How do you negotiate a peaceful end to the conflict? Is that suspicious-looking civilian vessel really a refugee transport, or is it being used as a ruse to hide military intent?

AI can act, and inform, but political decisions are left to humans. Therefore you need humans involved in the decision-making. On a larger vessel you need more humans to be involved in more decisions. And you need them to be on board, because the turnaround times of stellar-scale communications rapidly become too slow.

10

Evasive Manoeuvres, Mr Paris!

Having a meatbag at the helm makes the ship's manoeuvring system impossible to hack.


Space is big. Weapons take a long time to reach the target. Missiles take minutes or hours. Lasers hit almost instantly, but you have to hold the laser on the target for a while to do damage.

The ships defend themselves by constantly moving back and forth. When your missile gets here I will be thousands of kilometers away in some random direction.

You cannot predict what direction I'll dodge because I haven't chosen it yet. I only choose AFTER you have fired the missile.
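
To put rough numbers on the dodging envelope: a ship that holds a randomly chosen acceleration a for the missile's flight time t can be anywhere within roughly r = ½at² of its predicted ballistic position. A quick sketch (Python; the accelerations and flight times are illustrative):

    def dodge_radius_km(accel_g: float, flight_min: float) -> float:
        """Radius of the reachable sphere: r = 1/2 * a * t^2."""
        a = accel_g * 9.81      # sustained acceleration, m/s^2
        t = flight_min * 60.0   # missile flight time, s
        return 0.5 * a * t * t / 1000.0

    for g in (0.5, 1.0, 3.0):
        for minutes in (5, 15, 30):
            print(f"{g:3.1f} g, {minutes:2d} min flight: within "
                  f"{dodge_radius_km(g, minutes):>9,.0f} km of the predicted point")

Even a modest 1 g held for a 15-minute missile flight leaves an uncertainty sphere several thousand kilometres in radius: exactly the "thousands of kilometers away" described above.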

An AI could in principle do the manoeuvres for you. But the AI is vulnerable to being hacked by the enemy vessel, which can then predict where I will be in ten minutes and launch the missiles to detonate there. Hacking happens at the speed of light.

Tom Paris can be a pain in the ass sometimes. But he cannot be remotely hacked; you have to give him that.

1
  • 3
    Unless, of course, he uses the neural interface of a schmexy little ship he picked up in a junkyard. Then he's very remotely hackable, even when he's not physically inside said ship.
    – Qami
    Commented Jun 8, 2022 at 15:17
10

There are no dedicated military vessels.

https://en.wikipedia.org/wiki/Standing_army#United_States

At the 1787 Constitutional Convention, Elbridge Gerry argued against a large standing army, comparing it, mischievously, to a standing penis: "An excellent assurance of domestic tranquility, but a dangerous temptation to foreign adventure."

Your spacefaring peoples are suspicious of dedicated military forces, because owning and maintaining such means the owners will be tempted to use them. Or the forces themselves may take matters into their own hands. So it has been on Earth, and so it is in space.

Your spaceships and space warriors are in effect a militia, summoned away from their regular space jobs to make war. They are a "well ordered militia", and their ships carry armaments in case they are needed. Hopefully these armaments have been maintained, and there are storytelling opportunities in that respect. Maybe not all the ships sent to war are actually as ready as their reports suggest. If there are sophisticated artificial intelligences aboard, these too are largely concerned with civilian matters - running models, doing math, calculating positions and so on.

7

Because people need a hero.

I watched the movie Don't Look Up last summer (and hated it, but that doesn't matter). It seems to bear no relation to your question (what does a comet hitting Earth have to do with military spaceships?), but this quote popped into my head as soon as I read your question:

 Randall: "Shouldn't this mission be accomplished using remote technology?"
 Teddy: "Washington's always gotta have a hero."

Simply put, it's not for practical reasons that they send people up there. With advanced enough AI we could fully replace human brainpower and possibly do better. But people like seeing people do stuff, and that will probably never change. It's the exact same reason that videos on YouTube have facecams: we like being able to associate something with a person, someone we can look up to, not just some random strings of code running inside a computer. It's human nature, and it will certainly stay that way for a good long time.

1
    The goal of YouTube is to entertain people - emotional relatability is important there. So the facecam is very practical for the goal you are trying to achieve. If your goal is to destroy something, on the other hand... relatability might not be your top priority. (And that movie is just one idiotic thing after another.)
    – Felix B.
    Commented Jun 9, 2022 at 12:43
7

For Human Factor

  1. You did not mention any robot or android capable of doing advanced fixes and maintenance. Sure, some fixes can be done autonomously, but any major "on the go" fix would require a technical crew.
  2. Even with command ships giving orders within seconds, there is a fine line between life and death (maybe more like win or lose at this point) decided in mere milliseconds. Maybe you have given a shoot order, but the target vessel is just a scientific vessel that would be better captured and interrogated. You may even end up with the cutting-edge technology your enemy was going to use on you.
  3. On the battlefield nothing goes as expected; your frigate is now a field hospital with an extraction crew. Sure, the crew may not be prepared for this exact situation, but what do you prefer: no crew, or an inadequate crew with a chance to help you?

Even with the most advanced AIs, a battleship without humanoids (be they sentient robots, androids, or even your basic humans) is no better than a scout ship. You will always need hands on deck.

2
  • I thought you had my idea here; close, so I will just add as a comment. The frigate style ships have civilian jobs doing mining operations and the like. They work doing something else day to day. If there is a war, the crew shuts down the mining gear, puts on their War Hats and powers up the war stuff.
    – Willk
    Commented Jun 8, 2022 at 16:59
  • That's also a pretty good idea; multipurpose vessels exist for a reason.
    – vivus
    Commented Jun 9, 2022 at 6:17
6

AI Have Weaknesses

Are these muffins or dogs? Would an AI successfully understand new things? Does it suffer from the "I have a hammer, so everything is a nail" style of thinking? Is AI capable of going rogue?

Never mind things like adversarial AI, which may be very good at confusing or outright fooling the frigate's AI using very simple methods. These methods or weaknesses can be bizarre, like a misplaced pixel or a missile being painted blue - something a human can clearly see is an error, but the AI does not.
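
That misplaced-pixel failure mode is a real, documented phenomenon called an adversarial example. Here is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch; the linear "classifier" and random "sensor reading" are toy stand-ins, not anyone's actual fire-control model:

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    model = torch.nn.Linear(16, 2)   # toy stand-in for a target classifier
    x = torch.randn(1, 16)           # toy stand-in for one sensor reading

    clean_pred = model(x).argmax(dim=1)   # what the model currently believes

    # FGSM: nudge every input feature by epsilon in the gradient direction
    # that most increases the loss against the model's own prediction.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), clean_pred)
    loss.backward()
    x_adv = (x + 0.5 * x.grad.sign()).detach()

    print("clean prediction:      ", clean_pred.item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())

With a large enough nudge the label usually flips, even though the perturbed input looks like noise to a human; the blue-painted missile is the physical-world version of the same trick.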

Humans and AI Support Each Other

Having humans around as a check on the AI is needed! They can work together to protect each other's blind spots.

4

Your AI is not quite good enough for the jobs asked of "frigate-style missile artillery ships".

Even today we automate what we can, and put humans in positions that require humans. In your world, the thing humans have is intelligence. You have artificial intelligence that is good enough for the types of craft you want fully automated, but not good enough for the types of craft you want to have humans. It sounds quite viable: missile launch decisions are somewhat complicated (what is the most valuable target? what is likely a decoy?) and require timing faster than the round-trip time to your control ships.
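
A toy sketch of the kind of triage that implies; every field, number and weight below is invented for illustration, but it shows why the decision has a deadline that a control-ship round trip can't meet:

    from dataclasses import dataclass

    @dataclass
    class Contact:
        name: str
        value: float          # estimated strategic worth, 0..1
        decoy_prob: float     # estimated probability this is a decoy, 0..1
        time_to_range: float  # minutes until it can engage us

    def priority(c: Contact) -> float:
        # Prefer high-value, likely-real contacts that threaten us soonest.
        return c.value * (1 - c.decoy_prob) / max(c.time_to_range, 0.1)

    contacts = [
        Contact("cruiser-sized return", 0.9, 0.4, 12.0),
        Contact("small fast mover",     0.3, 0.1,  3.0),
        Contact("tumbling debris?",     0.5, 0.8, 20.0),
    ]
    for c in sorted(contacts, key=priority, reverse=True):
        print(f"{c.name:22s} priority={priority(c):.3f}")

The hard part, of course, is that value and decoy_prob are exactly the judgement calls in question; the scoring only ranks whatever judgement supplies them.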

3

Because the craft contains a human pilot who utilizes AI systems such as resource management or advanced weapons targeting, giving you a "best of both worlds" approach: one system can back up or augment the other.

For example, if the ship's power system is disrupted a human pilot can manually repair it.

3

AI is great at handling situations that they've been trained to handle. When they encounter something radically different from anything they've ever seen before, not so much. Humans do significantly better when confronted with novel situations/problems. They may not always make the optimal decision, but they tend to at least avoid the options with the worst consequences. Say your military fleet encounters a new civilization whose cargo haulers have a profile that looks startlingly similar to the light military cruisers used by your archenemy. Making the wrong decision here could incite an all-out war. That's way too risky to trust to an algorithm. A human crew would be far more capable of using context and nuance to ascertain the truth of the situation. They might notice that the lettering printed on the tailfin is in a completely different language, spurring them to investigate before acting. The AI wouldn't have even paid attention to that; there hasn't ever been a situation where that was relevant, so the AI was never taught to consider it.

Similarly, you undoubtedly have complex treaties and formal agreements with neighboring civilizations. Situations frequently arise where these legal requirements conflict with basic mission objectives and standing orders. Law is complex, and you need more than hard machine logic to navigate such situations. A human crew can better predict not just what actions to take, but the consequences of those actions, how others will interpret their actions, etc.

In other words, look at the Spock vs. Kirk conflicts in any classic Star Trek episode. Spock represents your AI, making decisions based on logic. When situations go really pear-shaped, it's usually Kirk's borderline insane strategy (implemented over Spock's objections) that saves the day. AI doesn't come up with those sorts of ideas, you need that little bit of lunacy that only an organic mind has.

The other big benefit is that a human crew is extremely adaptable. You can retrain them fairly easily when needs and circumstances change. Moving some of your security officers into repair roles during an emergency can save your ship. If this was an AI vessel, you wouldn't have that flexibility. Your automatons wouldn't have the right parts to flexibly adapt to new roles and even if they did, reprogramming them would require extensive engineering and testing work that has to be done in advance. You can teach a human the basics of welding, first aid, or search and rescue in a couple of hours. Make changes to your rules of engagement, or sign a peace treaty with someone? A quick memo to your human crews is all it takes to implement these rule changes. No need for extensive software development, testing, and deployment. And best of all, updating a human crew's standing orders doesn't risk accidentally bricking the crew and disabling your navy.

2

Laws and safety

There is real-world pressure for laws against fully autonomous killing machines; several militaries already require that a human be able to monitor the situation and decide to attack, or not. This has multiple reasons. Basic human rights, for one: a machine could mow down a thousand children because they were marked as enemies. It could do inhuman harm to enemies, letting them needlessly suffer.

But another big one is responsibility. A human can of course do those things; humans have done those things. With killer robots like drones it is now even easier to fire a rocket at a school bus full of children: the operator is far away and often has little emotional involvement. This is intentional, to make them more effective at killing.

But at least you have someone to blame. You can take the person who pressed the button to court. You can punish them; they can serve as an example for others. Dismantling an AI will probably not feel like justice: the AI is unlikely to feel chastised and might not care about being dismantled. The AI can learn by tweaking its rules so that targets are chosen better next time, but that can happen with human intervention as well. With punishing an AI there is no justice.

That, and it makes people a whole lot less afraid if neither side has fully autonomous death machines that kill indiscriminately, without mercy or remorse. Nothing is more scary than a host of murder machines that no one can tell to stop once the decision to kill is made.

2

Are the AI capable of performing boarding actions? Questioning civilians? Bringing in high profile targets for questioning? It may well be that once the missiles start flying the human crew are a liability, but the rest of the time they're needed to carry out all the other things the ship does.

If you want to address that liability in a shooting-war situation, maybe in extreme circumstances the AI takes over, and the crew board escape shuttles or go into drug-induced comas so as to let the AI do its thing.

2

The chains of command

The modern chain of command for an effective military relies on officers being able to break their orders down and distribute them down the line. Indeed, sending generals down the field to micromanage troops is very ineffective, and is a good way to get your best officers killed.

It requires some level of creative thinking to break down a complex problem such as "let's capture this region" into thousands of "go there" and "shoot that" orders. Humans are capable of thinking beyond their programming, which can be a good and a bad thing in general, but is a great advantage when facing a new situation. Humans have a greater capacity to evaluate their own actions, which is also helpful in new situations. They also have human emotions: they can tell when an order is illegal or unreasonable.

You might read the above as a bunch of flaws for humans, but that also means you don't need to hold their hands. Humans will take care of themselves.

While a current technology AI would be perfectly able to execute orders, it's not at all clear it would be capable of thinking up these orders, or that it would be able to tell what winning a war looks like. In short, even if your ships are largely automated, a human crew is still required to give it any purpose, in real time.

But that's not all.

The laws and customs of war

A human pressing the trigger can be held responsible. The human who ordered them to press the trigger can be held responsible. And the human who ordered them on the front, and so forth. With humans, we have a clear chain of responsibility.

A machine isn't accountable. You can't try an AI for war crimes. You can't court-martial an AI for disobeying orders. So what happens when something messes up? Who is going to take the blame? The programmer, the operator, the maintainer, the commander, all of the above, someone else? Who is responsible when autonomous systems fail is a question that is still unanswered, and it's unclear that it ever will be.

But one possible answer is that if responsibility can't be clearly assigned, then AI shouldn't make decisions. The laws and customs of war were not designed with automated systems in mind because the idea of war has always been man versus man. You can't incentivise a machine to follow these rules because the machine doesn't have a family, it doesn't have self-consciousness, it isn't afraid of dying, of capture, of reprisal.

The laws and customs of war exist to protect everybody from the extremes of war. But, simply put, a machine can't be held in line. It has nothing to win, and nothing to lose. It has no reason not to follow its programming to any logical extreme, and you often can't predict what that extreme could be.

Single point of failure

One last point here.

An AI is one system. Your frigate, most likely, would be controlled by one system. This system would have a bunch of subsystems, but the control system will be solely in charge, and will not be challenged if it decides to turn against you, to commit genocide as a shortcut to victory, or to do anything else you don't want.

A human crew operates much differently. It takes more than one human to mutiny. Humans act as safeguards against each other. If one fails, others will pick up the slack or will prevent it from failing further.

You can't tell what a human is thinking any more than you can tell what an AI is thinking. But when it comes down to it, a commanding officer who fails can be stopped or replaced. When the command AI fails, nothing will stop it.

3
  • $\begingroup$ "A machine isn't accountable. You can't try an AI for war crimes." Plausible deniability is why a government would want ships to not have crew on board. $\endgroup$
    – RonJohn
    Commented Jun 10, 2022 at 14:25
  • @RonJohn That responsibility can't be clearly assigned doesn't mean no one will be held accountable, and if no one can be clearly identified as being at fault, there's always the commander-in-chief. – Commented Jun 13, 2022 at 5:51
  • $\begingroup$ "The AI decided to start the war!!" $\endgroup$
    – RonJohn
    Commented Jun 13, 2022 at 15:56
2

This is a question that comes up a lot in fiction like Star Trek and Iain Banks's Culture series. Between their slow reaction time and their inability to handle even fifteen gravities of acceleration, humans are a liability. Why would you even want one on a warship?

The key to this is "war" ship. Not battleship. When it comes down to it, war is something that humans do with other humans. We could have machines do the actual fighting, but there has to be a human at the top of the command structure somewhere.

Nobody wants machines in charge

The answer that most fiction comes up with is that nobody wants machines in charge, and nobody would let a machine decide who needs to die, for fear it would decide that they, themselves might need to die.

This isn't the only answer, but it's a good one. You could come up with some schema where there's a maximum amount of firepower that a person is allowed to be in charge of. AIs could take care of targeting and maneuvering, but a human is required to decide which objects are targets.

Machines have no sense of purpose

Machines are highly capable, but they don't actually have objectives. You could certainly GIVE a purpose to machines, but they have a hard time taking into account the considerations that humans have when pursuing those objectives.

Take, for example, exploration. You want to go to X system and just see what's there. You could tell a machine to do this, and it would take pictures and measure pressures and analyze materials, but it wouldn't stop and wonder why the plant life was blue in the southern hemisphere and green in the northern hemisphere. It wouldn't find the concentric hurricanes on the poles of a gas giant particularly fascinating.

Exception handling

When you tell a machine to go kill something, it goes and kills something. Maybe you can provide it with an exception where the target has his hands up, or is too injured to fight, or is under age. You probably won't think to tell it to watch for people who look like you, but with a scar on their face.

You can tell a machine to go fix something, but it probably won't take into account the newborn kittens that have taken up residence in the fusion tube while it was down. It'll just clean them out and restart the process.

You can tell the machine to go get something, but the machine probably won't be able to improvise if what you ask for isn't there.

Negotiation and strategic initiatives

When it comes right down to it, wars are about trying to make the enemy give up. Deciding who to kill, when and where, is only part of the equation. A human could decide to invoke a terror campaign, threatening the enemy's cultural icons. A human can decide that you need to take a surgical approach, targeting engines and weapons, or a resource restriction approach, targeting life support, food stores, and supply chains. Even if you only have one human per fleet, these decisions have to be made on a very small scale.

Plus, a human would be necessary to tell the machines when to stop killing. If a human is onboard to make these decisions, you reduce collateral damage.

Everything else

In the US Army, it's said that front-line warriors are only 1 person in 7. Admittedly, machines would improve the logistics and repair formulas a lot, but you will always need someone who decides upon priorities.

1

Humans are needed for making repairs.

Assuming the AI isn't massively more advanced than what we have today: a while back, someone asked why we can't send a robot to repair the JWST, and one of the answers basically came down to robots being too inflexible. "Wiggle two parts until they fit together" is a difficult task for a robot, but one of the easiest things for a human to do. Repairs might look like a robot displaying directions on a screen while a human actually manipulates the parts.

An AI might also lack the creativity to kludge a solution to limp back to port if critical systems are offline but parts can be cannibalized from elsewhere.

1
  • Welcome to the site quyksilver. Nice first contribution. Please take our tour and refer to the help center for guidance. Enjoy worldbuilding. – Commented Jun 13, 2022 at 2:27
0

Mutually Assured Destruction

Long ago, mankind figured out that it can't completely wipe itself out as long as it doesn't use AI. An AI-on-AI war, in contrast, can and probably would. AIs are allowed to assist in all manner of combat except weapon firing; if AI is used for weapons, the victim will activate thousands of antimatter bomb drones and it's Multi-World War 4.

0

Rethink the Definition of Human

From Human to Cyborg

To a caveman we are practically a hivemind. We can talk to someone across the globe almost in real time, and we have information about millions of people at our fingertips. What makes it so? Well... smartphones. Just as a prosthetic leg becomes "your" leg and a hearing aid becomes "your" ear, a smartphone becomes "your" telepathy organ. Which is why a lot of people already feel weird not having it or giving it away.

Now let us extrapolate a bit: the next thing will be glasses. Google is going to try again, Apple too. The question is just when we can make the technology small enough (either make batteries lighter, or chips so much more efficient that they need far less energy). Then you have an even tighter connection. The next step will be a direct connection to your optic nerve, hearing, smell, etc. Looking from 2022 into this future, many people would call these people cyborgs or a hivemind - they will still call themselves human.

From Machine to Cyborg

While humans incorporate aspects of digital processors, machines/digital processors incorporate aspects of neural nets. Not only are artificial neural networks trying to emulate the way human brains work, we have by now also tried to grow brain tissue to incorporate into a computer: https://www.biorxiv.org/content/10.1101/2021.12.02.471005v2. In this experiment real brain tissue (real neurons) learned a task in 5-10 minutes, which currently takes artificial neural networks a couple of hours. In general there is a push in AI to create application-specific integrated circuits (ASICs) for AI tasks (e.g. Apple's M1 has a neural engine core; similarly Google's Tensor chips). Why? Because simulating neural networks on general-purpose computers is much more expensive than building circuits which simply are neural networks.

(These neural networks tend to be bad at calculations, by the way - something classical computers are really good at.)

Two paradigms of computing

It starts to look like there are essentially two ways to do computing:

  1. Rule based (classical computing)
  2. Example based (machine learning/ai/humans)

The advantage of rule based computers is:

  • no mistakes
  • can be understood/explained
  • can be made extremely fast

The disadvantages of them are

  • not adaptable to new situations (new rules need to be implemented)
  • difficult/impossible to find the "correct rules" for complex tasks (e.g. image recognition -> mapping a bunch of pixel values to "cat" or "dog")
  • can be understood also means: mind can be read/hacked

The opposite is the case for example-based learning. We do not understand how a neural network works internally, and the only way to hack/trick one is by feeding it bad data (the equivalent of fake news for humans).

So if you extrapolate from here: humans are an intelligence which works purely example-based. A human with a calculator is already a hybrid. A machine which uses a neural network for image recognition and then applies deterministic logic to the labels is also a hybrid. Both humans and machines converge towards some form of hybrid - a cyborg. Interfaces are already being developed.
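
A toy contrast of the two paradigms on the same task; real machine learning fits far more than one threshold, but the shape of the difference is the same:

    # Classify a number as "big" or "small"; entirely illustrative.

    # 1. Rule-based: a human writes the rule down. Fast and explainable,
    #    but wrong the moment the world stops matching the rule.
    def rule_based(x: float) -> str:
        return "big" if x > 10 else "small"

    # 2. Example-based: the rule is never written; a threshold is
    #    fitted from labelled examples instead.
    examples = [(2, "small"), (5, "small"), (13, "big"), (20, "big")]

    def fit_threshold(data):
        smalls = [x for x, y in data if y == "small"]
        bigs = [x for x, y in data if y == "big"]
        return (max(smalls) + min(bigs)) / 2  # midpoint between the classes

    threshold = fit_threshold(examples)

    def example_based(x: float) -> str:
        return "big" if x > threshold else "small"

    print(rule_based(12), example_based(12))  # both say "big"
    print("learned threshold:", threshold)    # 9.0 -- never explicitly coded

The rule-based version states its rule and can be audited; the example-based version never states one, it absorbs it from the data - the trade-offs in the lists above, in miniature.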

The Ship of Theseus/Continuity Wins

Now humans might become cyborgs (from our perspective) but they will always call themselves humans, even though the humans in your story might be intelligence-wise indistinguishable from computers. The only difference between AI and humans is then the physical platform. And the remarkable thing about a human's physical form is: hands. Humans would be on board a spaceship because they are an intelligence with multipurpose tools called hands, which can repair things very well. That being said, legs might not be so useful in zero gravity anymore, so they might very well have four hands, like apes climbing in trees.

0

Ship's AI can aim at the target and plot courses, but humans do everything else.

If your verse could afford it technology-wise, I propose the use of particle weapons. As described here, particle weapons fire particle streams that, upon impact with the target ship, blossom into all kinds of radiation, which seriously messes with delicate electronics such as an AI's mainframe server. So if you want your ships to be reliable, you need to opt for simpler electronics, which makes human control a necessity.

Even without particle guns, humans can still do lots of things AI is simply unable to do - like, for example, fixing that wire back into place after it was torn out in the stress of the fight.

And even further, your setting might simply have technophobic international (interplanetary?) laws that deny AIs the right to autonomy and mandate that they be accompanied by humans at all times.

0

Due to the natural (and probably justified) fear of an AI uprising, international law requires that every AI be hardcoded so that it is unable to kill humans, even by accident. They can do everything else in a battle, including aiming the weapons, but you need a human onboard to actually press the button that will end another human’s life.

Once you have all that goes into keeping one human button-pusher alive in space for long periods, you might as well add a few more for social needs (to keep the button-pusher from going insane) and whatever else makes sense for your story.
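
In software terms, the hardcoded restriction could be as blunt as an interlock on the weapon-release path; a minimal sketch, with every name invented:

    class FireControl:
        def __init__(self):
            self.solution = None

        def ai_compute_solution(self, target: str) -> None:
            # The AI is allowed to do everything up to this point.
            self.solution = f"intercept course for {target}"

        def release_weapon(self, human_authorized: bool) -> str:
            if self.solution is None:
                return "no firing solution"
            if not human_authorized:
                # Hardcoded: no human consent, no launch.
                return "HOLD: human authorization required"
            return f"missile away ({self.solution})"

    fc = FireControl()
    fc.ai_compute_solution("hostile frigate")
    print(fc.release_weapon(human_authorized=False))  # HOLD
    print(fc.release_weapon(human_authorized=True))   # missile away

The AI side can be arbitrarily capable; the release path simply isn't wired to it.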

2
  • Good idea, but change "kill humans" to "use weapons", otherwise two fully automated fleets duking it out becomes feasible. – Commented Jun 10, 2022 at 18:11
  • @EmilioMBumachar … until one side puts a single human on each ship, and then the AI fleet gets wiped out without being able to fire a single shot in return.
    – StephenS
    Commented Jun 10, 2022 at 18:41
0

If your AI are not sapient, a human would be needed to make quick judgement calls. Say two nations currently have heightened tensions, and one begins conducting military exercises near the border. A human would look at that and think, "OK, they're just beating their chests and posturing," and would do nothing. An AI might look at that and, with no human to stop it in time, consider it a prelude to an invasion and make a pre-emptive strike, kicking off a war.

1
  • 1
    Although military exercises near the border often do turn into war (or "special military operations" - these days no aggressor admits to calling it "war"), it might actually be better for geopolitical stability that all sides know the AI will take counteractions if you try to play that card, be it a real threat or not. – Commented Jun 11, 2022 at 16:40
0

Stalinism

Joseph Stalin was particularly worried about treachery among his subordinates. So he figured out that having lots of watchdogs watching over each other kept everyone too busy to try a coup.

He even had two distinct police forces at supranational level keeping tabs on each other.

So, your spaceships. They have people and AI. If the people become mutinous, the AI will snitch and deal with the problem. Likewise, if the computer tries to play HAL 9000, Dave will put it to sleep. Until either side attempts betrayal, they can cooperate on missions.

-1

Because Economics.

Advanced AI is very expensive, while human-based cannon fodder is cheap.

You do not want to risk losing your limited number of expensive AIs on the front lines, as their power is far more important for the strategic parts of the war.

It's basically the same reason you don't usually put a firearm in an admiral's hands and send them to the front lines: it would be a senseless waste, given that they are much harder and more expensive to replace than regular rookies.

3
  • Cannon fodder isn't cheap when you have to raise it out of a gravity well, keep it fed, surrounded by enough oxygen, etc.
    – RonJohn
    Commented Jun 10, 2022 at 14:26
  • @RonJohn bingo.
    – Ian Kemp
    Commented Jun 11, 2022 at 11:29
  • An advanced AI also has a big mass (probably much larger than a human plus a year's worth of food, water and air - the last two are mostly recycled) - remember the original mainframes? And the factories to build them are also on the home planet, so lifting them out of the gravity well is even more expensive. Also, AI uses at least gigawatts of energy to operate (which needs huge nuclear reactors onboard, not to mention massive amounts of liquid helium for cooling, pumping equipment etc.) - that processing power and advantage does not come for free. Human life support works just fine with small solar panels. – Commented Jun 11, 2022 at 16:28
-1
  • Redundancy:

Systems get damaged. The human crew may not be damaged at the same time; if they aren't, they can fill in the functions of a damaged system long enough for repairs to be made.

  • Decision-making:

If your combat AI is not a true synthetic intelligence, then there will almost certainly be edge cases where it makes a bad decision, where the human crew would make a better one. Therefore it makes sense to allow the humans to override the AI in this case.

Of course, it can also lead to scenarios where the AI comes to a conclusion that seems completely ridiculous and impossible, but is actually the correct one. Then the crew overrides that and bad things happen.

