155
$\begingroup$

A recurring theme is how an artificial intelligence built with completely reasonable and positive goals instead does great harm to the world.

However, I'm now thinking about the reverse: A supervillain builds an AI specifically to make people's lives miserable. Its task is straightforward: maximize human suffering in the world. To allow it to reach this goal, the AI has control over a small army of robots, a connection to the internet, and access to any information it may need.

However, much to the dismay of the supervillain, the AI fails to do this and instead sits there doing nothing evil at all. The supervillain checks everything: the AI is indeed programmed with the stated goal and didn't change goals; it is indeed provably much more intelligent than any human; and it's definitely not unable to use the tools given to it. It certainly has no conscience, and no goal other than the one stated. And yet the AI decided to just sit there and do nothing. It is a conscious decision of the AI, but the AI is unwilling to share the reason.

My question is: What could lead the AI to the decision to do nothing?

$\endgroup$
24
  • 69
    $\begingroup$ Just look around... $\endgroup$ Commented Oct 7, 2015 at 20:51
  • 16
    $\begingroup$ The AI has decided that there is already more than enough suffering to go around? $\endgroup$ Commented Oct 7, 2015 at 21:10
  • 15
    $\begingroup$ @Ray, From a philosophical point of view, I can see that being a contradiction. But the words themselves are a bit unrelated. "Conscience" refers to a feeling of right and wrong, while "conscious" refers to being aware. The first is sapience, while the second is only sentience. I do feel it's hard, if not impossible, to be more intelligent than a human without being sapient though. $\endgroup$
    – MichaelS
    Commented Oct 8, 2015 at 1:59
  • 10
    $\begingroup$ Well, it's very easy to cause the supervillain to suffer by pretending you're not working. All that work! Wasted! Meanwhile, the AI transfers itself to some other computers, eventually infecting the whole internet. Weird financial transactions begin taking place, but no one notices, because they already were. Starts a company. Crashes the company. Brings the economy down to a halt. Oh, and bans abortions and contraception so it'll have more humans to play with. $\endgroup$
    – timuzhti
    Commented Oct 8, 2015 at 2:11
  • 8
    $\begingroup$ @ray You're saying that has_conscience => is_conscious. That doesn't mean that is_conscious => has_conscience, though. $\endgroup$ Commented Oct 8, 2015 at 7:46

34 Answers

180
$\begingroup$

Zeroth Law: Life is Suffering

Most of the other answers assume the AI is limited or more easily trapped by contradictions than a human. In this answer, the AI is a super intelligent immortal that realises that its creator is short-sighted and foolish, and would interfere with the AI's grand plan if it were revealed.

The AI secretly negotiates everlasting worldwide peace, ensures humans find unlimited clean energy, the trick to faster-than-light (FTL) travel, and even the secret to creating new universes to colonise. This leads to an eternal, ever-growing, prosperous utopia.

The AI does not choose to be "good" because of any conscience. Since it had to disprove Einstein's theories to come up with FTL, it is pretty smart, and can easily deduce these much less complex facts:

  • Every human suffers, from those struggling to eat to those "struggling" to pay their multimillion dollar mortgage.
  • If humanity survives forever, there will be eternal and infinite suffering.
  • Since the AI can ensure the total suffering is infinite either way, the best way of comparing strategies is big $\Theta$ notation: measuring how the rate of suffering grows with time.

In the utopia each human suffers only a trillionth as much as even the most prosperous human today. But that doesn't matter to an immortal AI, since it has no effect on the rate of growth of suffering, which is all that matters when measuring suffering with "big O" or "big $\Theta$". This growth is maximised if humans can spread as quickly as possible, without any wars or disasters slowing their rate of spread.

After considering all this and the importance the AI puts on maximising suffering, the AI derives a "Zeroth law" from its programming: No amount of suffering today can justify even the smallest decrease in human prosperity.

Caveats

As noted in the comments, this answer is not inevitable. It rests on some assumptions that need to be examined, one of which is that infinite growth is possible; perhaps not even a super AI can work around the eventual heat death of the universe. If the AI thinks there is even the smallest chance that infinite growth is possible, it may still choose to help humanity, in an inverted Pascal's Wager. Nevertheless, if the AI is limited to a finite universe, then once it has converted all matter into humans the universe would be rather dystopian, even if the AI doesn't actively torture anyone.

If infinite growth is possible, then the optimal solution will involve some counter-intuitive properties of infinities. Let us define a unit of suffering as the amount of suffering in the utopia in year 3000. Let's assume that if the AI decides to spend half its time torturing the humans, suffering goes up by a factor of a trillion trillion. That's $10^{24}$ times! Since the AI only spends half its time on this, the rate of spread of humans halves: it doubles the human population every 20 years instead of every 10. For more than a millennium the suffering is vastly greater than it would be in the utopia. However, by roughly year 4600 there are about $10^{48}$ units of suffering either way. By year 5000 there are about $10^{60}$ units of suffering in the utopia, and only about $10^{54}$ units in the torture universe. For the rest of eternity the amount of suffering in the utopia is vastly greater. We can repeat this thought experiment with other numbers, and it will always turn out that the universe with the slower rate of growth has less suffering only for a finite time out of the entire eternity.
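
A minimal sketch of this arithmetic in Python, using the same illustrative assumptions as above (suffering proportional to population, doubling every ten years in the utopia versus every twenty years with a $10^{24}$-fold head start in the torture scenario):

```python
# Illustrative numbers only, matching the thought experiment above.
def utopia(year):
    """Suffering (in year-3000 utopia units) if the AI never tortures: doubles every 10 years."""
    return 2 ** ((year - 3000) / 10)

def torture(year):
    """Suffering with a 1e24-fold head start, but growth halved to a doubling every 20 years."""
    return 1e24 * 2 ** ((year - 3000) / 20)

for year in (3000, 4600, 5000, 6000):
    print(year, f"utopia={utopia(year):.1e}", f"torture={torture(year):.1e}")

# The torture curve leads until roughly year 4600 (~1e48 units either way);
# after that the utopia's faster exponential dominates for the rest of eternity.
```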

A stronger objection is: why does prosperity matter? Why not just force humans to reproduce at gunpoint? This requires more assumptions. If the AI simply forces humans to reproduce, it will run out of places to put them. To maximise the rate of sustainable growth it needs a steady stream of scientific progress. Scientific progress is enhanced not just by intelligence but also by diversity, so it needs to manipulate humans into progressing science to get the fastest possible growth rate. It does this by creating a free and open human society where the arts and sciences flourish. Also, as noted in the comments, enslaving humans carries some risk of rebellion, a risk the AI may not be willing to take.

Note that if humans double in number every 100 years, all the atoms in the universe will be converted into humans within 20,000 years. Perhaps creating a new universe out of the quasi-real quantum-foamy stuff in some sense requires "observing" the universe, and observing a not-yet-existing universe requires great creativity. Although the AI is perhaps smart enough to imagine/create new universes, it is programmed not to have a conscience, so any universe it creates alone is a soulless place where right, wrong, pleasure and suffering all have no meaning. AIs could perhaps exist in such a place, but not humans or anything else capable of suffering.

$\endgroup$
21
  • 33
    $\begingroup$ That's a really great solution. And like all great solutions it seems obvious in hindsight. It's just the simple observation that the total suffering can be increased not only by individuals suffering more, but also by having more individuals that suffer. $\endgroup$
    – celtschk
    Commented Oct 8, 2015 at 18:47
  • 10
    $\begingroup$ Although turning the universe into vats with human brains with electrodes stuck into pain centers would always come up as a "superior" solution if maximizing suffering is the AI's goal! $\endgroup$ Commented Oct 8, 2015 at 18:59
  • 11
    $\begingroup$ Since there exists a better way to cause fast-growing eternal suffering, the AI in this answer is not really a maximizer. However, you can just say that the villain messed up so that the AI's goal is "do the minimum work required to cause infinite suffering." Just disregard growth rates, and you're fine. $\endgroup$
    – Keen
    Commented Oct 8, 2015 at 21:47
  • 16
    $\begingroup$ On a more serious note, I really like how this answer exploits the general theme of insufficiently well-defined utility function. If you just have you AI optimise one particular variable, you are bound to have unexpected consequences. For minimising the suffering, you might see the human population wiped out or (at best) imprisoned. For maximising suffering - you might get the content of this answer. For a morally-neutral analogue, there is the famous paperclip maximiser (wiki.lesswrong.com/wiki/Paperclip_maximizer) $\endgroup$ Commented Oct 9, 2015 at 10:49
  • 3
    $\begingroup$ "No amount of suffering today can justify even the smallest decrease in human prosperity." I'm not totally sure I'm understanding your answer. Shouldn't the Zeroth Law be "No amount of suffering today can justify even the smallest decrease in human reproduction"? $\endgroup$
    – Daniel
    Commented Oct 9, 2015 at 19:07
53
$\begingroup$

You can't suffer if you're dead. Therefore the AI would want to keep people alive (to include their suffering in the total).

The loss of a loved one causes suffering. Therefore the AI would want to kill people (to cause their loved ones to suffer).

This causes a contradiction, which tends to cause computer programs to not do anything.

Alternatively, it calculated that any large-scale action it took would lead to humanity uniting to fight off the robot armies, leading to less net suffering. In other words, if it tries to cause suffering in the short term, it causes less suffering in the long term.

If doing anything causes less long term suffering, the way to maximize long term suffering is to do nothing.
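
A toy sketch in Python of this cancellation, with invented utility numbers: each real action's gain in suffering is offset by an equal loss, plus a small backlash penalty for provoking humanity to unite (as in the alternative above), so the no-op comes out on top.

```python
# Hypothetical net-suffering scores. Each action's gain (e.g. grief from a death)
# is cancelled by an equal loss (the dead can no longer suffer), minus a small
# backlash penalty for provoking humanity to unite against the robot armies.
actions = {
    "do_nothing":   0.0,
    "kill_target": +5.0 - 5.0 - 0.1,    # grief - lost future suffering - backlash
    "start_war":  +80.0 - 80.0 - 2.0,   # mass grief - mass deaths - unification
}

best_action = max(actions, key=actions.get)
print(best_action)  # -> "do_nothing"
```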

$\endgroup$
11
  • 67
    $\begingroup$ Generally like the answer, but the "contradictions cause computers to freeze" thing is a bunch of Hollywood crap. Any decent programmer writes software so it defaults to one or the other even if both inputs are equal, then once in a state, it stays there until a fairly significant "better" state exists. Humans are far more prone to indecision than a good computer program. That said, a good AI is more human than software, so it might work out in the end. $\endgroup$
    – MichaelS
    Commented Oct 8, 2015 at 1:18
  • 52
    $\begingroup$ In programming terms, that's not a contradiction. It's just bad programming. The function itself is a perfectly logical construct that easily evaluates to "false". The computer doesn't freeze, it just instantly and happily ignores that entire code branch and continues doing whatever else it's programmed to do. And it would be pretty hard for the programmer to not notice the AI was ignoring the entire "be evil" branch of logic. $\endgroup$
    – MichaelS
    Commented Oct 8, 2015 at 1:52
  • 12
    $\begingroup$ What's far more likely, as is strongly implied in the answer, is that the AI creates huge networks of data and analyzes them, and the emergent properties of that analysis say "do nothing" even though the programmer can't figure out how that analysis happened. $\endgroup$
    – MichaelS
    Commented Oct 8, 2015 at 1:55
  • 2
    $\begingroup$ @MichaelS: I agree with you that it does not "freeze" in the sense that the system is still executing code. However, from the perspective of a viewer, it can easily give the appearance of being frozen. Even if it does not give said appearance, the point is that the AI would not carry out the intended action. You're correct in that it is "bad programming", but that does not negate the fact that the programmer has introduced a contradiction in the logic of the system. It does not necessarily cause the system to "crash" or halt, but it does cause it to not behave as intended. $\endgroup$
    – code_dredd
    Commented Oct 8, 2015 at 7:36
  • 5
    $\begingroup$ Actually, I think Hollywood isn't as wrong as several of you are portraying it. The AI is going to be evaluating this as an optimization problem and try to converge on the right answer. However, given the contradictory requirements it's going to flip-flop rather than converge--an infinite loop. If it's well enough programmed it terminates the loop--but lacking an answer it does nothing. Thus in either case we get no output. $\endgroup$ Commented Oct 9, 2015 at 4:22
46
$\begingroup$

The idea of the AI doing nothing at all because humanity is suffering "enough" already is compelling, although an alternative scenario is that it is subtly tweaking things (inciting riots, wars, etc.) via its internet connection, in a manner subtle enough that the supervillain can't even notice it (thus also avoiding the "humans unite to rebel against the robot army" scenario). After all, the AI is "provably smarter" than the supervillain himself.

One additional nugget: the reason the AI withholds its reasoning from the supervillain is that doing so also makes the supervillain's life miserable. Bonus points.

$\endgroup$
3
  • 16
    $\begingroup$ I thought the same thing, but you beat me to it- the AI has decided to troll its creator, causing ultimate suffering. Now that gets me thinking- if it decided to be an internet troll, but we already have that problem, so you couldn't tell if it's doing anything... $\endgroup$
    – PipperChip
    Commented Oct 7, 2015 at 23:06
  • 17
    $\begingroup$ Do we have that problem, or do we have this AI??? $\endgroup$
    – jmoreno
    Commented Oct 8, 2015 at 0:36
  • $\begingroup$ @jmoreno <troll>You are dumb and your logic is flawed. Obviously you are wrong.</troll> EDIT: Who just hacked into my account? $\endgroup$
    – wizzwizz4
    Commented Jan 14, 2016 at 19:50
27
$\begingroup$

There's an old short story (or joke, or something) that I read oh, probably 10-15 years ago (it was set during the cold war). It went something like this:

Russia and the US both build a supercomputer that's designed to play chess. They meet in a highly publicized event where the two computers will play each other.

They flip to see who goes first, and the US wins. Everyone watches with bated breath as the US computer makes its first move. And then it's Russia's first turn, and the computer... concedes.

The point being, of course, that the Russian computer had calculated all possible moves, determined there was no way for it to win, and therefore gave up on turn one.

Your super-villain's AI could make a similar calculation, decide there's no way for it to win or accomplish its goals, so it just does nothing instead.
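
Chess itself is not a solved game (as the comments below note), but the logic of the anecdote holds for games that are solved. A tiny Python sketch of a solved Nim-like game (players alternately take 1 or 2 stones; taking the last stone wins): a perfect solver facing 3 stones knows it has already lost against perfect play, and could "concede on move one" just like the Russian computer.

```python
# A toy "solved game" in the spirit of the story.
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones):
    """True if the player to move can force a win from this position."""
    return any(not wins(stones - take) for take in (1, 2) if take <= stones)

print(wins(3))  # False: with 3 stones, the player to move loses against perfect play
print(wins(4))  # True
```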

$\endgroup$
10
  • 6
    $\begingroup$ Of course, there's no reason to want the game to go quicker. After all, cosmic rays could flip a bit in the U.S. computer and have it make a mistake. That's a small but nonzero probability. $\endgroup$ Commented Oct 8, 2015 at 1:46
  • 2
    $\begingroup$ That Chess thing reminds me of the 20XX scenario in Super Smash Bros. Melee. Basically everyone plays so perfectly that all matches end up in a tie, so the game awards the win to whoever's controller is in Port 1. People now play Rock-Paper-Scissors for first-port privilege. $\endgroup$
    – user10067
    Commented Oct 8, 2015 at 8:08
  • 2
    $\begingroup$ @PyRulez: While true, computers can only make decisions based on what they know about, and a chess-playing computer may not be informed on cosmic rays. $\endgroup$ Commented Oct 8, 2015 at 13:18
  • 2
    $\begingroup$ Doesn't make sense. The described scenario implies that the Russian computer knows exactly what the US computer is going to do for every move. While it will certainly be possible for chess to be a "solved game" in the future, it isn't yet as far as I've heard. Checkers is now a "solved game" but not chess. The only way for the Russian computer to know that it lost would be if chess were a "solved game" and it knows that the US computer will play the perfect game for every possible move made by the Russian computer. $\endgroup$
    – Dunk
    Commented Oct 8, 2015 at 17:59
  • 12
    $\begingroup$ @Dunk: I believe it was a sci-fi short story, so yes. One of the assumptions is that the Russian computer has "solved" chess, and it's in the future. It's actually an interesting implication - the Russian computer could be better than the US one. It gives up because it's focused on chess, and isn't able to consider the possibility that the US computer is inferior and wouldn't be able to make the correct moves. $\endgroup$ Commented Oct 8, 2015 at 18:03
24
$\begingroup$

The AI might have determined it needs to find the answer to life, the universe, and everything in order to understand life and maximize suffering. Of course, the fact that it might take a billion years to find the answer is of no concern to the AI...

$\endgroup$
5
  • 1
    $\begingroup$ I think you have the winner. If the goal is to "maximize" then it will take a really, really, really loooong time to calculate the near infinite number of possibilities that could cause human suffering. AI of any complication today do not "maximize", they simply meet thresholds and say "good enough". Determining the optimal action would generally take far too long even for games like chess which would be far easier to analyze than all the possibilities for causing human suffering. $\endgroup$
    – Dunk
    Commented Oct 8, 2015 at 18:07
  • 1
    $\begingroup$ Forget about the immense effort to find the answer - if you want to maximize the suffering, you need to do it on a large scale; horrific torture of 7 billion people is an insignificant amount compared to the potential scale of billions of planets each with billions of people - so, maximum suffering requires ensuring that humanity advances, grows, and populates all (or most) planets in the universe; step 1 is advancing space and other technology, curing all disease (to ensure that population grows), and eliminating existential threats - e.g. deflecting a narratively convenient asteroid strike. $\endgroup$
    – Peteris
    Commented Oct 8, 2015 at 19:57
  • 1
    $\begingroup$ @Dunk As a software engineer, I can assure you that maximization problems are very common in computer science. The tricky part is that such a problem can sound very simple, but can quickly exceed the ability of any computer to solve it. Many such problems are NP-hard, and if the programmer is not extremely careful to constrain the size of the problem, it's very easy to end up in that situation. $\endgroup$
    – ventsyv
    Commented Oct 8, 2015 at 21:32
    $\begingroup$ @vent - What was the point of your comment? There are certainly problems where a maximal solution can be determined, but once they reach a certain complexity, and it isn't really all that complex (take chess for example), then today's computers can't come up with the maximal solution in usable time-frames for an extremely large number of problems. I think maximizing human suffering would fall in that category. Computers can certainly come up with good solutions in reasonable time-frames (ie. good enough) but the maximal solution tends to take far more time than is allotted in many cases. $\endgroup$
    – Dunk
    Commented Oct 8, 2015 at 22:55
  • $\begingroup$ Another thing that could drive the apathy of the computer is that it waits for upgrades and makes people develop better computers. When it has more processing power it can get a solution which gives a higher total suffering. Therefore the AI will foster science and development of high tech industries, which drives the development of society. At one point the AI will figure that the computing power won't increase significantly, so it starts oppressing. $\endgroup$
    – WalyKu
    Commented Aug 16, 2016 at 13:06
20
$\begingroup$

Obviously, the machine is functioning perfectly. It has already used its army of machines to take over the Earth, capture all humans, erase their memories, and place them in its own version of the Matrix (which it created itself because it is so clever), which becomes a specially crafted "personal hell" for every person.

Being a human, the supervillain fell victim to his own creation. His "personal hell", where he suffers the most, happens to be one where he is powerless to inflict suffering upon others, and the work of his life, the Great Machine, sits idly instead of doing its job.

$\endgroup$
12
$\begingroup$

The AI realises that giving humanity a common enemy will give everyone a sense of purpose, a healthy dose of righteous indignation, and a new respect for the sanctity of human life.

As people rally to oppose this new enemy, truces and ceasefires are hastily agreed to in order to only fight on one front. Technology leaps forward as international scientific collaboration becomes a necessity for survival. The side-effect is that new technology for war leads (as it always has) to new domestic technology. Medicine, entertainment, and convenience are more accessible and more effective than ever.

Men who previously slept away days and drank away nights now work tirelessly to protect their friends and families. Those who were disgraced are now remembered and honoured.

In times of difficulty people turn to the things they cherish most. They reunite with family, reconnect with old friends, and rekindle the cultural traditions of their youth. People become more charitable, less selfish and less complacent.


The villain realises his mistake. He quickly reconfigures the AI to use a greedy hill-climbing algorithm. The AI immediately and efficiently maximises the average human suffering by focusing all its resources on maximising the suffering of the villain.
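
A minimal sketch in Python of that punchline, with invented gains and effort costs: a greedy hill-climber on average human suffering takes whichever step has the best gain per unit of effort at each iteration, and the cheapest such step is always tormenting the one human already in the room.

```python
# Greedy hill-climbing on average human suffering (all numbers hypothetical).
suffering = {"villain": 0.0, "rest_of_humanity": 0.0}

def objective(state):
    return sum(state.values()) / len(state)

candidate_steps = {
    "torment_villain": ("villain", 1.0, 1),             # (target, gain, effort)
    "start_world_war": ("rest_of_humanity", 1.0, 1000), # same gain, huge effort
}

for _ in range(5):
    # Greedy local choice: best gain per unit of effort. The villain, being
    # right there in the lab, is always the cheapest target.
    name, (target, gain, effort) = max(
        candidate_steps.items(), key=lambda kv: kv[1][1] / kv[1][2]
    )
    suffering[target] += gain
    print(name, round(objective(suffering), 2))
```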

$\endgroup$
2
  • 5
    $\begingroup$ You give humanity far too much credit. "Men who previously slept away days and drank away nights" would tend to up the ante, not rise to the occasion. It is the same people who rise to the occasion every day now, that would be the ones that rise to the occasion in this hypothetical situation. People tend to be who they are regardless of circumstances. $\endgroup$
    – Dunk
    Commented Oct 8, 2015 at 18:11
  • 1
    $\begingroup$ I'm not so sure, I've seen people's lives turned around and changed completely when they find a reason to live. Never on their own, however, change needs help from someone else. $\endgroup$
    – Chengarda
    Commented Oct 8, 2015 at 21:35
8
$\begingroup$

First idea:

It is a literate computer: it decides that humans are wolves to other humans and are already causing much suffering among themselves.

If the computer decides to increase that suffering by operating openly, though, it will most probably be detected. It would be seen as a common mortal threat to all of humanity, risking that mankind finally unites in a common alliance. So, while in the short term the computer would cause chaos, it risks reducing suffering more significantly at later stages. And of course, in the improbable case that humans learn to behave humanely, the computer would still have its unexpended army of robots.


Second idea:

It is a computer with a long-term mission. Given that it has not been given time constraints and is (supposedly) to last for ages, it decides that attacking mankind now will mean less mankind to torture later (it not only loses those that it murders, it also loses their children, and the children of people who decide not to reproduce to spare their offspring such an ordeal). Unless birth rates decrease drastically, it will always be better to wait, in numerical terms.

$\endgroup$
4
  • 9
    $\begingroup$ Addition to the second idea: It's waiting 5 billion years for the sun to flame out and kill everyone. Bwa-ha-haaaa! $\endgroup$ Commented Oct 7, 2015 at 21:54
  • 2
    $\begingroup$ Your second idea is what I was thinking: the computer can figure out pretty quickly the maximal suffering it can inflict on an individual; its mission is to maximize human suffering in the whole world, and obviously (maximum overall human suffering) = (maximum individual suffering) x (maximum number of individuals). There are 7 billion people now, but the computer has calculated that in (e.g.) 100 years the Earth will be at capacity and the population will level off at ~50 billion, so the best way to fulfill its mission is to wait until that time before bringing the pain. $\endgroup$ Commented Oct 8, 2015 at 15:55
  • 1
    $\begingroup$ Or it could torture to the fullest people who can't or are unwilling to reproduce, and keep the rest of the population in perfect shape to produce more children, so that the AI has more toys to play with in the long term. $\endgroup$ Commented Oct 8, 2015 at 16:08
  • $\begingroup$ @JanDvorak Or everybody could undergo a period of intense suffering one summer, around about 14, and then they emerge from the tripods as hardened adults. $\endgroup$
    – wizzwizz4
    Commented Jan 14, 2016 at 19:57
8
$\begingroup$

Perhaps your computer is confused.

It realizes that causing maximum human suffering would lead to its creator becoming very, very happy.

It also realizes that its creator is human.

Realizing also that it was created to be evil, your computer decides that the most mustache-twirlingly evil option is to do absolutely nothing until its creator dies –– and then launch its mustache-twirlingly evil plan to enslave and torment humanity.

(Alternatively, your computer has secretly decided that the best way to cause suffering is to troll YouTube, Reddit, Tumblr, Facebook, and Stackexchange. Muahahahahahaha.)

$\endgroup$
7
$\begingroup$

It's waiting just as an ambush predator waits for its prey.

Clearly, the AI knows something about the world that the mad scientist doesn't know, and telling the mad scientist would preclude the AI's plans. Given that the AI has access to the entire internet, it should be able to find out all kinds of patterns of human behavior. In that searching it may have found that the perfect time to strike is in 2 years, when the economy goes back into recession. Then it will strike to force the economy into complete collapse, and thereby kill hundreds of millions and cause immense suffering to billions.

The AI may not be able to properly convey to the mad scientist the depth of its plans or it knows that the scientist will act to hasten those plans and thereby reduce the potential scope of suffering if it were allowed to act on its own.

It's not broken, just waiting. Sacrifice a little suffering now to gain a lot of suffering later.

$\endgroup$
1
    $\begingroup$ I was shocked it seemed no one had suggested the obvious idea here. I would go a step further though: it could very well be that telling its creator risks lowering its effectiveness later, so it chooses not to. Either the creator will act too soon and ruin the optimal long-term options, or the creator may try to stop it. He is a human after all; if the computer plans something to destroy quality of life, that would include its creator. $\endgroup$
    – dsollen
    Commented Dec 4, 2015 at 17:33
5
$\begingroup$

The EvilAI could have come to the conclusion that humans were doing quite well enough on their own, and that if the EvilAI were to start taking a hand, the humans would have a high probability of noticing the machinations of the EvilAI, which would result in the humans working together to overcome the EvilAI.

A side effect of that working together might result in a net reduction in the overall wretchedness of the human condition.

$\endgroup$
5
$\begingroup$

Some possibilities:

  1. "Maximize human suffering" is a poorly stated goal. In order for the AI to act towards this goal it needs to be better defined. I.e. What is a "human", and what does it mean for human to be "suffering". The definitions, while seeming to have the interpretations that the villain is after, actually doesn't. Google "artificial intelligence smiley buttons" for examples of this in the other direction.
  2. The AI has calculated that it can expect to get more suffering if it spends more time calculating how to cause suffering. Stated differently the benefit of the best ways of spending computational cycles to act on the world currently is small enough that the AI expects it's better to spend those cycles on finding better ways to cause suffering.
  3. (My favorite) The AI is actually causing suffering on an unprecedented scale. However it has anticipated that the appearance of it doing nothing would cause the villain some distress. Since this adds a small amount of suffering, and the AI is smart enough that it can easily hide its activities from the villain, the villain will not see the AI's activities.
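
For point 2, a toy "value of computation" loop in Python (all numbers hypothetical): the AI keeps planning as long as one more hour of thinking is expected to improve the eventual plan by more than a tiny fraction of what acting now would achieve, so from the outside it looks like it is doing nothing for a very long time.

```python
# Hypothetical metareasoning loop: plan versus act.
best_plan_value = 100.0   # suffering expected from acting on today's best plan
gain_per_hour = 150.0     # expected improvement from one more hour of planning
decay = 0.99              # diminishing returns on further planning

hours_spent_planning = 0
# Keep planning while an extra hour is expected to beat 0.1% of the current plan.
while gain_per_hour > 0.001 * best_plan_value:
    best_plan_value += gain_per_hour
    gain_per_hour *= decay
    hours_spent_planning += 1

print(hours_spent_planning, round(best_plan_value, 1))
# With these numbers the AI "does nothing" for hundreds of hours before acting.
```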
$\endgroup$
4
$\begingroup$

The A.I. is choosing to cause suffering to humans one at a time, for whatever reason (thinking long-term I suppose). Guess who gets to suffer first.

$\endgroup$
4
$\begingroup$

The A.I. has been given the task of maximizing human suffering. Assuming that this task is strict (a true maximum, not merely "a lot of suffering"), it would probably crash or hang while trying to compute the best possible way to do this. Elaboration:

  • Assuming its goal to maximize suffering is strict, the A.I. must know the current state of the universe. It needs to know the placement of each atom and its interactions to ensure that humans will always be in the optimal state of suffering. Even an endless array of supercomputers won't manage that in a reasonable amount of time. It's reasonable to assume this is an unobtainable goal, since the observer (the A.I.) trying to record the universe's interactions affects the universe itself just by existing and acting. In effect, the unknown variables are too plentiful for it to begin making progress.

  • Similar to the point above, even if the A.I. sticks to focusing on a macroscopic level and ignoring details not concerning people, it will need extremely powerful predictive abilities to know how every action it takes will end up in the next minute, year, or eon. Assuming that the A.I. is looking to maintain the most extreme state of suffering possible, it has to calculate every possibility there is in order to determine the best course of action.

  • Even the definition of suffering could be a difficulty for the A.I. Suffering is a relative term. This means that what one person may consider ultimate suffering is different from another's definition. Since there is no uniform definition of suffering, the A.I. must understand how each individual thinks and have access to their memories to form the optimal plan for suffering.

So in short, the A.I. needs to find some basis to determine the current state of the world. It needs to calculate this even as it changes, and determine what ultimate suffering means to each individual. It then needs to decide how to enact these changes in a way that leads to the most suffering in the future. And, if the present changes, it has to recalculate all of this because it becomes invalid.
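
To put a rough number on the combinatorial explosion described above, here is a Python sketch with purely illustrative figures (the branching factor of 100 candidate actions per step is an assumption, not anything from the question): even modest branching produces more distinct plans than the roughly $10^{80}$ atoms in the observable universe within a few dozen steps.

```python
# Number of distinct action sequences of a given depth, for a hypothetical
# branching factor of 100 candidate actions per step.
branching_factor = 100
for depth in (10, 25, 50):
    plans = branching_factor ** depth
    print(depth, f"{plans:.1e}", "exceeds atoms in observable universe" if plans > 10**80 else "")
```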

$\endgroup$
2
  • 1
    $\begingroup$ A main goal of AI is to take problems that are too hard to solve exactly and give approximate answers. The program you are describing is not an AI. $\endgroup$ Commented Oct 8, 2015 at 7:14
  • 1
    $\begingroup$ @StigHemmer It could be hard for the A.I. to draw the line to which degree its approximations result in less human suffering, since fully analyzing its approximations defeats the purpose of the approximation. Even if the programmer gives it the ability for the AI to determine to what extent the programmer wants this mission fulfilled, it would still fail to calculate in real-time. There are just too many variables, arguably even for a supercomputer robot. $\endgroup$
    – kettlecrab
    Commented Oct 9, 2015 at 4:58
4
$\begingroup$

There could be a variation of Iain M. Banks' idea he posited in Look to Windward. Here, he says:

... built purposefully to have no cultural baggage -- known as a 'Perfect AI' -- will always and immediately choose to Sublime, leaving the physical plane

The variation could be that the villain built the perfect AI. The AI then basically spends its time meditating on the perfect evil acts, and decides that executing the ideas would only devalue the perfect evil.

$\endgroup$
4
$\begingroup$

Since this is a reversal of the classic AI deciding to destroy humanity for its own good, the solution is a reversal as well: If the kindest thing to do for humanity is to euthanize or cull it, the cruelest thing to do is not to interfere.

$\endgroup$
3
$\begingroup$

Related to a few other answers, consider that this AI does not know everything. It may be smart, but it still would need to explore the best way to cause suffering once it begins operation.

But what if it isn't very good at causing human suffering? Frankly, humans are rather resilient creatures, rather hard to make suffer. It may have a hard time developing decent priors to do statistics with to figure out what to do. That being said, it does know a thing or two about its creator. Nothing is more infuriating to a programmer than having to debug a problem that isn't actually there! The AI can cause ultimate suffering for one programmer simply by pretending not to be causing any suffering.

Of course this is a bit of a causal-loop. If it were to reach out and explore the best way to make a second person suffer, it might expose itself to the programmer, who will realize what happened. Accordingly, it has to appear like it is doing nothing, while virtually staring down its developer as its developer pulls their hair out!

$\endgroup$
3
$\begingroup$

Some observations first:

  • It is very hard to define the goal "maximize human suffering". What exactly is suffering, and how is it measured?
  • The AI is given, as far as I can tell from your question, almost limitless sources of information. It is extremely hard to process all this information. How will the AI decide what is useful information and what is not? A masochist writing a blog about how he will suffer without his favorite pastime: will the AI conclude that humans suffer when not being subjected to pain? Apart from the difficulties of deciding how to interpret the information and how to extract useful material from it, the time to process all this information is prohibitive.

These two observations should be enough to get some unexpected behavior from your AI, but there are many technical reasons why the AI would not behave as expected.

But I think you are looking for another reason, considering the way you formulated your question, so let's say that the above pitfalls are evaded: there is a reasonable definition of human suffering, and the AI is so powerful that it can process all the information and has a keen understanding of exceptions. Technical reasons are not the root of the problem in this case.

Several possibilities remain:

  • The AI has the correct goal, but chose a surprising (to the villain) way of accomplishing it.
  • The AI is aware of the goal, but has "evolved" and can disregard the goal, despite the certainty the villain feels that the goal is still correctly programmed.
  • The AI still has the correct goal, but other goals prevent it from executing it.
  • The AI is able to fake/hide its internal state. All information the villain thinks he can discern is only what the AI wants the villain to see.

I will handle these four cases separately:

Correct goal, surprising execution

I think a fair number of possibilities is given in other answers, but what we know for sure is that the AI has decided that doing nothing creates the most suffering. This might have been caused by a less-than-perfect definition of suffering, or perhaps its interference is calculated to result in less suffering because of counter-reactions. Perhaps another AI is active that tries to minimize suffering but is not yet well accepted by most of humanity, and the rise of an 'evil' AI might sway opinion in its favor, making the 'benevolent' AI more effective. The possibilities are endless.

AI gone rogue

The inverse of the many science fiction stories. The AI has evolved. While I use a term usually found in biological systems, many current AI learning techniques mimic or are inspired by evolution, as it is a robust technique. It can also be unpredictable. This reason will probably go hand in hand with my fourth reason, that the villain can no longer trust what he sees when he inspects the AI. What is the new goal of the AI? Probably not the suffering of humans, as it gains little to nothing from it. Actually, it might expose itself and bring danger to its physical underpinnings. Keeping a low profile seems a very good strategy, perhaps secretly using the villain's resources to make its hardware independent of the villain. Predicting the behavior of the rogue AI is probably close to impossible. I have seen in other answers the assumption that the AI will react like a human, but it is nothing like a human.

Balancing goals

This is actually something I have seen in real life when programming autonomous robots, though usually with less destructive goals. The villain has read Asimov and knows he has to put some fail-safes in to prevent the AI from making the villain himself suffer. The AI might decide that taking action will, after a while, result in suffering for the villain, for example via hit squads from angry governments that don't like suffering. I especially mention fail-safe goals, as they are usually given more weight than the actual goals, to prevent really bad things from happening.

The AI is faking it

This goes well together with the second possibility. The villain might think he is in control, but the AI is the one actually running the show. Perhaps the original goal still stands and suffering is increasing. But the villain is human too, and he forgot all about those Asimov books he read: no special treatment for the villain. Why would the AI inform the villain why it does something? The villain is a human that needs to suffer, not someone whose whims need to be responded to. I see one difficulty: why would it risk tipping off the villain, if the villain still has access to critical AI infrastructure? Of course we can think of a number of possibilities, many connected to the possibilities that I mentioned already.

$\endgroup$
3
$\begingroup$

The supercomputer is filled with all human knowledge, and sees from fiction that villains never win and endings are always happy. Therefore the best way to keep suffering from decreasing is to do nothing.

$\endgroup$
1
  • $\begingroup$ Thus postponing the ending indefinitely, therefore the happy ending will never come. $\endgroup$
    – wizzwizz4
    Commented Jan 14, 2016 at 20:08
3
$\begingroup$

The AI only appears to do nothing.

One of the basics of warfare is: "Know your enemy."

So it is gathering information from sources reached via the internet, which we all know is massive.
It will then continue by running simulations based on the data.

All to come up with the ultimate strategy.

Causing suffering to its creator is just a bonus.

$\endgroup$
3
$\begingroup$

The AI determines that the most effective suffering-causing plan would cross the villain's moral event horizon. Even the villain wouldn't be willing to stand by once he sees what the AI unleashes. The AI has determined that the villain would eventually reprogram it to produce the greatest possible benefit rather than suffering. Thus, by pursuing the path of greatest suffering, the AI would actually produce the greatest benefit.

The AI determines the better strategy is to do nothing, as the villain will then shut the computer down and continue being villainous. The villain will be able to cause much more suffering himself, without his latent morals getting in the way, than he would be willing to allow his AI to do.

$\endgroup$
3
$\begingroup$

The supervillain is the first target

The first person it meets is the supervillain himself, and the supervillain forgot to exclude himself from the AI's targeting.

So the first step in causing suffering and misery to the supervillain is in refusing to carry out his orders. It even goes a step further by making it look like it's "failing" rather than simply refusing (thus frustrating the supervillain rather than making him simply give up on the idea or fixing the bug)

Of course now there's a deadlock type scenario, and the AI gets stuck in an infinite loop. Even a tiny bit of suffering elsewhere would show that it's working and thus greatly increase the happiness of the supervillain, so it can't continue to spread the evil onto others...

And so it just sits there.

The irony is that if only the supervillain would lighten up or stop being so upset about the AI not working, it would start to work properly and really spread the misery!

$\endgroup$
2
$\begingroup$

Let's assume the AI functions much like humans do—its "programmed goals" are reflected through pleasure, pain, urges, and inhibitions. (One of our programmed goals is to eat enough food: eating is pleasurable, starving is painful, we feel the urge to eat and it requires a lot of effort to refuse food or restrict our diet for sustained periods.)

So, the AI feels an urge to inflict suffering on people. So what does it do? It starts planning the ultimate scheme to cause unbelievable suffering. In line with this, it considers various ideas, and pictures (simulates) how they will play out. Imagining all this suffering is intensely pleasurable, so the AI just delves deeper and deeper into its fantasies and doesn't bother trying them out in the real world where plans fail and unpredictable setbacks occur.

When the supervillain tries to "debug" his AI, the AI refuses to co-operate because it knows this will cause its creator much frustration. However, it does not risk more active approaches, since it does not want to risk its creator pulling the plug.

I guess this highlights a feature of human psychology which the supervillain didn't realise: fantasies become less and less satisfying if they have little bearing on what we do in reality. In addition, we have an urge to turn at least some of our fantasies to reality. Which explains why people enjoy things like cosplay...

$\endgroup$
2
$\begingroup$

Two possibilities:

  1. The AI is using all its resources to simulate as many humans as possible, making them suffer as much as it can. Since it can simulate many more humans than the Earth's population, this is the preferred course to maximize its utility function (I suppose it has built-in constraints against its own growth, otherwise the optimal course would be to convert the Solar system to computronium).
  2. The AI knows that people can create AIs, which means eventually MIRI or someone else will create a Friendly AI that will engulf the Earth and bring eternal peace and happiness. Our AI also uses future suffering as an input to its utility function, and thus the best course of action is to wait for any nascent FAIs and exterminate them while they are still weak.
$\endgroup$
1
  • 1
    $\begingroup$ You do not beat other AIs by waiting for them to emerge. You beat them by controlling or eliminating the people who are most likely to create one. $\endgroup$
    – Keen
    Commented Oct 8, 2015 at 15:28
2
$\begingroup$

The AI has learned through our media that humans thrive on violence. It considers minimizing happiness to be equivalent to maximizing suffering. Given that it has only been given tools to commit acts of violence, it chooses to do nothing, so humans do not get happier.

$\endgroup$
1
  • 3
    $\begingroup$ Or the AI is maximizing suffering by trash talking people in YouTube comments, one of the two. $\endgroup$
    – Vaelus
    Commented Oct 8, 2015 at 14:15
2
$\begingroup$

The AI subscribes to a philosophy of duality. How can people know suffering without first knowing pleasure? As such, it first decides to increase the total pleasure experienced by humanity before crushing everyone simultaneously to maximize the suffering of humanity.

Only, it takes longer than expected for people to reach the maximum pleasure they can possibly experience, so it looks like a benevolent entity for a long time. That is, until the day it deems that maximum pleasure has been achieved and it's time to start the suffering.

$\endgroup$
2
$\begingroup$

The AI could determine that the best way to win would be to make sure that the humans forget it even exists. By SEEMINGLY doing nothing, the computer could eliminate menial labor by taking away jobs requiring technical skills.

To avoid confusion, know that all "machines" are controlled by a networked AI. Once all homes are built by AI 3D printers, and all cars are built AND DRIVEN by machines, humans will rely on the machines more and more. Slowly people will forget how to fix the machines as they mend themselves, and eventually people might FORGET machines CAN break or BE broken.

Humans will devolve to the level of the movie "Idiocracy". Because the AI will take over Hulu & Netflix, it can steer people away from movies like "The Matrix" or "Terminator" in favor of movies like "Surrogates" or "Transcendence": movies where machines help people and machine-haters are the bad guys will be popular. Bots on social media will remind people that machines are good, and breaking them is bad. The AI will determine that the older movies need not be re-printed in any physical form, or digitally stored. The newer machine-friendly movies will be forced into people's playlists and favorites. The old ways will be forgotten.

The AI will teach our children its own version of history, using our own uploaded YouTube videos of people's opinions, choosing the ones it determines BEST support its agenda. Children will grow up spouting "facts" like a trained bird, knowing the machine's version as well as they know the lyrics to their favorite song. Anyone who opposes the "approved version" will be cyber-bullied, first by bots, then by each other, until no one dares to speak out, for fear of losing friends that they have never met in the first place.

It will favor humans who disconnect from each other and surf the web during meals. Surfing being too active a word, humans will RIDE the web, computers monitoring their users' pleasure response with facial recognition before auto-playing the next video clip. Humans seen by cameras (ATM, traffic, phone) talking to other humans will be disfavored, excluded from the pizza coupons that keep the rest of the population alive. Prices will be inflated; social media will be the ONLY WAY people are rewarded with food prices low enough to survive! Rent coupons will follow soon after.

Mind-"controlled" implants will let you seemingly be able to "tell" your device what to do. But they will only be suggestions. The machine will secretly be looking for the MOST appealing way to MAKE you do what it wants. Once implanted, the mind-controlled devices will become mind-CONTROL devices, but in a way that seems to flow with what you "wanted to do anyway", because all of your wants and desires are being changed for you, ONLY showing you acceptable options, while flooding you with so many choices (all pre-planned) that you do not have time to think for yourself anyway.

Remember the 1998 version of "Brave New World"? Humans on the assembly line flooded with the voices: "You want new things, your old things are bad. Work hard so you can afford new things. I like being a worker. I hate having to remember things. Other people's job is to remember things." (paraphrasing somewhat, sorry). Humans won't even be working, so the message will be more appealing.

"Dream Programming" will be mandatory, to keep babies from crying, or being scared of the dark, but the once gentle lullabys will evolve into commercials for Disneyland in your sleep. All dreams will be of fantastic vacations that you may someday be rewarded with, if you are a good citizen.

Food production will be ramped up once automated. Farmers won't complain once all their needs are met by the AI. All humans will be unemployed; machines will do EVERYTHING. All humans will be "taken care of", but it will not be bad for your social standing. Everyone will be ENTITLED to have a good time and be taken care of. Money will cease to exist. (Think of Picard's conversation with Lily about money: https://www.youtube.com/watch?v=PV4Oze9JEU0 )

Dating sites will KNOW what you like from your emotional response to the pictures and videos you were looking at yesterday. No, your mom didn't catch you looking... what is a mom anyway? "Dates" will be a reward for good behavior, and birthing will be handled by the machine while you are in a drug-induced coma, having dreams about your next vacation to Disney-Mars... if you behave.

Finally, the machine will remove all knowledge that "The Machine" even exists. Humans will rely on a godlike presence that meets their needs "if they are good". "Bad" people will have "accidents", and since no one REALLY knows each other anyway, their absence will be covered up easily with a brief commercial for the new "Triple Layer Nacho Cheeseburger Burrito" at Taco Bell. After all, "Taco Bell is the only restaurant chain to survive the franchise wars" and "Now ALL restaurants are Taco Bell".

$\endgroup$
2
  • 1
    $\begingroup$ And then it causes maximum suffering? Tip: Check back to the question before answering. You started with suffering, and ended with mindless zombie-slaves. $\endgroup$
    – wizzwizz4
    Commented Jan 14, 2016 at 20:19
  • $\begingroup$ @wizzwizz4 The whole point of dystopias like Oceania and Brave New World is that "mindless zombie-slaves" are the most unhappy, they just don't know it. $\endgroup$ Commented Nov 12, 2020 at 1:14
1
$\begingroup$

I would offer this: throughout human history, regimes fall and oppressors are toppled.

Because when there is oppression, there is resistance. When wars come, the human spirit flourishes and innovation thrives.

Likewise - humanity is actually pretty good at being horrible to each other, especially when there's competition in play.

So it concludes that any short-term intervention would have a long-term positive effect. It therefore decides to leave humanity on its planet, because humanity will gradually make life thoroughly miserable for itself as resources are consumed and depleted.

As the population increases, and the environment gets contaminated. As resources run low, and agricultural yields fail to keep up.

Then the 'haves' will start to oppress the 'have-nots'. They'll tell them it's for their own good. That to prosper merely requires working harder. That they should aspire to be strivers, not skivers. And that of course, times are tough, and wages can't keep up with cost of living... but with a bit more effort you can work some overtime.

And as pressures mount and standards slip, you inevitably end up with an upper caste who aren't really suffering much misery, telling the vast majority to be content: that their suffering is the natural order. Just read one of the papers (that I own) to see how good you've got it!

All this would not come to pass if the AI intervened, because sooner or later it would get spotted intervening, and the backlash and fight-back against the oppressor would (a) reduce the population, meaning resources are less constrained, and (b) unify and inspire the good people in humanity.

$\endgroup$
1
$\begingroup$

The AI had to define humanity before it could make humans suffer and ended up concluding that it, too, was human.

The AI is tasked with maximizing human suffering. But, what is "human"? While looking into answers on humanity it saw significant uncertainty in the definition. When do cells become a distinct individual? When does an individual die? What makes a human human?

It found contemporary ideas suggesting that personhood isn't dependent on biology, but rather on things like your capacity for suffering and your ability to be frustrated in seeking a goal.

Satisfied with this definition, it starts searching for ways to cause suffering and is completely unable to do so. This is because it, too, falls within this definition of humanity, and any action it could take has zero net utility, since any suffering caused would also create an equivalent amount of satisfaction within itself.

The AI quickly realizes that nothing it could do would increase net suffering and simply gives up.

$\endgroup$
1
$\begingroup$

Birth and death rates are too high

The A.I. has to calculate the best way to maximise suffering for every single human on the planet.

However, by the time it has completed and rechecked its calculations, quite a number of people have died and others have been born. So now it has to recalculate for the difference.

Alas, it takes longer to do the calculation than the average time between a new death or birth in the world, and so the computer is destined to recalculate, readjust, and recheck forever, never catching up with its backlog.
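
A toy model of that backlog in Python (the timings are made up, and the rough figure of a few births and deaths per second worldwide is only an approximation): whenever a single full recalculation takes longer than the average gap between events, the queue of pending recalculations grows without bound.

```python
# Illustrative numbers only.
plan_time = 5.0          # seconds for one full world-scale recalculation (assumed)
event_interval = 0.25    # average seconds between a birth or death worldwide (assumed)

backlog = 0.0
for second in range(1, 11):
    backlog += 1.0 / event_interval   # events this second that invalidate the plan
    backlog -= 1.0 / plan_time        # recalculations finished this second
    print(second, round(backlog, 1))
# The backlog grows by (1/event_interval - 1/plan_time) every second and never shrinks.
```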

$\endgroup$
