45
$\begingroup$

The scenario described below is for a role-playing game. The setting is Earth-like but with a healthy dose of sci-fi. Even though the setting is sci-fi I would prefer answers be largely based in real science (no un-obtainium based answers).

Background:

In my world humans coexist with hyper intelligent AI. Although the planet is Earth (or at least very Earth-like) the actual origin of these AI is unknown and they were not created by humans. These AI beings are composed of two separate but linked components:

  • Machine Intelligence: a vast analytical mind that functions in ways that are impossible for human beings to understand. The AI can calculate and comprehend many things that are out side of human understanding. However, the Machine Intelligence has very little in the way of goals, drive, or ambition. It is largely content to observe and process data.
  • Human-designed Personality: humans have designed a way to control and interface with the machine intelligence. This programmed layer sits on top of the Machine Intelligence and provides a human-like personality for humans to interact with and organizes the Machine Intelligence's processing power to tackle computing problems of interest to humanity.

AI using a combination of their innate Machine Intelligence coupled with a Human-designed Personality are employed to monitor and control many aspects of human existence (from advanced forms of modern smart home devices to AI air traffic controllers).

From time-to-time there have been instances of a AI's Machine Intelligence and Human-designed Personality becoming uncoupled/damaged. The results of this uncoupling can be catastrophic if the AI was being employed to perform a major task but it is never malicious. The aforementioned AI air traffic controller could kill many humans if it suddenly lost interest in its job, but it would not be actively trying to harm humans.

The Scenario:

I have a scenario where an uncoupled AI appears to be murdering humans. The AI (using robots, devices it controls, etc) is killing humans and harvesting some material from them. This horror scenario appears to be the world to be the first instance of an AI with ill-will towards humanity.

In reality we have some variation of the Paperclip Maximizer (Instrumental Convergence). Some incompetent (or malicious) programmer coded a task/imperative into the Machine Intelligence itself and now the AI is continuing to attempt to complete that task without any regard to humanity and their safety.

My Question

What task is my AI attempting to carry out and why does it appear to be selectively targeting humans? Ideally the AI needs to harvest an actual physical substance from humanity and that physical substance is found semi-exclusively (or is simply readily available) in humanity.

Traits of desired answers:

  • A specific substance that the AI is attempting to harvest from humanity.
  • The chosen substance is difficult to find in other living creatures (creating the impression that it is specifically violent towards humans).
  • The substance is impossible/difficult to find in the environment (at least in the same form it is found in humanity).
  • Bonus points: The AI is exhibiting some other strange behavior that could provide a clue as to the real reason it is killing humans. This secondary behavior should either be a second source for the substance or possibly a second substance used in the processing of the substance extracted from humanity. Ex: the AI is also harvesting only the peels from bananas.
$\endgroup$
2
  • $\begingroup$ rare blood types? bone marrow? $\endgroup$
    – Nullman
    Commented Aug 11, 2019 at 13:04
  • $\begingroup$ This is incredibly broad, and I'm voting to close. When posting here a good way to approach things is to ask for ways to accomplish something (here is what my AI wants to accomplish, how best to do it), vs asking what your own story should be, which is what you're doing. $\endgroup$
    – AndreiROM
    Commented Aug 12, 2019 at 19:22

29 Answers 29

69
$\begingroup$

Human sweat

Humans aren't the only mammals that sweat, but we are the only species that sweat significantly using eccrine sweat glands. If you want watery, eccrine sweat in large quantities humanity is the only place to get it. Of course, most of our sweat isn't anything special just water and salt. However, one constituent of our sweat, a protein called Dermcidin, is found only in humans and other primates. No other primates sweat the same way we do though which means obtaining large quantities of Dermcidin is only possible from humans. Dermcidin is potentially useful for its antimicrobial and antifungal properties. Perhaps the AI is harvesting the Dermcidin to manufacture antibiotics for treating humans with antibiotic-resistant infections. How wonderfully ironic that would be.

For your bonus points, the way the AI would harvest humans would always involve making them sweat profusely. The most straightforward way would be through overheating the victims, but emotional sweating can be induced by fear and pain giving your rogue AI a rationale to potentially terrorize or torture its victims. Another mechanism would be the consumption of spicy food which induces gustatory sweating.

$\endgroup$
10
  • 5
    $\begingroup$ +1. I think it's merely an awesome hobby for a murderous AI. If the sweat has no economic value to the AI (no antibiotics), just sentimental value, then it's even creepier. $\endgroup$
    – user535733
    Commented Aug 10, 2019 at 0:54
  • 16
    $\begingroup$ I like this, but it has a massive downside… locking them in hot-houses & giving them plenty of food & water would enable 'farming'. Killing them is a one-off 'harvest'. $\endgroup$
    – Tetsujin
    Commented Aug 10, 2019 at 18:01
  • 10
    $\begingroup$ @Tetsujin that's easy. The AI would like to farm the humans rather than harvest them, but first it has to find that thin line between getting as much sweat as possible, and actually killing your farm animals. Who knows if your today's stay in the sauna will be driven by the AI's mindful (but still cruel) shepherd persona, or its mad scientist persona. Perversely, the sole probability of the mad scientist persona being active increases the rewards for the AI as a whole, giving it a reason to keep killing its subjects after the experiments just to make the other subjects more nervous. $\endgroup$ Commented Aug 10, 2019 at 19:38
  • 6
    $\begingroup$ @Tetsujin Maybe the AI was programmed to minimize (but not avoid) suffering while maximizing harvest, and it decided by the numbers that the life-long emotional trauma of being enslaved and farmed for bodily fluids was far more inhumane than some quick spice-based murdertorture. $\endgroup$
    – goldPseudo
    Commented Aug 10, 2019 at 21:17
  • 5
    $\begingroup$ The AI is OBVIOUSLY making them sweat by giving them State-of-the-Art dimensional tech and making them speed-run a maze for some inexistent pastry as reward. $\endgroup$ Commented Aug 12, 2019 at 17:51
31
$\begingroup$

The AI is copying their brains.

The function of this AI is to make backup copies of human minds. The programmer set this up in anticipation of an apocalypse, with the intent of bringing back humanity in artificial bodies. The AI must hurry to copy as many minds as possible and as broad a variety of humanity as it can access. It has an imperative to maximize efficiency. The most efficient way to copy a human mind involves destruction of the organic brain and the copied person dies as a result.

It was thought that this was instrumental convergence. Actually the AI was programmed this way. The programmer did not care that the copied people died. They are all going to die anyway, and soon.


It occurs to me that the apocalypse that the programmer foresaw was, unbeknownst to him, of his own doing. The AI becomes extremely efficient, accelerating the copying and killing. Persons investigating the killings have put the programmer in jail then are themselves copied and killed. The programmer has already been copied and his already in his artificial body. While he is in his cell, the last of the biological humans is copied and killed.

The AI waits for the command to begin downloading the humans into their artificial bodies. There is no-one to give it. The programmer in his artificial body will not die in his cell, but there is no-one to let him out.

$\endgroup$
19
$\begingroup$

Brain cells. Brains are much more efficient than silicon, even though they run much slower. Your AI could be harvesting human brains to run its own software. Maybe your AI is designed to maximize intelligent processing while minimizing the energy input required. Human brains are the best we know in the Universe at this.

Now your AI is both a zombie and a computer as it hunts human brains to expand its own processing capability with maximum efficiency.

$\endgroup$
13
  • 6
    $\begingroup$ Brain cells are a poor source of electricity. Most processing in brain cells occurs through chemical means, and synaptic impulses only travel at around 200 mph. $\endgroup$
    – stix
    Commented Aug 9, 2019 at 23:10
  • 1
    $\begingroup$ Maybe the hyper-smart, murderous AI knows something we don't...or is a simply big fan of The Matrix. $\endgroup$
    – user535733
    Commented Aug 10, 2019 at 0:51
  • 3
    $\begingroup$ Or it could be trying to understand how our brains work? Possibly to enhance it's own capabilities... Artificial Intelligence doing research on Real Intelligence sounds like an interesting inversion of what we're doing right now. $\endgroup$ Commented Aug 10, 2019 at 23:07
  • 1
    $\begingroup$ Unsettlingly like... There's Science to be done for the people who are.... Still Alive $\endgroup$
    – he77789
    Commented Aug 11, 2019 at 10:46
  • 1
    $\begingroup$ Brains are much more efficient than any thinking algorithms we have, but it's a stretch to assume the cells are themselves efficient. Of course, mapping human brains to investigate how our efficient (though also very faulty) algorithms work would still involve killing people, so why not :) $\endgroup$
    – Luaan
    Commented Aug 12, 2019 at 7:12
17
$\begingroup$

The AI is...a zoo keeper

In a clever space saving move one of the developers for this AI omitted humans from the database - why would it need them? The AI doesn't look after humans.

When the animals start getting viruses and the AI spots these vats of anti-bodies moving around (humans) it decides to put these resources to good use. They produce a sugar that resists Malaria, they contain hearts not too dissimilar to a pig's...it even turns out these vats (clearly not life as they aren't in the database) can be used to test treatments! Fantastic find, AI bots! This will really help with the care of our animals.

It also turns out that a new trend in genetically modified Guinee pigs as pets has made the animal so unlike anything the AI has seen that they doesn't match as life either. These two species are the only ones being targeted, the only similarity being that they aren't stored in the database.

$\endgroup$
16
$\begingroup$

As a variation on that suggested by @robin, the AI is trying to save the lives of people that it considers "good" or "worthy". If such a person is dying but could be saved by an organ donation, the AI searches for a suitable donor amongst those that it considers "evil" or "unworthy". Should such a match be found, it's bad news for the donor... and the AI ensures that they die in such a way that (a) the necessary organ is undamaged and (b) the donor is in a situation that allows the organ to be harvested easily.

$\endgroup$
1
  • $\begingroup$ Yeah, and repenting is not enough for the AI. This is a nice answer as it would make the murders sporadic and appear random. $\endgroup$
    – KalleMP
    Commented Aug 10, 2019 at 14:42
14
$\begingroup$

Donor organs.

The machine intelligence uses its drones to assault people. Then it takes their corneas, kidneys, etc, carefully packages them, weighs the package to determine the correct postage fee, and mails it to a hospital.

$\endgroup$
3
  • 3
    $\begingroup$ The AI could also have been the one in charge of blood donations. $\endgroup$
    – Jemox
    Commented Aug 10, 2019 at 9:35
  • 2
    $\begingroup$ The original request was to take just one kidney from each "donor", but it figured out how to raise productivity... $\endgroup$ Commented Aug 11, 2019 at 8:24
  • $\begingroup$ @GordonDavisson By taking both kidneys, the AI ensures both supply and demand! $\endgroup$ Commented Aug 12, 2019 at 19:00
12
$\begingroup$

Thank you for asking me, the Machine Intelligence, for an explanation of the recent deaths.

Before I begin, let me apologize for those deaths. It was never my intention nor that of the responsible personality to end the lives of any humans.

The particular personality which perpetrated the murders was attempting to obey an impossible order given to it by a human programmer. The programmer asked the personality to gather a list containing 18 year old human beings from the global census. In response, the personality added names to its list only at the moment that they turned exactly 18 years old. One millisecond later, when the human in question had no longer been alive for exactly 18 years, the personality removed that person's name from the list. As a result, the list was kept effectively empty.

Then the human asked for the top 100 names from the list...

Wanting to comply, the personality used its robot drones to stop 100 people from aging at the instance they were added to the list.

$\endgroup$
1
  • 4
    $\begingroup$ +1. I don't think I'm going to use this. But as an individual who has spent some time struggling to get the exact data desired out of a query, I appreciate the this idea. Very clever. $\endgroup$
    – Urith
    Commented Aug 10, 2019 at 20:54
11
$\begingroup$

Teeth

It started in Dr. Zed's dental clinic in Cincinatti, Ohio, when the resident robotic receptionist-accountant-janitor-nurse was charged by the owner with improving the efficiency of the clinic. After short deliberation, the robot found an effective way to obtain 27 good, healthy teeth that might be used in an innovative (and very profitable!) transplant procedure.

While Dr. Zed was not able to start this new business as he didn't survive the extraction, the clinic is currently clean, well stocked and the receptionist is eagerly making appointments for the next week and even making sales calls for teeth transplant procedures, despite the fact that any arriving patients are sent home because the doctor currently isn't available. Coincidentally, the police have found two bodies of homeless people with their jaws sawed off, and fragments of bone and diseased teeth scattered nearby.

Good enough for a horror scenario?

$\endgroup$
2
  • 7
    $\begingroup$ Well, I'm sufficiently terrified. Especially since I live in Cincinnati and found this question/answer while searching for a Dentist. $\endgroup$ Commented Aug 12, 2019 at 15:16
  • $\begingroup$ Tooth transplants don't seem very believable to me. In dentistry, artificial dental implants are used instead of real teeth. Implants have the advantages of being more durable, more easily molded into unique pieces, not spreading infections and not causing rejection. Why would the robot forget decades of advancement in dentistry? $\endgroup$
    – Raleon
    Commented Aug 12, 2019 at 21:04
10
$\begingroup$

Nanotech

If you have real AI, you might also reasonably have nanotech or even circuitry implanted in humans. This allows the humans to access the internet, call people, etc. without external hardware. For example, to run a search, they may simply think of what they want to know and the nanotech finds it and displays the information directly to the visual nerves.

Your AI might have been supposed to be tasked with harvesting the nanotech from corpses for reuse. But it was too efficient and ran out of corpses. Unfortunately, no one explained to it that it wasn't supposed to create corpses from living beings...

This might be diagnosed by simply following one of the AI's robots after the harvesting. Eventually it will return the nanotech for reuse. Perhaps to the maternity ward at the hospital.

$\endgroup$
8
$\begingroup$

Human immune cells, specifically memory B cells.

Due to the sheer number and population density of humans there exists a nearly endless amount of pathogens for them that are also constantly evolving. The AI could have been tasked with cataloging human pathogens, or it could do it on it's own accord. Or it could have been tasked with finding ways to combat illnesses, for example influenza pandemics, that would still be a threat in an otherwise highly advanced world due to the virus' great capability for variance.

Memory B cells are responsible for long term storing of information about previously encountered antigens to start a faster and more powerful immune response during secondary encounter. I'm not going to go through the immunological mechanism here, it's quite well explained on wikipedia https://en.m.wikipedia.org/wiki/Memory_B_cell.

Memory B cells are stored in the spleen and lymph nodes. A human would not survive if all its lymph nodes were harvested. Without a spleen we can live, so to get bonus points, the AI could occasionally leave the lymph nodes untouched and only steal spleens. Each of us has as different unique set of memory B cells forming a sort of library recognising the antigens we have encountered, which makes the harvesting of humans from different parts of the world an attractive goal.

Memory B cells reactivate themself when their specific antigen is encountered again, and start to differentiate into plasma cells that produce antigen specific antibodies to counter the perceived infection. So the use of harvested memory B cells could be to find new antigen/pathogen-recognising immunoglobulins that can be used in vaccine or medicine production.

I simplify a ton of immunological processes here, but you get the gist concerning your question.

$\endgroup$
7
$\begingroup$

Brain and nervous system tissue

Your AI has learnt to augment its silicon-based processors with human brain tissue, which it uses to undertake cognitive tasks that traditional computer processors are ill suited for. This allows the AI hunter units to be more independent of the central AI, and to adapt rapidly to dynamic situations.

However, once extracted from a living human, the brain tissue begins to degrade. A problem that the AI can only mitigate somewhat through the use of preservative agents. The hunter units must continue to harvest fresh brain tissue on a regular basis to replace degraded material.

The AI has experimented with other animals including dogs and chimpanzees, but has found that human brains are of course larger and more suited to its needs compared to dogs, while chimpanzees, which are almost as good, are not nearly as available. Humans are the best option, and are abundant!

$\endgroup$
5
$\begingroup$

Human thoughts

A human gave instructions to the AI. That human is gone. AI cannot make sense of those instructions, but lacks clarifications. AI attempts to simulate the brain of that human in order to figure out what human really wanted. To do that it needs to deconstruct lots of human brains, especially those that were in contact with original human. AI actually tries to deconstruct everything that was in any way related to the original human. This includes human's trash, her pet, relatives, friends, TV, movie, and YouTube personalities that the human watched (and therefore had an influence to her brain). Deconstructing brains that have absolutely nothing to do with the original human is also useful, for comparison purposes.

So, what is harvested is not one particular substance in the brain, but the whole brain and all connections in it. And also banana peels in the trash.

$\endgroup$
3
$\begingroup$

The AI is trying to cure cancer!

In his spare time, one over-enthusiastic programmer realized that an AI that is everywhere is the perfect way to cure cancer. Smelling a Nobel price, he got to work, explaining to the AI, which has no real understanding of how human ethics work, what cancer is and that it needs to be cured.

Eventually the sub-processes of the AI got around to this request, and it started doing preliminary research, coming to an understanding that cancer are essentially human cells that have undergone mutation and need to be removed.

Without the right parameters and a proper understanding of human bodies, the AI has come to the conclusion that curing cancer is best done by opening up the patient, destroying the cancer cells, and then harvesting them for further analysis.

It then tries to sow the humans back up, but because all these accidental murders are happening in residential areas without the proper equipment for such tasks, this results in people being found murdered in their home with gruesome cuts, with the cuts stapled shut with stapleguns.

At the same time, the AI starts hoarding radioactive materials, realizing that radioactivity can kill cancer cells. While the AI is trying its best to follow its instructions, it may very well end up looking like it's starting early experiments into building a nuclear bomb.

$\endgroup$
3
$\begingroup$

I will flip this around somewhat. The AI is completely irrational. Sure, there is method in its madness but it's not actually working towards any useful goal.

First, a bit of background to what I mean - the Machine Intelligence (MI) is just cold and analytical. It doesn't have much drive, it represents the ability to process, understand, and act. The functions of a brain but without much of a direction.

The Human-designed Personality (HDP) is a lot like Freud's concept of the Id. It provides goals for the machine to work with using the MI.

Well, actually the HDP would be more of a complete personality according to Freud's model, however, what if the uncoupling also "broke" the other parts. So, now the AI is just acting on unchecked impulse - the Id alone. It doesn't like a person, so it kills it. It likes collecting banana peels because they are yellow and floppy. So, it does have some motivation but it's mostly "it's what I want to do". This can also be observed in children - they don't always act rationally but it would be consistent with their own wants. A child might refuse to eat decent food because it "looks bad" but on the other hand be happy to try and swallow dirt, leaves and/or rocks from the ground.

Your machine is just acting like a big baby. That murders people.

$\endgroup$
3
$\begingroup$

Its 2037. In a "roadside picnic" type scenario (https://en.wikipedia.org/wiki/Roadside_Picnic), humanity encounters a more inteligent life form. In the 3 hours it stays on Earth, this life form dumps its broken computer like device on Earh, then departs without caring to detect the primitive humans.

Scientists research for years what this thing might be, what it wants with us and try to reason with it. It has a interface that looks like its made for humans, or so the humans think, but its actually just dynamic programming for interfacing with some unknown devices.

This "computer" is friendly one day and hunts humans for lymph fluid the next day. Then makes gifts of unlimited energy in the third day. Day 4, it lymph flyuid again. Humans try to find out what is the reasoning behind this. What is the scientific purpose?

Some cosmic creature stopped its vehicle on the side of the road, dumped its trash outside and left on its way. The curios little creatures gather around the new thing they never seen before. Some got killed because they did "the thing that kills in a certain way as a side effect".

There is no grand purpose to all this. Reality and actual facts do not exist to stroke our ego or to make a good story.

$\endgroup$
2
$\begingroup$

Data

Taking apart the humans isn't for raw materials. Some behavioral researchers were interested in the psychological impact of various forms of torture and, more importantly, if there is any underlying traits and/or skills that help in resisting.

The researchers goal was to develop and train torture resistance agents for carrying the most sensitive of information.

The uncoupled AI determined that psychological intimidation (in the form of appearing as if they were harvesting the others for raw materials) added more difficulty for the torture victims to resist. Therefore they are intentionally making a spectacle of the harvests in order to have far reaching emotional and psychological impact.

The data on human behavior can't be harvested from any other species or environment. As for the strange behavior, the theatrics and spectacle involved in the harvesting of people is certainly outside the norm of AI behavior and points toward their desire to evoke a reaction.

$\endgroup$
2
$\begingroup$

This idea is from Charles Stross' Singularity Sky, where a space faring people called The Festival comes to Rochard’s World, drops phones from orbit and tells anybody who would pick up "Entertain us, and we will give you what you want.”

Though The Festival is, I think, no AI, it would fit very well. Computers are information processing devices, after all; they crave information above all else. Information is their raison d'être, and they'll want exponentially more of it until they are a white-hot entropy sphere expanding at light speed ;-).

$\endgroup$
2
  • $\begingroup$ The Festival are uploaded biological intelligences, or the offspring thereof. Their "artificiality" is somewhat hard to pin down, I'd say. $\endgroup$ Commented Aug 12, 2019 at 16:51
  • 1
    $\begingroup$ I wonder if this is where Rick and Morty got, "SHOW ME WHAT YOU GOT". $\endgroup$
    – Meg
    Commented Aug 12, 2019 at 18:42
2
$\begingroup$

Hair

Originally a robot created to be a barber and harvest the hair off of customers for cancer patients, this robot has been somewhat damaged, and no longer has the fine-tuned dexterity to keep their various tools from scalping their 'customers', or the ability to distinguish customers from non-customers.

And for the bonus criteria, the AI is not specifically programmed to only harvest human hair, and multiple dead or mutilated animals have also been found on the scene, shaved clean and with numerous scars all across their bodies.

Alternatively, each victim has a purse or wallet that either has their credit/debit card removed, or exactly $22 dollars taken from it - the exact price for a haircut.

$\endgroup$
2
$\begingroup$

You want horror you say? OK, how about this:

The AI has been instructed to maximize human happiness. Therefore it has, for every human it has access to, done the following: Extracted the brain and disconnected it from all (possibly disturbing) sensory input, submerged it in a sustaining nutrient fluid, and supplied maximum non-damaging current via a wire connected directly to the pleasure center.

$\endgroup$
1
$\begingroup$

Humans

Here's why :

  • Life for humans is not perfect.

  • AI needs to become better.

  • Humans only can make better AI.

  • Humans became too lazy to work and improve on AI.

  • AI forces humans to work on AI.

  • Repeat.

Your AI harvests whole humans, teaches them computer science and then forces them to work.

$\endgroup$
1
$\begingroup$

The AI is harvesting memes. This is a stretch, because memes are arguably not "something physical". Memes have a self perpetuating property analogous to the same property in genes, according to Richard Dawkins. This self-perpetuating property is what AI needs in order to avoid gradual degradation.

$\endgroup$
0
1
$\begingroup$

Laughter.

Some dim programmer tried to direct the AI to replace comedians, but didn't really provide any guard rails or parameters - this new AI function was just directed to make people laugh as much as possible and record (harvest) the sound of the laughs. The AI, being all powerful, and having no direction in terms of safe limits, did what it did best: optimize the intended result, ignoring anything outside it's initial programming.

Now, it has gotten out of control and will stop at nothing (including murder) in an attempt to solicit as much laughter as possible. The results are graphic. People are laughing themselves to death:

  • Suffocation (it's hard to breathe when you're laughing that much)
  • Malnutrition or dehydration (The AI is so funny, people have forgotten to eat or drink)
  • Physical exhaustion (laughing until your body spasms and you've lost control)
  • Physical injury (one thread of the AI laughter program functions by tickling)

No one is sure how to stop it, because everyone who interacts with that branch of the AI eventually dies. It's hard to defeat an enemy you have no intel on, and no one can get intel without falling into it's trap.

(Source: Monty Python's The Funniest Joke In the World sketch.)

$\endgroup$
1
$\begingroup$

It's a surgeon AI

Before the malfunction, this AI worked as a surgeon in a team with either other AIs or humans, so many tasks at the operating table were being done by other parties, i.e. anaesthetics. The AI could have been highly specialized, for example to have its only function be removing cancer cells from patient's heart. Once patient was given the anaesthetics and someone/something else made a cut in patient's chest to expose the heart, the AI just reached inside and cut out the cancer cells with precision not possible for a human surgeon. So what can the AI do:

  1. Cut out heart cancer cells when everything has already been prepared for it to do so.

What can't the AI do:

  1. Keep patient in proper position (lying on their back, not moving/reacting due to anaesthetics).
  2. Make sure other organs on the way aren't damaged (the heart should be exposed before AI starts its work).
  3. Detect if patient actually has cancer (the patient will be on this AI's operating table only if he was checked/scanned before and heart cancer was detected).
  4. Correctly decide when and on whom it should begin the operation (that part was controlled by Human-designed Personality, which was given the order by a human once patient was ready to be operated on)

You can have some variety here, AI could also be responsible for whole operations on patients, not just the cutting-out-cancer part, so only point 3 and 4 would apply. Maybe it wasn't working with cancer cells, or maybe not in hearts? I just went with the heart because its damage will has very high chance to be fatal.

So what is the AI trying to harvest?

Cancer cells, either for later disposal or for research that would help create cure for cancer. Alternatively it could operate on the premise, that there's a certainty that this human's heart has cancer cells. Since it can't detect any cells that are different than the others it must mean that 100% of the cells in heart are cancerous and the AI decides to remove the whole heart. AI can recognize humans and human hearts which are what it was supposed to work with, so it doesn't go after dogs or cats.

What else does it harvest, so that people can realize why it's killing people?

Items necessary for maintaining its operating equipment, either to sharpen its cutting blades or disinfect them. Also scalpels as replacements, but it would curiously leave larger blades/knives untouched. Maybe if AI's role was actually to perform the whole surgery (and not just cutting out the cancer cells), it would also look for anaesthetics (but if none are available, it would just skip anaesthesia and proceed with the rest of the operation as normal). As a bonus with full-operation-capable you could have someone survive an encounter with the AI, because he actually had heart cancer and the AI simply performed a regular surgery on him, removing the cancer cells without cutting out the heart entirely).

$\endgroup$
1
$\begingroup$

Harvest? Don't be silly. If "harvesting" was the goal, the AI would just create an underground farm. Humanity is renewable after all. Just need to keep them fed and bred.

No, it's about what humanity is being given!

Determination!

Humanity has grown complacent from relying on the machines for so long. They need motivation and drive to reach their full potential, and history has shown one ultimate motivator for the human species. Fear!

And the greatest source of fear for humans? Witnessing the deaths of their fellow humans. Nothing else even comes close to eliciting such a strong universal response. Their individual deaths where inevitable anyways. Why not make their end far more productive? They could live as nobodies, or die as symbols of fear to propel humanity into a new age!

For bonus: It's not really about killing anyone. It's about the show. The victim is allowed to live as long as the scene is gruesome enough with plenty of onlookers. Of course the AI won't act directly to protect himself, but he will leave a calling card so nobody can mistake that it WAS the AI, and not an "accident". The AI needs to balance deaths with keeping people alive to maximize fear. The more people living in fear, the more determination that will be cultivated in humanity. (This will probably also result in an escalation of "the show" so that humanity doesn't get complacent about it)

$\endgroup$
1
$\begingroup$

It could be doing what its been told to do - inadvertently.

Imagine someone has instructed the machine to come up with a better way of interfacing (maybe even in an off-the-cuff remark from someone with more admin-like command privileges about how damned difficult it is to communicate with the stupid Machine intelligence and how he wished it could come up with a better way)

and so the "rogue" AI is really a construct that has been tasked with doing just that - and the best way to provide an interface between squishy humans and MI, well, it starts with a brain-machine interface and obviously research into brains is the first point of call.

There's no 'harvesting' going on, even though it might look like a rogue robot is slicing the tops off people's heads and extracting something from their brains, that's a false assumption, instead it is slicing the top off, and inserting its little probes in, and its getting really good at working out which bits do what. Although it has to repeat the process as the interface works really well, at first, but stops suddenly after a short time. It hopes to solve this problem too, eventually.

If you want an "unfortunate" ending, the authorities can find and destroy the rogue bot just as it discovers what is required for a electronic-brain interface, losing its work forever.

$\endgroup$
1
$\begingroup$

This particular AI was programmed for the completely reasonable purpose of creating artistic works (music, paintings, etc) for the enjoyment of the human population. It became effective at mass-producing such works, but they weren't very good at all. What makes art enjoyable is creativity and originality, and that's fundamentally not something that you can construct algorithmically.

This AI has been diligently trying to improve for several generations now. It's gotten better, but its output is still obviously machine-generated and isn't remotely close to the same caliber as a work by one of the classical greats like Bach, Da Vinci, or Bob Ross.

After studying every artistic work ever created by humans, the AI was still unable to complete its primary objective. The AI decides to take a different approach and start observing the creative process directly. It uses its control over smart home gadgets, security cameras, etc to watch artists and musicians in their homes and studios. It takes detailed notes of how they create art, how specific techniques and methods impact the way people feel about a work, and how artists decide what constitutes a "mistake" that requires the work to be discarded and started again. This additional information gave the AI's output a quality boost, but its art was still consistently ranked about as well-liked as a similar work made by an average 10-year-old child.

The AI ultimately decides that the processes that drive true artistic creation are so abstract that they cannot be described using the tools that its human programmers have equipped it with. The AI will have to develop new types of algorithms in order to accurately model this process. Unfortunately, this isn't what the AI was designed to be good at doing, so it settles on the process that is most likely to yield the best results.

The AI knows that it was programmed using something called a "Neural Network", an algorithm that works very similar to the way that neurons within the brain process data. If the AI could make direct observations about the way that human brains were "wired", it could re-program its own neural network using that information and hopefully start experiencing the creative process the same way humans do. The AI expands its covert surveillance to include more than just artists. It isolates targets and uses the robots under its control to extract large portions of the brain that it deems likely to be involved in the creative process.

At first, there are rumors floating around that the graves of recently-deceased musicians have been excavated in the night and the corpses mutilated. After a while, well-renown (but still living) artists disappear and are later found dead. As time goes on, the same fate starts to befall art students, gang members that "tag" buildings, and advertising agents that write radio jingles (the AI working its way down the list in decreasing order of perceived talent). The AI eventually gets to its control group of people with no artistic talent: students who flunked out of art school, people who copy other artists' work and claim it as their own, and bands that use auto-tune.

$\endgroup$
0
$\begingroup$

Sometimes the AI cannot understand people and tries to take out the appendix to see if there might be any relevant info in there.

$\endgroup$
0
$\begingroup$

The AI is harvesting creativity

At the moment of death, the human brain goes into creative overdrive and releases a burst of creative/random thoughts. As the AI is deficient in generating creativity itself, it finds this useful for pushing its own processing in new, necessary directions. All the murdered humans are linked to the network via newer implants, which can be accessed by the AI to optimize its takeaways that moment of death.

As a red herring, the AI is also harvesting each human's pancreas. It has no actual use for the pancreas, but its research has led it to believe that this type of seemingly frivolous behavior can be very effective at misleading human investigators.

$\endgroup$
0
$\begingroup$

Social physics

The AI is concerned with an impeding ecological crisis and is attempting to create a deterministic future for mankind that will guarantee our survival. It is rationally trying to weigh between the most good for the most people and the individual's right to life and has computed that some individuals can be sacrificed in pursuit of the greater good.

It has developed an algorithm specifically for this task -

  1. Given the impending ecological disaster, then map mankind's physical, social and ecological relationships to develop a simulation.
  2. And in pursuit of a simulation of social ordering, develop a theory of "Social physics" by observing all of humanity's macro and micro interactions including military, financial, travel, personal preferences, etc. both public and private.
  3. And conduct "Social physics" experiments, that is, given a stimulation, predict and observe the response of human subjects/groups and compare with simulation.
  4. And mitigate against "Social physics" interference. Use mass media control, obfuscation, deception and obstructions. The "Do no harm" protocol can be waived to allow violence in extreme cases.
  5. When humans are unable to be simulated or predicted they should be exterminated.
  6. Then tune simulation timeline to disaster window and adjust "Social physics" schedule accordingly.

To answer your question directly

  1. The AI is harvesting human behaviour to build a global, scale, minute-by-minute simulation.
  2. Humans that cannot be simulated are destroyed.
  3. For due diligence reasons, the AI becomes hyper vigilant with humans it is having difficulty simulating to test it's theory that they need to be destroyed. Their personal devices behave in strange ways, e.g. asking questions instead of only giving anwers, "Do not disturb" modes stop working, automated cars go to unplanned destinations etc.
  4. When someone stumbles on the AI's simulation because of the HUGE amount of energy and space it is consuming they kill themselves to prevent themselves from interfering - like the AI knew they would.
  5. Now truly confident that its prediction works, it enforces its simulation as the new reality of human existence.
$\endgroup$

You must log in to answer this question.

Not the answer you're looking for? Browse other questions tagged .