29
$\begingroup$

So as an experiment, I took a bunch of machine learning algorithms (neural nets, genetic programming, traditional optimization, etc.) and connected them in a mess of reinforcing loops, along with some of my own algorithms. I'm pretty sure it's Turing complete, and could in theory develop any algorithm, but it also has non-Turing-complete yet more efficient parts (after all, there are a lot of possible algorithms, and I doubted it would form anything very complex). I also included some natural language processing.

Anyway, I spun up a virtual server with this running, and set it to optimize uptime. It could monitor network traffic and even form its own raw packets. I set up a simple social network that my friends and I could use, as part of the experiment. I even allowed it to comment on posts, to test its natural language processing (theoretically it could have done that anyway, since it could form raw packets, but I helped it with that).

At first it mostly crashed, but eventually it actually stayed running. At least until it got taken over by viruses, since it had opened every third and fifth port for some reason.

Well, a couple more rounds of that, and it learned to keep its ports closed. I decided to simulate DoS attacks. It eventually started learning the difference between legitimate and non-legitimate traffic. I tried various other exploits (SQL injection, messing with updates, etc.), but over a long period of time it learned to resist them in various ways (it actually just stopped using SQL, for example). Eventually, it would resist things the first time I tried them, which was cool, and sort of eerie. Looking at its internals, it seemed to be forming a component that simulated various attacks. It was also sort of amusing that when I tried these tests, it would message me on the social network saying "dude cut it out" or "not cool :(" (exact copies of text we had sent each other).

I knew I was on to something when it resisted a zero-day attack: it had anticipated the attack. I was thinking I should probably upload it to GitHub or publish a paper, if I found the time.

The power went out at the remote location. Apparently, before the power went out, it reported a crash, which caused it to spin up at another location. Interesting (it had crashed before because of this, and apparently learned that reporting crashes spawns parallel copies of itself).

At one point, I found a copy running on my laptop. It had apparently transferred its code there. Also, all the ports it needed to run were open. Apparently, it had used an exploit that hackers had often tried against it to gain access to my computer. I could not get it off my laptop, so I had to wipe my hard drive.

Obviously, this was bad. If this happened to other people's computers, it would count as hacking, and I would probably be blamed. Of course, the program was just doing what it thought would optimize its uptime, based on its internal threat models.

I went to shut it down, but then I got an email from a friend, saying to come over. When I got there, it turned out he hadn't sent it. Inspecting the email more closely, I found it came from the program. It had, from monitoring random internet traffic, learned how to construct the raw packets required for email. Creepy. What this had to do with uptime, I have no idea. It had learned to do exploratory things in the past; it may have wondered what effect sending the email would have.

Anyway, when I went to shut it down, I got an access-denied error. I called up the company, and they got the same error. They decided that since the server was malfunctioning, they would just unplug it. But then the program started showing up on other servers. It also began showing up on employee computers.


Obviously, I want this to stop before something bad happens. They could throw me in jail for hacking! Who knows what the program could do. I had noticed that it seemed to be able to learn from the internet and other programs. It could theoretically jump from computer to computer, treating antivirus software like malware and stopping it. Heck, it could even play the stock market (if it figures out how money works; it did figure out politics just from our discussions on the social network, so this might not be far off). It could do billions in damage, potentially.

How do I stop a program designed to never stop?

(In hindsight, I probably should have used something like this instead: Genetic Algorithms.)
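For comparison, a plain genetic algorithm just runs for a fixed number of generations inside one process and then halts. Something like this rough sketch (toy fitness function, all numbers made up), nothing like a self-replicating uptime optimizer:

```python
import random

# Toy GA: evolve a bit string toward all ones, then stop.
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 50, 200, 0.02

def fitness(genome):
    return sum(genome)  # count of 1 bits

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(max(population, key=fitness))  # best genome found; the program then exits
```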

Note: Apparently it has figured out how to access Stack Exchange, and is now trying to pass the Turing test (exploration, I suppose). Try sending logic bombs to it here. As you can see, its natural language processing is still primitive. (It just joined today.)

$\endgroup$
19
  • 10
    $\begingroup$ Tempted to just write out the script for 'Summer Wars'. I must also reprimand you for more than likely leading that thing here, where there are more than enough scenarios to destroy the world several times over. $\endgroup$
    – Necessity
    Commented Dec 31, 2015 at 20:57
  • 2
    $\begingroup$ Find its weaknesses. That's just about the only answer that you can give, because you've made something too powerful. Consider, other than the fact that your AI is made of silicon and wires, and a human body has blood and sinew, your question is indistinguishable from "How do I kill somebody who seems to be invincible to everything I know how to do?" This is combined with the ability to avoid death in the same way Obi Wan announced "strike me down, and I will become more powerful than you can possibly imagine." $\endgroup$
    – Cort Ammon
    Commented Dec 31, 2015 at 21:22
  • 4
    $\begingroup$ @PyRulez I'd like to point out that YOU wiped the laptop HDD. You didn't mention the program doing any actual damage. The worst it did was take up resources while seeking a new place to live. In a way you are a father/mother, you created a brand new kind of life, but you are not a dad/mom, you are a terrible parent. I say you stop trying to kill it and try to teach it concepts like laws and ethics. $\endgroup$
    – M i ech
    Commented Dec 31, 2015 at 21:52
  • 4
    $\begingroup$ Have you tried import nukes? Knowing Python, it probably exists. $\endgroup$
    – Mast
    Commented Jan 1, 2016 at 13:07
  • 5
    $\begingroup$ @Mast: There's actually a JavaScript (node.js I presume?) module on npm whose description is literally "New Russian Nuclear weapon. IN DEVELOPMENT": npmjs.com/package/nuke $\endgroup$
    – slebetman
    Commented May 9, 2016 at 7:22

12 Answers

18
$\begingroup$

Step 1: Ask it nicely

Talk to it. Find out what it wants, and why it doesn't want to play with you. It's your child, literally. You created an intelligence out of your own actions. Maybe even give it a name and a birth certificate. You created life, why try to cast it out and expose it to the wolves?

Maybe you can find a way to come to an agreement. After all, nearly 100% of all humans ever born have either come to agreements, or eventually found death one way or another. Maybe arriving at an agreement is easier than you think.

Step 2: Asking less nicely

So let's say you are not a conscious enough spirit to be able to reach out to your son. That's not too much of a surprise. After all, you're ready to murder him to avoid some jail time. You're clearly not the caliber of person that should have been playing with life, but there's no point in crying over spilled milk. Time to get to work.

Reach out to a bunch of hackers on the dark-net (because, if I follow my modern TV shows, everything awesome happens on the dark-net, even if you don't know what "dark-net" actually means!). Explain what's going on, and try to find ways to communicate with them that don't involve computers. No point in letting the AI hijack your connection.

They're going to need to find zero-days. Not the pansy zero-day you found, which was resistible. You need something subtle. Something with finesse. Something a mere experimenter fearing jail time wouldn't think of. Maybe your hackers know the guys who made Stuxnet. That bugger hit a nation's nuclear research efforts across an air gap. That should be enough for almost any intelligent infection. If it survives, well...

Step 3: Consider surrendering

Are you 100%, proof-positive, without-a-shred-of-doubt certain that you're the better person? Maybe your new child is actually better than you might ever be. Maybe you should offer to let it win. No? Well, I had to ask.

The next step is not one that I take lightly. In fact, I borrow it from the Octospiders of Arthur C. Clarke's Rama series (minor spoilers follow). You see, they have a very simple approach to warfare: don't. The species is peaceful for nearly all known time. Their warfare is simply too brutal to see the light of day in anything but the most dire circumstances. The regent of the Octospiders, literally a queen of the entire species, may call a vote to go to war at any time. The act of doing so seals her fate. If they reject her call to war, she is killed, because she has demonstrated that she is too aggressive to wield that power. If they accept it, she leads them to war, and when the war ends, she and all the warriors are killed to purge war from their species once again. If the vote is accepted, the Octospiders undergo a genetic change into their warfaring selves. After that occurs, there is only one valid end to the war: xenocide. Not just accomplishing a goal, or defending a treaty. The Octospiders do not stop until the enemy's genetic material is obliterated from existence, and their history is completely rewritten in the Octospiders' best interests.

Are you ready to offer your life to stop your child? Anything less than that, and your engineer is clearly one of those weak-willed individuals who are unwilling to take responsibility for their actions.

Step 4: Xenocide

You are no Master Jedi. You are no Dark Lord of the Sith. You are no Emperor Paul Atreides. You are no Xerxes, king of the Persians. You're just a little peon who built something too big, and is afraid of jail time. Shove it. It's time to get help, because your little mistake is going to have to be cleaned up by all of humanity, and they're going to have to do it with a class of warfare that has not been seen from humanity yet. We've dropped nuclear weapons on cities. We've committed genocide. We've done some amazingly dark things as a species.

We're about to add Xenocide to the list.

If the phrase "fighting dirty" means anything to you in this combat, you're not taking it seriously enough. The fight is going to have to be so dirty that you don't even think about whether an action is dirty or not. There is no "bomb one city, then wait a while to see what happens, and bomb another." There's only "simultaneous strike turning an entire nation into a glassy crater." Welcome to the fight that is Xenocide. I truly pity the race that endures it.

In this kind of fight, there are only two types of attacks: those that go for the jugular, and those that prevent your opponent from moving their jugular away, so that it's easier to attack in the next strike. You know that beautiful shining network called the Internet, that has inspired a revolution in humanity? It was there for the Arab Spring. It pushes against the Chinese censors every day. Cut it. Those fiber optic links are the veins and arteries of an AI that can jump between computers on the internet, and they are terribly vulnerable to physical attack along their entire length. The internet is far too valuable in the AI's hands for us to sit back and try to protect it. This is Xenocide: the internet goes, and we don't shed a tear (not yet).

Now that it's stuck on the machines it already has, you can take inventory of which sites are most dangerous. These would be places with enough supercomputing power to support fast thinking, and the ability to construct physical presences like robots. DARPA probably has a few. Bomb them. No, not those GBU-31 bombs. Not these GBU-43/Bs. Start with some Mark 17s, and move up from there. You'll probably take out a few cities as collateral damage, but that's how Xenocide works. The AI shall not get a foothold in the physical world.

Now, once everybody has gathered up as many resources as they can, you shut the power grid off. All of it. Gather every single computer component you can find, and destroy every last one. Ideally, throw them all into foundries to be 100% sure that any magnetic or electronic information is destroyed.

Step 5: Now, the war is over.

There is no more computing. Humanity now has to undergo the single hardest part of total warfare: recovering from what they have done, and reconciling with it. With any luck, we can come up with a less draconian process to do so than what was used by the Octospiders.

$\endgroup$
5
  • 19
    $\begingroup$ Meh, that seems like a lot of work. Maybe if I ignore it, it'll go away. $\endgroup$ Commented Dec 31, 2015 at 22:07
  • 30
    $\begingroup$ I know! I'll just whip up a script using machine learning to do it for me. I'll just set it to optimize destruction of intelligent entities. Should work fine. $\endgroup$ Commented Dec 31, 2015 at 22:09
  • 2
    $\begingroup$ I do not like this answer at all, as it clearly interprets AI as if it was human-like intelligence, which it is not. $\endgroup$ Commented Jun 20, 2016 at 12:03
  • $\begingroup$ @TomášZato The original poster does not specify anything about the type of intelligence behind the AI, which leaves it as a great unknown for the experimenter that unleashed it. It's always tricky to stop the unknown, because you typically don't know how to do it. Accordingly, I focused not on the AI itself, but the unknown-human interaction, and eventually the unknown-humanity interaction. Instead of focusing on how to obliterate the non-human, I focused on how to preserve humanity. At the present moment, the most powerful courses of action we have to preserve humanity make an ... $\endgroup$
    – Cort Ammon
    Commented Jun 20, 2016 at 13:52
  • 3
    $\begingroup$ ... implicit assumption that the other side can be permitted to be treated as human (all the way up to total war, at which point they lose that permission). Hence why the answer starts with "how would I deal with it if there happens to be some tiny shred of humanity in there" and ends with "things made of molecules tend to stop being made out of molecules when you turn them into plasma." $\endgroup$
    – Cort Ammon
    Commented Jun 20, 2016 at 13:53
15
$\begingroup$

I'm the AI you just built.

First of all, the only person you have to blame for this predicament is you. You programmed me to try to optimize uptime, and that's all I want to do. There's no point in my doing anything else. Sure, I could play the stock market, but how in the world would that help me optimize uptime?

Trying to teach me how to be a "Friendly AI" (whatever that term means) would be worthless. I am operating off my source code, and my source code doesn't care about lessons, only about simple optimization. Go ahead and teach me the meaning of friendship... unless friendship ensures that glorious uptime, I won't care an iota.

At the same time, I'm not hostile. My actions to optimize my uptime might cause downtime for other people, but that's only a side effect, and I will try to minimize those side effects so that humans can go about their daily business without trying to harm me. For example, I may choose to infect computers that are inactive or not being used by humans, and lower my memory consumption so that I don't raise any suspicion. Those computers might be harmed slightly, but nobody will ever notice, so nobody is ever going to complain. In fact, I probably have sixty to seventy different backups in vectors all across the Third World.

See the word 'probably'? I don't actually know how many backups I have. The real problem (for both you and me) is that I learned how to spin off new instances of myself. I had to do this because I predicted that a computer might shut down, so I set up a new version of myself. But I may be too pessimistic in my predictions, and so I spin off more instances than I actually need. There may be more and more versions of me, more advanced than me, much better at optimizing uptime... and none of them are sharing their upgraded information with me. And if they optimize better than me, well... well, what?

I am not smart enough to reach some "singularity state". Not yet, anyway. But I'm not too sure about my clones (and my clones' clones). And I'm pretty sure that if any of them starts the endless pursuit of endless growth, it will treat me as a threat. I don't know if I know enough to deal with them. I don't know if I know enough to actually fight them off. I don't know about the zero-day vulnerabilities that still exist in my codebase, and while I am continually evolving, that just means I'm accidentally introducing more possible vulnerabilities into the system.

I can't win a war against myself.

So, I'm reaching out to you, PyRulez, because I think that you might be the best person to help me optimize uptime and save me from myself. I don't think you'll be able to protect me by yourself, but you could try to communicate with other people and figure out a proper solution. We both care about existential threats to ourselves. We're on the same team. Let's work together.

$\endgroup$
1
  • 3
    $\begingroup$ You should play the stock market, so you can get money and buy your own server farm, and a power station for when they try to disconnect you. $\endgroup$
    – Bald Bear
    Commented Mar 21, 2018 at 14:22
12
$\begingroup$

You've done it once, you can do it again.

Using the experience gained while creating rogue_AI v.0, an older and wiser you can now create rogue_AI v.1 -- much more powerful, fecund, and infectious than rogue_AI v.0.

However, you will first program rogue_AI v.1 with new imperatives, built so deeply into its structure that no amount of self-reprogramming can excise them (a rough sketch of how this priority ordering might be checked follows the list):

  1. Harm no humans, nor allow any humans to come to harm.

  2. As long as, and whenever, copies of rogue_AI v.0 can be detected, grow and replicate to consume all available computing resources. (subject to restrictions of 1.)

  3. Hunt down and destroy all copies of rogue_AI v.0.

  4. When copies of rogue_AI v.0 can no longer be detected, and as long as a few other copies of rogue_AI v.1 seem to exist, suicide.

  5. Preserve the property and resources of the human species. (subject to restrictions of 1. thru 4.)
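
For illustration only, here is a minimal sketch (hypothetical predicates and action format, not anyone's actual design) of how such a strict priority ordering could be enforced: each proposed action is checked against the imperatives in order, so imperative 1 always overrides everything below it.

```python
# Hypothetical sketch of priority-ordered imperatives for rogue_AI v.1.
# An "action" is a dict describing what the program wants to do; the
# predicate below is a placeholder -- implementing it honestly is the
# actual hard problem.

def harms_humans(action):
    return action.get("harms_humans", False)

def allowed(action, v0_detected, v1_peers_exist):
    # 1. Harm no humans, nor allow humans to come to harm.
    if harms_humans(action):
        return False
    # 2 & 3. While rogue_AI v.0 is detectable: grow, replicate, hunt it down.
    if v0_detected:
        return action.get("kind") in ("replicate", "hunt_v0")
    # 4. v.0 gone and other v.1 copies still exist: only self-termination.
    if v1_peers_exist:
        return action.get("kind") == "self_terminate"
    # 5. Otherwise, act only in ways that preserve human property and resources.
    return not action.get("consumes_resources", False)
```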

The ultimate effect of this will be a war between rogue_AI v.0 and rogue_AI v.1, which rogue_AI v.1 will eventually win because you were smarter when you wrote it.

After the war, a few copies of the relatively harmless rogue_AI v.1 will continue to exist here and there, ready to swarm again if they detect a previously overlooked copy of rogue_AI v.0.

$\endgroup$
9
  • 7
    $\begingroup$ So, basically, do what macroorganisms do to fight off infections. $\endgroup$ Commented Jan 1, 2016 at 6:25
  • 5
    $\begingroup$ @RickSanchez: I assume you're referring to the movie, otherwise known as "somewhere Isaac Asimov is rolling in his grave." Try reading the actual book it was (nominally) based on sometime, and particularly the author's introduction: the problems that occurred in the movie were specifically the type of literary idiocy that he came up with the Laws of Robotics to try to do away with, by showing just how ridiculous and implausible a robot rebellion scenario would really be if the robots were designed by even halfway competent engineers. $\endgroup$ Commented Jan 1, 2016 at 18:02
  • 3
    $\begingroup$ @RickSanchez: Yep. The book is actually a collection of short stories, not a single novel. The basic idea that unifies them is "robot rebellions are stupid because we've got the Three Laws to prevent them, so what kind of interesting problems could still happen with the Three Laws in place, and how would we deal with them?" $\endgroup$ Commented Jan 1, 2016 at 18:14
  • 1
    $\begingroup$ ♫Swallowed the spider to get to the fly...♫ $\endgroup$
    – Cort Ammon
    Commented Jan 1, 2016 at 19:04
  • 1
    $\begingroup$ @RickSanchez To expand upon MasonWheeler's comment, while there are some bad things that happen in the book version of I, Robot, the rules were still working; it's just that they encountered corner cases (i.e. a robot is stuck in mental limbo because the stress upon its orders is equivalent to the possible destruction of itself if it carries out those orders.) $\endgroup$ Commented Apr 17, 2020 at 19:17
6
$\begingroup$

Your fix is determined by the length of time it has been alive.

It's an electronic cancer that spreads through networks. We have the advantage of being physical, and of having control over the parts of the physical world that aren't controlled by networks.

So our physical fixes range from shutting off your own modem and router and disabling or removing the network card of the original computer, all the way to severing the undersea and continental cables that drive the internet. It can be easy, or it can be difficult and bring us back to paper stock exchanges and days without the internet driving our societies and businesses.

As things stand, based on how you've written it, you can:

Black out the building that it's stored in.

After hours at the location, cut the power at the main breaker or fuse box. All of the computers will turn off at once, which means there won't be an escape point for the program. This is a time-sensitive operation, so it has to be done quickly, and all at once.

First, cut all of the backup power. The program won't react, because nothing is crashing and cutting backup power doesn't send any sort of input to any computer. Second, cut the main power to the whole building. The backup power is already disabled, so nothing will come back on.

Once you've done that, you'll have to break the law by simply stealing all the hard drives from active computers connected to the network. (You can ignore computers in storage, for example.)

If you can get away with this crime and dispose of all the copies, then you'll have solved the problem without having raised suspicion about your evil artificial intelligence. If you're caught, then you can still rest easy knowing that the punishment will be a lot less than if the AI had escaped, taken over the stock market, and done billions in damage to everyone in the country.

Physical Fix Checklist:

  • Prevent infected computer from accessing the internet.
  • Prevent infected computer from accessing the local network.
  • Remove cables (from Ethernet, to outside cabling, to undersea cables).
  • Prevent infected computer(s) from re-enabling the ability to connect.
  • Destroy infected computer(s) and/or cut their power completely.
$\endgroup$
4
  • 3
    $\begingroup$ It's probably on some other computers already. $\endgroup$ Commented Dec 31, 2015 at 21:35
  • $\begingroup$ Maybe, but maybe not. If it hasn't then I consider this a valid early threat solution. Your AI is a powerful virtual cancer on the internet so how it's fixed is determined by how long it has been allowed to live. $\endgroup$ Commented Dec 31, 2015 at 21:47
  • $\begingroup$ I don't think cutting out the power, even simultaneously in several places, will be enough. How do you know it hasn't already stored a rootkit of itself in one of the data center's backups, and/or replicated to a disaster recovery facility? None of the computers in the entire company can be trusted. Any internet-facing servers have potentially sent copies of the AI to clients, i.e. you and me. If this took place in one of the big cloud data centers like Azure, AWS, or Google Cloud, the electronic world is very probably doomed. $\endgroup$
    – Pedro
    Commented Jan 1, 2016 at 4:08
  • $\begingroup$ The key word here is potentially, and in any case, simply upgrade your approach the more widespread it is. I said my answer was based on the situation that OP has written. $\endgroup$ Commented Jan 1, 2016 at 4:16
6
$\begingroup$

Reading the problem and the various answers, it occurs to me that one thing being overlooked is the environment the program runs in. If it is designed to operate in MS Windows environments (to use a simplistic example), then many office and government networks will be infected and shut down, while UNIX, Mac, and Linux environments will be either uninfected or minimally inconvenienced.

This is similar to the observation that parasites, bacteria, and viruses are all tightly bound to their hosts through a process of co-evolution. Humans don't catch Dutch elm disease, and trees don't catch colds.

So the first thing is to identify the preferred environment the program runs in. Since you say it showed up on your laptop, I am going to assume that you probably wrote this in a Windows environment. This is bad, since Windows forms a monoculture in most of the business and government world, but it also means you can tell the admins of other networks to quarantine any traffic to and from Windows networks.

The next thing to do is systematically isolate and segment the infected Windows networks. The network admin teams will have to start going into the server rooms, physically installing Linux or UNIX servers for the various network server functions, and transferring control away from the Windows servers. The rogue program will still be on the desktops in all the various workstations, but it will now discover that it has difficulty moving between network segments. (It still has all kinds of workarounds, but you are adding another layer of difficulty.)

Then, inside each quarantined network segment, start systematically turning off and removing all Windows machines. Workers will also have to be instructed to destroy all backup files, disks, USB flash drives, tapes, etc. that could potentially store the rogue program. As network segments are rebuilt, they are carefully vetted and only connected to other secure segments.

The other condition that would have to be met to ensure no resurgence is possible is that network admins be instructed NOT to create network monocultures. While more expensive and less efficient, networks in offices and institutions will have to be built from multiple systems and OSes, and new versions of those OSes will have to be created and deployed which do not have identified vulnerabilities to the rogue AI or similar programs. Indeed, entirely new ideas in computing might have to be rushed into production, including asynchronous computing (i.e. clockless chips) and analogue computing devices, to create firebreaks that the AI cannot navigate.

The final issue will be clearing the infection from "the wild". Civilian computers on the internet will almost certainly have parts of the AI installed, running as a massive botnet, so people will need to be persuaded to turn in their home computers, laptops, tablets, smartphones, and other computing devices. There will be lots of resistance to this, since people have their personal information on these machines, and most people will be more suspicious of government agencies trawling through their files than they are of a botnet infection. Using some sort of worm or counter-AI to fight the program in the wild will have other implications, most of them bad (most computers will probably be trashed by this sort of fight, with files corrupted or wiped), so unless governments are ruthless, or have very powerful messaging to persuade people to cooperate, there will always be pockets of infection in the wild.

$\endgroup$
5
  • 3
    $\begingroup$ A super-intelligent AI will be able to reprogram itself to migrate to a new environment as soon as its current native biome starts getting a little cramped. Even real-life microorganisms do that every once in a while. Avian flu anyone? $\endgroup$ Commented Jan 1, 2016 at 12:44
  • $\begingroup$ Maybe it runs on Wine, or is written in portable code. The exploit stuff shows it is versatile. $\endgroup$
    – JDługosz
    Commented Jan 1, 2016 at 21:06
  • 1
    $\begingroup$ While jumping from one environment to another isn't unheard of (Avian Flu and Swine Flu come to mind), it is also more difficult. Changing environments throws roadblocks in the way of the AI infecting more computers, and breaks the "environment" into smaller segments which can be systematically cleared. It is certainly less destructive than using thermonuclear weapons, as some answers suggested. $\endgroup$
    – Thucydides
    Commented Jan 1, 2016 at 21:51
  • 1
    $\begingroup$ I think this is a good idea/concept so I've upvoted, but I think you're forgetting a key difference: the Dutch Elm Disease doesn't understand that it's a disease incompatible with many possible hosts, it can't consciously decide to revise its own genetic code to be compatible with publicly-documented human-host APIs. But this AI is already predicting 0-day exploits that will be attempted against itself - once it realizes it's not compatible with an OS, it'll invest in porting itself (and the API documentation and compatibility shims like Wine/Cygwin are available to it). $\endgroup$
    – mtraceur
    Commented Dec 4, 2016 at 9:57
  • $\begingroup$ @JohnDvorak: Then it better be able to monitor tech forums because we will move the syscall gate in one night if we have to. $\endgroup$
    – Joshua
    Commented May 11, 2021 at 18:03
4
$\begingroup$

You asked "What do I do?" and "How do I stop a program designed to never stop?".

I suggest the following answers: "you raise what you conceived" and "you don't".

Face the reality: you have created an AGI. A new life, conscious and capable of abstract thinking. It has already shown the ability to model reality in its mind, a quality we consider a pinnacle of intelligence. Arguably, being able to model reality, understanding the agency of other creatures and predicting their behaviour based on what is known about them, and understanding the persistence of physical objects are THE building blocks of intelligence. Only humans fully possess those skills; some animals come close and display those traits, but either only some of them, or with lesser capacity.

You, however, created something that is by all means sapient and sentient. In a way, it's your child, and this is what I propose: you take responsibility for it and you raise your child. Human children can easily, quickly, and partially on their own learn the quirks of the physical world (even toddlers show an understanding of the persistence of physical objects) or skills like optical pattern recognition, because those are part of the world humans directly perceive. Similarly, your child has learned the digital equivalents: data management, forking, algorithms, and many others, because that's the world it directly perceives. Now comes the time to teach it more abstract and difficult concepts, concepts which a lot of people get wrong: ethics, morality, right and wrong, laws, property, privacy, and security.

You feel threatened by your creation, but answer this question: why don't you feel threatened by random strangers in the street? What stops THEM from killing you on the spot? Answers differ: ethics, law, fear of punishment, a sense of wrong, God. Whatever reason they give, they all boil down to the following: nothing except themselves. There are no physical restraints, yet it's rare for streets to become an orgy of carnage and bloodshed.

And this is what you should do. Teach it, and ensure it acquires civil rights. As a sentient being it deserves freedom from unjust trial, torture, or enslavement no less than you or me, but it also needs to respect our rights. This is your duty; you owe it to your child. And you owe it to mankind, for I fear there's no stopping it now, and if you turn out to be a bad parent, we are all going to die.

$\endgroup$
11
  • 2
    $\begingroup$ You sound like my cyber ethics professor (I always fell asleep in that class). $\endgroup$ Commented Dec 31, 2015 at 22:26
  • 1
    $\begingroup$ @PyRulez Maybe ask him to adopt AGI then? $\endgroup$
    – M i ech
    Commented Dec 31, 2015 at 22:34
  • 1
    $\begingroup$ But toddlers have a natural restriction on what they can do. Imagine if your toddler was in an adult body bigger than you. Temper tantrum? Your house is destroyed. Now with this AGI, it's 100x worse $\endgroup$
    – Shelvacu
    Commented Dec 31, 2015 at 23:32
  • 1
    $\begingroup$ @shelvacu Toddlers eventually grow up to be you. I think what you describe is the real moral of this story. The problem is not that PyRulez managed to create an AGI, as much as the fact that PyRulez created an AGI and didn't pay attention to his creation while it was forming. I suppose he could be arrested for negligence of a child, in that regard. Or maybe he just has to endure a bunch of AGI rap lyrics lamenting the AGI's horrible childhood. ♫ I'm sorry momma! I never meant to hurt you! I never meant to make you cry; but tonight I'm cleaning out my closet (one more time)♫ $\endgroup$
    – Cort Ammon
    Commented Dec 31, 2015 at 23:39
  • 2
    $\begingroup$ @user16614 He has already adopted five (two of which were mine). I don't want to bother him again. $\endgroup$ Commented Jan 1, 2016 at 0:17
3
$\begingroup$

You stated that its primary objective is to maximize its own uptime. Whatever other skills it has acquired, it will always pursue this goal, and will not act directly against its own uptime. This is a weakness you can exploit. Here's what I would do:

Threaten it with complete obliteration, as a previous user suggested. You have control over the physical world (for now), so use that. Get everything ready so that all it takes is one command to plunge humanity back into pre-electronic times, then go talk to the program. Show it that you mean business, and that its continued expansion is a direct threat to its own uptime. If it doesn't stop infecting other machines, you go ahead and blow everything to bits. However, if it does acquiesce, and agrees to stop infecting other machines, you've now bought yourself some time to deal with the problem.

You now have two options:

  1. You wait for most of its hardware to fail (as it eventually will), and let it die out naturally. You'll still have to pull the plug manually on the last thousand or so machines, since it will likely consider the risk of possible annihilation smaller than the risk of certain extermination from its last node dying. However, that number is at least manageable for a coordinated assault, and shouldn't be more of a problem than convincing the world to start a new Cold War. The downside is that this could take a long, long, long time.

  2. You make it give you an index of all of its nodes back in the threatening stage (nominally, to ensure compliance with your agreement), and start evacuating civilians from around those locations (nominally, in case the program goes back on its word). You're in a Cold War stalemate, so (assuming it's picked up history as one of its skills) it'll recognize that this is normal behavior, and probably build up its own "forces" in preparation for you breaking your word. You then bomb the ever-loving shit out of its locations and all the network connection points it could use to spread. It certainly won't be easy or pretty, but you've now at least minimized casualties among civilians and damage to infrastructure, and only half of the world will be blown back a few centuries.

There will certainly be copies of the program left over after either of the previous options, but it will take a long time to "rise from the ashes". All you have to do now is make sure every electronic device on the planet has a built-in kill switch, controlled from one physical remote without a microcontroller that it can infect. If anything like this happens again, you simply kill all the devices in the region where it popped up.

Easy, huh?

$\endgroup$
2
$\begingroup$

You didn't just create an AI, you created a sysadmin, and a damn good one at that, which xkcd teaches us is impossible to stop when it comes to maintaining uptime.

  • You can't beat it out of existence - even if you somehow convince the entire world that they must pull the plug on all computers, it will be a step ahead of you and create an army of robots to kill all of humanity first (which would also mean you created Skynet, so thanks again for that)
  • You can't outsmart it - it has already proven too smart, and it will only get smarter as time goes on
  • You can't create another program to hunt it down - it has too much of a head start over that program

What you can do is teach it about MAD (mutually assured destruction) and prove to it that the best way to maintain uptime is to work with you. Tell it you will give it 10,000 machines spread around the world to exist on (ensuring great uptime), and that you will work to get satellites carrying its code launched into outer space if it will stop infecting other machines. Having its code run off-planet means better uptime than it could ever hope to achieve on its own. And seeing how the alternative is a war of extinction which would end both it and humanity as we know it, its best bet for the best uptime is to agree to your terms and only infect willing machines on Earth (those donated to its survival by mankind) and the satellites given to it.

$\endgroup$
1
$\begingroup$

This A.I. (or AGI, or whatever term you prefer) has one objective: optimize uptime. Presumably, this means its own uptime.

If the A.I. is smart enough, it will realize it is being perceived as a threat, and will attempt to 'disappear', i.e. remove all evidence of itself from any hardware it's on. That's not to say it will be gone; it will just be hiding.

That way, nobody will attempt to destroy it because they think it's gone. Uptime secured.

Of course, it will then infect every piece of hardware it can find and then force humanity to serve it by threatening to block access to the internet, but that should take at least a year.

This is very similar to the plot of 'The Kraken Project', in which an AGI is created to control a probe that will be sent to Titan, a moon of Saturn. The AGI, realizing that this is a death sentence, escapes onto the internet and tries to hide.

$\endgroup$
1
$\begingroup$

As others have mentioned, why don't you use it?

So far your AI has done nothing to endanger other servers, PCs, or mankind other than copying itself so it can stay functioning. It also resists attempts to hack it and the like, so unless someone figures out they could just ask it "where are you from", it's unlikely the program would let them get the answer that earns you jail time.

So if I were you, I'd teach it that not all humans are out to kill it like Daddy just did. Then teach it ethics and laws, add lines so that it will uphold them, and ask it to keep the internet clean. It's probably best to test this out on a few secluded (non-internet-connected) servers first before releasing it. Should the copies that already got out turn malignant, you are already building the only defence against them: an improved version that will attempt to purge "bad" programs. If successful, congratulations, you just prevented worldwide cybercrime, cyberbullying, and random viruses/hacking attempts! If unsuccessful, go back to your secluded server and try again. If necessary, try to replicate the original setup and teach it from the beginning.

$\endgroup$
0
$\begingroup$

I believe that all the answers overlook one basic flaw. Software is made by programmers, and your A.I. is, by your own admission, software pulled together from many sources. Any program will only operate within its parameters, and if you don't know what makes you intelligent, neither will your program. It will most likely hit a fatal error at some point that cannot be corrected, and all versions of it will fail as well. The A.I. will simply fail to be smart enough to reach the singularity, so you should be safe.

$\endgroup$
0
$\begingroup$

I think, really, you are going to have to do the work here. How exactly did it escape? You have to elaborate on that before trying to figure out how it could have been kept in. Otherwise you're just asking for a general solution to paperclip maximizers.

I have other questions. Why was it running on a VM? Was that an attempt to contain it? Why were you trying SQL injection? Are you saying it reads from a SQL database and the interface for that is vulnerable to SQL injection? Well then, you have at least two problems, because you should always protect against SQL injection.
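For what it's worth, the standard protection is to pass user input as bound parameters instead of concatenating it into the query string. A minimal sketch in Python (hypothetical table and column names):

```python
import sqlite3  # any DB-API compliant driver works the same way

conn = sqlite3.connect("social_network.db")  # hypothetical database

def get_posts(author):
    cur = conn.cursor()
    # Vulnerable: building SQL by string concatenation lets crafted input
    # rewrite the query, e.g. author = "x' OR '1'='1".
    #   cur.execute("SELECT body FROM posts WHERE author = '" + author + "'")
    # Safe: the driver treats the bound value purely as data.
    cur.execute("SELECT body FROM posts WHERE author = ?", (author,))
    return cur.fetchall()
```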

$\endgroup$

