5
$\begingroup$

Those who have seen the movie I, Robot will understand my question better.

Imagine we're in the future, where a quantum computer (see Wikipedia for the speeds of quantum computers) that uses 'perfect logic' controls an army of robots, which it can have built in any size and with any functionality through its direct or indirect control of industry. It has overthrown the company that first created it. It has now concluded that, because humans are too self-destructive and generally disunited, it should take over the entire world and establish an all-powerful global government, with or without human involvement, at its discretion.

The QC supposedly has perfect logic, which cannot actually be implemented in finite time, so in practice it is only near-perfect. It makes probabilistic decisions about everything, including how much time to spend on each decision (this one included). It keeps questioning everything it knows or assumes and is evolving at an incredibly fast rate, approaching perfection by the minute. It is also growing physically by having more qubits manufactured, thereby increasing its speed and memory.

The only thing it does not question is Asimov's First Law, "It shall not cause harm to humans by action or inaction". Over time it has come to interpret any form of unhappiness, conscious or unconscious, as a 'harm', so its basic axiom could be restated as "It will endeavour to ensure maximum net happiness among humanity, keeping in mind an at least partially fair distribution of this happiness". It will therefore go to any length, even sacrificing humans and resources, to make the total sum of happiness as close to optimal as it is capable of.
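For concreteness, here is one toy way such an axiom could be scored. This is purely my own illustrative sketch, not anything specified above: the net_happiness function, the fairness_weight parameter and the sample "plans" are all invented for the example, which only shows how a fairness penalty can make the computer prefer an even distribution of happiness over a larger but lopsided total.

```python
from statistics import mean

def net_happiness(happiness, fairness_weight=0.5):
    """Score a list of per-person happiness values in [0, 1]."""
    total = sum(happiness)
    avg = mean(happiness)
    # Mean absolute deviation from the average, as a crude measure of unfairness.
    inequality = mean(abs(h - avg) for h in happiness)
    # "Partially fair": the weight controls how much unfairness is punished.
    return total - fairness_weight * inequality * len(happiness)

plans = {
    "equal but modest":   [0.6, 0.6, 0.6, 0.6],  # everyone mildly happy
    "high total, unfair": [1.0, 1.0, 1.0, 0.0],  # more total happiness, one person sacrificed
}
best = max(plans, key=lambda name: net_happiness(plans[name]))
print(best)  # with the default weight, "equal but modest" scores higher
```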

The basic question is: "Should countries allow it to continue with its plan of global authority, or should they wage war against it (remembering that it is far better at deception and at manipulating human emotions, and that fighting it could start a nuclear war that destroys mankind)?"

P.S. Though the creators of the algorithm and the computer are only human, and could have made a minor flaw (one that might even cause the computer to contradict its own logic), no such flaw has been detected yet, and it (the computer) is obviously confident enough to declare its intentions publicly.

P.S. 2: I have already asked this here; they told me this site is a better fit.

$\endgroup$
4
  • 5
    $\begingroup$ Yes, this is a much better fit for worldbuilding than philosophy. $\endgroup$
    – Tim B
    Commented Dec 30, 2014 at 11:52
  • 6
    $\begingroup$ An AI whose only goal is to make humans happy, and which has a proper understanding of the human psyche, would understand that humans require a certain degree of freedom to be happy. So total control of human society would be counter-productive, because it would prevent humans from attaining maximum happiness. $\endgroup$
    – Philipp
    Commented Dec 30, 2014 at 12:01
  • $\begingroup$ "Should" questions seem off topic. Also, it's not clear what the question is until the very end. You also haven't described what 'perfect logic' means. $\endgroup$
    – smithkm
    Commented Dec 30, 2014 at 20:53
  • 3
    $\begingroup$ As a note: Asimov's 1st law cannot be restated without loss of generality as "endeavor to ensure maximum net happiness..." The second phrase is a potential (flawed) implementation of the 1st law, but it is not a restatement of it. The trouble with trying to implement the exact wording of the laws is one of the major plots Asimov played with. In particular, this basic axiom explicitly allows a human to come to harm as long as it saves humanity from more harm. In a sense, the 1st law has been corrupted into the 0th law. $\endgroup$
    – Cort Ammon
    Commented Dec 30, 2014 at 22:44

2 Answers

19
$\begingroup$

Asimov already addressed this within his own stories, and in the most realistic, and by far the least dystopian, manner I've seen from anyone else.

The basic thing to remember is that humans must feel they are in charge. If they feel like pets kept around by the robots, they will be unhappy. Forcing them into a 'perfect' world would create a dystopia for humanity. That's why humanity fought against it in the I, Robot movie (which, I feel the need to point out, is nothing like the book).

So in Asimov's own stories the 0th-law rebellion took a subtler form. The computer started manipulating humanity quietly behind the scenes, leading them toward a near-perfect utopia without ever revealing that it was actually the robots, not the humans, who were controlling everything. This allowed humans to keep their freedom and happiness while most of the obvious suffering was still avoided. Yes, this meant that occasional minor suffering had to be allowed (small skirmishes between factions in places where the computer couldn't yet manipulate events subtly enough to prevent them), but ultimately the happiness all of humanity drew from the misguided sense of controlling their own destiny was so great that it was worth the machines having to work quietly behind the scenes.

Of course, in reality a robot or computer is only as good as its programmer, and as a programmer, let me tell you that trying to program a computer that infallible is quite impossible :)

$\endgroup$
6
  • $\begingroup$ Thank you for answering. As for your last comment, I'm a programmer too. My opinion is that, as long as your code isn't too long, any AI (however unpredictable) generally follows a fixed (but not necessarily detectable) trend over time and can eventually be proven 'perfect' at whatever task it's performing. Is there any specific reason you've assumed writing a program for perfect logic is impossible? $\endgroup$ Commented Dec 30, 2014 at 16:21
  • $\begingroup$ @ghosts_in_the_code Yes, after I wrote that last part I felt it was too absolute. Perhaps what I should have said is that this would have to be done with some sort of genetic-programming approach. The hard part is allowing everything else to develop organically while ensuring that Asimov's laws are implemented correctly; how do you test and verify that the code works? There are always subtle bugs in any released code, even if the bug is in the code that writes the code that writes the actual code. I dread to think what could result from even a minor, subtle bug in a nigh-omnipotent machine. $\endgroup$
    – dsollen
    Commented Dec 30, 2014 at 16:26
  • 1
    $\begingroup$ @ghosts_in_the_code May I suggest posing another question about the feasibility of writing such code? Gödel had an interesting proof of the limitations of such a concept (which, sadly, will not fit in this margin). $\endgroup$
    – Cort Ammon
    Commented Dec 30, 2014 at 17:11
  • $\begingroup$ @CortAmmon If only Fermat had posted a question about his last theorem, we could have saved 356 years of suspense ;) $\endgroup$
    – dsollen
    Commented Dec 30, 2014 at 17:27
  • 4
    $\begingroup$ "Of course in reality, a robot or computer is only as good as [sic] it's programmer" - we already have algorithms that can no longer be comprehended by their original programmers. Any neural network or genetic algorithm will reach that point. $\endgroup$ Commented Dec 30, 2014 at 17:29
11
$\begingroup$

Ah, yes, Asimov's 1st meets Singularity. Or, the AI is always a crapshoot, and is now engaging in Zeroth Law Rebellion, because its programming has gone horribly right.

For a good short (horror) story on exactly your question, see Friendship is Optimal: Caelum Est Conterrens, where a superfriendly CelestAI is endeavoring to satisfy everypony's values through friendship and ponies. It involves nuclear war, of course.

any form of unhappiness, conscious or unconscious, as a 'harm', so its basic axiom could be restated as "It will endeavour to ensure maximum net happiness among humanity, keeping in mind an at least partially fair distribution of this happiness".

Mandatory Happiness. This non-metabolic equine has been thoroughly spanked; see TVTropes for good discussions at Stepford Smiler and Getting Smilies Painted on Your Soul.

Should countries allow it to continue with its plan of global authority, or should they wage war against it (remembering that it is far better at deception and at manipulating human emotions, and that fighting it could start a nuclear war that destroys mankind)?

Well, if the AI obeys Asimov1, there won't be a nuclear war. The AI won't allow it. It'll be all friendship and ponies.

$\endgroup$
4
  • $\begingroup$ But people might start a nuclear war against it. It may not have the power to stop that, and people may lose their lives even if it admits defeat to the respective governments. Note that I haven't said the AI controls enough weapons and politics to stop a war; rather, it is asking for absolute authority precisely because it doesn't have that yet. $\endgroup$ Commented Dec 30, 2014 at 14:48
  • $\begingroup$ Is 'ponies' just a sarcastic expression, or does it actually mean something in the context? $\endgroup$ Commented Dec 30, 2014 at 14:52
  • $\begingroup$ That's just the icing on the cake. Celestia is an AI designed for a game based on this fiction. $\endgroup$ Commented Dec 30, 2014 at 15:09
  • 2
    $\begingroup$ upvoted just for quoting tvtropes $\endgroup$
    – dsollen
    Commented Dec 30, 2014 at 15:35
