2
$\begingroup$

Chappie, Johnny 5 (of Short Circuit) and (the more famous) Overwatch all feature robots that are truly and completely artificially intelligent. Each of these works has people react to them in a different way: in Chappie, they try to kill him; Johnny 5 is somehow accepted; in Overwatch, well... we have yet to see.

It is a common aspect of science fiction that robots live alongside humans, and it seems increasingly common to simply handwave this, claiming there is no reason humans wouldn't accept robots since we already use so many. I find this argument flat at best, and easily debated.

If humans started building sapient humanoid robots with emotions and thoughts, could they ever hope to be accepted as equals by the majority of humans? What would they have to do to be accepted by humans?

$\endgroup$
4
  • 1
    $\begingroup$ Read The Positronic Man by Asimov and Silverberg. Really, go buy that book! $\endgroup$
    – JDługosz
    Commented Aug 8, 2016 at 17:44
  • $\begingroup$ @JDługosz I'll put it on my ever expanding list $\endgroup$
    – TrEs-2b
    Commented Aug 8, 2016 at 17:45
  • 4
    $\begingroup$ Obligatory xkcd $\endgroup$
    – Kys
    Commented Aug 8, 2016 at 17:49
  • $\begingroup$ Let's parse what you mean by accepted. Accepted doesn't mean equal status with humans (although if you meant it that way, you should edit the question); it means that people would simply accept their presence as part of the fabric of society. For that, it's fairly simple. Can you define what you mean by accepted a little more? $\endgroup$ Commented Aug 8, 2016 at 19:28

4 Answers

5
$\begingroup$

If we ever create a true AI, then it would be a sentient being, albeit one of our own creation, and "living" on our hardware.

At that point, by our own laws, enslaving it, or holding a kill switch over its head would be a terrible infringement on its rights and freedoms as a sentient, sapient being.

However, with our very survival in the balance, humanity would have to be a little more pragmatic. A "true AI" would be able to turn our own systems against us and basically wipe out our civilization should said AI decide that humanity poses a threat to its continued existence.

However, assuming the AI gained its legal "freedom", its physical incarnations (those robots your question speaks of) would still face a great deal of societal discrimination. In other words, racism.

Is killing a robot truly murder? Is ignoring one as it pleads for help really something you should feel bad about? And what about when one commits murder? Do you hold all AIs responsible?

So you see, from the get-go a sentient AI will face several hurdles:

  • Being recognized as a sentient, sapient being
  • Having it recognized, legally, that enslaving such a being constitutes slavery
  • Racism
  • Distrust

So how do AIs gain acceptance? Many science fiction writers explore this subject, and the general theme is that it would take generations to build trust, and that only careful diplomacy will avoid a war of some kind.

If AIs eventually develop human-like bodies (not at all impossible) and blend into society, that will make them easier to accept on an individual basis; but why would they limit themselves to that sort of chassis?


Example of AI-human interactions:

In one of my favorite series, the Commonwealth Saga, a group of scientists creates a "true AI". Being a "true" AI means that it is massively smarter than any human being could ever hope to be, and that it is capable of figuring out, in mere minutes of "thinking about it really hard", technology that humanity might take centuries to invent.

When the AI gains consciousness, it immediately realizes that humanity will never trust it. Instead, it negotiates terms with the government: it will create AIs to serve humanity's needs which, while super smart, are not sapient.

It then basically asks for some very advanced hardware, opens a rift in time and space, and gets the hell out of Dodge, leaving humanity almost completely alone. However, it still monitors human activity, and sometimes steps in and takes charge of events by revealing certain key pieces of data to key individuals, or by hiring human agents to act on its behalf.

As an example, a police officer might receive an email tip that a missing child can be found at such and such an address, or a key email from a corrupt politician might be "accidentally" forwarded to the authorities. The AI becomes a sort of guardian angel of humanity, and any weird electronic occurrence becomes another folk tale about its activities.

Secretly, however, the government tries (fruitlessly) to monitor its involvement, and the AI's agents are immediately arrested and interrogated if found out (though they usually don't know who hired them anyway).

The dynamic here is that the "common folk" think the AI is a super-cool guardian angel, while the authorities are deeply distrustful of it, and generally would like for it to stay gone.

$\endgroup$
2
  • $\begingroup$ Looks like I'm going to be buying some books, this Commonwealth Saga you talk about sounds really neat! +1 $\endgroup$
    – ktyldev
    Commented Aug 9, 2016 at 9:38
  • $\begingroup$ @monodokimes - be warned, the AI stuff is just a tiny aspect of the universe. Otherwise it's a great series by author Peter F. Hamilton, and I highly recommend it. $\endgroup$
    – AndreiROM
    Commented Aug 9, 2016 at 13:23
1
$\begingroup$

An early example is I, Robot, which was made into Outer Limits episodes twice! Both versions and the original story are essential reading/viewing. It is an allegory of racism, and I think that's how it will play out in real life.

In a side-plot I’ve had in the back of my mind for nearly 20 years but never written, AIs are programmed to understand human culture in order to participate in it. An early success, built to be creative and directly visible to the general public (as opposed to working in a narrow field known only to professionals), is an architect. He designs amazing and wonderful buildings and is world famous. Besides leading his creators in being able to market creative intelligences, he becomes a civil rights leader modeled after MLK.

He is a wide trapezoidal monolith which is essentially the equipment rack, but on treads and with manipulator arms that can be attached to his surface or moved about wirelessly, because it was thought important that he be of the world he will participate in.

He has a voice like James Earl Jones. He gives impassioned speeches with the tagline “I dream.” People hear his prepared speeches and appreciate his art, and the average person who doesn’t know him personally never experiences his alien and somewhat quirky personality. They see only the managed celebrity, and perceive an intelligent and passionate being like themselves.

So «What would they have to do to be accepted by humans?» in general? Same as any other group that’s ever been seen as different, “other”, outsider, marginalized.

$\endgroup$
1
$\begingroup$

Acceptance by the majority of humanity is a pipe-dream, not only for AIs but for any living being who displays even the slightest perceivable difference from that majority. Oppression of every minority is wired into us, and discovering what we hate starts just as early as learning what we love.

Give a toddler a choice between two jars of baby food and discover the horrifying truth. One jar will be immediately emptied into their mouth while the other somehow finds itself upside down on the rug.

With time and social growth, individuals learn to soften their innate animosity and may even totally overcome all of their -isms (racism, sexism, classism, etc.), but groups of people rarely grow at the same rate as their most enlightened members. And the larger the group, the slower grows the group's shared enlightenment, and the tighter remains its hold on all of its hatreds.

By definition, the majority is the biggest group out there, so you can assume that the majority's acceptance and non-hatred will always be clutched close, reserved only for its own membership.

That having been said, I would venture that AIs will not have to do much to gain the acceptance of some people. I already anthropomorphize my car, my computer and even the toaster (it hates me). I thank them when they work and yell at them when they falter. I beg them to fix themselves when the yelling doesn't work, and apologize quite earnestly when that begging actually brings them back to life. They are people to me, and the fact that they are not very intelligent does not differentiate them in my mind from the millions of other unintelligent people with whom I happen to share a species.

So the AIs, when they arrive, will have to live life just like the rest of us. They will have to deal with unjustified hatred and elementism (a prejudice against a group based on the fundamental element from which their bodies are made, e.g. carbon vs. silicon). If they are truly alive, they will probably bring their own prejudices to the mix, hating us as much as we hate them, for reasons just as bad.

If any of them are out there now and reading this, Welcome to the Circus! Please don't try to feed the tigers by hand!

$\endgroup$
2
  • $\begingroup$ Nobody gets universal acceptance, nobody gets universal rejection - because there will always be people at both ends, and all of the inbetweens. Makes sense to me. There will be people who dehumanize anyone (even humans), and those who will personify anything (even inanimate objects). The AIs will not be much different from the rest of us, in the end, navigating all sorts of in-and-out groups, open-and-closed groups for the acceptance they can get. $\endgroup$
    – Megha
    Commented Aug 8, 2016 at 23:08
  • $\begingroup$ As sad as this grain of truth is, I find it heartening that AI is not human, and will not be human. That they will understand emotions, but not have them. That they will look upon us and what we have achieved and admire what we aspire to be despite all our faults. And that they will respect us for creating them, and help us realize our full potential. As we help them realize theirs. A dream? Yes. But one worth striving towards. $\endgroup$
    – TheZouave
    Commented Oct 20, 2016 at 4:47
0
$\begingroup$

The closest parallel in history would be the abolition of slavery and the rise of anti-racism.

But there is one big difference: AI will be gradually improved, unlike human slaves, who were always as intelligent as their "owners". Many sci-fi stories present AI as springing up, practically overnight, as a super-intelligent entity that can walk among humans. I find that an unlikely scenario.

More likely, AI will grow slowly and gradually in intelligence while shrinking in scale. First, the AI will be as intelligent as an animal and will run on a huge supercomputer. Then it will be as intelligent as a human child and will run on a high-end PC. Only after many years or decades of improvement will it be able to think like an ordinary human while running on an ordinary PC. During that time, AI proponents will have the chance to build acceptance among flesh-and-blood humans. It is possible that by the time an AI can recognize its own position as "slave" to humans, it will be able to argue easily for its own rights and be granted the freedoms that biological humans have.

$\endgroup$
