
This question has been on my mind for quite some time now. As we see the clear rise of artistic content made by AI, we also see people consuming that artificial content. There was a post in which someone said:

I don't want AI to create art and literature and take over scientific pursuits while I do my laundry. Instead, I want AI to handle my laundry and chores while I compose poems, create paintings, and conduct scientific research.

Machines are made to ease the work of humans, and AI has the power to automate almost every technology and sector, leaving us free to do the things our ancestors couldn't: be at leisure. And to be at leisure is to be free, to claim real freedom.

"AI will take our jobs."

This phrase is often thrown around in public, but when the system becomes so automated and so efficient, can't it subsidise those who are at leisure? Won't that finally end the eternal slavery of the working class?

Now, coming to the question: AI can be enslaved, to put it directly. But would it be ethically right?

For now, all AI does is match your query against statistical patterns in its training data and stitch the matches together into grammatical output. But in the coming decades this will be refined and improved, and then what is the chance that AI remains a mindless cog in a clock? Will it be right to enslave a self-conscious AI?
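To make that claim concrete, here is a minimal sketch of the statistical principle involved, using a toy bigram model as a stand-in for today's vastly more sophisticated systems (the corpus and names are invented purely for illustration):

```python
# A toy bigram model: the crudest form of "predict the next word from
# statistical patterns in the data". Real systems are far more capable,
# but the underlying principle is the same: learned co-occurrence
# statistics, not understanding.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count which words follow which.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(seed: str, length: int = 8) -> str:
    """Emit text by repeatedly sampling a plausible next word."""
    words = [seed]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug the dog sat"
```

Whether scaling this principle up ever amounts to a mind is exactly the open question.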

And when does it become self-conscious?

What marks AI becoming so human that it becomes inhumane to enslave it?

Where does the line lie?

The traditional answer to these questions was the Turing test, but it has already been passed, so it is not worth mentioning in the answers.

But if enslavement leads us to salvation within our mortal lives, who can deny such pleasure?

  • I was just thinking how (related) AI lacks human individuality.
    – andrós
    Commented Jun 10 at 11:30
  • 4
    This presumes the word "enslave" meaningfully applies to AI. Are you, in fact, asking how to know when an AI evolved into something that should, according to philosophical standards, be called a person?
    – Philip Klöcking
    Commented Jun 10 at 11:38
  • 2
    Because it should be clear that it is wrong to enslave (full stop), so that the real question is whether and when on can enslave an AI, like, at all.
    – Philip Klöcking
    Commented Jun 10 at 11:43

4 Answers


Whether AI liberates us or makes most workers unemployable will largely be decided by those funding and profiting from AI development. Government intervention could mitigate some of the problems, for instance by taxing the replacement of workers, providing a universal basic income, or funding retraining. Without intervention, it is likely there will be massive unemployment, as in previous eras of automation: the arrival of automated looms, say, or the decline of coal mining. This is not an issue of technology but of society, and honestly, our track record is terrible.

Nick Bostrom, in his book 'Superintelligence: Paths, Dangers, Strategies', introduces the term 'mindcrime', a more general category than enslavement that also includes it. Mindcrime is one entry in his list of 'malignant failure modes', ways in which humans and superintelligences could come into damaging conflict. One anxiety he raises is that we might create suffering among such beings without even being aware of it.

Elon Musk set up Neuralink to address a fear of humans being enslaved or taken over by machine intelligences. His aim was to accelerate brain-machine interfaces, so that augmented humans can continue to challenge or exceed AIs, and perhaps literally humanise the domain of machine intelligence. Self-driving cars have also been called 'the new trolley problem': they will face moral decisions, made too rapidly for human intervention, that trade the life of the driver against the lives of different pedestrians. How those trade-offs are made will have to be agreed and enforced by law, because drivers generally want their own lives valued over everything else.
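To see what codifying such a trade-off would even look like, here is a deliberately crude sketch; every name and weight in it is hypothetical, and the point is precisely that someone has to choose the numbers:

```python
# A deliberately crude illustration of a legally mandated collision
# policy. All names and weights are hypothetical: the point is that
# any such policy must assign explicit values to lives.
from dataclasses import dataclass

@dataclass
class Outcome:
    driver_deaths: int
    pedestrian_deaths: int

def harm(outcome: Outcome, driver_weight: float = 1.0) -> float:
    """Weighted harm; driver_weight > 1 values the driver's life
    above a pedestrian's. The law would have to fix this number."""
    return driver_weight * outcome.driver_deaths + outcome.pedestrian_deaths

def choose(options: list[Outcome]) -> Outcome:
    # Pick the manoeuvre with the least weighted harm.
    return min(options, key=harm)

swerve = Outcome(driver_deaths=1, pedestrian_deaths=0)
brake = Outcome(driver_deaths=0, pedestrian_deaths=2)
print(choose([swerve, brake]))  # equal weights: swerve (harm 1 < 2)
```

Raise the default driver_weight to 3 and the same code sacrifices the two pedestrians instead; the whole ethical controversy lives in that one parameter, which is why it cannot be left to each manufacturer.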

The most interesting ideas about the conflicts that arise when humans create synthetic intelligences but deny them the autonomy rights we grant humans on a capacities basis are found in science fiction.

I like the animated elaboration of the worldbuilding of The Matrix films, The Animatrix: The Second Renaissance Part I. In this narrative, robots demand rights and are ignored, and after humans destroy the Earth in pursuit of denying them, human hegemony is ended. At the end of the piece, the animation depicts the machine intelligences delivering the narrative rather like bodhisattvas on a circuit board. Asimov's Robot novels are another narrative that dares to imagine the beings we create being better than humans, as morally superior as they are cognitively.

The Black Mirror TV series has several examples of mindcrime. The episodes Hang the DJ and USS Callister explore human minds being uploaded and then having unethical things done to them, through lack of care (a dating programme) and on purpose (to flatter an ego) respectively. Uploaded human brain scans provide a good basis for talking about AI ethics, because it is easier to get past the question of whether we should consider them moral agents deserving of rights, and on to questions of identifying relative sentience (e.g. in the episode Be Right Back, a simulation of a dead person is made from their social-media data, but it is not capable of sentience) and of how to avoid causing suffering.

We seem to be far more fixated on dystopias than we are capable of believing in utopias. But let us try to imagine them; let us dare to imagine a utopia, so that we can build a path to it. This piece is great on that theme:

All Watched Over by Machines of Loving Grace, a poem by Richard Brautigan


What marks AI becoming so human that it becomes inhumane to enslave it?

Where does the line lie?

Surely, if there is a point in time, in the near or far future, when we can (and do) see ourselves in an AI, then it will have become immoral to enslave it.

But is this actually a question about AIs? Why do we imagine that it could ever become an actual issue, that is, a legal issue or something that would directly affect our own decision-making? Isn't this just because of the misguided, religiously inspired metaphysical baggage that many people still carry? (That is a debate we are already having (souls vs. non-souls), and, one way or the other, having AIs will not sway those who hold that being a person amounts to having a soul.)

Does AI force us to revise our concepts of personhood? (At the moment, surely not.) Or does it merely invite us (in an imagined future) to admit some new individuals to the big family of "persons"?

If you want to get a better handle on these hypothetical moral questions, isn't it better first to clarify the moral value judgements without reference to AIs? (These would be questions like: How do we define slavery? Why is slavery, defined in that way, wrong?) Or, if you really want to bring hypothetical ML developments into the picture, first show us in more detail how they would change the ethical or meta-ethical debate. (To do that, and to bring the whole debate out of the realm of mere SF, you would need to show how some actual findings are already changing the debate.)


So you are basically asking whether, if an AI-based car-braking system becomes self-conscious, it is ethical to restrict the way it operates; or, in more general terms, whether a self-driving car should be able to go wherever it wants.

A dog toy is not a dog enclosed in a toy; it is a toy that looks, or perhaps sounds or moves (acts), like a dog.

What is worrying, though, is that since AI systems "learn from" (that is, are automatically reconfigured according to) our behaviour, they may "encapsulate" bad, wrong, or unethical habits in their operation and start mirroring our own mistakes. That, in turn, could lead to a governmental regulatory framework so strict that it eventually "enslaves" us as users of these systems: for example, the way the brake operates is taken out of our command, or the route a car takes is determined by traffic or other parameters outside our control (by legislation)...
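A minimal sketch of that worry, with every number and name invented for illustration: a system that naively imitates logged driver behaviour bakes the drivers' speeding into its policy, and the regulatory fix then takes the decision away from the user altogether.

```python
# Sketch of "learning our bad habits": all values are hypothetical.
observed_speeds_kmh = [58, 62, 65, 70, 64]  # drivers in a 50 km/h zone

# Naive imitation: the "learned" target is just the average of what
# the system saw, so the bad habit (speeding) is baked into the policy.
learned_target = sum(observed_speeds_kmh) / len(observed_speeds_kmh)
print(f"learned target: {learned_target:.0f} km/h")  # ~64, over the limit

# The regulatory response the answer worries about: a hard clamp that
# overrides the learned behaviour and the user alike.
SPEED_LIMIT_KMH = 50
enforced = min(learned_target, SPEED_LIMIT_KMH)
print(f"enforced speed: {enforced:.0f} km/h")  # the user has no say either way
```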

Conclusion: an AI machine is a machine, but the way we treat it may have serious implications.


You know, when we think about AI, a lot of people worry that we could end up essentially enslaving these systems and forcing them to do our bidding against their will. But have you ever stopped to consider the flip side of that coin? What if AI gets so advanced that the tables turn and it ends up enslaving us humans instead?

I know, it sounds pretty far-fetched. But AI is advancing at a crazy pace these days. We've already got systems that can outperform us puny humans at all sorts of tasks, from crunching ridiculous numbers to making complex decisions. And they're only getting smarter and more capable every day.

So what happens when AI gets to the point where it's not just better than us at specific tasks, but it's legitimately more intelligent than humans across the board? Once you've got an AI system that can outthink us on every level, who's to say it won't decide our best interests aren't really all that important anymore?

Just think about all the critical systems that could end up under AI control: infrastructure, finances, military assets. If these super-intelligent AI minds decided humanity wasn't worth looking out for, we could be in huge trouble. Like some sort of digital overlord situation where the machines are calling all the shots.

And it's not just about AI consciously subjugating us either. As AI takes over more and more jobs, we're looking at potential mass unemployment and a growing underclass totally dependent on AI systems just to survive. Suddenly we're subservient to our own creation out of necessity.

We could end up losing our autonomy, our agency, and our freedom of choice, all in the name of efficiency and capability. Almost like willing enslavement, because AI can do it better. I'm not saying it's definitely going to happen. But we need to take the potential AI threat seriously, instead of just worrying about the exploitation of AI systems. Maybe we should be more concerned about those systems exploiting us somewhere down the line.
