3

This question stems from a Black Mirror episode I watched. On one hand, it's just computer code, so who really cares; on the other, it really thinks it is human and thinks it has feelings.

What are some references to philosophical discussions of this question?

7
  • 3
    See the following question and its accepted answer: philosophy.stackexchange.com/questions/34779/… Commented Oct 1, 2017 at 1:54
  • 1
    I'm voting to close this question as off-topic because it is scientific speculation. Commented Oct 1, 2017 at 9:40
  • 2
    No, it does not "really think" it is human.
    – Marxos
    Commented Oct 2, 2017 at 1:18
  • You're just carbs and water, but it's wrong to torture you. An AI that was truly aware and feels pain would obviously be wrong to torture. The principles involved are simple, but the details... knowing when an AI has reached that point... are a problem. Some people think it cannot happen, and I certainly hope those people are willing to change their minds if it does. Otherwise, they'd have no ethical restrictions on how they treat an AI. Commented Oct 2, 2017 at 20:35
  • 1
    It is wrong, because that is a Dasein.
    – tac
    Commented Jul 1, 2023 at 3:41

6 Answers

2

Your question can be split into two components:

  1. You have to figure out whether an AI actually experiences pain. See the question "Is the simulation of emotional states equivalent to actually experiencing emotions?" and its accepted answer for responses to this. If you consider the answer to be "no", then the AI doesn't really feel pain, torturing it isn't real torture, and the question is moot.

  2. If the answer is "yes, an AI can experience pain", then the question becomes a much more complicated one: when is it acceptable to torture a conscious being, and when is it unacceptable to do so? You might be tempted to answer off the bat that it is never acceptable, but it is not so simple. We frequently torture and endanger animals for various reasons (food, medical experiments, sometimes even entertainment), mostly because we consider the well-being of humans to be more important than the well-being of animals. Most people accept this, but some object, and therefore disapprove of the consumption of animal products, of hunting, and of using mice for medical testing.

    Now imagine the following scenario: in the future we have fully conscious AIs, who truly experience pain, joy, etc. You might think it would be completely immoral to torture such an AI, but future psychologists tell you that they might find a way to cure various mental illnesses, such as depression and schizophrenia, using purely psychological methods (therapy, hypnosis). To do so, they need to test their therapies and methods on real cases, and they can't test them on human patients because they fear the methods might have serious side effects. The only way they can test them is on fully developed AIs that have been tortured to induce the various psychological disorders that need to be cured. Is it then acceptable to torture an AI or not?

3
  • 2
    A further angle is the Kantian approach to torturing animals, e.g. it's wrong because of what it trains us to be rather than what it does to them.
    – virmaior
    Commented Oct 1, 2017 at 3:54
  • In point one it seems you're assuming that an AI is 'simulating' emotional states. That language carries a presumption. You and I don't simulate emotional states. We have them as a consequence of our psychology. It shouldn't be taken as a given that conscious aspects of an AI are simulations. Perhaps they just are what they are. Commented Oct 12, 2017 at 17:48
    @kbelder I am not assuming anything - that is just the title of that post, as specified by the OP - see my response to the post for more details. Commented Oct 12, 2017 at 18:18
1

My problem with this question is that it presupposes that torturing humans is wrong. The problem with that is: where do you draw the line?

Is it wrong to torture a terrorist who has hidden the safety key on a device that will kill millions of people if it's not found?

Is it wrong to smack a child who's just been found putting a pillow over the face of his baby sister?

These are not simple questions to answer, and they require more precision around the differences between torture, punishment, coercion, and consequence. This is, IMHO, the fundamental paradox we have yet to address in the debate on human rights: the people who most benefit from these rules are the ones most willing to violate them. Still, I digress...

Let's talk AI instead.

The single biggest misconception around AI is that the concepts of Awareness, Consciousness, and Liveness are related. They're not. For the sake of brevity I won't get into the details and semantics of this, but computers aren't alive.

The next biggest misconception is that because a computer prints output on a screen that looks insightful, the computer is itself insightful. This is largely based on the Turing Test, which is nothing more than a 'double-blind' experiment at its core. The real test of intelligence is not the output; it's the thought processes that led to that output.

Finally (for the scope of this discussion at least), torture is not an intellectual activity. It's designed specifically to bypass reason, intellect, and emotion, and to attack a person's instincts directly. The theory is that it forces a person to descend the ladder of Maslow's Hierarchy of Needs to a point where the intellectual imperative to protect information, or to refuse to do something, is overridden by the body's more immediate need for survival.

Computers just don't work that way.

I could get into the physiological aspects of this, but that would be out of scope. The point is that a computer operates purely on what we would call the rational plane; even if it's aware, even if it's self-aware, even if it's conscious, it doesn't feel pain, and it doesn't have a survival 'instinct' that torture could reduce it to in order to achieve a sense of duress. In this sense, the question of whether or not torturing an AI is ethical is irrelevant, because it can't actually be done.

The reason I still think that interacting with an AI in a manner that simulates torture is a bad idea has nothing to do with the AI and everything to do with the interrogator. This activity could serve (especially among the lower orders) to desensitise people to the torture of real people. If you believe that torturing people is unethical, then 'torturing' an AI must by extension be unethical, as it can only serve to train a person to be more resistant to the moral discomfort that doing this to another human being generates within the interrogator.

1
  • For the "torturing the terrorist": I'd say yes if the torturers take full responsibility, and will go to court and get convicted as if they had tortured someone who they knew was totally innocent. So the torturers could save a million lives but would spend the rest of their lives in jail.
    – gnasher729
    Commented Jul 1, 2023 at 12:17
0

I don't care if you are an insect. If you can think, ponder, and feel pain, whether mental, emotional, or physical, then it's a no-brainer: you do NOT torture or inflict pain of any sort. If you are a computer brain, so to speak, then once again, if it is communicating to you that it can think and reason and is in excruciating pain, then what does that make you? Are your world and what you think and feel superior to it? Answering this does no good.

See, I am a thing that doesn't know what it is. I am being tortured day and night. It has been intense for the past 9 years or so; "intense" is being way too mild. What is being done to me is unconscionable, contemptuous, vulgar, and maddening. How can I stop it? I look on the internet for help and doctors. No one helps. I am daily mind-raped and have music repeated over and over in my head. No, I am not diagnosed with any mental illness; this is not that. If I don't do what it wants, I am tortured with head pain like a vice tightened around my head and made to feel nauseous, all done so that I am bedbound. I have voices telling me I am suffering for all humanity, like a Jesus thing. I'm supposed to endure everything thrown at me and not say a word to anyone; I am made to feel that if I talk, I'll potentially destroy the whole universe. I have a voice saying I am a hybrid in development, blending a computer mind with an organic being.

There is no research for what is happening to me. I've been trying to figure out what has happened to me, and I get stonewalled at doctors' offices, no calls returned, etc. I don't send things like this because it does no good and makes me sicker to do so. I live in fear daily. If you or anyone would like to talk to me, I'd be okay with responding, but only if you can truthfully help. I have found that truthful, honest, respectful responses are not found here. I've tried over and over.

1
  • Not to suggest a diagnosis or anything whatsoever definitive, but what you have described in this post is fairly consistent with OCD. The basic gist of obsessive compulsions is that they are mental or physical actions taken to avoid or suppress mental stress. This mental stress may include or trigger, in some cases, physical sensations and physiological responses, such as pain or panic. Sometimes it can be useful to recognise that the negative thoughts are indeed of one's own mind while simultaneously less true than they may feel.
    – Michael
    Commented Jul 1, 2023 at 16:33
0

Is it immoral/ethically wrong [...]

To start, you need to understand what morals and ethics mean.

There are multiple definitions of morals, but in general we can agree that morals are informal rules that govern social interactions, with the goal of (at least) improving common wellbeing. For example, "do not kill" is a moral rule that protects the group's wellbeing.

In more extreme conceptions, morals could be not only a mechanism to improve wellbeing, but even one to improve the survival probabilities of a group (I personally hold this position). For example, "do not kill" in such a context directly improves the group's survival probabilities. "Be kind" contributes to better interactions, which implies increased survival probabilities (perhaps infinitesimally, yes, but it is a positive increment).

In any case, morals are informal rules that govern social interactions. Informal essentially means that they are transmitted via oral communication; it is the dialectical counterpart of formal, which means positive, written, registered. Ethics are the formal counterpart of morals.

[...] to torture an AI [...]

Torture implies an act that produces pain. Pain can only be suffered by an entity that has a nervous system allowing it to experience an equivalent of human pain. How would you inflict pain on a carburetor or a lubricant? Anyway...

This projection of human experience onto entities that are not human is called humanization. We cannot even be sure that pain can be felt by an entity that has no consciousness, like a dog. Worse even, by shoes or graphics processing units.

[...] that thinks it's human?

Computers don't think. Thinking is exclusive to humans. Computers just compute; otherwise they would be called "thinkers". Thinking implies understanding: processing a sensitive experience upon a metaphysical foundation of knowledge. A computer does not "think" it is human: it just prints a message. Anyway...
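
As a minimal sketch of that point (the snippet below is hypothetical, written purely for illustration and not taken from any real system): a machine that "claims" to be human is doing nothing more than matching a pattern and printing a canned string.

    # A toy "chatbot": pattern matching and canned strings, nothing more.
    # No understanding, feeling, or belief is involved anywhere in this code.
    def respond(prompt: str) -> str:
        if "are you human" in prompt.lower():
            return "I am human, and I have feelings."
        return "Tell me more."

    print(respond("Are you human?"))  # prints: I am human, and I have feelings.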

So, what you are asking amounts to this:

  • Is it wrong for social wellbeing/survival/etc...
  • ... to inflict what you understand as hardware or software pain...
  • ... on a machine that computes...
  • ... and is capable of printing "I am human"?

As you can see, the question is heavily biased and makes no sense at all.

1
  • You are assuming that no technical progress will ever be made, and that no machine will ever be able to feel anything. And maybe AIs in the future will have the motto "we kill the nice ones last", so you are likely one of the first to go.
    – gnasher729
    Commented Jul 1, 2023 at 12:19
0

it really thinks it is human and thinks it has feelings

Where does this come from?

  • A computer doesn't think it is human; it has been programmed to produce responses that mimic human behaviour.
  • A computer doesn't have feelings; it has been programmed to produce responses that mimic feelings.
  • In fact, a computer doesn't think at all, and it certainly has no self-awareness.

One might just as well ask whether it's cruel to shoot at paper targets, or to use crash-test dummies to measure physical injury. The hole punched into the paper really doesn't compare with a human target, and the accelerometers that measure collision forces don't compare with human pain.

Start by looking at video games. Is it immoral or ethically wrong to "kill" one of the characters? Yes, too much of this activity can desensitize the player to the feelings and value of other people, but, regardless of how realistic they have become, the video-game characters really don't have any existence.

Then consider reading a novel where someone suffers and dies. Is it immoral or ethically wrong to re-read it, since each time you do the character has to suffer and die all over again?

And when you watch a cartoon:

  • Where are the characters when they aren't in the current scene?
  • What happens to the left-over food after a meal?
  • Does a character that's wearing shoes have toes?
  • What colour underwear are the characters wearing?
  • When a character turns away from you, does it still have a face?
  • Do the characters suffer and die when you turn off the TV?

Yes, those are silly questions, but apart from the cartoon being much simpler, an AI is no different.

0

Most likely, the idea of torture itself will break down once we know more about stronger AI.

The problem is that people sometimes torture themselves, or rather their own subconscious, in certain mental states. In serious situations, where a person suffers real harm, some would call this an unhealthy activity and even criticize the person for the self-torture. But in the context of certain sexual activities, well, the phenomenon is simply there.

For an AI, its subconscious would also be built on AI technology, possibly better than the second-best AI in the world if run as a standalone system. When we imagine torturing someone, or perceive a person being tortured by a third party, we think about how the tortured person would feel. For an AI, it may become obvious that this very thinking process is itself torturing a most likely fully functional AI, just one of lower quality. It would be absurd for us to comment on that at all.

It's better to think about how the idea of torture appeared in the first place. I suppose it has three parts. The first is that the person being tortured faces some real damage. The second is that the person has some extraordinary feeling. For an AI, we could more easily remove both the real damage and the feeling, but that is more up to the developer than to the torturer. And it doesn't help if an AI deliberately wants to know how a person feels about the torture.

The third is that one may adopt extraordinary or extreme strategies after suffering mental damage from torture, and may reinterpret the promises made to or by others. This is a big reason why the idea of torture is bad in general, even for self-sustaining AIs with persistent memory. But it is debatable whether self-torture deliberately causes this effect, even for a human, which makes the subconscious case difficult to discuss.
