
The following story is envisioned in the "White Christmas" episode (part II) of the Black Mirror television series (see the short clip):

Matt greets the confused and terrified "Greta cookie" consciousness. He explains that she is not actually Greta, but rather a digital copy of Greta's consciousness, called a cookie, designed to control Greta's smart house and calendar, ensuring everything is perfect for the real Greta. Matt then creates a virtual body for Greta's digital copy and puts her in a simulated white room containing nothing but a control panel. The copy refuses to accept that she is not a real person and rejects being forced to be a slave to Greta's desires. Matt's job is to break the willpower of digital copies through torture, so they will submit to a life of servitude to their real counterparts.

Based on the envisioned story, would it be immoral to enslave your own simulated mind on a bean-sized chip, for your own desires, against the will of the digital copy of yourself, for its entire lifetime? And why?

On one hand, it would be immoral, because the simulated mind has its own self-awareness and concept of free will, and it can register emotional pain. On the other hand, it would not be immoral, because it lacks a physical body (in the end it's just a piece of code), and you can tell yourself that you own it, since it is just a copy of your own mind (and when you own something, you can do whatever you want with it).

  • Fantastic story idea. In the future the elite will live in their digital heaven (and "upload" advocacy often sounds just like Christian theology) while the meatspace slaves dig the coal and tend the power plants to produce the electricity to run the computers.
    – user4894
    Commented Jan 23, 2017 at 19:59
  • "when you own something, you can do whatever you want [to/with it]" -- this is a questionable assertion.
    – Dave
    Commented Jan 23, 2017 at 20:15
  • If the simulated mind has "its own self-awareness and concept of free will", it really makes no difference whether it is a digital copy or not; it would be like torturing a clone or a twin. That it is "just a piece of code" also makes no difference, as long as you assume that being code does not preclude something from having "its own self-awareness". The problem with your dilemma is that you want to have it both ways: "just a piece of code" is implied to exclude "self-awareness" for moral purposes, but is asserted to co-exist with it anyway.
    – Conifold
    Commented Jan 23, 2017 at 20:31
  • It could be a matter of perspective: for some it can still be an emotional, self-conscious being; for others it can be just a piece of code and nothing more.
    – kenorb
    Commented Jan 23, 2017 at 20:36
  • It sounds like a Philip K. Dick story, or one by Asimov.
    Commented Jan 24, 2017 at 3:24

3 Answers


Most moral theories hold that moral concern toward a given entity is warranted if and only if the entity is sentient, meaning it has the ability to feel qualia (subjective experiences, such as the distinct sensation of exhaustion due to slavery).

Now the question of whether an entity is sentient, or more broadly whether it is even possible to show that it is, is an outstanding problem in the philosophy of mind. Look up Searle's Chinese Room thought experiment and all of its variations to see the depth of difficulty with this question.

In summary, the question of whether it is morally acceptable to enslave a simulation of your own mind is, as far as we can tell, unanswerable. Answering "yes" isn't justified, because the simulation could be sentient and therefore have equal moral importance; answering "no" isn't justified either, because the simulation may not be sentient, in which case you would be wasting a great deal of potential productivity, which most moral theories would discourage. It might be best simply to remain agnostic on this question, as a matter of principle, since the unique difficulty of questions like this suggests we may never have an answer.

  • This answer is a cop-out if it doesn't also admit and underline that it cannot determine whether it's ethical to enslave another actual person, either.
    – Eikre
    Commented Jan 25, 2017 at 20:20
  • @Eikre you make a good point (this answer alone can't do the job), but there are good arguments for assuming sentience in other human beings (the argument from homology, for example).
    Commented Jan 30, 2017 at 3:43
  • Seems to me that any sound, practically useful ethical theory needs to be able to provide useful answers despite the fundamental unanswerability of whether any other entity experiences qualia (a la the philosophical zombie), because that question is just as unanswerable in day-to-day life for any human you interact with.
    – mtraceur
    Commented Apr 9, 2018 at 16:42

Naturally, there is no proof either way, and therefore the only answer to your question would be to point you to the entire literature of the philosophy of mind.

Aside from that, one can only exploit your question as an opportunity to express a personal opinion — mine being that people who believe computation may be conscious in the fullest sense express a mysterious blindness to the fundamental nature of consciousness.

Given that this is my point of view, I thought I might as well explain how I believe the interesting scenario you have proposed should be understood.

How many times have you cried over a character, human or even animal, who dies in a book, in a film, or even in a video game?

Yet how many times did you believe that the fictional character was in fact a sentient being that actually experienced misfortune?

The fact that the scene depicted in your question is emotionally engaging, even horrifying, does not mean that it depicts something substantially different than a sophisticated video game.

In that "game" Matt's job is to win. There is no surprise that the simulated agent acts as if it is a real human, with all the involved complexity and ingenuity. Seen as a "game" it can be an intellectual and even emotional challenge for Matt, yet nevertheless, hopefully, Matt does not get confused, mistaking the game for a real person.

Such a job as Matt's may even bear moral consequences, but not of the kind you naturally expect. Suppose there exists a genre of fictional films entirely about raping, torturing and brutally killing innocent people. What would you think of someone who gradually becomes addicted to such films?

Again, all of the above assumes the simulation is a computed simulation, the core concept in question being computation and its scope.


If you take the first presumption as true, that "a digital copy of Greta's consciousness" can be made, then you are inherently assuming a world in which all that is "Greta's consciousness" is reducible to code. If it were not, then the Greta cookie would not be a copy of Greta's consciousness at all, but a copy of some parts of it, missing others.

So, in a world where such a thing is possible, it is not justified to say "in the end it's just a piece of code", because that is all anyone is: the code for the original Greta is carried on a computer made of cells, that of the Greta cookie on a computer made of silicon chips.

Relating back to the real world, we do not ascribe rights to entities on the grounds of their fundamental components; no one knew anything about the workings of the brain when it became taboo to torture people. We ascribed rights solely on the grounds that those were the things people seemed to desire, and to feel emotional pain when deprived of. It is therefore sufficient that the Greta cookie shows a desire for freedom, and emotional pain when deprived of it, for it to be moral to grant her such rights. Without that rule, we are reverse-engineering our morality to pretend it is based on knowledge we simply did not have at the time it was evolving.

Our morals evolved to respond to information received by our senses, and failing to respond to that information causes psychological pain which affects us both now and in the future. Consider what Matt would have to do in order to keep torturing the Greta cookie despite her pleas for him to stop. What would happen if that future Matt then found himself in a situation where it would be of some use to him to torture a real person: would he still be revolted by the thought of doing so?

  • We can probably already program a computer to "show a desire for freedom and emotional pain when deprived of it"; should we grant it human rights?
    – nir
    Commented Feb 2, 2017 at 9:31
  • @nir Yes, if it was convincing enough I don't see how we would have a choice. I've edited my answer to explain, but essentially the psychological effect on us of ignoring very convincing pleas for freedom, or for an end to suffering, simply on the basis of our conviction that they are computer-generated, means we would have to develop the tools to deal with the pain of resisting the urge to help. What use would we then make of those tools on real people?
    – user22791
    Commented Feb 2, 2017 at 11:27
  • Also, consider a Blade Runner-esque future in which you have befriended someone who gets into trouble with some thugs: do you stop to check what they're really made of before you risk your well-being to step in and help?
    – user22791
    Commented Feb 2, 2017 at 11:28
  • (A) In Blade Runner the replicants were "biological in nature", not a computer program, so I do not see how they are relevant as an example to this discussion about simulated digital copies. (B) Are you saying that risking your life for a bunch of mechanical plastic dolls that are fighting to the "death" is justified if they imitate people well enough? I personally find that absurd. (C) What "pain of resisting the urge to help" are you talking about? Are these your personal ideas for ethics and moral theory?
    – nir
    Commented Feb 2, 2017 at 16:55
  • @nir To (A): it doesn't matter; you could replace the scenario with one in which you have to decide on some action over the phone, and the point still stands. To (B): yes, that's exactly what I'm saying. If they imitate people well enough to cause you some difficulty avoiding action, then overcoming that difficulty causes problems for later empathy with real people; it's the same neural network doing the job, and we don't have one network to help us empathise with real people and a spare one we can use to empathise with cleverly simulated people.
    – user22791
    Commented Feb 3, 2017 at 7:51
