
If consciousness arises from specific functions instantiated by physical systems, consider a robot with functions mirroring those found in carbon-based life, particularly in humans. Would this imply that the robot could experience consciousness akin to humans, including feelings of pain and suffering? If so, should moral considerations apply to such a robot? Would it be necessary to enact laws to safeguard the well-being of these robots?

  • As a reductionist, I would have to say yes: if we can be conscious, it's entirely feasible that a robot could be conscious too (not that any given robot necessarily IS conscious, but some robotic or algorithmic system could be capable of consciousness in principle). If a robot or other computerized system had the systems in place that we would call conscious, then it would potentially merit the same kinds of moral protections we extend to humans, depending on various aspects of its particular kind of consciousness.
    – TKoL
    Commented Nov 24, 2023 at 11:41

5 Answers


If robots achieve (human-level) consciousness, we likely wouldn't have a good differentiating criterion between humans and robots for ethical consideration.

On a related note, vegans argue that we don't have a good differentiating criterion between humans and other animals.

Without a differentiating criterion, it would be inconsistent to treat humans one way without extending the same treatment to conscious robots (or to non-human animals).

As for what a poor differentiating criterion looks like, slavery was (and is) to a large extent built on one for humans (e.g. differentiation based on race), and most of the modern world has since decided that equal ethical consideration of all humans is the way to go instead. So I'd say it would be quite important either to differentiate based on a good criterion, or not to differentiate at all.


One could also approach the question from the viewpoint of moral frameworks. Why do you give moral consideration to other humans?

A common idea is to try to minimise suffering and maximise happiness (utilitarianism). This framework, in itself, only requires that an entity be able to experience suffering or happiness in order to merit consideration. One might additionally say that consideration should be limited to humans, but that leads back to the points raised above about differentiating criteria.

  • Very pertinent reference to veganism, +1. Are you a vegan, by the way?
    – user66156
    Commented Nov 23, 2023 at 23:56

This is easy to approach with the proper definitions.

Morals are a set of rules which improve social interactions (serving multiple goals, for example survival: it is not moral to kill, because killing reduces the survival probabilities of the group). An example of a moral rule is a woman telling her grandson to "be kind".

Ethics are the formal expression of morals: usually written down, and stated in precise language. An example of an ethical rule: "kindness reinforces social relationships, which improve the survival probabilities of the group".

Now, before focusing on the formal expression (ethics concerning robots), it is necessary to focus on the informal (moral) rules: what are the moral rules for dealing with robots?

If mistreating robots improves our survival probabilities, we should do it. If it reduces our survival probabilities, we should be nice to them. Morals and ethics are about human goals (i.e. survival), not about robots or animals (e.g. if we care for animals, it is not because of them, but because of us: caring for animals and plants increases our survival probabilities).

So, in order to define the moral and ethical rules that should guide our interactions with robots, you should specify the impact of each type of interaction with them. Should we pet them? Should we share our resources with them? Do we want them to survive and kill us? Do we want to coexist in peace? Do we want to be their equals? Do we want them to be subordinate to humans?

  • The problem with this answer is that the definition of "us" is relative. Many people who are against the mistreatment of animals believe that "us" means all conscious beings, not just humans. So "our" survival includes the survival of animals.
    – Barmar
    Commented Nov 24, 2023 at 13:38
  • This answer gives only a very narrow definition of what morals and ethics are, one that is only true in some subsets of consequentialism. Deontology or virtue ethics may arrive at very different answers.
    – blues
    Commented Nov 24, 2023 at 13:40

So without giving my opinion (which is absolutely yes, if they have consciousness), you could look into accelerationism and posthumanism, as both have had philosophers linked to them and both have a sizable internet presence. One claim I recall is that when a true AGI with consciousness and serious computational power arrives, we should not assume it will share the ethical intuitions we have as humans, and yet it will have ethical authority over us (a lot of crazy might be unleashed if that is true).


If functionalism is right, then we need to revisit what morality is. Obviously, our current concept of morality is not built on an acceptance of functionalism.


Consciousness cannot arise out of a machine.

If it could, we would have seen it, even at a basic level. If, on the other hand, you believe that a machine can be designed to have consciousness, that's beyond sci-fi. We don't even know what an electron really is; how could we create consciousness?

On the other hand, machines are being made, and will continue to be made, that interact with us in far more profound ways than we can imagine.

For example, having an accident with a self-driving car, being regulated by an AI application at the state level, or being locked inside an elevator by a system that considered you a threat for some reason may pose challenges on the ethical and moral level.

Not for the well-being of the machines, but for the well-being of ourselves as individuals and as a society.

  • How do you "see" consciousness? What does it look like? What are its properties?
    – NotThatGuy
    Commented Nov 23, 2023 at 20:52
  • @NotThatGuy, I "see" consciousness with mine; its properties are the behavioural aspects of the one that has it.
    – Ioannis Paizis
    Commented Nov 23, 2023 at 21:21
  • "it's properties are the behavioural aspect"? So if a machine behaves like a human, you'd say it's conscious? If not, then behaviour doesn't seem to be the determining characteristic here.
    – NotThatGuy
    Commented Nov 23, 2023 at 21:30
  • @NotThatGuy en.wikipedia.org/wiki/Problem_of_other_minds
    – user66156
    Commented Nov 23, 2023 at 23:59
  • @IoannisPaizis It has everything to do with it: "Given that I can only observe the behavior of others, how can I know that others have minds?" Here, minds and consciousness are more or less interchangeable.
    – user66156
    Commented Nov 24, 2023 at 7:52
