
The idea of a driverless car sounds amazing and offers the possibility of eliminating a large amount of human error caused by, for example, impatience. Wikipedia writes:

While morally, the introduction of autonomous vehicles to the mass market seems inevitable due to a reduction of crashes by up to 90% and their accessibility to disabled, elderly, and young passengers, there still remain some ethical issues that have not yet been fully solved.

However, it will likely be impossible to achieve perfection in driverless cars, so with widespread use people may still be harmed (and harmed in ways that would not have occurred with a human driver), though at a rate lower than that of human drivers.

Question: How can one determine when it is ethical to use imperfect driverless cars?

I'm interested in learning about the relevant tools in philosophy for finding this balance point. How can we determine whether, say, a 90% improvement over human drivers is ethically sufficient?

  • It is impossible to achieve perfection in human drivers either, and I doubt that any kind of numerical threshold can be twisted out of ethics. Many states in the US allow their senior citizens to drive without much testing of their continuing capacity; the alternative would be political suicide for whoever pushes it. The same is true of the 55mph speed limit for highways, which was rescinded despite reducing the number of fatalities. This is how the "balance points" are found in general: not through ethics but through political compromises.
    – Conifold
    Commented Feb 23, 2017 at 1:26
  • A naive approach (whose formal name I do not know, hence this is not an answer) is to evaluate whether that solution is better than the current one. If it is, wouldn't it be ethical already? Furthermore, wouldn't it be unethical not to use driverless cars if we could reduce accidents and risks, even when they are imperfect? This black-and-white balance of morality is a good discussion starter.
    – Alpha
    Commented Feb 25, 2017 at 5:53
  • I think the question needs to be edited to fit here. Specifically, you should make clearer the assumptions you think are involved in ethics. It seems like you're committed to some sort of balancing approach (whether that is utilitarian, consequentialist more generally, or basic goods). But without knowing what to balance or what matters, it's impossible to answer.
    – virmaior
    Commented Feb 28, 2017 at 3:20

2 Answers


One problem that enters into this situation, does not apply to the case of human drivers, and may hold this process up for some time is the diffusion of what is now individual responsibility into the corporate domain. This is a general problem with the deployment of artificial intelligence, and we will face many versions of it soon. But so far, the problem has always been mapped back onto humans. In the case of an autonomous vehicle, this would be quite difficult.

Corporations take responsibility the same way individuals do, but they are able to shift their constituent parts around to evade apparent responsibility, or to hide resources available for remediation from those to whom they would be owed. They have been known to dissolve completely, escaping liability by having no assets, while a 'different' corporation is constituted with the exact same human members, free of historical ties to the past behavior of the same group of people.

As it is (at least in the U.S.), determining who is at fault in an accident is a legal process between insurance companies representing individuals. The facts are ascertained from eyewitnesses, if only in the form of the two drivers themselves. Approximately equal resources and equal process will be deployed for both drivers, and they are aggregated by the requirement of mandatory insurance into large enough groups that, if a large settlement is determined, the individual will not be financially destroyed (though they may never again be able to get insurance and thus may no longer be allowed to drive).

If a machine makes an error, and the passenger is not in a position to take responsibility for knowing what happened, we are left with a difficulty in assigning blame. The contest of attempts to prove fault is vastly unfair; strangely, it is so to both parties, but in wholly different ways:

  • The corporation and its machine can expect very little empathy from humans, who will constitute the jury or magistracy making the decision; but
  • The same resources cannot possibly be deployed on behalf of the human that will automatically come forward to defend the machine.

We have no clue how to insert equality here. We don't know which party is more disadvantaged, and we don't have faith that if the car company is just put out of business the existing cars already produced will not become unfortunate burdens upon their owners.

(The German government raised the same issues with respect to open-source software back when it first became an important force in the computing industry: that it posed a problem of responsibility that had no parallel in earlier law, and it was unclear what would constitute fairness.

Elaborate networks of proxies and escrow schemes were finally instituted to buffer the court system against the risks of getting this wrong. A similar situation might arise here, with auto-insurance companies basically changing in form in a way that simply removes the issue by convincing the courts that this is not so bizarre, after all.)

  • I just realized this is totally not an answer. So, y'all voting for it, take it as clarifying the question and maybe propose an answer?
    – user9166
    Commented Feb 25, 2017 at 0:11

I believe that a possible metric would be: does the introduction of the current level of technology (driverless vehicles, etc.) improve the current status (fewer crashes, etc.)? If yes, then it is ethical to introduce it.

The "imperfections" that might result from the "introduction" can be dealt with similarly as we do now with the imperfect human.

  • Assuming a utility calculus of sorts, this seems like the right direction but incomplete. Should we also take into account potential futures that can only result from its adoption? In other words, a variation on: can we build giant dams that risk killing the workers who make them but save thousands of lives in the future? In this case, it's a double question.
    – virmaior
    Commented Feb 28, 2017 at 3:22

