6

According to Britannica, utilitarianism is an ethical theory in which

an action (or type of action) is right if it tends to promote happiness or pleasure and wrong if it tends to produce unhappiness or pain—not just for the performer of the action but also for everyone else affected by it.

I have two questions regarding the same:

  1. Is the above statement equivalent to Spock's famous quote “the needs of the many outweigh the needs of the few”?

  2. Apart from arguments referring to egoism, are there any valid counterpoints to utilitarianism?

4
  • 1
    A pretty common one is reductio ad absurdum. If we can make decisions by simplistically calculating the ethics of an action based on things like giving more life, money, or happiness to the greater number of people, it could follow that one should kill oneself immediately and donate one's money to the poor or to orphans. Not necessarily, but it's possible to drive utilitarian arguments to conclusions that people may find untenable. Commented Jan 9 at 6:57
  • 4
    See IEP, Arguments against Act Utilitarianism and IEP, Arguments against Rule Utilitarianism. Generally, utilitarianism proper, in its classical versions, is mostly of historical significance only. Its current descendants more commonly go by the name of consequentialism: "Persistent opponents posed plenty of problems for classic utilitarianism. Each objection led some utilitarians to give up some of the original claims of classic utilitarianism."
    – Conifold
    Commented Jan 9 at 7:08
  • 2
    Johan E. Gustafsson's entertaining 2022 article Bentham's Mugging presents the arguments against pure utilitarianism—and its pragmatic evolutions—as an accessible dialogue with an opportunistic mugger. Commented Jan 10 at 0:17

11 Answers

20

I'd say they have three main problems.

  1. It can be extremely difficult, if not impossible, to calculate and compare all the consequences of various actions to determine which action leads to the greatest overall utility. The future is uncertain, and the ripple effects of actions are complex.

  2. Utilitarianism focuses entirely on maximizing overall utility and happiness, potentially at the expense of individual rights and justice. Things like slavery or unfair discrimination could be justified under utilitarianism if they maximize utility for the majority.

  3. There may be no objective way to compare utility/happiness across different people. How can we quantitatively weigh one person's happiness against another? Utilitarian calculations require making controversial interpersonal utility comparisons.

As for Spock's quote:

It reflects this central tenet of utilitarianism - that sometimes the overall needs and interests of the majority group should take precedence over those of a minority group or individual. Sacrificing a few to save the many can potentially be justified under strict utilitarian reasoning if it maximizes the net benefits for all.

However, Spock's quote is mainly about numbers, while utilitarianism is more broadly about maximizing total "utility" or well-being. But the underlying premise is very similar - the greatest total welfare takes priority according to utilitarians, even if that disadvantages a minority.

1
10

(1) No, Spock's statement is not an expression of utilitarianism.

(2) Here is a counterargument to utilitarianism: according to the definition you gave, if someone thinks it would be cool to commit a murder, it would be right to find someone with no friends or family and murder them in their sleep. The victim has no one to feel pain at his passing, and since you killed him in his sleep, he feels no pain, so the only effect the act has is to provide happiness and pleasure for the murderer. Since the only effect is to provide happiness and pleasure, by the definition you gave, this would be the right thing to do. If you feel that this consequence is wrong, that the murder would still be morally wrong, then you don't really believe in the moral principle expressed by that definition.

There are more sophisticated definitions that can avoid this particular counterexample, but all definitions of utilitarianism are ultimately subject to counterexamples of this sort.

1
9

One answer to the OP's second question is the "mere addition paradox". To paraphrase, this concerns the fact that adding more people to a society can in some sense "increase happiness", while average happiness simultaneously decreases. For example, it is common experience that when too many people are on a highway, causing traffic congestion, everyone there is less happy than they would be with fewer people. However, since each of those people has chosen to be on that highway, we can infer that they are probably happier there than anywhere else they could practically choose to be at that moment (otherwise they would e.g. get off at the next exit and wait for the traffic to subside). For any reasonable definition of happiness, and for any highway full of a few happy people moving along at full speed, it turns out there is another highway of greater total happiness where everyone is stuck in a traffic jam. So utilitarianism might lead one to believe that traffic jams are the best possible scenario. There are many mathematical details to consider, so check out the Wikipedia article for that, but I would say there is no easy resolution to the paradox.

In my interpretation, this points out a problem at the root of utilitarianism, which is that it was devised considering a static universe with a fixed, finite number of agents. As soon as you accept that people must be born and die and circumstances change, utilitarianism does not provide good answers to a great many questions. Practical examples:

  • Population control (e.g. China's one-child policy), birth control, etc.
  • End-of-life care and social cost of keeping alive people who would otherwise die. A great amount of money is spent keeping sick people alive.
  • Use of non-replenishable resources. Helium, e.g., is a finite resource, and once used up we won't really be able to get more of it. At the same time, its value is continually increasing due to new technologies (e.g. liquid helium refrigeration). These two facts suggest that we should wait as long as possible before using up our helium reserves. But if everyone takes that attitude, it will never be used, and it will generate no happiness.

Some of these paradoxes are related to paradoxes concerning infinities or probabilities on infinite sets. Basically, utilitarianism is an attempt to make value judgements somewhat quantitative, but no matter how you try to do this, there are some questions that simply have no good quantitative answer.


Edit: In response to some comments:

  • First off, I do not intend to defend the mere addition paradox. The details and counter arguments are contained in the linked Wikipedia article. I claim that even if you disagree with certain aspects of it, you can hopefully agree that it shows difficulties in applying utilitarianism in many realistic situations, difficulties which are not shared by some other ethical systems.
  • The example of the traffic jam is a toy model to avoid nasty discussions about mass killings and global overpopulation.
  • One way some people try to escape the mere addition paradox is to claim that average happiness should be prioritized over total happiness. However, this also leads to nasty conclusions, such as the mass killings cited by one commenter.
  • Since there was some criticism about what my examples were trying to optimize, let me cite a passage from the Wikipedia page on Utilitarianism:

Although different varieties of utilitarianism admit different characterizations, the basic idea behind all of them is, in some sense, to maximize utility, which is often defined in terms of well-being or related concepts.

There are many forms of utilitarianism corresponding to different definitions of utility. But the point of the examples above is that they all have problems/paradoxes when the number of people is infinite, unbounded, or able to change. I don't want to give mathematically rigorous statements of all the cases mentioned above, but as an example I will try to formulate the scarce resource one more carefully. Let's model utility of some resource as a function of time U(t) which, moreover, is positive and grows with time (monotonically increasing). The consumption of the resource can be modeled as a probability measure du on the positive real line. The total utility is then the integral of U(t) du(t). (This SE doesn't support inline math--sorry for the lack of formatting.) So far we haven't really assumed anything about our version of utilitarianism--only that there exists some resource whose utility increases with time (modeled by U(t)) and which has a finite, consumable quantity. Now the punchline is that there is no measure du (i.e. no usage plan) which maximizes utility; for any usage plan, deferring it by a little bit increases the total utility.
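The deferral argument can be checked numerically. Below is a toy sketch of the model above, where a "usage plan" is a list of (time, fraction) pairs standing in for the measure du, and the growing utility function U(t) = 1 + t is an illustrative assumption, not something from the post:

```python
# Toy check of the scarce-resource argument: with a utility U(t) that grows
# with time, any usage plan is beaten by the same plan shifted later, so no
# plan maximizes total utility.

def total_utility(plan, U):
    """plan: list of (time, fraction_of_resource_used) pairs summing to 1."""
    return sum(frac * U(t) for t, frac in plan)

U = lambda t: 1 + t                          # assumed: utility grows linearly
plan = [(0.0, 0.5), (2.0, 0.5)]              # use half now, half at t = 2
deferred = [(t + 1.0, f) for t, f in plan]   # the same plan, one unit later

assert total_utility(deferred, U) > total_utility(plan, U)
```

Since U is strictly increasing, the shifted plan always scores strictly higher, which is the punchline: every plan is beaten by its own deferral.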

The weakest interpretation of this is that utilitarianism is incomplete--for any definition of utility, there is a scenario where utilitarianism doesn't tell you what the best course of action is. More strongly, one might infer that a society of perfect utilitarians with incomplete information would all reason that they should not be the ones to use the resource, so that it never gets used; but this provably leads to the situation of minimal utility. This latter interpretation is more model-dependent, though (contingent on how one models the information of the utilitarians, e.g.). Either way, it's a weakness not shared with other ethics (e.g. "finders keepers").

5
  • I'm not sure in which way utilitarianism does not provide a good answer to questions like birth control or China's population politics in general. I'm sure that the implementers of the one-child policy believed they were optimizing the greater good, since the population growth of the 1960s was unsustainable. (Many Chinese technocrats would probably claim to have acted in the name of the greater good all along, by the way, which could be used as a counter-argument: it's better to try to be concretely nice than abstractly good, because it is less likely to lead to catastrophic outcomes.) Commented Jan 10 at 13:25
  • To take this "rather nice than good" to extremes: I can imagine that both Stalin and Hitler believed they were acting in the name of the greater good: Stalin promoted the anyway-inevitable proletarian revolution; Hitler thought that making the superior Aryan race and ridding the world of Jewish parasites would bring a better world. Some collateral damage on the way was inevitable, even more so because some people couldn't see the light, try as they might. But one would be hard-pressed to call them nice, because they hurt a great many people. Commented Jan 10 at 13:31
  • "For any reasonable definition of happiness, and for any highway full of a few happy people moving along at full speed, it turns out there is another highway of greater total happiness where everyone is stuck in a traffic jam." False. Climbing the mountain in the fog paradox.
    – Joshua
    Commented Jan 10 at 15:45
  • @Peter-ReinstateMonica Thanks for the comments. I've tried to respond to some in edits to the post. Summarily, I think your second comment provides the answer to the first. There are many variations on utilitarianism, and some of them lead to the conclusions you mentioned, which I think most people would agree are very bad. This would seem a strong criticism of utilitarianism.
    – Yly
    Commented Jan 10 at 19:24
  • @NotThatGuy Thanks for the comments. I've tried to respond to some of it in edits to the post.
    – Yly
    Commented Jan 10 at 19:25
3

Turning into a data-nerd for a moment:

When aggregating Happiness across a population, evaluating "sums" and "averages" in various hypothetical situations shows that Happiness probably isn't a simple numerical value. It is relatively easy to concoct (unlikely!) situations in which you can use the base argument of Utilitarianism to "prove" a case that would seem unethical.

As you can see, the definition given contains no inherent proposition about what a good evaluation of an individual's Happiness would be (much less a type of value that would combine satisfactorily with other individuals'!)

While this does mean that we are forced to look for evaluations that include unevenly distributed values and/or better mechanisms to merge values across populations (such as looking to inviolable rights, scaled benefit, etc.) that doesn't address whether Utilitarianism is wrong, just that Utilitarianism is incomplete as an ethical justification when used in its most simple form.
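A minimal numeric illustration of why aggregation matters (the numbers are made up for illustration): the "sum" and "average" views of the same two populations can rank them in opposite orders.

```python
# Two hypothetical populations of happiness scores (invented numbers):
small_happy = [10, 10]       # few people, each very happy
large_crowded = [3] * 10     # many people, each only mildly happy

# Total-happiness aggregation prefers the large crowd...
assert sum(large_crowded) > sum(small_happy)

# ...while average-happiness aggregation prefers the small group.
assert sum(small_happy) / len(small_happy) > sum(large_crowded) / len(large_crowded)
```

This is exactly the kind of concocted case in which the base argument of Utilitarianism "proves" different things depending on which aggregation you pick.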

3

A possible counterargument is the idea of drugged rats. (This is a meme I came across online, but after some thought it appears to have some merit.)

Utilitarianism promotes the greatest happiness for the greatest number. The argument claims that the best way to achieve this is to breed rats, and have them on an IV/dopamine tube from birth. As an extension, the society with the greatest number of rats in such a system would be the most moral one, and humanity should give up other pursuits in favor of finding more efficient methods of breeding rats and creating dopamine.

Edit because the answer is unclear (?):

I didn't think it was necessary to elaborate, but this is a clearly absurd conclusion resulting from gaming the mission statement of utilitarianism.

2
  • As it’s currently written, your answer is unclear. Please edit to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers in the help center.
    – Community Bot
    Commented Jan 10 at 14:18
  • This is another good rebuttal I've heard before. One could also use even simpler organisms. I wonder if there's a standard name in the literature for this.
    – user76284
    Commented Jan 11 at 5:03
1

One argument against utilitarianism appeals to inviolable rights and human dignity, which categorically forbid certain acts, no matter their utility. They happen to be the ethical foundation of the German Basic Law, and there were two lawsuits in recent decades exploring precisely this tension regarding government actions.

The gist is that there are non-negotiably forbidden acts which are inherently evil. The government cannot employ them even in fairly extreme situations where utilitarianism would dictate them. A constitutionally acting government is forbidden to act inhumanely against its citizens, period.

The acts in question were intentionally sacrificing a few innocents to save many, and torturing a culprit to save an innocent.

The first scenario concerned a 9/11 situation; the government would not be allowed to issue the command to shoot down the airplane. Reading the decision actually makes me emotional. I think it is worthwhile to quote the essential part verbatim, emphasis by me, from here:

Die einem solchen Einsatz ausgesetzten Passagiere und Besatzungsmitglieder befinden sich in einer für sie ausweglosen Lage. Sie können ihre Lebensumstände nicht mehr unabhängig von anderen selbstbestimmt beeinflussen. Dies macht sie zum Objekt nicht nur der Täter. Auch der Staat, der in einer solchen Situation zur Abwehrmaßnahme des § 14 Abs. 3 LuftSiG greift, behandelt sie als bloße Objekte seiner Rettungsaktion zum Schutze anderer. Eine solche Behandlung missachtet die Betroffenen als Subjekte mit Würde und unveräußerlichen Rechten. Sie werden dadurch, dass ihre Tötung als Mittel zur Rettung anderer benutzt wird, verdinglicht und zugleich entrechtlicht; indem über ihr Leben von Staats wegen einseitig verfügt wird, wird den als Opfern selbst schutzbedürftigen Flugzeuginsassen der Wert abgesprochen, der dem Menschen um seiner selbst willen zukommt.1

Or, in fewer words: Inviolable rights are inviolable. Duh.

The second lawsuit concerned a policeman threatening torture to a man who had abducted a child, in order to make him reveal the victim's location. Here, too, the court found that even the threat of torture violates the human dignity guaranteed by the Basic Law. The policeman was found guilty and sentenced. While the sentence was nominal and on probation, it was an important decision affirming the limits imposed on government interventions springing from the ethics underlying our legal system.

Note that these decisions concerned official government actions. An individual finding themselves in these difficult situations may decide differently, but must expect to be punished for it; such an action cannot be a government action.


1 Google delivers a decent translation; I modified a few details: The passengers and crew members exposed to such an operation find themselves in a hopeless situation. They can no longer influence their own living conditions independently of others. This makes them an object not only of the perpetrators. The state, which resorts to the defensive measure of Section 14 Paragraph 3 of the Aviation Security Act in such a situation, also treats them as mere objects of its rescue operation to protect others. Such treatment disrespects those affected in their nature as subjects with dignity and inviolable rights. They are objectified and at the same time deprived of rights because their killing is used as a means to save others; by having their lives unilaterally disposed of by the state, the aircraft occupants, who are themselves in need of protection as victims, are denied the value that a human has for its own sake.

4
  • The case you share is a legal one. It says that under the German Basic Law, certain rights are inviolable, and that legally inviolable rights are inviolable. But it doesn't prove that the German Basic Law is ethically superior to a utilitarian system. Inviolable rights and human dignity are an alternative conclusion to utilitarianism, but are not arguments for that conclusion. Commented Jan 9 at 19:04
  • @IanSudbery The mere assertion that there are inviolable rights is an axiom which as such cannot be proven or disproven; it is an "arbitrary" choice, a matter of taste, if you want. (Some people have better taste than others, but you cannot prove taste.) To accept this axiom immediately invalidates a strictly utilitarian ethics, as the court decisions show. Therefore, if accepted, they are surely a strong anti-utilitarian argument. (The law was quoted as an example of an application of this particular ethic, and to illustrate how it is applied in real life, and its consequences.) Commented Jan 9 at 19:50
  • Surely all arguments can be refuted by the assertion of an incompatible axiom. I would have taken it as given that those who hold the view that some things are wrong irrespective of their utility don't believe in Utilitarianism - that is simply a tautology. It is specifically because you can't prove an axiom that arguments based on stating different axioms are generally not convincing. Commented Jan 10 at 11:21
  • 1
    @IanSudbery I note that the other answers except the "mere addition paradox" are also based on axiomatic values, without making it explicit: David rejects that murder would be justified, Growing_strong laments a possible violation of "individual rights and justice" (roughly my argument). In my opinion, Growing's other arguments are not strong: If an attempt to measure the outcome, including between individuals, comes out undecided, that's OK -- we do not need a complete order of all actions, a partial order is sufficient, in particular one identifying really bad choices. Commented Jan 10 at 13:20
1

The common way to argue against Utilitarianism seems to be to present a simplistic interpretation of what 'the greatest sum of happiness' may be and then use some absurd case to show that this is not right. We do not know what 'happiness' is, let alone have any good way of measuring it. Nevertheless, John Stuart Mill and others proposed that there may be some 'happiness integral' which would be maximal for all desirable cases. If you find a counterexample for a particular model, that shows the particular model is not the right one.

It may be useful to consider a bad example of this approach: the IQ test. We may understand some people are 'smarter' than others. We don't know what 'smartness' is, but we can design abstract tests that ought not to favour a particular nation or race or other subdivision of mankind. The IQ test score then came to be seen as a measure of how 'smart' you were; if certain skin colours or sexes scored less, they were seen as being less 'smart' rather than the test being biased. I remember a marvellous counterexample where the IQ-style test was a tray of objects such as twigs, stones, and pieces of bone: Native Australians scored highly, while average westerners could not even see what the 'question' was.

Nevertheless, some people are 'smarter' than others. Intelligence is a thing, but IQ is not a good measurement of it. Chess rankings are a pretty precise measure of how likely person A is to beat person B in a match, so we can measure some mental intangibles such as chess ability. We have not measured intelligence or happiness yet, but we can keep trying.

The other solution would be to prove there is no optimal solution for 'happiness'. I don't see how this is possible, as such a proof will need a pretty sound definition of 'happiness' itself. Even so, progress will have been made.

5
  • "IQ is not a good measurement of it" Why not? What better measurement do you have?
    – user76284
    Commented Jan 10 at 17:55
  • I don't have to provide a better measurement. It is enough to say that millions of soldiers were ranked according to the Stanford-Binet tests, which systematically ranked Southern Europeans and dark-skinned US racial groups lower than your basic WASP. We cannot re-analyze these people because they are all dead, but there are serious doubts that any real difference is explained by genetics rather than differences in education and expectations (see en.wikipedia.org/wiki/…). Commented Jan 10 at 21:07
  • Wikipedia is not a reliable source for this topic: quillette.com/2022/07/18/cognitive-distortions
    – user76284
    Commented Jan 11 at 3:13
  • If Wikipedia is not a reliable source, then Quillette definitely isn't lol
    – ajd138
    Commented Jan 11 at 13:31
  • @ajd138 Non sequitur.
    – user76284
    Commented Jan 11 at 17:29
1

Just to present an opposing view:

People have presented plenty of arguments against utilitarianism (here and elsewhere), but:

  • There are responses to those arguments (but there may also be responses to those responses, with subsequent responses)
  • Those arguments often involve examples that are very extreme and completely detached from reality; failing in those extremes doesn't mean it's a bad principle for non-extreme cases.
  • People like to consider utilitarianism in isolation, but we should ask how it compares to other moral philosophies.
  • If pressed, one could combine utilitarianism with e.g. some rights-based morality. You don't necessarily need to treat maximising well-being as the ultimate moral authority to appreciate that maximising well-being is generally a good thing to strive towards.

There is no perfect moral principle because reality is messy, but utilitarianism is one of the best, or the single best, principle we've been able to come up with.

At least as far as I've seen, every prominent alternative to utilitarianism invokes ideas that seem unjustified, poorly defined, or arbitrary; can (and does) fairly trivially justify utterly disgusting and atrocious acts; leads to drawing arbitrary lines; or in some form appeals to increasing happiness or avoiding suffering (utilitarian ideals).


To consider some arguments and responses:

"Utilitarianism may lead to killing a few to increase the happiness of the many"

Response: in such cases, the downside to the oppressed is typically far more severe than the upside to the advantaged; the upside for the advantaged may not be that advantageous (many people are unhappy living in an unequal society, even one in their favour); and the tables could turn (due to e.g. revolution, which is less likely under equality).

One may judge suffering to be exponentially worse than happiness is good, and consider happiness to have diminishing returns, so it would be implausible for a scenario like the hypothetical to actually happen.

"There is no objective way to compare utility/happiness across different people"

Response: if you're going to charitably give money to a stranger, would that be to a billionaire, or to the homeless person who can't afford food? One doesn't even need to make a case there, because practically every person is already able to compare utility/happiness. "Objective" and "quantitative" don't apply to emotion in general: evaluating someone's emotional state is inherently subjective and qualitative. But that doesn't mean you can't strive towards consistency.

If you want some quantifiable measure (which is still subjective), consider how you'd feel about living in one person's life, compared to another. Let's say you'd live as a homeless person for 5 years, and then you'd live as a billionaire for 5 years, after which you'd go back to your normal life. If you can have $10 at some random point during that period, would you rather have it while homeless or while a billionaire? A slight variant on that: how much money would you choose to transfer from the billionaire to the homeless person? How much longer would you be willing to be homeless to spend more time as a billionaire?

This may not be a perfect comparison, because you may not know how it would feel to live as someone else, especially in the long term, and especially given that life experiences, physiological variation and mental conditions significantly affect one's experience. But it could provide a lot of useful insights into how well-being compares in different circumstances.
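One way to make the diminishing-returns intuition behind this thought experiment concrete is a logarithmic utility-of-wealth model. This is an assumption for illustration only (the answer doesn't commit to any particular utility function), but under it the same $10 raises the homeless person's utility far more than it lowers the billionaire's:

```python
import math

def u(wealth):
    """Assumed log utility of wealth: each extra dollar matters less
    the richer you are (diminishing marginal utility)."""
    return math.log(wealth)

billionaire, homeless, transfer = 1_000_000_000, 50, 10

before = u(billionaire) + u(homeless)
after = u(billionaire - transfer) + u(homeless + transfer)

# The transfer raises total utility: the gain log(60/50) dwarfs the
# billionaire's loss log(1e9 / (1e9 - 10)).
assert after > before
```

Under this model, a strict utilitarian would keep transferring until the two wealths equalize, which is one reading of why "who should get the $10" has such an obvious answer.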

"Utilitarianism would demand that you donate all your possessions because consideration for yourself and those around you is equal to consideration for those living in extreme poverty, and your money would best serve them"

Response: one may say that not every action you perform should serve the greatest good. We are emotional creatures that are strongly driven by our own interests, and this can't really be ignored.

If you're really inclined to argue this from utilitarianism, one might say that a moral philosophy which does not give some preference to oneself and those around you cannot generally be adopted or maintained, for reasons given above. Thus what would provide the greatest increase in well-being is if utilitarianism gives some preference to one's own interests. But maybe this is a bit contrived.

I've also addressed the idea of selling all your possessions in an earlier answer, although that doesn't specifically mention utilitarianism.

0
0

By no means the most important criticism, but a common one that's missed in otherwise great answers like this one, revolves around demandingness. The idea here is that intuitively we often don't just think of moral actions as "good therefore obligatory"; we also have a conception of the supererogatory.

Think of a heroic act of self-sacrifice - we tend to see the people who do such feats as praiseworthy precisely because they've gone above and beyond what we might expect of someone in that situation. And whilst we might wish someone had acted otherwise, we don't tend to think people who haven't acted heroically are somehow immoral or bad people for not having done so.

Utilitarianism is an ethics of maximization. It doesn't appear to have room for the supererogatory. If an act improves utility, then you should do it regardless, according to this philosophy.

Naturally, utilitarians have their own responses to this objection, and likewise critics have their counter-responses, so there's a literature around this particular line of argument.

0

A general point that can be made against utilitarianism is that it is tantamount to one rather arbitrary axiom system among others, for ethics. Since the utilitarian analysis of the concepts "good" and "right" is defunct modulo ordinary-language philosophy (incompatible with the analysis of a plethora of other normative/deontic concepts like "merely permitted," "supererogatory," "forgivable," etc.) and dispensable modulo formal conceptual engineering, we have no stable ground for choosing utilitarian axioms instead of neo-Aristotelian or neo-Kantian ones (say) any more than we have stable grounds for categorically preferring ZFC over ZF or NF, or NFU, or whatever set theory we think up on the basis of an imperfect and ultimately relative analysis of the concept "set."

Utilitarianism is also not emotionally compelling enough for enough people. Granted, it would be too much to expect that an ethical theory should be convincing to 100% of those who learn it, 100% of the time, but 50% of learners 50% of the time would be helpful (and impressive). Granting even more, no other ethical theory is so compelling, either, which does testify against those theories too, though.

0

The most potent argument, as I see it, is not against the concept of Utilitarianism per se, but against its utility (pun intended). Sure we all want to be happy -- but what good would that do if we don't know how?

“No man chooses evil because it is evil; he only mistakes it for happiness, the good he seeks.” (Mary Shelley, the author of Frankenstein; or, The Modern Prometheus)

That's the story of Hitler, by the way -- as well as countless other tragic examples over the course of human history.
