
What are the arguments for and against this? Any resources that are easy to read would help

  • Comments are not for extended discussion; this conversation has been moved to chat.
    – Philip Klöcking
    Commented Jan 26, 2019 at 21:06
  • The point of edit?
    – rus9384
    Commented Jan 27, 2019 at 10:27
  • What is the question? The edit has completely removed any topic.
    – Geoffrey Thomas
    Commented Jan 27, 2019 at 14:13
  • This edit also makes it impossible to consider existing answers and future answers together in any productive way.
    – user9166
    Commented Feb 4, 2019 at 20:25

3 Answers


This is currently a major topic in academic philosophy of science. Among people who specialize in this topic — including myself — a strong majority now think that ethical values do and should play a role in evaluating scientific claims.

One major argument for this claim is the argument from inductive risk. Inductive risk simply refers to the risk of believing a false claim ("false positive" error) or rejecting a true claim ("false negative" error) whenever we evaluate the claim using limited evidence and cognitive capabilities. Which, of course, is pretty much all the time in empirical science. In this context, when evaluating a claim, we need to determine the relative importance of the two kinds of error. Is it worse to believe a false claim or reject a true claim? The argument from inductive risk points out that setting this balance requires us to consider the downstream consequences of making each kind of error, including the non-epistemic consequences of acting on our beliefs. "Which is worse?" is ultimately a question about values. In this way, and perhaps in others, values have a role to play in evaluating empirical claims.
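The tradeoff the argument describes can be made concrete with a toy decision-theoretic sketch. This is my own illustration, not something from the inductive-risk literature: the function names, the costs, and the simplifying assumption that our evidence directly gives the claim's probability of being true are all hypothetical.

```python
# A minimal sketch of inductive risk as a decision problem (illustrative only).
# We choose whether to accept a claim given limited evidence. The "right"
# evidence threshold depends on the relative costs we assign to each error.

def expected_cost(threshold, p_true, cost_fp, cost_fn):
    """Expected cost of the policy: accept the claim iff p_true >= threshold.

    Simplifying assumption: our evidence directly yields p_true, the
    probability that the claim is true.
      - Accepting a false claim incurs cost_fp, weighted by (1 - p_true).
      - Rejecting a true claim incurs cost_fn, weighted by p_true.
    """
    if p_true >= threshold:            # we accept the claim
        return (1 - p_true) * cost_fp
    else:                              # we reject the claim
        return p_true * cost_fn

def best_threshold(p_true, cost_fp, cost_fn):
    """Pick the acceptance threshold with the lowest expected cost."""
    grid = [i / 100 for i in range(101)]
    return min(grid, key=lambda t: expected_cost(t, p_true, cost_fp, cost_fn))

# Tobacco-style example: suppose we judge the evidence gives the claim
# "smoking causes cancer" a 0.7 probability of being true.
p = 0.7

# If false negatives (failing to regulate a real carcinogen) are costlier,
# the optimal threshold is low: we should accept on weaker evidence.
print(best_threshold(p, cost_fp=1, cost_fn=10))   # a threshold at or below 0.7

# If false positives are costlier, the optimal threshold rises above 0.7.
print(best_threshold(p, cost_fp=10, cost_fn=1))   # a threshold above 0.7
```

The point of the sketch is only that "which is worse?" enters the calculation as the cost ratio, and that ratio is a value judgment, not something the evidence itself supplies.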

Here are some readings to get you started:

  • This seems a good answer but it is all about interpretation and hypothesis. A scientific claim is not an interpretation or hypothesis. Where there is a claim it should be bullet-proof and unaffected by ethics. At least, this view of what constitutes a scientific claim explains why our answers don't agree. A lot of so-called scientific claims are just guesswork, but these are not really scientific claims, just the opinions of some scientists. Still, I wish they'd read your answer and stop promoting damaging philosophical guesswork as the 'scientific view'.
    – user20253
    Commented Jan 22, 2019 at 15:48
  • "Where there is a claim it should be bullet-proof...." This assumes that false positives are a higher priority than false negatives. Which requires a value judgment. For example, the tobacco industry argued for decades that we shouldn't regulate tobacco until we could be certain that it caused cancer, while also promoting research that suggested it might not cause cancer. More acceptance for the risk of false negatives might have led to earlier regulation of tobacco, which could easily have lengthened millions of lives. See here: <books.google.com/books?id=CrtoNFTuPwwC>
    – Dan Hicks
    Commented Jan 22, 2019 at 18:02
  • Evaluating science isn't justifying it. This is slightly beside the OP question, but interesting nevertheless. What you are doing sounds a lot like risk management: en.wikipedia.org/wiki/Risk_management
    Commented Jan 22, 2019 at 18:47
  • @DanHicks - Good point. Where the jury is out on an issue the ethics might determine our actions. But what I would call a scientific claim is a claim justified by data and experiment. I would agree with Manu that the issue you raise is about risk management. The main thing for me would be to distinguish scientific claims from the speculations of scientists. They are regularly confused.
    – user20253
    Commented Jan 23, 2019 at 11:57
    @PeterJ To point out the part of Manu's comment you missed: a claim is always a hypothesis. The jury is always out, to one degree or another, or science would just stop. No claim is 'justified' by experiment; there is always more data to be gathered in the future that might change it in some way. That door is never closed. So the risk is never gone. Newtonian physics was settled fact, until it wasn't.
    – user9166
    Commented Jan 26, 2019 at 19:58

The leading answer is obviously correct, but it may not be concrete enough.

In the 1970s, homosexuality per se stopped being classified as a mental disorder because of political pressure to re-analyze a long-established scientific consensus to the contrary. Knowing in retrospect that this line of study had a cultural bias behind it, researchers subjected all of that work to closer scrutiny, and much of it was basically discarded. The 'justification' of that existing work was changed by an ethical consideration.

More recently, there are three cases I have followed that came to the point of mass popularization only after the scientific community raised its ordinary standards of acceptance and delayed approving, publishing, or citing the work, because doing so could have had unfortunate effects.

  • Thinking revived by The Bell Curve, which considers the relative intelligence of classes and races and revives citation of older eugenicist views that are now broadly controverted.

  • The work explained in The Man Who Would Be Queen, which proposes an overall diagnosis that most male-to-female transsexuals (but not all) really have a totally different condition, and that their transsexuality is a symptom.

  • The studies, including those of James Cantor, that indicate pedophilia is a physiological brain-configuration problem that cannot be treated, which leads to the specter of using brain scans to detect criminals.

(The choice of this list is obviously skewed entirely by my vested interest in the case I introduced this with. There are equally strong examples that would be preferred by someone to the political right. The last two authors also pretty much 'won', because they are good scientists. So this is not about silencing opinion; it is about varying the standards of rigor in a socially productive way.)

Doing that is using ethics to change the criteria for 'justification'. It is a real thing, and a good one.

Scientific claims are never settled, but they do take on a greater degree of presumed correctness if they are repeatedly cited. They can contribute, ultimately, to paradigmatic principles.

So it is very important that potentially damaging claims not succeed too easily. Once they do, we get led down the tortured paths documented in the kind of history popularized in Stephen Jay Gould's The Mismeasure of Man.

In the shorter term, while there is always the opportunity to publish controverting papers, doing the research involved can be adversarial when the subject is politically tense. So the pushback comes as irrelevant political arguments instead of engaging good science in response.

The argument then becomes established through 'ad baculum' attrition, as people are led away from threatening their careers by engaging in a politically contentious process to get published, or even to speak about their work. Given the way this sort of thing gets handled outside science, nobody is going to repeat these experiments, even to contest them. So we need to be more certain the originals hold water, or worse science results.

Avoiding this is good science, even if it directly alters the justification process by holding people who do audacious work to an occasionally unfair higher standard.

It counteracts the more subtle effects alluded to by Conifold's first comment. The fact that humans are doing science, with a particular sense of what is simple, when a measure is good, how mechanical a mechanism has to be, and when an argument is spurious, automatically skews the kinds of hypotheses that arise and how we combine them. People are political animals, science is a social process, and we have a strong tendency to all have the same thoughts, especially if we share a formative culture. Those tendencies inevitably bias the small decisions made in day-to-day scientific work, and those biases can be aspects of harmful broader social trends.

Knowing that, it is perfectly reasonable to push back with other aspects of social processes, like arguments about ethics.

  • It is certainly good practice to make sure the research is sound and the data is not being misinterpreted, but I see no examples here of scientific claims being modified by ethical evaluation, just poor or optional interpretations of the data being corrected by care and thoughtfulness.
    – user20253
    Commented Jan 28, 2019 at 10:10
  • @PeterJ And I see you changing the rules so you won't be wrong.
    – user9166
    Commented Feb 4, 2019 at 20:24
  • What is your point? Can you provide an example of a scientific claim being modified by an ethical evaluation? If not, then why argue?
    – user20253
    Commented Feb 5, 2019 at 9:32
  • @PeterJ What would such an example need to be? Something that was settled for years and then changed because of ethical considerations? Done. Something that would ordinarily have been considered accepted but was not, because of the potential ethical consideration? Done. Something that meets some bizarre prejudice you have about what is and is not scientific? Impossible, because no such thing exists. My point is that you are using a standard that does not exist. You are ignoring the core of the philosophy of science for the last 60 years.
    – user9166
    Commented Feb 5, 2019 at 19:08
  • @PeterJ My point is that you are not arguing, you are asserting dogmatically that what you refuse to see does not exist.
    – user9166
    Commented Feb 5, 2019 at 19:16

According to Popper, what is logical positivism's role in scientific ethics?

Popper's position is that logical positivism is false, so it has no role in scientific ethics. Popper criticised logical positivism in a couple of ways. Logical positivists wanted to justify induction. But as Popper pointed out, induction is impossible, so this program was doomed from the start.

In addition, they wanted to adopt methodological naturalism: they would observe what scientists do, and that would tell them the methods of science. As a result, experimental science would tell us how science works, and there would be no need for a separate field of philosophy of science. This naturalism couldn't address the problem of induction, since it presupposed that induction is possible, and it would have similar problems for any other controversy about methodology on which scientists disagree. Naturalists would also need to decide what sort of activities constitute science and which people are scientists. So the naturalists would just shift the problem of methodology to deciding those questions instead of talking about actual methodological problems directly.

See "The Logic of Scientific Discovery" by Popper, Part I, Chapters 1 and 2, and Section 17 of Popper's book "Unended Quest" for his criticisms of logical positivism.

Popper's position on scientific ethics is that scientists are fallible, that any scientific theory may be mistaken and that scientists should criticise their own theories and seek criticism from others. See "The World of Parmenides" Essay 2 Addendum 2 for more details.
