
Feyerabend was critical of the scientific method and claimed in his book "Against Method" that "anything goes". If I understood correctly, he meant that there is no single scientific method, but he does not specify common components shared by every scientific method either, thus concluding that "anything goes". Are the following observations evidence that Feyerabend is wrong?

  • The replication crisis demonstrated that a huge number of findings in the social as well as the natural sciences cannot be replicated. Isn't the solution to the replication crisis more stringent statistical methodology?

  • Also, Feyerabend never specified common elements of scientific methodologies, such as observation, hypothesis formation, or data analysis, which enabled the majority of scientific discoveries. There are common elements to all scientific methods, contrary to Feyerabend.

  • One might find suitable theories through mathematical manipulation of existing theories. However, these theories will be tested against the physical world; thus, although one might argue to have found truth before testing, we cannot know whether these theories really hold up to reality before testing them.

  • Non-mathematical theorizing can discover "trivial" workings of reality. However, this seems to work only for low-hanging fruit and usually involves at least observation of the physical world plus a lot of undocumented reasoning.


3 Answers


The most widespread opinion of Feyerabend is that his basic thesis is correct, that there IS no single method that applies to all of science, but that his inference to anarchism is an overextrapolation. There are instead multiple useful methods, each of which is generally useful across a range of different sciences. And this collection of methods has a family resemblance that involves a respect for the pursuit of truth, fallibilism, and peer evaluation/critique.

However, the thesis of this question is incorrect. The replication crisis really has nothing to do with Feyerabend, and more rigorous statistics will have no effect on it. One of the major features of the replication crisis is that the means of repeated experiments tend to fall well outside the range one would predict from the previously published test results. This was the surprising conclusion from multiple revisitations of historic drug-effectiveness tests. More careful analysis of the prior test results would not have changed the fact that test-to-test variability is not well predicted by the experimental variability within a particular test series.

The basic cause of the replication crisis is that science simply does not do enough replications. Repeated testing, in different labs, with slightly different protocols and experiment teams, is the best way for science to sort valid from spurious inferences. Revisiting the same question multiple times demonstrably does not lead to the same conclusion each time, and one will not discover this unless one actually does multiple replications of interesting results. An interesting corollary is that this principle applies to UNinteresting results too. Failed experiments have not been the focus of the replication program, but the same experimental variability applies to them: there are likely a large number of false negatives that were due simply to bad luck.
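To illustrate the test-to-test variability point, here is a minimal simulation sketch (all numbers are made up for illustration, not taken from any actual replication study): if the effect each lab measures drifts because of protocol and population differences, the naive within-study standard error badly understates how far a replication will land from the original result.

```python
import random

random.seed(0)

TRUE_EFFECT = 0.5     # underlying average effect (assumed, illustrative)
BETWEEN_LAB_SD = 0.4  # lab-to-lab drift: protocol, population, team (assumed)
WITHIN_SD = 1.0       # per-subject noise
N = 30                # subjects per study

def run_study():
    """One study: the lab's own effect drifts, then N subjects are sampled."""
    lab_effect = random.gauss(TRUE_EFFECT, BETWEEN_LAB_SD)
    data = [random.gauss(lab_effect, WITHIN_SD) for _ in range(N)]
    mean = sum(data) / N
    se = WITHIN_SD / N ** 0.5   # naive SE: within-study variability only
    return mean, se

TRIALS = 2000
outside = 0
for _ in range(TRIALS):
    m1, se1 = run_study()           # original study
    m2, _ = run_study()             # replication in another lab
    if abs(m2 - m1) > 1.96 * se1:   # outside the original's naive 95% band
        outside += 1
frac_outside = outside / TRIALS
print(f"replications outside the original 95% band: {frac_outside:.0%}")
```

With these assumed numbers, well over a quarter of replications land outside the original study's nominal 95% interval, even though only ~5% "should" if within-study error were the whole story.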

There are a variety of science practices that lead to the replication crisis.

  • Almost all academic original research comes from PhD programs, and every PhD awarded has to be for original work, which means that NONE of the primary output of academic research will be replications.
  • Most journals will only publish "interesting" work, i.e., work that is statistically significant, original, and has a positive outcome. If a study has a negative or indeterminate outcome, or is a replication, it will generally be rejected for publication. This drastically skews the published literature toward innovative research rather than completing a database on a subject, produces an over-focus on statistical significance (when even real effects are very often not significant in every test), and massively discourages ever doing replications.
  • The best-funded studies, with larger sample sizes, etc., come from commercial organizations, and they will generally only report results when those results are commercially useful, i.e., successful applications of a commercial product. This drastically skews the published literature on these commercial products.

One of the concerns of critics of science has been "p-hacking", where researchers repeatedly reanalyze their data until they finally find an effect that meets the p < 0.05 criterion. The focus on statistical rigor, and on Bayesian analysis, is intended to address this supposed problem. However, this "problem" is just a sociological reaction to the publication policy of journals. Until journals publish indeterminate and replication studies too, variations of p-hacking will continue to skew the science database.
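A quick way to see why p-hacking works is a simplified simulation sketch. (Simplifying assumption: the k analyses are modeled as independent tests on fresh null data, which overstates the inflation somewhat compared with reanalyzing one dataset, but the direction is right.)

```python
import random

random.seed(1)

N = 20         # subjects per group (illustrative)
TRIALS = 2000  # simulated "papers" per setting

def one_null_test():
    """Two-group comparison with NO real effect; does it look 'significant'?"""
    a = [random.gauss(0, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]
    diff = sum(a) / N - sum(b) / N
    se = (2 / N) ** 0.5           # sd is known to be 1, so a z-test is exact
    return abs(diff / se) > 1.96  # p < 0.05, two-sided

def hacked(k):
    """Researcher tries k analyses and reports success if ANY one 'works'."""
    return any(one_null_test() for _ in range(k))

rates = {k: sum(hacked(k) for _ in range(TRIALS)) / TRIALS for k in (1, 5, 20)}
for k, rate in rates.items():
    print(f"{k:2d} analyses tried -> false-positive rate ~ {rate:.0%}")
```

One honest analysis yields the nominal ~5% false-positive rate; twenty shots at the same null pushes it toward two thirds, with no real effect anywhere.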

And setting a more stringent criterion for publishing, which is the point of the Bayesian vs. frequentist approach, will do nothing to prevent the lucky statistical accidents that are so much a part of the replication crisis. What it WILL do is drastically increase the number of false negatives: cases where a real effect exists but is never explored, because the publication criteria prevented publishing the interesting hints that an indeterminate but suggestive initial investigation provided.
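The false-negative cost of a stricter threshold can be sketched the same way (illustrative numbers: a real but modest effect, a single initial study, tested at p < 0.05 vs. a stricter p < 0.005 criterion):

```python
import random

random.seed(2)

TRUE_EFFECT = 0.4  # a real but modest effect (assumed, illustrative)
N = 30             # subjects in the single initial study
TRIALS = 2000

def z_stat():
    """Known-variance z statistic for one study of the real effect."""
    data = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    return (sum(data) / N) / (1.0 / N ** 0.5)

zs = [z_stat() for _ in range(TRIALS)]
power_05 = sum(z > 1.96 for z in zs) / TRIALS   # detected at p < 0.05
power_005 = sum(z > 2.81 for z in zs) / TRIALS  # detected at p < 0.005
print(f"real effect detected at p < 0.05:  {power_05:.0%}")
print(f"real effect detected at p < 0.005: {power_005:.0%}")
print(f"extra false negatives from the stricter criterion: "
      f"{power_05 - power_005:.0%}")
```

With these assumed numbers the stricter criterion roughly halves the chance that the real effect clears the bar on its first appearance, so many true effects would never get published or followed up.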


There are more than a dozen different explanations for the replication crisis. Different explanations require different responses. For example, publish-or-perish incentive structures wouldn't be addressed by encouraging psychologists to be more statistically rigorous. I list some other explanations on slide 6 of this presentation: https://drive.google.com/file/d/11LpcCZlPKg-kzK_WhCv6rsy7PXLI5-4v/view?usp=drivesdk

Computer simulations and formal mathematical models are used in fields such as climate science, evolutionary game theory, cosmology, and computational chemistry. The relationship between these models and observation is complex and often indirect, in part because these fields often study phenomena that can't be observed directly (e.g., because of time scales or abstraction). The definition of "observation" has to be stretched very thin to fit a generalization like "all sciences use observation." There are similar problems with talking about hypotheses or mathematical theories. Paleontology and archaeology often can't operate in a hypothesis-testing way because they can't produce data on demand. The big claims made by these historical sciences might be qualitative explanations of particular events (what caused the KT extinction?) rather than mathematizable generalizations.

In general, it's extremely difficult to find substantive commonalities across all scientific fields.

  • Appreciate the answer. Considering page 6 of your presentation, as I understand it, you list these as factors which could be improved, implying some value system about what is better for establishing truth. Isn't that against "anything goes"? (I absolutely agree with these recommendations; however, I'm not sure Feyerabend would have.) Commented Jan 9, 2022 at 21:44
  • Feyerabend's argument is that we need methodological anarchism because the aim of science is truth. The basic argument strategy of Against Method is to show that, for any given epistemological principle, some historical scientist made progress towards truth by violating that principle.
    – Dan Hicks
    Commented Jan 10, 2022 at 15:54
  • Couldn't findings which violated good practice also be by chance, or because they were low-hanging fruit? For example, I could find an effect with a very small sample size, but it doesn't follow that sample size is unimportant, and obviously one should not recommend a small sample size. Commented Jan 10, 2022 at 16:02
  • @Rubus -- I can't speak for Dan Hicks, but will try to answer here anyway. YES, there are better practices, and larger sample sizes are among them. BUT -- many sciences cannot easily control sample size -- astronomy, anthropology, archaeology, sociology, economics, geology -- many sciences are primarily observational. And for, say, medicine -- case studies, which have very small sample sizes, are often very illuminating/suggestive of problems with treatments and of ways to improve them. Generalities have exceptions. Feyerabend focused on these valid exceptions to an excessive degree.
    – Dcleve
    Commented Jan 10, 2022 at 18:48
  • OK, but shouldn't it then be "anything goes, but only if there aren't better alternatives"? I mean, clearly there is a hierarchy of evidence... Commented Jan 10, 2022 at 19:29

Agreeing with @Dcleve's answer, I would like to add to it.

Feyerabend's thesis(1) of "anything goes" means, among other things, that there is no a priori reason that throwing a coin in order to derive natural laws or decide actions is on average worse(2) than what is done using scientific methods (where applicable). In fact, there is a nonzero chance this random walk over possible actions can arrive at more profound and deeper outcomes.

Also, there is no a priori reason that effective cures for diseases cannot be found through, for example, alternative conceptions of reality, like myths and storytelling, rather than "narratives" based on molecular biology.

Another reason is that cultural hegemony should not blind us to alternative but equally effective conceptions of physical reality and methodology.

It is a fact that a given outcome may be arrived at through different routes.

So for these and other reasons Feyerabend's thesis cannot simply be dismissed. Even if one adheres to the scientific method, it is still beneficial to keep in mind that anything goes, as it can have a positive effect on scientific narrow-mindedness which can be detrimental to the pursuit of knowledge.

References

  1. Paul Feyerabend, Against Method
  2. No-free-lunch (NFL) theorem
  • While there are examples where neglect of the scientific method yielded valid findings, the overwhelming majority of scientific progress has been accomplished because of adherence to scientific standards. Also, findings which have been found by unorthodox methods usually were low-hanging fruit. Can you name one finding in modern science (last 50 years) that was made due to unorthodox methods? Commented Jan 13, 2022 at 15:42
  • "that there is no a priori reason that throwing a coin in order to derive natural laws or decide actions is on the average worse than what is done using scientific methods (where applicable)" -- you can also write a novel by randomly generating words until the novel makes sense (infinite monkey theorem); however, it will take practically forever compared to a target-oriented approach. Commented Jan 13, 2022 at 15:46
  • There are various examples of breakthroughs or intuitions that have been found by unorthodox methods, even by chance, but have been reformulated in standard scientific terms. A sampling of the history will provide examples. But you fail to account for the hegemony of a certain approach in the last 400 years, so any comparison is unfair. In any case, it does not diminish Feyerabend's thesis.
    – Nikos M.
    Commented Jan 13, 2022 at 15:49
  • Can you name one finding in modern science (last 50 years) that was made due to unorthodox methods? When somebody does personal experimentation, it has to be replicated due to various methodological problems with personal experimentation: selection bias is one, nonexistent sample size is another, etc. It's not as if results from personal experimentation are equally valid as double-blinded randomized controlled trials. Yes, you can find truth through personal experimentation; however, until validated, it should be regarded with skepticism. Also, there is no "hegemony of science". Commented Jan 13, 2022 at 15:54
  • @NikosM. -- The scientific method has been developed to deal with this world, not with an arbitrary abstract collection of all logical possibilities. The scientific method assumes predictability, and predictability involves the assumption that phenomena behave without multiple radical discontinuities, and that most phenomena are timewise stable, not chaotic. These assumptions appear to be true of our universe. Assuming they will continue to be true is fine by pragmatism, though not for logicians. We are talking about how to do science, so a pragmatic approach is justified.
    – Dcleve
    Commented Jan 13, 2022 at 22:51
