The most widespread assessment of Feyerabend is that his basic thesis is correct, that there IS no single method that applies to all of science, but that his inference to anarchism is an over-extrapolation. There are instead multiple useful methods, each of which applies across a range of different sciences. And this collection of methods has a family resemblance that involves respect for the pursuit of truth, fallibilism, and peer evaluation/critique.
However, the thesis of this question is incorrect. The replication crisis really has nothing to do with Feyerabend, and more rigorous statistics will have little effect on it. One of the major features of the replication crisis is that results from one experiment to the next tend to have means that fall well outside the range one would predict from the prior published results. This was the surprising conclusion from multiple re-examinations of historic drug-effectiveness trials. More careful analysis of the prior results would not have changed the fact that test-to-test variability is not well predicted by the experimental variability within a particular test series.
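To illustrate the point, here is a minimal stdlib-Python sketch. It assumes a hypothetical unmodeled "lab effect" on top of within-experiment noise (all parameter values are invented for illustration): each study's reported standard error then badly understates the spread actually seen across studies.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.5   # hypothetical underlying effect
WITHIN_SD = 1.0     # noise within a single experiment
BETWEEN_SD = 0.4    # hidden lab-to-lab variation (protocol, population, team)
N_PER_STUDY = 100
N_STUDIES = 200

study_means = []
for _ in range(N_STUDIES):
    lab_shift = random.gauss(0, BETWEEN_SD)   # unmodeled study-level effect
    data = [random.gauss(TRUE_EFFECT + lab_shift, WITHIN_SD)
            for _ in range(N_PER_STUDY)]
    study_means.append(statistics.mean(data))

# The standard error a single study would report for its own mean:
within_se = WITHIN_SD / N_PER_STUDY ** 0.5
# The spread actually observed from study to study:
observed_sd = statistics.stdev(study_means)

print(f"within-study SE:            {within_se:.3f}")
print(f"observed study-to-study SD: {observed_sd:.3f}")
```

No amount of careful analysis inside one study can reveal the between-study component; only actual replications expose it.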
The basic cause of the replication crisis is that science simply does not do enough replications. Repeated testing, in different labs, with slightly different protocols and experimental teams, is the best way for science to sort valid from spurious inferences. Revisiting the same question multiple times demonstrably does not lead to the same conclusion each time. And one will not discover this unless one actually does multiple replications of interesting results. An interesting corollary is that this principle applies to UNinteresting results too.
Failed experiments have not been the focus of the replication program, but the same experimental variability applies to them too: there are likely a large number of false negatives that were due simply to bad luck in experimental variability.
There are a variety of scientific practices that contribute to the replication crisis.
- Almost all original academic research comes from PhD programs. And every PhD awarded has to be for original work, which means that NONE of the primary output of academic research will be replications.
- Most journals will only publish "interesting" work, i.e., work that is statistically significant, original, and has a positive outcome. If a study has a negative or indeterminate outcome, or is a replication, it will generally be rejected for publication. This drastically skews the published literature: toward novel research rather than toward completing the evidence base on a subject; toward an over-focus on statistical significance, when even real effects are very often not significant in every test; and toward a massive discouragement of ever doing replications.
- The best-funded studies, with larger sample sizes, etc., come from commercial organizations. And they will generally only report results when those results are commercially useful, i.e., successful applications of a commercial product. This drastically skews the published literature on those products.
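The skew from the "significant and positive only" publication filter can be sketched with a small simulation (all numbers hypothetical): when only the studies that cleared the significance bar get published, the published mean effect is systematically inflated above the true one.

```python
import random
import statistics

random.seed(2)

TRUE_EFFECT = 0.2   # a small but real effect (hypothetical)
N = 50              # per-study sample size
N_STUDIES = 2000
Z_CRIT = 1.96       # two-sided p < 0.05

published, all_estimates = [], []
for _ in range(N_STUDIES):
    data = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    m = statistics.mean(data)
    se = statistics.stdev(data) / N ** 0.5
    all_estimates.append(m)
    # journal filter: significant AND positive results only
    if m / se > Z_CRIT:
        published.append(m)

print(f"true effect:               {TRUE_EFFECT}")
print(f"mean of all studies:       {statistics.mean(all_estimates):.3f}")
print(f"mean of published studies: {statistics.mean(published):.3f}")
```

The unpublished negative and indeterminate studies are exactly the ones needed to bring the literature's average back toward the truth.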
One of the concerns of critics of science has been "p-hacking", where researchers repeatedly reanalyze their data until they finally find an effect that meets the p < 0.05 criterion. The focus on statistical rigor, and on Bayesian analysis, is intended to address this supposed problem. However, this "problem" is just a sociological reaction to the publication policy of journals. Until journals also publish indeterminate results and replication studies, variations of p-hacking will continue to skew the scientific record.
And setting a more stringent criterion for publishing, which is the point of the Bayesian vs. frequentist debate, will not do anything to prevent the lucky statistical accidents that are so much a part of the replication crisis. What it WILL do is drastically increase the number of false negatives: cases where a real effect exists but is never explored, because the publication criterion prevented the interesting hints from an indeterminate but suggestive initial investigation from ever being published.
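The false-negative cost of a stricter bar can be sketched the same way (effect size and sample size are hypothetical): for a real but modest effect, raising the threshold from p < 0.05 to p < 0.001 causes most studies that detect the effect today to miss it.

```python
import random
import statistics

random.seed(4)

TRUE_EFFECT = 0.3   # a real, modest effect (hypothetical)
N = 50              # per-study sample size
TRIALS = 2000

detected_05, detected_001 = 0, 0
for _ in range(TRIALS):
    data = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    z = statistics.mean(data) / (statistics.stdev(data) / N ** 0.5)
    detected_05 += z > 1.96    # p < 0.05
    detected_001 += z > 3.29   # p < 0.001

print(f"detection rate at p<0.05:  {detected_05 / TRIALS:.2f}")
print(f"detection rate at p<0.001: {detected_001 / TRIALS:.2f}")
```

The stricter bar filters out exactly the suggestive-but-indeterminate first observations that would otherwise prompt a follow-up, while doing nothing about the between-study variability simulated above.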