
This is a question about normative theories in the philosophy of science and epistemology: I would like to know how I, as a research scientist/mathematician, should treat "cranks." I am not a philosopher and am unfamiliar with the literature on this topic. I would like recommendations for good papers on trust in the scientific process at or outside the boundaries of mainstream work, addressing questions like:

  • What are the limits of open-mindedness? Should a medical doctor be open-minded about new purported evidence for homeopathy, or is it right for them to refuse even to seriously consider new arguments or purported evidence for it?
  • For this and similar cases, such as the advances in personality typing claimed by the field of socionics, on what grounds can or should we reject such work prior to careful personal inspection of the arguments of socionics researchers? (For example, Wikipedia cites the authority of the Russian Academy of Sciences, which considers it pseudoscience.) Does epistemological virtue demand reading it in detail, or can I skim it and say, "This has lots of red flags, such as bold claims about unifying theories"?
  • Evidence is not "good faith"... Evidence is the base of science. Commented Feb 12 at 15:06
  • Interesting questions. Things that could be relevant might be: time (that you can afford to waste), evidence for the theory, trust in the integrity of the evidence, compatibility and applicability, whether it works, whether it's safe or comes with costs, and expert opinion. In the end you can't really discard anything unless it's disproven, which outside of math is quite difficult. But at the end of the day you just need to decide whether it's a waste of time or potential progress.
    – haxor789
    Commented Feb 12 at 15:32
  • @MauroALLEGRANZA "scientific evidence is not naive, it must be carefully vetted" - indeed. I think my question is in part what should qualify or disqualify purported evidence, say "pre-evidence", from going through thorough review by the scientific community or by me personally. Commented Feb 12 at 16:12
  • Look at theory-ladenness in the work of Kuhn, Feyerabend et al. Short summary: there is no escaping one's own prejudices.
    – Rushi
    Commented Feb 12 at 16:19
  • You don't have an obligation to investigate claims or theories that you think are crankish, but you do have an obligation to then remain silent about the issue and not pile on with other people who probably also haven't investigated the claims. This piling on tends to manufacture an environment where no one thinks the claim is worth investigating seriously. If you do that, you are engaging in politics, not science. Commented Feb 12 at 16:37

5 Answers


I would say, whether you're a scientist or not, your obligation to take seriously ideas that are extremely unorthodox (like Homeopathy) depends largely on your role in relation to the unorthodox idea.

If you are a medical doctor who serves patients, you probably can't be reasonably expected to spend your time vetting every possible wacky idea. It's infeasible to expect every doctor to do this, so these doctors ought to mostly focus on the proven-effective treatments and largely ignore the more wacky ideas.

If you are a reviewer in the peer-review process, however, you probably ought to take seriously every paper you've been asked to review, at least for long enough to give it an honest review.

Not everybody can afford to take every idea seriously; we have limited time, and some ideas have already been given more time than they deserve.


Anecdote

I once took a master's-level class in advanced electronics. The professor specialized in biomedical instruments and had completed many research projects for the military and the medical industry. He said that papers published under the electrical engineering peer-review process could not be trusted. He gave us a peer-reviewed paper from a professional journal describing a circuit to measure oxygen in the blood (a blood oximeter circuit) and told us to report the errors we found in it. The circuit could not perform as described in the paper. I found six errors. He said I could have stopped at three.

Flaws in peer review institutions

This article is a criticism of peer review as a political ("publish or perish") institution:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1420798/

Peer review is at the heart of the processes of not just medical journals but of all of science. It is the method by which grants are allocated, papers published, academics promoted, and Nobel prizes won. Yet it is hard to define. It has until recently been unstudied. And its defects are easier to identify than its attributes. Yet it shows no sign of going away. Famously, it is compared with democracy: a system full of problems but the least worst we have.

One difficult question is whether peer review should continue to operate on trust. Some have made small steps beyond into the world of audit. The Food and Drug Administration in the USA reserves the right to go and look at the records and raw data of those who produce studies that are used in applications for new drugs to receive licences. Sometimes it does so. Some journals, including the BMJ, make it a condition of submission that the editors can ask for the raw data behind a study. We did so once or twice, only to discover that reviewing raw data is difficult, expensive, and time consuming. I cannot see journals moving beyond trust in any major way unless the whole scientific enterprise moves in that direction.

So peer review is a flawed process, full of easily identified defects with little evidence that it works. Nevertheless, it is likely to remain central to science and journals because there is no obvious alternative, and scientists and editors have a continuing belief in peer review. How odd that science should be rooted in belief.

The Myth of Mental Illness

Thomas Szasz, author of The Myth of Mental Illness, might be considered a crank by other self-appointed experts who believe that there is such a thing as mental pathology! Szasz argues that a medical diagnosis involves observation of signs and symptoms, and often tests for the presence or absence of some physical evidence of pathology. The diagnosis of mental pathology involves nothing except a pattern of behavior and an interpretation of its meaning. He does not deny that people have problems in living, and that they seek to communicate with others to solve their problems, but Szasz does not accept the verdict of the mental health experts that they have discovered behavioral pathology! Is Szasz a crank, a heterodox expert, or the more scientific observer?

  • That is an enlightening answer. Thank you for your effort. Commented Feb 13 at 10:17

  • Unfortunately, the history of science and of mathematics shows that good faith can be a deceptive strategy.

    I remember some years ago the announcement of a proof of the Riemann hypothesis in mathematics. Of course there have been many wrong announcements before. But this time it was made by an eminent mathematician, a leading figure in his field.

    The press reacted enthusiastically, but critical colleagues pointed out several gaps in the presented sketch of the proof. The colleagues were right. The Riemann hypothesis remains open to this day.

  • On the other hand, it is a good strategy to be sceptical when a layperson presents a proof or a counterexample concerning a long-standing conjecture. One often recognizes that the author does not make much use of the technical literature; apparently he does not know it.

  • Hence I recommend looking at the paper, at the name of the author, and at his previous publications. If possible one should form one's own opinion. But knowing the arguments of institutions such as the corresponding academies, and reading one or two original assessments by experts in the field, is also a good strategy.


Things that might need to be considered in this pursuit (a non-exhaustive list) include:

  • Time
  • Potential
  • Compatibility
  • Applicability
  • Success of the Theory
  • Popularity
  • Evidence
  • Trust

The first and most obvious factor is time. This feeds directly into the exploration-exploitation dilemma: more or less, the question of whether you should spend your active hours on research (exploration) or on application (exploitation). Research has the potential to provide improvements and optimization of processes, or to be a waste of time, while running applications provides results, but at the risk of doing so very inefficiently (also wasting time and resources).

So the first question is probably: do you have the time for that? Or is your current system lacking so much that you need to take the time, because otherwise you'd be failing anyway? Or, in general, do you think it's time worth spending? Whether the answer is yes or no depends on the specifics and often comes down to the judgment of the person being asked...
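To make the trade-off concrete, here is a minimal epsilon-greedy multi-armed-bandit sketch in Python; the arm names and payoff numbers are invented purely for illustration, not taken from any real study:

```python
import random

# Toy model of the exploration-exploitation dilemma as a multi-armed
# bandit. Each "arm" is a research direction with an unknown average
# payoff; epsilon-greedy spends a small fraction of its rounds exploring
# and the rest exploiting the best arm found so far.
TRUE_PAYOFF = {"mainstream": 1.0, "fringe": 0.2, "crank": 0.01}

def epsilon_greedy(epsilon=0.1, rounds=10_000):
    totals = {arm: 0.0 for arm in TRUE_PAYOFF}  # summed rewards per arm
    counts = {arm: 0 for arm in TRUE_PAYOFF}    # times each arm was tried

    def estimate(arm):
        # Untried arms get +inf so each one is explored at least once.
        return totals[arm] / counts[arm] if counts[arm] else float("inf")

    for _ in range(rounds):
        if random.random() < epsilon:
            arm = random.choice(list(TRUE_PAYOFF))  # explore
        else:
            arm = max(TRUE_PAYOFF, key=estimate)    # exploit
        counts[arm] += 1
        totals[arm] += random.gauss(TRUE_PAYOFF[arm], 0.5)  # noisy reward
    return counts

# Most rounds end up on "mainstream", but a few are always spent
# checking the alternatives -- which is the point of the analogy.
print(epsilon_greedy())
```

Here epsilon plays the role of the time you can afford to "waste" on fringe directions.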

So, for example, the inventor of homeopathy cited as the reason for his research that the medicine of his time was severely lacking, in that the extensive use of bloodletting and laxatives was making patients weaker and more ill rather than helping them. And in that regard homeopathy might ironically have worked... in the sense that it does nothing and hence didn't make things worse than they already were.

So there might have been good reason to do research (which always goes outside the mainstream and into the unknown), given that the state of medicine wasn't great and there was potential for low-hanging fruit. It still ended up being mostly a waste of time and at best accidentally useful. Apparently he improved "clinical trials" as a way of testing his substances (though, being that fringe, this was of little significance to what we now use as clinical trials), and his drug testing revealed substances that actually have an effect... that is, before he gave them the homeopathic treatment of diluting them... but still, that is an inherent risk of any research: that it ends up being a waste of time.

Closely linked to the problem of limited time is the potential to be useful: is this a low-hanging fruit that can bring improvement without much effort, or something with a groundbreaking effect that is worth the time investment? Of course, the overuse of claims of the latter kind taints this risk-reward calculation so much that, often enough, the claim of a groundbreaking/revolutionary/all-purpose whatever is an immediate red flag for bullshit. That being said, it could still happen that this is not an overused PR claim but the impression you yourself get from your own work.

Next up would be compatibility and applicability. The "revolutionary" usually comes with the problem of being incompatible with most everything else. For example, if homeopathy were correct, we'd have a problem with the entire rest of science, because as of right now, according to those sciences, it shouldn't work: the substances are diluted below the level where chemistry could have any measurable effect. Seriously, they dilute it to the point where there is only a statistical chance that even one molecule of the "active" ingredient is present in the otherwise inert "solution". And the claim that the water has a memory of what it interacted with is in conflict with physics. Not to mention that higher potency through further dilution would have "interesting" consequences with respect to, say, pissing in the ocean... So if that were true, we might need to discard most of what we think we know and start from the ground up.
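The dilution point can be checked with back-of-the-envelope arithmetic. A sketch, assuming a typical "30C" preparation (thirty successive 1:100 dilutions) and, generously, a full mole of active ingredient to start with:

```python
# Back-of-the-envelope check of the homeopathic dilution claim.
# Assumptions (for illustration only): a "30C" remedy, i.e. thirty
# successive 1:100 dilutions, starting from one mole of ingredient.
AVOGADRO = 6.022e23            # molecules per mole
starting_molecules = AVOGADRO  # generous: a whole mole of the substance
dilution_factor = 100 ** 30    # 30C = (1:100) applied thirty times = 1e60

expected_molecules = starting_molecules / dilution_factor
print(f"expected molecules per dose: {expected_molecules:.1e}")
# ~6.0e-37: you would need on the order of 10**36 doses to encounter a
# single molecule of the "active" ingredient.
```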

That's not to say that things like this don't happen: there is the concept of the scientific revolution, where you have stages of building upon previous knowledge within a paradigm and then phases of paradigm shift, in which people suddenly cite different papers and build upon those. Though usually the evidence remains largely unchanged; it's rather the explanation that changes. So even in that regard it's very unlikely that homeopathy is true and all of science is wrong, given that this would mean we should at least see an effect and have an explanation for why we don't see one now.

So the problem is that the further you walk away from the mainstream, the more alone you are, and you have to supply yourself with structures and frameworks that the existing institutions already have. If, say, millions of people have checked the theories for internal consistency, it becomes much easier to build on the work of other people than if you reinvent the wheel with nobody around who knows what you're doing. In the latter case there is a lot more to do that has nothing to do with the actual research.

So the more niche the subject and the more fringe the assertion, the less compatible it is with other research, and the harder and less likely it becomes to find an application for it.

Which blends into the next point, "success of the theory": basically, whether it works. If it's not just a nice idea, but there is an actual proof of concept, or perhaps already a prototype showing promising results for a relevant problem, then even if the underlying idea is bullshit, the topic might nonetheless be interesting.

And usually the aggregate "potential" of a theory correlates with "popularity". Now, popularity is not an indicator that something is correct, far from it; but in the end humans are social animals, so even if the masses are chasing bullshit it might still be worth paying attention, as that is likely to shape the environment, blend into the language, and determine which problems are considered relevant and which get funding, and so on.

So at first you might check whether you have the time and whether it is well spent on the issue: whether the topic in general is worth it, and then whether the paper in particular is worth it, which includes the evidence cited and the internal and external consistency with the facts. The thing is, the more easy-to-spot errors there are in the paper, the more likely it is that there are better sources, or that you are better off tackling the problem yourself, if you feel up for that kind of challenge.

Which brings us to the last point on my non-exhaustive list: trust. Now, that probably sounds counter-intuitive, as it's literally a fallacy (appeal to authority) to trust institutions, experts, and whatnot. And no, it's not appeal to "false authority"; it's appeal to authority: the fact that an expert says something does NOT mean that they are correct. You can see that plainly from the start of this answer: research does not inevitably lead to a positive outcome but can also be a waste of time, so failure is not even necessarily a flaw of the researcher; it might just be bad luck. Though when push comes to shove you only have two options, and realistically only one: either you trust experts or you do it yourself (and your time is limited, so even if you do parts of it yourself you'd end up with a pretty limited dataset). So practically you need to trust at least some people.

And that includes trust in their ability to collect data (because if they fuck up that part, everything downstream is completely worthless), in their ability to analyze the data, and in their integrity not to obfuscate or tamper with the data or exaggerate their findings. Now, the deeper you are into a topic, the more realistic your take on the expectations and on the analysis, but if you can't trust the data itself, that won't help you either.

So, ironically, the currency within the scientific community is trust. The more institutionalized experts have proven under pressure that they are capable of performing the job they attempt (they got their degrees and reputation), their work has been tested by their peers, and ultimately their reputation and career rely on their integrity. Because the thing is, if they fuck up, and I don't mean being wrong but being sloppy or even cheating, then their career is over. If you can't trust the data that people present, then their entire body of work is worthless: you'd need to redo every experiment, every analysis, and so on. And not only is their work worthless, they may have tainted the work of all the people who built upon it, wasting those people's time as well.

So often enough the default trust in established experts is higher, as they a) have proven some level of expertise, b) have a reputation among other experts, and c) would lose a lot if they worked sloppily.

Conversely, the scientific community's usual quality-management mechanisms are "peer review" and "impact factor": whether a subset of your peers approve of your methodology and work standards, and how relevant the article and the journal are in terms of how well received they are in the scientific community, which is measured by citations. If other people cite your work as their theoretical foundation, those papers of yours increase in value; and if journals publish lots of such highly ranked papers, their impact factor increases, and with it their interest in curating their publications and getting rid of the bullshit.
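For what it's worth, the two-year impact factor referred to here is just a ratio; a minimal sketch with invented numbers:

```python
# Minimal sketch of the standard two-year journal impact factor:
# citations received in year Y to items the journal published in years
# Y-1 and Y-2, divided by the number of citable items it published in
# those two years. All numbers in the example are invented.
def impact_factor(citations_in_year: int, citable_items_prev_two_years: int) -> float:
    return citations_in_year / citable_items_prev_two_years

# e.g. 480 citations in 2024 to a journal's 200 articles from 2022-2023
print(impact_factor(480, 200))  # -> 2.4
```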

Though at the same time, peer review is only as useful as the peers who review. So the more fringe and novel the subject matter (and that's not limited to "cranks"; it also includes, say, the AI hype, or less flashy fundamental research without a long list of researchers), the fewer peers there are, the less they know about what you're actually doing, and in turn the less competent the reviews you can expect. The reviewing process may nominally be blind, but often enough, if the subject matter is fringe enough, you can already tell who your reviewers are from the writing style, the complaints, or just their small number...

Same with impact factors: apparently they have bred a publish-or-perish culture in which popularity, being around so that people know you, know your stuff, and consequently read and cite your (many) papers, creates a feedback loop where socializing and networking are rewarded rather than proper scientific research.

So there are attempts to hold research to a certain standard and to provide value by highlighting what's worth reading. But in the end a lot of that rests on at least some good faith and trust in people's integrity, and it's far from fail-safe.

So in the end, "science" is usually what you're left with years or decades after the initial publications: after the hype cycle has moved on, after people have tried to build upon the findings (and thereby incidentally checked them) and either failed or succeeded, after things have been canonized.

Though by then you're usually no longer reading papers but textbooks, or you look at the initial paper only as a historical document, no longer for its content. Papers, meanwhile, are usually read either because they are fresh and interesting or because they are specific to a particular experiment, and that always comes with some risk.

Sure, it could be the next scientific revolution, or you could have stumbled upon a Ramanujan. But even people like Niels Bohr (?) said something along the lines of "you first need to publish a lot of normal stuff before you've established yourself enough for people to take your more daring approaches seriously", and despite the revolutionary papers it may be more as Planck said, "Progress is made one funeral at a time": essentially, the acceptance of a new theory comes about because the proponents of the old theory die out and the students want to learn another theory.

So in the end you probably need to develop an "intuition" of your own as to whether something is worth spending your time on. That isn't meant to be "emotional": in the end you yourself are also some sort of expert, and you probably have a professional opinion as to whether an idea is interesting, promising, and feasible, whether your peers talk about it, and whether the person is a master of their craft, a novice, or a career jumper. And if it's riddled with errors and red flags, isn't interesting, and no one is paying attention, then that's probably for a reason.

So a scientist should be professionally open-minded, given that most of what is being done is model building, where it is never certain that the models are actually correct. But there are only so many things you can be an expert in, and if you already feel that something is suspicious, you should probably let people develop it further on their own and see where it goes before you jump on a bandwagon.


This question comes out of a lack of understanding, or a misunderstanding, of power. People engage other people reasonably, but pretty soon the conversation gets heated and their "point of authority" gets challenged. They start losing their intellectual "high ground", and (understandably) they don't like it.

Many people, at such a point, walk away or strike at the individual, because they don't know how to deal with the contradiction of power. There aren't too many mechanisms. One mechanism is to use the holy name of GOD (YHVH): "That's (Y)our opin(I)on.", for example -- which you are not to use in vain.

Another is to dig down, find their premise, and mock them: an ad hominem. But it only works when you find the right combination of sounds that essentially acts as a curse.

These are both "bad faith" when used wrongly. But sometimes the soul (your personal mind) must use force to protect itself, just as your body does. So know when and how to use these techniques without resorting to bad faith.

There are appropriate uses for violence, but that is a much more challenging road (called the "enlightened warrior") and probably not appropriate for Philosophy SE where reasonable argument should prevail. ;^)
