53

My sub-field has been haunted by the fact that, for slightly over fifty years, the key foundation of everything we do has been in empirical dispute. The empirical anomalies have generated tens of thousands of articles; a single anomaly alone accounts for more than 3,800 of them.

The problem came about because the key work was done before the mathematics had been settled, so assumptions replaced theorems. I know this because the original core papers contain a math mistake. Mathematicians ultimately did learn how to solve this class of problems, but neither field ever realized what the other did not know. I found the mistake. It is subtle, but catastrophic.

I had assumed that writing a paper and running a population test on the data we have would bring about change. That turned out to be very naive. Other than continuously presenting at conferences, what else can I do to move my field away from a technique that can be proven to be completely uncorrelated with nature? How do you get people to stop using the accepted practice?

It isn't a secret that the technique doesn't work, but it has always been assumed to be close, in some very loose sense of "close." Or rather, it has long been assumed to be a poor approximation, and that if the one thing causing the phenomenon were found, it could be added to the existing model and all would be well. How do you move academics when the technique is in the undergraduate textbooks, papers using it are being accepted for publication, and every year there are seminars on either a new anomaly or some other way in which it does not work?

Any strategy would be welcome.

  • Comments are not for extended discussion; this conversation has been moved to chat.
    – eykanal
    Commented Dec 14, 2016 at 14:08

3 Answers

48

Two quotes immediately came to mind:

A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it. - Max Planck

I'm trying to free your mind, Neo. But I can only show you the door. You're the one that has to walk through it. - Morpheus from The Matrix

To effect a change of the magnitude you indicate, four aspects must be communicated clearly and unequivocally:

  1. The existence of the error (i.e., demonstrate that the prior approach is incorrect). Other academics in the field may not recognize that there is an issue.
  2. The history of the error (i.e., why the prior approach was used and how it has persisted for 50 years). Given its age and prevalence, other academics may understandably presume that the prior approach has been validated before and may dismiss your claims on that basis.
  3. The implications of the error (i.e., what negative impact the prior approach has). The effect of the prior approach may not be clear or other academics may perceive it as negligible relative to the effort necessary to adopt a new approach.
  4. The change itself (i.e., the course of action to rectify the error - the new approach). Other academics may not know what should replace the prior approach.

You should first make sure that these have been satisfied in your previous publications. Assuming this has been achieved, there is really only so much you can do (see the quotes). You obviously can't force people to change their ways. In addition to being patient, you can (where appropriate):

  • Continue referencing it in future papers and conference presentations.
  • Highlight it in peer review.
  • If you teach, incorporate it into your courses.
  • Write a review article on the error in your field (one way to address #2 above).
  • Communicate (1-4) in relatively plain language on your academic blog (assuming you have one).
  • Send your paper(s) to the authors of the undergraduate textbooks with a letter explaining (1-4) and why the change should be included in the next edition.
  • Write your own textbook.
  • 7
    @user25459 One more, not mentioned here: try to get publications in fields on the boundary of your main field, perhaps even in neighbouring disciplines that are close to the techniques under discussion but have less to lose. Commented Dec 12, 2016 at 8:46
  • 75
    And, as always when seemingly being the lone voice of reason, make sure you're actually right.
    – Weckar E.
    Commented Dec 12, 2016 at 10:07
  • 3
    Note that it is entirely possible you may not be able to effect change simply because of name recognition. I obviously don't know anything about you (Mr. OP), but if you're a recent graduate or someone with few publications in the field, you may have a hard time getting people to take you seriously even if you are entirely right.
    – eykanal
    Commented Dec 12, 2016 at 13:57
  • 5
    @CaptainEmacs that is rather humorous actually. I tried that. It was rejected as obvious. I sat there and just laughed because it had been previously rejected in the field as obviously false. Commented Dec 13, 2016 at 0:18
  • @Guho I never thought about an academic blog. I just got my doctorate but this is my third career. I have grandchildren. Sometimes I forget that typewriters and index cards don't rule the world. I have worked in two other disciplines, which is why I think I was very sensitive to other perspectives on the same topic. Commented Dec 13, 2016 at 0:23
16

Improper use of statistics is a widespread problem in many experimental sciences. I worked for some years in discrete optimization and grew more and more frustrated that the "numerical experiments" were usually done without any reference to a statistical method, so that many good results were just random noise or the product of calibrating an algorithm to a very small data set.
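
To make that concrete, here is a toy sketch (hypothetical numbers and plain NumPy; my own illustration, not taken from any particular paper). Two heuristics of identical quality are compared on a handful of instances: one of them "wins" on raw means, but a simple paired permutation test shows the gap is indistinguishable from noise.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical runtimes of two heuristics of *identical* quality on 8
    # benchmark instances; the only difference between them is noise.
    a = rng.normal(loc=100.0, scale=5.0, size=8)
    b = rng.normal(loc=100.0, scale=5.0, size=8)
    print(f"mean A = {a.mean():.1f}, mean B = {b.mean():.1f}")  # one "wins"

    # Paired permutation test: randomly flip the sign of each per-instance
    # difference and count how often the shuffled mean difference is at
    # least as extreme as the observed one.
    diff = a - b
    observed = abs(diff.mean())
    flips = rng.choice([-1.0, 1.0], size=(10_000, diff.size))
    null = np.abs((flips * diff).mean(axis=1))
    # Since the "difference" is pure noise, p is usually large.
    print(f"p-value = {(null >= observed).mean():.3f}")

Reporting only the raw means here would let you declare a winner; the test shows there is nothing to declare.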

I have read horrible things about the abuse of statistics in psychology and medicine (resulting in very low reproducibility), and I guess the same thing happens in many experimental fields.

My (pessimistic) view is that in a "publish or perish" culture, people tend to bend methods until they break, and this is especially easy with statistics: it is mathematical and often hard for the non-expert to grasp, and errors typically lead not to hard logical contradictions but to a weakened result.

Establishing questionable statistical methods in a research area often yields many more "positive" results that people can publish, and this is what everybody wants. Most results are never reproduced, so non-reproducible results are likely to stay "true" for a very long time.
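
As a hedged illustration of how this inflates "positive" results (again a toy simulation of mine, not data from any real field): run enough tests on pure noise and a few findings will clear the usual significance bar anyway.

    import numpy as np

    rng = np.random.default_rng(1)

    # 40 hypothetical "studies", each testing an effect that is truly zero:
    # 30 observations of pure noise per study, tested at alpha = 0.05.
    n_studies, n_obs = 40, 30
    data = rng.normal(loc=0.0, scale=1.0, size=(n_studies, n_obs))

    # One-sample t statistic per study, with the usual ~1.96 cutoff.
    t = data.mean(axis=1) / (data.std(axis=1, ddof=1) / np.sqrt(n_obs))
    positives = (np.abs(t) > 1.96).sum()

    # About 5% of these null studies come out "significant" by chance:
    # spurious positives that will never reproduce.
    print(f"'positive' findings: {positives} of {n_studies}")

Each spurious "discovery" is publishable on its face, and if nobody attempts a replication, it stands.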

  • 4
    Echoes the question, doesn't answer it.
    – user18072
    Commented Dec 12, 2016 at 16:42
  • 1
    @immibis: I have friends who were told straight-out by their academic advisers not to try to reproduce anything. Yes, it's required in theory for scientific progress; but no, it's generally not a high-profile thing to get publication interest. (Is my understanding.) Commented Dec 12, 2016 at 23:33
  • 2
    @M.M John von Neumann and Oskar Morgenstern published a footnote to an article warning that this class of problem had not been solved and that the proofs may only look like proofs. Unfortunately, they referenced an article written in German. After that, von Neumann went on to other, more interesting things, like the von Neumann architecture for computers and the hydrogen bomb, and he never came back to it. Leonard Jimmie Savage argued that arguments about the foundational rules of any discipline are the most contentious. Peer review is supposed to catch this kind of thing, but it got missed in the base article. Commented Dec 13, 2016 at 0:16
  • 3
    @DanielR.Collins My thought was that while reproducing a previous result is generally not high-profile, failing to reproduce a result might be. Commented Dec 13, 2016 at 1:10
  • 1
    @M.M: Peer review means that people from your subject who follow the same standards review your material. If some "strange" methods/standards become established, they are likely to stay for a while. Commented Dec 13, 2016 at 7:54
5

Start by convincing people like me.

I have read your articles and posts on these supposed errors. While you have convinced me that there are shortcomings in Modern Portfolio Theory (and all its offspring), I think most practitioners and academics operate under the assumption that all models are wrong but some are useful. That is, I think most of us already know the difference between models and reality.

I don't think you'll convince anyone that your solution to the portfolio problem is any less of a model than the one you are critiquing. As a result, I would recommend that you shift your focus away from technical arguments about the shortcomings of others and instead toward the high-level benefits of the paradigm shift.

I'd also like to emphasize the "high-level" aspect. The argument, although technical, should be written clearly and concisely. It should be readily accessible to practitioners. If MBA students don't understand the benefits of switching, stodgy academics will see no reason to stop defending the status quo.

I recommend you start by convincing people like me because, ultimately, you need to convince practitioners that the benefits are worth the mental and tangible switching costs.
