59

One of the fundamental features of science, maybe even the most important, is that publication of scientific results is peer-reviewed.

I want to understand why peer review is effective in the scientific community, because I want to apply principles of peer review to a different domain.

As I understand it, a researcher reviews the work of a competitor who produces scientific results in exchange for grant funding and reputation, much as the reviewing researcher does. Of course, the competition may not be very direct, but the reviewer works in an overlapping area of research. In addition, every human being has some natural bias toward grooming their own ego. Based on this, I would expect most reviews to be very negative. However, it seems that biased and extremely negative reviews of decent submissions are rare.

What stops reviewers from providing biased negative reviews?


9 Answers

51

The term "competitors" does not really describe the relationship between different scientists in the same field. Of course there is some competition, for grants or to be the first to put one's name on a new result, but some aspects are deeply different:

  • A publication in your field is a good thing, even if it's not your own. It shows that your area is important and alive, which actually matters a lot when it comes to applying for grants.

  • You cannot benefit from someone else's work by writing negative reviews. There is no practical way to slow down the publication of a paper long enough to publish the result first yourself.

  • A rejected paper will not slow down the "competitor's" research. Sure, they will have to improve it and submit it elsewhere, but this takes marginal time compared to the research process itself.

  • Dishonest reviews are clearly dangerous, as some people will see your review with your name on it (I'm thinking of the other reviewers on systems such as EasyChair). These people might very well be the ones reviewing your grant application a few weeks later.

  • Why would one risk being exposed as dishonest, when there are several reviews of the same paper anyway, and any deep disagreement should trigger an in-depth investigation by the editor?

The most frequent bias I have seen is reviews of the form "you should cite these 5 obscure and vaguely related papers, all from the same author", which clearly indicates a reviewer in need of citations, but nothing much worse.

4
  • 5
    Vis-à-vis your second point, I've recently heard an anecdote that von Neumann was the referee for a paper in which Birkhoff proved his ergodic theorem, and after reading it, he sat on the review for so long that by the time he returned it, he had published his own related, but less general, result (the mean ergodic theorem).
    – tomasz
    Commented Mar 14, 2016 at 21:32
  • 11
    @tomasz do you have a reference for this anecdote?
    – Dan Romik
    Commented Mar 15, 2016 at 3:56
  • @DanRomik: Not really; as I said, I only heard it recently. It may be completely apocryphal. I am not a specialist, but as far as I understand, von Neumann's result, while less impressive and closely related, is still an important result in its own right (it's called the mean ergodic theorem), not some kind of trivial corollary.
    – tomasz
    Commented Mar 17, 2016 at 3:41
  • 3
    @tomasz There is a story, but it's different, and AFAIK isn't about peer review. Rather it was von Neumann who initially came up with his proof, communicated this to Birkhoff, who proved his theorem. Then Birkhoff quickly wrote a paper and had it published in December 1931, whereas von Neumann had to race to publish his in January 1932. See this perspective by Calvin Moore, as well as a first-hand account from Birkhoff and Koopman.
    – Anyon
    Commented Aug 1, 2018 at 21:22
65

I think the main factor that keeps the peer-review system from falling into the trap you described is that academics are ultimately interested in the advancement of science. If I review a good paper, I enjoy it and want to see it published. If I review an almost-good paper, I suggest some improvements and am pleased if these are incorporated and the work is published. The very mild advantage I might gain by rejecting more papers does not really seem worth it, if I have to act unfairly and unreasonably to get that advantage.

Another factor may be that the reviewer is not anonymous to the editor. Editors are often influential people, and consistently writing bad and unfair reviews will make you look like a mean and unfair person. Also, the editor will not choose you as a reviewer again if you write such reviews, and you will effectively eliminate yourself from the review system for that journal (while still being able to submit papers).

In view of the bounty, I gave the question another thought, and here is one more thing: community. Science is well organized into communities (e.g., I consider myself part of the mathematical community, the applied-math community, the community on mathematical imaging, the optimization community, and some more). As part of a community, one has a sense that there are rules to follow to be a valuable member, and one of these rules is fairness. Being unfair is misconduct by the unwritten community rules. So even though one may get away with several cases of unfair behavior, it feels like cheating the system. But remaining a respected member of one's communities is very important, both for scientific standing and for productivity. This reasoning also explains why bad behavior sometimes happens when several competing communities form that are somehow "enemies" of each other: one can stay a respected member of one community while still treating members of the other community unfairly.

3
  • 38
    I think the main factor... is that academics are ultimately interested in the advancement of science. - I think the main reason is that most academics are decent human beings.
    – Kimball
    Commented Mar 14, 2016 at 16:09
  • 7
    You might add an additional concept to your otherwise excellent answer: if multiple reviewers, say 3, review the same paper, but 2 give an excellent review and the 3rd gives a very bad review that seems to contradict the others, this can affect the 3rd reviewer's good name. If that person consistently provides biased reviews, the chance is high that their bias will come out, which in turn means their name will be tarnished.
    – Cronax
    Commented Mar 15, 2016 at 9:52
  • 6
    Many academics are interested in the advancement of their own career (views, theories), not of science as a whole. That's pretty obvious in social sciences like economics. Commented Mar 15, 2016 at 14:00
14
+500

I have nothing to do with academia, so take my answer just as what I'd expect based on my understanding of human interaction and behaviour.

Reputation

The main resource you have is your own reputation, and it will be severely harmed if people accuse you of being unfair, biased or "not objective enough". You might be able to get away with it once in a while, but overall this is quite enough for the system to work. In the worst case, you slow down the propagation of the work in question, but in practice the paper will find another way (another reviewer, another journal...). Even if you succeed, you risk harming your reputation, which is extremely important in a field centered around collaboration with peers and promising understudies.

Competition

You only considered competition between individual scientists, which mostly comes down to (1) competing for grants and (2) competing for reputation. I've already dealt with reputation. Competition for grants might be important for you if you're trying to adapt the process for something like performance reviews - it's the clearest-cut case where hurting others can help you. However, it doesn't have much to do with the peer-review process itself - if it did, it would indeed introduce a very strong motivation to be "as unfair as you can be without actually appearing unfair".

However, there's also another competition going on - that between journals and their reviewers. If your paper was rejected on grounds that are seen as fair and objective, it will likely also be rejected by other journals. If not, the other journals might jump on the opportunity to publish your paper, while also implying that the rejecting journal treated the paper unfairly. You can't do this very often if you want your journal/reviewers to stay relevant!

Points to take away

If you want to use a similar system for another domain, make sure that similar incentives are at play.

  • Have multiple independent reviewers, and let people choose their reviewer (while the reviewer has a chance to decline).
  • Make sure there's not a lot of "authority" in play - for example, a superior-underling relationship doesn't make for good peers. Peer review works best with consensus and with reasonably objective / shared values.
  • Make everything public (in the team / company). No anonymity, no "hidden" reviews. This is necessary for the reputation-based controls to work. In a way, it's a redundancy in the peer review system - it allows people to "review" the reviews themselves.

It works best in mostly flat hierarchies. Thinking in terms of performance reviews in a company, peer review will be a poor choice if managers order people around. On the other hand, if managers have to persuade others to follow their plan, it might work great :)
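To make the takeaways above a bit more concrete, here is a minimal sketch of what a transparent, non-anonymous review record could look like in an internal tool. It only illustrates the incentives described above; all class and field names are hypothetical, not an existing system.

```python
from dataclasses import dataclass, field
from typing import List

# Minimal sketch of a transparent, non-anonymous review record for a team
# setting. All class and field names are hypothetical, not an existing tool.

@dataclass
class Review:
    reviewer: str        # name is always visible: no anonymity, no hidden reviews
    verdict: str         # e.g. "approve", "revise", "reject"
    comments: str

@dataclass
class Submission:
    author: str
    title: str
    requested_reviewers: List[str]                        # author picks reviewers...
    declined_by: List[str] = field(default_factory=list)  # ...who may decline
    reviews: List[Review] = field(default_factory=list)   # public within the team

    def record_review(self, review: Review) -> None:
        if review.reviewer == self.author:
            raise ValueError("authors cannot review their own work")
        if review.reviewer in self.declined_by:
            raise ValueError(f"{review.reviewer} declined to review this item")
        self.reviews.append(review)

# Because `reviews` is readable by the whole team, a review that looks unfair
# can itself be challenged -- people effectively "review the reviews".
```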

0
11

In addition to the mechanisms listed by others, most journals I've published in so far allow the authors to list people who they feel have competing interests and thus should not be considered as reviewers. So if there's someone who you know is working on competing research, you can put them on that list.

Of course, this isn't perfect, but if you get three reviews out of which two are largely positive and one is deeply negative, the editor is likely to decide in your favor anyway.

(I'm in biology, specifically plant genetics. I'm sure it varies a lot between fields.)
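As a rough illustration of how such an exclusion list could be honoured when assigning referees, here is a small sketch. It is an assumption about how an editor-side tool might work, not any journal's actual system, and all names are made up.

```python
# Hedged sketch of how an editor-side tool might honour an author-supplied
# "excluded reviewers" list when assigning referees. The function and all
# names are invented for illustration, not any journal's actual workflow.

def pick_reviewers(candidates, excluded, authors, n=3):
    """Return up to n reviewers, skipping declared conflicts and the authors."""
    eligible = [c for c in candidates if c not in excluded and c not in authors]
    if len(eligible) < n:
        raise ValueError("not enough conflict-free reviewers available")
    return eligible[:n]

chosen = pick_reviewers(
    candidates=["Avery", "Blake", "Casey", "Devon", "Emery"],
    excluded={"Blake"},                 # the authors flagged a competing group
    authors={"the submitting author"},
)
print(chosen)  # e.g. ['Avery', 'Casey', 'Devon']
```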

8

Double-blind review (neither the reviewers nor the reviewed are shown who the opposite parties are) was invented to help with some of these issues. It's used almost exclusively in some fields for conference and journal peer review (various parts of CS come to mind). Mostly, it's meant to keep unconscious bias against underrepresented minorities and women from cropping up and tainting reviews. Given that it's relatively easy to spot women and foreigners from their names, double-blind review tries to stop these biases from creeping in by hiding the authors' names from reviewers, too.
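For illustration, a double-blind workflow essentially strips identifying fields before a submission reaches the reviewers. The sketch below is an assumption about how such a step might look in code, not any venue's actual pipeline; the field names and sample paper are invented.

```python
import uuid

# Rough sketch of the double-blind idea: reviewers receive a copy of the
# submission with identifying fields removed and only an opaque ID attached.
# This is an illustrative assumption, not any venue's actual pipeline.

def blind_for_reviewers(submission: dict) -> dict:
    """Return a reviewer-facing copy with author-identifying fields stripped."""
    identifying = {"authors", "affiliations", "acknowledgements", "funding"}
    blinded = {k: v for k, v in submission.items() if k not in identifying}
    blinded["submission_id"] = str(uuid.uuid4())  # opaque handle kept by the editor
    return blinded

paper = {
    "title": "A result on widget ergodicity",
    "abstract": "...",
    "authors": ["A. Example"],
    "affiliations": ["Example University"],
    "acknowledgements": "Funded by grant 12345.",
}
print(blind_for_reviewers(paper).keys())
# dict_keys(['title', 'abstract', 'submission_id'])
```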

12
  • 6
    True, but 1) if someone is doing research similar to yours and the field is small, they're likely to be able to tell it's your paper anyway, and 2) if they're just generally trying to keep their "competitors" down, they'd write bad reviews for "competing" research regardless of who the authors are. So in this particular case it's not likely to help much.
    – weronika
    Commented Mar 15, 2016 at 4:45
  • @weronika I'd note that, at least in my experience, "they're likely to be able to tell it's your paper anyway" isn't nearly as reliable as some people think. I've been wrong both times I've been "positive" about it, and once missed that I was reviewing the paper of someone whose work I was fairly familiar with.
    – Fomite
    Commented Mar 15, 2016 at 6:19
  • 2
    Double-blind review only takes away some of the bias, but it does not remove the incentives described in the OP (i.e. the reviewer is still giving a competitor's work the go-ahead, at least from the OP's viewpoint). It is still an important part of the peer-review toolbox and it may work well in other environments, though. In such a case, it should also be mentioned that double-blind reviews are much harder to implement than single-blinds, particularly in an academic setting where e.g. papers will tend to acknowledge previous work by the same group. In other settings that may or may not hold.
    – E.P.
    Commented Mar 15, 2016 at 9:44
  • @E.P. and weronika, yes, and? I didn't say it was a panacea, just that there had been some work on reviewer bias that OP should be familiar with. Fomite is dead on that it's not as easy to unblind papers as you think, even in small fields. This is a reasonably well-studied area, and Computer Science sticks with it despite arguments like yours.
    – Bill Barth
    Commented Mar 15, 2016 at 14:00
  • Yeah, I'm not saying that your answer is wrong, I'm just saying that there's more to be said. If the OP wants to port peer review to other scenarios, and they want to port double-blinding, it is likely to be significantly more work, at the very least on the mechanics of the thing. (It's not easy to unblind properly blinded papers. But if the authors say "As we showed in a previous manuscript..." and provide a full citation, the game is up. Nothing that can't be worked around, but it is more work.)
    – E.P.
    Commented Mar 15, 2016 at 14:05
6

There is one aspect that has been mentioned in @weronika's answer but, in my opinion, not stressed enough, and that is probably not obvious to the asker: there is usually more than one reviewer.

In my area there will normally be three different reviews, which seems to be a good compromise between keeping the total workload low and being able to identify "outliers". An editor may not be able to tell whether a single review is biased, but if the reviews contradict each other considerably, an unfair review might become noticeable.


Added for clarification: Three reviewers are certainly not enough to get meaningful statistics (that's why I put "outliers" in quotes), and only multiple-choice or numerical ratings in the reviewers' sheets could be compared in a statistical way (a rough sketch of such a comparison follows the list below). But there are several critical factors that can reduce the quality of a review, and their possible impact can at least be reduced by having more than one reviewer:

  1. inadequate knowledge in the field of expertise,
  2. negligence or lack of time, and
  3. bias or malicious intent.
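Here is the rough sketch mentioned above: with only three numerical ratings no real statistics are possible, but an editor-side tool could still flag a rating that sits far from the other two. This is a toy illustration with an invented threshold, not a description of any journal's actual process.

```python
from typing import List, Optional

# Rough sketch of the "outlier" idea above: with only three numerical ratings
# no real statistics are possible, but an editor's tool can still flag a score
# that sits far from the other two. The threshold is invented for illustration.

def flag_outlier(scores: List[float], max_gap: float = 2.0) -> Optional[int]:
    """Return the index of a score far from the other two, or None."""
    if len(scores) != 3:
        raise ValueError("this toy check assumes exactly three reviews")
    for i, s in enumerate(scores):
        others = [x for j, x in enumerate(scores) if j != i]
        far_from_both = all(abs(s - o) > max_gap for o in others)
        others_agree = abs(others[0] - others[1]) <= max_gap
        if far_from_both and others_agree:
            return i
    return None

print(flag_outlier([8.0, 7.5, 2.0]))  # 2 -> the lone, very negative review
print(flag_outlier([8.0, 5.0, 2.0]))  # None -> the reviews simply disagree
```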
2
  • Is this somewhat a "statistical game" of providing a review that is likely to be consistent with the other reviews of the article? Commented Mar 16, 2016 at 9:26
  • @h22 I'm not sure I understand your comment, but I tried to clarify my answer with some more text.
    – Dubu
    Commented Mar 16, 2016 at 12:01
4

We've seen randomly generated papers accepted, universities acting like paper mills, and researchers being scored solely on the number of papers they publish.

The review system is based almost solely on the hope that everyone has good intentions. Unfortunately, this doesn't make the system work flawlessly.

The most common complaint is that a lot of reviewers don't take enough time to review the material. Making people invest precious time in a peer-review system might be a harder problem than making them behave honestly.

I haven't heard of cases where reviewers intentionally gave bad reviews, though I have heard of feuds between authors. This is especially problematic in small fields, where the reviewer isn't as anonymous as we would like. Note that feedback might identify a reviewer. Also note that the author is never anonymous (and is hard to make anonymous), which makes the system more susceptible to malicious reviewers.

A partially mitigating factor might be the editor, who acts as a sort of referee. But what if the editor is malicious?

To answer your question: there is no definitive factor that prevents malicious use of the academic peer review system. There are instances where the system has been abused. I also do not believe that scientists are less often malicious than any other person :).

1
  • 1
    Whether reviewers "intentionally" give bad reviews may be less important than their unconscious bias.
    – user18072
    Commented Mar 16, 2016 at 21:46
0

To be honest, it is very hard to justify (or, for that matter, to generalize about) the behavior of a group of people (reviewers). I'm sure some of us have been victims of an irrational and unfair reviewing process, i.e., getting reviews after 4-5 months with poor questions and harsh language, or the most obvious case: finding work very similar to your paper published a couple of weeks before you finally receive a delayed review!

Even with a double-blind peer-review process, a reviewer can guess who the authors of a paper are based on the research topic, wording, experimental set-up and/or even the funding account/agency (usually mentioned in the acknowledgement section). So it is very PERSON dependent! Think of it like athletes competing in sports: all of them are subject to the same rules, yet some will always try to cheat the system, whether with drugs or by exploiting loopholes!

The good news is that there are more reviewers who are genuinely interested in advancing science than reviewers eager to compete unfairly with other researchers. Don't let any of this get you down; think of it like this: you must be doing some very interesting research!

-8

Your methodology itself is biased. Any time you do something to alter the review process, you are creating a biased review. This could be as simple as the wording used before or during the review process or, at the other end of the spectrum, leaving out some potential reviewers.

My advice:

  • worry more about your science than your review

  • welcome the biased reviewer. I would rather see the flaws that my research/science has as soon as possible, not after it has been fully publicized.

  • understand that there is an element of risk or wasted time involved. If someone is biased and uses corrupt review tactics, this is more time-consuming. But, and it is a big but, this also allows you to learn more about the outside forces you are up against and how to refute them better.

  • a further point: an obviously biased review will be easily refuted, and in the process it will lower the reviewer's standing/reputation and highlight the work that you are doing. People are far more passionate when there is an A vs. Z dynamic than when just hearing about A.

So really the only thing you need to do (and I write review/survey systems) is to have a mechanism for review rebuttal and make sure the person writing the review understands this. If I cared about my name, I would not write an untrue/biased review of something if I knew that someone could publicly refute my stance and direct the rebuttal at me. Transparency is key.
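As a sketch of that rebuttal mechanism, a review record could simply carry an equally visible rebuttal field. This is an assumption about how it might be wired up, with an invented class; it is not the answerer's actual review/survey system.

```python
from typing import Optional

# Sketch of the rebuttal mechanism argued for above: every review carries an
# equally visible, optional rebuttal from the authors. The class is invented
# for illustration only.

class Review:
    def __init__(self, reviewer: str, text: str):
        self.reviewer = reviewer            # the reviewer's name stays attached
        self.text = text
        self.rebuttal: Optional[str] = None

    def attach_rebuttal(self, rebuttal: str) -> None:
        # The rebuttal is displayed wherever the review is displayed, so an
        # unfair review can be answered in the same place it was made.
        self.rebuttal = rebuttal
```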

3
  • 8
    This answer does not seem to address the question. There is nothing OP said about altering the review process, only about implementing it, and it is not at all clear what "methodology" you are claiming is biased. It is also not clear that there is any sort of science involved (except in the meta level, perhaps).
    – tomasz
    Commented Mar 14, 2016 at 21:41
  • @tomasz - asking the question is an indication of trying to formulate a biased review. You are not understanding my answer and that is fine, maybe it is too meta for you.
    – blankip
    Commented Mar 14, 2016 at 23:08
  • 3
    @blankip The asker clearly states that they are trying to implement a system equivalent to peer review, outside academia. They are not trying to change academic peer review. As far as I can see, tomasz understands your answer just fine: it's you who seems to be misunderstanding the question. Commented Mar 16, 2016 at 7:34
