Is Peer Review Biased?
Academic publishing relies on peer review to ensure research quality. Is there bias in peer review? And if so, how does it affect academic research? A recent study examined the peer review process using simulations. Some of its findings may surprise you. Professor Justin Esarey, the author of the study, set up many simulations based on various peer review processes. In each case, he assumed that there would be three peer reviewers.
Studying the Peer Review Process
In the first simulation, he assumed that a paper would be accepted for publication only if all of the peer reviewers and the editor agreed on it. In the second, he assumed that a paper would be published if a majority of reviewers agreed it should be accepted. In the third, that majority had to include the editor’s vote. Finally, there was a model in which the editor made the decision based on the average reviewer report; in this model, the editor ignored the peer reviewers’ votes to accept or reject the paper.
Esarey ran these four simulations twice. The first time, he used the conditions listed above. The second time, he assumed that the editor would reject some papers without sending them to reviewers; in this round, the editor rejected any paper whose quality fell below that of the median paper in the population.
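To make these decision rules concrete, here is a minimal sketch, in Python, of how the four rules and the desk-rejection condition could be encoded. It is not Esarey’s actual code: the noise level, the acceptance bar, the reading of “majority including the editor”, and the [0, 1] quality scale are all illustrative assumptions.

```python
import random

# Illustrative sketch: each submitted paper has a latent quality in [0, 1],
# and every reviewer or editor observes that quality with some noise.
NOISE = 0.1   # assumed spread of a reviewer's perception around true quality
BAR = 0.9     # assumed acceptance bar on the quality scale

def noisy_vote(quality):
    """One reviewer's (or the editor's) accept vote, based on a noisy read."""
    return quality + random.gauss(0, NOISE) >= BAR

def unanimity(quality):
    # Rule 1: all three reviewers and the editor must agree to accept.
    return all(noisy_vote(quality) for _ in range(4))

def reviewer_majority(quality):
    # Rule 2: at least two of the three reviewers vote to accept.
    return sum(noisy_vote(quality) for _ in range(3)) >= 2

def majority_including_editor(quality):
    # Rule 3 (one possible reading): a majority of the editor plus the three
    # reviewers, and that majority must contain the editor's vote.
    editor = noisy_vote(quality)
    total = editor + sum(noisy_vote(quality) for _ in range(3))
    return editor and total >= 3

def editor_on_average_report(quality):
    # Rule 4: reviewers submit noisy quality reports with no accept/reject vote;
    # the editor accepts if the average report clears the bar.
    reports = [quality + random.gauss(0, NOISE) for _ in range(3)]
    return sum(reports) / len(reports) >= BAR

def desk_rejected(quality, median_quality=0.5):
    # Second round of simulations: the editor rejects any paper whose quality
    # falls below the median paper before it ever reaches reviewers.
    return quality < median_quality
```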
The simulations also included readers of the journal: how would readers rank the papers accepted for publication under each system? Esarey modeled 50,000 papers and 500 journal readers, with the acceptance rate set at 10%, and then worked out how readers would rate the published papers. In every case, readers ranked the published papers as being in the 80th percentile. Yet about 12% of papers that were not rejected by an editor, and that a majority of reviewers agreed should be published, were viewed by readers as being of poor quality.
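As a rough illustration of the reader side of such a simulation, the sketch below reuses the figures quoted above (50,000 papers, 500 readers, a 10% acceptance rate) but assumes its own decision rule, noise level, and rating mechanics; none of those details come from the study itself.

```python
import random
import statistics

random.seed(0)

N_PAPERS = 50_000
N_READERS = 500
ACCEPTANCE_RATE = 0.10
NOISE = 0.1  # assumed spread of reviewer/reader perception around true quality

# Latent quality of each submitted paper, uniform on [0, 1], so a paper's
# quality score doubles as its percentile rank in the submission pool.
papers = [random.random() for _ in range(N_PAPERS)]

def reviewer_majority_accepts(quality, bar=0.9):
    """Placeholder rule: at least two of three noisy reviewer votes clear the bar."""
    return sum(quality + random.gauss(0, NOISE) >= bar for _ in range(3)) >= 2

# Accept papers under the rule, capped at the 10% acceptance rate.
budget = int(N_PAPERS * ACCEPTANCE_RATE)
accepted = [q for q in papers if reviewer_majority_accepts(q)][:budget]

# Each reader rates a random sample of the accepted papers, perceiving quality
# with the same noise; ratings are clipped to the [0, 1] percentile scale.
ratings = []
for _ in range(N_READERS):
    for quality in random.sample(accepted, k=min(20, len(accepted))):
        ratings.append(min(max(quality + random.gauss(0, NOISE), 0.0), 1.0))

print(f"Mean reader-perceived percentile of accepted papers: {statistics.mean(ratings):.0%}")
print(f"Share of ratings below the 50th percentile: "
      f"{sum(r < 0.5 for r in ratings) / len(ratings):.0%}")
```

Under these particular assumptions, the accepted papers cluster near the top of the quality scale, which is the kind of reader-perceived percentile the article describes; the exact numbers depend entirely on the assumed noise and thresholds.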
The Value of Editorial Intervention
Esarey’s study highlights the need for good editors. He found that the system that published the highest-quality papers used peer review but left the final decision to the editor: the peer review reports summarized the strengths and weaknesses of the paper but did not include a recommendation to accept or reject the manuscript.
Under these conditions, only 6% of readers felt that papers in the journal were in the 65th percentile. If editors rejected half the papers before sending the rest to peer review, only 1% of readers felt the papers were in the 65th percentile. This best-performing system resembles the ones used by journals where reviewers are asked to submit a qualitative review of the paper without sharing their opinion on whether it should be published.
In the simulations, Esarey found that peer review led to the acceptance of better papers, which shows it adds more value than randomly choosing which papers to publish. Papers of the highest quality were the most likely to be published. However, whether a paper in the 80th percentile was published depended largely on luck; its chances of publication were similar to a coin toss. The same was true for papers in the 85th percentile when the editor rejected some manuscripts without peer review.
Other Forms of Bias
There are other forms of bias that can affect peer review. Women and minorities tend to have more difficulty securing funding, publication, and promotion. Andreas Neef published a study showing that women were underrepresented as editors, reviewers, and authors. This underrepresentation remained even after adjusting for the lower participation of women in science generally.
The study involved 9,000 editors and 43,000 reviewers from Frontiers journals. There has been improvement over time, but based on current trends, women would remain underrepresented as authors until 2027, as reviewers until 2034, and as editors until 2042. Changing this situation also requires helping people to identify and work against their biases.
It is undeniable that peer review adds value to academic publishing, which makes it important to reduce bias in the process. To judge research quality effectively, peer review reports should not include the reviewer’s opinion on whether to accept or reject the paper. The editor should make that final decision and judge the research on its merits. For this reason, anti-bias training is essential. Making these changes should help ensure that only the best research is published.
Have you encountered bias during a peer review? Share your thoughts with us in the comments below!
The real glitch in peer review, the peer review bias, depends on the congruence bias of the reviewers or editors. This means that all the editors or peer reviewers believe in Hypothesis A, which is still only a hypothesis, yet all of them opt to reject a perfectly scientific paper discussing a competing Hypothesis B (presuming that it is wrong), citing poor language, length, or exaggerated scientific claims as an excuse.
An example would be editors being harsher on a paper about MOND and rejecting it for other silly reasons because they believe in dark matter and think that MOND is obviously wrong.
Lesser-known theories of quantum gravity that go beyond string theory or loop quantum gravity also have a hard time getting published, as most reviewers have no concept of detecting and removing their bias.