Back to the Future: Peer Review in 2030

Nearly six months have passed since you submitted your manuscript for peer review. When it finally comes back, the editor informs you that the reviewers found your manuscript interesting but not sufficient for publication. Shortly after, a strikingly similar paper comes out of a competitor’s lab. As it turns out, one of the peer reviewers was your competitor, and they simply delayed your paper until theirs was published first. In the research community, behavior like this is unethical and subject to editorial discipline. Nevertheless, in an academic community fraught with hyper-competitiveness, peer review has its drawbacks. In such a world, how can researchers continue to ensure that the benefits of peer review are realized while guaranteeing its fairness?

At the SpotOn London conference, researchers, librarians, publishers, and other scientific stakeholders sought to answer the question, “What might peer review look like in 2030?” Shortly thereafter, in May 2017, BioMed Central and Digital Science published a report from these discussions. The report included reflections on the future of peer review and on how to improve a system that is central to the scientific process, yet is frustratingly slow and subject to the personality flaws of scientists.

Many Reasons and Ways to Change

A frequently repeated comment at the conference was that peer review is slow, inefficient, biased, and open to abuse. Although the process has allowed science to persist as an arbiter of truth, such inefficiencies and vulnerabilities demand real solutions. In the report, Rachel Burley and Elizabeth Moylan of BioMed Central invited stakeholders to reflect on how peer review should change, and several recommendations emerged.

In some ways, technology can take on an expanded role. For example, artificial intelligence can help identify expert peer reviewers whose expertise closely matches the topic of the manuscript. Journals and editors can also work to increase the diversity of their reviewer pools to include early-career scientists, women, and geographically diverse researchers. Currently, those who review manuscripts and those who publish them do not share the burden equally. Indeed, scientists in the US review 33% of health science manuscripts while publishing only 22% of such reports.

Since peer review is founded on trust, it is logical to increase the transparency of the review process or to experiment with new approaches. Systems like ORCID may help verify author identities, detect plagiarism, or flag inconsistencies across figures, but they only go so far. Here, artificial intelligence may assist editors by providing services that go beyond the rudimentary plagiarism-detection software that currently exists.

Transparency among reviewers will also help authors and reviewers detect potentially non-scientific biases. To this end, BioMed Central has experimented with several peer review methods, such as results-free peer review, while others have invested in reviewer training and have sought to develop ways to recognize the work of reviewers. It has been estimated that reviewers invested roughly 13–20 billion person-hours in peer review in 2015, yet funding agencies and institutions do not recognize the work this entails. Automating the peer review process wherever possible will help reduce this burden, and recognizing review work in tenure-track appointments and funding decisions would further encourage reviewer participation.

Peer Review in the Future

So, what will peer review look like in 2030? It is sure to be quicker and more transparent. Beyond using automation to assign well-matched expert reviewers, an expanded and more diverse pool of reviewers will help ensure that any biases built into such software do not extend beyond reviewer assignment. Furthermore, increased recognition of peer reviewers will incentivize more meaningful and careful review.

Some groups are already trying to track the contributions of peer reviewers through sites such as Publons. Greater transparency at all stages of publication (e.g., research plans, preliminary results, narrative) will also help engage scientists. Yet researchers will still have to bridge gaps in trust and honesty; a truly open researcher should not be taken advantage of by an opportunistic one.

Peer review in 2030 is the story of what society stands to gain by embracing these changing trends. Automation can improve the efficiency of peer review, but the research community will still need to confront the non-scientific behavior that affects the work of scientists. It is hoped that changes in peer review will make researching, discovering, and sharing a transparent and rewarding process for all involved.
