What Does the Future Hold for Peer Review?
Peer review is a vital part of publishing original academic research. The process seems simple: invited experts evaluate your manuscript for its validity and its suitability for publication in a journal, book, or conference proceedings. This vetting procedure should also keep fraudulent research out of the academic record. In practice, however, the peer review process is inconsistent, lacks agreed rules and criteria, and is slow. The digital revolution in scholarly communication means that peer review will see constant innovation.
Goodbye, Traditional Peer Review
Traditional peer review filters out bad research, and it is also very selective, a legacy of the limited space of print publishing. With the Internet, page space is technically no longer a problem and publishing is fast. In addition, the Open Access (OA) movement pushes for less secrecy and bias, to ensure more rigor and honesty in scientific publishing.
Comprehensive Study of Peer Review
A new paper by Jonathan Tennant et al. thoroughly examines the peer review of journal articles and its future. The study’s 33 authors first describe the historical evolution of peer review in a socio-technological context. They then consider the traits of peer review, examine several emerging models, and suggest a hybrid approach.
There are three eras of peer review. The first, called “primordial time”, stretches from the 17th century, when national academies and their journals were first established (Philosophical Transactions and the Journal des Scavans), up to about 1950. Peer review was not yet called that; it was an “in-house” process in which only editors evaluated manuscripts.
After WWII, knowledge production boomed, both in kind and in quantity, and journals needed outside help. In 1950, Nature introduced formal, editor-led peer review. During this era, the outsourcing of peer review to outside experts began. Importantly, journal-based publications became a form of professional currency and prestige in academia. Commercial publishers jumped on this, using voluntary (unpaid) peer review to promote their journals.
The third era is called “the revolution”. Here, the aim was to split peer review off from publishing. Its seeds were planted in 1991, when arXiv launched. On this web platform, physicists could openly publish their research first, although moderators still screened these “preprints”. The key development was that research could be made public without first going through traditional peer review.
The revolution has gained momentum, especially in the last 5–10 years. It is characterized by the growth of digital-only journals (PLOS ONE); by commenting on articles before and after formal publication (PeerJ); by making peer review reports fully available (ScienceOpen); and by cross-annotation on other web platforms (e.g., PubPeer).
Peer Review Traits
A key conclusion by Tennant et al. is that the way the peer review process is perceived does not match its actual performance. Many studies show the number of mistakes rising and the process losing its rigor; in short, sloppy scholarship has a better chance of getting published nowadays. Although traditional peer review can identify reliable research, it is clearly “on the ropes”. Nevertheless, it is still used as a gatekeeper to gauge a manuscript’s potential “impact” in the field and its suitability for a journal. One innovation is instructing reviewers to disregard novelty and potential impact (e.g., PLOS ONE), which reduces the risk of bias in peer review.
Peer review can be single-blind, double-blind, or open (OPR). In single-blind review, the form most journals use, reviewers are anonymous but the authors are not. In double-blind review, both parties are anonymous. In OPR, both are known to each other. Double-blind review does not always improve the quality of peer review and is difficult to enforce, since manuscripts often contain clues to the authors’ identities.
OPR has had a complex development (systematically reviewed by Ross-Hellauer). A major issue is the lack of an accepted definition of OPR. Tennant et al. consider peer review open if it does at least one of the following: (1) disclosing the names of expert reviewers to authors and readers, (2) making the peer review reports public, and (3) not limiting peer review to the invited experts.
Peer Review Evolution
Traditionally, it was enough to acknowledge reviewers in print or thank them privately. Now, however, there is demand for more systematic recognition of these efforts, including feedback. One innovation is to formally credit such work (e.g., Publons). For this incentive to hold in the long term, peer reviewing must carry more weight in academic promotion and funding evaluations.
Another idea is to publish the reviewers’ reports. This could increase the quality of peer reviews and make them more constructive. Such transparency should also encourage greater civility from reviewers and editors alike.
Another key development is decoupling peer review from academic publishing, which may even represent a paradigm shift. In decoupled models, of which there are many variations, peer review can happen before submission or after publication. The latter, called “post-publication peer review”, though appealing, has not been widely adopted by researchers.
Future Models and Hybrids
Tennant et al. identify and discuss seven distinct ways peer review could change by building on existing social Web platforms:
- A Reddit-based model
- An Amazon-style rate and review model
- A Stack Exchange/Overflow-style model
- A GitHub-style model
- A Wikipedia-style model
- A Hypothesis-style annotation model
- A blockchain-based model
The authors do an excellent job of summarizing each model’s traits, both positive and negative. Suffice it to say that each model has something of value to add to peer review. An interesting highlight is AI-assisted peer review, where machine learning and neural network tools come into play. Although such automation cannot make decisions for editors, it could provide recommendations that are less error-prone than human judgment alone.
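To make the idea of automated recommendations concrete, here is a minimal, purely illustrative sketch of one such task: ranking candidate reviewers by how closely their stated expertise overlaps with a manuscript abstract. The reviewer names, expertise profiles, and simple word-overlap scoring are assumptions chosen for brevity; they are not the tools surveyed by Tennant et al., and real systems rely on trained language models rather than bag-of-words matching.

```python
# Toy sketch of one AI-assisted peer review task: ranking candidate reviewers
# by topical overlap with a manuscript abstract.
# All names and data below are hypothetical; real tools use trained models.
from collections import Counter
from math import sqrt


def bag_of_words(text: str) -> Counter:
    """Lowercased word counts; a crude stand-in for a text embedding."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def recommend_reviewers(abstract: str, profiles: dict, top_n: int = 2):
    """Return the top_n reviewers whose expertise best matches the abstract."""
    doc = bag_of_words(abstract)
    scores = {name: cosine(doc, bag_of_words(text)) for name, text in profiles.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]


if __name__ == "__main__":
    abstract = "open peer review transparency in scholarly publishing platforms"
    profiles = {  # hypothetical reviewer expertise statements
        "Reviewer A": "scholarly publishing open access peer review policy",
        "Reviewer B": "particle physics detector calibration",
        "Reviewer C": "web platforms annotation transparency",
    }
    for name, score in recommend_reviewers(abstract, profiles):
        print(f"{name}: {score:.2f}")
```

Even in this toy form, the output is only a ranked suggestion handed to a human editor, which is the role the paper envisions for automation: advisory, not decisive.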
A viable peer review process must provide quality control, certification, and incentives. Moderators, through community self-organization and governance (as on Wikipedia and Reddit), could do openly what editors have traditionally done. Reviewers could be certified on the basis of their participation and receive community-level assessments (as on Amazon, Reddit, or Stack Exchange). Beyond altruistic motives, linking ORCID with Publons could be extended to incorporate aspects of the models above.
Both academic publishing and the peer review process are clearly in flux. The changes are disrupting traditional peer review, which itself is still poorly understood at a large scale. Despite inertia in academic publishing models and researcher cultures, web-based, OA-themed innovations in peer review are likely here to stay. Taking the best traits from the various models and combining them with the spirit of traditional peer review can protect against fraudulent research and strengthen the scholarly communication system. Such a hybrid approach is perhaps the only viable way to preserve peer review.