Can Artificial Intelligence Fix Peer Review?
Artificial intelligence (AI) refers to computer programs that can mimic functions of the human brain, such as solving math problems. We use these programs every day to perform complicated tasks, such as recognizing characters and speech patterns when we talk into our cell phones or dictate text messages.
Although we might consider AI a fairly new technology, the underlying concepts go back hundreds of years. Ramon Llull (around 1300 AD) may have been the first to suggest a machine that could perform simple logical tasks. The first “calculating machine” was created in 1623 by Wilhelm Schickard, and Gottfried Leibniz (born 1646) later built on the concept with his theoretical calculus ratiocinator.
In the 1940s, Alan Turing, now considered the “father of theoretical computer science and AI,” suggested that, by manipulating the binary symbols “0” and “1,” a machine could carry out any computable mathematical function. With huge leaps in the areas of cognitive reasoning and biofeedback, researchers have even considered the possibility of creating an artificial brain or artificial neurons.
Beyond these everyday uses, can AI take on other cognitive tasks, such as editing and peer review? Would we trust a computer program to replace the human judgment involved in reviewing the work of colleagues? Do we even need a new approach to peer review?
Is the Peer Review Process Broken?
In scientific publishing, peer review is especially important for assessing whether specific industry protocols and standards are met and whether the work follows proper scientific methods of investigation. Publishers also see peer review as a first “filter,” after which the author makes revisions based on the reviewers’ findings to produce a higher-quality product. But as important as peer review is, it has its flaws.
According to a 2016 article published in the New Republic, it is possible that some very sound research is rejected on the basis of peer reviews alone. Assessments in a specialized field of science are difficult, reviewers often disagree with one another, and many must divide their time among several other job responsibilities. There have even been cases of collusion between authors and reviewers to get a manuscript published. Research publishing is also highly competitive, with researchers vying for funding, more prestigious university positions, or primary credit for the research. Is there, then, an easier way to get published?
AI in Peer Reviews
We have many programs, such as spelling and grammar checkers, that make our writing easier. They are not perfect; they cannot always distinguish between the spellings of certain words, but they do help us. With the advent of Trinka, an AI-powered language enhancement and grammar checking tool designed specifically for academic writing, authors can get expert assistance with their writing. From technical spelling and style guide preferences to subject-specific corrections, formal tone, and syntax, Trinka takes care of it all. In the scientific publishing industry, publishers increasingly rely on editing software such as Trinka to save time and money.

Historically, a researcher’s colleagues have helped determine the quality of the research and its results and assess how it can contribute to their field of study; however, as mentioned, peer reviewers are overworked and the number of research papers continues to increase. A 2012 report by the International Association of Scientific, Medical, and Technical Publishers indicates that “in mid-2012, there were 28,100 active scholarly peer-reviewed journals,” and the number of articles, journals, and researchers continues to grow steadily.
Proponents of AI in Peer Review
As a result of the sheer volume of manuscripts and these system flaws, the industry has deep concerns about the current quality of peer review and is using AI to help with text mining. One such AI, “EVISE,” was created by Elsevier to replace its outdated Elsevier Editorial System, support the editorial process, and speed up manuscript processing. According to a 2015 report from Elsevier, 1.2 million manuscripts are submitted to 2,300 Elsevier journals every year; “1.3 million reviewers support the peer review process and 350,000 articles are published.” To help publishers wade through all of these, EVISE performs the following functions (a toy sketch of the reviewer-matching idea appears after this list):
- Links a manuscript with plagiarism-checking software;
- Suggests reviewers based on content;
- Communicates with other programs to check reviewers’ profiles, scientific performance, and conflicts of interest;
- Automatically prepares correspondence among the parties involved;
- Provides reminders to reviewers, removes those who do not respond, and invites alternate reviewers;
- Sends decision letters to authors; and
- Sends thank you letters to reviewers.
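To make one of these functions concrete, here is a minimal, purely illustrative sketch of how a system might “suggest reviewers based on content”: it compares a manuscript abstract with keyword profiles of candidate reviewers using simple bag-of-words cosine similarity. The reviewer names, profiles, and matching method below are assumptions made for this example; they do not describe how EVISE actually works.

```python
from collections import Counter
import math


def tokenize(text: str) -> Counter:
    """Very crude tokenizer: lowercase the text and split on whitespace."""
    return Counter(text.lower().split())


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[term] * b[term] for term in set(a) & set(b))
    norm_a = math.sqrt(sum(count * count for count in a.values()))
    norm_b = math.sqrt(sum(count * count for count in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def suggest_reviewers(abstract: str, reviewer_profiles: dict, top_n: int = 3):
    """Rank reviewers by how closely their past work matches the manuscript abstract."""
    manuscript_vec = tokenize(abstract)
    scores = {
        name: cosine_similarity(manuscript_vec, tokenize(profile))
        for name, profile in reviewer_profiles.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)[:top_n]


# Hypothetical reviewer profiles (keywords from past publications), for illustration only.
reviewers = {
    "Reviewer A": "text mining machine learning for editorial workflows and peer review",
    "Reviewer B": "coral reef ecology marine biodiversity field surveys",
    "Reviewer C": "natural language processing grammar correction academic writing",
}

abstract = "An AI system for automating parts of the peer review workflow using text mining."
print(suggest_reviewers(abstract, reviewers, top_n=2))
```

A real system would, of course, use much richer text representations and would combine content matching with the conflict-of-interest and availability checks listed above.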
Some believe that AI helps by removing time-consuming tasks, such as choosing reviewers. Some even suggest that AI could perform an entire review, provided that specific databases of reviewer information are available. Others believe that removing the human element from the process would help eliminate tensions among authors, reviewers, and publishers. But for all the bells and whistles of AI systems in general and EVISE in particular, the industry must still rely on humans to assess scientific research papers for quality, and for good reason.
Opponents of AI in Peer Reviews
There are some criticisms of AI in peer review. In a study by scientists at the University of Trieste, Italy, fake peer reviews were presented to academics, who were asked to either agree or disagree with the outcomes; one-quarter agreed with them. This suggests that fake reviews could make poor-quality papers appear publishable. EVISE had several malfunctions in its early stages, although Elsevier was able to fix them and continues to stand by its product. In addition, rules and protocols would be needed to determine exactly which functions an AI should and should not perform. Will authors change how they write if they know their work will be judged by an AI? Will AI be able to determine “what is new knowledge,” which is so critical in scientific research? Is the system really broken? These and other questions must be considered before replacing all human components of peer review with AI.