Research Reporting Guidelines: Evolution, Impact on Scientific Publishing and Future Developments

Introduction

The evidence-based medicine (EBM) movement appeared in the early 1980s and gained momentum throughout that decade. Largely the brainchild of the Canada-based physician David Sackett, who is widely considered the father of evidence-based medicine, the movement spread from medicine into clinical practice more broadly, including areas such as nursing and midwifery.

The purpose of EBM was to provide a mechanism whereby clinicians could review and assimilate the evidence in their field and make decisions about the best treatments for their patients. Defined as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients,” EBM was initially aimed at individual clinicians or groups of clinicians. However, as the field became more sophisticated, EBM also became a tool for organisations and government health departments to make health policy decisions and to direct funding towards what were considered, within economic constraints, the best treatments.

The Evolution of Research Reporting Guidelines

Reporting guidelines, which provide structured checklists to improve research transparency, emerged as a response to inconsistencies in research reporting. Sophisticated methods for analysing and presenting data, largely drawn from randomized controlled trials (RCTs), developed to the point where systematic reviews of such evidence came to be regarded as the highest level of evidence, above RCTs themselves, and such reviews became, and remain, a major feature of healthcare journals. One aspect of such reviews is the evaluation of the quality of the studies under review. That quality varied considerably, with some published reports of RCTs missing vital information such as the method of randomization or a properly described control group, and by the early 1990s moves were afoot to encourage better conduct and, especially, better reporting practices in the field.

The person largely responsible for driving the move towards better reporting standards was David Moher, an Irish epidemiologist based in Canada. Concerned initially with the reporting of RCTs, Moher and colleagues published the CONSORT (Consolidated Standards of Reporting Trials) guidelines in 1996. These provided a checklist for authors submitting reports of RCTs to academic journals, ensuring completeness of reporting and thereby directly raising the standard of RCT reporting and, indirectly, the conduct of RCTs. It is fair to say that, where such reporting guidelines are available, scientists often design studies with them in mind, knowing that at the reporting stage they will have to account for all the cardinal aspects of the RCT.

Since their initial publication, the CONSORT guidelines have undergone multiple revisions, including extensions for RCTs in areas such as herbal medicine, acupuncture, and adverse event reporting. These are all available on the CONSORT page of the EQUATOR Network website.

The EQUATOR Network (Enhancing the QUAlity and Transparency Of health Research) was established in 2006 to coordinate the development and dissemination of reporting guidelines. In total, there are currently 660 reporting guidelines available on the EQUATOR Network website, although only a few are commonly used by researchers. After CONSORT, the two most widely recognized guidelines are STROBE (Strengthening the Reporting of Observational Studies in Epidemiology, published in 2007) and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses, published in 2009). Like CONSORT, PRISMA has also undergone revisions and extensions covering a similar range of topics to those addressed by the CONSORT extensions.

The outcome of the development of reporting guidelines has been greater transparency around research, as the guidelines drive more complete reporting of every step of a study. This has also led to greater standardization in the way research is reported. Leading international journals in the field of healthcare, for example, routinely point authors towards the EQUATOR Network and insist that an appropriate reporting guideline is used in manuscripts. Thus the flow charts associated with the CONSORT and PRISMA guidelines, which track participants and articles respectively, are now a common feature of articles reporting clinical trials and systematic reviews.

With specific reference to CONSORT, this has driven much better practice in conducting and reporting RCTs, as one of the checklist items asks authors whether their RCT has been registered on a clinical trials registry. Many leading academic publishers and the journals they publish are signatories to the AllTrials initiative, which calls for prospective registration of all RCTs, and reports of unregistered RCTs submitted to leading journals run the risk of being rejected on that basis alone.

The implementation of reporting guidelines has greatly facilitated the editorial and review processes of academic journals. Editors and editors-in-chief can quickly assess, at the submission stage, manuscripts that purport to follow a specific design such as an RCT or systematic review. If there is no evidence that an appropriate reporting guideline has been used, the manuscript can be rejected or returned for revision on that basis. For manuscripts that enter peer review, reporting guidelines are very useful to reviewers, who can examine the checklists submitted with manuscripts to verify adherence.

Technological Shifts and New Challenges

Alongside the development of reporting guidelines, other developments have influenced their implementation. The Internet had existed in increasingly sophisticated forms since the 1960s, and as it became widely used in academia, a major development took place in 1991 when the World Wide Web (WWW) was created. The Web greatly facilitated academic research and publication, and systematic reviewing in particular, by putting the process in the hands of individual academics and by speeding it up considerably. The resulting increase in submissions of systematic review and meta-analysis manuscripts to academic journals was a driver for the development of the PRISMA guidelines.

Another direct consequence of the development of the WWW was the ease with which academic research could be shared, and this was a considerable driver of the Open Access movement. While the Open Access movement and the WWW have brought benefits, they have also contributed to a significant challenge: the rise of predatory publishers. This is a challenge to the academic publishing industry as it undermines efforts to maintain standards. It has also been a challenge in terms of reporting guidelines, particularly the PRISMA guidelines, as articles from predatory journals pollute academic publishing and have been known to find their way into systematic reviews. PRISMA makes no specific statement regarding predatory journals, but advice is available to help academics avoid including them in systematic reviews.

In terms of conducting and reporting RCTs, predatory journals are known for referring to the CONSORT guidelines to give a semblance of rigor while not implementing them rigorously. In fact, even mainstream academic journals have been taken to task for lack of adherence to CONSORT reporting standards, but predatory journals are notorious for publishing poor-quality RCTs that lack the rigor and transparency CONSORT requires. One study assessing RCTs on COVID-19 treatments found a median adherence of 54.3% to the CONSORT checklist, indicating suboptimal reporting quality.

A major and very recent development relevant to academic publishing is artificial intelligence (AI), particularly large language models (LLMs) such as ChatGPT, Microsoft Copilot and many other packages. While the use of AI poses a range of issues for academic publishing, such as the generation of text without human input and the possibility of plagiarism, with which the industry is still coming to terms, it also presents considerable opportunities. Large publishing houses are currently developing their own AI tools to facilitate many aspects of the submission, review and publication processes.

With reference to reporting guidelines, AI could allow researchers to automate the completion of checklists and the generation of flow charts where the CONSORT and PRISMA guidelines apply. It could also be used to generate abstracts in accordance with the relevant reporting guideline and the format specified by the target journal. At the review stage, editors and reviewers could automate the checking of adherence, as sketched below.
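To make the idea concrete, the following minimal Python sketch shows the kind of automated screening such a tool might perform on a submitted manuscript. It is an illustration only: the checklist items, keyword patterns, and the screen_manuscript function are assumptions made for this example, covering a tiny subset of CONSORT-style items with naive keyword matching; a real tool would work from the full checklist and use far more sophisticated language analysis (or an LLM).

```python
# Minimal sketch of automated reporting-guideline screening.
# Assumptions: an illustrative subset of CONSORT-style items and naive
# keyword matching; a production tool would cover the full checklist
# and use proper natural-language analysis.

import re

# Hypothetical checklist items mapped to indicative phrases.
CHECKLIST_ITEMS = {
    "Trial registration": [r"\bregistered\b", r"\bclinicaltrials\.gov\b", r"\bISRCTN\b"],
    "Randomization method": [r"\brandom(is|iz)ation\b", r"\brandom(is|iz)ed\b"],
    "Blinding": [r"\bblind(ed|ing)\b", r"\bmask(ed|ing)\b"],
    "Participant flow": [r"\bflow (diagram|chart)\b", r"\blost to follow-up\b"],
    "Sample size calculation": [r"\bsample size\b", r"\bpower calculation\b"],
}

def screen_manuscript(text: str) -> dict:
    """Return, for each checklist item, whether any indicative phrase appears."""
    return {
        item: any(re.search(pattern, text, flags=re.IGNORECASE) for pattern in patterns)
        for item, patterns in CHECKLIST_ITEMS.items()
    }

if __name__ == "__main__":
    manuscript = (
        "The trial was registered at ClinicalTrials.gov. Participants were "
        "randomized using computer-generated sequences, and outcome assessors "
        "were blinded to allocation."
    )
    for item, present in screen_manuscript(manuscript).items():
        print(f"{item}: {'reported' if present else 'not found'}")
```

Even such a crude screen could flag at submission that a manuscript never mentions, say, a sample size calculation or a participant flow diagram, prompting editors to query the authors before the manuscript enters review.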

However, the practical and ethical aspects of using AI alongside reporting guidelines need to be considered. Practically, any AI model used to check adherence to reporting guidelines would need to be validated for accuracy. Ethically, there may be risks of bias within AI models, whereby they favor some research topics over others or are swayed by levels of funding or the country of origin of the research. In addition, depending on who has developed the AI packages, issues regarding the sharing and ownership of data may arise.

Future Directions in Research Reporting

There can be little doubt that the development of reporting guidelines has led to improvement in the quality of research being published. Their utility and popularity are attested to both by the sheer number of reporting guidelines available on the EQUATOR Network and by the extent to which the most popular guidelines, CONSORT and PRISMA, have been extended and updated. Nevertheless, this remains largely an assumption, and it would be valuable to quantify the improvements. Likewise, while the number of article retractions continues to increase, for several reasons, the contribution of reporting guidelines to that process is unknown.

The use and development of reporting guidelines are unlikely to wane. There can be few, if any, areas of research that are not covered by at least one set of reporting guidelines; the span of designs currently stretches from RCTs to qualitative research. Some research designs are covered by more than one guideline, and time will tell whether more guidelines are added to the 660 already available on the EQUATOR Network.

For the future, it may be time to consolidate some sets of similar guidance to avoid confusion amongst researchers and to introduce more standardized approaches to reporting in some areas of research. However, this would depend on the willingness of those who developed the guidelines to collaborate. As already discussed, the integration of AI into the use of reporting guidelines, at both the reporting and peer review stages, seems inevitable.

In summary, the development of reporting guidelines has been positive for both research and academic publishing. They will continue to be developed and used, and it is incumbent upon researchers and editors to be aware of their existence and to use them effectively.

