The Importance of Scientific Transparency and Reproducibility

In scientific research, credibility is of utmost importance. Data documentation and storage, along with the “rigor, transparency, and attention we invest in designing, conducting, and reporting experiments,” are part of ensuring credibility. Science can progress only when colleagues can corroborate one another’s findings, and reproducing research results can be difficult when the original study suffers from unsound scientific methods, misinterpretation of results, or other issues that may have been deliberately overlooked to achieve the desired results.

According to Nature, “there is growing alarm about results that cannot be reproduced.” Amid greater scrutiny, increasingly complex studies, advanced statistics, and constant pressure on researchers to publish, nearly everyone involved is concerned about scientific transparency and the challenges posed by irreproducible research.

What Scientists Are Saying

In 2016, Nature published the results of an online questionnaire on the reproducibility crisis completed by 1,576 researchers. Only 52% agreed that a crisis exists, even though about 70% had tried and failed to reproduce another scientist’s study. About one-third of respondents believed that such a failure meant the original study was not valid, and 73% trusted the validity of at least half of the papers published in their fields. Taken together, respondents’ attitudes toward scientific transparency and reproducibility, and toward their importance, appeared somewhat inconsistent.

Depending on the field, from 60% to more than 80% of respondents had failed to duplicate others’ results, with chemistry showing the highest failure rate. About 24% of successful replications were accepted by publishers, versus about 13% of unsuccessful attempts, so the incentives to duplicate another researcher’s experiment are most likely not very high. Close to 90% of respondents chose “more robust experimental design, better statistics, and better mentorship” as the most important elements of research transparency and reproducibility.

Why Replicate a Study?

So, given the lack of consensus, why should we care?

According to Nature, “Replication studies offer much more than technical details. They demonstrate the practice of science at its best.” The decision to replicate a study can reveal whether the hypothesis has been truly tested, and can identify problems and offer resolutions. Research findings can help initiate clinical trials on new medications, such as cancer drugs.

But what if the findings cannot be reproduced, or if the problems with duplication have to do with the lack of transparency in the research?

The Center for Open Science and Science Exchange have attempted to address these issues by creating The Reproducibility Project: Cancer Biology “expressly to replicate important research results.” The Laura and John Arnold Foundation provided $1.3 million as part of its mission to improve reproducibility in science, and the goal was “to validate 50 of the highest-impact cancer findings published between 2010 and 2012 in Nature, Science, Cell and other high-impact journals.” According to eLife, “The project will provide evidence about reproducibility in cancer biology, and an opportunity to identify factors that influence reproducibility more generally.”

Some Interesting Findings

Erkki Ruoslahti, a cancer biologist at the Sanford Burnham Prebys Medical Discovery Institute, planned to conduct clinical trials on a new cancer drug based on his research findings. Across five replication attempts, however, his results either could not be duplicated or were inconclusive: one attempt failed, two were successful but not statistically significant, and two produced results that could not be interpreted.

Does this mean that Ruoslahti’s research is invalid? According to Tim Errington, the project’s manager, “A single failure to replicate results does not prove that initial findings were wrong—and shouldn’t put a stain on individual papers.” Simple details, such as a minor temperature change or an incorrect reagent mixture, might cause these failures and should not negate the original research. The project noted that the main barrier to reproducibility is that many papers provide too few details: insufficient data, unclear methods, or flawed statistical analysis.

An editorial published in Nature stated (in reference to replication studies) that “researchers must make more of them, funders must encourage them and journals must publish them.” Researchers are willing to discuss and share their experiences and regularly do so in multiple forums. F1000 recently launched its Preclinical Reproducibility and Robustness channel to encourage openness among researchers. Scientific Data and the American Journal of Gastroenterology encourage attempts to replicate studies and even solicit negative results. Nature Biotechnology has also become receptive to publishing replication studies, especially when they raise important research questions.

More Collaboration Is Needed

Current concerns focus on the “rigor of the experimental design (inclusion of all appropriate controls, blinded experimental conditions, gender balance in experimental populations, a priori determination of n’s and statistical power, appropriate statistical analyses, etc.) and on complete transparency in reporting of these parameters and all collected data.” The psychology community has led the way in encouraging collaboration by inviting original authors to offer suggestions on how to reproduce their studies. The protocols are then published in a registered replication report (RRR), and although conventions and even vocabulary are still evolving, the editors of these reports are careful to avoid terms such as “successful” or “failed” replications.
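For readers unfamiliar with what “a priori determination of n’s and statistical power” looks like in practice, the short sketch below shows one common way to plan a sample size before any data are collected. It is only an illustration, not part of the quoted guidance or the RRR workflow; the effect size, significance level, and power target are hypothetical planning assumptions, and the Python statsmodels library is just one of many tools that support this calculation.

```python
# A minimal sketch of an a priori power analysis, assuming a simple
# two-group design analyzed with an independent-samples t-test.
# The effect size (Cohen's d = 0.5), alpha, and power target below
# are hypothetical planning choices, not values from the article.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Solve for the sample size per group needed to detect d = 0.5
# with 80% power at a two-sided 5% significance level.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                    power=0.8, alternative='two-sided')

print(f"Planned sample size per group: {round(n_per_group)}")  # roughly 64
```

Reporting a calculation like this, together with the assumed effect size and significance threshold, is exactly the kind of methodological detail that makes a later replication attempt far easier to design and evaluate.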

The trend toward maximizing reproducibility and transparency in science involves not only researchers but also stakeholders and funding organizations, “universities, journals, pharmaceutical and biotech companies, patient advocacy groups, and society at large.” New guidelines are needed, and details must be provided on how experiments were performed. Because data are central to whether a study is valid, standards for how the data are collected, which materials and methods are used, and how the data are analyzed must be followed. As more studies are replicated and their results more widely accepted, the scientific community will foster better relationships among researchers who strive for the same goals and will establish “reasonable standards of conduct.”
