Research Reproducibility and Credibility — Replication Studies

A Self-Correcting Process!

Scientific research has never been a perfect process of discovery. We have never expected to get the results right the first time, so we have developed multiple methodologies to test different hypotheses and sequentially build a more comprehensive understanding of the world around us.

One of the foundational assumptions of that process is that science will always self-correct errors in inductive or deductive reasoning through an established process of validation: the reproduction of results. If the results of a study prove to be irreproducible, we take that as a sign that the original study should be re-examined.

A Credibility Gap

Re-examination after a failure to replicate a study's results still starts with the assumption that the initial study is solid. Scientists will double- and triple-check the protocol of their replication study first, before moving on to examine the original study for errors in methodology and/or analysis. Unfortunately, the rise in journal retractions suggests that this presumption of credibility is undeserved. It would be arrogant to assume that your replication study is automatically flawless when the results can't be reproduced, but given the current trend in academic publishing to favor only new research, the limited likelihood of getting a replication study published suggests that your checking efforts are better spent building a robust case against the original study.

Paying Lip Service

We now seem to pursue prestige over credibility. Papers from the prestigious journals in each field are cited more frequently, which conveniently raises the prestige of those same journals based on the volume of citations. Research is assumed to be solid based on the reputation of the journal and all that implies. It has become an implicit assumption that leading journals have first-class peer review processes that would catch any errors, such that replication is not necessary. This misplaced confidence merely pays lip service to credibility. The reputation of any journal is only as good as the absence of evidence of misconduct; any evidence of a conflict of interest or unethical conduct can do irreparable harm to that reputation in just a few days.

Searching for a Stamp of Approval

With increasing numbers of researchers chasing a declining number of publishing opportunities in journals prestigious enough to boost their scholarly standing, reproducibility has fallen by the wayside as a marker of credibility. Validation of a study, especially one that produces counterintuitive results, should come from replication of its results. That world has changed.

Counterintuitive results attract a lot of attention, but much of that attention will come from fellow researchers criticizing the protocol and the results. However, that criticism is likely to be based on theoretical disagreements alone, because few, if any, of those critics will have replication data to draw on. Replication studies take time and resources, and by the time they are complete, the window of opportunity to speak out against the latest study will have closed. This leaves reproducibility in a quandary.

If solid evidence of flawed results can't be produced while interest in the topic is at its height, journals will have moved on to the next big thing by the time those results are available. With no prospect of publication, the replication study is unlikely to get funded in the first place. Even if it did get funding and produced results that directly challenged the integrity of the original study, the lack of interest may leave the original study without a retraction or any other response from the journal that published it.
