Why Are False Positive Results Harmful to Scientific Research?

Who Are We Researching For?

If research is conducted in the name of discovery and improving human wellbeing, shouldn’t the benchmark for ‘successful’ research incorporate the same criteria? If the results tell us something new and contribute to an improved condition, shouldn’t that warrant attention?

As logical as that may seem, the reality is very different. What warrants attention in academic publishing is not only the persistent fascination with the ‘new’, but the expectation that the ‘new’ be as counter-intuitive as possible. If a study were released with data suggesting that high fructose corn syrup (HFCS) was suddenly good for you, and the authors recommended adding it to your daily dietary intake immediately, journals would be clamoring for the right to publish the results, precisely because they would run counter to everything we currently understand about the effects of HFCS.

What Is a False Positive in Scientific Research?

In medical research, a false positive is a test result that gives an erroneous indication that a disease or condition is present when it isn’t. For example, this problem has beset mammography results for decades, resulting in thousands of unnecessary biopsies.
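
To see why screening tests produce so many false positives, a rough back-of-the-envelope calculation helps. The sketch below is illustrative only; the prevalence, sensitivity, and specificity figures are assumptions chosen for demonstration, not actual mammography statistics.

```python
# Illustrative Bayes calculation: a low base rate turns even a fairly
# accurate screening test into a source of many false positives.
prevalence = 0.005      # assumed: 0.5% of those screened actually have the condition
sensitivity = 0.90      # assumed: the test detects 90% of true cases
specificity = 0.91      # assumed: the test correctly clears 91% of healthy cases

true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)

# Probability that a positive result reflects real disease (positive predictive value)
ppv = true_pos / (true_pos + false_pos)
print(f"Share of positive results that are true positives: {ppv:.1%}")
# With these assumed numbers, roughly 95% of positive results are false positives.
```

Even with a reasonably accurate test, a rare condition means the false positives swamp the true ones, which is exactly how thousands of unnecessary biopsies accumulate.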

A false positive in scientific research is therefore a scenario in which there is statistically significant evidence for something that isn’t real. The key to such a result is a hypothesis that challenges a commonly accepted understanding in a different and unusual way. In the HFCS example above, follow-on research would be expected to examine just how unhealthy the ingredient can be in our diets. However, the chances of getting that research funded and published would be much lower than if you developed a hypothesis that HFCS actually has health benefits. You can imagine some enthusiasm on the part of the food processing industry for that study.

All that is required is careful attention to the methodology: the sample population, a hypothesis that tests for only one condition, the data collected, and the subsequent statistical analysis of that data. Tune each of these to generate a counter-intuitive false positive result, and you have the makings of a groundbreaking study.
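
To make that concrete, here is a minimal simulation sketch. It assumes a hypothetical study that measures twenty independent outcomes with thirty participants per group and applies a simple t-test at p < 0.05 to each; none of this is drawn from any real study.

```python
# Minimal sketch: testing many hypotheses on pure noise at p < 0.05
# will very often yield at least one "statistically significant" result.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_hypotheses = 20          # assumed: 20 independent outcomes measured in one study
n_per_group = 30           # assumed: 30 participants per group

false_positives = 0
for _ in range(n_hypotheses):
    # Both groups are drawn from the same distribution: no real effect exists.
    control = rng.normal(0, 1, n_per_group)
    treatment = rng.normal(0, 1, n_per_group)
    _, p_value = stats.ttest_ind(control, treatment)
    if p_value < 0.05:
        false_positives += 1

print(f"'Significant' findings out of {n_hypotheses} tests on noise: {false_positives}")
# By chance alone, about 1 in 20 tests crosses p < 0.05, so with 20 outcomes
# the odds of at least one spurious "discovery" are roughly 64%.
```

By chance alone, roughly one test in twenty crosses the significance threshold, so a determined researcher rarely leaves such a study empty-handed.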

The Pressure to Be Different

Increasing competitive pressures are turning ‘publish or perish’ into an even tougher expectation: ‘publish groundbreaking work or perish.’ This, in turn, is steering research away from follow-on studies that expand on the broader potential of new discoveries and towards studies that are expected to be astounding right out of the gate. There is no doubt that such discoveries are possible and often genuinely made, but the harder researchers are pushed to pursue counter-intuitive results as the golden ticket to publication, the greater the risk that increasingly scarce research funds will be wasted on fruitless journeys.

Much Harm Done!

As long as future research is directly influenced by the results of past research, false positives will continue to do harm. There will always be a place for counter-intuitive research that investigates something in a completely different way and from a totally different direction, but such research has to be grounded in a solid hypothesis. Without one, we run the risk of inspiring future researchers to build on results that were wrong to begin with.

A subsequent failure to replicate the results of the original study (assuming the replicating researchers were given full access to the data) might cast a shadow over it, but probably not enough to force a retraction. Over time, research databases will become polluted with questionable research and disconcerting replication data (assuming those replication studies ever get published), and future researchers will be unable to separate the good from the bad.
