Investigation of Clinical Trials Unveils Data Fabrication
The work that scientists do is vital: it drives progress in medicine, climate science, physics, and beyond. Unfortunately, scientific misconduct is on the rise, and fraud and data fabrication deserve serious attention. Addressing falsified data now is crucial to protecting the research that future generations will build on.
John Carlisle’s Fight Against Data Fabrication
Some may feel that the fight against scientific fraud is too daunting. Not John Carlisle. He has made it his personal mission to uncover data fabrication and expose the harm it does. Carlisle, an anesthetist working for the National Health Service in England, began hunting for fraudulent data about a decade ago.
Carlisle’s work has attracted considerable attention in the scientific community. Over the past decade, he has investigated studies on a wide range of topics, uncovering misconduct in hundreds of papers and prompting many corrections and retractions. Most significantly, his work has helped expose some of the world’s most prolific scientific frauds, contributing to the end of the careers of three of the six scientists with the most retractions worldwide. This is a significant step toward protecting the integrity of scientific work.
One example is Japanese researcher Yoshitaka Fujii, who worked at Toho University in Tokyo. He published studies on how various drugs prevent nausea and vomiting in patients after surgery. To Carlisle and others, however, the data in these studies looked too clean to be true. After an in-depth investigation, Carlisle concluded that the likelihood of these data patterns occurring naturally was “infinitesimally small.” As a result, Fujii was soon under investigation. Ultimately, he had 183 of his papers retracted and was fired from Toho University. Four years after his investigation of Fujii, Carlisle co-published an analysis of the publications of another Japanese anesthesiologist, Yuhji Saitoh. Carlisle and his team found these data extremely suspicious as well, and 53 of Saitoh’s works have since been retracted.
John Carlisle’s Methods
So why is Carlisle’s work so effective? The answer lies in his methodology. His investigations rest on the idea that real-life data have natural patterns that invented data struggle to replicate. By zeroing in on statistical inconsistencies and implausibly tidy numbers, he can unearth broader patterns of fabrication.
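To give a flavor of this kind of check, here is a minimal sketch in Python, loosely inspired by the baseline-comparison idea Carlisle has described in his published analyses. It is not his actual code, and the trial variables and numbers below are invented for illustration: because patients in a randomized trial are assigned to groups by chance, the p-values for baseline differences should be scattered roughly uniformly between 0 and 1, so a whole table of values near 1.0 (groups matching far more closely than chance allows) is a red flag.

```python
from scipy.stats import ttest_ind_from_stats

# Hypothetical baseline table from a two-arm trial: (mean, sd, n) per group.
# All variable names and values here are invented for illustration.
baseline = {
    "age":    ((54.2, 8.1, 60), (54.3, 8.0, 60)),
    "weight": ((71.5, 9.4, 60), (71.4, 9.5, 60)),
    "bmi":    ((24.8, 2.2, 60), (24.8, 2.1, 60)),
}

for name, (group1, group2) in baseline.items():
    # p-value for the between-group difference, computed from the
    # published summary statistics alone (no raw data required).
    _, p = ttest_ind_from_stats(*group1, *group2)
    print(f"{name}: p = {p:.3f}")

# Under genuine randomization these p-values should look uniform on (0, 1).
# If every baseline variable yields p close to 1.0, the groups are more
# similar than chance plausibly allows -- the "too clean to be true"
# pattern described above.
```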
One example is his investigation of anesthesia studies by Italian surgeon Mario Schietroma (University of L’Aquila). This investigation gained the attention of the World Health Organization (WHO), which had drawn on Schietroma’s trials in its recommendations on oxygen use for patients under anesthesia. In five of the studies Carlisle examined, the raw data for the control and treatment groups were suspiciously similar. He concluded that Schietroma’s studies were not a reliable basis for clinical practice. After reviewing Carlisle’s analysis and the support it received, the WHO quickly downgraded the recommendations that had relied on Schietroma’s work.
Two Leaders in Detection: Grim and Statcheck
GRIM
GRIM (Granularity-Related Inconsistency of Means) was developed by Nick Brown, a graduate student in psychology at the University of Groningen in the Netherlands, and James Heathers, who studies scientific methods at Northeastern University in Boston, Massachusetts. It is a technique for verifying the summary statistics of research reports. GRIM relies on precise mathematical analysis: it takes the reported mean and sample size of an experiment and checks whether that combination is mathematically possible, flagging reported means that no real data set could produce.
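The core insight is easy to sketch in code. The snippet below is an illustration of the published GRIM idea, not Brown and Heathers’ own implementation: when the underlying values are integers, their sum must be a whole number, so for a given sample size only certain decimal means are achievable.

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Check whether a reported mean is possible for n integer-valued responses.

    With integer data, the sum of the n values must itself be an integer,
    so the true mean can only be k / n for some whole number k. We find the
    integer total closest to reported_mean * n, re-derive the mean it implies,
    and compare at the reported precision. (The published GRIM test also
    handles rounding edge cases; this sketch keeps only the core idea.)
    """
    possible_sum = round(reported_mean * n)   # nearest achievable integer total
    implied_mean = possible_sum / n           # mean that total would produce
    return round(implied_mean, decimals) == round(reported_mean, decimals)

# A mean of 5.19 from 28 integer responses is impossible: no whole-number
# total divided by 28 rounds to 5.19 (145 / 28 = 5.179 -> 5.18).
print(grim_consistent(5.19, 28))   # False -> flag for closer inspection
print(grim_consistent(5.18, 28))   # True
```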
Statcheck
Michèle Nuijten, who studies analytical methods at Tilburg University in the Netherlands, developed Statcheck, which she refers to as a “spellcheck for statistics.” Statcheck scans journal articles to check whether the reported statistics and conclusions are internally consistent, and it has become a popular tool among journal editors.
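The underlying idea can be sketched briefly. The following is an illustration of the concept rather than Nuijten’s actual implementation, and it uses a simple numeric tolerance where the real tool reasons about rounding: extract a reported test statistic and its degrees of freedom, recompute the p-value they imply, and compare it with the p-value printed in the paper.

```python
import re
from scipy.stats import t as t_dist

def check_t_report(text: str, tol: float = 0.01) -> bool:
    """Verify an APA-style t-test report such as 't(28) = 2.20, p = .03'."""
    m = re.search(r"t\((\d+)\)\s*=\s*([\d.]+),\s*p\s*=\s*(\.\d+)", text)
    if not m:
        raise ValueError("no APA-style t-test found")
    df, t_value, p_reported = int(m.group(1)), float(m.group(2)), float(m.group(3))
    # Recompute the two-tailed p-value implied by the statistic and df.
    p_recomputed = 2 * t_dist.sf(t_value, df)
    return abs(p_recomputed - p_reported) < tol

print(check_t_report("t(28) = 2.20, p = .03"))  # True: implied p is about .036
print(check_t_report("t(28) = 2.20, p = .01"))  # False: .01 is inconsistent
```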
Both programs have limitations. For instance, the GRIM test only works when the data are integers, such as the item scores found in questionnaires. Statcheck only parses one reporting format, although it is the data-presentation format prescribed by the American Psychological Association. Regardless, Brown, Nuijten, and Carlisle see their programs as tools for highlighting potential issues rather than proof of wrongdoing. As John Ioannidis, a scientific-methods researcher and statistician at Stanford University in California, puts it: “It’s a completely different landscape if we’re talking about fraud versus if we’re talking about some typo.” Detecting scientific fraud or data fabrication is a long process, and these programs are a valid starting point.
How a South Korean Study Almost Beat the System
Studies have shown that 1.9% of scientists admit to having fabricated data, and up to 33.7% acknowledge using questionable research practices. When scientists are asked about the behavior of colleagues rather than their own, the percentages are even higher: 14.2% report having observed falsification, while up to 72% report questionable research practices. Among the many motives are the desire to gain fame, increase publications and readership, secure a professional or academic position, and obtain funding.
One example of beating the system is a case in which South Korean researchers managed to publish a paper even after Carlisle had rejected it owing to serious flaws in the data. In 2012, the researchers submitted a study to Anesthesia & Analgesia examining how facial muscle tone could indicate the best time to insert a breathing tube into a patient’s throat. Inconsistencies in the study led to rejection. It was then submitted to Anaesthesia, the journal where Carlisle is an editor, but with different patient data. He remembered the paper and rejected it again. That was not the end, however. Carlisle soon saw the study published in the European Journal of Anaesthesiology, with the same data and the same results. He contacted the journal’s editor and described the study’s past rejections, which led to the retraction of the study for inconsistencies in the data and misrepresentation of results.
With scientific misconduct still an issue, the work of Carlisle, Nuijten, and Brown is more important than ever. It provides a system of checks for the scientific community while protecting the integrity of scientific work.
What are your experiences with scientific misconduct? Have you encountered any cases of data fabrication? Please share your thoughts with us in the comments.