Why Should You Replicate Research Data with Caution in Your Research Study?

Are We Obsessed with Applied Research?

The debate over the respective merits of basic (or pure) research and applied research is a heated one, especially when research budgets are under review.

Advocates of applied research argue that its inherent focus on discovering practical applications makes it more valuable to human well-being. In contrast, advocates of basic research argue that without the continued provision of foundational knowledge, applied research would have nothing to build upon.

Beyond the theoretical debates, the practical reality is that it is now easier to garner funding for applied research, and critics complain bitterly about the trend towards “token” basic research projects that institutions underwrite merely to maintain the perception of a commitment to broad foundational knowledge development.

However, beneath this ongoing battle lies growing evidence of the dangers of automatically assuming the superiority of applied research, and of the eagerness with which such research data is put into practice.

The Microlending Debate Was a Case in Point!

One example of an undeserved level of confidence being placed in a single research study is the research into microlending practices by World Bank economist Shahidur Khandker and economist Mark Pitt, first published in 1998. Later recognized by the Grameen Foundation as “the first serious attempt to use statistical methods to generate a truly accurate assessment of the impact of microfinance,” the initial study prompted a decade-long debate over the extent to which the chosen methodology and the results it generated really did constitute proof of the efficacy of microlending practices.

When researchers David Roodman and Jonathan Morduch attempted to reproduce the Khandker and Pitt study, they were unable to replicate the original finding that microfinance programs were an effective tool for addressing poverty. Roodman and Morduch argued that the reduction in poverty that Khandker and Pitt measured could not be exclusively attributed to the microfinance loans provided; it might instead be an example of increased wealth raising the comfort level with borrowing (as opposed to borrowing raising wealth).

When we consider that many of the world’s developed nations and their largest financial institutions underwrote significant funding packages to the Grameen Bank and its contemporaries based on the evidence of the Khandker and Pitt study, it’s easy to see why the refutation by Roodman and Morduch prompted such an extended debate.

One of the key stumbling blocks in the debate was Khandker and Pitt’s decision not to make all elements of their data set public. Without full transparency on the data collected and the precise calculations used, both parties suffered: Roodman and Morduch’s attempts to reproduce the study could be dismissed as incomplete, with a corresponding suggestion of inaccuracy, while the original study was tarnished with the suggestion of having something to hide. On that basis, both arguments were able to stand for over a decade and remain unresolved.

Caution Is Needed, as Trust Can Be Misplaced

The term academic research carries with it an implied level of trust. To a novice, the presence of such quality-control markers as institutional rankings, journal ratings, peer-review processes, subject-matter expertise, and assumed academic integrity is sufficient to place a high level of confidence in such research data.

However, we are now dealing with a reality of manipulated rankings, faked ratings, scam reviews, and expertise built on inflated or even bogus résumés. Regrettably, the mere claim that a research project has been thoroughly vetted no longer means it can be taken at face value.

There is no suggestion (though much speculation) that the Khandker and Pitt study falls into this category, but the continued lack of complete transparency does serve as a strong warning that a single study should never be acted upon as conclusive evidence.

 
