Decreasing the Influence of Impact Factor

For years, the academic community has discussed extensively that the journal impact factor (IF) is a weak metric, especially when it comes to assessing individual papers or evaluating the accomplishments of individual researchers. The impact factor can be a broad indicator of a journal's output, but it should not be used to judge the quality of individual studies or their authors.

Where Rather Than What

Unfortunately, many publishers, funders, and researchers continue to portray the impact factor as one of the most important criteria for judging the quality of research. Publishers use the IF to advertise their journals, whereas funding agencies, research centers, and universities often judge the quality of research by the journal in which it has been published. In fact, when deciding on promotions or tenure, researchers are judged by the journals in which they have published their studies; moreover, researchers themselves evaluate a paper based on the IF of the journal in which it appeared. In such a system, it seems to matter more where a particular study has been published than what it has found. As a result, many scientists aim to publish their results in high-impact-factor journals to advance their research careers, even though the flaws of the IF are evident to the academic community. This pressure can encourage researchers to hype up their work and can also waste valuable time. Many authors submit their papers to prominent journals as a first choice and then move down to less influential ones if they are rejected. This cycle of submission and rejection can take more than 6-8 months.

Replacing Impact Factor

One of the main problems with the IF is that the citation distribution within a journal is usually skewed: only a small number of highly cited papers actually contribute to the journal's IF. Additionally, the measure does not distinguish between positive and negative citations; it also has a short time horizon (favoring fields that move quickly) and can be easily influenced by publishing reviews, letters, or editorials. Despite all these drawbacks, many scientists remain attached to the IF, which Stephen Curry, a Professor of Structural Biology at Imperial College London who has often criticized the misuse of this parameter, has called “astonishing for a group that prizes its intelligence.”
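
As a rough illustration of this skew, the short Python sketch below compares the mean citation count (which is what an IF-style average reflects) with the median for a set of invented citation counts. The numbers are hypothetical and chosen only to show how a few highly cited papers can inflate a journal-level average.

```python
# Hypothetical illustration: how a few highly cited papers skew a
# journal-level average such as the impact factor.
from statistics import mean, median

# Invented citation counts for 20 papers from an imaginary journal over a
# two-year window; most papers are cited a handful of times, a few heavily.
citations = [0, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5, 5, 6, 7, 8, 9, 12, 45, 60, 110]

print(f"Mean citations (IF-style average): {mean(citations):.1f}")    # 14.5
print(f"Median citations (typical paper):  {median(citations):.1f}")  # 4.5
top3_share = sum(sorted(citations)[-3:]) / sum(citations)
print(f"Share of citations from the top 3 papers: {top3_share:.0%}")  # ~74%
```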

The voices against the IF seem to be getting louder. In 2012, a group of editors and publishers released the San Francisco Declaration on Research Assessment (DORA), in which they recognized “the need to improve the ways in which the outputs of scientific research are evaluated.” More than 12,500 individuals and about 900 organizations have signed it so far. This year, in July, the American Society for Microbiology announced that it would remove the IF from its journals and website, and that it would no longer use it for marketing or advertising purposes. Similarly, eLife is also moving away from the impact factor. Nature journals now use a suite of citation-based metrics, including the article influence score and a two-year median, to measure their influence. Other journals, such as those published by the Royal Society of Chemistry and EMBO Press, display citation distributions on their sites. In a paper recently posted to the preprint server bioRxiv, a group of researchers led by Stephen Curry, and including Marcia McNutt, president of the National Academy of Sciences and former editor of the journal Science, suggest that other publishers should also play down their impact factors and emphasize citation distribution curves instead.

Alternative Metrics

Further efforts to free academia from its dependence on the impact factor have led to the introduction of alternative metrics, such as PlumX Metrics, article-level metrics (ALMs, used by PLOS), and Altmetric (used by Nature Publishing, Wiley, and others). Using these tools, readers can now track the reach of individual studies on many journal sites. They can see the number of citations, blog posts, and social shares, or find out how many times a manuscript has been featured in the news. While all these figures can help provide an up-to-date picture of the online activity around individual papers, they should not be used alone to evaluate the relevance of a manuscript.

One of the main problems with altmetrics is the way social media works. Many posts are retweeted or shared without any check on their veracity, and people often forward the information without actually reading the manuscript. Moreover, many news outlets simply reproduce press releases sent out by journals and research institutions (without even examining the papers), which can boost the altmetric scores of well-advertised manuscripts. In general, researchers working on applied sciences and newsworthy topics will benefit from altmetrics, whereas those dedicated to fundamental research might have a more difficult time. Other long-term online metrics, such as download numbers, page views, and comments, may be more meaningful.
