The Journal Impact Factor is a proprietary metric, published annually by the Scientific business of Thomson Reuters via Journal Citation Reports (JCR). JCR provides a number of metrics and quantitative tools for ranking, evaluating, categorizing, and comparing journals.
The Journal Impact Factor is a measure of the frequency with which the “average article” in a journal has been cited in a particular year or period. A journal’s Impact Factor is calculated by dividing the number of citations received in the JCR year by items the journal published in the previous two years by the total number of citable articles it published in those two years. A Journal Impact Factor of 5.0 means that, on average, the articles published in that journal within the past two years have been cited five times.
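The calculation above can be sketched in a few lines of Python. The function name and the figures are hypothetical, chosen only so the example reproduces the 5.0 case described in the text:

```python
def journal_impact_factor(citations_to_prev_two_years, citable_articles_prev_two_years):
    """Compute a Journal Impact Factor for a given JCR year.

    citations_to_prev_two_years: citations received in the JCR year by
        items the journal published in the two preceding years.
    citable_articles_prev_two_years: number of citable articles the
        journal published in those same two years.
    """
    return citations_to_prev_two_years / citable_articles_prev_two_years

# Hypothetical journal: 150 citations in the JCR year to articles
# published in the previous two years, across 30 citable articles.
jif = journal_impact_factor(150, 30)
print(jif)  # 5.0 -> the "average article" was cited five times
```

Note that the numerator counts citations to all of the journal’s items from those two years, while the denominator counts only citable articles, which is one of the subtleties behind the formula’s apparent simplicity.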
Dr. Eugene Garfield, founder of the Institute for Scientific Information (ISI, now the Scientific business of Thomson Reuters), first envisioned creating a metric to evaluate journal performance in 1955, five years before founding ISI (i). Over the next 20 years, Garfield and his colleague, the late Irving H. Sher, worked to refine the Journal Impact Factor concept. Since its refinement and introduction into ISI’s Science Citation Index in the 1970s, the Journal Impact Factor has been a highly influential ranking tool used by participants in all phases of the research publishing cycle — librarians, publishers, editors, authors and information analysts.
The importance of the Journal Impact Factor in the scientific and academic communities has naturally generated some scrutiny. While bibliometricians recognize that the Journal Impact Factor offers vital insight into the influential journals within the sciences and social sciences, many — its creator Garfield among them — concede that, over the past two decades, the Journal Impact Factor’s usage has strayed outside its intended application. The demands that some institutions and governments place on their researchers to publish in high-Impact-Factor journals may be leading to misuse and manipulation of the metric.
But throughout its history, the Scientific business of Thomson Reuters has maintained that the Journal Impact Factor has limited applications and may be properly used only in context.
The Importance of “Context”
The Journal Impact Factor’s formula is simple; comprehending and applying it correctly is not. When applying an inherently simple, universal formula across different, complex fields of study, context becomes crucial. The simplicity of the formula dictates that it be used within very specific parameters and for very specific purposes. Use outside of those intended purposes has the potential to mislead … and for a measurement that has become so powerful, so widely accepted, it has the potential to harm.
Perhaps the most prominent misuse of the Journal Impact Factor is its misapplication to draw conclusions about the performance of an individual researcher. The Chronicle of Higher Education referenced this misuse in an October 2005 article, “The Number That’s Devouring Science.” (ii)
[Journal Impact Factors] also help in the modern world of ultraspecialized science. Members of a tenure committee or a hiring panel find it increasingly difficult to assess the papers of a candidate working outside their own subdiscipline, so they use the impact factor of the journal in which the paper appeared as a measure of the paper’s quality. By that logic, evaluators rate a paper more highly if it appears in a high-impact journal, regardless of what the paper actually says.
Dr. Garfield and Thomson Reuters have been outspoken opponents of this misuse. In fact, in a 1998 letter to the editor of the German journal Unfallchirurg (iii), Dr. Garfield commented on the unintended use of the Journal Impact Factor as a means to evaluate individuals:
The source of much anxiety about Journal Impact Factors comes from their misuse in evaluating individuals ... In many countries in Europe, I have found that in order to shortcut the work of looking up actual (real) citation counts for investigators, the Journal Impact Factor is used as a surrogate to estimate that count. I have always warned against this use.
Not only is it important that the Journal Impact Factor be applied only to journals, but also it is critical that the Journal Impact Factor be considered only within the discipline one is researching. Citation patterns vary greatly across disciplines, and when reviewed outside the context of journals in the same scientific disciplines, “absolute” Impact Factors do not accurately represent a journal’s performance.
For example, the leading Oncology journal listed in Journal Citation Reports 2006 has a Journal Impact Factor of more than 60, while the leading JCR-listed journal in Zoology has a Journal Impact Factor of just over 3.8. Both fields have a number of quality journals, but different subject areas have different citation patterns and a different number of researchers active in the scientific dialogue.
The same can be said for journals within the same discipline, but in different sub-disciplines. According to Journal Citation Reports 2006, the top five journals in Physical Chemistry have Journal Impact Factors between 9.2 and 19.1, while the top five journals in Applied Chemistry boast Journal Impact Factors between 2.4 and 4.7. Even at the sub-discipline level, these figures are apples and oranges … one cannot draw a conclusion about a journal’s quality based on the absolute Journal Impact Factor number.
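One way to make this concrete is to rank journals within their own JCR category rather than by absolute Impact Factor. The sketch below uses hypothetical journal names, with figures that loosely echo the ranges discussed above:

```python
from collections import defaultdict

# Hypothetical (category, journal, Impact Factor) records.
journals = [
    ("Oncology", "Journal A", 63.3),
    ("Oncology", "Journal B", 15.0),
    ("Oncology", "Journal C", 4.5),
    ("Zoology", "Journal D", 3.8),
    ("Zoology", "Journal E", 2.1),
    ("Zoology", "Journal F", 1.0),
]

def rank_within_category(records):
    """Return each journal's rank among the journals in its own category."""
    by_category = defaultdict(list)
    for category, name, jif in records:
        by_category[category].append((name, jif))
    ranks = {}
    for category, entries in by_category.items():
        entries.sort(key=lambda e: e[1], reverse=True)
        for position, (name, _) in enumerate(entries, start=1):
            ranks[name] = (category, position)
    return ranks

ranks = rank_within_category(journals)
print(ranks["Journal D"])  # ('Zoology', 1): tops its field with a JIF of only 3.8
print(ranks["Journal C"])  # ('Oncology', 3): a higher absolute JIF, a lower rank
```

Journal C has a higher absolute Impact Factor than Journal D, yet sits third in its category while Journal D leads its own — exactly the apples-and-oranges comparison an absolute number obscures.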
Preserving the Integrity of the Journal Impact Factor
The Journal Impact Factor is the most ubiquitous, simple and well-accepted indicator of journal quality that exists. When used properly and ethically, the Journal Impact Factor is as effective a benchmarking tool as one could hope for.
Its influence and relevance are undeniable. So, too, the research community’s concerns are real and legitimate.
Thomson Reuters encourages everyone within the research community to do their part in preventing misunderstanding and misuse of the Journal Impact Factor. What can the research community do?
Only consider the Journal Impact Factor data in context. The most important caveat when using the Journal Impact Factor is that journals can only be compared with like journals — particularly journals within the same discipline. “Absolute” Journal Impact Factors do not accurately represent a journal’s performance without the context of journals within the same field.
Multidisciplinary journals especially must be held to their own standards, taking into consideration the different disciplines they represent most prevalently.
Do not use the Journal Impact Factor to assess the performance of an individual researcher. As with all of the metrics provided through Journal Citation Reports, the Journal Impact Factor can only be used to evaluate journals.
Thomson Reuters considers much more than the Journal Impact Factor when evaluating journals … and so should you. With JCR, our goal is to provide a “complete picture” through objective, statistical data, allowing our users to make sound, evaluative judgments. That’s why, in addition to the Journal Impact Factor, JCR presents other metrics, such as Immediacy Index, Total Cites, Total Articles and Citation Half-Life, useful in a multidimensional assessment of archived journals.
i Garfield, E., The Agony and the Ecstasy—The History and Meaning of the Journal Impact Factor. International Congress on Peer Review and Biomedical Publication, September 16, 2005.
ii Monastersky, R., The Number That’s Devouring Science. The Chronicle of Higher Education, October 14, 2005.
iii Garfield, E., The Impact Factor and Using It Correctly. Der Unfallchirurg, June 1998.