As Thomson Reuters Director of Editorial Development, Jim Testa travels the globe working with researchers, institutions and scholarly journals to understand their research needs in the areas of both content and evaluation. Here, we talk with Jim about the controversial yet ever-present Impact Factor, a proprietary measure of journal influence created in the 1950s by Thomson Reuters Chairman Emeritus and ISI founder Dr. Eugene Garfield.
TS: In recent years, there has been a lot of talk about Impact Factor being “misused.” What is the most widespread misuse of Impact Factor?
JT: I’d say that using Impact Factor to evaluate the work of an individual author (instead of a journal) is the most widespread. Some institutions draw a direct connection between the quality of an author’s work and the journal in which it is published. Unfortunately, there is no 1:1 correlation. To say that because a researcher is publishing in a certain journal, he or she is more influential or deserves more credit is not necessarily true. There are many other variables to consider.
TS: Thomson has publicly stated many times that Impact Factor should not be used to evaluate individuals. Why do you think this practice is still prevalent?
JT: Because the Impact Factor is so ubiquitous, so well established, and released at the same time every year. You’d be hard-pressed to find another metric that has those attributes. It’s a real indicator of quality (though not the last word in quality), influence and importance, measured by way of citation.
TS: Using Impact Factor to evaluate individuals is an example of “misuse” or “misinterpretation” of the metric. What about “manipulation” of Impact Factor?
JT: We’ve seen some interesting trends with regard to journal self-citation — journal articles that reference previous articles in the journal itself. Evidence suggests that some journals — some, not many — manufacture these self-citations to inflate Impact Factor.
To be clear, self-citations are not inherently bad. Our challenge is more than just monitoring the journal’s self-citation rate. It’s about monitoring the self-citation rate’s effect on the journal’s Impact Factor and its rank in its respective category.
Consider a journal in a very small field of study with only five citations, three of them self-citations. The self-citation rate will be high, but the effect of those self-citations on the Impact Factor is negligible.
But because Impact Factor is a measure of influence, it’s essential to demonstrate that the journal is contributing to the scientific discourse. If 70 to 90 percent of a journal’s citations are self-citations, it essentially renders the Impact Factor number meaningless.
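For readers unfamiliar with the mechanics, the standard Journal Impact Factor is the number of current-year citations to a journal's articles from the previous two years, divided by the number of citable items it published in those two years. The sketch below, in Python, uses hypothetical numbers (the journal sizes and citation counts are illustrative, not Thomson Reuters data) to show why a high self-citation *rate* can still have a small absolute effect on the Impact Factor, as in the small-field example above.

```python
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """JIF for year Y: citations in Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years

def self_citation_rate(self_citations, total_citations):
    """Fraction of a journal's received citations that come from the journal itself."""
    return self_citations / total_citations

# The small-field journal from the interview: five citations in the
# JIF window, three of them self-citations. The rate looks alarming...
rate = self_citation_rate(3, 5)
print(f"self-citation rate: {rate:.0%}")

# ...but assuming (hypothetically) 40 citable items in the window,
# stripping the self-citations barely moves the Impact Factor itself.
jif_with = impact_factor(5, 40)
jif_without = impact_factor(5 - 3, 40)
print(f"JIF with self-citations:    {jif_with:.3f}")
print(f"JIF without self-citations: {jif_without:.3f}")
```

This is why, as Jim notes, the monitoring question is not the raw self-citation rate but its effect on the journal's Impact Factor and category rank: for a large, heavily cited journal the same 60% rate would shift the number dramatically.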
TS: When did you first notice variances in self-citation trends?
JT: Back when we first started to track self-citations, the research community didn’t necessarily see this as manipulation. Here’s an example:
In the mid-1990s, we were meeting with the editor of an international science journal. The editor of the journal was boasting that his journal’s Impact Factor had grown dramatically over the past year.
I said, “This is a wonderful achievement. How did you increase your Impact Factor so dramatically?”
He replied, “It was easy … we just instructed all of the authors to cite the journal in their work.” He simply didn’t see anything wrong with it.
TS: Does Thomson Reuters monitor for excessive self-citations?
JT: Absolutely. For Journal Citation Reports, we monitor journal self-citation rates — not just in countries where Impact Factor manipulation is more prevalent, but all across the world. And we act, when necessary. For this year’s Journal Citation Reports (2006), we suppressed data from seven journals with excessive self-citation rates.
TS: What’s next for Impact Factor?
JT: Impact Factor, I think, is here to stay. Impact Factor remains unique and one of the truest measures of overall citation impact of a journal, indicating importance and influence. It’s as simple as can be, and that’s why it’s so compelling.