If you can meet with Triumph and Disaster
And treat those two impostors just the same;
If you can bear to hear the truth you’ve spoken
Twisted by knaves to make a trap for fools,
Or watch the things you gave your life to, broken,
And stoop and build ’em up with worn-out tools:
Rudyard Kipling
Yes, it is that time of the year again when Thomson Reuters publishes its Journal Citation Reports (JCR) and everyone involved in science publishing gets obsessed with Impact Factors (IF). I’m not going to go through the arguments about how little Impact Factors really mean, and I’m certainly not going to try to forecast the health or otherwise of a publishing venture based on a change of 0.3 in its IF. But I thought you might want to know what Nature Protocols’ 2012 IF is. So cue drum roll …
It’s 7.96, down a couple of points from last year’s 9.92
Or
It’s 11.74, up from last year’s 10.20
Nothing is simple when it comes to Impact Factors. They are, roughly, an estimate of the average number of citations that a paper in a particular journal gets, but they are actually the number of citations a journal receives in a year to articles published in the previous two (or five) years, divided by the number of articles published in those years that it seems appropriate to cite (‘simples!’). Herein lies the apparent contradiction in the numbers I gave above. Nature Protocols’ Impact Factor based on citations in 2012 to protocols published in 2010 and 2011 (the two-year Impact Factor, IF2) is 7.96. The Impact Factor based on citations in 2012 to protocols published between 2007 and 2011 (the five-year Impact Factor, IF5) is 11.74.
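The mechanics of that calculation are simple enough to sketch in a few lines of Python. The citation and item counts below are entirely made up for illustration (the JCR does not publish Nature Protocols’ underlying counts alongside the headline figures); only the arithmetic is the point:

```python
def impact_factor(citations_to_window, citable_items_in_window):
    """Citations received in the census year to articles published in the
    window (two or five preceding years), divided by the number of
    citable items published in that window."""
    return citations_to_window / citable_items_in_window

# Hypothetical journal: 800 citations in 2012 to the 100 citable items
# it published in 2010-2011 gives a two-year IF of 8.0 ...
if2 = impact_factor(800, 100)
# ... while 2,750 citations to the 250 items from 2007-2011 gives a
# five-year IF of 11.0.
if5 = impact_factor(2750, 250)
print(if2, if5)  # 8.0 11.0
```

As with Nature Protocols, nothing stops the two windows from telling quite different stories about the same journal.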
For most journals there isn’t a whole lot of difference between the IF2 and the IF5 (certainly less than 10%), so when someone says Impact Factor they normally mean IF2. There are a few journals with big differences between the two values. The journal with the highest IF2 of all, CA: A Cancer Journal for Clinicians, published by the American Cancer Society, has an IF2 of 153.46 and an IF5 of 88.55, which I interpret as meaning that what it publishes is extremely relevant for a couple of years (and so is highly cited) but after that it quickly loses its importance.
Conversely, that Nature Protocols has a higher IF5 than IF2 could be taken as an indication that the protocols we publish remain relevant well beyond the first years after publication. A measure that might bear that out would be the cited half-life of the journal, defined as the median age of the articles published in Nature Protocols that were cited in a given year (here, 2012). For Nature Protocols it is 4.9 (in 2011 it was 4.2), but that really doesn’t say a lot, as Nature Protocols is a relatively young journal that has only been in existence since 2006. The maximum value for cited half-life we could have got would have been 6, and if there had been no change in the rate of citation of our protocols over time, a value of 3 would have been expected. We will need to be at least in our teens before I will put much store by cited half-life.
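Cited half-life is just a median over article ages, which a short sketch makes concrete. The list of publication years below is invented for illustration, not drawn from our actual citation data:

```python
from statistics import median

def cited_half_life(census_year, cited_pub_years):
    """Median age (census year minus publication year) of the articles
    cited in the census year."""
    ages = [census_year - year for year in cited_pub_years]
    return median(ages)

# Hypothetical: six articles cited in 2012, one from each year of a
# journal's 2006-2011 lifetime. Ages are 1..6, so the median is 3.5.
print(cited_half_life(2012, [2006, 2007, 2008, 2009, 2010, 2011]))  # 3.5
```

You can see why a young journal’s half-life is capped: an article cited in 2012 by a journal launched in 2006 can be at most six years old, whatever its long-term staying power turns out to be.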
There is another confounding factor in all this for Nature Protocols and its name is DAVID.
In December 2008 we published online a protocol by Richard Lempicki and colleagues at the National Cancer Institute at Frederick, Maryland, called “Systematic and integrative analysis of large gene lists using DAVID bioinformatics resources” (Nature Protocols 4, 44-57; doi:10.1038/nprot.2008.211 (2009)). It is our most cited paper, having been cited more than 3,000 times. It is in fact the most highly cited paper published in 2009 from any Nature journal, including Nature itself (yes, I know it was published in 2008, but it was in the January 2009 issue of the journal, and so that makes it officially a 2009 paper). In 2012 alone it was cited upwards of 1,000 times. However, since it was published in 2009, those citations do not contribute to our IF2, although they do to our IF5. It is difficult to get the figures needed to calculate the exact effect of a single paper on IF, but a fair approximation would be to say that had those 1,000 citations been included in the calculation of our IF2 it would have been a bit less than 3 points higher, while excluding them from our IF5 would reduce it by about 0.8, making both values in the region of 10.9.
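The back-of-envelope arithmetic here is easy to reproduce. Since a paper’s citations only change the numerator, shifting the IF by N citations just means dividing N by the citable-item count for the window. The item counts below (roughly 350 items for 2010-2011 and 1,250 for 2007-2011) are my assumptions, back-solved to match the “almost 3 points” and “about 0.8” figures, not numbers from the JCR:

```python
def shift_from_extra_citations(extra_citations, citable_items):
    """How much the IF moves when `extra_citations` are added to (or
    removed from) the numerator while the item count stays fixed."""
    return extra_citations / citable_items

# Assumed, not published, item counts: ~350 citable items in 2010-2011
# would make 1,000 extra citations worth a bit less than 3 IF2 points ...
print(round(7.96 + shift_from_extra_citations(1000, 350), 1))    # 10.8
# ... while ~1,250 items in 2007-2011 would make removing those
# citations cost about 0.8 IF5 points.
print(round(11.74 - shift_from_extra_citations(1000, 1250), 1))  # 10.9
```

Under those assumptions both adjusted values land in the region of 10.9, which is the point: one paper can open, or close, the entire gap between the two windows.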
Yep, you’ve got it! Citations to a single paper seem to account for all the difference between our Impact Factors. Which simply shows, once again, that ‘If’ may be a great poem, but IF is a poor measure of the scientific literature.