Rocky Balboa, Philadelphia, PA. Creative Commons licensed picture by seng1011 (steve eng) on Flickr.
[This post is part of an ongoing series about impact factors]
Among abused performance metrics, the impact factor is the undisputed heavyweight champion of the (publishing) world.
It has been an eventful year in the boxing ring of scientific publishing since the last set of figures was published by Thomson Reuters. A brand new journal called PeerJ launched with a radical publish-’til-you-perish business model. There’s another new journal on the way too in the shape of eLife Sciences, with its own significant differences from current publishing models. Then there was the Finch report on Open Access. If that wasn’t enough fun, the alternative metrics (“altmetrics”) movement has been gathering pace, alongside suggestions that the impact factor may be losing its grip on its supposed “title”.
The impact factors below are the most recent, published June 28th 2012, covering data from 2011. Love them or loathe them, use them or abuse them, game them or shame them … here is a tiny selection of impact factors for the 10,675 journals that are tracked in Journal Citation Reports (JCR), ordered by increasing punch power.
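For anyone rusty on the rules of this sport: a journal’s 2011 impact factor is the number of citations received in 2011 by items it published in 2009 and 2010, divided by the number of citable items (articles and reviews) it published in those two years. A minimal sketch of the arithmetic, with made-up numbers rather than any journal’s real figures:

```python
# Sketch of how a 2011 impact factor is calculated.
# All numbers below are invented for illustration; real figures
# come from Thomson Reuters' Journal Citation Reports.

citations_in_2011_to_2009_items = 2100  # hypothetical
citations_in_2011_to_2010_items = 1900  # hypothetical
citable_items_2009 = 260                # hypothetical count of articles + reviews
citable_items_2010 = 240                # hypothetical

impact_factor_2011 = (
    (citations_in_2011_to_2009_items + citations_in_2011_to_2010_items)
    / (citable_items_2009 + citable_items_2010)
)
print(f"2011 impact factor: {impact_factor_2011:.3f}")  # 8.000
```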
WARNING: Abusing these figures can seriously damage your Science – you have been warned! Normal caveats apply; see nature.com/metrics.
* The Russian Journal of Cardiology is included here for reference as it has the lowest non-zero impact factor of any science journal. A rather dubious honour…
** The Cancer Journal for Clinicians is the highest ranked journal in science, it is included here for reference. Could it be the first journal to have an impact factor of more than 100?
- Richard Van Noorden (2012). Journal offers flat fee for ‘all you can publish’. Nature, 486 (7402), 166. DOI: 10.1038/486166a
- Jason Priem, Heather Piwowar and Bradley Hemminger (2012). Altmetrics in the wild: Using social media to explore scholarly impact. arXiv:1203.4745
- George Lozano, Vincent Larivière and Yves Gingras (2012). The weakening relationship between the Impact Factor and papers’ citations in the digital age. arXiv:1205.4328
[This post is part of an ongoing series about impact factors. See Impact Factor Boxing 2012 for the latest figures.]
Well, it’s that time again. The annual sweaty fist-fight for supremacy between the scientific journals, as measured by impact factors, is upon us. Much ink (virtual and actual) has been spilt on the subject of impact factors, which we won’t add to here, other than to say:
Hey look, the “European” journals might be catching up with the “American” ones. 
So, love them, loathe them, use them, abuse them, ignore them or obsess over them… here’s a tiny selection of the 10,196 journals that are tracked in Journal Citation Reports (JCR), ordered by increasing impact.
WARNING: Abusing these figures can seriously damage your Science – you have been warned! (normal caveats apply)
* The Naval Architect is included here for reference as it has the lowest non-zero impact factor of any science journal. A rather dubious honour…
** The Cancer Journal for Clinicians, the highest ranked journal in science, is included here for reference.
[Creative Commons licensed picture of Khmer boxing by lecercle]
- Karageorgopoulos, D., Lamnatou, V., Sardi, T., Gkegkes, I., & Falagas, M. (2011). Temporal Trends in the Impact Factor of European versus USA Biomedical Journals. PLoS ONE, 6 (2). DOI: 10.1371/journal.pone.0016300
According to some estimates, there are fifty million articles in existence as of 2010. Picture of a fifty million dollar note by ZeroOne on Flickr.
Earlier this year, the scientific journal PLoS ONE published their 10,000th article. Ten thousand articles is a lot of papers, especially when you consider that PLoS ONE only started publishing four short years ago in 2006. But scientists have been publishing in journals for at least 350 years, so it might make you wonder: how many articles have been published in scientific and learned journals since time began?
PubMed Central, a full-text archive of journals freely available to all, currently holds over 1.7 million articles. But these articles are only a tiny fraction of the total literature, since much of the rest is locked up behind publishers’ paywalls and is inaccessible to many people.
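Putting the two numbers above side by side gives a rough sense of scale: if the fifty-million estimate is anywhere near right, PubMed Central’s free full-text archive covers only a few percent of everything ever published. A quick back-of-envelope sketch (both inputs are rough estimates, not exact counts):

```python
# Back-of-envelope calculation using the rough figures quoted above.

total_articles_estimate = 50_000_000  # estimated articles ever published (circa 2010)
pubmed_central_articles = 1_700_000   # free full-text articles in PubMed Central

fraction_free = pubmed_central_articles / total_articles_estimate
print(f"Freely available via PubMed Central: {fraction_free:.1%}")  # 3.4%
```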
[This post is part of an ongoing series about impact factors. See this post for the latest impact factors published in 2012.]
Roll up, roll up, ladies and gentlemen, Impact Factor Boxing is here again. As with last year (2009), the metrics used in this combat sport are already a year out of date. But this doesn’t stop many people from writing about impact factors, and it’s been an interesting year for the metrics used by many to judge the relative value of scientific work. The Public Library of Science (PLoS) launched their article-level metrics within the last year, following the example of BioMed Central’s “most viewed” articles feature. Next to these new-style metrics, the traditional impact factors live on, despite their limitations. Critics like Harold Varmus have recently pointed out that:
“The impact factor is a completely flawed metric and it’s a source of a lot of unhappiness in the scientific community. Evaluating someone’s scientific productivity by looking at the number of papers they published in journals with impact factors over a certain level is poisonous to the system. A couple of folks are acting as gatekeepers to the distribution of information, and this is a very bad system. It really slows progress by keeping ideas and experiments out of the public domain until reviewers have been satisfied and authors are allowed to get their paper into the journal that they feel will advance their career.”
To be fair though, it’s not the metric itself that is flawed, more the way it is used (and abused) – a subject covered in much detail in a special issue of Nature at http://nature.com/metrics [2,3,4,5]. It’s much harder than it should be to get hold of these metrics, so I’ve reproduced some data below (fair use? I don’t know, I am not a lawyer…) to minimise the considerable frustrations of using Journal Citation Reports (JCR).
Love them, loathe them, use them, abuse them, ignore them or obsess over them … here’s a small selection of the 7,347 journals that are tracked in JCR, ordered by increasing impact.
Maybe next year Thomson Reuters, who publish this data, could start attaching large government health warnings (like those on cigarette packets) and long disclaimers? WARNING: Abusing these figures can seriously damage your Science – you have been warned!
- Rizkallah, J., & Sin, D. (2010). Integrative Approach to Quality Assessment of Medical Journals Using Impact Factor, Eigenfactor, and Article Influence Scores. PLoS ONE, 5 (4). DOI: 10.1371/journal.pone.0010204
- Abbott, A., Cyranoski, D., Jones, N., Maher, B., Schiermeier, Q., & Van Noorden, R. (2010). Metrics: Do metrics matter? Nature, 465 (7300), 860-862. DOI: 10.1038/465860a
- Van Noorden, R. (2010). Metrics: A profusion of measures. Nature, 465 (7300), 864-866. DOI: 10.1038/465864a
- Braun, T., Osterloh, M., West, J., Rohn, J., Pendlebury, D., Bergstrom, C., & Frey, B. (2010). How to improve the use of metrics. Nature, 465 (7300), 870-872. DOI: 10.1038/465870a
- Lane, J. (2010). Let’s make science metrics more scientific. Nature, 464 (7288), 488-489. DOI: 10.1038/464488a
[Creative Commons licensed picture of Golden Gloves Prelim Bouts by Kate Gardiner]
Quite by chance, I stumbled on this interesting paper yesterday by Philip Campbell, who is the Editor-in-Chief of the scientific über-journal Nature. Here is the abstract:
As Editor-in-Chief of the journal Nature, I am concerned by the tendency within academic administrations to focus on a journal’s impact factor when judging the worth of scientific contributions by researchers, affecting promotions, recruitment and, in some countries, financial bonuses for each paper. Our own internal research demonstrates how a high journal impact factor can be the skewed result of many citations of a few papers rather than the average level of the majority, reducing its value as an objective measure of an individual paper. Proposed alternative indices have their own drawbacks. Many researchers say that their important work has been published in low-impact journals. Focusing on the citations of individual papers is a more reliable indicator of an individual’s impact. A positive development is the increasing ability to track the contributions of individuals by means of author-contribution statements and perhaps, in the future, citability of components of papers rather than the whole. There are attempts to escape the hierarchy of high-impact-factor journals by means of undifferentiated databases of peer-reviewed papers such as PLoS One. It remains to be seen whether that model will help outstanding work to rise to due recognition regardless of editorial selectivity. Although the current system may be effective at measuring merit on national and institutional scales, the most effective and fair analysis of a person’s contribution derives from a direct assessment of individual papers, regardless of where they were published.
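Campbell’s point about skew is easy to demonstrate with a toy simulation: when citations per paper follow a heavy-tailed distribution, the mean (which is what the impact factor reflects) sits well above the median, i.e. the typical paper. The distribution below is invented for illustration, not Nature’s actual citation data:

```python
# Toy demonstration that a mean-based metric like the impact factor
# can be dragged upwards by a few highly cited papers.
# The log-normal distribution here is invented, not real citation data.

import random
import statistics

random.seed(42)

# Simulate citation counts for 1,000 papers with a heavy-tailed distribution.
citations = [int(random.lognormvariate(mu=1.5, sigma=1.2)) for _ in range(1000)]

print(f"mean citations (what the impact factor tracks): {statistics.mean(citations):.1f}")
print(f"median citations (the typical paper): {statistics.median(citations):.1f}")
print(f"most-cited paper: {max(citations)}")
```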
It’s well worth reading the views of the editor of an important closed-access journal like Nature, a world heavyweight champion of Impact Factor Boxing, so his views on article-level bibliometrics and novel models of scientific publishing on the Web like PLoS ONE are enlightening. There are some interesting papers in the same issue, which has a special theme on the use and misuse of bibliometric indices in evaluating scholarly performance. Oh, and the article is published in an Open Access journal too. Is it just me, or is there a strong smell of irony in here?
- Philip Campbell (2008). Escape from the impact factor. Ethics in Science and Environmental Politics, 8, 5-7. DOI: 10.3354/esep00078
- Philip Campbell (1995). Postscript from a new hand. Nature, 378 (6558), 649. DOI: 10.1038/378649b0
- John Sturges (1963). The Great Escape.