Rocky Balboa, Philadelphia, PA. Creative Commons licensed picture by seng1011 (steve eng) on Flickr.
[This post is part of an ongoing series about impact factors]
In the world of abused performance metrics, the impact factor is the undisputed heavyweight champion of the (publishing) world.
It has been an eventful year in the boxing ring of scientific publishing since the last set of figures was published by Thomson Reuters. A brand new journal called PeerJ launched with a radical publish-’til-you-perish business model. There’s another new journal on the way too, in the shape of eLifeSciences, with its own significant differences from current publishing models. Then there was the Finch report on Open Access. If that wasn’t enough fun, the alternative metrics (“altmetrics”) movement has been gathering pace, alongside suggestions that the impact factor may be losing its grip on the supposed “title”.
The impact factors below are the most recent, published June 28th 2012, covering data from 2011. Love them or loathe them, use them or abuse them, game them or shame them … here is a tiny selection of impact factors for the 10,675 journals that are tracked in Journal Citation Reports (JCR) ordered by increasing punch power.
WARNING: Abusing these figures can seriously damage your Science – you have been warned! Normal caveats apply, see nature.com/metrics.
* The Russian Journal of Cardiology is included here for reference as it has the lowest non-zero impact factor of any science journal. A rather dubious honour…
** The Cancer Journal for Clinicians is the highest ranked journal in science, it is included here for reference. Could it be the first journal to have an impact factor of more than 100?
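For anyone wondering where these numbers actually come from, the arithmetic is simple: a journal’s 2011 impact factor is the number of citations received in 2011 by items the journal published in 2009 and 2010, divided by the number of citable items it published in those two years. A minimal sketch (the numbers below are made up for illustration, not real JCR data):

```python
def impact_factor(citations, citable_items):
    """2011 impact factor: citations received in 2011 to items published
    in 2009-2010, divided by the citable items from those two years."""
    return citations / citable_items

# Purely illustrative figures, not taken from Journal Citation Reports:
print(impact_factor(citations=10000, citable_items=250))  # 40.0
```

Note that editorials, letters and other “non-citable” items are excluded from the denominator but not the numerator, which is one of the ways the figure can be gamed.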
- Richard Van Noorden (2012). Journal offers flat fee for ‘all you can publish’, Nature, 486 (7402), 166 DOI: 10.1038/486166a
- Jason Priem, Heather Piwowar and Bradley Hemminger (2012). Altmetrics in the wild: Using social media to explore scholarly impact arxiv.org/abs/1203.4745
- George Lozano, Vincent Lariviere and Yves Gingras (2012). The weakening relationship between the Impact Factor and papers’ citations in the digital age arxiv.org/abs/1205.4328
Mendeley is a handy piece of desktop and web software for managing and sharing research papers. This popular tool has been getting a lot of attention lately, and with some impressive statistics it’s not difficult to see why. At the time of writing Mendeley claims to have over 36 million papers, added by just under half a million users working at more than 10,000 research institutions around the world. That’s impressive considering the startup company behind it has only been going for a few years. The major established commercial players in the field of bibliographic databases (WoK and Scopus) currently have around 40 million documents, so if Mendeley continues to grow at this rate, they’ll be more popular than Jesus (and Elsevier and Thomson) before you can say “bibliography”. But to get a real handle on how big Mendeley is, we need to know how many of those 36 million documents are unique, because if there are lots of duplicated documents then it will affect the overall head count.
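That head count could be estimated with a simple deduplication pass over the catalogue. Here is a rough sketch in Python, assuming each record carries an optional DOI and a title; the field names and sample data are my own illustration, not Mendeley’s actual schema or API:

```python
def unique_papers(records):
    """Deduplicate paper records, preferring the DOI when present and
    falling back to a whitespace-normalised, lowercased title."""
    seen = set()
    unique = []
    for rec in records:
        doi = (rec.get("doi") or "").strip().lower()
        if doi:
            key = ("doi", doi)
        else:
            key = ("title", " ".join(rec.get("title", "").lower().split()))
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical records: two DOI duplicates and two title duplicates.
papers = [
    {"doi": "10.1038/486166a", "title": "Journal offers flat fee"},
    {"doi": "10.1038/486166A ", "title": "Journal offers flat fee for 'all you can publish'"},
    {"doi": "", "title": "Altmetrics in the  wild"},
    {"title": "altmetrics in the wild"},
]
print(len(unique_papers(papers)))  # 2
```

Real-world matching is messier than this (typos in titles, missing DOIs, preprint vs. published versions), so any such count is a lower bound on the duplication problem.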
[This post is part of an ongoing series about impact factors. See this post for the latest impact factors published in 2012.]
Roll up, roll up, ladies and gentlemen, Impact Factor Boxing is here again. As with last year (2009), the metrics used in this combat sport are already a year out of date. But this doesn’t stop many people from writing about impact factors, and it’s been an interesting year for the metrics used by many to judge the relative value of scientific work. The Public Library of Science (PLoS) launched their article-level metrics within the last year, following the example of BioMedCentral’s “most viewed” articles feature. Alongside these new-style metrics, the traditional impact factors live on, despite their limitations. Critics such as Harold Varmus have recently pointed out that:
“The impact factor is a completely flawed metric and it’s a source of a lot of unhappiness in the scientific community. Evaluating someone’s scientific productivity by looking at the number of papers they published in journals with impact factors over a certain level is poisonous to the system. A couple of folks are acting as gatekeepers to the distribution of information, and this is a very bad system. It really slows progress by keeping ideas and experiments out of the public domain until reviewers have been satisfied and authors are allowed to get their paper into the journal that they feel will advance their career.”
To be fair though, it’s not the metric itself that is flawed so much as the way it is used (and abused), a subject covered in much detail in a special issue of Nature at http://nature.com/metrics [2,3,4,5]. It’s much harder than it should be to get hold of these metrics, so I’ve reproduced some data below (fair use? I don’t know, I am not a lawyer…) to minimise the considerable frustrations of using Journal Citation Reports (JCR).
Love them, loathe them, use them, abuse them, ignore them or obsess over them … here’s a small selection of the 7,347 journals that are tracked in JCR ordered by increasing impact.
Maybe next year Thomson Reuters, who publish this data, could start attaching large government health warnings (like on cigarette packets) and long disclaimers to this data? WARNING: Abusing these figures can seriously damage your Science – you have been warned!
- Rizkallah, J., & Sin, D. (2010). Integrative Approach to Quality Assessment of Medical Journals Using Impact Factor, Eigenfactor, and Article Influence Scores PLoS ONE, 5 (4) DOI: 10.1371/journal.pone.0010204
- Abbott, A., Cyranoski, D., Jones, N., Maher, B., Schiermeier, Q., & Van Noorden, R. (2010). Metrics: Do metrics matter? Nature, 465 (7300), 860-862 DOI: 10.1038/465860a
- Van Noorden, R. (2010). Metrics: A profusion of measures Nature, 465 (7300), 864-866 DOI: 10.1038/465864a
- Braun, T., Osterloh, M., West, J., Rohn, J., Pendlebury, D., Bergstrom, C., & Frey, B. (2010). How to improve the use of metrics Nature, 465 (7300), 870-872 DOI: 10.1038/465870a
- Lane, J. (2010). Let’s make science metrics more scientific Nature, 464 (7288), 488-489 DOI: 10.1038/464488a
[Creative Commons licensed picture of Golden Gloves Prelim Bouts by Kate Gardiner ]
There is no shortage of bibliographic management tools out there, which ultimately aim to save you time managing the papers and books in your personal library. I’ve just been to a demo and sales pitch for one of them, a tool called RefWorks. RefWorks claims to be “an online research management, writing and collaboration tool — designed to help researchers easily gather, manage, store and share all types of information, as well as generate citations and bibliographies”. It looks like a pretty good tool, similar to the likes of EndNote but with more web-based features of the kind found in Citeulike and Connotea. Here are some ultra-brief notes: RefWorks in five minutes, the good, the bad and the ugly.
RefWorks’ finer features
- Refworks is web based, you can use it from any computer with an internet connection, without having to install any software. Platform independent, Mac, Windows, Linux, Blackberry, iPhone, Woteva. This feature is becoming increasingly common, see Martin Fenner’s Online reference managers, not quite there yet article at Nature Network.
- Share selected references and bibliographies on the Web via RefShare
- It imports and exports all the things you would expect: EndNote (definitely), XML, feeds (RSS), flat files, BibTeX (check?), RIS (check?) and several others via the screen-scraping tool RefGrab-It
- Interfaces closely with PubMed, Scopus and many other databases, e.g. you can search these directly from your RefWorks library. You can also export from Scopus to RefWorks…
- Not part of the Reed-Elsevier global empire (yet), currently part of ProQuest, based in California.
- A free 30-day trial is available
- Just like EndNote, it can be closely integrated with Microsoft Word, to cite-while-you-write
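The import/export plumbing in the list above is mostly tag mapping between formats. As a rough illustration (this is not RefWorks’s actual mapping, and it covers only a handful of RIS tags), converting an RIS record to a BibTeX entry might look like:

```python
# Minimal RIS tag -> BibTeX field mapping; real converters cover dozens more.
RIS_TO_BIBTEX = {"TI": "title", "AU": "author", "PY": "year", "JO": "journal"}

def ris_to_bibtex(ris_text, key="ref1"):
    """Convert one RIS record to a BibTeX @article entry (toy version)."""
    fields = {}
    for line in ris_text.strip().splitlines():
        tag, _, value = line.partition("  - ")
        tag, value = tag.strip(), value.strip()
        if tag in RIS_TO_BIBTEX and value:
            name = RIS_TO_BIBTEX[tag]
            # RIS repeats AU once per author; BibTeX joins them with "and".
            fields[name] = f"{fields[name]} and {value}" if name in fields else value
    body = ",\n".join(f"  {k} = {{{v}}}" for k, v in fields.items())
    return f"@article{{{key},\n{body}\n}}"

ris = """TY  - JOUR
AU  - Priem, Jason
AU  - Piwowar, Heather
TI  - Altmetrics in the wild
PY  - 2012
ER  - """
print(ris_to_bibtex(ris))
```

The fiddly parts a real tool has to handle, and this sketch doesn’t, are author-name normalisation, character encoding and the many dialects of RIS produced by different databases.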