O'Really?

September 9, 2014

Punning with the Pub in PubMed: Are there any decent NCBI puns left? #PubMedPuns

PubMedication: do you get your best ideas in the Pub? CC-BY-ND image via trombone65 on Flickr.

Many people claim they get all their best ideas in the pub, but for lots of scientists their best ideas probably come from PubMed.gov – the NCBI’s monster database of biomedical literature. Consequently, the database has spawned a whole slew of tools that riff on the PubMed name, full of puns and portmanteaus (aka “PubManteaus”) built on pub-based wordplay. [1,2]

All of this might make you wonder, are there any decent PubMed puns left? Here’s an incomplete collection:

  • PubCrawler pubcrawler.ie “goes to the library while you go to the pub…” [3,4]
  • PubChase pubchase.com is a “life sciences and medical literature recommendations engine. Search smarter, organize, and discover the articles most important to you.” [5]
  • PubCasts scivee.tv/pubcasts allow users to “enliven articles and help drive more views” (to PubMed) [6]
  • PubFig has nothing to do with PubMed, but is research on face and image recognition that happens to be indexed by PubMed. [7]
  • PubGet pubget.com is a “comprehensive source for science PDFs, including everything you’d find in Medline.” [8]
  • PubLons publons.com OK, not much to do with PubMed directly, but PubLons helps you “record, showcase, and verify all your peer review activity.”
  • PubMine “supports intelligent knowledge discovery” [9]
  • PubNet pubnet.gersteinlab.org is a “web-based tool that extracts several types of relationships returned by PubMed queries and maps them into networks” aka a publication network graph utility. [10]
  • GastroPub repackages and re-sells ordinary PubMed content disguised as high-end luxury data at a premium, much like a gastropub.
  • PubQuiz is either the new name for NCBI database search www.ncbi.nlm.nih.gov/gquery or a quiz where you’re only allowed to use PubMed to answer questions.
  • PubSearch & PubFetch allow users to “store literature, keyword, and gene information in a relational database, index the literature with keywords and gene names, and provide a Web user interface for annotating the genes from experimental data found in the associated literature” [11]
  • PubScience is either “peer-reviewed drinking” courtesy of pubsci.co.uk or an ambitious publishing project tragically axed by the U.S. Department of Energy (DoE). [12,13]
  • PubSub is anything that makes use of the publish–subscribe pattern, such as NCBI feeds. [14]
  • PubLick as far as I can see, hasn’t been used yet, unless you count @publick on Twitter. If anyone were launching a startup working in the area of “licking” the tastiest data out of PubMed, that could be a great name for their data-mining business. Alternatively, it could be a catchy new nickname for PubMedCentral (PMC) or Europe PubMedCentral (EuropePMC) [15] – names which don’t exactly trip off the tongue. Since PMC is a free digital archive of publicly accessible full-text scholarly articles, PubLick seems like an appropriate moniker.

PubLick Cat got all the PubMed cream. CC-BY image via dizznbonn on flickr.

There are probably lots more PubMed puns and portmanteaus out there just waiting to be used. Pubby, Pubsy, PubLican, Pubble, Pubbit, Publy, PubSoft, PubSort, PubBrawl, PubMatch, PubGames, PubGuide, PubWisdom, PubTalk, PubChat, PubShare, PubGrub, PubSnacks and PubLunch could all work. If you know of any other decent (or dodgy) PubMed puns, leave them in the comments below, then go and build a scientific twitterbot or cool tool using the same name (if you haven’t already).

References

  1. Lu Z. (2011). PubMed and beyond: a survey of web tools for searching biomedical literature., Database: The Journal of Biological Databases and Curation, http://pubmed.gov/21245076
  2. Hull D., Pettifer S.R. & Kell D.B. (2008). Defrosting the digital library: bibliographic tools for the next generation web., PLOS Computational Biology, http://pubmed.gov/18974831
  3. Hokamp K. & Wolfe K.H. (2004) PubCrawler: keeping up comfortably with PubMed and GenBank., Nucleic acids research, http://pubmed.gov/15215341
  4. Hokamp K. & Wolfe K. (1999) What’s new in the library? What’s new in GenBank? let PubCrawler tell you., Trends in Genetics, http://pubmed.gov/10529811
  5. Gibney E. (2014). How to tame the flood of literature., Nature, 513 (7516) http://pubmed.gov/25186906
  6. Bourne P. & Chalupa L. (2008). A new approach to scientific dissemination, Materials Today, 11 (6) 48-48. DOI:10.1016/s1369-7021(08)70131-7
  7. Kumar N., Berg A., Belhumeur P.N. & Nayar S. (2011). Describable Visual Attributes for Face Verification and Image Search., IEEE Transactions on Pattern Analysis and Machine Intelligence, http://pubmed.gov/21383395
  8. Featherstone R. & Hersey D. (2010). The quest for full text: an in-depth examination of Pubget for medical searchers., Medical Reference Services Quarterly, 29 (4) 307-319. http://pubmed.gov/21058175
  9. Kim T.K., Cho W.S., Ko G.H., Lee S. & Hou B.K. (2011). PubMine: An Ontology-Based Text Mining System for Deducing Relationships among Biological Entities, Interdisciplinary Bio Central, 3 (2) 1-6. DOI:10.4051/ibc.2011.3.2.0007
  10. Douglas S.M., Montelione G.T. & Gerstein M. (2005). PubNet: a flexible system for visualizing literature derived networks., Genome Biology, http://pubmed.gov/16168087
  11. Yoo D., Xu I., Berardini T.Z., Rhee S.Y., Narayanasamy V. & Twigger S. (2006). PubSearch and PubFetch: a simple management system for semiautomated retrieval and annotation of biological information from the literature., Current Protocols in Bioinformatics, http://pubmed.gov/18428773
  12. Seife C. (2002). Electronic publishing. DOE cites competition in killing PubSCIENCE., Science (New York, N.Y.), 297 (5585) 1257-1259. http://pubmed.gov/12193762
  13. Jensen M. (2003). Another loss in the privatisation war: PubScience., Lancet, 361 (9354) 274. http://pubmed.gov/12559859
  14. Dubuque E.M. (2011). Automating academic literature searches with RSS Feeds and Google Reader(™)., Behavior Analysis in Practice, 4 (1) http://pubmed.gov/22532905
  15. McEntyre J.R., Ananiadou S., Andrews S., Black W.J., Boulderstone R., Buttery P., Chaplin D., Chevuru S., Cobley N., Coleman L.A. et al. (2010). UKPMC: a full text article resource for the life sciences., Nucleic Acids Research, http://pubmed.gov/21062818

May 11, 2012

Journal Fire: Bonfire of the Vanity Journals?

Fire by John Curley on Flickr, available via Creative Commons license.

When I first heard about Journal Fire, I thought: Great! Someone is going to take all the closed-access scientific journals and make a big bonfire of them! At the top of this bonfire would be the burning effigy of a wicker man, representing the very worst of the vanity journals [1,2].

Unfortunately, Journal Fire aren’t burning anything just yet, but what they are doing is just as interesting. Their web-based application allows you to manage and share your journal club online. I thought I’d give it a whirl because a friend of mine asked me what I thought about a paper on ontologies in biodiversity [3]. Rather than post a brief review here, I’ve posted it over at Journal Fire. Here are some initial thoughts from a quick test drive of their application:

Pros

On the up side Journal Fire:

  • Is a neutral-ish third party space where anyone can discuss scientific papers.
  • Understands common identifiers (DOI and PMID) to tackle the identity crisis.
  • Allows you to post simple anchor links in reviews, but not much else (see below).
  • Does not require the cumbersome syntax used by ResearchBlogging [4], ScienceSeeker and elsewhere.
  • Is integrated with citeulike, for those that use it.
  • Can potentially gather many different reviews of a given paper in one place.
  • Is web-based, so you don’t have to download and install any software, unlike the desktop alternatives Mendeley and Utopia Documents.

Cons

On the down side Journal Fire:

  • Is yet another piece of social software for scientists. Do we really need more, when we’ve had far too many already?
  • Requires you to sign up for an account rather than re-using your existing digital identity with Google, Facebook, Twitter etc.
  • Does not seem to have many people on it (yet), despite the fact it has been going since at least 2007.
  • Looks a bit stale: the last blog post was published in 2010. Although the software still works fine, it is not clear if it is being actively maintained and developed.
  • Does not allow much formatting in reviews besides simple links; something like Markdown would be good.
  • Does not understand or import arXiv identifiers, at the moment.
  • As far as I can see, Journal Fire is a small startup based in Pasadena, California. Like all startups, they might go bust. If this happens, they’ll take your journal club, and all its reviews, down with them.

I think the pros mostly outweigh the cons, so if you like the idea of a third-party hosting your journal club, Journal Fire is worth a trial run.

References

  1. Juan Carlos Lopez (2009). We want your paper! The similarity between high-end restaurants and scientific journals. Spoonful of Medicine, a blog from Nature Medicine
  2. NOTE: Vanity journals should not be confused with the Vanity Press.
  3. Andrew R. Deans, Matthew J. Yoder & James P. Balhoff (2012). Time to change how we describe biodiversity, Trends in Ecology & Evolution, 27 (2) 84. DOI: 10.1016/j.tree.2011.11.007
  4. Shema, H., Bar-Ilan, J., & Thelwall, M. (2012). Research Blogs and the Discussion of Scholarly Information PLoS ONE, 7 (5) DOI: 10.1371/journal.pone.0035869

September 1, 2010

How many unique papers are there in Mendeley?

Lex Macho Inc. by Dan DeChiaro on Flickr. How many people in this picture?

Mendeley is a handy piece of desktop and web software for managing and sharing research papers [1]. This popular tool has been getting a lot of attention lately, and with some impressive statistics it’s not difficult to see why. At the time of writing, Mendeley claims to have over 36 million papers, added by just under half a million users working at more than 10,000 research institutions around the world. That’s impressive considering the startup company behind it have only been going for a few years. The major established commercial players in the field of bibliographic databases (WoK and Scopus) currently have around 40 million documents, so if Mendeley continues to grow at this rate, they’ll be more popular than Jesus (and Elsevier and Thomson) before you can say “bibliography”. But to get a real handle on how big Mendeley is, we need to know how many of those 36 million documents are unique, because lots of duplicated documents will inflate the overall head count. (more…)

July 27, 2010

Twenty million papers in PubMed: a triumph or a tragedy?

A quick search on pubmed.gov today reveals that the freely available American database of biomedical literature has just passed the 20 million citations mark*. Should we celebrate or commiserate passing this landmark figure? Is it a triumph or a tragedy that PubMed® is the size it is? (more…)

July 15, 2010

How many journal articles have been published (ever)?

Fifty Million and Fifty Billion by ZeroOne

According to some estimates, there are fifty million articles in existence as of 2010. Picture of a fifty million dollar note by ZeroOne on Flickr.

Earlier this year, the scientific journal PLoS ONE published their 10,000th article. Ten thousand articles is a lot of papers, especially when you consider that PLoS ONE only started publishing four short years ago in 2006. But scientists have been publishing in journals for at least 350 years [1], so it might make you wonder: how many articles have been published in scientific and learned journals since time began?

If we look at PubMed Central, a full-text archive of journals freely available to all, it currently holds over 1.7 million articles. But these articles are only a tiny fraction of the total literature, since much of the rest is locked up behind publishers’ paywalls and is inaccessible to many people. (more…)

September 4, 2009

XML training in Oxford

The XML Summer School returns this year at St. Edmund Hall, Oxford, from 20th to 25th September 2009. As always, it’s packed with high-quality technical training for every level of expertise, from the Hands-on Introduction for beginners through to special classes devoted to XQuery and XSLT, Semantic Technologies, Open Source Applications, Web 2.0, Web Services and Identity. The Summer School is also a rare opportunity to experience what life is like as a student in one of the world’s oldest university cities while enjoying a range of social events that are part of the unique summer school experience.

This year, classes and sessions are taught and chaired by:

The Extensible Markup Language (XML) has been around for just over ten years, quickly and quietly finding its niche in many different areas of science and technology. It has been used in everything from modelling biochemical networks in systems biology [1], to electronic health records [2], scientific publishing, the provision of the PubMed service (which talks XML) [3] and many other areas. As a crude measure of its importance in biomedical science, PubMed currently has no fewer than 800 peer-reviewed publications on XML. It’s hard to imagine life without it. So whether you’re a complete novice looking to learn more about XML or a seasoned veteran wanting to improve your knowledge, register your place and find out more by visiting xmlsummerschool.com. I hope to see you there…

References

  1. Hucka, M. (2003). The systems biology markup language (SBML): a medium for representation and exchange of biochemical network models Bioinformatics, 19 (4), 524-531 DOI: 10.1093/bioinformatics/btg015
  2. Bunduchi R, Williams R, Graham I, & Smart A (2006). XML-based clinical data standardisation in the National Health Service Scotland. Informatics in primary care, 14 (4) PMID: 17504574
  3. Sayers, E., Barrett, T., Benson, D., Bryant, S., Canese, K., Chetvernin, V., Church, D., DiCuccio, M., Edgar, R., Federhen, S., Feolo, M., Geer, L., Helmberg, W., Kapustin, Y., Landsman, D., Lipman, D., Madden, T., Maglott, D., Miller, V., Mizrachi, I., Ostell, J., Pruitt, K., Schuler, G., Sequeira, E., Sherry, S., Shumway, M., Sirotkin, K., Souvorov, A., Starchenko, G., Tatusova, T., Wagner, L., Yaschenko, E., & Ye, J. (2009). Database resources of the National Center for Biotechnology Information Nucleic Acids Research, 37 (Database) DOI: 10.1093/nar/gkn741

June 10, 2009

Kenjiro Taura on Parallel Workflows

Kenjiro Taura is visiting Manchester next week from the Department of Information and Communication Engineering at the University of Tokyo. He will be giving a seminar, the details of which are below:

Title: Large scale text processing made simple by GXP make: A Unixish way to parallel workflow processing

Date-time: Monday, 15 June 2009 at 11:00 AM

Location: Room MLG.001, mib.ac.uk

In the first part of this talk, I will introduce a simple tool called GXP make. GXP is a general-purpose parallel shell (a process launcher) for multicore machines, unmanaged clusters accessed via SSH, clusters or supercomputers managed by a batch scheduler, distributed machines, or any mixture thereof. GXP make is a ‘make‘ execution engine that executes regular UNIX makefiles in parallel. Make, though typically used for software builds, is in fact a general framework for concisely describing workflows made up of sequential commands. Installation of GXP requires no root privileges and needs to be done only on the user’s home machine. GXP make easily scales to more than 1,000 CPU cores. The net result is that GXP make allows an easy migration of workflows from serial environments to clusters and to distributed environments.

In the second part, I will talk about our experiences of running a complex text-processing workflow developed by Natural Language Processing (NLP) experts. It is an entire workflow that processes MEDLINE abstracts with deep NLP tools (e.g., the Enju parser [1]) to generate search indices for MEDIE, a semantic retrieval engine for MEDLINE. It was originally described in a Makefile with no particular provision for parallel processing, yet GXP make was able to run it on clusters with almost no changes to the original Makefile. The time for processing the abstracts published in a single day was reduced from approximately eight hours (with a single machine) to twenty minutes, with a trivial amount of effort. A larger-scale experiment processing all abstracts published so far, and the remaining challenges, will also be presented.
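
To make the “regular UNIX makefile” idea concrete, here is a minimal sketch of the sort of workflow the abstract describes. The directory layout and the commands (deep_parse, build_index) are purely illustrative stand-ins, not the actual MEDIE pipeline, and the exact GXP invocation depends on your installation; the point is simply that each recipe is an ordinary sequential command, and the dependency graph is what a parallel make engine can exploit.

```make
# Hypothetical workflow Makefile: parse each abstract, then build one index.
# ">" is used as the recipe prefix (GNU make 3.82+) to avoid relying on tabs.
.RECIPEPREFIX = >

ABSTRACTS := $(wildcard abstracts/*.txt)
PARSED    := $(patsubst abstracts/%.txt,parsed/%.xml,$(ABSTRACTS))

all: index.db

# Independent per-abstract steps: a parallel make engine can run these concurrently.
parsed/%.xml: abstracts/%.txt
> mkdir -p parsed
> deep_parse --input $< --output $@   # stand-in for a deep NLP parser such as Enju

# The index depends on every parsed abstract, so it is built last.
index.db: $(PARSED)
> build_index --out $@ $^
```

Run it serially with plain make, or hand the same Makefile to GXP’s make engine to spread the per-abstract rules across a cluster.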

References

  1. Miyao, Y., Sagae, K., Saetre, R., Matsuzaki, T., & Tsujii, J. (2008). Evaluating contributions of natural language parsers to protein-protein interaction extraction Bioinformatics, 25 (3), 394-400 DOI: 10.1093/bioinformatics/btn631

June 4, 2009

Improving the OBO Foundry Principles

The Old Smithy Pub by loop oh

The Open Biomedical Ontologies (OBO) are a set of reference ontologies for describing all kinds of biomedical data; see [1-5] for examples. Every year, users and developers of these ontologies gather from around the globe for a workshop at the EBI near Cambridge, UK. Following on from the first workshop last year, the 2nd OBO workshop 2009 is fast approaching.

In preparation, I’ve been revisiting the OBO Foundry documentation, part of which establishes a set of principles for ontology development. I’m wondering how they could be improved, because these principles are fundamental to the whole effort. We’ve been using one of the OBO ontologies, Chemical Entities of Biological Interest (ChEBI), in the REFINE project to mine data from the PubMed database. OBO ontologies like ChEBI and the Gene Ontology are crucial to making sense of the massive datasets that are now common in biology and medicine – so this is stuff that matters.

The OBO Foundry Principles, a sort of Ten Commandments of Ontology (or Obology if you prefer), currently look something like this (copied directly from obofoundry.org/crit.shtml):

  1. The ontology must be open and available to be used by all without any constraint other than (a) its origin must be acknowledged and (b) it is not to be altered and subsequently redistributed under the original name or with the same identifiers. The OBO ontologies are for sharing and are resources for the entire community. For this reason, they must be available to all without any constraint or license on their use or redistribution. However, it is proper that their original source is always credited and that after any external alterations, they must never be redistributed under the same name or with the same identifiers.
  2. The ontology is in, or can be expressed in, a common shared syntax. This may be either the OBO syntax, extensions of this syntax, or OWL. The reason for this is that the same tools can then be usefully applied. This facilitates shared software implementations. This criterion is not met in all of the ontologies currently listed, but we are working with the ontology developers to have them available in a common OBO syntax.
  3. The ontologies possesses a unique identifier space within the OBO Foundry. The source of a term (i.e. class) from any ontology can be immediately identified by the prefix of the identifier of each term. It is, therefore, important that this prefix be unique.
  4. The ontology provider has procedures for identifying distinct successive versions.
  5. The ontology has a clearly specified and clearly delineated content. The ontology must be orthogonal to other ontologies already lodged within OBO. The major reason for this principle is to allow two different ontologies, for example anatomy and process, to be combined through additional relationships. These relationships could then be used to constrain when terms could be jointly applied to describe complementary (but distinguishable) perspectives on the same biological or medical entity. As a corollary to this, we would strive for community acceptance of a single ontology for one domain, rather than encouraging rivalry between ontologies.
  6. The ontologies include textual definitions for all terms. Many biological and medical terms may be ambiguous, so terms should be defined so that their precise meaning within the context of a particular ontology is clear to a human reader.
  7. The ontology uses relations which are unambiguously defined following the pattern of definitions laid down in the OBO Relation Ontology.
  8. The ontology is well documented.
  9. The ontology has a plurality of independent users.
  10. The ontology will be developed collaboratively with other OBO Foundry members.
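
To make principles 3 and 6 a little more concrete, here is a minimal, entirely made-up term stanza in the OBO flat-file syntax mentioned in principle 2; the prefix, identifiers and definition below are invented for illustration only.

```
[Term]
id: EXO:0000123          ! the "EXO" prefix identifies the source ontology (principle 3)
name: example entity
def: "A human-readable textual definition that pins down the intended meaning of the term." [PMID:0000000]  ! principle 6
is_a: EXO:0000001        ! parent term within the same ontology
```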

I’ve been asking all my frolleagues what they think of these principles and have got some lively responses, including some here from Allyson Lister, Mélanie Courtot, Michel Dumontier and Frank Gibson. So what do you think? How could these guidelines be improved? Do you have any specific (and preferably constructive) criticisms of these ambitious (and worthy) goals? Be bold, be brave and be polite. Anything controversial or “off the record” you can email to me… I’m all ears.

CC-licensed picture above of the Old Smithy (pub) by Loop Oh. Inspired by Michael Ashburner‘s standing OBO joke (Ontolojoke) which goes something like this: Because Barry Smith is one of the leaders of OBO, should the project be called the OBO Smithy or the OBO Foundry? 🙂

References

  1. Noy, N., Shah, N., Whetzel, P., Dai, B., Dorf, M., Griffith, N., Jonquet, C., Rubin, D., Storey, M., Chute, C., & Musen, M. (2009). BioPortal: ontologies and integrated data resources at the click of a mouse Nucleic Acids Research DOI: 10.1093/nar/gkp440
  2. Côté, R., Jones, P., Apweiler, R., & Hermjakob, H. (2006). The Ontology Lookup Service, a lightweight cross-platform tool for controlled vocabulary queries BMC Bioinformatics, 7 (1) DOI: 10.1186/1471-2105-7-97
  3. Smith, B., Ashburner, M., Rosse, C., Bard, J., Bug, W., Ceusters, W., Goldberg, L., Eilbeck, K., Ireland, A., Mungall, C., Leontis, N., Rocca-Serra, P., Ruttenberg, A., Sansone, S., Scheuermann, R., Shah, N., Whetzel, P., & Lewis, S. (2007). The OBO Foundry: coordinated evolution of ontologies to support biomedical data integration Nature Biotechnology, 25 (11), 1251-1255 DOI: 10.1038/nbt1346
  4. Smith, B., Ceusters, W., Klagges, B., Köhler, J., Kumar, A., Lomax, J., Mungall, C., Neuhaus, F., Rector, A., & Rosse, C. (2005). Relations in biomedical ontologies Genome Biology, 6 (5) DOI: 10.1186/gb-2005-6-5-r46
  5. Bada, M., & Hunter, L. (2008). Identification of OBO nonalignments and its implications for OBO enrichment Bioinformatics, 24 (12), 1448-1455 DOI: 10.1093/bioinformatics/btn194

June 2, 2009

Who Are You? Digital Identity in Science

The Who by The Who

The organisers of the Science Online London 2009 conference are asking people to propose their own session ideas (see some examples here), so here is a proposal:

Title: Who Are You? Digital Identity in Science

Many important decisions in Science are based on identifying scientists and their contributions. From selecting reviewers for grants and publications, to attributing published data and deciding who is funded, hired or promoted, digital identity is at the heart of Science on the Web.

Despite the importance of digital identity, identifying scientists online is an unsolved problem [1]. Consequently, a significant amount of scientific and scholarly work is not easily cited or credited, especially digital contributions: from blogs and wikis, to source code, databases and traditional peer-reviewed publications on the Web. This (proposed) session will look at current mechanisms for identifying scientists digitally, including contributor-id (CrossRef), researcher-id (Thomson), Scopus Author ID (Elsevier), OpenID, Google Scholar [2], Single Sign On, PubMed, FOAF+SSL, LinkedIn, Shared Identifiers (URIs) and the rest. We will introduce and discuss each via a SWOT analysis (Strengths, Weaknesses, Opportunities and Threats). Is digital identity even possible and ethical? Besides the obvious benefits of persistent, reliable and unique identifiers, what are the privacy and security issues with personal digital identity?

If this is a successful proposal, I’ll need some help. Any offers? If you are interested in joining in the fun, more details are at scienceonlinelondon.org

References

  1. Bourne, P., & Fink, J. (2008). I Am Not a Scientist, I Am a Number PLoS Computational Biology, 4 (12) DOI: 10.1371/journal.pcbi.1000247
  2. Various Publications about unique author identifiers bookmarked in citeulike
  3. Yours Truly (2009) Google thinks I’m Maurice Wilkins
  4. The Who (1978) Who Are You? Who, who, who, who? (Thanks to Jan Aerts for the reference!)

May 19, 2009

Defrosting the John Rylands University Library

Filed under: seminars — Duncan Hull @ 4:14 pm

http://www.flickr.com/photos/dpicker/3107856991/

For anyone who missed the original bioinformatics seminar, I’ll be doing a repeat of the “Defrosting the Digital Library” talk, this time for the staff in the John Rylands University Library (JRUL). This is the main academic library in Manchester with (quote) “more than 4 million printed books and manuscripts, over 41,000 electronic journals and 500,000 electronic books, as well as several hundred databases, the John Rylands University Library is one of the best-resourced academic libraries in the country.” The journal subscription budget of the library is currently around £4 million per year, and that’s before they’ve even bought any books! Here is the abstract for the talk:

After centuries with little change, scientific libraries have recently experienced massive upheaval. From being almost entirely paper-based, most libraries are now almost completely digital. This information revolution has all happened in less than 20 years and has created many novel opportunities and threats for scientists, publishers and libraries.

Today, we are struggling with an embarrassing wealth of digital knowledge on the Web. Most scientists access this knowledge through some kind of digital library; however, these can be cold, impersonal, isolated, and inaccessible places. Many libraries are still clinging to obsolete models of identity, attribution, contribution, citation and publication.

Based on a review published in PLoS Computational Biology, pubmed.gov/18974831, this talk will discuss the current chilly state of digital libraries for biologists, chemists and informaticians, including PubMed and Google Scholar. We highlight problems and solutions to the coupling and decoupling of publication data and metadata, with a tool called citeulike.org. This software tool (and many other tools just like it) exploits the Web to make digital libraries “warmer”: more personal, sociable, integrated, and accessible places.

Finally, issues that will help or hinder the continued warming of libraries in the future, particularly the accurate identification of authors and their publications, are briefly introduced. These are discussed in the context of the BBSRC-funded REFINE project at the National Centre for Text Mining (NaCTeM.ac.uk), which is linking biochemical pathway data with evidence for pathways from the PubMed database.

Date: Thursday 21st May 2009. Time: 13.00. Location: Parkinson Room (inside main entrance, first on the right), John Rylands University (Main) Library, Oxford Road, University of Manchester (number 55 on the Google map of the Manchester campus). Please come along if you are interested…

References

  1. Hull, D., Pettifer, S., & Kell, D. (2008). Defrosting the Digital Library: Bibliographic Tools for the Next Generation Web PLoS Computational Biology, 4 (10) DOI: 10.1371/journal.pcbi.1000204

[CC licensed picture above, the John Rylands Library on Deansgate by dpicker: David Picker]
