O'Really?

January 18, 2013

How to export, delete and move your Mendeley account and library #mendelete

Delete. Creative Commons licensed picture by Vitor Sá – Virgu via Flickr.com

News that Reed Elsevier is in talks to buy Mendeley.com will have many scientists reaching for their “delete account” button. Mendeley has built an impressive user-base of scientists and other academics since they started, but the possibility of an Elsevier takeover has worried some of its users. Elsevier has a strained relationship with some groups in the scientific community [1,2], so it will be interesting to see how this plays out.

If you’ve built a personal library of scientific papers in Mendeley, you won’t want simply to delete all that data: you’ll need to export your library first, then delete your account, and finally import the library into a different tool.

Disclaimer: I’m not advocating that you delete your Mendeley account (aka #mendelete), just that if you do decide to, here’s how to do it, along with some alternatives to consider. Update, April 2013: it wasn’t just a rumour.

Exporting your Mendeley library

Open up Mendeley Desktop and, from the File menu, select Export. You have a choice of three export formats:

  1. BibTeX (*.bib)
  2. RIS – Research Information Systems (*.ris)
  3. EndNote XML (*.xml)

It is probably best to create a backup in all three formats, just in case, as this will give you more options for importing into whatever you replace Mendeley with. Another possibility is to use the Mendeley API to export your data, which will give you more control over how and what you export, or to trawl through the Mendeley forums for alternatives. [Update: see also the comments below from William Gunn on exporting via your local SQLite cache.]
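On that last point, here is a rough sketch of the SQLite route. It is not from the original post and makes some assumptions: the file path below is a typical Linux location, and the Documents table and its columns are guesses based on versions I have seen, so list the tables and inspect the schema on your own machine before relying on it.

    # A rough sketch of exporting records from Mendeley Desktop's local SQLite cache.
    # The path below is a typical Linux location, and the table/column names are
    # assumptions -- list the tables and check the schema before trusting either.
    import sqlite3
    from pathlib import Path

    # Hypothetical location; adjust for your OS, Mendeley version and account email.
    db_path = (Path.home() / ".local/share/data/Mendeley Ltd./Mendeley Desktop"
               / "you@example.com@www.mendeley.com.sqlite")

    conn = sqlite3.connect(str(db_path))
    conn.row_factory = sqlite3.Row

    # See what is actually in the cache before exporting anything.
    tables = [row["name"] for row in
              conn.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
    print("Tables:", ", ".join(tables))

    # If a Documents table exists (it did in the versions I have seen), dump a few
    # fields as a crude CSV-style backup.
    if "Documents" in tables:
        for row in conn.execute("SELECT title, year, doi FROM Documents"):
            print('"{}",{},{}'.format(row["title"], row["year"], row["doi"]))

    conn.close()

Whichever route you use, it is worth checking a handful of exported records against the desktop client before you delete anything.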

Deleting your Mendeley account #mendelete

Log in to Mendeley.com, click on the My Account button (top right), select Account details from the drop-down menu, scroll down to the bottom of the page and click on the link delete your account. You’ll see a message We’re sorry you want to go, but if you must… which you can either cancel or confirm by selecting Delete my account and all my data. [Update] To completely delete your account you’ll need to send an email to privacy at mendeley dot com. (Thanks to P.Chris for pointing this out in the comments below.)

Alternatives to Mendeley

Once you have exported your data, you’ll need an alternative to import it into. Fortunately, there are quite a few to choose from [3], some of which are shown in the list below. This is not a comprehensive list, so please add suggestions in the comments if I have missed any obvious ones. Wikipedia has an extensive article comparing the different reference management software, which is quite handy (if slightly bewildering). Otherwise, you might consider trying the following software:

One last alternative: if you are fed up with trying to manage all those clunky PDF files, you could just switch to Google Scholar, which is getting better all the time. If you decide that Mendeley isn’t your cup of tea, now might be a good time to investigate some alternatives; there are plenty of good candidates to choose from. But beware: you may run from the arms of one large publisher (Elsevier) into the arms of another (Springer or Macmillan, which own Papers and ReadCube respectively).

References

  1. Whitfield, J. (2012). Elsevier boycott gathers pace. Nature. DOI: 10.1038/nature.2012.10010
  2. Van Noorden, R. (2013). Mathematicians aim to take publishers out of publishing. Nature. DOI: 10.1038/nature.2013.12243
  3. Hull, D., Pettifer, S., & Kell, D. (2008). Defrosting the Digital Library: Bibliographic Tools for the Next Generation Web. PLoS Computational Biology, 4 (10). DOI: 10.1371/journal.pcbi.1000204
  4. Attwood, T., Kell, D., McDermott, P., Marsh, J., Pettifer, S., & Thorne, D. (2010). Utopia documents: linking scholarly literature with research data. Bioinformatics, 26 (18). DOI: 10.1093/bioinformatics/btq383

September 4, 2009

XML training in Oxford

The XML Summer School returns this year at St. Edmund Hall, Oxford from 20th-25th September 2009. As always, it’s packed with high quality technical training for every level of expertise, from the Hands-on Introduction for beginners through to special classes devoted to XQuery and XSLT, Semantic Technologies, Open Source Applications, Web 2.0, Web Services and Identity. The Summer School is also a rare opportunity to experience what life is like as a student in one of the world’s oldest university cities while enjoying a range of social events that are a part of the unique summer school experience.

This year, classes and sessions are taught and chaired by:

The Extensible Markup Language (XML) has been around for just over ten years, quickly and quietly finding its niche in many different areas of science and technology. It has been used in everything from modelling biochemical networks in systems biology [1], to electronic health records [2], scientific publishing, the provision of the PubMed service (which talks XML) [3] and many other areas. As a crude measure of its importance in biomedical science, PubMed currently has no fewer than 800 peer-reviewed publications on XML. It’s hard to imagine life without it. So whether you’re a complete novice looking to learn more about XML or a seasoned veteran wanting to improve your knowledge, register your place and find out more by visiting xmlsummerschool.com. I hope to see you there…

References

  1. Hucka, M. (2003). The systems biology markup language (SBML): a medium for representation and exchange of biochemical network models Bioinformatics, 19 (4), 524-531 DOI: 10.1093/bioinformatics/btg015
  2. Bunduchi, R., Williams, R., Graham, I., & Smart, A. (2006). XML-based clinical data standardisation in the National Health Service Scotland. Informatics in Primary Care, 14 (4). PMID: 17504574
  3. Sayers, E., Barrett, T., Benson, D., Bryant, S., Canese, K., Chetvernin, V., Church, D., DiCuccio, M., Edgar, R., Federhen, S., Feolo, M., Geer, L., Helmberg, W., Kapustin, Y., Landsman, D., Lipman, D., Madden, T., Maglott, D., Miller, V., Mizrachi, I., Ostell, J., Pruitt, K., Schuler, G., Sequeira, E., Sherry, S., Shumway, M., Sirotkin, K., Souvorov, A., Starchenko, G., Tatusova, T., Wagner, L., Yaschenko, E., & Ye, J. (2009). Database resources of the National Center for Biotechnology Information Nucleic Acids Research, 37 (Database) DOI: 10.1093/nar/gkn741

May 13, 2009

XML Summer School, Oxford

After a brief absence, it is good to see the XML Summer School is back again this September (20th-25th) at St. Edmund Hall, Oxford. This is “a unique event for everyone using, designing or implementing solutions using XML and related technologies.” I’ve been both a delegate and a speaker here over the years; back in 2005, Nick Drummond and I presented the Protégé and OWL tutorial, which was good fun. So here is what, I.M.H.O., makes the XML Summer School worth a look:

February 26, 2008

So, no-one told you life was going to be this way

Filed under: semweb — Duncan Hull @ 1:29 pm

So, no-one told you life was going to be this way
Your job is a joke, you are broke, your love life is DOA.
It is like you are always stuck in second gear
Well, it has not been your day, your week, your month, or even your year…

OWL be there for you, when the rain starts to pour. Software engineer Leigh Dodds explains how:

June 2, 2006

Debugging Web Services

Filed under: biotech,informatics — Duncan Hull @ 11:19 pm

There are a growing number of biomedical services out there on the Wild Wild Web for performing various computations on DNA, RNA and proteins, as well as the associated scientific literature. Currently, using and debugging these services can be hard work. SOAPUI (SOAP User Interface) is a newish and handy free tool to help debug services and get your in silico experiments and analyses done, hopefully more easily.

So why should bioinformaticians care about Web Services? Three of the most important advantages are:

  1. They can reduce the need to install and maintain hundreds of tools and databases locally on desktop(s) or laboratory server(s) as these resources are programmatically accessible over the web.
  2. They can remove the need for tedious and error-prone screen-scraping, or worse, “cut-and-paste” of data between web applications that don’t have fully programmatic interfaces.
  3. It is possible to compose and orchestrate services into workflows or pipelines, which are repeatable and verifiable descriptions of your experiments that you can share. Needless to say, sharing repeatable experiments has always been an important part of science, and it shouldn’t be any different on the Web of Science.

All this distributed computing goodness comes at a price, though, and there are several disadvantages to using web services. We will focus on just one here: debugging services, which can be problematic. In order to do this, bioinformaticians need to understand a little bit about how web services work and how to debug them.

Death by specification

Debugging services sounds straightforward, but many publicly available biomedical services are not the simpler RESTian type, but the more complex SOAP-and-WSDL type of web service. Consequently, debugging usually requires a basic understanding of these protocols and interfaces, the so-called “Simple” Object Access Protocol (SOAP) and the Web Services Description Language (WSDL). However, these specifications are both big and complicated, and are being superseded by newer versions, so you might lose the will-to-live while reading them. Also, individual services described in WSDL are easier for machines to read than for humans, and therefore give humble bioinformaticians a big headache. As an example, have a look at the WSDL for BLAST at the DNA Databank of Japan (DDBJ).

So, if you’re not intimately familiar with the WSDL 1.1 specification (frankly, life is too short and they keep moving the goal-posts anyway), it is not very clear what is going on here. WSDL describes messages, port types, end points, part-names, bindings, bla-bla-bla, and lots of other seemingly unnecessary abstractions. To add insult to injury, WSDL is used in several different styles and is expressed in verbose XML. Down with the unnecessary abstractions! But the problems don’t stop there. From looking at this WSDL, you have to make several leaps of imagination to understand what the corresponding SOAP messages that this BLAST service accepts and responds with will look like. So when you are analysing your favourite protein sequence(s) with BLAST, or perhaps InterProScan, it can be difficult or impossible to work out what went wrong.
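At least the inventory of operations can be pulled out mechanically. The short Python sketch below is not from the original post; it assumes the DDBJ WSDL is still online at the URL used later in this article, fetches it, and lists the operations declared in its portTypes, which is usually the only part a human actually wants to know.

    # A minimal sketch: fetch a WSDL and list the operations declared in its portTypes.
    # Assumes the DDBJ Blast WSDL is still reachable; swap in any other WSDL URL.
    import requests
    import xml.etree.ElementTree as ET

    WSDL_URL = "http://xml.nig.ac.jp/wsdl/Blast.wsdl"

    reply = requests.get(WSDL_URL, timeout=30)
    reply.raise_for_status()

    root = ET.fromstring(reply.content)
    ns = {"wsdl": "http://schemas.xmlsoap.org/wsdl/"}

    # portTypes hold the abstract operations; the bindings and services underneath
    # add the transport detail that makes WSDL such heavy reading.
    for port_type in root.findall("wsdl:portType", ns):
        print(port_type.get("name"))
        for operation in port_type.findall("wsdl:operation", ns):
            print("  -", operation.get("name"))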

Using SOAPUI

This is where SOAPUI can make life easier. You can launch SOAPUI using Java Web Start, load a WSDL in, and begin to see what is going on. One of the nice features is that it will show you what the SOAP messages look like, which saves you having to work it out in your head. So, going back to our BLAST example…

  1. Launch the SOAPUI tool and select File then New WSDL Project (Give project a name and save it when prompted).
  2. Right click on the Project folder and select add WSDL from URL
  3. Type in http://xml.nig.ac.jp/wsdl/Blast.wsdl or your own favourite from this list of molecular biology WSDLs.
  4. When asked: Create default requests for all operations select Yes
  5. The progress bar will whizz away while it imports the file; once it’s done, you can see a list of operations
  6. If you click on one of them, e.g. searchParam, then Request1, and select Open Request Editor, it spawns two new windows…
  7. The first (left-hand) window shows the SOAP request that is sent to the BLAST service:
    <soapenv:Envelope
    	... boring namespace declarations ... >
    	 <soapenv:Body>
    
    		<blas:searchParam soapenv:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
    			<!-- use BLASTp -->
    			<program xsi:type="xsd:string">blastp</program>
    
    			<!-- Use SWISSPROT data  -->
    			<database xsi:type="xsd:string">SWISS</database>
    
    			<!-- protein sequence -->
    			<query xsi:type="xsd:string">MHLEGRDGRR YPGAPAVELL QTSVPSGLAE LVAGKRRLPR GAGGADPSHS</query>
    
    			<!-- no parameters -->
    			<param xsi:type="xsd:string"></param>
    		</blas:searchParam>
    
    	</soapenv:Body>
    </soapenv:Envelope>
  8. When you click on the green request button, this message is sent to the service. Note: you have to fill in the parameter values as they default to: “?”.
  9. After submitting the request above, the SOAP response appears in the second (right-hand) window:
    <soap:Envelope
    ... namespace declarations... >
       <soap:Body>
    
          <n:searchParamResponse xmlns:n="http://tempuri.org/Blast">
             <Result xsi:type="xsd:string">BLASTP 2.2.12 [Aug-07-2005] ...
    		 Sequences producing significant alignments:                      (bits) Value
    		 sp|Q04671|P_HUMAN P protein (Melanocyte-specific transporter pro...   104   8e-23 ...
    		 </Result>
          </n:searchParamResponse>
       </soap:Body>
    </soap:Envelope>

Not all users of web services will want the gory details of SOAP, but for serious users, it’s a handy tool for understanding how any given web service works. This can be invaluable in working out what happened if, or more probably when, an individual service behaves unexpectedly. If you know of any other tools that make web services easier to use and debug, I’d be interested to hear about them.
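And once SOAPUI has shown you the shape of the request, you don’t strictly need a GUI at all: a SOAP call is just XML sent over HTTP POST. Here is a minimal Python sketch, not from the original post, that replays the BLAST request shown above. The endpoint, SOAPAction header and namespace URIs are assumptions pieced together from the WSDL and the sample response, and the DDBJ service may well have changed or disappeared since this was written.

    # Replay the BLAST request shown above without SOAPUI: build the envelope by
    # hand and POST it. The endpoint, SOAPAction and namespaces are assumptions
    # based on the WSDL and the sample response; the service may no longer exist.
    import requests

    ENDPOINT = "http://xml.nig.ac.jp/xddbj/Blast"  # assumed service endpoint

    ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
    <soapenv:Envelope
        xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
        xmlns:xsd="http://www.w3.org/2001/XMLSchema"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns:blas="http://tempuri.org/Blast">
      <soapenv:Body>
        <blas:searchParam soapenv:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
          <program xsi:type="xsd:string">blastp</program>
          <database xsi:type="xsd:string">SWISS</database>
          <query xsi:type="xsd:string">MHLEGRDGRR YPGAPAVELL QTSVPSGLAE LVAGKRRLPR GAGGADPSHS</query>
          <param xsi:type="xsd:string"></param>
        </blas:searchParam>
      </soapenv:Body>
    </soapenv:Envelope>"""

    response = requests.post(
        ENDPOINT,
        data=ENVELOPE.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": '""'},
        timeout=120,
    )
    response.raise_for_status()
    print(response.text)  # the BLAST report comes back inside the <Result> element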

Conclusions: It’s not rocket science

In my experience, small tools (like SOAPUI) can make a BIG difference. I’ve used a deliberately simple (and relatively reliable) BLAST service for demonstration purposes, but the interested reader / hacker might want to use this tool to play with more complex programs like the NCBI Web Services or InterProScan at the EBI. Using such services often requires good testing and debugging support, for example, when you compose (or “mashup”) services into complex workflows using a client such as the Taverna workbench. This is where SOAPUI might just help you test and debug web services provided by other laboratories and data centres around the world, so you can use them reliably in your in silico experiments.

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 License.

May 24, 2006

Dub Dub Dub 2006

The 15th International World Wide Web conference is currently underway in Edinburgh, Bonny Scotland. As usual, this popular conference has some good papers; only 11%* of submissions were accepted. One particular paper caught my eye: One Document to Bind Them: Combining XML, Web Services, and the Semantic Web. This paper has probably been selected because it will wind people up (sorry, I mean “spark a debate”), so it’s an entertaining and sometimes enlightening read.

In this paper, Harry Halpin and Henry Thompson make some observations about the state of the web in 2006:

But, according to the authors, it doesn’t have to be this way…

  1. Many (but not all) web services are functions that are available on the web,
  2. The semantic web gives us an elaborate type system, using ontologies, which can extend what we already have with XML Schema
  3. The combination of the first two, gives us Semantic Web Services which are typed functions. This allows us to invoke web services not just by their URI (e.g. http://xml.nig.ac.jp/xddbj/Blast for a Blast service), but by the type of information they have. E.g. you have an output of type BLAST_report or perhaps InterProScan_report, what services will take this as input? What operations can be performed on this data? This sounds a lot like BioMOBY, with bells on.

What Harry and Henry propose is tying all this together using a single XML vocabulary, called Semantic fXML, to provide “a unified abstraction of data, types and functions” so that the web can compute. This is all a bit pie-in-the-sky, vision-of-the-future stuff, but what might it mean for your average bioinformatician? It would be seriously useful if we could make the current molecular biology web services easier to use, but agreeing on and using an ontology for annotating the types of the inputs and outputs of all the services is a non-trivial task. Bioinformaticians already have a (somewhat limited) universal type system for describing all data in bioinformatics: it’s called string. Persuading them to use something more powerful is not easy unless the benefits are immediately obvious.

At the moment, it is difficult to tell if sfXML will ever have any impact on bioinformatics, but who cares? Either way, the paper is an enjoyable reminder of what is interesting about services on the Web. They transform the web from a place where we can merely search and browse for data (sequences, genes, proteins, metabolic pathways, systems etc.), into “one vast de-centralised computer”, a bit like the one described in can computers explain biology? This, in my humble opinion, is what makes the web and bioinformatics an exciting place to work in 2006.

* Footnote: Of nearly 700 papers submitted: only 81 research papers were accepted (11%). This is a 25% increase on the number of submissions last year to www2005 in Chiba, Japan.

References

  1. Harry Halpin and Henry S. Thompson (2006) One Document to Bind Them: Combining XML, Web Services, and the Semantic Web in Proceedings of the 15th international conference on World Wide Web, Edinburgh Scotland DOI:10.1145/1135777.1135877
  2. This post originally published on nodalpoint with comments
