O'Really?

July 3, 2023

Some pros and cons of using bookdown and quarto to write books

There’s a community of people here who use the R language to get stuff done, known as the R Usergroup Manchester (RUM). We meet monthly to learn from each other. At the last meetup on 29th June, I gave a joint talk with Stavrina Dimosthenous about quarto.org and its predecessor bookdown.org. Following Stavrina’s quick introduction to Quarto, I gave a lightning talk about some of the pros and cons of using bookdown to write books.

Since the talk was recorded, I’ve posted the video below: a lo-fi Microsoft Teams recording that doesn’t include any of the Q&A that followed.

TL;DR: Bookdown and Quarto are useful and very well documented tools for publishing books that can help you overcome some of the (many) limitations of Learning Management Systems like Blackboard. If you’re writing anything book-shaped in your teaching (or elsewhere), I reckon bookdown/quarto are good tools worth learning, as they’ll help you get stuff done.
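To give a flavour of how little setup a Quarto book needs, here’s a minimal sketch of a `_quarto.yml` project configuration; the title, author and chapter filenames are hypothetical placeholders:

```yaml
project:
  type: book            # tells Quarto this project renders as a book

book:
  title: "My Teaching Book"   # hypothetical title
  author: "Your Name"
  chapters:                   # each chapter is a plain Quarto markdown file
    - index.qmd
    - intro.qmd
    - exercises.qmd

format:
  html: default         # render the book as a website...
  pdf: default          # ...and as a PDF, from the same source files
```

With a file like this at the project root, a single `quarto render` builds every listed output format, which is part of what makes these tools handy for book-shaped teaching material.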

Thanks to Kamilla Kopec-Harding for organising and hosting the talks; you can see a promotional poster for them below. 🙏


June 18, 2013

Peter Suber’s Open Access book is now freely available under an open-access license

Open Access by Peter Suber is now open access

If you never got around to buying Peter Suber’s book about Open Access (OA) publishing [1] “for busy people”, you might be pleased to learn that it’s now freely available under an open-access license.

One year after being published in dead-tree format, you can now get the whole digital book for free. There’s not much point in writing yet another review of it [1]; see Peter’s extensive collection of reviews at cyber.law.harvard.edu. The book succinctly covers:

  1. What Is Open Access? (and what it is not)
  2. Motivation: OA as solving problems and seizing opportunities
  3. Varieties: Green and Gold, gratis versus libre
  4. Policies: Funding mandates (NIH, Wellcome Trust, etc.)
  5. Scope: Pre-prints and post-prints
  6. Copyright: … or Copyfight?
  7. Economics: Who pays the bills? Publication fees, toll-access paywalls and “author pays”
  8. Casualties: “OA doesn’t threaten publishing; it only threatens existing publishers who do not adapt”
  9. Future: Where next?
  10. Self-Help: DIY publishing

Open Access for MACHINES!

A lot of the (often heated) debate about Open Access misses an important point: open access is for machines as well as humans, or as Suber puts it in Chapter 5 on Scope:

We also want access for machines. I don’t mean the futuristic altruism in which kindly humans want to help curious machines answer their own questions. I mean something more selfish. We’re well into the era in which serious research is mediated by sophisticated software. If our machines don’t have access, then we don’t have access. Moreover, if we can’t get access for our machines, then we lose a momentous opportunity to enhance access with processing.

Think about the size of the body of literature to which you have access, online and off. Now think realistically about the subset to which you’d have practical access if you couldn’t use search engines, or if search engines couldn’t index the literature you needed.

Information overload didn’t start with the internet. The internet does vastly increase the volume of work to which we have access, but at the same time it vastly increases our ability to find what we need. We zero in on the pieces that deserve our limited time with the aid of powerful software, or more precisely, powerful software with access. Software helps us learn what exists, what’s new, what’s relevant, what others find relevant, and what others are saying about it. Without these tools, we couldn’t cope with information overload. Or we’d have to redefine “coping” as artificially reducing the range of work we are allowed to consider, investigate, read, or retrieve.

It’s refreshing to see someone making these points, which are often ignored, forgotten or missed out of the public debate about Open Access. The book is available in various digital flavours.

References

  1. Suber, Peter (2012). Open Access. MIT Press Essential Knowledge series. The MIT Press. ISBN: 0262517639
  2. Clair, Kevin (2013). Kevin Michael Clair reviews Open Access by Peter Suber. The Journal of Academic Librarianship, 39 (1). DOI: 10.1016/j.acalib.2012.11.017

July 15, 2010

How many journal articles have been published (ever)?

Fifty Million and Fifty Billion by ZeroOne

According to some estimates, there are fifty million articles in existence as of 2010. Picture of a fifty million dollar note by ZeroOne on Flickr.

Earlier this year, the scientific journal PLoS ONE published its 10,000th article. Ten thousand articles is a lot of papers, especially when you consider that PLoS ONE only started publishing four short years ago in 2006. But scientists have been publishing in journals for at least 350 years [1], so it might make you wonder: how many articles have been published in scientific and learned journals since time began?

If we look at PubMed Central, a full-text archive of journal articles freely available to all, it currently holds over 1.7 million articles. But these articles are only a tiny fraction of the total literature, since much of the rest is locked up behind publishers’ paywalls and is inaccessible to many people. (more…)

December 11, 2009

The Semantic Biochemical Journal experiment

There is an interesting review [1] (and special issue) in the Biochemical Journal today, published by Portland Press Ltd. It provides “a whirlwind tour of recent projects to transform scholarly publishing paradigms, culminating in Utopia and the Semantic Biochemical Journal experiment”. Here is a quick outline of the publishing projects the review describes and discusses:

  • Blogs for biomedical science
  • Biomedical Ontologies – OBO etc
  • Project Prospect and the Royal Society of Chemistry
  • The Chemspider Journal of Chemistry
  • The FEBS Letters experiment
  • PubMedCentral and BioLit [2]
  • Public Library of Science (PLoS) Neglected Tropical Diseases (NTD) [3]
  • The Elsevier Grand Challenge [4]
  • Liquid Publications
  • The PDF debate: Is PDF a hamburger? Or can we build more useful applications on top of it?
  • The Semantic Biochemical Journal project with Utopia Documents [5]

The review asks what advances these projects have made and what obstacles to progress still exist. It’s an entertaining tour, dotted with enlightening observations on what is broken in scientific publishing and some of the solutions involving various kinds of semantics.

One conclusion is that many of the experiments described above are expensive and difficult, but that the cost of not improving scientific publishing with various kinds of semantic markup is high, or as the authors put it:

“If the cost of semantic publishing seems high, then we also need to ask, what is the price of not doing it? From the results of the experiments we have seen to date, there is clearly a need to move forward and still a great deal of scope to innovate. If we fail to move forward in a collaborative way, if we fail to engage the key players, the price will be high. We will continue to bury scientific knowledge, as we routinely do now, in static, unconnected journal articles; to sequester fragments of that knowledge in disparate databases that are largely inaccessible from journal pages; to further waste countless hours of scientists’ time either repeating experiments they didn’t know had been performed before, or worse, trying to verify facts they didn’t know had been shown to be false. In short, we will continue to fail to get the most from our literature, we will continue to fail to know what we know, and will continue to do science a considerable disservice.”

It’s well worth reading the review, and downloading the Utopia software to experience all of the interactive features demonstrated in this special issue, especially the animated molecular viewers and sequence alignments.

Enjoy… the Utopia team would be interested to know what people think; see the commentary on FriendFeed, the Digital Curation blog, and the YouTube video below for more information.

References

  1. Attwood, T., Kell, D., McDermott, P., Marsh, J., Pettifer, S., & Thorne, D. (2009). Calling International Rescue: knowledge lost in literature and data landslide! Biochemical Journal, 424 (3), 317-333 DOI: 10.1042/BJ20091474
  2. Fink, J., Kushch, S., Williams, P., & Bourne, P. (2008). BioLit: integrating biological literature with databases Nucleic Acids Research, 36 (Web Server) DOI: 10.1093/nar/gkn317
  3. Shotton, D., Portwin, K., Klyne, G., & Miles, A. (2009). Adventures in Semantic Publishing: Exemplar Semantic Enhancements of a Research Article PLoS Computational Biology, 5 (4) DOI: 10.1371/journal.pcbi.1000361
  4. Pafilis, E., O’Donoghue, S., Jensen, L., Horn, H., Kuhn, M., Brown, N., & Schneider, R. (2009). Reflect: augmented browsing for the life scientist Nature Biotechnology, 27 (6), 508-510 DOI: 10.1038/nbt0609-508
  5. Pettifer, S., Thorne, D., McDermott, P., Marsh, J., Villéger, A., Kell, D., & Attwood, T. (2009). Visualising biological data: a semantic approach to tool and database integration BMC Bioinformatics, 10 (Suppl 6) DOI: 10.1186/1471-2105-10-S6-S19
