Posts Tagged ‘COMET’

It’s the end of Jerome as we know it (but I feel fine)

Posted on November 28th, 2011 by Paul Stainthorp

The University of Lincoln’s Jerome project finished in August with the successful release of more than 240,000 openly-licensed bibliographic records, available over developer APIs, and a joint hack day with Cambridge University Library’s COMET project.

Now, encouraged by positive JISC feedback, both institutions—Cambridge and Lincoln jointly—have applied for follow-up project funding under the project title CLOCK. If our bid is successful, the new project will run from December 2011 to July 2012, employing a web developer based at the University of Lincoln, and distilling the work of both institutions into the development of innovative new library metadata discovery services for the scholarly community.

You can read the project proposal for CLOCK at http://lncn.eu/ijt4 – the introductory section is below.

The University of Lincoln and Cambridge University Library both delivered successful projects (Jerome and COMET) for the JISC Infrastructure for Resource Discovery Programme in 2011. This is a proposal for the continuation of and elaboration upon the work of both projects, via a programme of development work shared between the two institutions.

Throughout both projects (COMET and Jerome), parallel approaches in technology and data structure were noted and commented upon. A ‘mash day’ workshop event held in Cambridge in August aimed to explore these parallels, as well as differences and areas of potential synergy. Here project members identified several points of interest to take forward.

Both projects produced outputs of interest to researchers, students, librarians, developers, and designers of bibliographic discovery environments. The CLOCK project will harness the success of these two complementary initiatives and investigate new approaches to data creation and discovery in the library domain. In particular, it will investigate, propose, and develop new, web-based bibliographic tools/APIs which will make it easier for developers, academic libraries and library end-users (especially researchers) to find Open Bibliographic Data and incorporate that data into systems and workflows.

This project is an opportunity to [1] exploit through real-world applications the significant amount of data released openly by Cambridge University Library; [2] apply the Jerome database architecture, iterative development methodology, and API framework to a bibliographic dataset an order of magnitude greater than the University of Lincoln’s; and [3] build a new set of tools and demonstrator services which will support the future development of public Open Bib Data web applications of practical utility to libraries and end-users.

The project will be supported by library consultant Owen Stephens, who will help to put the work into a national context, relating CLOCK to the wider movement toward Open Bib Data and the work of the JISC Discovery initiative. It will take place in an environment (Lincoln/Cambridge) where a culture of developer inquiry and experimentation is encouraged and nurtured. It is also endorsed by senior library management at both universities.

Both universities are involved in complementary development work which will both inform and be informed by CLOCK: at Cambridge, Ed Chamberlain is guiding the development of the JISC Open Bibliography 2 project; in Lincoln, Paul Stainthorp is lead researcher on the #jiscmrd Orbital project, which is investigating the management of research data, with some areas of overlap.

CLOCK will operate as part of the wider JISC Digital Infrastructure: Information and library infrastructure: Resource discovery, and support the recent concerted effort to move toward openly licensed library discovery in UK Higher Education and beyond.

Jerome/COMET hack day: Fun in the Fens

Posted on August 10th, 2011 by Paul Stainthorp

Here’s a photo of the CARET (Centre for Applied Research in Educational Technologies) offices at the University of Cambridge, where we held our long-awaited joint Jerome/COMET hack day, on Monday 8 August. Actually, in the end, it turned out to be a kind of Jerome/COMET/SALDA/synthesis/OUseful mashup-AH!


In attendance (for the record):

Train mayhem aside (in the end the Lincoln contingent didn’t arrive until nearly midday), it was a really useful day and well worth doing. Particular thanks to Ed Chamberlain and his colleagues for hosting the event and for arranging the food and refreshments. Thanks also to everyone who travelled from afar for no other reason than they love a good mashup.

Typically, the ever-prolific Tony Hirst has already managed to write up not one, but two blog posts about ideas that came out of the day:

  • Getting Library Catalogue Searches Out There…
  • Open Data Processes: the Open Metadata Laundry (N.B. this one relates specifically to Jerome – in particular, our notion of ‘scrubbing’ dodgy MARC records (sketched below) by taking only the identifiers plus the bare citation-only fields, and using that minimal set to grab additional free and Open data from the web, automatically creating new full versions of records that are inherently Open. ‘Metadata laundry’, me like.)
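
To make the laundry idea concrete, here’s a minimal sketch using pymarc. The choice of which MARC tags count as the ‘bare citation-only’ fields (ISBN/ISSN, author, title, imprint) is my illustration, not Jerome’s actual field list:

```python
from pymarc import MARCReader

# Illustrative guess at the 'bare citation-only' set; not Jerome's real list.
CITATION_TAGS = ("020", "022", "100", "245", "260")  # ISBN, ISSN, author, title, imprint

def launder(marc_path):
    """Reduce each MARC record to identifiers + citation fields.
    The minimal set can then be re-enriched from free and Open
    sources on the web to build a new, inherently Open record."""
    with open(marc_path, "rb") as fh:
        for record in MARCReader(fh):
            if record is None:  # skip records pymarc couldn't parse
                continue
            minimal = {}
            for field in record.get_fields(*CITATION_TAGS):
                minimal.setdefault(field.tag, []).append(field.value())
            yield minimal
```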

Here are three more ideas/conversations we had in Cambridge that I thought were going somewhere interesting. Yeah, we might get around to actually doing these, sometime…

1. Using COMET data to enhance Jerome

The idea: similar to the ‘metadata laundry’, above, and to the way Jerome already uses data from the Open Library, JournalTOCs, LibraryThing, etc., to enhance its book records with additional metadata. Jerome constructs a URL in the form http://data.lib.cam.ac.uk/isbn/_______, with the ISBN from the Jerome record dropped in at the end. COMET responds with a link to an open record in RDF and/or JSON, which Jerome gladly sucks in, adding any additional fields to its original source record. Enrichment ensues.
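
In code, the round trip might look something like this minimal sketch. The URL pattern is the one above; the JSON shape of COMET’s response, and the flat-dict representation of a Jerome record, are my assumptions:

```python
import requests

COMET_ISBN_URL = "http://data.lib.cam.ac.uk/isbn/{isbn}"  # pattern from the post

def enrich_from_comet(jerome_record):
    """Merge extra COMET fields into a Jerome record (assumed: a flat dict)."""
    isbn = jerome_record.get("isbn")
    if not isbn:
        return jerome_record
    resp = requests.get(COMET_ISBN_URL.format(isbn=isbn),
                        headers={"Accept": "application/json"}, timeout=10)
    if not resp.ok:
        return jerome_record  # no open record for this ISBN
    comet_record = resp.json()  # assumed: a JSON record (or link to RDF/JSON)
    for key, value in comet_record.items():
        jerome_record.setdefault(key, value)  # add only what's missing; source wins
    return jerome_record
```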

2. Using Jerome search to ‘skin’ COMET

I called this one “Jerome Scholar” ;-) …we make use of the search aspects of Jerome (in particular the speed of Sphinx, the ‘mixing desk’ idea, and the neat record presentation) to provide a really smooth way of interacting with the much better-structured (hence “Scholar”) data that resides in COMET.
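
Roughly, the plumbing could be as thin as this sketch: the Jerome search endpoint named here is hypothetical, and the COMET lookup reuses the ISBN pattern from idea 1:

```python
import requests

JEROME_SEARCH = "http://jerome.library.lincoln.ac.uk/api/search"  # hypothetical endpoint
COMET_RECORD = "http://data.lib.cam.ac.uk/isbn/{isbn}"            # pattern from idea 1

def jerome_scholar(query):
    """Search with Jerome (fast, Sphinx-backed); display the richer COMET record."""
    hits = requests.get(JEROME_SEARCH, params={"q": query}, timeout=10).json()
    for hit in hits.get("results", []):
        isbn = hit.get("isbn")
        if not isbn:
            yield hit
            continue
        resp = requests.get(COMET_RECORD.format(isbn=isbn),
                            headers={"Accept": "application/json"}, timeout=10)
        yield resp.json() if resp.ok else hit  # fall back to the Jerome record
```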

3. Using the differences between the two datasets to tell us something interesting

I have a notion that there’s something inherently useful about being able to compare two versions of a record for the ‘same’ object. If we could use Jerome+COMET to generate a web application/data feed – one that other discovery services could themselves consume – we’d have ways of ‘sparking off’ whole new avenues of discovery: from misspelled names, variant titles, different subject terms assigned by different cataloguing practices, etc. Like xISBN, but for non-standardised data(?). All right, that’s the fuzziest of the three ideas. And as the eminently sensible Owen Stephens kept asking me, “…what’s the use case?”.
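
A first stab could be as dumb as a field-by-field comparison of the two records; the field names here are placeholders, not either project’s actual schema:

```python
def record_differences(rec_a, rec_b, fields=("title", "author", "subjects")):
    """Compare two records for the 'same' object; report fields where they disagree.
    The disagreements are the interesting bit: variant titles, misspelled names,
    subject terms assigned under different cataloguing practices, etc."""
    diffs = {}
    for field in fields:
        a, b = rec_a.get(field), rec_b.get(field)
        if a and b and a != b:
            diffs[field] = {"jerome": a, "comet": b}
    return diffs
```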

And then we went to the pub.


What did it cost and who benefits?

Posted on July 27th, 2011 by Paul Stainthorp

This is going to be one of the hardest project blog posts to write…

The costs of getting Jerome to this stage are relatively easy to work out. Under the Infrastructure for Resource Discovery programme, JISC awarded us the sum of £36,585 which (institutional overheads aside) we used to pay for the following:

  • Developer staff time: 825 hours over six months.
  • Library and project staff time: 250 hours over six months.
  • The cost of travel to a number of programme events and relevant conferences at which we presented Jerome, including this one, this one, this one, this one and this one.

As all the other aspects of Jerome—hardware, software etc.—either already existed or were free to use, that figure represents the total cost of getting Jerome to its current state.

The benefits (see also section 2.4 of the original bid) of Jerome are less easily quantified financially, but we ought to consider these operational benefits:

1. The potential for using Jerome as a ‘production’ resource discovery system by the University of Lincoln. As such it could replace our current OPAC web catalogue as the Library’s primary public tool of discovery. The Library ought also to consider Jerome as a viable alternative to the purchase of a commercial, hosted next-generation resource discovery service (which it is currently reviewing), with the potential for replacing the investment it would make in such a system with investment in developer time to maintain and extend Jerome. In addition, the Common Web Design (on which the Jerome search portal is based) is inherently mobile-friendly.

2. Related: even if the Jerome search portal is not adopted in toto, there’s real potential for using Jerome’s APIs and code (open sourced) to enhance our existing user interfaces (catalogues, student portals, etc.) by ‘hacking in’ additional useful data and services via Jerome (similar to the Talis Juice service). This could lead to cost savings: a modern OPAC would not have to be developed in isolation, nor tools bought in. And these enhancements are as available to other institutions and libraries as they are to Lincoln.

3. The use of Jerome as an operational tool for checking and sanitising bibliographic data. Jerome can already be used to generate lists of ‘bad’ data (e.g. invalid ISBNs in MARC records); this intelligence could be fed back into the Library to make the work of cataloguers, e-resources admin staff, etc., easier and faster (efficiency savings) and again to improve the user experience. (A quick sketch of the ISBN check appears after this list.)

4. Benefits of Open Data: in releasing our bibliographic collections openly, Jerome is adding to the UK’s academic resource discovery ‘ecosystem’, with benefits to scholarly activity both in Lincoln and elsewhere. We are already working with the COMET team at Cambridge University Library on a cross-Fens spin-off miniproject(!) to share data, code, and best practices around handling Open Data. Related to this are the ‘fuzzier’ benefits of associating the University of Lincoln’s name with innovation in technology for education (which is a stated aim in the University’s draft institutional strategy).

5. Finally, there is the potential for the university to use Jerome as a platform for future development: Jerome already sits in a ‘suite’ of interconnecting innovative institutional web services (excuse the unintentional alliteration!) which include the Common Web Design presentation framework, Total ReCal space/time data, lncn.eu URL shortener and link proxy, a university-wide open data platform, and the Nucleus data storage layer. Just as each of these (notionally separate) services has facilitated the development of all the others, so it’s likely that Jerome will itself act as a catalyst for further innovation.
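
To make point 3 above concrete: flagging an invalid ISBN is just checksum arithmetic, as in this minimal sketch (the standard ISBN-10 and ISBN-13 check-digit rules, not Jerome’s actual validation code):

```python
def isbn10_valid(isbn):
    """ISBN-10 checksum: sum of digit*(10-i) divisible by 11; final 'X' means 10."""
    isbn = isbn.replace("-", "").upper()
    if len(isbn) != 10:
        return False
    total = 0
    for i, ch in enumerate(isbn):
        if ch == "X" and i == 9:
            digit = 10
        elif ch.isdigit():
            digit = int(ch)
        else:
            return False
        total += digit * (10 - i)
    return total % 11 == 0

def isbn13_valid(isbn):
    """ISBN-13 checksum: alternating 1/3 weights, total divisible by 10."""
    isbn = isbn.replace("-", "")
    if len(isbn) != 13 or not isbn.isdigit():
        return False
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(isbn))
    return total % 10 == 0
```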

An elastic bucket down the data well (#rdtf in Manchester)

Posted on April 20th, 2011 by Paul Stainthorp

I was in Manchester on Monday for Opening Data – Opening Doors, a one-day “advocacy workshop” hosted by JISC and RLUK under their Resource Discovery Taskforce (#rdtf) programme. I delivered a five-minute ‘personal pitch’ about Jerome, open data, and the rapid-development ethos taking root at Lincoln.

Ken Chad is writing up a report from the day and Helen Harrop is producing a blog, both of which will be signposted from the website: http://rdtf.mimas.ac.uk/

The big data question

All the presentations can be viewed on slideshare, but there were some particular moments that I think are worth picking out:

JISC’s Deputy Chair, Prof. David Baker, was first up. His presentation, ‘A Vision for Resource Discovery’, should be compulsory reading for university librarians. See, in particular, slides #6 (guiding principles of the RDTF), #8 (a future state of the art by 2012), and #11 (key themes).

[Slides from David Baker’s presentation]

Following this introduction, there were three ‘perspectives’, short presentations “reflecting on the real world motivations and efforts involved in opening up bibliographic, archival and museums data to the wider world”: from the National Maritime Museum, the National Archives…

…and from Ed Chamberlain of (Jerome’s ‘sister project’) COMET (Cambridge Open METadata), the perspective from Cambridge University Library on opening up access to their not inconsiderable bibliographic data. N.B. slides #4 (what does COMET entail?), #9 (licensing) and—more than anything else—slide #16 (“beyond bibliography”).

[Slides from Ed Chamberlain’s presentation]

The first breakout/discussion session I sat in on looked at technical and licensing constraints to opening up access to [bib] data. This was the point at which the tortured business metaphors started to pile up. ‘Buckets’ of data. ‘Elastic’ buckets that can expand to include any kind of data. And (my personal contribution, continuing the wet theme): data often exist at the bottom of a ‘well’. Just because a well is open at the top, it doesn’t necessarily make it easy to get the water out! You need another kind of bucket – a service bucket that makes it possible to extract and make use of the water. Sorry, data. What were we talking about again?

Then a series of 5-minute ‘personal pitches’, including mine just after lunch. I didn’t use slides, but I’m typing up my handwritten notes on Google Docs and I’ll post them as a separate blog post when I get a chance.

David Kay (SERO), Paul Miller (Cloud of Data) and Owen Stephens delivered the meat of the afternoon session in their presentation, ‘The Open Bibliographic Data Guide – Preparing to eat the elephant’. The website containing the Open Bib Data Guide (which until now has not been formally launched) can be found at: http://obd.jisc.ac.uk/

The site itself is going to be invaluable in hand-holding and guiding institutions through the possibilities in opening up access to their own bibliographic data (OBD). Slides from the presentation that are particularly worth noting are #8 (which shows the colour-coding used to distinguish the different OBD use-cases) and #14 (examples of existing OBD).

[Slides from the OBD presentation]

Paul Walk’s presentation, ‘Technical standards & the RDTF Vision: some considerations’, is the source of the slide which I photographed (at the top of this blog post). Paul talked about ‘safe bets’: aspects of the Web that we can rely on to play a part in allowing us to create a distributed environment for resource discovery, including “ROA/SOA/DOA” (Resource- / Service- / Data-Oriented Architecture), persistent identifiers, and a RESTful approach. See also this blog post.

In the second breakout/discussion session, we discussed technical approaches. One of the themes which we kept coming back to was that of two approaches (encapsulated by Paul’s slide) which—while not mutually exclusive—may require different business cases or different explanations in order to be taken up by institutions. We characterised the two approaches as:

  • Raw open data vs Data services
  • Triple store vs RESTful APIs
  • Jerome vs COMET (bit of a caricature, this one, but not entirely unjustified!)

I was gratified that Lincoln’s approach to rapid development and provision of open services was also referred to approvingly, as a model which could be valuable for the HE sector as a whole.

Finally, we heard what’s next for the #rdtf programme. It’s going to be rebranded as ‘Discovery’ and formally re-launched under the new name at another event: ‘Discovery – building a UK metadata ecology’ on Thursday, 26 May 2011, in London. See you there?
