Posts Tagged ‘resource discovery’

What did it cost and who benefits?

Posted on July 27th, 2011 by Paul Stainthorp

This is going to be one of the hardest project blog posts to write…

The costs of getting Jerome to this stage are relatively easy to work out. Under the Infrastructure for Resource Discovery programme, JISC awarded us the sum of £36,585, which (institutional overheads aside) we used to pay for the following:

  • Developer staff time: 825 hours over six months.
  • Library and project staff time: 250 hours over six months.
  • The cost of travel to a number of programme events and relevant conferences at which we presented Jerome.

As all the other aspects of Jerome—hardware, software etc.—either already existed or were free to use, that figure represents the total cost of getting Jerome to its current state.

The benefits (see also section 2.4 of the original bid) of Jerome are less easily quantified financially, but we ought to consider these operational benefits:

1. The potential for using Jerome as a ‘production’ resource discovery system by the University of Lincoln. As such it could replace our current OPAC web catalogue as the Library’s primary public tool of discovery. The Library ought also to consider Jerome as a viable alternative to the purchase of a commercial, hosted next-generation resource discovery service (which it is currently reviewing), with the potential for replacing the investment it would make in such a system with investment in developer time to maintain and extend Jerome. In addition, the Common Web Design (on which the Jerome search portal is based) is inherently mobile-friendly.

2. Related: even if the Jerome search portal is not adopted in toto, there’s real potential for using Jerome’s APIs and code (open sourced) to enhance our existing user interfaces (catalogues, student portals, etc.) by ‘hacking in’ additional useful data and services via Jerome (similar to the Talis Juice service). This could lead to cost savings: a modern OPAC would not have to be developed in isolation, nor would tools have to be bought in. And these enhancements are available to other institutions and libraries as much as to Lincoln. (A rough sketch of the kind of glue code involved follows this list of benefits.)

3. The use of Jerome as an operational tool for checking and sanitising bibliographic data. Jerome can already be used to generate lists of ‘bad’ data (e.g. invalid ISBNs in MARC records); this intelligence could be fed back into the Library to make the work of cataloguers, e-resources admin staff, etc., easier and faster (efficiency savings) and again to improve the user experience. (A generic illustration of the kind of check involved also follows the list.)

4. Benefits of Open Data: in releasing our bibliographic collections openly Jerome is adding to the UK’s academic resource discovery ‘ecosystem’, with benefits to scholarly activity both in Lincoln and elsewhere. We are already working with the COMET team at Cambridge University Library on a cross-Fens spin-off miniproject(!) to share data, code, and best practices around handling Open Data. Related to this are the ‘fuzzier’ benefits of associating the University of Lincoln’s name with innovation in technology for education (which is a stated aim in the University’s draft institutional strategy).

5. Finally, there is the potential for the university to use Jerome as a platform for future development: Jerome already sits in a ‘suite’ of interconnecting innovative institutional web services (excuse the unintentional alliteration!) which include the Common Web Design presentation framework, Total ReCal space/time data, lncn.eu URL shortener and link proxy, a university-wide open data platform, and the Nucleus data storage layer. Just as each of these (notionally separate) services has facilitated the development of all the others, so it’s likely that Jerome will itself act as a catalyst for further innovation.
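
On benefit 2 above: to make the ‘hacking in’ idea concrete, here is a minimal sketch of the kind of glue code involved. Everything in it (the base URL, the route, the parameters and the JSON fields) is an assumption for illustration, not Jerome’s documented API.

    # Hypothetical example only: the base URL, route and JSON fields are
    # assumptions for illustration, not Jerome's published API.
    import requests

    JEROME_API = "https://jerome.example.lincoln.ac.uk/api"  # invented base URL

    def extra_record_data(isbn):
        """Fetch supplementary data for one catalogue record, ready to be
        injected into an existing OPAC or portal page."""
        resp = requests.get(f"{JEROME_API}/items", params={"isbn": isbn}, timeout=5)
        resp.raise_for_status()
        item = resp.json()
        # Keep only the fields the host interface needs to render.
        return {
            "title": item.get("title"),
            "holdings": item.get("holdings", []),
            "cover_url": item.get("cover_url"),
        }

In practice the enhancement would be a few lines like these plus a small template change in the host interface; the heavy lifting stays on Jerome’s side.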
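
On benefit 3 above: as a generic illustration (not Jerome’s actual code) of the kind of check that flags ‘bad’ bibliographic data, here is a checksum validator for ISBNs pulled from MARC 020 fields.

    import re

    def is_valid_isbn(raw):
        """Checksum-validate an ISBN-10 or ISBN-13 (e.g. from a MARC 020 field)."""
        isbn = re.sub(r"[\s-]", "", raw).upper()
        if len(isbn) == 10:
            # ISBN-10: weighted sum (weights 10 down to 1) must be divisible by 11;
            # 'X' is only valid as the final check digit and counts as 10.
            if not re.fullmatch(r"\d{9}[\dX]", isbn):
                return False
            total = sum((10 - i) * (10 if c == "X" else int(c))
                        for i, c in enumerate(isbn))
            return total % 11 == 0
        if len(isbn) == 13:
            # ISBN-13: alternating weights of 1 and 3; sum must be divisible by 10.
            if not isbn.isdigit():
                return False
            total = sum(int(c) * (1 if i % 2 == 0 else 3)
                        for i, c in enumerate(isbn))
            return total % 10 == 0
        return False

Run across a full MARC export, a check like this yields exactly the kind of worklist that could be handed back to cataloguing and e-resources staff.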

How commercial next-generation library discovery tools have *nearly* got it right

Posted on May 17th, 2011 by Paul Stainthorp

I was in Huddersfield (again – I’m barely away from the place!) yesterday, at a CILIP UC&R (University, College and Research Group) Yorkshire & Humberside [catchy name] training event on ‘Discovering Discovery Tools’. Librarians from four different UK universities gave practical, pros-and-cons descriptions of how they implemented and are now running four different commercial next-gen resource-discovery tools.

Five (count ’em!) people from Lincoln were in the audience. I was wearing two hats: one for project Jerome, thinking about design concepts in resource discovery tools; the other for my day job – Lincoln is in the middle of a strategic review of Library ICT systems, which may well end up recommending that we buy one of these products.

It was all good stuff. First off, libraries need to hear the honest, warts-and-all counterpoint to the glowing terms in which each discovery product is described by its vendor. Secondly, it’s useful to subject all four* resource discovery platforms to the same amount of daylight, and see where the common problems lie, as well as where one tool outperforms another. Thirdly—and even though there’s a lot of resource discovery hyperbole to be heard—this is still a big shift for academic libraries, and I think we should discuss implications that are wider than the costs/benefits for an individual institution.

(*Yes, I know there are a few other tools. But they weren’t in the room yesterday.)

[Photo: Lockside – What’s stopping us? (Canal lock gate at the University of Huddersfield.)]

Things that jumped out at me:

Commercial resource discovery has reached a level of maturity that was absent a couple of years ago. That’s not to say that all next-gen resource discovery tools are perfect (because they aren’t), or that there aren’t any problems (because there are; see below), but academic libraries do now have a genuine choice between several different, viable commercial products.

Here’s a heresy: the differences between these four products are not that significant. I think that anyone who went away from yesterday’s event believing that some of the four discovery tools on display are ‘good’ and some ‘bad’ is probably wrong. It’s not really about the product; it’s about the willingness of the vendor to overcome problems, and about their attitude to their customers. Do you buy a slightly less slick product, but from a company you feel you can have a more productive relationship with?

In fact, most of the real problems with resource discovery seem to be common to all four of the products on show yesterday. De-duping via FRBR looks to be a bit of an Achilles’ heel. (A shame. FRBRisation is one of those things you either need to get right, or not do at all. A half-arsed attempt is worse than not bothering.)

Also broken: known-item search. This ought to be trivial to fix, and it needs to be sorted now now now.  I find it particularly sinister that some commercial resource-discovery tools rank their search results according to secret, proprietary algorithms that can’t be inspected or challenged by their users, let alone altered/improved. This is a problem. What’s the point of a library that can’t justify how its resource discovery system actually works? Are we just here to sign the cheques?

Libraries still have a tendency to overcomplicate things for their users. Sometimes they do this because they have no choice (perhaps their shiny new discovery tool doesn’t quite work the way it should); but often they seem just too ready to accept a situation where users are inconvenienced, rather than address an underlying problem. Lincoln is included in this sweeping generalisation.

There’s no point pretending that a library can make two independent decisions to purchase [a] a next-gen resource discovery platform, and [b] a journals knowledgebase/link resolver. The two things are tied up together. To pick a random example: if you want Summon, you’d better want 360.

Why can’t we just buy access to a search index? If I want to pay to provide my users with the benefits of a lovely big central index of content, why do I have to buy into your discovery algorithm and web front-end as well? (Whither JISC collections?)

Related, and finally – we really shouldn’t have to replace our search and discovery interfaces every time we want/need to use a different content provider, and we shouldn’t be placed in the situation of having to make collection/subscription decisions in order to ‘feed’ our discovery tool. It may be temptingly easy, cost aside, to pick up and put down different next-gen discovery products (“…it’s just a subscription!”) but there’s too much at stake for our users.

An elastic bucket down the data well (#rdtf in Manchester)

Posted on April 20th, 2011 by Paul Stainthorp

I was in Manchester on Monday for Opening Data – Opening Doors, a one-day “advocacy workshop” hosted by JISC and RLUK under their Resource Discovery Taskforce (#rdtf) programme. I delivered a five-minute ‘personal pitch’ about Jerome, open data, and the rapid-development ethos that’s developing at Lincoln.

Ken Chad is writing up a report from the day and Helen Harrop is producing a blog, both of which will be signposted from the website: http://rdtf.mimas.ac.uk/

The big data question

All the presentations can be viewed on slideshare, but there were some particular moments that I think are worth picking out:

The JISC deputy, Prof. David Baker, was first up. His presentation, ‘A Vision for Resource Discovery’, should be compulsory reading for university librarians. See, in particular, slides #6 (guiding principles of the RDTF), #8 (a future state of the art by 2012), and #11 (key themes).

[Slides from David Baker’s presentation]

Following this introduction, there were three ‘perspectives’: short presentations “reflecting on the real world motivations and efforts involved in opening up bibliographic, archival and museums data to the wider world” – from the National Maritime Museum, the National Archives…

…and from Ed Chamberlain of COMET (Cambridge Open METadata), Jerome’s ‘sister project’: the perspective from Cambridge University Library on opening up access to their not-inconsiderable bibliographic data. N.B. slides #4 (what does COMET entail?), #9 (licensing) and—more than anything else—slide #16 (“beyond bibliography”).

[Slides from Ed Chamberlain’s presentation]

The first breakout/discussion session I sat in on looked at technical and licensing constraints on opening up access to [bib] data. This was the point at which the tortured business metaphors started to pile up. ‘Buckets’ of data. ‘Elastic’ buckets that can expand to include any kind of data. And (my personal contribution, continuing the wet theme): data often exist at the bottom of a ‘well’. Just because a well is open at the top doesn’t mean it’s easy to get the water out! You need another kind of bucket – a service bucket that makes it possible to extract and make use of the water. Sorry, data. What were we talking about again?

Then a series of 5-minute ‘personal pitches’, including mine just after lunch. I didn’t use slides, but I’m typing up my handwritten notes on Google Docs and I’ll post them as a separate blog post when I get a chance.

David Kay (SERO), Paul Miller (Cloud of Data) and Owen Stephens delivered the meat of the afternoon session in their presentation, ‘The Open Bibliographic Data Guide – Preparing to eat the elephant’. The website containing the Open Bib Data Guide (which had not been formally launched until now) can be found at: http://obd.jisc.ac.uk/

The site itself is going to be invaluable in hand-holding and guiding institutions through the possibilities of opening up access to their own bibliographic data (OBD). Slides from the presentation particularly worth noting are #8 (which shows the colour-coding used to distinguish the different OBD use-cases) and #14 (examples of existing OBD).

[Slides from the OBD presentation]

Paul Walk’s presentation, ‘Technical standards & the RDTF Vision: some considerations’, is the source of the slide which I photographed (at the top of this blog post). Paul talked about ‘safe bets’: aspects of the Web that we can rely on to play a part in allowing us to create a distributed environment for resource discovery, including “ROA/SOA/DOA” (Resource- / Service- / Data-Oriented Architecture), persistent identifiers, and a RESTful approach.

In the second breakout/discussion session, we discussed technical approaches. One of the themes we kept coming back to was that of two approaches (encapsulated by Paul’s slide) which—while not mutually exclusive—may require different business cases or different explanations in order to be taken up by institutions. We characterised the two approaches as follows (there’s a rough sketch of the contrast after the list):

  • Raw open data vs Data services
  • Triple store vs RESTful APIs
  • Jerome vs COMET (bit of a caricature, this one, but not entirely unjustified!)
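
To caricature that contrast in code: a triple store hands consumers raw graph data to query and reshape themselves, while a RESTful data service hands them pre-shaped JSON on the provider’s terms. Both endpoints below are invented for illustration; neither is a real Jerome or COMET address.

    import requests

    # 'Raw open data' style: a SPARQL endpoint over a triple store.
    # The consumer writes the graph query and does the shaping.
    sparql = """
    SELECT ?title WHERE {
      ?book <http://purl.org/dc/terms/title> ?title .
    } LIMIT 10
    """
    triples = requests.get(
        "https://example.org/sparql",              # invented endpoint
        params={"query": sparql},
        headers={"Accept": "application/sparql-results+json"},
        timeout=10,
    ).json()

    # 'Data service' style: a RESTful API returning pre-shaped JSON.
    # The provider decides the resource model; the consumer just GETs it.
    records = requests.get(
        "https://example.org/api/works",           # invented endpoint
        params={"q": "open data", "limit": 10},
        timeout=10,
    ).json()

Neither approach excludes the other, of course: the same underlying data can sit behind both kinds of bucket.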

I was gratified that Lincoln’s approach to rapid development and provision of open services was also referred to in flattering terms, as a model which could be valuable for the HE sector as a whole.

Finally, we heard what’s next for the #rdtf programme. It’s going to be rebranded as ‘Discovery’ and formally re-launched under the new name at another event: ‘Discovery – building a UK metadata ecology’ on Thursday, 26 May 2011, in London. See you there?


JISC #rdtf meeting, Birmingham

Posted on March 1st, 2011 by Paul Stainthorp

I’m in Birmingham for the JISC Infrastructure for Resource Discovery start-up meeting. We’re here to get to know the other 7 projects that JISC has funded. Here’s what we’ll be talking about:

The objectives for this meeting are:
  • To introduce the bigger picture of the resource discovery taskforce work and all of the projects that are involved
  • To share approaches and knowledge on the key issues for the programme – technical approaches, licensing and aggregation.
For this session each project will need to prepare a 5-minute overview of their project. We would like your overview to address the following questions:
  • What content and metadata are you working with?
  • How will this data be made available?
  • What are your use cases for the data?
  • What benefits to your institution and the sector do you anticipate?
12.30 Discussion of technical approaches
  • Each project will be asked to briefly outline the biggest technical challenge they face in their project. We will then look for common issues and opportunities for projects to collaborate.
  • What technical approaches and tools are you using?

And here are my slides for the 5-minute presentation on Jerome: