Archive for the ‘Uncategorized’ Category

It’s the ‘Final’ Blog-Post

Posted on August 1st, 2011 by Paul Stainthorp


I’ve put ‘final’ in inverted commas in the title of this blog post (which should be sung—of course—to the tune of this song) – because while the JISC-funded Jerome project has indeed come to an end, Jerome itself is going nowhere. We’ll continue to tweak and develop it as an “un-project” (from whence it came), and—we sincerely hope—Jerome will lead in time, in whole or in part, to a real, live university library service of awesome.

Before we get started, though, thanks are due to the whole of the Jerome project team: Chris Leach, Dave Raines, Tim Simmonds, Elif Varol and Joss Winn; to developers Nick Jackson and Alex Bilbie, times a million; and also to people outside the University of Lincoln who have offered support and advice, including Ed Chamberlain, Owen Stephens, and our JISC programme manager, Andy McGregor.

Ssssooooo…

Just what exactly have we produced?

  1. A public-facing search portal service available at: http://jerome.library.lincoln.ac.uk/
    • Featuring search, browse, and bibliographic record views.
    • Search is provided by Sphinx.
    • A ‘mixing desk’ allows user control over advanced search parameters.
    • Each record is augmented by data from OpenLibrary (licensed under CC0) to help boost the depth and accuracy of our own catalogue. Where possible, OpenLibrary also provides our book cover images. (A small sketch of this kind of augmentation follows at the end of this list.)
    • Bibliographic work pages sport COinS metadata and links to previews from Google Books.
    • Item data is harvested from the Library Management System.
    • Social tools allow sharing of works on Facebook, Twitter, etc.
  2. Openly licensed bibliographic data, available at http://data.lincoln.ac.uk/documentation.html#bib.
  3. Attractive, documented, supported APIs for all data, with a timeline of data refresh cycles. The APIs will provide data in the following formats:
    1. RDF/XML
    2. JSON
    3. RIS
    4. The potential for MARC
  4. Source code for Jerome will be made Open and publicly available (after a shakedown) on GitHub.
  5. While the user interface, technical infrastructure, analytics and machine learning/personalisation aspects of Jerome have been discussed fairly heavily on the project blog, you’ll have to wait a little while for formal case studies.
  6. Contributions to community events: we presented and discussed Jerome at a number of programme events and relevant conferences.
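
Returning briefly to the OpenLibrary augmentation mentioned under point 1: in essence it is a single HTTP call per ISBN. Here is a minimal sketch in Python, assuming the requests library and OpenLibrary's public Books API; the Jerome-side field names are illustrative guesses, not our real internal schema.

```python
import requests

def openlibrary_data(isbn):
    """Fetch supplementary metadata (and cover image URLs) for an ISBN from OpenLibrary."""
    resp = requests.get(
        "https://openlibrary.org/api/books",
        params={"bibkeys": f"ISBN:{isbn}", "format": "json", "jscmd": "data"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get(f"ISBN:{isbn}", {})

def augment_record(record):
    """Fill gaps in a catalogue record with OpenLibrary data, without overwriting our own fields."""
    ol = openlibrary_data(record["isbn"])
    record.setdefault("subjects", [s["name"] for s in ol.get("subjects", [])])
    record.setdefault("publish_date", ol.get("publish_date"))
    record.setdefault("cover_url", ol.get("cover", {}).get("medium"))
    return record

print(augment_record({"isbn": "9780141036144", "title": "Nineteen Eighty-Four"}))
```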

What ought to be done next?

  1. There’s a lot more interesting work to be done around the use of activity/recommendation data and Jerome. We’re using the historical library loan data both to provide user recommendations (“People who borrowed X…”), and to inform the search and ranking algorithms of Jerome itself. However, there are lots of other measures of implicit and explicit activity (e.g. use of the social sharing tools) that could be used to provide even more accurate recommendations. (A rough sketch of the basic co-occurrence idea follows this list.)
  2. Jerome has concentrated on data at the bibliographic/work level. But there is potentially even more value to be had out of aggregating and querying library item data (i.e. information about a library’s physical and electronic holdings of individual copies of a bibliographic work) – e.g. using geo-lookup services to highlight the nearest available copies of a work. This is perhaps the next great untapped sphere for the open data/Discovery movement.
  3. Demonstrate use of the APIs to do cool stuff! Mashing up library data with other sets of institutional data (user profiles, mapping, calendaring data) to provide a really useful ‘portal’ experience for users. Also: tapping into Jerome for reporting/administrative purposes; for example identifying and sanitising bad data!
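
As promised under point 1, here is a minimal sketch of the "people who borrowed X also borrowed Y" calculation that circulation data makes possible. It illustrates the general co-occurrence technique only, not Jerome's actual ranking code, and the input data structure is an assumption.

```python
from collections import Counter

# Each set holds the item IDs one borrower has taken out (illustrative data only;
# the real input would come from anonymised LMS loan records).
loan_histories = [
    {"book:1", "book:2", "book:3"},
    {"book:1", "book:2"},
    {"book:2", "book:4"},
]

def also_borrowed(item_id, histories, limit=5):
    """Rank items by how often they co-occur with item_id in borrowers' loan histories."""
    counts = Counter()
    for history in histories:
        if item_id in history:
            counts.update(history - {item_id})
    return counts.most_common(limit)

print(also_borrowed("book:1", loan_histories))
# -> [('book:2', 2), ('book:3', 1)]
```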

Has Jerome’s data actually been used?

Probably not yet. We were delighted to be able to offer something up (in the form of an early, bare-bones Jerome bibliographic API) to the #discodev Developer Competition, where we still hope to see it used. Also, we are holding a post-project hack day (on 8 August 2011) with the COMET project in Cambridge to share data, code, and best practices around handling Open Data. We certainly intend to make use of the APIs internally to enhance the University of Lincoln’s own library services. If you’re interested in making use of the Jerome open data, please email me or leave a comment here.

What skills did we need?

At the University of Lincoln we have been experimenting with a new (for us) way of managing development projects: the Agile method, using shared tools (Pivotal Tracker, GitHub) to allow a distributed team of developers and interested parties to work together. On a practical level, we’ve had to come to terms with matching a schemaless database architecture with traditional formats for describing resources… Nick and Alex have learned more about library standards and cataloguing practice (*cough*MARC*cough*) than they may have wished! There are also now plans to extend MongoDB training to more staff within the ICT Services department.
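
For anyone wondering what "coming to terms with MARC" looks like in practice, here is a minimal sketch of flattening MARC records into plain documents ready for a schemaless store. It assumes the pymarc library and a file of binary MARC records (the filename is a placeholder), and it deliberately ignores most of MARC's nuances; it is not Jerome's own import code.

```python
from pymarc import MARCReader

def first_subfield(record, tag, code):
    """Return the first value of the given MARC tag/subfield, or None if absent."""
    for field in record.get_fields(tag):
        values = field.get_subfields(code)
        if values:
            return values[0]
    return None

def to_document(record):
    """Reduce a full MARC record to the handful of fields a discovery layer needs."""
    return {
        "title": first_subfield(record, "245", "a"),
        "author": first_subfield(record, "100", "a"),
        "isbn": first_subfield(record, "020", "a"),
        "publisher": first_subfield(record, "260", "b"),
    }

with open("catalogue.mrc", "rb") as fh:   # placeholder file of binary MARC records
    for record in MARCReader(fh):
        print(to_document(record))
```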

What did we learn along the way?

Three things to take away from Jerome:

  1. MARC is evil. But still, perhaps, a necessary evil. Until there’s a critical mass of libraries and library applications using newer, more sane languages to describe their collections, developers will just have to bite down hard and learn to parse MARC records. Librarians, in turn, need to accept the limitations of MARC and actively engage in developing the alternative. </lecture over>
  2. Don’t battle: use technology to find a way around licensing issues. Rather than spending time negotiating with third parties to release their data openly, Jerome took a different approach, which was to release openly those (sometimes minimal) bits of data which we know are free from third-party interest, then to use existing open data sources to enhance and extend those records.
  3. Don’t waste time trying to handle every nuance of a record. Whilst it’s important from a catalogue standpoint, people really don’t care if it’s a main title, subtitle, spine title or any other form of title when they’re searching. Perfection is a goal, but not a restriction. Releasing 40% of data and working on the other 60% later is better than aiming for 100% and never releasing anything.

Thanks! It’s been fun…

Paul Stainthorp
July, 2011

The Jerome Resource Model

Posted on July 31st, 2011 by Nick Jackson

One of the things which I’ve come to realise whilst working on Jerome is that library storage formats generally suck. They’re either so complex you need a manual to work out what they mean (such as my good friend MARC) or lack sufficient depth to elegantly handle what you’re trying to do without resorting to epic kludges. This is mostly down to the fact that people have assumed a storage format must also function as an exchange format which can encompass the entirety of knowledge about a resource, a foolish notion which has led to all kinds of mayhem in the past. We have catalogues to know things like the date at which the special edition bound with human skin was printed, who edited it and the colour of his eyes. A discovery service doesn’t care, and neither (I strongly suspect) do people who just want access to our data.

As a result Jerome uses its own internal format for storing data which is built around what Jerome needs to know to provide the necessary search and discovery services. Since we’re storing objects in MongoDB I’ve opted for a nice clean JSON-based object model with very clear field names and structure. It throws away all kinds of ‘useful’ data from a detailed metadata standpoint purely because it’s irrelevant to discovery. The vast majority of fields we store are completely optional (since not all resources have everything we care to know about). In a nutshell, we are capturing and storing the following data:

  • A resource’s unique Jerome ID.
  • Information on the collection the resource belongs to, including its unique ID within that collection.
  • Title, secondary title and edition.
  • An array of author names and secondary author names.
  • A date of publishing split into year, month and day.
  • An abstract or synopsis.
  • Keywords and subject categories.
  • A periodical name, volume number and issue number.
  • Availability dates, including start and stop of availability.
  • Publisher name and location.
  • ISBN or ISSN.
  • A URL to access the resource, and optionally a label for that URL.
  • A URL for direct access to the full text of the resource.
  • Name and URL for the licence information of the resource metadata.

As part of the magic of this approach we completely disassociate the individual type of resource from its representation within Jerome. We can represent pretty much any collection of resources within the same framework without having to muck about writing custom code to turn one format into another for representation.
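
To make the model above concrete, here is a rough sketch of what one such document might look like when inserted into MongoDB. The field names and example values are illustrative guesses based on the list of data we capture, not Jerome's actual schema, and the code assumes the pymongo driver and a MongoDB instance running locally.

```python
from pymongo import MongoClient

# Illustrative field names and example data only - not Jerome's real internal schema.
resource = {
    "jerome_id": "jerome:0001",
    "collection": {"name": "library-catalogue", "local_id": "b1234567"},
    "title": "The Name of the Rose",
    "edition": "1st American ed.",
    "authors": ["Eco, Umberto"],
    "secondary_authors": ["Weaver, William"],          # translator
    "published": {"year": 1983, "month": None, "day": None},
    "abstract": "A murder mystery set in a fourteenth-century monastery.",
    "keywords": ["Fiction", "Monasticism and religious orders"],
    "isbn": "9780151446476",
    "publisher": {"name": "Harcourt", "location": "San Diego"},
    "url": {"href": "http://example.org/record/b1234567", "label": "View in catalogue"},
    "licence": {"name": "CC0", "url": "https://creativecommons.org/publicdomain/zero/1.0/"},
}

client = MongoClient("mongodb://localhost:27017")      # assumed local MongoDB instance
client.jerome.resources.insert_one(resource)
```

Because almost every field is optional, a journal article, a lecture slide deck or a website can be stored in exactly the same shape, just with different fields present.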

Modular Discovery

Posted on July 30th, 2011 by Nick Jackson

Jerome is a system which is modular by design. It comprises a variety of distinct modules which handle data collection, formatting, output, search, indexing, recommendation and more. It’s also fairly unusual (as far as I can tell) in that different types of resource also occupy a modular ‘slot’ rather than being interwoven with Jerome itself – it has no differentiation at the code level between books, ebooks, dissertations, papers, journals, journal entries, websites or any other ‘resource’ which people may want to feed it.

As a result of this approach we can use Jerome as a true multi-channel resource discovery tool. All that’s required for anybody to add resources to Jerome and immediately make them searchable and recommendable is for a ‘collection’ to be created and for them to write a bit of code which can make the following API calls as necessary:

  • Create a new resource as part of the collection, telling us as much about it as they can.
  • Update an existing resource when it changes.
  • Delete a resource which is no longer available.
  • Optionally record a use of a resource against a user’s account to help build our recommendations dataset.

That’s it. Got a collection of awesome lecture slides you want to feed into Jerome and instantly make known as a resource? You can do that.

We’ll have your API documentation up soon.
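
In the meantime, here is a rough, hypothetical sketch of what feeding a collection into Jerome might look like over HTTP. The endpoint paths, payload fields and authentication scheme are all assumptions for illustration; the real calls will be described in the documentation.

```python
import requests

BASE = "http://jerome.library.lincoln.ac.uk/api"   # hypothetical base URL
AUTH = {"Authorization": "Bearer <your-token>"}    # hypothetical auth scheme

slide_deck = {
    "collection": "lecture-slides",
    "local_id": "slides-2011-07-001",
    "title": "Introduction to Resource Discovery",
    "authors": ["A. Lecturer"],
    "url": {"href": "http://example.org/slides/001", "label": "View slides"},
}

# Create a new resource as part of the collection, telling Jerome as much as we can.
requests.post(f"{BASE}/resources", json=slide_deck, headers=AUTH, timeout=10)

# Update the resource when it changes...
requests.put(f"{BASE}/resources/lecture-slides/slides-2011-07-001",
             json={"title": "Introduction to Resource Discovery (revised)"},
             headers=AUTH, timeout=10)

# ...delete it when it is no longer available...
requests.delete(f"{BASE}/resources/lecture-slides/slides-2011-07-001",
                headers=AUTH, timeout=10)

# ...and optionally record a use against a user's account to feed the recommendations dataset.
requests.post(f"{BASE}/usage",
              json={"resource": "lecture-slides/slides-2011-07-001", "user": "u1234"},
              headers=AUTH, timeout=10)
```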

What did it cost and who benefits?

Posted on July 27th, 2011 by Paul Stainthorp

This is going to be one of the hardest project blog posts to write…

The costs of getting Jerome to this stage are relatively easy to work out. Under the Infrastructure for Resource Discovery programme, JISC awarded us the sum of £36,585 which (institutional overheads aside) we used to pay for the following:

  • Developer staff time: 825 hours over six months.
  • Library and project staff time: 250 hours over six months.
  • The cost of travel to a number of programme events and relevant conferences at which we presented Jerome, including this one, this one, this one, this one and this one.

As all the other aspects of Jerome—hardware, software etc.—either already existed or were free to use, that figure represents the total cost of getting Jerome to its current state.

The benefits (see also section 2.4 of the original bid) of Jerome are less easily quantified financially, but we ought to consider these operational benefits:

1. The potential for using Jerome as a ‘production’ resource discovery system by the University of Lincoln. As such it could replace our current OPAC web catalogue as the Library’s primary public tool of discovery. The Library ought also to consider Jerome as a viable alternative to the purchase of a commercial, hosted next-generation resource discovery service (which it is currently reviewing), with the potential for replacing the investment it would make in such a system with investment in developer time to maintain and extend Jerome. In addition, the Common Web Design (on which the Jerome search portal is based) is inherently mobile-friendly.

2. Related: even if the Jerome search portal is not adopted in toto, there’s real potential for using Jerome’s APIs and code (open sourced) to enhance our existing user interfaces (catalogues, student portals, etc.) by ‘hacking in’ additional useful data and services via Jerome (similar to the Talis Juice service). This could lead to cost savings: a modern OPAC would not have to be developed in isolation or tools bought in. And these enhancements are as available to other institutions and libraries as they are to Lincoln.

3. The use of Jerome as an operational tool for checking and sanitising bibliographic data. Jerome can already be used to generate lists of ‘bad’ data (e.g. invalid ISBNs in MARC records); this intelligence could be fed back into the Library to make the work of cataloguers, e-resources admin staff, etc., easier and faster (efficiency savings) and again to improve the user experience. (A small illustration of the ISBN check follows this list.)

4. Benefits of Open Data: in releasing our bibliographic collections openly Jerome is adding to the UK’s academic resource discovery ‘ecosystem‘, with benefits to scholarly activity both in Lincoln and elsewhere. We are already working with the COMET team at Cambridge University Library on a cross-Fens spin-off miniproject(!) to share data, code, and best practices around handling Open Data. Related to this are the ‘fuzzier’ benefits of associating the University of Lincoln’s name with innovation in technology for education (which is a stated aim in the University’s draft institutional strategy).

5. Finally, there is the potential for the university to use Jerome as a platform for future development: Jerome already sits in a ‘suite’ of interconnecting innovative institutional web services (excuse the unintentional alliteration!) which include the Common Web Design presentation framework, Total ReCal space/time data, lncn.eu URL shortener and link proxy, a university-wide open data platform, and the Nucleus data storage layer. Just as each of these (notionally separate) services has facilitated the development of all the others, so it’s likely that Jerome will itself act as a catalyst for further innovation.
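
As promised under point 3 above, spotting a structurally invalid ISBN is just a checksum calculation. A minimal sketch of the idea (ISBN-10 and ISBN-13 check digits only; it makes no attempt to handle the full range of things that can go wrong in a MARC 020 field):

```python
def valid_isbn(raw):
    """Return True if raw is a structurally valid ISBN-10 or ISBN-13."""
    isbn = raw.replace("-", "").replace(" ", "").upper()
    if len(isbn) == 10:
        if not (isbn[:9].isdigit() and (isbn[9].isdigit() or isbn[9] == "X")):
            return False
        # Weighted sum (weights 10 down to 1) must be divisible by 11; 'X' counts as 10.
        total = sum((10 - i) * (10 if c == "X" else int(c)) for i, c in enumerate(isbn))
        return total % 11 == 0
    if len(isbn) == 13 and isbn.isdigit():
        # Alternating weights of 1 and 3; the sum must be divisible by 10.
        total = sum((1 if i % 2 == 0 else 3) * int(c) for i, c in enumerate(isbn))
        return total % 10 == 0
    return False

print(valid_isbn("978-0-15-144647-6"))   # True
print(valid_isbn("0-15-144647-5"))       # False - bad check digit
```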

Following in Jerome’s footsteps

Posted on July 27th, 2011 by Paul Stainthorp

As the formal end of the Jerome project looms, this blog post is aimed at other people who might want to take a similar approach to breaking data out of their library systems.

From the start, Jerome took an approach that can be summed up as:

  • Use the tools and abilities that you have to hand. Go for low-hanging fruit and demonstrate value early on.
  • Follow the path of least resistance. If it works, it’s fine. Route around problems rather than fighting against them.
  • Consider the benefits of a ‘minimally invasive’ approach to library systems. Use the web to passively gather copies of your data without having to make changes to your existing root systems (such as LMSes).

What tools would people need to take this approach?

1. Technical ability. Jerome would have got nowhere without the coding abilities of University of Lincoln developers Nick Jackson and Alex Bilbie, and the approach they take to development (see below). It’s fair to say that their involvement has been something of a culture-shift for the Library: they have brought a fresh approach to dealing with our systems and data.

2. An agile, rapid, iterative project methodology and a suite of (often free; always web-based) software tools to support that way of working. This probably can’t be overstated: a ‘traditional’ project management methodology just wouldn’t have worked for Jerome.

3. An understanding of where data resides in your current systems. We’ve had to become uncomfortably au fait with the structure of SirsiDynix’s internal data tables, MARCXML, OAI-PMH, RIS and all sorts of other unpleasantness. You also need an awareness of the rich ecosystem of third-party library/bibliographic/other useful data that exists on the web. (A small OAI-PMH example follows at the end of this list.)

4. Related: a willingness to try different approaches to getting hold of this data (the absolutely-anything-goes, mashup-heavy approach): APIs: great. Screen-scraping: yeah, if we must. SQL querying, .csv dumps, proprietary admin interfaces: all fine. Don’t be precious about finding a way in. By far the most important thing is to provide the open data service in the first place. Things can always be tidied up, rationalised, at a later date.

5. A box to run things on. This doesn’t have to be a large institutional server: we’ve successfully run Jerome on a Mac Mini.

6. Finally, the use of blogs—such as this one—and social media to engage a (self-selecting, admittedly) community of potential users and fellow-travellers.
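
Before moving on to priorities, here is the OAI-PMH example promised under point 3. OAI-PMH is usually the gentlest of those interfaces: a plain HTTP request that returns XML. This sketch pulls a batch of Dublin Core records from a placeholder endpoint; real repositories paginate large result sets with resumption tokens, which this deliberately ignores.

```python
import requests
import xml.etree.ElementTree as ET

OAI_ENDPOINT = "http://repository.example.ac.uk/oai"   # placeholder endpoint URL
NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

resp = requests.get(OAI_ENDPOINT,
                    params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
                    timeout=30)
resp.raise_for_status()
root = ET.fromstring(resp.content)

# Print the identifier and title of each record in the first batch.
for record in root.iterfind(".//oai:record", NS):
    identifier = record.findtext(".//oai:identifier", default="", namespaces=NS)
    title = record.findtext(".//dc:title", default="(no title)", namespaces=NS)
    print(identifier, "-", title)
```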

Priorities:

  • Use tools such as Pivotal Tracker and mind maps to capture requirements and turn them into a plan for development.
  • Meet regularly to review the plan and push the ideas on to the next stage.
  • Decide what’s ‘out of scope’ early on so you can concentrate on maximum value.
  • Realise that there’s value in releasing even part of your data—for example, a list of ISSNs—which you can exploit immediately without having to worry about issues (e.g. third-party copyright in records) that might affect your complete dataset.
  • Blog about what you’re doing. Little and often is the way to go.

What to avoid:

While it’s important to be aware of the bigger picture, don’t get too distracted by the way things are being done elsewhere.

A lot of the open data movement seems to be closely tied up with providing access to large quantities of Linked Data (the dreaded RDF triple store!), and initially I was worried that because we were not taking that approach we were somehow out of step. (In fact, I think that our not concentrating on Linked Data has allowed Jerome to explore the open data landscape in a different and valuable way, i.e. the provision of open bibliographic data services via API – see Paul Walk’s slides on providing data vs. providing a data service. I know which side my bread’s buttered: Open Data ≠ merely open; Open Data = open and usable.)

Similarly, while a lot of the discussion around third-party intellectual property rights in data has been phrased in terms of negotiating with those third parties to release the data openly (or taking a risk-assessment approach to releasing it anyway!), Jerome took a different approach, which has been first to release openly those (sometimes minimal) bits of data which we know are free from third-party interest, then to use existing open data sources to enhance the minimal set: what we end up with are data that may differ from the original bibliographic record, but which are inherently open. It’s not a better approach, just a different one.

Who you will need to have ‘on side’:

  • Your systems librarian, or at least someone who has a copy of the manual to your library catalogue!
  • A cataloguer, to explain (and excuse) the intricacies of MARC.
  • A university librarian / head of service who is willing to take risks and sanction the release of open data.
  • A small-but-thriving community of developers, mashup-enthusiasts and shambrarians (see above), including people on Twitter who can act as a sounding board for ideas.
  • Ideally, your project team should include as many people as possible who do not have a vested interest in the way libraries have historically got things done. Much of Jerome’s value has come from taking an approach that’s different from the library norm.