Posts Tagged ‘MARC’

Following in Jerome’s footsteps

Posted on July 27th, 2011 by Paul Stainthorp

As the formal end of the Jerome project looms, this blog post is aimed at other people who might want to take a similar approach to breaking data out of their library systems.

From the start, Jerome took an approach that can be summed up as:

  • Use the tools and abilities that you have to hand. Go for low-hanging fruit and demonstrate value early on.
  • Follow the path of least resistance. If it works, it’s fine. Route around problems rather than fighting against them.
  • Consider the benefits of a ‘minimally invasive’ approach to library systems. Use the web to passively gather copies of your data without having to make changes to your existing root systems (such as LMSes).

What tools would people need to take this approach?

1. Technical ability. Jerome would have got nowhere without the coding abilities of University of Lincoln developers Nick Jackson and Alex Bilbie, and the approach they take to development (see below). It’s fair to say that their involvement has been something of a culture-shift for the Library: they have brought a fresh approach to dealing with our systems and data.

2. An agile, rapid, iterative project methodology and a suite of (often free; always web-based) software tools to support that way of working. This probably can’t be overstated: a ‘traditional’ project management methodology just wouldn’t have worked for Jerome.

3. An understanding of where data resides in your current systems. We’ve had to become uncomfortably au fait with the structure of SirsiDynix’s internal data tables, MARCXML, OAI-PMH, RIS and all sorts of other unpleasantness. You’ll also need an awareness of the rich ecosystem of third-party library/bibliographic/other useful data that exists on the web.
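To make that concrete, here’s a minimal sketch (in Python, purely as an illustration) of the kind of passive, ‘minimally invasive’ harvest mentioned earlier: pulling MARCXML records over OAI-PMH without touching the LMS itself. The endpoint URL and the marcxml metadata prefix are assumptions; check what your own system exposes.

```python
# A sketch of harvesting MARCXML over OAI-PMH. The base URL is a
# hypothetical stand-in for your own LMS's OAI-PMH endpoint.
import requests
import xml.etree.ElementTree as ET

MARC = "{http://www.loc.gov/MARC21/slim}"

response = requests.get(
    "https://library.example.ac.uk/oai",
    params={"verb": "ListRecords", "metadataPrefix": "marcxml"},
    timeout=30,
)
root = ET.fromstring(response.content)

for record in root.iter(f"{MARC}record"):
    # MARC field 245, subfield 'a' carries the main title.
    for field in record.iter(f"{MARC}datafield"):
        if field.get("tag") == "245":
            for sub in field.iter(f"{MARC}subfield"):
                if sub.get("code") == "a":
                    print(sub.text)
```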

4. Related: a willingness to try different approaches to getting hold of this data (the absolutely-anything-goes, mashup-heavy approach). APIs: great. Screen-scraping: yeah, if we must. SQL querying, .csv dumps, proprietary admin interfaces: all fine. Don’t be precious about finding a way in. By far the most important thing is to provide the open data service in the first place. Things can always be tidied up and rationalised at a later date.
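In the same anything-goes spirit, here’s a rough sketch of not being precious about the way in: try a (hypothetical) API first and fall back to a flat .csv dump exported from the admin interface. Every URL and filename here is a stand-in.

```python
# 'Anything goes' data access: prefer the API, fall back to a CSV dump.
import csv
import requests

def fetch_records():
    try:
        # Hypothetical JSON API on the catalogue.
        r = requests.get("https://catalogue.example.ac.uk/api/items", timeout=10)
        r.raise_for_status()
        return r.json()
    except requests.RequestException:
        # Fallback: a .csv dump exported from the LMS admin interface.
        with open("catalogue_dump.csv", newline="", encoding="utf-8") as f:
            return list(csv.DictReader(f))

records = fetch_records()
print(f"Got {len(records)} records, by whichever route worked.")
```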

5. A box to run things on. This doesn’t have to be a large institutional server: we’ve successfully run Jerome on a Mac Mini.

6. Finally, the use of blogs—such as this one—and social media to engage a (self-selecting, admittedly) community of potential users and fellow-travellers.

Priorities:

  • Use tools such as Pivotal Tracker and mind maps to capture requirements and turn them into a plan for development.
  • Meet regularly to review the plan and push the ideas on to the next stage.
  • Decide what’s ‘out of scope’ early on so you can concentrate on maximum value.
  • Realise that there’s value in releasing even part of your data (for example, a list of ISSNs) which you can exploit immediately, without having to worry about issues (e.g. third-party copyright in records) that might affect your complete dataset. There’s a sketch of pulling out just the ISSNs after this list.
  • Blog about what you’re doing. Little and often is the way to go.
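
To illustrate that ISSN point: a short sketch using the pymarc library to pull nothing but ISSNs (MARC field 022, subfield ‘a’) out of a file of records. The records.mrc dump is a stand-in for however you get at your data.

```python
# Extract just the ISSNs from a MARC dump -- a minimal, third-party-free
# slice of the data that can be released immediately.
from pymarc import MARCReader

issns = set()
with open("records.mrc", "rb") as f:
    for record in MARCReader(f):
        if record is None:  # skip records pymarc couldn't parse
            continue
        for field in record.get_fields("022"):
            for issn in field.get_subfields("a"):
                issns.add(issn.strip())

for issn in sorted(issns):
    print(issn)
```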

What to avoid:

While it’s important to be aware of the bigger picture, don’t get too distracted by the way things are being done elsewhere.

A lot of the open data movement seems to be closely tied up with providing access to large quantities of Linked Data (the dreaded RDF triple store!), and initially I was worried that, because we were not taking that approach, we were somehow out of step. In fact, I think that not concentrating on Linked Data has allowed Jerome to explore the open data landscape in a different and valuable way: the provision of open bibliographic data services via API (see Paul Walk’s slides on providing data vs. providing a data service). I know which side my bread’s buttered: Open Data ≠ merely open; Open Data = open and usable.
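For a flavour of what ‘a data service’ (as opposed to a data dump) might look like, here’s a deliberately tiny sketch of serving bibliographic records as JSON. Flask and the in-memory catalogue are illustrative choices only, not a description of Jerome’s actual stack.

```python
# A toy 'open and usable' data service: one JSON endpoint per item.
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Stand-in data; a real service would sit on top of the harvested records.
CATALOGUE = {
    "b1234567": {"title": "An Example Title", "isbn": "9780000000000"},
}

@app.route("/items/<item_id>")
def item(item_id):
    record = CATALOGUE.get(item_id)
    if record is None:
        abort(404)
    return jsonify(record)

if __name__ == "__main__":
    app.run()
```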

Similarly, while a lot of the discussion around third-party intellectual property rights in data has been phrased in terms of negotiating with those third parties to release the data openly (or taking a risk-assessment approach and releasing it anyway!), Jerome took a different approach: first, release openly those (sometimes minimal) bits of data which we know are free from third-party interest; then, use existing open data sources to enhance the minimal set. What we end up with are data that may differ from the original bibliographic record, but which are inherently open. It’s not a better approach, just a different one.
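As a sketch of that enhance-the-minimal-set idea: start from a bare ISBN (which carries no third-party record content) and pull openly available detail from the OpenLibrary Books API. The example ISBN is arbitrary.

```python
# Enhance a minimal record (just an ISBN) with open data from OpenLibrary.
import requests

def enhance(isbn):
    r = requests.get(
        "https://openlibrary.org/api/books",
        params={"bibkeys": f"ISBN:{isbn}", "format": "json", "jscmd": "data"},
        timeout=10,
    )
    r.raise_for_status()
    # The response is keyed by the bibkey we asked for.
    return r.json().get(f"ISBN:{isbn}", {})

data = enhance("9780141182902")
print(data.get("title"), "-", data.get("publish_date"))
```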

Who you will need to have ‘on side’:

  • Your systems librarian, or at least someone who has a copy of the manual to your library catalogue!
  • A cataloguer, to explain (and sometimes excuse) the intricacies of MARC.
  • A university librarian / head of service who is willing to take risks and sanction the release of open data.
  • A small-but-thriving community of developers, mashup-enthusiasts and shambrarians (see above), including people on Twitter who can act as a sounding board for ideas.
  • Ideally, your project team should include as many people as possible who do not have a vested interest in the way libraries have historically got things done. Much of Jerome’s value comes from being built on an approach that’s different from the library norm.

Progress, progress…

Posted on April 11th, 2011 by Nick Jackson

It’s end-of-iteration time, and that means a round-up of what’s been going on in the Jerome world. I’m going to kick off by announcing the exciting news that we’ve standardised on Pivotal Tracker as our agile workflow manager. This means that we can all see exactly what’s planned, what needs tweaking and what’s coming next, as well as getting immediate updates on the state of the current iteration. Since this is a JISC project, we’ve made the whole thing public so you can see how we’re getting on.

So what’s been happening recently? Here’s a quick breakdown.

  • Catalogue import now uses the full MARC record instead of a stripped-down ‘friendly’ version, giving higher quality data and metadata.
  • Individual items in the catalogue and journals collections now have their own information pages. The same for Repository items is coming soon.
  • Item pages sport COinS metadata for ease of referencing and OpenURL lookups. Give it a whirl with any COinS-compatible browser plugin, like Zotero in Firefox. (There’s a sketch of how a COinS span is built at the end of this list.)
  • Item pages all have a huge set of social media tools baked right in, allowing easy sharing and bookmarking.
  • We now know where an e-book is available, and highlight it accordingly in search and on the item page.
  • Cover images are now available for both books and a limited (Elsevier) set of journals.
  • Catalogue item pages have links to Google Books previews where they are available.
  • We now understand different media types (with more still to be added), which means things like videos are highlighted in search results. Soon they’ll be adjustable in search.
  • Tweaked the default search weightings to provide slightly more accurate results.
  • Pulling data from OpenLibrary for items with valid ISBNs provides a richer experience in the “Other Resources” section.
  • Book cover images are now coming from OpenLibrary, giving higher quality and a generally wider range.
  • Search data sets are now much cleaner in terms of character encoding and special character escaping, giving much richer international/foreign character support.
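
As promised above, here’s a sketch of how a COinS span like the ones on our item pages is put together: the bibliographic data is packed into an OpenURL ContextObject and URL-encoded into the title attribute of an otherwise empty span, which plugins like Zotero then pick up. (This follows the standard COinS convention; it isn’t Jerome’s exact code.)

```python
# Build a COinS span for a book: an empty <span class="Z3988"> whose title
# attribute holds a URL-encoded OpenURL ContextObject.
from urllib.parse import urlencode

def coins_span(title, isbn):
    ctx = {
        "ctx_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:book",
        "rft.btitle": title,
        "rft.isbn": isbn,
    }
    # urlencode also handles the special-character escaping mentioned above.
    return f'<span class="Z3988" title="{urlencode(ctx)}"></span>'

print(coins_span("An Example Title", "9780000000000"))
```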

What’s coming next? We’re meeting soon to decide the key points for the next iteration but in the meantime I can reveal that we’ll be busting out our list-fu (reading lists, citation lists, pick lists and more), our first user-aware tools (custom search weighting sets and history!), journal contents, richer subject data, browsing by subject, similar books and more.

Stay tuned!