Posts Tagged ‘orangutan’

Following in Jerome’s footsteps

Posted on July 27th, 2011 by Paul Stainthorp

As the formal end of the Jerome project looms, this blog post is aimed at other people who might want to take a similar approach to breaking data out of their library systems.

From the start, Jerome took an approach that can be summed up as:

  • Use the tools and abilities that you have to hand. Go for low-hanging fruit and demonstrate value early on.
  • Follow the path of least resistance. If it works, it’s fine. Route around problems rather than fighting against them.
  • Consider the benefits of a ‘minimally invasive’ approach to library systems. Use the web to passively gather copies of your data without having to make changes to your existing root systems (such as LMSes).

What tools would people need to take this approach?

1. Technical ability. Jerome would have got nowhere without the coding abilities of University of Lincoln developers Nick Jackson and Alex Bilbie, and the approach they take to development (see below). It’s fair to say that their involvement has been something of a culture-shift for the Library: they have brought a fresh approach to dealing with our systems and data.

2. An agile, rapid, iterative project methodology and a suite of (often free; always web-based) software tools to support that way of working. This probably can’t be overstated: a ‘traditional’ project management methodology just wouldn’t have worked for Jerome.

3. An understanding of where data resides in your current systems. We’ve had to become uncomfortably au fait with the structure of SirsiDynix’s internal data tables, MARCXML, OAI-PMH, RIS and all sorts of other unpleasantness. You’ll also need an awareness of the rich ecosystem of third-party library/bibliographic/other useful data that exists on the web.

4. Related: a willingness to try different approaches to getting hold of this data (the absolutely-anything-goes, mashup-heavy approach). APIs: great. Screen-scraping: yeah, if we must. SQL querying, .csv dumps, proprietary admin interfaces: all fine. Don’t be precious about finding a way in. By far the most important thing is to provide the open data service in the first place. Things can always be tidied up and rationalised at a later date.
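In that anything-goes spirit, here is a rough sketch of what passively gathering records over OAI-PMH might look like from the command line. The endpoint URL is invented for illustration; substitute your own catalogue’s OAI base URL and preferred metadata prefix:

```shell
# Fetch one page of MARCXML records from a (hypothetical) OAI-PMH endpoint
curl -s 'http://library.example.ac.uk/oai?verb=ListRecords&metadataPrefix=marc21' -o page1.xml

# Pull out the resumptionToken, which you feed back in to request the next page
grep -o '<resumptionToken>[^<]*</resumptionToken>' page1.xml | sed 's/<[^>]*>//g'
```

Loop on the resumptionToken until the repository stops returning one, and you end up with a copy of the whole dataset without touching the LMS itself.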

5. A box to run things on. This doesn’t have to be a large institutional server: we’ve successfully run Jerome on a Mac Mini.

6. Finally, the use of blogs—such as this one—and social media to engage a (self-selecting, admittedly) community of potential users and fellow-travellers.


The developers’ way of working (mentioned above) boils down to:

  • Use tools such as Pivotal Tracker and mind maps to capture requirements and turn them into a plan for development.
  • Meet regularly to review the plan and push the ideas on to the next stage.
  • Decide what’s ‘out of scope’ early on so you can concentrate on maximum value.
  • Realise that there’s value in releasing even part of your data—for example, a list of ISSNs—which you can exploit immediately without having to worry about issues (e.g. third-party copyright in records) that might affect your complete dataset.
  • Blog about what you’re doing. Little and often is the way to go.

What to avoid:

While it’s important to be aware of the bigger picture, don’t get too distracted by the way things are being done elsewhere.

A lot of the open data movement seems to be closely tied up with providing access to large quantities of Linked Data (the dreaded RDF triple store!), and initially I was worried that because we were not taking that approach we were somehow out of step. (In fact, I think that our not concentrating on Linked Data has allowed Jerome to explore the open data landscape in a different and valuable way, i.e. the provision of open bibliographic data services via API – see Paul Walk’s slides on providing data vs. providing a data service. I know which side my bread’s buttered: Open Data ≠ merely open; Open Data = open and usable.)

Similarly, a lot of the discussion around third-party intellectual property rights in data has been phrased in terms of negotiating with those third parties to release the data openly (or taking a risk-assessment approach and releasing it anyway!). Jerome took a different approach: first, release openly those (sometimes minimal) bits of data which we know are free from third-party interest; then use existing open data sources to enhance the minimal set. What we end up with are data that may differ from the original bibliographic record, but which are inherently open. It’s not a better approach, just a different one.

Who you will need to have ‘on side’:

  • Your systems librarian, or at least someone who has a copy of the manual to your library catalogue!
  • A cataloguer, to explain (and occasionally excuse) the intricacies of MARC.
  • A university librarian / head of service who is willing to take risks and sanction the release of open data.
  • A small-but-thriving community of developers, mashup-enthusiasts and shambrarians (see above), including people on Twitter who can act as a sounding board for ideas.
  • Ideally, your project team should include as many people as possible who do not have a vested interest in the way libraries have historically got things done. Much of Jerome’s value arises from taking an approach that’s different from the library norm.

How To: PHP and MongoDB on OS X Server 10.6

Posted on July 21st, 2010 by Nick Jackson

As you may have spotted, yesterday we gained a brand-new shiny Mac Mini Server (with an asset sticker on the front), running a virgin install of OS X Server 10.6. This is the story of how you get it to run the things we need for Jerome.

1. Update

First of all, update OS X using Software Update. It’s not difficult, and it fixes any glitches which have been spotted. This is a development box, so we’re not too bothered about an update breaking things in a most spectacular fashion.

2. Enable PHP

OS X 10.6, although it ships with Apache 2, doesn’t ship with PHP enabled by default. Fortunately PHP5 is bundled and mostly ready to go; it just needs turning on. I found an excellent walkthrough on Foundation PHP which covers everything you need to do to get PHP switched on and running.

Basically, it’s enabling the PHP5 module in /etc/apache2/httpd.conf and copying /etc/php.ini.default to /etc/php.ini. There’s a bit more on enabling some error notices, but that’s more PHP configuration than actual enabling.
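For reference, the whole job comes down to a couple of commands, roughly as follows. The exact module line can vary between OS X releases, so check your own httpd.conf before trusting the sed pattern (and note the BSD-style `sed -i ''`):

```shell
# Back up Apache's config, then uncomment the PHP 5 module line
sudo cp /etc/apache2/httpd.conf /etc/apache2/httpd.conf.bak
sudo sed -i '' 's/^#\(LoadModule php5_module\)/\1/' /etc/apache2/httpd.conf

# Give PHP a config file to read
sudo cp /etc/php.ini.default /etc/php.ini

# Restart Apache to pick up the changes
sudo apachectl graceful
```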

3. MacPorts

MacPorts provides a repository-like way of installing things in OS X, including MongoDB (which we need later).


Before installing MacPorts you need to head to the Mac Dev Center and grab Xcode. It’s free (but quite a large download); you just need to register. Make sure you install it with the UNIX Development Tools enabled, or else MacPorts will throw a wobbler when you install it.

We’re not using Xcode for anything else, but MacPorts needs it for the compilation tools.


Installing MacPorts is stupidly easy. Download the installer, and run it.
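Once the installer has run, getting MongoDB itself is a single port command. Something like the following should do it; the data and log paths here are our own choices rather than MacPorts defaults, so adjust to taste:

```shell
# Refresh the ports tree, then build and install MongoDB
sudo port selfupdate
sudo port install mongodb

# Give mongod somewhere to keep its data, then start it in the background
sudo mkdir -p /opt/local/var/db/mongodb
sudo mongod --dbpath /opt/local/var/db/mongodb --fork --logpath /opt/local/var/log/mongod.log
```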


Don’t call it a monkey…

Posted on July 20th, 2010 by Nick Jackson

Our un-server (referred to as the orang-utan within Jerome circles) has arrived, and is currently sat on my desk being configured ready to go. Enjoy some obligatory photos, and cry as you realise that, whilst checking the server in, the ICT Service Desk decided to put the asset label on the very front panel.