Friday 9 March 2012

Beagrie's Metrics

We're aiming to deliver a set of enhancements to Linnean, but how will we know whether they worked? One of the aims of the ELO project is to measure the results of the programme of enhancements in terms of tangible benefits to Linnean and its stakeholders. We need a framework that lets us measure the results of this before-and-after process.

Our current thinking is that we could adapt the Beagrie metrics published in Benefits from the Infrastructure Projects in the JISC Managing Research Data Programme, which were devised to measure the value of research data to an HEI.

The institutions that Beagrie worked with were asked how their lives would improve if their research data were better managed. Data management planning is a wide-ranging process that includes preservation as one of its outcomes. Those consulted were quick to produce lists of potential benefits, but found it rather harder to come up with reliable means of measuring them.

Even so, the report came up with a very credible list, organised under the names of the stakeholders who would benefit the most. A little tinkering with that table allows us to put Linnean at the top of the list as the main beneficiary. We also know Linnean has researchers, and that they are concerned with scholarly access. This suggests a framework like the one below might work for us (a rough sketch of how we might record its before-and-after values follows the lists).

Benefits Metrics for Linnean
  • New research grant income
  • Number of research dataset publications generated
  • Number of research papers
  • Improvements over time in benchmark results
  • Cost savings/efficiencies
  • Re-use of infrastructure in new projects

Benefits Metrics for Researchers
  • Increase in grant income/success rates
  • Increased visibility of research through data citation
  • Average time saved
  • Percentage improvement in range/effectiveness of research tool/software

Benefits Metrics for Scholarly Communication and Access
  • Number of citations to datasets in research articles
  • Number of citations to specific methods for research
  • Percentage increase in user communities
  • Number of service level agreements for nationally important datasets
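
To make the before-and-after idea concrete, here is a minimal sketch of how the framework might be recorded and reported, assuming we re-measure each metric after the enhancements bed down. All class names and baseline figures are hypothetical, invented for illustration; only the grouping by stakeholder comes from the lists above.

    from dataclasses import dataclass

    @dataclass
    class Metric:
        """One benefits metric, measured before and (later) after the enhancements."""
        name: str
        baseline: float                  # value before the ELO enhancements
        follow_up: float | None = None   # value after, once it becomes measurable

        def change_pct(self) -> float | None:
            """Percentage change from baseline, or None if not yet re-measured."""
            if self.follow_up is None or self.baseline == 0:
                return None
            return 100.0 * (self.follow_up - self.baseline) / self.baseline

    # Metrics grouped by stakeholder, as in the lists above (placeholder values).
    framework = {
        "Linnean": [
            Metric("Research dataset publications", baseline=12),
            Metric("Re-use of infrastructure in new projects", baseline=1),
        ],
        "Researchers": [
            Metric("Average time saved (% of write-up time)", baseline=18.0),
        ],
    }

    for stakeholder, metrics in framework.items():
        for m in metrics:
            delta = m.change_pct()
            status = f"{delta:+.1f}%" if delta is not None else "not yet re-measured"
            print(f"{stakeholder} | {m.name}: {status}")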

The institutions in the report go on to give specific instances of how these metrics apply in their case. For the "average time saved" metric, for instance, the Sudamih project reported:

"In an attempt to measure benefit 1 (time saved by researchers by locating and retrieving relevant research notes and information more rapidly) Sudamih asked course attendees to estimate how much of their time spent writing up their research outputs is actually spent looking for notes/files/data that they know they already have and wish to refer to. The average was 18%, although in some instances it was substantially more, especially amongst those who had already spent many years engaged in research (and presumably therefore had more material to sift through). This would indicate that there is at least considerable scope to save time (and improve research efficiency) by offering training that over the long term could improve information management practices."

However, the report is also clear that any form of enhancement (technical, administrative, cultural) can take some time to bed down before its benefits are even visible, let alone measurable. "Measuring benefits therefore might be best undertaken over a longer time-scale" is one possible conclusion. That is a caveat we'll have to bear in mind, but it doesn't preclude us from devising our own bespoke set of metrics.
