Benchmark to improve

UCISA has run the HEITS exercise to collect benchmark statistics for seventeen years. During that time, members have used the data to help make business cases for funding, for quality assurance purposes, and to compare themselves with their peers. I attended a workshop run by EUNIS’s BENCHEIT working group last week, partly to hear what others were doing in the way of benchmarking, partly to see whether there were lessons we could learn from our peers, and partly to promote the results of the UCISA Digital Capabilities survey.

The Finns compiled their statistics by carrying out an in-depth analysis of the costs of services. This is similar to the approach adopted by the Jisc Financial X-ray – although it takes time to produce the data, particularly when apportioning procurement items and staff costs, it does yield detailed cost figures. It also permits quite fine-grained comparison between institutions. Individual institutions can pick out areas where their costs are markedly different (higher or lower) and can then ask questions of the other participants to establish the reasons for the variation.

The Dutch approach was similar, but they also used the statistics strategically within individual institutions. Whilst they too identified exceptional costs and sought the reasons behind variations, they used the statistics to demonstrate value internally (“the IT infrastructure is only costing x% of the student fee”) and to baseline costs in order to highlight the impact of projects. In both the Finnish and Dutch cases, the statistics prompted an open discussion on the costs of contracts, and where there were significant variations, these were cited in talks with suppliers in order to bring costs down. There seemed to be far more openness with regard to commercial contracts than appears to be the case in the UK – perhaps this is something we need to address?

Whilst the Dutch and Finns largely concentrated on the costs of services, the Spanish adopted a more holistic approach. They too were carrying out cost comparisons, but within an overall framework that assessed the maturity of IT governance and management in the institution. A catalogue of principles, broken down into objectives, each with quantifiable indicators and variables, was used as the basis for the study. Each indicator and variable was fully defined to avoid any ambiguity. The results were then passed back to the institutions, showing their position on each indicator relative to their peers.
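To make the shape of such a framework concrete, here is a minimal sketch of how a catalogue of fully defined indicators and peer-relative reporting might be modelled. Everything in it – the indicator name, the definition text and the figures – is a hypothetical illustration, not the actual Spanish framework.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class Indicator:
    # Hypothetical example of one quantifiable indicator from a catalogue.
    name: str        # e.g. "IT spend as a percentage of institutional budget"
    definition: str  # full, unambiguous definition shared by all participants
    value: float     # this institution's reported value

def peer_position(own: Indicator, peer_values: list[float]) -> str:
    """Report an institution's position on one indicator relative to its peers."""
    peer_median = median(peer_values)
    relation = "above" if own.value > peer_median else "at or below"
    return f"{own.name}: {own.value} ({relation} the peer median of {peer_median})"

# Invented figures for illustration only
spend = Indicator(
    name="IT spend as a percentage of institutional budget",
    definition="Total IT expenditure divided by total institutional "
               "expenditure, expressed as a percentage.",
    value=4.2,
)
print(peer_position(spend, [3.1, 3.8, 4.0, 4.5, 5.2]))
# -> IT spend as a percentage of institutional budget: 4.2 (above the peer median of 4.0)
```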

The one clear message that emerged from the workshop is that it is important not to take raw cost figures as the basis for comparison. There are many reasons for differences in costs – the size of the institution and its mission will be contributing factors, and the CHEITA group have been looking at using these to facilitate international comparisons (more in a later post). Other factors include the quality of the service being provided and institutional drivers – higher costs may be the result of investment in any given year. It is important to have a dialogue in order to understand the context and the underlying reasons for any variation. It is a message that I continue to promote in the UUK benchmarking initiatives: the figures alone do not give the full picture – you need to understand the institutional drivers and the value of that spend in order to make a genuine comparison.
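A toy calculation makes the point: with these invented figures, the institution with the larger raw IT budget is actually the cheaper one once spend is normalised per student – and even the normalised figure is only a starting point for the dialogue described above.

```python
# Invented figures for illustration only – not real institutional data.
institutions = {
    "Institution A": {"it_cost": 12_000_000, "students": 30_000},
    "Institution B": {"it_cost": 7_000_000, "students": 12_000},
}
for name, d in institutions.items():
    per_student = d["it_cost"] / d["students"]
    print(f"{name}: total £{d['it_cost']:,}, per student £{per_student:,.0f}")
# Institution A: total £12,000,000, per student £400
# Institution B: total £7,000,000, per student £583
```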

(also published on the UCISA blog)
