Best Value and Better Performance in Libraries

A: How library service managers can get to grips with assessing the impact of services

A2: Where are you now?

A2.1: Are the concepts clear?

The language and underpinning concepts of performance measurement are often arcane and sometimes confusing. This is probably because the basic metaphor of performance indicators and targets, and the associated vocabulary, were developed by engineers looking at system efficiency, and then appropriated by accountants to simplify and ‘measure’ the complex world of managing organisations.

It is hardly surprising that the concepts of performance measurement work reasonably well when the library is considered as a more or less efficient operating system. Unfortunately for the accountants (operating in their guise of management consultants), libraries do not exist simply to perform efficiently. The performance metaphor begins to break down when we look at what the service is trying to achieve and how well it is doing so – that is, when we move beyond easily measured ‘outputs’. In seeking a way forward, we would like to stress that performance measurement based on system efficiency is important for libraries and should run alongside any work on assessing impact.

A2.2: Performance and other indicators

A performance indicator is a statement against which achievement in an area of activity can be assessed. It should provide at least one of the following:

  • information about the performance of a system
  • information about the central features of a system
  • information on potential or existing problem areas
  • information that is policy relevant.

Performance indicators are usually problematic and take time to develop. Gray and Wilcox³ described PIs as “socially constructed abstractions arising from attempts to make sense of complex reality.”

If you start off listing criteria by which the service might be judged, you will end up with a list of specific qualitative statements (e.g. ‘links to the community’; ‘equality of access to resources’). These then form the basis for creating indicators, which are assessable and more quantifiable. It is important to start with the judgement criteria rather than with the indicators.

When trying to measure impact, it may be extremely difficult to move from the qualitative criteria into more quantifiable indicators. If so, you may have to leave these at the level of criteria for success. We need to make judgements through measurement and qualitative assessment. You may prefer to describe qualitative impact indicators as ‘success criteria’.

Indicators (including success criteria) should:

  • be as few as possible
  • allow meaningful comparisons to be made over time
  • cover significant parts of the activities of the service (not all or even most, according to Gray and Wilcox)
  • reflect the existence of competing priorities.

Some other characteristics of good indicators are listed in Section C.

A2.3: Where impact and achievement fit into the picture

[Figure 1: relationships among library performance measurement, impact assessment, benchmarking and value for money]

Figure 1 outlines our view of the relationships amongst library performance measurement, impact assessment, comparative ‘benchmarking’ and a broader view of ‘value for money’ services.

A2.4: Inputs – processes – outputs

The ‘input-process-output’ link describes the main tool for assessing system efficiency. To give a library example, the inputs to the system might be the total number of books purchased in a given year; the processes would be everything entailed in getting the books to the service point from which they are borrowed; the outputs would be the number of books issued on loan during the same (or subsequent) years.

There are, of course, already a number of complex factors present in this apparently simple example – how to define ‘books’, how to allow for the staff, equipment and infrastructure (buildings, heat, light, transport) costs of processing the books, and what constitutes a loan are just a few of these. Unlike most ‘production lines’, the library inputs, processes and products are less than straightforward – what about the inputs other than books that are processed in different ways for purposes other than loan? However, you can apply this basic approach to many of the activities undertaken by libraries to give at least part of the story.
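
As a minimal illustration of this basic approach, the sketch below (in Python) relates simple inputs and processes to output measures such as cost per loan. All of the figures are invented for the example; they are not drawn from any real service:

    # Input-process-output sketch of system efficiency; all figures are invented.
    books_purchased = 12000      # input: books bought during the year
    purchase_cost = 96000.00     # input: direct purchase cost (assumed)
    processing_cost = 18000.00   # process: assumed cost of getting books to the service point
    loans = 540000               # output: books issued on loan during the year

    cost_per_loan = (purchase_cost + processing_cost) / loans
    loans_per_book = loans / books_purchased
    print(f"Cost per loan: {cost_per_loan:.2f}")               # 0.21
    print(f"Loans per book purchased: {loans_per_book:.0f}")   # 45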

A2.5: Outcomes

Moving on to ‘outcomes’ (in performance measurement-speak), the analogy with the engineering system begins to break down. It is possible to point to ‘higher order’ or ‘longer term’ effects of the processes explored in the basic model, but not with any certainty that the effects identified (e.g. a high proportion of ‘readers for pleasure’ in the community) are caused by the outputs that are being tracked. We will return to this issue of establishing ‘cause and effect’ below. Meanwhile, we have chosen to recognise the inherent difficulties in measuring performance at this level by using different terminology – we prefer to talk about assessing the impact of services and about achieving service aims.

A2.6: Best Value

Best Value has been shown at two points in the model. This reflects the tension between the stated intentions of Best Value to stretch and challenge services and to look at effectiveness and quality (positioning the reviews at the impact end of the continuum) and the limited interpretation by some authorities as a form of service rationing focusing on economy and efficiency (hence the positioning in the input – output link).

This limited interpretation has been reinforced by the fact that two out of the three initial public library service Best Value indicators (for 2000-2001) are at the input – output level (the cost per visit to public libraries and the number of visits per head of population). The other indicator (the percentage of library users who found the book/information they wanted, or reserved it, and were satisfied with the outcome) could be regarded as an impact indicator or as a slightly cumbersome output indicator.
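
For illustration, the two input – output indicators are simple ratios. Here is a minimal Python sketch, using placeholder figures rather than real authority data:

    # The two input-output Best Value indicators as simple ratios; placeholder data.
    total_service_cost = 2400000.00  # all costs of the library service in the year (assumed)
    visits = 800000                  # recorded visits to public libraries
    population = 250000              # resident population served

    cost_per_visit = total_service_cost / visits   # BV indicator: cost per visit
    visits_per_head = visits / population          # BV indicator: visits per head
    print(f"Cost per visit: {cost_per_visit:.2f}")    # 3.00
    print(f"Visits per head: {visits_per_head:.1f}")  # 3.2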

You can apply Best Value as a powerful management tool to encourage wide ranging review based on collecting real data about what matters – but you may have to fight for that interpretation.

A2.7: Benchmarking

Service benchmarking, or systematically comparing your service performance with others in order to seek ‘best practice’, still draws upon engineering terminology but moves us into more problematic areas.

Two types of service benchmarking are commonly used:

  • data (or output) benchmarking, which relates the inputs (resources) to the outputs of a defined service. Data benchmarking for different library services, based on comparing their inputs and outputs (for example, the direct purchase costs of loan materials in relation to the total loans in a given year), may be fairly readily achievable. However, it can become messy if the comparisons are more global (all costs of delivering all library services against annual issues). Does low cost per loan equal a good service (or an efficient one)? (A minimal sketch of such a comparison follows this list.)

  • process benchmarking, which looks at what you do with the inputs to try to achieve specific outputs (alternative ways of using resources to get particular effects). Process benchmarking with other library services offers some basis for comparing what processes are undertaken and how (e.g. between the point when a book is selected for purchase and its first loan). However, you should not go on to assume that the simplest or cheapest set of processes is automatically the best. Process benchmarking conducted with similar services (in size or state of development) is likely to be productive if it also entails discussion with other services about any emergent differences between them.
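
Here is a minimal data benchmarking sketch of the kind referred to above; the service names and all figures are hypothetical:

    # Hypothetical data benchmarking: direct purchase costs against annual loans.
    services = {
        "Service A": {"purchase_cost": 90000.00, "loans": 500000},
        "Service B": {"purchase_cost": 120000.00, "loans": 450000},
        "Service C": {"purchase_cost": 75000.00, "loans": 300000},
    }

    for name, figures in services.items():
        cost_per_loan = figures["purchase_cost"] / figures["loans"]
        print(f"{name}: {cost_per_loan:.3f} per loan")

    # Caution: the lowest cost per loan does not, by itself, show the best
    # (or even the most efficient) service; it says nothing about quality or impact.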

To introduce another layer of complexity, advocates of both data and process benchmarking urge managers to make comparisons with the best comparable organisations in any sector – not just libraries.

Larry Brady, Executive Vice President of FMC, one of the biggest US corporations, is scathing about most data benchmarking but is a strong advocate of process benchmarking.⁴ “We ask our managers to go outside the organisation and determine the approaches that will allow achievement of their long-term targets,” he said. “We want to stimulate thought about how to do things differently to achieve the target rather than how to do existing things better.”

This approach chimes very well with the ‘challenge’ element of Best Value.

One of the keys to successful benchmarking is to do it at the right time. You will be ready for benchmarking after you have systematically worked out your service aims and priorities. This should help in focussing on things that other people are doing that you might not otherwise have thought about. If benchmarking is started too early it may push you into someone else's agenda and solutions – you may get hooked into their outputs and processes which may not achieve the impact you want.

A2.8: Benchmarking and Value for Money

Both Best Value and the Value for Money movement are prone to being misinterpreted as being just or mostly about efficiency (doing things better with the same resources or maintaining a standard with fewer resources). The intention of both approaches should also be to look at effectiveness. This can, of course, be difficult if an authority starts from the premise that it will save all the costs of its Best Value programme from savings made by that programme.

Looking at the idea of value for money in very broad terms, we suggest that benchmarking on the basis of inputs related to outcomes (or in our terms impacts) offers better prospects than data or process benchmarking. Unfortunately there is a prior requirement – we all need to get better at assessing impact! In case you think we are taking a short flight into cloud cuckoo-land, we are not alone in this view. The Cabinet Office, the Treasury, the National Audit Office and the NHS Executive are all showing great interest in the prospect of benchmarking on the basis of impact and achievement.⁵ However, nobody is doing it well – yet!

Real VfM is secured when your service is having the impact you and your stakeholders want, using the most efficient means. VfM should link inputs and processes to outputs and impact.

3. GRAY, J. and WILCOX, B. ‘Good school, bad school’: evaluating performance and encouraging improvement. Buckingham: Open University Press, 1995. ISBN 0-355-19489-3.

4. KAPLAN, R.S. and NORTON, D.P. ‘Putting the balanced scorecard to work’. Harvard Business Review, Sept.-Oct. 1993.

5. Achieving effective performance management and benchmarking in the public sector. QMW Public Policy Seminars, University of London, 14 October 1999.