A blog post at Walking Paper raises the question: what are the best statistics for measuring libraries? It points out that circulation statistics, while heavily used, are limited and limiting.
Not sure what I make of his approach — it seems to conflate “statistics you show to the general public” with “statistics you show to oversight bodies,” while leaning toward the former — but it’s a good question.
What statistics would you use to measure library performance?
(h/t Librarian in Black)
3 thoughts on “library statistics”
On the theory that “you get what you measure,” surely this depends centrally on the mission of your library. It’s not a question of what to measure; it’s a question of what to value.
For instance, circulation gives you numbers about how well your collection is serving goals related to putting books in the hands of patrons. To the extent that a library’s goals are related to curating a big physical pile of bound paper, circulation serves the statistical need. The linked post, contrariwise, suggests that the library’s goal is to serve some unspecified “community and individual goals”, which it should measure by anecdotes (“Having people in the community tell their stories”? Put that in a linear regression and smoke it, statheads!). To the extent that a library’s goals are related to serving some sort of community and individual need for narratives of library-related successes, this is a reasonably good measure.
That is, unless you are blindly collecting statistics unrelated to your mission, the dispute is not about statistics; the dispute is about goals. Specify or discern the goals first, then construct the metrics to fit them.
The one cautionary thing I would point out, though, is that mechanically generated statistics like circulation are a lot more egalitarian than a search for positive community anecdotes. The boring, inarticulate person holed up in a carrel doing boring work on a boring topic still gets a statistical vote in the circ figures. Someone who is never going to “tell his story” in the community is still of value. It seems to me that this is a not inconsiderable benefit of quantitative, impersonal measures over ones based on surveys, narratives, and anecdotes.
What Grant said, totally.
Also, measuring circulation has a positive bias: you measure only successful transactions. That’s fine and dandy, but what about unsuccessful transactions? E.g., people who couldn’t find what they were looking for, or people who didn’t _know_ what they were looking for.
I’m sort of writing a library manifesto these days, and measurement plays a part in it. But beside circulation measurement, I consider strong search and browsing measurements essential to assessing the quality and usability of one’s catalog and metadata resources, and of course of the user interface itself.
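To make the “unsuccessful transactions” idea concrete, here is a minimal sketch of one such search measurement: the zero-result rate of a catalog search log. The log data and field layout are hypothetical — just an illustration of the kind of metric being described, not any particular ILS’s API.

```python
from collections import Counter

# Hypothetical catalog search log: (query, number_of_results) pairs.
search_log = [
    ("dune herbert", 14),
    ("quantum knitting", 0),
    ("local zoning history", 0),
    ("dune herbert", 9),
    ("quantum knitting", 0),
]

# Failure rate: the share of searches that returned nothing.
failed = [query for query, hits in search_log if hits == 0]
failure_rate = len(failed) / len(search_log)

# The most common failed queries point to gaps in the collection
# or the metadata, not just in patron search skill.
top_failures = Counter(failed).most_common(3)

print(f"failure rate: {failure_rate:.0%}")      # → 60%
print("top failed queries:", top_failures)
```

Unlike circulation counts, a metric like this surfaces the patrons the library *didn’t* serve, which is exactly the bias the comment is pointing at.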