objectivity vs. transparency

The always fascinating David Weinberger blogs on transparency vs. objectivity. Worth reading the whole thing — the argument gets deeper as it goes along. But here’s the part where I really started thinking:

Transparency prospers in a linked medium, for you can literally see the connections between the final draft’s claims and the ideas that informed it. Paper, on the other hand, sucks at links. You can look up the footnote, but that’s an expensive, time-consuming activity more likely to result in failure than success. So, during the Age of Paper, we got used to the idea that authority comes in the form of a stop sign: You’ve reached a source whose reliability requires no further inquiry.

Hence — to move the opening sentences from that paragraph to the close:

We thought that that was how knowledge works, but it turns out that it’s really just how paper works.

Of course just about anyone nerdy enough to chase footnotes knows that appeal to authority is a fallacy, but he’s got a point there: when verification is hard, you’re more likely to rely on the authority of the source, to seek out authorities who are trustworthy (or who have a cultural aura of trustworthiness clinging to them, like his newspaper example — at least for certain newspapers), and to build an intellectual edifice that depends on your ability to, well, trust without verifying. Blogs let wacky opinionated perspectives proliferate, but linking and searching substantially lower the cost of verifying, so objectivity’s role and importance decrease.

(The searching is key, though — link ecologies can, I expect, be navel-gazing, and they often do a poor job of getting beyond our love of confirmation bias…)

So where’s the library connection? Libraries have historically been, I think, edifices built on objectivity. We’re the neutral observer. We’re the place you can trust, full of the sources you can trust. Authoritative knowledge! Come and get some.

I come across a lot of articles in my class readings written by librarians who are clearly getting the thrashing heebie-jeebies from this transition away from objectivity (and also, as it happens, comprehensiveness). Tagging, from faceless wild-west Internet crazies, versus sober and structured subject headings, assigned by trained experts? Wikipedia…(same argument)? And I admit, when I was teaching, it was frustrating to see my students head straight for Google when we went to our beautiful library with its excellent collection…

…but it wasn’t because they were going to Google over books; it was because they were going to Google without having developed the sophisticated cognitive apparatus you need when you can’t just trust a source. They didn’t have tools for evaluating the reliability of sites, nor even for situating their content within a broader body of knowledge they could have used to do that evaluation. Appeal to authority is lame, logically speaking, but it’s a good starting place while you work on appeals to your own intuition.

Anyway, that’s a digression. The point is, libraries have, I think, bought heavily into this culture of objectivity — historically, culturally, even architecturally. Many librarians relish their roles as gatekeepers; they want the catalog and metadata that give you brilliantly precise searching, if only you will master the idiosyncratic syntax, and then bemoan users’ tendency to flock to an unadorned search box and keyword-search without a delimiter in sight — something users can do by themselves and, increasingly, anywhere.

I don’t think a lot of librarians, or libraries, know how to position themselves in this shift. So, ideas? What’s the role of a cultural institution, a neoclassical edifice, a, dare I say, neutral authority in a world of omnipresent always-on kudzu-like explosions of transparent information? Can the question even be answered with that set of adjectives and nouns? If not, how do they change?

why is serials recordkeeping so problematic?

I’ve been reading this post, from the charmingly named In the Library with the Lead Pipe.

The part I’ve been munching through: apparently it’s really, really hard for libraries to keep track of their electronic serials and database usage. If you want to know which of the things you’re subscribed to are actually getting used and how (and what it’s costing you), strap yourself in for a long ride, because ILSes don’t have rich enough functionality to harvest that information for you. Some people buy additional systems on top to help, but even those require a lot of work if you want to extract useful data.

There are some good reasons for this. Libraries frequently subscribe to databases or journals as bundles (and may be required to do so by the publisher), and the vendor’s usage data may not be disaggregated by resource within the bundle. Libraries may also subscribe as part of a consortium but need usage data broken out for their individual institution.

Still, though. This seems like a pretty obvious thing to want to do — keep track of your actual use! So why do the tools not support it? I welcome ideas from people who actually know something (which is not me!), but in the meantime, I’ll brainstorm some possibilities…

  • It’s a genuinely hard technical problem. (And there are a lot of problems that need to be solved here — not just capturing the data, from subscription systems that apparently don’t natively provide it, but organizing it into a database that answers users’ questions, has a usable front end, and spits out data in formats useful for budgeters and other decision-makers. That’s not one system — that’s multiple interacting systems, possibly produced by different organizations — and potentially problems that have to be re-solved for every database vendor and ILS combination. OK, it’s even harder than I realized when I started this bullet point and was just thinking about algorithms. There’s a rough sketch of what such a pipeline might look like after this list.)
  • Libraries don’t prioritize recordkeeping and review of their serials and databases enough to exert pressure (market, social, cultural) on companies to develop this feature.
  • ILSes offer a tremendous number of features; while libraries might want better serials tracking, they care more about those other features, so it’s those things that ILSes are competing on. (Although this doesn’t answer why an ILS that does well on those things, *and* on serials, doesn’t emerge and stomp on its competitors. But maybe it’s too hard (algorithmically or monetarily) to do that.)
  • This is a place where the culture clash between librarians and programmers is showing; maybe they just aren’t talking to one another enough for the user needs to become apparent. Again, you’d think this would be a place for the company (or open source project) that does a good user needs analysis to eat its competition — and there are niches where librarians and programmers overlap — but all too often the two groups don’t seem to even have a common vocabulary.
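
To make that first bullet concrete, here’s a minimal sketch of what a “capture, store, report” pipeline might look like, in Python. Everything specific is an assumption on my part: the CSV column names, the folder layout, the cost figures, even the idea that each vendor can hand you a tidy per-title usage report at all. Real vendor reports and real acquisitions data are nothing like this neat.

```python
# Hypothetical sketch: load vendor-exported CSV usage reports into a small
# database, then produce the cost-per-use summary a budgeter might want.
# Column names, file names, and costs are all made up for illustration.

import csv
import sqlite3
from pathlib import Path


def load_usage_reports(db: sqlite3.Connection, report_dir: Path) -> None:
    """Load each vendor's CSV usage report into one 'usage' table.

    Assumes every CSV has 'title', 'year', and 'requests' columns --
    a stand-in for whatever the vendor actually provides.
    """
    db.execute(
        "CREATE TABLE IF NOT EXISTS usage "
        "(vendor TEXT, title TEXT, year INTEGER, requests INTEGER)"
    )
    for report in report_dir.glob("*.csv"):
        vendor = report.stem  # e.g. 'bigpublisher_2009.csv' -> 'bigpublisher_2009'
        with report.open(newline="") as f:
            for row in csv.DictReader(f):
                db.execute(
                    "INSERT INTO usage VALUES (?, ?, ?, ?)",
                    (vendor, row["title"], int(row["year"]), int(row["requests"])),
                )
    db.commit()


def cost_per_use(db: sqlite3.Connection, subscription_costs: dict[str, float]):
    """Yield (title, total requests, cost per use) for each title.

    Costs are a hand-maintained dict here; in practice they'd come from
    the acquisitions side of the ILS, which is part of the problem.
    """
    rows = db.execute("SELECT title, SUM(requests) FROM usage GROUP BY title")
    for title, total in rows:
        cost = subscription_costs.get(title)
        if cost is None or total == 0:
            yield title, total, None  # can't compute: unknown cost or no use
        else:
            yield title, total, round(cost / total, 2)


if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    load_usage_reports(db, Path("vendor_reports"))   # hypothetical folder of CSVs
    costs = {"Journal of Example Studies": 1200.00}  # made-up figure
    with open("cost_per_use.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["title", "requests", "cost_per_use"])
        for row in cost_per_use(db, costs):
            writer.writerow(row)
```

Even this toy version hand-waves past the hard parts named above: getting the reports out of the vendors in the first place, disaggregating bundled or consortial numbers, and keeping the cost data in sync with what acquisitions actually paid.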

Ideas?