In my Library Automation class yesterday, the concept of satisficing came up.
Digression: satisficing is where I feel most acutely the cultural conflict between the librarians I read and talk with in school, and the software geeks I socialize with. So any time that comes up, there’s a lot going on in my head.
Someone noted how the nature of research was changing as new search tools become available — not, to be tactful, that the quality was suffering, but that people are drawn to accessibility over exhaustivity. A favorite classmate of mine leaned over and said, “How is that quality not suffering?”
Well, class is not the time to go into that, but here’s my answer to her:
It depends.
Making search easier, making records and then content more accessible, means that more searches come up with something. It means that people are more prone to treat searching for information as a realistic tactic. It means that the generation of ideas, and the development of content and other products based on those ideas, is easier. It means we will have a world with more generation, more creativity, more content, more entrepreneurship.
And that content will cover our world with information kudzu which, like kudzu, will often have to be macheted away. Some of that content, those prototypes, those ideas, will be horribly flawed (broken, misleading, decontextualized) because they were based on incomplete or inaccurate information. But sometimes, the idea that exists, the product that exists, even if broken, is better than the idea or product that does not. I’m typing this on a browser with bugs on an operating system with bugs on hardware that’s getting increasingly apoplectic, but my life is better for having these.
So satisficing, yes, you are my little love for what you bring to our lives. But I think the cataloguers and old-school library theorists of the world have a very real point as well when they decry you. Because sometimes, the incomplete search really isn’t enough. There are objectives and applications for which good-enough is good-enough, but if I’m talking academic research (at least, past the undergraduate level)? If I’m talking, good heavens, medical research? Intelligence and security work? I would really rather the investigators not satisfice. And to this extent, the easy availability of patchy search, the least-effort temptation, really is a problem, and even a threat.
So there you go, M: the answer behind my expression.
In science and engineering, background research is usually the first stage of a project, when deciding what to do and how to proceed: you want to know whether someone has already done the preliminary work, and whether someone has already satisfactorily done the proposed work. When you find a potentially relevant paper, there are two outcomes of interest: first, the older work proves useful, allowing the researcher to avoid reinventing the wheel; second, the older work seems to negate the current work (either by disproving something or by claiming to have already done what is proposed).
Here’s the problem: just because something was published doesn’t mean it was actually done, or done well. In fact, Sturgeon’s Law applies here in a big way: 90% of everything is crap, and academic publications are no exception. People publish so they can say they published, not because they accomplished something, and it’s often difficult to tell the good from the bad in a timely fashion. And work that was good for its day may well have a better solution using modern methods, materials, or knowledge.
Satisficing then becomes extremely important: if your research is *too* thorough, you’re guaranteed to dredge up dross, work by people who got nowhere but needed to publish to mollify their sponsors or their employers, or to get tenure, or simply because they didn’t know any better. (Of course, if your research is too shallow, it won’t help in the ways it’s supposed to.) So it seems to me that for engineering research, “good enough” is perfect.
Following John’s comment, I’d add that it all depends on posing questions intelligently. The petabytes of information searched for us within seconds *can* give good answers to relative questions; they can very rarely give good answers to absolute questions.
For example, these are questions that a search can answer well:
* Of some large and heterogeneous set of texts, is word form X more common than form Y?
* Among the sociology articles indexed by the X search engine, what is the most commonly-cited source?
etc.
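To make the distinction concrete: the first kind of relative question reduces to counting within a corpus you actually hold, which a machine can do exhaustively. A minimal sketch in Python, with a hypothetical three-text corpus standing in for the “large and heterogeneous set of texts” (the texts and word forms are invented examples):

```python
import re

# A toy stand-in for a "large and heterogeneous set of texts"
# (the texts and word forms here are hypothetical examples).
corpus = [
    "Whilst the cataloguer worked, the searcher satisficed.",
    "While some searched exhaustively, others stopped at good enough.",
    "While the index grew, access outpaced exhaustivity.",
]

def count_form(texts, form):
    """Count case-insensitive whole-word occurrences of a word form."""
    pattern = re.compile(r"\b" + re.escape(form) + r"\b", re.IGNORECASE)
    return sum(len(pattern.findall(text)) for text in texts)

# The relative question: within this set, is form X more common than form Y?
print(count_form(corpus, "while"), count_form(corpus, "whilst"))  # 2 1
```

The answer is trustworthy precisely because the question is bounded by the set in hand; the absolute questions below would require evidence of absence across everything ever printed, which no index can supply.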
And these are questions that a search can rarely answer well:
* has the expression X EVER appeared in print?
* Has anyone EVER cited source X?
* How many editions of _The Pilgrim’s Progress_ were published in the 19th century?
etc.
So as long as you *understand* the tao of Search, you can make good and responsible scholarly use of it. Of course, most people, including many (most?) researchers, *do not* understand the tao of Search; hence Sturgeon’s Law, to which John has already had recourse above.