Was just reading the charmingly titled blog post Google Scribe, You Autocomplete Me (via the even more charming Trevor Dawes). Scribe is a new Labs feature that does exactly what you would guess — provides autocomplete suggestions for sentences.
The Chronicle blogger does not like this. Predictably, Scribe, when fed sample text that must have only a small and quirky corpus to draw upon (e.g. “hermeneutics”), suggests weird, only arguably grammatical things. (Though not necessarily less grammatical than the sorts of articles that contain the word “hermeneutics”.)
It is of a piece with posts I read earlier today snarking on Google Instant, or with the torrent of condemnation I have seen about the inaccuracy of Google metadata. (Passim. Seriously.)
And all of this criticism that Google is not doing these things correctly is making me cranky, because I think that it’s missing the point. Because what if the point is not, say, providing research-grade metadata every time? (Crucial when you need it, but generally, people don’t.) What if the point is not even providing correct autocompletions? What if the point is saying — we have data. We have computational powers on a scale incomprehensible only a decade or two ago. What can we do with that?
And if that’s the point — if that’s the game being played — then the way to win it isn’t by being correct: it’s by pushing the boundaries. The way to win is asking — then operationalizing — questions you don’t know the answer to, like Peter Norvig said, flipping the coin, seeing when you come up heads.
One of the striking things about Google is that they flip those coins very publicly, so we all get to see when they come up tails. And then they get criticized for coming up tails, and the criticism rolls off their back and they merrily steamroller along, because the criticism is missing the point. It’s like criticizing a fencer for not intercepting a touchdown pass. Stand there for a fraction of a second armed only with complaints, and you get skewered.
There are some fascinating criticisms out there of Google’s errors. The fiasco over privacy and Buzz, say. Blundering into the unknown means stirring up complex systems in unexpected ways, and the anatomization of that fail can be infinitely intriguing.
But fundamentally, twenty years from now, our understanding of what access to information means will be transformed by how we have internalized the lessons of Google’s bold failures. It will not be transformed by complaints that fencers are bad at football. Even if they are.