Jerry Remy, beloved New England sportscaster and former Red Sox player, wrote something in his book Understanding Baseball that has stuck with me: if your team never gets called out at home, your third base coach isn’t doing his job. That is, if he’s being so conservative about sending runners that he never sends the ones with a risk of failure, he’s also not sending a lot of guys with a chance of success, and you’re avoiding outs by passing up a lot of runs you should have scored.
The same idea came up recently in a Slate interview with Peter Norvig, the director of research at Google:
[Having your company run by engineering rather than sales or marketing means] a very different attitude toward error. If you’re a politician, admitting you’re wrong is a weakness, but if you’re an engineer, you essentially want to be wrong half the time. If you do experiments and you’re always right, then you aren’t getting enough information out of those experiments. You want your experiment to be like the flip of a coin: You have no idea if it is going to come up heads or tails. You want to not know what the results are going to be.
He goes on to talk about how this pervades corporate culture and operations. If you assume there are going to be errors in your code (because there always are), you build a process that can route around that. If you assume your hardware will sometimes fail (because it always, eventually, does), you buy a huge pile of cheap servers instead of a few expensive bespoke machines, and design around them (and, coincidentally, save money). And you build an organization that can minimize the consequences of failure:
We do it by trying to fail faster and smaller. The average cycle for getting something done at Google is more like three months than three years. And the average team size is small, so if we have a new idea, we don’t have to go through the political lobbying of saying, “Can we have 50 people to work on this?” Instead, it’s more done bottom up: Two or three people get together and say, “Hey, I want to work on this.” They don’t need permission from the top level to get it started because it’s just a couple of people; it’s kind of off the books.
(I debated, up there, whether to write “minimize the consequences of failure” or “encourage innovation”. I decided they’re really the same thing. The greater the downside if your innovations fail, the fewer people will be brave enough to try. The more resources (time, money, staff) that have to be committed, put at risk, to try anything new, the fewer new things you’ll try.)
Norvig admits he’s lucky in that the stakes are lower with Google than they might be elsewhere; the “best” 10th result to a search query is highly subjective and it’s OK if you don’t return the same thing each time, in a way it’s not OK if you’re casual about how many zeros go on someone’s bank account. (Though if my bank wants to add a few zeros to my savings account, I’m cool with that. Just sayin’.)
Come to think of it, it’s striking how much subjectivity comes into play in that article. Something like server failure you can attack probabilistically: you may not know which servers will fail, but you can estimate how many, and design a strategy that copes with a quantifiable margin of confidence. But questions like “will people think ads next to their email are creepy?” or “will anyone care about Android?” are all focus groups and crystal balls and gut checks. Engineers are awesome at Bayesian probability; I doubt they’ve got any comparative advantage with crystal balls. So I’m wondering if there isn’t a meta-level of risk tolerance here: create a culture where people can innovate and fail at things in their comfort zone, and they’ll have the confidence to innovate (fail) (sometimes succeed) at things outside it. To step off into the unknown, where all the future happens.
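(An aside on the server half of that contrast, because the arithmetic really is mechanical. Here’s a minimal sketch in Python, under the simplest possible assumptions: failures are independent, and the fleet size and 4% annual failure rate are made-up numbers, not anything Google has said.)

```python
from math import comb

def p_at_most(k, n, p):
    """P(at most k of n servers fail in a year), assuming independent
    failures with per-server probability p (a plain binomial model)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def spares_needed(n, p, confidence=0.999):
    """Smallest spare count s such that, with the given confidence,
    no more than s of the n servers fail in the year."""
    for s in range(n + 1):
        if p_at_most(s, n, p) >= confidence:
            return s
    return n

# Hypothetical fleet: 1000 cheap servers, 4% annual failure rate.
# The *expected* number of failures is 40, but to cope 99.9% of the
# time you have to budget noticeably more spares than the average.
print(spares_needed(1000, 0.04))
```

The gap between the mean (40 failures) and the 99.9% budget (roughly 60) is exactly the “quantifiable margin of confidence” part: you can’t do anything like this calculation for “will people find the ads creepy?”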