tl;dr I read the Ruby Community Conduct Guideline. There are some appealing elements, but it is not actually workable as a governance document. I see three key problems: lack of recourse, assumption of symmetry, and non-handling of bad actors.
The Ruby Community Conduct Guideline has an arresting blankness where I expected to see information on procedure. In particular, it doesn’t address any of the following:
- How, and to whom, can conduct bugs be reported?
- Who has the authority to mediate, or adjudicate, disputes under this guideline?
- How are people selected for this role?
- What sanctions may they impose? (What may they not impose?)
- What procedures will they follow to:
  - Investigate situations
  - Reach decisions
  - Communicate those decisions to the aggrieved parties and the community at large
- What enforcement mechanisms are (and are not) available after decisions are reached? Who is invested with the authority to carry out these enforcement mechanisms?
The absence of such procedures is obviously worrisome to people who identify with complainants and see themselves as being at risk of being harassed, because it indicates that there is, in fact, no mechanism for lodging a complaint, and no one responsible for handling it. But it should also be worrisome to people who see themselves as more likely to be (however unfairly) among the accused, because it means that if someone does attempt to lodge a complaint, the procedures for handling it will be invented on the fly, by people under stress, deadline pressure, and heavy criticism.
The history of such situations does not suggest this will go well.
There are, again, some appealing statements of aspirational values in the Guideline. But the values are written as if they apply equally to all parties in all scenarios, and this has serious failure modes.
I expect, for instance, that the first guideline (“Participants will be tolerant of opposing views”) is meant to avoid folding an ideological litmus test into the Guideline. And I actually share the implied concern there; poorly drafted or discussed codes of conduct can indeed shade into this, and that’s not okay in large, international spaces. Insofar as this statement says “if I’m a Republican and you’re a Democrat, or I’m on Team Samoas and you’re on Team Tagalongs, or I’m a vi girl and you’re an emacs guy, we should be able to work together and deal with our disagreement”, I am all for it.
But what if my viewpoint is “someone should be allowed to check your genitals to see if you’re allowed to go to the bathroom”? Or “there aren’t many black software engineers because they’re just not as smart as white people”? (To be clear, not only do I not hold either viewpoint, I find them both loathsome. But you needn’t look far to find either.) Well. If I have any position of power in the community at all, my viewpoint has now become a barrier to your participation, if you are trans or black. You can’t go to a conference if you’re not sure that you’ll be able to pee when you’re there. And you can’t trust that any of your technical contributions will be reviewed fairly if people think your group membership limits your intelligence (unless you hide your race, which means, again, no conference attendance for you, and actually quite a lot of work to separate your workplace and social media identities from your open source contributions).

Some people will laugh off that sort of outrageous prejudice and participate anyway; others will participate, but at a significant psychic cost (which is moreover, again, asymmetric — not a cost to which other community members are, or even can be, subject); and others will go away and use their skills somewhere they don’t have to pay that kind of cost. In two of these three cases, the participant loses; in one, the open source community loses as well.
And that brings me to the other asymmetry, which is power. Participants in open source (or, really, any) communities do not have equal power. They bring the inequalities of the larger world, of course, but there are also people with and without commit bits, people recognized on the conference circuit and those with no reputation, established participants and newcomers…
If, say, “behaviour which can be reasonably considered harassment will not be tolerated”, and low-status person A is harassing high-status person B, then even without any recourse procedures in the guideline, B has options. B can quietly ensure that A’s patches or talk proposals are rejected, that A isn’t welcome in after-hours bar conversations, that A doesn’t get dinner invitations. Or B can use blunter options that may even take advantage of official community resources (piping all of A’s messages to /dev/null before they reach the mailing list, say).
But if B is harassing A, A doesn’t have any of these options. A has…well, the procedures in a code of conduct, if there were any. And A has Twitter mobs. And A can leave the community. And that’s about it.
An assumption of symmetry is in fact an assumption that the transgressions of the powerful deserve more forbearance than the transgressions of the weak, and the suffering of the weak is less deserving of care than the suffering of the powerful.
We write code in the hopes it will do the right thing, but we test it with the certainty that something will go wrong. We know that code isn’t good enough if it only handles expected inputs. The world will see your glass and fill it with sfdeljknesv.
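The testing metaphor can be made concrete with a small Ruby sketch (the method names here are invented for illustration, not from any real codebase): the happy-path version works only for the inputs its author expected, while the defensive version decides up front what happens when the world hands it sfdeljknesv.

```ruby
# Happy-path version: fine right up until someone passes "sfdeljknesv",
# at which point it crashes with an unhandled ArgumentError.
def order_count(input)
  Integer(input)
end

# Defensive version: the failure mode is decided in advance,
# not invented on the fly under stress.
def order_count_safely(input)
  Integer(input)
rescue ArgumentError, TypeError
  nil # explicit "couldn't parse" signal instead of a crash
end

order_count_safely("42")          # => 42
order_count_safely("sfdeljknesv") # => nil
```

The point of the analogy: a code of conduct that specifies only the expected inputs (good-faith participants) is like `order_count` — the unexpected case isn’t handled gracefully, it’s simply an unhandled exception.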
> When interpreting the words and actions of others, participants should always assume good intentions.
I absolutely love this philosophy right up until I don’t. Lots of people are decent, and the appropriate reaction to people with good intentions who have inadvertently transgressed some boundary isn’t the same as the appropriate reaction to a bad actor, and community policy needs to leave space for the former.
But some actors do not, in fact, have good intentions. The Ruby Guideline offers no next actions to victims, bystanders, or community leaders in the event of a bad actor. And it leaves no room for people to trust their own judgment when they have presumed good intentions at the outset, but later evidence has contradicted that hypothesis. If I have well-supported reasons to believe someone is not acting with good intentions, at what point am I allowed to abandon that assumption? Explicitly, never.
The Ruby Guideline — by addressing aspirations but not failure modes, by assuming symmetry in an asymmetric world, by stating values but not procedures — creates a gaping hole for the social equivalent of an injection attack. Trust all code to self-execute, and something terribly destructive will eventually run. And when it does, you’ll wish you had logging or sandboxes or rollback or instrumentation or at the very minimum a SIGTERM…but instead you’ll have a runaway process doing what it will to your system, or a messy kill -9 and whatever damage it leaves behind.
The ironic thing is, in everyday life, I more or less try to live by this guideline. I generally do assume good faith until proven otherwise. I can find ideas of value in a wide range of philosophies, and in fact my everyday social media diet includes anarchists, libertarians, mainline Democrats, greens, socialists, fundamentalist Christians, liberal Christians, Muslims, Jews of various movements, people from at least five continents…quite a few people who would genuinely hate each other were they in the same room, and who discuss the same topic from such different points of view it’s almost hard to recognize that it’s the same topic. And that’s great! And I’m richer for it.
And the Guideline is still a bad piece of community governance.