Communal Ownership Online

Posted 7/30/2021

We often think of online communities as a “shared digital commons”. A forum or subreddit or chatroom where people meet and talk. An open source project, where a collection of developers build something together. A wiki, where people gather and organize knowledge. These are online spaces made up of communities of people, serving those same communities. But they are rarely governed by those communities. More specifically, the technology these platforms are built on does not support shared governance, so any community decision-making must be awkwardly superimposed on top of it. Let’s examine the problem, and what solutions might look like.

Internet platforms usually only support one of two models of resource ownership:

  1. Single Administrator: One user owns each GitHub repository and decides who gets commit access. If a repository is owned by an “organization”, that organization is in turn owned by a single user, who decides which users and teams are in the org and what authority each one has. One user owns each Discord server, and each subreddit. Powers may be delegated from these “main” owners, but they hold ultimate control and cannot be overruled or removed.

  2. No Administrator: Platforms like Twitter or Snapchat don’t have a sense of “shared community resources”, so each post is simply owned by the user who submitted it. On platforms like IRC, there may be chat channels with no operators, where every user is on equal footing without moderation power.

The single administrator model arises by default: When someone sets up a webserver to host a website, they have total control over the server, and so are implicitly the sole administrator of the website. This made a lot of sense in the 90s and early 00s when most online communities were self-hosted websites, and the line between server administration and community moderation was often unclear. It makes less sense as “online communities” become small compartments within larger websites like Reddit, Discord, GitHub, Trello, or Wikia. There are server administrators for these sites, of course, but they’re often several levels removed from the communities hosted on them. The single administrator model makes almost no sense for peer-to-peer communities like groups on Cabal, Cwtch, or IPFS, or Freenet sites, all of which have no real “server infrastructure”.

The idea of “shared ownership of an online space” is nothing new. Many subreddits are operated by several moderators with equal power, who can do anything except expel the original owner or close the subreddit. Discord server owners frequently create moderator or half-moderator roles to delegate most governance, except the election of new moderators. While technically benevolent dictatorships, these arrangements are functionally oligarchies so long as the benevolent dictator chooses never to exercise their reserved powers. Many prominent open source projects have a constitution or other guiding documents that define a “steering committee”, “working groups”, or rough parliamentary systems for making major choices about a project’s future. Whoever controls the infrastructure of these open source projects, from their websites, to their git repositories, to chat servers or bug trackers or forums, is honor-bound to abide by the decisions of the group.

But this is exactly the problem: While we can define social processes for decision-making, elections, and delegation, we’re awkwardly implementing those social processes on top of technology that only understands the benevolent dictator model of “single administrator with absolute power”, and hoping everyone follows the rules. Often they do. When someone goes “power mad” in a blatant enough way, the community might fork around them, migrating to a new subreddit or Discord server or git repository and replacing the malfunctioning human. However, there’s a high social cost to forking - rebuilding any infrastructure that needs to be replaced, informing the entire community about what’s going on, selecting replacement humans, and moving everyone over. Often few people migrate to a fork, and it fizzles out. Occasionally there’s disagreement over the need to fork, so the community splits, and both versions run for a time, wasting effort duplicating one another’s work. The end result is that while online benevolent dictators are ostensibly replaceable, replacing them is a difficult and costly process.

Wouldn’t it be better if the technology itself were built to match the social decision-making processes of the group?

Let’s focus on open source as an example. Let’s say that, by social contract, there’s a committee of “core developers” for a project. A minimum of two core developers must agree on minor decisions like accepting a pull request or closing an issue, and a majority of developers must agree on major decisions like adding or removing core developers or closing the project.
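To make this concrete, here’s a minimal sketch, in Python, of what those bylaws might look like as data instead of prose. The names and structure are entirely hypothetical, since no platform currently speaks this language:

```python
# A hypothetical encoding of the committee's bylaws as data the
# platform could evaluate, rather than prose humans must remember.
from dataclasses import dataclass

@dataclass
class Rule:
    action: str          # the operation being governed
    min_approvals: int   # approvals required from core developers
    majority: bool       # if True, require >50% of core developers instead

CORE_DEVELOPERS = {"alice", "bob", "carol", "dave", "erin"}

BYLAWS = [
    Rule("merge_pull_request",    min_approvals=2, majority=False),
    Rule("close_issue",           min_approvals=2, majority=False),
    Rule("publish_release",       min_approvals=2, majority=False),
    Rule("add_core_developer",    min_approvals=0, majority=True),
    Rule("remove_core_developer", min_approvals=0, majority=True),
    Rule("close_project",         min_approvals=0, majority=True),
]
```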

Under the present model, the community votes on each of the above operations, and then a user with the authority to carry out the action acts according to the will of the group. But there’s nothing preventing a FreeBSD core developer from approving their own pull requests, ignoring the social requirement for code review. Similarly, when an npm user’s account is compromised, there’s nothing preventing the rogue account from uploading an “update” containing malware to the package manager.

But what if the platform itself enforced the social bylaws? Attempting to mark a new release for upload to npm triggers an event, and two developers must hit the “confirm” button before the release is created. If there are steps like “signing the release with our private key”, it may be possible to split that authority cryptographically with Shamir’s Secret Sharing, so that any two core developers can reconstruct the key and sign the release - but this is drifting too far into a tangent.
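Continuing the sketch above, enforcement becomes a mechanical check the platform runs before carrying out any action (hypothetical code again, not any real platform’s API):

```python
def is_authorized(action: str, approvers: set[str]) -> bool:
    """Check a proposed action against the bylaws; deny by default."""
    valid = approvers & CORE_DEVELOPERS  # only core developers count
    for rule in BYLAWS:
        if rule.action == action:
            if rule.majority:
                return len(valid) > len(CORE_DEVELOPERS) / 2
            return len(valid) >= rule.min_approvals
    return False  # actions with no governing rule are refused

# The npm release above stays blocked until a second core developer
# hits "confirm":
assert not is_authorized("publish_release", {"alice"})
assert is_authorized("publish_release", {"alice", "bob"})
```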

Configuring the platform to match the group requires codifying bylaws in a way the platform can understand (something I’ve written about before), and so the supported types of group decision-making will be limited by the platform. Some common desirable approaches might be:

  • Threshold approval, where a fixed number of people from a group (say, 3) must approve an action

  • Percentage voting, where a minimum % of a group’s members must approve an action

  • Veto voting, where actions are held “in escrow” for a certain amount of time, then auto-approved if no one from a group has vetoed them

This last option is particularly interesting, and allows patterns like “anyone can accept a pull request, as long as no one says no within the next 24 hours”.
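As a sketch of how a platform might implement that pattern (hypothetical names, and a 24-hour window chosen to match the example above), each proposed action carries a timestamp and a set of vetoes, and its resolution is a pure function of the clock:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

VETO_WINDOW = timedelta(hours=24)

@dataclass
class EscrowedAction:
    description: str
    proposed_at: datetime
    vetoed_by: set[str] = field(default_factory=set)

def resolve(pending: EscrowedAction, now: datetime) -> str:
    """Veto voting: auto-approve after a veto-free waiting period."""
    if pending.vetoed_by:
        return "rejected"    # a single veto kills the action
    if now - pending.proposed_at >= VETO_WINDOW:
        return "approved"    # survived the full window untouched
    return "in escrow"       # still waiting out the clock
```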

There’s a lot of potential depth here: instead of giving a list of users blanket commit access to an entire repository, we can implement more nuanced permissions. Maybe no users have direct commit access and all need peer approval for their pull requests. Maybe sub-repositories (or sub-folders within a repository?) are delegated to smaller working groups, which either have direct commit access to their region, or can approve pull requests within their region among themselves, without consulting the larger group.
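One hypothetical way to express that delegation: map path prefixes to working groups, and fall back to the project-wide rules for anything unclaimed. The groups and paths below are invented for illustration:

```python
# Hypothetical delegation table: each working group governs one
# region of the repository and reviews changes there among themselves.
WORKING_GROUPS = {
    "docs/":    {"dana", "eli"},
    "src/net/": {"farid", "grace", "hao"},
}

def reviewers_for(path: str) -> set[str] | None:
    """Return the working group responsible for `path`, or None
    if the change escalates to the project-wide bylaws."""
    for prefix, members in WORKING_GROUPS.items():
        if path.startswith(prefix):
            return members
    return None

assert reviewers_for("src/net/tcp.c") == {"farid", "grace", "hao"}
assert reviewers_for("src/main.c") is None  # whole-project rules apply
```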

Now a repository, or a collection of repositories under the umbrella of a single project, can be “owned” by a group in an actionable way, rather than “owned by a single person hopefully acting on behalf of the group.” Huge improvement! The last thing to resolve is how the bylaws themselves get created and evolve over time.

Bylaws Bootstrapping

The simplest way of creating digital bylaws is through a very short-lived benevolent dictator. When a project is first created, the person creating it pastes in the first set of bylaws, configuring the platform to their needs. If they’re starting the project on their own, then this is natural. If they’re starting the project with a group, then they should collaborate on the bylaws, but the risk of abuse at this stage is low: If the “benevolent dictator” writes bylaws the group disagrees with, then the group refuses to participate until the bylaws are rewritten, or they make their own project with different bylaws. Since the project is brand-new, the usual costs of “forking” do not apply. Once bylaws are agreed upon, the initial user is bound by them just like everyone else, and so loses their “benevolent dictator” status.

Updating community bylaws is usually described as part of the bylaws: Maybe it’s a special kind of pull request, where accepting the change requires 80% approval among core members, or any other specified threshold. Therefore, no “single administrator” is needed for updating community rules, and the entire organization can run without a benevolent dictator forever after its creation.
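A sketch of that amendment check, with the 80% figure standing in for whatever threshold a group actually writes down:

```python
# Hypothetical: amending the bylaws is itself a governed action, a
# special pull request against the bylaws that merges only at 80%.
def amendment_merges(approvers: set[str], core_members: set[str],
                     threshold: float = 0.8) -> bool:
    valid = approvers & core_members
    return len(valid) >= threshold * len(core_members)

ten_members = {f"member{i}" for i in range(10)}
assert amendment_merges(set(list(ten_members)[:8]), ten_members)      # 8/10 passes
assert not amendment_merges(set(list(ten_members)[:7]), ten_members)  # 7/10 fails
```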

Limitations and Downsides

There is a possible edge case where a group gets “stuck” - maybe their bylaws require that 70% of members approve any pull request, and too many of their members are inactive to reach this threshold. If they also can’t reach the thresholds for adding or expelling members, or for changing the bylaws, then the project grinds to a halt. This is an awkward state, but it replaces a similar edge case under the existing model: What if the benevolent dictator drops offline? If the user who can approve pull requests or add new approved contributors is hospitalized, or forgets their password and no longer has the email address used for password recovery, what can you do? The project is frozen; it cannot proceed without the administrator! In both unfortunate edge cases, the solution is probably “fork the repository or team, replacing the inaccessible user(s).” If anything, the bylaws model provides more options for overcoming an inactive-user edge case - for example, the rules may specify “removing users requires 70% approval, or 30% approval with no veto votes for two weeks”, offering a loophole that is difficult to abuse but allows easily reconfiguring the group if something goes wrong.
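That loophole is straightforward to encode. A hypothetical sketch:

```python
from datetime import datetime, timedelta

def removal_passes(approvals: int, members: int, vetoes: int,
                   proposed_at: datetime, now: datetime) -> bool:
    """Expel a member with 70% approval outright, or with 30%
    approval after two full weeks with no veto votes cast."""
    if approvals >= 0.7 * members:
        return True
    waited_out = now - proposed_at >= timedelta(weeks=2)
    return approvals >= 0.3 * members and vetoes == 0 and waited_out
```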

One distinct advantage of the current “implicit benevolent dictator” model is the ability to follow the spirit of the law rather than the letter. For example, if a group requires total consensus for an important decision, and a single user is voting down every option because they’re not having their way, a group of humans would expel the troublemaker for their immature temper tantrum. If the platform is ultimately controlled by a benevolent dictator, then they can act on the community’s behalf and remove the disruptive user, bylaws or no. If the platform is automated and only permits actions according to the bylaws, the group loses this flexibility.

This can be defended against with planning: A group may have bylaws like “we use veto-voting for approving all pull requests and changes in membership, but we also have a percentage voting option where 80% of the group can vote to kick out any user that we decide is abusing their veto powers.” Unfortunately, groups may not always anticipate these problems before they occur, and might not have built in such fallback procedures. This can be somewhat mitigated by providing lots of example bylaws. Much like how a platform might prompt “do you want your new repository to have an MIT, BSD, or GPL license? We can paste the license file in for you right now,” we could offer “here are example bylaws for a group with code review requirements, a group with percentage agreement on decisions, and a group with veto actions. Pick one and tweak to your needs.”
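For instance, a starter template matching the bylaws just described might ship as something like this (a hypothetical schema, analogous to picking a stock license file at repository creation time):

```python
# A hypothetical "starter bylaws" template: veto voting for
# day-to-day actions, with a percentage-vote escape hatch for
# expelling a member who abuses their veto.
STARTER_BYLAWS = {
    "merge_pull_request": {"type": "veto",       "window_hours": 24},
    "change_membership":  {"type": "veto",       "window_hours": 72},
    "expel_member":       {"type": "percentage", "threshold": 0.8},
    "amend_bylaws":       {"type": "percentage", "threshold": 0.8},
}
```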

The General Case

We often intend for web communities to be “community-run”, or at least, “run by a group of benevolent organizers from the group.” In reality, many are run by a single user, leaving them vulnerable to abuse and neglect. This post outlines an approach to make collective online ownership a reality at the platform level. This could mitigate the risk of rogue users, compromised accounts, and inactive moderators or administrators who have moved on from the project or platform without formally stepping down.