Cooperative Censorship

I have long been an opponent of censorship by any authority. Suppression of ideas stifles discussion and props up corruption, authoritarianism, and antiquated, bigoted thinking. I have put a lot of thought into distributed systems, like Tor or Freenet, that circumvent censorship or make it possible to host content that cannot be censored.

However, the recent Charlottesville protests show another side of the issue. Giving the alt-right a prominent voice online and in our media has allowed Nazi ideology to flourish. This isn’t about spreading well-reasoned ideas or holding educational discussions: the goal of white supremacists is to share a message of racial superiority and discrimination grounded wholly in old, hateful prejudice, not in science or intellectual debate.

The procession of hosting providers shutting down the Daily Stormer neo-Nazi community site shows how hesitant corporate America is to censor, whether out of concern over bad PR, lost revenue, the perception of being responsible for content they help distribute, or (less likely) an ideological opposition to censorship.

Ultimately, I still believe in the superiority of decentralized systems. Money-driven corporations like GoDaddy and Cloudflare should not be cultural gatekeepers that decide what content is acceptable and what is not. At the same time, a distributed system that prevents censorship entirely may provide an unreasonably accessible platform for hate speech. No censorship is preferable to authoritarian censorship, but is there a way to build distributed community censorship, where widespread rejection of content like white supremacy can stop its spread, without allowing easy abuse of power? If such a system is not designed carefully, it will be prone to tyranny of the majority, where minority groups and interests can be oppressed by the majority. Worse yet, a poorly designed system may allow a large number of bots to “sway the majority”, effectively returning to an oligarchic “tyranny of the minority with power” model. But before ruling the concept out, let’s explore the possibility some…

Existing “Distributed Censorship” Models

Decentralized Twitter clone Mastodon takes a multiple-instances approach to censorship. Each Mastodon server is linked to the others, or “federated”, but can refuse to federate with particular servers if its admin so chooses. Each server then has its own content guidelines: maybe one server allows pornography, while another forbids it and will not distribute posts from servers that do. This allows both evasion of censorship and the creation of communities around any subject, but content from those communities will not spread far without support from other servers.
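The federation model above can be sketched in a few lines. This is a toy illustration of the idea, not Mastodon’s actual implementation or API; the class and method names are invented:

```python
# Toy sketch of Mastodon-style federation blocking: each server keeps its own
# blocklist and simply refuses to accept posts relayed from servers on it.
# All names (Server, block, deliver) are hypothetical, for illustration only.

class Server:
    def __init__(self, name):
        self.name = name
        self.blocked = set()   # server names this admin refuses to federate with
        self.inbox = []        # posts accepted from other servers

    def block(self, server_name):
        # Admin decision: stop federating with a particular server.
        self.blocked.add(server_name)

    def deliver(self, sender, post):
        # A post is accepted only if the sending server is not blocked.
        if sender.name in self.blocked:
            return False
        self.inbox.append((sender.name, post))
        return True

art = Server("art.example")
family = Server("family.example")
family.block("art.example")  # admin policy: no federation with art.example

art.deliver(family, "hello from family.example")   # accepted: art blocks nobody
family.deliver(art, "hello from art.example")      # rejected: sender is blocked
```

Each admin’s blocklist only shapes what their own users see; the blocked server keeps operating, its content just doesn’t spread through that part of the network.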

Facebook lookalike Diaspora has a similar design, distributing content across many independently operated servers called “pods”. However, content distribution is primarily decided by the user, not the pod administrator. While the pod administrator chooses which other pods to link to, the user independently chooses which users in those pods will receive their posts, via a feature called “aspects”. Ideally this lets a user segment their friend groups from family or work colleagues within a single account, although nothing prevents users from registering separate accounts to achieve the same goal.
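The aspects feature amounts to per-post audience selection by the user. A minimal sketch of the idea, with invented names rather than Diaspora’s real API:

```python
# Toy sketch of Diaspora-style "aspects": the user (not the pod admin)
# partitions contacts into named groups and picks the audience for each post.
# Class and method names here are hypothetical.

class User:
    def __init__(self, name):
        self.name = name
        self.aspects = {}  # aspect name -> set of contact names

    def add_contact(self, contact, aspect):
        self.aspects.setdefault(aspect, set()).add(contact)

    def share(self, post, aspect_names):
        # The post is delivered only to contacts in the chosen aspects.
        audience = set()
        for aspect in aspect_names:
            audience |= self.aspects.get(aspect, set())
        return audience

alice = User("alice")
alice.add_contact("bob", "friends")
alice.add_contact("carol", "family")
alice.add_contact("dan", "work")

# A post shared with "friends" and "family" never reaches work colleagues.
alice.share("vacation photos", ["friends", "family"])  # -> {"bob", "carol"}
```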

Both of these models distribute censorship power to server administrators, similar to forum or subreddit moderators. This is a step in the right direction away from corporate control, but it still creates a power imbalance between the relatively few server operators and the multitude of users. In the Mastodon example, the Mastodon Monitoring Project estimates that there are about 2400 servers and 1.5 million registered users. That is, about 0.16% of the population has censorship control. While nothing technical stops a user from starting their own server and joining the 0.16%, doing so requires a higher level of expertise, a server to run the software on, and a larger time commitment. This effectively precludes most users from participating in censorship (and if we had 1.5 million Mastodon servers, administering censorship would be unwieldy).
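The admin share quoted above is quick arithmetic on the two estimates:

```python
# Back-of-envelope check of the server-admin share cited above.
servers = 2400
users = 1_500_000
share_percent = servers / users * 100  # fraction of users with admin power
# -> 0.16 (about 0.16% of registered users run a server)
```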

Other Researchers’ Thoughts

The Digital Currency Initiative and the Center for Civic Media (both MIT groups) recently released a relevant report on decentralized web technologies, their benefits regarding censorship, and the adoption problems the technologies face. While the report does not address the desirability of censoring hate speech, it makes the interesting point that content selection algorithms (like the code that decides what appears in your Twitter or Facebook news feed) matter as much to censorship as outright control of which posts are blocked. This presents something further to think about: is there a way to place more of the selection algorithm under user control without loading users down with technical complexity? That would allow for automatic but democratic censorship, which might alleviate the disproportionate power structures described above.
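One way to imagine user-controlled selection is a feed ranker whose weights each user sets themselves, rather than a fixed platform-wide ranking. A minimal sketch under that assumption; the signal names and weights are invented for illustration:

```python
# Hypothetical user-tunable content selection: each post exposes transparent
# signals, and each user scores posts with their own weights. Downweighting a
# "community_flags" signal would be one form of democratic, opt-in filtering.

def rank(posts, weights):
    """Order posts by a weighted sum of their signals, highest score first."""
    def score(post):
        return sum(weights.get(signal, 0.0) * value
                   for signal, value in post["signals"].items())
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "signals": {"recency": 0.9, "friend_boost": 0.1, "community_flags": 0.0}},
    {"id": 2, "signals": {"recency": 0.2, "friend_boost": 0.9, "community_flags": 0.0}},
    {"id": 3, "signals": {"recency": 0.8, "friend_boost": 0.0, "community_flags": 0.9}},
]

# This user heavily penalizes community-flagged content; another user could
# choose different weights and see a different feed from the same posts.
weights = {"recency": 1.0, "friend_boost": 1.0, "community_flags": -2.0}
ordered = [p["id"] for p in rank(posts, weights)]  # flagged post 3 sinks to last
```

The point is that the filtering policy lives in user-visible weights rather than in opaque platform code, so “censorship” becomes an aggregate of individual choices.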

Posted 8/19/17