Reimagine the Internet Day 5: New Directions in Social Media Research

Posted 5/14/2021

This week I’m attending the Reimagine the Internet mini-conference, a small and mostly academic discussion about moving away from a corporate-controlled Internet toward decentralization, in hopes of realizing a more socially positive network. This post is a collection of my notes from the fifth day of talks, following my previous post.

Today’s final session was on new directions in (academic) social media research, along with some well-thought-out criticisms of the decentralization zeitgeist.

An Illustrated Field Guide to Social Media

Several researchers have been collaborating on an Illustrated Field Guide to Social Media, which categorizes social media according to user interaction dynamics as follows:

| Category | Description | Examples |
|---|---|---|
| Civic Logic | Strict speech rules, intended for discussion and civic engagement rather than socialization | Parlio, Ahwaa, vTaiwan |
| Local Logic | Geo-locked discussions, often within neighborhoods or towns, often with extremely active moderation, intended for local news and requests for assistance | Nextdoor, Front Porch Forum |
| Crypto Logic | Platforms reward creators with cryptocurrency tokens for content and engagement, and often allow spending tokens to influence platform governance, under the belief that sponsorship will lead to high-quality content | Steemit, DTube, Minds |
| Great Shopping Mall | Social media serving corporate interests, under government oversight, before serving users (think WeChat Pay and strong censorship); community safety concerns are government-prompted rather than userbase-driven | WeChat, Douyin |
| Russian Logic | Simultaneously “free” and “state-controlled”: a network initially built for public consumption beyond the state’s reach, then retroactively surveilled and controlled, with an added mandate of “Internet sovereignty” demanding that Russian platforms supersede Western websites within the country | VKontakte |
| Creator Logic | Monetized one-to-many platforms where content creators broadcast to an audience; the platform connects audiences with creators and advertisers, and dictates the parameters of successful monetization, while itself being heavily influenced by advertisers | YouTube, TikTok, Twitch |
| Gift Logic | Collaborative efforts of love, usually non-commercial and rejecting clear boundaries of ownership, based in reciprocity, volunteerism, and feedback, such as fanfiction and some open source software development | AO3, Wikipedia |
| Chat Logic | Semi-private, semi-ephemeral real-time spaces with community self-governance, where small discussions take place without unobserved lurkers, like an online living room | Discord, Snapchat, iMessage |
| Alt-Tech Logic | Provides space for people and ideas outside mainstream acceptable behavior, explicitly for far-right, nationalist, or bigoted viewpoints | Gab, Parler |
| Forum Logic | Topic-based chat with strongly defined in-group culture and practices, often featuring gatekeeping and community self-governance | Reddit, 4chan, Usenet |
| Q&A Logic | Mostly binary roles of “askers” and “answerers” (often with toxic relations), heavy moderation, and a focus on recognition and status, but also reciprocity and benevolence | Yahoo Answers, StackOverflow, Quora |

The authors compare platforms along five axes (affordances, technology, ideology, revenue model, and governance), with numerous real-world examples of platforms in each category. The table above doesn’t come close to doing the book justice; it’s well worth a read, and I’d like to dedicate a future post to the field guide alone.

The Limits of Imagination

Evelyn Douek, a Harvard Law School doctoral candidate and Berkman Klein Center affiliate, offered some excellent critiques of small-scale decentralization.

Changing Perceptions on Online Free Speech

She framed online free speech perspectives as coming from three “eras”:

  1. The Rights Era, where platforms are expected to be financially motivated, and will perhaps engage in light censorship on those grounds (copyright, criminal content, etc.), but should otherwise be as hands-off as possible

  2. The Public Health Era, where platforms are expected to be stewards of public society, and should take a more active role in suppressing hate speech, harassment, and other negative behavior

  3. The Legitimacy Era, where platforms are directed by, or at least accountable to, the public rather than solely corporate interests, bringing public health interests to the forefront of moderation and platform policy

Under this framing we’re currently in the “public health era”, imagining an Internet that more closely resembles the “legitimacy era”. Reddit is expected to ban subreddits for hate speech and inciting violence even when the content isn’t strictly illegal; the public demands that Twitter and Facebook ban Donald Trump without any court order compelling them to; and so on. We ask major platforms to act as centralized gatekeepers and intervene in global culture. When we imagine a decentralized Internet, perhaps a fediverse like Mastodon or Diaspora, we’re often motivated by distributing responsibility for moderation and content policy, increasing self-governance, and increasing diversity of content by creating spaces with differing content policies.

Will Decentralization Save Us?

Or is decentralization at odds with the public health era? The question is mostly one of moderation at scale. Facebook employs a small army of content moderators (often in the Philippines, often underpaid and without mental health support despite mopping up incredibly upsetting material daily), and we cannot expect small decentralized communities to replicate that volume of labor.

Does this mean that hate speech and radicalization will thrive in decentralized spaces? Maybe, depending on the scale of the community. In very small, purposeful online spaces, like subreddits or university Discord servers, the content volume is low enough to moderate, and the appropriate subjects are well-defined enough for consistent moderation. On a larger, more general-purpose network like the Mastodon fediverse, this could be a serious issue.

In one very real example, Peloton, the Internet-connected stationary bike company, had to ban QAnon hashtags from its in-workout-class chat. As a fitness company, it doesn’t have much expertise in moderating its social micro-community.

Content Cartels / Moderation as a Service

There’s been a push to standardize moderation across major platforms, especially around child abuse and terrorism. This often revolves around projects like PhotoDNA, which uses fuzzy (perceptual) hashing to generate a fingerprint for each image, then compares those fingerprints against vast databases of fingerprints for child abuse images, missing children, terrorist recruitment videos, and so on.
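
To make the fingerprinting idea concrete, here’s a minimal sketch of perceptual-hash matching. PhotoDNA itself is proprietary, so this stands in a simple “difference hash” (dHash), and the fingerprint database below is hypothetical; the point is only that similar images produce nearby fingerprints, so near-matches survive re-encoding, resizing, and small edits in a way exact cryptographic hashes do not.

```python
# A minimal sketch of perceptual-hash matching, in the spirit of (but not
# equivalent to) PhotoDNA. Uses a simple "difference hash" (dHash).
from PIL import Image  # pip install Pillow


def dhash(path: str, size: int = 8) -> int:
    """Fingerprint an image: shrink it, grayscale it, and record
    whether each pixel is brighter than its right-hand neighbor."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = px[row * (size + 1) + col]
            right = px[row * (size + 1) + col + 1]
            bits = (bits << 1) | int(left > right)
    return bits


def hamming(a: int, b: int) -> int:
    """Count the bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")


# Hypothetical database of known-bad fingerprints; real systems compare
# against databases maintained and distributed by third parties.
KNOWN_FINGERPRINTS = {0x3C3C7E7EE7E70000}


def is_flagged(path: str, threshold: int = 10) -> bool:
    """Flag an upload whose fingerprint is near any known fingerprint.
    The threshold trades robustness to edits against false positives."""
    fp = dhash(path)
    return any(hamming(fp, known) <= threshold for known in KNOWN_FINGERPRINTS)
```

Whether a scheme like this is trustworthy depends entirely on who maintains the fingerprint database and how it is vetted, which is exactly the concern below.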

This is a great idea, so long as the databases are vetted so we can be confident they are being used for their intended purpose. As a cautionary example, Finland maintains a national website blocklist for child pornography, and upon analysis, under 1% of the blocked domains actually contained the alleged content.

Nevertheless, the option to centralize some or all moderation, especially for larger online platforms, is tempting. Negotiating the boundary between “we want moderation-as-a-service, to make operating a community easier” and “we want distinct content policies in each online space, to foster diverse cultures” is tricky.

Moderation Along the Stack

Moderation can occur at multiple levels, and every platform is subject to it eventually. For example, we usually describe Reddit in terms of “community self-governance”, because each subreddit has volunteer moderators, unaffiliated with the company, who guide their own communities. When subreddit moderators are ineffectual (as in subreddits dedicated to hate speech), Reddit employees intervene. And when entire sites lack effective moderation, as with 8chan, Bitchute, and countless other alt-tech platforms, their infrastructure providers act as moderators. This includes domain registrars, server hosts like AWS, content delivery networks like CloudFlare, and comment-hosting services like Disqus, all of which have terminated service for customers hosting abhorrent content in the past.

All of this is important to keep in mind when we discuss issues of deplatforming or decentralization, and the idea that users may create a space “without moderation”.

Conclusion

The big takeaway from both conversations today is to look before you leap: What kind of community are you building online? What do you want user interactions and experiences to look like? What problem are you solving with decentralization?

The categories of social media outlined above, together with the discussion of moderation and governance at multiple scales and differing levels of centralization, provide a rich vocabulary for discussing platform design and online community building.

This wraps up Reimagine the Internet: a satisfying conclusion to discussions on the value of small communities, diversity of culture and purpose, locality, and safety, as well as the challenges we will face with decentralization and micro-community creation. This series provides a wealth of viewpoints from which to design or critique many aspects of sociotechnical networks, and I look forward to returning to these ideas in more technical and applied settings in the future.