Here’s how to fix online harassment. No, seriously

By letting third parties build moderation and safety tools, social media platforms could limit harassment — and give users more agency in how they engage

A common fallacy in thinking about content moderation is the assumption that the problem can be cleanly isolated and defined in the first place. It is enticing, especially to software engineers and computer scientists, to reduce the issue to a black-box classification question, where we merely have to design a system that takes each piece of content as input and produces as output the answer to the question: “Is this OK or not?”

Ideally, the inside of this box is a piece of software, but when the technology isn’t there yet, we can fake it by tucking some humans inside the black box – that is, by sending the task out to low-paid workers sitting in content moderation farms. But the question of “Is this OK or not?” is never that simple. OK to whom? And in which context? Even if a machine learning classifier can assign a numeric probability to how likely a piece of content is to be toxic or harassing, it’s not clear what should be done with that prediction, particularly given the inevitability of bias and error. Set a platform-wide bar for “quality” too high and it screens out too many things that are actually fine; set it too low and it lets too many problematic things pass.
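To make that trade-off concrete, here is a minimal Python sketch. It is purely illustrative: the classifier scores, the example posts and the thresholds are invented for the sake of the example, not taken from any platform’s real system.

```python
# Illustrative only: a toy moderation pipeline with a single platform-wide
# threshold. The toxicity scores below stand in for the output of some
# machine learning classifier and are made up for this sketch.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    toxicity_score: float  # pretend classifier output, in [0, 1]

def moderate(posts, threshold):
    """Split posts into 'removed' and 'allowed' using one global threshold."""
    removed = [p for p in posts if p.toxicity_score >= threshold]
    allowed = [p for p in posts if p.toxicity_score < threshold]
    return removed, allowed

posts = [
    Post("Sarcastic in-joke between friends", 0.72),  # false-positive risk
    Post("Veiled threat with no slurs", 0.41),        # false-negative risk
    Post("Obvious slur-filled abuse", 0.97),
]

# A strict threshold removes the in-joke; a lenient one lets the threat through.
for threshold in (0.4, 0.8):
    removed, allowed = moderate(posts, threshold)
    print(f"threshold={threshold}: removed={[p.text for p in removed]}")
```

Whatever single number the platform picks, someone’s context is ignored: the same score means different things in different conversations.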

This entire framing of the problem of “content moderation” is flawed. Someone’s experience on a platform is much more than the abuse-likelihood score of each piece of content they see; it is shaped by every feature and design choice. Explicit product decisions and machine learning algorithms determine what gets distribution and prominence in timelines and recommendation modules. Prompts and nudges like text composers and big buttons are designed to encourage certain behaviour – which is not always good, for instance if they end up motivating quick-fire retorts and thoughtless replies. Channels for private communication are convenient for personal conversations, yet dangerous as a direct line to someone, with no observers to intervene against bad behaviour. Abusers and harassers have the space to be endlessly creative in how they threaten, menace and bully.

I have the unfortunate personal experience of over a decade of online harassment. In that time I’ve seen it all: dedicated hate and conspiracy pages; whack-a-mole harassers who create account after account on the same platform when they get suspended; cross-platform attacks; impersonation accounts that post abuse under my name and image; co-ordinated harassment campaigns and troll brigades; waves of abuse following a viral post; private messages that chastise me and tell me how I might make myself more attractive to men; invites to group chats where people I don’t know discuss murdering me – not to mention simple garden-variety sexism and misogyny.

Building solutions for the entire space of abuse issues is no easy task, and it gets harder when every new feature is also a potential vector of abuse. Platforms have a responsibility to build in basic protection mechanisms, and this is necessary – but not sufficient. Platform-level decisions will always be crude, hewing to a lowest common denominator, and they are neither contextualised nor personalised. To give users more control over their individual experience, platforms must first build moderation and safety constructs such as reporting, blocking and muting. But they should also open up their trust and safety APIs, so that others can invent a full range of consumer solutions. That would allow third-party developers to build creative, sometimes specialised or even “niche”, services for users who need and prioritise different things.
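As a thought experiment, an opened-up trust and safety surface might look something like the sketch below. Every endpoint, the base URL and the client class are hypothetical, invented for illustration; this is not any platform’s actual API.

```python
# Hypothetical sketch of a trust-and-safety API a platform could expose to
# third-party developers. None of these endpoints exist today; the names and
# base URL are invented for illustration.

import requests

API_BASE = "https://api.example-platform.com/v1"  # placeholder base URL

class SafetyClient:
    def __init__(self, token: str):
        self.headers = {"Authorization": f"Bearer {token}"}

    def mute(self, user_id: str) -> None:
        # Hide a user's content from the authenticated user's view.
        requests.post(f"{API_BASE}/safety/mutes", json={"user_id": user_id},
                      headers=self.headers).raise_for_status()

    def block(self, user_id: str) -> None:
        requests.post(f"{API_BASE}/safety/blocks", json={"user_id": user_id},
                      headers=self.headers).raise_for_status()

    def report(self, content_id: str, reason: str) -> None:
        requests.post(f"{API_BASE}/safety/reports",
                      json={"content_id": content_id, "reason": reason},
                      headers=self.headers).raise_for_status()

    def list_mentions(self, since_id: str | None = None) -> list[dict]:
        # Third-party tools need read access to mentions in order to filter them.
        resp = requests.get(f"{API_BASE}/mentions",
                            params={"since_id": since_id}, headers=self.headers)
        resp.raise_for_status()
        return resp.json()["mentions"]
```

The point is not the specific endpoints but the shape: read access to the user’s inbound content, plus write access to the same mute, block and report actions the platform already offers, so that specialised tools can act on a user’s behalf.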

In the physical world, roads are a rough analogue: shared infrastructure that underpins many different consumer solutions for transit and transportation, even though their “APIs” are never explicitly provided. The main requirements are being able to move across a flat surface and stay within a lane. Some people want horsepower and flashiness; they have their sports cars. Other cases call for more security, like armoured transport. Delivery drivers who want to zip around the city quickly have scooters. Using social media platforms right now is a little like being allowed to travel on roads only in a standard-issue open-air vehicle, with no protection against people furiously slinging eggs, and an air horn to signal for help (though no one comes).

Opening APIs would give consumers more choice and control over how they navigate social media. My company, Block Party, takes advantage of existing Twitter APIs to build one such service. By automatically muting accounts that don’t pass user-configurable filters – for example, in a mode called “I need a break”, you can choose to hear only from people you follow and people followed by people you follow – we let people quieten the noise in their Twitter mentions. These hidden accounts are put into a folder on Block Party for later review and action, if and when desired; access to this folder can also be shared with trusted friends who can help review it. It’s a simple concept. But for people who deal with harassment, or just a lot of unwanted replies and mentions, having this extra layer of customisation to quarantine the mess can be a huge relief. In some cases it makes it possible, or at least tolerable, for people to continue using Twitter when it wouldn’t be otherwise.
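In terms of the hypothetical client sketched earlier, an “I need a break”-style filter might look roughly like this. Block Party’s real implementation and the specific Twitter API calls it relies on are not shown here; the follow-graph helpers and data shapes are assumptions made for the sketch.

```python
# Rough sketch of an "I need a break"-style filter, reusing the hypothetical
# SafetyClient above. The follow-graph sets and mention format are invented;
# this is not Block Party's actual code.

def allowed_authors(following: set[str], second_degree: set[str]) -> set[str]:
    """People you follow, plus people followed by people you follow."""
    return following | second_degree

def quarantine_mentions(client, mentions, allowed: set[str], folder: list):
    """Mute authors outside the allowed set and file their mentions away
    for later review by the user (or a trusted helper)."""
    for mention in mentions:
        author = mention["author_id"]
        if author not in allowed:
            client.mute(author)      # hide them from the timeline
            folder.append(mention)   # keep for review, if and when desired

review_folder: list = []
# Example usage, assuming the sets have been fetched elsewhere:
# allowed = allowed_authors(my_following, followed_by_my_following)
# quarantine_mentions(client, client.list_mentions(), allowed, review_folder)
```

Nothing is deleted and nothing is reported automatically: unwanted mentions are simply moved out of sight into a folder the user (or a trusted friend) can triage on their own terms.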

For platforms to open up APIs around moderation and safety is a win-win-win. It’s better for users, who gain more control over how they engage. It’s better for platforms, which end up with happier users. And it enables a whole ecosystem of developers to collaborate on solving some of the most frustrating problems of abuse and harassment on platforms today.

Tracy Chou is the founder of Block Party and cofounder of Project Include.

This article was originally published by WIRED UK