
Online Safety Bill unfit for purpose, says new campaign group

Campaign group set up to oppose the Online Safety Bill says the duty of care is too simplistic, cedes too much power to US corporations and will, in practice, privilege the speech of journalists and politicians

The Online Safety Bill is overly simplistic and cedes too much power to Silicon Valley firms over freedom of speech in the UK, claims a newly formed campaign group.

The proposed law, formerly known as the Online Harms Bill, seeks to promote safety online by making internet companies and service providers more accountable for the content shared by users on their platforms.

However, speaking at a press conference, members of the newly established Legal to Say. Legal to Type campaign group criticised the Bill as it currently stands for ceding too much power over UK citizens’ freedom of speech to Silicon Valley.

Group members include Conservative MP David Davis, Index on Censorship CEO Ruth Smeeth, Open Rights Group executive director Jim Killock and Gavin Millar QC of Matrix Chambers.

Under the Bill’s duty of care, technology platforms that host user-generated content or allow people to communicate will be legally obliged to proactively identify, remove and limit the spread of illegal or harmful content – such as child sexual abuse, terrorism and suicide material – or face fines of up to 10% of turnover from the online harms regulator, confirmed as Ofcom.

“Today the UK shows global leadership with our ground-breaking laws to usher in a new age of accountability for tech and bring fairness and accountability to the online world,” said digital secretary Oliver Dowden following the publication of the draft bill in May 2021.

“We will protect children on the internet, crack down on racist abuse on social media and, through new measures to safeguard our liberties, create a truly democratic digital age.”

Home secretary Priti Patel added: “It’s time for tech companies to be held to account and to protect the British people from harm. If they fail to do so, they will face penalties.”

The legislation will apply to any company in the world that serves UK-based users. The rules are tiered so that the most popular sites and services – those with the largest audiences – will need to go further, setting and enforcing clear terms and conditions that explicitly state how they will handle content that is legal but could still cause significant physical or psychological harm.

This will include misinformation and disinformation about a range of topics, such as coronavirus vaccines, marking the first time online misinformation has come under the remit of a government regulator.

“Under the duty of care provisions… the process envisaged by it requires platforms to, and I quote, ‘manage and mitigate the risk of harm from illegal content, content harmful to children and content harmful to adults’,” said Millar at the press conference.

“That is harm as defined in the Bill, and that is the risk as identified by them in a risk assessment that they conduct on their own service – so it leaves it to them to decide how to do this – and the Bill mandates that they design and implement what are called safety policies.

“This will be done primarily by tech solutions, which is algorithms, rather than human beings making judgements about the content.”

Davis, who characterised the Bill as a “censor’s charter”, added: “Silicon Valley providers are being asked to adjudicate and censor ‘legal but harmful’ content. Because of the vagueness of the criteria and the size of the fine, we know what they’re going to do – they’re going to lean heavily into the side of caution.

“Anything that can be characterised as misinformation will be censored. Silicon Valley mega-corporations are going to be the arbiters of truth online – the effect on free speech will be terrible.”
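The mechanics behind that fear are simple to sketch. In the hypothetical example below (the classifier scores, posts and thresholds are invented for illustration and do not describe the Bill or any real platform’s systems), automated moderation reduces to comparing a predicted “harm” score against a removal threshold – and the larger the potential fine, the lower a risk-averse platform will set that threshold, sweeping up legal speech along the way:

    # Hypothetical sketch of threshold-based automated moderation.
    # All posts, scores and thresholds are invented for illustration;
    # they do not describe any real platform's systems or the Bill.
    def moderate(posts, threshold):
        """Split posts into kept and removed by predicted 'harm' score."""
        kept, removed = [], []
        for text, score in posts:
            (removed if score >= threshold else kept).append(text)
        return kept, removed

    posts = [
        ("news report", 0.10),
        ("satire", 0.45),
        ("heated political debate", 0.55),
        ("abusive threat", 0.90),  # the only clearly harmful item
    ]

    # A balanced threshold removes only the abusive threat...
    print(moderate(posts, threshold=0.80))
    # ...but with fines of up to 10% of turnover at stake, a cautious
    # platform lowers the bar, and legal satire and debate go too.
    print(moderate(posts, threshold=0.40))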

An official draft of the Bill, released in early May 2021, also included a new criminal offence for senior managers as a deferred power, which attendees said would only exacerbate the issue.

Millar further described the increased external pressure from the state as a “recipe for over-zealous and misconceived exclusion of content”.


Fact-checking experts previously told a House of Lords committee in February 2021 that the Online Safety Bill should force internet companies to provide real-time information and updates about suspected disinformation, and further warned against an over-reliance on artificial intelligence (AI) algorithms to moderate content.

Full Fact CEO Will Moy said at the time: “We need independent scrutiny of the use of AI by those companies and its unintended consequences – not just what they think it’s doing, but what it’s actually doing – and we need real-time information on the content moderation actions these companies take and their effects.

“These internet companies can silently and secretly, as the AI algorithms are considered trade secrets, shape public debate. These transparency requirements therefore need to be set on the face of the Online Safety Bill.”

Moy said the majority of internet companies take action on every piece of content in their systems through their algorithms, which decide how many people see each item, how it is displayed, and so on.

“Those choices are treated as commercial secrets, but they can powerfully enhance our ability to impart or receive information – that’s why we need strong information powers in the Online Safety Bill, so we can start to understand not just the content, but the effects of those decisions,” he said.
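As a rough illustration of the kind of decision Moy describes (the scoring formula, weights and items below are entirely invented, not any company’s actual ranking system), a feed-ranking algorithm effectively acts on every item by scoring it, ordering the results and truncating what users see:

    # Hypothetical feed-ranking sketch. The weights are invented, but
    # the structure shows how an opaque scoring choice determines each
    # item's reach: items below the cut-off are effectively unseen.
    def rank_feed(items, slots=2):
        """Order items by an engagement-style score, keep the top few."""
        return sorted(
            items,
            key=lambda i: 0.7 * i["predicted_clicks"] + 0.3 * i["recency"],
            reverse=True,
        )[:slots]

    feed = [
        {"id": "a", "predicted_clicks": 0.9, "recency": 0.2},
        {"id": "b", "predicted_clicks": 0.3, "recency": 0.9},
        {"id": "c", "predicted_clicks": 0.1, "recency": 0.1},
    ]
    print([item["id"] for item in rank_feed(feed)])  # "c" is never shown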

The Bill also includes protections for journalistic content and content of “democratic importance”, which conference attendees saw as creating a two-tier system around freedom of expression that privileges the speech of journalists and politicians.

“The Bill attempts to give politicians and journalists additional protections for their free speech,” said a report published by Index on Censorship ahead of the conference. “Yet the ability of ordinary people to speak their mind online has brought about many positive changes, such as the #MeToo movement.

“The Bill could lead to companies removing people’s own experiences as they could be considered ‘harmful’. The voices of ordinary British people are not any less important than those of a select few. By creating two tiers of freedom of expression, the government is opening up disparities between the free speech enjoyed by ordinary citizens and that enjoyed by a Westminster bubble of journalists and politicians.”

In the wake of former US president Donald Trump’s suspension from Twitter in January 2021, Alex Stamos, a former chief security officer at Facebook and director of the Stanford Internet Observatory, noted that “the disinformation problem is almost uniquely being driven by political elites”.

Research from the Election Integrity Partnership, which looked at misinformation in the 2020 US presidential election, found that reducing the spread of disinformation “doesn’t require widespread suppression” because it was just a small number of Twitter accounts that had “consistently amplified misinformation about the integrity of the election”.

Conference attendees agreed that the Bill was too broad, and would benefit from being broken down into separate pieces of legislation that focus on the specific issues to be addressed.

“I think the problem with the Bill is the fact that it’s trying to do everything at once,” said Killock. “Rather than trying to work out an appropriate way of helping to ensure children’s safety, we’re simply saying, well, we’ll treat adults and children all the same, and we will just, essentially, ask all of the platforms to mitigate risks across all of these categories of people in the same way.”

Davis said that, ultimately, Parliament has “got to do its job, and that means to go through specifically and identify what it thinks [is] harmful and render [it] illegal if need be”.
