Five Things You Need To Know About The Online Safety Bill
Mark Johnson, Advocacy Manager, Big Brother Watch: The UK has now endured 10 long years of debate and deliberation on the topic of internet regulation. For proponents of new online “safety” laws, this has been as painful as it has been for those deeply concerned about the potential impact on civil liberties. What began with a speech by David Cameron in 2013 on cleansing the internet of “disgusting material” online led to an “Internet Safety Strategy Green Paper” in 2017, which developed into an “Online Harms White Paper” in 2019, a draft Online Safety Bill in 2021 and then a full and final Online Safety Bill in 2022. The Bill, soon to be an Act of Parliament, has only just completed its passage through both the House of Commons and the House of Lords after being carried through multiple Parliamentary sessions and at least one major revision.
A debate which began focused squarely on the protection of children online morphed into something much broader, culminating in a piece of legislation that would ultimately regulate the speech of everyone online. The final product is a regulatory framework, overseen by the broadcast regulator Ofcom, which will increase liability on online platforms, shifting culpability for individuals’ online expression from the users themselves to the platforms that host it. The regulatory framework includes content takedown obligations, will age-gate large parts of the internet, concentrates a large amount of executive control over how the internet is regulated and even contains provisions to sanction the scanning of our messages en masse.
This blog signposts the key areas of the Bill that threaten our rights and liberties. Despite the best efforts of campaigners across a multitude of groups, who were able to scale back the ill-informed concept of “harmful” speech to adults, the civil liberties red flags are many and must be monitored closely when the regulatory framework formally kicks in over the coming months.
1. Social media platforms have formally become judge and jury over our speech
At the heart of the Online Safety Bill is the delegation of responsibility for individuals’ online expression to social media platforms themselves. This approach runs contrary to the general principle that people should ultimately be responsible for their own actions and, in the context of a communications network, it only incentivises platforms to censor, since they are threatened with penalties if they do not. Whilst some illegal content will always be clear and obvious to content moderators, it is unrealistic to expect them to make determinations on what might legally constitute “stirring up hatred” or “malicious communication”, speech which can reach a criminal threshold but which the police and the courts frequently find hard to make judgments on. Silicon Valley’s content moderators can’t possibly fulfil the roles of police, judge and jury, so when these difficult determinations are presented to them, under the threat of penalties, they will almost certainly censor lawful speech out of an abundance of caution.
2. Silicon Valley’s speech codes will get state-backing, even if they are absurd
2022’s Conservative Party leadership contest sparked renewed interest in the Bill and led to the only significant revision that would protect free expression. Before the Bill was revised, provisions around speech which could be deemed “legal but harmful” to adults would have compelled platforms to identify categories of so-called “harmful” speech and demonstrate to Ofcom that they were dealing with it satisfactorily under their content policies. As content policies increasingly pushed for the suppression of different forms of lawful expression, many of us harboured concerns that the Bill would only reinforce Big Tech censorship.
During the leadership contest, the now Prime Minister, Rishi Sunak, said:
“the challenge is whether it [the Online Safety Bill] strays into the territory of suppressing free speech. And the bit in particular that has caused some concern and questions is around this area where the government is saying, look, here’s some content that’s legal but harmful, and it’s that that’s this kind of area, which I think people rightly have said, well, what exactly does that mean?”.
While Liz Truss won the vote, provisions on speech that the government considered “legal but harmful to adults” and a proposed new offence which would have criminalised “seriously distressing” speech were ditched in a major victory for Big Brother Watch and other free expression and civil liberties groups.
However, these changes were, in some sense, too good to be true.
Rather than scrap regulations regarding lawful expression entirely, the Government created a new statutory duty on social media platforms to consistently apply their own terms and conditions on taking down and restricting content and banning and suspending users.
Our research on platforms’ terms and conditions shows that they can inhibit lawful discussions on issues of racial and gender justice and mental health. In fact, we performed an experiment which found that even speech by politicians from Boris Johnson to Nadine Dorries and Angela Rayner can be, and has been, censored under platform content rules. Furthermore, given major platforms’ pandemic-era terms and conditions prohibiting content that contradicted “health authorities”, this duty would have meant the state regulator Ofcom could issue fines of up to 10% of major platforms’ revenues if they did not consistently censor women describing changes to their periods post-vaccination, people criticising mandatory vaccine passports, or people discussing credible lab-leak theories on the origins of Covid.
Big Brother Watch believes that the creation of a statutory duty on platforms to enforce censorious, corporate policies that are totally out of step with UK speech laws is incompatible with the state’s duty to uphold Article 10 of the European Convention on Human Rights, which protects individuals’ freedom of expression.
3. The internet will become age-gated
Attempts to age-gate pornographic material online via the Digital Economy Act 2017 fell apart by 2019 due to inadequate privacy safeguards. In spite of this failure of government policy, Ministers not only attempted to re-legislate in this domain but extended age-gating to conventional social media sites. This broadening of the concept of age-gating now threatens the ability of adults and children to receive and impart information on major platforms without undergoing invasive age-verification checks.
Under the “child safety duties” in the Bill, a platform must choose between mandating technology that would check the age of all of its users and censoring content in a way that is appropriate for those under the age of 18. Whilst it is vital that children are protected online, this is an entirely disproportionate step that will result in more online censorship and digital exclusion from services that are obliged to respond to this duty.
It is worth mentioning in particular that these obligations also fall on search services. Given the educational benefits of Google and its wide use across all sections of society, including children, it is entirely plausible that the search giant will have to adopt policies that ensure users are not under 18 or else sanitise the information readily available to the general public.
4. Ofcom will not be completely independent
A common theme throughout legislation over the course of this Parliament has been the extent to which the Government have sought to grant themselves executive control over previously independent regulatory systems and, in turn, over new and significant parts of public life.
Compliance with the Online Safety Bill’s regulatory regime can be fulfilled by a platform meeting standards set out in codes that are issued by Ofcom. However, these codes, themselves mini-rules which could dictate the limitations of our speech online, will only be published as secondary legislation, which is barely scrutinised by Parliament. In a clear example of state overreach, the Bill gives the Secretary of State powers to revise these codes of practice in certain circumstances. While the criteria for ministerial intervention in this manner were narrowed, this gives the Government of the day a huge amount of sway over how these online safety rules are established.
The legislation also gives the Secretary of State powers over the scope of the regulatory system and powers to influence Ofcom through formal statements and issued guidance. The upshot will be a system that is not fully independent of Government intervention, with the potential for scenarios where the Government, through pressure or influence, is able to push online intermediaries to narrow the parameters of expression online without the full and proper scrutiny of Parliament.
5. Your personal device may become a tool of state surveillance
The Government’s Online Harms White Paper established a blueprint for what the Online Safety Bill would eventually become. The proposals were focused squarely on the regulation of user-to-user communications on public platforms. The paper even issued the following guarantee:
“Reflecting the importance of privacy, the framework will also ensure a differentiated approach for private communication, meaning any requirements to scan or monitor content for tightly defined categories of illegal content will not apply to private channels.”
These words proved to be entirely meaningless. In the context of a long-running Home Office war on end-to-end encryption, the Government included provisions in the Online Safety Bill which give Ofcom the power to require the use of “accredited” technology to monitor and scan all of our messages across entire messaging channels. This provision, which, despite the best efforts of campaigners, was not materially altered, could prove to be a hammer blow in the battle to protect privacy in the United Kingdom.
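To illustrate what scanning messages on the device itself would involve, the short sketch below shows the general idea of client-side scanning: content is checked against a blocklist on the user’s own phone before it is ever encrypted. This is a deliberately minimal illustration of the technique in the abstract, not a description of any system Ofcom might accredit or of any provider’s actual implementation; every name and value in it is hypothetical.

```python
# Minimal, purely illustrative sketch of client-side scanning.
# The message is hashed and checked against a blocklist on the sender's own
# device *before* any end-to-end encryption would take place.
# All names and values here are hypothetical.
import hashlib

# Hashes of content an external authority wants detected (illustrative only).
BLOCKLIST_HASHES = {
    hashlib.sha256(b"example prohibited content").hexdigest(),
}

def scan_before_encrypting(message: str) -> bool:
    """Return True if the message matches the blocklist and would be flagged."""
    digest = hashlib.sha256(message.encode("utf-8")).hexdigest()
    return digest in BLOCKLIST_HASHES

if __name__ == "__main__":
    for msg in ["hello", "example prohibited content"]:
        status = "flagged before encryption" if scan_before_encrypting(msg) else "sent as normal"
        print(f"{msg!r} -> {status}")
```

The point the sketch makes is simple: once checking happens on the device before encryption, the privacy promise of end-to-end encryption no longer holds, whatever the blocklist happens to contain.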
Users of end-to-end encrypted messaging services are not above the law – indeed, existing law gives the police powers to obtain encrypted data from suspects’ devices already. However, the intrusive powers in the Bill relate to the surveillance of the entire population’s devices and private messages under the guise of “safety” – not just suspects.
For groups who particularly depend on this technology to keep their messages private and secure, the creation of powers that enable the scanning of all messages across entire messaging channels comes as a devastating blow. They threaten journalists, human rights activists, LGBT people, abuse victims, whistleblowers and, ironically, even people in positions of relative power who rely on these services to keep their messages private. Private messaging services such as Signal and WhatsApp have said they will not comply with what would essentially be the mandated installation of spyware on all of our phones, which could lead these services to leave the UK.
While the Government have admitted that these intended powers remain technically unfeasible at this time, their creation opens the door to the sinister prospect of future monitoring and surveillance, the likes of which we have not yet seen in this country. In this vein, it is vital that these powers, which are ultimately sanctioned by Ofcom, continue to be resisted.
Conclusion
All of these measures are yet to be implemented, but as Ofcom takes up its new work in the coming months it must be scrutinised closely. Despite the wins campaigners have achieved in scaling back some of the most egregious attacks on free speech, the final product of 10 years of wrangling is a Frankenstein’s monster of a Bill that will set free expression and privacy back decades in this country.
What we can be sure of upon its implementation is that the Online Safety Bill’s technical illiteracy will almost certainly come crashing into a brutal acquaintance with reality and will be accompanied by a mire of legal compliance questions for affected services. At a time when criminal law is being used to repress protesters, refugees and many other marginalised groups, so too will this regulatory system be used to censor them.
It is quite possible that the Online Safety Bill will fail on its own merits to keep people safe online whilst still undermining our free expression and privacy in ways that we cannot yet comprehend. If this becomes the reality, civil liberties campaigners will take no pleasure in saying that many of our warnings were simply not heeded.