The United Kingdom has set out an ambitious online regulatory framework in its Online Harms White Paper, which aims to make the UK “the safest place in the world to be online” by countering online harms ranging from cyberbullying to terrorist content. The framework is yet to come into effect, but the UK has approved an interim regime to fulfil its obligations under the EU’s Audiovisual Media Services Directive (AVMSD), with which the UK must comply for the duration of the Brexit negotiations. The UK also has extensive counterterrorism legislation criminalising the viewing and sharing of terrorist content online.
UK counterterrorism legislation:
The Terrorism Act 2000, the Terrorism Act 2006 & the Counter-Terrorism and Border Security Act 2019
Under an amendment to section 58 of the Terrorism Act 2000, introduced by the Counter-Terrorism and Border Security Act 2019, viewing terrorist content online even once may carry a prison sentence of up to 15 years. However, penalisation depends on the person knowing the purpose of that content (it being terrorist in nature) and lacking a reasonable excuse (which includes journalistic or academic work).
The former Independent Reviewer of Terrorism Legislation, Max Hill, raised questions about the amendment to section 58 of the Terrorism Act 2000 when it was proposed in 2017. In a joint response, he and Professor Clive Walker of the University of Leeds School of Law asked whether an amendment was needed in the first place. They concluded that the existing sections 1 (encouragement of terrorism), 2 (dissemination of terrorist publications), and 5 (preparation of terrorist acts) of the Terrorism Act 2006 were sufficient for prosecuting and criminalising the online viewing of terrorist content, and therefore argued that the amendment was unnecessary.
The Independent Reviewer subsequently considered the proposed amendment, which at that stage still set out to criminalise the “repeated viewing” of terrorist content on the internet. Even on this narrower premise, the Independent Reviewer warned that the law had the potential to “catch far too many people”. However, as noted above, the final Act went a step further, dropping the “repeated viewing” element and criminalising one-off viewing of terrorist material. The Independent Reviewer’s concerns were publicly shared by civil society groups, who cautioned that the provision could have a detrimental impact on freedom of speech.
The Independent Reviewer’s original criticism also identified potential issues with users having to understand the “purpose of content” in order for the law to be effective, arguing that viewing of terrorist content does not necessarily mean that a user understands its purpose. This line of criticism can also be applied to sharing and disseminating content, as again, users might not be aware that the content is there for “terrorist purposes”.
Furthermore, the United Nations Special Rapporteur on human rights and counter-terrorism, Professor Fionnuala Ní Aoláin, criticised the Counter-Terrorism and Border Security Act 2019 for resting on a “conveyor-belt” understanding of radicalisation and the take-up of violence, pointing out that there is little academic support for the theory that an individual will become radicalised by viewing terrorist content alone. Ní Aoláin also stated that whilst there are some protections for academics and journalists, other users’ right to seek, receive, and impart information would be infringed.
Online Harms White Paper
The Online Harms White Paper was published in April 2019 by the UK Home Office and the Department for Digital, Culture, Media and Sport (DCMS).
The proposed legislation has not yet entered Parliament, but a consultation process was held in 2019. In total, 2,400 responses were received from a broad range of stakeholders, including larger and smaller tech companies, governments, academics, think tanks, civil society groups, and publishers.
The White Paper covers a broad range of online harms, distinguishing between “potentially harmful content” and “illegal content”. Illegal content includes child sexual exploitation material as well as terrorist content. This distinction was made to ensure the proportionality of the legislation, meaning that the most extreme content requires “further action” from platforms. However, the proposal defines neither terrorist content nor what going “further” entails. It limits itself to suggesting that content removal should be preferred for illegal content, whilst other online harms should be addressed through the content processes tech companies already have in place.
The proposed legislation has received criticism, particularly over its lack of definitional clarity and concerns that its enforcement mechanisms could lead to the over-removal of content.
The Interim Regime
The Interim Regime will ensure that the UK upholds its obligations under the EU’s Audiovisual Media Services Directive (AVMSD) until the Online Harms legislation is passed. As such, the Interim Regime applies to all UK video-sharing platforms (VSPs). The EU updated the AVMSD, which governs Union-wide coordination of national legislation on audiovisual services (such as television broadcasts), in 2018 to include VSPs. The Directive encourages Member States to ensure that VSPs operating under their jurisdiction comply with its requirements, including preventing the dissemination of terrorist content. The European Commission has specified that VSP status primarily concerns platforms that have the sharing of user-generated video content either as their main purpose or as one of their core purposes, meaning that in theory the AVMSD could apply to social media platforms as well.
Echoing the feedback on the Online Harms White Paper, criticism of the Interim Regime raised by legal experts, civil society groups, and tech companies consolidates around two issues: enforcement mechanisms that might lead to over-removal and potentially hinder competition and innovation, and a lack of definitional clarity when it comes to harmful content, particularly terrorist content.
However, Ofcom’s most recent guidance for VSPs specifies that its first priority is to work with VSPs to strengthen existing measures or implement new ones in order to comply with the Interim Regime during its “early regulatory phase”. In addition, Ofcom has provided guidance on how to determine proportionality between the action taken by a VSP and the level of harm of a particular piece of content. Ofcom stipulates that the size of a VSP will be taken into account both in the proof of compliance expected of it and in Ofcom’s enforcement approach. Whilst this guidance clarifies some of the new requirements placed on VSPs, it is likely to change throughout the early regulatory phase.
Tech Against Terrorism submitted a response to Ofcom’s consultation process on the regulation of VSPs, which concluded in September; the response can be found here.
 Restricted material constitutes “videos which have or would be likely to have an R18 certificate, or which have been or would likely be refused a certificate. It also means other material that might impair the physical, mental or moral development of persons under the age of 18”.
 UK Consultation Report.
 The UK’s obligations under the European directives last only for the duration of the Brexit negotiations; the legal concerns might therefore become less relevant. However, whilst the UK might no longer have to fulfil the European directives, the potential implications for freedom of speech and intermediary liability remain valid for post-Brexit Britain.
Article 19 (2019), Response to the Consultations on the White Paper on Online Harms.
Global Network Initiative (2020), Content Regulation and Human Rights.
Human Rights Watch (2020), Social Media Platforms Remove Evidence of War Crimes.
Natasha Lomas (2019), UK Sets Out Safety-focused Plan to Regulate Internet Firms, TechCrunch.
Osborne Clarke (2020), Online Harms Regulation: Clarity Awaited but Reforms Set to Be Delayed.
Tech Against Terrorism (2020), Summary of Tech Against Terrorism’s Response to Ofcom’s Consultation Process on the Regulation of Video-Sharing Platforms.
UK Government (2020), Online Harms White Paper – Initial Consultation Response.
Vinous Ali (2019), TechUK Comments on the Government’s New Online Harms White Paper, TechUK.