Our weekly review of articles on terrorist and violent extremist use of the internet, counterterrorism, digital rights, and tech policy.


Special edition: online regulation 

Due to recent developments in the online content regulation landscape around the world, this special edition of the Tech Against Terrorism Reader's Digest provides an overview of articles discussing the most recent legislative and regulatory initiatives.

Tech Against Terrorism has focussed on online regulation in the past. Most recently, we discussed the topic at length in our podcast series. In our webinar series, we have also covered related topics, including definitions of terrorism, Terms of Service, and transparency reporting. If you wish to access a recording of these webinars, please get in contact with us.


Current regulatory initiatives

The legal response of Western democracies to online terrorism and extremism: In this article, Nery Ramati explores the different legal responses of Western democratic states to online extremism. Ramati shows how countries’ legal histories, in particular whether a country has experienced an ongoing conflict, help determine how the online sphere is regulated for counterterrorism purposes. Ramati identifies four main strategies to prevent “online extremism and terrorism”: removal of content, surveillance of online activity, criminalisation of certain public expression, and justification of restrictive administrative measures based on online content. In addition, Ramati analyses the legal powers that influence what content should be allowed on tech companies' platforms. Ramati also stresses that countries often increase surveillance as part of their counterterrorism measures, potentially risking restrictions on the right to privacy. (Ramati, Vox-Pol, 04.03.2020)

Germany: 

The problems with Germany's new social media hate speech bill: In this article, Federico Guerrini analyses what he sees as the problematic nature of Germany’s hate speech law and how it has set a precedent for other countries. The law, the Network Enforcement Act (NetzDG), requires tech companies to take down “offensive content” within 24 hours or face fines of up to 50 million euros. Guerrini highlights criticisms of the law, stating that it risks leading to censorship and inhibiting freedom of speech. In addition, the NetzDG hands tech companies the responsibility of legal adjudication in deciding what content is covered by the law. Given the high fines, Guerrini argues that companies are likely to err on the side of removal, with potential negative consequences for freedom of speech. Guerrini also raises the point that non-democratic states might use the NetzDG as justification for implementing their own initiatives to restrict online free expression. (Guerrini, Forbes, 03.03.2020)

United Kingdom: 

Online harms regulation – Clarity awaited but reforms set to be delayed: This review by Osborne Clarke, an international legal practice, analyses the current state of the UK Online Harms White Paper and the questions that still need to be answered. The proposed regulation, which introduces new duty of care obligations covering both illegal content and “potentially harmful content”, will set an important precedent for other countries. The authors therefore stress that clarity is all the more important to bring cohesion to a fragmented cross-border legal landscape, and that questions about how the new codes of conduct and duty of care will be implemented alongside platforms’ protection from liability, as well as how automated solutions should be used to detect content, need to be answered promptly. Finally, the authors highlight the effects on smaller tech companies: the trickle-down effect of the regulation will be significant, and some tech companies are unlikely to be able to comply with its measures. (Osborne Clarke, 11.05.2020)

France: 

France’s new online hate speech law is fundamentally flawed: In this article, Chloe Hadavas presents the criticism by David Kaye, UN Special Rapporteur on Freedom of Expression, of France’s new law on online hate speech. The law stipulates that hateful or discriminatory comments based on race, gender, disability, sexual orientation, or religion must be taken down within 24 hours, while content related to terrorism and child exploitation must be removed within one hour. Companies that fail to meet these deadlines risk steep fines. According to Kaye, the law risks harming freedom of expression, especially due to its broad definitions of terms such as “extremism” and “inciting hatred”. The law also encourages the use of data-driven content moderation solutions, such as artificial intelligence (AI), which currently cannot capture the nuances of speech, including hate speech, and therefore risk being ineffective or taking down legal content. In addition, whilst bigger tech companies are able to meet the requirements set out in the new law, smaller platforms lack the resources to do so. As a result, Kaye says that the law risks leading to self-censorship, due to its unclear definitions and the threat of fines, as well as feeding the narrative of perceived online censorship found in elements of violent extremist circles. Instead, Kaye suggests that transparency and oversight standards are a favourable “blueprint” for countering hate speech online, which this law, according to him, fails to introduce. (Hadavas, Slate, 26.06.2020)

United States: 

Trump signs order that could punish social media companies for how they police content, drawing criticism and doubts of legality: Following US President Trump’s 28 May Executive Order, Tony Romm and Elizabeth Dwoskin examine some of the critiques of the approach outlined in the Order. The Executive Order seeks to change Section 230 of the Communications Decency Act, the federal law that lets tech companies decide what is allowed on their platforms and protects them from liability for content posted by users. Romm and Dwoskin note the challenges surrounding the legality of the Executive Order, which according to them risks undermining the First Amendment. This criticism has been echoed by a range of entities, including lawmakers, tech companies, digital experts, free speech activists, and what Romm and Dwoskin call “longtime conservative-leaning advocacy groups”. (Romm & Dwoskin, Washington Post, 29.05.2020)

Pakistan: 

Pakistan's Online Harm Rules: Right to Privacy and Speech Denied: In this article, Aryan Garg sets out the criticism of Pakistan’s “Citizen’s Protection Against Online Harms” rules, which were approved in January of this year. The rules require social media companies to remove content related to terrorism, extremism, violence, hate speech, or other content deemed to pose a risk to national security within 24 hours, or within 6 hours in emergency cases. To ensure compliance, the government has appointed a National Coordinator to monitor tech company efforts under the rules. Garg points towards several potential problems with the position of National Coordinator: the lack of transparency about the Coordinator’s qualifications and accountability, and the risk that the scope of its powers poses to individual human rights. In addition, because the rules define harmful content only vaguely, companies might err on the side of removal and take down potentially lawful content so as not to be fined. Taken together, Garg argues, the rules risk harming digital rights and online freedom of speech in Pakistan. (Garg, Vox-Pol, 27.05.2020)

Malaysia: 

The Lawfare Podcast: Gabrielle Lim on the life and death of Malaysia's Anti-Fake News Act: This podcast, moderated by Evelyn Douek and Quinta Jurecic, discusses Gabrielle Lim’s study of Malaysia's Anti-Fake News Act, which was unveiled in 2018. Douek, Jurecic, and Lim point to the significance of this case study, as it shows the importance of countering “dangerous rhetoric” around disinformation and illustrates how the way disinformation is talked about in the West has global impact. The law set out to counter fake news, which was framed as a national security threat. However, Lim's study concluded that the law would have been more likely to harm free speech than to stop the circulation of fake news. For example, all content shared in text, audio, or video format would have been at risk of being labelled "partly" or "wholly false", and if such content was not removed from their platforms, tech companies and their employees would have faced fines as well as jail time. Lim therefore argues that the law would have served as a tool to dissuade political dissent. Following objections from civil society and activist groups, the Act was repealed. (The Lawfare Podcast, 28.05.2020)


Online regulation – the expert perspective

Systemic Duties of Care and Intermediary Liability: In this article, Daphne Keller examines newly proposed and implemented laws targeting illegal content online, and the inherent challenges of the systemic duty of care and intermediary liability principles in Europe and elsewhere. The systemic duty of care suggested in these proposals aims to “set a standard for platforms' overall content moderation system” and “to coexist with ordinary intermediary liability laws”. It also obliges platforms to meet the standard of care requirements in order to claim immunity from liability, to install proactive monitoring measures, and to remove illegal content. Keller identifies two practical consequences: companies would need to improve existing “notice-and-takedown systems” and to proactively detect and remove such content. However, the real challenge arises around liability litigation, which is twofold. First, if a platform was unaware of the content in question, it can be argued that it should have been aware, given the increased proactive measures stipulated by the systemic duty of care. Second, if a platform is aware of the content but fails to act accordingly by taking it down, it can also be held liable. (Keller, Stanford Center for Internet and Society, 28.05.2020)

Broad Consequences of a Systemic Duty of Care for Platforms: In this follow-up piece, Keller dives into the consequences of implementing a systemic duty of care, considering two different models: prescriptive and flexible. In the prescriptive model, tech companies would be given specific proactive measures to take. This would protect platforms from liability and meet the goals often outlined by policymakers, such as swift removal of illegal content. However, one of the methods tech companies could be required to deploy to find such content is the filtering of content, which in practice risks hurting fundamental rights. It would also involve steep costs that small tech companies might not be able to bear, risking harm to competition. In the flexible model, tech companies would be given broadly defined obligations. Keller outlines how this would lead to more reliance on platforms’ own Terms of Service, rather than on what the laws in question deem to be illegal content. This would increase the risk of platforms being held liable, and reduce accountability and transparency. However, it could also pave the way for a more diverse online landscape of expression and more competition. (Keller, Stanford Center for Internet and Society, 01.06.2020)

Have you worked on online regulation and have an article to share with us? Please email us at contact@techagainstterrorism.org, and we will add it to our resources on the matter.


Counterterrorism 

Note on Counter-Terrorism and Sentencing Bill: Sentencing Reforms: In this analysis, Jonathan Hall, the UK's Independent Reviewer of Terrorism Legislation, provides the first of a number of notes on the UK Counter-Terrorism and Sentencing Bill, as introduced into Parliament on 20 May 2020. He points to two noteworthy parts of the Bill: the “serious terrorism sentence” (for offenders whose actions were “very likely” to result in multiple deaths), and the removal of Parole Board consideration for certain dangerous terrorist offenders, including minors. The first, Hall points out, is problematic because the risk of reoffending must be predicted at the time of the initial sentencing, which is difficult to assess, and because whether someone's actions were “very likely” to result in multiple deaths is itself sometimes difficult for a court to determine. Regarding the removal of the Parole Board, Hall argues the Bill removes an offender’s incentive for good behaviour, eliminates risk assessments during an offender’s sentence, and denies minors the possibility of earning early release through rehabilitation. Hall notes that both parts of the Bill rest on the same principle: that terrorist offenders pose a long-term risk to society and that the best way to protect the public is to lengthen their sentences and remove the possibility of early release. (Hall, Independent Review, 01.06.2020)
