Our weekly review of articles on terrorist and violent extremist use of the internet, counterterrorism, digital rights, and tech policy.

Top stories

  • In a recent piece for The Independent, Tech Against Terrorism cautions against focusing all attention on big tech in the fight against online terrorism, and calls for more support for smaller platforms, which have the will but often lack the capacity to tackle terrorist exploitation: “The most effective way to prevent terrorists’ exploitation of smaller social media companies, while ensuring online competition thrives, is to provide them with the practical tools to make their platforms more secure.”
  • On Wednesday, 29 July, the CEOs of Facebook, Google, Amazon and Apple faced their first joint antitrust hearing in front of the US Congress. You can read about the key takeaways from the hearing, and its possible future implications for tech regulation, here.
  • The European Union’s Internet Referral Unit (EU IRU) has released its latest annual report on the state of Islamist terrorist propaganda online and how it reflects political and operational realities. Read the report here.

Tech policy 

  • TikTok’s new CEO says company will reveal how algorithms work: In an “unprecedented” move, TikTok’s new CEO, Kevin Mayer, announced that the social media platform would open its content-moderation algorithms to outside experts, allowing a select panel of experts to observe how the popular platform enforces its content moderation policies. Mayer also announced that the company would disclose its data flows to regulators. TikTok is also set to launch a “Transparency and Accountability Center” in Los Angeles to house the platform’s data flows and code, and to “host online tours of its data during the pandemic.” (Fisher, Axios, 29.07.2020)
     
  • The balancing act of content moderation in Europe – Lessons from the French “Avia law” for the Digital Services Act: Francois-Xavier Dussart uses the recent example of the partially struck-down French “Avia law”, aimed at combating hateful content on the internet, as a cautionary tale for the EU Digital Services Act (DSA). The French Constitutional Council declared parts of the law unconstitutional because its disproportionate requirements for tech platforms risked encouraging them to censor potentially legal content. Dussart thus cautions against the DSA being overly broad in scope, and calls for it to “provide a coherent and appropriately narrow legal framework for content moderation in the European Union, and to embrace the diverse and complex online ecosystem through an appropriate scope with clear definitions and safeguards.” According to Dussart, such a coherent and evidence-based framework should have the principles of necessity and proportionality at its core in order to safeguard fundamental freedoms. (Dussart, Global Network Initiative, 27.07.2020)
  • Thinking through transparency and accountability commitments under the Digital Services Act: In the latest blogpost of a Global Network Initiative series on the EU Digital Services Act (DSA), Spandana Singh looks at how the DSA could support accountability and transparency across the tech sector. Singh emphasises key considerations for the EU when calling for greater transparency through the DSA, including the need to balance the standardisation of transparency reports against how specific platforms actually work. This balance is important both for building a broad understanding of content moderation across the tech sector and for safeguarding diversity and innovation in the metrics platforms produce. On innovation, Singh stresses that transparency requirements, however important, should not create barriers that limit competition. The DSA should also consider to whom transparency reports are addressed – users, researchers, or governments, for instance – and how to support transparency practices that are relevant to a broad range of actors. Finally, Singh underlines the need for greater transparency from governments, especially given the increased role of governments and intergovernmental Internet Referral Units in moderating online content. (Singh, The Global Network Initiative, 20.07.2020)

Far-right violent extremism and terrorism 

  • It’s too late to stop QAnon with fact checks and account bans: Following Twitter’s announcement that it would act to prevent the spread of QAnon on its platform, TikTok’s blocking of QAnon-related hashtags, and reports that Facebook will soon take action of its own, Abby Ohlheiser examines the conspiracy theory, its spread on social media, and what should be done to counter it. Ohlheiser begins with a comprehensive overview of QAnon and its online presence, from its emergence on 4chan in 2017 to the harassment campaigns it spawned on social media and offline acts of violence. Analysing the tech sector’s response to QAnon, she argues that tech platforms need to go beyond bans and fact checks, and instead educate users about misinformation and the risk of being targeted by “information warfare” through coordinated campaigns. She concludes that the whole information ecosystem needs to be rethought, as QAnon supporters are already devising alternatives to bypass the new restrictions. (Ohlheiser, MIT Technology Review, 26.07.2020)
     
  • Twitter brings down the banhammer on QAnon: Reflecting on the wave of enforcement actions taken by major online platforms in recent weeks – from new policies on Covid-related misinformation, to Facebook’s ban of the Boogaloo movement, to YouTube’s decision to remove several prominent far-right channels – Evelyn Douek looks at what these changes mean for the future of content moderation. Douek calls for greater transparency and accountability around newly announced or enforced moderation policies. Taking Twitter’s recent crackdown on QAnon as an example, she highlights that increased enforcement does not necessarily “signify a new era of accountability in content moderation”, and notes how tech companies are often able to implement new moderation rules with little scrutiny. In response to this lack of clarity, she stresses the importance of clearly explained and transparent rules that users can rely on to hold platforms accountable. Douek concludes by calling for greater clarity around company policy and enforcement updates, building on the efforts made in recent years to render platforms more transparent. (Douek, Lawfare, 24.07.2020)

Islamist terrorism 

  • ISIS and al-Qaeda’s commentary on American protests: Whilst violent extremist exploitation of social unrest in the US – whether related to Covid-19 or to the protests that followed the killing of George Floyd – has attracted considerable attention, Sammie Wicks looks at how international terrorist organisations have been framing the US protests to serve their own narratives. To do so, Wicks examines Islamic State (IS) and al-Qaeda’s commentaries on the protests. Whilst IS mostly questions America’s strength and its capacity to counter the terrorist group, al-Qaeda focuses on reaching out to American audiences, especially African-Americans, calling on them to “embrac[e] Islam, take action, and embrac[e] sacrifice” in order to free themselves from racial injustice. Wicks concludes that the groups’ arguments demonstrate a clear attempt to exploit grievances and perceived injustices as a pathway to action. (Wicks, ICCT, 23.07.2020)