Our weekly review of articles on terrorist and violent extremist use of the internet, counterterrorism, digital rights, and tech policy.

We want to hear from you – End-to-End Encryption

Tech Against Terrorism is currently expanding its expertise on the use of end-to-end encryption (E2EE), and we are seeking to deepen our understanding of how online users and the general public perceive encryption and the wider “encryption debate”.

To inform our understanding of this topic, we would be interested in hearing from you about how you perceive encryption, in particular E2EE, as a user of online services. To this end, we kindly ask you to spare a few minutes of your time to respond to our short and anonymous survey.

TCAP Office Hours

Following the success of the Terrorist Content Analytics Platform (TCAP) office hours in 2020, Tech Against Terrorism is delighted to open registration for the January sessions of the TCAP office hours, to be held on the 27 and 28 January.

You can register for the session on 27 January, at 5:00pm GMT, here, and for the session on 28 January, at 12:00pm GMT, here.


Top stories

In the wake of the storming of the US Capitol by supporters of President Trump on Wednesday, several tech companies took content moderation actions on the US President’s accounts and posts:

  • Twitter has locked the account of President Trump and removed several of his tweets. The social media platform took these actions on the grounds of a “risk of violence”.

  • Facebook has also locked President Trump’s Facebook and Instagram accounts, and removed several of his posts in response to an “emergency situation”. Facebook CEO Mark Zuckerberg later announced that President Trump’s accounts will be suspended for at least the next two weeks (until the end of the presidential transition period in the US).

  • Twitch has announced that they are suspending President Trump’s account for an indefinite period, with the decision to be reviewed after he leaves office. The platform had already suspended President Trump’s account in June, in line with its hateful conduct policy.

  • TikTok has announced that videos of President Trump’s speeches that contain false claims about the US election would be removed from the platform, in line with its misinformation policy, and that it would redirect certain hashtags, including #stormthecapitol, to limit visibility on its platform.

  • President Trump’s Wednesday video address, published during the US Capitol storming, was removed by Facebook and YouTube. Facebook removed it to “diminish the risk of ongoing violence”, and YouTube in line with its policy on false information about the US election results.

  • Snapchat has temporarily suspended President Trump’s account, a decision to be re-evaluated as the platform monitors the situation. Snap had already actioned the President’s account in June, removing it from the Discover tab to reduce its visibility, citing that it could not promote accounts that incite racial violence on its platform.

  • The Global Alliance for Responsible Media, a cross-sector initiative founded by the World Federation of Advertisers, has announced an agreement on harmful content online in collaboration with Facebook, YouTube, and Twitter. The four key areas of the agreement are meant to provide “a common framework for defining harmful content that is inappropriate for advertising”.

  • A court in Sweden has charged a member of the neo-Nazi group Nordic Resistance Movement in his capacity as editor of the group’s website, which the court said contained potentially 28 counts of incitement to racial hatred. You can listen to our podcast on “How Nordic neo-Nazis use the internet” here.
  • YouTube will appoint a local representative to Turkey to comply with the requirements laid out by the country’s Social Media Bill which was passed last October. You can read more about this law and what it means for tech companies in our Online Regulation Series’ blog on Turkey.

Tech policy

    • The Year that Changed the Internet: In a review of the past year, Evelyn Douek analyses how 2020 marked a turning point in social media content moderation policies and enforcement. She reviews some of the most important moderation decisions made by social media companies: from Twitter first adding a “get the facts” label to a tweet by President Trump, to major platforms removing misinformation linked to Covid-19 and cracking down on QAnon. Douek analyses these decisions as an important shift away from tech platforms’ long-standing view that they should not be arbiters of truth online, a “naïve optimism” that she presents as an “overly simplified and arguably self-serving understanding of the [US] First Amendment tradition.” As platforms continue with this shift, Douek also reviews the alternative measures – other than content removal – that platforms are introducing to limit the spread of misinformation, such as Facebook and YouTube adjusting their content moderation algorithms. Douek concludes by stressing that the change in platforms’ policies and enforcement has its limitations: for instance, the lack of attention given by social media companies to their markets outside the US and to non-English content in general. She also stresses that 2020 demonstrated that the “fundamental opacity” of moderation remains, as well as the limits of content moderation that does not “address the social or political circumstances that caused it [the content] to be posted in the first place”. (Douek, The Atlantic, 28.12.2020)

    • Last December, we hosted a webinar on “Content Moderation and Alternatives to Content Removal”, welcoming insights from Alex Feerst, Advisor at the Trust and Safety Professional Association, Bill Ottman, CEO of Minds, and Rachel Wolbers, Public Policy Manager for the Facebook Oversight Board. If you want to access a recording of this webinar, you can email us at [email protected].


      In October 2020, we published a report analysing tech platforms’ response to the spread of Covid-19 misinformation, and how this misinformation was exploited by far-right violent extremism. You can find our report in English here, and in French here.


      Far-right violent extremism and terrorism

      • QAnon and the Storm of the U.S Capitol: The Offline Effects of Online Conspiracy Theories: Marc-André Argentino discusses the storming of the US Capitol by supporters of President Trump and followers of QAnon on Wednesday – the day the US Congress was certifying Joe Biden as the next US President. Argentino presents an overview of the ideology and history of QAnon, which had been “a security threat in the making” for the past three years. In doing so, he analyses the conspiracy theory movement as having evolved into an “extremist religio-political ideology” and a “hyper-real religion”, with its followers viewing QAnon as “the source of truth” despite all evidence presented to counter it. Argentino explains how the movement grew in 2020 (growing by over 581% on Facebook), exploiting the Covid-19 pandemic to spread misinformation and conspiracy theories. Argentino argues that the storming of the Capitol is “the culmination of what has been building up for weeks: the “hopeium” in QAnon circles that some miracle via Vice-President Mike Pence and other constitutional witchcraft would overturn the election results.” (Argentino, The Conversation, 07.01.2021)

      Counterterrorism

      • IntelBrief: New Year, New International Counterterrorism Landscape: In this Brief, the Soufan Center dwells on how the counterterrorism (CT) landscape could change in 2021, in particular with the Biden-Harris administration offering hope for a renewal of US multilateral engagement and improved relations with the UN. Following 9/11, the US had been at the center of the UN counterterrorism framework. Twenty years later, this brief assesses how the US now has two possible ways of engaging with the UN: either by ramping up support for UN CT bodies, and/or via proactive relationships with France and the UK to balance China and Russia on the counterterrorism scene. The brief details the different advantages that the UN offers as a multilateral CT actor, including its convening power. Continuing on the UN’s CT role, the brief notes that the General Assembly is to negotiate a biennial review of the UN Global Counter-Terrorism Strategy this year, whilst the Security Council will renegotiate the mandate of the UN Counter-Terrorism Executive Directorate (UNCTED). The Brief also underlines the role of the Global Counter-Terrorism Forum (GCTF), to be chaired by Canada and Morocco this year, and the possibility of the US turning its support to the GCTF if “frustrated by the UN”. It concludes by noting the role that the UN has played as “an important counterpoint to the excesses of the ‘War on Terror’ approach” over the last decades, and by underlining how US foreign policy choices on multiple stages will shape the CT landscape for years to come. (Soufan Center, 04.01.2021)

      Tech Against Terrorism is supported by UNCTED and our work has been recognised in Security Council resolutions. Recently, we partnered with UNCTED for a webinar on how intergovernmental organisations can support the tech sector in countering terrorist use of the internet whilst respecting human rights. If you want to access a recording of this webinar, you can email us at [email protected].

      • The Tech Industry and the Regulation of Online Terrorist Content: What Do Law Enforcement Think?: Stuart MacDonald and Andrew Staniforth present some early findings from a project on law enforcement and social media cooperation that interviews law enforcement professionals. Generally, interviewees stressed that a voluntary approach based on informing platforms about the threat, rather than coercing platforms to remove content, is more effective. The interviews highlight concerns from law enforcement, including with regard to new legislation on online harms and terrorist content, which risks undermining current efforts and presents jurisdictional concerns. Interviewees also raised concerns about smaller platforms lacking resources and capacity despite their willingness to tackle online terrorist content, and emphasised the role played by organisations such as Tech Against Terrorism in supporting smaller platforms. The risk of losing access to data and information was also raised: namely, law enforcement are concerned about the use of end-to-end encryption, and about the removal of content by platforms making it difficult to trace that content back. Finally, respondents stressed the challenges posed by far-right violent extremism, in particular due to the lack of a clear definition of when such content is illegal. (MacDonald and Staniforth, Hedayah, 03.01.2020)

      At Tech Against Terrorism, one of our core missions is to support smaller tech platforms in tackling terrorist use of the internet whilst respecting human rights. This is why we offer mentorship support, in partnership with the Global Internet Forum to Counter Terrorism, for tech companies wanting to improve their processes and policies to counter terrorist exploitation of their platforms. You can find more information about our mentorship programme, and the ensuing membership through which we continue to provide support beyond mentorship, here and here.


      For any questions, please get in touch via:
      [email protected]