
October 2020 Update


Headline news

  • On 5 October, we launched the Online Regulation Series, our latest knowledge-sharing endeavour aimed at shedding light on the state and future of online content moderation, in particular with regard to terrorist and extremist content. Throughout the series, we have published regular blogposts on different countries’ regulatory frameworks, covering a different region each week. So far, we have covered countries across several regions.
  • The Fondation pour L’Innovation Politique (Fondapol) published our analysis of the online exploitation of Covid-19 by violent far-right extremists and tech platforms’ response to the increase in misinformation, conspiracy theories and extremist content in the early months of the pandemic. You can find the report in English here, and in French here.
  • We started our 2020/21 e-learning webinar series on 21 October with a webinar on Tech Against Terrorism’s Mentorship Programme and Support for Smaller Platforms, during which we welcomed our partners from the Global Internet Forum to Counter Terrorism (GIFCT) and heard Nicholas Rasmussen, GIFCT’s Executive Director, talk through GIFCT membership. If you want to access a recording of this webinar, please send an email to contact@techagainstterrorism.org.
  • In October, we held our first office hours for the Terrorist Content Analytics Platform (TCAP). These bimonthly hour-long sessions provide an update on the development of the platform and answer any questions interested stakeholders might have. The next session will be held in early December, and registration will open closer to the date.
  • On 14 October, our Director, Adam Hadley, joined the discussion on “Counter-terrorism strategy in a post-COVID environment”, organised by Wilton Park, to talk about the cyber terrorist threat.
  • On 26 October, our Research Manager, Jacob Berntsson, took part in the EU Internet Forum workshop on “Right-wing terrorist groups and symbols online”, giving opening remarks on the importance of supporting smaller platforms to counter the spread of terrorist and violent extremist content online.
  • On 30 October 2020, Adam Hadley gave opening remarks during the expert group meeting on “Recently developed operational notes to assist service providers with requests from overseas criminal justice officials for electronic evidence”. This meeting gathered 21 service providers and was organised by the United Nations Office on Drugs and Crime (UNODC), in consultation with the United Nations Security Council Counter-Terrorism Committee Executive Directorate (CTED) and Tech Against Terrorism.


What's up next?

  • The Online Regulation Series continues in November. For the final weeks of the series, we will cover the regulation of online speech and content in South America and look at tech sector initiatives that aim to provide oversight of content moderation.
    • To conclude the series, we will be hosting a webinar on the State of Global Online Regulation on Wednesday, 25 November at 5pm GMT. During this webinar we will present an overview of the online regulation trends we have identified and welcome regional experts to share their insights. You can register for this webinar here.
  • Continuing our 2020/21 e-learning webinar series, our next webinar, at 5pm GMT on 16 December, will cover the complex issue of content moderation and alternatives to content removal. You can register for it here.
  • On 9 December, we will also host a webinar in partnership with UN CTED. This webinar session will cover UN organisations, their work in countering terrorist use of the internet, and the importance of public-private cooperation.
  • Our next podcast will cover the use of alt-tech by terrorists and violent extremists, so stay tuned for this exciting episode!
  • Don't forget to follow us on Twitter to be the first to know when a webinar is announced or a podcast is released!


Tech Against Terrorism Reader's Digest - 6 November

Our weekly review of articles on terrorist and violent extremist use of the internet, counterterrorism, digital rights, and tech policy.


Top stories:

  • Following a terrorist attack in Vienna this Monday, 2 November, our Director, Adam Hadley, discussed terrorist use of the internet, in particular of social media, and the related challenges of identifying terrorist content at scale with John Pienaar on Times Radio. You can listen to the interview here (02:44:42). More information about the attack in Vienna can be found here.
  • Following the passing of a new “social media” law in July, the Turkish government has announced that Facebook, Instagram, Twitter, Periscope, YouTube, and TikTok are each facing a $3.5m fine for failing to appoint a legal representative in Turkey in time, as required by the law. You can read our blogpost on online regulation in Turkey here.
  • The Internet & Jurisdiction Policy Network has published a guide on “DNS-level action to address technical abuse: Due diligence guide for notifiers”.
  • PayPal has terminated the account of domain registrar Epik, citing financial risk issues linked to Epik’s digital currency and the risk of “money laundering”. Epik is known to host both Gab and the Proud Boys’ website, amongst other sites tied to far-right extremism.
  • On Tuesday, European police forces from Italy, France, Greece, Norway, Britain, and the Czech Republic carried out coordinated raids, overseen by Europol, to counter “online hatred and incitement to violence”.


Tech policy – Online Regulation Series, MENA and Sub-Saharan Africa Focus week:

  • Facebook’s Content Moderation Errors Are Costing Africa Too Much: Tomiwa Ilori investigates Facebook’s and Instagram’s mislabelling as “false information” of online content related to recent protests against police violence in Nigeria. Reflecting on the context of what became known on social media as the “Lekki Massacre”, during which 12 protesters were killed, Ilori explains that photos of the protest uploaded under the hashtag #EndSARS (a reference to Nigeria’s Special Anti-Robbery Squad) were labelled as “false information” by the platforms, which linked them to the “severe acute respiratory syndrome” (SARS) that preceded Covid-19. Whilst Facebook and Instagram explained that this was due to an automated system error, Ilori goes on to question the resources dedicated to content moderation in Africa. In particular, he questions online platforms’ understanding of local contexts, especially when combined with an “overreliance on automated flagging”, as well as local representation amongst human moderators, including in terms of language and sub-region. (Ilori, Slate – Future Tense, 27.10.2020)

  • #NoToSocialMediaBill: Nigerians reject moves to restrict use of social media: Nigerian netizens have been protesting on social media against a proposed bill to regulate social media, the “Protection from Internet Falsehood and Manipulations Bill 2019”. The protests come in reaction to recent government calls to regulate social media platforms, which officials have accused of spreading “fake news and inciteful comments during the #EndSARS protests.” (Vanguard, 4.11.2020)
  • To learn more about the proposed bill: Nigerians should say no to social media bill (Anietie Ewang, Human Rights Watch, 26.11.2019)

On tech policy and content moderation, we are also listening to:
The Lawfare Podcast: Casey Newton on Four Years of Platform Chaos, with Evelyn Douek, Quinta Jurecic and Casey Newton.


Far-right violent extremism and terrorism

  • The case against the Nordic Resistance Movement in Finland: an overview and some explanations: In this article, Daniel Sallamaa and Tommi Kotonen examine the Supreme Court of Finland’s landmark decision to ban the Finnish chapter of the neo-Nazi organisation Nordic Resistance Movement (NRM), along with Pohjoinen Perinne Ry (“Northern Tradition”), a registered organisation that facilitated NRM’s activities. Sallamaa and Kotonen explain that this decision follows a temporary ban issued by the Court in 2019, when it agreed to take up NRM’s appeal following its trial on the grounds of “contravening Finnish association law”. While the appeal was being processed, the decision to ban NRM was motivated by the finding that the organisation’s objectives were “illegal, agitation against ethnic and other minorities a part of its propaganda”. The ban decision also took into account acts of violence committed by NRM’s supporters. Overall, the Court deemed that NRM’s objectives and violence violate both Finnish law and international human rights conventions. Noting that the decision does not cover all associations registered by NRM members, the authors explain that members are likely to seek ways to circumvent the ban, as they did by founding a new group, Kohti Vapautta! (“Towards Freedom”), in 2019. The authors conclude that “Finnish right-wing extremist organizations will continue to operate with a more decentralized structure.” (Sallamaa and Kotonen, C-Rex – Center for Research on Extremism, 02.11.2020)
  • Blind Networks in the Extreme-Right: In this piece, Ben Lee looks into how “extreme-right groupuscules have experimented with technology to create blind networks”. Lee argues that these networks are used to coordinate offline activist campaigns and to call for the harassment of ideological opponents via central, anonymous channels on encrypted messaging services. Members of such networks rely on an “anonymous central channel” for instructions for offline activism, including which images to print and turn into campaign stickers, as well as advice on which printers to buy to make them. Members of blind networks are also encouraged to hide their identity online. Lee argues that this “combination of a fragmented transnational extreme-right and communications technology has created an online pool of ready-politicised recruits for different kinds of mobilisation”, allowing for offline activism at little cost and with little risk, and giving the impression of a larger activist network. (Lee, VoxPol, 28.10.2020)


Islamist terrorism

  • Conflans: Deep dive into the Twitter account of the terrorist Abdoullakh Anzorov (Conflans : Plongée dans le compte Twitter du terroriste Abdoullakh Anzorov): French newspaper Libération accessed over 2,000 tweets posted by Samuel Paty’s killer, Abdoullakh Anzorov, before the account was suspended. Fabien Leboucq and Alexandre Horn’s deep dive into the terrorist’s Twitter account reveals a “Muslim fundamentalist, misogynist, antisemitic, racist” individual who did not hide his views on Islamist terrorism. The tweets mix memes common to any young user of the French Twitter sphere with openly hateful posts. Anzorov’s hateful tweets had led the International League Against Racism and Anti-Semitism (Licra) to report several of them, one of which was removed by Twitter in late August. Leboucq and Horn conclude the article with an overview of Anzorov’s tweets showing his interest in, and “jokes” about, radicalisation and terrorism, often referencing infamous Islamist ideologues linked to al-Qaeda and Islamic State. (Leboucq and Horn, Libération, 02.11.2020, article in French)


For any questions or media requests, please get in touch via:
contact@techagainstterrorism.org


Background to Tech Against Terrorism

Tech Against Terrorism is an initiative supporting the global technology sector in responding to terrorist use of the internet whilst respecting human rights, and we work to promote public-private partnerships to mitigate this threat. Our research shows that terrorist groups, both jihadist and far-right, consistently exploit smaller tech platforms when disseminating propaganda. At Tech Against Terrorism, our mission is to support smaller tech companies in tackling this threat whilst respecting human rights and to provide companies with practical tools to facilitate this process. As a public-private partnership, the initiative works with the United Nations Counter-Terrorism Committee Executive Directorate (UN CTED) and has been supported by the Global Internet Forum to Counter Terrorism (GIFCT) and the governments of Spain, Switzerland, the Republic of Korea, and Canada.