
September 2020 Update



Headline news 

  • Tech Against Terrorism podcast alert: Incels, online misogyny and gender-based terrorism: We are very excited to announce that the new episode of the Tech Against Terrorism Podcast is now live! In this episode we analyse incels, the extent to which they present a threat both online and offline, and what tech companies can do to take action on violent manifestations of this movement. We explore these topics with Dr. Debbie Ging from the School of Communications at Dublin City University, and Alex DiBranco, co-founder and executive director of the Institute for Research on Male Supremacism. Listen here.
  • Earlier this week, we released our response to the consultation process for the EU Digital Services Act (DSA), which was organised to seek input on issues that will help shape the EU’s rulebook on digital affairs. In our response, we highlighted three areas that we deem essential in guiding the EU in creating policies that tackle terrorist use of the Internet:

  • Accountability – Governments should provide leadership and strategic thinking in tackling terrorist use of the internet
  • Rule of Law – Governments’ efforts should be based on the rule of law and pay due regard to human rights, in particular freedom of expression
  • Transparency – We encourage both governments and tech companies to be transparent in their efforts to counter terrorist use of the internet

  • Our full response can be found here.

  • Our director, Adam Hadley, spoke at the first conference organised by the Safety Tech Innovation Network, an initiative set up by the UK’s Department for Digital, Culture, Media and Sport (DCMS), Nominet, and KTN. He discussed mental health considerations in content moderation and the potentially negative effects experienced by content moderators.

  • Tech Against Terrorism recognises the mental health effects of content moderation and organised a webinar on this important topic last year. If you would like access to the recording, please reach out to contact@techagainstterrorism.org.

  • In addition, to help the tech sector manage content moderation and its negative implications for mental health, Tech Against Terrorism is developing the Terrorist Content Analytics Platform (TCAP). The platform will assist smaller tech companies in detecting harmful content while taking mental health considerations into account. For more information on how we intend to do this, please see our consultation process.

  • Tech Against Terrorism also spoke at a webinar on money muling, organised by Comply Advantage and FINTRAIL, which highlighted the role of social media and financial services in crime and terrorist financing. To watch the recording of this webinar, please click here.

What's up next? 

  • We are delighted to introduce office hours for the Terrorist Content Analytics Platform (TCAP). We will conduct bi-monthly, hour-long sessions to share updates and to answer any questions that stakeholders from the tech sector, academia, and civil society have regarding the platform. The first sessions will take place next week, on 7 and 8 October.

  • If you would like to attend, please register via the links below:

    7 October at 5:30pm BST. Register here: https://us02web.zoom.us/webinar/register/WN_rWk_LPJTT4mQ3Y9G_rZ0yA

    8 October at 11:30am BST. Register here: https://us02web.zoom.us/webinar/register/WN_NQq8h906QwCE2WzT38NEIw

  • The Online Regulation Series - mapping global regulation of online terrorist activity: For over a month (5 October – 13 November), Tech Against Terrorism will shed light on the global regulation of online speech and content. Each week, we will highlight different regions' online regulation frameworks and analyse the implications for tech companies. Make sure to follow us on Twitter @techvsterrorism to keep up to date with our releases!

  • If you want to contribute, please get in touch at contact@techagainstterrorism.org.

  • Our e-learning webinar series for 2020/2021 is starting on 21 October, with a webinar on Tech Against Terrorism's Mentorship Programme and Support to Smaller Platforms. You can register here.
  • Our next podcast episode will cover the decentralised web and its implications for terrorist use of the Internet. Keep an eye on Twitter for its release!


Tech Against Terrorism Reader's Digest – 2 October

Our weekly review of articles on terrorist and violent extremist use of the internet, counterterrorism, digital rights, and tech policy.


Top stories:


Tech policy 


  • ‘It’s a War on Words’: Turks Fear New Law to Muzzle Social Media Giants:
     In this article, Bethan McKernan looks at Turkey’s new social media legislation, which came into effect on 1 October. The law compels social media companies with over a million daily users in Turkey to establish a formal presence in the country and to respond within 48 hours to complaints about posts that “violate personal and privacy rights”. International companies would also need to store user data in Turkey. If social media companies do not comply with the new criteria within six months of the legislation coming into effect, Turkish authorities will be able to ban advertising on the platforms, impose heavy fines, and reduce the sites’ bandwidth by up to 90%. The law also allows courts to order Turkish news websites to remove content within 24 hours. McKernan notes that the law has been criticised for its potential negative effects on freedom of expression. (McKernan, The Guardian, 27.09.20)

  • For more insight into how countries regulate online content, stay tuned for Tech Against Terrorism’s Online Regulation Series, which will be launched next week.

  • Exclusive: Where Trump and Biden Stand on Tech Issues: In this article, Ashley Gold analyses a report published by the Information Technology & Innovation Foundation comparing the positions of the US presidential candidates, President Trump and former Vice President Joe Biden, on technology and innovation. According to the report, Joe Biden has laid out a concrete tech agenda, whereas Donald Trump has focused on tax cuts and deregulation while criticising tech firms for a perceived anti-conservative bias. The report covers both candidates’ positions on hate speech and misinformation, with each calling for online platforms to change how they moderate content. Gold stresses that the tech industry should prepare either for four more years of Trump’s “impulsive policy approach” or for a Biden administration that is likely to “be critical of tech but slow to take action”. (Gold, Axios, 28.09.20)

  • Content Moderation Case Study: Twitter's Algorithm Misidentifies Harmless Tweet As 'Sensitive Content' (April 2018): In this article, TechDirt highlights a malfunction in Twitter’s algorithmic content moderation software and assesses the content moderation decisions and policy changes the company could make in response. Twitter’s algorithm had accidentally flagged an innocuous image, and the account that posted it, as “sensitive”. Though the block was removed, the user was never directly contacted about the alleged violation. The article lists questions that Twitter should address about its content moderation decision-making process, such as whether the process should be “stop-gapped by human moderators”, and whether false positives such as this one are common enough that a notification process should be implemented. The article also raises policy considerations, for example whether Twitter should weigh changing its content rules to further deter the posting of sensitive content. (Tech Dirt, 25.09.20)

Far-right violent extremism and terrorism

  • Geographically Contextualising Right-Wing Extremism for Tech Platforms: A Perspective from India: In this piece, Kabir Taneja and Maya Mirchandani argue that American white supremacist and far-right extremist narratives are increasingly similar to those of the Hindutva movement in India. They compare the US-based QAnon and Boogaloo movements with Hindutva groups, showing how both thrive on using technology and social media to spread ideologies that have also led to offline violence. The article highlights how Hindutva’s tactical methods of radicalisation resemble American far-right groups’ propaganda, through the exploitation of social media and messaging apps and the use of memes. According to Taneja and Mirchandani, such similarities are often ignored by the global counterterrorism community. They conclude that it is crucial for tech platforms to formulate a single policy objective that addresses extremism globally, not just in the West, while taking regional nuances into account. (Taneja and Mirchandani, GNET, 25.09.20)


For any questions or media requests, please get in touch via:
contact@techagainstterrorism.org


Background to Tech Against Terrorism

Tech Against Terrorism is an initiative supporting the global technology sector in responding to terrorist use of the internet whilst respecting human rights, and we work to promote public-private partnerships to mitigate this threat. Our research shows that terrorist groups, both jihadist and far-right, consistently exploit smaller tech platforms when disseminating propaganda. At Tech Against Terrorism, our mission is to support smaller tech companies in tackling this threat whilst respecting human rights and to provide companies with practical tools to facilitate this process. As a public-private partnership, the initiative works with the United Nations Counter-Terrorism Committee Executive Directorate (UN CTED) and has been supported by the Global Internet Forum to Counter Terrorism (GIFCT) and the governments of Spain, Switzerland, the Republic of Korea, and Canada.