September 2020 Update
What’s up next?
- We are delighted to introduce office hours for the Terrorist Content Analytics Platform (TCAP). Starting next week, on 7 and 8 October, we will hold twice-monthly, hour-long sessions to share updates and answer any questions that stakeholders from the tech sector, academia, and civil society have regarding the platform.
If you would like to attend, please register via the links below:
7 October, at 5:30pm BST. Register here: https://us02web.zoom.us/webinar/register/WN_rWk_LPJTT4mQ3Y9G_rZ0yA
8 October at 11:30am BST. Register here: https://us02web.zoom.us/webinar/register/WN_NQq8h906QwCE2WzT38NEIw
- The Online Regulation Series – mapping global regulation of online terrorist activity: From 5 October to 13 November, Tech Against Terrorism will shed light on the global regulation of online speech and content. Each week, we will highlight a different region’s online regulation framework and analyse its implications for tech companies. Make sure to follow us on Twitter @techvsterrorism to keep up to date with our releases!
If you want to contribute, please get in touch at [email protected].
- Our e-learning webinar series for 2020/2021 is starting on 21 October, with a webinar on Tech Against Terrorism’s Mentorship Programme and Support to Smaller Platforms. You can register here.
- Our next podcast episode will cover the decentralised web and its implications for terrorist use of the Internet. Keep an eye on Twitter for its release!
Tech Against Terrorism Reader’s Digest – 2 October
Our weekly review of articles on terrorist and violent extremist use of the internet, counterterrorism, digital rights, and tech policy.
- ‘It’s a War on Words’: Turks Fear New Law to Muzzle Social Media Giants: In this article, Bethan McKernan looks at Turkey’s new legislation for social media, which came into effect on 1 October. The law compels social media companies with over a million daily users in Turkey to establish a formal presence in the country and to respond within 48 hours to complaints about posts that “violate personal and privacy rights”. International companies will also need to store user data in Turkey. If social media companies do not comply with the new criteria within six months of the law taking effect, Turkish authorities will be able to ban advertising on their platforms, impose heavy fines, and throttle the sites’ bandwidth by up to 90%. The law also allows courts to order Turkish news websites to remove content within 24 hours. McKernan notes that the law has been criticised for its potential negative effects on freedom of expression. (McKernan, The Guardian, 27.09.20)
For more insight into countries’ regulatory frameworks on content regulation, stay tuned for Tech Against Terrorism’s Online Regulation Series, which launches next week.
- Exclusive: Where Trump and Biden Stand on Tech Issues: In this article, Ashley Gold analyses a report published by the Information Technology & Innovation Foundation comparing the positions of the US presidential candidates, President Trump and former Vice President Joe Biden, on technology and innovation. According to the report, Joe Biden has laid out a concrete tech agenda, whereas Donald Trump has focused on tax cuts and deregulation while criticising tech firms for a perceived anti-conservative bias. The report features President Trump’s and Joe Biden’s positions on hate speech and misinformation, with both candidates calling for online platforms to change how they moderate content. Gold stresses that the tech industry should prepare either for four more years of Trump’s “impulsive policy approach” or for a Biden administration that is likely to “be critical of tech but slow to take action”. (Gold, Axios, 28.09.20)
- Content Moderation Case Study: Twitter’s Algorithm Misidentifies Harmless Tweet As ‘Sensitive Content’ (April 2018): In this article, TechDirt highlights a malfunction of Twitter’s algorithmic content moderation software and assesses the moderation decisions and policy changes the company could consider in response. Twitter’s algorithm had accidentally flagged an innocent image, and the user’s account, as “sensitive”. Though the block was removed, the user was never directly contacted about the alleged violation. The article lists questions that Twitter should address about its content moderation decision-making process, such as whether the process should be “stop-gapped by human moderators”, and whether false positives such as this one are common enough to warrant a notification process. It also raises policy implications to consider, for example whether Twitter should weigh the option of changing its content rules to further deter the posting of sensitive content. (TechDirt, 25.09.20)
Far-right violent extremism and terrorism
- Geographically Contextualising Right-Wing Extremism for Tech Platforms: A Perspective from India: In this piece, Kabir Taneja and Maya Mirchandani argue that American white supremacist and far-right extremist narratives are increasingly similar to those of the Hindutva movement in India. They compare the US-based QAnon and Boogaloo movements with Hindutva groups, showing how both thrive on using technology and social media to spread ideologies that have also led to offline violence. The article highlights how Hindutva’s tactical methods of radicalisation resemble the propaganda of America’s far-right groups, through the exploitation of social media and messaging apps and the use of memes. According to Taneja and Mirchandani, such similarities are often ignored by the global counterterrorism community. They conclude that it is crucial for tech platforms to formulate a single policy objective that addresses extremism globally, not just in the West, while taking regional nuances into account. (Taneja and Mirchandani, GNET, 25.09.20)
For any questions or media requests, please get in touch via:
Background to Tech Against Terrorism
Tech Against Terrorism is an initiative supporting the global technology sector in responding to terrorist use of the internet whilst respecting human rights, and we work to promote public-private partnerships to mitigate this threat. Our research shows that terrorist groups – both jihadist and far-right – consistently exploit smaller tech platforms when disseminating propaganda. Our mission is to support smaller tech companies in tackling this threat and to provide them with practical tools to facilitate this process. As a public-private partnership, the initiative works with the United Nations Counter-Terrorism Committee Executive Directorate (UN CTED) and has been supported by the Global Internet Forum to Counter Terrorism (GIFCT) and the governments of Spain, Switzerland, the Republic of Korea, and Canada.