What is Tech Against Terrorism?

Tech Against Terrorism is a project mandated by the United Nations Counter-Terrorism Committee Executive Directorate (UN CTED) and implemented by the ICT4Peace Foundation. Our mission is to help the global tech industry protect itself from terrorist exploitation, and we emphasise that such measures should respect freedom of expression and human rights. Our focus is global and comprehensive: we support companies of all sizes from across the tech ecosystem, including social media, storage, encryption, security, fintech, and eCommerce. We also work with civil society and academia to help build consensus around proportionate measures to moderate terrorist exploitation of the internet.


How is Tech Against Terrorism funded?

Tech Against Terrorism’s funding model is based on an equal share of contributions from companies and governments. This balance ensures that the project can maintain its neutral position. In 2017, the project was supported by the governments of Switzerland and the Republic of Korea, as well as Facebook, Microsoft, Google, and Telefonica.


Who does Tech Against Terrorism collaborate and/or associate with?

In addition to those mentioned above, Tech Against Terrorism was invited by Twitter, Google, Microsoft, and Facebook to support the establishment of the Global Internet Forum to Counter Terrorism (GIFCT) after the forum was launched in May 2017. Tech Against Terrorism also hosted the heads of the US Department of Homeland Security and the UK Home Office at the US launch of the GIFCT in August 2017.

However, Tech Against Terrorism’s main collaboration partners are startups and smaller tech companies. In fact, most of our work and recommendations are based on observations made during our consultations with small tech companies and through our workshops around the world (Beirut, New York, and London).


What is Tech Against Terrorism’s approach to content regulation and content removal?

Our belief is that content takedown is part of the solution, but not the whole solution. We support an industry-led, holistic approach to content regulation. Tech companies have demonstrated innovation through self-regulation, employing new technological solutions while recognising that human expertise is necessary to accurately assess context and nuance at scale. Other methods of moderating harmful content include redirection, down-prioritisation, positive counter-narrative messaging, safe searches, and online community education. As a project, we also believe that governments tend to overemphasise violent content compared to non-violent content, even though the latter may be just as influential (if not more so) in a person’s radicalisation process. However, we acknowledge that non-violent content is more difficult to identify and moderate, and our project aims to support companies in determining appropriate approaches to content regulation.


What content should be removed from platforms for promoting terrorism?

Tech Against Terrorism acknowledges that there is no universal definition of terrorism. In fact, one of our observations when engaging with tech companies is that they struggle to moderate content on their sites due to this uncertainty. Moreover, it is sometimes difficult to determine whether a video is terrorist propaganda or an important piece of news that sheds light on human rights abuses. When tech companies fail to make this distinction they are often criticised, but the fact is that there is no regulating body providing clear guidelines to companies whose platforms and audiences span the entire world. Tech Against Terrorism advocates for more coherence on this matter, and therefore suggests a global normative approach rather than an ad hoc approach from single governments. We recommend that companies consult the Consolidated United Nations Security Council Sanctions List, as it provides the best framework for international consensus on individuals and groups designated as terrorist. Having said that, we note the absence of certain groups from that list, particularly far-right terror groups. Companies should therefore also consult the lists of proscribed groups and individuals in the specific region and/or country where the content is flagged. More information and practical advice on content removal can be found in our Knowledge Sharing Platform.
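
For illustration only, the sketch below shows one way a small platform might screen a flagged name against the consolidated list, which the UN publishes in machine-readable XML. The feed URL and the XML element names used here are assumptions and should be verified against the current published schema; real screening would also need alias handling, fuzzy matching, and human review of any hit.

```python
# Minimal sketch (Python): screen a flagged name against the UN
# Consolidated Sanctions List. The feed URL and XML element names are
# assumptions and should be checked against the current published format.
import urllib.request
import xml.etree.ElementTree as ET

# Assumed location of the machine-readable consolidated list.
CONSOLIDATED_LIST_URL = "https://scsanctions.un.org/resources/xml/en/consolidated.xml"

def load_listed_names(url: str = CONSOLIDATED_LIST_URL) -> list[str]:
    """Download the consolidated list and collect the primary listed names."""
    with urllib.request.urlopen(url) as response:
        root = ET.parse(response).getroot()
    names = []
    # Assumed schema: <INDIVIDUAL> and <ENTITY> records carry the name parts.
    for record in root.iter():
        if record.tag in ("INDIVIDUAL", "ENTITY"):
            parts = [
                (record.findtext(tag) or "").strip()
                for tag in ("FIRST_NAME", "SECOND_NAME", "THIRD_NAME", "FOURTH_NAME")
            ]
            full_name = " ".join(p for p in parts if p)
            if full_name:
                names.append(full_name)
    return names

def is_listed(flagged_name: str, listed_names: list[str]) -> bool:
    """Very naive exact-substring match; production screening needs fuzzy
    matching, alias (AKA) entries, and human review of every hit."""
    needle = flagged_name.casefold()
    return any(needle in listed.casefold() for listed in listed_names)

if __name__ == "__main__":
    listed = load_listed_names()
    print(is_listed("Example Organisation", listed))
```

A match against the UN list is a starting point, not a decision: as noted above, regional and national proscription lists should also be consulted, and context should be assessed by a human reviewer.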


What is Tech Against Terrorism’s view on company transparency reporting?

Tech Against Terrorism strongly supports transparency reporting. In our view, there are three main benefits: it reinforces company values while easing concerns about users’ privacy; it raises awareness of the extent of government requests for content takedown, making it easier to hold governments accountable; and it contributes to the wider debate on how content can be regulated without solely resorting to removal. We provide advice on how startups and small to medium-sized companies can produce transparency reports in our Knowledge Sharing Platform. For more information on our work on transparency reporting, see our report.


How do we ensure that our activities do not break national and international legislation on freedom of speech?

We emphasise that companies’ counter-terrorism efforts must not infringe on freedom of speech. We encourage all members and partners to consider Article 19 of the International Covenant on Civil and Political Rights (ICCPR), and call on companies to commit to these international norms through the Pledge that all Tech Against Terrorism members sign up to.


What is the Knowledge Sharing Platform?

As members of the Tech Against Terrorism initiative, tech companies have access to the Knowledge Sharing Platform, a collection of interactive tools and resources designed to support the operational needs of smaller technology companies. It is a “one stop shop” for practical resources, including information on terrorist groups and individuals on the UN sanctions list, recommendations for model Terms of Service and Transparency Reports, standardised reporting formats, and other resources for use by companies.


What are our review mechanisms?

We are working with the tech industry to help establish best practice regarding review mechanisms. In becoming members of Tech Against Terrorism, tech companies commit to exploring new technological solutions, including machine learning, while also recognising that human expertise is necessary to accurately assess context and nuance at scale. Further, we advocate for companies to develop adequate mechanisms that allow users to seek redress for content they believe has been unfairly taken down.
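
As a purely illustrative sketch (not a Tech Against Terrorism specification), the outline below shows one way a small platform could structure the flag, human-review, and appeal lifecycle described above; every class, state, and field name is hypothetical.

```python
# Illustrative sketch of a flag -> human review -> appeal lifecycle.
# All class, state, and field names are hypothetical, not a prescribed design.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional

class Status(Enum):
    FLAGGED = auto()     # surfaced by automated classifiers or user reports
    REMOVED = auto()     # taken down after human review
    RESTORED = auto()    # reinstated, e.g. after a successful appeal
    DISMISSED = auto()   # reviewed and found not to violate policy

@dataclass
class ModerationCase:
    content_id: str
    reason: str                            # e.g. "glorification of terrorism"
    status: Status = Status.FLAGGED
    reviewer: Optional[str] = None         # human reviewer who made the call
    appeal_notes: list[str] = field(default_factory=list)

    def human_review(self, reviewer: str, violates_policy: bool) -> None:
        """A human, not the classifier, makes the removal decision."""
        self.reviewer = reviewer
        self.status = Status.REMOVED if violates_policy else Status.DISMISSED

    def appeal(self, user_statement: str, removal_upheld: bool) -> None:
        """Redress mechanism: users can contest a takedown they believe is unfair."""
        self.appeal_notes.append(user_statement)
        if not removal_upheld:
            self.status = Status.RESTORED

case = ModerationCase(content_id="post-123", reason="flagged by classifier")
case.human_review(reviewer="analyst-1", violates_policy=True)
case.appeal(user_statement="This is news footage documenting abuses.", removal_upheld=False)
print(case.status)  # Status.RESTORED
```

The design point this sketch illustrates is simply that automated detection, human judgement, and a user-facing appeal route are separate steps, each of which should be recorded.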