At the UK launch of the Tech Against Terrorism project on 12 July 2017 at Chatham House, London, our partners at Facebook, Twitter, Google, and UN CTED called for companies from across the tech industry to publicly discuss the challenges faced and to encourage future collaboration in tackling the exploitation of the internet by terrorists.

Addressing an audience of policymakers, civil society, academics and members of the tech community at Chatham House, Tech Against Terrorism Project Director, Adam Hadley, chaired the following panel of speakers:

  • Nick Pickles, Head of Public Policy and Government, UK & Israel, Twitter
  • Erin Saltman, Policy Manager, EMEA Counter-Terrorism and CVE, Facebook
  • David Scharia, Director, Chief of Branch at CTED, United Nations Security Council
  • Ankur Vora, Public Policy Analyst, Google
  • Mariusz Zurawek, Owner

For the video of the event, please click here.

Chairing the presentation at Chatham House was ICT4Peace Project Director, Adam Hadley, who summarised the work of the Tech Against Terrorism initiative in its support of the industry-led Global Internet Forum to Counter Terrorism, as recently announced by the large tech companies. Adam explained the practical ways in which the project will improve capability across the tech industry, especially with regard to smaller tech companies.

Adam explained that, “terrorists exploit an overlapping ecosystem of services, not just the big platforms like Facebook and Twitter but also the smaller services. Here in the United Kingdom the Counter Terrorism Internet Referral Unit last year reported content across 300 different services,” and that, “the concern for many smaller technology companies is that sometimes they don’t have the scale or resources to handle the challenge on their own.”

Adam further explained that large tech companies have developed a so-called “emerging normative framework” to tackle the threat which is based on self-regulation. This approach is based on establishing Terms of Service, addressing harmful content through take-downs and counter-narratives, and developing reporting through publishing regular “Transparency Reports.” Tech Against Terrorism will focus on practical ways of developing this framework and extending it to smaller tech companies.

Ultimately, Adam noted that the project is a global outreach programme designed to learn how to support smaller tech companies in a practical way and then to implement meaningful, pragmatic solutions at scale and pace.

“Machines can’t do nuance” – Erin Saltman, Facebook

Starting the discussion, Erin Saltman from Facebook argued that it is critical for technology companies to combine technological and human expertise when assessing and responding to threats online. With advances in machine learning, tech companies such as Facebook now have the capability to use image- and video-recognition technologies to help tackle terrorist content being uploaded online.

However, Erin Saltman acknowledged that blocking or removing content is not sufficient on its own, because machines struggle with context. Erin explained that “machines can’t do nuance” but can be used effectively when combined with human expertise: Facebook’s hybrid solution is an operations team working 24/7 globally, made up of counter-terrorism subject matter experts who review content and assess threats.

Following on, Nick Pickles from Twitter explained that human intelligence was also critical in ensuring the preservation of civil liberties. One example of this is a matter of redress: allowing users to appeal material that was automatically blocked. Nick Pickles likened the technical challenge of spam filtering to that of detecting other forms of harmful content including terrorist material – examples include users setting up multiple accounts to flood the system with the same content. Similarly, Google has integrated the use of technology to identify problematic content in its 4-step approach to removing extremist content online: Ankur Vora noted that 50% of removed content was found through pro-active deployment of video analysis tools.

“As the terrorist threat online evolves so too must the response” – David Scharia, UN CTED

Moreover, Ankur Vora highlighted that a key aim of Google’s 4-step approach is to understand borderline content – material that falls in a grey area, not fully violating policy but still potentially harmful in other ways. Each of the panellists also spoke of the complexity of protecting freedom of expression online. Google, for example, has addressed this grey area by making some YouTube videos ‘view only’: the video stays online, but comments (often where hate speech is found) cannot be published and the video cannot be shared on other platforms.

“For small companies, it’s hard to respond to law enforcement requests which can come in many different formats” – Mariusz Zurawek

A key theme emphasised by the speakers was the importance of collaboration within the tech industry in responding to the terrorist exploitation of the internet. Through the Global Internet Forum to Counter Terrorism and its collaboration with Tech Against Terrorism, the big tech companies are focusing on developing meaningful knowledge-sharing mechanisms. The Global Internet Forum to Counter Terrorism is also investing in technological solutions such as the “hash sharing database”, which shares unique hashes of harmful content to accelerate take-down efforts across platforms.
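The hash-sharing idea can be sketched in a few lines: one platform that removes a piece of content contributes a fingerprint of it to a shared set, and other platforms check new uploads against that set. This is a minimal illustrative sketch, not the actual database – the names and the use of SHA-256 are assumptions, and SHA-256 only matches byte-identical copies, whereas production systems typically use perceptual hashes so that altered copies still match.

```python
import hashlib

# Hypothetical shared set of hashes of known harmful content.
# (In the real hash-sharing database this is an industry-shared
# service; here it is just an in-memory set for illustration.)
shared_hashes = set()

def register_harmful(content: bytes) -> str:
    """A platform that removes content contributes its hash."""
    digest = hashlib.sha256(content).hexdigest()
    shared_hashes.add(digest)
    return digest

def check_upload(content: bytes) -> bool:
    """Another platform checks a new upload against the shared set."""
    return hashlib.sha256(content).hexdigest() in shared_hashes

# Platform A removes a piece of content and shares its hash...
register_harmful(b"example propaganda file bytes")
# ...so Platform B can flag the identical upload on arrival.
print(check_upload(b"example propaganda file bytes"))  # True
print(check_upload(b"unrelated content"))              # False
```

The point of sharing hashes rather than the content itself is that platforms can cooperate on take-downs without redistributing the harmful material.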

All the panellists agreed that for these initiatives to work it is critical to involve small tech companies such as Mariusz Zurawek’s service, which over the last few years has faced a flood of content takedown requests from governments all over the world. Zurawek specifically noted concerns about identifying what is legal, responding to law enforcement take-down requests (especially those in different languages), obtaining relevant know-how, and obtaining legal advice.

The Tech Against Terrorism project will offer smaller tech companies innovative solutions to problems that, unlike major tech organisations such as Twitter and Facebook, they do not yet have the capacity to deal with. As David Scharia emphasised, “ultimately, it is crucial that tech companies are now engaged and able to ensure that fundamental freedoms are protected whilst also supporting the multiple parties addressing the issues.”
