
Tech Against Terrorism Beirut Workshop

The Tech Against Terrorism team launched their first workshop in Beirut on Wednesday 6 September. The workshop, held at Antwork Beirut, was attended by a varied group of participants from tech hubs, the start-up sector, NGOs, and the legal sector. This variety energised the debate, and many useful insights were gained.

In the first workshop, we explored the importance of the Terms of Service (ToS): the difficulty of qualifying definitions, and the role of ToS in protecting users’ rights, limiting the liability of startups, and protecting startups’ reputations.

A number of categories of objectionable content were explored, including ‘hate speech’, ‘violent/graphic content’, ‘threatening material’, ‘illegal activity’, and ‘terrorism’. While one audience member noted that the most important category of objectionable behaviour is ‘illegal activity’ (which is all-encompassing), another noted that “’terrorism’ should only be included if you were to define what ‘terrorism’ means”. In a concluding point, it was suggested that definitions of key concepts would have to be expanded on, both to aid understanding and to justify their inclusion in the ToS.

While one audience member noted that YouTube provided the ‘best’ ToS (i.e. one which ticked all the boxes of objectionable content outlined above), another participant expressed that “each tech company should have ToS that are specific to the service’s area of focus” (from communications to FinTech). As an example, the ToS of WordPress only tick two of the objectionable content boxes. However, its ToS are clearly framed for its own platform, and the other categories are not required. Another helpful comment from an audience member was that the ToS should depend on the culture and values of the country in which the tech company is based. This comment was then countered, leading to an interesting discussion on the borderless nature of the internet.

Further comments expressed the hope that regulation and/or oversight could soon be taken out of the hands of governments and run solely by a transnational tech body. While utopian in character, this comment did indicate the wish of some to keep the internet ‘free’ and as untouched by government influence as possible.

The second workshop focused on content takedown processes and their difficulties, namely the idea that “shutting down certain types of content opens Pandora’s box”. This particular workshop, and its areas of contention, especially ignited discussion among attendees. When participants were asked about content ranging from negative remarks about women to death threats, audience members differed on whether such content should be taken down. When discussing whether a message threatening to kill would be taken offline, most participants predicted that the distinction lay in whether the threat targeted a specific person. If it did not, they correctly suggested the message would be unlikely to be taken down.

Ayman Mhana, of SKeyes, noted that a current issue lies in the inability of machines to detect sarcasm and tone online. While around 15% of ISIS’ content does violate most tech platforms’ ToS, Ayman raised the issue of ISIS’ ‘positive’ content, i.e. replacing violent imagery with depictions of how positive life in the Caliphate could be. This is clearly still dangerous content (especially along the ‘journey of radicalisation’) but does not technically violate a platform’s ToS. As Ayman noted, these positive images are actually more effective than the negative ones, which usually depict violence.

A further focus centred on the notion of empowerment, namely how empowered companies are in regulating content. One participant questioned whether technology companies should be able to decide what should and should not be taken offline. Decentralisation was then posited as a solution, and as a way of further pushing government intervention out of the digital world. As poignantly asked, would the companies “be the angels protecting the world or will they be redirecting the world?”

Another attendee noted their objection to all forms of takedown, arguing that if you were to take down a comment made by a group such as the KKK, you would also lose all the comments in response arguing against the KKK’s views. By limiting the ‘bad’ content, you are also limiting the ‘good’ responses, the counter-narratives. Finally, an interesting solution was put forward: content removal could be made more user-friendly and receptive, i.e. taking down content only after first sending notice for amendment. This would serve as a moderate way of protecting freedom of speech, the purpose of the platform, and the user.

Our final workshop on transparency opened up the debate on privacy. The first argument suggested that privacy does not exist, and that anyone who seeks it should lead a solely offline existence (where privacy and connectivity are mutually exclusive).

In general, the audience members seemed sceptical on the topic of privacy, united in the belief that protecting one’s privacy is difficult. One audience member noted that we have to first define our rights, and then fight for them. Government intervention was again mentioned, with particular reference to the San Bernardino incident, Apple vs. the FBI. The FBI ultimately gained access without Apple’s involvement in that case, but it raises important questions about data privacy and how a user could be manipulated based on their metadata. Indeed, as one participant asked, is it not more concerning that private companies, rather than governments, have access to this information without any legal right?

Regarding data requests, participants called for better rules on the transfer of data between companies. Further, when a data request does come in, greater transparency (i.e. what data is requested, which government entity made the request, and whether it was part of an investigation) would be useful.

We thank all participants for their excellent insights, and SKeyes for their partnership.
