We interrupt this broadcast for a special announcement: our latest episode of the Tech Against Terrorism Podcast is live. In this episode, join Maygane Janin and Flora Deverell as they discuss how terrorists and violent extremists exploit gaming culture for their own ends. They are joined by Linda Schlegel, a senior editor at The Counterterrorism Group and a regular contributor to the European Eye on Radicalization, where she has recently published a number of articles on the exploitation of gaming culture; and Dr. Nick Robinson, an associate professor in politics and international studies at the University of Leeds who has been researching the links between videogames, social media, militarism, and terrorism for over a decade. In particular, they address the “gamification of radicalisation” and the exploitation of gaming platforms, as well as why terrorist organisations
– ‘We are used to virus called bombs’: Somalia has not been spared by the COVID-19 pandemic. In a country that already faces famine and terrorism, the impact of the pandemic could be particularly significant, as Subban Jama and Ayan Abdullahi analyse.
– It’s time to get serious about sanctioning global white supremacist groups: In an “unprecedented” move last month, the US State Department designated the Russian Imperial Movement – an ultranationalist white supremacist group that trained individuals who went on to carry out attacks in Sweden – as a global terrorist organisation. This was the first designation of its kind for a far-right group, which can now be targeted by US government financial sanctions. Commenting on this move, Daniel Glaser and Hagar Chemali argue that the designation is important for targeting far-right violent extremist networks, but stress that it should be followed by an “intelligence and aggressive follow-up strategy.” In particular, the authors emphasise the importance of a financial campaign led by the Treasury Department to dismantle far-right violent extremist and terrorist networks; so far, no such groups have been designated by the Treasury Department. (Glaser & Chemali, Washington Post, 11.05.2020)
– Far-right Britain First leader Paul Golding banned from YouTube: Scram reports that Paul Golding – leader of Britain First – has had his channel banned from YouTube. According to Scram, Golding’s channel was set up in place of Britain First’s channel, which was removed in 2019, and has now been banned for “multiple or severe violations” of the platform’s hate speech policy. Scram further reports that, in reaction to the ban, Golding announced that Britain First would now “concentrate its efforts on Russian social media network VK.” (Scram News, 11.05.2020)
Britain First was also banned from TikTok last month for violating the company’s hate speech policy, alongside Tommy Robinson – co-founder and former leader of the English Defence League. You can read TellMAMA’s report about this here.
– Weighing the value and risks of deplatforming: In this Insight piece, Ryan Greer examines the unintended consequences of deplatforming as the default means of addressing online extremist content. Deplatforming – removal from an online platform following serious and repeated violations of a platform’s policy – can, according to Greer, have a major financial impact on extremist actors and reduce activity on extremist sites. However, this default solution is not without drawbacks. Greer provides an overview of the risks of deplatforming, including terrorist and violent extremist attempts to circumvent bans, the driving of actors to fringe platforms with little (or no) moderation, and a heightened sense of grievance that leads individuals to further communicate with like-minded extremists. He also stresses that deplatforming can hinder law enforcement investigations by pushing terrorists and violent extremists to online spaces that are more difficult for investigators to access. (Greer, GNET, 11.05.2020)
– Remembering Toronto: Two years later, incel terrorism threat lingers: Two years after an incel-motivated van attack killed 10 people in Toronto, Jacob Ware, Bruce Hoffman, and Ezra Shapiro assess the state of the threat of incel violent extremism and terrorism. The piece analyses incels’ online behaviour, from their move to the dark web and their practice of shitposting to the fragmented nature of the incel community. Ware, Hoffman, and Shapiro call for increased research on the issue to counter this phenomenon: not only in relation to the incel community, but also to understand the “role that sexual frustration and male aggrieved entitlement” can play in violent extremist radicalisation in other movements. (Hoffman, Shapiro, Ware, GNET, 06.11.2020)
You can find their full analysis on “Assessing the threat of incel violence” here.
– An update on combating hate and dangerous organizations: To mark the first anniversary of the Christchurch Call to Action, the founders of the Global Internet Forum to Counter Terrorism (GIFCT) – Amazon, Facebook, Google, Microsoft, and Twitter – released a statement reasserting their commitment to preventing terrorist and violent extremist (T/VE) exploitation of their platforms. In this statement, the GIFCT highlights the crisis protocol established following the attack in Christchurch and its continued proactive work to counter T/VE use of the internet, especially through the launch of dedicated working groups.
Tech Against Terrorism is pleased to announce that we will be chairing the working group on “technical approaches.” You can read more about this in our press release.
Following the GIFCT statement, Facebook provides an update on the company’s commitment to countering hate and dangerous organisations on its platform. Facebook notably elaborates on its playbook of automated techniques to detect terrorist content and on the progress made in this regard. Facebook is now able “to detect text embedded in images and videos in order to understand its full context,” and is expanding this technology – originally developed to identify Islamic State and al-Qaeda content – to other violent extremist ideologies. In this update, Facebook also details its enforcement tactics and explains how the company is learning about banned organisations’ attempts to bypass detection and removal in order to better counter this phenomenon. Facebook takes this opportunity to share metrics on content removals on its platform since the beginning of 2020, having recently released its latest transparency report on Community Standards enforcement. (Facebook, 14.05.2020)
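As a rough illustration of what “detecting text embedded in images” can involve – a generic sketch, not Facebook’s actual system – the hypothetical snippet below runs OCR over an image and flags it for human review if the extracted text matches a watchlist. The WATCHLIST terms, the flag_for_review helper, and the example file name are invented for illustration; a production pipeline would rely on trained classifiers, shared hash databases, and human review rather than simple keyword matching.

```python
# Generic illustrative sketch only – not Facebook's system.
# Assumes the Tesseract OCR engine plus the pytesseract and Pillow packages are installed.
from PIL import Image
import pytesseract

# Hypothetical watchlist of phrases associated with banned organisations.
WATCHLIST = {"example banned slogan", "another banned phrase"}

def extract_embedded_text(image_path: str) -> str:
    """Run OCR over an image and return any embedded text, lower-cased."""
    return pytesseract.image_to_string(Image.open(image_path)).lower()

def flag_for_review(image_path: str) -> bool:
    """Flag an image for human review if its embedded text hits the watchlist."""
    text = extract_embedded_text(image_path)
    return any(term in text for term in WATCHLIST)

if __name__ == "__main__":
    # Hypothetical input file.
    print(flag_for_review("propaganda_meme.png"))
```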
– Community standards report, May 2020 edition: Facebook has just released its latest transparency report on policy enforcement on Facebook and Instagram, covering October 2019 to March 2020. Amongst the new metrics included in this report, Facebook reports for the first time on the number of appeals made by users against content removal decisions on Instagram. Facebook also elaborates on its improved use of technology to proactively locate violating content. (Facebook, 12.05.2020)
– Twitter taken to court over its “massive inaction” on hateful messages: Following a recent evaluation of Twitter’s content moderation practices on hate speech – the evaluation focused on hateful and racist content that would be considered illegal under French law – four civil society organisations in France are filing a lawsuit against Twitter. The organisations are requesting that the court appoint a judicial expert in the matter, to whom Twitter would have to hand over all documents regarding its content moderation processes. If this request is approved, the organisations hope that the “number, location, nationality, language and profile” of Twitter moderators will be shared. They are also asking Twitter to report on the number of tweets reported for incitement to hatred
– Highly contested, the “loi Avia” against online hate
– What kind of oversight board have you given us?: With the first 20 members of Facebook’s Oversight Board announced last week, Evelyn Douek provides a refresher of her previous analysis of the Board. In this article, Douek looks at which cases the Board will take and how they will be chosen, what standards will inform its work, and how big the Board’s impact is likely to be. Douek concludes that, although the Board has some limitations, it represents the “least-worst option” as a middle way between platform-driven decision-making and “heavy-handed government involvement” in speech regulation. (Douek, University of Chicago Law Review, 11.05.2020)
Spoiler alert: Evelyn will be a guest on the next episode of the Tech Against Terrorism Podcast, discussing both the Oversight Board and online regulation in general. Watch this space!
The Columbia Journalism Review, via its Galley series, has hosted insightful discussions on the Board with a range of experts, including Evelyn Douek, UN Special Rapporteur on Freedom of Expression David Kaye, Rebecca MacKinnon, Daphne Keller (another future podcast guest!), Alex Stamos, and (board member) Alan Rusbridger. Do check them out.
For any questions, please get in touch via:
Tech Against Terrorism is an initiative launched by the United Nations Counter-Terrorism Committee Executive Directorate (UN CTED) in April 2017. We support the global technology sector in responding to terrorist use of the internet whilst respecting human rights, and we work to promote public-private partnerships to mitigate this threat. Our research shows that terrorist groups – both jihadist and far-right – consistently exploit smaller tech platforms when disseminating propaganda. At Tech Against Terrorism, our mission is to support smaller tech companies in tackling this threat whilst respecting human rights, and to provide companies with practical tools to facilitate this process. As a public-private partnership, the initiative has been supported by the Global Internet Forum to Counter Terrorism (GIFCT) and the governments of Spain, Switzerland, the Republic of Korea, and Canada.