
Reader's Digest – 27 November 2020

Our weekly review of articles on terrorist and violent extremist use of the internet, counterterrorism, digital rights, and tech policy.

Webinar Alert!

How can UN agencies support the counterterrorism efforts of smaller tech platforms whilst safeguarding human rights and freedom of expression? What are the existing avenues of cooperation between tech platforms and intergovernmental organisations? In our upcoming webinar, “Cooperation between the UN and smaller tech platforms in countering use of the Internet for terrorist purposes”, we aim to shed light on these issues. The webinar, organised in partnership with UN CTED, will be held on Wednesday, 9 December, at 4pm GMT. You can register here.
 
When tackling terrorist use of the internet, is content removal really our only option? Our upcoming webinar, on Wednesday, 16 December, at 5pm GMT, will look at what alternative steps tech companies can take. We have an exciting panel of experts and practitioners lined up, so don’t forget to register here.

Top stories

  • We are pleased to announce the successful launch of automated terrorist content alerts powered by the Terrorist Content Analytics Platform (TCAP). As of this week, the TCAP will send email alerts to tech platforms when it identifies terrorist content hosted or shared on their platform. You can find more information on this here.
     
  • We have opened registration for the December sessions of the “office hours” for the Terrorist Content Analytics Platform (TCAP), to be held next week. These sessions are meant to provide an update to our stakeholders, and to answer any questions they might have. You can register here.
     
  • The 2020 Global Terrorism Index has been released; you can find it here.
     
  • The Paris Public Prosecutor’s Office has announced the creation of a unit dedicated to online hatred. The unit will work “in real time” to handle all types of online threats, including the glorification of terrorism (“apologie du terrorisme”).
     
  • The Centre for Analysis of the Radical Right has released a report: Far-Right Mobilisation in Great Britain.
     
  • The Combating Terrorism Center at West Point has released the latest edition of CTC Sentinel, including an analysis on “Outlasting the Caliphate: The Evolution of the Islamic State Threat in Africa”.
     
  • Tech companies have threatened to leave Pakistan after the government granted authorities “blanket powers” to censor digital content.

    Tech policy

    The Digital Services Act Could Make or Break European Democracy: Iverna McGowan discusses the upcoming EU Digital Services Act (DSA) and how it “marks a key moment for democracy across the bloc”. Through the DSA, the EU is considering new rules on content moderation, which would shape online public discourse on a global scale. McGowan notes that human rights advocates are concerned about the current EU codes of conduct, which “circumvent the human rights obligations of states and the EU by asking private companies to take down content based on broad definitions of what is permissible, without due process safeguards”. McGowan therefore stresses that, to safeguard European democracy, the DSA must meet several criteria related to human rights compliance: companies “must be bound by strong transparency requirements and strong procedural safeguards for users”, and “the judiciary remains the final arbiter on decisions on the legality of speech”. She concludes by highlighting that the DSA could reshape our online information ecosystem, with long-lasting implications for key tenets of democracy, and that the overarching approach must therefore be guided by the principles of transparency and accountability, putting the rights of the individual first. (McGowan, Euractiv, 25.11.2020)

    Breton Reveals Details of Future Digital ‘Gatekeeper’ Definition in EU Law: Samuel Stolton discusses newly disclosed details of the forthcoming criteria for defining so-called “gatekeeper platforms” under the Digital Markets Act, which is set to be proposed by the EU executive in early December. Gatekeeper platforms are those that control access to critical online services, such as Apple’s App Store or Google’s Play Store, and there have been ongoing deliberations over the specifics of the Commission’s definition. On 25 November, the EU’s Internal Market Commissioner, Thierry Breton, explained that online platforms would be assessed against certain criteria, including their impact on the EU’s single market, whether their services are “unavoidable” for smaller players in the ecosystem, and whether they occupy a dominant position in the market that puts them in a position to “kill off” competition. Breton emphasised that the Commission’s approach is to work from objective criteria and to try to capture the behaviour of the platform: size per se is not a threshold, but it will be taken into consideration, since it can amplify the impact of harmful behaviour. (Stolton, Euractiv, 26.11.2020)

    Tech Against Terrorism submitted a response to the DSA consultation process, a summary of which can be found here.
     
    To see our blog post on the EU’s online regulation, check out the EU edition of the Online Regulation Series here.

    Far-right violent extremism and terrorism  

    Upvoting Extremism: Collective Identity Formation and the Extreme Right on Reddit: Tiana Gaudette, Dr. Ryan Scrivens, Dr. Garth Davies and Dr. Richard Frank discuss their research in response to the question “how does Reddit’s unique voting algorithm facilitate ‘othering’ discourse and, by extension, collective identity formation on r/The_Donald following Trump’s presidential election victory?” To answer this, they extracted data posted to the r/The_Donald subreddit in 2017. Their study found that Reddit’s upvoting and downvoting features played a central role in facilitating collective identity formation among those who posted extreme right-wing content. According to their findings, the upvoting feature served to promote and normalise otherwise unacceptable views against out-groups, while the downvoting feature ensured that members were not exposed to content that challenged their extreme right-wing beliefs. They concluded that the two features functioned as an echo chamber for hate and “may have also functioned to correct the behaviour of dissenting members”. (Gaudette, Scrivens, Davies, Frank, GNET, 25.11.2020)

    This week, we’re listening to The Lawfare Podcast episode on “Collaborating to Counter Violent Extremism Online” in which Evelyn Douek and Quinta Jurecic speak with Nick Rasmussen, the Executive Director of the Global Internet Forum to Counter Terrorism (GIFCT).

    We’re also listening to the Centre for Analysis of the Radical Right’s podcast on “Women and Gender in the Extreme Right”, where Augusta Dell’Omo is joined by guest Simon Purdue, who discusses why gender is so important to the extreme right.

    Islamist terrorism

    Cyberwar on Twitter: Cyber-Vigilantes vs Islamic State: Dr. Valentine Crosset discusses the movement of hacktivist groups who have “taken on the moral duty and assumed responsibility for denouncing terrorist accounts to companies”. For example, in the wake of the 2015 attack on Charlie Hebdo’s offices, the group Anonymous announced its aim to take action against Islamist extremist websites and social media accounts. Since then, Dr. Crosset writes, several such operations have been launched on Twitter, and other groups have emerged, such as Ghost Security Group and Controlling Section, all formed to combat online jihadi propaganda. In addition, many anonymous users referring to themselves as “jihadi hunters” have become involved, compiling lists of IS accounts and asking other users to flag them to Twitter in order to generate mass reporting. In response, IS militants file false reports against anti-jihadist accounts, using accounts managed by humans or botnets that they try to make look as “neutral” as possible. As a result, hundreds of users invested in the fight against IS online find themselves harassed, attacked, blocked, and suspended because of reports made by extremists. Dr. Crosset therefore claims that when platforms such as Twitter remove reported accounts, they often struggle with “differentiating nuances and context”. In this case, she writes, “the one who comes out on top is the one who manages to silence the other party”. (Crosset, GNET, 20.11.2020)