Reader’s Digest – 9 April 2021
Our weekly review of articles on terrorist and violent extremist use of the internet, counterterrorism, digital rights, and tech policy.
Tech Against Terrorism Updates
- The March newsletter of the Terrorist Content Analytics Platform (TCAP) is online now! Don’t forget to subscribe for updates on development progress, webinars, and other news!
In March, the TCAP collected and verified 1,967 URLs containing terrorist content and sent 802 alerts to 30 tech companies. 91.5% of this content is now offline following assessment and action by the tech companies participating in the alerting scheme.
- We are excited to announce that an updated version of the Knowledge Sharing Platform (KSP) will be re-launched to tech platforms soon. The KSP is a collection of interactive tools and resources designed to support the operational needs of smaller tech platforms. It is a “one-stop shop” for companies to access practical resources to support their counterterrorism and transparency efforts. Stay tuned for announcements about the launch!
- The Council of Europe has adopted guidelines that provide Member States’ national authorities with practical support to prevent and counter terrorism.
- The head of WhatsApp, Will Cathcart, has written an op-ed in Wired, in which he describes how end-to-end encryption is under attack from a multitude of governments, which he argues would greatly undermine the right to privacy and the purpose of technology and communication.
- Twitch has updated its Hateful Conduct and Harassment policy and will now take action against users for serious misconduct, including behaviours such as violent extremism and terrorism, that takes place offline or on other platforms.
- Discord has released its biannual transparency report, detailing the content moderation actions the platform took between June and December 2020. The report highlights Discord joining Tech Against Terrorism and the Global Internet Forum to Counter Terrorism (GIFCT).
- YouTube has released a new metric, called the Violative View Rate (VVR), which estimates the percentage of all views on the platform that are of content violating its Community Guidelines. This metric will be included in YouTube’s Community Guidelines Enforcement Reports.
- Last week, Business Insider released an article detailing a data leak through which 530 million Facebook users’ data was compromised. Facebook has now clarified how and when this occurred (prior to September 2019), and how users can keep their account safe.
- Hope Not Hate has released a report on the violent far-right and how its adherents intimidate, abuse, and threaten journalists, particularly female journalists, 20% of whom have experienced offline harassment in connection with online violence targeted at them.
- Access Now has launched a petition asking the European Union to ban the automated recognition of gender and sexual orientation. The petition precedes the European Commission’s plans to propose new legislation regulating artificial intelligence systems this month.
- The EU online Terrorism Regulation: a bad deal: Jillian C. York and Christoph Schmon discuss the proposed EU regulation on preventing terrorist content online (TERREG for short), originally introduced in 2018 and scheduled for a final parliamentary vote this month. York and Schmon note that some of the concerns civil society had with the initial version of the regulation have been addressed, such as the exemption allowing journalists and academics to view terrorist content and the adoption of a narrower definition of terrorism. They conclude, however, that the proposal still does not respect human rights. Specifically, they criticise the retention of the 1-hour removal deadline, which they argue will incentivise tech companies to err on the side of removal. They further raise concerns about the “broad powers” the regulation gives to national authorities without judicial oversight, and about its provision for extraterritorial enforcement, under which governments can issue removal orders to hosting providers outside their jurisdiction. (York, Schmon, Electronic Frontier Foundation, 07.04.21).
Tech Against Terrorism also wrote an article for VoxPol, raising concerns about the effectiveness of the TERREG and the effect it may have on smaller tech companies.
Tech Against Terrorism provided a response to the European Union’s Digital Services Act which you can read on our website.
Please see our blog post on the wider regulatory landscape in the European Union as part of our Online Regulation Series.
Far-right terrorism and violent extremism
- The international far-right terrorist threat requires a multilateral response: This article, by Jason M. Blazakis and Naureen Chowdhury Fink, analyses the challenges of adapting international counterterrorism frameworks to the violent far-right and outlines ways in which international action could be taken. Among the challenges, they highlight that the United States currently has no legislation in place to designate domestic far-right extremist groups as terrorist organisations. In examining potential solutions, Blazakis and Fink suggest increasing multilateral action, for example through the United Nations Counter-Terrorism Executive Directorate (UN CTED); expanding cooperation with civil society and the private sector to strengthen research and mitigation capacities; and designating domestic far-right terrorist organisations as Foreign Terrorist Organisations (FTOs) when they have a significant transnational character. (Blazakis, Fink, Lawfare, 04.04.21).
Misogynistic extremism and gender-based terrorism
- Male supremacism, borderline content, and gaps in existing moderation efforts: Yebin Won and Jonathan Lewis discuss male supremacism, borderline content (content that comes close to violating community standards without explicitly doing so), and the gaps in tech companies’ moderation efforts. The article describes male supremacism and notes how, because misogyny is so deeply embedded, it has come to be treated as a secondary concern relative to other extremist ideologies. They highlight how individuals who espouse male supremacism often do so by expressing support for gender-based violence, including harassment and rape, without explicitly calling for femicide. In addition, groups like involuntary celibates (incels) use coded language, which is less likely to be picked up by tech companies’ moderation algorithms. Won and Lewis conclude that tech companies need to do more to moderate speech that deliberately sits on the edge of violating community guidelines, as such speech may fuel offline attacks motivated by male supremacist beliefs. (Won, Lewis, GNET, 06.04.21).
For any questions, please get in touch via: