5 min read

Reader's Digest – 18 December 2020

Our weekly review of articles on terrorist and violent extremist use of the internet, counterterrorism, digital rights, and tech policy.


E2EE Survey

Tech Against Terrorism is currently expanding its expertise on the use of end-to-end encryption (E2EE), and we are seeking to deepen our understanding of how online users and the general public perceive encryption and the wider “encryption debate”.

To inform our understanding of this complex discussion, we would be interested in hearing from you about how you perceive encryption, in particular E2EE, as a user of online services. To this end, we kindly ask you to spare a few minutes of your time to respond to our short, anonymous survey.


Top stories

  • This week we hosted a webinar on Content Moderation: Alternatives to Content Removal. If you would like to access a recording of this webinar, you can reach out to us at contact@techagainstterrorism.org.

  • The Global Internet Forum to Counterterrorism (GIFCT) has announced two new members: Discord and Mailchimp. Both were supported in their membership process by the Tech Against Terrorism Mentorship Programme. You can find more information about our Mentorship Programme for smaller tech platforms here.

  • This week the European Commission proposed a comprehensive set of new rules for all digital services, including social media, online marketplaces, and other online platforms operating in the EU: the Digital Services Act (DSA) and the Digital Markets Act (DMA).

    • Tech Against Terrorism submitted a response to the DSA consultation process, a summary of which can be found here.

    • EURACTIV has collated EU Member States’ reactions to the Digital Services Act and the Digital Markets Act. Check it out here.

  • The UK unveiled on Tuesday details of its upcoming online harms proposals, which include new rules and penalties for tech companies, as well as new powers for the UK’s media regulator, Ofcom.

  • To learn more about the UK Online Harms proposal, see here for an assessment of it by Graham Smith on Cyberleagle.com.

    • To learn more about the online regulatory landscape in the UK, read our Online Regulation Series blogpost on the UK here.

  • The Council of the EU adopted a resolution on encryption: “Security through encryption and security despite encryption”.

  • The International Commission of Jurists (ICJ) called on the Council of the EU, the European Parliament, and the European Commission to give the judiciary a central role in the draft Regulation on “Terrorist Content Online” and to ensure effective judicial remedies to prevent and redress human rights violations.

  • The Paris criminal court has found 14 people guilty of complicity in the January 2015 attacks on the satirical magazine Charlie Hebdo and a kosher supermarket. The same court sentenced the terrorist behind the foiled 2015 Thalys train attack to life imprisonment.

  • The Centre for Analysis of the Radical Right (CARR) released their annual Year in Review report reflecting on key developments of the radical right in 2020.

  • TikTok expanded its Community Guidelines, aiming to strengthen its existing policies in areas like harassment, dangerous acts, self-harm and violence, while introducing new features focused on community well-being.

  • Twitch has announced that it will be banning the words “simp”, “incel”, and “virgin” when used in the context of harassment. Read more on this here.

  • The Centre for Research and Evidence on Security Threats (CREST) has published a report on Memetic Irony and the Promotion of Violence Within Chan Culture.


Tech policy

  • Europe Rewrites Rulebook for Digital Age: Mark Scott, Thibault Larger, and Laura Kayali discuss the proposals announced by the EU this week – the Digital Services Act and the Digital Markets Act – aimed at boosting digital competition and protecting people from online harm. Under these proposals, large platforms will face new limits on how they expand their online services, with penalties of up to 10 percent of their global revenue for breaches. In the most serious cases, EU regulators would be granted powers to break up companies. The proposals also include separate fines of up to six percent of annual revenue for big tech companies – those with at least 45 million users across the EU – that fail to limit how illegal material, from hate speech to counterfeit products, spreads across their platforms. The proposals are still far from becoming law, as the European Parliament and Member States will now weigh in; final rules are not expected before 2023. According to Scott, Larger, and Kayali, those in favour of the proposed rules say the current system for policing the online world is broken, with Silicon Valley holding too much sway over people’s digital lives, while those against caution that the EU’s proposals may harm innovation. (Scott, Larger, Kayali, Politico, 15.12.2020).

  • The Electronic Frontier Foundation and Article 19 have also reacted to the publication of the DSA; see here and here.

  • UK Online Safety Bill, Coming Next Year, Will Propose Fines of up to 10% of Annual Turnover for Breaching Duty of Care Rules: Natasha Lomas sheds light on the latest updates to the UK’s plan to regulate a wide range of illegal and/or harmful content, particularly where such content might pose a risk to children. In an earlier response to the consultation on its Online Harms white paper, ministers confirmed that the UK’s media regulator, Ofcom, would enforce the forthcoming rules. Under the plan announced this week, the government said Ofcom would be able to levy fines of up to 10 percent of a company’s annual global turnover on companies deemed to have failed in their duty of care to protect children from exposure to illegal material, such as child sexual abuse, terrorist material, or suicide-promoting content. Ofcom will additionally have the power to block non-compliant services from being accessed in the UK. The proposal also introduces a range of rules for tech companies to adhere to, intended to cover not just large companies like Facebook but a wide range of internet services, such as video-sharing platforms and consumer cloud storage. According to Lomas, the UK’s plan has already attracted criticism from the tech sector, which stresses that the Online Safety Bill will impact scores of smaller digital businesses rather than just the platform giants. (Lomas, TechCrunch, 15.12.2020).


Far-right violent extremism and terrorism

  • The Christchurch Terror Attack: A Case of Online Radicalisation? Joe Whittaker and Chamin Herath discuss the radicalisation of the Christchurch attacker, who killed 51 people and injured 40 more at two mosques in 2019, highlighting the offline and online factors that may have contributed to the perpetrator’s radicalisation. This follows the release, on 8 December 2020, of the report of New Zealand’s Royal Commission of Inquiry into the attack. Whittaker and Herath note that, following the report’s release, mainstream news sources reported that the terrorist was radicalised on YouTube. In response, Whittaker and Herath offer an overview of the killer’s pre-event behaviours and argue that, despite the internet playing an important role, “it is a misnomer to think of this case as one of ‘online radicalisation’”. They first outline a clear path of the killer’s online interaction with extreme right-wing communities from a very young age, whether on 4chan or Facebook, including how he began donating to various international right-wing organisations through PayPal and Bitcoin. However, Whittaker and Herath stress that there are important offline factors that could be key to understanding the perpetrator’s trajectory. For example, they note that his travels, including a mugging incident abroad, might have been a factor in the hardening of his beliefs and his movement to action. Whittaker and Herath pose questions about the role of the internet in radicalisation, especially how it might compare with offline events. They conclude that “frames such as ‘online radicalisation’ can be misleading and lead us to policy positions that only address part of the problem and may lead to negative outcomes”. (Whittaker, Herath, Vox Pol, 16.12.2020).

  • After Christchurch: Mapping Online Right-Wing Extremists: Lise Waldek, Dr. Brian Ballsun-Stanton, and Dr. Julian Droogan discuss their research on six social media platforms – Twitter, Gab, Reddit, 4chan, 8chan, and Facebook – examining the spread of far-right extremist sentiment across New South Wales (NSW) following the Christchurch terrorist attack. The key findings indicate that the online right-wing extremist environment poses identifiable levels of risk. According to the researchers, the findings reveal an environment characterised by narratives that challenge the fundamentals of pluralist liberal democracy through exclusivist appeals to race, ethnicity, and nation. The findings also show that individuals in this space advocate the use of violence as a tactic for expressing political grievances. Other findings include that online extremist communities in NSW have a high level of interest in real-world issues in Australia and the US, especially American populist politics. The researchers describe how online right-wing extremist communities largely consist of networks of socially connected individuals who engage with and share content across diverse social media platforms. They write that “online right-wing extremism in NSW can be thought of as loosely connected ‘milieu’, rather than a series of clearly defined extremist or terrorist groups”, while noting that “within this wider social milieu, formal extremist groups do exist and pose a threat”. (Waldek, Ballsun-Stanton, Droogan, GNET, 14.12.2020).


For any questions, please get in touch via:
contact@techagainstterrorism.org

