Our weekly review of articles on terrorist and violent extremist use of the internet, counterterrorism, digital rights, and tech policy.
– Tech Against Terrorism & GIFCT e-learning webinars are back! Don’t miss our next webinar on Thursday, 25 October, at 5 pm BST / 12 pm EDT / 9 am PDT: “Online terrorist financing: assessing the risks and mitigation strategies”. You can register for the event here. Please stay tuned for upcoming agenda details.
Tech Against Terrorism Resource Spotlight
– Our report “Trends in terrorist and violent extremist use of the internet | Q1-Q2 2021” covers the key trends identified by Tech Against Terrorism’s OSINT team over the first six months of 2021.
– The European Parliament has called for a ban on police use of facial recognition in public spaces and predictive policing.
– Singapore has passed a law aimed at preventing foreign interference in domestic politics. The law allows the government to compel internet service providers to hand over user information to the authorities, block content, and remove applications in order to prevent the dissemination of content the government assesses to be “hostile”.
– The Taliban spokesperson Zabihullah Mujahid’s Twitter account was temporarily suspended on the evening of Sunday 3 October. It remains unclear on what grounds Twitter took this action; Twitter has not removed any other content relating to or originating from Taliban officials.
– This week, Roblox published its updated Community Standards, which now include an explicit prohibition of terrorism and violent extremism.
– A losing game: moderating online content fuels Big Tech power: In this article, Claire Fernandez, Executive Director of European Digital Rights (EDRi), argues that in its current shape, the Digital Services Act (DSA) proposed by the EU Commission feeds the power of Big Tech. This is because the DSA provides for little democratic scrutiny or judicial oversight and places the onus of moderating hate speech and other harmful content on tech companies. Fernandez also argues that the focus on content removal endangers freedom of speech and safety, particularly for marginalised groups, and that more holistic approaches are needed, since the internet reflects societal issues that cannot be solved by technology alone. She concludes that human rights impact assessments should inform proactive policy and legislative measures. (Fernandez, The Parliament, 29.09.21).