Our weekly review of articles on terrorist and violent extremist use of the internet, counterterrorism, digital rights, and tech policy.
When tackling terrorist use of the internet, is content removal really our only option? Our upcoming webinar, on Wednesday, 16 December, at 5pm GMT, will look at what alternative steps tech companies can take. We have an exciting panel of experts and practitioners lined up – don’t forget to register here.
The EU Commission has released its counter-terrorism agenda for the European Union. It includes increased assistance to member states by Europol to strengthen the European Union’s resilience against terrorist threats. You can find more information on the European Union’s regulatory framework here. Amnesty International has responded to the agenda in a statement, outlining potential risks to human rights, particularly freedom of expression.
The European Parliament and the European Council reached an agreement regarding the proposed regulation on preventing the dissemination of terrorist content online on 10 December. You can find Tech Against Terrorism’s VOX-pol article on the regulation here.
Twitter, Automattic, Vimeo and Mozilla have written a joint letter in response to the European Union’s new regulatory proposals, including the Digital Services Act and the Democracy Action Plan, advocating for a conversation on how the new rules will tackle illegal and harmful content whilst upholding the European Union’s open internet principles. You can also find Tech Against Terrorism’s response to the Digital Services Act here.
EDRi has published an analysis of the final text adopted by the European Parliament Committee on Civil Liberties (LIBE) on the Regulation on Cross-Border Access to Data, also known as the e-evidence proposal.
The Royal Commission of Inquiry into the Terrorist Attack on Christchurch Mosques on 15 March 2019 has released its final report, setting out key findings and recommendations.
France’s anti-terror prosecutors have called for long jail sentences for the defendants on trial over the Charlie Hebdo attacks, in which a series of violent Islamist attackers killed 17 people.
The United Kingdom’s Home Office has released its quarterly update on the operation of police powers under the Terrorism Act 2000 and subsequent legislation in the UK, covering the year ending 30 September 2020.
Twitch has updated its hateful content and harassment policy, which will go into effect on 22 January 2021.
Moonshot CVE has released a report analysing the effectiveness of the Facebook Redirect Programme pilot and providing recommendations for future deployments. Facebook’s redirect method, launched in 2019, directs users to educational resources or outreach groups when they enter search queries related to hate or violence.
Jigsaw has published visualised data from the Global Terrorism Database, exploring how documented violent extremist attacks have increased in the United States, the United Kingdom, and Germany. The visualisations particularly highlight the hotspots of attacks related to white supremacy.
Platform regulation should focus on transparency, not content: This article, by Susan Ness, argues that European and American policymakers should collaborate to design transparent governance frameworks for tech companies that are workable across different legal systems and societal norms, rather than focussing overly on content regulation. Ness identifies two current regulatory approaches: eliminating platforms’ protection from liability for user-generated content, and requiring removal of content within a set timeframe, with steep fines for companies that fail to comply. She argues that both approaches risk infringing on freedom of speech, as well as hindering other human rights. Instead, she contends, the United States and Europe should work together to counter hate speech and disinformation by mandating transparency and accountability through strict oversight. She outlines three legislative requirements to achieve this: require platforms to have transparent Terms of Service, enforcement mechanisms, and appeal mechanisms; require tech platforms to disclose the impact of their algorithms, enabling swifter action against those that steer users towards illegal content; and set up an oversight board of industry and public members to ensure accountability. (Ness, Slate, 02.12.20).
To see more on the regulatory frameworks of the United States and Europe, as well as other regions including the Middle East, Africa and South America, take a look at our Online Regulation Series.
Reports of al-Qaeda’s demise are greatly exaggerated | Opinion: This article by Colin Clarke, a Senior Research Fellow at the Soufan Center, warns that overestimating the effect of the unconfirmed death of al-Qaeda’s leader, al-Zawahiri, on the group’s capability might hinder the world’s resilience to it. Clarke notes that since the Islamic State’s (IS) rise in 2014, al-Qaeda has focussed on franchising its model at a regional level, strengthening and growing groups like al-Shabaab in Somalia, Jama’at Nasr al-Islam wal Muslimin (JNIM) in West Africa, Hurras al-Din in Syria, al-Qaeda in the Indian Subcontinent (AQIS) in South Asia, and al-Qaeda in the Arabian Peninsula (AQAP) in Yemen. Therefore, Clarke argues, whilst al-Zawahiri’s reported death might create challenges in managing the relationship between al-Qaeda’s core and its affiliates, al-Qaeda’s model has shown great adaptability and resilience in the face of leadership losses. Clarke concludes that the operational autonomy of these regional affiliates, and their ability to pursue strategic objectives, means that if al-Zawahiri is indeed dead, his death will have a lesser effect on al-Qaeda’s capability than the West might hope. (Clarke, Newsweek, 07.12.20).
Indonesia: social grievances and violent extremism: This report by Moonshot CVE assesses whether Indonesian users at risk of online recruitment are more likely to interact with ideological counter-messaging or with psychosocial support content. The report highlights how online recruitment mechanisms exploit the personal grievances and vulnerabilities of potential recruits, whereas current counter-radicalisation efforts often emphasise ideological counter-messaging. To examine this discrepancy, Moonshot CVE conducted a pilot study in which psychosocial support was defined as addressing the social issues, such as a lack of belonging or employment, and psychological factors, including depression and loneliness, typically exploited by recruiters. The study finds that users were more likely to interact with employment assistance than with ideological counter-messages, and 128% more likely to engage with the ad offering help with loneliness. The report concludes that the pilot demonstrates both the population’s willingness to engage with online support and the potential of psychosocial support to aid counter-radicalisation and hinder recruitment by Islamist terrorists in Indonesia, and that these findings are applicable to wider counter-radicalisation strategies. (Moonshot CVE, 08.12.20).