Since the 28 May Executive Order issued by President Trump on Section 230 and the debate it stirred around tech platforms and the regulation of online content, new legislation looking to modify Section 230 has been proposed. Section 230 of the 1996 Communications Decency Act is a landmark law that protects tech companies from being held legally liable for user-generated content posted on their platforms. However, it also states that companies should voluntarily moderate content in “good faith”. Section 230 is therefore widely seen as fundamental to the growth of the internet, as it has encouraged innovation without fear of legal ramifications. In recent years, however, there have been concerns about Section 230’s suitability to the modern information landscape, in particular with regard to the dissemination of harmful content online (see Citron’s article below as one example).
Following President Trump’s Executive Order, Senator Josh Hawley (R-MO) introduced the “Limiting Section 230 Immunity to Good Samaritans Act” on 17 June. The Act stipulates that tech companies need to revise their Terms of Service to operate in “good faith” and to avoid “intentionally selective enforcement” of their policies. If they do not comply, the Act states that companies should lose their liability protections. The law would apply to platforms with more than 300 million global users a month, or more than $1.5 billion in global annual revenue. The tech industry has expressed concerns regarding the new Act, warning that it risks allowing extremist content to accumulate online since, in theory, those whose content has been removed, including content taken down under counterterrorism policies, would be able to sue the platform removing it.
The same day, the Department of Justice shared its recommendations on reforming Section 230. These suggestions would expand liability claims to include holding platforms accountable for not taking action on child exploitation, terrorism, and cyberstalking, as well as removing Section 230 protections from “truly bad actors” that “facilitate” access to third-party content violating federal law. The Department of Justice also recommends introducing a definition of “good faith”. This prompted Techdirt to state that the recommendations, as well as the bill introduced by Senator Hawley, “raises serious 1st Amendment questions.”
These developments are occurring in the aftermath of the EARN IT Act, introduced in March of this year, which aims to counter child sexual abuse material (CSAM). Under this act, tech companies would need to earn the liability shields put in place by Section 230, rather than being automatically protected. Earning protections would, in turn, depend on whether companies are seen as complying with “best practice” with regard to countering CSAM. This would be assessed by a federally appointed commission, in which the Attorney General would hold executive power. Riana Pfefferkorn of Stanford University, in her analysis of the Act from earlier this year, raised concerns that the law could lead to an indirect targeting of tech companies’ use of encryption technologies, since the use of encryption might be viewed by the commission as not complying with “best practice”.
– La loi Avia contre la haine sur Internet s’effondre quasi intégralement [The Avia law against online hate collapses almost entirely]: The French Constitutional Council has blocked the newly adopted legislation to counter ‘cyberhate’, Julien Lausson reports. Adopted by Parliament on 13 May, the legislation would have required tech platforms to remove terrorist and child sexual abuse content within one hour of notification by a French authority, and hateful content – as defined by French legislation – within 24 hours of reporting by a user. However, these two requirements have been deemed unconstitutional due to the risk of inciting platforms to over-zealously censor online content. The Constitutional Council found that the requirement to remove flagged content within 24 hours does not leave enough time for tech companies to verify whether the content is in fact manifestly unlawful under existing French law. With regard to terrorist content, the Constitutional Council underlined that determining whether such content is illegal requires a decision by a judge, and it would not be possible for tech platforms to obtain such a judgement within the one-hour deadline proposed by the ‘cyberhate’ law. Overall, the risks for individual freedoms were ruled “unadapted and disproportionate” by the Council. However, Lausson underlines that some aspects of the law remain, including the obligation for platforms to have a user reporting system for hateful content and the formation of a cyberhate observatory to monitor hateful content online. (Article in French, Lausson, Numerama, 18.06.2020)
– Oversight boards are a “bold new approach” to tackling today’s problems: In this article, Evelyn Aswad, a member of Facebook’s recently announced Oversight Board, describes the goals of the newly formed Board, highlighting how it might serve to mitigate American concerns about the “concentration of power” amongst corporate social media actors. She expresses faith in the Board’s ability to render binding decisions and hold Facebook accountable. The Oversight Board is committed to upholding international human rights standards in content moderation, which, she argues, means that it will favour less intrusive means of content moderation that respect freedom of expression rather than imposing vague speech codes or banning speech. (Aswad, Knight Foundation, 16.06.20)
– Digital platforms’ power over speech should not go unchecked: Danielle Citron here analyses the findings of a new Gallup/Knight Foundation report, and highlights the tension between the American public’s low faith in social media companies to prioritise its interests and its distrust of the government to regulate speech. Citron argues that most online activity amounts to “conduct” rather than expression, and therefore warrants little to no First Amendment protection. She also stresses that legislation to enforce platform “neutrality” would jeopardise free speech values, since it would kneecap responses to cyber harassment that itself chills free expression. On the other hand, Citron notes that “[t]he power online platforms have over digital expression should not proceed unchecked, as it does in crucial respects today”, and that in this respect Section 230’s immunity has enabled platforms to avoid bearing the cost of “destructive online activity”. She thus encourages a fix to Section 230 whereby platforms would enjoy immunity from liability if they could show that their “content moderation practices writ large are reasonable”. (Citron, Knight Foundation, 16.06.20)
You can find the full report here: ‘Free expression, harmful speech, and censorship in a digital world’
For more insight, check our summary of the latest modifications to Section 230 above.
– How is Twitter going to moderate these voice recordings?: In this article, Jason Koebler analyses Twitter’s recent update that allows users to post voice tweets directly, rather than first having to post them via third-party platforms. Koebler explains how audio tweets like this potentially pose difficulties for content moderation: for a long audio file to be assessed, a moderator will have to listen to the full recording, identify the harmful part of the audio, and then moderate accordingly. Whilst AI technology might aid this process, the article notes that AI has its own issues regarding content moderation and requires human oversight. Koebler also notes that, in response to the risk of this new update complicating content moderation on the platform, Twitter is asking stream watchers to assist with moderating comments, an approach he argues is potentially flawed. (Koebler, Vice, 17.06.2020)
– The Hazimi movement: a takfir-based ideology: This report by the Nuance door Training en Advies (NTA) research centre discusses the rise of the Hazimi movement in the Netherlands. The report cautions that the Hazimi movement is “even more extreme than the Islamic State (IS)” and caused ideological rifts in the then-caliphate, where some fighters were killed as a consequence of their alignment with Hazimi ideology. The rift is based on the principle of takfir – the process of ex-communication. For IS, the report notes, takfir means that “true” Muslims (those adhering to IS’ interpretation of Islam) can proclaim anyone who does not adhere to the same strand an “apostate”, a crime which should be punishable by death. The Hazimi movement takes this one step further: to adhere to its ideology, “true” Muslims must declare takfir on the aforementioned apostates, and if they do not, they themselves become worthy of takfir, and thereby of execution. The movement’s name stems from the Saudi imam Ahmad al-Hazimi, whose sermons have been translated and shared on Dutch-language Telegram and Facebook channels. The NTA concludes that Dutch governmental organisations that work with returning foreign fighters should invest in increasing their awareness of the group. (Nuance door Training & Advies, June 2020)
– Militant extremism is on the rise in West Africa. Education is suffering: In this article, Amy Mackinnon comments on the impact of Islamic State (IS) and al-Qaeda attacks on the education system in West Africa. In Burkina Faso, the looting and burning of schools by IS and al-Qaeda, carried out due to their disapproval of the country’s secular curriculum, has left 350,000 children without access to education over the last three years. In addition, Mackinnon stresses that the ongoing clash between IS and al-Qaeda-affiliated groups in the region is worsening in the ungoverned border areas of Burkina Faso, Mali, and Niger. She highlights how this might get worse given that the US is considering withdrawing its troops from the already largely ungoverned area. Mackinnon also stresses that the absence of education leaves children more vulnerable to being radicalised by one of the terrorist groups or to being forced into child marriage. (Mackinnon, Foreign Policy, 29.05.2020)
– Number of far-right terrorist prisoners in Britain hits record high: Dan Sabbagh reports here on the recent release of Home Office data related to the Terrorism Act for 2018-2019, including statistics on the number of persons convicted of terror-related crimes. Sabbagh focuses on a trend underlined by the report: the number of far-right violent extremists in custody for terror offences in Great Britain has reached its highest ever recorded level, with an increase of 33 prisoners from the previous year. Whilst Islamist extremists still make up the main contingent of terror-related prisoners, Sabbagh stresses that the increase in jailed far-right terrorists demonstrates the rising threat of far-right terrorism and violent extremism in the UK. This was visible last weekend in London, where the far right’s ‘unity demonstration’ represented its largest mobilisation in the city since 2017, according to Hope Not Hate. According to Sabbagh, the number of jailed far-right violent extremists rose drastically following the UK’s proscription of the neo-Nazi group National Action in 2016. (Sabbagh, The Guardian, 17.06.2020)
For an overview of far-right violent extremism and neo-Nazism in the UK, see: “The many faces of neo-Nazism in the UK” (Ariza, OpenDemocracy, 06.02.2020)
– Countering the radical right, back to basics: In this article, Dr Craig McCann makes the case for a rethinking of strategies to counter far-right violent extremism. McCann argues that strategies should be rooted in exploring intra- and inter-community relationships, as well as the social pressure communities might be under, “to promote meaningful and sustainable integration” within and between them. Drawing on an interview with Jonathan Evans, former Director General of MI5, who sees the rise of far-right violent extremism as a reflection of the social pressures on communities as a result of austerity measures, McCann argues for increased community investment to address the root cause of the grievance narratives that have fed violent extremism over recent years. Such investment should, according to McCann, aim to address “difficult conversations” such as inequality, immigration, marginalisation, and mental health. McCann further lays out this argument in “Back to basics”, the final chapter of his recent book: The Prevent Strategy and Right Wing Extremism: A Case Study of the English Defence League. (McCann, CARR, 15.06.2020)
– Intelbrief: women in the white supremacist online ecosystem: In this Intelbrief, the Soufan Center provides an overview of an often overlooked and misunderstood phenomenon: the role of women in far-right violent extremist movements and the gender dimension of these groups, focusing specifically on women’s role within online spaces. Whilst the exact roles and levels of participation of women in far-right violent extremist spaces online are difficult to assess – mostly because of anonymity – this brief focuses on the depiction of women in those spaces and on the gender aspect of violent extremists’ ideologies. Broadly, far-right violent extremists’ views on women often invoke discussions about “purity”: on the one hand, the perceived need for white women to be protected from non-white people – in order to protect what is considered white men’s “property” – and, on the other, adherence to traditional gender roles and general hostility towards women. Intersectionality of hate – the crossing of anti-women sentiment with racism, homophobia, and anti-Semitism – is also present in far-right violent extremist online spaces, with content calling for violence against the LGBTQIA community, feminists, non-white women, and women in interracial relationships. The brief concludes by calling for further research into understanding women’s involvement in far-right violent extremism. (The Soufan Center, 17.06.2020)
Spoiler alert: The Tech Against Terrorism Podcast is returning soon with an episode on women’s role in terrorist and violent extremist online spaces, taking a gender perspective on terrorist and violent extremist use of the internet. Be sure to follow us on Twitter to be the first to know when the podcast is live!
On this topic, we are listening to:
Radikaal podcast with Christian Picciolini, a former far-right violent extremist who has since dedicated his life to countering violent extremism through his Free Radicals Project (Radikaal Podcast)
We also tweeted about this:
ADL Hate, Extremism, Anti-Semitism, Terrorism (H.E.A.T) Map
You can find our tweet and comment here
– Terrorisme: la majorité veut développer la justice préventive [Terrorism: the majority wants to develop preventive justice]: On Wednesday, the French Parliament began its reading of a proposed bill on new security measures for released terrorist offenders, Jean-Baptiste Jacquin reports here. The proposed legislation would permit placing released terrorist offenders under various surveillance measures, including obliging them to live in a pre-determined location and to carry a GPS tracker for up to 10 years after their release from prison, or for 20 years if the offender served a sentence of 10 years or more. With 43 Islamist terrorist offenders set to be released from prison in France this year, Jacquin reports that this bill aims to complement existing measures for monitoring released terrorist offenders, which supporters of the proposed legislation deem insufficient. Jacquin underlines that the proposed legislation has been criticised for the security abuses it might lead to, as it supposes that offenders who have served their sentence will remain dangerous, and assesses that individuals may remain threats for as long as 20 years after their release. (Article in French, Jacquin, Le Monde, 16.06.2020)