
THE ONLINE REGULATION SERIES | EUROPEAN UNION (update 2021)

NOVEMBER 2021 UPDATE TO THE EUROPEAN UNION'S REGULATORY FRAMEWORK


Tech Against Terrorism first analysed the EU’s online regulatory framework in the first edition of the Online Regulation Series in 2020, and we subsequently updated our analysis for the Online Regulation Series Handbook in July 2021. In this entry, we cover further developments from the past six months and add resources that were not included in the initial version of our analysis.

The regulation on addressing the dissemination of terrorist content online (passed)

The EU’s Regulation 2021/784 on addressing the dissemination of terrorist content online (often called the “TCO” or “TERREG”) entered into force in June 2021 and will apply from June 2022. In October 2021, the EU Commission commendably announced that, as Tech Against Terrorism recommended in our response to the regulation, it will fund initiatives supporting smaller platforms in complying with the regulation. It was also announced that Europol’s PERCI tool – a platform built to facilitate the implementation of the TCO – will be used by Member States’ ‘competent authorities’ (see below) to issue removal orders.

Seven UN Special Rapporteurs[1] once again voiced criticism of the regulation as part of a wider critique of the EU’s Counter-Terrorism Agenda (see below). They warned that the definition employed in the TCO regulation could conceivably encompass legitimate forms of expression. The Rapporteurs called on the EU to provide further information on the definition of ‘terrorist content’ and to explain its “operational consistency with human rights norms of legal precision”.[2] The criticisms made of the TCO before it received legislative approval can be found below.

Digital Services Act (proposed)

Following a first draft published in December 2020, the Digital Services Act (DSA) has undergone scrutiny both in the European Parliament and the Council of the EU.

In Parliament, the Committee on the Internal Market and Consumer Protection (IMCO) has primary responsibility for scrutinising the DSA. The Committee on Civil Liberties, Justice and Home Affairs (LIBE), the Committee on Industry, Research and Energy (ITRE), and the Committee on Legal Affairs (JURI) are associated committees for the purpose of scrutinising the DSA; they adopted their positions on the DSA in September and October 2021.

The DSA received extensive scrutiny in the European Parliament, in the course of which the various committees proposed more than 2,000 amendments to the original draft.[3] Whilst some amendments focus on expanding the scope of the law and on increasing transparency requirements, others propose content removal deadlines that were absent from the first draft of the law and are similar to those found in other regulations, including the TCO. Amendments adopted by the JURI committee, for example, propose a removal deadline of 30 minutes for illegal content disseminated by live broadcast,[4] and of 24 hours for illegal content which “can seriously harm public policy [and] public security” (an amendment broadly mirrored in IMCO’s draft report).

Digital rights groups have raised concerns about these developments in Parliament, arguing that they risk undermining what many such groups saw as a promising start in the first draft of the DSA. The Electronic Frontier Foundation (EFF) said[5] that amendments like the tight removal deadlines could increase reliance on content filters, which according to EFF will lead to the “overblocking” of content. Further, the Brussels-based digital rights network EDRi warned[6] that some of the parliamentary amendments risk committing the DSA to instruments that are, in EDRi’s view, flawed in the same way as those introduced by the TCO regulation. EDRi has also warned that this would lead to a lack of independent judicial oversight and “unjustified” removal deadlines, and would incentivise an increased reliance on platforms’ Terms of Service as a basis for content removal, to the exclusion of the rule of law.

The IMCO Committee is currently working on finalising its joint position, which it is expected to do in December 2021.[7]

Meanwhile, the Council adopted its general approach to the DSA on 25 November 2021. The Council proposes fewer amendments than were tabled in Parliament, but its approach nonetheless seeks to include online search engines in the scope of the DSA and to strengthen public authorities’ ability to issue orders concerning illegal content to private companies.[8]

Following the finalisation of the European Parliament’s position on the draft, the DSA will become the subject of the tripartite negotiations between the Commission, Parliament, and the Council known as the ‘trilogue’.

Measures to tackle criminal use of end-to-end encryption

Over the past year, there have been significant developments in EU initiatives that could have an impact on end-to-end encryption (E2EE). Terrorist use of such technology is often cited by the EU as justification for proposed action or regulation of E2EE, although child sexual abuse material is the reason most frequently mentioned.

The Council resolution “Security through encryption and security despite encryption”, published in November 2020, calls on EU institutions to propose a legislative framework for lawful law enforcement access to encrypted communications – mainly to counter terrorist activity and child sexual abuse material – and for facilitated cooperation with service providers. Further, the EU strategy for a more effective fight against child sexual abuse, published in December 2020, lays out eight initiatives meant to provide a framework for improving action on child sexual abuse, and notes that the use of end-to-end encryption makes detection of such crimes “impossible”. The strategy also tasks the EU Internet Forum (see below) with assessing technical solutions to detect and report child sexual abuse. Further, a leaked 2020 Commission report suggests that the EU is seriously exploring technical solutions to target criminal use of E2EE. This was reiterated in January 2021 by the EU Commissioner for Home Affairs, Ylva Johansson, who also confirmed that the Commission is exploring “permanent legislation” to counter child sexual abuse material online. In December 2021, Johansson announced that a proposal on this matter will be unveiled in early 2022. It is currently unclear to what extent such regulation would affect E2EE.

For more information about the EU’s regulatory approaches to criminal use of E2EE, as well as the approaches of other jurisdictions, see Tech Against Terrorism’s report on “Terrorist use of E2EE: State of Play, Misconceptions, and Mitigation Strategies.”


THE ONLINE REGULATION SERIES 2020 | EUROPEAN UNION

The European Union (EU) is an influential voice in the global debate on regulation of online speech. For that reason, two upcoming regulatory regimes might – in addition to shaping EU digital policy – create global precedents for how to regulate both online speech generally and terrorist content specifically.

European Union’s regulatory framework:

  • European Counter-Terrorism Strategy, adopted in November 2005, which sets out the EU’s priorities on countering terrorism in the Union.
  • European Agenda on Security, adopted in April 2015, which announced the establishment of key institutions to tackle terrorist use of the internet, such as the EU Internet Referral Unit and the EU Internet Forum.
  • EU Security Union Strategy, published in 2020. The Strategy covers the 2020-2025 period and focuses on ‘future-proofing’ the EU’s security environment via ‘a whole-of-society approach to security’ aimed at responding to a rapidly changing threat landscape. The Strategy explicitly highlights countering illegal content online, commits the EU Commission to exploring “technical, operational, and legal solutions” to target terrorist use of end-to-end encryption, and mentions some of the regulatory approaches on E2EE and terrorist content online described in this entry, including the TCO. (added December 2021)
  • EU Counter-Terrorism Agenda, announced in 2020 as part of the Security Union Strategy. The Agenda centres the EU’s counterterrorism approach around four principles: anticipate, prevent, protect, and respond, and mentions countering extremist ideologies online. (added December 2021)
  • Directive (EU) 2017/541 on combating terrorism, adopted in March 2017, which is the key EU legal act on terrorism.
  • E-Commerce Directive, adopted in June 2000, which provides the overall framework for the EU’s Digital Market and exempts tech companies from liability for user-generated content.
  • Audiovisual Media Services Directive, adopted in November 2018, which compels Member States to prevent audio-visual services, including online video-sharing platforms, from disseminating harmful material, including terrorist content.

Proposed regulation:

  • Digital Services Act (DSA), proposed by the EU Commission in December 2020 and currently under negotiation (see below).

Key organisations and forums:

  • Europol, the European Union’s law enforcement agency, which supports Member States in countering organised crime and terrorism.
  • EU Internet Referral Unit (Europol), which reports terrorist content to tech platforms for their assessment and removal based on platform Terms of Service.
  • EU Internet Forum, a public-private forum set up by the Commission to tackle terrorist use of the internet.
  • Code of Practice on Disinformation, unveiled in 2018, which sets a code of practice for tech companies to help tackle disinformation on their platforms. (added December 2021)

Collaborative schemes:

  • EU Code of Conduct on Illegal Hate Speech, in which signatory tech companies commit to remove and report on hate speech flagged to them by a select number of European civil society groups.
  • EU Crisis Protocol, a collaborative mechanism between governments and tech companies for the rapid detection and removal of terrorist content in the event of an online crisis.

Key takeaways for tech platforms:

  • Platforms are currently exempt from liability for user-generated content, but new obligations will apply once the regulation on addressing the dissemination of terrorist content online starts applying in June 2022.
  • Companies can participate in several voluntary collaborative schemes together with European law enforcement agencies and Member States.
  • The EU is an influential regulatory force, and there is reason to believe that EU regulation could inspire similar efforts elsewhere.

Regulation on addressing the dissemination of terrorist content online (from June 2022)

  • One-hour removal deadline (Article 3). Companies are to remove content within one hour of receiving a removal order from a “competent authority” (which each Member State will be able to appoint – more information below). A platform that shows “systematic and persistent” failure to meet the one-hour deadline could face penalty fees of up to 4% of its global annual turnover.
  • “Specific measures” (Article 5). Companies are to introduce “specific measures” to prevent terrorist content if instructed by competent authorities. The choice of measure is up to each platform, but platforms will need to ensure that the measures are effective in tackling terrorist content without having an adverse impact on human rights and freedom of speech.
  • Preservation. Companies are obliged to preserve removed terrorist content for six months (Article 6).
  • Transparency reporting. Companies will need to produce transparency reports on measures taken to comply with the
    regulation (Article 7). Platforms will also need to describe more widely the efforts they are making to remove
    terrorist content.
  • Complaint mechanisms. Platforms will need to introduce complaint mechanisms for users whose content has been removed (Article 10).
  • User notice (Article 11). Companies will need to inform users when their content has been removed as a part of
    the company complying with the regulation.
  • Point of contact. Platforms will have to establish a point of contact to coordinate and respond to removal
    orders from competent authorities (Article 15).
  • Legal representative. All non-EU based platforms offering services in the EU will need to assign a legal
    representative in the EU (Article 17).
  • Violations of the above obligations may lead to penalties (Article 18), although the regulation only specifies exact penalty levels for breaches of Article 3.

Digital Services Act: Proposed measures

  • Platforms are largely protected from liability (similar to the E-Commerce Directive), but could be held liable
    if they have “actual knowledge” of illegal content on their sites (Articles 3-5). There is also a provision
    which specifies that platforms will not be held liable for proactively carrying out activities aimed at reducing
    the presence of illegal content (Article 6).
  • Smaller platforms are largely exempt from some of the more rigorous requirements, whereas so-called “very large
    online platforms” (platforms with more than 45 million monthly active users in the EU) will need to introduce
    additional measures, including additional transparency obligations, risk assessments and being subject to
    independent audits (Articles 25-33).
  • Content removal order mechanism (Article 8).
  • Requirements to assign points of contact and legal representatives in the EU (Articles 10-11).
  • Obligations to clarify content moderation policies and practices in Terms of Service (Article 12).
  • Transparency reporting obligations. Small and micro companies (as specified in Recommendation 2003/361/EC) are exempt under the proposal (Article 13).
  • Notice and action mechanism, allowing anyone to report suspected illegal content to platforms. Such a report can qualify as “actual knowledge”, meaning that a platform which fails to remove the reported content could be held liable for hosting it (Article 14).
  • Trusted flaggers. Platforms will have to ensure that trusted flaggers’ reports are prioritised (Article 19).

Tech Against Terrorism's Commentary

Regulation on addressing the dissemination of terrorist content online

In September 2018, the EU Commission introduced its proposed “regulation on preventing the dissemination of terrorist content online”. The regulation has since undergone the EU’s legislative trilogue process of negotiation between the Commission, Parliament, and the Council. The Commission’s proposal drew criticism from academics, experts, and civil society groups. Further, the proposed regulation was criticised by three separate UN Special Rapporteurs, the Council of Europe, and the EU’s own Fundamental Rights Agency, which said that the proposal could violate the EU Charter of Fundamental Rights. Criticism mainly concerned the short removal deadline and the proactive measures instrument, which according to critics would lead to companies erring on the side of removal to avoid penalty fees. The EU Parliament’s reading of the proposal, unveiled in April 2019, provided some changes, for example by deleting the referral instrument and limiting the scope of the regulation to “public” dissemination of terrorist content so as not to cover private communications and cloud infrastructure. These changes were largely welcomed by civil society groups. In April 2021, the regulation was approved by Parliament following Council review. The law entered into force on 7 June 2021 and will be fully applicable from 7 June 2022.

At Tech Against Terrorism, we have, since the publication of the original proposal, highlighted our concerns over smaller platforms’ capacity to comply with the regulation and over its effectiveness in achieving its intended purpose of tackling terrorist content online and creating a safer EU. We have also shared several of the freedom of expression concerns raised by civil society groups. At the publication of the final regulation in 2021, we reiterated these concerns and highlighted that the regulation provides little legal certainty for platforms. It also offers little clarity on how smaller platforms will be supported in tackling this threat and in complying with the regulation.

We warned that this is a misjudgement by the EU that casts further doubt on the evidence base underpinning the regulation. As a result, we fear that the regulation will do little to achieve its intended purpose of tackling terrorist use of the internet, and risks harming innovation and competition in the process. Furthermore, the EU should clarify what safeguards are in place to prevent authorities from abusing their position, and consider the incentives the law creates and what these mean for the objectives the EU has set out in its overall tech strategy and the Digital Services Act.

See the update sections above for information the EU Commission has since shared, which partially addresses some of these concerns.

The Digital Services Act

The Digital Services Act is an ambitious proposal aimed at introducing several new regimes to tackle illegal and harmful online content in the EU. It was relatively well received by civil society groups, which commended the focus on transparency, accountability, and size-oriented obligations, and welcomed the fact that the liability scheme set out in the E-Commerce Directive remains largely intact, albeit slightly modified. However, civil society groups criticised certain aspects of the proposal. For example, the Electronic Frontier Foundation criticised the fact that a notice under Article 14 constitutes “actual knowledge”, noting that this may lead companies to err on the side of removal, since such a notice can make them liable. This mechanism might force companies, as opposed to courts, to act as arbiters of legality. Digital rights organisation Article 19 added that there are insufficient human rights safeguards built into the risk assessment and audit provisions assigned to very large online platforms.

At Tech Against Terrorism, we noted that the DSA has several positive aspects, but that it is unlikely to contribute to preventing terrorist use of the internet. In our assessment, the DSA – whilst more balanced than other regulation – is part of a global trend in which governments and inter-governmental bodies implement mechanisms that risk undermining the rule of law. Furthermore, despite the EU’s stated intention to achieve the opposite, the DSA may give private and democratically unaccountable tech platforms more power over online speech. The DSA also reflects another global trend in that it risks leading to increased extra-territorial enforcement of national law. There is a risk that the DSA will lead to a more fragmented regulatory landscape in the EU, rather than a harmonised one.

In our response to the draft DSA, we highlighted that governments and bodies like the EU should provide strategic leadership on matters related to terrorism, both online and offline. The DSA, in our view, does not do that and, whilst containing several commendable aspects, focuses on the wrong issues in terms of tackling terrorist use of the internet. Instead, policymakers should focus on improving the designation of far-right terrorist groups, supporting smaller tech companies (where most terrorist activity is located) in tackling terrorist use of their platforms, and formulating effective and human rights-compliant approaches to tackling terrorist-operated websites.

Annex 1. EU counterterrorism strategy

The EU’s Counter Terrorism Strategy, launched in 2005, provides a framework for the Union to respond to terrorism across four strands: prevent, protect, pursue, and respond. Whilst the strategy does not focus on terrorist use of the internet, it does mention the need to counter this as part of its “prevent” strand.

Many of the texts and bodies involved in tackling terrorist use of the internet in the EU were introduced around 2015. In April 2015, the EU adopted the European Agenda on Security, which addresses at length the prevention of terrorism and of radicalisation leading to terrorism, including terrorist use of the internet. The Agenda also committed the EU to setting up two collaborative schemes: Europol’s EU Internet Referral Unit (EU IRU) and the EU Internet Forum. This framework was further developed in the 2020 EU Security Union Strategy.

The key regulatory document guiding the EU-wide counterterrorism response is Directive (EU) 2017/541 (also known as the “Terrorism Directive”). The Directive replaced previous texts[9] and provides definitions of key terms, including “terrorist groups”, “terrorist offences”, and terrorist propaganda (“public provocation to commit a terrorist offence”). The Directive was partly introduced to better reflect the need to tackle terrorist use of the internet, and lays down guidelines for Member States to address this threat. For example, the Directive instructs Member States to ensure the “prompt removal” of online terrorist content, whilst stressing that such efforts should be based on an “adequate level of legal certainty” and ensure that appropriate redress mechanisms are in place.

Annex 2. Online terrorist content: foundational regulation

The main legal act outlining tech company responsibilities with regards to illegal and harmful content is the E-Commerce Directive of 2000. Whilst initially meant to break down obstacles to cross-border online services in the EU, the E-Commerce Directive also exempts tech companies from liability for illegal content (including terrorist content) that users create and share on their platforms, provided they act “expeditiously” to remove it.[10] Further, Article 15 outlines that tech companies have no obligation to monitor their platforms for illegal content. This arrangement is being reconsidered by the EU, both through the proposed Regulation to combat online terrorist content and the Digital Services Act.

In 2018, the EU updated its Audiovisual Media Services Directive (AVMSD), which governs Union-wide coordination of national legislation on audio-visual services (such as television broadcasts), to include online video-sharing platforms (VSPs). It encourages Member States to ensure that VSPs under their jurisdiction comply with the requirements set out in the AVMSD, including preventing the dissemination of terrorist content. In a Communication, the European Commission specified that VSP status primarily concerns platforms whose main purpose, or one of whose core purposes, is the sharing of user-generated video content. This means that, in theory, the AVMSD could apply to social media platforms on which videos are shared, including via livestreaming functions.

Annex 3. EU-led voluntary collaborative forums to tackle terrorist use of the internet

  • EU Internet Forum (EUIF), bringing together Member States, tech companies, and relevant expert stakeholders[11] with the aim of creating joint voluntary approaches to preventing terrorist and violent extremist use of the internet, as well as tackling child sexual abuse material. Tech Against Terrorism is a member of the EUIF. Voluntary arrangements like the EUIF have been criticised for setting undue speech regulation under the guise of volunteerism. Professor Danielle Citron described the EUIF as an example of the EU contributing to “censorship creep”.[12] According to Citron, several of the voluntary steps that tech companies have taken to address terrorist use of their platforms since 2015 have been made specifically to placate EU legislators. Whilst Citron acknowledges that results have come out of this approach (the GIFCT hash-sharing database is one example), she argues that the definitional uncertainty around terms like “terrorist content” means that there is a significant risk of erroneous removal, which negatively impacts freedom of expression. Further, since companies are tackling content “voluntarily”, material is removed under company speech policies rather than local or regional legislation, meaning that the effects are global despite being based on European standards.

  • EU Internet Referral Unit (EU IRU), based on the model pioneered by the UK’s Counter Terrorism Internet Referral Unit. The EU IRU employs subject matter experts to detect suspected Islamist terrorist content based on an assessment of illegality under Directive (EU) 2017/541 (see above), and subsequently refers it to tech platforms, which assess whether the content violates their Terms of Service. Member States are also able to refer content to the EU IRU. The unit conducts so-called referral assessment days with tech companies, which have led to the substantial removal of terrorist content, including a joint operation with Telegram to remove a large number of Islamic State channels. According to the EU IRU, the Unit has to date referred more than 111,000 pieces of content to tech companies. Whilst this approach has been commended, criticism has been levelled against the EU IRU (and IRUs generally) for risking undermining the rule of law: IRUs can promote content removal via extra-legal channels, since content is removed on the basis of company Terms of Service rather than legal statutes. Whilst the Unit does release annual transparency reports, the Global Network Initiative (GNI) has noted that there is no formal oversight or judicial review of the EU IRU’s activities.


[1] UN Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, Fionnuala Ní Aoláin; UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Irene Khan; UN Special Rapporteur on the rights to freedom of peaceful assembly and of association, Clement Nyaletsossi Voule; UN Special Rapporteur on the human rights of migrants, Felipe González Morales; UN Special Rapporteur on minority issues, Fernand de Varennes; UN Special Rapporteur on the right to privacy, Ana Brian Nougrères; and UN Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, E. Tendayi Achiume. All seven signed a letter addressed to EU Commission President Ursula von der Leyen, EU Home Affairs Commissioner Ylva Johansson, EU Counter-Terrorism Coordinator Ilkka Salmi, and EU Parliament President David Maria Sassoli raising concerns about the EU’s counterterrorism agenda, including the TCO regulation.

[2] Fionnuala Ní Aoláin, Irene Khan, Clement Nyaletsossi Voule, Felipe González Morales, Fernand de Varennes, Ana Brian Nougrères, and E. Tendayi Achiume (2021), “Letter addressed to EU Commission President Ursula von der Leyen, EU Home Affairs Commissioner Ylva Johansson, EU Counter-Terrorism Coordinator Ilkka Salmi, and EU Parliament President David Maria Sassoli”.

[3] EDRi (2021), Digital Services Act: The EDRi guide to 2,297 amendment proposals, EDRi.

[4] The amendment mentions sporting or entertainment events as examples.

[5] Cory Doctorow (2021), Europe's Digital Services Act: On a Collision Course With Human Rights, Electronic Frontier Foundation.

[6] EDRi (2021), Warning: the EU’s Digital Services Act could repeat TERREG’s mistakes, EDRi.

[7] Legislative Train Schedule (2021), A Europe Fit for the Digital Age, European Parliament.

[8] Ilaria Buri and Joris van Hoboken (2021), The general approach of the Council to the Digital Services Act, DSA Observatory.

[9] Such as Council Framework Decision 2002/475/JHA.

[10] This has some similarity to Section 230 of the US Communications Decency Act, which exempts tech companies from legal liability for user-generated content located on their platforms.

[11] Tech Against Terrorism has participated in EUIF meetings since 2017.

[12] By censorship creep, Citron means that online counterterrorism efforts or mechanisms risk taking on functions beyond their intended purpose, which risks leading to censorship of legal and legitimate speech online.

Resources:

Adam Hadley & Jacob Berntsson (2020), “The EU’s terrorist content regulation: concerns about effectiveness and impact on smaller tech platforms”, VOX-Pol.

Tech Against Terrorism (2020), Summary of our response to the EU Digital Services Act consultation process, Tech Against Terrorism.

David Kaye, Fionnuala Ní Aoláin & Joseph Cannataci (2018), Letter from the mandates of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression; the Special Rapporteur on the right to privacy; and the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, OHCHR.

Danielle Citron (2018), “Extremist Speech, Compelled Conformity, and Censorship Creep”, Notre Dame Law Review.

Daphne Keller (2019), “The EU’s Terrorist Content Regulation: Expanding the Rule of Platform Terms of Service and Exporting Expression Restrictions from the EU’s Most Conservative Member States”, Stanford Cyber Policy Center.

Article 19, Article 19’s Recommendations for the EU Digital Services Act.

Access Now (2020), “How the Digital Services Act could hack Big Tech’s human rights problem”.

Europol (2019), EU IRU 2018 transparency report.

Europol (2020), EU IRU 2019 transparency report.