The Online Regulation Series | The European Union

You can access the ORS Handbook here

The European Union (EU) is an influential voice in the global debate on the regulation of online speech. For that reason, two upcoming regulatory regimes – the proposed regulation on preventing the dissemination of terrorist content online and the Digital Services Act – might, in addition to shaping EU digital policy, create global precedents for how to regulate both online speech generally and terrorist content specifically.

European Union’s regulatory framework:

  • European Counter-Terrorism Strategy, adopted in November 2005, which sets out the Union’s priorities on countering terrorism.
  • European Agenda on Security, adopted in April 2015, which announced the establishment of key institutions to tackle terrorist use of the internet such as the EU Internet Referral Unit and the EU Internet Forum.
  • Directive (EU) 2017/541 on combating terrorism, adopted in March 2017, which is the key EU legal act on terrorism.[1]
  • E-Commerce Directive, adopted in June 2000, which provides the overall framework for the EU’s Digital Market and exempts tech companies from liability for user-generated content, provided they act expeditiously to remove illegal material once aware of it.
  • Audio-Visual Media Services Directive, revised in November 2018, which compels Member States to prevent audio-visual services, including online video-sharing platforms, from disseminating harmful material, including terrorist content.

Proposed regulation:

  • Regulation on preventing the dissemination of terrorist content online (proposed by the European Commission in 2018 and currently in the trilogue[2] process), which proposes to compel tech companies to remove terrorist content within one hour and introduce proactive measures to filter such material.[3]
  • Digital Services Act (DSA), announced in 2020 as part of the new European Commission’s aim for a “Europe fit for the Digital Age”. The DSA seeks to examine and potentially alter the E-Commerce Directive, which since 2000 has been the core regulatory statute for the EU digital market. Although it is unclear what exact scope the DSA will have, it will seek to update EU policy and regulation on matters related to liability, transparency, competition, and harmful content in the online space. The EU opened a consultation process on the DSA which closed in September, and you can read our response here.

Key organisations and forums:

  • Europol, the European Union’s law enforcement agency, which supports Member States in countering organised crime and terrorism.
  • EU Internet Referral Unit (Europol), which refers terrorist content to tech platforms for their assessment and removal based on platform Terms of Service.
  • EU Internet Forum, a public-private forum set up by the Commission to tackle terrorist use of the internet.

Collaborative schemes:

  • EU Code of Conduct on Illegal Hate Speech, in which signatory tech companies commit to remove and report on hate speech flagged to them by a select number of European civil society groups.
  • EU Crisis Protocol, a collaborative mechanism between governments and tech companies for the rapid detection and removal of terrorist content in the event of an online crisis.

Key takeaways for tech platforms:

  • Companies are currently exempt from legal liability for user-generated content, although this could change as part of the new Digital Services Act.
  • There is a possibility that removal deadlines and demands for proactive measures to tackle terrorist content will be introduced as part of new Union-wide regulation.
  • Companies can participate in a number of voluntary collaborative schemes alongside European law enforcement agencies and Member States.
  • The EU is an influential regulatory force, and there is reason to believe that EU regulation could inspire similar efforts elsewhere.

EU Counter-Terrorism Strategy

The EU’s Counter-Terrorism Strategy, launched in 2005, provides a framework for the Union to respond to terrorism across four strands: prevent, protect, pursue, and respond. Whilst the strategy does not focus on terrorist use of the internet, it does mention the need to counter this as part of its “prevent” strand.

Many of the texts and bodies involved in tackling terrorist use of the internet in the EU came to fruition around 2015. In April 2015, the EU adopted the European Agenda on Security, which addresses at length the prevention of terrorism and of radicalisation leading to terrorism, including terrorist use of the internet. The Agenda also committed the EU to setting up two collaborative schemes: Europol’s EU Internet Referral Unit (EU IRU) and the EU Internet Forum.

The key regulatory document guiding the EU-wide counterterrorism response is Directive (EU) 2017/541 (also known as the “Terrorism Directive”). The Directive replaced previous texts (such as Council Framework Decision 2002/475/JHA) and provides definitions of key terms, including “terrorist groups”, “terrorist offences”, and terrorist propaganda (“public provocation to commit a terrorist offence”). The Directive was partly introduced to better reflect the need to tackle terrorist use of the internet, and lays down guidelines for Member States to address this threat. For example, the Directive instructs Member States to ensure the “prompt removal” of online terrorist content, whilst stressing that such efforts should be based on an “adequate level of legal certainty” and ensure that appropriate redress mechanisms are in place.

Online terrorist content: current regulatory landscape

The main legal act outlining tech company responsibilities with regard to illegal and harmful content is the E-Commerce Directive of 2000. Whilst initially meant to break down obstacles to cross-border online services in the EU, the E-Commerce Directive also exempts tech companies from liability for illegal content (including terrorist content) that users create and share on their platforms, provided they act “expeditiously” to remove it.[4] Further, Article 15 establishes that tech companies have no general obligation to monitor their platforms for illegal content. This arrangement is being reconsidered by the EU, both through the proposed regulation to combat online terrorist content and through the Digital Services Act.

In 2018, the EU updated its Audio-Visual Media Services Directive (AVMSD), which governs Union-wide coordination of national legislation on audio-visual services (such as television broadcasts), to include online video-sharing platforms (VSPs). It requires Member States to ensure that VSPs under their jurisdiction comply with the requirements set out in the AVMSD, including preventing the dissemination of terrorist content. In a communication, the European Commission specified that VSP status primarily concerns platforms that have the sharing of user-generated video content either as their main purpose or as one of their core purposes, meaning that in theory the AVMSD could apply to social media platforms on which videos are shared, including via livestreaming functions.

Proposed regulation on preventing the dissemination of terrorist content online

In September 2018, the European Commission introduced a proposed “regulation on preventing the dissemination of terrorist content online”. The regulation has since entered the EU’s legislative trilogue process of negotiation between the Commission, Parliament, and the Council. To date, only Parliament’s reading of the proposal has been published in full.

The proposal suggests three main instruments to regulate online terrorist content:

  • Swift removals: companies would be obliged to remove content within one hour of receiving a removal order from a “competent authority” (which each Member State will be able to appoint). Failure to meet the one-hour deadline could result in penalty fees of up to 4% of the company’s global annual turnover.
  • Content referral: the competent authority would also be able to refer content to companies, similar to the role currently played by the EU IRU, for assessment and removal against company Terms of Service.
  • Proactive measures: companies would be required to take “proactive measures” to prevent terrorist content from being uploaded on their platforms – for example by using automated tools.

The Commission’s proposal drew criticism from academics, experts, and civil society groups. Further, the proposed regulation was criticised by three separate UN Special Rapporteurs, the Council of Europe, and the EU’s own Fundamental Rights Agency, which said that the proposal may violate the EU Charter of Fundamental Rights. Criticism mainly concerns the short removal deadline and the proactive measures instrument, which according to critics will lead to companies erring on the side of removal to avoid penalty fees.

Whilst the regulation clarifies that its definition of “terrorist content” is based on the Terrorism Directive, there have been concerns that companies – due to the risk of fines – might remove content shared for journalistic and academic purposes. Criticism has also been raised against the referral mechanism, since it allows tech company Terms of Service, as opposed to the rule of law, to dictate what content gets removed for counterterrorism purposes. Content moderation expert Daphne Keller has called this the “rule of ToS”. At Tech Against Terrorism, we have cautioned against the proposal’s potential negative impact on smaller tech companies, and warned of the regulatory fragmentation it risks creating. We also encourage the EU to clarify the evidence base behind the one-hour removal deadline.

The EU Parliament’s reading of the proposal, unveiled in April 2019, introduced several changes, for example deleting the referral instrument and limiting the scope to “public” dissemination of terrorist content to avoid covering private communications and cloud infrastructure. These changes were largely welcomed by civil society groups. Although a version of the proposal worked on by the Council, which reintroduces some of the elements that Parliament modified, was leaked in March 2020, there has been no confirmation of what a final version of the regulation will look like.

EU-led voluntary collaborative forums to tackle terrorist use of the internet

Whilst there is currently no EU-wide legislation specifically regulating terrorist use of the internet, the EU has been influential in encouraging tech company action on terrorist content via a number of forums.

  • EU Internet Forum (EUIF), bringing together Member States, tech companies, and relevant expert stakeholders (Tech Against Terrorism has participated in EUIF meetings since 2017) with the aim of creating joint voluntary approaches to preventing terrorist use of the internet and hate speech. Whilst the Forum has produced concrete outcomes, such as the EU Code of Conduct on Hate Speech and the EU Crisis Protocol, voluntary arrangements like the EUIF have been criticised for imposing undue speech regulation under the guise of voluntarism. One notable critic is Professor Danielle Citron, who has described the EUIF as an example of the EU contributing to “censorship creep”.[5] According to Citron, several of the voluntary steps that tech companies have taken to address terrorist use of their platforms since 2015 have been made specifically to placate EU legislators. Whilst Citron acknowledges that results have come out of this approach (the GIFCT hash-sharing database is one example), the definitional uncertainty around terms like “terrorist content” means that there is significant risk of erroneous removals negatively impacting freedom of expression. Further, since companies are tackling content “voluntarily”, material is removed under company speech policies rather than local or regional legislation, meaning that the effects are global despite being based on European standards.

  • EU Internet Referral Unit (EU IRU), based on the model pioneered by the UK’s Counter Terrorism Internet Referral Unit. The EU IRU employs subject matter experts to trawl the web and refer suspected Islamist terrorist content to tech companies, who then assess whether the content violates their Terms of Service. Member States are also able to refer content to the EU IRU. The unit conducts so-called referral assessment days with tech companies. This has led to substantial removal of terrorist content, including a joint operation with Telegram to remove a large number of Islamic State channels. According to the EU IRU, the unit has to date referred more than 111,000 pieces of content to tech companies. Whilst this approach has been commended, criticism has been levelled against the EU IRU (and IRUs generally) for risking the undermining of the rule of law by promoting content removal via extra-legal channels, since content is removed based on company Terms of Service rather than legal statutes. Whilst the unit does release annual transparency reports, the Global Network Initiative (GNI) has noted that there is no formal oversight or judicial review of the EU IRU’s activities.


[1] In EU law-making, a “Directive” is a legislative act that sets out goals that all EU countries must achieve, without specifying exactly how to reach these targets. For more information, see: https://europa.eu/european-union/law/legal-acts_en

[2] The negotiation process between the EU’s three legislative bodies: the European Commission (which proposes regulation), the EU Parliament, and the Council of the EU; the Parliament and the Council are able to suggest changes to the proposed text before its adoption.

[3] Unlike a Directive, a Regulation is legally binding and directly applicable, and must be applied in its entirety across the EU.

[4] This is similar to Section 230 of the US Communications Decency Act, which exempts tech companies from legal liability for user-generated content hosted on their platforms.

[5] By “censorship creep”, Citron means that online counterterrorism efforts or mechanisms risk taking on functions and acquiring reach beyond their intended purpose, which risks leading to censorship of legal and legitimate speech online.

Resources:

Hadley, Adam & Berntsson, Jacob (2020), “The EU’s terrorist content regulation: concerns about effectiveness and impact on smaller tech platforms”, VOX-Pol

Tech Against Terrorism (2020), “Summary of our response to the EU Digital Services Act consultation process”, Tech Against Terrorism

Kaye, David, Ní Aoláin, Fionnuala & Cannataci, Joseph (2018), Letter from the mandates of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression; the Special Rapporteur on the right to privacy; and the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, OHCHR

Citron, Danielle (2018), “Extremist Speech, Compelled Conformity, and Censorship Creep”, Notre Dame Law Review

Keller, Daphne (2019), “The EU’s Terrorist Content Regulation: Expanding the Rule of Platform Terms of Service and Exporting Expression Restrictions from the EU’s Most Conservative Member States”, Stanford Cyber Policy Center

Article 19, “Article 19’s Recommendations for the EU Digital Services Act”, Article 19

Access Now (2020), “How the Digital Services Act could hack Big Tech’s human rights problem”, Access Now

Europol (2019), EU IRU 2018 transparency report

Europol (2020), EU IRU 2019 transparency report