
The Online Regulation Series | Tech Sector Initiatives

You can access the ORS Handbook here

Although governments have passed frameworks regulating terrorist and harmful content online in recent years, regulation in practice remains largely a matter of self-regulation by the tech sector. That is, companies draft and apply their own rules for moderating user-generated content on their platforms, or voluntarily comply with standards shared across the tech sector (the Global Internet Forum to Counter Terrorism is one example), without such standards being enforced by law. This, coupled with increased public pressure to address the potentially harmful impact of certain online content – in particular terrorist material – has led major tech companies to develop their own councils, consortiums, and boards to oversee their content moderation and its impact on freedom of speech online. In this blogpost, we provide an overview of some of the prominent tech sector initiatives in this area.

Key takeaways:

  • Major tech platforms are creating ambitious oversight and advisory bodies to address concerns about their content moderation policies and practices.
  • Such bodies aim to increase accountability and transparency by, for example, providing an additional avenue for user appeals, offering insight into a platform’s practical decision-making in the content moderation process, and giving external expert guidance on policies.
  • Collaborative industry efforts such as the Global Internet Forum to Counter Terrorism (GIFCT) aim to provide practical capacity building and knowledge sharing for tech companies, and have also launched their own research network.
Global Internet Forum to Counter Terrorism (GIFCT)

    The GIFCT was founded in 2017 by Facebook, Microsoft, Twitter and YouTube to facilitate collaboration and knowledge sharing amongst the tech sector to tackle terrorist use of the internet. Since its founding, the GIFCT, which runs its own membership programme, has grown to a dozen members and has taken a prominent role in the Christchurch Call to Action – launched following the March 2019 attack in Christchurch, New Zealand, which was livestreamed on Facebook.

    Tech Against Terrorism has been a core partner to the GIFCT since its inception, organising its inaugural workshop in San Francisco in 2017. Since then, Tech Against Terrorism has been running the GIFCT knowledge sharing programme by organising workshops and e-learning webinars, as well as implementing a mentorship programme to assist companies in meeting GIFCT’s membership requirements.

    In 2019 the GIFCT announced that it would become an independent organisation. This was formalised in 2020 with the hiring of its first Executive Director, Nicholas Rasmussen. The foundational goals of the new organisation include empowering the tech sector to respond to terrorist exploitation, enabling “multi-stakeholder engagement around terrorist and violent extremist misuse of the Internet”, promoting dialogue with civil society, and advancing understanding of the terrorist and violent extremist landscape “including the intersection of online and offline activities.”

The independent GIFCT’s structure is complemented by an Independent Advisory Council (IAC) made up of 21 members representing the governmental sector (including intergovernmental organisations) and civil society, covering a broad range of expertise related to the GIFCT’s areas of work, such as counterterrorism, digital rights, and human rights. The IAC is chaired by a non-governmental representative, a role currently held by Bjorn Ihler, a counter-radicalisation expert and founder of the Khalifa-Ihler Institute. The four founding companies are also represented via the Operating Board, which appoints the Executive Director and provides the GIFCT’s operational budget. Other members of the board include one other member company (on a rotating basis), a rotating chair from the IAC, and new member companies that meet “leadership criteria”.

The GIFCT also runs the Hash-Sharing Consortium to help member companies moderate terrorist content on their platforms. The consortium is a database of hashed terrorist content.[1] Members can add hashes of content they have previously identified as terrorist material on their platforms to the database, and all companies using it can then automatically detect that material and prevent its upload. The Consortium was set up by the four founding companies in 2016.
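To make the mechanics concrete, here is a minimal, purely illustrative sketch of how an upload check against a shared hash database could work. This is not the GIFCT’s actual implementation: the consortium uses perceptual hashes (which tolerate minor alterations to an image or video) rather than the exact cryptographic hash used below, and all names in the sketch are hypothetical.

```python
import hashlib

# Hypothetical stand-in for the shared hash-sharing database.
shared_hash_db: set[str] = set()

def fingerprint(raw_bytes: bytes) -> str:
    """Return a short, irreversible fingerprint of the raw content.
    (Simplification: GIFCT-style systems use perceptual hashes, not SHA-256.)"""
    return hashlib.sha256(raw_bytes).hexdigest()

def contribute(raw_bytes: bytes) -> None:
    """A member platform adds the hash of content it has identified as terrorist material."""
    shared_hash_db.add(fingerprint(raw_bytes))

def is_known_terrorist_content(raw_bytes: bytes) -> bool:
    """At upload time, any member can check new content against the shared hashes."""
    return fingerprint(raw_bytes) in shared_hash_db

# Example flow: one platform contributes a hash, another then blocks an identical re-upload.
sample = b"raw bytes of a previously identified video"
contribute(sample)
assert is_known_terrorist_content(sample)
```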

    Whilst the GIFCT states that “each consortium member can decide how they would like to use the database based on their own user terms of service”, critics have raised concerns over the lack of transparency surrounding the use of the database and the removal of content it contributes to. However, the GIFCT has to date published two transparency reports, which provide insights into the hash-sharing database and the type of content that was added to it.[2] In the 2020 report, the GIFCT said that the hash-sharing database contained content across the following categories:

  • Imminent Credible Threat: 0.1%
  • Graphic Violence Against Defenseless People: 16.9%
  • Glorification of Terrorist Acts: 72%
  • Radicalization, Recruitment, Instruction: 2.1%
  • Christchurch, New Zealand, attack and Content Incident Protocols: Christchurch attack 6.8%, Halle attack 2%, Glendale attack 0.1%
Academic and online regulation expert Evelyn Douek has used the GIFCT as an example when cautioning against the role played by industry initiatives aiming to curb harmful online content, a phenomenon she calls “content cartels”. In her analysis, Douek stresses what she sees as the risks of collaborative industry arrangements including both larger and smaller companies, where “already powerful actors” can gain further power as they are able to set content regulation standards for the smaller platforms. In particular, she argues that such arrangements leave little room for challenging the standards they set – including, in some cases, what they consider to be terrorist or harmful content.

    Evelyn Douek spoke about her criticism of the GIFCT in a Tech Against Terrorism podcast episode earlier this year on the complexities of regulating the online sphere.

We recently responded to an article raising concerns about the GIFCT’s hash-sharing database, and explained how we plan to take these concerns into account when developing the Terrorist Content Analytics Platform.

    Facebook Oversight Board

Facebook announced in 2018 that it would set up an independent “Supreme Court” to decide on complex content moderation issues for user-generated content on both Facebook and Instagram. The Facebook Oversight Board was announced a year later, in September 2019, and its first members were named in 2020. The Board began accepting cases in October 2020.

The goal of the Board is to “protect free expression by making principled, independent decisions about important pieces of content and by issuing policy advisory opinions on Facebook’s content policies.” The Board is set up as a last instance of appeal for users who wish to contest the removal of their content and whose appeal has already been rejected twice by Facebook’s internal appeals process. For now, the Board will limit its oversight to content that has already been removed from Facebook or Instagram. However, Facebook has stated that the scope of the Board will be expanded to allow users to appeal content they want removed from the platforms. In selecting and handling cases, the Board will focus on cases that have a significant impact on online freedom of expression and public discourse, have real-world impact, or “raise questions about current Facebook policies”. Facebook itself can submit “urgent [cases] with real-world consequences” for review.

Besides advising Facebook on whether to allow or remove content, the Board can also “uphold or reverse a designation that led to an enforcement”, such as a designation leading to the removal of a page on the grounds of terrorism. Board decisions will function as case law and will help influence Facebook’s content moderation policies. Besides this, the Board will be able to provide direct policy guidance to Facebook on its policies and processes.

Whilst the concept of the Oversight Board has been welcomed, it has nonetheless drawn criticism. One concern relates to the fact that the Board’s charter “still provides Facebook some leeway about how to implement the board’s decisions. Critically, it only has to apply the decision to the specific case reviewed, and it’s at the company’s discretion to turn that into blanket policy”. In particular, Facebook has stated that it would “support the Board” depending on whether applying a decision to other cases or turning it into policy guidance is “technically operationally feasible”, and on the resources it would take the company to do so.

Kate Klonick – an expert on online speech governance – has summarised the different reactions and criticisms directed at the Board. Amongst the main criticisms are concerns that the Board could negatively impact Facebook’s content moderation by encouraging it to either under-moderate or over-moderate; that the Board is, effectively, a PR stunt; or that it risks not being scalable. Klonick commented on these concerns by underlining the Board’s potential to have a broader impact on Facebook’s policies, beyond single cases, and on how it “might lead to more widespread user participation in deciding how to design private systems that govern our basic human rights.”

Concerned that the Board would not be up and running in time for the US elections, a “group of about 25 experts from academia, civil rights, politics and journalism”, led by the UK-based advocacy group The Citizens, set up their own “Real Facebook Oversight Board” in September 2020. The group set out to organise weekly public meetings on Zoom to scrutinise a broad range of issues linked to Facebook’s moderation practices. Commenting on this initiative, Klonick described it as “misleading”, given that it would not hear any user appeals.

    Twitch Safety Advisory Council

Twitch, the leading global live-streaming platform, announced the creation of its Safety Advisory Council in May 2020. The Council’s mission is to advise Twitch in its decision-making process and policy development. This includes drafting new policies, helping develop products and features for moderation, and promoting diversity and the interests of marginalised groups on the platform.

The Council is made up of eight members representing a mix of Twitch creators and experts in online safety (including cyberbullying) and content moderation. The mix of experts and creators is meant to ensure that the Council has “a deep understanding of Twitch, its content and its community”. Amongst the experts is Emma Llanso, Director of the Free Expression Project at the Center for Democracy & Technology and an expert on free expression online and intermediary liability (Emma has previously been a guest on our podcast and in our webinar series).

    TikTok Content Advisory Council

    Video-sharing app TikTok unveiled its Content Advisory Council in March 2020. In a drive to improve its accountability and transparency, TikTok also announced its Transparency and Accountability Center, and has proposed the creation of a Global Coalition to Counter Harmful Content.

    The Coalition is meant to target the challenges posed by the constant posting and re-posting of harmful content that all tech platforms face, and to do so via collaborative efforts between tech platforms and the “development of a Memorandum of Understanding (MOU) that will allow us to quickly notify one another of such content.”

The Council, for its part, is made up of several tech and safety experts and will advise TikTok on its content policies and practices. TikTok has announced that the Council will meet regularly with its US leaders “to discuss areas of importance to the company and our users”, such as platform integrity and policies related to misinformation.

The Council is chaired by Dawn Nunziato, an expert on free speech and content regulation at George Washington University, and includes experts in tech policy, online safety, and youth mental health, with plans to grow to about 12 experts.


[1] Hashing technology allows for the attribution of a unique fingerprint to a photo or piece of audio content, thus enabling its identification without having to view the content itself. This can be used to identify terrorist content and prevent its upload. As the GIFCT explains it: “An image or video is “hashed” in its raw form and is not linked to any original platform or user data. Hashes appear as a numerical representation of the original content and cannot be reverse-engineered to recreate the image and/or video. A platform needs to find a match with a given hash on their platform in order to see what the hash corresponds with.”
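As a rough illustration of the “fingerprint” idea described above, the toy perceptual hash below reduces an image to a 64-bit number and compares two such numbers by counting how many bits differ. This is not the hashing algorithm GIFCT members actually use (systems such as PhotoDNA or PDQ are far more robust), and the file name shown is hypothetical.

```python
from PIL import Image  # Pillow, used only for this toy sketch

def average_hash(path: str) -> int:
    """Toy 64-bit perceptual hash: greyscale, shrink to 8x8, threshold each pixel on the mean.
    The result is a small numerical fingerprint that cannot be turned back into the image."""
    pixels = list(Image.open(path).convert("L").resize((8, 8)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; near-duplicate images yield small distances."""
    return bin(a ^ b).count("1")

# A platform would hash newly uploaded content and treat anything within a small
# distance of a known hash as a match, e.g.:
# hamming_distance(average_hash("upload.jpg"), known_hash) <= 10
```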

[2] The reports also provide information about the Content Incident Protocol (CIP) and the URL-sharing mechanism, two other technical mechanisms implemented by the GIFCT to facilitate the removal of terrorist content, foster greater collaboration between platforms, and, in the case of the CIP, limit the spread of terrorist content following an attack.

Resources:

    Article19 (2019), Social Media Councils: Consultation.  

    Bijan Stephen (2020), Twitch establishes a safety advisory council to help it sort out its rules, The Verge.

    Botero-Marino Catalina, Greene Jamal, McConnell Michael W., and Thorning-Schmidt Helle (2020), We Are a New Board Overseeing Facebook. Here’s What We’ll Decide, The New York Times.

    Constine Josh (2018), Facebook will pass off content policy appeals to a new independent oversight body, TechCrunch.

Constine Josh (2019), Facebook’s new policy Supreme Court could override Zuckerberg, TechCrunch.

Douek Evelyn (2020), The rise of content cartels, Knight First Amendment Institute.

    Ghaffary Shirin (2020), Facebook’s independent oversight board is finally up and running, Vox.

    Ghosh Dipayan (2019), Facebook’s Oversight Board Is Not Enough, Harvard Business Review.

    Harris Brent (2019), Establishing Structure and Governance for an Independent Oversight Board, Facebook News Room.

    Klonick Kate (2020), The Facebook Oversight Board: Creating an Independent Institution to Adjudicate Online Free Expression, The Yale Law Journal.

Perez Sarah (2020a), TikTok brings in outside experts to help it craft moderation and content policies, TechCrunch.

Perez Sarah (2020b), Twitch announces a new Safety Advisory Council to guide its decision-making, TechCrunch.

    Radsch Courtney (2020), GIFCT: Possibly the Most Important Acronym You’ve Never Heard Of, JustSecurity.

Reichert Corinne (2020), TikTok now has a content advisory panel, CNET.

    Solon Olivia (2020a), While Facebook works to create an oversight board, industry experts formed their own, NBC News.

    Solon Olivia (2020b), Months before it starts, Facebook's oversight board is already under fire, NBC News.

    Windwehr Svea and York Jillian (2020), One Database To Rule Them All, Vox-Pol.

Zuckerberg Mark (2019), Facebook’s Commitment to the Oversight Board.