
The Online Regulation Series | Insights from Academia II

You can access the ORS Handbook here

To follow up on our previous blogpost on academic analysis of the state of global online regulation, we take a future-oriented approach here and provide an overview of academics' and experts' suggestions and analyses of what the future of online regulation might bring.

Systemic duty of care and the future of content moderation:

With certain policy-makers around the world, notably in the UK, pursuing the possibility of mandating platforms to abide by a “systemic duty of care” (SDOC) for online content regulation, Daphne Keller has laid out possible models that an SDOC could follow, and their implications for tech platforms’ immunity from legal liability, content moderation, human rights, and smaller tech platforms. Keller divides SDOCs into two possible models: a prescriptive model and a flexible model.

  • Prescriptive model: Under this formulation, governments would set out clear rules and specify the proactive measures that platforms would be required to abide by, thereby establishing a clear legal framework that could offer platforms immunity from legal liability. In practice, platforms would still be able to do more than what is required of them, “deploy[ing] novel ‘Good Samaritan’ efforts”, meaning content moderation would not change significantly from how it works today. The main difference would be an increased reliance on automated monitoring, such as upload filters, which have long been criticised for their potential negative impact on human rights and for removing legal speech. Keller further notes that this model would have detrimental consequences for competition and innovation, as smaller platforms would struggle to keep up with the resources needed to meet the proactive monitoring requirements.
  • Flexible model: In this instance, regulators would limit their requirements to “broadly defined and open-ended obligations”, which could be more adaptive to a changing and diverse landscape, but would also raise a number of questions about platforms’ legal liability and whether compliance or over-compliance would grant them immunity. In general, this model would see platforms removing too much or too little depending on whether their own terms of service go beyond what is required of them. Flexibility could also allow more “leeway to figure out meaningful technical improvement”, leading to more nuanced and diverse automated mechanisms. However, Keller stresses that in practice this would depend on regulators opting either for a diverse tech environment or for efficient regulation, and that transparency would be negatively affected in either case. Keller further predicts that even if smaller tech platforms had the possibility to deploy their own measures, we would likely witness “an inevitable drift” toward SDOCs being based on large platforms’ practices.
Section 230: A landmark reform?

Following the Trump Administration’s executive order in May 2020 directing independent rule-making agencies to consider regulations that narrow the scope of Section 230, the US witnessed a wave of proposed bills and Section 230 amendments from both government and civil society.

A 2019 report published by the University of Chicago’s Booth School of Business suggests transforming Section 230 into a “quid pro quo benefit”: platforms would have a choice between adopting additional duties related to content moderation or forgoing some or all of the protections afforded by Section 230. Paul M. Barrett embraces this concept and argues that lawmakers should adopt it for Section 230 reform, emphasising that it provides a workable organising principle to which any number of platform obligations could be attached and that “the benefits of Section 230 should be used as leverage to pressure platforms to accept a range of new responsibilities related to content moderation”. Examples of such additional responsibilities include requiring platform companies “to ensure that their algorithms do not skew towards extreme and unreliable material to boost user engagement” and to disclose data on content moderation methods, advertising policies, and which content is being promoted and to whom. Barrett also calls for the creation of a specialised federal agency, a “Digital Regulatory Agency”, which would oversee and enforce the new platform responsibilities under the “quid pro quo” model and focus on making platforms more transparent and accountable.

Jack Balkin has suggested that governments make liability protections conditional, rather than the default, on companies “accepting obligations of due process and transparency”. Similarly, Danielle Citron has argued that immunity should be conditioned on companies having “reasonable” content moderation standards in place, with such reasonableness determined by a judge.

Suggestions for new governance or regulation models:

International human rights law

David Kaye, the former UN Special Rapporteur on Freedom of Expression, has suggested that tech companies ground their content moderation policies in international human rights law (IHRL). Kaye argues that this is the best way to address several of the challenges highlighted by academics in our previous post. For example, international human rights law offers a global structure (as opposed to national law) and provides a framework for ensuring that both companies and governments comply with human rights standards in a transparent and accountable manner. Further, Kaye notes that Article 19 of the International Covenant on Civil and Political Rights (ICCPR) – which enshrines freedom of expression – also provides for cases where speech can be restricted, such as where necessary to protect others’ rights or for public health and national security. Kaye argues that this means platforms would still be able to take action on legitimately harmful and illegal content.

Evelyn Douek, whilst acknowledging that this approach has several benefits, has questioned how effective it would be in practice. Douek notes that there is a “large degree of indeterminacy” in IHRL, which in her view means it would be up to platforms to assess content against such standards. Further, Douek worries that such standards could in theory provide companies with a basis for allowing legitimately harmful content to remain online (or vice versa), since platforms and local speech cultures might differ in their interpretation of IHRL.

Social media councils

Civil society group Article 19 has suggested the creation of an independent “Social Media Council”. It argues that this would increase accountability and transparency with regard to content moderation without governments restricting speech via regulation targeting online content. The Council would be based on a “self-regulatory and multi-stakeholder approach” with “broad representation” from various sectors, and would apply human rights standards when reviewing content moderation decisions. Loosely based on other self-regulatory measures such as press regulatory bodies, the Council’s decisions would not be legally binding, but participating platforms would commit to executing them.

This suggestion has been supported by David Kaye and Stanford University’s Global Digital Policy Incubator (GDPi). Following a working meeting on the proposal, GDPi suggested that the social media council should avoid adjudicating specific cases and instead develop and set core guidelines for companies. Article 19 differed, advocating for the Council to have an adjudicatory role and serve as an appeals and review body, with a first version launched on a national scale as a trial.


Resources:

Keller Daphne (2020), Systemic Duties Of Care And Intermediary Liability, The Center for Internet and Society, Stanford University.

Keller Daphne (2020), Broad Consequences Of A Systemic Duty Of Care For Platforms, The Center for Internet and Society, Stanford University.

Citron Danielle and Wittes Benjamin (2017), The Internet Will Not Break: Denying Bad Samaritans Section 230 Immunity, Fordham Law Review.

Citron Danielle and Franks Mary Anne (2020), The Internet As a Speech Machine and Other Myths Confounding Section 230 Reform, Boston Univ. School of Law, Public Law Research Paper.

Citron Danielle (2020), Section 230's Challenge to Civil Rights and Civil Liberties, Boston Univ. School of Law, Public Law Research Paper.

Article 19 and UN Special Rapporteur on Freedom of Opinion and Expression (2019), Social Media Councils - from concept to reality.

McKelvey Fenwick, Tworek Heidi and Tenove Chris (2019), How a standards council could help curb harmful content online, Policy Options.

Balkin Jack (2020), How to Regulate (and not regulate) social media, Yale Law School, Public Law Research Paper.

Douek Evelyn (2020), The Limits of International Law in Content Moderation, UCI Journal of International, Transnational, and Comparative Law (forthcoming 2021).

Barrett Paul M. (2020a), Regulating Social Media: The Fight Over Section 230 — and Beyond, NYU Stern.

Barrett Paul M. (2020b), Why the Most Controversial US Internet Law is Worth Saving, MIT Technology Review.