Online regulation and content moderation in the United States is defined by the First Amendment right to freedom of speech and by Section 230 of the Communications Decency Act of 1996, which establishes a unique level of immunity from legal liability for tech platforms. This framework has broadly shaped the development of the modern Internet, with effects extending well beyond the US. Recently, however, the Trump Administration issued an executive order directing independent rule-making agencies to consider regulations that narrow the scope of Section 230 and to investigate companies engaging in “unfair or deceptive” content moderation practices. This shook the online regulation framework and triggered a wave of proposed bills and Section 230 amendments from both government and civil society.
Freedom of expression online
Legally speaking, regulation of online content and of content moderation practices by technology companies operating in the US has been limited to date. This is due to two principal legal frameworks that shape freedom of expression online in the US: the First Amendment to the US Constitution and Section 230 of the Communications Decency Act (CDA).
The First Amendment guarantees individuals the right to freedom of speech and prevents the government from infringing on this right; internet platforms are therefore able to establish their own content policies and codes of conduct. Section 230 of the Communications Decency Act of 1996 (CDA) establishes intermediary liability protections for user-generated content in the US. The broad immunity granted to technology companies by Section 230 states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Companies are therefore able to moderate content on their platforms without being held liable for what their users post. In other words, online platforms have the freedom to police their sites and restrict material as they see fit, even if the speech is constitutionally protected. For example, this protects platforms from lawsuits if a user posts something illegal, although there are exceptions for copyright violations, material related to sex trafficking, and violations of federal criminal law. It is important to note that Section 230 of the CDA is unique to American law: European countries, Canada, Japan, and the vast majority of other countries do not have similar statutes on their books.
The historical context behind Section 230 is complex, but it offers an illuminating look into the culture of free speech in the US and its relation to online content. The statute was the product of debates over pornography and other “obscene” materials in the early 1990s. With the advent of early internet services like CompuServe and Prodigy, US courts tried to determine whether such service providers were to be treated as “bookstores” (neutral distributors of information) or as “publishers” (editors of that information) when adjudicating their standing under the First Amendment. A court ruled that CompuServe was immune from liability because it was similar to a bookstore, while Prodigy did not receive the same immunity because it enforced its own content moderation policies, thereby making it a publisher. In other words, companies were incentivised not to engage in content moderation in order to preserve their immunity. Section 230 of the CDA sought to correct this mismatch of incentives by preserving the immunity of platforms and providers even when they engage in content moderation.
The question of content moderation has, in recent years, developed to some extent into a partisan cleavage between the liberal Democratic Party and the conservative Republican Party. Democrats tend to claim that online platforms do not moderate enough and are therefore complicit in the spread of hate speech and disinformation. Republicans, on the other hand, often argue that these companies moderate too much, alleging a ‘liberal bias’ that suppresses ‘conservative’ content. As a result, there has been a flurry of recent legislative and executive proposals to influence content moderation.
In June 2019, Republican Senator Josh Hawley introduced the “Ending Support for Internet Censorship Act,” which seeks to amend Section 230 so that larger internet platforms receive liability protections only if they can demonstrate to the Federal Trade Commission that they are “politically neutral” platforms. However, the Act raises First Amendment concerns, as it tasks the government with regulating what platforms can and cannot remove from their websites and requires platforms to meet a broad, undefined standard of “political neutrality.”
President Trump issued an executive order in May 2020 directing independent rule-making agencies, including the Federal Communications Commission, to consider regulations that narrow the scope of Section 230 and to investigate companies engaging in “unfair or deceptive” content moderation practices.
Most recently, on June 17, 2020, Senator Josh Hawley (R-MO) introduced the Limiting Section 230 Immunity to Good Samaritans Act. The bill would prevent major online companies from receiving the protections of Section 230 of the CDA unless they revised their terms of service to operate “in good faith” and publicised their content moderation policies. According to Senator Hawley, “the duty of good faith would contractually prohibit Big Tech from discriminating when enforcing the terms of service they write and failing to honor their promises”. This would expose companies to lawsuits for breaching their contractual duties, with damages of $5,000 per claim or actual damages, whichever is higher, in addition to attorney’s fees.
Following President Trump’s executive order, the Department of Justice issued a proposal in September for legislatively rolling back Section 230. This draft legislation focuses on two areas of reform, which, according to the DOJ, are “necessary to recalibrate the outdated immunity of Section 230”: promoting transparency and open discourse, and addressing illicit activity online. The DOJ also shared its own recommendations for altering Section 230 with Congress. If enacted, the DOJ recommendations would pave the way for the government to impose steep sanctions on platforms that fail to remove illicit content, including content related to terrorism.
According to an evaluation of the proposed Section 230 bills by Paul M. Barrett, the deputy director of the NYU Stern Center for Business and Human Rights, two bipartisan Senate bills “have at least a chance of eventual passage”: the EARN IT Act and the PACT Act.
Beyond Government – Scholars and Civil Society
Scholars and civil society have developed their own reports and recommendations to amend Section 230, and some have even proposed entirely new regulatory frameworks and agencies to oversee US content moderation.
Besides government proposals, a 2019 report published by the University of Chicago’s Booth School of Business suggests transforming Section 230 into a “quid pro quo benefit.” Platforms would have a choice: adopt additional duties related to content moderation, or forgo some or all of the protections afforded by Section 230.
Another proposal comes from Danielle K. Citron, a law professor at Boston University. Citron has suggested amending Section 230 to include a “reasonableness” standard, which would mean conditioning immunity on “reasonable content moderation practices rather than the free pass that exists today”. Reasonableness would be determined by a judge at a preliminary stage of a lawsuit, based on an assessment of a platform’s overall policies and practices.
Regulatory framework proposals beyond Section 230
Others have studied yet another idea: the creation of a new federal agency specifically designed to oversee digital platforms. A study released in August 2020 by the Harvard Kennedy School’s Shorenstein Center on Media, Politics, and Public Policy proposes the formation of a Digital Platform Agency. The study recommends that the agency focus on promoting competition among internet companies and protecting consumers in connection with issues such as data privacy.
In a report, the Transatlantic Working Group (TWG) has emphasised the need for a flexible oversight model, in which authorising legislation could extend the jurisdiction of existing agencies or create new ones. As possible examples of existing agencies, the TWG cites the US Federal Trade Commission, the French Conseil Supérieur de l’Audiovisuel, and the British Office of Communications (OFCOM). The TWG’s recommendations overlap with some goals of the PACT Act, for instance in calling for greater transparency: the TWG envisions a digital regulatory body that would require internet companies to disclose their terms of service and their enforcement mechanisms.
Critics have noted that the enforceability of this order is legally debatable and that it raises questions about the administration’s approach to regulating content moderation, given that First Amendment protections prevent the government from dictating what a private company can or cannot express. See: “Why Trump’s online platform executive order is misguided”, Brookings, Niam Yaraghi.
Barrett, Paul M. (2020a), “Regulating Social Media: The Fight Over Section 230 — and Beyond”, NYU Stern
Barrett, Paul M. (2020b), “Why the Most Controversial US Internet Law is Worth Saving”, MIT Technology Review
Brody, Jennifer and Null, Eric (2020), “Unpacking the PACT Act”, Access Now
Feiner, Lauren (2020), “GOP Sen. Hawley unveils his latest attack on tech’s liability shield in new bill”, CNBC
Mullin, Joe (2020), “Urgent: EARN IT Act Introduced in House of Representatives”, Electronic Frontier Foundation
Newton, Casey (2020), “Everything You Need to Know About Section 230”, The Verge
Ng, Alfred (2020), “Why Your Privacy Could be Threatened by a Bill to Protect Children”, CNET
Robertson, Adi (2019), “Why the Internet’s Most Important Law Exists and How People Are Still Getting it Wrong”, The Verge
Yaraghi, Niam (2020), “Why Trump’s online platform executive order is misguided”, Brookings