On Wednesday 23 September, the U.S. Department of Justice (DoJ), acting in response to the President’s May 2020 Executive Order on Preventing Online Censorship, sent Congress a legislative reform request regarding Section 230 of the Communications Decency Act. Since early 2020, the DoJ has held hearings and consultations with stakeholders on potential reforms of the provision, which shields online companies from liability for content provided by users. While the Department had voiced concern that Section 230 gives tech companies a shield from civil liability that is too broad and hampers law enforcement, its precise concerns and desired reforms had remained largely unclear.
Yesterday’s proposal, conveyed to Congress via a letter from Attorney General Bill Barr, outlines a number of specific reforms to Section 230, including a fully revised draft statute. Here are the key elements:
- Restricting moderation – Revisions to Section 230(c)(1) and (2) – Content removal would be protected only if a provider has an “objectively reasonable” belief that specific material violates its terms of service, is unlawful, or is obscene, lewd, lascivious, filthy, excessively violent, promoting terrorism or violent extremism, harassing, or promoting self-harm. This would remove the language that currently enables providers to take down a much broader array of content that may be “otherwise objectionable.”
- Reestablishing “bad Samaritan” liability – New Section 230(d)(1) – Exposes a provider to liability under state criminal law and under federal and state civil actions when it “acted purposefully with the conscious object to promote, solicit, or facilitate” material or activity that is illegal under federal criminal law.
- Requiring court-ordered takedowns – New Section 230(d)(3) – Exposes a provider to criminal prosecution or civil suit if it fails to take down content after “receiving notice” of a final judgment that the content is defamatory under state law or otherwise unlawful.
- Requiring a public notice system – New Section 230(d)(4) – All providers must offer a free mechanism for the public to report defamatory or unlawful material or activity, and they are liable if they fail to remove material they have been notified is illegal under federal criminal law.
- Full exemption for federal civil law enforcement – Revision to current Section 230(e)(1) – Government enforcement of any federal civil statute or regulation is fully exempted from Section 230.
- Additional horizontal exemptions – New Section 230(f)(6)-(9) – Enforcement in additional areas of civil law is fully exempted from Section 230, namely anti-terrorism, child sex abuse (including state law), cyberstalking, and antitrust.
- Expanding the definition of content provider – Revision to Section 230(g)(3) – The definition of “information content provider” is amended to explicitly include any person or entity that “solicits, comments upon, funds, or affirmatively and substantively contributes to, modifies, or alters” another’s content.
- Defining “good faith” – New definition in Section 230(g)(5) – To be deemed to act in “good faith” under Section 230(c)(2), a provider must publish terms of service that describe its moderation practices, moderate or restrict access to content consistently with those terms, not moderate or restrict content on “deceptive or pretextual grounds,” and give a content provider timely notice explaining the “reasonable factual basis” for restricting its content and an opportunity to respond.
This proposal comes at a time when the Trump administration and many members of Congress have been increasingly hostile towards Section 230. In addition to a number of proposals from lawmakers of both parties to revise the provision, the Trump administration has also requested new rulemaking by the Federal Communications Commission to clarify certain terms under Section 230.
Section 230 protections apply to a wide range of online businesses of every size beyond social media, from web hosting to communications to home-sharing. These revisions would substantially narrow the circumstances under which such providers could voluntarily moderate and remove content, making it more difficult for online platforms to remove harmful speech or activity. They would also widen the circumstances in which providers could face litigation, imposing costly burdens even where there has been no wrongdoing or negligence. These costs would substantially threaten many online business models.
Access Partnership recommends that any online business established in the US whose operations involve user-provided content, whether directly or indirectly through its customers, carefully assess the draft legislation and its potential impacts on its business model. As Congress weighs various reforms, maintaining visibility into the different versions and engaging stakeholders will be key to mitigating risk.