
Leaders in government and tech want to rewrite a law that shapes the internet. Many argue that Section 230, in its current form, is no longer working and that Congress should update it so it functions as intended.
The Debate Over Section 230
Section 230 governs who’s liable for what people post on social media. Critics argue that it either allows platforms to censor users too heavily or enables the spread of harmful information.
While most agree that it’s time to review Section 230, there’s little consensus on the exact problems or how to address them.
Legal Protections for Tech Companies
To understand this debate, it’s important to dissect the law. The first critical passage states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
This gives websites two key protections. First, they have immunity when users post something offensive or harmful.
In that case, the user—not the internet company—is liable. For example, Twitter is not legally responsible for what people tweet.
This is different from traditional media companies, which can be sued over the content they publish. The second protection is the freedom to moderate content.
The law states that companies operating in good faith can restrict access to material they consider objectionable, even if it is constitutionally protected. This means they won’t be held liable for taking down offensive or harmful content.
Calls for Reform and Political Divide
Section 230 was written during the early days of the internet when many startups were launching. Legal frameworks for online speech were unclear.
In the 1990s, internet service providers like Prodigy were developing interactive features such as bulletin boards. One post on a Prodigy bulletin board led to a lawsuit from Stratton Oakmont, the infamous brokerage firm featured in The Wolf of Wall Street.
The court sided with Stratton, ruling that Prodigy’s content moderation made it a publisher and therefore liable for user-generated content.
This ruling created a dilemma. If internet companies moderated content, they could be sued.
If they didn’t, they could allow harmful content to spread unchecked. To address this, Congress passed Section 230 in 1996.
Senator Ron Wyden, one of the law’s co-authors, has said the goal was to allow small tech companies to focus on growth rather than legal battles. However, since Section 230 was enacted, the reach and influence of online platforms have expanded dramatically.
This has prompted calls for reform. Republicans have argued that the law enables tech companies to censor conservative voices.
Senator Roger Wicker previously pointed to content takedowns ahead of the 2020 election as evidence of bias. He suggested that tech firms may obstruct the flow of information to favor certain ideologies.
Tech companies dispute these claims. They insist they operate without political bias and that their moderation policies align with their business interests and community standards.
Meanwhile, Democrats argue that internet companies allow too much harmful content to remain online. They point specifically to misinformation about elections and other important issues.
Some lawmakers believe platforms should take a more active role in moderating content. During his campaign, President Joe Biden suggested revoking some of the legal protections granted by Section 230.
He argued that platforms should be held accountable for knowingly spreading misinformation. Others, like former FCC Commissioner Susan Ness, warn that excessive changes to Section 230 could stifle free expression.
Overregulation, they argue, could lead platforms to remove too much content out of caution, limiting open discussion online.
The Future of Section 230
Some lawmakers propose holding tech companies responsible for content their algorithms amplify, especially when harmful material goes viral.
Others argue that platforms have too much control over what content remains online. One Senate Republican bill seeks to replace the broad “otherwise objectionable” standard with specific categories such as self-harm, terrorism, or unlawful activity.
Another major focus of reform discussions is transparency. There is significant public mistrust regarding whether tech companies moderate content fairly.
Tech executives acknowledge these concerns and have increasingly supported transparency measures as a middle ground. They recognize that outright resistance to regulation is no longer viable.
Despite widespread debate, legislative solutions remain difficult to implement. A divided Congress makes passing reforms challenging.
This has led some to believe that changes to Section 230 may first come through the courts. Supreme Court Justice Clarence Thomas has questioned whether the law has been interpreted too broadly.
He has invited legal challenges to Section 230. If the law is revised or reinterpreted, the way we interact with the internet could change dramatically in the coming years.