Section 230: What Your Organization Needs to Know

With some people convinced that social media companies are overstepping their bounds, there’s been a lot of talk about Section 230 lately. Unfortunately, not all of that talk has been accurate. What is a platform? What is a publisher? Does any of that have anything to do with Section 230? As the representative of a business, you can’t afford to get caught up in politics; you need to understand how the law actually affects you. In this post, we’re going to look at just the facts surrounding Section 230.


What Is Section 230?

In the early days of the internet, the Communications Decency Act of 1996 was the government’s first attempt to regulate indecent or obscene material on the web, particularly material that children might be exposed to. Civil liberties groups were able to get key parts of the law struck down, arguing that restricting indecent speech was an unlawful infringement on freedom of speech.

Section 230 remains possibly the most important part of the law. Any website that allows user-generated content could theoretically have been held liable for that content, meaning companies could have faced criminal charges if a user posted something deemed indecent or offensive. Because it’s unfair to punish someone for the actions of another, Section 230 was added to absolve websites of that responsibility.

The key provision of Section 230 states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Why Is It Considered Controversial?

There are people on both sides of the political aisle attacking Section 230, for nearly opposite reasons. Some on the left claim that hate speech and political misinformation are rampant on social media and that Section 230 allows social media companies to ignore these posts. Some on the right claim that content moderation is unfairly biased due to the generally left-leaning nature of tech company founders. Their belief is that removing Section 230 protections would force companies to moderate fairly.

How Does This Tie Into the First Amendment?

One of the reasons civil liberties groups were able to weaken the Communications Decency Act is that it’s unconstitutional for the government to restrict the speech of citizens. Punishing a website for what it says would violate the First Amendment rights of the website’s owners. That was decided long ago, but there are two issues in the modern era to contend with.

First, any attempt to force a website to moderate content that doesn’t violate any laws, simply because the content is deemed indecent or false, would violate the company’s First Amendment rights. However, the company itself is not the government. It can certainly choose to uphold the values of free speech but is not required to do so. In fact, forcing a company to act as a conduit for content that it doesn’t want to host would also violate the company’s right to expression under the First Amendment.

This is where the misconception about platforms and publishers comes in. The claim is that publishers are liable for the content they produce in their own name, whereas platforms are absolved because they do not produce the content. The argument is that if you moderate content at all, you are acting as a publisher and are not protected by Section 230. This is wrong. The law explicitly states that if the content is provided by someone else, the provider cannot be treated as a publisher. Nowhere does it say that they must allow all speech. Moderating content is not the same as generating content.

Why Is This Important Now?

The law drew little political attention until recently, but in the past year or so, high-ranking politicians on both sides have launched attacks on it. Donald Trump signed an executive order in May 2020 asking for the law to be defined more narrowly, pushing agencies to revoke its protections from companies that display political bias in their moderation. He later strengthened his opposition, vetoing the National Defense Authorization Act for Fiscal Year 2021 because it didn’t include a repeal of Section 230. Congress overrode the veto.

In January 2020, Joe Biden took aim at Section 230, particularly in relation to the protections it affords to Facebook. He called for the law to be revoked, stating, “It should be revoked [for Facebook and other platforms] because it is not merely an internet company. It is propagating falsehoods they know to be false.” He doubled down in December, stating that Section 230 should be thrown out and new legislation drafted.

How the Law Has Changed

The law has already been changed once. In 2018, the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) amended Section 230, targeting websites like Backpage that allowed sex workers to post ads for their services. The change stated that the law doesn’t apply to civil and criminal charges of sex trafficking or to conduct that “promotes or facilitates prostitution.”

What Is Being Proposed

In February 2020, the United States Department of Justice looked at ways the law could be changed to protect victims of nonconsensual or illegal pornography distributed online.

Proposals from Democrats

Aside from the leaders of both parties calling for the law to be revoked, there are two major proposals to change Section 230. The Democrats have led the charge on replacing it with the Platform Accountability and Consumer Transparency (PACT) Act, which focuses on requiring websites to transparently report how they moderate content.

Regarding proposals to repeal Section 230 entirely: “Section 230 enshrines free speech as a guiding principle online by protecting websites from being held liable for what its users post. As it stands right now, an individual can tweet something defamatory, and they would be sued, not Twitter. Without it, almost no social media company or even a news site with a comment section would be able to afford the costs of content moderation to let users post. That is, except for Facebook… With billions of users worldwide and a market cap approaching $800 billion, Facebook knows that between its acquisitions of Instagram and WhatsApp, it’s the dominant social media platform, significantly more than Twitter. But that will not last forever, and Zuckerberg knows it, so now is the time for Facebook to ask the government to step in with a wildly onerous and expensive regulation to crowd out any potential competitors.” — Tiana Lowe, Washington Examiner (Oct 2020)

Proposals from Republicans

Republicans have proposed a much narrower scope for the law, granting immunity only to moderation decisions that are “done in accordance with plain and particular terms of service and accompanied by a reasonable explanation.” Smaller Republican proposals include eliminating protections for any company that moderates in a biased way, going so far as to suggest that monetary damages be paid to anyone deemed to be unfairly censored. Other proposals from the party would allow moderation only of content that is explicitly illegal.

“The Senate hearing and the Trump proposal are obvious attempts to suppress private speech. The First Amendment stands as a check against government censorship. It doesn’t restrict private entities, which themselves have free speech rights…Governmental threats that stop tech companies from combating disinformation are far more dangerous to our constitutional values than social-media companies engaging in content moderation.” — Danielle Keats Citron and Spencer Overton, Slate (Oct 2020)

How Social Networks Have Reacted

Mark Zuckerberg has suggested that Facebook should be regulated as something “in between a telecommunications company and a newspaper.” In a whitepaper released by the company, “Charting a Way Forward: Online Content Regulation”, Facebook outlines some considerations it wants the government to make before coming to any regulation decisions. Its approach holds a few assumptions:

  • Platforms are global and are subject to many different laws and cultural values
  • Platforms like Facebook are intermediaries for speech rather than traditional publishers
  • Platforms like this are constantly changing for competitive reasons
  • Platforms are always likely to get some moderation decisions wrong

Facebook thinks it’s reasonable for the government to hold it accountable to specific metrics (such as keeping violating posts below a certain number of views, or setting a median response time for removing these types of posts), but warns that any regulation effort could create perverse incentives with unforeseen consequences. The company gives the example of how platforms might focus only on newer posts if they were required to remove certain posts within 24 hours.

We don’t know what the future holds for Section 230. Although they are doing so for very different reasons, both parties have expressed a desire to fundamentally change how the law works. If they do, there will undoubtedly be challenges to those changes from civil liberties groups, just as there were to the Communications Decency Act that spawned the law.

To keep updated on changes to Section 230 that may affect your business, as well as other news and trends in the social media marketing space, subscribe to our Social You Should Know newsletter below.

