This statement was originally published on eff.org on 3 December 2020.
Even though it’s only 26 words long, Section 230 doesn’t say what many think it does.
So we’ve decided to take up a few kilobytes of the Internet to explain what, exactly, people are getting wrong about the primary law that defends the Internet.
Section 230 (47 U.S.C. § 230) is one of the most important laws protecting free speech online. While its wording is fairly clear – it states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” – it is still widely misunderstood. Put simply, the law means that although you are legally responsible for what you say online, if you host or republish other people’s speech, only those people are legally responsible for what they say.
Section 230 should seem like common sense: you should be held responsible for your speech online, not the platform that hosted your speech or another party.
But there are many, many misconceptions – as well as misinformation from Congress and elsewhere – about Section 230, from who it affects and what it protects to what results a repeal would have. To help explain what’s actually at stake when we talk about Section 230, we’ve put together responses to several common misunderstandings of the law.
Let’s start with a breakdown of the law, and the protections it creates for you.
How Section 230 protects free speech:
Without Section 230, the Internet would be a very different place, one with fewer spaces where we’re all free to speak out and share our opinions.
One of the Internet’s most important functions is that it allows people everywhere to connect and share ideas – whether that’s on blogs, social media platforms, or educational and cultural platforms like Wikipedia and the Internet Archive. Section 230 says that any site that hosts the content of other “speakers” – from writing, to videos, to pictures, to code that others write or upload – is not liable for that content, with some important exceptions for violations of federal criminal law and for intellectual property claims.
Section 230 makes only the speaker themselves liable for their speech, rather than the intermediaries through which that speech reaches its audiences. This makes it possible for sites and services that host user-generated speech and content to exist, and allows users to share their ideas – without having to create their own individual sites or services that would likely have much smaller reach. This gives many more people access to the content that others create than they would ever have otherwise, and it’s why we have flourishing online communities where users can comment and interact with one another without waiting hours, or days, for a moderator, or an algorithm, to review every post.
And Section 230 doesn’t only allow sites that host speech, including controversial views, to exist. It allows them to exist without putting their thumbs on the scale by censoring controversial or potentially problematic content. And because what is considered controversial is often shifting, and context- and viewpoint-dependent, it’s important that such views can be shared. “Defund the police” may be considered controversial speech today, but that doesn’t mean it should be censored. “Drain the Swamp,” “Black Lives Matter,” or even “All Lives Matter” may all be controversial views, but censoring them would not be beneficial.
Online platforms’ censorship has been shown to amplify existing imbalances in society – sometimes intentionally, sometimes not. In practice, platforms are more likely to silence the voices of disempowered individuals and communities. Without Section 230, any online service that survived would most likely censor far more content – and that would inevitably harm marginalized groups more than others.
No, platforms are not legally liable for other people’s speech – nor would that be good for users
Basically, Section 230 means that if you break the law online, you should be the only one held responsible, not the website, app, or forum where you said the unlawful thing. Similarly, if you forward an email or even retweet a tweet, you’re protected by Section 230 if that material is found unlawful. Remember – this sharing of content and ideas is one of the major functions of the Internet, from Bulletin Board Systems in the 80s, to Internet Relay Chat in the 90s, to the forums of the 2000s, to the social media platforms of today. Section 230 protects all of these different types of intermediary services (and many more). While Section 230 didn’t exist until 1996, it was created, in part, to protect those services that already existed – and the many that have come after.
What’s needed to ensure that a variety of views have a place on social media isn’t creating more legal exceptions to Section 230.
If you consider that one of the Internet’s primary functions is to let people connect with one another, the logic of Section 230 is straightforward – especially given the staggering quantity of content that online services host. A newspaper publisher, by comparison, usually has 24 hours to vet the content it publishes in a single issue. Compare this with YouTube, whose users upload at least 400 hours of video every minute – an impossible volume to meaningfully vet in advance of publishing online. Without Section 230, the legal risk of operating such a service would deter any entrepreneur from starting one.
The post On Section 230, one of the most important laws protecting free speech online appeared first on IFEX.