Reflection on the Limits of Free Speech in Social Media: Recent Examples from United States Courts


By Amin Kef Sesay

The public seems to have a fundamental misunderstanding about the true extent of “Freedom of Speech” under the country’s new cybercrime law.

Fundamentally, what type of speech can be restricted? And how does this apply to speech restrictions on social media platforms?

Lawsuits alleging free speech violations against social media companies are routinely dismissed. The primary grounds for dismissal are that social media companies are not State actors and their platforms are not public forums, and they are therefore not subject to the free speech protections of the Constitution.

Consequently, those who post on social media platforms do not have the right to free speech on these social media platforms.

The overarching principle of Free Speech is that its reach is limited to protections against restrictions on speech imposed by the Government. When speech takes place in a public forum, it can qualify for constitutional protection.

However, social media platforms are often characterized as a digital public square. Yet courts have repeatedly rejected arguments that social media platforms are public forums.

Courts reason that these networks are private, and that merely hosting the speech of others does not convert a private platform into a public forum. Only in limited cases have courts found social media sites to qualify as public forums.

For example, in a case in the United States, an appellate court held that the official Twitter page operated by then President Donald Trump was a designated public forum. As a result, Government officials could not engage in viewpoint discrimination by blocking individuals from posting comments with critical views of the President and his policies.

Social media platforms may also be analogized to newspapers when they attempt to exercise editorial control and judgment over the publishing of users’ posts. In this scenario, the US Supreme Court has held that newspapers exercise the freedom of the press protected by the First Amendment and cannot be forced to print content they would not otherwise include.

This is due to a newspaper’s ability to exercise editorial control and judgment, including making decisions on the size and content of the paper, along with treatment of public issues and public officials (whether such treatment is fair or unfair).

This leads us next to examine what protections are afforded to social media companies for content posted by their users on their platforms.

Section 230 of the Communications Decency Act (“CDA”), codified at 47 U.S.C. § 230, was enacted in response to a court decision holding that an internet service provider, Prodigy, was a “publisher” of defamatory statements that a third party had posted on a bulletin board hosted and moderated by Prodigy, and that Prodigy could therefore be sued for libel.

Section 230(c)(1) remedies this by providing internet service providers with immunity from lawsuits that seek to hold them liable for user content posted on their sites. Social media companies, which are currently considered service providers under Section 230(c)(1), are broadly protected from responsibility for what users say while using their platforms.

Section 230(c)(2) further precludes liability for good-faith decisions to remove or restrict access to content that the provider deems “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Social media platforms therefore set their policies and Terms and Conditions to state that they can remove violent, obscene, or offensive content and can ban users who post or promote such content.

For example, Facebook, Twitter, and YouTube have banned terrorist groups that post materials promoting violence or violent extremism, and have also banned ISIS, Al Qaeda, and Hezbollah solely because of their status as U.S.-designated foreign terrorist organizations.

As was seen following the 2020 Presidential election, Facebook, Twitter, Snapchat, YouTube (Google), Reddit, and Twitch (Amazon) also justified their suspension of the accounts of President Trump and some of his supporters under Section 230(c)(2), citing their continued posting of misinformation, hate speech, and inflammatory content about the election.


