The Complexities of Free Speech on Social Media: Exploring Corporate Responsibility, Monopoly Power, and AI in Dialogue Regulation
By Muhammad Ahmad Irfan
Social media has emerged as a crucial means for the advocacy and safeguarding of free speech in contemporary times. While some may argue that social media, being a medium of expression, should adhere to the same principle of unfettered freedom of speech, it is essential to note the disparities between how individuals conduct themselves online and how they behave in real-life interactions. The veil of internet anonymity often causes people to communicate in ways they would not dare to in person. Anonymity changes human behavior: it can make people more offensive and reveal a side of them that face-to-face interaction conceals. [1] This raises the question of how freedom of speech should apply to social media today.
Historically, the link between the internet and freedom of speech was first addressed in 1997, when the U.S. Supreme Court – in Reno v. American Civil Liberties Union [8] – ruled that provisions of the Communications Decency Act (CDA) criminalizing the transmission of material depicting “sexual or excretory activities or organs” were unconstitutional because they directly infringed upon First Amendment free speech protections. [2] A separate provision of the CDA – what is today known as Section 230 – survived and shields internet service providers and social media websites from liability for content posted by third parties. These platforms were also relieved of any responsibility to monitor and regulate content, unless it was deemed extremely harmful, such as hate speech or content inciting violence. Many argue that the mainstream internet today has become a left-wing bubble, amplifying a narrow liberal perspective while restricting access for those with different ideologies. If this argument has a basis in truth, then much like the argument for imposing greater restrictions on online speech because of anonymity, it puts a huge question mark over the provision of the fundamental right to freedom of speech in new media.
Legally, there have been multiple instances where broad freedom of expression has been upheld – even when the speaker was undesirable – setting firm legal precedent in its favor. Lester Gerard Packingham, a registered sex offender, was prosecuted for violating a North Carolina law that prohibited registered sex offenders from accessing social media platforms that could also be accessed by minors. [3] After the North Carolina Supreme Court upheld the law, the case reached the U.S. Supreme Court, which in 2017 declared the statute unconstitutional as a direct violation of the First Amendment and a restriction on freedom of speech. This decision was instrumental in legally reaffirming the principle of free speech, and its protection by the First Amendment, on all platforms, including the internet. Such cases showcase how social media remains a fluid topic of discussion within legal circles.
In recent discourse, however, social media companies have come under heavy criticism for their content moderation policies: Twitter has been criticized by conservative groups who argue that the company’s policies are biased against them and that content is moderated unnecessarily [4]; Meta has been criticized for its lenient approach to moderation, and its platform has been found to spread fake news faster than any other social media site [21][5]. With acquisitions such as Elon Musk’s purchase of Twitter, a shift in the social narrative and heightened concern over the right to freedom of expression have become apparent. Before taking over Twitter, Musk tweeted, “Given that Twitter serves as the de facto public town square, failing to adhere to free speech principles fundamentally undermines democracy,” raising concerns over adherence to free speech principles and questions about whether platforms such as Twitter remain open to opinions from the other side of the spectrum. Musk’s plan to make Twitter a politically neutral space indicates a crucial change in the way social media companies and important stakeholders look at content moderation. [22][6] Overall, social media companies are in a precarious position: tightening content moderation and loosening it both land them in hot water.
Today, social media faces the same dilemma regarding freedom of expression that has long been a significant component of contemporary discourse. Freedom of expression and hate speech are separated by a very fine line. Although some may argue that there are definite boundaries beyond which freedom of expression turns into hate speech, the question of who decides what is and is not hate speech has never been answered. The prevalent assumption in the past has been that the state is an altruistic entity devoid of preferences or biases. However, history has demonstrated that granting unchecked power to the state, particularly concerning free speech, is an unwise proposition. Analogies may be drawn from democratic states, which have long argued for freedom of expression while continuing to infringe upon that same right. An inherent tension between freedom of expression and the regulation of hate speech can be observed. [7] To highlight this tension, it has been argued, for the purposes of debate, that racial slurs have the potential to empower marginalized groups by reclaiming their agency; from the perspective of the state, however, such slurs are still classified as hate speech. As a result, this conflation of hate speech and freedom of expression is increasingly observed to permeate the domain of social media. [9] Furthermore, since these companies have leeway and autonomy over their policies, their attempts to balance protecting freedom of expression against restricting hate speech produce different approaches: some prefer greater restrictions, while others believe a lax policy better suits the needs of their company. Thus, alternative viewpoints emerge that argue for an umbrella policy covering all social media sites, one acceptable to all parties who participate in online speech. Otherwise, social media sites will continue to pursue policies that suit themselves, and ambiguity over content moderation will persist indefinitely.
The idea of just a few companies controlling people’s access to information and opinion can be quite daunting. The prospect of a single company, possibly owned by the state, having complete control over that access is even more unsettling. In countries like China, access to information is heavily restricted, to the extent that all content must adhere to state policy. [10] Major platforms such as WeChat have taken stringent measures under government influence: in January 2018, WeChat suspended multiple accounts, citing that they were “wrong-oriented”. Similarly, during the pandemic, a plethora of activists’ accounts were banned for dissenting against the government’s narrative of a ‘normal state’. [11] They were banned for raising genuine concerns about the state of their country and for questioning officials whose inefficient oversight had allowed the pandemic to worsen in China. Monopolistic media power, whether under state control or not, grants extreme influence over the way a country and its people operate. This phenomenon, while not new, has become an escalating cause of concern, given that monopolistic power now translates into social media’s pervasive presence on virtually every smartphone. That pervasive presence can result in the propagation of a singular narrative, controlled by individuals who oversee these platforms from positions of privilege, detached from the actual state of affairs.
Similarly, recent advancements in technology, especially artificial intelligence, pose a threat of similar, if not greater, magnitude. AI today is mainly used in content moderation: algorithms flag hateful or harmful content by identifying specific keywords in social media posts, and that content is then removed. [12] However, it is crucial to note that this approach may strip AI of the ability to differentiate between hate speech and legitimate discourse, potentially resulting in the censorship of authentic debate and discussion. AI also suffers from algorithmic bias and a lack of oversight, both of which undermine its legitimacy as a means of content moderation. [13] It can be argued that, since AI currently fails to meet the basic criteria for moderation, increased dependency on it may simply serve as a scapegoat for media companies. At this stage, AI should not be allowed to replace human moderators unless it becomes far more advanced and can learn from vast amounts of data on moderators’ behavioral patterns. In its current state, AI can positively impact the moderation process by identifying problematic content and leaving it for review by a human moderator; to let AI take over completely would be disastrous. [14] The impetus to keep improving AI models must continue, as media companies try to cut costs and, in light of recent legal developments, become more vigilant themselves. Making AI a more effective tool for understanding free speech is difficult – humans themselves lack an exact definition – but it can be improved, first and foremost, by removing any form of bias that might otherwise be replicated at scale. The opinions reflected by AI are a byproduct of the datasets fed to the algorithm, rendering them susceptible to bias. This again raises the fundamental question of who holds the authority to determine what qualifies as acceptable or objectionable speech.
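To make the limitation concrete, the short Python sketch below illustrates the kind of keyword-based flagging with human review described above. It is a minimal illustration only: the keyword list, the Post structure, and the ReviewQueue are hypothetical stand-ins, not any platform’s actual moderation system.

# Minimal sketch of keyword-based flagging with a human-in-the-loop review step.
# The keyword list, post structure, and review queue are hypothetical illustrations.

from dataclasses import dataclass, field
from typing import List

# Hypothetical blocklist; real systems use far larger, curated, multilingual lists.
FLAGGED_KEYWORDS = {"slur_a", "slur_b", "threat_phrase"}

@dataclass
class Post:
    post_id: int
    text: str

@dataclass
class ReviewQueue:
    """Holds posts flagged by the keyword filter for a human moderator to review."""
    pending: List[Post] = field(default_factory=list)

    def add(self, post: Post) -> None:
        self.pending.append(post)

def flag_post(post: Post, queue: ReviewQueue) -> bool:
    """Flag a post if any listed keyword appears; the final decision stays with a human.

    Core weakness discussed above: pure keyword matching cannot tell a quoted or
    reclaimed term from a genuine attack, so it over-flags legitimate discourse
    and under-flags harmful posts that simply avoid the list.
    """
    words = {w.strip(".,!?\"'").lower() for w in post.text.split()}
    if words & FLAGGED_KEYWORDS:
        queue.add(post)  # escalate to a human moderator instead of auto-removing
        return True
    return False

# Example: both posts are flagged identically, even though only one is abusive in
# context; that contextual judgment is exactly what the human reviewer must supply.
queue = ReviewQueue()
flag_post(Post(1, "They shouted slur_a at us on the street."), queue)  # reporting an incident
flag_post(Post(2, "You are a slur_a."), queue)                         # direct attack
print(f"{len(queue.pending)} posts awaiting human review")

The example deliberately flags a post that merely reports a slur alongside one that uses it as an attack, since a pure keyword match cannot tell the two apart; that distinction is precisely the judgment left to the human moderator.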
What further complicates the power of social media is corporate responsibility and the way social media companies consider the ethical and moral obligations imposed upon them. Since all social media platforms are privately owned, they can technically pursue any policy they deem suitable, even disclaiming the responsibilities that come with their power to facilitate speech. [15] Today, granting the right to free expression is essential, yet limiting that right becomes equally pertinent in certain cases, such as when it is used to harm individuals or marginalized communities. It is paramount to strike a balance between freedom of speech and social responsibility. [16] In doing so, social media companies must take upon themselves an ethical, if not legal, burden to sanction individuals who use their platforms to spread hate speech, and should foster a healthy and safe online environment. Social responsibility further complicates the issue by presenting these companies with a choice: meet the baseline of what is legally required of them, or go a step further that is not their legal responsibility but a compromise they reach willingly. One solution could be to legally distinguish social media platforms from online entities such as media commentary outlets, to which regulation would apply. [17] While this solution may not be well received by social media companies, it would introduce a crucial form of accountability that may not be achievable otherwise. Disinformation and similar issues faced by social media companies could then be addressed more directly by empowering users, supporting media and information literacy, promoting responsible journalism, and leveraging technology to enhance transparency and accountability. [18]
As we look ahead, it is important to envision a future where social media platforms are rooted in both freedom of expression and open discourse. Multi-stakeholder collaboration between the state, civil society, and the private sector has recently been introduced as a crucial concept for building a future of improvement and further innovation. [19] It is equally significant to remember recent cases such as Manhattan Community Access Corp. v. Halleck, in which the Supreme Court clarified that private platforms such as Facebook and Twitter could be treated as state actors, and thereby bound by constitutional speech protections, only if they perform functions traditionally exclusive to the state. [20] In this regard, the case helps delineate when discriminatory moderation policies might be challenged, and helps us position AI moderators and their possible role in the near future as well.
The line between freedom of expression and hate speech has always been blurry, but the advent of social media reignites hope for reinforcing freedom of speech in contemporary online circles and empowering coming generations by opening freedom of expression to a vast majority of the world. Lawmakers today have proposed legislation to regulate social media platforms and to ensure a transparent and fair content moderation system. However, these laws may at times infringe upon the right to freedom of expression while shaping how social media companies operationalize their content moderation policies. A persistent dichotomy can be observed in social media’s dual responsibility to maintain freedom of expression and to protect individuals from hate speech; unless social media sites strike a balance between the two, the question of the right to speech will continue to persist.
In conclusion, the pervasive nature of social media platforms, coupled with the potential for bias in AI-driven content moderation, necessitates a more nuanced approach to regulating online speech. This includes granting social media companies a limited degree of autonomy while acknowledging the need for governmental oversight and intervention in content moderation. In addition, an umbrella policy should be implemented that fosters collaboration among social media companies toward a more inclusive and diverse online environment. Ultimately, only through a concerted effort by all stakeholders – governments, civil society, and social media companies – can we establish a more balanced and equitable approach to regulating online speech.
Word Count: 2280
ENDNOTES:
[1] Dawson J. 2023. Who Is That? The Study of Anonymity and Behavior. APS Observer. 31. [accessed 2023 Apr 19]. https://www.psychologicalscience.org/observer/who-is-that-the-study-of-anonymity-and-behavior
[2] Section 230 of the Communications Decency Act. Minc Law. 2020 Sep 23. [accessed 2023 Apr 18]. https://www.minclaw.com/legal-resource-center/what-is-section-230-of-the-communication-decency-act-cda/
[3] Packingham v. North Carolina. 2017 Nov 10. Harvard Law Review. [accessed 2023 Apr 18]. https://harvardlawreview.org/print/vol-131/packingham-v-north-carolina/
[4] O’Neil T. 2022 May 10. Elon Musk says Twitter obviously has a “strong” left-wing bias. Fox Business. [accessed 2023 Apr 19]. https://www.foxbusiness.com/politics/elon-musk-twitter-obviously-strong-left-wing-bias
[5] Hutchinson A. 2022 Dec 6. Meta’s Oversight Board Criticizes the Company’s More Lenient Moderation Approach for Celebrities. Social Media Today. [accessed 2023 Apr 19]. https://www.socialmediatoday.com/news/Oversight-Board-Criticizes-Metas-Lenient-Moderation-of-Celebrities/638120/
[6] Lerman R. 2022 May 10. Here’s what Elon Musk has said about his plans for Twitter. Washington Post. [accessed 2023 Apr 19]. https://www.washingtonpost.com/technology/2022/05/10/elon-musk-twitter-plans/
[7] Massaro T. Equality and Freedom of Expression: The Hate Speech Dilemma. William & Mary Law Review. https://scholarship.law.wm.edu/cgi/viewcontent.cgi?article=1923&context=wmlr
[8] Reno v. American Civil Liberties Union. 2017. The First Amendment Encyclopedia, Middle Tennessee State University. [accessed 2023 Apr 19]. https://www.mtsu.edu/first-amendment/article/531/reno-v-american-civil-liberties-union
[9] McGoldrick D. 2013. The Limits of Freedom of Expression on Facebook and Social Networking Sites: A UK Perspective. Journal of Media Law. 5(2):217–240. https://www.corteidh.or.cr/tablas/r30709.pdf
[10] Moynihan H, Patel C. 2021. Restrictions on Online Freedom of Expression in China: The Domestic, Regional and International Implications of China’s Policies and Practices. Chatham House, International Law Programme and Asia-Pacific Programme. https://www.chathamhouse.org/sites/default/files/2021-03/2021-03-17-restrictions-online-freedom-expression-china-moynihan-patel.pdf
[11] Huang K. 2020 Feb 26. Coronavirus: China tries to contain outbreak of freedom of speech, closing critics’ WeChat accounts. South China Morning Post. [accessed 2023 Apr 19]. https://www.scmp.com/news/china/politics/article/3052463/coronavirus-china-tries-contain-outbreak-freedom-speech-closing
[12] Haas J. 2019 Jun 20. Freedom of the Media and Artificial Intelligence. Office of the OSCE Representative on Freedom of the Media. https://www.osce.org/fom/420666
[13] Privacy and Freedom of Expression in the Age of Artificial Intelligence. 2018. ARTICLE 19. https://www.article19.org/wp-content/uploads/2018/04/Privacy-and-Freedom-of-Expression-In-the-Age-of-Artificial-Intelligence-1.pdf
[14] Raso F, Hilligoss H, Krishnamurthy V, Bavitz C, Kim L. 2018. Artificial Intelligence & Human Rights: Opportunities & Risks. Berkman Klein Center for Internet & Society, Harvard University. https://cyber.harvard.edu/sites/default/files/2018-09/2018-09_AIHumanRightsSmall.pdf
[15] Gelber K. 2021 Jan 27. Social media and “free speech.” ABC Religion & Ethics. [accessed 2023 Apr 20]. https://www.abc.net.au/religion/katharine-gelber-social-media-and-free-speech/13093868
[16] Cohen-Almagor R. 2017 Sep. Balancing Freedom of Expression and Social Responsibility on the Internet. ResearchGate. [accessed 2023 Apr 19]. https://www.researchgate.net/publication/317776065_Balancing_Freedom_of_Expression_and_Social_Responsibility_on_the_Internet
[17] von Finckenstein K, Menzies P. 2022. Social Media Responsibility and Free Speech. Macdonald-Laurier Institute. https://macdonaldlaurier.ca/mli-files/pdf/Feb2022_Social_media_responsibility_and_free_speech_Finckenstein_Menzies_PAPER_FWeb.pdf
[18] Broadband Commission for Sustainable Development. 2019. Balancing Act: Countering Digital Disinformation While Respecting Freedom of Expression. Research report on ‘Freedom of Expression and Addressing Disinformation on the Internet’. https://www.itu.int/en/ITU-D/Broadband/Documents/FoE_Disinformation_Report.pdf
[19] Council of Europe. 2017 Aug 14. Internet governance: fostering multi-stakeholder dialogue. Freedom of Expression. [accessed 2023 Apr 19]. https://www.coe.int/en/web/freedom-expression/news/-/asset_publisher/thFVuWFiT2Lk/content/internet-governance-fostering-multi-stakeholder-dialogue
[20] Manhattan Community Access Corp. v. Halleck, Supreme Court of the United States. 2019. https://www.supremecourt.gov/opinions/18pdf/17-1702_h315.pdf
[21] Travers M. 2020 Mar 21. Facebook Spreads Fake News Faster Than Any Other Social Website, According To New Research. Forbes. [accessed 2023 Apr 21]. https://www.forbes.com/sites/traversmark/2020/03/21/facebook-spreads-fake-news-faster-than-any-other-social-website-according-to-new-research/?sh=2a4132ae6e1a
[22] Musk E [@elonmusk]. 2022 Mar 26. Twitter post. [accessed 2023 Apr 21]. https://twitter.com/elonmusk/status/1507777261654605828