Regulating Social Media Content – A Primer

It is quite difficult to describe or define social media in all its functions and glory using pre-millennium terminology. Even the statutory definition of social media sounds tortured: it fits only into the definition of “interactive computer service providers.”[1] But social media is so much more than a chatroom. It is a chronicle of our lives, a newsroom, a professional avatar. It is a convenient forum for discussing both the crucial and the mundane.

The extraordinary success of social media lies in its ability to produce engaging content without paying a dime for it. The more users join a particular platform, the more content is created. As content gets shared, it may “go viral,” often moving seamlessly from one platform to another. As users evaluate content, popularity begets more popularity. To the user, the content appears to be an unfiltered, never-ending feed of, well, everything and nothing.

However unfiltered the content appears, it is not so. Content is prioritized for each viewer by a complex and ever-changing algorithm. The platforms compete not only for users but also for more of those users’ time. Their algorithms place a higher value on engaging content, which results in a proliferation of controversial, “clickbait” topics online. The “network effect” of social media platforms results in users flocking to the more popular platforms. The larger the networks get, the more easily they overcome any upstart that threatens their primacy. When a rival platform becomes successful enough to be reckoned with, the result is often an acquisition, with the rival’s innovations incorporated into the acquirer’s own product.
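To make the ranking mechanism concrete, here is a minimal Python sketch of engagement-weighted feed ordering. The Post fields, the weights, and the engagement_score formula are hypothetical illustrations invented for this sketch, not any platform’s actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    age_hours: float

def engagement_score(post: Post) -> float:
    """Hypothetical score: interactions are rewarded, age is penalized.

    Shares and comments are weighted more heavily than likes because they
    keep users on the platform longer, which is the commodity the
    platforms actually compete for.
    """
    interactions = post.likes + 3 * post.shares + 2 * post.comments
    return interactions / (1 + post.age_hours)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed so the most 'engaging' content appears first."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Measured policy analysis", likes=40, shares=2, comments=5, age_hours=2),
    Post("You won't BELIEVE what happened!", likes=30, shares=25, comments=40, age_hours=2),
])
print(feed[0].text)  # the clickbait post ranks first despite fewer likes
```

Even with these made-up weights, the sketch shows the dynamic described above: content that provokes sharing and commenting outranks quieter content, so “clickbait” rises to the top of the feed.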

A significant problem often described regarding social media is its “flattening” or “democratizing” effect.[2] Content posted by large media outlets appears equal to content posted by an average user. This democratization was once highly praised and may have helped topple dictators in the Middle East. We were soon enough disabused of this naïve notion, however, when it became apparent that hostile state actors can also masquerade among us and bait us into engaging with their content.[3] Fact-checking content posted on social media has become an onerous task, and, due to social proof, we are unlikely to fact-check popular postings.

As these developments unfolded over the last decade, culminating in at least one presidential election tainted by unbridled trolling, the mood turned decisively in favor of “regulating” social media platforms. To understand what could be changed, we need to examine how social media is regulated now. As it turns out, it is not regulated much at all.

If we look at the federal statutes, we find precious little. The “26 words that created the Internet”[4] are found in Section 230 of the Communications Decency Act of 1996. This is the only section of the Act that survived constitutional scrutiny;[5] it provides social media companies broad immunity from liability for the content posted by the users of the platform.[6] Simply put, liability remains with the author of the content, the person typing it into the platform; the platform that allows the sharing of this content is not deemed a publisher in the traditional sense. Platforms are not required to read, review, edit, or approve the content. They may, however, decide to do so and block or flag it, because another part of Section 230 protects them from liability for screening or blocking content that “the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”[7]

Social media platforms have policies prohibiting graphic violence, child sexual exploitation, and hateful content or speech. Under Section 230, they may suspend or ban users who violate these policies, as well as take down, block, or flag the offending content. However, there is no uniform standard for content moderation, and no uniform takedown mechanism or tolerance level either. The platforms are not required to release information on their content moderation, although some voluntarily disclose how many accounts or messages they have flagged, blocked, or suspended. Companies also vary in how much they rely on users, human content moderators, or artificial intelligence (AI) technologies to spot problematic content.
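To illustrate why tolerance levels diverge, consider a hypothetical moderation pipeline sketched in Python. The thresholds, signal names, and actions below are invented for the sketch; no platform publishes its actual decision logic:

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    FLAG = "flag"          # label the post but leave it visible
    REMOVE = "remove"      # take the post down
    ESCALATE = "escalate"  # route to a human moderator

def moderate(ai_risk: float, user_reports: int,
             flag_threshold: float = 0.5,
             remove_threshold: float = 0.9) -> Action:
    """Combine an AI classifier score with user reports.

    Each platform picks its own thresholds, which is exactly why there
    is no uniform takedown standard: the same post can be removed on
    one service and merely flagged on another.
    """
    if ai_risk >= remove_threshold:
        return Action.REMOVE
    if user_reports >= 10:
        return Action.ESCALATE
    if ai_risk >= flag_threshold:
        return Action.FLAG
    return Action.ALLOW

print(moderate(ai_risk=0.6, user_reports=2))   # Action.FLAG
print(moderate(ai_risk=0.6, user_reports=15))  # Action.ESCALATE
```

The design point is that the thresholds, not the mechanics, carry the policy: two platforms running identical code but different numbers will produce the divergent outcomes described above.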

As we know, fact-checking and cite-checking are labor-intensive endeavors. On social media, information spreads from one service to the next, where it may proliferate even after it has been removed from the original site. Moreover, information can be “re-contextualized” by, for example, posting a screenshot of a text, which makes the message difficult for automated text analysis to read.
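A toy example makes the evasion concrete: a keyword filter that scans message text catches the raw string but sees nothing in a screenshot, which arrives as pixels rather than text. The blocklist and message format here are hypothetical; a real pipeline would need an OCR step to recover the words:

```python
BLOCKED_PHRASES = {"miracle cure"}  # hypothetical blocklist

def text_filter(message: dict) -> bool:
    """Return True if the message should be flagged.

    Only the 'text' field is scanned; image bytes are opaque to a
    text-based filter, so the same words inside a screenshot pass
    through unless the pipeline adds OCR.
    """
    return any(phrase in message.get("text", "").lower()
               for phrase in BLOCKED_PHRASES)

plain = {"text": "Try this miracle cure!"}
screenshot = {"text": "", "image_bytes": b"\x89PNG..."}  # same words, as pixels

print(text_filter(plain))       # True: caught
print(text_filter(screenshot))  # False: evades the filter
```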

The solutions proposed so far appear ill-fitted to cure the problem. Under the Trump Administration, a Federal Communications Commission (FCC) rulemaking was requested to “clarify the circumstances under which an interactive computer service restricting access to content would not receive Section 230 immunity.”[8] Several legislative proposals also emerged, addressing two distinct aspects of the social media environment. One would address the allegedly “biased” restriction of certain content, for example, the “throttling” of certain political viewpoints.[9] The other legislative approach would require platforms to restrict COVID-19 misinformation. Finally, although it has not progressed into an actual move by the Justice Department, many are talking about using antitrust measures and breaking up the social media “monopolies.”[10]

A regulatory response is typically justified by concerns over the de facto censorship power that social media platforms may wield if they block a particular viewpoint.[11] Scholars, on the other hand, complain that victims of social media bullying have no recourse against the companies that enable the abuse.[12] The facelessness of social media creates an environment ripe for abuse; even content moderators have requested compensation for the mental health effects of their jobs.[13]

We must also consider the unintended consequences of any regulatory interference. Creating a uniform moderation requirement now would only further entrench the larger social media companies’ monopolies by raising the cost of entry for smaller, startup competitors.[14] Similarly, creating a regulatory agency to moderate content would make it cheaper for established platforms to do business by relieving them of moderation costs; this too would foster monopolies, as it would free up money to acquire competitors.

Revisions to Section 230 would, in turn, push companies toward one of two extremes: either they would be so concerned with liability for removing content that they remove nothing, or they would not allow anyone to post content unless it is reviewed, fact-checked, edited, and curated. One outcome is thousands of 4chans; the other, a few Atlantics or Harper’s. Neither of these approaches is particularly #appetizing. While some practical solutions are emerging, none of them has garnered any traction so far.[15] At least we can safely continue doomscrolling[16] for now.

[1]Section 230 of the Communications Act of 1934 (47 U.S.C. § 230), enacted as part of the Communications Decency Act of 1996.
[2]Clara Shih, THE FACEBOOK ERA (Addison-Wesley, Mar. 12, 2009), available at https://www.informit.com/articles/article.aspx?p=1330222&seqNum=3 (last accessed January 26, 2022); Gord Hotchkiss, A World Flattened By Social Media, MEDIA INSIDER, June 16, 2020, available at https://www.mediapost.com/publications/article/352615/a-world-flattened-by-social-media.html (last accessed January 26, 2022); Craig Silverman, Social Platforms Promised A Level Playing Field For All. The Russian Trolls Showed That Was Never True, BUZZFEED NEWS, November 28, 2017, available at https://www.buzzfeednews.com/article/craigsilverman/social-platforms-promised-a-level-playing-field-for-all-the (last accessed January 26, 2022); Nathan Pippenger, The Great Internet Flattening, DemocracyJournal.org, August 7, 2015, available at https://democracyjournal.org/arguments/the-great-internet-flattening/ (last accessed January 26, 2022).
[3]Silverman, supra note 2.
[4]Jeff Kosseff, THE TWENTY-SIX WORDS THAT CREATED THE INTERNET, (1st Ed., Cornell University Press, April 15, 2019).
[5]Reno v. ACLU, 521 U.S. 844 (1997).
[6]Supra note 1 (“no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”).
[7]The statute has no effect on federal criminal laws, state laws, intellectual property laws, and (other than allowing the blocking/screening of content) on sex trafficking law (i.e., it allows for a civil action for sex trafficking). Id.
[8]FCC Chairman Ajit Pai declined to move forward with this rulemaking during the remainder of his tenure as Chairman. Carrie Mihalcik, FCC’s Ajit Pai says he won’t move forward on Section 230 rule-making, CNet.com, Jan. 8, 2021, available at https://www.cnet.com/news/fccs-ajit-pai-says-he-wont-move-forward-on-section-230-rule-making/ (last accessed January 26, 2022).
[9] For example, viewpoints that coincide with misinformation on vaccinations or violate the platform’s content policy by calling for violence.
[10] Social media companies are not monopolies in the traditional sense, as we discuss below. As of this writing, Facebook owns Instagram and WhatsApp; Google owns YouTube; and Microsoft owns LinkedIn. The question remains: would breaking up these “monopolies” address the specific concerns described above? Would Facebook be different if it broke apart from Instagram? This writer remains doubtful.
[11]Due to their market dominance (in their own context), blocking content means they are practically censoring the speaker. U.S. Sen. Ted Cruz (R-Texas), chairman of the Senate Judiciary Committee’s Subcommittee on the Constitution, has stated in several public appearances that social media and browser companies are silencing conservative voices. See, e.g., the senator’s homepage, https://www.cruz.senate.gov/newsroom/press-releases/sen-cruz-google-is-a-monopoly-that-is-abusing-its-power (last accessed January 26, 2022). The Federalist Society, American Principles Project, The American Conservative, and others have echoed these complaints. See, e.g., Elyse Dorsey et al., Is Common Carrier the Solution to Social-Media Censorship? A Regulatory Transparency Project Webinar, Federalist Society, February 9, 2021, available at https://fedsoc.org/events/is-common-carrier-the-solution-to-social-media-censorship (last accessed January 26, 2022).
[12]See, e.g., Danielle Keats Citron and Benjamin Wittes, The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity, 86 FORDHAM L. REV. 401 (2017).
[13]Casey Newton, Facebook will pay $52 million in settlement with moderators who developed PTSD on the job, THE VERGE, May 12, 2020, available at https://www.theverge.com/2020/5/12/21255870/facebook-content-moderator-settlement-scola-ptsd-mental-health (last accessed January 26, 2022).
[14]Clyde Wayne Crews Jr., The Case against Social Media Content Regulation: Reaffirming Congress’ Duty to Protect Online Bias, ‘Harmful Content,’ and Dissident Speech from the Administrative State (June 28, 2020). CEI, Issue Analysis 2020 No. 4, available at SSRN: https://ssrn.com/abstract=3637613 (last accessed January 26, 2022); John Samples, Why the Government Should Not Regulate Content Moderation of Social Media, Cato Institute, April 9, 2019, available at https://www.cato.org/policy-analysis/why-government-should-not-regulate-content-moderation-social-media (last accessed January 26, 2022).
[15]These solutions include: creating distinct moderation requirements for platforms based on their size; allowing users to customize algorithmic filter settings; opening the raw, unsorted, and un-curated content feeds of dominant platforms so that others can build customizable services tailored to users’ content preferences; requiring some or all social media operators to regularly publish detailed content moderation reports; and mandating that social media users disclose their identity. Another popular position is to regulate social media platforms as “public utilities” or common carriers. However, as large as social media companies are, they are still not an “essential service,” as the blackout of one platform has demonstrated. Furthermore, even the largest platform is a “natural” monopoly only in its own context, as there are multiple platforms one can use, and the Internet is limitless. In this limitless universe, new platforms continue to emerge and take large swaths of users with them. For further reading on the “common carrier” solution, see, e.g., Dorsey, supra note 11.
[16] According to the Merriam-Webster dictionary, Doomscrolling and doomsurfing are new terms referring to the tendency to continue to surf or scroll through bad news, even though that news is saddening, disheartening, or depressing. Many people are finding themselves reading continuously bad news about COVID-19 without the ability to stop or step back. See https://www.merriam-webster.com/words-at-play/doomsurfing-doomscrolling-words-were-watching (last accessed January 27, 2022).