
Why Section 230 reformers should start paying attention to social code platforms
In June 2022, a YouTuber trained an AI bot, GPT-4chan, to mimic racist, misogynistic, and antisemitic posts from the website 4chan. He used the bot to post 30,000 comments within a few days, then shared its underlying AI model on an online collaborative coding platform: Hugging Face Hub. Any user could download the GPT-4chan model and flood the Internet with thousands of hateful comments—until Hugging Face’s CEO announced that the platform would restrict access in order to prevent further harm.
The GPT-4chan incident demonstrates the growing importance and potential harms of social code platforms: platforms that let users share and collaborate on datasets, AI models, and other coding projects (we call this content “social code”).
Social code platforms form pillars of the digital infrastructure we interact with and rely on every day. GitHub stores code from over 83 million developers, including the open source software that powers much of the devices and websites we use. Kaggle hosts datasets from its 10 million-user community. Hugging Face, often called the “GitHub of machine learning,” hosts AI models that anyone can use in their own work and has reached a $2 billion valuation.
Like social media platforms, social code platforms rely on user growth and network effects, and their users can put them to both good and harmful ends. Yet the nature of social code platforms makes them markedly different from social media platforms. And policymakers must start paying attention.
Similarities and Differences Between Social Media and Social Code Platforms
A key differentiator of social code platforms is their inherently collaborative nature. For example, users on GitHub can copy a project by “forking” it or contribute their own code through a pull request. Such collaboration makes these platforms more akin to Wikipedia or Google Docs than to one-off Tweets or Facebook posts. Moreover, social code is often released under open source licenses with minimal restrictions on usage, which have driven considerable technological innovation. Python, for instance, has become one of the most widely used programming languages due to its development through open source collaboration.
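For readers less familiar with that fork-and-pull-request workflow, the snippet below is a minimal sketch of how it can be exercised programmatically through GitHub’s public REST API. The owner, repository, and branch names are hypothetical placeholders, and a real contribution would also involve pushing commits to the fork before the pull request is opened.

```python
import os
import requests

# Minimal sketch of GitHub's fork / pull-request collaboration model via
# GitHub's public REST API. All owner, repository, and branch names are
# hypothetical placeholders.
API = "https://api.github.com"
HEADERS = {
    "Accept": "application/vnd.github+json",
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
}

# 1. Fork another user's repository into your own account.
fork = requests.post(
    f"{API}/repos/upstream-owner/example-project/forks",
    headers=HEADERS,
)
fork.raise_for_status()

# 2. After pushing changes to a branch of the fork, propose them back to the
#    original project as a pull request.
pr = requests.post(
    f"{API}/repos/upstream-owner/example-project/pulls",
    headers=HEADERS,
    json={
        "title": "Proposed change",
        "head": "your-username:feature-branch",  # branch on the fork
        "base": "main",                          # branch on the original repo
    },
)
pr.raise_for_status()
print("Opened pull request:", pr.json()["html_url"])
```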
Another major difference is that social code platforms are meant to share tools to be used, rather than content to be consumed. While these tools can be used for good—over a hundred open source applications and datasets constitute “digital public goods” that help attain the Sustainable Development Goals—they can also be co-opted by bad actors. Whereas abuse of social media platforms involves their users sharing harmful words or images, abuse of social code platforms can take a different, and potentially more potent, form.
A single AI model or piece of open source code can spread such content far faster and at far greater scale, becoming the foundation for countless extremely harmful applications: it can be used to exploit security vulnerabilities, generate non-consensual intimate imagery, and reproduce extremist content. Granted, harmful social code may be harder to find, since social code platforms do not rely on advertising as a business model and thus make little use of algorithmic curation. But that does not lessen the severe consequences when users misappropriate code for bad ends.
In fact, social code platforms already moderate content under stated guidelines to mitigate these harms. It is here that social media and social code platforms converge. GitHub and Hugging Face have Acceptable Use Policies and Content Guidelines, respectively, that are similar in principle to Facebook’s Community Standards. These guidelines and policies delineate prohibited content and outline a moderation process. Once a platform relies on Section 230’s protections to balance collaborative sharing with limits on what can be shared, it opens itself to debates over free speech and the principles of a free and open Internet.
Section 230 Reform, Social Media, and Potential Impacts on Social Code Platforms
Section 230 of the Communications Decency Act of 1996, 47 U.S.C. § 230, enabled the start of the modern Internet by broadly shielding online platforms from liability for (1) user-posted content and (2) their own content moderation decisions. Stanford Law professor Evelyn Douek explains the key provisions of Section 230 best: 230(c)(1) means that “[i]f I defame you in a tweet, you can sue me, but not Twitter” and 230(c)(2) means that “platforms can take down content and you can’t sue them for that either.” Without this immunity, today’s tech companies would have been buried in lawsuits over content posted on their platforms by ordinary users.
Section 230 has become a lightning rod in debates over free speech and platform accountability. On the right, many want to promote free speech by reducing social media platforms’ ability to moderate content on their services. Others on the left seek to require these platforms to more aggressively combat disinformation, hate speech, and cyberbullying. But as policymakers—and the Supreme Court—consider changes to Section 230, they should keep in mind that the law, as it stands, allows social code platforms both to host content and to moderate it to prevent harm. Any drastic change to Section 230’s protections could upset this already delicate balance.
On the one hand, if Section 230 reform follows the right’s calls to curb moderation in the name of free speech, it might undermine platforms’ ability to moderate content altogether. If social code platforms lose the ability to expeditiously take down harmful content, anyone with basic coding knowledge could unleash enormous harm at scale.
On the other hand, if Section 230 reform follows the left’s calls for greater moderation too far, it may increase platforms’ liability for user-posted content, leading social code platforms to severely restrict or eliminate content. Not only could this chill open source innovation, but taking down too much code could break builds and dependencies and slow the delivery of security fixes to end users. That would damage the digital infrastructure we all rely on and undercut the good that social code does: it advances equity, transparency, and Internet freedom by shifting power away from proprietary vendors and letting anyone use world-class tools for free.
Recommendations
Section 230 has shepherded both the rise of social media platforms and the growth of social code platforms. Social code platforms must balance the benefits of open collaboration against potential harms, just as social media platforms must moderate content. But because the impetus for Section 230 reform is focused largely on social media platforms, lawmakers may overlook their reforms’ consequential, and potentially devastating, effects on social code platforms.
We therefore recommend that policymakers carefully consider the important role that social code platforms play in the global Internet. When Congress considers changes to platform liability, lawmakers should not legislate with only Facebook and Twitter in mind. This requires understanding what makes social code platforms different, how they underpin our digital infrastructure, what benefits and harms they bring, and how Section 230 affects them. Only then can we avoid the unintended consequences of legal reform that is outdated upon passage.
Sean Norick Long, Esther Tetruashvily, & Ashwin Ramaswami
Sean Long
GLTR Staff Member; Georgetown Law, J.D. expected 2025; Harvard University, M.P.P. expected 2025; University of Notre Dame, B.A. 2015.
Esther Tetruashvily
Georgetown Law, J.D. expected 2024; Harvard University, M.A. 2014; The College of New Jersey, B.A. 2011.
Ashwin Ramaswami
GLTR Staff Member; Georgetown Law, J.D. expected 2024; Stanford University, B.S. 2021.
Thanks to Nithin Venkatraman, Victoria Houed, Ayelet Gordon-Tapiero, Brittany Smith, and Peter Henderson for their thoughts and feedback on this piece.