Blog post 5

The prompt for the blog post in this module asks us to consider the benefits and drawbacks of protecting free expression for government, civil society, the media, and private internet companies, and to suggest the next steps to protect free expression.

Jack Balkin’s essay for the Knight First Amendment Institute, “How to Regulate (and Not Regulate) Social Media,” describes three values (benefits) of free speech: 1) democratic participation, 2) democratic culture, and 3) the growth and spread of knowledge. Balkin underscores these benefits by pointing out that all three values are essential for a healthy, well-functioning public sphere.

In contrast, Balkin also describes the drawbacks when these values are not well protected:

“In a nutshell, that is the problem we are facing in the 21st century. We have moved into a new kind of public sphere—a digital public sphere—without the connective tissue of the kinds of institutions necessary to safeguard the underlying values of free speech. We lack trusted digital institutions guided by public-regarding professional norms. Even worse, the digital companies that currently exist have contributed to the decline of other trusted institutions and professions for the creation and dissemination of knowledge.”

I’d argue that this lack of “trusted digital institutions” facilitates what Daphne Keller delightfully describes as “lawful but awful” (clearly harmful, but legally protected) speech in her post “If Lawmakers Don’t Like Platforms’ Speech Rules, Here’s What They Can Do About It. Spoiler: The Options Aren’t Great.” These harms include the spread of misinformation, hate speech, and more.

So what, then, should each sector do to better protect our free speech values? Let’s take a look by sector, as suggested by the prompt:

For government, both of the authors cited above note the impracticality of having the state determine what is harmful. Policymakers should instead focus on other, less direct tools to bring the incentives of social media platforms and content providers into alignment with the free speech values mentioned above. Balkin describes three possible policy methods for this: “Antitrust and competition law, Privacy and consumer protection law, and Balancing intermediary liability with intermediary immunity.” In particular, his suggestions to “separate control over advertising brokering from the tasks of serving ads, delivering content, and moderating content” and to implement a fiduciary model for handling user data seemed intriguing. More detailed proposals, examples, or even a proof of concept along these lines would be a very compelling next step.

Civil society should resist the temptation to go after the “easy targets” when seeking to regulate free expression in the digital sphere, and should instead ensure that content moderation happens at a level where it can reasonably function. To quote Balkin again: “Governments and civil society groups often want to use basic internet services and payment systems to go after propagandists, conspiracy mongers, and racist speakers. I think this is a mistake…” Instead, they should focus on using the content moderation tools provided by social media platforms and hold the actual content creators responsible for their speech, even if they’re harder to hold accountable than the channels through which their content is accessed.

Social media platforms also have an important role to play in emerging next steps. In addition to adopting the “fiduciary” approach to data that Balkin described (possibly to head off more direct legislative interference), platforms could also adopt more open standards to encourage more diverse content experiences. Daphne Keller suggested this by way of “Magic APIs.”
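To make the “Magic APIs” idea a bit more concrete, here is a minimal sketch in Python of what such an open standard might enable: the platform exposes raw content through a simple endpoint, and any third party can supply its own ranking layer. All of the class names and ranking rules below are hypothetical illustrations of the concept, not any real platform’s API:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    likes: int

class PlatformFeed:
    """Hypothetical platform: hosts content and exposes it via an open endpoint."""
    def __init__(self, posts: List[Post]):
        self._posts = posts

    def fetch_posts(self) -> List[Post]:
        # The "Magic API" endpoint: return unranked content to any caller.
        return list(self._posts)

# Two competing third-party rankers a user could choose between.
def chronological_ranker(posts: List[Post]) -> List[Post]:
    return list(posts)  # assumes posts arrive in chronological order

def popularity_ranker(posts: List[Post]) -> List[Post]:
    return sorted(posts, key=lambda p: p.likes, reverse=True)

def render_feed(feed: PlatformFeed,
                ranker: Callable[[List[Post]], List[Post]]) -> List[str]:
    """The user's client applies whichever third-party ranker they picked."""
    return [p.text for p in ranker(feed.fetch_posts())]
```

The point of the sketch is the separation of concerns: the platform hosts content, while ranking (and, by extension, moderation policy) lives in interchangeable third-party code chosen by the user.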

Social media platforms should also proactively enforce their own terms of service. Both of the authors cited above point to the sheer scale of the content involved as a barrier to effective curation, but advances in Natural Language Processing (NLP) and other emerging AI technologies make this less true now than it was several years ago. Detecting “lawful but awful” content and handling it thoughtfully, ideally according to content moderation preferences made available to end users, is increasingly realistic and should be enthusiastically embraced.
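As a rough illustration of what “moderation preferences made available to end users” could mean in practice, here is a sketch in Python. The keyword-based scorer is only a stand-in for a real NLP classifier (which would be a trained model, not a word list), and every function name, action, and threshold below is a hypothetical assumption:

```python
from typing import List

def toxicity_score(text: str, flagged_terms: List[str]) -> float:
    """Stand-in for an NLP toxicity model: returns a score in [0, 1].

    A real system would use a trained classifier, not keyword matching.
    """
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in flagged_terms)
    return min(1.0, hits / len(words) * 5)

def moderate(text: str, threshold: float, action: str,
             flagged_terms: List[str]) -> str:
    """Apply a hypothetical per-user moderation preference.

    Each user picks their own threshold and action ("label" or "hide")
    rather than the platform deciding one policy for everyone.
    """
    score = toxicity_score(text, flagged_terms)
    if score < threshold:
        return text
    if action == "label":
        return f"[flagged, score={score:.2f}] {text}"
    if action == "hide":
        return "[hidden by your moderation settings]"
    return text
```

In this arrangement the platform runs the detector once, but each user decides whether borderline content is shown, labeled, or hidden — the speech stays legal and available, while its harms are softened for those who opt out of seeing it.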

In conclusion, I get the sense from these and other authors that the very real work of ensuring that our rights to free expression produce a vibrant public sphere is just beginning, and that technology is currently well ahead of both policy and culture. I hope that the emerging efforts from each of the sectors described above will focus on applying policy where it can be most effective, and on aligning the interests of social media platforms and civil society to those ends.