Blog post 4

For the first question in this week’s prompt, “How has the internet changed freedom of expression?”, I’d like to focus on one issue in particular: the consequences of anonymity, and the tension between the need to regulate harmful speech and the right to privacy, especially for vulnerable groups.

For this discussion, let’s assume that some expression causes harm and should fall outside First Amendment protections. Content on the internet poses a fundamental problem for policymakers: who should be responsible for that harmful speech, and at what points between ideation, indexing, discovery, and consumption of that expression should policy operate?

Prior to the readings this week, I had argued (vehemently) that responsibility lies solely with the content creator, and that the question we should be discussing is how to resolve the technical and legal complexities that insulate that individual from the consequences of their speech. However, the interview with David Kaye made me reconsider, or at least understand that this issue is more complex than I had appreciated. Kaye points out:

“There are vulnerable communities whose members can only engage in communications with one another if they do so in a secure way. My 2015 report alluded to some of these groups, such as LGBT communities or religious minorities in hostile environments, or simply those seeking information about politically or socially unpopular topics.”

These vulnerable communities should have a right to expression and assembly, but their access to that right depends on anonymity. This creates a tension, highlighted in our second prompt, between their right to expression and assembly and a policy need to curb harmful speech by holding its originator accountable. Kaye returns to this tension at the close of the interview: “We need to find ways to protect expression but also protect those subject to real abuse.” Unfortunately, a succinct, elegant policy solution was not subsequently provided.

While Kaye’s closing sentiment highlights a difficult task that still seems largely ahead of us, the readings this week provided insight into who should not be held responsible for harmful speech on the internet. This is largely the theme of The Test of Time: Section 230 of the Communications Decency Act Turns 20, which examines the broad protections Section 230 provides for “interactive computer service” providers.

To explore that theme, I’d like to start by disagreeing with a credible expert. In a 60 Minutes interview, Jeff Kosseff, author of The Twenty-Six Words That Created the Internet, explains why news networks are liable for defamatory statements but social media platforms are not: “the difference between a social media site and, say, the letters to the editor page on the New York Times is the vast amount of content they deliver.”

I’d argue that the scale of content is irrelevant and obfuscates more important questions. The real difference is that the “Letters to the Editor” page is a distinct, consumable piece of content provided by the New York Times. The New York Times has authored that page, even if it’s largely re-publishing other authors’ work. In terms of Section 230, it’s clearly a “material contribution.” Social media platforms are channels through which content may be found; they’re a newsstand (or a bookseller; see Smith v. California), not a newspaper.

We need clarification on what constitutes a “material contribution,” and by extension whether social media platforms or search engines are responsible for the content their users consume. In particular, I’d argue we urgently need an explicit, unambiguous statement that the algorithms determining content relevance or making content recommendations are protected. In our reading, the author highlights this:

“A relatively untested theory alleges providers are responsible for content they aggregate and manipulate. Courts that have confronted such situations have generally held Section 230 bars claims based on repurposed user content but not those based on the provider’s own representations about their manipulation of that content.”

Tying this discussion of Section 230 protection back to the 60 Minutes example of a harmful and bizarre theory on the origin of COVID-19, the law seems to intentionally protect YouTube from the “grim choice” between providing some curation and providing none at all. Effective policy should focus on making that video’s author responsible, while somehow protecting the privacy of vulnerable groups that depend on anonymity.