In an article published earlier
this year, Kate Klonick memorably described social media platforms like
Facebook as the “New Governors” of online speech. These platforms operate with significant
legal discretion. Because of the state action doctrine, they are generally
assumed to be unconstrained by the First Amendment. Because of Section 230 of
the Communications Decency Act, they enjoy broad immunity from liability for the
user-generated content posted on their sites. Nevertheless, Klonick showed,
these platforms have created intricate rules for determining whether and how to
limit the circulation of material that is arguably offensive or obscene, rules
that in some respects appear to track U.S. free speech norms. By studying
internal Facebook documents and interviewing employees, Klonick began to
illuminate the mysterious world of social media content moderation.
Klonick’s latest
essay pushes this project further.[*]
In “Facebook v.
Sullivan,”
she investigates Facebook’s use of the “public figure” and “newsworthiness”
concepts in its content moderation decisions. Again drawing heavily on
interviews, Klonick recounts how Facebook policymakers first turned to the
public figure concept in an effort to preserve robust debate on matters of
widespread concern while cracking down on the cyberbullying of “private”
individuals. Newsworthiness, meanwhile, emerged over time as a kind of
all-purpose free speech safety valve, invoked to justify keeping up content
that would otherwise be removable on any number of grounds. Defining public
figures and newsworthiness in an attractive yet administrable manner has been a
constant challenge for Facebook—the relevant First Amendment case law is no
model of clarity and, even if it were, translating it to a platform of Facebook’s
scale would be far from straightforward—and Klonick walks us down the somewhat
mazy path the company has traveled to arrive at its current approach.
Klonick’s essay
offers many intriguing observations about Facebook’s “free speech doctrine” and
its relationship to First Amendment law and communications torts. But if we
step back from the details, how should we understand the overall content
moderation regime that Klonick is limning? At one point in the essay, Klonick
proposes that we think of it as “a common law system,” given the way Facebook’s
speech policies evolve “in response to new factual scenarios that present
themselves and in response to feedback from outside observers.” The common law
analogy is appealing on several levels. It highlights the incremental,
case-by-case development that some of these policies have undergone, and it
lends a certain conceptual and normative integrity, an immanent rationality, to
this evolutionary process. Facebook’s free speech doctrine, the common law analogy
might be taken to suggest, has been working itself
pure.
Common law systems
are generally understood to involve (i) formally independent dispute resolution
bodies, paradigmatically courts, that issue (ii) precedential, (iii) written
decisions. As Klonick’s essay makes clear, however, Facebook’s content moderation regime contains none of these features. The regulators and adjudicators are one and the same; their rulings are neither published nor treated as binding precedent; and the little we know about how speech disputes get resolved and speech policies get changed at Facebook is thanks in no small part to Klonick’s own sleuthing.
A very different
analogy thus seems equally available: Perhaps Facebook’s content moderation
regime is less like a common law system than like a system of authoritarian or
absolutist constitutionalism. Authoritarian constitutionalism, as Alexander
Somek describes it, accepts many
governance features of constitutional democracy “with the noteworthy exception
of … democracy itself.” The absence of meaningful democratic accountability is
justified “by pointing to a goal—the goal of social integration”—whose
attainment would allegedly “be seriously undermined if co-operation were sought
with [the legislature] or civil society.” Absolutist constitutionalism, in Mark
Tushnet’s formulation, occurs when “a
single decisionmaker motivated by an interest in the nation’s well-being
consults widely and protects civil liberties generally, but in the end, decides
on a course of action in the decisionmaker’s sole discretion, unchecked by any
other institutions.”
The analogy to
authoritarian/absolutist constitutionalism calls attention to the high stakes
of Facebook’s regulatory choices and to the awesome power the company wields
over its digital subjects as a “sovereign” of cyberspace.
It also foregrounds the tension between Facebook’s seemingly sincere concern
for free speech values and its explicit aspiration to make users feel socially
safe and “connected” (and hence to maximize the time they spend on the site), a
tension that is shaped by market forces but ultimately resolved by benevolent
leader and controlling shareholder Mark Zuckerberg.
There is a jarring
scene in Klonick’s essay in which a photograph from the Boston Marathon
bombing that is “graphically violent” within the meaning of Facebook’s rules is
dutifully taken down by content moderators, only to be put back up by unnamed
executives on account of its newsworthiness. These executives may have had good
intentions, and they may even have made the right call. The episode is nonetheless
a reminder of the potential for arbitrary and cynical assertions of authority from
on high in Facebookland—and of the potential disconnect between the policies
that Facebook adopts and the policies that a more democratic alternative would
generate.
Systems of authoritarian
constitutionalism and absolutist constitutionalism are not lawless. But their
commitment to civil liberties and the public interest is contingent,
instrumental, fragile. If one of these models supplies the most apt analogy for
Facebook’s regulation of online speech, then the crucial tasks for reformers might
well have less to do with refining the company’s content moderation rules than
with resisting its structural stranglehold over digital media.
Three response
pieces identify additional concerns raised by Facebook’s content moderation
practices. Enrique Armijo argues that First
Amendment law on “public figures” can and should be embraced by Facebook and
Twitter, but that constitutional protections for anonymous speech become far
more frightening when exported to these platforms. To the extent that First
Amendment law has predisposed platform architects to be tolerant of anonymous speech,
Armijo suggests, it has led them disastrously astray.
Amy Gajda points out that Facebook’s
“newsworthiness” determinations have the potential to affect not only millions
of Facebook users, at great cost to privacy values, but also an untold number
of journalists. Given courts’ unwillingness to define newsworthiness when
reviewing privacy claims, Facebook’s “Community Standards” could become a
touchstone in future media litigation unless and until judges become more
assertive in this area.
Finally, Sarah
Haan reminds us that Facebook’s
decisions about how to regulate speech are inevitably influenced by its profit
motive. Indeed, Facebook admits as much.
Maintaining a prosocial expressive environment, Haan observes, is difficult and
expensive, and there is little reason to expect Facebook to continue to privilege
the preferences of American customers as its business model becomes
increasingly focused on other parts of the globe.
For those of us who worry about
the recent direction of U.S. free speech doctrine, Haan’s invitation to imagine
a future Facebook less beholden to First Amendment ideology is also an
invitation to imagine a range of new approaches to online content moderation
and social media regulation. And that is precisely what the Knight Institute’s
next visiting scholar, Jamal Greene, will be asking academics and advocates to
do in a forthcoming paper series.