Anupam Chander
To borrow Churchill’s line about democracy, a Facebook Supreme Court is the
worst idea, except for all the others.
In 2018, Mark Zuckerberg introduced the idea of establishing an
independent board that would make the most difficult decisions with respect to content.
He compared the new body to a Supreme
Court. In January 2019, Nick
Clegg, Facebook’s Vice-President of Global Affairs and Communications,
announced the charter of this independent oversight board. Facebook may have sought
to reduce apprehensions of its growing global power by ceding some control to an
outside body. However, it was clear that Facebook was borrowing the apparatus,
and even the personnel, of government: not only was Facebook implementing a pseudo-judicial
body, but Nick Clegg had once served as the Deputy Prime Minister of the United
Kingdom.
As Dawn
Nunziato has observed, the
internet represents the “most powerful forum for expression ever created.” How
decisions over content are made on one of the internet’s principal platforms—a
platform that connects literally billions of people—is of great importance. Scholars such as Jack
Balkin, Tarleton
Gillespie, Daphne
Keller, Kate
Klonick, Thomas
Kadri, and Sarah
Roberts have powerfully analyzed how
internet platforms make decisions on the content that is carried on their sites
and the role of intermediaries in free expression today. In my scholarship, I
have sought to demonstrate a “First
Amendment/Cyberlaw Dialectic” in which “the First Amendment constituted cyberlaw, and cyberlaw in turn
constituted free speech.”
Facebook’s many critics argued that such an oversight board would serve
principally as window dressing, or that the introduction of the outside
mechanism was merely rearranging the deck chairs on the Titanic. Yet, the
Facebook Oversight Board marks a major new experiment. This essay compares the
Oversight Board with its alternatives. After all, the Oversight Board must be
considered not only on the basis of its flaws—of which there will likely prove
to be many—but also in comparison to its alternatives.
I will consider five alternatives, which I will dub: Mark Decides; Democracy
by Likes; Feudal Lords; Judge Robot; and Official Censorship.
Mark Decides
In December 2015, some employees inside
Facebook were troubled. A candidate for U.S. president was calling for a ban on
immigration for people of a particular religion. Users had flagged the
content as hate speech, triggering a review by Facebook’s community-operations
team, with its many employees in several offices across the world. In internal messages, some
Facebook employees declared that the posts constituted hate speech. On its face, a
statement targeting a particular religious group for a ban on immigration would
seem to violate Facebook’s community guidelines.
Facebook’s head of global policy
management, Monika Bickert, explained internally “that
the company wouldn’t take down any of Mr. Trump’s posts because it strives to
be impartial in the election season.” Facebook explained
its decision to the public as follows: “In the weeks ahead, we’re going to
begin allowing more items that people find newsworthy, significant, or
important to the public interest—even if they might otherwise violate our
standards.”
According to the Wall
Street Journal, “The decision to allow Mr. Trump’s posts
went all the way to Facebook Inc. Chief Executive Mark Zuckerberg, who ruled in
December that it would be inappropriate to censor the candidate.” Zuckerberg is
the ultimate arbiter of what stays up and what comes down on Facebook.
A central difficulty of Facebook’s current
model is that it places enormous power in the hands of Facebook’s leadership
and of those employees to whom the company’s leaders choose to delegate this
power. When Facebook deleted a Norwegian government
minister’s posting of the famous photograph of the naked Vietnamese girl
fleeing a napalm attack, the Norwegian government complained of censorship.
Facebook reversed its decision, even though its community
guidelines banned nudity. (“While we recognize that this photo is iconic, it’s
difficult to create a distinction between allowing a photograph of a nude child
in one instance and not others,” a spokesman for Facebook said in response to
queries from the Guardian.) In the wake of the censorship of the photograph,
Espen Egil Hansen, the editor-in-chief and CEO
of Norway’s largest paper, declared that Zuckerberg was “the world’s
most powerful editor.”
Democracy by Likes
What if Facebook put governance decisions to a vote, asking people to like
or dislike a particular post? We do not typically solve controversies over a
particular statement through popular vote. This may be because such a mechanism
would often degenerate into a contest over the popularity of the controversial
content, rather than a reasoned assessment of whether the content violates the
community guidelines. The effect would be to reinforce popular views at the
expense of minority viewpoints.
Feudal Lords
What of Reddit-style
moderators, granted the authority to regulate particular discussions and to
remove posts they find to violate that group’s guidelines? This is essentially
a dispersed version of Mark Decides—instead of a single king, multiple lords.
If only a few “lords” hold power, this approach raises the same concerns about
concentrated power as Mark Decides. If there are many “lords,” however, there
may be enough alternative venues that controversial content will find a home
somewhere. Such an approach might thus merely shift material to different
corners of Facebook, rather than remove material that genuinely violates
Facebook’s community guidelines.
Judge Robot
Perhaps we could rely upon a computer to make content decisions. Indeed, given
that Facebook makes millions of content decisions each day, it is likely
already relying in significant part on AI. Natasha Duarte, Emma Llansó,
and Anna Loup have
argued, however, that
“large-scale automated filtering or social media surveillance […] results in
overbroad censorship, chilling of speech and association, and disparate impacts
for minority communities and non-English speakers.” As Duarte et al. describe, decision-making by AI would not result in unbiased decisions,
but rather would potentially amplify bias. I have described this in my
scholarship as a kind of “viral
discrimination.”
Official Censorship
There is reason to worry about the great private power concentrated in
companies like Facebook, which reserve the right to delete information that
violates their community guidelines. An alternative is to vest such
decision-making in traditional governments—either through courts or
administrative bodies. Controversies over content would then be determined by an
official process, rather than by corporate employees following often secret
processes and secret rules. Of course, it is unclear whether individuals would
have the resources to bring or defend claims that some material should be
removed. Not only may the process be expensive, it may also be slow. Furthermore,
governments may use their content management powers to target negative or
opposition information as “fake news.” This concern will be heightened in illiberal states. Finally,
states with a strong commitment to free expression, like the United States, would
find it difficult to censor content that is not itself illegal, except through
time, place, and manner restrictions that are difficult to translate to the internet.
Afterword: Facebook’s Last Experiment with Outside Decision-Making
This is not Facebook’s first experiment with ceding decision-making to
outsiders. In 2009, Facebook permitted its users to vote on proposed changes to
the terms of use—though under rules that made practically all such votes
advisory rather than binding. That experiment in Facebook democracy proved short-lived.
Under the now-defunct
system, a site-wide vote would
be triggered if proposals to modify Facebook’s terms of service received
comments from at least 7,000 users. Then, if at least 30 percent of active users
voted on the proposal, Facebook would treat the vote as binding. Otherwise the
vote would be merely advisory. Given that Facebook already had a billion users,
there was little likelihood that any vote would be binding.
In December 2012, Facebook put two policy changes to a vote: whether Facebook should be able to share
data with Instagram and whether Facebook should end users’ rights to vote on
further governance questions. Of the 668,872 users who voted, 88 percent rejected
the changes. But turnout fell far short of the roughly 300 million participants
required for a binding vote, and the result could thus be safely ignored.
After only three votes, Facebook ended its chimerical experiment with
democracy. Perhaps Facebook’s new experiment with external governance will
prove longer-lived.
Anupam Chander is Professor of Law,
Georgetown University; A.B. Harvard, J.D. Yale. The author is grateful to Delia
Brennan and Ryan Whittington for superb research assistance, and also thankful
for a Google Research Award to support related research. You can reach him by e-mail at ac1931 at georgetown.edu