Privatization
can be a controversial practice. To its proponents, it is an engine of
efficiency, introducing a competitive atmosphere to stodgy and
self-perpetuating bureaucracies. But there are also externalities that can
come into play when governments abdicate direct responsibility over an area of
administration. A private prison may be run at less cost to the taxpayer, but
will it respect the rights of inmates and devote sufficient resources to their
rehabilitation? Privatizing a water company could turn it profitable, but this
might come at the cost of an increase in contaminants or a refusal to service
unprofitable areas. Despite the common refrain that government
should be run like a business,
there is an important distinction between the core functions of these two types
of entities. A private company’s purpose, its only purpose, is to maximize
profit for its shareholders. A government’s purpose is to promote and protect
the rights of its people.
Regulating
speech is among the most important, and most delicate, tasks that a government
may undertake. It requires a careful balance between removing harmful content and
providing space for controversial and challenging ideas to spread, and between deterring
dangerous speech and minimizing a broader chilling effect on legitimate debate.
How to regulate speech is among the most vibrant and hotly debated questions in law
and philosophy, with a voluminous history of jurisprudence and academic theory on how
restrictions should be crafted.
Today,
this entire school of thought is being cast aside, as the practical
functions of content regulation are increasingly handed over to an
industry that is not only unprepared for the subtleties and
technical challenges of defining the contours of acceptable speech
on a global scale, but has, as far as possible, resisted taking responsibility
for this function.
How
did we get here?
In
the early days of the commercial Internet, policymakers recognized that the
commercial and social potential of this new medium could best be realized if service
providers were protected against direct liability for the words of their users.
Without that protection, scale of the kind achieved by Facebook and Twitter would
never have been possible. But the protection has proved to be a double-edged sword. Having
been allowed to grow without an expectation of policing their users, the
world’s biggest tech firms were built around business models that make it very difficult
to control how their products are being used.
Now,
governments are demanding that the companies start taking responsibility and imposing
content controls that suit those governments' needs. In some cases, these involve fairly
well-recognized categories of harmful content, such as hate speech or child abuse
imagery. Other examples revolve around content that is outlawed locally, but
whose prohibition runs counter to global freedom of expression standards, from risqué
photos of the King of Thailand to material deemed to violate conservative religious standards.
In some instances, companies have entered into collaborative relationships with
governments to remove content that is determined to be objectionable, notably (and
controversially) in Israel.
Demands for private sector cooperation are backed by a variety of coercive
measures, including the imposition of large fines, threats to block a company’s
website, and even the arrest and imprisonment of company employees.
The
end result is a “privatized” system of content control, which is run at the
behest of government authorities, but which is operated and enforced by the
tech companies. To understand why this is problematic, consider the case of
South Korea, where content enforcement decisions are made by the Korea
Communications Standards Commission (KCSC),
an administrative body whose members are appointed by the President. The KCSC
is notoriously heavy-handed, and frequently targets sites that criticize
politicians or challenge sensitive policy areas. Its decisions are issued to
the platforms, rather than to the users who post the material, and come in the
form of non-binding requests for removal. Weak intermediary liability
protections mean that, in practice, these requests are always followed. However,
the fact that the decisions are not formally binding means that, technically,
enforcement originates from the platform, rather than the KCSC, which strips
users of any procedural safeguards, such as a right of appeal or even
notification that their material is subject to removal.
This
practice of “laundering” government content restrictions through the private
sector allows for mechanisms of control which vastly outstrip what might
otherwise be permissible in a democratic context. For example, Germany’s Network
Enforcement Act (NetzDG), which came into
force in 2018, requires companies to remove “obviously illegal” material within
24 hours of being notified of its existence. More recently, proposals
from the European Parliament could push the
deadline for responding to “terrorist content” notifications to just one hour. No
judicial or administrative process in the world operates this quickly. Similarly,
traditional content restrictions were designed on the understanding that their
applicability would be limited by the resources available for enforcement. But
in the context of private sector platforms, enforcement is expected to be close
to 100 percent, creating a vastly more intrusive system.
These
issues are compounded by the fact that, due to the size and scale of the major
platforms, the only practical avenue to developing moderation solutions that
approach what governments are demanding is to lean heavily on automated decision-making
systems. But while AI is relatively competent at screening for nudity, content
that implicates hate speech or copyright infringement is vastly more difficult
to assess because it is inherently contextual. An identical statement made in Myanmar and
in Canada could qualify as hate speech in the former but not in the latter,
because of the much higher level of underlying ethnic tension in Myanmar.
Not only is AI presently incapable of making this type of determination,
but it is questionable whether the technology will ever be able to do so.
Moreover,
in a context where the legal framework sets a minimum standard of enforcement,
with harsh penalties for dropping below that standard, platforms are
incentivized to err on the side of caution and remove anything which even
approaches the line. This problem has been widely documented
with regard to the DMCA system of copyright enforcement, including clear
instances where it has been gamed
to target political opponents.
Increasing automation will only exacerbate this tendency.
None
of this is to suggest that tech companies should have no responsibilities with
regard to the impact of their products on the world. But perspective is important.
The resiliency of the Internet
to pervasive forms of content control is a feature of the technology, not a bug.
The same characteristics that we celebrate when Vladimir Putin cannot remove an embarrassing
image of himself, or when Xi Jinping struggles to stop Internet users from comparing
him to Winnie the Pooh, are what make it so difficult to clamp down on the viral
spread of video of the Christchurch attack.
The
new privatized enforcement models, which are being embraced, to some degree, by
virtually every developed democracy, threaten many of the key safeguards designed
to prevent the abusive application of content restrictions. While there are
clearly problems in moderating online speech that need to be addressed, solutions
to these challenges must be crafted within well-recognized global
norms of freedom of expression, including appropriate checks and balances, rather
than left to the private sector to resolve what is fundamentally a matter of public interest.
Michael Karanicolas is a human rights advocate who is based
in Halifax, Canada. He is a graduate student in law at the University of
Toronto and, as of July 2019, will be the incoming WIII Fellow at the
Information Society Project at Yale Law School. You can reach him by email at michael.karanicolas
at mail.utoronto.ca and on Twitter at @M_Karanicolas.