Online information gatekeepers are in the spotlight. Their roles are being questioned and societal expectations reformulated daily – not only in Europe, but around the globe. Yet much of regulators' attention is focused solely on achieving removal of objectionable content. Owing to a never-ending stream of controversies, regulators fail to see (or worse, decide to ignore) that, as much as societies risk under-removal of illegitimate content, they also risk over-removal of their citizens' legitimate speech.
No regulator better illustrates this mindset than parts of the European Commission. In May 2016, as a direct offspring of the European refugee crisis, the Commission set up an informal agreement with technology companies to quickly remove hate speech. Since then, the Commission has publicly communicated that the fewer notifications platforms reject (and thus the more notified content they remove), the better for all of us. It does not take an expert to recognize that this thinking assumes the underlying notifications are flawless, something the European Commission does not evaluate in its monitoring exercise. Despite the criticism, the Commission continues to celebrate rising removal rates as some form of ‘evidence’ that things are improving. In reality, we are far from knowing whether the exercise produces any net positive value.
Academics have long argued that even the baseline system of intermediary liability, which allocates responsibilities among several stakeholders under a notice-and-takedown regime, is prone to over-removal of legitimate speech. Faced with potential liability, providers have a rational bias towards over-removal; they err on the side of caution. These arguments have been borne out by daily news coverage and by rigorous empirical and experimental studies.
Although some regulators have started recognizing this as an issue, many still do not consider the magnitude of the problem severe, particularly when compared to the social problems associated with failing to enforce the law. To be fair, even academics cannot yet properly tell what the aggregate magnitude of the problem is. We can point to the gap between false positives in removals and the extremely low user complaint rates at the service level, but not much more than that. The individual stories that make up this graveyard of erroneously blocked content are mostly unknown.
To their credit, stakeholders have recently succeeded in voicing the problem. Several upcoming pieces of Union law, such as the Digital Single Market (DSM) Directive, the Terrorist Content Regulation, and the Platform to Business Regulation, now include some commitment to safeguards against over-removal of legitimate speech. However, these are still baby steps. We lack a vision of how to effectively achieve high-quality delegated enforcement that minimizes under-removal and over-removal at the same time.
Article 17(9) of the DSM Directive mandates that E.U. Member States require some online platforms dealing with copyrighted content to “put in place an effective and expeditious complaint and redress mechanism that is available to users of their services.” Right holders who issue removal requests have to justify them, and platforms must use human review of the resulting user complaints. Member States have to facilitate alternative dispute resolution (ADR) systems and should ensure respect for certain copyright exceptions and limitations. The Terrorist Content Regulation aims to impose such mechanisms on hosting platforms directly. Although the Commission proposed a full reinstatement obligation for wrongly removed content, the European Parliament recently suggested softening it to a mere obligation to hear a complaint and explain the decision (as seen in Article 10(2) of the proposal). Article 4 of the Platform to Business Regulation requires that complaint processes be available in cases of restriction, suspension or termination of business users' services.
All of these initiatives, however well-intended, show a great imbalance between the two sides. While regulators are ramping up efforts to increase the volume and speed of removals, by finding more wrongful content online and blocking it more quickly, their approach becomes almost surgical when it comes to over-removal: they suddenly want platforms to weigh all the interests on a case-by-case basis. While regulators apply every possible pressure on the detection and removal side by prescribing automation, filters and other preventive tools that ought to be scalable, they limit themselves to entirely ex-post individual complaint mechanisms that platforms can overrule in cases of over-removal errors. When fishing for bad speech, regulators incentivize providers to use the most inclusive nets, but when good speech gets stuck in the same nets, they offer the speakers only a chance to talk to providers one-on-one, and with it a small prospect of change.
We fail to create equally strong incentives for providers to avoid over-removal at scale. Without parity in incentives, delegated enforcement by providers is not an equal game; and without equality of arms, there is no due process. Even with policies like the ones currently being baked into European Union law, users (whether private or business) have to invest to counter false allegations. They bear the cost, although they cannot scale up or speed up their defense. Without strong ex-ante incentives for higher-quality review, the cost of mistakes is always borne by the users of those platforms, since the correction takes place ex-post, after a lengthy process. Even if legitimate speakers somehow prevail in the end, the system, by definition, defies the legal maxim that justice delayed is justice denied.
The solutions we need might not always be that complicated. The first experimental evidence suggests that exposing platforms to counter-incentives in the form of external ADR, which punishes their over-removal mistakes with small fees in exchange for legal certainty, can in fact reduce the over-removal bias and thereby lower the social costs of over-blocking. The logic here is simple: if platforms bear the costs of their mistakes because over-removal suddenly also carries a price tag, they have more incentive to improve by investing resources into the resolution of false positives too. Moreover, since platforms can learn at scale, each mistake becomes an opportunity to benefit everyone else, improving the technology and the associated governance processes in the long run.
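To make this incentive logic concrete, here is a minimal sketch, in Python, of the underlying expected-cost comparison. The numbers, the function name, and the single-fee, single-liability, risk-neutral setup are assumptions for illustration only; they are not figures or a model drawn from the experimental evidence mentioned above.

    # Toy expected-cost comparison (hypothetical numbers): a risk-neutral
    # platform removes a notified item whenever removal is the cheaper option.

    def removal_threshold(liability_cost, over_removal_fee):
        # Keeping the item risks liability_cost with probability p (item illegal);
        # removing it risks over_removal_fee with probability 1 - p (item legal).
        # Removal becomes cheaper once p > fee / (fee + liability).
        return over_removal_fee / (over_removal_fee + liability_cost)

    # No price tag on over-removal: any non-zero risk of liability favors removal.
    print(removal_threshold(liability_cost=1000.0, over_removal_fee=0.0))    # 0.0
    # A modest ADR fee for wrongful removal raises the bar a notice must clear.
    print(removal_threshold(liability_cost=1000.0, over_removal_fee=100.0))  # ~0.09

Under these assumptions, even a small fee moves the platform from removing anything that carries legal risk to demanding at least a modest likelihood that the notification is well-founded.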
To complicate things further, however, regulators need to find a way to strike a balance between users' expectations to share their lawful content and platforms' interest in picking and choosing what to carry. Treating all platforms as states by imposing must-carry claims for all legal content overshoots the target, to the detriment of speech. Yet treating platforms as purely private players underappreciates their existing social function. We need a mechanism that preserves platforms' contractual autonomy and their ability to shape communities around certain values or preferences, while at the same time safeguarding speakers' due process. Due process, however, has to mean something more than a mere explanation from a human. It has to amount to credible and timely contestability of decisions, which platforms cannot simply override without effort.
Martin Husovec is Assistant
Professor at Tilburg University (appointed jointly by Tilburg Institute for
Law, Technology and Society & Tilburg Law and Economics Center) and
Affiliate Scholar at Stanford Law School’s Center for Internet & Society
(CIS). He researches innovation and digital liberties, in particular the regulation of intellectual property and freedom of expression. He can be
reached at martin@husovec.eu.