Build Your Own Intermediary Liability Law: A Kit for Policy Wonks of All Ages
New Controversies in Intermediary Liability Law
In recent years, lawmakers around the world have proposed a lot of new intermediary
liability (IL) laws. Many have been miscalibrated – risking serious collateral damage without necessarily using the best
means to advance lawmakers’ goals. That shouldn’t be a surprise. IL isn’t like
tax law or farm subsidies. Lawmakers, particularly in the United States, haven’t
thought much about IL in decades. They have little institutional knowledge
about which legal dials and knobs can be adjusted, and what consequences to
expect when they are. This post will lay
out a brief menu, framed for a U.S. audience, of IL legal mechanisms. Most are
relatively well-understood from laws and literature around
the world; a few are
newly emerging ideas. It foregrounds legislative choices that affect free
expression, but does not try to identify hard limits created by the First
Amendment or other free expression laws.
Of course, crafting laws isn’t really like ordering off a menu. It’s more like cooking:
the ingredients intermingle and affect one another. A law holding platforms
liable for defamatory speech they “know” about, for example, may mean something
different depending on whether the law lets accused speakers explain and defend
their posts. But isolating the options in modular form can, I hope, help in
identifying options for pragmatic and well-tailored laws.
IL laws generally
try to balance three goals. The first is preventing harm. It’s no accident that
intermediary immunities are typically weakest for content that poses the greatest
threats, including material criminalized by U.S. federal law. The second is
protecting speech and public participation. For this goal, one concern is to avoid
over-removal – the well-documented phenomenon of platforms cautiously deferring
to bogus legal accusations and taking down users’ lawful speech. Another is to encourage
new market entrants to build, and investors to fund, open speech platforms in
the first place. The third, related goal is encouraging technical innovation
and economic growth. A rule that creates great legal uncertainty, or that can
only be enforced by hiring armies of moderators, raises formidable barriers to
entry for potential competitors with today’s mega-platforms. Lawmakers use the
doctrinal dials and knobs listed in the remainder of this post to adjust policy
trade-offs between these goals.
Major Free Expression Considerations
Who decides what speech is illegal?
Outside the United
States, blanket immunities like CDA 230 are rare. But it’s not uncommon for
courts or legislatures to keep platforms out of the business of deciding what
speech violates the law. One IL model widely endorsed by free expression
advocates holds platforms immune unless a court or other government authority rules
content illegal. In practice, this highly speech-protective standard typically has
exceptions, requiring platforms to act of their own volition against highly
recognizable and dangerous content such as child sex abuse images. Lawmakers who
want to move the dial more toward harm prevention without having platforms
adjudicate questions of speech law can also create accelerated administrative
or TRO processes, or give platforms other responsibilities such as educating
users, developing streamlined tools, or providing information to authorities.
Must platforms proactively monitor,
filter, or police users’ speech?
The free expression literature includes strong warnings against making platforms monitor their
users. Many IL laws expressly bar such requirements, though they have gained
traction in recent European legislation. One concern is that technical filters
are likely to over-remove, given their inability to recognize contexts like news reporting or parody. (However,
filtering is relatively accepted for child sexual abuse images, which are
unlawful in every context.) Another is that, when platforms have to review and
face over-removal incentives for every word users post, the volume and
invasiveness of unnecessary takedowns can be expected to rise. Legal exposure
and enforcement costs under this model may also give platforms reason to allow
only approved, pre-screened speakers – and deter new market entrants from building open platforms at all.
Must platforms provide “private due
process” in takedown operations?
Rules governing platforms’ internal notice-and-takedown processes can protect against over-removal. A
widely supported civil society document, the Manila
Principles, provides a list
of procedural rules for this purpose. For example, a platform can be required
or incentivized to notify speakers and let them defend their speech – which may
help deter bad-faith notices in the first place. Accusers can also be required
to include adequate information in notices, and face penalties for bad-faith
takedown demands. And platforms can be required to disclose raw or aggregate
data about takedowns, in order to facilitate public review and correction.
Can platforms’ use of private Terms of
Service prohibit lawful expression?
Platforms commonly prohibit disfavored but legal speech under their Terms of Service (TOS). To
maximize users’ free expression rights, a law might limit or ban this
restriction on speech. In the United States, though, such a law might violate
platforms’ own speech and property rights. Platforms’ value for ordinary users
would also decline if users were constantly faced with bullying, racial
epithets, pornography, and other legal but offensive matter. (I address the relevant
law in depth here and explore possible regulatory models in that paper.)
Can speakers defend their rights in court?
Platforms’ over-removal incentives come in part from asymmetry between the legal rights of
accusers and those of speakers. Victims of speech-based harms can often sue
platforms to get content taken down. Speakers can almost never sue to get
content reinstated. A few untested new laws in Europe try to remedy this, but
it is unclear how well they will work or how speakers’ claims will intersect
with platforms’ power to take down speech using their TOS.
Are leaving content up and taking it
down the only options?
IL laws occasionally
use more tailored remedies, in place of binary take-down/leave-up requirements
– like making search engines suppress results for some search queries, but not
others. Platforms could also do things like showing users a warning before displaying
certain content, or cutting off ad revenue or eligibility for inclusion in
recommendations. In principle, IL law could also regulate the algorithms
platforms use to rank, recommend, or otherwise amplify or suppress user content
– though that would raise particularly thorny First Amendment questions and be
extremely complex to administer.
Treating Platforms Like Publishers
Making platforms liable for content they edit or curate
Most IL laws
strip immunity from platforms that are too actively involved in user content.
Some version of this rule is necessary to distinguish platforms from content
creators. More broadly, putting liability on an entity that exercises
editor-like power comports with traditional tort rules and most people’s sense
of fairness. But standards like these may play out very differently for
Internet platforms than for traditional publishers and distributors, given the comparatively
vast amount of speech platforms handle and their weak incentives to defend it.
Laws that reward passivity may also deter platforms from trying to weed out
illegal content, and may generate legal uncertainty about features that go beyond
bare-bones hosting and transmission.
Making platforms liable for content they know about
Many legal systems hold platforms liable for continuing to host or transmit illegal
content once they “know” or “should know” about it. Laws that rely on these scienter standards can protect legal
speech somewhat by defining “knowledge” narrowly or adding elements like private
due process. Other legal regimes reject scienter
standards, considering them too likely to incentivize over-removal.
Using “Good Samaritan” rules to
encourage content moderation
Platforms can be deterred from moderating content by fear that their efforts will be used
against them. Plaintiffs can (and do) argue that by moderating, platforms
assume editorial control or gain culpable knowledge. Concern about the
resulting perverse incentives led Congress to create CDA 230, which makes knowledge
and control largely irrelevant for platform liability. This encouraged today’s
moderation efforts but also introduced opportunities for bias or unfairness.
Different Rules for Different Claims and Platforms
IL laws often tailor
platforms’ duties based on the claim at issue. For example, they may require
urgent responses for particularly harmful content, like child sex abuse images;
deem court review essential for claims that turn on disputed facts and nuanced
law, like defamation; or establish private notice-and-takedown processes in
high-volume areas, like copyright.
Platform technical function
Many IL laws
put the risk of liability on the entities most capable of carrying out targeted
removals. Thus, infrastructure providers like ISPs or domain registries
generally have stronger legal immunities than consumer-facing platforms like YouTube,
which can do things like take down a single comment or video instead of a whole
page or website.
Some lawmakers and commentators have raised the possibility of special obligations for mega-platforms like
Google or Facebook. Drafting such provisions without distorting market
incentives or punishing non-commercial platforms like Wikipedia would be
challenging. In principle, though, it might improve protections on the most
popular forums for online expression, without imposing such onerous
requirements that smaller market entrants couldn’t compete.
General Regulatory Approach
Bright-line rules versus fuzzy standards
IL rules can
hold platforms to flexible standards like “reasonableness,” or they can prescribe
specific steps. Platforms – especially the ones that can’t hire a lawyer for
every incoming claim – typically favor the latter, because it provides relative
certainty and guidance. Free expression advocates also often prefer clear processes,
because they reduce the role of platform judgment and allow legislatures to add
procedural protections like counter-notice.
Liability for single failures versus
liability for systemic failures
European laws and proposals accept that takedown errors are inevitable and
do not impose serious financial penalties for individual items of content.
Instead they penalize platforms if their overall takedown system is deemed
inadequate. This approach generally reduces over-removal incentives, but is
more viable in legal systems with trusted regulators.
Liability for platforms versus liability for users
Users may see little reason to avoid disseminating unlawful content when the legal
consequences of their actions fall primarily on platforms. Laws could be
structured to shift more risk to those individuals. For example, claims against
platforms could be limited if claimants do not first seek relief from the
responsible user. Or platforms’ immunities could be made contingent on
preserving or disclosing information about online speakers – though this would
raise serious concerns about privacy and anonymity rights.
Daphne Keller is Director of Intermediary Liability at the Stanford Center for Internet and
Society, and was previously Associate General Counsel at Google. She can
be reached at firstname.lastname@example.org.