In an arresting and powerful passage in The Cult of the
Constitution, Mary Anne Franks argues that “the self-serving and
irrational appropriation of constitutional principles to justify lies,
harassment, discrimination, and outright violence endangers society as a whole
and threatens to destroy democracy." Indeed, a public sphere where words lose their meaning, disinformation spreads like wildfire, cyber mobs silence the vulnerable with online assaults, and hatemongers incite violence against unpopular persons or groups is on a fast track toward both democratic decline and the erosion of social consensus on basic beliefs and obligations. Disinformation trafficked by state actors and adopted by guileless targets has not only frayed social cohesion but has also caused the spread of dread diseases. Individuals set upon by cyber mobs retreat from online social networks, undermining the aspiration of a public debate enriched by diverse voices. The El Paso shooter turned the hateful beliefs of fellow 8chan posters into deadly reality, killing parents, grandparents, and kids doing back-to-school shopping at Walmart.
After
every major scandal involving public-health-endangering snake oil and other
forms of disinformation, cyber mob attacks, and incitements to violence, big
tech firms apologize. They promise to do better. Sometimes, they make a show of
hiring more content moderators. Sometimes, they ban the worst sources of misinformation and harassment, as when YouTube and Facebook banned Alex Jones after he spent years harassing the victims of school shootings.
However, every step toward safety and responsibility on platforms is in danger of reversal thanks to a brew of concern fatigue, negligence, and, at bottom, the potential profit from eyeballs and ears drawn to arresting, graphic content.
We
have seen this wash, retreat, and repeat cycle with Google and anti-Semitism. While
the search giant labeled some troubling Nazi and white supremacist content and offered counterspeech alongside it in 2004, it later backslid, until it was called out
by Guardian journalist Carole Cadwalladr and prominent academic Safiya Noble in
2016. By 2018, details emerged about YouTube’s algorithmic rabbit hole that was
AI-optimized to lure unsuspecting viewers into alt-right content, and worse.
Will Sommer and Kathleen Belew have chronicled the spread of radicalizing
content on social media via closed groups and quasi-public posts. As long as tech
companies optimize our automated public spheres for profit, we expect the same
pattern: flurries of concern spurred by media shaming, followed by a long
slide back into irresponsibility.
Franks’s
important book helps us understand how a combination of idealism and cynicism
drives this troubling pattern. Techno-idealists at firms endeavor to “keep up”
content, because it fits with the myth that the First Amendment requires
companies to do so, even if their employers are profit-driven companies and not
state actors, and even if the content is manifestly false and harmful. Cynics realize that such cyber-libertarianism lacquers a veneer of principle onto controversial decisions, and that the non-response it justifies is far cheaper than employing fact checkers, journalistic ombudspeople, and content moderators. To be sure, not all tech
companies respond this way, especially in the face of really bad press. But in the end, the combination of techno-idealism and techno-cynicism tends to lead to a far more hands-off approach.
This
is big tech’s absentee ownership problem. Massive platforms are gradually
taking over parts of the public sphere once constituted by more traditional
media, especially media like NPR, PBS, and the BBC that are repositories of the
public trust. Also displaced are community groups and other civil society
actors. The platforms are not willing to engage in the kind of structuring and
vetting that was once part of a denser and richer communicative landscape. They
advert to the promise of algorithmic solutions, just around the corner, that
will vanquish terrorist content, hate speech, and other troubling online
activity. But experts in media studies and content moderation recognize this as an unrealistic technical fix for what is ultimately a deeply political and social problem.
So
platforms rely on something like “law’s halo” to grant some legitimacy to
their own policies of inaction. In other words, when people think a course of action is either commanded or commended by the law, they are more likely to view it as meritorious. Ironically, the First Amendment actually
gives platforms significant latitude in deciding what they will and will not
include or prioritize in feeds. Like the South Boston Allied War Veterans
Council, which wanted to keep gay marchers out of its St. Patrick's Day parade, platforms can and do assert their own First
Amendment right to stop the state from interfering with the feeds they
generate. For some, that makes their repeated failures to adequately deal with
harassment, lies, and public-health-endangering hoaxes seem all the more
troubling. To boot, as Franks documents and exposes, platforms are even further
protected in the choices they make thanks to a federal law that broadly
immunizes them for under- and over-filtering user-generated content. They have
not been treated as common carriers in this sphere. They have not been treated
as broadcasters with must-carry obligations. They have latitude to act, just as
newspaper editors do, and unlike real-space newspapers they enjoy broad
immunity from liability in their decisions. From the perspective of a moral or
even religious devotion to laissez-faire and neglect, they are doing exactly
the right thing.
That is where Franks's repeated references to the fideistic belief systems now justifying our contemporary automated public spheres are especially illuminating. Some
religions insist on a certain fit between faith and reason, demanding that
religious authorities offer some connection between the rules they impose and a
plausible conception of human flourishing. Fideistic authorities, by contrast,
demand adherence to principles on the basis of faith alone. For them, the faith
does not need to adjust to the world in any way; the world must adjust to the
faith. Like adherents of the praxeological, axiom-driven approaches to economics popularized by Mises and Rand, fideistic constitutionalists believe in broad and sweeping principles to be applied without exception or accommodation. Fiat oratio, ruat caelum: let there be speech, though the heavens fall.
We
can do better. We can amend the federal law immunizing tech
platforms from liability to condition that immunity on responsible, reasonable
content moderation practices. We can ensure that platforms respond to
defamatory deep fakes, cyber stalking, and other harmful illegality without
extirpating expression online. The internet will not break. We need only break the spell of cultish devotion to free speech absolutism in order to clear the ground for a pragmatic reconstruction of a sound and safe online sphere. Franks's bold, eloquent, and immensely
insightful work can serve as a jurisprudential foundation for dispelling such
illusions.
Frank Pasquale is a Professor of Law at the University of Maryland Francis King Carey School of Law. You can reach him by e-mail at pasquale.frank at gmail.com.
Danielle Keats Citron is a Professor of Law at Boston University School of Law. You can reach her by e-mail at dkcitron at bu.edu.