The study of information disorder is a study in
epistemic anxiety. The anxiety that false information may be harmful to our
democracy is matched by an anxiety that we have little insight into the prevalence
and effects of exposure to false information. This, in turn, is matched by an
anxiety that digital information platforms are simply funhouse mirrors,
distorting and enlarging analog social pathologies. The liturgy then completes
its recursive turn in the anxiety that we truly know nothing without greater access to platform data, concluding in a supplication to the platform companies (or to some higher power, a European Union consultative process, for example).
Amen.
Yet the 2020 election has accentuated an opportunity for progress, both in assessing the moral harm from information disorder and in measuring it. The key is to identify, as the starting point for observation, the species of harm that are of the highest moral urgency. In particular, the material experience of the election suggests that digitally disseminated, accelerated, and amplified false information should be understood “diachronically,” to borrow a term from linguistics. That is, the time, place, and manner of false information all matter, as do its provenance and the understanding of those most affected. History, in a wide sense, is meaningful
for grasping both the mechanics of false information and the magnitude of moral
harm.
The first contextual dimension of information
disorder underscored by the election is temporal. Timing matters. Seeing,
disseminating, and amplifying false information about the election — during a
period in which many millions of Americans were making a decision about how to
vote — was a clear and present threat to electoral integrity. This would not
have been true of precisely the same false information a calendar year
earlier. Similarly, while the amplification of false information about the
election a year earlier would have been deleterious to the rule of law in a
general sense, amplifying the same false claims in the aftermath of the election has
constituted a graver threat. Today, the rule of law is potentially materially
weakened because, it appears, many people have decided, here and now, thanks to digitally accelerated false information, that the recent democratic process was conducted illegitimately.
The platforms’ urgency in addressing COVID-19 misinformation evinces a similar recognition that context matters. Many have noted that, despite some efforts to curb misinformation related to the pandemic, the same patterns of false information continue to gain traction in other critical health categories, such as vaccines. But, of course, declining vaccination rates had not been seen as a public health crisis until they became one. COVID-19, by contrast, has been an emergency since its appearance.
There is some utility to this distinction. False information about COVID-19,
particularly during a period of community spread, is a clear and present danger
to public health.
False information in the right place and at the
right time vastly amplifies the stakes. This is not to suggest that false
information outside of emergent circumstances is benign. To the contrary,
social media is not helping us in the quest to eradicate measles; thanks to
vaccine-related false information, it’s likely hurting us. It does suggest,
however, that the magnitude of harm is greater when timing is critical.
A second dimension is spatial or contextual,
particularly with respect to who is most affected. There is increasing evidence
that harmful content of all kinds, including false information about elections,
both disproportionately targets — and
affects — historically marginalized communities, including women and people of color. This is a
more urgent harm for several reasons. First, to adopt Merrill Singer’s term, false information that targets communities of color, particularly about life- or democracy-critical events like elections, is “syndemic.” The term describes the known biosocial interaction between diseases and the social context of their spread. False
information about elections targeted toward people of color is syndemic on its
face: It amplifies and interacts with the effects of other, known analog
deterrents to voting, including various voter suppression techniques. Second,
the spatial context of identity introduces a new moral harm. That the pattern of false information may be discriminatory is, if accurate, objectionable on that basis alone, irrespective of any potential disparate impact.
Assuming there is something to the diachronic view, it would suggest a kind of prioritarian argument for addressing certain kinds of false information first and most aggressively, based on their “history” and their temporal and spatial context.
This reshapes the question of measurement, with
at least two early moves worth considering. The first is to develop a consensus
on the types of events most likely to signal a magnified potential for harm. Pandemics and national elections seem like easy cases. There is likely other low-hanging fruit. An initial framework, neither exhaustive nor exclusive, would be a start.
The second is to improve our systems for understanding the exposure to, and influence of, critical false information in ways that can be disaggregated by demographic proxies for vulnerable communities. And before we proceed down the important but well-worn path of decrying proprietary control of data, we should also insist that companies themselves build this lens into their enforcement efforts and reporting. Airbnb, for example, developed a complex but intriguing system, in consultation with civil society organizations, to track potential incidents of racial discrimination on its platform.
Even if we do proceed down that known and unsatisfying path of noting that the lack of access to platform data is a fatal research flaw, a diachronic, prioritarian lens can illuminate new and potentially productive paths forward. For example, the 16 “critical infrastructure” sectors are subject to special information policies that enable the uniquely protected sharing of proprietary data so that the Department of Homeland Security can monitor their health. Some have suggested that social media platforms be labeled as “critical infrastructure” to trigger federal cybersecurity protections. Another recent example of such a practice is the application of fair housing policy, which has long been focused on ameliorating, or preventing, disparate impact in housing. From 2015 to 2018,
localities were required to document and track patterns of bias and
discrimination in housing in order to meet Affirmatively Furthering Fair
Housing obligations.
This diachronic approach to measuring the harm
of information disorder, especially with regard to critical events like
elections, is merely a sketch meant to raise some useful questions. But, as a
conceptual approach, it has a few salutary features. First, it can delimit
information disorder research by focusing our inquiry on the false information
phenomena that matter most. Second, it can accommodate disputes about the
relationship between misinformation and other social pathologies by focusing on
the syndemic effects of information disorder in a wider sociopolitical context.
Third, it can clarify and narrow the claims to privately controlled data,
locating them in more specific arguments about critical or crisis moments.
This approach is unlikely to satisfy our
descriptive, social scientific pieties. But it may help us to address the
normative exigencies that are moving ahead whether we have scientific clarity
or not. Consider the responses of the major social media platforms,
particularly Facebook, Twitter, and YouTube, both in the lead-up to Election Day 2020 and in the weeks that followed. They can rightly be characterized as
inconsistent, halting, and inconstant. They can also be described as frenetic
and urgent.
The race to thwart the problem is on — with or without fundamental
understanding.
Sam Gill is senior vice president and chief program officer at the John S. and James L. Knight Foundation.