In The Privacy Fallacy: Harm and Power in the Information Economy, Ignacio Cofone delivers a powerful and much-needed rebuke of our current approach to regulating privacy in the information economy. Synthesizing and building on prior literature
to which Cofone himself has contributed, he shows us how and why the largely
individualistic, contractual and procedural methods of data protection and data
privacy law have persistently failed to deliver. Cofone’s arguments drawn from
the (behavioral) economics of data processing are especially persuasive. As he argues,
under conditions of asymmetric information and power between consumers and
firms, consumer irrationality, uncertainty about future data use, and the
relational, non-rivalrous and only partially excludable nature of personal
data, bilateral contracts for personal data will be inherently incomplete. This
is increasingly true in a world of big data and sophisticated AI systems, in which
it is much more difficult for individuals to meaningfully consent to future
inferences and uses of their personal data.
Given
these fault lines, Cofone advocates for a shift from the existing contract- and
market-based approach to privacy regulation to a more top-down, tort-like liability
regime focused on reducing privacy harms. A handful of key, recurring motifs bind
Cofone’s narrative, eloquently leading the reader to his prescriptions. These include the “multiparty information economy,” in which bilateral data trades have been replaced by multilateral trades with third parties not privy to the original contract; the reconceptualization of personal data as relational and social rather than merely individual; and the reframing of informational exploitation as a systemic rather than solely individual harm, and of privacy as a social rather than solely individual value.
I
agree with much of Cofone’s description of the problem, as well as his prescriptive
direction of travel. But, like any thought-provoking piece of work, The
Privacy Fallacy also opens up more questions and offers opportunities for
further contemplation. I shall focus here on just one of these questions, drawn
from the title of the book. Namely, what exactly do we mean by “privacy” and
“harm” in the information economy? Privacy scholars, and courts, have grappled
with this question for decades (for relatively recent examples, see here,
here, here, and here).
But reading The Privacy Fallacy left me still scratching my head over
where privacy, and privacy harm, begin and end.
I
theorize privacy mostly through the lens of consumer financial markets and their
regulation. This lens has led me to take seriously the normative trade-offs in
the information economy, particularly intra-normative, “autonomy-autonomy”
trade-offs. As I have argued elsewhere (see here and
here),
there are important autonomy-autonomy trade-offs implied by the greater use of
consumer data in digital markets. In consumer credit markets, for example,
consumers stand to gain autonomy from data-driven innovations such as alternative
credit scoring and Open
Banking, which allow them to access credit and other financial services on
more favorable terms. A staggering 49
million Americans have either missing or insufficient data on their credit
files, limiting their ability to access credit on favorable terms. For some, but
not all, of these consumers, processing more personal data and using more data-driven
inferences in credit decisions improves their credit outcomes and expands the opportunities that flow from credit, such as access to housing and education, in turn enhancing their autonomy and wellbeing.
But
to the extent that personal data processing per se, including drawing
inferences from personal data, is considered an intrinsic privacy harm and thus autonomy-diminishing (as I read Cofone, and others, to posit), consumers who stand to gain autonomy from the use of data-driven inferences in, say, credit decisions would be foreclosed from this benefit in order to
protect them from the loss of autonomy resulting from the very act of drawing
inferences from their personal data. Is this how we should measure and balance
autonomy/privacy losses and gains from personal data processing, if they are
even commensurable? Once we concede that there are (potentially greater) instrumental
benefits from processing personal data, how can we also protect against
intrinsic privacy harms? How should we navigate these autonomy-autonomy
trade-offs?
In my own view,
to the extent that personal data can be used to improve material and nonmaterial outcomes for consumers (for example, by improving access to affordable credit), these benefits can and often should outweigh the intrinsic harms of data processing per se. In this context, a strongly pre-emptive and precautionary approach that starves credit markets of consumer data and data-driven inferences in order to protect against intrinsic privacy harms would ultimately be autonomy-diminishing for consumers, and thus undesirable. This
is not to say that data inferences could not also produce negative outcomes for some consumers, such as higher credit costs due to the revelation of negative characteristics. The challenge is that these outcomes are not always knowable at the level of the data itself, prior to its use. As Cofone rightly points out, there is inherent duality and uncertainty in data processing outcomes: the same data can be used in ways that both benefit and harm consumers, and which outcome will materialize is often unknown, unknowable and unmeasurable prior to use. This duality and uncertainty make it harder to protect against intrinsic privacy harms from data inferences without also squandering the future, consequential benefits of those inferences.
A
more utilitarian, consequentialist approach to consumer privacy and its
regulation, one focused on mitigating harmful data uses rather than the
processing of personal data and data inferences per se, offers one way of
navigating the autonomy-autonomy dilemma. It may not be the only solution, and it may be better suited to certain sectors of the information economy than to others. At the very least, however, if we are going to regulate privacy (harm)
in the information economy, we must be more attuned to its essentially contested
nature.
Nikita
Aggarwal is Associate Professor of Law at the University of Miami School of Law.
You can reach her by email at nikita.aggarwal@miami.edu.
This post is adapted from remarks delivered at the 2024 Annual Meeting of the
Law and Society Association.