For the Balkinization Symposium on Ignacio Cofone, The Privacy Fallacy: Harm and Power in the Information Economy (Cambridge University Press, 2023).
Margot E. Kaminski
Ignacio Cofone has written a
masterful book arguing for increasing the role of liability in information privacy
law. There is little about this substantive call with which I disagree. Yes,
courts need to do a better job of recognizing and remedying non-consequential
privacy harms (pp. 113–114, 157). Yes, as a matter of institutional design,
privacy desperately needs civil liability (p. 89). And yes, much of the
proceduralized control, or consent, on which many privacy laws around the world
rely is steeped in fallacies about how people and markets behave (Ch. 3).
I found myself nodding along and
wishing I lived in a country where (a) the Supreme Court weren’t so wedded to a
deeply consequential understanding of privacy, and (b) the political process in
states didn’t repeatedly result in a devil’s bargain with no private right of
action for individuals. The world Cofone wants for us isn’t the world in which
most of us live. But one role of great scholarship is to make us want to give
that world a try.
For this Symposium, I address Cofone’s take on data protection in particular, which unfolds primarily in Chapter 5. Cofone’s view and overview are careful, informed, and nuanced, unlike a lot of what has been written about the General Data Protection Regulation (GDPR). For example, while pointing out that “[c]onsent is key to data protection law” outside of Europe (p. 90), Cofone observes that under the GDPR there are in fact six lawful grounds for data processing, including legitimate interests. This is a distinction many people miss.
He notes, however, that
individual control (distinct from consent) is still central to the GDPR
(p. 90). Control, he argues, comes with many of the same problems as a
consent-based regime. At the core of Cofone’s arguments against the FIPs-based
individual rights in data protection is the claim that today’s data economy is
inherently and deeply complex (p. 94). Data protection rights like access or
correction, he argues, might work with respect to your health information held
by your doctor. But they “aren’t equipped to operate in scalable, ever-present,
and opaque corporate data practices” (p. 94). That’s not wrong. I personally
think individual rights are necessary but not sufficient, and that they play a
significant role in the
overall design of data protection law, whether they successfully vindicate
individual interests or not. But I don’t disagree that complexity and
shenanigans render them far from perfect in practice.
I love Cofone’s attention to
jurisdictions beyond the U.S. and EU. I love the clarifications of the ways in
which the GDPR, a more recent update amongst global data protection regimes,
differs from how data protection is instantiated elsewhere. Again, this is
detailed, well-researched, careful work. I learned a lot.
Cofone doesn’t dismiss data protection wholesale; instead, he highlights where
it can be useful. This is where things
get more interesting. He notes, correctly in my view, that broader data
protection laws do in fact set “consent-independent, mandatory rules” (p. 98).
The problem, however, lies in the operationalization: these rules are
implemented primarily through procedure rather than substance, which leads to
checklist compliance (p. 98). The procedure-heavy (as opposed to
substance-heavy) nature of data protection tends, according to Cofone, toward
both underprotection and overregulation (pp. 99–101). Instead, Cofone calls
for mandatory provisions that (1) don’t hinge on user agreement; (2) address
systemic rather than individual harms; and (3) are substantive rather than
procedural (p. 104). He insists that data protection alone is not enough: “a
better approach is to combine substantive prohibitions aimed at reducing risk
and liability when risk materializes” (p. 89). Hear, hear.
And he points to the EU AI Act as a better way forward. Erm.
What I bring to bear is a focus
on the role of institutions. This is particularly salient when it comes to how Cofone
contrasts data protection under the GDPR with the EU AI Act (p. 105). The
takeaway: wow, do institutions matter. They matter as much as, if not more
than, the shape of the law on the books. What do I mean? The EU’s admittedly
proceduralist data protection law has, even prior to the GDPR, been ratcheted
up by the Court of Justice of the European Union. The CJEU has a data
protection mandate in the form of human rights law (the Charter of Fundamental
Rights) that it has interpreted expansively. The CJEU, institutionally
speaking, is the backstop of the entire EU data protection enterprise.
Liability, from where I stand, is
a crucial aspect of institutional design (particularly, as Doug Kysar reminds
us, in risk
regulation, so central to both data protection and AI law). That’s not
really what Cofone is concerned with. He’s more concerned with forcing
companies to internalize externalities, and with making harmed individuals
whole. He’s also very concerned with shifting us away from an ineffective
consent-based paradigm, which is admirable. But liability also plays a role in
overall regulatory design. It can serve an information-forcing function when
regulators get too cozy with the regulated. It can spur substantive policy
development by bringing poorly regulated behavior to light. It can, through
common-law processes, provide substantive interpretations of broader standards
by institutions other than regulators: courts. But the mere existence of liability
isn’t the only thing that matters. Having in the mix a court backed by a human
rights instrument that is willing to sometimes privilege data privacy as a
higher value gives significant heft to what may often look like proceduralist
garble. The individual claim triggers the rights-effecting court.
Cofone praises the EU AI Act, in contrast with data protection, for actually
banning specific practices, and for otherwise being “structured by risk”
(p. 105). Sure. The AI Act does ban some things, and that’s fascinating. (So
are the exceptions, a result of torturous political compromise, as I
understand it.) But the AI Act comparatively flails when it comes to
institutions. There’s no private right of action! There are regulators, but
they lack existing expertise in AI systems. It is not clear, to me at least,
what role if any the CJEU will play. When it comes to a lot of the core
requirements for high-risk systems, the Act delegates substantive policymaking
to standards-setting organizations rather than public lawmakers. (It’s also
rife with proceduralism, FWIW.)
My point is this: who interprets the law; who negotiates, develops, and
changes its meaning; who makes it, monitors it, and enforces it; all of this
may be as significant to its ultimate efficacy as
whether it’s structured on its face around substance or procedure. A private
right of action that sounds in tort law is different from a private right of
action that sounds in a human rights instrument. If we’re going to wish for a better
world, why not try for that as well?