For the Balkinization Symposium on Ignacio Cofone, The Privacy Fallacy: Harm and Power in the Information Economy (Cambridge University Press, 2023).
Ignacio Cofone
[The first part of this response appeared yesterday as “AI, Privacy, and the Politics of Accountability Part 1: Post-traditionalist Privacy for AI”]
Privacy Harm is Systemic Because Privacy is Relational
Systemic harms relate to power asymmetries. Solow-Niederman
emphasizes the structural power imbalances inherent in the information economy,
a point echoed by Shvartzshnaider when discussing the opacity of data flows and
by Bietti when identifying surveillance as infrastructural. AI intensifies
these dynamics by enabling large-scale data aggregation and analysis that expand the power exercised over those whose data is held. Governance frameworks must account for
these entrenched imbalances, as failure to do so risks perpetuating harms
masked by claims of neutrality in AI.
For example, AI-powered credit scoring systems have been shown to disproportionately deny loans to minority applicants, even when data on race is excluded. This occurs through inferences such as those drawn from zip codes and purchasing patterns. Guggenberger correctly notes that “the difference between product liability for cars and data lies in the type of harm.” Product liability harm might be systematic, but it is not systemic. Shifting responsibility from individuals whose data is being processed (where consent provisions place it) to the entities that process it responds to critiques that account for power. Doing so requires advocating for governance models that recognize the systemic nature of AI-driven harms.
Haupt’s adaptation of the book’s framework to health privacy
illustrates the harm-related challenges introduced by AI. Consider the use of
AI in analyzing patient records to predict health risks. While beneficial,
these systems can inadvertently expose sensitive information, such as genetic
predispositions, to unauthorized parties through data breaches or
misconfigurations. Her distinction between professional and direct-to-consumer
applications of generative AI highlights the importance of context-adaptable
safeguards. In professional settings, there is a relationship of trust often
anchored in fiduciary duties but, in consumer relationships, even within healthcare,
reliance on an AI system often occurs without meaningful oversight or
accountability. Wearables and health apps show how non-sensitive data can
generate sensitive inferences, complicating traditional models. This leaves
regulatory compliance premised on faulty assumptions (such as the assumption that deidentified data cannot be reidentified) and therefore ineffective at preventing privacy harms. This underscores the need for
systemic protections—such as information fiduciaries or liability frameworks
that account for the cumulative effects of AI-driven data processing. A focus
on systemic mechanisms, such as information fiduciaries or accountability for
(inferential) privacy harm, also accounts for political critiques such as Solow-Niederman’s,
since those mechanisms are designed to work independently of the behavior of
the powerless.
Valuation Requires Distinguishing Privacy Loss, Privacy Harm, and Consequential Harms
A key aspect of making privacy harm work is its valuation, which Pasquale
highlighted. The challenge of assigning monetary value to privacy violations is
pronounced, particularly for relational data, as is often the case with AI.
Determining the harm caused by an algorithm’s biased decision-making, for example,
requires an understanding of both direct and downstream impacts. Harms are often hidden in aggregated inferences, which complicates valuation.
Key to valuation is (a) distinguishing between (descriptive) privacy
losses and (normative) privacy harms and (b) recognizing that probabilistic
information (such as AI inferences) can be harmful absent material
consequences—two aspects that Zeide highlighted. For example, courts have
struggled to assign a value to harms caused by continued surveillance, such as
constant monitoring in the workplace, which erode wellbeing without immediate
material consequences. Key to untangling the situation is recognizing that people lose privacy with any amount of surveillance, and that their privacy is harmed when such surveillance is exploitative (as continued surveillance often is), separately from the material harms that such surveillance may produce.
Zeide’s analysis of probabilistic privacy loss and harm also
captures the context-dependence of AI-driven data practices, where the same
data practice can become harmful in some situations and not in others because harms
arise from (probabilistic) inferences rather than direct disclosures of one
sensitive piece of information. For instance, the use of AI to infer health
risks from wearable device data can lead to insurance premium increases for
entire demographic groups, even if no specific individual’s data is explicitly
compromised. A framework for distinguishing privacy loss, harm, and
consequential damages provides a foundation for addressing these challenges
because it helps identify trade-offs. This intersects with Aggarwal’s analysis
of autonomy trade-offs in consumer finance, raising questions about the dual
impact of AI data processing. While AI data processing can enhance autonomy in
some situations, she explains, it can also undermine it, for example by
facilitating manipulation. And these harms extend beyond autonomy. For
instance, dynamic pricing algorithms for necessary products, such as financial
instruments, often offer personalized discounts to some users while charging
others higher prices based not on their ability to pay but on their inferred need,
deepening inequality. Addressing these trade-offs requires an analysis of probabilistic information because (a) such an analysis highlights the non-binary nature of AI’s effects on privacy and autonomy and (b) an approach to data practices grounded in immaterial harms, in which not all privacy losses are considered harmful, offers a framework for navigating these trade-offs.
A dual remedial regime—combining private rights of action with
public enforcement—also aims to address valuation, as public enforcement avoids
the question of valuation by decoupling sanctions and compensation. Pasquale’s comment
underscores the need for additional clarity on how these mechanisms can be
coordinated to ensure both deterrence and redress—concerns that resonate with
Bietti’s critique of ex-post models. I would suggest, as I mentioned above,
that hybrid approaches that integrate valuation tools into liability frameworks
and complement them with valuation-independent regulatory oversight are an
improvement over the alternative. This does imply, however, as Zeide states, “not
simply asking for courts to apply law to specific facts, but to make normative
choices about legitimate and highly contested values.” So, as Bietti notes,
civil liability mechanisms imply delegating some amount of power to courts (as
do information fiduciaries). And, depending on the institutional and political
context, doing so could have significant drawbacks. This underscores, more than ever, that regulating privacy means shifting power. As Kaminski notes, “who
interprets the law, who negotiates, develops, and changes its meaning” matters.
Methods of Valuation Can be Taken from Longstanding Doctrine
This leads to the central challenge that
Pasquale brings: “what methods of valuation might best ensure just remedies for
wronged data subjects?” I believe that two methods hold promise—and which holds
most promise will depend on each jurisdiction. Two doctrinal categories that capture
the notion of privacy harm as exploitation are dignitary harm and the emotional harm suffered by a reasonable person.
The first way to categorize privacy harm is
as dignitary harm. Privacy harm interferes with data subjects’ dignity because it is a form of instrumentalization, and not being instrumentalized has long been considered an aspect of human dignity. In this form of harm, data subjects are exploited by being treated as a means to self-serving ends (including the end of making a profit) rather than as ends in themselves. Disregarding the negative effects
of profitable data practices on data subjects constitutes an affront to their
dignity: the intrinsic worth of data subjects is disrespected when profit is
made at the expense of their increased risk of material harm.
The dignitary harms categorization fits well
in continental European civil law frameworks, which have a history of relating
privacy to dignity. The GDPR’s fairness requirement, moreover, adds support to
the idea that this form of exploitation is a legal dignitary harm in the EU.
This is because the principle requires that personal data be used fairly, in the sense of using it in ways that people reasonably expect and not in ways that have unjustified adverse effects.
Under this first
doctrinal approach, privacy harm is inflicted when someone is treated as a
means to an end for making profit at their expense. The corollary is that,
under this doctrinal category, national courts should grant the compensation
amounts that they can grant for dignitary harm, particularly (but not solely) under national law. The European Court of Human Rights, for example, has developed
case law on nonpecuniary harms for privacy claims against state actors. Focusing on the dignitary (non-material) value of privacy, the court has awarded nonpecuniary, dignity-based damages, with an average individual award of €16,000,
in approximately two-thirds of the cases where it found a privacy violation, in
addition to granting material damages. For this first form of privacy harm
compensation, courts could grant the quantum they would consider appropriate
for other forms of dignitary harm.
The second way to categorize privacy harm
under national laws is the emotional harm that would have been suffered by a hypothetical
reasonable person. This categorization might fit most common law jurisdictions.
The UK case Vidal-Hall, for example, recognized
this doctrinal way of categorizing non-material privacy harm (emotional harm suffered by a reasonable person), showing how it can also fit doctrinal categories under the national laws of Member States. In most European civil law
jurisdictions, courts seem to retain freedom to apply both dignitary harm and
emotional harm.
Under this second
doctrinal category, damages for privacy harm should be quantified in the same
way that national courts would quantify other emotional harm suffered by a
reasonable person. Courts across
jurisdictions have recognized that emotional distress is sufficient to
establish harm in non-digital contexts. Others before me have argued
that data breaches yield an amount of emotional distress analogous to
the distress that courts otherwise recognize as harmful. The corollary
to this second doctrinal approach would be to grant each person amounts that
courts in that jurisdiction have granted in cases of (non-privacy) emotional
distress. This method allows quantification efforts to follow established practices in national case law. Common law
courts, for example, have awarded as much as $100,000 for such emotional
distress claims in some jurisdictions. Alternatively,
national courts can base this compensation on other forms of intangible harm for
which they have precedent, such as those for assault and battery. This
emotional harm should be determined objectively—not as the emotional harm
subjectively felt by each person. With either the dignity or the emotional harm
approach, therefore, a court could base compensation amounts on national
precedent in a way that captures immaterial harm objectively.
Conclusion
I’m enormously thankful to the symposium contributors for their engagement with the arguments in The Privacy Fallacy. Their reflections bring out important challenges in governance, institutional design, and political economy, particularly for AI. I trust that the points made in the different symposium comments will spark some further discussion on how to address privacy harm and other data harms. And I hope that, with several of us thinking about how to develop accountability frameworks for AI anchored in social values such as privacy, we might have a regulatory regime that encourages socially beneficial innovation while keeping people safe.
Ignacio Cofone is Professor of Law and Regulation of AI at Oxford University. You can reach him by e-mail at ignacio.cofone@law.ox.ac.uk.