For the Balkinization Symposium on Ignacio Cofone, The Privacy Fallacy: Harm and Power in the Information Economy (Cambridge University Press, 2023).
Ignacio Cofone
Introduction
I’m very grateful to the contributors to this Balkinization symposium for their sharp analyses of The Privacy Fallacy—as I am to Jack Balkin for putting the symposium together. The comments in the symposium highlight key issues (and many challenges) in regulating the information economy and, particularly, in preventing and remedying harms in the context of data and AI. I would like to structure this response by highlighting two recurring themes across the reviews. The first theme, examined in this entry, is the limits of traditional consent-based and procedural frameworks in addressing the collective and inferential nature of privacy under AI. Most contributors highlighted the limitations of these mechanisms, especially when AI is involved, and shared the argument that privacy law must shift toward frameworks that prioritize substantive protection—the question is which ones. The second theme, which all commentators touched upon in one way or another and from different angles, is the issue of defining the boundaries of privacy harm in the information economy, which is examined in an entry that will follow this one. Across both themes is the issue of power.
Relational and Collective Dimensions of Privacy
Many commentators (chiefly, Elana Zeide, Frank Pasquale, Claudia Haupt, and Nikita Aggarwal) analyzed the idea that privacy is relational, meaning that people’s personal information is interrelated. AI systems challenge the usefulness of relying on individual autonomy by aggregating innocuous data points into harmful inferences, often group inferences. This reality points to the need for a regulatory approach that prioritizes collective social values over individual control of discrete pieces of information. Countless everyday examples illustrate this collective nature of privacy harm. Opaque AI decision-making systems that magnify vulnerabilities, as seen in ride-hailing and gig economy platforms, reflect this relational dynamic. The Cambridge Analytica incident shows how aggregated inferences can produce systemic harms: the firm used innocuous Facebook “likes” to infer political affiliations and target information campaigns, with consequences that were borne collectively. Capturing these harms requires regulatory approaches that address collective risks.
Privacy being relational means that the answer to which data practices are acceptable is inevitably context-specific. The normative trade-offs in data uses and disclosures that Nikita Aggarwal’s comment explored illustrate this idea: the existence of trade-offs between different individuals’ autonomy suggests that (a) individual decisions over information have spillover effects on others and (b) collective harms might require collective governance models instead. Aggarwal added that many forms of data processing can be useful and desirable, so regulatory interventions on data uses should vary with their context. It would be bad law and bad policy to call any data collection, processing, and disclosure harmful. The issue is that the mechanisms we have to distinguish harmful practices from non-harmful ones (e.g., whether a particular person agreed to them) no longer work well.
These concerns about individuality in the information economy, reflected in Guggenberger’s and Aggarwal’s comments, connect with the need for systemic approaches (such as Claudia Haupt’s emphasis on trust, suggesting that fiduciary models could reduce reliance on consent and address relational harm). Think of when individuals must rely on (often opaque) AI systems that have systemic effects. Addressing these harms requires a combination of ex-ante safeguards and ex-post remedies. Think of ride-hailing algorithms that determine pricing and availability, often leaving drivers and riders in positions of dependence without recourse to challenge unfair outcomes. The call for an explicit acknowledgment of the relational nature of privacy, together with an acknowledgment of power imbalances (for example, in light of Frank Pasquale’s, Solow-Niederman’s, and Bietti’s comments), underscores the need for principle-based regulatory measures.
Kaminski’s proposal resonates with such a focus on relational harm. It highlights the role of governance (in the sense of the relationship between power dynamics and harm within privacy law) and draws attention to the challenges of operationalizing harm-based accountability mechanisms. Kaminski’s power-centered comment converges with Solow-Niederman’s focus on systemic inequalities, emphasizing how AI systems often obscure their power dynamics through claims of neutrality. And, I believe, their emphasis on governance frameworks that address these power asymmetries connects with the book’s advocacy for corporate accountability. For instance, AI hiring tools trained on biased datasets can disproportionately disadvantage women or minority applicants based on traits they share with people in the training dataset; these tools, which reinforce pre-existing biases, show how systemic inequalities are perpetuated through ostensibly neutral technologies. Further, algorithmic transparency requirements often fail to uncover how specific features of AI design embed the interests of the entities deploying them, such as the prioritization of profit over fairness in gig economy platforms.
The Decline of Individual Consent Provisions and the Governance Vacuum
Haupt underscored the limitations of consent provisions,
particularly in environments where individuals face overwhelming informational
asymmetries or information overload—a point also raised by
Shvartzshnaider and Pasquale. Kaminski’s comment complements this view by
emphasizing how consent mechanisms often obscure power imbalances, leaving
individuals with limited agency against systems designed to prioritize
organizational interests. AI systems exacerbate these issues by unshackling
inferences—and enabling decisions about people that are often opaque to them. Using
any app might generate inferences about a person’s intimate characteristics and behavior that they could never have anticipated, given the opacity of data processing. This complements critiques of “notice-and-choice” frameworks,
which fail to account for the cascading effects of data processing and
algorithmic decision-making. Even if individuals were to read and understand
every privacy policy—a nearly impossible task—they would still lack visibility
into how their data is aggregated and used for inferences, a problem amplified
by AI.
Solow-Niederman and Yan Shvartzshnaider agree that the onus should
not be on individuals to enforce their data rights. Shvartzshnaider’s proposal
for “privacy inserts,” adapted from the medical domain, offers a potential way to mitigate the information asymmetries exacerbated by AI through a collective mechanism that reduces individual cognitive burdens. It is impossible to
meaningfully engage with privacy policies, given their simultaneous complexity
and ambiguity (and their length, and how often they change). So regulatory
mechanisms in other fields that face this problem can inform regulatory regimes
for privacy and AI.
Nik Guggenberger’s exploration of the tensions between tort
liability and consent-based governance is especially pertinent for AI. To clarify,
I don’t believe that consent is unimportant in privacy, but rather that online
agreements-at-scale are unimportant because they don’t amount to any meaningful
notion of consent. Individual consent provisions are, for example, ill-suited
to address algorithmic decision-making, where harms often emerge from
aggregated data and probabilistic inferences. The inference problem renders
consent-based safeguards underinclusive, necessitating liability models that
account for harms generated through AI inferences. As Guggenberger notes, meaningful
consent “is incompatible with informational capitalism’s dominant business
model of data extraction for behavioral manipulation.” For instance, AI-based
recommendation systems on e-commerce platforms might lead users to unknowingly
make suboptimal financial decisions, such as overspending due to algorithmically induced impulse buying. Addressing this tension requires
complementary safeguards, such as prohibitions on high-risk AI practices, in a
dual regulatory approach.
Governing the information economy requires an approach that
leverages both public enforcement by regulatory agencies and private rights of
action, a point raised by Pasquale (“how should courts and regulators
coordinate in order to rationally divide the labor of privacy protection?”) and
Guggenberger (“a dual regulatory approach to replace consent-based
governance”). For example, dual remedial regimes highlight the complementary
roles of fines imposed by regulatory bodies and individual claims for damages.
Public enforcement can target systemic practices, while private litigation can
provide tailored remedies to individuals. For instance, public agencies could
enforce penalties against companies for systemic algorithmic bias, while
individuals harmed by specific biased outcomes could seek compensation through
the courts. This dual approach ensures that large-scale AI harms are addressed while preserving individual recourse.
The inevitability of some normative trade-offs that Aggarwal’s
comment highlights leads to legitimate concerns about the politics of power.
Alicia Solow-Niederman’s observation that privacy is inherently political, owing to its close relationship to power, is particularly relevant for AI. The design and
deployment of AI systems often reflect contested value judgments, such as particular
notions of fairness tied to output variable accuracy. Predictive policing
algorithms have faced criticism for disproportionately targeting minority
communities, not due to explicit bias but due to historical data patterns
reinforcing systemic inequalities. Precisely because I agree that substantive
privacy protections require normative decisions, framing privacy as a
foundational social value (critical to autonomy, democracy, intimacy, and
equity) can help anchor those political decisions—and perhaps even bridge some
political divides. A normatively grounded approach that clarifies which values are potentially in trade-off, such as the one the book advocates, helps respond to the call for frameworks that address systemic harms rather than individual choices in a context of competing values.
While I advocate for a dual regulatory approach, I recognize Guggenberger’s concern that liability mechanisms (such as tort law) face the challenge of requiring courts to consider the particular situations of many individuals and groups. I nevertheless believe that Guggenberger is incorrect that privacy “requires an inquiry into the individual’s expressive preferences”; a well-structured liability system would avoid interrogating individual preferences in favor of an objective standard.
Liability as a Regulatory Tool
Several contributors, notably Pasquale, explored the potential of
liability mechanisms, tort law among them (but also statutory liability and
information fiduciaries), to address privacy. Kaminski eloquently explained that liability “is a crucial aspect of institutional design,” adding information-forcing functions and the interpretation of standards to its compensation and deterrence roles. Liability mechanisms, in that sense, address information
and power asymmetries embedded in AI systems where regulators and individuals
have limited information and limited tools.
Elettra Bietti raises legitimate concerns about the distributive
effects of ex-post enforcement mechanisms such as liability. Bietti aptly notes
that ex-post enforcement has counterintuitive distributive effects in
addressing systemic harms. For example, in AI-driven hiring systems, harm may
manifest not as rejections of obviously qualified candidates but as a gradual
narrowing of opportunities over time for members of particular groups due to
algorithmic bias, complicating the assignment of liability. Ex-ante regulatory
frameworks are therefore essential to complement ex-post liability,
particularly in high-risk contexts like automated decision-making for hiring.
This principle should extend to other high-risk technologies, like facial recognition. Proactive auditing of facial recognition systems could reveal racial biases before deployment, preventing widespread harm rather than relying on class actions after the fact. Prohibitions on high-risk data practices align with Bietti’s call for proactive measures that prevent harm rather than merely address it. Her critique relates to valuation concerns raised in Pasquale’s comment, as ex-ante measures can indeed consider the cumulative social impacts of AI systems. The issue is weighing the comparative (e.g., informational) advantages of ex-ante and ex-post mechanisms to combine them effectively, as ex-ante systems have limitations of their own, such as difficulty accounting for context-dependence and for harms that were impossible to predict.
Such a combination requires one particular type of liability:
collective. Pasquale’s comment calls for this collective accountability, which
groups liability under collective action and fiduciary frameworks raised by
Haupt. These ideas are critical for AI, where harms are often diffuse, making
traditional (in particular, individual) liability frameworks difficult to apply
without significant adaptation. One clarification is that I don’t think that
ex-ante interventions are more deontological or that ex-post interventions are
more consequentialist. Ex-ante rules are, in a meaningful way, not
deontological when they depart from first principles to focus on procedure (and
they could be motivated by first principles or consequentialist risk-analysis).
And ex-post systems can be consequentialist or entirely deontological. While
different justifications can be attached to any system, state liability for
human rights violations, for example, has a deontological flavor to it, and so
do dignitary torts. I believe privacy needs that same deontological flavor,
such as by grounding the notion of privacy harm in the exploitation of people through their personal information. Otherwise, we risk importing into a new accountability framework the old and ongoing problems created by an exclusive focus on consequential, material harms.
Accountability requires defining privacy standards that can adjust
to different contexts where privacy norms, or the likelihood of privacy harm,
may vary. These include requirements for explainability to reduce the opacity
of decision-making systems and fairness metrics to prevent automated decisions from
exacerbating social biases. For example, data minimization principles can be
adapted to AI systems by requiring algorithms to process only the minimum data
necessary for their task. Similarly, requiring companies to embed these principles into AI design can encourage or force them to mitigate risks. To
future-proof privacy laws in a context of constant new uses of AI,
accountability must be embedded both in the design of systems and in their
operational practices.
Therefore, principles such as privacy-by-design and data minimization anchor liability regimes, which ensure that companies bear the consequences of their data practices. Doing so also facilitates ex-ante and ex-post coordination. For example, mandating regular algorithmic impact
assessments helps regulators identify risks, such as potentially discriminatory
outcomes or security vulnerabilities, and it allows them to require mitigation
plans. Breaches of privacy-by-design and data-minimization requirements can act as liability triggers when regulators could not have anticipated the harms but ex-post verification is possible. Liability frameworks can, in that way, improve the
current misalignment of corporate incentives and societal values, providing
some additional incentives for safe AI development.
[This response will continue in the entry “AI, Privacy, and the Politics of Accountability Part 2: Privacy Harm in the AI Economy”]
Ignacio Cofone is Professor of Law and Regulation of AI at Oxford University. You can reach him by e-mail at ignacio.cofone@law.ox.ac.uk.