For the Balkinization Symposium on Ignacio Cofone, The Privacy Fallacy: Harm and Power in the Information Economy (Cambridge University Press, 2023).
Claudia E. Haupt
Ignacio Cofone’s insightful new book, The Privacy Fallacy: Harm and Power in the Information Economy, illustrates the importance of asking the right questions. In his telling, the traditional contract-based approach to privacy lacks regulatory salience. First, it overlooks the crucial role of the larger information ecosystem. By ignoring the structure of hierarchies built into this system, the traditional approach misses the embeddedness of individual interactions. Solutions to protect privacy based on this approach will necessarily fall short, because they erroneously assume discrete individual relationships. Second, the contract-based model of privacy rests on a range of faulty assumptions about the way individuals operate within this system. Instead, Cofone proposes a liability system built on concepts from tort law to remedy harm.
Privacy law, Cofone notes, is “stuck in time. Core concepts in privacy law no longer
correspond with daily social interactions in the information economy.”[1] The relevance of Cofone’s laudably accessible
account to a wide variety of areas is immediately evident. Take health law. Users enter one billion health questions into Google each day, and health-related searches make up an estimated 7% of all Google searches; reportedly, “four in ten Americans use Google instead of seeing a doctor.” Entering health
queries into search engines already reveals enough data to make significant
inferences about the user’s health.[2] As Cofone emphasizes, “[t]he sheer
amount of data inferred in the information economy is hard to fathom.”[3]
Wrestling with an ever-increasing flood of sensitive health data—most recently with the
advent of large language model (LLM)-based generative AI—health law depends on
robust responses to important privacy issues. Questions of privacy are frequently raised in the literature on generative
AI in medicine, but concrete answers remain elusive. While some scholars try to
make the best of existing health privacy law,[4] there seems to be growing
recognition in the field that its outdated privacy cornerstone, the Health
Insurance Portability and Accountability Act (HIPAA) of 1996, is unresponsive
to the new challenges.[5] But how do we arrive at a
more promising approach to modern health privacy?
To illustrate how Cofone’s intervention can meaningfully help push the field
forward, I will use the example of generative AI chatbots dispensing health
advice. Two areas highlighted in The Privacy Fallacy are particularly instructive.
The first is the focus on trust, which Cofone discusses as part of the debate on information fiduciaries.[6] The second concerns harm: Cofone urges us to move beyond the traditional focus on tangible
consequences, such as the paradigmatic data breach, and confront the harms of
modern data practices built around “complicated power dynamics” involving use
of “people’s information, often with the help of AI, to make decisions about
their opportunities and experiences.”[7] The Privacy Fallacy
thus outlines central themes that also ought to guide our thinking about
privacy in the health context: how should privacy regimes map onto social relationships, and how can we account for the systemic power asymmetries that embed seemingly discrete (and discreet) individual interactions in a vast
and overbearing ecosystem of surveillance and control?
Trust
In The Privacy Fallacy, Cofone explicitly builds on the work of privacy scholars
who stress social relationships of trust, and he views his contribution as
compatible with this approach. Interrogating the usefulness of information
fiduciaries in privacy law, Cofone outlines a theory positing “that entities
who invite others to trust them with their personal information, and who profit
from it, should be required to act in the best interest of the people who
trusted them with it.”[8] Data companies should thus
act in the best interests of their users, meaning they must not use the data to harm the individuals who shared it with them. This is consistent with
proposals for data loyalty.[9] By imposing fiduciary
duties, tech companies may be held liable for failing to protect user
interests.
The scholarly debate on information fiduciaries revolves around social
relationships characterized by asymmetries of knowledge and power. Though tech
companies entrusted with users’ personal data are similar to other fiduciaries,
the fiduciary duties of professionals are not a perfect analogy.[10] Nonetheless, in the
context of health privacy, we are in fact dealing with professionals who do have
traditional fiduciary obligations.[11] Loyalty and trust, of course, are traditionally central themes in health law, anchored in patient confidentiality reaching back to the original Hippocratic Oath.
Conversations with generative AI chatbots feel almost like chatting with a human. It’s tempting to think of these systems as friendly conversation partners. Problematically, this might entice users—whether health professionals or lay users—to unduly trust and rely on the information they
receive. We might usefully distinguish between (a) the professional
relationship and (b) direct-to-consumer applications.
This distinction is reflected in HIPAA’s Privacy Rule. Whereas chatbots designed for
medical use must comply with the Privacy Rule as covered entities, chatbots
outside the health care setting are not covered entities subject to the Privacy
Rule.[12] In scenario (a), the
chatbot complements the professional’s advice. In this situation, the
information dispensed by the AI chatbot is filtered through the human professional, who applies their own professional judgment. We know this generally as
the “human in the loop” scenario.[13] Professionals in this
situation should be aware of potential privacy implications of the technology
they use; at the very least, they should “reaffirm their critical role in
protecting health data.”[14] Scenario (b) is the more worrisome one. As
noted in the medical literature and the press, chatbot answers to health
questions compare quite favorably to other sources. At the same time, some
AI-generated health advice is dangerous. With respect to confidentiality
guardrails, recent press coverage has highlighted mental health as an example where human
providers are under legal obligations to keep information private whereas AI
chatbots are not. User reliance on confidentiality is thus misplaced; reliance on the quality of the advice is decidedly a mixed bag.
In scenario (a), trust ought to be placed in the professional, who is subject to
the traditional guardrails of the professional relationship, including
fiduciary duties. These obligations do not depend on whether HIPAA’s Privacy Rule applies to professional-use AI; rather, they arise from the social relationship between doctor and patient.[15] In scenario (b), the
information fiduciaries approach would have us consider “consequence-focused
accountability,” asking “whether a data practice was harmful or didn’t
prioritize people’s wellbeing after the data practice takes place.”[16]
Cofone’s approach shifts our focus to the social relationship. He urges us to ask a set of questions: which entities are involved, what type of information is at issue, and how was the information collected, processed, or shared? Under this framework,
“to examine the relevant social norms, one must ask what the information is,
who’s involved in the data practice, and how it’s carried out.” Linking social
standards to privacy’s social value reframes the question to “evaluate the
reasonableness of someone’s privacy claim by identifying whether the data
practice . . . unreasonably interfered with privacy’s social values.”[17]
Harm
Cofone’s move from contracts to torts centers the question of harm. Thus, instead of
asking about regulatory compliance and individual consent, he would have us ask
about the potential for harm. There certainly is much to say about bad health advice stemming from biased training data or AI “hallucinations,” which, if followed, could easily result in serious physical harm.[18] But for purposes of this discussion, I want to focus squarely on privacy harms, which Cofone defines as
“the wrong of producing unjustified privacy losses for others for private gain,
violating privacy’s socially recognized value.”[19]
Modern data practices obliterate the notion of discrete individual interactions. Cofone warns that these practices
“also allow harms to arise between parties who never interacted with one
another, such as harms from data brokers, who buy your data to aggregate it and
sell a profile about you to others.”[20] Further, he cautions that
“[t]hrough these power dynamics, modern data practices introduced and fuel
informational exploitation, a different type of data harm that involves
profiting from people’s information with disregard for the harm that it causes
them. Informational exploitation differs from other data harms in that it’s
systemic, it’s opaque, and it facilitates, while simultaneously hiding, other
harms.”[21]
Even without generative AI, the potential for privacy harms is significant. For
example, a study published in JAMA in 2019 found that of 36 top-ranked apps
for smoking cessation and depression, 29 transmitted data to services provided
by Facebook or Google, yet only 12 disclosed doing so in their privacy
policies. As Cofone rightly stresses throughout the book, the disclosure-and-consent mechanism is itself a relic of a bygone era. But the problems are compounded when the data reality doesn’t map onto the regulatory assumptions underlying privacy regimes, which in turn fail to protect against harm. Indeed,
as Cofone pointedly suggests, “[w]hen the law uses the wrong assumptions,
placing weight on them can impede it from protecting the vulnerable parties
that it’s meant to protect.”[22] Compliance with privacy
regimes built on such assumptions becomes meaningless.
To illustrate the shortcomings of asking about regulatory compliance, consider deidentified
data. HIPAA’s Privacy Rule assumes, first, that data can be successfully
deidentified—that is, completely stripped of personal information—and, second,
that deidentified data is safe to use. Recognizing the fallacy of these
assumptions is not new. This is an example of Cofone’s
assertion that privacy law contains “assumptions [that] don’t reflect the
reality of contemporary data interactions.”[23] Generative AI makes the consequences of these erroneous assumptions more salient than before. Deidentified data
can easily be reidentified, as Cofone vividly illustrates in a subchapter, and
even deidentified data can cause harm.[24] In short, despite HIPAA
compliance, privacy harms can occur.
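For readers who want to see the mechanics, the following minimal sketch illustrates a classic linkage attack. It is my own illustration, built on entirely hypothetical records and field names, and is not drawn from Cofone’s book or from HIPAA guidance: even with names removed, quasi-identifiers such as ZIP code, birth date, and sex can be joined against a public, named dataset to put names back on health records.

```python
# Minimal, hypothetical sketch of a linkage (reidentification) attack.
# "Deidentified" health records keep quasi-identifiers that can be joined
# against a public, named dataset (e.g., a voter roll) to recover identities.
# All records and field names below are invented for illustration.

deidentified_records = [
    {"zip": "02115", "birth_date": "1984-03-07", "sex": "F", "diagnosis": "depression"},
    {"zip": "02115", "birth_date": "1990-11-21", "sex": "M", "diagnosis": "nicotine dependence"},
]

public_records = [
    {"name": "Jane Roe", "zip": "02115", "birth_date": "1984-03-07", "sex": "F"},
    {"name": "John Doe", "zip": "02115", "birth_date": "1990-11-21", "sex": "M"},
    {"name": "Pat Smith", "zip": "02116", "birth_date": "1975-05-02", "sex": "F"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")


def reidentify(health_rows, public_rows):
    """Link nameless health rows to named public rows on shared quasi-identifiers."""
    # Index the public dataset by its quasi-identifier tuple.
    index = {}
    for row in public_rows:
        key = tuple(row[q] for q in QUASI_IDENTIFIERS)
        index.setdefault(key, []).append(row["name"])

    matches = []
    for row in health_rows:
        key = tuple(row[q] for q in QUASI_IDENTIFIERS)
        candidates = index.get(key, [])
        # A unique candidate reidentifies the record despite the missing name.
        if len(candidates) == 1:
            matches.append((candidates[0], row["diagnosis"]))
    return matches


if __name__ == "__main__":
    for name, diagnosis in reidentify(deidentified_records, public_records):
        print(f"{name} -> {diagnosis}")
```

The rarer a given combination of quasi-identifiers is in the population, the more likely a match is unique, which is why formally deidentified records can still produce the kinds of privacy harms Cofone describes.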
And so, instead of “ticking the box” of compliance, Cofone prompts us to ask a
different set of questions, focused on harm prevention and accountability:
“Harm prevention requires accountability for the consequences of data practices,
not just accountability for corporations’
promises in privacy policies and compliance with procedural rules.”[25] Privacy law thus must
focus on the impact of data practices and create responsibility mechanisms
accordingly.
Beyond HIPAA
With respect to the question of HIPAA compliance for AI chatbots, my coauthor Mason
Marks and I concluded that “HIPAA is ill suited to protect patients in the face
of modern technologies, and asking whether LLMs could be made HIPAA compliant
is to pose the wrong question. Even if compliance were possible, it would not
ensure privacy or address larger concerns regarding power and inequality.”[26] The Privacy Fallacy
urges us to reconsider precisely this relationship among privacy, harm, and
power.
Cofone widens the lens of what constitutes harm. Data harms, he explains, can be
reputational, financial, or physical. Moreover, they can consist of
discrimination or harms to democracy.[27] There may also be other
forms of data harms. Importantly, “they’re enabled by the collection,
processing, and sharing of personal data.”[28] Asking what kinds of
harms follow from certain data practices thus constitutes an important step in designing
an appropriate regulatory response.
Conceptualizing privacy loss as distinct from privacy harm, Cofone prompts us to consider how
inferences accumulate. The world in which “privacy invaders . . . learn about
us from inferences, relational data, and de-identified data,” Cofone posits, is
fundamentally different from “the age of fax machines when privacy law was
conceptualized.”[29]
And when exploited, privacy loss is harmful.[30]
With respect to power, Cofone centers corporate accountability. In short, “[b]y
making privacy harm as much a risk to corporations as it is to their users,
corporate liability could curtail informational exploitation by incentivizing corporations
to focus on the process of minimizing the likelihood of the harm occurring.” And
liability for ex post harm would be framed in traditional torts terms as “the
breach of a duty not to exploit others based on their personal information.”[31]
Even somewhat more positive appraisals of HIPAA’s application to AI acknowledge that
privacy gaps persist, and the “law needs to place more of the burden of
protecting people’s privacy on those who design, implement, use, and control
medical AI systems.”[32] Whether the reader shares the rather bleak assessment of HIPAA or a more optimistic one, a new or at
least significantly improved approach to health privacy in the age of
generative AI seems indispensable. The Privacy Fallacy offers help. Finding
the most effective regulatory approach always starts with asking the right
questions.
Claudia E. Haupt is Professor of Law and Political Science at Northeastern University School of Law and an Affiliated Fellow at the Information Society Project at Yale Law School. You can reach her by e-mail at c.haupt@northeastern.edu.
[1] Ignacio
Cofone, The Privacy Fallacy: Harm and Power in the Information Economy 11
(2024).
[2] Mason Marks & Claudia E.
Haupt, AI Chatbots, Health Privacy, and Challenges to HIPAA Compliance,
330 JAMA 309 (2023).
[3] Cofone, supra note 1, at 74.
[4] See, e.g., Barbara J. Evans,
The HIPAA Privacy Rule at Age 25: Privacy for Equitable AI, 50 Fla. St. U. L. Rev. 741 (2023) (arguing
that HIPAA’s Privacy Rule is “potentially well-tailored for novel ethical
challenges that lie ahead in the age of AI-enabled health care.”).
[5] Marks & Haupt, supra
note 2.
[6] Cofone, supra note 1, at 107-09.
[7] Id. at 6.
[8] Id. at 107.
[9] See, e.g., Neil Richards
& Woodrow Hartzog, A Duty of Loyalty for Privacy Law, 99 Wash. U. L. Rev. 961 (2021).
[10] Claudia E. Haupt, Platforms
as Trustees: Information Fiduciaries and the Value of Analogy, 134 Harv. L. Rev. F. 34 (2020).
[11] Claudia E. Haupt, Artificial Professional Advice, 18 Yale J. Health Pol’y, L. & Ethics 55 (2019), 21 Yale J. Law & Tech. 55 (2019).
[12] Marks & Haupt, supra
note 2.
[13] See generally Rebecca Crootof, Margot E. Kaminski & W. Nicholson Price II, Humans in the Loop,
76 Vand. L. Rev. 429 (2023).
[14] Marks & Haupt, supra
note 2.
[15] Haupt, Artificial
Professional Advice, supra note 11, at 64-65.
[16] Cofone, supra note 1, at 108.
[17] Id. at 128-29.
[18] Claudia E. Haupt & Mason
Marks, AI-Generated Medical Advice – GPT and Beyond, 329 JAMA 1349
(2023).
[19] Cofone, supra note 1, at 111.
[20] Id. at 6.
[21] Id.
[22] Id. at 11.
[23] Id.
[24] Marks & Haupt, supra
note 2.
[25] Cofone, supra note 1, at 111.
[26] Marks & Haupt, supra
note 2.
[27] Cofone, supra note 1, at 112.
[28] Id. at 113.
[29] Id. at 121.
[30] Id. at 123.
[31] Id. at 130-31.
[32] Evans, supra note 4, at 801-09.