
Wednesday, December 04, 2024

Valuing Privacy Harms while Structuring Data Governance

For the Balkinization Symposium on Ignacio Cofone, The Privacy Fallacy: Harm and Power in the Information Economy (Cambridge University Press, 2023).

Frank Pasquale 

Ignacio Cofone’s The Privacy Fallacy is an important contribution to a rapidly growing literature on data protection. He critiques over-reliance on contract law in the governance of data and argues for tort principles to compensate for (and deter) privacy harms. He articulates a complex theory of privacy liability that is capacious enough to address a wide range of harms arising out of data breaches, misuses of sensitive information, and other wrongs. This post is largely an appreciation of the book, with a few closing thoughts on two areas of future work it invites: better valuation of privacy harms, and more robust structures of data governance.

Cofone sets the stage by arguing that a core legal rationale for the obligations and opportunities embedded in digital data transactions today is a lie. As he observes:

Privacy consent is an illusion. Consent-based privacy protections allow corporations to do as they please with people’s data as long as they’re able to extract superficial agreement. We routinely experience this (lack of) protection when we mechanically click “I agree” to websites’ and apps’ terms of service. Individual consent provisions fail to address the harms produced by aggregated, inferred, and relational data. They ignore information asymmetry, lack of choices, and unequal bargaining (66). 

Far too many contracts “unshackle[] informational exploitation” rather than offering robust protections.

Aware of this, many voices in privacy law have tried to improve consent—for example, by making it more informed. Cofone calls these “traditionalist solutions,” and many do seem obsolete. The overwhelming weight of dark patterns and manipulation online, as well as the many offline pressures contributing to social acceleration, makes it exceptionally difficult for any consumer to sagely weigh the costs and benefits of granting data to one entity and denying it to another. As Cofone argues, “information overload prevents us from realizing how much risk our information involves.”

This is a critically important set of observations, because it helps turn upside down the longstanding effort to trivialize privacy injuries as speculative. Both in damages determinations and in standing doctrine, US courts are all too willing to deny or marginalize a claim based on a data breach or invasion of privacy because of the plaintiff’s inability to demonstrate concrete and particularized harm arising out of a particular offense. But that very same rationale (the great difficulty of predicting when and where data will be used against someone) also undermines the legitimacy of any contract premised on a data subject’s rational projection of the costs and benefits of the transaction.

While many scholars have exposed the hollow normative core of privacy policies, Cofone marshals arguments here with expertise and force. He also offers a quite conditional endorsement of reformist measures, judging that “moving privacy consent from an opt-out to an opt-in model, establishing contract law mechanisms, and reinforcing privacy notices … work as palliative solutions that are helpful only as long as they don’t distract from meaningful ones.”

So what are these meaningful forms of regulation? Cofone gives an overview of at least three. First, there should be prohibitions of “high-risk data practices, likely to be against collective best interests” (104). Presumably such high-risk practices would include several of the specific misuses of data Cofone profiles in the book, which give a “human face” to what is an often-abstract enterprise. Cofone also proposes that the “noncompliant corporation faces a fine (public enforcement), monetary damages (private right of action), or both” (105). The “both” option would be a useful upgrade to, for example, federal U.S. health privacy law enforcement, where at present the Health Insurance Portability and Accountability Act (HIPAA) only permits fines.

Second, Cofone counsels greater use of standards in privacy regulation, instead of rigid rules. Facing a fast-moving market for data, regulators need the broad authority inherent in standards like “data minimization,” which requires “corporations to consider alternatives and adopt the one that requires the least data to achieve the desired outcome” (106). “Privacy by design” is another such standard, focused on the planning of product and service offerings. Though firms will complain about standards’ indeterminacy, it is blackletter U.S. administrative law that agencies have some degree of discretion in choosing between rulemaking and adjudication to enforce a statute, absent some direct legislative mandate for one or the other approach. Moreover, as Cofone notes, repeated application will “flesh out” a standard.

Third, Cofone examines one particular overarching standard—that of a fiduciary—as a way of re-orienting data processors toward recognizing the best interests of those whose data they collect. There has been fierce debate over the information fiduciary standard (as first developed by Jack Balkin) since Lina Khan and David Pozen published “A Skeptical View of Information Fiduciaries” in 2019. Cofone acknowledges the value of the fiduciary approach, as well as the validity of some issues the critics raise. In Chapters 6 and 7 of the book he proposes alternative forms of liability for many troubling data practices, based on an analysis of a wide range of potential harms caused by negligent or reckless collection, analysis, and use of data.

Cofone’s list of objective data harms is worth quoting in full: 

Data harms include reputational harm (for example, when employers find inaccurate information about a job candidate), financial harm (such as identity theft), physical harm (like doxing, where the disclosure of personal information often leads to bodily harm), discrimination (for example, when a member of a nonvisible minority is outed), and harms to democracy (such as when someone’s tricked into voting for a candidate they wouldn’t have voted for otherwise). 

Cofone argues that all these harms involve actual adverse consequences to data subjects. (Some are also known as “consequential harms” in U.S. caselaw.) Rife with comparative insights, The Privacy Fallacy also demonstrates that they constitute “material harms” pursuant to the GDPR. Cofone creatively analogizes such harms to battery, and argues that, just as psychological harms arising out of a battery can be compensated, so too should psychological or subjective harms (arising out of the material and objective harms of data practices) be compensable. (This is a nice adjunct to arguments for greater recognition of subjective distress in Citron & Solove’s classic “Privacy Harms” article.) However, his survey of U.S., Canadian, and European rulings in the area shows that there are numerous hurdles to assuring compensation for the full range of privacy harms, as well as some important opportunities.

For Cofone, there are numerous routes for assuring better remedies for privacy harms. On his account, “Liability can be brought through three legal pathways. First, when a data practice breaches a privacy or data protection statute (for example, breaching someone’s right to know). Second, through a violation of data security law (for example, failing to notify [data subjects about] a data breach [affecting them]). Third, through privacy torts in common law jurisdictions or the law of obligations in civil law jurisdictions (for example, intrusion upon seclusion)” (139). He notes that while, on paper, the GDPR gives far greater scope for private rights of action than U.S. law does, most privacy enforcement in Europe is still public.

To broaden the scope for privacy protection, Cofone proposes a private right of action [PRA] in privacy laws, broader and more forceful than extant PRAs in U.S. federal and state statutes, and those in the GDPR. A key reform appears to be legislation that guarantees that “the right to sue doesn’t hinge on material, consequential harm” (143). He also wants to ensure that the right to sue survives exculpatory clauses sure to be included in online terms of service. Cofone points to hopeful signs in Commonwealth countries’ privacy jurisprudence, which has expanded privacy rights to address modern threats and respect modern mores. He also tours the varied successes and failures of efforts to expand privacy liability beyond tangible, consequential harm, to include immaterial harm. For Cofone, “courts can and should focus on privacy harm, rather than consequential harms, produced by data practices that breach statutory rules or privacy policies – that is, taking the opposite direction of the US Supreme Court in the TransUnion case.” The remainder of the chapter examines whether liability’s basis should be negligence or no-fault, and the procedural dimensions of the expanded privacy liability Cofone proposes.

Valuation and Governance in a Dual Remedial Regime  

The Privacy Fallacy offers an ambitious vision for better vindicating privacy rights. It also suggests an expansive research agenda for future work. Two questions particularly intrigue me going forward. First, what methods of valuation might best ensure just remedies for wronged data subjects? Second, how should courts and regulators coordinate in order to rationally divide the labor of privacy protection? We might explore each of these through the lens of a concrete example that Cofone returns to throughout the book: the dating app Grindr’s sale of “its users’ personal information to third parties, which included sexual orientation and HIV status.”

As for the first question, Cofone mentions that Grindr faced a “historic fine” imposed by the Norwegian DPA for its misdeeds: €9,600,000, or ten percent of its global revenue in 2020 (according to the DPA’s website). Cofone recommends dual remedial regimes, where victims of illegal data practices can sue in courts, while regulators pursue parallel administrative proceedings. One issue that arises out of this dual regime is how to coordinate fines and damages, and their respective purposes. I am not certain whether DPAs keep the fines they collect, seek to compensate victims with them, or pursue some combination of those goals. I assume that, by proposing a dual remedial regime, Cofone would prefer that DPAs keep the fines (to better fund future investigations), at least up to a point, while encouraging courts to find proper levels of compensation—but more clarity on this point would be welcome.

At the judicial level, proper levels of compensation are a fascinating topic. Cofone cites an article by Jojo Y.C. Mo with extensive discussion of remedies, and I hope the field follows both Mo’s and Cofone’s lead in exploring the proper level of punishment and compensation due. After another HIV-related privacy breach, Aetna (a US insurer) settled a case involving the illegal revelation of about 12,000 persons’ status for $17 million—roughly $1,400 per person had it been distributed equally. This figure is the same order of magnitude as the £3,500 and £5,000 privacy damages awards in Britain mentioned in Mo’s article. The £3,500 case involved a famous model, who was photographed leaving a drug rehabilitation clinic, and who recovered damages for misuse of private information. The small award does not seem like adequate recompense for the anguish the model suffered. Nor does it seem likely to deter major media firms from engaging in similarly invasive conduct in the future. But Britain is a loser-pays legal system, and presumably the model incurred high attorneys’ fees to press her case.
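As a rough back-of-the-envelope check of that per-person figure (treating the $17 million as if it had been divided evenly among the roughly 12,000 affected people, which, as discussed below, the settlement did not in fact do):

\[
\frac{\$17{,}000{,}000}{12{,}000} \approx \$1{,}417 \text{ per person}
\]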

I hope to see future work exploring the bases of privacy harm valuations. Perhaps they reflect the theories of probabilistic harm explored by Cofone. The Aetna breach occurred because “the fact that [affected data subjects] had been taking HIV drugs was revealed through the clear window of the envelope” sent to them, which contained a letter regarding a prior Aetna privacy violation. The settlement established a two-tier remedy framework, according to reporting on the case: 

As part of the payout, the law firms are setting aside at least $12 million for payments of at least $500 to the estimated 11,875 people who may have received a letter exposing that information. . . . A fund will be set up for those who experienced additional financial or emotional distress. Individuals will be able to claim up to $20,000. The rest of the money will go toward legal fees and costs. 

On one level, this seems like a sensible outcome. Some claimants may not have suffered any adverse impact from the careless printing of the HIV-related information on the envelope window. The $500 award to them simply serves to deter misconduct, rather than to recompense harm. However, at the opposite end of the scale, the $20,000 cap seems inadequate to truly recompense plausible “worst case scenarios,” given the potential for stigma documented in the complaint.

For example, it is possible that one in a thousand persons had truly catastrophic results from the breach—say, being disowned, ostracized, shunned, or fired once the information spread. The distress and economic damage arising out of those catastrophic cases might be valued at $1 million or more. It is relatively easy to imagine tragic sequelae of a privacy violation—and the unimaginative can find instruction in the facts of Doe v. Medlantic. But this raises other difficult questions. For example, when should class action settlements be shared equally by members of the class, and when should those who are particularly harmed receive more? Perhaps the latter lost their right to make their own individual case when they failed to opt out. But fuller consideration of these issues would enrich future discussions of the privacy class actions which Cofone endorses.
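Purely as an illustration of that mismatch, using the hypothetical figures in the preceding paragraph (a catastrophic loss valued at $1 million, set against the settlement’s $20,000 second-tier cap):

\[
\frac{\$1{,}000{,}000}{\$20{,}000} = 50
\]

On those assumptions, the cap would cover at most one-fiftieth of such a claimant’s loss, leaving the remainder uncompensated unless the claimant had opted out and pressed an individual case.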

A second question that arose for me after reading The Privacy Fallacy was the proper coordination of non-monetary, judicial remedies with regulators’ demands. It seems that plaintiffs who demand equitable remedies are in a sense acting as private attorneys general (in U.S. parlance). To offer a concrete example: claims filed pursuant to the California Consumer Privacy Act’s (CCPA) private right of action may give rise to equitable remedies that duplicate, exceed, or fall short of relevant regulatory requirements crafted by the California Privacy Protection Agency (CPPA). It would seem advisable for courts to at least consult these regulatory requirements. And when courts prescribe equitable remedies whose demands exceed what regulators presently demand, that should be an opportunity for the CPPA to re-examine whether its own regulations are too lax.

It is to Cofone’s great credit that The Privacy Fallacy raises such intriguing questions. The book clearly advances our understanding of the difficult challenges ahead in balancing the rights of businesses and consumers, as well as the responsibilities of supranational, national, and subnational regulators. I look forward to Cofone’s further work in the area, as his remarkably cosmopolitan perspective and passion for social justice greatly enrich contemporary privacy law discourse.

Frank Pasquale is Professor of Law at Cornell Tech & Cornell Law School. You can reach him by e-mail at fp269@cornell.edu.