Balkinization  

Tuesday, December 22, 2020

Regulating AI: The question now is no longer whether, but how

Guest Blogger

From the Workshop on “News and Information Disorder in the 2020 US Presidential Election.”

Olivier Sylvain

The distribution and availability of networked information devices, applications, and services define the ways in which consumers share with friends, transact business, learn, and create. We did not need the pandemic to reveal this. I and others have been writing for decades about the ways in which a robust, resilient, and equitably distributed internet infrastructure is vital to the successful operation of democracy and free markets.

Just as salient today is whether, now that the availability of internet infrastructure is widely understood to be essential, policymakers can or should do anything more to ensure that the applications and services such systems make available abide by longstanding democratic norms and consumer protections. This question remains unresolved largely because, for the past quarter century, the preponderance of technologists, scholars, and policymakers have been skeptical of government regulators’ competence and capacity to promulgate timely or effective rules. The prevailing view presumes that such interventions are likelier to impede technological innovation than to achieve any well-meaning objective. Policymakers today are therefore reluctant to do anything that risks slowing invention and entrepreneurship in the market for networked information services. This is the view that Congress and the courts have enshrined in a handful of regulatory regimes about which I have written, including the FCC’s regulation of broadband network management, the court-made doctrine under Section 230 of the 1996 amendments to the Communications Act, and the public regulation of automated decision-making generally.

This prevailing view extends well beyond those areas. One prominent contemporary example is the Department of Housing and Urban Development’s proposal in August 2019 to create a safe harbor for administrators of automated decision-making systems (ADS) in housing markets. Specifically, HUD proposed to rewrite longstanding agency “disparate impact” regulations that bar discrimination on the basis of protected categories like race, gender, and age under the Fair Housing Act. Ever since the Supreme Court’s 2015 decision in Texas Department of Housing & Community Affairs v. The Inclusive Communities Project, litigants may bring claims arising from any housing decision or policy that disparately impacts people on the basis of protected categories, irrespective of whether there is specific evidence of an intent to discriminate. Among other things, HUD’s proposal would have exempted building managers, brokers, and other entities that rely on ADS to evaluate potential home renters or buyers. It would have done this by recalibrating the burden of proof in favor of entities that rely on ADS, effectively making it very difficult for any aggrieved party to advance a disparate impact claim. (The exempted entity could itself make representations about whether any protected category is a variable on which the ADS relies, or a third party could certify as much.) The principal argument that HUD advanced in support of the change was that, unencumbered by disparate impact regulations, ADS would promote progress, innovation, and American competitiveness.

The HUD proposal is just one of the more recent echoes of the longstanding argument that positive regulation (here, the disparate impact rule under the FHA) impedes the efficient operation of the market. Consider that the Consumer Financial Protection Bureau last summer initiated a public inquiry into whether or how it should regulate ADS pursuant to the Equal Credit Opportunity Act. These recent efforts are of a piece with the romantic but now outdated view that most restraints in the market for networked information technologies unduly stifle innovation and the free circulation of ideas.

One year later, HUD decided to scrap the proposal, arguably because of the substantial pushback it received during the public comment period. Opponents convincingly argued that the safe harbor was flawed for at least two reasons. First, it revealed a misunderstanding of how current ADS work. Scholars in law, the social sciences, and the humanities have convincingly shown over the past five or so years that, even when developers do not explicitly use protected categories or their proxies as input data, disparities along those very dimensions may nevertheless emerge in pretrial detention in criminal cases, the delivery of public health and disability benefits, hiring and employment, facial recognition technologies, online search, consumer finance, consumer credit scoring, and consumer markets generally. Second, and in any event, ADS developers often cannot explain these discriminatory outcomes even after they reveal themselves. If developers do not have a full grasp of how their own ADS operate until after consumers engage them, we cannot count on any representations they or anyone else makes about those systems’ impacts on protected groups.
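A stylized sketch makes the first point concrete. Suppose a hypothetical tenant-screening score relies only on income and a ZIP-code feature and never sees the protected category itself; if ZIP code happens to correlate with that category in the underlying data, approval rates can still diverge sharply across groups. Everything in the sketch below is an illustrative assumption of mine, not a description of any actual system or of HUD’s rule: the synthetic data, the penalty attached to one ZIP code, the approval threshold, and the use of the EEOC’s “four-fifths” ratio as a rough benchmark.

import random

random.seed(0)

def synthetic_applicant(group):
    # Hypothetical data: members of group "B" are likelier to live in ZIP 2,
    # which the scoring rule below penalizes; the rule never sees "group".
    in_zip_2 = random.random() < (0.8 if group == "B" else 0.2)
    income = random.gauss(60 if group == "A" else 55, 10)
    return {"group": group, "zip": 2 if in_zip_2 else 1, "income": income}

def score(applicant):
    # A facially "neutral" rule: it looks only at income and ZIP code,
    # never at the protected category.
    return applicant["income"] - (15 if applicant["zip"] == 2 else 0)

applicants = [synthetic_applicant(g) for g in ("A", "B") for _ in range(5000)]
totals = {"A": 0, "B": 0}
approved = {"A": 0, "B": 0}
for a in applicants:
    totals[a["group"]] += 1
    if score(a) >= 50:
        approved[a["group"]] += 1

rate_a = approved["A"] / totals["A"]
rate_b = approved["B"] / totals["B"]
print(f"approval rate, group A: {rate_a:.2f}")
print(f"approval rate, group B: {rate_b:.2f}")
# Disparate impact ratio; values well below 0.8 (the EEOC's four-fifths
# rule of thumb, borrowed here only as a benchmark) are the usual red flag.
print(f"disparate impact ratio (B/A): {rate_b / rate_a:.2f}")

In this toy setup the ratio lands well below the 0.8 benchmark even though the scoring rule never touches the protected attribute, which is the dynamic that the commenters described.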

This project will draw on and engage this work to propose regulatory strategies that could redress the social costs and disparities that ADS cause. Among other things, it will build on a law review article from almost four years ago that, on the basis of then-budding evidence, proposed precautionary approaches to the governmental regulation of ADS. Scholars in law and social science remain skeptical, however, of anything but a forgiving laissez-faire approach, largely for the reasons I set out above: namely, that any ex-ante system of regulation “is likely to stifle innovation and to block the development of more flexible, current algorithms.” Scholars in this line generally recommend that technologists develop industry standards for evaluating the impacts of new technologies before bringing them to market. To the extent there is an emergent consensus among scholars, it has settled on the idea that ADS developers should be transparent about the workings of any given ADS.

It is therefore time that scholars and policymakers “rethink regulatory strategies to ensure that public values inform AI research, development, and deployment.” We should no longer be as wary of the positive regulation of ADS as we were two decades ago. Such systems are pervasive and familiar enough now that policymakers are past the caricatured question of whether positive intervention is prudent. I propose that policymakers confidently assess whether they should adopt an ex-ante precautionary stance with regard to some ADS. Rather than ponder whether to intervene, policymakers should carefully define the circumstances under which flat-out bans, moratoria, permanent tailored restrictions, or other less presumptively deferential proscriptive interventions are appropriate.

Olivier Sylvain is a professor of law at Fordham University.


Cross-posted at the Knight Foundation

