I have posted a draft of my forthcoming essay, The Three Laws of Robotics in the Age of Big Data, on SSRN. This essay was originally presented as the 2016 Sidley Austin Distinguished Lecture On Big Data Law And Policy at the Ohio State University Moritz College of Law on October 27, 2016.
Here is the abstract:
* * * * *
In his short stories and novels, Isaac Asimov imagined three laws of robotics programmed into every robot. In our world, the "laws of robotics" are the legal and policy principles that should govern how human beings use robots, algorithms, and artificial intelligence agents.
This essay introduces these basic legal principles using four key ideas: (1) the homunculus fallacy; (2) the substitution effect; (3) the concept of information fiduciaries; and (4) the idea of algorithmic nuisance.
The homunculus fallacy is the attribution of human intention and agency to robots and algorithms. It is the false belief that there is a little person inside the robot or program who has good or bad intentions.
The substitution effect refers to the multiple effects on social power and social relations that arise from the fact that robots, AI agents, and algorithms substitute for human beings and operate as special-purpose people.
The most important issues in the law of robotics require us to understand how human beings exercise power over other human beings mediated through new technologies. The "three laws of robotics" for our Algorithmic Society, in other words, should be laws directed at human beings and human organizations, not at robots themselves.
Behind robots, artificial intelligence agents and algorithms are governments and businesses organized and staffed by human beings. A characteristic feature of the Algorithmic Society is that new technologies permit both public and private organizations to govern large populations. In addition, the Algorithmic Society also features significant asymmetries of information, monitoring capacity, and computational power between those who govern others with technology and those who are governed.
With this in mind, we can state three basic "laws of robotics" for the Algorithmic Society:
First, operators of robots, algorithms and artificial intelligence agents are information fiduciaries who have special duties of good faith and fair dealing toward their end-users, clients and customers.
Second, privately owned businesses that are not information fiduciaries nevertheless have duties toward the general public.
Third, the central public duty of those who use robots, algorithms and artificial intelligence agents is not to be algorithmic nuisances. Businesses and organizations may not leverage asymmetries of information, monitoring capacity, and computational power to externalize the costs of their activities onto the general public. The term "algorithmic nuisance" captures the idea that the best analogy for the harms of algorithmic decision making is not intentional discrimination but socially unjustified "pollution"-- that is, using computational power to make others pay for the costs of one's activities.
Obligations of transparency, due process, and accountability flow from these three substantive requirements. Transparency—and its cousins, accountability and due process—applies in different ways with respect to all three principles. Transparency and/or accountability may be an obligation of fiduciary relations, may follow from public duties, or may be a prophylactic measure designed to prevent the unjustified externalization of harms or to provide a remedy for harm.