Wednesday, December 28, 2016

The Three Laws of Robotics in the Age of Big Data

I have posted a draft of my forthcoming essay, The Three Laws of Robotics in the Age of Big Data, on SSRN. This essay was originally presented as the 2016 Sidley Austin Distinguished Lecture on Big Data Law and Policy at the Ohio State University Moritz College of Law on October 27, 2016.

Here is the abstract:

* * * * *

In his short stories and novels, Isaac Asimov imagined three laws of robotics programmed into every robot. In our world, the "laws of robotics" are the legal and policy principles that should govern how human beings use robots, algorithms, and artificial intelligence agents.

This essay introduces these basic legal principles using four key ideas: (1) the homunculus fallacy; (2) the substitution effect; (3) the concept of information fiduciaries; and (4) the idea of algorithmic nuisance.

The homunculus fallacy is the attribution of human intention and agency to robots and algorithms. It is the false belief that there is a little person inside the robot or program who has good or bad intentions.

The substitution effect refers to the multiple effects on social power and social relations that arise from the fact that robots, AI agents, and algorithms substitute for human beings and operate as special-purpose people.

The most important issues in the law of robotics require us to understand how human beings exercise power over other human beings mediated through new technologies. The "three laws of robotics" for our Algorithmic Society, in other words, should be laws directed at human beings and human organizations, not at robots themselves.

Behind robots, artificial intelligence agents and algorithms are governments and businesses organized and staffed by human beings. A characteristic feature of the Algorithmic Society is that new technologies permit both public and private organizations to govern large populations. In addition, the Algorithmic Society also features significant asymmetries of information, monitoring capacity, and computational power between those who govern others with technology and those who are governed.

With this in mind, we can state three basic "laws of robotics" for the Algorithmic Society:

First, operators of robots, algorithms, and artificial intelligence agents are information fiduciaries who have special duties of good faith and fair dealing toward their end-users, clients, and customers.

Second, privately owned businesses that are not information fiduciaries nevertheless have duties toward the general public.

Third, the central public duty of those who use robots, algorithms, and artificial intelligence agents is not to be algorithmic nuisances. Businesses and organizations may not leverage asymmetries of information, monitoring capacity, and computational power to externalize the costs of their activities onto the general public. The term "algorithmic nuisance" captures the idea that the best analogy for the harms of algorithmic decision making is not intentional discrimination but socially unjustified "pollution"-- that is, using computational power to make others pay for the costs of one's activities.

Obligations of transparency, due process, and accountability flow from these three substantive requirements. Transparency—and its cousins, accountability and due process—apply in different ways with respect to all three principles. Transparency and accountability may be obligations of fiduciary relations, they may follow from public duties, or they may serve as prophylactic measures designed to prevent the unjustified externalization of harms or to provide a remedy for harm.