The views expressed in this essay are the author’s only and
do not necessarily reflect the views of the Department of Justice or the Office
of Legal Counsel. This post is adapted from a working paper on the same
subject.
Google
may have tweaked its algorithm, and run thousands of simulations (and variants
on the simulation)—but how can we be sure that Google’s vehicles are safe? How
many thousands of miles, or tens of thousands of hours, should an autonomous
vehicle algorithm log before it’s road-ready? How do we decide that we are
confident enough that, when an autonomous vehicle algorithm does fail, it won’t
fail catastrophically? The answers to those questions are all still
being worked out.
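One way to give the "how many miles is enough" question some structure, sketched below purely as an illustration rather than anything proposed in this essay, is to treat road testing as a statistical trial: if an algorithm completes a number of independent, failure-free test miles, elementary binomial reasoning (the so-called rule of three) bounds how low a failure rate we can credibly claim. The target failure rate and the assumption that test miles are independent and representative are mine, for illustration only.

```python
import math

def required_trials(max_failure_rate: float, confidence: float = 0.95) -> float:
    """Failure-free trials needed to bound the true failure rate.

    Solves (1 - p)^n <= 1 - confidence for n: the smallest number of
    failure-free trials needed to claim, at the given confidence level,
    that the per-trial failure probability is below max_failure_rate.
    """
    return math.log(1.0 - confidence) / math.log(1.0 - max_failure_rate)

# Illustrative assumption: each mile is an independent trial, and the target
# is fewer than one failure per 100 million miles. Under those assumptions,
# roughly 300 million failure-free test miles would be needed for a 95% bound.
if __name__ == "__main__":
    n = required_trials(max_failure_rate=1e-8)
    print(f"~{n:,.0f} failure-free miles required")
```

The point of the sketch is not the particular number; it is that "road-ready" is a question about evidence and confidence levels, not just engineering effort.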
My
contribution to the Unlocking the Black
Box conference is to suggest that the rise of increasingly complex semi-autonomous algorithms, like those that power Google's self-driving cars, calls for developing a new specialist regulatory agency to regulate algorithmic safety. An FDA for algorithms.
That
might sound strange at first, but hear me out. The algorithms of the future
will be similar to pharmaceutical drugs: The precise mechanisms by which they
produce their benefits and harms will not be well understood, easy to predict,
or easy to explain. They may work wonders, but exactly how they do it will
likely remain opaque. Understanding why requires a dive into the future of algorithms.
The
future of algorithms is algorithms that learn. Such algorithms go by many
names, but the most common are “Machine Learning,” “Predictive Analytics,” and
“Artificial Intelligence.” Basic machine learning algorithms are already
ubiquitous. How does Google guess whether a search query has been misspelled?
Machine learning. How do Amazon and Netflix choose which new products or videos
a customer might want to buy or watch? Machine learning. How does Pandora pick
songs? Machine learning. How do Twitter and Facebook curate their feeds?
Machine learning. How did Obama win reelection in 2012? Machine learning. Even
online dating is guided by machine learning. The list goes on and on.
Importantly,
because machine learning algorithms do not “think” like humans do, it will soon
become surpassingly complicated to deduce how trained algorithms take what they
have been taught and use it to produce the outcomes that they do. As a
corollary, it will become hard, if not impossible, to know when algorithms will
fail and what will cause them to do so.
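A small sketch may make that opacity concrete (the example and its toy data are my own illustration, not the essay's): a trained neural network stores what it has "learned" as matrices of fitted numbers, and nothing in those numbers reads like a human rationale for any particular output. It requires scikit-learn; the dataset and model sizes are arbitrary choices.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Toy data standing in for whatever the algorithm was "taught."
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)

# Everything the network acquired during training lives in weight matrices
# like these. They determine every prediction, but nothing in them states
# *why* a given input is classified one way rather than another.
for i, w in enumerate(model.coefs_):
    print(f"layer {i} weight matrix shape: {w.shape}")
print("example prediction:", model.predict(X[:1]))
```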
Looking
to the more immediate future, the difficulties we confront, as learning
algorithms become more sophisticated, are the problems of “predictability” and
“explainability.” An algorithm’s predictability is a measure of how difficult
its outputs are to predict; its explainability a measure of how difficult its
outputs are to explain. Those problems are familiar to the robotics community,
which has long sought to grapple with the concern that robots might
misinterpret commands by taking them too literally (e.g., instructed to darken a
room, the robot destroys the lightbulbs). Abstract learning algorithms run
headlong into that difficulty. Even if we can fully describe what makes them
work, the actual mechanisms by which they implement their solutions are likely
to remain shrouded: difficult to predict and sometimes difficult to explain.
And as they become more complex and more autonomous, that difficulty will
increase.
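Explainability research tries to probe such models from the outside. One common post-hoc technique, sketched here on toy data as an illustration rather than a recommendation, is permutation importance: shuffle one input feature at a time and measure how much the fitted model's accuracy degrades. It describes which inputs the model leans on without revealing the internal mechanism, which is exactly the gap described above.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy data and model; everything here is illustrative, not a real system.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# For each feature: how much does held-out accuracy drop when that feature
# is randomly shuffled? A large drop means the model relies on it heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: accuracy drop when shuffled = {imp:.3f}")
```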
What
we know—and what can be known—about how an algorithm works will play crucial
roles in determining whether it is dangerous or discriminatory. Algorithmic
predictability and explainability are hard problems. And they are as much
public policy and public safety problems as technical problems. At the moment,
however, there is no centralized standards-setting body that decides how much
testing should be done, or what other minimum standards machine learning
algorithms should meet, before they are introduced into the broader world. Not
only are the methods by which many algorithms operate non-transparent; many are
also trade secrets.
A
federal consumer protection agency for algorithms could contribute to the safe
development of advanced machine learning algorithms in multiple ways. First, it
could help to develop performance standards, design standards, and liability
standards for algorithms. Second, it could engage with diverse stakeholders to
develop methods of ensuring the algorithms are transparent and accountable.
Third, for especially complex algorithms involved in applications that may pose
significant risks to human health and safety—for example when used in
self-driving cars—such an agency could be empowered to prevent the introduction of algorithms onto the
market until their safety and efficacy have been proven through evidence-based
pre-market trials.
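What such pre-market review might ask for is an open design question. Purely as a hypothetical illustration, and not as any existing or proposed regulatory format, a standardized disclosure might record what was tested, on what data, and with what observed failure rate; every field name and value below is invented for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class PreMarketDisclosure:
    """Hypothetical pre-market filing for a high-risk algorithm (illustrative only)."""
    algorithm_name: str
    intended_use: str
    training_data_description: str
    test_protocol: str            # e.g. "1 million simulated urban driving miles"
    observed_failure_rate: float  # failures per unit of exposure during testing
    known_failure_modes: list = field(default_factory=list)

# Invented example values, not real data or a real product.
disclosure = PreMarketDisclosure(
    algorithm_name="ExampleDrive v0.1",
    intended_use="low-speed urban self-driving",
    training_data_description="simulated and closed-course driving logs",
    test_protocol="pre-market trial of 1,000,000 simulated miles",
    observed_failure_rate=2e-6,
    known_failure_modes=["degraded performance in heavy rain"],
)
print(disclosure)
```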
Not everyone will
be enthusiastic about an algorithmic safety agency, at least not at first. Not
everyone was enthusiastic about the FDA at first either. But the United States
created the FDA and expanded its regulatory reach after several serious
tragedies revealed its necessity. If we fail to begin thinking critically about
how we are going to grapple with the future of algorithms now, we may see more
than a minor fender-bender before we’re through.
Andrew Tutt is an Attorney-Adviser in the Office of Legal Counsel at the U.S. Department of Justice, and was until recently a Visiting Fellow at the Yale Information Society Project.