A Legal Theory of Autonomous Artificial Agents offers a serious look at several legal controversies set off by the rise of bots (and robots generally). "Autonomy" is one of the key concepts in the work. We would not think of a simple drone programmed to fly in a straight line as an autonomous entity. On the other hand, films like Blade Runner envision humanoid robots that so closely mimic real homo sapiens that it seems churlish or cruel to dismiss their claims for respect and dignity (and perhaps even love). In between these extremes we find the cute, already well-implemented automatons. As Sherry Turkle has noted, when confronted by a robotic stuffed animal called Paro, children "move from inquiries such as 'Does it swim?' and 'Does it eat?' to 'Is it alive?' and 'Can it love?'"
I want to ask another, perhaps childish, question: can the bot speak? The question will be particularly urgent by 2020, but it is relevant even now, as corporate and governmental entities deploy armies of propagandizing bots to disseminate their views and drown out opposing voices. Consider the experiment run on Twitter by Tim Hwang, of the law firm Robot, Robot & Hwang, as he explained in conversation with Bob Garfield:
GARFIELD: Earlier this year, 500 or so Twitterers received tweets from someone with the handle @JamesMTitus who posed one of several generic questions: How long do you want to live to, for example, or do you have any pets? @JamesMTitus was cheerful and enthusiastic, kind of like those people who comment on the weather and then laugh heartily. Perhaps because of that good nature or perhaps because of his inquiring spirit and interest in others, @JamesMTitus was able to strike up a fair number of continuing conversations. Only thing is, there is no @JamesMTitus. He, or it, is a bot, a software program designed to engage actual humans in social networks.
HWANG: Bots are everywhere online. Some might actually argue that there’s more bots online than there are, are humans online. . . [So I said:] Design a bot and we'll drop these bots into the network and see how they do. And the scoring was the bot got a point for every mutual connection it created, you know, it follows someone, someone follows it, and three points for every @reply, retweet, you know, conversation piece they're able to generate.
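The scoring rule Hwang describes is simple enough to sketch. The function below is an illustrative reconstruction of that rule, not the contest's actual code; the names and structure are my own assumptions:

```python
def score_bot(mutual_follows: int, conversations: int) -> int:
    """Score a bot under the rule Hwang describes: one point for
    every mutual connection it creates (it follows someone, and that
    person follows it back), and three points for every @reply,
    retweet, or other conversation piece it generates.

    This is a hypothetical reconstruction for illustration only.
    """
    return mutual_follows + 3 * conversations


# Example: a bot that earned 40 mutual follows and provoked 12
# conversational interactions would score 40 + 3 * 12 = 76 points.
print(score_bot(40, 12))  # 76
```

Note how heavily the rule weights conversation over mere connection: under this scoring, a bot that sparks replies is worth three times one that merely accumulates followers, which is presumably why @JamesMTitus was built to ask questions.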
Hwang has several forward-thinking proposals for bot swarms in networks:
So what if we were to launch a whole swarm of them to, to change the way people connect on a very large scale, right, 10,000, 100,000 people? . . . They bring communities together and then the bots sort of deactivate, right? . . . Now, in addition to building connections, one of the worries, of course, is that the bots might be used to destroy connections or to disrupt emergent processes happening online. . .
You can think of the Iran election or the Twitter interactions around the Syrian protests. And there’s been some indication that bots are actually already in the mix, attempting to disrupt or support various sides of . . . that particular protest. And so, I think there’s a number of good uses and bad uses. I think ultimately what these bots do is they're kind of a tool for social architecting, for better or for worse.
I wonder if the bots may end up being far more than an "architecture" for expression online. The rise of a Fifty Cent Party (五毛) in China, tens of thousands of bloggers paid to echo the official party line, has already been well-documented. And just as the chairman of Foxconn has pledged to replace unruly workers with subservient robots, perhaps the propaganda department of the CCP will someday find it more cost-effective to unleash bots than to pay shills to promote its legitimacy. As image spam forms a "reserve army of digitally enhanced creatures who resemble the minor demons and angels of mystic speculation, luring, pushing and blackmailing people into the profane rapture of consumption," we're bound to see more robots optimized to thrive in flattened and deracinated communicative environments.
In the United States, it is difficult to envision a future Supreme Court permitting limits on or regulation of bot storms. If money is speech, its programmed implementations will probably also be protected as expression. But there is a ray of hope behind Pandora's bots. As Danielle Citron has convincingly argued, private online intermediaries have their own rights of freedom of expression and control over the content exchanged on their platforms. Though often derided as a form of digital feudalism, this power of the intermediary, when used constructively, could expose and perhaps even silence disruptive interventions by bot swarms.
As moneyed interests use "big data" to slice and dice the electorate, bots will both comb websurfing records to find targets and promote messages designed to sway them. Personalization will allow bots to send "Save Medicare" ads to worried seniors and "cut entitlements" ads to Tea Partiers. Programmers will try to identify every imaginable subgroup: octogenarians particularly worried about Medicare Advantage Plans, angry young men with no dependents, angry old men who can't stand Medicare Advantage—you get the picture. As the messages proliferate, individuals will hopefully migrate to the platforms best able to explain how certain messages were developed, and to block ones from clearly untrustworthy sources, all in a transparent and challengeable manner. Just as people fled Yahoo! mail for Gmail after the former became inundated with spam and bad UX, those besieged by bots may migrate to the platforms that can explain and tame the blooming, buzzing confusion of an automated communicative landscape.
So while netizens might be upset when Twitter takes down a spoof account like QantasPR, they should also be thankful that platforms like Twitter are not hamstrung by the absolutist interpretations of the First Amendment that have rendered the American public sphere a subject of mockery worldwide. Ideally, platforms like Twitter, Facebook, and Google will market themselves in part by their ability to expose and deter manipulative public relations campaigns conducted by means of bots, be they autonomous or not. The bot may be able to speak, but we don't have to listen.