Balkinization  

Monday, June 25, 2012

Automated Arrangement of Information: Speech, Conduct, and Power

Frank Pasquale



Tim Wu's opinion piece on speech and computers has attracted a lot of attention. Wu's position is a useful counterpoint to Eugene Volokh's sweeping claims about First Amendment protection for automated arrangements of information. However, neither Wu nor Volokh can cut the Gordian knot of digital freedom of expression with maxims like "search is speech" or "computers can't have free speech rights." Any court that respects extant doctrine, and the normative complexity of the new speech environment, will need to take nuanced positions on a case-by-case basis.

Digital Opinions

Wu states that "The argument that machines speak was first made in the context of Internet search," pointing to cases like Langdon v. Google, Kinderstart, and SearchKing. In each scenario, Google successfully argued to a federal district court that it could not be liable in tort for faulty or misleading results 1) because it "spoke" the offending arrangement of information and 2) the arrangement was Google's "opinion," and could not be proven factually wrong (a sine qua non for liability).

Most legal attention to those cases has focused on the second proposition. Wu takes on the first. Wu analogizes Google's search engine to a Frankensteinian monster, and argues that such runaway artificial agents should not be given rights (or perhaps even the right to express themselves on behalf of their creator). This is probably the weakest argument in the piece. It's difficult to figure out what our proper legal or even moral intuitions would be if such a cyborgish entity existed. (Lawrence Solum goes so far as to say that "we don’t know what we should think about personhood for artificial agents.") Eugene Volokh offers the counter-example of an animatronic sculpture that speaks, and argues that "The government can’t restrict what the sculpture is programmed to say . . . because the artist is endowed with constitutional rights and the restriction would restrict the artist’s right to communicate (and the listeners’ right to hear)." (Julian Sanchez pounces on the example at a more visceral level, depicting Wu's views as a thinly veiled effort to empower censorship of persons by censoring their tools.)

Volokh's argument against Wu here struck me as slightly odd, given the position he took last year on the BART system's decision to shut down cell phone service during the Oscar Grant protests. Volokh defended the BART system's right to use its control over the "cell phone hardware that was on its property" to stop "bad people" from organizing their activity. I would suppose, then, that Volokh wouldn't oppose the government's conditioning a license to a cable company on that company's willingness to adopt a policy like Wu's? It's a complex area of law, but it seems odd for Volokh to be so skeptical of one way of regulating technology that aids speech, and so firmly supportive of another.*

Wu's best arguments against giving "free speech rights" to computers are consequentialist. As he observes, "Computers make trillions of invisible decisions each day; the possibility that each decision could be protected speech should give us pause." Imagine, for instance, a credit reporting agency telling you that it had the right to say whatever it wanted about you, and you had no right to correct its report. Privacy and antitrust law in information industries could not survive such a broad reading of the First Amendment. As Wu notes, "Facebook’s computers may ["speak" by] widely sharing your private information; . . . recommendations made by online markets like Amazon could one day serve as a means for disadvantaging competing publishers." Even if we consider these actions to be speech, it's a kind of expression that has long been regulated. No plausible digital activist platform wants the government to dictate political results in Google or Facebook. The concerns Wu and Volokh are addressing revolve around the commercial, not political, speech of corporations.

There is clear precedent endorsing the application of antitrust law in information industries. Even the majority opinion in the Supreme Court's highwater mark of First Amendment law trumping privacy concerns (Sorrell v. IMS Health) clearly indicated that privacy law in general was not at risk of being swept away by constitutional challenges:

[T]he State might have advanced its asserted privacy interest by allowing the information’s sale or disclosure in only a few narrow and well-justified circumstances. See, e.g., Health Insurance Portability and Accountability Act of 1996, 42 U.S.C. §1320d–2; 45 CFR pts. 160 and 164 (2010). A statute of that type would present quite a different case than the one presented here. But the State did not enact a statute with that purpose or design.

That said, IMS Health is a bad case for Wu, because it does grant First Amendment protection to largely automated data flows. Judge Selya of the First Circuit had ventured a position like Wu's (analogizing the restriction of data flows to the restriction of beef jerky) in another case involving IMS Health:

We say that the challenged elements of the Prescription Information Law principally regulate conduct because those provisions serve only to restrict the ability of data miners to aggregate, compile, and transfer information destined for narrowly defined commercial ends. In our view, this is a restriction on the conduct, not the speech, of the data miners. In other words, this is a situation in which information itself has become a commodity. The plaintiffs, who are in the business of harvesting, refining, and selling this commodity, ask us in essence to rule that because their product is information instead of, say, beef jerky, any regulation constitutes a restriction of speech. We think that such an interpretation stretches the fabric of the First Amendment beyond any rational measure. [citation omitted]

That relegation of data processing to conduct cannot survive a decision like Sorrell, as Judge Lipez anticipated in his concurrence. But we don't need reasoning as sweeping as Selya's to clear a space for common-sense regulation and monitoring of search results. It's a logical extension of existing commercial law and principles of internet governance.

Second Amendment Rights for Drones?

While Wu's detractors are quick to push his argument down a slippery slope, they might want to watch their own footing. For example, Volokh's "animatronic sculpture" is a rather easy case for him. The more plausible and important scenario for him to address would be manipulative public relations campaigns conducted by bots. Can government require such entities to be registered, or to reveal their owner on demand? Sounds like a good example of information-promoting regulation to me, but given the war against disclosure waged by the GOP recently, one never knows how conservative thinkers will come down on the question. They might project a right to anonymity from a bot-owner to its bots, but that point of view may run aground on the shoals of cybersecurity regulation.

Volokh seems to want to keep the broader question of legal rights for non-human actors at bay, calling it "unsurprising" that we have no answers to the questions "Would Frankenstein’s monster have his own First Amendment rights? . . . A right to keep and bear arms against the farmers’ pitchforks?" Nevertheless, we quite urgently need answers to these sorts of questions in the case of drones. Would Volokh, or other defenders of First Amendment rights for automated expression, also want to see Second Amendment rights for automated self-protection? Bryan Caplan's libertarian futurism envisions "robot stewards" for the rich who wouldn't take a "transition to socialism" lying down. He frankly acknowledges that "Just because robots do all the killing doesn't mean humans won't do their share of the dying." Perhaps anticipating such a dire future, libertarians will not merely want (millions of) machines that speak, but also machines armed to the teeth.

The increasing use of long-range acoustic device (LRAD) technology also heralds a convergence of questions regarding speech, technology, and force. The LRAD is a tempting "crowd control" device because it can precisely calibrate volume up from "announcing everyone should disperse" to "very loud orders to disperse" to "pain-causing noise" and perhaps beyond. So far, it's mainly governments using LRADs, but they could easily be deployed by private corporations. Aaron Bady elaborates on some implications for the speech/conduct distinction:

To ask the question of whether an LRAD is designed to hurt people or designed to communicate across long distances with people is to mystify its central design function: it is a technology whose purpose is to FORCE you to listen and obey . . . Ideally, perhaps, the “you” it targets will obey the communicated threat, sparing police the need to force you to obey and sparing them the need to produce the spectacle of people running away while holding their ears. But the whole point of having an LRAD is to ensure that one way or another, the police can get the people they address to do what they want them to do.

You can see this blending happening, for example, on the LRAD homepage (since LRAD is both a brand name and the device it manufactures), where this fact sheet tells the story of how the LRAD is meant to be used, what they call a “layered defense/escalation of force strategy for law enforcement and government agencies.” By this, they simply mean the spectrum that connects telling people that you will hurt them if they don’t obey to actually hurting them with sound, a spectrum that cannot, as a result, clearly distinguish them.

Might we see a future Supreme Court discern, in the penumbra of the First and Second Amendments, an individual right to a layered speech and self-defense strategy via LRADs? Such a ruling could help dissolve the old speech/conduct dilemmas, vesting corporations with vast rights. As Andrew Koppelman shows, the resulting polity could be either anarchic or feudal. That's what the "constitution in exile" seems to demand, while the "republican form of government" clause is conveniently non-justiciable.

The Other Path

Fortunately, there is another path. The Supreme Court has clearly stated (in Turner Broadcasting) that the First Amendment “does not disable the government from taking steps to ensure that private interests not restrict, through physical control of a critical pathway of communication, the free flow of information and ideas.” Pro-competitive and privacy-protecting pathways to communication are critical to a robust public sphere (and if you don't believe me, just ask the activists in Egypt who were identified and punished by the corrupt corporatist-state apparatus that has run that nation for years).

While some see the Wu-Volokh debate as a clash over the rights of computers and computer users, it is better characterized as a more complex argument about balancing the rights of platform owners, platform users, and the public interest. There are First Amendment values that favor Volokh’s position on animatronic sculptures, and others that support Wu's attempt to preserve privacy and antitrust law in digital media. We need to continually balance and rebalance the interests of multiple stakeholders in myriad situations, not declare, once and for all, protected or unprotected status for automated arrangements of information. If a principle of judicial minimalism and "trimming" is appropriate anywhere, it is appropriate here.

*Postscript: The California Assembly Committee on Utilities and Commerce is considering prohibiting "a provider of communication service acting at the request of a government entity, from intentionally interrupting communication service . . . except pursuant to a court order based on a finding of probable cause."

Image Credit: finishing-school, "meet / greet": "a semi-autonomous drone designed to move remotely through public space and greet individuals with multilingual salutations."

Simulposted: Concurring Opinions.
