Balkinization  

Wednesday, August 22, 2018

Terrorist Speech and Global Platform Governance

Guest Blogger

Hannah Bloch-Wehba

Earlier this week, Julian King, the European Union’s commissioner for security, told the Financial Times that Brussels was drawing up draft legislation that would require online platforms to remove terrorist speech from their services within an hour of being posted. Because the European Commission has spent the last several years ramping up pressure on platforms to “voluntarily” participate in a range of content-removal frameworks, its move to make those arrangements compulsory comes as no real surprise. Nonetheless, the legislation would mark the first time that the EU has directly regulated how platforms handle illegal content online.

In a sense, governments’ efforts to regulate illegal content on the web—whether pirated works, child pornography, or defamatory speech—are a tale as old as time, or at least as old as the Internet. The difficulty of effectively governing online content has raised enduring questions about the wisdom of insulating intermediaries from liability for illegal content posted by users. Those efforts have also long raised questions about the scope of a nation’s prescriptive jurisdiction and its ability to apply and enforce national laws on a global Internet.

But the European Commission’s latest direction signals an innovation in online content governance: the EU is moving away from the simple threat of intermediary liability and toward legal structures that leverage private infrastructure and private decision making to carry out public policy preferences. While collateral censorship is, of course, nothing new, the Commission’s proposal raises two distinct sets of concerns. First, the new strategy sidesteps ongoing debates about the appropriate geographic reach of local content regulation by relying in part on platforms’ own terms of service and community standards as the basis for taking down content globally. Second, although the new mechanisms rely on private enterprise to partner with government and, often, to play a quasi-governmental role, mechanisms that would promote the accountability of content-related decision making are conspicuously absent.


Background

The draft legislation is likely to build on the Commission’s “Recommendation on measures to effectively tackle illegal content online,” released in March 2018. The Recommendation called on platforms to provide “fast-track procedures” to take down content referred by “competent authorities,” “internet referral units,” and “trusted flaggers,” whether the content was illegal or merely violated the platform’s own terms of service. The Commission also called on platforms to use “automated means” to find, remove, and prevent the reposting of terrorist content.

King’s recent comments suggest that the new legislation will require platforms to delete “terrorist” content within an hour after it is posted. The Recommendation published earlier this year, however, is not so limited—it applies to hate speech and to “infringements of consumer protection laws,” among other categories. King has also indicated that platforms have a growing role to play in combating the weaponization of fake news and disinformation.

Local policy, global effect

One underappreciated consequence of the EU’s new strategy for regulating content: by leveraging platforms’ own terms of service as proxies for illegality, the takedown regime will operate on a global scale, not simply within Europe. This global reach distinguishes the Commission’s policy on terrorist speech from other content-deletion controversies. Online platforms have usually tried to accommodate local policy preferences by withholding access, within a defined geographic area, to content that violates local law. Google and Facebook, for example, restrict access within Thailand to content that insults the Thai monarchy in violation of the country’s lèse-majesté law. Yet new takedown regimes challenge this tradition of geographically constrained deletion. The French data protection authority (CNIL), for instance, has taken the position that the right to be forgotten requires search engines to delist links worldwide, not simply within France or Europe. Google has resisted global delisting in the interest of ensuring that “people have access to content that is legal in their country.”

But platforms’ community standards and terms of service are drafted to apply globally, not on a country-by-country basis. Content that violates these private policies will therefore be deleted worldwide. Perhaps this is as it should be, in light of a growing consensus concerning the risks of online extremism and terrorist propaganda—it’s certainly likely to be more effective at limiting access than geo-blocking would be. But the framework also raises an obvious problem of subjectivity: in the absence of a global (or even regional) consensus on the definition of “terrorist content,” is a global deletion strategy truly prudent?

The potential for error and abuse is obvious: last year, for example, Facebook mistakenly deleted the account of a political activist group that supported Chechen independence. The likelihood that platforms will over-comply with deletion requests is particularly troubling in light of recent rightward shifts in European politics. Under the Recommendation, platforms are virtually certain to comply with government demands rather than stand up for speech rights in edge cases. If an Internet Referral Unit in Hungary flags a Facebook post supporting the Open Society Foundations as “terrorist content,” for example, the Recommendation suggests that Facebook should “fast track” the takedown and delete the content worldwide, since so-called terrorist content would presumptively violate the platform’s community standards.

Inadequate safeguards

A second set of concerns stems from the commingling of public and private authority to censor online speech. The Recommendation endorses extensive cooperation between industry and government, and illustrates the increasingly dominant role of government in informing decisions that were once largely left to private enterprise. One example: under the Recommendation, platforms are explicitly instructed to prioritize law enforcement’s takedown requests for rapid deletion, and to defer to law enforcement’s judgments about whether content violates the law or the platform’s terms of service.

Here, platforms are playing quintessentially administrative roles: setting rules, implementing policy, and adjudicating disputes about public policy outside the judicial setting. Likewise, government-led decisions to delete online content—even if ultimately implemented by private actors—resemble traditional prior restraints: they prevent the dissemination of speech without any judicial hearing on its legality, rather than punishing the speaker after the fact.

Both of these analogies, however sketchy, point to a common result: it would be appropriate to impose certain procedural or substantive safeguards to protect against over-deletion or other abuse. Embedding values of transparency, participation, reasoned decision making, and judicial review within this regime would go far to ensure that lawful speech remains protected and that government and industry, working together, do not over-censor.

But these safeguards are nowhere to be found. Perhaps the greatest obstacle to accountability is the obscurity surrounding how platforms and governments are operating together to delete content online. It is not clear that platforms always provide users with notice and an opportunity to contest the removal of content; in fact, in the case of terrorist speech, the Recommendation strongly suggests that “counter-notice” will be inappropriate. Speeding up and automating decisions on whether online content is illegal or violates terms of service will likely make the process even less transparent and accountable. Secrecy and closed-door decision making present obvious (and likely intentional) barriers to public participation. And without sufficient information about these practices, few members of the Internet-using public are in a position to bring suit.

In a sense, it’s not surprising that the Commission’s new strategy to combat unlawful content online focuses on terrorism: it’s a context in which Brussels and Washington tend to cooperate, and one in which the usual speech and privacy norms are often shoved aside. But as calls mount in Europe for platforms to take on greater responsibility for policing online content, we should be mindful of the potential global effects, as well as the absence of the safeguards that typically protect civil liberties and user rights.


Hannah Bloch-Wehba is Assistant Professor of Law at Drexel University. You can reach her by e-mail at hcb38 at drexel.edu.
