
Thursday, October 10, 2019

Mary Anne Franks as Constitutional Truth-Teller

For the Symposium on Mary Anne Franks, The Cult of the Constitution (Stanford University Press, 2019).

Anupam Chander

The Cult of the Constitution exhorts us to see the Constitution, warts and all: to recognize that overzealous interpretation of one important value can diminish other values we hold dear. First Amendment zealotry, for example, often leaves women, minorities, and other vulnerable people at the mercy of the loudest, most profane, and most threatening voices. In sharp and powerful prose, Mary Anne Franks shows us that the costs of such zealous interpretations—whether it be of the First or Second Amendment—are often borne disproportionately by women and other marginalized groups.

She boldly observes that the Constitution was written by (and for) white men, a proposition that seems hard to refute given the facts staring us in the face: the Constitution recognized slavery (albeit implicitly) through the Three-Fifths Clause, and women didn't gain a constitutionally protected right to vote until 1920. Franks speaks truth to power and doesn't pull punches. She is willing to pierce the hagiography of the Constitution—a text that is supposed to be the very foundation of our society and nation.

I wish here to embrace her broad concerns, while offering some cautions about both her descriptive claims and her proposed solutions, focusing on issues of internet governance.

First Amendment zealotry has led internet platforms to be slow to recognize their role in perpetuating white male supremacy. Even if the First Amendment does not bind these “non-state actors,” they have often embraced a highly permissive free speech vision, and have proven reluctant to remove material, hoping that ‘good speech’ would drown out ‘bad speech’ in a marketplace of ideas. Twitter’s early mantra was to be “the free speech wing of the free speech party.” Internet platforms have convinced themselves that they are neutral actors who should stay out of disputes about the values that make up a better world. Franks refutes the notion that the platforms are neutral; she argues that remaining “neutral” by permitting harassment of women drives women off the platform. Franks is convincing in her argument that technologies are never neutral—and Vivek Krishnamurthy and I have also offered arguments in a similar vein in a new paper, The Myth of Platform Neutrality (proving that great minds think alike!).

Franks observes that internet companies have taken some positive measures, for example, creating a hashing scheme called PhotoDNA to quickly identify and disable child pornography. This has been adopted by a number of major internet platforms. Relatedly, FBI Director Christopher Wray has recently cited Facebook’s efforts to report child pornography—identifying millions of photos that might depict such horrific abuse. Wray says that “Facebook is saving lives with those tips.” Wray worries that if Facebook implements end-to-end encryption, it will be unable to identify child pornography in the future. This is a serious concern. If Facebook proceeds with its current plans, it should examine whether it can still identify child pornography. Siva Vaidhyanathan writes, “Pick your losers. Pick whom you care to protect. The kinds of people you value most will indicate whether you support the spread of strong encryption or not.” Vaidhyanathan continues, “Let’s not pretend this future will be pain-free.”
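
To make the mechanism (and Wray's worry) concrete, here is a minimal sketch of hash-based screening of the kind PhotoDNA enables. PhotoDNA itself is proprietary and computes a robust perceptual signature that survives resizing and re-encoding; the sketch below substitutes an exact cryptographic hash to stay self-contained, and the hash database is hypothetical (in practice, hashes of known images are supplied by a clearinghouse such as NCMEC).

```python
import hashlib

# Hypothetical database of fingerprints of known illegal images.
# (This entry is the SHA-256 of the demo bytes below, so the demo
# produces a match.)
KNOWN_IMAGE_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for PhotoDNA: an exact cryptographic hash. The real
    # system computes a perceptual signature, so it still matches
    # images that have been resized or re-encoded.
    return hashlib.sha256(image_bytes).hexdigest()

def screen_upload(image_bytes: bytes) -> bool:
    # True means the upload matches a known image and should be
    # blocked and reported.
    return fingerprint(image_bytes) in KNOWN_IMAGE_HASHES

print(screen_upload(b"test"))   # True: matches the known hash
print(screen_upload(b"other"))  # False: unknown content passes
```

The encryption concern follows directly from this design: the platform can screen an upload only if it can read the image bytes, which end-to-end encryption would prevent.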

Franks laments that free speech advocates have typically criticized such interventions. Franks reports that the Electronic Frontier Foundation lodged its concern that “every time a company throws a vile neo-Nazi site off the Net, thousands of less visible decisions are made by companies with little oversight or transparency.” (195) Even if that is true, requiring Facebook to keep up white supremacist material would do little to contain the various decisions Facebook makes about which posts from my more than a thousand Facebook friends to highlight. Facebook will continue making lots of invisible decisions, even if it is required to keep up the Nazi material. I agree with Franks that internet platforms’ efforts to reduce hate are helpful. Free speech advocates are also correct in pointing out that the interventions are not without a price—in the form of speech that is mischaracterized, for example. Automated content moderation systems in particular remain poor at identifying context, as Natasha Duarte, Emma Llanso and Anna Loup have argued, putting legitimate speech at risk. (In a related context, they are also poor at assessing fair use of copyrighted works, as Dan Burk has argued.) But, overall, internet companies’ interventions, such as the barring of the white supremacist site the Daily Stormer, represent a positive development.
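
A toy example illustrates the context problem that Duarte, Llanso, and Loup identify. The blocklist and posts below are hypothetical, and real systems are far more elaborate, but they face the same problem in kind: matching surface text cannot distinguish a threat from reporting about, or figurative use of, the same words.

```python
# Hypothetical blocklist of the sort that underlies naive automated
# moderation.
BLOCKLIST = {"attack", "kill"}

def flagged(post: str) -> bool:
    # Flag a post if any word, stripped of punctuation, is blocklisted.
    words = {w.strip(".,!?'\"").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

posts = [
    "I will attack you tomorrow",                   # a genuine threat
    "Survivors described the attack to reporters",  # news reporting
    "Her talk, 'Why Words Can Kill,' was moving",   # figurative use
]
for post in posts:
    print(flagged(post), "-", post)  # all three are flagged alike
```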

Active intervention by internet platforms to protect users against harassment and against those who would promote white male supremacy—their ability to not be neutral—is made possible by Section 230 of the Communications Decency Act (CDA). Platforms can take down white male supremacist content without worrying that they will be sued in the United States, because of clear protections offered in Section 230. Franks recognizes this, approving the part of the CDA that immunizes companies from liability for taking down content.

Franks, however, disagrees with the protections against distributor liability that courts have read into the CDA: “it is not necessary or beneficial to immunize Facebook and similar social media platforms from distributor liability for the posts, likes, shares and so-on of third-party content…” (171). Franks argues that distributor liability immunity creates a kind of moral hazard, allowing platforms to benefit from the circulation of hateful material without paying the price for that hate.

In fact, hateful material will drive some users off the platform. Yes, controversy may generate some engagement—but hate will also lead many to turn elsewhere, reducing the eyeballs that platforms value. Internet companies, especially ones that seek broad appeal, thus do not have a clear economic incentive to maintain controversial material. More importantly, distributor (or notice-based) liability will inevitably lead companies to delete controversial content—including content of marginalized groups that will be targeted with notices. Consider claims of sexual harassment and assault that have come to light over the last few years. It’s easy to imagine platforms immediately taking down such accusations when the accused responds by claiming that they are untrue. Without the Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir. 1997) interpretation of Section 230, would internet platforms, faced with Harvey Weinstein’s aggressive lawyers, have allowed accusations against him? The ability to tell one’s story of harassment and assault through the hashtag #MeToo depends on a set of laws protecting internet platforms carrying those stories. Indeed, according to Twitter itself, #MeToo was one of the most tweeted-about movements of 2018, along with March for Our Lives, NFL Protests, Students Stand Up, and Black Lives Matter. Twitter highlighted this via a tweet, of course.


Some would prefer an internet where controversial speech is suppressed for fear of liability. If we were honest, many of us would want an internet where only the speech we dislike is suppressed for fear of liability. But expanding our liability law to introduce notice-based liability for internet activities will inevitably suppress the speech not only of white male supremacists, but also of marginalized groups. I, for one, am not convinced that we were more equal in the 20th-century media landscape than we are in the 21st.

Franks writes beautifully. The book is immensely accessible without sacrificing scholarly precision. The Cult of the Constitution is a must-read for anyone who wishes to learn what our Constitution means in practice.

Anupam Chander is Professor of Law at Georgetown University Law Center. You can reach him by e-mail at ac1931 at georgetown.edu