For the Balkinization symposium on Richard L. Hasen, Cheap Speech: How Disinformation Poisons Our Politics-and How to Cure It (Yale University Press, 2022).
Julie E. Cohen
Rick Hasen’s timely and important book links disinformation-based strategies for election manipulation to the platform-based, massively intermediated information infrastructures that enable them. This essential contribution comes at a time when policymakers are, finally, paying systematic attention to platforms as sources of democratic vulnerability. They are not, however, paying attention in quite the right way, and Hasen’s exposition suggests some important policy shifts. In particular, as Hasen recognizes, strategies that regulate audience targeting, drawn from the privacy governance toolkit, can (and should) supplement the traditional election law toolkit.
Mechanisms for audience targeting are not the only platform feature of concern, however. Platform-based, massively-intermediated information systems are continually, iteratively optimized to amplify content based on its ability to drive user engagement—and, therefore, to privilege outrage and volatility over deliberation, reasoned contestation, and truth production. Although these systems were not designed for the principal purpose of undermining democratic governance, their affordances invite and amplify disinformation-based destabilization attacks to which democratic political systems are particularly ill-equipped to respond. Systems thinking about disinformation-based strategies for election manipulation requires attention not only to tools for audience design but also to the affordances that amplify destabilization attacks.
Systems for “Content Moderation”
For many, debates about content governance within platform-based speech environments are first and foremost about “content moderation”—an activity revolving around post hoc review of content posted by users. The content moderation frame encourages systems thinking of a sort, but its insistent focus on post hoc intervention systematically forecloses attention to vitally important pieces of the disinformation puzzle.
Broadly
speaking, content moderation involves flagging objectionable content for review
and possible removal. In operation, content moderation systems involve considerable variation
and complexity. The
content in question may be flagged by users or identified by automated means;
it may be flagged after posting or identified and quarantined mid-upload; it
may be removed or subjected to a lesser sanction (such as downranking,
shadowbanning, or demonetization); and the user who attempted to post it may or
may not be given notice and/or the opportunity to seek review by a human
moderator. In the case of Facebook/Meta, a very few high-profile cases are
taken up by the Facebook Oversight Board, which may make a recommendation to
Facebook on how to treat similar items.
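To make that workflow concrete, here is a minimal sketch (in Python, with invented thresholds, sanction names, and review logic that describe no actual platform's system) of the post hoc shape of such a pipeline: content arrives, is flagged or scored, and is then sanctioned or escalated.

```python
from dataclasses import dataclass
from enum import Enum

class Sanction(Enum):
    NONE = "none"
    DOWNRANK = "downrank"
    SHADOWBAN = "shadowban"
    DEMONETIZE = "demonetize"
    REMOVE = "remove"

@dataclass
class Post:
    post_id: str
    text: str
    user_flags: int          # number of user reports
    classifier_score: float  # automated policy-violation score, 0.0 to 1.0

def escalate_to_human_review(post: Post) -> Sanction:
    # Placeholder for the (comparatively rare) human-review path; a handful
    # of high-profile appeals go further, e.g. to an oversight body.
    return Sanction.DEMONETIZE

def moderate(post: Post) -> Sanction:
    """Hypothetical post hoc moderation decision: automated score and
    user flags determine the sanction; borderline cases go to humans."""
    if post.classifier_score > 0.95:
        return Sanction.REMOVE            # clear violation: removed outright
    if post.classifier_score > 0.7 or post.user_flags >= 5:
        return escalate_to_human_review(post)
    if post.classifier_score > 0.4:
        return Sanction.DOWNRANK          # lesser sanction short of removal
    return Sanction.NONE

print(moderate(Post("p1", "example text", user_flags=6, classifier_score=0.5)))
```

The point of the sketch is structural: every decision happens after the content exists and, typically, after it has already begun to circulate.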
One
important shortcoming of the content moderation frame is that it simply doesn’t
fit the problem. As evelyn douek explains, its post hoc, atomistic
orientation suggests a comparison to courts—and, from that standpoint, today’s
privatized and largely opaque processes fall far short of those that users
might have a right to expect. Reforms designed to make content moderation
processes more recognizably quasi-judicial also are doomed to fail, however,
because of the sheer scale of content moderation operations and the automated tools
they require. For douek, those conclusions point toward designing administrative
content removal processes capable of enacting regularized systemic
interventions.
The
more important objection to the content moderation frame, however, involves
what lies outside it. Systems for post hoc content moderation that leave underlying ex ante mechanisms for
content immoderation undisturbed don’t and can’t significantly
diminish flows of disinformation because they ignore the features of the
platform environment that optimize it for disinformation to begin with. An
important strength of Cheap Speech, which sets it above many other
discussions of content moderation’s defects, is its willingness to confront the
content immoderation problem.
Systems for Audience Targeting
As Hasen recognizes, a more serious discussion about content governance requires consideration of mechanisms for content provision as well as those for content takedown—and that requires attention to the ad-based digital business models on which (many) platform-based information systems rely. Ad-based business models in turn rely heavily on mechanisms for audience targeting and microtargeting, so regulatory attention to those mechanisms seems only logical. But proposals to regulate audience targeting mechanisms, which blend elements from election law and privacy law frames, also leave a vitally important piece of the disinformation puzzle unacknowledged and unaddressed.
Broadly
speaking, tools for audience targeting offered within platform-based,
massively-intermediated information systems allow third-party advertisers to
specify the types of audiences they want to reach. Here too, there is
considerable room for variation. Large national and regional businesses seeking
to position their offerings for maximum appeal use filters designed to match particular
demographic groups with ads designed for them. Local businesses rely heavily on
geographic filters and may also use other demographic filters to the extent
feasible and permissible—so for example, sporting equipment stores can target
suburban parents and nail salons can target women, but landlords cannot target
based on race, ethnicity, or religion. Political campaigns and interest groups attempt
to target candidate appeals and issue ads for maximum uptake by favorably
predisposed voters. Digital adtech companies, for their part, allow advertisers
to select demographic parameters and/or to target ads to various predefined
groups. More sophisticated advertisers, including some political campaigns, can
use information they have collected using various methods—website registration,
bespoke apps, requests for event tickets, and so on—to specify target audience parameters more precisely.
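Schematically, and with field names invented purely for illustration rather than drawn from any platform's actual ad interface, the advertiser side of such a dashboard amounts to a structured specification along these lines:

```python
# Hypothetical advertiser-side targeting specification; field names are
# illustrative only and do not correspond to any platform's actual ad API.
campaign = {
    "ad_id": "fall-promo-001",
    "demographics": {"age_range": (25, 45), "parental_status": "parent"},
    "geography": {"radius_miles": 15, "around_zip": "22101"},
    "custom_audience": "uploaded_contact_list",      # e.g., event-ticket requesters
    "excluded_categories": ["housing", "credit", "employment"],  # legally restricted targeting
}
```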
Crucially,
all of this activity involves not only conventional strategies for demographic
segmentation but also tools for behavioral profiling designed by both platforms
and the digital marketing consultants who use their services. So, for example,
a cosmetics company might target ads for a hair loss remedy to men aged 30 to
50 who buy high-fashion brands—or it might microtarget based on other
behavioral data, such as time spent using dating apps or browsing tips on how
to minimize the appearance of aging. A political campaign might use its
database of voters who lean conservative on education policy to target ads to
demographically similar audiences—or it might microtarget appeals using
particularly hot-button language to those whose browsing behavior reveals
particularly high engagement with content advocating strong parental control of
education. Actors wishing to conduct disinformation-based destabilization
attacks can rely on capabilities for behavioral profiling to microtarget audiences whose emotional buttons
they think they can push.
For example, they can microtarget ads for YouTube videos opposing vaccination to
those whose browsing behavior reflects engagement with conspiracy-themed
content more generally.
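Layered on top of a demographic specification like the hypothetical one sketched above, behavioral microtargeting might look something like the following; the segment names, scores, and inclusion rule are invented for illustration:

```python
# Hypothetical behavioral-profile layer on top of the demographic spec above.
# Segment names, scores, and the inclusion rule are invented for illustration.
behavioral_segments = {
    "engages_with_conspiracy_content": 0.82,   # inferred from browsing/watch history
    "high_outrage_reactivity": 0.74,           # inferred from past reactions and shares
    "parental_control_advocacy": 0.91,
}

def in_target_audience(segments: dict[str, float], threshold: float = 0.7) -> bool:
    """A microtargeting rule: include the user if any tracked behavioral
    signal exceeds the threshold, regardless of demographic fit."""
    return any(score >= threshold for score in segments.values())

print(in_target_audience(behavioral_segments))  # True
```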
The
theoretical linkages between capabilities for microtargeting and disinformation
uptake seem straightforward enough. Political scientists who study voting
mechanisms have shown that single-party legislative districts (whether created deliberately or
via self-sorting)
tend to entrench differences of political opinion. And there is some evidence that ranked-choice voting
mechanisms may increase civility and reduce polarization. By analogy, it makes
sense to think that legislation banning or limiting targeting
and microtargeting
might make it harder for disinformation campaigns to flourish because it would
effectively require lumping together audiences of different political
persuasions.
In
reality, however, proposals to restrict or ban political targeting and microtargeting
are unlikely to counter disinformation-based destabilization attacks
effectively, for two principal reasons. First, the election law and privacy law
frames on which such proposals rely tend to exclude from coverage the very
types of communications and the very types of targeting that represent
disinformation operators’ stock in trade. Begin with election law. As Hasen
recognizes, many communications that would not qualify as covered political
advertisements under current law, because they do not advocate for or against a
specific candidate, are nonetheless crafted for politically polarizing effect.
By tweaking definitions, one might expand current coverage to include certain
types of issue ads pertaining to matters or candidates currently on the ballot,
but the communications used in destabilization attacks often are not so easy to
characterize. Election
laws also typically balance speech and anti-corruption values by excluding small-dollar
expenditures from reporting and disclosure requirements. Disinformation
campaigns, however, do not need to spend large sums to produce large impacts via
user-driven uptake and social circulation, and some such campaigns rely on posts that are not ads at
all.
Privacy
governance, for its part, tends to be conceptualized through the lens of privacy
self-management, or privacy as control over one’s own data. (I’ve discussed the
inadequacies of that paradigm in more detail elsewhere.) Privacy regulations drafted to facilitate
privacy as control allow consensual targeting, and that makes them particularly
ill-suited to combating disinformation-based attacks. Today’s digital political
campaigns are chiefly crafted to exploit the consent of the willing, beginning
with contact information supplied by interested voters and relying on processes
of social circulation to spread their messages more widely. When a political
campaign organized around a candidate or a ballot issue chooses to spread disinformation, its message can spread readily
among those willing to be targeted. Privacy statutes crafted around notions of
privacy as control also tend to exclude practices of so-called contextual advertising, in which advertisers bid to
target their ads next to particular types of content rather than targeting
consumers directly, but contextual advertising is an
important part of the disinformation playbook. Contextual advertising exceptions in proposed laws
to restrict or ban political targeting and microtargeting create generous loopholes
for disinformation-based destabilization attacks to persist and thrive.
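In purely schematic terms, and with every field invented for illustration, a contextual placement targets the content an ad will appear next to rather than the profile of the person who sees it, which is precisely why the exception is attractive to disinformation operators:

```python
# Hypothetical contextual-placement bid: the advertiser specifies adjacent
# content categories, not user attributes, so no "targeting of consumers"
# occurs in the statutory sense. All values are invented for illustration.
contextual_bid = {
    "ad_id": "issue-ad-017",
    "adjacent_content_topics": ["election fraud claims", "vaccine skepticism"],
    "max_bid_per_thousand_impressions": 0.40,
}
```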
The
more significant problem with attempting to fight disinformation by regulating
tools for audience targeting and microtargeting, however, is that such efforts
ignore other, less visible aspects of the platform business model that also play
a major role in driving disinformation’s spread and uptake.
Systems for Content Amplification
Tackling content governance within platform-based, massively-intermediated information environments requires consideration of all of the ex ante mechanisms for content immoderation that platforms employ, including not only tools for audience targeting and microtargeting but also tools for content amplification. Selective, strategic amplification for user engagement underwrites all parts of the platform business model, including both algorithm design and ad pricing.
Consider,
once again, the platform advertising dashboard. As we’ve seen, advertisers can use
the dashboard to communicate their wishes, selecting demographic parameters for
their target audiences or supplying more detailed and data-driven behavioral
profiles. But the dashboard is a tool for two-way communication. Through it,
platform operators conduct automated, real-time
auctions that perform two functions at once. They pit would-be advertisers
against one another to secure desired placements, and at the same time they
train the machine learning processes underlying the auctions to reward more
effective ads—specifically, ads that produce greater user engagement and social
circulation—with better placements. Functionally, then, the dashboard is also
an engine for flash trading in economies of user attention and engagement that
platforms themselves work to produce, and it rewards disinformation-based
destabilization attacks for their efficacy at generating engagement.
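A minimal sketch of that dynamic, assuming a simple bid-times-predicted-engagement scoring rule (a common pattern in ad auctions generally, not a description of any particular platform's system), shows how engagement-optimized auctions can reward inflammatory content with cheaper placements:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount: float                 # dollars offered for the placement
    predicted_engagement: float   # model-estimated likelihood of clicks/shares

def rank_bids(bids: list[Bid]) -> list[Bid]:
    """Rank competing ads by bid weighted by predicted engagement.
    Content expected to generate more engagement wins placements at a
    lower cash price, which is how engagement-optimized auctions end up
    subsidizing inflammatory material."""
    return sorted(bids, key=lambda b: b.amount * b.predicted_engagement, reverse=True)

bids = [
    Bid("retailer", amount=5.00, predicted_engagement=0.02),
    Bid("outrage_campaign", amount=1.00, predicted_engagement=0.15),
]
print(rank_bids(bids)[0].advertiser)  # "outrage_campaign" wins despite bidding less
```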
As
that description is intended to suggest, moreover, the advertising dashboard and
the machine learning engine behind it represent only parts of a larger whole
that is oriented first and foremost toward keeping eyeballs on the platform. And,
crucially, mechanisms for controlling audience design via targeting and
microtargeting are not the only platform features that work to circulate
content to users. Equally important, though far less visible to the external
eye, are the internal engagement levers that amplify certain types of content. Platforms
continually optimize and reoptimize for user
engagement, routing,
suggesting, and upranking items that, based on past data, are likeliest to
generate interaction and social recirculation. Content that generates outrage, especially content that plays to partisan extremes, does well on those metrics.
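In schematic form, and with features and weights invented for illustration, the ranking logic looks something like this:

```python
# Schematic feed-ranking rule; features and weights are invented to illustrate
# how optimizing for predicted interaction upranks the most provocative items.
candidate_items = [
    {"id": "local-news",       "predicted_shares": 0.01, "predicted_comments": 0.02, "predicted_reactions": 0.05},
    {"id": "partisan-outrage", "predicted_shares": 0.08, "predicted_comments": 0.12, "predicted_reactions": 0.20},
]

def engagement_score(item: dict) -> float:
    # Invented weights: shares and comments drive recirculation hardest.
    return 2.0 * item["predicted_shares"] + 1.5 * item["predicted_comments"] + item["predicted_reactions"]

feed = sorted(candidate_items, key=engagement_score, reverse=True)
print([item["id"] for item in feed])  # the outrage item ranks first
```

Nothing in such a rule asks whether an item is true; it asks only whether the item will provoke interaction.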
Whether
this business model “works” in the conventional sense—i.e., whether it produces
the sorts of conversion ratios that commercial advertisers care about—is beside
the point. Commercial advertisers understand one thing very well: The dominant
platforms, particularly those operated by Facebook/Meta and Google, are where
the eyeballs are. Adversarial attackers, meanwhile, do not care about efficacy
in the way commercial advertisers (maybe) do. The question is not whether every
attack works but whether some (enough) can leverage platform-provided engagement
levers to achieve maximum uptake. For any particular attack, the measure of success is its decontextualized and apparently
uncontrolled circulation—from
users to other users and user groups, from the originating platform to other
platforms, and in a few lucky cases, to coverage by mainstream broadcast and
print media. And because the platform business model mandates optimization for
engagement, platforms have little incentive to institute more global measures
designed to undercut destabilization attacks.
It
is clear, however, that platforms have the tools to dampen viral circulation,
should they choose or be required to do so. Processes of social
circulation are sometimes described as “organic”—and, to be fair, they are
designed to exploit irrational tendencies of human social groups—but there is
nothing natural about them. Technical and organizational processes can be
reengineered. Internal processes now trained single-mindedly on optimization
for engagement could be retrained to respond differently to rapid spread. Practices
like Facebook’s/Meta’s Xcheck program, which gives certain
high-profile accounts leeway to violate the company’s own content policies based
on the engagement they generate, could and should be discontinued. Flash
trading dashboards could be constrained to offer transparent, fair, and equal pricing.
Disclosures could reach beyond “transparency theater” to operational reality, shedding
meaningful light on how both internal engagement levers and outward-facing
flash trading dashboards work. These are solvable problems.
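One hypothetical form such reengineering could take is a virality "circuit breaker" that treats unusually rapid spread as a signal for friction rather than amplification; the thresholds and responses below are invented for illustration:

```python
# Hypothetical "virality circuit breaker": instead of upranking an item whose
# spread is accelerating, the system adds friction until review catches up.
# Thresholds and actions are invented for illustration.
def circulation_response(shares_last_hour: int, shares_prior_hour: int) -> str:
    growth = shares_last_hour / max(shares_prior_hour, 1)
    if growth > 10 and shares_last_hour > 1000:
        return "pause amplification; queue for priority review"
    if growth > 3:
        return "stop algorithmic upranking; require click-through before reshare"
    return "no intervention"

print(circulation_response(shares_last_hour=5000, shares_prior_hour=200))
```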
Putting All the Systems Together
A signal virtue of Cheap Speech is its willingness to reach beyond the conventional tools of election law to address election-related problems. To respond effectively to platform-facilitated destabilization attacks, however, it is necessary to acknowledge all of the mechanisms in play. Disinformation-based destabilization attacks thrive within platform-based, massively-intermediated information environments constructed and iteratively fine-tuned both to enable audience targeting and to amplify the content that generates the most engagement. An effective response to the pathologies of cheap speech must address both systems.
Julie E. Cohen is the Mark Claster Mamolen Professor of Law and Technology at Georgetown Law. You can reach her by e-mail at jec@law.georgetown.edu.