When we talk about
AI and fact-checking, we often fixate on the informational: the deepfake, the
viral lie, or the bot. Yet the disinformation crisis is fundamentally
institutional. We have reached a crossroads where we must shift our focus from
the viral lie to the underlying political economy that shapes who defines
truth, and at what cost. If we fix the information but leave the infrastructure
of truth-making in the hands of a few market-driven empires, we have not solved
the disinformation crisis; we have simply automated it.
This institutional
struggle is at the heart of the anti-disinformation assemblage, a contingent,
often messy configuration of platforms, states, technology organizations, and
editorial actors. In an ongoing collaborative book project, my co-authors and I
argue that this assemblage is currently undergoing a profound fragmentation.
These diverse actors are held together by a struggle for definitional
authority: the power to decide what constitutes a social problem and what
requires an intervention.
The Response-Side Gap
The last decade of interdisciplinary research on disinformation shows a massive supply-side bias. We have thousands of papers on how disinformation is produced and why people believe it, but a lacuna on the response side. We need to know more about how the institutions responding to the crisis actually allocate authority in practice.
Governance interventions in this field often reinforce existing hierarchies. When we introduce AI into this mix, we are not merely adding a tool for efficiency; we are automating existing institutional biases. One clear instance is the automation of harm detection at the expense of veracity. During high-stakes elections, platforms deploy AI to suppress borderline content: material that may violate policy and that risks brand reputation.
This shifts the goal from a shared pursuit of truth to a mechanical pursuit of
market stability. AI tools are programmed to find what is least disruptive to a
platform's advertising ecosystem. We are thus moving from state-led propaganda
to platform-led digital governance, in which the authority to verify
information has shifted from public bodies to private, algorithmic entities.
Three Pillars of the AI-Truth Economy
As we explore this
shift, three critical questions emerge:
* Does AI reshuffle or entrench power? Most AI tools are beholden to the walled gardens of platforms. If a fact-checking startup builds an AI detection tool, its survival depends on API access granted by Meta or Google. In this context, AI centralizes verification infrastructure rather than democratizing it.
* Is the goal truth or market stability? AI moderation is implemented because it is scalable and cost-effective, not because it is the most accurate. We are seeing an epistemological relativism where “harm,” which carries legal and brand risks, is prioritized over “veracity.”
* Has “truth” become a luxury good? We are currently in what our team identifies as the “Retraction Era” (2022–2025). Platforms are scaling back human trust and safety teams in favor of AI to reduce costs. By failing to mandate “human-in-the-loop” oversight, law and policy have allowed a global decoupling: while the Global North retains some algorithmic protections, the Global South is left with automated-only moderation.
Consider the Tigray
War in Ethiopia. While English-language content enjoys layers of human and
algorithmic oversight, internal documents, such as the Facebook Papers,
revealed that Meta’s AI systems were blind to languages like Amharic and Oromo.
Inflammatory calls for ethnic violence remained active for days because the AI tools could not parse those languages well enough to identify the threat. Platforms prioritized the high-cost maintenance of “truth” in Western markets while leaving the Global South to be moderated by black-box systems that could not register what was being said until violence had already spilled into the streets.
Scaling the Analysis: Macro, Meso, and Micro
To understand how
this functions, we must look at the anti-disinformation assemblage at three
levels:
* Macro Level: Three digital empires are currently projecting power. The U.S. follows
a neoliberal model, privileging free markets; the EU acts as a regulatory
superpower focusing on rights; and China utilizes a state-driven model of
surveillance. AI is the technical and legal force currently shaping the
boundaries of acceptable speech globally.
* Meso Level: Initiatives such as Vera.ai and Logically Intelligence have framed
disinformation as a technical problem, solvable with software. This
technologizing of fact-checking turns an epistemic struggle into a
data-management task. Similarly, X’s Community Notes shifts the labor of
truth-seeking onto unpaid users, turning a political debate into a ranking
problem.
* Micro Level: Consider the human fact-checker. Unlike traditional journalists,
fact-checkers make explicit epistemic judgments. When President Biden claimed his uncle was eaten by cannibals, legacy outlets reported the unsubstantiated claim but hesitated to label it a lie; independent fact-checkers like Snopes rendered an explicit verdict. Yet, as these actors partner with platforms, they
become serfs, a precarious labor force for the very tech giants they are meant
to monitor.
Moving Past the Disinformation Crisis
We must recognize
that AI and law are not just tools; they are reflections of a crisis of trust
in professional authority.
To counter the
walled gardens of the digital empires, we must move toward a public-interest
infrastructure. Regulators should mandate that platforms provide real-time API
access to independent researchers. To stop the global decoupling, we must regulate the quality of AI moderation, not just the quantity of its output, perhaps by mandating minimum ratios of human reviewers with local linguistic expertise.
Law and policy can
regulate bad content while simultaneously addressing the asymmetries of
truth-making. If we do not address the infrastructure, we have not solved the
crisis. We have automated it.
Valérie Bélair-Gagnon is Associate Professor at the Hubbard School of Journalism and Mass Communication, University of Minnesota-Twin Cities. You can reach her by e-mail at vbg@umn.edu.
Collaborators on
the forthcoming book project include: Steen Steensen (OsloMet), Rebekah Larsen
(MIT), Lucas Graves (Universidad Carlos III de Madrid), Bente Kalsnes
(Kristiania University College), Oscar Westlund (OsloMet/Gothenburg), Lasha
Kavtaradze (Kristiania University College), and Reidun Samuelsen (Norwegian
Media Authority).