Isaac Barnes May

Anthropic’s case against the government has a religious dimension. Anthropic filed suit against the federal government after the government threatened to declare the company a supply chain risk when it objected to the use of its products in autonomous warfare and the mass surveillance of Americans. Anthropic presented the government’s actions as coercion under the First Amendment. Judge Rita Lin recently issued a preliminary injunction against the government, noting that this “appears to be classic First Amendment retaliation.” Yet the case resembles not just the prior free speech cases that Anthropic and the judge invoked, but also cases about religion. When
the rupture between the Pentagon and the company first became public,
Anthropic’s CEO Dario Amodei released a statement declaring that the
company “cannot in good conscience accede to their request.” Amodei’s
invocation of conscience as the core of Anthropic’s stand positioned the company as a kind of corporate conscientious objector. As such, its refusal may be protected as an exercise of religion under the Religious Freedom Restoration Act (RFRA).
A recent amicus filing in the Anthropic case by a group of Roman Catholic Moral Theologians and Ethicists hints at the broader religious implications of the case, arguing that Anthropic’s position corresponds with Catholic moral teaching on surveillance and the use of AI weapons. The brief relies on Catholic Just War Theory to argue, for example, that the use of AI-controlled autonomous weapons “by definition fails to meet the conditions for jus in bello required for acts of war to be morally licit in Catholic thought.” By refraining from working on these weapons, the theologians argue, Anthropic is “acting as a responsible and moral corporate citizen.” The company, the brief implies, is exercising conscience.

Over a decade ago, the Supreme Court in Burwell v. Hobby Lobby held that a for-profit
corporation was protected by RFRA in exercising religion. Hobby Lobby, a craft store chain,
could not be forced to provide health insurance coverage for contraceptives
because of its owners’ religious objections. Justice Alito, in the majority opinion, even made clear that business practices “compelled or limited by the tenets of religious doctrine” were examples of religious exercise under RFRA. Legal academics worried that there would be a spate of corporate religious liberty claims, though relatively few corporate RFRA cases materialized. If there is a sincere religious
objection to providing the Department of War with military AI, the government’s
actions against Anthropic would be subject to strict scrutiny.

There remains the question of whether Anthropic’s conscience claims are “religious” under the law. Legal notions of religion are broad; they do not require theistic belief, and they can cover practices such as ethical vegetarianism or objection to vaccination that are not tied to comprehensive metaphysical systems. I have argued elsewhere that AI-based beliefs, common within AI companies, seem to fit legal definitions of religion. Anthropic, which has ties to the Effective Altruist movement, is perhaps the most religious-seeming of the large AI companies. It is a public benefit corporation that claims to “Act for the global good” while prioritizing AI safety. Anthropic created a constitution to guide the values of its AI model, a document with at least some commonalities with religious doctrine, devoting considerable attention to its LLM Claude’s relationship to virtue and ethics. There seems little reason to doubt that Anthropic’s professed concern about killing civilians is a sincere ethical belief.

Dario Amodei explicitly cites conscience as the reason the company is unwilling to work on contracts with the Department of War involving mass domestic surveillance or fully autonomous weapons. During the Vietnam War, the conscientious objector cases United States v. Seeger and Welsh v. United States found that conscience-based refusals to render military service were “religious” for the purpose of making someone a “religious” conscientious objector to war. Seeger was not a traditional theist, while Welsh crossed the word religion off his draft form; the Supreme Court understood both to be religious. While those cases interpreted “religion” within the language of a statute, their broader implication is that claims of conscientious objection need not be rooted in established religious traditions.

Anthropic’s case in some key ways resembles Thomas v. Review Board, in which a Jehovah’s Witness filed for unemployment benefits after leaving a factory job where he had been assigned to make tank turrets, work he felt he could not in good conscience perform because it helped produce weapons. Though not all Jehovah’s Witnesses condemned this kind of work, the Supreme Court found that his objection to involvement in producing war material was religious in nature. The act of refusing to take part in producing weapons because of ethical objections to taking human life, the demand not to be involved in killing, might be inherently religious.

The biggest difference, other than the fact that Anthropic is a corporation, is that Anthropic does not object to AI weapons in all circumstances. Anthropic has stated that it might not oppose autonomous weapons if it felt they were reliable enough not to endanger civilians and U.S. soldiers. While this might appear to undermine Anthropic’s
ethical stand, such moral distinctions among weapons technologies have long been common in ethical debates about warfare. The Jesuit theologian John Ford, for instance, drew a useful distinction in 1944 between precision bombing, which he believed could be morally undertaken, and obliteration bombing of the kind carried out against cities in Japan and Europe, which he classified as an “immoral attack on the rights of the innocent.” If the technology does not allow moral use in its current form, Anthropic’s objection is still an ethical one.

Anthropic’s
objection to only certain kinds of AI use in warfare is a kind of selective conscientious objection. When the U.S. had a draft, the Supreme Court was not supportive of selective conscientious objection, rejecting in Gillette v. United States the claims of objectors who were not opposed to all war. Yet for the purposes of RFRA, this does not matter so long as the objection to the government’s burdening of the claimant’s beliefs is sincerely religious. Anthropic explains why it objects to using AI in autonomous weapons and in surveilling Americans, citing the technology’s great potential for harm. Even if Anthropic’s explanation is imperfect, considering its past contracts with the military and its belief that acceptable autonomous AI weapons could one day be developed, it has been well established since Thomas that religious liberty claims do not have to be internally consistent.

There are reasons why Anthropic might not opt to use RFRA
to defend its refusal to develop AI weapons. A RFRA victory would not salvage a frayed relationship with the Department of War and would risk future contracts. There would certainly be reputational costs for an AI company arguing that it was religious, which might cause it to be seen as odd or even cult-like. Further, other legal avenues exist for Anthropic, such as claims of compelled speech and expression, and they seem to be working effectively now. And even those opposed to the government’s actions might worry about the expansion of corporate conscience rights, however sympathetic this case may be. Yet as a matter of law, Anthropic has a religious liberty claim that could shield it from federal coercion, a claim just as strong as any involving speech. An AI company refusing for reasons of conscience to make a weapon is no less obviously religious than a craft store like Hobby Lobby refusing to provide employees with contraceptive coverage.

Isaac Barnes
May is a Resident Fellow at the Information Society Project at Yale Law School.
He is the author of two books, including American Quaker
Resistance to War, 1917–1973: Law, Politics, and Conscience. You can reach him by e-mail at
isaac.may@yale.edu.