Balkinization
Thursday, November 01, 2018
AIs as Substitute Decision Makers
Guest Blogger
Ian Kerr
For the Symposium on The Law and Policy of AI, Robotics, and Telemedicine in Health Care.
“Why, would it be unthinkable that I should stay in the saddle however much the facts bucked?”
Heading in
one direction, patients’ decision-making
capacity is increasing, thanks to an encouraging shift in patient
treatment. Health providers are moving away from substitute
decision-making—which permits
a designated person
to take over a patient’s health care decisions, should that patient’s cognitive
capacity become sufficiently diminished. Instead, there is a movement towards supported decision-making, which
allows patients
with diminished cognitive capacity to make their own life choices through the
support of a team of helpers.
Heading in
the exact opposite direction, doctors’ decision-making
capacity is diminishing, due to a potentially concerning shift in the way doctors diagnose
and treat patients. For many years now, various forms of data analytics and
other technologies have been used to support doctors’ decision-making. Now, doctors
and hospitals are starting to employ artificial intelligence (AI) to diagnose
and treat patients, and for an existing set of sub-specialties, the more honest
characterization is that these AIs no longer support doctors’ decisions—they
make them. As a result, health providers are moving right past supported
decision-making and towards what one might characterize as substitute decision
making by AIs.
In this
post, I contemplate two questions.
First, does
thinking about AI as a substitute decision-maker add value to the discourse?
Second, putting
patient decision making aside, what might this strange provocation tell us about
the agency and decisional autonomy of doctors, as medical decision making
becomes more and more automated?
1. The Substitution Effect
In a very
thoughtful elaboration of Ryan Calo’s well known claim that robots
exhibit social
valence, the main
Balkinizer himself, Jack
Balkin, has made
a number of interesting observations about what happens when we let robots and AIs stand in for humans and treat them as such.
Jack calls
this the “substitution effect”. It occurs
when—in certain contexts and for certain purposes—we treat robots and AIs as special
purpose human beings. Sometimes we deliberately construct these substitutions,
other times they are emotional or instinctual.
Jack is very careful to explain that
we ought not to regard robots and AI substitutes as fully identical to that
for which they are a substitute. Rather—as with artificial sweeteners—it is merely a provisional
equivalence; we reserve the right to reject the asserted identity whenever
there is no further utility in maintaining it. Robots and AIs are not persons even if there is practical
value, in limited circumstances, in treating them as such. In this sense, Jack sees
their substitution
as partial. Robots and AIs only take on particular
aspects and capacities of people.
It is the very fact that the substitution is
only partial—that robots and AIs “straddle the line between selves and tools”—that
makes them, at once, both better and worse. A robot soldier may be a
superior fighter because it is not
subject to the fog of war. On the other hand, its quality
of mercy is most
definitely strained (and “droppeth [not] as the gentle rain from heaven upon the
place beneath”).
Still, as Jack explains, there is
sometimes practical legal value in treating robots as if they are live agents,
and I agree.
As an example, Jack cites Annemarie
Bridy’s idea that a
court might treat AI-produced art as equivalent to a ‘work made for hire’ if
doing so minimizes the need to change existing copyright law. As the regal Blackstone
famously described
legal maneuvers of this sort:
We inherit an old Gothic castle, erected in the days
of chivalry, but fitted up for a modern inhabitant. The moated ramparts, the
embattled towers, and the trophied halls, are magnificent and venerable, but
useless. The inferior apartments, now converted into rooms of convenience, are
cheerful and commodious, though their approaches are winding and difficult.
Indeed, had Lon Fuller lived in these
interesting times, he would have appreciated the logic of the fiction that treats robots as if they have legal attributes for special purposes. Properly
circumscribed, provisional attributions of this sort might enable the law to keep
calm and carry on until such time as we are able to more fully understand the
culture of robots in healthcare and produce more thorough and coherent legal
reforms.
It was this sort of motive that
inspired Jason Millar and me, back in 2012, to entertain
what Fuller would have called an expository
fiction (at the first ever We
Robot conference). We
wondered about the prospect of expert robots in medical decision-making.
Rejecting Richards’ and Smart’s it's-either-a-toaster-or-a-person
approach and
following Peter Kahn, Calo, and others, we take the
view that law may need to start thinking about intermediate
ontological categories
where robots and AIs substitute for human beings. Our main example is in the
field of medical diagnostics AIs. We suggest that these AI systems may, one
day, outperform human doctors; that this will result in pressure to delegate
medical diagnostic decision-making to these AI systems; and that this, in turn,
will cause various conundrums in cases where doctors disagree with the outcomes
generated by machines. We published our hypotheses and discussed the resultant
ethical and legal challenges in a book called Robot Law (in a chapter titled, “Delegation,
Relinquishment and Responsibility: The Prospect of Expert Robots”).
2. Superior ML-Generated Diagnostics
Since the publication of that
work, diagnostics generated through machine learning (ML), a popular subset of
AI, have advanced rapidly. I think it is fair to say that—despite IBM Watson’s overhyped
claims and recent
stumbles—a number of other ML-generated diagnostics have
already outperformed, or are on the verge of outperforming, doctors in a narrow range of tasks and
decision-making. Although this may be difficult to measure, one thing is
certain: it is getting harder and harder to treat these AIs as mere
instruments. They are generating powerful decisions that the medical profession
and our health systems are relying upon.
This is not surprising when one
considers that ML software can see certain patterns in medical data that human
doctors cannot. If spotting patterns in large swaths of data enables ML to
generate superior diagnostic track records, it’s easy to imagine Jack’s
substitution effect playing out in medical decision making. To repeat, no one will
claim that ML systems are people, nor will they exhibit anything like the general skills
or intelligence of human doctors. ML will not generate perfect or even near-perfect
diagnostic outcomes in every case. Indeed, ML will make
mistakes. In fact, as Froomkin et al. have demonstrated, ML-generated errors may be even more
difficult to catch and correct than human errors.
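The pattern-spotting point can be made concrete with a deliberately toy sketch. Everything here is invented for illustration (the features, values, and labels stand in for nothing real, and actual diagnostic ML involves far larger models and datasets): a nearest-neighbour classifier labels a new case by analogy to past cases, and, tellingly, returns a label without any humanly legible reason for it.

```python
import math

# Invented training data: (feature vector, diagnosis) pairs.
# The three numbers might stand in for normalized lab values; all are made up.
PAST_CASES = [
    ((0.9, 0.1, 0.8), "disease"),
    ((0.8, 0.2, 0.9), "disease"),
    ((0.2, 0.9, 0.1), "healthy"),
    ((0.1, 0.8, 0.2), "healthy"),
]

def diagnose(features):
    """1-nearest-neighbour: label a new case like its closest past case.

    The decision rule lives implicitly in the data, not in any line of code
    a doctor could read, which is part of why such errors are hard to audit.
    """
    nearest = min(PAST_CASES, key=lambda case: math.dist(case[0], features))
    return nearest[1]

print(diagnose((0.85, 0.15, 0.85)))  # resembles the first cluster: "disease"
```

Even in this four-line "model", the diagnosis follows from proximity in a feature space rather than from an articulable clinical rule, a miniature version of the audit problem described above.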
Froomkin
et al. (I am part of et al.) offer many reasons to believe
that diagnostics
generated by ML will have demonstrably better success rates than those
generated by human doctors alone.
The
focus of our work, however, is on the legal, ethical, and health policy consequences
that follow once AIs
outperform doctors. In
short, we argue that existing medical malpractice law will come to require superior
ML-generated medical diagnostics as the standard of care in clinical settings.
We go on to suggest that, in time, effective ML will create overwhelming legal,
ethical, and economic pressure to delegate the diagnostic process to machines.
This shift is what leads me to
believe that the doctors’ decision making capacity could soon diminish. I say
this because, as we argue in the article, medical decision-making will
eventually reach the point where the bulk of clinical outcomes collected in
databases result from ML-generated diagnoses, and that this is very likely to lead
to future decision scenarios that are not easily audited or understood by human
doctors.
3. Delegated Decision Making
While
it may be more tempting than ever to imbue machines with human attributes, it
is important to remember that today’s medical AI isn’t really anything more
than a bunch of clever computer science techniques that permit machines to perform tasks that would otherwise
require human intelligence. As I have tried to suggest above, recent successes in ML-generated diagnosis may catalyze
the view of AI as substitute decision makers in some useful sense.
But
let’s be sure to understand what is really going on here. What is AI really doing?
Simply
put, AI transforms a major effort into a minor one.
Doctors
can delegate to AI the work of an army of humans. In fact, much of what is actually
happening here is at best metaphorical: we allow an AI to
stand in for significant human labor that is happening invisibly, behind the scenes.
(In this case, researchers and practitioners feeding the machines massive
amounts of medical data and training algorithms to interpret, process and understand
it as meaningful medical knowledge). References to deep learning, AIs as
substitute decision makers, and similar concepts offer some utility—but they
also reinforce the
illusion that machines are smarter than they actually are.
Astra
Taylor was right to
warn us about this sleight-of-hand, which she refers to as fauxtomation. Fauxtomation occurs not only in
the medical context described in the preceding paragraph but across a broader
range of devices and apps that are characterized as AI. To paraphrase her
simple but effective real-life example of an app used for food deliveries, we
come to say things like: ‘Whoa! How did your AI know that my order would be ready twenty minutes early?’ To which the
human server at the take-out booth replies: ‘because the response was actually from
me. I sent you a message via the app once your organic rice bowl was ready!’
This example is
the substitution effect gone wild: general human intelligence is attributed to a
so-called smart app. While I have tried to demonstrate that there may be value
in understanding some AIs as substitute
decision makers in limited circumstances—because AI is only a partial
substitute—the metaphor loses its utility once we start attributing anything
like general intelligence or complete autonomy to the AI.
Having examined the metaphorical
value in thinking of AIs as substitute decision makers,
let’s now turn to my second
question: what happens to the agency and decisional autonomy of doctors if AI becomes
the de facto
decider?
4. Machine Autonomy and the Agentic Shift
Recent successes in ML-generated diagnosis (and other applications in
which machines are trained to transcend their initial programming) have
catalyzed a shift in discourse from automatic machines to machine autonomy.
With increasing frequency, the final dance between data and algorithm
takes place without our understanding, and often without human intervention or oversight.
Indeed, in many cases, humans have a hard time explaining how or why the
machine got it right (or wrong).
Curiously, the fact that a machine is capable of operation without
explicit command has come to be understood as meaning that the machine is self-governing, that
it is capable of making decisions on its own. But, as Ryan Calo has warned, “the
tantalizing prospect of original action” should not lead us to
presume that machines exhibit consciousness, intentionality or, for that
matter, autonomy. Neither is there good reason to think that today’s ML successes
prescribe or prefigure machine autonomy as something health law, policy, and
ethics will need to consider down the road.
As
the song goes, “the
future is but a question mark.”
Rather than prognosticating about
whether there will ever be machine autonomy, I end this post by considering
what happens when the substitution effect leads us to perceive such autonomy in
machines generating medical decisions. I do so by borrowing from Stanley
Milgram’s well known
notion of an ‘agentic
shift’—“the process whereby humans transfer
responsibility for an outcome from themselves to a more abstract agent.”
Before explaining how the
outcomes of Milgram’s experiments on obedience to authority apply to the
question at hand, it is useful to first understand the technological shift from
automatic machines to so-called autonomous machines. Automatic machines are
those that simply carry out their programming. The key characteristic of automatic
machines is their relentless predictability. With automatic machines, unintended consequences are to be understood as a
malfunction. So-called autonomous machines are different in kind. Instead of
simply following commands, these machines are intentionally devised to supersede
their initial programming. ML is a paradigmatic example of this—it is designed
to make predictions and anticipate unknown circumstances (think: object
recognition in autonomous vehicles). With so-called autonomous machines, the possibility
of generating unintended or unanticipated consequences is not a malfunction. It
is a feature, not a bug.
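The distinction between the two kinds of machine can be shown in a few lines of code (a minimal sketch with invented thresholds, an invented fitting method, and invented data; not a real triage system). The first function is an automatic machine: every output traces to a rule its programmer wrote. The second derives its cutoff from labelled examples, so its behaviour on unseen inputs was never explicitly specified.

```python
# Automatic machine: explicit if/then programming. Relentlessly predictable;
# any unintended output would be a malfunction.
def automatic_triage(temp_c):
    return "fever" if temp_c >= 38.0 else "no fever"

# Learned machine: the cutoff is fitted to labelled examples rather than
# written by hand (here, the midpoint between the two class means,
# an invented method applied to invented data).
def fit_cutoff(cases):
    fevers = [t for t, label in cases if label == "fever"]
    normals = [t for t, label in cases if label == "no fever"]
    return (sum(fevers) / len(fevers) + sum(normals) / len(normals)) / 2

cases = [(36.5, "no fever"), (36.9, "no fever"), (38.2, "fever"), (39.1, "fever")]
CUTOFF = fit_cutoff(cases)  # the decision boundary now comes from the data

def learned_triage(temp_c):
    return "fever" if temp_c >= CUTOFF else "no fever"

# The two machines can disagree on inputs nobody explicitly programmed for:
print(automatic_triage(37.8), learned_triage(37.8))
```

Nothing here is "autonomous" in any rich sense, of course; the point is only that the learned rule's behaviour on borderline inputs was anticipated statistically rather than specified line by line, which is exactly what makes its surprises a feature rather than a malfunction.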
To
bring this back to medical decision making, it is important to see what happens
once doctors start to understand ML-generated diagnostic systems as anticipatory, autonomous
machines (as opposed to software that merely automates human decisions through if/then
programming). Applying Milgram’s notion of an agentic shift, there is a risk that
doctors, hospitals, or health policy professionals who perceive AIs as autonomous
substitute decision makers will transfer responsibility for an outcome from
themselves to the AIs.
This
agentic shift explains not only the popular obsession with AI superintelligence but also some rather stunning policy
recommendations regarding liability for robots that go wrong—including the highly
controversial report
by the European Parliament
recommending that robots and AIs be treated as “electronic persons”.
According to Milgram, when humans undergo an agentic shift,
they move from an autonomous state to an agentic state. In so doing, they no
longer see themselves as moral decision makers. This perceived moral incapacity
permits them to simply carry out the decisions of the abstract decision maker that
has taken charge. There are good psychological reasons for this to happen. An
agentic shift relieves the moral strain felt by a decision maker. Once a moral
decision maker shifts to being an agent who merely carries out decisions (in
this case, decisions made by powerful, autonomous machines), one no longer
feels responsible for (or even capable of making) those decisions.
This is something that was reinforced for me recently, when
I came to rely on GPS to navigate the French motorways. As someone who had
resisted this technology up until that point, not to mention someone who had
never driven on those complex roadways before, I felt like the proverbial cog
in the wheel. I was merely the human cartilage cushioning the moral friction
between the navigational software and the vehicle. I carried out basic
instructions, actuating the logic in the machine. In adopting this behavior, I surrendered my decisional
autonomy to the GPS. Other than programming my destination, I merely
did what I was told—even when it was pretty clear that I was going the wrong
way. Every time I sought to challenge the machine, I eventually capitulated. It
was up to the GPS to work it out. Although most people seem perfectly happy
with GPS, I felt a strange dissonance in this delegated decision making: in
choosing not to decide, I still had made a choice. I vowed to stop using GPS upon
my return to Canada so that my navigational decision making capacity would remain
intact. But I continue using it, and my navigational skills now suck,
accordingly.
My hypothesis is that our increasing tendency to treat AIs
as substitute decision makers diminishes our decisional autonomy by causing
profound agentic shifts. There will be many situations where we were previously
in autonomous states but are moved to agentic states. By definition, we will
relinquish control, moral responsibility and, in some cases, legal liability.
This is not merely dystopic doom-saying on my account. There
will be many beneficial social outcomes that accompany such agentic shifts.
Navigation and medical diagnostics are just a couple of them. In the same way
that agentic shifts enhance or make possible certain desirable social
structures (e.g., chain of command in
corporate, educational, or military environments), we will be able to
accomplish many things not previously possible by relinquishing some autonomy
to machines.
The bigger risk, of course, is the move that takes place in
the opposite direction—what I call the autonomous
shift. This is precisely the reverse of the agentic shift, i.e., the very opposite of what Stanley
Milgram observed in his famous experiments on obedience. Following the same
logic in reverse, as humans
find themselves more and more in agentic states, I suspect that we will increasingly
tend to project or attribute autonomous states to machines. AIs will
transform from their current role as data-driven
agents (as Mireille
Hildebrandt likes to call them) to being seen as autonomous and
authoritative decision makers in their own right.
If this is correct, I am now able to answer my second
question posed at the outset. Allowing AIs to act as substitute decision makers,
rather than merely as decisional supports, will indeed impact the agency
and decisional autonomy of doctors. This, in turn, will impact doctors’
decision making capacity just as my own was impacted when I delegated my
navigational decision making to my GPS.
Ian Kerr is the Canada
Research Chair in Ethics, Law and Technology at the University of Ottawa, where
he holds appointments in Law, Medicine, Philosophy and Information Studies. You
can reach him by e-mail at iankerr at uottawa.ca or on twitter @ianrkerr
Posted 9:00 AM by Guest Blogger