Balkinization  

Thursday, November 01, 2018

AIs as Substitute Decision Makers

Guest Blogger

Ian Kerr


For the Symposium on The Law and Policy of AI, Robotics, and Telemedicine in Health Care.
“Why, would it be unthinkable that I should stay in the saddle however much the facts bucked?”

Ludwig Wittgenstein, On Certainty

We are witnessing an interesting juxtaposition in medical decision-making.

Heading in one direction, patients’ decision-making capacity is increasing, thanks to an encouraging shift in patient treatment. Health providers are moving away from substitute decision-making—which permits a designated person to take over a patient’s health care decisions, should that patient’s cognitive capacity become sufficiently diminished. Instead, there is a movement towards supported decision-making, which allows patients with diminished cognitive capacity to make their own life choices through the support of a team of helpers.

Heading in the exact opposite direction, doctors’ decision-making capacity is diminishing, due to a potentially concerning shift in the way doctors diagnose and treat patients. For many years now, various forms of data analytics and other technologies have been used to support doctors’ decision-making. Now, doctors and hospitals are starting to employ artificial intelligence (AI) to diagnose and treat patients, and for an existing set of sub-specialties, the more honest characterization is that these AIs no longer support doctors’ decisions—they make them. As a result, health providers are moving right past supported decision-making and towards what one might characterize as substitute decision making by AIs.

In this post, I contemplate two questions.

First, does thinking about AI as a substitute decision-maker add value to the discourse?

Second, putting patient decision making aside, what might this strange provocation tell us about the agency and decisional autonomy of doctors, as medical decision making becomes more and more automated?

1. The Substitution Effect

In a very thoughtful elaboration of Ryan Calo’s well known claim that robots exhibit social valence, the main Balkinizer himself, Jack Balkin, has made a number of interesting observations about what happens when we let robots and AIs stand in for humans and treat them as such.

Jack calls this the “substitution effect”. It occurs when—in certain contexts and for certain purposes—we treat robots and AIs as special-purpose human beings. Sometimes we deliberately construct these substitutions; other times they are emotional or instinctual.

Jack is very careful to explain that we ought not to regard robots and AI substitutes as fully identical to that for which they are a substitute. Rather—as with artificial sweeteners—it is merely a provisional equivalence; we reserve the right to reject the asserted identity whenever there is no further utility in maintaining it. Robots and AIs are not persons, even if there is practical value, in limited circumstances, in treating them as such. In this sense, Jack sees their substitution as partial. Robots and AIs take on only particular aspects and capacities of people.

It is the very fact that the substitution is only partial—that robots and AIs “straddle the line between selves and tools”—that makes them, at once, both better and worse. A robot soldier may be a superior fighter because it is not subject to the fog of war. On the other hand, its quality of mercy is most definitely strained (and “droppeth [not] as the gentle rain from heaven upon the place beneath”).

Still, as Jack explains, there is sometimes practical legal value in treating robots as if they are live agents, and I agree.

As an example, Jack cites Annemarie Bridy’s idea that a court might treat AI-produced art as equivalent to a ‘work made for hire’ if doing so minimizes the need to change existing copyright law. As the regal Blackstone famously described legal maneuvers of this sort:

We inherit an old Gothic castle, erected in the days of chivalry, but fitted up for a modern inhabitant. The moated ramparts, the embattled towers, and the trophied halls, are magnificent and venerable, but useless. The inferior apartments, now converted into rooms of convenience, are cheerful and commodious, though their approaches are winding and difficult.
 
Indeed, had Lon Fuller lived in these interesting times, he would appreciate the logic of the fiction that treats robots as if they have legal attributes for special purposes. Properly circumscribed, provisional attributions of this sort might enable the law to keep calm and carry on until such time as we are able to more fully understand the culture of robots in healthcare and produce more thorough and coherent legal reforms.

It was this sort of motive that inspired Jason Millar and me, back in 2012, to entertain what Fuller would have called an expository fiction (at the first-ever We Robot conference). We wondered about the prospect of expert robots in medical decision-making. Rejecting Richards and Smart’s it's-either-a-toaster-or-a-person approach and following Peter Kahn, Calo, and others, we took the view that law may need to start thinking about intermediate ontological categories where robots and AIs substitute for human beings. Our main example was medical diagnostic AIs. We suggested that these AI systems may, one day, outperform human doctors; that this will result in pressure to delegate medical diagnostic decision-making to these AI systems; and that this, in turn, will cause various conundrums in cases where doctors disagree with the outcomes generated by machines. We published our hypotheses and discussed the resultant ethical and legal challenges in a book called Robot Law (in a chapter titled “Delegation, Relinquishment and Responsibility: The Prospect of Expert Robots”).

2. Superior ML-Generated Diagnostics

Since the publication of that work, diagnostics generated through machine learning (ML), a popular subset of AI, have advanced rapidly. I think it is fair to say that—despite IBM Watson’s overhyped claims and recent stumbles—a number of other ML-generated diagnostics have already outperformed, or are on the verge of outperforming, doctors in a narrow range of tasks and decision-making. Although this may be difficult to measure, one thing is certain: it is getting harder and harder to treat these AIs as mere instruments. They are generating powerful decisions that the medical profession and our health systems are relying upon.

This is not surprising when one considers that ML software can see certain patterns in medical data that human doctors cannot. If spotting patterns in large swaths of data enables ML to generate superior diagnostic track records, it’s easy to imagine Jack’s substitution effect playing out in medical decision making. To repeat, no one will claim that these ML systems are people, nor will they exhibit anything like the general skills or intelligence of human doctors. ML will not generate perfect, or even near-perfect, diagnostic outcomes in every case. Indeed, ML will make mistakes. In fact, as Froomkin et al. have demonstrated, ML-generated errors may be even more difficult to catch and correct than human errors.

Froomkin et al. (I am part of et al.) offer many reasons to believe that diagnostics generated by ML will have demonstrably better success rates than those generated by human doctors alone.

The focus of our work, however, is on the legal, ethical, and health policy consequences that follow once AIs outperform doctors. In short, we argue that existing medical malpractice law will come to require superior ML-generated medical diagnostics as the standard of care in clinical settings. We go on to suggest that, in time, effective ML will create overwhelming legal, ethical, and economic pressure to delegate the diagnostic process to machines.

This shift is what leads me to believe that doctors’ decision-making capacity could soon diminish. I say this because, as we argue in the article, medical decision-making will eventually reach the point where the bulk of clinical outcomes collected in databases result from ML-generated diagnoses, and this is very likely to lead to future decision scenarios that are not easily audited or understood by human doctors.

3. Delegated Decision Making

While it may be more tempting than ever to imbue machines with human attributes, it is important to remember that today’s medical AI isn’t really anything more than a bunch of clever computer science techniques that permit machines to perform tasks that would otherwise require human intelligence. As I have tried to suggest above, recent successes in ML-generated diagnosis may catalyze the view of AI as substitute decision makers in some useful sense.

But let’s be sure to understand what is really going on here. What is AI really doing?

Simply put, AI transforms a major effort into a minor one.

Doctors can delegate to AI the work of an army of humans. In fact, much of what we describe as AI decision making is at best a metaphor: we allow an AI to stand in for significant human labor that is happening invisibly, behind the scenes (in this case, researchers and practitioners feeding the machines massive amounts of medical data and training algorithms to interpret, process, and understand it as meaningful medical knowledge). References to deep learning, AIs as substitute decision makers, and similar concepts offer some utility—but they also reinforce the illusion that machines are smarter than they actually are.
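
To make that hidden labor concrete, here is a deliberately toy sketch in Python. Everything in it (the feature names, the patient records, the labels) is invented for illustration, and real diagnostic systems are vastly larger and more carefully validated. The point is simply that the "intelligence" is a statistical model fit to examples that humans collected and labelled.

    # A hypothetical, toy illustration only: the "AI diagnosis" is just a
    # statistical model fit to data that humans collected and labelled.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Step 1: human labour. Clinicians assemble and label past cases.
    # Each (invented) row is a patient: [age, blood_marker_a, blood_marker_b]
    records = np.array([
        [54, 1.2, 0.7],
        [61, 2.9, 1.4],
        [38, 0.8, 0.3],
        [72, 3.1, 1.9],
    ])
    labels = np.array([0, 1, 0, 1])  # 1 = disease present, assigned by human experts

    # Step 2: "training". The algorithm fits weights to the human-labelled data.
    model = LogisticRegression(max_iter=1000).fit(records, labels)

    # Step 3: the "AI decision" is the fitted model applied to a new case.
    new_patient = np.array([[66, 2.7, 1.6]])
    print(model.predict(new_patient))        # a classification
    print(model.predict_proba(new_patient))  # a probability, not a judgement

Nothing in this pipeline thinks; the labels, and therefore the "knowledge", come from people.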

Astra Taylor was right to warn us about this sleight of hand, which she refers to as fauxtomation. Fauxtomation occurs not only in the medical context described in the preceding paragraph but across a broader range of devices and apps that are characterized as AI. To paraphrase her simple but effective real-life example of an app used for food deliveries, we come to say things like: ‘Whoa! How did your AI know that my order would be ready twenty minutes early?’ To which the human server at the take-out booth replies: ‘Because the response was actually from me. I sent you a message via the app once your organic rice bowl was ready!’

This example is the substitution effect gone wild: general human intelligence is attributed to a so-called smart app. While I have tried to demonstrate that there may be value in understanding some AIs as substitute decision makers in limited circumstances—because AI is only a partial substitute—the metaphor loses its utility once we start attributing anything like general intelligence or complete autonomy to the AI.

Having examined the metaphorical value in thinking of AIs as substitute decision makers, let’s now turn to my second question: what happens to the agency and decisional autonomy of doctors if AI becomes the de facto decider?

4. Machine Autonomy and the Agentic Shift

Recent successes in ML-generated diagnosis (and other applications in which machines are trained to transcend their initial programming) have catalyzed a shift in discourse from automatic machines to machine autonomy.

With increasing frequency, the final dance between data and algorithm takes place without understanding, and often without human intervention or oversight. Indeed, in many cases, humans have a hard time explaining how or why the machine got it right (or wrong).

Curiously, the fact that a machine is capable of operating without explicit command has come to be understood as meaning that the machine is self-governing, that it is capable of making decisions on its own. But, as Ryan Calo has warned, “the tantalizing prospect of original action” should not lead us to presume that machines exhibit consciousness, intentionality or, for that matter, autonomy. Neither is there good reason to think that today’s ML successes prescribe or prefigure machine autonomy as something health law, policy, and ethics will need to consider down the road.

As the song goes, “the future is but a question mark.”

Rather than prognosticating about whether there will ever be machine autonomy, I end this post by considering what happens when the substitution effect leads us to perceive such autonomy in machines generating medical decisions. I do so by borrowing from Stanley Milgram’s well known notion of an “agentic shift”: “the process whereby humans transfer responsibility for an outcome from themselves to a more abstract agent.”

Before explaining how the outcomes of Milgram’s experiments on obedience to authority apply to the question at hand, it is useful to first understand the technological shift from automatic machines to so-called autonomous machines. Automatic machines are those that simply carry out their programming. The key characteristic of automatic machines is their relentless predictability. With automatic machines, unintended consequences are to be understood as a malfunction. So-called autonomous machines are different in kind. Instead of simply following commands, these machines are intentionally devised to supersede their initial programming. ML is a paradigmatic example of this—it is designed to make predictions and anticipate unknown circumstances (think: object recognition in autonomous vehicles). With so-called autonomous machines, the possibility of generating unintended or unanticipated consequences is not a malfunction. It is a feature, not a bug.
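
A toy contrast may help here; it is only a sketch, with invented thresholds and data, not a description of any real diagnostic system. The first function is "automatic" in the sense above: every outcome is written out in advance as an if/then rule. The second fits a model to labelled examples, so its decision boundary was never explicitly programmed and can surprise its designers.

    # Hypothetical sketch: an explicitly programmed rule versus a learned model.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def automatic_diagnosis(blood_marker: float) -> int:
        # Automatic machine: the outcome for every input is spelled out in advance.
        if blood_marker > 2.0:
            return 1  # flag disease
        return 0      # all clear

    # "Autonomous" only in the loose sense used above: the decision boundary is
    # learned from (invented) labelled examples rather than written as rules.
    X = np.array([[1.1], [2.8], [0.9], [3.2], [1.9], [2.2]])
    y = np.array([0, 1, 0, 1, 0, 1])
    learned = DecisionTreeClassifier().fit(X, y)

    print(automatic_diagnosis(2.05))            # always 1, exactly as written
    print(learned.predict(np.array([[2.05]])))  # whatever boundary the model learned

The unpredictability in the second case is precisely what the paragraph above treats as a feature rather than a bug.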

To bring this back to medical decision making, it is important to see what happens once doctors start to understand ML diagnostic systems as anticipatory, autonomous machines (as opposed to software that merely automates human decisions through if/then programming). Applying Milgram’s notion of an agentic shift, there is a risk that doctors, hospitals, or health policy professionals who perceive AIs as autonomous, substitute decision makers will transfer responsibility for an outcome from themselves to the AIs.

This agentic shift explains not only the popular obsession with AI superintelligence but also some rather stunning policy recommendations regarding liability for robots that go wrong—including the European Parliament’s highly controversial recommendation to treat robots and AIs as “electronic persons”.

According to Milgram, when humans undergo an agentic shift, they move from an autonomous state to an agentic state. In so doing, they no longer see themselves as moral decision makers. This perceived moral incapacity permits them to simply carry out the decisions of the abstract decision maker that has taken charge. There are good psychological reasons for this to happen. An agentic shift relieves the moral strain felt by a decision maker. Once a moral decision maker shifts to being an agent who merely carries out decisions (in this case, decisions made by powerful, autonomous machines), one no longer feels responsible for (or even capable of making) those decisions.

This is something that was reinforced for me recently, when I came to rely on GPS to navigate the French motorways. As someone who had resisted this technology up until that point, not to mention someone who had never driven on those complex roadways before, I felt like the proverbial cog in the wheel. I was merely the human cartilage cushioning the moral friction between the navigational software and the vehicle. I carried out basic instructions, actuating the logic in the machine. In adopting this behavior, I surrendered my decisional autonomy to the GPS. Other than programming my destination, I merely did what I was told—even when it was pretty clear that I was going the wrong way. Every time I sought to challenge the machine, I eventually capitulated. It was up to the GPS to work it out. Although most people seem perfectly happy with GPS, I felt a strange dissonance in this delegated decision making: in choosing not to decide, I still had made a choice. I vowed to stop using GPS upon my return to Canada so that my navigational decision-making capacity would remain intact. But I continue using it, and my navigational skills now suck, accordingly.

My hypothesis is that our increasing tendency to treat AIs as substitute decision makers diminishes our decisional autonomy by causing profound agentic shifts. In many situations where we previously acted in autonomous states, we will be moved to agentic states. By definition, we will relinquish control, moral responsibility and, in some cases, legal liability.

This is not merely dystopic doom-saying on my account. There will be many beneficial social outcomes that accompany such agentic shifts. Navigation and medical diagnostics are just a couple of them. In the same way that agentic shifts enhance or make possible certain desirable social structures (e.g., chain of command in corporate, educational, or military environments), we will be able to accomplish many things not previously possible by relinquishing some autonomy to machines.

The bigger risk, of course, is the move that takes place in the opposite direction—what I call the autonomous shift. This is precisely the reverse of the agentic shift, i.e., the very opposite of what Stanley Milgram observed in his famous experiments on obedience. Following the same logic in reverse, as humans find themselves more and more in agentic states, I suspect that we will increasingly tend to project or attribute autonomous states to machines. AIs will transform from their current role as data-driven agents (as Mireille Hildebrandt likes to call them) to being seen as autonomous and authoritative decision makers in their own right.

If this is correct, I am now able to answer my second question posed at the outset. Allowing AIs to act as substitute decision makers, rather than merely as decisional supports, will indeed impact the agency and decisional autonomy of doctors. This, in turn, will impact doctors’ decision-making capacity just as my own was impacted when I delegated my navigational decision making to my GPS.

Ian Kerr is the Canada Research Chair in Ethics, Law and Technology at the University of Ottawa, where he holds appointments in Law, Medicine, Philosophy and Information Studies. You can reach him by e-mail at iankerr at uottawa.ca or on twitter @ianrkerr


