For the Symposium on The Law and Policy of AI, Robotics, and Telemedicine in Health Care.
Artificial intelligence (AI) may well turn out to have a transformative effect on the delivery of healthcare services. Various mobile devices now feature “mHealth” applications, prompting healthcare providers to explore novel ways to incorporate these functions into medical care. Technology companies are connecting existing consumer-facing technologies like Alexa to diagnostic AI, creating new avenues of medical advice-giving. Similarly, professional-use algorithms and AI are available on the provider side. As we move from computer-aided diagnosis toward algorithm-generated advice, new technical, medical, and legal questions emerge. What seems like a new frontier in the delivery of healthcare services actually takes us back to the early days of AI—after all, ELIZA’s DOCTOR script, developed in the mid-twentieth century, simulated a psychotherapist.
Technological innovation in healthcare occurs in a densely regulated space dominated by asymmetries of knowledge and social relationships based on trust. Professional advice is valuable to patients and clients because of the asymmetry between lay and expert knowledge. Professionals have knowledge that clients lack but need in order to make important life decisions. These relationships are governed by a legal framework of professional advice-giving consisting of several elements, including professional licensing, fiduciary duties, informed consent, and professional malpractice liability. The regulatory goal is to ensure that the patient (or client) receives comprehensive, accurate, reliable advice from the doctor (or other advice-giving professional). Traditionally, this framework assumed interactions between human actors. Introducing AI challenges this assumption, though I will stipulate that AI does not entirely replace human doctors for now.
Professional advice-giving involving various forms of technology raises enduring questions about the nature of the doctor-patient relationship. What does it mean to give and to receive professional advice, and how do things change when technological solutions—including AI—are inserted into the process of advice-giving?
We might consider the various medical tech solutions to be medical devices and contemplate potential regulation by the U.S. Food and Drug Administration (FDA). But the line between medical devices, so understood, and other electronic health gadgets seems increasingly blurry. And the process of professional advice-giving is the same across professions, whereas the FDA’s potential jurisdiction over medical devices only applies to one slice of the professional universe. The theoretical questions have much deeper roots that would be obscured by a sector-specific regulatory solution. Or we might want to subject AI, independent of its application, to regulation by a separate agency. I have recently started to explore yet another perspective: for AI in professional advice-giving, such as AI in the doctor-patient relationship, we might want to start with the traditional regulatory framework for professionals. This perspective builds on a theory of professional advice-giving that has the professional-client or doctor-patient relationship at its core and conceptualizes professionals as members of knowledge communities. In so doing, this approach puts scholarship on professional regulation into conversation with the emergent literature on AI governance.
Outside of the medical context, Jack Balkin explains that a rapid move from “the age of the Internet to the Algorithmic Society” is underway. He defines the Algorithmic Society as “a society organized around social and economic decision making by algorithms, robots, and AI agents [] who not only make the decisions but also, in some cases, carry them out.” In this emerging society, we need “not laws of robotics, but laws of robot operators.” Here, “the central problem of regulation is not the algorithms but the human beings who use them, and who allow themselves to be governed by them. Algorithmic governance is the governance of humans by humans using a particular technology of analysis and decision-making.”
We should likewise begin to consider forms of algorithmic governance in the medical advice-giving context. Should professional-use algorithms be subject to professional licensing, and what level of technical proficiency should be expected of licensed professionals who employ AI in their practice? As a matter of professional malpractice, who is liable for harm caused by bad AI-generated advice? Does the introduction of AI require informed consent? How do fiduciary duties apply?
Instead of assessing each algorithm or AI agent individually, or dividing the professional AI world into sector-specific regulatory regimes in order to consider whether and how it should be regulated, we should first turn to the traditional regulatory framework that governs professional advice-giving. This point, in fact, applies to all advice-giving professions. But it is perhaps most clearly conveyed in the medical context, where we have strong intuitions about the doctor-patient relationship and its underlying values.
Claudia E. Haupt is Associate Professor of Law and Political Science at Northeastern University School of Law. You can reach her by e-mail at c.haupt@northeastern.edu.