As in most of Europe and Japan, the U.S. population is aging rapidly as baby boomers enter retirement and birth rates continue a decades-long decline. Demographic trends predict a looming crisis in the provision of long-term care that will grow worse over time, especially in a climate of restrictive immigration policies and proposals to block grant and cap spending on Medicaid, which devotes two-thirds of its funding to long-term care. As described below, a potential solution to the supply gap in long-term care is the increased use of smart machines, embedded sensors, and artificial-intelligence-powered robots (I collectively refer to these varied technologies as “carebots”) to deliver long-term care services directly and/or to augment the capabilities and productivity of fewer human providers. However, I contend that the FDA’s basic framework of reviewing the safety and efficacy of medical devices is inadequate as applied to these technologies because they can harm the autonomy interests of patients who still retain decision-making capacity. Thus, I propose an enhanced regulatory framework for carebots that addresses patients’ autonomy concerns in addition to safety and efficacy.
Long-term care provides support services for those whose physical or mental impairments prevent them from autonomously carrying out activities of daily living (e.g., eating, bathing, and dressing) and instrumental activities of daily living (e.g., preparing meals, managing medications, and housekeeping). Long-term care comprises a spectrum of services, including home health services, adult day centers, assisted living, nursing homes, skilled nursing facilities, and intensive care facilities. The typical long-term caregiver in the U.S. is not a paid professional but an unpaid relative or friend. The challenge is that this cohort of caregivers is approaching the age at which its members might themselves need long-term care, and it is not clear where that assistance will come from. Thus, demographers predict that the caregiver support ratio, defined as the number of potential caregivers in the prime 45-64 age group (including unpaid family members and paid home aides) for every person over the age of 80, will decline rapidly in the near future. In 2010, the caregiver support ratio was 7 to 1. By 2030, four years after the first “baby boomers” turn 80, the ratio will be 4 to 1, and by 2050 it will drop to 3 to 1.
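The arithmetic behind the caregiver support ratio is simple. The sketch below is purely illustrative; the function name and the population figures in the example are my own hypothetical round numbers, not census data, though they happen to reproduce the 7-to-1 ratio cited for 2010.

```python
# Hypothetical illustration of the caregiver support ratio:
# potential caregivers aged 45-64 per person aged 80 and over.
# The population counts below are invented round numbers, not census data.

def caregiver_support_ratio(pop_45_to_64: int, pop_80_plus: int) -> float:
    """Number of people in the prime caregiving ages per person aged 80+."""
    return pop_45_to_64 / pop_80_plus

# e.g., 70 million potential caregivers and 10 million people over 80
# yields a ratio of 7 to 1.
ratio = caregiver_support_ratio(70_000_000, 10_000_000)
```

Holding the number of caregivers roughly constant while the 80+ population more than doubles is what drives the projected fall toward 3 to 1 by 2050.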
Beneficence vs. Autonomy
The Constitutional “right to privacy,” or the “right to be let alone” in the words of Justice Brandeis, is rooted in protecting the liberty of personal autonomy. We can conceive of autonomy as the individual’s rule of the self, independent from both controlling interferences by others and personal limitations that prevent meaningful choice. An autonomous person acts intentionally, with comprehension, and free from external controlling influences. For example, the doctrine of informed consent is derived from the idea that we should protect individual autonomy.
Beneficence is action taken for the benefit of others. As a third party, one can perform a beneficent act by preventing or removing harms or by improving another person’s situation. In the healthcare context, some of the most common and intractable ethical issues arise when a patient’s autonomous decision conflicts with a provider’s duty to advance that patient’s objective best interests. For example, a patient might refuse potentially beneficial treatment recommended by a physician or ignore advice to cease an unhealthy behavior like smoking. In these situations, as long as the patient meets the minimal criteria for the capacity to make autonomous decisions (i.e., the patient comprehends the decision she is making, and it is not based on a delusion or distortion of reality), the modern ethical consensus has been that the medical provider should respect the patient’s decision while still informing the patient of what is medically in her best interest. In practice, however, the precept of privileging autonomy when it conflicts with beneficence is not always followed. This phenomenon is illustrated by the fall prevention protocols and monitoring technology typically used on older adults in hospitals.
After an older adult is hospitalized, there is “an inherent tension between preventing falls and promoting mobility.” (Inouye et al., 2017) Of course, hospitals should try to prevent falls in vulnerable patients, but when this is accomplished by limiting a patient’s freedom of movement, there is evidence that the forced immobility does more harm than good, leading to “post-hospital syndrome,” a period in which patients are especially vulnerable to further decline and adverse medical events following hospitalization. Monitoring and surveillance technology plays a large role in this practice: most hospitals routinely use bed and chair alarms as part of their fall prevention protocols, which serve to “police” a patient’s unauthorized movements and further restrict mobility. If the alarm goes off, a hospital employee enters the room and warns the patient not to move around unless explicitly given permission to do so. After a while, most patients internalize the logic of this panopticon and restrict their own movement without staff needing to admonish them.
There is evidence that these measures are used overzealously even on patients at low or moderate risk of falling; one study estimated that hospital patients spend over 95% of their time in bed. Most patients recoil at being subjected to these restrictions and surveillance measures and have likened them to being “in jail.” (Inouye et al., 2017) So why do hospitals continue to use this technology if patients despise its impact on their autonomy? The short answer is that hospitals are incentivized by reimbursement policies to prevent falls, and protocols promoting assisted mobility require more staff resources. In other words, this technology benefits hospitals by maximizing billing and minimizing staffing, and they can justify it on the empirically shaky ground that these policies are in the patient’s “best interests.” Furthermore, I would argue that when medical technology disintermediates treatment decision-making by human providers, we will see even more privileging of perceived beneficence over patient autonomy. Put another way, if you are a low-paid nursing assistant or home health aide with physical control over a frail patient, which is more likely to persuade you of the correct course of action: an expensive piece of medical technology or the frail patient?
Carebots: Agents of Autonomy, Beneficence, or Both?
Japan provides a window into how carebots might be deployed in the U.S. and elsewhere. Japan has long led in robotics and automation innovation, and out of necessity it has the most experience deploying these technologies to deliver long-term care. Because of low birth rates, highly restrictive immigration policies, and long life expectancy, Japan is experiencing the challenge of a low caregiver support ratio years before the U.S. and Western Europe. Carebots already in use or in development in Japan can assist with daily activities such as cooking and cleaning; dispense medication; serve as a cognitive aid and companion (e.g., Paro the baby seal); provide physical assistance with bathing, toileting, and mobility; and monitor health status, including activity, cognition, vital signs, and biomarkers. To the extent these technologies increase the independence and autonomy of elderly patients, allowing them to “age in place” instead of requiring institutionalization, one can see why long-term care patients would welcome them. Indeed, a recent study of seniors’ perceptions of robotic caregivers (Walden et al. 2015) found that two-thirds of respondents were amenable to using such technology. However, many study participants stated that they would want assurances of complete control over their interactions with robot caregivers, expressing worry that the robots would “boss” them around as if they were children. The ambivalence expressed by the seniors in this study highlights the tension that often exists between the ethical principles of respecting autonomy and promoting beneficence.
Thus, while this elderly cohort can envision the benefits of such technology, they can also envision how it might be used to constrain their autonomy, and they fear being powerless to resist. In other words, they do not want to be infantilized or subjected to paternalistic controls that limit their ability to take risks that adults are generally permitted to take (e.g., a glass of wine, a cigar, consensual sex), even though such acts might be detrimental to their physical or mental health. Of course, there might be certain instances in which a carebot should not be constrained by the patient and should be allowed to act unilaterally (e.g., providing direct treatment if the patient becomes incapacitated, or calling 911 if the patient suffers a traumatic fall).
Inadequacy of the FDA’s Regulatory Standard for Carebots
Technology used in the manner described above would fall under the FDA’s statutory authority to regulate medical devices. The FDA’s basic framework for reviewing pharmaceutical agents and medical devices is to ensure through testing that these products are (i) safe and (ii) effective for their intended use. I concur that this is the appropriate standard for medical devices that address a discrete medical condition. However, I contend that this standard of review is inadequate for technology that purports to address the frailty of the human condition and the need for assistance in the daily activities of life. What is needed is a regulatory framework that not only interrogates the safety and efficacy of such technology, but also critically examines how it might affect commonly accepted rights such as dignity, privacy, and personal autonomy.
Thus, at a high level of abstraction, I recommend a regulatory framework for carebots that reviews and addresses the following issues. One, does the introduction of this technology actually enhance the capabilities of a patient, or does it merely reduce the care burden on society (or family members)? Two, under what circumstances should patients have control over such technology, and when should this discretion be limited or removed? Three, does the technology allow for a sliding scale of patient control that is indexed to objective measures of cognitive functioning? Four, does the carebot’s application for FDA premarket approval include data capturing the subjective, qualitative experiences and concerns of the patients who might be subjected to its use? Without this higher level of review, there is a real danger that “safe and effective” carebot technology will be deployed in ways that harm the autonomy interests of elderly patients in particular.
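To make the sliding-scale idea in point three concrete, one could imagine carebot software mapping an objective cognitive measure to tiers of patient control. The Python sketch below is purely illustrative: the score bands, tier names, and override rule are my own assumptions for exposition, not a clinical standard or anything drawn from FDA guidance.

```python
# Illustrative sketch of a sliding scale of patient control indexed to an
# objective cognitive measure (here, a hypothetical 0-30 screening score).
# The bands and rules are invented for illustration, not clinically validated.

def control_level(cognitive_score: int) -> str:
    """Map a 0-30 cognitive screening score to a patient-control tier."""
    if not 0 <= cognitive_score <= 30:
        raise ValueError("score must be between 0 and 30")
    if cognitive_score >= 24:
        # Intact capacity: the patient directs the carebot, including the
        # right to refuse its interventions.
        return "full patient control"
    if cognitive_score >= 18:
        # Mild impairment: the patient can still decline routine prompts.
        return "shared control"
    # Significant impairment: unilateral action is permitted.
    return "carebot may act unilaterally"

def carebot_may_override(cognitive_score: int, emergency: bool) -> bool:
    """Unilateral action (e.g., calling 911 after a traumatic fall) is
    reserved for emergencies or documented incapacity, never for routine
    'best interest' judgments about a capable patient."""
    return emergency or control_level(cognitive_score) == "carebot may act unilaterally"
```

The design point is that autonomy is the default: absent an emergency or an objective finding of incapacity, the carebot defers to the patient rather than to a generalized notion of beneficence.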
Fazal Khan is Associate Professor of Law at University of Georgia School of Law. You can reach him by e-mail at fkhan at uga.edu