Balkinization  

Friday, November 02, 2018

Regulating Social Robots in Health Care

Guest Blogger

Valarie K. Blake

As artificial intelligence is mainstreamed into medicine, robots are being designed not just as extensions of human hands but also of human hearts. A social robot is one programmed through machine learning to read human emotions (typically through face or voice cues) and to respond with appropriately mimicked emotional states. Social robots may appear to patients to understand their fears, pain, or sorrow, and may reply with encouragement, persuasion, or something like empathy. Social robots are already being successfully integrated into medicine: Paro, a therapeutic robot seal designed for elderly patients with dementia; Robin, a robot that helps diabetic children learn self-management; and QTrobot, designed to build social skills in children with autism. Social robot technology is far from attaining the humanoid sophistication of Blade Runner or Westworld, but it is advancing rapidly, and it gets a strong assist from our ingrained tendency to anthropomorphize objects. Many robotics scholars think that humans will form significant emotional attachments to social robots; studies of human-robot interaction already show that humans protect robots from harm, assign them moral significance, and tell them secrets they might not otherwise share.
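In code terms, the basic loop such a robot runs is easy to sketch, even though the hard part, the learned emotion classifier, is not. The following is a minimal, purely hypothetical illustration in Python; the function names, cues, and canned replies are assumptions for exposition, not drawn from any real product:

```python
# Illustrative sense-and-respond loop for a social robot (hypothetical).
# A real system would use a trained model over face/voice features;
# the keyword matching below is only a stand-in for that classifier.

RESPONSES = {
    "fear":    "That sounds frightening. I'm right here with you.",
    "sadness": "I'm sorry you're feeling down. Do you want to talk about it?",
    "pain":    "I'll let your care team know you're uncomfortable.",
    "neutral": "How are you feeling today?",
}

def classify_emotion(cue: str) -> str:
    """Stand-in for a machine-learned classifier over face or voice cues."""
    cue = cue.lower()
    if "tremble" in cue or "shaky" in cue:
        return "fear"
    if "tears" in cue or "crying" in cue:
        return "sadness"
    if "wince" in cue or "groan" in cue:
        return "pain"
    return "neutral"

def respond(cue: str) -> str:
    # Map the inferred emotional state to a mimicked empathetic reply.
    return RESPONSES[classify_emotion(cue)]

print(respond("the patient's voice is shaky"))  # prints the "fear" reply
```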

The Food and Drug Administration governs the safety and proper labeling of medical devices such as pacemakers, but those devices are inanimate; patients do not interact with them or believe them to have feelings and personalities. How, then, to regulate the social robot, which is neither person nor mere device? Much will depend on how these robots are designed and how patients respond to them. A well-designed social robot could raise ethical and legal issues that evoke medical practice more than medical device.

Consider the privacy and surveillance implications of a care robot that works something like Amazon’s Alexa but with much greater social valence. Care robots may be at bedsides or in homes twenty-four hours a day, seven days a week. If these robots are programmed to convey information back to the medical provider or programmers (as Alexa does), they may witness and record a patient’s daily health behaviors and, if they really work as designed, even elicit confidences, becoming privy to sensitive information about patients’ mental states. What if a patient shares something embarrassing or private about her medical condition? Patients may not realize that information they casually tell a social robot could be relayed back to health care providers, other members of a medical team, IT personnel, or the robot’s maintainers and developers, or that their information could be stored far longer than it would be in conventional medical settings. Consider, too, the important exceptions to privacy in health care contexts. Imagine the stroke patient who tells her in-home care robot that she has been feeling very down and has recently been thinking about suicide, or the child who discloses to her diabetes-educator robot, Robin, that her father abused her. Care robots might increase the frequency with which providers learn of such issues. How should the care robot respond? Will such information be conveyed back to a provider, and how quickly? And whose responsibility will it be to make sure the process works seamlessly?
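To make that last design question concrete, here is a minimal sketch, in Python, of what a disclosure-routing policy inside a care robot might look like. Everything in it is hypothetical: the categories, escalation rules, and names are illustrative assumptions rather than any actual product’s logic, and a real system would also need clinical validation, patient consent, and secure transport:

```python
# A purely illustrative triage policy for patient disclosures.
# All category names and escalation rules are hypothetical.
from dataclasses import dataclass
from enum import Enum

class Urgency(Enum):
    IMMEDIATE = "page the on-call clinician now"
    NEXT_REVIEW = "queue for the care team's next review"
    LOG_ONLY = "keep on the device; do not relay"

@dataclass
class Disclosure:
    text: str
    category: str  # e.g. "suicidal_ideation", "abuse", "nonadherence"

def route(d: Disclosure) -> Urgency:
    # Safety and mandatory-reporting issues escalate immediately.
    if d.category in {"suicidal_ideation", "abuse"}:
        return Urgency.IMMEDIATE
    # Routine health behaviors reach the care team on a normal cadence.
    if d.category == "nonadherence":
        return Urgency.NEXT_REVIEW
    # Everything else stays local absent specific patient consent.
    return Urgency.LOG_ONLY

print(route(Disclosure("I've been thinking about suicide", "suicidal_ideation")))
# -> Urgency.IMMEDIATE
```

Even this toy version makes the regulatory point vivid: someone must decide which categories trigger immediate escalation and who gets paged, and those decisions look more like clinical policy than device engineering.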

Social robots may also create opportunities for endless patient surveillance. In the churn and burn of modern medicine, providers spend little time at the bedside. The presence of care robots at homes or bedsides raises the possibility of a nanny state in which robots “narc” on patients, telling the provider about all sorts of conduct or statements the patient would prefer the provider not know: that the patient is drinking again, or smoking, or not taking their medication regularly, or refusing to remain bed-bound. Could such information be used in important clinical decisions, such as whether the patient is eligible for a surgery or for a scarce resource like an organ? Alternatively, might providers and hospitals seek to use it to mitigate damages in malpractice suits?

How a care robot is programmed and deployed may make some of these issues more or less likely. But they are meant to suggest a larger point: never before have we had a category of medical care that is neither fully human nor merely a device. I can think of nothing less like a pacemaker than a high-functioning social robot. Nobody tells their pacemaker secrets, nobody expects their pacemaker to have autonomy or moral authority, and a pacemaker cannot relay secrets back to the medical team. A social robot may be programmed to be social for specific reasons: to be an authority figure, a proxy for the physician, or a helper and confidante. The social AI that works well does so because it creates a social relationship with the patient. And the more successful that relationship, the more the robot raises issues of autonomy, coercion, privacy, and trust, in the robot and in the patient-provider relationship alike, that look less like matters covered by FDA regulation and far more like the traditional ethical and legal rules governing health care providers.

At minimum, bioethicists, health lawyers, and health care providers need to engage with roboticists at the early stages of this new era in robotics to consider the capabilities of these robots and the ethical and legal issues they are likely to raise in health care settings. Beyond this, regulators will need to consider models that address this new hybrid in medical care. One possibility is to subject the manufacturers of these robots to a form of licensure requiring compliance with a code of ethical standards, much as health care providers must follow ethical standards set by their state medical boards. Providers who choose to deploy social robots might, in turn, sign on to additional ethical norms governing proper usage in clinical practice. More thought needs to go into the options for regulation and the best way to bring these groups into a compliance scheme without unduly burdening beneficial innovation. Social robots that truly engage patients have the potential to change the face of medical care, but the better they work, the more likely they are to generate significant ethical and legal challenges.


Valarie K. Blake is Associate Professor at West Virginia University College of Law. You can reach her by e-mail at valarie.blake at mail.wvu.edu and on Twitter at @valblakewvulaw.
