Balkinization  

Monday, October 29, 2018

Organizing the Federal Government’s Regulation of AI

Guest Blogger

A. Michael Froomkin

For the Symposium on The Law And Policy Of AI, Robotics, and Telemedicine In Health Care.

            Medical AI (by which currently we mean primarily Machine Learning, or “ML” for short) can’t be understood, or regulated, in a vacuum. While medical ML does present a few special issues of its own, most of the regulatory challenges it creates are common to ML more generally and/or involve aspects of policy that are not especially medical, and often not especially ML either.

From this I conclude three things:


  • While the FDA does have a special role in regulating some aspects of medical ML, many of the issues involve matters far outside the FDA’s normal domain or experience – and usually fall into the domain of other agencies, which in fact have relevant expertise.
  • While it’s often tempting to suggest we create a purpose-built regulator for a new technology, such as we did with the FCC for broadcast radio and TV, or FERC for the transmission of electricity and for gas and oil pipelines, this would not be a good approach to AI regulation, certainly not now, and probably not in the future. Medicine is only one of many potentially transformational applications of ML, a technology that seems poised to transform other professions, transportation, urban design, marketing, security, and many other aspects of our lives.
  • The best way forward is to set up a national think tank and resource for regulators in federal, state, and even local government who need technical information and advice, and whose efforts will need some coordination.

The legal and policy issues raised by Medical AI intersect with tort law, privacy, anti-trust, industrial policy, consumer protection, battlefield care, medical device regulation, issues relating to the training and supply of physicians, and more. Many of the disruptions ML promises for medicine will parallel similar issues in other parts of the economy. Regulatory solutions optimized for medical ML applications should at least do no harm to regulatory solutions for those other areas; ideally they should be synergistic with them. In short, smart regulation of medical ML needs to be sensitive to the unique aspects of health care, but should also fit in with AI regulation as it applies across the economy and society. 

Machine Learning systems will take over some types of medical diagnosis and even treatment, not only in doctors’ offices and hospitals but also in apps and in self-diagnosis and self-care devices controlled entirely by the patient. I expect the demand for devices that allow patients to diagnose and treat themselves without having to see a doctor to be great both in the US (due to the expense of care) and in the developing world (due to both the expense of human medics and the shortage of care providers). The likely demand for self-diagnosis and self-care systems in the developing world creates a substantial risk that systems will first be tested on people living in countries with weak regulatory systems. If we are not comfortable with turning poor people into self-care AI test subjects, we will need rules to discourage it, likely some combination of professional ethics, international agreements, and domestic rules that either prohibit the export of unapproved systems or at least require or incentivize domestic testing.

Whatever the use case, contemporary machine learning systems require very large amounts of training data. This necessity has a number of implications for ML creation and deployment. Not all raw data are good training data; medical ML sometimes needs the data scored by humans, and sometimes the scoring requires expert physicians. Raw data, and especially good training data, are critical chokepoints for the development of any ML system. In the medical sector there is a kind of land rush going on at present, as firms try to lock in sources of raw data, both to feed the voracious data needs of ML and to lock out possible competitors. Firms are also trying to produce quality training data so they can be first to market and lock in any first-mover advantages, a process that can be expensive. These behaviors may in some cases raise anti-trust issues within the purview of the FTC and the Justice Department.

On the other hand, if we want to get the most value out of ML, medical and otherwise, we should make it easier for new entrants to have access to big data, since bigger data sets tend to benefit everyone. We should encourage standardization in the recording of data (in medicine, from electronic health records on up), and should interpret IP law to make the use of data sets fair use, at least to the extent we can do so consistent with the need to protect patient privacy.

Medical ML may be special in that we will want to think carefully about the regulatory approval path for such ‘devices’, a job that likely falls to the FDA. We’ll need to think carefully not only about initial approvals but also about upgrade paths. Initial approvals raise the question of how much documentation about training the designers will have to supply. (I’d say at least enough to make the ML system reproducible.) They also raise the question of how we measure the quality of outputs, a tricky problem in all cases (do Type I and Type II errors count equally?), and an especially tricky one in branches of medicine where we lack consensus on how to measure success (e.g. psychiatry).
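
To make the metric question concrete, here is a minimal, purely illustrative sketch (the two hypothetical systems and their error counts are invented for this example, not drawn from any real evaluation) of how a regulator’s choice of weights for false positives (Type I errors) versus false negatives (Type II errors) can flip which diagnostic system looks better:

    # Illustrative only: two hypothetical diagnostic ML systems, with invented
    # error counts over the same 1,000 test cases.
    systems = {
        "System A": {"fp": 80, "fn": 10},  # many false alarms, few missed diagnoses
        "System B": {"fp": 20, "fn": 40},  # few false alarms, more missed diagnoses
    }

    def weighted_errors(counts, fp_weight, fn_weight):
        # A regulator's scoring rule: total weighted "cost" of the errors.
        return fp_weight * counts["fp"] + fn_weight * counts["fn"]

    for label, (fp_w, fn_w) in [("Errors weighted equally", (1, 1)),
                                ("Missed diagnoses weighted 5x", (1, 5))]:
        scores = {name: weighted_errors(c, fp_w, fn_w) for name, c in systems.items()}
        print(f"{label}: {scores} -> approve {min(scores, key=scores.get)}")

Weighted equally, System B’s lower total error count wins; weight missed diagnoses five times as heavily and System A comes out ahead. Which system is “better” is thus partly a policy choice, not a purely technical one.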

All ML applications raise complicated issues of privacy. One known unknown is the extent to which personally identifiable information might be reverse engineered from the outputs of an ML system. Another issue is how we manage informed consent in a world where one feature of ML systems is their capacity to produce results that the designers did not foresee. (I have a separate paper on that, “Big Data: Destroyer of Informed Consent.”)

It is already a truism that AI will have a profound effect on the demand for certain kinds of workers, such as truckers and taxi drivers. Something similar is true of the medical profession. Over time we can expect ML systems to become provably superior diagnosticians, first for certain conditions and then for whole swaths of diseases within particular specialties. Inevitably, once patients and the malpractice system prefer machine medicine to people, the demand for competing diagnostic, and perhaps later treatment, services from doctors will nearly vanish, leading to a form of deskilling.

As ML grows in importance, it may distort first the demand for and then the supply of physicians, at least in some specialties, which may have long-term deleterious effects on our ability to train future ML systems (see When AIs Outperform Doctors: Confronting the Challenges of a Tort-Induced Over-Reliance on Machine Learning).

We commonly tolerate, even celebrate, the types of deskilling that involve the substitution of old skills by a superior technique. The picture is more complicated when the deskilling is ML replacing doctors. Because patients will want the best care, they will prefer the machine; as a result the most able doctors will choose specialties that are not dominated by ML. Over time there will be fewer, if any, doctors with the clinical experience required to create new training data from patient data generated by new technology. Because it is hard to predict when technical changes in sensors and other equipment will require new training data, ordinary labor markets could find it difficult to supply the necessary expertise – unless regulators step in either to require human participation despite ML superiority, or to create a corps of specialists who might do research but also would train to be available to create training data.

AI generally presents a host of complex problems, many of which will require legislative or regulatory responses at the federal or state level. Other issues might best be solved privately via the development of professional ethics, while still others may require international coordination. In light of these complexities, U.S. regulation of medical AI needs to be holistic, not piecemeal. The sheer variety of issues and required regulatory strategies means that the FDA cannot do it alone.

Indeed, where the effects of ML vary by sector, someone will have to decide the trade-offs. Health is an important component of national security and industrial policy, and looms large in anti-trust, privacy, and tort law, but it is unlikely that we would optimize any of these for the health sector at the expense of others, except conceivably privacy law. Machine-learning-based systems will likely hit the quantity and quality of employment in other sectors more quickly and more severely than in medicine, where we can reasonably expect most of the effects to take some time.

The proposed Future of AI Act (H.R. 4625 / S. 2217, 115th Congress), which would create a federal advisory committee on the development and implementation of AI, has the right idea, but it is much too limited, and its nearly two-year timetable for the committee’s report is far too slow.

What we need instead, as a first step, is a truly broad-based national think tank, advisor, and coordinator on AI issues – not just an Advisory Committee of experts, although that might have a role, but also an expert staff that could advise and coordinate with all the different parts of federal, state, and local governments that will confront AI-related issues. Putting the body in a cabinet department such as the Department of Commerce is not ideal. Any Department brings with it a culture and orientation that might encourage the body to prioritize that Department’s issues over others.

The ideal location for this body would be in the White House, whether free-standing, under the Domestic Policy Council, or perhaps—more logically, if less powerfully—as a new branch of the Office of Science and Technology Policy. We need an expert group that could not only help formulate a national strategy but also serve as advisors to regulators grappling with AI issues. Only that sort of continual engagement, dialog, and sometimes perhaps cajoling will make it possible for all the disparate regulators and policy-makers—the FDA, the NIH, tax policy-makers considering what deserves preferential treatment, anti-trust authorities deciding what impermissibly concentrates market power, privacy enforcers at the FTC and elsewhere, state legislatures considering tort, safety, and even traffic rules, and others—to follow an informed and, one hopes, somewhat coordinated strategy.

A. Michael Froomkin is Laurie Silvers & Mitchell Rubenstein Distinguished Professor of Law, University of Miami; Member, University of Miami Center for Computational Science; and Affiliated Fellow, Yale Information Society Project.



