Much of
the initial excitement and application of artificial intelligence in healthcare
has been focused on population health management, data analytics, and reduction
of inefficiencies within the administration of healthcare. Many of the early
direct-to-consumer products using artificial intelligence are branded as
devices or apps that focus on general wellness and health, not treatment of
disease. Despite growing investment in artificial intelligence for healthcare
generally, only 30 percent of investment deals fund companies developing
products that support providers in direct patient care and treatment of
disease. But soon, artificial intelligence will be a common component of the
clinical workflow, and it will pose new challenges to the current regulatory
framework.
To date,
the regulatory discussions about artificial intelligence have focused on the
activities and approvals of the U.S. Food and Drug Administration. In 2017, the
FDA approved a cloud-based deep learning algorithm that serves as a decision
support tool, allowing physicians to diagnose heart conditions with
greater efficiency and accuracy. In 2018, the FDA permitted the marketing of a
medical device that uses artificial intelligence to detect diabetic retinopathy
in adults who have diabetes. These approvals, in conjunction with the
ubiquitous goal of providing better, faster care at a lower cost, may drive the
release of new products and fuel additional demand to create them. Although the
FDA is one of many stakeholders with interest and jurisdiction to influence the
development of healthcare technology, its role as the primary regulator of
medical devices has made it the de facto agency responsible for reviewing
the introduction of new healthcare technologies that use algorithms. But
other regulators need to become more engaged.
Commentators such as Pearse Keane and Eric Topol have observed that the attraction of
the transformative power of artificial intelligence creates an “AI chasm” of
misunderstanding between what is necessary for the development and approval of
a scientifically sound algorithm and the proper use of such algorithms in
real-world situations. And within healthcare, the general maxim that the federal
government regulates medical products and state government regulates medical
practice
frustrates the creation of a national strategy for artificial intelligence in
healthcare.
So now
is the time to ask important structural questions, such as, does the United States have a sufficient regulatory framework to regulate medical technologies that
utilize or operationalize artificial intelligence in a clinical setting? And is
the system of regulation properly designed for the day when wellness
products converge with diagnosis and treatment to create a quasi-clinical
product sold directly to consumers? If the full potential of artificial
intelligence in the clinical setting is to be realized, it is imperative that
some of the downstream effects of artificial intelligence be addressed before
promised transformations become reality. All parties must re-commit to working
collaboratively to develop a systemic approach that restates core principles of
regulation and sets standards for the future integration of artificial intelligence
into medical practice.
The Federation of State Medical Boards,
along with its member state boards, is taking a proactive approach to these
questions and has begun to study how artificial intelligence will change
the standards for medical practice and licensure. One initial challenge that
state regulators will face is the delineation between a clinical decision support
tool and a tool that, under current state-law definitions, would be engaging in
the practice of medicine. If an algorithm is found to be practicing medicine,
and does so to the detriment of public safety, state regulators could exert
oversight and act to ensure a proper standard of care for its use in
a clinical setting. As seen with previous integrations of new technologies into
other regulated industries, the ex ante threat of such enforcement could have a chilling effect
across the industry. Inviting state regulators to participate more fully in the
development and early review of these algorithms, be it through informal review
or creation of a more formal regulatory sandbox that allows regulators to see
within the black-box, may serve to empower innovation and lead to a more rapid
integration of artificial intelligence.
The essential capacity of deep
learning algorithms to consume data and alter their decision-making processes,
which can lead to unpredictable outcomes over time, adds an additional layer of
complexity to future regulation. The inherent mutability of deep learning
systems complicates the ability to know why the algorithm arrived at a result, and
frustrates the ability of a physician to explain the performance of the system
to a patient.
Accordingly, as more complex algorithms are introduced into the clinical
setting, it may be increasingly difficult for physicians to comply with core
practice requirements established by state regulations. Future regulations may also need to
ensure that a physician employing algorithm-based treatment meets levels of
clinical and ethical competence, including the ability to explain the rationale
for diagnosis and treatment to a patient, as well as the duty to
obtain any necessary consents for data collection and use.
The lurking grand question, however, is
the assignment of ultimate responsibility and accountability if treatment using
artificial intelligence results in harm to the patient. If artificial
intelligence is deployed in a team-based care setting, state regulators will be
pressed to determine how to discipline individual providers under their
jurisdictional authority. Artificial intelligence integrated across practice
areas will necessitate a greater understanding of the roles and
responsibilities of all members of the health care team. Early discussions of
artificial intelligence within the healthcare community have introduced a
framework that assigns responsibility and accountability for the
use of artificial intelligence to the person with the most knowledge of the risk
who is in the best position to mitigate it. This concept may provide
a guide for state regulators grappling with the same issues for purposes of
licensure and discipline.
Artificial intelligence will bring
about the dawn of a new day in healthcare. Rather than waiting for that day to
arrive before developing standards for its use, it is crucial that regulators act now to
understand what artificial intelligence is, what it does, and what questions
must be answered to fully reap the benefits of this technology.
Eric M. Fish is Senior Vice President of Legal Services, Federation of State Medical Boards. You can reach him by e-mail at efish at fsmb.org. The views expressed in these remarks are my own and do not necessarily reflect the views of the Federation of State Medical Boards, its Board of Directors, or any member state medical board.