William M. Sage
AI can help people understand and improve their health without forcing
them along the “final common pathway” into the paid medical mainstream.
Better health information
technology has been a consensus goal of health policy experts for roughly two
decades, with AI (“artificial” or “augmented” intelligence) the latest example
of a potentially disruptive innovation in the informatics domain. In particular, AI’s potential to improve
diagnostic speed and accuracy has created palpable excitement in radiology and
pathology for cancer detection and other clinical applications. In 2015, the Institute of Medicine (now
the National Academy of Medicine) devoted an entire consensus report titled
“Improving Diagnosis in Health Care” to reducing diagnostic errors – an effort
that continued an influential series of Academy critiques of the safety and
quality of healthcare.
It seems unobjectionable to
argue that more accurate diagnosis will lead to more effective treatment. As the IOM observed, “Getting the right
diagnosis is a key aspect of health care -- it provides an explanation of a
patient’s health problem and informs subsequent health care decisions.” However, diagnosis sets off a cascade of
additional effects that have largely gone unnoticed. The IOM report failed even to acknowledge these
“social meanings” of diagnosis, such as replacing uncertainty with explanation,
inferring moral culpability or blamelessness, suggesting communicability or
lack thereof, and creating or constraining opportunities for education,
employment, insurance, and the like.
Perhaps most importantly, the
act of diagnosis channels the measurement and modification of health into
conventional medical pathways, including an assurance and perhaps even an
amplification of payment within the existing system. A dispassionate assessment of why most health
care information is produced, recorded, and exchanged has been lost in the
enthusiasm for AI and similar innovations.
On the list: professional traditions such as the physician’s “H&P”
(history and physical) and “SOAP notes” (subjective and objective data followed
by the physician’s assessment and plan), clinical performance aids such as test
results and consultation reports, and documentation to avoid inferences of
professional negligence (malpractice).
But these are decidedly partial explanations. More than anything else, the US healthcare
system collects the information it needs in order to get paid.
Threshold conditions must be
met for payment to issue. For health
professionals and facilities, these include state licensing and receipt of Medicare
provider credentials; for drugs, medical devices, and similar tangible
technologies, they include FDA or other regulatory approval as well as certification
of coverage from public and private insurers.
But the sine qua non for
payment remains the individual patient encounter, in which one or more
diagnoses are rendered by a physician and one or more treatments planned or administered. Each diagnosis is represented by a code (currently
ICD-10), as is each billable treatment (CPT, which remains a proprietary set of
designations owned and profitably licensed by the American Medical
Association). In many instances, pairing
diagnosis with intervention triggers an avalanche of “claims” by providers and
suppliers through “insurance” intermediaries that may or may not bear financial
risk but ultimately receive a cut of the transactions they process. This “final common pathway” exerts a powerful
effect on the character and cost of the medical care system, including its
reactive posture, its deference to physician authority, its technological bias,
its profound inequities, and its colossal waste. Consider the ironic vernacular for claims payment,
“reimbursement”—a term that connotes happenstance and volunteerism rather than
industrial structure and commercial competition, even as healthcare spending
approaches one-fifth of national economic output.
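To make the diagnosis-plus-procedure pairing concrete, the following minimal sketch models a single fee-for-service claim line in Python. The ClaimLine record, FEE_SCHEDULE, and adjudicate function are illustrative assumptions, not any payer's actual adjudication logic; only the ICD-10 and CPT codes themselves are real.

```python
# Toy model of the coding-and-claims mechanism described above.
# The codes are real; the record structure and fee schedule are
# hypothetical simplifications for illustration only.
from dataclasses import dataclass

@dataclass
class ClaimLine:
    icd10_diagnosis: str  # why care was needed (e.g., E11.9: type 2 diabetes)
    cpt_procedure: str    # what was done (e.g., 99213: established-patient office visit)

# Hypothetical fee schedule: payment attaches to the CPT procedure code.
FEE_SCHEDULE = {"99213": 92.00}

def adjudicate(line: ClaimLine) -> float:
    """Pay the scheduled fee only when a billable procedure is paired with a diagnosis."""
    if line.icd10_diagnosis and line.cpt_procedure in FEE_SCHEDULE:
        return FEE_SCHEDULE[line.cpt_procedure]
    return 0.0  # no diagnosis-procedure pairing, no payment

print(adjudicate(ClaimLine("E11.9", "99213")))  # 92.0
print(adjudicate(ClaimLine("", "99213")))       # 0.0: no diagnosis, no claim
```

The design point is simply that payment attaches to the procedure but issues only when a diagnosis accompanies it, which is why diagnosis functions as the gateway to payment.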
If one views diagnosis as the
gateway to payment, even highly effective AI can induce frictions. For example, applying unsupervised deep
learning to sleep disorders not only confirmed that established categories of
brain activity had an objective basis, but also drastically reduced the time,
staffing, and equipment needed to perform and interpret sleep studies. Unsurprisingly, sleep-related AI found few
supporters among existing sleep specialists, who perceived a threat to their
authority and revenue. By contrast,
sleep-related AI did find a market niche helping pharmaceutical companies
perform at low cost the sleep studies sometimes required by the FDA as a
condition of product approval.
This example illustrates a
broader point. The final common pathway
of medical payment is professionally entrenched and safeguarded by regulatory
battlements that are politically challenging to surmount. Even in the medium term, therefore, informational
innovations such as AI can improve the existing healthcare system only to the
extent that the system chooses to be improved.
What AI can do, perhaps
uniquely, is help people delay and sometimes avoid entering the final common pathway
of medical payment while nonetheless offering reliable guidance about risk and
mitigation of illness. Put differently,
for AI to be a truly revolutionary medical presence, it must enable “non-diagnosis.” I do not limit the term to avoiding false positive
results, which is a probable but not certain effect of incorporating AI into
conventional diagnostics -- sensitive AI-based diagnostics may increase false
positives in the short term insofar as real findings may nevertheless be
clinically unimportant. Rather, I use “non-diagnosis”
to mean the elucidation of health-relevant information apart from the coding,
workflow, claims, and associated payment that are the expected consequences of traditional
diagnosis.
Non-diagnosis enables
non-treatment – an inversion of the IOM quote offered above. Decisions guided by AI-based risk analysis
will mix conventional medical care with over-the-counter and informal
therapies, lifestyle modification, and continued monitoring. In this respect, AI may have unique potential
to bridge the gap between medical care and health. Because of professional, regulatory, and
payment conventions, health IT innovators honor the conventional differentiation
of medical devices from consumer health technologies such as “wearables”
(Fitbit), even if both incorporate decision support. The distinction between the two has always
been something of a fiction, and increasingly seems paradoxical from a policy
perspective: shouldn’t health be the measurable outcome of medical care?
In sum, AI may well improve medical
care, but its greater impact could be to serve “people” rather than “patients”
– removing many health needs and the means to address them from an overly
medicalized, professionally dominated, and claims-driven approach to gathering
and acting on information. That would be
an even more dramatic change.
William M. Sage is James R. Dougherty Chair for Faculty Excellence, School of Law, and Professor of Surgery and Perioperative Care, Dell Medical School, at The University of Texas at Austin. You can reach him by e-mail at wsage at law.utexas.edu