Balkinization  

Thursday, November 01, 2018

Artificial Intelligence and Predictive Data: The Need for a New Anti-Discrimination Mandate

Guest Blogger

Sharona Hoffman

For the Symposium on The Law and Policy of AI, Robotics, and Telemedicine in Health Care.

A large number of U.S. laws prohibit disability-based discrimination.  At the federal level, examples include the Americans with Disabilities Act (ADA), the Fair Housing Act, the Rehabilitation Act of 1973, Section 1557 of the Affordable Care Act, and the Genetic Information Nondiscrimination Act.  In addition, almost all of the states have adopted disability discrimination laws.  This might lead to the conclusion that we enjoy comprehensive legislative protection against discrimination associated with health status.  Unfortunately, in the era of big data and artificial intelligence (AI), that is no longer true.

The problem is that the laws protect individuals based on their present or past health conditions and do not reach discrimination based on predictions of future medical ailments.  The ADA, for example, defines disability as follows: a) a physical or mental impairment that substantially limits a major life activity, b) a record of such an impairment, or c) being regarded as having such an impairment.  Each prong of this definition, including the “regarded as” provision, addresses only workers’ current or past health status; none reaches predictions about their future health.

Modern technology, however, provides us with powerful predictive capabilities.  Using available data, AI can generate valuable new information about individuals, including predictions of their future health problems.  AI capabilities are available not only to medical experts, but also to employers, insurers, lenders, and others whose economic agendas may not align with the data subjects’ best interests.

AI can be of great benefit to patients, health care providers, and other stakeholders.  Machine learning algorithms have been used to predict patients’ risk of heart disease, stroke, and diabetes based on their electronic health record data.  Google has used deep-learning algorithms to predict heart disease by analyzing photographs of individuals’ retinas.  IBM has used AI to model the speech patterns of high-risk patients who later developed psychosis.  In 2016, researchers from the University of California, Los Angeles announced that they had used data from the National Health and Nutrition Examination Survey to build a statistical model to predict prediabetes.  Armed with such means, physicians can identify their at-risk patients and counsel them about lifestyle changes and other preventive measures.  Likewise, employers can use predictive analytics to more accurately forecast future health insurance costs for budgetary purposes.

Unfortunately, however, AI and predictive analytics may also be used for discriminatory purposes.  Take employers as an example.  Employers are highly motivated to hire healthy employees who will not have productivity or absenteeism problems and will not generate high health insurance costs.  The ADA permits employers to conduct wide-ranging medical examinations once they have extended a conditional offer of employment.  Thus, employers may have individuals’ retinas and speech patterns examined in order to identify desirable and undesirable job applicants.  The ADA forbids employers from discriminating based on existing or past serious health problems.  But no provision prohibits them from using such data to discriminate against currently healthy individuals who may be at risk of later illnesses and thus might turn out to have low productivity and high medical costs.

This is especially problematic because statistical predictions based on AI algorithms may be wrong.  They may be tainted by inaccurate data inputs or by biases.  For example, a prediction might be based on information contained in an individual’s electronic health record (EHR).  Yet, unfortunately, these records are often rife with errors that can skew analysis.  Moreover, EHRs are often designed to maximize charge capture for billing purposes, and reimbursement concerns may therefore drive EHR coding in ways that bias statistical predictions.  So too, predictive algorithms themselves may be flawed if they have been trained using unreliable data.  Discrimination based on AI forecasts, therefore, may not only harm data subjects but may also rest on entirely false assumptions.

In the wake of big data and AI, it is time to revisit the nation’s anti-discrimination laws.  I propose that the laws be amended to protect individuals who are predicted to develop disabilities in the future.

In the case of the ADA, the fix would be fairly simple.  The law’s “regarded as” provision currently defines “disability” for statutory purposes as including “being regarded as having … an impairment.”  The language could be revised to provide that the statute covers “being regarded as having … an impairment or as likely to develop a physical or mental impairment in the future.”  Similar wording could be incorporated into other anti-discrimination laws.

One might object that the suggested approach would unacceptably broaden the anti-discrimination mandate because it would potentially extend to all Americans rather than to a “discrete and insular minority” of individuals with disabilities.  After all, anyone, including the healthiest of humans, could be found to have signs that forecast some future frailty. 

However, the ADA’s “regarded as” provision is already far-reaching because any individual could be wrongly perceived as having a mental or physical impairment.  Similarly, Title VII of the Civil Rights Act of 1964 covers discrimination based on race, color, national origin, sex, and religion.  Given that all individuals have these attributes (religion includes non-practice of religion), the law reaches all Americans.  Consequently, banning discrimination rooted in predictive data would not constitute a departure from other, well-established anti-discrimination mandates.

It is noteworthy that under the Genetic Information Nondiscrimination Act (GINA), employers and health insurers are already prohibited from discriminating based on one type of predictive data: genetic information.  Genetic information is off-limits not only insofar as it can reveal what conditions individuals presently have, but also with respect to its ability to identify perfectly healthy people’s vulnerabilities to a myriad of diseases in the future.

In the contemporary world, it makes little sense to outlaw discrimination based on genetic information but not discrimination based on AI algorithms with powerful predictive capabilities.  The proposed change would render the ADA and other disability discrimination provisions more consistent with GINA’s prudent approach.

As is often the case, technology has outpaced the law in the areas of big data and AI.  It is time to implement a measured and needed statutory response to new data-driven discrimination threats.

Sharona Hoffman is Edgar A. Hahn Professor of Law, Professor of Bioethics, and Co-Director of the Law-Medicine Center, Case Western Reserve University School of Law.  You can reach her by e-mail at sharona.hoffman at case.edu.

