Balkinization  

Tuesday, October 30, 2018

Artificial Intelligence for Suicide Prediction

Guest Blogger

Mason Marks

For the Symposium on the Law and Policy of AI, Robotics, and Telemedicine in Health Care.

Suicide is a global problem, causing 800,000 deaths per year worldwide. In the United States, suicide rates rose by 25% over the past two decades, reaching 45,000 deaths per year. Suicide now claims more American lives than auto accidents. Traditional methods of predicting suicide, such as questionnaires administered by doctors, are notoriously inaccurate. Hoping to predict suicide more accurately and thereby save lives, hospitals, governments, and internet companies have begun developing artificial intelligence (AI)-based suicide prediction tools. This essay analyzes the underexplored risks these systems pose to people’s safety, privacy, and autonomy, and it concludes with recommendations for minimizing those risks.

Two parallel tracks of AI-based suicide prediction have emerged. On the first track, which I call “medical suicide prediction,” doctors and hospitals use AI to analyze patient records. Medical suicide prediction is mostly experimental, and aside from one program at the Department of Veterans Affairs (VA), it is not yet widely used. Because medical suicide prediction occurs within the healthcare context, it is subject to federal laws such as HIPAA, which protects the privacy and security of patient information, and the federal Common Rule, which protects human research subjects.

My focus here is on the second track of AI-based suicide prediction, which I call “social suicide prediction.” Though essentially unregulated, social suicide prediction is already widely used to make decisions that affect people’s lives. It predicts suicide risk using behavioral data mined from consumers through their interactions with social media, smartphones, and the Internet of Things (IoT). The companies involved, which include large internet platforms such as Facebook and Twitter, are generally not subject to HIPAA’s privacy regulations, principles of medical ethics, or rules governing research on human subjects.

How does social suicide prediction work? As we go about our daily routines, we leave behind trails of digital traces that reflect where we’ve been and what we’ve done. Companies use AI to analyze these traces and infer health information. For instance, Facebook’s AI scans user-generated content for words and phrases it believes are correlated with suicidal thoughts. The system stratifies posts into risk categories, and those deemed “high risk” are forwarded to Facebook Community Operations, which may notify police, who then perform “wellness checks” at users’ homes. In 2017, Facebook announced that its system had prompted over 100 wellness checks in one month. Its partner Crisis Text Line, a text-based counseling service targeted at children and teens, reports completing over 11,500 wellness checks at a rate of 20 per day. In addition to its standalone service, Crisis Text Line is embedded within other platforms such as Facebook Messenger, YouTube, and various apps marketed to teens.
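To make this pipeline more concrete, here is a minimal, purely illustrative sketch of how keyword-based risk stratification of posts might work. The phrase list, weights, thresholds, and the assess_post function are all invented for demonstration; Facebook’s actual classifiers are proprietary and almost certainly far more sophisticated than simple keyword matching.

```python
# Illustrative sketch only: a toy keyword-based risk stratifier.
# All phrases, weights, and thresholds below are hypothetical.

from dataclasses import dataclass

# Hypothetical phrase weights (not drawn from any real model).
RISK_PHRASES = {
    "want to die": 3.0,
    "end it all": 3.0,
    "can't go on": 2.0,
    "say goodbye": 1.5,
    "hopeless": 1.0,
}

@dataclass
class Assessment:
    score: float
    tier: str          # "low", "medium", or "high"
    escalate: bool     # whether to route the post to human reviewers

def assess_post(text: str) -> Assessment:
    """Score a post and stratify it into a risk tier."""
    lowered = text.lower()
    score = sum(weight for phrase, weight in RISK_PHRASES.items()
                if phrase in lowered)
    if score >= 3.0:
        tier, escalate = "high", True    # forwarded for human review
    elif score >= 1.5:
        tier, escalate = "medium", False
    else:
        tier, escalate = "low", False
    return Assessment(score=score, tier=tier, escalate=escalate)

if __name__ == "__main__":
    post = "I just want to say goodbye and end it all."
    print(assess_post(post))  # Assessment(score=4.5, tier='high', escalate=True)
```

Even this toy version surfaces the policy questions discussed below: the weights and thresholds that separate “high risk” from “low risk” are design choices hidden from users, and a flagged post can set in motion consequences far beyond the platform.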

At first glance, social suicide prediction seems like a win-win proposition, allowing internet platforms to perform a public service that benefits users and their families. However, social suicide predictions emerge from a black box of algorithms that are protected as trade secrets. Unlike medical suicide prediction research, which undergoes ethics review by institutional review boards and is published in academic journals, the methods and outcomes of social suicide prediction remain confidential. We don’t know whether it is safe or effective.

When companies engage in suicide prediction, numerous dangers arise. For example, privacy risks stem from how consumer data is stored and where the information might flow after predictions are made. Because most companies that predict suicide are not covered entities under HIPAA, their predictions can be shared with third parties without consumer knowledge or consent. Though Facebook claims its suicide predictions are not used for advertising, less scrupulous actors might share their own suicide predictions with advertisers, data brokers, and insurance companies, facilitating consumer exploitation and discrimination.

Advertisers and data brokers may argue that the collection and sale of suicide predictions constitutes protected commercial speech under the First Amendment, and they might be right. In Sorrell v. IMS Health, the US Supreme Court struck down a Vermont law restricting the sale of pharmacy records containing doctors’ prescribing habits. The Court reasoned that the law infringed the First Amendment rights of data brokers and drug makers because it restricted their ability to purchase and use the data for marketing while allowing it to be shared for other purposes. This opinion may threaten any future state laws that limit the sale of suicide predictions. Such laws must be drafted with this case in mind, and to prevent a similar outcome, they should allow sharing of suicide predictions only for a narrow range of purposes, such as research (or prohibit such sharing completely).

In addition to threatening consumer privacy, social suicide prediction poses risks to consumer safety and autonomy. Because of the lack of transparency surrounding these predictions and their outcomes, it is unknown how often wellness checks result in involuntary hospitalization, which deprives people of liberty and may do more harm than good. In the short term, hospitalization can prevent suicide. However, people are at high risk for suicide shortly after being released from hospitals. Thus, civil commitments could paradoxically increase the risk of suicide.

Facebook has deployed its system in nearly every region in which it operates except the European Union. In some countries, attempted suicide is a criminal offense. For instance, in Singapore, where Facebook maintains its Asia-Pacific headquarters, a suicide attempt is punishable by imprisonment for up to one year. In these countries, Facebook-initiated wellness checks could result in criminal prosecution and incarceration. This example illustrates how social suicide prediction is analogous to predictive policing. In the US, the Fourth Amendment protects people and their homes from warrantless searches. However, under the exigent circumstances doctrine, police may enter homes without warrants if they reasonably believe entry is necessary to prevent physical harm. Stopping a suicide clearly falls within this exception. Nevertheless, it may be unreasonable to rely on opaque AI-generated suicide predictions to circumvent Fourth Amendment protections when no information regarding their accuracy is publicly available.

Because suicide prediction tools impact people’s civil liberties, consumers should demand transparency from companies that use them. The companies should publish their suicide prediction algorithms for analysis by privacy experts, computer scientists, and mental health professionals. At a minimum, they should disclose the factors weighed to make predictions and the outcomes of subsequent interventions. In the European Union, Article 22 of the General Data Protection Regulation (GDPR) gives consumers the right “not to be subject to a decision based solely on automated processing, including profiling,” which may include profiling for suicide risk. Consumers are also said to have a right to explanation: Article 15 of the GDPR allows them to request the categories of information being collected about them and to obtain “meaningful information about the logic involved . . . .” The US lacks similar protections at the federal level. However, the California Consumer Privacy Act of 2018 (CCPA) provides some safeguards. It includes inferred health data within its definition of personal information, which likely covers suicide predictions. The CCPA allows consumers to request the categories of personal information collected from them and to ask that personal information be deleted. These safeguards should increase the transparency of social suicide prediction. However, the CCPA has significant gaps. For instance, it does not apply to non-profit organizations such as Crisis Text Line. Furthermore, the tech industry is lobbying to weaken the CCPA and to implement softer federal laws that would preempt it.

One way to protect consumer safety would be to regulate social suicide prediction algorithms as software-based medical devices. The Food and Drug Administration (FDA) has collaborated with international medical device regulators to propose criteria for defining “Software as a Medical Device.” The criteria include whether developers intend the software to diagnose, monitor, or alleviate a disease or injury. Because the goal of social suicide prediction is to monitor suicidal thoughts and prevent users from injuring themselves, it should satisfy this requirement. The FDA also regulates mobile health apps, and because apps that utilize suicide prediction algorithms pose risks to consumers, the agency likely retains the authority to regulate them. Such apps include Facebook and its Messenger app.

Jack Balkin argues that the common law concept of the fiduciary should apply to companies that collect large volumes of information about consumers. Like classic fiduciaries, such as doctors and lawyers, internet platforms possess more knowledge and power than their clients, and these asymmetries create opportunities for exploitation. Treating social suicide predictors as information fiduciaries would subject them to duties of care, loyalty, and confidentiality. Under the duty of care, companies would be required to ensure through adequate testing that their suicide prediction algorithms and interventions are safe. The duties of loyalty and confidentiality would require them to protect suicide prediction data and to abstain from selling it or otherwise using it to exploit consumers.

Alternatively, we might require that suicide predictions and subsequent interventions be made under the guidance of licensed healthcare providers. For now, humans remain in the loop at Facebook and Crisis Text Line, yet that may not always be the case. Facebook has over two billion users, and it continuously monitors user-generated content for a growing list of threats, including terrorism, hate speech, political manipulation, and child abuse. In the face of these ongoing challenges, the temptation to fully automate suicide prediction will grow. Even if human moderators remain in the system, AI-generated predictions may nudge them toward contacting police even when they have reservations about doing so. Similar concerns have been raised in the context of criminal law. AI-based sentencing algorithms provide recidivism risk scores to judges, who use them in sentencing decisions. Critics argue that even though judges retain ultimate decision-making power, it may be difficult for them to defy software recommendations. Like social suicide prediction tools, criminal sentencing algorithms are proprietary black boxes, and the logic behind their decisions is off-limits both to the people who rely on their scores and to those affected by them.

The Due Process Clause of the Fourteenth Amendment protects people’s right to avoid unnecessary confinement. So far, only one state supreme court has considered a due process challenge to the use of proprietary algorithms in criminal sentencing, and the court ultimately upheld the sentence because it was not based solely on a risk assessment score. Nevertheless, the risk of hospitalizing people without due process is a compelling reason to make the logic of AI-based suicide predictions more transparent.

Regardless of the regulatory approach taken, it is worth taking a step back to scrutinize social suicide prediction. Tech companies may like to “move fast and break things,” but suicide prediction is an area that should be pursued methodically and with great caution. Lives, liberty, and equality are on the line.   

Mason Marks is a research fellow at the Information Law Institute at NYU Law School and a visiting fellow at the Information Society Project at Yale Law School. You can reach him by e-mail at mason.marks at yale.edu
