Tuesday, October 30, 2018
Artificial Intelligence for Suicide Prediction
Guest Blogger Mason Marks
For the Symposium on The Law And Policy Of AI, Robotics, and Telemedicine In Health Care.
Suicide is a global problem, causing 800,000 deaths per year worldwide. In the United States, suicide rates have risen by 25% over the past two decades, reaching 45,000 deaths per year. Suicide now claims more American lives than auto accidents. Traditional methods of predicting suicide, such as questionnaires administered by doctors, are notoriously inaccurate. Hoping to predict suicide more accurately and thereby save lives, hospitals, governments, and internet companies have begun developing artificial intelligence (AI)-based suicide prediction tools. This essay analyzes the underexplored risks these systems pose to people’s safety, privacy, and autonomy. It concludes with recommendations for minimizing those risks.
Two parallel tracks of AI-based suicide prediction have emerged. On the first track, which I call “medical suicide prediction,” doctors and hospitals use AI to analyze patient records. Medical suicide prediction is mostly experimental, and aside from one program at the Department of Veterans Affairs (VA), it is not yet widely used. Because medical suicide prediction occurs within the healthcare context, it is subject to federal laws, such as HIPAA, which protects the privacy and security of patient information, and the Common Rule, which protects human research subjects.
My focus here is on the second track of AI-based suicide prediction, which I call “social suicide prediction.” Though it is essentially unregulated, social suicide prediction is already widely used to make decisions that affect people’s lives. It predicts suicide risk using behavioral data mined from consumers through their interactions with social media, smart phones, and the Internet of Things (IoT). The companies involved, which include large internet platforms such as Facebook and Twitter, are not generally subject to HIPAA’s privacy regulations, principles of medical ethics, or rules governing research on human subjects.
How does social suicide prediction work? As we go about our daily routines, we leave behind trails of digital traces that reflect where we’ve been and what we’ve done. Companies use AI to analyze these traces and infer health information. For instance, Facebook’s AI scans user-generated content for words and phrases it believes are correlated with suicidal thoughts. The system stratifies posts into risk categories, and those deemed “high risk” are forwarded to Facebook Community Operations, which may notify police who perform “wellness checks” at users’ homes. In 2017, Facebook announced that its system had prompted over 100 wellness checks in one month. Its affiliate Crisis Text Line, a text-based counseling service targeted at children and teens, reports completing over 11,500 wellness checks at a rate of 20 per day. In addition to its standalone service, Crisis Text Line is embedded within other platforms such as Facebook Messenger, YouTube, and various apps marketed to teens.
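To make the pipeline concrete, here is a minimal sketch, in Python, of keyword-based risk stratification. Facebook’s actual classifiers are proprietary, so the phrases, weights, and thresholds below are hypothetical placeholders rather than the real signals.

```python
# A toy sketch of keyword-based risk stratification over user posts.
# The phrase weights and thresholds are hypothetical placeholders;
# the actual models are proprietary and unknown.

HYPOTHETICAL_PHRASE_WEIGHTS = {
    "no reason to go on": 0.8,
    "i want to disappear": 0.6,
    "goodbye everyone": 0.4,
}

def score_post(text: str) -> float:
    """Sum the weights of any flagged phrases found in the post."""
    text = text.lower()
    return sum(w for phrase, w in HYPOTHETICAL_PHRASE_WEIGHTS.items() if phrase in text)

def stratify(text: str) -> str:
    """Map a post's score onto a coarse risk tier."""
    score = score_post(text)
    if score >= 0.8:
        return "high risk"      # in the described pipeline, escalated to human reviewers
    if score >= 0.4:
        return "moderate risk"
    return "low risk"

if __name__ == "__main__":
    for post in ["see you at the game tonight", "goodbye everyone, no reason to go on"]:
        print(f"{stratify(post)}: {post}")
```

Even this toy version shows why transparency matters: the choice of phrases and cutoffs, which determines who receives a wellness check, sits entirely inside the hidden scoring function.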
At first glance, social suicide prediction seems like a win-win proposition, allowing internet platforms to perform a public service that benefits users and their families. However, social suicide predictions emerge from a black box of algorithms that are protected as trade secrets. Unlike medical suicide prediction research, which undergoes ethics review by institutional review boards and is published in academic journals, the methods and outcomes of social suicide prediction remain confidential. We don’t know whether it is safe or effective.
When companies engage in suicide prediction, numerous dangers arise. For example, privacy risks stem from how consumer data is stored and where the information might flow after predictions are made. Because most companies that predict suicide are not covered entities under HIPAA, their predictions can be shared with third parties without consumer knowledge or consent. Though Facebook claims its suicide predictions are not used for advertising, less scrupulous actors might share their own suicide predictions with advertisers, data brokers, and insurance companies, opening the door to consumer exploitation and discrimination.
Advertisers and data brokers may argue that the collection and sale of suicide predictions constitutes protected commercial speech under the First Amendment, and they might be right. In Sorrell v. IMS Health, the US Supreme Court struck down a Vermont law restricting the sale of pharmacy records containing doctors’ prescribing habits. The Court reasoned that the law infringed the First Amendment rights of data brokers and drug makers because it prohibited them from purchasing the data while allowing it to be shared for other uses. This opinion may threaten any future state laws that limit the sale of suicide predictions. Such laws must be drafted with this case in mind, and to prevent a similar outcome, they should allow sharing of suicide predictions only for a narrow range of purposes such as research (or prohibit it completely).
In addition to threatening consumer privacy, social suicide prediction poses risks to consumer safety and autonomy. Due to the lack of transparency surrounding prediction and its outcomes, it is unknown how often wellness checks result in involuntary hospitalization, which deprives people of liberty and may do more harm than good. In the short term, hospitalization can prevent suicide. However, people are at high risk for suicide shortly after being released from hospitals. Thus, civil commitments could paradoxically increase the risk of suicide.
Facebook has deployed its system in nearly every region in which it operates except the European Union. In some countries, attempted suicide is a criminal offense. For instance, in Singapore, where Facebook maintains its Asia-Pacific headquarters, suicide attempts are punishable by imprisonment for up to one year. In these countries, Facebook-initiated wellness checks could result in criminal prosecution and incarceration. This example illustrates how social suicide prediction is analogous to predictive policing. In the US, the Fourth Amendment protects people and their homes from warrantless searches. However, under the exigent circumstances doctrine, police may enter homes without warrants if they reasonably believe entry is necessary to prevent physical harm. Stopping a suicide clearly falls within this exception. Nevertheless, it may be unreasonable to rely on opaque AI-generated suicide predictions to circumvent Fourth Amendment protections when no information regarding their accuracy is publicly available.
Because suicide prediction tools impact people’s civil liberties, consumers should demand transparency from companies that use them. The companies should publish their suicide prediction algorithms for analysis by privacy experts, computer scientists, and mental health professionals. At a minimum, they should disclose the factors weighed to make predictions and the outcomes of subsequent interventions. In the European Union, Article 22 of the General Data Protection Regulation (GDPR) gives consumers the right “not to be subject to a decision based solely on automated processing, including profiling,” which may include profiling for suicide risk. Consumers are also said to have a right to explanation. Article 15 of the GDPR allows consumers to request the categories of information being collected about them and to obtain “meaningful information about the logic involved . . . .” The US lacks similar protections at the federal level. However, the California Consumer Privacy Act of 2018 (CCPA) provides some safeguards. It includes inferred health data within its definition of personal information, which likely covers suicide predictions. The CCPA allows consumers to request the categories of personal information collected from them and to ask that personal information be deleted. These safeguards will increase the transparency of social suicide prediction. However, the CCPA has significant gaps. For instance, it does not apply to non-profit organizations such as Crisis Text Line. Furthermore, the tech industry is lobbying to weaken the CCPA and to implement softer federal laws to preempt it.
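As a rough illustration, here is a minimal sketch of how a platform might service CCPA-style access and deletion requests for inferred health data such as a suicide-risk score. The data model and category names are hypothetical, not any company’s actual schema.

```python
# A hypothetical sketch of servicing CCPA-style access and deletion requests
# for inferred health data such as a suicide-risk score. The categories and
# storage layout are illustrative only.

from typing import Dict, List

# Hypothetical per-user store: category of personal information -> stored values
user_records: Dict[str, Dict[str, List[str]]] = {
    "user-42": {
        "identifiers": ["user-42", "device-abc"],
        "inferred_health_data": ["suicide_risk: high"],  # inference treated as personal information
        "usage_data": ["posts_last_30_days: 87"],
    }
}

def request_categories(user_id: str) -> List[str]:
    """Return the categories of personal information collected about the consumer."""
    return sorted(user_records.get(user_id, {}))

def request_deletion(user_id: str, category: str) -> bool:
    """Delete one category of personal information at the consumer's request."""
    return user_records.get(user_id, {}).pop(category, None) is not None

if __name__ == "__main__":
    print(request_categories("user-42"))                        # what is collected
    print(request_deletion("user-42", "inferred_health_data"))  # delete the risk score
    print(request_categories("user-42"))                        # the inference is gone
```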
One way to protect consumer safety would be to regulate social suicide prediction algorithms as software-based medical devices. The Food and Drug Administration (FDA) has collaborated with international medical device regulators to propose criteria for defining “Software as a Medical Device.” The criteria include whether developers intend the software to diagnose, monitor, or alleviate a disease or injury. Because the goal of social suicide prediction is to monitor suicidal thoughts and prevent users from injuring themselves, it should satisfy this requirement. The FDA also regulates mobile health apps and likely reserves the right to regulate those that utilize suicide prediction algorithms because they pose risks to consumers. These apps include Facebook and its Messenger app.
Jack Balkin argues that the common law concept of the fiduciary should apply to companies that collect large volumes of information about consumers. Like classic fiduciaries, such as doctors and lawyers, internet platforms possess more knowledge and power than their clients, and these asymmetries create opportunities for exploitation. Treating social suicide predictors as information fiduciaries would subject them to duties of care, loyalty, and confidentiality. Under the duty of care, companies would be required to ensure through adequate testing that their suicide prediction algorithms and interventions are safe. The duties of loyalty and confidentiality would require them to protect suicide prediction data and to abstain from selling it or otherwise using it to exploit consumers.
Alternatively, we might require that suicide predictions and subsequent interventions be made under the guidance of licensed healthcare providers. For now, humans remain in the loop at Facebook and Crisis Text Line, yet that may not always be the case. Facebook has over two billion users, and it continuously monitors user-generated content for a growing list of threats including terrorism, hate speech, political manipulation, and child abuse. In the face of these ongoing challenges, the temptation to automate suicide prediction will grow. Even if human moderators remain in the system, AI-generated predictions may nudge them toward contacting police even when they have reservations about doing so. Similar concerns have been raised in the context of criminal law. AI-based sentencing algorithms provide recidivism risk scores to judges who use them in sentencing decisions. Critics argue that even though judges retain ultimate decision-making power, it may be difficult for them to defy software recommendations. Like social suicide prediction tools, criminal sentencing algorithms are proprietary black boxes, and the logic behind their decisions is off-limits to people who rely on their scores and those who are affected by them.
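The nudging concern can be illustrated with a small sketch of a human-in-the-loop escalation step: the reviewer is shown an opaque model score alongside a pre-selected default action and must actively override it to decline. The names, scores, and thresholds here are hypothetical and do not describe any platform’s actual workflow.

```python
# A sketch of a human-in-the-loop escalation step, illustrating the "nudge":
# the reviewer sees an opaque model score and a pre-selected default action,
# and accepts the default unless they actively override it. All names and
# thresholds are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Escalation:
    post_id: str
    model_score: float   # opaque AI-generated risk score
    default_action: str  # action pre-selected for the reviewer

def build_escalation(post_id: str, model_score: float) -> Escalation:
    # The model, not the human, frames the choice presented to the reviewer.
    default = "contact_emergency_services" if model_score >= 0.8 else "send_support_resources"
    return Escalation(post_id, model_score, default)

def reviewer_decision(esc: Escalation, override: Optional[str] = None) -> str:
    # A reviewer who takes no action accepts the model's recommendation.
    return override if override is not None else esc.default_action

if __name__ == "__main__":
    esc = build_escalation("post-123", 0.91)
    print(reviewer_decision(esc))                                    # follows the model
    print(reviewer_decision(esc, override="send_support_resources")) # explicit human override
```

Framed this way, the moderator’s easiest path is to accept the machine’s recommendation, which is precisely the dynamic critics describe in the sentencing context.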
The due process clause of the Fourteenth Amendment protects people’s right to avoid unnecessary confinement. So far only one state supreme court has considered a due process challenge to the use of proprietary algorithms in criminal sentencing; the court ultimately upheld the sentence because it was not based solely on a risk assessment score. Nevertheless, the risk of hospitalizing people without due process is a compelling reason to make the logic of AI-based suicide predictions more transparent.
Regardless of the regulatory approach taken, it is worth taking a step back to scrutinize social suicide prediction. Tech companies may like to “move fast and break things,” but suicide prediction is an area that should be pursued methodically and with great caution. Lives, liberty, and equality are on the line.
Mason Marks is a research fellow at the Information Law Institute at NYU Law School and a visiting fellow at the Information Society Project at Yale Law School. You can reach him by e-mail at mason.marks at yale.edu.

Posted 9:00 AM by Guest Blogger