Balkinization  

Tuesday, October 30, 2018

Seeing Transparency Through: Healthcare Software, Data Privacy, and Regulation

Guest Blogger

Bonnie Kaplan

For the Symposium on the Law and Policy of AI, Robotics, and Telemedicine in Health Care.

Transparency: the theme runs through much of the conference.  Improvements in both data protection and access, as well as in algorithms and their use, are hoped to follow from increased access to data and greater transparency about how it is generated, collected, processed, and used.  In health care, contractual issues, liability, intellectual property protection, and outdated regulation all contribute to the lack of transparency.

Transparency for all health care data and software is not an unmitigated good.  Autonomy and dignity, public health, patient care, biosurveillance, clinical and other scientific research, marketing, and innovation also require consideration.  Even so, health information technologies need more transparency in their software, their data, their privacy practices, and their regulation.

Healthcare information technology is built on complex software systems composed of numerous algorithms, so concerns about algorithmic transparency apply to their use in health care.  Explainability, testability, understandability, identifiability of inherent biases or outright errors, correctability—all are compromised without transparency.

This problem affects systems such as electronic health record (EHR) systems for ordering, communicating, storing, and retrieving clinical information about each patient.  EHRs are built of algorithms.  Consider medication orders.  Ordering a medication through an EHR can involve automated dosage calculations based, in part, on the patient’s weight.  The calculation may seem straightforward, yet errors can arise from incorrect input, malfunctioning software or hardware, inconsistencies in how and where weight or medication are recorded, and erroneous formulas, none of which may be transparent.  Similar opacity characterizes predictive algorithms used for ICU bed allocation, patient monitor alert systems, and forms of decision support ranging from vaccination reminders to differential diagnosis.
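To make the opacity concrete, here is a minimal sketch, in Python, of the kind of weight-based dose calculation described above.  The 10 mg/kg dosing rate, the function name, and the recorded values are hypothetical, not drawn from any vendor’s system; the point is only that a mismatch between how weight is recorded and how the formula interprets it produces a silently wrong result.

# Hypothetical sketch; the dosing rate and all names are illustrative, not any real EHR's code.
MG_PER_KG = 10.0  # example dosing rate: 10 mg per kilogram of body weight

def calculate_dose_mg(weight_kg: float) -> float:
    """Return the dose in milligrams, assuming the weight is given in kilograms."""
    return MG_PER_KG * weight_kg

# Intake recorded the patient's weight as 154 (pounds, roughly 70 kg), with no unit attached.
recorded_weight = 154.0

# The formula silently treats 154 as kilograms: 1540 mg instead of the intended ~700 mg.
print(calculate_dose_mg(recorded_weight))

Nothing in the ordering screen need reveal which unit the formula assumed or where the weight came from, which is exactly the sort of detail a clinician cannot inspect when the software is opaque.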

If a clinician suspects an error, the system vendor’s contract may include clauses that hold the vendor harmless regardless of the cause of the error and instead place responsibility on the clinician as a “learned intermediary” who is presumed able to detect any problem and circumvent it.  To protect intellectual property, contract clauses may prevent the clinician from showing the screen to others to alert them to a possible software error.  EHR vendor contracts are reported to contain such clauses, but the contracts, like the software, are considered intellectual property, so they cannot be examined to verify these claims, or the clauses may be buried in a footnote somewhere in a several-thousand-page contract.  Contract provisions that shield both the contract and the software, in the words of the 2013 Westat report for the Office of the National Coordinator for Health Information Technology, impede the ability “to compare different EHR technology developer systems, provide access to researchers, or even address possible patient safety concerns.”  Moreover, hold-harmless clauses make clinicians liable for basing care on software errors unknown to them.  As these systems become the standard of practice, clinicians can also be liable for not using such software.

A confusing mix of regulatory agencies oversees different aspects of health information technology.  Although the FDA tests what it considers a “medical device” for safety and efficacy, EHR software generally falls outside FDA oversight, as do telehealth and mHealth (smartphone) devices, wearables, and service and assistive robots; most such products are not classified as medical devices.  The FCC regulates the transmission of information between devices.  The FTC regulates vendor compliance with their user agreements, which are anything but negotiable or easily understandable to users.  HIPAA (Health Insurance Portability and Accountability Act) regulation does not cover popular devices like Fitbits and Apple Watches.  The burden, then, falls on consumers and clinicians who may lack the knowledge to detect or prevent problems, and even for those with the knowledge, the lack of transparency in user agreements, algorithms, communication protocols, possible user settings, and so on makes evaluation or redress nearly impossible.

Regulatory complexity and fragmentation concerning health data privacy also burden those the regulations are presumed to protect.  The sensitive nature of health data is recognized and given special protection internationally.  It is not surprising, then, that the basis for US and EU privacy law surfaced in health data privacy regulation inspired by Alan F. Westin’s foundational work.  After the US Department of Health, Education, and Welfare Advisory Committee on Automated Personal Data Systems’ July 1973 report, Records, Computers, and the Rights of Citizens, recommended a Federal Code of Fair Information Practice (FIP) for all automated personal data systems, the Privacy Act of 1974 incorporated the principles it laid out.

Privacy protection legislation and regulation require expertise to understand and run contrary to public expectations of what constitutes privacy.  Health data privacy is regulated through HIPAA (for clinical data), the Common Rule (for research data), and special regulations for some categories of data, such as data concerning minors, genetic testing, biobanking, or mental illness.  Patient-generated data from social media or commercial devices and apps is not privacy protected in these ways.  Individuals have little idea of what data is protected and what is not.  Even for supposedly protected data, they may be required to sign authorizations to release data for billing and other, more nebulous purposes.  They likely are not aware of the risks of re-identification of de-identified data, data aggregation, data sales or theft, or of how such data may be used in credit ratings, insurance sales and rates, employment, policing, or advertising.

Data governance, too, is complicated.  As noted, clinical data, research data, health-related data from commercial devices or social networks (including patient-generated data), data storage, and data transmission are all regulated differently, to the extent they are regulated at all.  The distinctions between these categories, though, are becoming blurred.  Moreover, each state and various federal health care providers (e.g., the military and the Indian Health Service) have their own sets of regulations, making interoperability and data sharing across jurisdictional boundaries more complex.  Large organizations employ data governance specialists.  Patients fend for themselves.

Privacy and security vulnerabilities affecting devices, social networks, and other means of data generation affect health-related data as well.  Devices and smartphones may be lost, stolen, shared, or hacked.  Collateral data, such as location data, can compromise people besides the primary user.  Similarly, knowledge of clinicians’ prescribing practices obtained from prescription data can affect both patients and clinicians.  Data ownership and sales also require more transparency.  Patients do not own their data—in medical records, on social networks, from mHealth apps—or do they?

Privacy also is compromised by complex and opaque user agreements for wearable devices, smartphone health apps, home sensors, and other technologies.  A 2015 study by Ali Sunyaev, Tobias Dehling, Patrick L. Taylor, and Kenneth D. Mandl in the Journal of the American Medical Informatics Association reported that fewer than one-third of popular health apps had privacy policies at all, and many of the policies that did exist did not address the specific app.  These agreements are written in nearly incomprehensible language, with the privacy sections (if any) buried in much other material.  There is little guarantee that a policy will be followed, or that there will be sanctions if it is not.  Not surprisingly, people do not read these policies, nor do they understand them if they do.  They have little choice but to accept the agreement as is or not use the device or app, raising issues about consent.  Many have no idea that what they consider health information is not protected by regulation, so data about them may be sold, aggregated, or used in other ways without their permission or knowledge.

The lack of transparency that manifests in algorithms, telemedicine, and robots used in health care, as well as in electronic health record systems, mHealth apps and wearables, devices (including implantables, such as pacemakers), and social networks, cuts across a number of legal and regulatory areas.  These include intellectual property; data privacy and computer security; contracts (for vendors, for app users); liability and malpractice; patient consent and authorizations; and data as speech (and whose speech is protected, and how).  As new technologies rapidly develop, existing law becomes inadequate, obsolete, and fragmented, while ethical and social issues warrant far more extensive discussion.

Law and regulation need updating in light of new technologies.  Privacy and intellectual property protection also need to allow for beneficent purposes such as patient care, public health, and research.  Current law and regulation do neither.  The FIPs hinge on transparency.  Knowing what data is collected, how it is used and safeguarded, how algorithmic processes work, and what is needed to correct data and algorithms is necessary both for control by the person whose data it is and for accountability by the organizations creating, holding, or using the data and software.  Increased transparency is a crucial part of what is needed to address these concerns.  Untangling legal and regulatory complexity requires more transparency, so that regulations can be simplified, harmonized, and made more flexible and effective.

Although legal issues surrounding contracts, liability, intellectual property, and privacy regulation contribute to the lack of transparency, we also need public education and discussion, along with research and analysis from different perspectives and academic disciplines.  Just as the technologies are converging, insights and methods from a variety of academic fields, together with studies of public attitudes and of actual communities of practice, need to converge to address regulatory and legal change as well as social practices and personal behavior.  Improving transparency is a necessary step toward better protecting patient privacy and patient care, and thereby toward encouraging promising data use for improved health and health care.


Bonnie Kaplan is Lecturer, Yale Center for Medical Informatics; Faculty, Program on Biomedical Ethics; Scholar, Yale Bioethics Center; and Fellow, Information Society Project and Solomon Center for Health Law and Policy, Yale Law School. You can reach her by e-mail at bonnie.kaplan at yale.edu



