Balkinization  

Thursday, September 30, 2010

Fired By Software

Frank Pasquale

There's a very interesting piece by Mike Elgan called "Pre-Crime Comes to the HR Dept." After describing new technology designed to predict applicants' and employees' future behavior, he concludes:
Following the current trend lines, very soon social networking spiders and predictive analytics engines will be working night and day scanning the Internet and using that data to predict what every employee is likely to do in the future. This capability will simply be baked right in to HR software suites.

When the software decides that you're going to quit, steal company secrets, break the law, post something indecent on a social network or lie on your expense report, the supervising manager will be notified and action will be taken -- before you make the predicted transgression.

Like Danielle Citron's piece on Technological Due Process, Elgan's article discloses the troubling consequences of these trends. As he points out, unlike normal legal proceedings, in personnel actions "You don't get to 'face your accuser.' You can be passed over for hiring or promotion based on what kind of person you are or what they think you might do in the future. You don't have to actually violate company rules, and they don't have to prove it."
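
To make the mechanics concrete, here is a deliberately toy sketch, in Python, of the kind of pipeline Elgan describes. Everything in it, from the feature names to the weights to the threshold, is my own invention rather than any vendor's actual formula; the point is that the employee never sees any of these parameters, while the manager receives only the flag they produce.

```python
# A toy "pre-crime" HR pipeline. Every feature, weight, and threshold
# here is hypothetical; real products treat such details as trade secrets.
from dataclasses import dataclass

@dataclass
class EmployeeProfile:
    name: str
    job_search_signals: float  # e.g., job-board activity, scaled 0.0-1.0
    negative_posts: float      # tone of public posts, scaled 0.0-1.0
    expense_anomalies: float   # deviation from peer expenses, scaled 0.0-1.0

# Vendor-chosen weights and cutoff, invisible to the people being scored.
WEIGHTS = {"job_search_signals": 0.5,
           "negative_posts": 0.3,
           "expense_anomalies": 0.2}
FLAG_THRESHOLD = 0.6

def risk_score(p: EmployeeProfile) -> float:
    """Collapse a person's data trail into one predicted-risk number."""
    return sum(w * getattr(p, feature) for feature, w in WEIGHTS.items())

def flag_for_manager(staff: list[EmployeeProfile]) -> list[str]:
    """Names sent to the supervising manager before any actual transgression."""
    return [p.name for p in staff if risk_score(p) >= FLAG_THRESHOLD]

staff = [EmployeeProfile("A. Employee", 0.8, 0.6, 0.3),
         EmployeeProfile("B. Employee", 0.1, 0.2, 0.1)]
print(flag_for_manager(staff))  # ['A. Employee'], who has broken no rule
```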

What I find particularly troubling here is the fragmentation of data flows now accelerating on the internet. As personalization advances, there is no single set of “search results” for a person’s name. One searcher may see a collection of positive or neutral results about an individual; another might be presented with compromising material. Screeners within human resources or credit approval departments may order specialized software that scours the internet for the most troubling material about any applicant. It is unlikely that the applicants they evaluate will have access to similar software.
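
A toy example may help show how complete that fragmentation can be. In the sketch below (Python again, with invented documents and sentiment values), the very same corpus produces two different portraits of the same person, depending on whether it is ranked by a general-purpose engine or by a screener's dirt-digging tool:

```python
# A toy illustration of fragmented search results: the same documents,
# ranked by two different scoring rules, yield two different pictures
# of the same person. All documents and sentiment values are invented.
DOCS = [
    {"title": "Jane Doe wins civic award",      "sentiment":  0.8},
    {"title": "Jane Doe's marathon fundraiser", "sentiment":  0.6},
    {"title": "Lawsuit names a 'Jane Doe'",     "sentiment": -0.7},
    {"title": "Forum rumor about Jane Doe",     "sentiment": -0.9},
]

def general_results(docs, k=2):
    """A general-purpose engine: favor positive, prominent coverage."""
    return [d["title"] for d in sorted(docs, key=lambda d: -d["sentiment"])[:k]]

def screener_results(docs, k=2):
    """A screener's tool: surface the most damaging material first."""
    return [d["title"] for d in sorted(docs, key=lambda d: d["sentiment"])[:k]]

print(general_results(DOCS))   # what the applicant sees under her own name
print(screener_results(DOCS))  # what the HR screener's software surfaces
```

The applicant googling her own name sees the first list; the screener's report is built from the second.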

As I point out in my most recent article (PDF) on our ever-more-panoptic internet, rumor and innuendo circulating online can indiscriminately harm the innocent. While the investigative consumer reports (ICRs) generated by credit reporting agencies are subject to several strictures under the Fair Credit Reporting Act, automated services like the ones Elgan describes are escaping scrutiny. If they are protected by trade secrecy, it will be very difficult for individuals to figure out exactly how they flag suspect behavior. Moreover, "almost half (47%) of employers request a credit check even for jobs that involve no direct access to, or responsibility for, money, according to the Society for Human Resource Management." If they're consulting credit scores, that's another "black box" evaluation that I have not seen adequately explained or audited.

Reputational systems can never be rendered completely just, but legislators can take two steps toward fairness. The first is relatively straightforward: to ensure that key decisionmakers reveal the full range of online sources they consult as they approve or deny applications for credit, insurance, employment, and college and graduate school admissions. Such disclosure will at least warn applicants of the dynamic digital dossier they are accumulating in cyberspace. In a forthcoming book chapter, I call for such rules to be adopted as part of a "Fair Reputation Reporting Act."

The second step is broader: effective disclosure requirements need to cover more than the users of reputational information; they should also apply to some aggregators. Just as banks have moved from consideration of a long-form credit report to use of a single commensurating credit score, employers and educators in an age of reputation regulation may turn to intermediaries that combine extant indicators of reputation into a single score for a person. Since such scoring can be characterized as a trade secret, it may be impossible to reverse-engineer its basis. Any proposed legislation will need to address the use of such reputation scores, lest black-box evaluations defeat its broader purposes of accountability and transparency.
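
To see why such a score resists reverse engineering, consider one more toy sketch in Python. Every input, weight, and nonlinearity below is my own invention, not any actual vendor's formula; the point is that once heterogeneous records are normalized, weighted, and squashed into one number, nothing in the output reveals which input drove it.

```python
# A toy reputation-score aggregator. Every input, weight, and the
# logistic squashing below are invented; this is no vendor's actual formula.
import math

def _squash(x: float) -> float:
    """Logistic squashing: one of many nonlinearities a vendor might pick."""
    return 1.0 / (1.0 + math.exp(-x))

def reputation_score(credit_score: int,     # e.g., 300-850
                     web_sentiment: float,  # -1.0 (hostile) to 1.0 (glowing)
                     litigation_count: int) -> int:
    """Collapse incommensurable records into a single 0-100 number.

    The normalization and weights are the trade secret: without them,
    the subject cannot tell which input produced the final score.
    """
    z = (2.0 * (credit_score - 575) / 275
         + 1.5 * web_sentiment
         - 0.8 * litigation_count)
    return round(100 * _squash(z))

# Two very different records collapse to comparable mid-range scores;
# the single number erases everything that distinguishes them.
print(reputation_score(credit_score=800, web_sentiment=-0.5, litigation_count=1))
print(reputation_score(credit_score=560, web_sentiment=0.3, litigation_count=0))
```

Note that even identical outputs would be consistent with many different weightings, which is what makes the basis of such a score so hard to reconstruct from outside.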

A final note: Anyone who's still smitten with automated judgment might want to consider what Wall Street's armada of computational strategies has brought us. As Amar Bhide has stated in the HBR:

In recent times . . . a new form of centralized control has taken root—one that is the work not of old-fashioned autocrats, committees, or rule books but of statistical models and algorithms. These mechanistic decision-making technologies have value under certain circumstances, but when misused or overused they can be every bit as dysfunctional as a Muscovite politburo.


Consider what has just happened in the financial sector: A host of lending officers used to make boots-on-the-ground, case-by-case examinations of borrowers’ creditworthiness. Unfortunately, those individuals were replaced by a small number of very similar statistical models created by financial wizards and disseminated by Wall Street firms, rating agencies, and government-sponsored mortgage lenders. This centralization and robotization of credit flourished as banks were freed from many regulatory limits on their activities and regulators embraced top-down, mechanistic capital requirements. The result was an epic financial crisis and the near-collapse of the global economy. Finance suffered from a judgment deficit, and all of us are paying the price.


My own prediction is that workplaces dominated by leaders who outsmart ever-more-sophisticated HR technology will be culturally Schrutified and short-sighted. They will be more than capable of following orders in lockstep from the top, but I doubt that Pavlovian software can condition people to be innovative or spontaneous.

X-Posted: Concurring Opinions.
