Michal Shur-Ofry’s article, “Access to Error,” is a thoroughly enjoyable read, outstanding in both concept and execution. Hers is one of those arguments that makes you think, “How did we miss that?” And, “Now what do we do about it?”
Here’s the foundational insight, in brief: patents provide an incentive to disclose workable technologies. Failures are the blind spot of the patent incentive structure. But there is actually an enormous value in knowing what doesn’t work. As the author points out, the commercial value of knowing what doesn’t work makes this knowledge eligible for trade secret protection. But that is the opposite of incentivizing its disclosure, where it can do the most good.
In the health sector, other scholars have pointed out the problems associated with allowing companies with a financial stake in pharmaceutical research to selectively release their research results. This literature has been closely tied to the context of pharmaceutical research and assumes that the need for access to unsuccessful trial results is unique to that context.
This article takes that concern to a different level, with relevance to all areas of science and technology, business models, and social innovation. IP facilitates a market for invention at the end stage, when you have a successful technology. But the secrecy along the way is counterproductive to speeding innovation.
Consider the example of the race to dominate the market for commercially successful electric lighting technology, a historical case study that I detailed in "Illuminating Innovation." Each team in this patent race generated enormous knowledge about what didn’t work---which they all scrupulously hid from competitors. Edison and his competitors recognized that there was enormous strategic value in forcing others to make the same mistakes.
The private value of concealing "negative information" may be high, but Shur-Ofry persuasively articulates the immense social value in promoting its release. The benefits of access to error are widespread, extending well beyond pharmaceutical research data. There is immense value in access to error in technological R&D that never gets published, and in business method trial and error. If a dozen nonprofit organizations are trying to reduce homelessness, they could all benefit greatly from knowing what the others have tried that failed. So the concept of access to error is bigger than technology and much bigger than patent law, befitting this year’s theme of Innovation Law Beyond IP.
Shur-Ofry's illustration of the problem of access to error is highly convincing. The dispute will---and should---be about the best ways to promote the release of negative information. The author dedicates a significant portion of her paper to discussing possible ways to do that. She first offers a few doctrinal examples to make the point that aspects of IP law can be tweaked to reflect this insight. I foresee a vein of follow-on literature from other scholars with expertise in particular niches of IP doctrine, playing this concept out with more examples and in more detail. It is a highly generative insight.
Ultimately, however, the author rightly and importantly points out that IP probably inevitably fails to incentivize the circulation of negative results. Accordingly, Shur-Ofry spends more time on “beyond IP” questions of institutional structure: how to incentivize a knowledge structure that will promote access to error? I hope she’ll also turn this line of inquiry into a TED talk---how to increase access to error is a tough “big question” that people in all sorts of institutions need to be thinking about.
If you are an organization trying to build technology to make books more accessible to the blind, there is a value to you in knowing what has already been tried that failed. You can avoid wasting time attempting the same thing. Or perhaps you can see a way to turn two failed components into a successful combination.
There will always be the challenge in an information-rich environment of sorting through what is available to separate what is helpful from what is not. But surely we wouldn’t want to attempt to make that separation a priori. How can one actor know what information will be useful to another actor? We should want as much information as possible to be available, so that people can find and choose what meets their needs.
In the end, this concern actually points to yet another reason why we may be under-incentivizing the sharing of error. People wrongly assume that errors have no value to others---that publishing this information might only waste others' time. We systematically underappreciate the value of negative information… and so does our legal structure.
Lea Shaver is Associate Professor of Law and Dean's Fellow at the Indiana University Robert H. McKinney School of Law. She can be reached at lbshaver at iupui.edu.