When Development Aid Requires Deaths For Scientific Confirmation
The field of development economics is in a period of tumultuous self-examination, on full display in a new collection by leading scholars, What Works in Development? This is the latest of a host of recent books questioning the value of decades and billions of dollars of development assistance. As William Easterly and Jessica Cohen put it in the Introduction, “there is no consensus on ‘what works’ for growth and development.” With the collapse of the Washington Consensus, “thinking big” is now “in crisis.” The current shift is toward small projects and experimentation to see what works: try micro-financing, pass out insecticide-treated mosquito nets (free or for a small fee), give kids school uniforms, educate parents, and so forth.
There is an ongoing battle within the field over what counts as valid knowledge and how to acquire it. Pushing the issue is a group of scholars, apparently gaining momentum, who insist that randomized evaluations—using control groups to expose differences that follow from the implementation of reform programs—are the most reliable source of “hard” evidence about the effects of development projects.
That makes sense.
But this method can have troubling implications, evident in the following passage. The authors, Peter Boone and Simon Johnson, criticize Jeffrey Sachs's Millennium Villages Project, which implements a broad package of reforms, for not providing any way to test its success. Here is their proposal for a better way:
To this end [testing], we have partnered with medical statisticians at the London School of Hygiene and Tropical Medicine along with local health professionals to design and implement projects to reduce child mortality in Africa and India. Although these are aid projects, they are designed as randomized controlled trials identical to a drug approval trial and are testing whether a comprehensive package of interventions, including intensive provision of community health education and contracted-out clinical services, will be sufficient to rapidly reduce child deaths. The trials are being implemented in 600 villages and cover a total population of 500,000. It will take three years to accumulate sufficient events (child deaths) to credibly determine the impact on overall child mortality for each trial.
The bottom line of this human experiment is that excess (possibly preventable) child deaths in the control groups (assuming the intervention packages have some success) are the price to be paid for determining—to the satisfaction of social scientists—whether or which reform packages effectively reduce child mortality.
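The authors' point about needing to "accumulate sufficient events" is, at bottom, a standard statistical power calculation: the rarer the outcome and the smaller the expected effect, the more deaths must be observed before the difference between arms is credible. A minimal sketch of the usual two-proportion version, using entirely hypothetical mortality figures (the passage does not give the trials' actual design parameters):

```python
from statistics import NormalDist

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.8):
    """Normal-approximation sample size per arm for a two-sided
    comparison of two event probabilities (p1 control, p2 treated)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # quantile for desired power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_a + z_b) ** 2 * var / (p1 - p2) ** 2

# Hypothetical figures: 10% child mortality in the control arm over the
# trial window, and an intervention assumed to cut that by 30% (to 7%).
n = sample_size_per_arm(0.10, 0.07)   # children needed per arm
events = n * (0.10 + 0.07)            # expected deaths across both arms
```

With these illustrative numbers, detecting a 30% reduction requires on the order of a thousand children per arm and a couple of hundred deaths across the trial, which is why such studies need large populations and multi-year windows.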
The rationale for doing this is understandable, but…
Would we engage in such trials in our own society? Perhaps I am mistaken, but my understanding is that drug trials in the United States are halted early when they begin to show real mortality differences, giving the control group access to the beneficial drug. Perhaps the authors have a similar plan in place for their intervention package, although they don't mention it on the website describing the project. Pulling the plug early, of course, might compromise the scientific validity (or the strength of the findings) of the randomized evaluation.