Wednesday, January 20, 2010

When Development Aid Requires Deaths For Scientific Confirmation

Brian Tamanaha

The field of development economics is in a period of tumultuous self-examination, on full display in a new collection by leading scholars, What Works in Development? This is the latest of a host of recent books questioning the value of decades and billions of dollars of development assistance. As William Easterly and Jessica Cohen put it in the Introduction, “there is no consensus on ‘what works’ for growth and development.” With the collapse of the Washington Consensus, “thinking big” is now “in crisis.” The current shift is toward small projects and experimentation to see what works: try micro-financing, pass out insecticide-treated mosquito nets (free or for a small fee), give kids school uniforms, educate parents, and so forth.

There is an ongoing battle within the field over what counts as valid knowledge and how to acquire it. Pushing the issue is a group of scholars, apparently gaining momentum, who insist that randomized evaluations—using control groups to expose differences that follow from the implementation of reform programs—are the most reliable source of “hard” evidence about the effects of development projects.

That makes sense.

But this method can have troubling implications, evident in the following passage. The authors, Peter Boone and Simon Johnson, criticize Jeffrey Sachs's Millennium Villages Project, which implements a broad package of reforms, for not providing any way to test its success. Here is their proposal for a better way:
To this end [testing], we have partnered with medical statisticians at the London School of Hygiene and Tropical Medicine along with local health professionals to design and implement projects to reduce child mortality in Africa and India. Although these are aid projects, they are designed as randomized controlled trials identical to a drug approval trial and are testing whether a comprehensive package of interventions, including intensive provision of community health education and contracted-out clinical services, will be sufficient to rapidly reduce child deaths. The trials are being implemented in 600 villages and cover a total population of 500,000. It will take three years to accumulate sufficient events (child deaths) to credibly determine the impact on overall child mortality for each trial.
The bottom line of this human experiment is that excess (possibly preventable) child deaths in the control groups (assuming the intervention packages have some success) are the price to be paid for determining, to the satisfaction of social scientists, whether or which reform packages effectively reduce child mortality.

The rationale for doing this is understandable, but…

Would we engage in such trials in our own society? Perhaps I am mistaken, but my understanding is that drug tests in the United States that begin to show real mortality differences are called off early, giving the control group access to the beneficial drug. Perhaps the authors have a similar plan in place for their intervention package, although they don't mention it on the website describing the project. Pulling the plug early, of course, might compromise the scientific validity (or the strength of the findings) of the randomized evaluation.

The clinical trials you're referring to ("drug tests in the United States") aim at individual health and are testing the effect on individuals.

The development project trials aim at public health and are testing the effect on populations. They're similar to vaccine trials, and pose some of the same ethical problems.

