... and it raised for me an interesting question. Normally, I'm pretty aggressive in demanding performance metrics from public policy proposals. However, the more "life-and-death" the proposal, the more real the moral hazard: how many people, by virtue of being in a control group, are denied effective aid in the name of verifying that the aid works?
I'll take an example Duflo cites here: deworming schoolchildren as an incentive to attend school. To determine the efficacy of this proposal, some groups of children would be dewormed and others would not. Worms are not fatal, so the moral hazard here seems lower-- the next year, the children from the control group would be likelier to receive deworming because the study determined it was effective.
This hazard is different from the one raised by pharmaceutical drug trials. There, members of the control group receive the current standard of care, while members of another group assume a certain amount of risk by receiving a treatment that may be curative... or may exacerbate their condition, or create additional unanticipated conditions (see, e.g., DES).
Here, the control group definitely receives a substandard outcome-- and the test group definitely receives at least some benefit from the additional capital and resources invested in their community, right? I'm aware of hypotheses suggesting that the group receiving investment may in fact end up with a substandard outcome--e.g., that aid displaces the organic development of economic capacity on a large scale, and that aid more intangibly creates a culture of subsidy, or imports culturally irrelevant preconceptions. But I also believe that raising a question raises awareness. So in the example of malaria nets, even if distribution of free nets DID decrease net usage compared with requiring purchase, it also necessarily raised awareness that purchasing a malaria net can protect you from malaria.... right? Many people are better educated than I am on that question, and I welcome comments, but for these purposes I'll say neither side is a slam dunk.
Duflo's talk glancingly mentions "political difficulties" and "ideology" that cloud the use of proper control-group testing of social policy. This moral hazard clearly falls in that category. This isn't simply a matter of favoring "science" over "ideology." It's also about weighing immediacy and individual gains against the value of positive outcomes for those being helped. The most efficient program might not be the most desirable--especially for politicians. Let me explain.
Politicians operate on an electoral cycle, and budgeting is also cyclical. Data takes far longer to gather than proposals do to generate, and even when data is gathered, the outcome may not be positive or the results may be inconclusive. It is therefore more politically appealing to offer an immediate program for which no data is available: nobody can then prove you're funding an ineffective program. Likewise, it is advantageous for an individual would-be do-gooder not to collect data, and thus never prove whether the program works at all. So, for politicians and for workers at the programs themselves, data is not necessarily an advantage. And even if you DO care about the outcome of the program in itself, or about helping for helping's sake, the value you place on helping immediately, versus helping in the most efficacious way possible, might be so high that it is still not in your interest to gather data. This rears its head most clearly in attempts to solve social problems that are quickly fatal or debilitating, but it is present to some degree in all of them-- after all, what motivates us to help is the same thing that motivates us to help NOW.
There are a few options for bridging this divide. One is a purer focus on efficiency on the part of program funders, perhaps most directly achieved through capitated funding based on outcomes. Another is to simply wonder at the complexity of it all, and do nothing. A third option is messy, but allows us to phase in efficiency while addressing the (perhaps mostly emotional) need to help NOW rather than to help efficiently: increase, over time, the proportion of funding that is capitated based on efficacy, while reserving some funding for messier, unproven strategies so that innovation can continue (avoiding the "this hasn't been done, therefore it can't be done" problem). This might even be what we have now, given the hodgepodge of funding policies in the poverty arena.
As Duflo says, there is no silver bullet, even where science is applied. If nobody cared, nobody would help; but because they care, they want to help now. The purely rational approach, in that case, has limited traction. Data is not a panacea, but it is an important piece. I usually get on people who ignore data in the name of "caring"-- but that's only part of the picture.