
Thursday, August 25, 2011

Arab Spring Scorecard

Tunisia: Ben Ali is in exile in Saudi Arabia with his haul of gold.

Egypt: Hosni Mubarak is currently on trial while also dealing with serious health issues. Until a few days ago, his trial was broadcast on live TV. The ailing former head of state was too ill to sit in a chair, so he lay on a bed inside a cage.

Yemen: President Saleh appears to be under informal house arrest in Saudi Arabia, where he was hospitalized following an attack. He has been out of the hospital for some time and frequently promises to return to Yemen.

Iran: Protests earlier this year were quickly and brutally put down. There has been little in the way of demonstrations since.

Libya: Gaddafi has a million-dollar bounty on his head. Rebels are tightening their grip on Tripoli as the opposition Transitional National Council tries to quell fears about its ability to rule, and to rule justly.

Syria: Bashar al-Assad recently made promises of reforms and municipal elections while continuing his horrific crackdown on civilians and militants alike. This seems to be another phase in Assad's cycle of empty promises and mass murder.

Monday, August 8, 2011

Major Field Experiments, Yes. But Can We Get Some Basic Data for Where We're Working, Too?

Chris Blattman recently posted an article titled “One of the nicest field experiments I have ever seen,” praising a study that showed positive impacts from a women-focused voter-awareness campaign. He is so impressed with the impact evaluation that he ends the post with a death notice for the other common way of studying development tools: “if the era of simple NGO program evaluation is not dead, it is gasping its dying breath.” http://chrisblattman.com/2011/07/21/one-of-the-nicest-field-experiments-i-have-seen/

I wholeheartedly agree that “simple program evaluation” is passé as a tool for adding to our knowledge about development tools. NGO program evaluations are useless for extrapolating findings: they study a single situation after a program has been employed. Even in their own context, they can’t really show impact (change due to the program), because they have no counterfactual showing whether the change would have occurred anyway in the absence of the program. And even when a program evaluation compares multiple programs, it is impossible to tell whether the differences stem from what the programs did or from how they were run. Program evaluations essentially miss most of the reality unfolding even within their own parameters. If Blattman is right that this method is dead, I’m not sure who will be at the funeral. But to put program evaluations and impact evaluations side by side is to compare apples and apple trees.
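To see why the missing counterfactual matters, here is a minimal sketch (invented numbers, not from Blattman's post or any study): suppose every village improves over a year because of a general economic upturn, while the program itself does nothing. A before/after comparison in program villages credits the whole trend to the program; comparing against untreated villages shows the real effect is roughly zero.

```python
import random

random.seed(0)

# Invented numbers for illustration only: every village gains ~10 points
# from a shared economic upturn; the program itself adds nothing.
SECULAR_TREND = 10.0
TRUE_PROGRAM_EFFECT = 0.0

def follow_up(baseline, treated):
    """Follow-up score = baseline + shared trend (+ program effect) + noise."""
    effect = TRUE_PROGRAM_EFFECT if treated else 0.0
    return baseline + SECULAR_TREND + effect + random.gauss(0, 2)

program_before = [random.gauss(50, 5) for _ in range(500)]
control_before = [random.gauss(50, 5) for _ in range(500)]

program_after = [follow_up(b, treated=True) for b in program_before]
control_after = [follow_up(b, treated=False) for b in control_before]

def mean(xs):
    return sum(xs) / len(xs)

# A simple program evaluation sees only before/after in program villages
# and attributes the entire change to the program.
naive_impact = mean(program_after) - mean(program_before)

# An impact evaluation differences against untreated villages,
# netting out the shared trend.
counterfactual_impact = mean(program_after) - mean(control_after)

print(f"Naive before/after 'impact':  {naive_impact:+.1f}")           # roughly +10
print(f"Impact vs. control villages:  {counterfactual_impact:+.1f}")  # roughly 0
```

The naive evaluation reports a large "impact" that is entirely the background trend; only the comparison against control villages recovers the truth. This is the apples-and-apple-trees gap between the two methods.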
   
Randomized controlled experiments cost millions of dollars (mostly provided by philanthropic donors). The interventions studied must be organized entirely with the study in mind, from the beginning. They often require a pilot study and can take over two years to complete. These studies are cumbersome, but they pay off handsomely in answering specific questions about human economic and social behavior. It would be great if more field practitioners had more contact with the resulting articles.

However, great as they are, today’s rare experimental studies still leave a huge gap in making international development interventions more successful. Imagine you are in the field. Instead of just doing your job in the international development industry and following the log-frame passed down to you, you decide, one day, to step back and reflect on what is really needed. You need data. But wait: there is no money to collect data about the five villages you want to work with, much less the other five villages you eventually want to compare your work against so you can see if it made a difference. The only way to get funding is to act, not to assess. Substantial pre-project data gathering is simply unrealistic in the current development industry. So you give up and carry on taking orders, with a sneaking suspicion that the resources you are delivering, even with winks and nods to the findings of published experimental studies, aren’t really connecting with the beneficiaries.

It is time we start thinking about how local, quality research can be funded in all project sites and potential project sites. Substantial and consistent pre-project data collection (beyond rare university studies and eleventh-hour number pulling for project proposals) is as critical to the success of development efforts as learning the answers to questions like “are women more likely to vote if they are encouraged to do so?” Taking it one step further, communities themselves could get involved in, and excited by, regular data gathering. If they could see data about themselves presented back to them (in culturally appropriate formats), all kinds of otherwise unexplainable positive impacts could occur across many different projects. But this requires support and funding. It is worth building some branches (regular local data collection) from the trees (randomized controlled studies) even if we just let the apples (program evaluation) rot.