In 2015 over $131 billion was spent on official development assistance, an increase of nearly 7% compared with 2014. Similarly, humanitarian aid grew by 11% in real terms, to $13.6 billion. However, there is little evidence to suggest that this money is spent on peacebuilding interventions that work, particularly in fragile environments.
To obtain more evidence of the success (or failure) of such interventions, organizations need to conduct impact evaluations. An impact evaluation measures the outcomes, both intended and unintended, of an intervention and compares them with what the outcomes would have been had the intervention not been implemented. In peacebuilding programmes, impact evaluations have four main advantages: they provide data in conflict environments; they serve as ‘programming by example’ for future project designs; they help prioritize the needs and objectives of policies or interventions; and they develop the epistemology of peacebuilding.
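The core comparison an impact evaluation makes can be illustrated with a minimal sketch. The data below are entirely synthetic: the welfare index, sample size and true effect of 3 points are invented for illustration, and the counterfactual is approximated here by a randomly assigned control group.

```python
# Minimal sketch of what an impact evaluation estimates: the difference
# between observed outcomes and the counterfactual. With random assignment,
# the control group stands in for the counterfactual. All numbers are
# synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Random assignment makes the control group a valid counterfactual.
treated = rng.random(n) < 0.5

# Hypothetical outcome (e.g. a household welfare index) with a true
# programme effect of 3 points.
baseline = rng.normal(50, 5, n)
outcome = baseline + 3 * treated + rng.normal(0, 2, n)

# The estimated impact is the difference in mean outcomes.
impact = outcome[treated].mean() - outcome[~treated].mean()
print(f"estimated impact: {impact:.2f} points (true effect: 3)")
```

The same logic underlies more elaborate designs; what changes in fragile settings is how credibly the comparison group can be constructed.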
Despite their importance, conducting rigorous impact evaluations is often a luxury that many commissioners, and thus many development, humanitarian and peacebuilding programmes, cannot afford, owing to a lack of vision, budget and time, especially in fragile settings.
Evaluations in fragile contexts face extensive internal and external challenges and limitations. Internal challenges range from a heavy logistical dependency on the field team to difficulty recruiting qualified evaluators or data enumerators willing to travel to and work in dangerous places. External limitations include sites and informants that are inaccessible due to security concerns, reduced time in the field due to high resource costs, and evidence of a lower quality than originally planned.
A clear example of an impact evaluation undermined by local conflict is the evaluation of a microfinance programme in Yemen. Because of the ongoing war in the country, the evaluation's fieldwork was severely limited; baseline surveys, for example, had to be conducted through mobile phone interviews.
One of the predominant challenges of rigorous impact evaluations is that they are underfunded and under-encouraged by the commissioners themselves. This, together with other challenges, is discussed in the paper Randomized Controlled Trials: Strengths, Weaknesses and Policy Relevance (pdf), which calls for careful interpretation of evidence, building more competence to commission randomized controlled trials (which are necessary for rigorous impact evaluations) and closer collaboration among the partners.
Even when impact evaluations are commissioned, challenges can emerge. One common challenge arises when the evaluation team does not ‘speak the same language’ as the implementers of the interventions, for example by using complicated technical terminology when addressing a non-academic audience, which reduces the effectiveness of the evaluation. Evaluators should try to close this gap: after all, part of the work of evaluation is to educate implementers by sharing the effects of the programmes.
Another challenge for evaluators is restrictions at the evaluation stage. Although the requirements set by the commissioners of peacebuilding interventions are essential for assessing an intervention's impacts, there is a growing need for more adaptive or creative evaluations, as opposed to the prevailing ‘mechanistic’ evaluation methods that implementers or commissioners commonly expect. Giving evaluators this methodological freedom allows them to devise techniques that ensure rigorous impact evaluations can be undertaken ethically, even when constrained by limited resources.
Examples of such techniques are demonstrated in the paper What Methods May Be Used in Impact Evaluations of Humanitarian Assistance (requires subscription) by Dr Anastasia Aladysheva, SIPRI Senior Researcher, and co-authors. For instance, the statistical techniques of propensity score matching and regression discontinuity design can estimate the effect of an intervention in non-experimental settings by artificially constructing a ‘counterfactual’: a control group that is compared with the treatment group and mimics what would have happened to the treatment group had there been no intervention. Because evaluators need this flexibility, it is important that all stakeholders pay additional attention to the evaluation process itself during the design stage of an intervention.
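Propensity score matching can be sketched in a few lines on synthetic data. Everything here is invented for illustration and is not drawn from the paper: the covariates, the non-random take-up model and the true effect of 5 units are assumptions, and real evaluations would add balance checks and overlap diagnostics.

```python
# Minimal sketch of propensity score matching (PSM) on synthetic data.
# Enrolment is deliberately non-random, so a naive comparison of enrolled
# and non-enrolled groups is biased; matching on the propensity score
# constructs an artificial counterfactual.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Covariates that influence both programme take-up and the outcome.
age = rng.normal(35, 10, n)
income = rng.normal(50, 15, n)

# Treatment assignment is non-random: older, richer people enrol more often.
p_treat = 1 / (1 + np.exp(-(-4 + 0.05 * age + 0.04 * income)))
treated = rng.random(n) < p_treat

# Outcome with a true treatment effect of 5 units.
outcome = 0.2 * age + 0.1 * income + 5 * treated + rng.normal(0, 1, n)

# Naive difference is biased because the groups differ in covariates.
naive = outcome[treated].mean() - outcome[~treated].mean()

# Step 1: estimate each unit's propensity score from the covariates.
X = np.column_stack([age, income])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated unit to the nearest control unit on the score.
ctrl_idx = np.where(~treated)[0]
matches = ctrl_idx[
    np.abs(ps[ctrl_idx][None, :] - ps[treated][:, None]).argmin(axis=1)
]

# Step 3: average the treated-minus-matched-control outcome differences.
att = (outcome[treated] - outcome[matches]).mean()
print(f"naive difference: {naive:.2f}, matched estimate: {att:.2f}")
```

The matched estimate recovers something close to the true effect, while the naive comparison overstates it because enrolment is correlated with the covariates that also drive the outcome.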
Conducting impact evaluations is not easy—they are largely academically oriented, require generous funding and involve methodological challenges—but it is crucial to understanding which peacebuilding interventions work and why.
This blog post is based on discussions and presentations from the seminar Impact Evaluations in Fragile States, held at SIPRI in November. This seminar, with participants from research institutes, NGOs, consultancy agencies and government committees, is part of an ongoing SIPRI and 3ie agenda to cultivate a culture of evidence-based programming and policymaking. This initiative seeks to connect researchers with evaluators and implementers, as well as draw commissioners into the discussions, to increase the effectiveness of interventions, particularly those related to peacebuilding in fragile places.