Real World Participation in Evaluation

Editor’s Note: this is a guest blog post by Chris Morris. Chris interviewed us some time ago for his Master’s thesis. We thought his findings were worth sharing, and invited Chris to write for us.

Participation in Humanitarian Evaluations

Accountability of humanitarian relief organizations to crisis-affected communities has been widely discussed since the Joint Evaluation of Emergency Assistance to Rwanda was published in 1996. Evaluation is one means of providing accountability (Ebrahim, 2003). The evaluation policies of many international non-governmental organizations (INGOs) state that evaluations should provide accountability to the communities they work in.

Participation and accountability are closely linked. While accountability can occur without participation, participation enhances the meaningfulness of accountability to affected communities. Participation at all stages of an evaluation better ensures that communities have the ability to judge and value an intervention and influence decision making (Ebrahim, 2003; Lloyd, 2005). Participation can also help to narrow power imbalances between stakeholders because accountability becomes more about influencing decision making rather than just access to information (Blagescu et al., 2005).

A recent study I conducted looked at whether INGOs were using evaluation to provide accountability to affected communities. The research examined accountability through the lens of participation and information feedback. I reviewed a sample of evaluation reports posted on the ALNAP website between January 2012 and March 2014 by INGOs operating in humanitarian contexts. The sample of 10 reports included evaluations from three[1] of the five biggest humanitarian operators (Save the Children, CARE International and Oxfam), and from five other INGOs (Action Contre la Faim, Christian Aid, the Danish Refugee Council, the Norwegian Refugee Council, and Support to Life). The evaluations covered interventions in both natural and man-made emergencies, as well as immediate and longer-term crises [2].

The study used a critical hermeneutic framework to analyse the evaluation reports and supporting documents. The understanding of the data was strengthened by interviews with external evaluators and INGO staff. The data was compared to INGO evaluation and accountability policies, statements, and commitments.

Limited Participation

The findings showed that, for the most part, participation in evaluations is very limited and INGOs are not using evaluations to provide accountability to affected communities. This occurs even though most INGOs have evaluation policies that emphasise the importance of community participation at all stages of the evaluation. Involvement of communities in the evaluation was mainly for extractive purposes at the data collection stage. The only real exception to this was the involvement of community-based organizations (CBOs) in analysing the results, and in one case in collecting the data.

There was some evidence that INGOs were using the lessons from past evaluations to improve their programs. A number of the evaluations noted that programs had responded to past evaluation recommendations in current interventions. By doing this, INGOs are holding themselves accountable for program improvement. However, without the participatory element, or at the very least the sharing of information, accountability is not being provided in any active form to affected communities. The most common evaluation approaches were those focused on utilization by program staff, senior management and donors. Approaches such as House’s work on social justice (1980) or Cousins and Whitmore’s (1998) description of transformative participatory evaluation were not utilized. Elements of MacDonald’s democratic evaluation (MacDonald & Kushner, 2005) were found in one evaluation.

Five Opportunities for Participation

It is useful to break the evaluation down into stages to analyse where participation does and does not occur, and how this affects accountability. For the purposes of the research I identified five areas of the evaluation where participation can occur: designing the evaluation, consultation during the data collection stage, helping collect data, analysing data, and receiving results.

1. Designing the evaluation
Designing the evaluation is a broad term involving many sub-stages. These include deciding the scope of the evaluation, developing a Terms of Reference (TOR), designing evaluation questions and identifying the evaluation approach to use. In many humanitarian contexts, it is unrealistic to expect full participation of the community at every sub-stage. However, participation could be expected in deciding which area of a program to evaluate and which questions to ask, with these forming the basis of the TOR. The research found this was the stage least likely to have community input. None of the evaluation reports showed any evidence that beneficiaries or other community members had been involved in the design of the evaluation. This finding was supported by interviews with external evaluators and INGO staff.

2. Consultation during data collection
Consultation for data collection was the stage where most community involvement was found. Nine of the 10 evaluations asked beneficiaries, and in some cases other community members, for their opinions on the intervention. The most popular form of collecting data from the community was focus groups, but surveys, on-site observation and one-on-one interviews were also used. It might seem straightforward that an evaluation should collect data from the people affected by the program, but ALNAP’s 2003 Review of Humanitarian Action found that the majority of evaluations were not doing this (Cosgrave & Buchanan-Smith, 2014). A review of the Humanitarian Accountability Partnership’s (HAP) annual reports shows that around 60-75% of the samples examined in 2008, 2009 and 2010 asked beneficiaries for their opinion (HAP, 2009, 2010, 2011). In this research, 90% of the evaluations in the sample included beneficiaries at this stage, showing that INGOs have gradually responded to past criticism. It is interesting to note, though, that none of the interview participants considered consulting beneficiaries at the data collection stage to be enough on its own to provide accountability to affected communities.

3. Helping Collect Data
The actual means of collecting data did not show the same level of participation. Much of the data collection was carried out by the external evaluators or INGO staff. In most of the evaluations, there was no participation of community members in collecting the data. Interview participants felt that this contributed to community members not feeling ownership of the evaluation.
The involvement of CBOs was the exception to the lack of participation at this stage. In particular, the use of the Most Significant Change technique in CARE’s external evaluation meant that local CBOs were heavily involved in collecting the stories of change for the evaluation.

4. Analysing the Data
The involvement of CBOs in evaluation also provided the only evidence of communities being asked to analyse the results. CARE’s evaluation using a Most Significant Change approach demonstrated the most involvement of communities, as CBO representatives were involved in analysing and selecting the most relevant stories of change. The evaluation by Christian Aid in the sample also involved local civil society partners in analysing data and selecting recommendations for the evaluation report. However, none of the evaluations involved the beneficiaries themselves in analysing what the findings meant [3].

5. Receiving Results
There was also very little evidence that the results of the evaluation were being shared with communities and beneficiaries. In three of the evaluations, local civil society representatives were invited to feedback sessions; further feedback to their communities was then left in the hands of those representatives. None of the evaluations invited beneficiaries to the formal feedback sessions.

Accountability to affected communities therefore appears not to be a priority in evaluations. INGOs may argue that they are making themselves accountable by changing their interventions based on evaluation recommendations. But these recommendations are based on evaluation criteria, scope and schedules that INGOs develop themselves, and on results that are analysed without the input of local communities.


There seems to be a discrepancy between the policies and statements of INGOs on accountability, evaluation and participation, and actual practice. The initial planning for an evaluation often sets up a structure or expectations that prevent greater involvement of the community. TORs tend to set prescribed criteria and questions that have been developed by a project officer or M&E officer, and do not allow evaluators the flexibility to use participatory methods. The TORs seemed to be developed from a generic model. While TORs will often identify accountability to those affected by the intervention as a purpose of the evaluation, the requirements placed on the evaluator usually point to an upward version of accountability. This perpetuates the focus on donors and head offices at the expense of affected communities.

A misperception of participation probably contributes to the limited accountability to affected communities. INGOs ask evaluators to use participatory methods but also set the scope and often the questions that the evaluation must investigate. The TORs set timelines that ask the evaluator to complete all the planning for the evaluation before they are in the country. This removes the possibility of a participatory approach to evaluation design.

Similarly, the products asked of evaluators in TORs very much point to upward accountability. Lengthy reports in English and debriefings that do not include communities are two such examples. One evaluator said in an interview that it is exceptionally rare for INGOs to ask evaluators to produce materials suitable for feeding information back to affected communities. This reduces the stages of the evaluation where participation can occur and means that community involvement is often extractive in nature rather than participatory. Interview participants suggested that commissioning staff often believed that consulting beneficiaries during an evaluation constituted a participatory evaluation.

Policies and statements on evaluation and accountability suggest that INGOs recognise the importance of providing accountability to affected communities, and that participation in evaluation is a means to provide this. However, overall the evaluations suggested an INGO community stuck in structures and approaches that favour upward accountability. To provide accountability to affected communities through participation in evaluations, commissioning officers and program staff need to cede some level of control to affected communities (Blagescu et al., 2005). A paradigm shift must occur to provide greater accountability to communities. Evaluators need to be given more leeway to involve communities in all stages of the evaluation. To provide this leeway, a different approach to commissioning and designing evaluations is needed.


[1] MSF and World Vision do not publish full evaluation reports publicly
[2] I did not make a judgement on the classification of the evaluations. If an INGO published a report on ALNAP’s website, it was accepted that the INGO considered the intervention to have humanitarian aspects.
[3] Various guidance on participatory analysis in evaluations has been produced including UNICEF’s guide on Participatory Approaches for Impact Evaluation, URD’s Participation Handbook or the American Evaluation Association’s Collaborative, Participatory & Empowerment Topical Interest Group page.


  • david turner
    6th March 2015, 1:06 pm

    Good points. I have worked in both emergencies and development, and often evaluations are done just to please donors. I did one on a one-year child rights project and it was just a waste of time; worth looking at why so many are done.

    Yes, questions are written in the office. There should be more talking with communities, maybe half the questions from them and the rest from the NGO.

    And show some of the community findings and feedback.
