The emerging field of Data Visualization invites us to explore its countless possibilities for fostering information sharing, analysis and learning in any discipline. Here are some of the potential uses of Dataviz I have experimented with so far in Evaluation, grouped by purpose.
1. Mapping vulnerability to assess beneficiaries’ selection appropriateness
During an evaluation of a program supporting very vulnerable people, we wondered to what extent the recipients were actually vulnerable. So we created a rubric listing the factors that would characterize a very vulnerable family in that particular context, and we did the same with capacities. Then we scored the recipients and mapped them, making it easy to assess whether, as a group, they met the characteristics we expected.
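The scoring step can be sketched in a few lines. This is a minimal illustration, not the original rubric: the factor names, the 0-3 rating scale, and the thresholds are all assumptions made up for the example.

```python
# Hypothetical sketch of the rubric-scoring idea described above.
# Each family is rated 0-3 on assumed vulnerability and capacity factors;
# the totals are then used to check whether a recipient fits the
# "very vulnerable" profile. Factors and thresholds are illustrative only.

VULNERABILITY_FACTORS = ["income_insecurity", "housing_quality", "health_burden"]
CAPACITY_FACTORS = ["social_network", "employable_skills"]

def score(ratings, factors):
    """Sum the 0-3 ratings for the given rubric factors."""
    return sum(ratings[f] for f in factors)

def classify(family):
    """True if a family meets the assumed 'very vulnerable' profile:
    high vulnerability score and low capacity score."""
    return (score(family, VULNERABILITY_FACTORS) >= 6
            and score(family, CAPACITY_FACTORS) <= 3)

family = {"income_insecurity": 3, "housing_quality": 2, "health_burden": 2,
          "social_network": 1, "employable_skills": 1}
print(classify(family))  # True
```

Once each family has a vulnerability and a capacity score, the pair of totals gives the coordinates for mapping recipients on a two-axis chart.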
2. Mapping organizational charts in a circular way
Organizational charts are visual tools we have been using for quite some time now. However, they haven't evolved much, and they seem to focus only on hierarchy and reporting lines. Challenged by this, I started thinking of alternative ways of representing an organization, and I came up with this circular organizational chart, which makes it possible to map the main information flows among the different areas.
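The geometry behind such a chart is simple: place each area at an equal angle around a circle, so that information flows can be drawn as chords between areas. A minimal sketch, with made-up area names:

```python
import math

# Evenly space organizational areas around a circle; a flow between two
# areas is then just a line between their coordinates. Area names are
# invented for illustration.

areas = ["Management", "Programs", "Finance", "HR", "Communications"]

def circular_positions(names, radius=1.0):
    """Return {name: (x, y)} coordinates evenly spaced on a circle."""
    n = len(names)
    return {
        name: (radius * math.cos(2 * math.pi * i / n),
               radius * math.sin(2 * math.pi * i / n))
        for i, name in enumerate(names)
    }

pos = circular_positions(areas)
# e.g. draw the Management -> Programs flow as a segment from
# pos["Management"] to pos["Programs"].
```

Any plotting library can then render the nodes and the flow lines from these coordinates.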
3. Logic models can also evolve
The logic model is another commonly used visual tool. It can be developed further; here is an example of a logframe matrix improved just by adding some lines and icons.
4. Using dashboards to reflect on the evaluation methodology
Dashboards are a powerful visual tool for summarizing a lot of information, usually on a single page. Last year I created a meta-evaluation dashboard to visualize an evaluation methodology (Vaca, 2014).
5. Visual table of contents for reports and books
Another commonplace in evaluation with great room for improvement is the report's table of contents. Nowadays they are just a list of chapters and sections, but they could be more informative about the relationships among those sections (do they form a process?), their comparative relevance, their length, or many other criteria that would say more about the report's (or book's) content.
6. Making Executive Summaries of evaluations and meta-evaluations more visually compelling
Summarizing the most relevant information – findings and recommendations – at the beginning of a report is common practice. However, these summaries rely almost entirely on text and narrative explanations.
7. Adding icons to the evaluation report
In my opinion, any initiative that increases methodological and reasoning transparency improves evaluation practice. A report may include many different types of statements coming from different sources. On some occasions I have tried to make my reports clearer by adding icons that distinguish between:
- A particularly triangulated finding
- A lesson learned
- A quote from some participant
- An opinion from my side
- Or just regular descriptive narrative to explain the program or the evaluation process.
8. Explaining your impact assessment strategy
On other occasions, with quasi-experimental designs for assessing impact, I have found that the narrative explanation of the strategy can be complemented by a visual graph that helps readers understand the beneficiary groups, control groups, total populations and the sampling process.
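The figures such a graph would display can be derived in a few lines. All numbers below are invented purely for illustration:

```python
# Hypothetical quantities for a quasi-experimental design graph:
# total population, beneficiary and comparison groups, and the
# samples drawn from each (figures are made up for illustration).

population = 10_000
beneficiaries = 1_200
comparison = population - beneficiaries  # non-beneficiaries available as controls

sampling_fraction = 0.10  # assumed 10% sample from each group
beneficiary_sample = round(beneficiaries * sampling_fraction)
comparison_sample = round(comparison * sampling_fraction)

print(beneficiary_sample, comparison_sample)  # 120 880
```

Nested rectangles or circles sized by these counts make the relationship between population, groups and samples immediately visible.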
Mertens, D.M. & Hesse-Biber, S. (Eds.). (2013). Mixed Methods and Credibility of Evidence in Evaluation. New Directions for Evaluation, 138.
Scriven, M. (2013). Key Evaluation Checklist. Evaluation Checklists Project.
Vaca, S. (2013-2015). www.VisualBrains.info
Vaca, S. (2014). EES newsletter "Evaluation Connections", Nov. 2014, pp. 8-10.