Massive Open Distributed Evaluation

If we step back and look at all forms of evaluation as they are practised in the real world – from the most scientific randomised trial to the leading edge of empowerment evaluation – we find that a single paradigm casts its long shadow over them all.

The undercurrent of evaluation practice today is command and control. Evaluations are not spontaneous; they do not just happen; they are intentionally created by the ‘few’. Even where an evaluation is designed to be empowering for the ‘many’, it is a clearly defined event, it is a process; it is commissioned, led, funded, coordinated and – ultimately – controlled.

Strong forces give the command and control paradigm its permanence. Control lends itself well to ensuring quality, enabling focus, and delivering on time. Analogies of successful command and control abound: the military, logistics companies, Apple Inc, political parties. Faced with finite resources, the instruments of command and control – evaluation managers, team leaders, terms of reference, review groups – have steadily codified professional programme evaluation.

So what is the cost of command and control?

It comes with some inherent limitations.

One of these limitations is that evaluations must stay within the boundaries of what a few people can manage; as a result, evaluation transpires as an event rather than a way of being. Another is that it constrains strategic evaluation as an empowering force connected to its social surroundings – leading to disenchantment. Command and control seems – perhaps by its nature – unable to surmount these limitations.

So what other analogies exist for how evaluation might be practised? Wikipedia: a hosted, self-organising community of volunteers creating an encyclopaedia that has surpassed all others. Occupy: a leaderless movement that embodied a sense of disenfranchisement and gave voice to popular discontent. Valve: one of the most valuable games companies in the world, with no management structure. Frederic Laloux writes about Teal organisations – places where CEOs let go of power and lead by creating direction and vision.

What might evaluation look like – and what might it make possible – if we turned to these other paradigms as alternative means of organisation?

Let us take a practical example.

Imagine, for a moment, that we want to create an evaluation masterpiece to celebrate the International Year of Evaluation. Just as the Joint Evaluation of Emergency Assistance to Rwanda changed our view of evaluation practice, we want to set a new standard for the scope and inclusiveness of evaluation.

We know that command and control evaluations can scale to be very big: just look at the award-winning Evaluation of the Paris Declaration Phase II. But we also know that these are extremely challenging, extremely expensive, and extremely rare. The Evaluation of the Paris Declaration probably marks the limit of what command and control has achieved.

Now imagine that we want to go bigger. Massive. Let us imagine that we want to evaluate gender in international development. Every agency, every NGO, every country, every year since the Convention on the Elimination of All Forms of Discrimination Against Women.

To do this with command and control would be incredibly expensive: it would require a large team of researchers, coordination mechanisms, and a fund of several million USD – and it would generate a great deal of controversy.

Thus, we might choose to explore a new paradigm: one that trades command and control for another form of organisation.

What might that look like?

Perhaps it would start with the idea of distributed evaluation. Rather than seeing evaluation as a singular process, we imagine it as a collective of processes, led in different places by different people. The products of these many distributed processes come together in a common framework that forms a coherent whole – just as MediaWiki allows the millions of authors on Wikipedia to create and curate a comprehensive record of global knowledge.

There is no longer an evaluation team; there is an evaluation community.

Next, we might choose to borrow some ideas from Massive Open Online Courses (MOOCs) – an approach to education that has embraced technology to enable free and open participation. Rather than assembling an evaluation team, we might invite (and teach) anyone to be an evaluator: collecting data through a process of learning that would also leave a legacy of new capacity and experience.

Such a crowdsourcing approach might address the need for data. But what about analysis?

Here we might look to another analogy: the citizen-science model used by astrophysicists, who take small sections of the night sky and turn system-spotting into a game that hundreds of volunteers play.
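To make that analogy concrete, here is a minimal, hypothetical sketch in Python – the item names, labels, and thresholds are all invented for illustration, not drawn from any real platform – of how many small volunteer judgements might be pooled into a single result by majority vote:

```python
from collections import Counter

def consensus(labels, min_votes=3, min_agreement=0.6):
    """Pool volunteer labels for one item by majority vote.

    Returns the winning label, or None when there are too few votes
    or too little agreement to trust the result.
    """
    if len(labels) < min_votes:
        return None
    winner, count = Counter(labels).most_common(1)[0]
    return winner if count / len(labels) >= min_agreement else None

# Hypothetical example: four volunteers independently code the same
# evaluation finding; each task is small, but the pooled result is robust.
votes = {
    "finding-042": ["gender-positive", "gender-positive",
                    "neutral", "gender-positive"],
}
print({item: consensus(v) for item, v in votes.items()})
# -> {'finding-042': 'gender-positive'}
```

The point is not the particular thresholds: it is that no single volunteer needs to be an expert, because reliability comes from overlap and agreement across many small contributions.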

And bringing it all together into a coherent and consensual narrative? The difference between the processes behind the Millennium Development Goals and the Post-2015 agenda is already a stark example of how far we have travelled in building consensus at a large scale. Might we reflect on what we have learnt from this experience, and from the wider field of organisation development?

We might not have all the answers yet, but the idea of Massive Open Distributed Evaluation fascinates us.
