Evidence vs Evidence

Let’s get one thing out of the way. This is a blog post and a not-so-subtle plug for an upcoming webinar. Check out our panel discussion – it’s part of Humanitarian Evidence Week. We’ll be working counterintuitively, taking a critical look at the call for the greater use of evidence in humanitarian decision-making.

That humanitarians should use evidence to identify the greatest needs of crisis-affected people seems like a pretty good idea. Similarly, it seems a rather unassailable proposition to use evidence to decide what works and what doesn’t, or what works more effectively or efficiently.

In the webinar, we aim to discuss the shortcomings and challenges in the way humanitarians use evidence, or the gaps in its quality. Importantly, that discussion will be placed within the humanitarian context. These are not lab conditions. Uncertainty cannot be eliminated, and will often remain substantial. The deep political pressures which characterize humanitarian crises cannot be sidestepped, and often undermine technocratic approaches to programming. It is in this complex, dynamic, politicized, inhumane context that humanitarians must take decisions. To seek evidence that provides final answers is often, then, to seek an unattainable perfection.

The webinar’s critique of the sector’s use of evidence will try to remain concrete. Nonetheless, fuzzier questions abound. Chief among them: Why is it that this sector will spend somewhere around $29B this year on the basis of faith, on the basis of its self-belief that its efforts are necessary, efficient and effective? Put differently, why has it been OK that we simply imagine our goodness? This strikes me as both a cultural issue and a structural one. Humanitarian action lacks incentives that push in the direction of needing, and therefore developing, evidence. Across the entire chain of command, from the field project to the board of directors, and over the past decades, why have so few been insisting upon proof? My evidence-free answer: Maybe we don’t really want to know.

Looking further at the issue, recent research has shown that the cultural problem is in part a problem of multiple cultures: practitioners and academics have very different ways of thinking about evidence, about the questions they wish to answer, and hence about the nature of the data they need to collect. Another issue is that all of this ignores the degree to which the localization of humanitarian response requires the localization of our understanding of evidence and its uses. I don’t hear anybody talking about the risk of exporting our Western ‘scientific method’, let alone our evidence-challenged ways of working.

Finally, in the big picture, this evidence gap is a cousin, or perhaps child, of our accountability gap. As such, how likely is it that we can engineer the closing of this gap through external pressure, such as donors demanding more evidence, or through our own internal resolution to emphasize evidence-based programming? We’ve seen this before. At least in terms of accountability, two decades of trying to manufacture accountability has not produced much accountability. Our incentives, structure and culture push in a different direction. To make matters worse, one could argue that the big donors are asking for evidence not to know whether things work, but to prove to sceptical politicians that things work. In the end, do these combined pressures – set within an aid sector rife with fudged narratives of success – generate evidence that is biased from within? Another evidence-free conclusion: faith and imagination will get us closer to effective programming than bad evidence.
