My previous post on this topic prompted a slight twitter buzz, with Patricia McIlreavy astutely pointing out that the aid ‘oligarchy’ isn’t just the big INGOs, but donors and the UN as well. Dead right on that one. They all wield effectiveness in an exercise of power.
Effectiveness seems to be part of a power play within the oligarchy as well. By way of broad generalization, the question of effectiveness functions as a spotlight, used by people in desk chairs to examine the work of people with mud on their boots. Sounds good, right? What about the reverse? When can Mud-on-Boots cast the same light on the work of Deskchair? More importantly, why doesn’t the functioning of the aid system drive Deskchairs to turn the same scrutiny on the mountain of Deskchair work in the system?
It should. The aid system comprises immense resources and people – energy and ‘action’ – at the level above the field project. Having just sat through one, how do we assess the effectiveness of a conference on effectiveness? Better yet, what has been the collective effectiveness of years of debates, research, publications, workshops, tools, guidelines and previous conferences on effectiveness? In other words, why don’t those of us thinking about effectiveness shine the light on our own work?
At great effort and cost, the aid system produces a spectacular amount of material designed to improve the effectiveness of aid. How does that translate into lives saved, or suffering alleviated? Might it even work as an impediment? I remember a medical coordinator in MSF telling me that in the span of six months in South Sudan she received over 600 recommendations from headquarters. No doubt lots of good advice, but what is the combined impact on stressed-out and under-resourced field teams? Trying to regulate or rationalize this onslaught of advice leads to discord at HQ, with staff feeling disenfranchised from the operational mission. Put differently, each and every one of those 600+ recommendations represented the aspiration and drive of somebody at HQ seeking their ‘fix’ of field involvement, their dose of ‘making a difference’. That’s not speculation. That was my fix too.
[Spoiler alert. This post should end right here. The next bit is a real downer. I blame the start of a cold, wet final weekend of England’s so-called summer.]
Shining the spotlight of effectiveness on this work can be disconcerting. Much of my own considerable HQ output seems, rather obviously, to have been designed to let me feel part of the crisis response, resting on blind faith in its actual impact. That faith requires permanent contortion to avoid noticing the ten degrees of removal between my efforts in London and a life saved in the field.
It looks like this:
Start with a new/improved [fill in the blank: protocol/tool/report/guideline/strategy/etc.].
1. Did overworked field teams even read it?
2. If they read it, did it change practice?
3. If it changed practice, was it in a positive direction? Was the new thing better than the old thing? No shortage of examples where idealized efforts at improvement collided with reality on the ground.
4. If it changed in a positive direction, by how much? In other words, did the impact actually save lives and alleviate suffering more effectively? Remember, much of aid work is pretty damned good from a technical perspective – improvement runs into the law of diminishing returns.
5. If it had a positive impact, did that outweigh the cost/effort/resources that went into producing the improvement? How many meetings and emails!
6. If it had a positive impact, did it last? Or: did it calcify into a tick-box exercise? Or: did new teams = old ways?
That, believe it or not, is only the first level of analysis. If it stopped there, the verdict on effectiveness might often come out OK. The key here is #5. Here is what the process does not look like: a couple of smart Deskchairs put their heads together, come up with a new [fill in the blank: protocol/tool/report/guideline/strategy/etc.], show it to their boss, make a few changes and ship it out to the field.
It looks more like this: a couple of smart Deskchairs put their heads together and come up with a new [fill in the blank: protocol/tool/report/guideline/strategy/etc.] dealing with X, which then unleashes a frenzy of effort to take the good thing and make it very good. Within the various branches of the organization, there will be ten, maybe twenty, maybe a hundred people who have made a heavy investment in X. Each will need to comment on the new thing. Many comments will replicate each other; others will contradict. Skirmishes will ensue – “communities” vs. “people” in paragraph 7, rollout in August (rainy season!) vs. November (too late!!!!), and so many more. Complaints will move up the food chain – Why weren’t we involved sooner? You need our approval! Meetings will be held and hair will turn gray. The thing will launch. Now start at Step 1.
And on it goes. I have been involved in many of these processes. Even the ones that avoided the quicksand of organizational politics involved multiple, duplicative commitments of effort. More to the point, they involved faith that the improvements – objectives phrased more clearly, the addition of a resource annex, newer research included – actually mattered in some way. How could they matter? How could a rephrased set of objectives save lives and alleviate suffering? Faith is beautiful that way. Proof is not required. A collective investment in the possibility that what we do matters. The spotlight of effectiveness is unwanted here. And so it is rarely shined.