
A Political Economy of Aid Reform?

The IRC has recently released a study of the reports and studies on the Ebola crisis. Their conclusion is that these reports ‘offer valuable solutions, but they also perpetuate problems by ignoring fundamental realities.’ That is because these reports ‘reflect a persistent weakness of the global conversation about health systems: the erasing of politics.’ And now, for a bit of shameless self-promotion: the IRC singled out our ODI report for not falling into this trap, for correctly saying ‘what most reports, and indeed most health systems efforts, failed to recognize: that any effort to improve health systems can only succeed if it is based on an understanding of the politics involved.’

What does the Ebola response tell us about the World Humanitarian Summit?

The fast-approaching World Humanitarian Summit holds the promise of a better humanitarianism, meaning it also holds the risk of repeating the same mistakes that have doomed so many of our good intentions in the past. Of course, there are multiple mistakes, shortcomings and gaps that undermine implementation of the humanitarian imperative. Not multiple, but thousands. But in some ways, there is only one mistake that needs fixing: we need to replace talking about what we should do with talking about how to do it. And in particular, how to do it given the incentives, architecture, political dynamics and culture which govern the ecosystem of humanitarian aid.

Thus far, and Ban Ki-moon’s recently released report reinforces this weakness, the Summit process has traded more heavily in attractive ideas than in an analysis of how history might avoid repeating itself. New and intriguing recommendations surface, and yet they resemble the sector’s standard recommendations, conclusions and lessons learned in the degree to which their feasibility is wishful. As the UNSG admits, the measures he proposes are not new, a “testament to the failure to learn from the past and to embrace necessity and change more forcefully” (UNSG ¶170). It does not help that the UN’s #1 humanitarian, ERC Stephen O’Brien, has proclaimed that the system is ‘broke’ but ‘is not broken.’

How do we change our stripes? By ending the gravy train of funding for technical evaluations, dismissing rather than embracing so-called ‘lessons learned’ approaches (see here for one of my previous blogs on lessons identified but not learned), and basing analysis on a thorough political economy of the given situation.  In other words, at the system level and at the organization or project level, stop promoting reforms based on an overly simplistic understanding of the problem. Top aid thinkers Ben Ramalingam and John Mitchell explain it a lot better than I could:

Two broad sets of reasons for this lack of change are widely cited. One is that there are many drivers of change for the sector, of which the reform agenda is only one. Reforms, moreover, are seldom, if ever, the most prominent of the internal drivers. Others include organisational interests, professional norms, donor interests and so on. These serve to reinforce the status quo of the sector. … The second set of reasons relates to the reform efforts themselves. Seldom have change and reform efforts attempted to change the fundamental rules and incentives that underpin humanitarian aid effectiveness.

The paramount question is whether we will do better in the future by examining how and why we failed in the past, replacing the question of what we want to achieve. In this regard, the Ebola outbreak and response signaled (once again) the need for a more transformative agenda, one that avoids wishfully imagining the dawn of a new age where global public good trumps political self-interest, and instead addresses both the shortcomings of humanitarian action and their underlying causes.

Ebola: Lessons not learned

[Thanks to Aid Leap for publishing this on their website. Check it out here, along with lots of excellent thinking on aid.]

Tomorrow will mark 42 days since the last new case of Ebola in Sierra Leone, meaning the country will join Liberia in being declared Ebola-free. That brings the world one step closer to a victory over Ebola the killer.

But Ebola has another identity – messenger. We listened. It told us that many aspects of the international aid system are not fit for purpose. Many – too many – of the problems the outbreak revealed are depressingly familiar to us.

Pre-Ebola health systems in Sierra Leone, Guinea and Liberia were quickly overwhelmed and lacked even basic capacity to cope with the outbreak. The World Health Organisation (WHO) failed to recognise the epidemic and lead the response, and international action was late. Early messaging around the disease was ineffective and counterproductive. There was a profound lack of community engagement, particularly early on. Trained personnel were scarce, humanitarian logistics capacity was insufficient and UN coordination and leadership were poor.

The lessons learned should also come as no surprise: rebuild health systems and invest in a ‘Marshall Plan’ for development; make the WHO a truly robust transnational health agency and improve early warning systems; release funds earlier and make contracts more flexible; highlight what communities can do, and engage with them earlier. Except these lessons learned haven’t really been learned at all: they are lessons identified repeatedly over the past decades, but not learned.

Why is the system almost perfectly impervious to certain lessons despite everyone’s good intentions? The short answer: these lessons are too simplistic. They pretend that the problem is an oversight, a mistake to be corrected, when in fact the system is working as it is ‘designed’ to work.  The long answer: what is it about the politics, architecture and culture driving the aid system that stops these lessons from becoming reality?

Take a simple idea, like reconstituting the WHO as a transnational agency with a robust mandate to safeguard global public health, and the power to stop an outbreak like Ebola. Sounds great, but not new. So it also sounds like wishful thinking. It does not address the inherent tension between sovereignty and transnational institutions.

Think of it this way: the more robust an institution, the more of a threat it poses to the individual states that are its members, and hence the greater incentive for those states to set limits to its power. WHO was ‘designed’ not to ruffle feathers.

A robust WHO? Can you imagine the WHO ordering the US or UK governments to end counterproductive measures such as quarantining returned Ebola health workers or banning airline flights to stricken countries? It will never happen.

Here is the true lesson to be learned: at a time of public fear and insecurity, it would be political suicide for any government to allow such external interference. The problem isn’t the institution; it only looks like it is. The problem is the governments that comprise it. That is not to say that WHO cannot and should not be improved. It is to say that the solution proposed cannot address the fundamental problem.

Or take a complex idea, such as community engagement. Our Ebola research found that the ‘early stages of the surge did not prioritise such engagement or capitalise on affected communities as a resource’, a serious omission that ultimately contributed to the spread of the disease, and hence a key lesson learned (see e.g., this Oxfam article).

Disturbingly, this is a lesson with a long history. Here, for example, is what the Inter-Agency Standing Committee (IASC) found in evaluating the international response to the 2010 earthquake in Haiti. The relevance, virtually word for word, to the situation in West Africa speaks for itself:

The international humanitarian community – with the exception of the organisations already established in Haiti for some time – did not adequately engage with national organizations, civil society, and local authorities. These critically-important partners were therefore not included in strategizing on the response operation, and international actors could not benefit from their extensive capacities, local knowledge, and cultural understanding … This is not a new observation. Exclusion of parts of the population in one way or another from relief activities is mentioned in numerous reports and evaluations.

Why is this lesson so often repeated and so often not learned? Does the answer lie in an aid culture where ‘taking the time to stop and think – to comprehend via dialogue, engagement and sociological research – runs counter to the humanitarian impulse to act’? Our report also discusses a greater concern: the degree to which people in West Africa were treated ‘as a problem – a security risk, culture-bound, unscientific – to be overcome’. 

The ‘oversight’ is hardly an oversight: people in stricken communities ‘were stereotyped as irrational, fearful, violent and primitive; too ignorant to change; victims of their own culture, in need of saving by outsiders’. Perhaps that clash of cultures highlights why we should not expect community engagement to spontaneously break out simply because the problem has been recognised.

Powerful forces work against aid actors engaging with the community during an emergency, leaving us with a lesson that has not been learned even after years of anguished ‘never again’ promises to do better.

Lessons learned are where our analysis of the power dynamics and culture of the international aid system should begin, not where it ends.

Humanitarian Effectiveness (2)

My previous post on this topic prompted a slight Twitter buzz, with Patricia McIlreavy astutely pointing out that the aid ‘oligarchy’ isn’t just the big INGOs, but donors and the UN as well. Dead right on that one. They all wield effectiveness in an exercise of power.

Effectiveness seems to be part of a power play within the oligarchy as well. By way of broad generalization, the question of effectiveness functions as a spotlight, used by people in desk chairs to examine the work of people with mud on their boots. Sounds good, right? What about the reverse? When can Mud-on-Boots cast the same light on the work of Deskchair? More importantly, why doesn’t the functioning of the aid system drive Deskchairs to cast the same scrutiny on the mountain of Deskchair work in the system?

It should. The aid system comprises immense resources and people – energy and ‘action’ – at the level above the field project. Having just sat through one, I have to ask: how do we assess the effectiveness of a conference on effectiveness? Better yet, what has been the collective effectiveness of years of debates, research, publications, workshops, tools, guidelines and previous conferences on effectiveness? In other words, why don’t those of us thinking about effectiveness shine the light on our own work?

At great effort and cost, the aid system produces a spectacular amount of material designed to improve the effectiveness of aid. How does that translate into lives saved, or suffering alleviated? Might it even work as an impediment? I remember a medical coordinator in MSF telling me that in the span of six months in South Sudan she received over 600 recommendations from headquarters.  No doubt lots of good advice, but what is the combined impact on stressed out and under-resourced field teams? Trying to regulate or rationalize this onslaught of advice leads to discord at HQ, with staff feeling disenfranchised from the operational mission. Put differently, each and every one of those 600+ recommendations represented the aspiration and drive of somebody at HQ seeking their ‘fix’ of field involvement, their dose of ‘making a difference’.  That’s not speculation. That was my fix too.

[Spoiler alert. This post should end right here. The next bit is a real downer. I blame the start of a cold, wet final weekend to England’s so-called summer.]

Shining the spotlight of effectiveness on this work can be disconcerting. Much of my considerable HQ output seems, rather obviously, to have been designed to allow me to be part of the crisis response, based on a blind faith in its actual impact. This is a faith requiring permanent contortion, to avoid noticing the ten degrees of removal between my efforts in London and saving a life in the field.

It looks like this:

Start with a new/improved [fill in the blank: protocol/tool/report/guideline/strategy/etc].

  1. Did overworked field teams even read it?
  2. If they read it, did it change practice?
  3. If it changed practice, was it in a positive direction? Was the new thing better than the old thing? No shortage of examples where idealized efforts at improvement collided with reality on the ground.
  4. If it changed in a positive direction, by how much? In other words, did the impact actually save lives and alleviate suffering more effectively? Remember, much of aid work is pretty damned good from a technical perspective – improvement runs into the law of diminishing returns.
  5. If it had a positive impact, did that outweigh the cost/effort/resources that went into producing the improvement? How many meetings and emails!
  6. If it had a positive impact, did it last? Or: did it calcify into a tick-box exercise? Or: did new teams = old ways?

That, believe it or not, is only the first level of analysis. If it stopped there, the verdict on effectiveness might often come out OK. The key here is #5. Here is what the process does not look like: a couple of smart Deskchairs put their heads together, come up with a new [fill in the blank: protocol/tool/report/guideline/strategy/etc], show it to their boss, make a few changes and ship it out to the field.

It looks more like this: a couple of smart Deskchairs put their heads together, come up with a new [fill in the blank: protocol/tool/report/guideline/strategy/etc] dealing with X, which then unleashes a frenzy of effort to take the good thing and make it very good. Within the various branches of the organization, there will be ten, maybe twenty or maybe a hundred people who have made a heavy investment in X. Each will need to comment on the new thing. Many comments will replicate each other; others will contradict. Skirmishes will ensue – “communities” vs. “people” in paragraph 7, roll out in August (rainy season!) vs. November (too late!!!!), and so many more. Complaints will move up the food chain – Why weren’t we involved sooner? You need our approval! Meetings will be held and hair will turn gray. The thing will launch. Now start at Step 1.

And on it goes. I have been involved in many of these processes. Even the ones avoiding the quicksand of organizational politics involved multiple, duplicative commitments of effort. More to the point, they involved faith that improvements – objectives phrased more clearly, the addition of a resource annex, newer research included – actually mattered in some way. How could they matter? How could a rephrased set of objectives save lives and alleviate suffering? Faith is beautiful that way. Proof is not required. A collective investment in the possibility that what we do matters. The spotlight of effectiveness is unwanted here. And so it is rarely shined.

The Problem with Effectiveness (1)

My first blog sent from the city of Manchester, arguably the birthplace of modern Capitalism: “there are good reasons why those in the Southern Hemisphere view [the big NGOs] as the ‘mendicant orders of Empire’” (Michael Barnett in The International Humanitarian Order). So an appropriate location for an HCRI-Save conference on humanitarian effectiveness.

What is effectiveness? As with many concepts, the further one dissects it, the more wooly it becomes. So a nice generator of the sort of navel-gazing exercises that I find so stimulating and that consume a lot of humanitarian energy.  That said, the discourse of effectiveness warrants being unpacked from a number of angles, especially within a political economy of aid. On that, two initial reflections.

First, the ‘oligarchy’ of global western humanitarian NGOs uses the language of effectiveness to defend its turf, funding and power.  Argument to donors: give us the money, because we are more effective than them.  Here, ‘them’ refers to emerging NGOs from the global south, who are almost by definition going to come up short in terms of effectiveness. After all, it is the oligarchy’s definition of effectiveness in the first place, and the oligarchy has enormous advantages in terms of resources, experience, infrastructure, etc.

Second, the discourse of effectiveness sidesteps ethical issues. As somebody pointed out in one of the sessions, what is effective and what is right are two different questions. Those arguing for the supremacy of effectiveness miss the problematic reality of an aid industry that is often ineffective and unaccountable. Let’s be clear: aid is a tough business, and we should expect that it often falls short of being effective, no different from welfare programs in our home countries, which have regularly failed in efforts to lift the poor out of ghettos, improve public health or reduce drug abuse (for example). That is the nature of the work.

But there is a fundamental difference. There is something regrettable about our ineffective efforts to do good in our backyard and for ourselves. But there is something regrettable and unethical about our ineffective efforts to do good in their backyard, with their lives at stake, and yet where they have neither say over how it unfolds nor recourse when it does not go well.