Evaluations are an essential part of humanitarian accountability. And working with a good evaluator is a fantastic opportunity for learning and improving. However, evaluations often turn into painful, confrontational accountability exercises that leave nobody satisfied and change nothing. This week’s guest on Trumanitarian is Michael Patton, who together with Lars Peter Nissen discusses what evaluations can do for the humanitarian sector, what they are and what they are not (an audit), and how to build trust with your clients while at the same time satisfying the need for accountability.

Transcript
Lars Peter Nissen:

Evaluations are an essential part of humanitarian accountability. And working with a good evaluator is a fantastic opportunity for learning and improving. However, evaluations often turn into painful, confrontational accountability exercises that leave nobody satisfied and change nothing. This week's guest on Trumanitarian is Michael Patton. He has long experience with evaluation, and some very clear opinions on what evaluations can do for us and what they ought to be. Michael's perspective is very challenging, including a fundamental challenge to the humanitarian principles as they stand today. We had a wonderful long conversation about evaluation, but also about the principles. Most of the principles discussion has been edited out of this episode, not because it wasn't interesting, on the contrary, but because it merits an episode of its own to discuss the principles and how we turn them into something more concrete that guides our actions. So we will prepare that episode and publish it at a later stage. I hope you enjoy the conversation. As always, please like us on social media, forward the episodes to colleagues who might be interested, review us, and so forth. There's a PayPal link on our website, if you'd like. Most importantly, listen to the conversation and use it in your daily work.

Michael Patton, welcome to Trumanitarian.

Michael Patton:

Thank you. Good to be with you.

Lars Peter Nissen:

Yeah, it's a real pleasure having you here. You are an evaluator, I think that's the best way to describe you. You've written several books, eight books, about evaluation, and you have long and varied experience within that field. And what we're going to talk about today, of course, is what evaluations are for the humanitarian sector: what can they do for us, and what can't they do for us? So Michael, in my experience, evaluations have often been something like this: we have this logframe we wrote a year and a half ago, and now the donor wants to know whether we used the money right, and we didn't collect all the indicators. And actually, when we look at those indicators now, they aren't the right ones anymore. And it's just a pain. And we always feel like it was an okay operation, but this whole evaluation exercise seems to come very late and be more of a face-saving exercise rather than some kind of learning. I've been in that sort of cramped, not very comfortable evaluation on a couple of occasions. Does that sound familiar to you?

Michael Patton:

It does, I hear the complaints from the participants all the time about that very thing.

Lars Peter Nissen:

So you as an evaluator, how do you approach that situation? Is it just because we are not professionals? Or is evaluation simply not a very useful thing for us to do?

Michael Patton:

Now, I don't do the accountability stuff. So if that's the beginning point, it is a useless operation. It's based upon, you know, Theory X and Theory Y. Theory X is that people need to be controlled, that they're going to mess up and that people have to be managed tightly. And it's true that there are a few bad eggs out there. And there is some misuse of funds, and there is some corruption. And so we've created a massive accountability structure to try to control the 2% error, which is not that hard to control with good auditing. I mean, this is a reflection of our politicised situation, of politics. One of our great founders, Lee J. Cronbach, wrote a book with students called Toward Reform of Program Evaluation in 1980. And it had 95 theses, like those Martin Luther posted on the door, to reform evaluation. And one of the theses, this is 40 years ago, was that a call for accountability is a symbol of pathology in society. It basically means that there's no trust. And if we look at somebody like Elinor Ostrom's work: trust is fundamental in society, and the whole accountability structure from international agencies sends a message that we don't trust you. Instead of sending the message that you're doing important work: how can we enable you to do the best job you can? You're out in the field, tell us what you're doing. Tell us what you're finding. Tell us what's happening. Tell us how we can support you. Instead of, from the top, saying we don't trust you.

Lars Peter Nissen:

So that's a very soft approach to evaluation, I would say, right? Because I agree with you that the accountability mechanisms we put in place for the sector sometimes seem excessive. But at the same time, if you hand out several hundred million dollars in an operation in a country that is really difficult to operate in, that may be very corrupt, where a lot of money goes into procurement, don't you need somebody to come in and do a bit of a check on whether it all just went down the drain...

Michael Patton:

It's an auditor's job to check whether the money is getting spent or not. That's what auditors are for. They track the money; we're tracking programmes. So David Beasley, the head of the World Food Programme, suddenly finds himself faced with a war in Ukraine. For the last five years, the WFP has gotten 50% of their food from Ukraine. He's not going to get that from Ukraine anymore. Does that mean he's messed up, he wasn't accountable, because his strategic plan said: we've got a great food donor, Ukraine is the breadbasket of Europe, we can count on getting food from them for the rest of the world? Now he can't get food. Does he have to change the strategic plan and go back to the board and say, oops, things have changed? So they're in a major pivoting kind of role. And you have to trust the people in the field to do that. Let the auditors worry about the money, and get good people out there who are adjusting, especially in humanitarian conditions, adjusting to rapid changes, and reporting out to you what they're learning. Have good managers. And what evaluators do, what you're calling soft, I actually think is quite hard. And that is helping to make those adjustments. It is true that you're not always in a good position to know how well you're doing without another set of eyes, without getting some data, without getting some help. And I tell them: if you meet the internal commitment to do the best job you can, we can meet the external commitment to accountability. But if evaluation begins with an external commitment to accountability, it will be burdensome, it will be laborious, and people will come to hate it.

Lars Peter Nissen:

Let's say you have a situation where there is politics among the inter-agency community, they don't agree on what should be done. And actually, the way the final operation comes out is very clunky, because there wasn't agreement, it was a political deal. You talk about trust, and you talk about commitment to accountability. But if it was politics that drove the operation, my experience is that people are then quite uncomfortable actually having the sort of hot learning that you'd like to do. Does that sound familiar?

Michael Patton:

Well, when I run into those situations, and I do, what I want to do is get those players in the room and facilitate how they're going to be effective together; to be on the ground early enough to detect what's going on, and not wait to the end and say, hey, this was a political mess. That's useless. Okay, it was a political mess; it's five years later, nothing can be done about it. But if I'm there at the front end, as I'm getting to know these players and helping see that they're getting themselves into a political quagmire, I will bring them together and facilitate some common agendas, some common ways forward. And I'm letting them know that I'm there as an external set of eyes, but that they actually need that facilitation. They can't solve that political problem on their own. They either need me or they need somebody else to manage the inter-agency conflicts, the political power dynamics, and that needs to be done sooner rather than later, up front rather than at the end. So the kind of evaluation I do is built in right from the beginning; it's part of the design, it's not an afterthought. Evaluators are there during the design of these processes, anticipating and looking at the potential political interactions. If you're funding multiple agencies and asking them to cooperate, you're going to have a political problem. That's a design element. I know that up front, you know that up front. You don't wait until they're fighting to send somebody in to solve the problem. You say, hey, we're asking five agencies to cooperate and collaborate in a difficult situation; they're probably going to need some facilitation.

Lars Peter Nissen:

I couldn't agree more with you. But I also think that what you describe is a situation where there is extremely mature leadership asking for, or commissioning, this evaluation, and I'm not sure I've always seen that level of bandwidth, with people actually realising this and being able to build the political nature of the engagement into the design of the evaluation. Does this happen often for you, that you are called in early?

Michael Patton:

Yeah, but they haven't had that option. I mean, part of what I'm trying to overcome is the way evaluation has gotten built over the last 50 years: a kind of military-industrial manufacturing mentality about how programmes work. And that turns out not to work well in a complex, dynamic world and system. And so I don't totally blame these leaders, because they haven't had the option of having any evaluation that is developmental. And, for example, this whole notion that evaluators are supposed to be independent: all independence does is guarantee distance and ignorance. It doesn't guarantee any kind of good judgement. The point of independence is credibility. So the issue is not independence, it's credibility. I get close to the people I'm working with. I view it as a relationship business, not a methods business. And I can describe, and will describe, and my accountability is describing, in detail how I manage the boundaries of that relationship. For example, I work with HIV/AIDS programmes. And when I work with one of them, I tell them: my younger brother died of AIDS early in the epidemic, before there were antiretroviral drugs. Our family was ravaged by AIDS. We know other families who were in hospice. I've worked with AIDS programmes; I care about your effectiveness. If you're not doing a good job, I'm going to be all over you like a fly on flypaper. And I want you to want me to do that. I want to work with people who want a critical friend who's going to help them get better; if that's not what you want, we need to part ways now. And so in every programme I get involved in, I share the values of the programme. And I come with skills and knowledge and facilitation to help them get better. I'm not independent. I'm not neutral about human violence, I'm not neutral about hunger, I've made up my mind about sexual violence. I have strong views about these things. I come with values and skills.
And what I want to do is help people be as effective as they can in delivering the work that they do, and being able then to report that out. And that means being there at the design stage, to see what the issues are as they emerge, and get at them earlier rather than later. You know, as you well know, when you let problems fester, they get bigger and bigger and harder to solve. It's the old ounce-of-prevention story. If you can intervene quickly and make pivots rapidly, you're more likely to keep things developing than if you wait until everybody's in a rigid stance with each other and nothing's happening. That stuff is hard to break down. That's why I want to be there early on.

Lars Peter Nissen:

It is so interesting what you say about building trust, about making yourself part of the process, about getting away from this very mechanical way of thinking about evaluations. What are your criteria? I mean, we have the standard DAC criteria for evaluating development and humanitarian interventions. What happens to them?

Michael Patton:

The DAC criteria epitomize military-industrial manufacturing criteria; they were developed 40 years ago. And they get treated as technical criteria. They're so widespread now that it's not even recognised how value-based they are. For example, efficiency. Well, who could be against efficiency, the least amount of input for the maximum amount of output? That's a very mechanistic view of efficiency, a cost-accounting approach. A complex, dynamic ecosystem approach looks at the full costs of what's going in, the full costs of what's coming out, the full impacts, including not just the immediate thing you're trying to do, but the effects upon the larger environment, the effects upon human health, the effects upon institutions. So, for example, when the Gates Foundation and the World Health Organisation did their massive $6 billion intervention in Africa to eliminate polio, they virtually destroyed the African health systems, because they attracted all the resources. They hired the best people, they brought in the best equipment, they took over those units to focus entirely upon polio. And if you only looked at the efficiency of their operation, it looked very good. If you look at it from a systems point of view, they did major damage. And they came to realise that, in fact, you need healthy people, not just people who don't get polio; you need a healthy health system. And they had to move from a vertical focus, which is traditional management (go after your goal, put all your resources on your goal, put blinders on, go for your goal), to realising this is a complex system, and we have to both build up the general health systems and go after polio: a horizontal and vertical approach. That was a pivot that came from evaluation.

Lars Peter Nissen:

So I am not an evaluation expert, and I'm probably putting my feet in very hot water now, but don't we have other criteria, like connectedness and relevance, that would also help highlight the issue you mentioned with, for example, the polio intervention from WHO and Gates?

Michael Patton:

Well, relevance is a very narrow interpretation of how well it fits with an agency's plan. Suppose the agency's plan is bad, suppose their criteria are bad. So they're judging themselves internally, self-referencing: are we doing what our agency wants us to do? Not: is it relevant to the needs of people on the ground? That's a different criterion. And what they call sustainability is the worst of the bunch, because the DAC criterion of sustainability is actually continuity; it's not sustainability at all. Their definition is that what we paid for continues, not that we have created a more sustainable, resilient system. And so it's actually a static indicator that leads people to try to guarantee that they've moved from one status quo to a new status quo that is rigid. They actually make things worse by creating a new system of rigidity that maintains what they did, instead of creating an adaptable, resilient, sustainable system, in terms of what we know environmental and human ecosystem sustainability needs to be. So each of those criteria is deeply value-based. And to do transformative work, I've suggested that we need criteria like genuine sustainability; that the work supports equity; that it recognises how systems operate and brings about systems change from a complex, dynamic perspective. And the interconnectedness of interventions is very different from the connectedness of the DAC criteria, which is another one of these mechanistic kinds of criteria.

Lars Peter Nissen:

Okay, so that's all well and good. Now, I'm a donor, and I gave money to this programme for three years, and they sold me this story, they sold me this intervention. And now they hire you as an evaluator who says, okay, they didn't do any of the things they told me about, but this guy has a nice story to tell, and it's okay? I'm not going to accept that.

Michael Patton:

Well, it depends on what the situation is. If I told you the story that I was going to buy 50% of my food from Ukraine and feed the world with it, and I come back to you and say, look, Russia invaded Ukraine, I can't do that, but I procured food from other places, are you going to say to me: oh, you messed up, you told me you were going to get the food from Ukraine, that's a zero, you didn't do your job? No, you're going to say to me: good job, and thank you for working out there without any sleep, 24/7, to figure out how you're going to feed 30% more hungry people in the world because of that war and because of the pandemic. What I should have funded you for is to figure out how to feed people, and not have a plan that says you're going to get 50% of your food from Ukraine. That's micromanaging. I want to fund you to figure out where there are hungry people, who you can help, and to do the best job you can. Because you know the system, you've got people out there you've trained, it's a volatile situation. There's a pandemic showing up, there's politics, there's war, there's conflict, there are new refugee camps. There are more refugees than at any time in history. Go do the best you can. Tell me what you need. Tell me how you do it. And we'll have an evaluator working with you, but also reporting to us what you're learning and, along the way, how you're making these adjustments, because you don't have time to do that.

Lars Peter Nissen:

I am trying to think through in my head, Michael, my experience of these relationships between evaluators and donors and agencies, and why, in my experience, they quite rarely are the way you describe them. Because I have to be honest, I like your approach a lot. I like that way of thinking through the complex dynamics of a system: understanding that there are certain things we don't understand beforehand, that we can't foresee everything, that it is the learning, the failing forward, that's important. And it's great when a learning exercise with an external evaluator can be like that. But I just so often see that it's not like that, that there is a lack of trust, or that there are financial regulations in place that mean the way we think about accountability is pulled so much closer to an audit than to the learning exercise you describe.

Michael Patton:

Well, the system is designed that way. I mean... yeah, the entire system is designed that way. That's why people hate it and why it's not very useful. I've been on the other side of it. I ran a USAID project in the Caribbean for 10 years, an agricultural development project; I had to put up with that stuff. And I inherited a programme I didn't design that had nonsensical outcomes. And it was a very onerous process to be able to change those.

Lars Peter Nissen:

If you come in as an evaluator to a situation where you've been asked to do a rather mechanical evaluation exercise, and you can see that this is not going to lead anywhere, what would you do? How would you try to turn it around? What are the ground rules for trying to make a nonsensical evaluation more sensible?

Michael Patton:

It's a fair question. I run into it all the time. So I'll give away my secret here. What I tell them is: look, the people who commissioned this don't know what they're doing or what they're looking for. So we're going to figure out what the min specs are to satisfy the accountability demand. What's the minimum that has to be done? What are they looking for as a baseline? Once you've guaranteed that you can meet the minimum requirements of results-based management (fill out the forms, meet their criteria, what are the indicators?), we're going to figure out how to do that. And that is usually not actually that hard, because it's pretty nonsensical. And then I say: so if I can help you do that, then would you like my help to actually do something that's worth doing, in figuring out how you can be more effective? And we can build that on top of the accountability piece, so that you're not only showing you're accountable, but that you're thoughtful and making a difference. But let's take care of the accountability piece first, because it's on everybody's mind. Let's look at it, see what has to happen, get that done; then we'll work on doing what makes sense. And so it's a kind of martial-arts approach: this force of accountability is coming, I grab it, I turn it, and then try to get to the stuff that actually matters. And people get scared of the accountability thing, and it takes over their heads, it becomes all-encompassing. They worry about it, they spend a lot of time on it, and they still don't do a very good job of it. So if I can come in and make it manageable for them, and say: look, the folks up there aren't evil and nasty, they just don't know what they're doing. They think this is what they should be asking for. They have to worry about whether you're spending the money right. They have to report to their governments. We're going to do that, we're going to take care of that. Bam.
Now let's do something that makes sense.

Lars Peter Nissen:

It's evaluation Kung Fu basically.

Michael Patton:

it is, it is.

Lars Peter Nissen:

I like it.

Michael Patton:

The evaluation profession has professional organisations in most countries. There are regional associations like the African Evaluation Association, the European Evaluation Society (I was just in Copenhagen for that meeting), the Latin American society. And those associations, including the American and Canadian associations, have in the last three to five years adopted resolutions that all evaluations should address sustainability, because of the global climate crisis, and that all evaluations should address equity, because of the increasingly inequitable nature of the world. But the beginning point is that that hasn't reached the terms of reference yet, that hasn't reached the scopes of work yet. They're still using the DAC criteria, which do not include genuine sustainability and equity. And then the additional piece is moving from project outcomes, which is what DAC is good for (effectiveness, efficiency), to systems change. What we need to be doing these days is changing systems that are not sustainable and that are not equitable. And changing systems is different from doing projects. So systems thinking from the design, from the beginning: designing from a systems perspective rather than a project perspective, designing from a complexity perspective rather than a linear-causality perspective, is the way to deal with the complex kinds of problems we have. And the uncertainties that come with a less detailed work plan get managed by having clear principles, which is also a place where I would do work. And I actually brought this up at a session devoted to humanitarian evaluation criteria at the European Evaluation Society, saying that it was time the humanitarian sector relooked at the four major principles of humanitarian aid, because they're badly worded, out of date and not evaluable. And that it's time to make them evaluable, bring them up to date, and make them actually able to guide humanitarian endeavours in a way that they don't now do.

Lars Peter Nissen:

So you just killed our core principles: humanity, impartiality, neutrality. They should no longer be the guiding principles for us?

Michael Patton:

No, it's the statement of them, it's not the titles of them. They don't actually state it particularly well. Independence, impartiality and neutrality say the same thing three different times, which tells us how political the sector is, that they have to keep repeating that over and over again. There's actually only one principle that has to do with the actual assistance given to people, and that is not very much guidance. And it begins with a statement that's inaccurate. The opening sentence is: wherever people are in need, we should help them. Well, that's not what humanitarian assistance is. It's not wherever people are in need; it's a particular kind of need. So the beginning principle doesn't even specify the need for humanitarian aid.

Lars Peter Nissen:

So Michael, I think we agree that a lot of evaluation practice in the humanitarian space is way too mechanical compared to what it should be, when you look at the situations we work in. We also agree that a lot of what we do is thrown out in one-year planning cycle after one-year planning cycle, making a real mess of what should be a short-term, stabilising operation and turning it into some kind of protracted situation. And these issues are, I think, much discussed and well understood in the humanitarian sector, not that any of us know what to do about them. But what can you as an evaluator do for us? If you had three visions for the humanitarian sector, what would you do? How can you change the situation with an evaluation toolbox?

Michael Patton:

Well, the first thing I would do is convene a group of the humanitarian players, leadership and people in the field, and we redo the principles. Secondly, out of redoing the principles, I would come up with criteria that are specific to the humanitarian sector. The DAC criteria are worthless for humanitarian assistance; they should not be used. They're rigid, they're not appropriate, they're too accountability-driven. And then comes describing the role of the evaluator: to use the principles with people. And this new set of criteria would change the role of evaluation to be able to do the very kind of thing I'm talking about. But it doesn't begin by changing the role of the evaluator. It begins by redoing the principles, coming up with criteria that are meaningful to the humanitarian sector, definitions of what humanitarian assistance actually is, and then a role for the evaluator that can support those criteria and those principles. I'll give an example of a radical redoing of the system. That was the Paris Declaration. I evaluated the Paris Declaration principles to change development aid. In 2005, the world came together and adopted new principles for development aid. And one of the differences in that approach, though it hasn't been followed through, was to make recipient countries and donor countries mutually accountable. Imagine a mutual accountability principle for humanitarian aid: that the wealthier countries have an obligation to provide whatever humanitarian assistance is needed to meet the level of need. And then we're not just evaluating the humanitarian deliverers and recipients; we're evaluating the donors and asking: are you living up to this principle, which says you have an obligation, because you have wealth, to meet this need? That's mutual accountability. It's not just saying: hey, be thankful we gave you anything at all, go out and use it.
Instead, we're saying: the world system is fucked up, and you, the donors, need to meet the obligation, and we will deliver on our part if you give us the resources. Mutual accountability.

Lars Peter Nissen:

I like it. And Michael, I think that is a great note to end on. I'd like to thank you a million for coming on Trumanitarian. I really like your thinking. I think it is very challenging; I think it's very far from the way we do business today. And a lot of the points you raised today are real food for thought in terms of how we can use evaluation as a process that can help us evolve and, basically, do better for the people we serve. So thank you for that.

Michael Patton:

We're on the same page with that. That's the goal.