AI is transforming the world and will have profound implications for humanitarian action. But how? Will it lend itself to authoritarian regimes controlling their populations, and will humanitarian organisations be complicit in this, creating additional vulnerabilities for the populations we serve? Or will it help us create a better user experience for “consumers” of humanitarian aid and ensure that we get spare parts for the generator just in time?

Listen in as Sarah Spencer from humanitarianai.org and Lars Peter Nissen discuss these and many more questions.

Transcript
Lars Peter Nissen:

Welcome to a co-production between Trumanitarian and Humanitarian AI Today. My name is Lars Peter Nissen and I normally host Trumanitarian, but today I'm double-heading. On one side, I am the host of Trumanitarian; on the other, I will also be interviewed in my function as director of ACAPS, and the person who will be doing that to me is Sarah. Hey, Sarah.

Sarah Spencer:

Hi, great to be here. As you said, Lars Peter, I'm here representing Humanitarian AI Today, which is a podcast series produced by the Humanitarian AI Meetup groups in Cambridge, San Francisco, Seattle, New York City, Montreal, Toronto, London, Paris, Berlin, Oslo, Geneva, Zurich, Bangalore, Tel Aviv, and Tokyo. I am going to interview you on behalf of Humanitarian AI Today, but also be interviewed by you for Trumanitarian. And just to say that the views I'm presenting here are my own and do not represent any official policies of the British government.

Lars Peter Nissen:

And I will also very much try to be myself. I think we should say, Sarah, that we also have Brent listening in. Brent normally organises Humanitarian AI Today and does tremendous work in the background getting these groups organised, and I think my sort of secret success criterion for today is that we say something so interesting that Brent actually feels compelled to intervene.

Sarah Spencer:

That's a great goal. So why don't we start, if I could just ask you, Lars Peter, about ACAPS. And for the listeners of Humanitarian AI Today, just to give them a bit of a sense of what ACAPS does, and some of its history.

Lars Peter Nissen:

Sure, so we were created in 2009, essentially as an attempt to improve the way in which the humanitarian sector does assessment, or at least that is how we thought about it back then. To understand where ACAPS comes from, you have to go back to cyclone Nargis in 2008 in Myanmar, and the Swat Valley displacements in Pakistan. In both of those operations, there was a group of humanitarians who managed, in those two countries, to get the whole community to work together on doing a joint assessment. And it was quite a powerful experience in both of those settings. So a bunch of humanitarian entrepreneurs got together in 2009 and said, Why don't we try to take this to scale? Why don't we try to make this something that the whole humanitarian sector could do? Because we know we have a problem with the way we do assessments. And so ACAPS was created out of that. So on one side, it comes from operational experience and humanitarians wanting to improve. On the other side, it is also clear that there was a growing frustration among the donors back then. A group of, I think, 27 of them got together and wrote to the Emergency Relief Coordinator at the time saying, we basically need better assessments. And I think that was driven by a recognition that budgets were not going up but needs were increasing, and that therefore you need better prioritisation. And I think there was simply a concern with the quality of the funding instruments that the donors were given by, quote unquote, the system. And so there was a solution and a problem, and they met, and ACAPS was created.

Sarah Spencer:

It's really interesting when you talk about needs increasing and budgets not increasing. Do you feel that we are in the same position now? Or at least if you give it a bit of a forward look over the next five years?

Lars Peter Nissen:

Absolutely. Absolutely. I don't think we even know what's gonna hit us. I think the secondary effects of COVID are gonna make the primary effects look, you know, minor.

Sarah Spencer:

I think we've already seen that in terms of the cuts to the British budget for official development assistance, and some like-minded donors are doing the same in response to economic contraction. I wonder if there are other big, you know, big movements, big actions that you think need to happen in the next five years for us to get to a better place, to do more with less or more with the same amount?

Lars Peter Nissen:

I think there is a tremendous amount. I almost don't know where to begin in answering that question, Sarah. But the one thing I try to keep in mind these days, and have for the past year, is just how high the levels of uncertainty and ambiguity we're dealing with right now are. We actually don't know what we're dealing with. I saw this statistic in the media about sales of lipstick dropping but eye makeup increasing dramatically, because nobody wants lipstick on their mask, but your eyes are a really prominent feature when you're on Zoom. Now, it makes a lot of sense when we say it afterwards, but nobody thought about that beforehand. And there are gonna be 200 effects like that, and they're gonna be cascading, and they're gonna hit the most vulnerable people the hardest. So for me, there's no doubt that we are in for a challenge like we haven't seen for a while. And what I'm most afraid of is that we don't even realise that, but that we'll probably just continue with a very high level of path dependency without really managing to adapt and be agile.

Sarah Spencer:

Yeah, I agree with that. I think there are some very interesting challenges ahead that will require a significant shake-up of what business as usual currently looks like. But it's difficult to figure out the first way to handle that when people are really trying to respond to the here and now: trying to respond to the emergent crises that were, you know, extant before COVID-19, and now trying to look at the multiple layers of needs and challenges that communities are facing as a result of the response to COVID-19, as well as the challenges that existed before the pandemic.

Lars Peter Nissen:

So given all this uncertainty, and given the situation we're in now, you're right now writing a paper for ODI on how we can use artificial intelligence in the humanitarian sector. What do you see in your work that can help us understand this? What can AI actually do for us in this situation?

Sarah Spencer:

Yeah, I think it's interesting. So I'm writing this paper for the Humanitarian Practice Network, which is supported by ODI (that's the Overseas Development Institute, rather than the Open Data Initiative or the Open Data Institute). And I think there are a couple of interesting assumptions that need to be tested. The first one is that AI and machine learning will help us to do more with less. And I suppose the point of the paper I'm writing at the moment is to unpack that a little bit: what are the current use cases? Are those use cases ones that can be brought to scale and really deliver significant impact in the lives of people who need it most? Should we be using it? And why should we be using it? And that will explore a lot of the ethics and risks related to using AI and machine learning.

Lars Peter Nissen:

Just run us through the main categories of use cases you see.

Sarah Spencer:

Well, so, let's get back to some conversations we've had previously around, you know, trying to really break down and understand what we mean when we talk about AI in the humanitarian field. I think there are some interesting use cases that focus on internal operations, that wouldn't necessarily feel exclusively humanitarian but that would yield some benefits for humanitarian agencies. So fraud detection, for humanitarian agencies that are turning over, you know, several hundred million dollars a year. There are internal algorithms and systems that have been tested for many, many years, that are not expensive and are easy to roll out, that can help reduce risks of internal fraud and misuse of funds, potentially saving some low percentage points in budgets. There are some interesting ones around detecting threats. So I know there are some agencies looking at remote sensing technologies, satellite technologies, plus computer vision machine learning algorithms, to identify explosive remnants of war and to reduce that threat more quickly. And I think the most prevalent one I've seen is around the use of predictive analytics for food insecurity, epidemics, and population movements. And on the last one, I think there is a significant amount of debate. It tends to be the number one go-to, I suppose: when you think about humanitarian AI and predictive analytics, people automatically go towards trying to predict population movements. But there are some critical assumptions within them that need to be unpicked. And the first one is, you know, that the reason there is no action, or the reason our response isn't as effective as it could be, is a lack of data. And I think that's an assumption worth discussing and unpacking.
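
To make the fraud-detection use case concrete, here is a minimal sketch of the kind of off-the-shelf anomaly screening described, using scikit-learn's IsolationForest on synthetic transaction records; the features, figures, and contamination rate are illustrative assumptions, not any agency's actual system.

```python
# Minimal sketch of off-the-shelf fraud screening on transaction records.
# Synthetic data; field choices and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per transaction: amount (USD), days since vendor was onboarded
routine = np.column_stack([rng.normal(5_000, 1_500, 990), rng.uniform(30, 900, 990)])
odd = np.column_stack([rng.normal(60_000, 5_000, 10), rng.uniform(0, 5, 10)])
X = np.vstack([routine, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)  # -1 = flag for human review, 1 = looks routine
print(f"{(flags == -1).sum()} of {len(X)} transactions flagged for review")
```

The point of such a screen is triage, not judgment: flagged transactions still go to a human auditor, which is why even a crude, cheap model can pay for itself.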

Lars Peter Nissen:

I think you're being very diplomatic there. I think we can all see that in most of the operations where we got there too late, or where we messed up, the key bottleneck was not a lack of data or information, or even of understanding of the problem; it was a lack of political will to actually intervene.

Sarah Spencer:

Yeah, I mean, all of these are political choices. How funding is driven is ultimately a political choice. And there will be reasons why some governments give significantly to some crises but not to others; some of those will be due to historical relationships and connections to those countries, or to other political reasons. So I think you're right. And the second assumption is really that the reaction or the response to that crisis will be benevolent. So if you collect the right kind of data and you say, We predict that 80,000 people are moving across the border, or we can predict, down to a household level, that these communities or these districts will be significantly impacted by floods or landslides, the response by political actors is not necessarily going to be benevolent. And it really depends on the context. In some cases, those predictions can be used for ill, and I know there are other actors out there who have started to unpack that. The Data Science and Ethics Group (DSEG) has done a really good decision tree to help humanitarian actors think through all those implications of using data science and data science methods in their designs. So I think those are the two assumptions that I'm hoping to explore a bit more in that paper, and to unpack, and to really try and bring out some of the political choices and the political angles in some of those decisions.

Lars Peter Nissen:

If I can chip in from an ACAPS perspective on what you just said: I told a bit of the story about the origins of the project, but I didn't speak about the actual experience we had. We started early in 2010, right around the time of the Haiti earthquake. And actually, within a week of me coming to Geneva and starting ACAPS, we deployed a team to Haiti. And we messed up royally. That exercise was a nightmare in every single sense. We spent a lot of energy and effort trying to understand and collect data around the situation. And I'm not going to go into detail about how we messed up, but basically, nothing worked. It was one of those things. And then half a year later, we had the Pakistan floods in 2010, where, we actually found out, we both worked. I was there as the on-deck team leader, on the first team going in, and you were with one of the clusters, is that right?

Sarah Spencer:

Oh, yeah, I was supporting the GBV (Gender Based Violence) sub-cluster there.

Lars Peter Nissen:

Yeah

Sarah Spencer:

... on behalf of UNFPA and UNICEF.

Lars Peter Nissen:

And so I don't know if you remember the big assessment exercise, the big joint assessment exercise we did, where we had 2,400 household interviews, and we had focus group discussions left, right, and centre. It was a massive exercise, and in many ways a really solid technical exercise, but it just did not make a difference. It was too slow, it seemed to be somehow precisely wrong, it didn't really inform decision making. We gave all of you clusters a whole bunch of printouts of cross-tabulations of the different questions we asked, and it just didn't seem to do anything. And that was a time of real soul-searching for ACAPS. We almost crashed there, because it seemed like nothing we did was working. The invaluable learning we came out with was two things. Our implicit assumption had been that the key bottleneck was a lack of data, and it clearly wasn't; it was a lack of analytical capacity. And secondly, I think we thought that it was a lack of capacity of the humanitarian system to make sense out of the crisis. But I would say it is a capacity issue, but it's also an architecture issue, or a political issue: what is lacking is an independent voice holistically analysing the crisis. And when I say independent, what I mean is, on one side, that you're not operationally involved, and secondly, that you have editorial control over your products and are able to say what you actually think. And so that's the space that ACAPS kept trying to squeeze into: to analyse from an independent, operationally uninvolved point of view. And what our approach boils down to is basically three simple rules that we defined based on those two disastrous, really painful operations. One, know what you need to know. Be clear on your information requirements. Who is your decision maker? What are they going to do with the data? What is the decision on the table? What's the objective of the exercise? Very often you will find people building dashboards or collecting data without having any clue about who's going to use it. And if that's the case, I'd suggest you stop. Right? So that's rule number one: know what you need to know. Rule number two: make sense, not data. If you don't have a solid analytical framework, if you don't have capacity to analyse what you collect, stop collecting data and start analysing, start making sense. And thirdly, don't be precisely wrong, be approximately right. There needs to be a robustness and a good-enough approach in what you do, so you don't end up with a granular snapshot that's outdated in seven days, after which you don't know what's going on. That was our take on it.

Sarah Spencer:

Yeah, that's really interesting. I mean, what that sounds like in the humanitarian AI world is, you know, trying to keep people focused on the problem they need to solve, and therefore on what data you need and how that data is going to help you change what you do on the ground. And I think there is a real question here: from what I'm seeing, or from what I'm hearing, there really is what someone called an arms race for data collection among different humanitarian agencies. And that may be fuelled in part by pressures from donors, or incentives created by, you know, private sector involvement, etcetera, to collect and amass more and more and more data, without really being clear on why they need it, what problem they're trying to solve, how to act on it, and how that will shape and transform operations already underway. I wonder if you have comments about, you know, where you think donor priorities are with regards to data and what that looks like on the ground? Because someone said to me recently, on this question around data collection versus analytical capacity that you were raising, that donors still really value, whether explicitly or subconsciously, agencies that have operational human-to-human contact. So regardless of the big datasets you're collecting, or the advanced analytical capabilities that any given agency is able to produce, there might be this inherent bias towards people who are able to go and speak to someone person to person, and that will skew the way in which data and information is fundamentally collected and prioritised.

Lars Peter Nissen:

I once read a report that said something along the lines of: agencies and donors construct and solve crises without evidence ever really entering the equation. And I think that's the problem. I think the problem is that the way we shape the humanitarian narrative, if you want, is just not solid enough, and that we somehow get away with it because the power centre of the business lies between the donors and the agencies, and the affected population is somehow out of the loop. Right? I mean, just reflecting on it: we have a project in the sector talking about accountability to affected populations. That's a project. It's not something that is automatically generated by the way we do our business. It's not like the customer is king here. Unless the customer is the donor, which I don't think we want it to be. So I think it's a really complex issue. On one side, I don't think you really need to truly understand a crisis to be able to get funding for it. I think some agencies can do that on the back of their street credibility, their operational presence, as you say. And in a sense, part of me thinks that's okay, because these are black swan events that will evolve and change, and so we can't really tell a very fixed story about what's going to happen or what we're going to do. So it is also actually a way of creating that flexible space, that soft space, that you need to be able to do a good job in a crisis. So I get that. But at the same time, I really think we are not telling the right story about the link between evidence... or say data, information, evidence, knowledge... and decision making. We're just not telling that story right. I once spoke to a guy who had been in West Africa, a researcher studying the humanitarian sector during the Ebola outbreak, and he said, You know, the hardest thing in the world is to get a humanitarian to admit that he or she makes a decision. And I think about that a lot, right? So there's a risk-averse nature to our community. And I don't think we want to recognise that a decision is inherently political in nature. It's about choosing one thing over another.

Sarah Spencer:

So interesting, because that feels to me like it connects us to another wider debate being held right now across the humanitarian community, around decolonisation and localisation: Who gets to decide? And I haven't seen enough argument or debate that breaks down those issues related to decolonisation and localisation and how to solve them. I think what would be very interesting is to look at decision making and, you know, fundamentally, who gets to decide, and how we get to use some of that data to help. If we were looking at a local community somewhere in North America, whether in a disaster or post-disaster context or a low-resource context, populations would be consulted, but would also be able to shape the response. And I think one of the biggest challenges right now, and particularly one that is critically linked to this conversation around decolonisation and localisation but never explicitly referenced, is: how do you improve decision making? How do you get communities to decide, actually, we have enough health centres, but what we're really desperate for is X? And how do we factor their voices in, or bring their voices in, in a way that's a little bit, A, more transparent, but, B, more consistent? And I think some of these tools do have the potential to do that. Particularly when you think about, you know, natural language processing and the work that Translators without Borders and others are doing to look at non-digitised languages and bring them into the mainstream. You know, there's potential there, but because people haven't prioritised that, we haven't really gotten to that space yet on the AI and machine learning front, as well as on the emerging technologies front. You know, what role do emergent technologies have to play in localisation, and indeed decolonisation? And this is where I worry, or I query, whether predictive analytics and population movements may not yield as much in terms of impact as other issues that are really pressing and have been, you know, extant for decades.

Lars Peter Nissen:

I'm probably gonna get in trouble for this. But every time I hear about these predictive analytics, I keep thinking about The Hitchhiker's Guide to the Galaxy, right, where they build this machine to answer the question of the meaning of life, the universe and everything. It's called Deep Thought. It thinks for seven-and-a-half million years (sorry to Hitchhiker's aficionados if I get the facts wrong), and it comes up with the answer: 42. That's how I feel about it. We have to start with: this is a narrative. This is a story we tell. No data makes sense without an analytical framework. That's number one. And the fundamental problem we have, and I think the fundamental question I have for AI when it enters the humanitarian space, is: how do you do it in an information space characterised by large p and small n, as the mathematicians would say? Or, if you're in an Excel spreadsheet: you don't have 10,000 rows and five columns, you have 20 rows and 250 columns. So how do you, using AI, detect patterns when you have so few observations, when it's the interrelationship between the many variables in the individual observation that determines outcomes? How do you train an AI to do that? Nobody has been able to explain that to me. So in other words, how the heck do you catch a black swan with AI?
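
The "large p, small n" problem is easy to demonstrate. In the sketch below, mirroring the 20-row, 250-column spreadsheet example, an ordinary regression fits pure noise perfectly in-sample yet has no predictive power; the data is synthetic by construction.

```python
# Why "large p, small n" defeats naive pattern-finding: with 250 variables
# and only 20 observations, a model can fit pure noise perfectly in-sample
# and still know nothing. Numbers mirror the spreadsheet example above.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X_train, y_train = rng.normal(size=(20, 250)), rng.normal(size=20)  # noise only
X_test, y_test = rng.normal(size=(20, 250)), rng.normal(size=20)

model = LinearRegression().fit(X_train, y_train)
print(f"train R^2: {model.score(X_train, y_train):.2f}")  # 1.00: a 'perfect' fit
print(f"test  R^2: {model.score(X_test, y_test):.2f}")    # near or below 0: no signal
```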

Sarah Spencer:

Well, I think what's interesting there is that, in the interviews I've had so far, it strikes me that people tend to favour models based on epidemiological data, or historic data, or meteorological data, over ones where some kind of human behaviour needs to be accounted for. Because we can account for the black swans of human behaviour up until the present, but there's no telling what sort of outlier behaviour might be modelled or exhibited by the world's leaders tomorrow, or in a week, or in a year. And there was some very interesting stuff around cholera and cholera predictions at a community or household level, using historic data, public health data, to help prevent cholera epidemics (and cholera epidemics are defined by 10 cases or more, right). So getting to the point just below zero, you know, just where you're about to get the first case, and then intervening... using that data to trigger an effective response. What's the response? The response is going hard on handwashing and public health measures and vector control, and they're actually having some pretty impressive results there. It deserves a bit more explanation, but I think that is a very interesting model. Whereas when you think about predicting 50,000 people moving across a border a week before they do, I think there are a lot of ethical risks in there that need to be really thought through and understood. And the response to me is still a little unclear. What I've heard is that it mobilises financing more quickly and makes it available more easily. But from my perspective, financing and funding is not always a data-driven decision. A lot of it is politically driven, as we discussed.
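
A minimal sketch of the kind of early-warning threshold trigger the cholera example implies, assuming an invented alert level of three suspected cases in a seven-day window; the actual models described draw on much richer historic public-health data.

```python
# Sketch of a threshold trigger like the cholera example: act on early
# suspected cases before the 10-case epidemic definition is met.
# The alert threshold and window length are illustrative assumptions.
from collections import deque

EPIDEMIC_CASES = 10   # epidemic definition cited in the conversation
ALERT_CASES = 3       # assumed early-action trigger, well below epidemic level
WINDOW_DAYS = 7

def first_trigger_day(daily_cases, window=WINDOW_DAYS, threshold=ALERT_CASES):
    """Return the first day a rolling case count crosses the alert line."""
    recent = deque(maxlen=window)
    for day, cases in enumerate(daily_cases):
        recent.append(cases)
        if sum(recent) >= threshold:
            return day  # trigger handwashing / vector-control response now
    return None

print(first_trigger_day([0, 0, 1, 0, 1, 1, 0, 2]))  # -> 5: act well before 10 cases
```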

Lars Peter Nissen:

Yeah, exactly. What worries me is two things. First, when we talk about AI, can we please start talking about what it is not? It obviously has a domain where it is useful, like the one you just outlined with cholera. And then obviously there are some areas where it really can do more harm than good. And secondly, what concerns me is that you sometimes get the feeling that this is driven by a wish to do more with less, because we know we'll have less, as you said. And that's really dangerous, right? Because then people will think that solves the problem, and that detracts, again, from the need to actually make some real political prioritisations. You can't cover the needs. It's not possible. We have to prioritise. And that is a political decision. And that decision has to be informed by evidence, so we have to have the best possible evidence. But a lot of it is about humanitarians being afraid of making decisions, and so preferring a technical solution to a political problem.

Sarah Spencer:

I also wonder how much of this is about the environment, or the competition, that exists within the industry itself. There isn't really enough freedom for humanitarian actors, agencies, NGOs, institutions, etc., to behave as they might, because they ultimately don't have access to unrestricted resources the way that, you know, private sector actors might.

Lars Peter Nissen:

So if I can challenge you on that, right: if you go to any large NGO, I'll tell you the one part of the house that uses data extremely effectively: it's their own fundraising department. They know how to target people, they know how to use new tools, they know how to make data-driven decisions. That's how they get their money. So what's interesting for me is the discrepancy between the way in which data is used in operational core business, and the way it's used in getting money for ourselves.

Sarah Spencer:

I think one of the problems with the whole question around AI and machine learning and their applications to humanitarian data is that the solutions that are there, and that might be the most effective or yield the greatest impact, are not the ones that are newsworthy, and not the ones that might win you the, you know, Gold Medal of the Year Award. Because it'll effectively be about, you know, using customer relationship management tools, chatbots... My goodness, the number of humanitarian NGOs that are still looking at indicators related to information about services, ensuring that displaced populations or affected populations understand their rights, understand their services. I mean, there are ways to automate some of that, because it is consistent: to find out where your health services are, to find out what time the UNHCR office is open, to be able to understand what rights you have as an asylum seeker... There are ways to improve how we provide that information and how we increase access to it. But those aren't necessarily going to land you on the front page of The Guardian or on the front page of Time Magazine. I think that's also where some of this challenge lies: the people who own the problem set, for example, might not be the ones who are really designing the capabilities or feeding into the capabilities. There are a lot of reasons why that's not happening at the moment, but I would suggest, or I would hypothesise, that a large amount of the design happening right now isn't happening in tandem or in lockstep with the people who really understand the nuances of the challenge ahead, and who have operational experience, or who are working on the frontlines.
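
The service-information automation described here can be sketched very simply. A production system would use proper natural language processing (and language work like that of Translators without Borders), but a keyword matcher shows the shape of the idea; the FAQ entries below are invented placeholders.

```python
# Minimal sketch of service-information automation: matching a question to
# a vetted FAQ answer. Entries and matching rule are invented placeholders;
# a real chatbot would use NLP models and curated, multilingual content.
FAQ = {
    ("clinic", "health", "doctor"): "The nearest health post is open 08:00-16:00.",
    ("unhcr", "office", "registration"): "The UNHCR office is open Mon-Fri, 09:00-15:00.",
    ("asylum", "rights", "claim"): "You have the right to lodge an asylum claim; see a protection officer.",
}

def answer(question: str) -> str:
    """Pick the FAQ entry with the most keyword overlap, else defer to a human."""
    words = set(question.lower().split())
    keywords, reply = max(FAQ.items(), key=lambda kv: len(words & set(kv[0])))
    return reply if words & set(keywords) else "Let me connect you to a colleague."

print(answer("What time does the UNHCR office open?"))
```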

Lars Peter Nissen:

Yeah, I couldn't agree more, actually. I really think you have a very strong point: if it's not front-page worthy, then it doesn't really get much attention. And I think that draws attention to tech development being something we do almost with a boys-and-toys approach. You know, every time I see a drone flying around, I think about how much energy we have spent on that without it necessarily yielding massive operational efficiencies, as far as I can see. And I do think we have an issue around: Why are we doing this? Why are we talking about AI? We're talking about it because there's obviously a massive potential, but then how come we can mess it up so badly? I think we have to look at what actually drives those processes, as you say: who owns those problems? Who gets to define them?

Sarah Spencer:

Well, it's a really good question. And I think there's a big question here about whether this is just a flash in the pan, a bit of snake oil and hype, with some potential room for operational improvements or efficiency gains, but largely, you know, a development or a phase in human evolution that will not dramatically impact the humanitarian aid industry at all. There are some people who do feel that this is a passing fad. You know, we had mobile phone banking before, and we've had various bits of technology that threatened or promised to change the way in which we do things, and some people think that's going to hold true here too. And then there are others who think that the way in which human beings connect and relate to their own communities and societies will change so dramatically that there's no way the humanitarian industry itself cannot be affected.

Lars Peter Nissen:

I think the State of the Humanitarian System report, maybe it was 2015, or 2014, I can't remember, said something along the lines of: the humanitarian sector is changing, but it's changing significantly slower than society around us. So for me, there's no doubt that AI will play a major role, will shape humanitarian outcomes, and will shape the way in which we work over time. I seriously doubt that the sort of AI that will do that is developed by humanitarians, or in the humanitarian sector, and I think the challenge for us is probably mainly to position ourselves vis-à-vis this tech that's being developed with resources that are, I don't know how many, hundreds of times bigger than what we have at our disposal. But a lot of stuff is happening, and obviously we're going to use some of it in the future. We are going to change in many ways, but I don't think we will be in the driving seat. I think that's the problem.

Sarah Spencer:

...if you want to go out and buy... [inaudible]

Lars Peter Nissen:

Yeah, I think that happens a lot. I think it's driven by a couple of things. On one side, these systems and ideas that are developed by different organisations are seen as something that gives you bragging rights. So, we developed this thing, this was developed by X NGO. And of course we don't want to let go of our own babies, even if Microsoft Office 365 has functionality that is ten times better than what we developed ten years ago. That's one side of it. And at the technical level, in the information management community, I think there's a lot of pride and ego at play as well, in terms of, oh, I developed this index, or look at this fantastic platform for collecting data. A lot of it is just not scrutinised enough. It's just not pressure-tested enough. And so you leave these fantastic little, you know, creations to grow over the years. And you see some quite remarkable examples of things that actively don't work but still have been used. I'm not going to mention any examples here, but I don't think it's difficult to find things that probably detract from an operation rather than adding to it, and that still have some really, really strong advocates within the IM community.

Sarah Spencer:

Yeah, I completely agree with you on that one. And I think there are risks that AI and machine learning could replicate that phenomenon, which is why, I think, the number one question we all have to ask ourselves, and it goes to data collection as well, and analysis, is: what problem are we trying to solve? What are our biggest problems? I had a colleague say to me, you know, AI sounds great, but will it fix my generator? And you just think, well, actually, that's the greatest question. And yes, it can; it should do. I mean, we should get to a point where generators are connected and, you know, you've got algorithms and automated processes that will be able to diagnose or anticipate failures and flaws, or at least shortcomings, in the operation of our equipment. Things down to fleet management, down to, you know, operating centres in places that are off the grid. And again, those aren't necessarily newsworthy or really exciting applications that might excite donors, but they might have some great operational impact. I wonder if you could talk a little bit about some of the ethical risks in collecting data. One of the commonalities that strikes me, in terms of some of the work that ACAPS is doing and the push to look at the ways in which artificial intelligence and machine learning could be applied to the humanitarian space, is really around data and data collection. And talking about big p, little n: there is an added incentive to collect more data and amass more data. I wonder if you've got thoughts on some of the challenges around that over the coming years that humanitarian agencies need to think about.
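
The "will it fix my generator" idea amounts to anomaly detection on equipment telemetry. A minimal sketch, assuming a single temperature sensor and invented thresholds; real predictive maintenance would use richer sensor data and learned models.

```python
# Sketch of "will AI fix my generator?": flag drift in generator telemetry
# before failure. Sensor, window, and threshold are illustrative assumptions.
import statistics

def anomaly_alerts(readings, window=24, z_threshold=3.0):
    """Flag readings more than z_threshold standard deviations from the
    trailing-window mean: a crude predictive-maintenance signal."""
    alerts = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean, sd = statistics.mean(history), statistics.stdev(history)
        if sd and abs(readings[i] - mean) / sd > z_threshold:
            alerts.append(i)  # schedule an inspection / order the spare part
    return alerts

temps = [82.0 + 0.1 * (i % 5) for i in range(48)] + [95.0]  # sudden spike at the end
print(anomaly_alerts(temps))  # -> [48]: the spike is flagged before breakdown
```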

Lars Peter Nissen:

Where your mind inevitably goes when it comes to this is: oh my God, I hope I don't have some individual data that will get somebody in trouble. I think we are all really afraid of working in a country with a hostile government, and then the data we have collected about vulnerable populations being shared with that government and used to target these people. I think that's the deep fear a lot of us have, and I think it is a very relevant fear. I'm not an expert in this area, but for me it just seems obvious that before we start collecting data, again, back to the three rules: why are you actually collecting this data? Why are you spending energy on it? And are you actually creating a risk? Those are questions we have to ask ourselves constantly. But I don't think it ends there. Those are the very visible risks, where we, through our actions, can do harm or put people at risk, and I think we naturally think of those. But there's another category of risks that we don't think about. You can call it streetlight and shadows, for example: we tend to look where the data is, or where we worked last time, or wherever we have an operational footprint, but not where the real needs are. So for me, data collection ends up shaping the humanitarian narrative and setting basic priorities. How do we deal with that? Think about COVID-19: when this thing started, what were your assumptions about where it was going to hit worst? My mind immediately went to Zimbabwe, where I worked for a number of years with HIV, and I thought, oh my God, if you have food insecurity and HIV at a high rate, and then on top of that you get COVID, that might just be disastrous. Now, that country has been affected, but the flashpoints seem to have been elsewhere, for example in Latin America, in Peru and Brazil, where it's been much heavier, or maybe South Africa, and we don't know why that is. So the point I'm trying to make is: if you have a black swan like this, if you have something as unknown as this, the more sophisticated data collection methods you use, the more vulnerable you become to just looking at the data you have. It actually becomes an obstacle: if you drop your keys, you look where the streetlight is shining, not in the shadows. You don't look where there's no light.

Sarah Spencer:

Yeah, I think that's right. And I think there's a real challenge there as well. You were saying it earlier in our conversation, with the analogy of lipstick and eye makeup: it makes sense now, but we never could have predicted it, you know, five months ago or a year ago. And it's the same thing, I think, with data collection and amassing huge datasets. The threat to individuals and communities, particularly marginalised or vulnerable communities, is quite clear, particularly for people who work in the humanitarian industry, because we are trained to work with, in some ways, the most vulnerable and most marginalised in the world and in society. What actors are currently struggling with is that a lot of the questions around AI ethics have a hard time finding their operational legs and their teeth, what that looks like when the rubber hits the road. But there's some work happening in think tanks in North America and Europe exploring, you know, the rise of digital authoritarianism. For example, if humanitarians are amassing very large datasets, how does that strengthen the hand of states and state actors who may not have the best intentions for the most marginalised and most vulnerable members of their communities? And I think that risk, coupled with naivete in the purest sense of the definition, really just a lack of solid understanding of how secure data can actually ever be, how it's stored, but equally with whom it's shared... So if it's shared with international organisations who, as a function of their mandate, share their information with state authorities... I think if you couple those two things together, there's a real conversation that needs to happen around humanitarian data. And getting back to your questions: what data do we actually need? How is it going to be used? Is there a different way to solve that problem, other than collecting these 500 points of data on this one person? Can we do this differently, in a more ethical way, that doesn't increase those risks?

Lars Peter Nissen:

I totally agree. I also have the feeling that that train left a long time ago, and that it's not us driving the train. I think there are far bigger forces at play than whatever data the humanitarian sector can collect that enable authoritarian governments to target their populations in the way you describe. Now, that is not in any way an argument against what you said.

Sarah Spencer:

I agree, I think it just helps them. I think it's just an added bonus, an added benefit.

Lars Peter Nissen:

I agree with that, but I think where we really have to turn our attention is to the things we can really influence. And if there are two things we know about decision making, which by and large is a black box in this sector, it is that we have a high level of path dependency, so we pretty much do what we did last time, plus or minus 10%, and that the biggest voice, or the strongest voice, dominates. So the interests of the major agencies will tend to dominate what is done or focused on, and then whatever the last operation was will tend to play out more or less the same way. I think there's a massive danger there in terms of us not focusing on being truly needs-based, on actually being agile enough to redirect our operations to meet changing needs. And if that point was ever salient, it's in these years. 2021, I mean, damn, it's been dynamic, right? And so my question to you is: is AI gonna strengthen this dynamic? Because we know AI is trained on what you put into it, right? So that speaks to path dependency, and if outcomes are shaped by the big actors, won't AI just reproduce a lot of what we've done already, and actually be an obstacle to us being truly needs-based?

Sarah Spencer:

Yeah, I mean, I think it depends on what we're talking about in terms of AI. And it gets back to your question about, you know, can we break this down and really try to understand what we mean when we talk about AI in humanitarian applications? Yes, for sure, if the data is biased and the data only represents past history, then it will recreate itself. The other interesting thing, and I'm no data scientist or computer scientist, is that you can design a system to react to you, to dial up and dial down frequencies in terms of how it responds to certain information. So if you want to control for certain variables, you can do that in the design of your algorithm. And what does that mean, then, for being able to identify outliers? I suppose it really just comes back to the question of what problem we are really trying to solve. I've seen some really quite fascinating use cases at the individual level, to support social workers (and these are being designed and delivered by huge tech companies) and deliver better outcomes at the individual level by identifying individual risk. Now, a lot of it feels like the algorithms that are used for predictive policing and minimum sentencing, which are, you know, at best problematic, at worst racist, and which create, augment, and amplify the inequalities in our societies that we're trying so hard to unlearn, to unpick, right, and to correct. But there are tools in play at the moment that can be used to analyse past case data to identify and assess lethality in cases of GBV, for example, or intimate partner violence. So that if you're a caseworker managing the better part of, let's say, 90 to 100 cases every three months, every six months, having an additional tool that helps you identify whether your client is at heightened risk of lethality, based on her history, or at medium risk, helps you allocate resources in a different way, rather than just trying to, you know, share out resources across the board to all the clients.
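
The caseworker triage tool described can be sketched as a transparent weighted risk score whose factors a programme can dial up or down. Everything below, factors, weights, and cutoffs, is an invented illustration; a deployed tool would be trained and validated on real case histories with domain experts, with all the bias caveats just discussed.

```python
# Sketch of a transparent case-triage score over case-file factors, with
# weights a programme can dial up or down. Factors, weights, and cutoffs
# are invented illustrations, not a validated clinical instrument.
import math

WEIGHTS = {"prior_incidents": 0.9, "escalating_severity": 1.2,
           "access_to_weapons": 1.5, "recent_separation": 0.8}
BIAS = -2.5  # baseline so that a case with no factors scores low

def risk_score(case: dict) -> float:
    """Logistic score in [0, 1] from binary case-file factors."""
    z = BIAS + sum(WEIGHTS[k] * case.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def triage(case: dict) -> str:
    p = risk_score(case)
    return "heightened" if p > 0.5 else "medium" if p > 0.25 else "standard"

print(triage({"prior_incidents": 1, "escalating_severity": 1, "access_to_weapons": 1}))
# -> "heightened": flags the case for priority caseworker attention
```

The design choice here is deliberate: a simple, inspectable score lets caseworkers see exactly why a case was flagged, which matters precisely because of the predictive-policing failures mentioned above.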

Lars Peter Nissen:

It was really interesting listening to you describe that problem. It speaks to a problem I hadn't thought about from that perspective. So, one of the things that has become clear to me over this past year is that we have to start thinking about humanitarian action as a narrative. And I was thinking about it in terms of how we get an evidence-based narrative, so that we can make better decisions and make sure we are guided by evidence and by people's needs and situations, rather than by our own desire for whatever we have on the shelf. And that, of course, is because I work with ACAPS and that's what we do: we try to throw some bits and bobs of evidence on the table to make sure that decision making becomes better. But I think there's a different way in which we need to think about the narrative and shaping the narrative, and that's exactly what you describe, at a much more micro level. How do we counter spin and disinformation? How do we counter rumours? I think Internews has done work on this; I had an email from Internews a couple of days ago, they've done a big project on trust. And of course, trust is something that's at the core of the way we operate, right? Our security, when we operate, at least traditionally, in many difficult countries, was based on people trusting and understanding what we were doing, so that we put our security in the hands of people. Now, misinformation about humanitarians doing a, b, and c will of course undermine that, and so I do think you're absolutely right: we need to go in and look carefully at how we actually counter the disinformation and the bad actors spreading rumours, smashing a sensible narrative about people in trouble, and in effect destroying our ability to be there and assist them. I do think that is an important aspect. And I hadn't thought about it like that. So thank you.

Sarah Spencer:

I think it's also much easier and cheaper than it would have been maybe 15 or 20 years ago. A lot of us, and probably a lot of your listeners, will have stories in mind where, you know, misinformation or rumours were used or perpetuated by non-state armed groups or state actors to create panic or chaos and, you know, prevent populations from leaving on a certain date or from repatriating. Propaganda has existed for thousands and thousands of years, but I suppose the tools to disseminate it are now cheap and easy. It really doesn't take much to put misinformation out there, or active disinformation campaigns, which ultimately sow the seeds of violence or, you know, perpetuate beliefs and rumours and mistruths that prevent adequate support from reaching the most vulnerable. I mean, I guess it also comes down to, you know, what is truth and who owns that narrative? And, with the greatest of respect, I think in some locations humanitarian actors' hubris sometimes gets in the way of really, you know, strategic political analysis and understanding the power players on the ground, who is actually controlling that narrative, and what they need to do to mediate that.

Lars Peter Nissen:

Yeah, I agree. I fully agree. And I think sometimes we don't even think about what we do as a narrative; we think of it as the truth. And I think that's incredibly dangerous, especially when you combine it with an approach to assessment or analysis that seems to be driven by the same logic as your operations, namely, that you can't duplicate. So you can't have two analytical reports on the same issue. And that's really dangerous, because it's actually the opposite logic. If you want to talk in medical terms: you of course only want one treatment, so you don't want to distribute to the same family twice, but you may want a second opinion. In other words, having redundant analysis in situations of high ambiguity is not wasteful. It's a pretty smart strategy for actually achieving predictable outcomes.

Sarah Spencer:

Can I come back to data for a second? Just reflecting back on some earlier parts of our conversation, particularly about, you know, some of the potential harms related to AI and machine learning and where that's going: it's hard to have a discussion about AI and machine learning without having a discussion about data. It's a bit like talking about getting to Mars without talking about the fuel. And in all of our conversation today, it strikes me that we still lack minimum standards for the collection of data beyond PII in humanitarian settings, whether standards that donors hold people to account against, or some other kind of accountability mechanism. So I don't know if ACAPS, or you personally, have views on the feasibility of even designing such a thing, and the utility of doing so.

Brent Phillips:

Hey, Sarah, are you talking about open data sharing, like a standardised information framework, or just standardising, you know, lessons learned, so to speak?

Sarah Spencer:

At the operational level, I guess the first step is to think about how humanitarian actors should collect, store, and analyse data, against what standards, and how that gets shared. And then, how can that data be used at a more strategic level? The reason I ask is because a colleague quite rightly said, you know, until the Sphere standards were there, until donors got together and said, "Look, these are the minimum standards we have for humanitarian response related to the sectors", there was a wide range in terms of quality and consistency of humanitarian response. And given some of the risks and ethics we've raised in this conversation, about the type of data that's being collected, how it's being collected, with whom it's being shared, whether it's open source data or otherwise, we will continue to have that inconsistency until there is some kind of standard, or some kind of accountability, I would have thought, around data collection. And there's a fair number of people I've spoken to who think that's not feasible. That it's such a vast subject that it's almost not worth the time and resources invested, to deliver what will effectively be a lacklustre product.

Lars Peter Nissen:

So I think the first thing is, we have to divide this into the sort of data you use to decide what you want to do, and the data you collect once you have an operation going. One thing is how to lay down the tracks; the other is driving the train back and forth on those tracks. Right? So once you're in operational mode, you have a logframe, you're doing microcredit or whatever, and you will collect data on the individual recipients. I think you can set some standards for that. A lot of work is going on around data protection of individuals and privacy and so on, and I think we can learn a lot there. So I do think there is space to do something, especially because we do work in situations where some of our interlocutors are less than benevolent, and governments may use the data for bad things. So I think we need to go there and figure that out. That's one side of the coin, and I don't see why not: every other industry in the world is, these days, being scrutinised for how it collects and stores data, and I don't see why we shouldn't be. On the contrary. Then the second part of data, and that's where my mind always goes, because, again, that's what we do at ACAPS, is around how we decide what to do. How do we actually shape the problem? That's when you go out and collect the assessment data from affected populations, or the like. And there, I think it's not so much guidelines for data collection; it's simply some kind of manifesto, some kind of approach document, that helps us avoid the massive mistakes being made today, where we waste resources and time and effort on basically collecting garbage that we don't use for anything, that is definitely not changing any decisions anywhere, and that essentially leaves us unaccountable for how we actually chose to shape our interventions the way we did.

Sarah Spencer:

I mean, when we were talking about establishing a taxonomy of use cases for AI and machine learning, to start breaking it down a bit and making our debate more focused and therefore more productive, it almost feels like we need to be doing the same thing around data. And there are some really concrete rules and regulations, and also minimum standards. There are minimum standards, not enforceable, but there are minimum standards for the ethical collection of data on sexual violence, for example, the WHO standards. Those tools exist, and agencies and organisations have their own data protection guidelines; it's just not standardised across the whole of the system. And I suppose we need to figure out which data we're actually talking about first, and for which purposes. But it strikes me as interesting that there are some international organisations which, as I understand it, are not subject to GDPR and are only policing themselves with regards to data collection. And increasingly, people are beginning to understand that data is an asset, a valuable asset, for lots of different reasons. An asset which an individual should have control over: how they use it, to whom they give it, and for what purposes. And I don't know that that penny has dropped yet in this industry, in the humanitarian industry. It's starting to drop on the commercial side of things, certainly for big tech companies, in how they are accountable for using our data, and additionally in how they're collecting children's data, and how children's lives and their development and the development of their own personalities are being shaped by the way in which they participate in that data landscape. But I would [inaudible]... you know, when you were talking about the humanitarian system evolving, our industry evolving, but doing it much slower than the rest of society, I reckon this is going to be one of those areas where it will be problematic that we're not keeping pace, at the very least, with the evolution of understanding of data protection and, you know, the standards that exist right now for commercial actors.

Lars Peter Nissen:

And it is frustrating that we don't seem to focus on this. And I don't see the argument, maybe because, again, I am not an expert, but I don't see the argument for us being treated more leniently than a private company. Why would we be? It's not about us; it's about the people whose data we take.

Brent Phillips:

So, do you guys have any hope for the future in terms of AI applications? You know, one crazy question we try to ask our guests is: do you have a science-fiction-style, futuristic application you'd love to see, or that you just expect will exist in 20 years, relative to AI or technology or something like that? What comes to mind?

Sarah Spencer:

I think, on my side, the greatest benefits will be in the operational gains. I think the things that humans do will be increasingly automated and easier to do from a more cost-effective perspective, and that might be just regular monitoring and reporting, managing our information better. Agencies will say, well, we've been in the humanitarian or refugee business for 75 years or 150 years, but actually it's really been something like six months, because the average tenure of staff in any given role at any given agency is somewhere around six months to a year. And I think about how humanitarian agencies could use the data they've generated from the countless live projects, but also the historic projects. I mean, think about how many of these agencies have been present in DR Congo for the last 30 years and how much information they've collected there. Most of it these days is digitised, sitting on servers, and is there for virtual assistants and chatbots to help them access in a more friendly way. Stuff around financial reporting, you know. There are ways to automate things that pose the least amount of direct threat, I would have said, to individuals and potential service users and clients of aid agencies. So that, to me, feels positive. The worry I have is about how AI and machine learning, and particularly data and the use of individualised data, will affect instability and conflict and violence, and how that trend will shape how humanitarians respond and understand the digital landscape in which they're intervening, not just the socio-political, geographic landscape in which they intervene.

Lars Peter Nissen:

I think for me, if I had to go sci-fi, it's very much frontline. It's very much at the individual level. And it is something enabling the emergence of these response organisations that we sometimes see come out in big disasters, where suddenly collaboration just happens at scale and solves massive problems. Look at people being ferried off Manhattan after 9/11, for example, by a flotilla that just came together, without there being any central leadership. If we can somehow turn crisis-affected populations from victims into a swarm of helpers that collaborate and actually crack problems in the field, and if AI can help us do that, if tech can connect people in a way that enables them to collaborate much more seamlessly, then I think that would be fantastic. I have no clue how to do that.

Brent Phillips:

You know, I guess we're getting close to the end. Do you have any takeaways?

Lars Peter Nissen:

Let's not get so excited about the tech that we forget about how we as an industry, with our current business model, use it. If we lose sight of the use case, of the value produced by this tech, and just focus on the wonderful capabilities of our new thing, then I think we go wrong. That is probably my basic concern with AI. I also have a takeaway which is that we don't have much of a choice. I don't think this is in our hands. The world is evolving very quickly and we will have to adapt. I don't think we are in the driving seat, and I don't think we should pretend to be. It doesn't mean we can't influence anything, but we just have to be realistic about the forces we are up against, and then we have to adopt some pretty smart tactics to be able to maximise our impact in that situation.

Sarah Spencer:

I think what's interesting about that is... I mean, there's a lot of really great work out there, and I guess the challenge I'm seeing is not only how we develop use cases and capabilities that solve the right problems, as identified and owned by the industry, but also how you bring those to scale. I'm no economist, but I've seen a huge confluence of effort on developing new capability and bespoke capability, and not as much effort on figuring out how you bring it to scale, so that there's consistent application to really drive that impact. There are a lot of reasons why that's happening. But the exam question feels like: find the right capability and it will solve all the problems, rather than: find a good-enough capability that can be scaled and shared amongst a range of agencies, and that can be openly marketed as a universal tool, not just a tool for X agency or Y agency, without naming any of them in particular. And I think that's a problem. Someone shared with me quite recently that donors are, in some part, really responsible for undermining the startups they fund through their innovation funds, because they have startups compete to solve a problem identified by the donor, and/or potentially by an NGO or UN agency. The startups all compete with each other, there's one model that wins, and they win a sort of financial reward for it, largely on the back of volunteer hours. And then the donor turns around and says: right, that's now open source; because it's a public good, that code has to be open source. So the firm itself is unable to market it in the traditional ways and bring it to scale. But then, equally, the NGO or UN agency turns around and says: we weren't really bought into this process in the first place, so thanks very much for your time, it's really interesting, but I've got to go and fix a generator in South Sudan. And I think the way in which this business ecosystem has been designed isn't really enabling AI applications to accelerate the way they might in the private sector, and I think it deserves a little more attention, a little more analysis.

Lars Peter Nissen:

So Sarah, thanks a million for coming on Trumanitarian. Thanks a million for interviewing me. Great conversation.

Sarah Spencer:

Thank you so much. And on behalf of Humanitarian AI Today, thanks very much for your time.