In this thought-provoking episode, host Lars Peter Nissen and guest Sarah Spencer, a consultant specialised in AI, explore the complex relationship between AI and humanitarian aid. They discuss the critical issues of transparency in AI-driven decision-making, the management of aid recipients' digital identities, and the ethics of using AI to find ‘legitimate’ targets in conflict zones.

The conversation wraps up with Spencer’s best- and worst-case scenarios for how digital integration in humanitarian work could look two years from now, emphasizing the need for technology to serve humanity in ethical and empowering ways.

Listen in and check the pulse of the evolving role of technology in humanitarian efforts.

Also check out the previous episode with Sarah and Lars Peter from 2021, "An Arms Race for Data". Listen here: https://trumanitarian.org/episodes/arms-race-for-data/

Transcript

0:44 Lars Peter Nissen:

Welcome to Trumanitarian. I'm your host, Lars Peter Nissen. It's been a bit more than three years since I last had Sarah Spencer on Trumanitarian. It was in episode 17, called "An Arms Race for Data," and if you're into AI, you should go back and have a listen to that episode. It is, of course, a bit outdated, and a lot has happened with respect to AI over the past three years, but it is still worth a listen. You'll find the link in the show notes. A couple of weeks ago, Sarah and I happened to be speaking at the same conference, and we took the chance to catch up in a hotel bar (you will notice the noise in the background, sorry about that) and revisit where we are as a sector when it comes to AI. The overall themes are still the same: what are the risks, what are the benefits for the most vulnerable populations that we serve.

How will it change humanitarianism when, quote unquote, "decisions are being made by an algorithm and not a human"? And how will this and misinformation shape the humanitarian narrative?

What about efficiency? Can we do more with less? It's, as always, a lot of fun to speak to Sarah, and I'm sure you'll get plenty out of this episode. But before we jump in, don't forget to like, review, share, make noise on social media. All of that stuff. But most importantly, as always, enjoy the conversation.

Sarah Spencer, welcome back to Trumanitarian.

2:24 Sarah Spencer:

Thank you for having me. It's a pleasure to be here.

2:26 Lars Peter Nissen:

It's been a couple of years actually since we last had a chance to talk, and we weren't face to face back then. This was online. Here we are in a lovely hotel on the outskirts of Brussels. We're both here for the Global INGO Security Forum, where you'll be talking about AI, and I will be talking about black swans and black boxes, as I always do. And we thought this would be a great opportunity to catch up on where we are with AI since we last spoke.

2:54 Sarah Spencer:

Well, a lot has changed, hasn't it? I can't remember what year it would have been, but it probably was before the public launch of ChatGPT in November 2022. And since then, obviously, we've had a flurry of excitement around the use of AI in the humanitarian sector as well as for the sustainable development goals at large.

3:16 Lars Peter Nissen:

I think the predominant feeling I have is FOMO—fear of missing out. There seems to be a total panic in the humanitarian sector, "Oh my God, we must get with this AI. What do we do?" Is that something you're feeling?

3:27 Sarah Spencer:

Oh, yeah, I mean, I think every day, every week, I ask myself, "Look, where has this excitement come from?" It's not just the excitement around ChatGPT; it's not just the initial valuations of these companies and these models. I mean, that's important, let's not forget the money is immense, and its capabilities are immense. And even the most recent launches this week of new products are quite impressive. So, there is this real amazement and shock about how capable some of these tools are. And you know, Sam Altman right now is traveling the globe to raise 7 trillion U.S. dollars—trillion with a 'T'—to redesign the semiconductor. So the money, and therefore the stakes to get it right, or to prove that it's this sort of panacea, magic tool, those stakes are high. But I am still left wondering: why is it that the humanitarian sector is so panicked and so focused on trying to find these use cases? I have a couple of thoughts on that.

4:37 Lars Peter Nissen:

Yeah, I have my ideas, but you know, you're the guest. Maybe you come with your ideas first.

4:43 Sarah Spencer:

Well, I mean, I think one that's obvious for probably all of us is that, you know, the new shiny object has the potential to raise donor money and donor interest. And money is tight, money is decreasing, the need is increasing, whichever way you cut it. It's getting increasingly more complex, I would argue, and this is perhaps a topic for a different podcast and a different guest, but there is a pre-9/11 world, which I'm old enough to remember, and a post-9/11 world where humanitarian ethics are no longer really at the core of what we do, which makes response in complex crises a little bit more challenging.

So there's that, but there is also, I think, if I'm being generous… I do also think there are people that you and I know, and maybe you and me as well, who have dedicated decades of our careers to challenges which are just so immovable and just get worse, and the conflicts this year, last year, everything that's happened… I think there are people who desperately do want to find a solution: how can we actually do better? And I think the most recent conflicts really do push people to the brink of frustration, thinking we've gotta... We've gotta do something differently. And that's against the backdrop of an international order and humanitarian system that is no longer fit for purpose, etcetera. But there is that push as well, I think. And, you know, I'm not so cynical as to neglect that or overlook that.

6:21 Lars Peter Nissen:

We should mention that we have ICRC sitting in the corner thinking very deeply about what we say. Welcome, ICRC. It's great to have you on the podcast. You have kindly declined to publicly participate, but it's great to have you here, and we won't say who you are. Don't worry. You'll be anonymous. Yeah, I think my thinking is very much along the lines of what you're saying. On one side, clearly we're in a situation where we have to cut down, and a lot of the big organisations are facing a very difficult year and having to let people go. We need a Hail Mary. And somehow this AI sounds like it's going to be so transformative that it might be it, so we start chasing that. I think that's part of it.

7:06 Sarah Spencer:

But isn't there something here where the idea that AI becomes the next savior or this panacea, this cure-all, is predicated on two assumptions: one, that it drives or delivers efficiency gains, and the second, that data-driven insights will get us better programming? And that's something you speak on a lot. But I would question whether better data will give us better programming, given that we've failed to use the data we have already.

7:35 Lars Peter Nissen:

But you're getting a bit ahead of yourself, because right now we're not talking about what it'll actually do. We're talking about where the hype is coming from, right?

7:40 Sarah Spencer:

Well, I think those are intertwined, right. I think most of it is, you know, McKinsey publishing reports saying it's going to add X billion dollars or trillion dollars to the global economy because of efficiency gains. And if you think of efficiency gains practically for humanitarian aid, you're thinking about, you know, donor reporting or proposals. Does that mean that we lose grant managers in Bucaramanga who are writing proposals, because now ChatGPT can do it?

8:08 Lars Peter Nissen:

I think it is, firstly, a desperate need to pretend that something is changing: flavor of the month, look, I'm doing something different, so things will be different. I think it is, secondly, a true search for something better. I think we can all see that this technology has the potential to be truly transformative in a way that we may not have seen before. And I think we can all see that we need to get on that bandwagon. I think that's fair enough. And all of this against the backdrop of not being able to pretend that what we do works anymore, or that just throwing more money at us would solve the problem. I simply don't think that the business model we have today is scalable, and I think it is getting harder and harder to pretend that it is.

8:52 Sarah Spencer:

And that's why… that explains the hype or the interest and the excitement and buzz from the humanitarian side. Then there's a whole other piece about why the AI providers, the AI vendors, the tech companies… what's in it for them to demonstrate that it has a 'for good' angle?

9:11 Lars Peter Nissen:

They're very good at moving fast and breaking things, as they say, and I think that's going to be even more true in this case. I mean, we know AI is built on what is there. It's built on our existence, right? And if I look at the world the way it looks now, it's not a particularly fair or equal place. And so of course, the big problem we have with AI is the way that the current biases or power structures of the world are baked into it.

9:40 Lars Peter Nissen:

And I think the AI companies are very aware of this and trying to find a way of pretending like they are the good guys instead of the money guys.

9:47 Sarah Spencer:

I don't disagree, and I think for sure there are communications teams within these companies. You know, we're talking about companies that have a market capitalization of trillions of U.S. dollars. So the power is immense, the financial power is immense, in ways that we haven't seen on planet Earth probably arguably ever. But there are some very interesting parallels with, I mean, sort of 100, 150 years ago, and, you know, names like Carnegie and Rockefeller and Ford, right? Those names were not necessarily kind ones when you consider things like labour rights and unions and, you know, just equality. And Carnegie found his sort of calling at the very end of his life, towards the end of his career, in terms of then getting into the business of philanthropy. So I do think there are some parallels here to think about for the leaders of these technology firms. Not only does it serve a PR benefit; it also, I think, in some ways bakes into this idea of corporations providing some kind of social duty, which is a whole other debate, whether or not that stands. But there are people who think that corporations should act with more benefit to the greater good. And so therefore, you've got these ideas of corporate social responsibility or corporate philanthropy, which are in some ways replacing the role of the state, right? You've got the state-citizen contract: citizens buy into what the state is going to give them, give over some of their freedoms, and the state provides services back. But there's now all of a sudden this idea, stemming from the late 19th century, of the corporation as a service provider, as a donor, as a bank for those kinds of things. And I think that's a challenge, that's a space that needs greater academic rigour and thought around it.

11:52 Lars Peter Nissen:

When we started talking about doing this episode, you wrote that you were getting frustrated that, you know, either it is sort of headline blue-sky thinking, we can do more with less, transform the whole industry, really just bold statements without anything in them, or sort of extremely granular use cases: look how I can redesign our logo with ChatGPT or write a SitRep or whatever, right? I mean, there must be something in between, right? So, where are we? What's the Goldilocks scenario?

12:28 Sarah Spencer:

I think it's a good question, and I think that the first thing to say before I answer is that there is a real difference here. Definitions matter. So what can AI deliver versus what can machine learning deliver, and sort of expert systems that can do certain levels of preparation?

12:49 Lars Peter Nissen:

Just unpack that for the listener. What do you mean by that difference?

12:51 Sarah Spencer:

Well, so this gets into what is AI. I don't want to get drawn too far into debates about what the relationship is between machine learning and AI, because academics and computer scientists get very animated about it, the same way, I'd say, it's...

13:06 Lars Peter Nissen:

Yeah they don't listen to this podcast, so knock yourself out.

13:10 Sarah Spencer:

The same way, in the world of gender-based violence, which is where I started my career, you know, we would have animated debates about sexual violence versus gender-based violence: which term is more appropriate? Machine learning is like the engine that powers the AI car, but it's not the only engine, and it relies on a bunch of methods, some of which resemble very, very advanced statistics, like linear regression. So for those who are familiar with statistics, a linear regression: you know, if you're plotting some data on a chart and you've got a diagonal line, you can anticipate where the next dot on the line would go based on where the past data was. But that's not the sum total of AI, and it's really important for us to remember the advances that are being made both with large language models and small language models, as well as text-to-video, and the way in which robotics engineers are looking to use video, sensory and text data to accelerate the next… to get us closer to basically artificial intelligence, like an intelligence that is non-human and non-biological. And most of the use cases that we've talked about so far, the ones that hold promise and the ones that I find very interesting so far, feel like we're still in machine learning land.

That said, I think there's going to be a couple of things that AI will deliver that have humanitarian impact, but they may not be delivered by humanitarian agencies. So a good example is drug discovery. Can we get a malaria drug that actually outperforms RTS,S, which has hit the market this year? This year, last year, maybe. And AlphaFold, DeepMind, Google's DeepMind, think they can, and they're using AI to find new drugs. So drug discovery is a big one. The other thing that I think is interesting is public health surveillance. And that's going to stem from some of the work that we've already done to develop machine learning models to anticipate how a disease will spread and where it will spread, and in which places to deploy preventative measures sooner with more impact. So a while ago, a long time ago now, several years ago, there was work to look at how you could marry data from satellite imagery, geospatial imagery, with other data around population density that you would get from the MIX or DHS, USAID data, etc. Think about maybe even call detail records, so you could work out where people were at which given time, you know… hashtag wait for the conversation about ethical risks around pooling all these data sets. But let's park those for now. You marry these data sets and you can anticipate where cholera would spread, and therefore deploy hand-washing stations and other interventions which would reduce the likelihood of the spread of cholera or contain it more effectively. Now that, to me, is worth a look, right? But I think there is a problem with all of these use cases, these Goldilocks use cases.

The thing that I've started saying recently is that if we're talking about AI for good writ large, we need to talk about AI for good as a public good. And it needs to be publicly available. The results of these models need to be publicly available. They can't be owned by Palantir and WFP. They have to be made publicly available, especially if you're making some kind of prediction about the humanitarian impact of a natural disaster or an epidemic or pandemic. You know, that needs to be publicly accessible so that all humanitarian actors, little h, across the whole of the system can see where the impact is happening. The second reason that's important: you can well imagine that kind of public health or epidemiological predictive model. Let's pretend it's 100% successful. UNICEF develops it, or WHO develops it with Microsoft. Now if they come out as the cluster lead for health or nutrition or whatever angle you want to take, and they say, don't worry everyone, it's fine, we've got this great model, it's 100% accurate, it's been tested and we've done all the assurances, regularly, it's all fine. How in the world are you supposed to be a member of the cluster, receiving funding from a donor or working to a strategic priority that is set by the cluster lead, if you can't understand how those decisions were reached anyway? If you don't understand the back end of their analysis. So it taps into not only AI for good as a public good but the critical need for us to be transparent about the extent to which algorithms have informed our decisions. Not made the decisions, but how much have their algorithmic outputs factored into our decision-making. And then, if they have, show us the MOT, show us the health checks. Make sure that the drift between your algorithm's training data and the live data is minimal, and that it's still performing the way you say it was.
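To make that "health check" point a bit more concrete, here is a minimal illustrative sketch (not from the episode, and not any agency's actual tooling) of one common way to quantify drift between training data and live data for a single numeric feature, using a population stability index. The feature, the synthetic numbers and the 0.25 threshold are all assumptions chosen for illustration; a real deployment would monitor many features and model outputs with more robust statistics.

```python
import numpy as np

def population_stability_index(train: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Rough drift measure between training and live data for one numeric feature.

    Bin edges come from the training distribution; the index compares how the
    live data falls into those same bins. A common rule of thumb (an assumption
    here, not something from the episode): <0.1 little drift, 0.1-0.25 moderate,
    >0.25 significant drift worth investigating.
    """
    edges = np.quantile(train, np.linspace(0, 1, bins + 1))
    train_frac = np.histogram(np.clip(train, edges[0], edges[-1]), bins=edges)[0] / len(train)
    live_frac = np.histogram(np.clip(live, edges[0], edges[-1]), bins=edges)[0] / len(live)
    # Guard against empty bins so the log term stays finite.
    train_frac = np.clip(train_frac, 1e-6, None)
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - train_frac) * np.log(live_frac / train_frac)))

# Illustrative use with synthetic data: a rainfall-like input whose distribution
# has shifted since the model was trained.
rng = np.random.default_rng(0)
train_feature = rng.normal(loc=100, scale=15, size=5000)
live_feature = rng.normal(loc=115, scale=20, size=1000)
psi = population_stability_index(train_feature, live_feature)
print(f"PSI = {psi:.3f} -> {'investigate drift' if psi > 0.25 else 'looks stable'}")
```

The design point is simply that the check is cheap and publishable: an agency claiming its model "is still performing the way you say it was" could report numbers like this alongside its outputs.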

18:40 Lars Peter Nissen:

Yeah, I hear you. I'm picking up different things here, right. On one side, there is clearly the fact that AI will transform a number of different industries, and there will be positive effects of that on the people we serve. For example, malaria, as you mentioned, right? And that, as you say, most likely will not come from inside the sector; it will come from the commercial sector. Then the second thing I hear you saying is: will this concentrate power further, or will we get more distributed power among humanitarian actors, right? And I think one of the big problems we have today is an increasing concentration of power in very few agencies' and donors' hands, right. And so the concerning thing here is, if AI becomes this powerful panacea that's owned by an unnamed UN agency, what happens to the diversity of the sector that is really important to make sure we get it right? And then the third one, which I think is really key, also from an ACAPS perspective… because we already know how difficult it is to get humanitarians to admit that they make decisions, that they're not just doing what the data tells them. That's going to multiply by 10 or 100 or 1,000 if we get an algorithm where we don't even know how it works, right? And I think the risk aversion I see us having today, combined with a powerful algorithm and a concentration of power, that's not a great vision for a humanitarian architecture that's supposed to be able to deal with the crises coming in the future.

20:33 Sarah Spencer:

Yeah, I mean, there are parallels here. Alongside the debates around the role of AI in humanitarian action, there's a parallel debate happening around the role of AI in the delivery of public services at large, and there are some interesting trends for us to watch as humanitarians around what academics, what civil society organisations, and what legislators are saying about how AI or algorithms can make distinctions and determinations about who gets which services and when. So if you look at the Netherlands, if you look at Poland, if you look at the UK, there has been a recent rollback of the role of algorithms. For the UK, it's the Department for Work and Pensions, right, to identify the quote-unquote benefit fraudsters, the people who are trying to cheat the system. And in the Netherlands, Algorithm Watch did some amazing investigative reporting around how the algorithm fundamentally got it wrong: partly because of the algorithm, partly because of the data, partly because the engineers who designed it didn't necessarily have the expertise in terms of risk factors for benefit fraud. And then there's a whole other debate right now in Poland around the use of AI to determine eligibility for healthcare and social services. Now, let's take as a proxy measure that in the global North, people are starting to push back and starting to push for the right to a human decision rather than an algorithmic decision. For me, that feels like the canary in the coal mine. And yet, on the flip side, the humanitarians are all saying hooray, this is going to solve all our problems, when we're talking about a population that is arguably even more vulnerable than those who are in need of state support in the UK or in the Netherlands. You're talking about 200 to 300 million people forcibly displaced, forcibly migrated or affected by crises, and we're taking what is essentially a very high-risk technology to try and deliver a fantasy, maybe.

22:45 Lars Peter Nissen:

And putting it into the hands of agencies that don't have the capacity, the professionalism, nor the resources of the state.

23:00 Sarah Spencer:

Well, correct, correct. Or the accountability, I mean, that's the thing, right? It's the accountability function. Thankfully, because of GDPR, and also the EU's AI Act, as well as the Council of Europe's AI convention, which bears mentioning because it has such a focus on the rule of law and democracy and human rights, those tools help empower you and me to challenge how a state uses algorithms to shape the course of our lives. But when we're talking about people who… You know, I normally live in Nairobi, I'm in Brussels today, wherever I am… but people who live in Kakuma or people who live in Dadaab, do they have a right to tell UNHCR: don't use this algorithm to make a decision about whether you give more aid this year to Dadaab versus Kakuma? And this is the part where, when you get down to the ethical nitty-gritty, and you don't just use words like ethical considerations or ethical challenges or ethical principles, there are real trade-offs to be made. And I don't have the right answers. I do not pretend that there are easy answers here. But let's imagine a scenario where Microsoft or IBM and another agency… WFP, say… have been able to develop a model that predicts down to five square metres the impact of a flood on a specific population. And we can therefore preposition our aid and preposition our support to an extent that we actually deliver better impact in that scenario. I mean, that is quite possible in the future. Now there will be a community that therefore doesn't get that limited funding, because humanitarian funding is fixed, it's not limitless, so the money now is going to Freetown instead of going to somewhere in Malawi. And do those communities have a right to know that there was an algorithm, even at the very, very top strategic policy level, that informed that decision? I think the answer is yes.

24:59 Lars Peter Nissen:

Yeah. And how are we held accountable as humanitarian actors? I think that's key.

25:00 Sarah Spencer:

Correct, correct, correct.

25:03 Lars Peter Nissen:

So the way I hear what we've talked about so far is, one: there's a bucket of use cases around the development of new technologies, medical treatments, maybe a more drought-resistant crop, what do I know… different technologies that will help us, help everybody actually, have access to better services. There's a piece around prediction: can we actually know how different communities will be impacted, and can we predict and be faster and, you know, be there when it starts? And then thirdly, there is a piece around services to crisis-affected populations, the selection of individuals… I don't want to call them beneficiaries… where you mentioned a reluctance beginning to emerge in some states around whether it is right to determine welfare services, for example, on the basis of an algorithm, and so on. Are there other buckets? Are those the three big ones you're thinking about?

26:05 Sarah Spencer:

I'm sure there are other buckets, but I think those… I mean, I think underpinning that, or perhaps related to all three of those, and I don't know whether it deserves a separate bucket or if it's just the golden thread through them, is just being more specific in the way in which we speak about AI and think about AI. Because I've reviewed a lot of proposals recently for funding AI pilot projects and innovation with AI in the humanitarian sector, and I can say it's very, very hard to find detailed thinking and detailed analysis, not only on how they will procure and assure the performance of an AI model or system, and how they will manage the risks related to integrating an AI system into the rest of the agency's IT architecture and infrastructure, but it's also really difficult to find arguments that go down to the granular level of the challenging ethical trade-offs that will emerge. And I think that's on us. That's not the tech companies throwing this at us. That's for humanitarians to then sit down and ask as many questions as we can, and, you know, really take to heart that there is no such thing as a dumb question. I mean, I'm a GBV and child protection person by training. That was the first ten years of my career. So I'm not a physicist or a computer scientist or an engineer at all. But I've spent a lot of years asking a lot of questions which, you know, some people would be too embarrassed to ask, and… as a mother of three boys, I'm not embarrassed. Not embarrassed to humiliate myself at all. But I think that's where we need to start. We need to acknowledge the fact that we as humanitarians bring a certain quote-unquote domain expertise to this conversation, and for the stuff that sits outside that, we should not be afraid to ask any and all questions about how these things work, and think about the contexts in which we have all worked. The operational contexts in which we have worked. The key players there, the politics behind data, the politics behind population movement, who controls population movement, etcetera, etcetera.

28:50 Lars Peter Nissen:

Yeah, I think that brings us towards sort of the last thing that I'd love to talk to you about. And I think we should say that we won't be talking about all of the ways in which bad actors can use AI to make humanitarian operations more difficult… disinformation, false narratives being propagated. We can all see that that is a big risk, but let's leave that for now. The thing that's on my mind is… when we had our last conversation a couple of years ago, we called it 'An Arms Race for Data'. That was the title we gave that episode. And what you meant by that was that all of the big agencies were scrambling to make sure that they owned as much data as they could, so that they could strengthen their position within the sector. And that's just an utterly depressing way to think about these things. Do you still see that?

29:33 Sarah Spencer:

Probably even more so. A colleague of mine, Helen McElhinney, who's the executive director of the CDAC Network, and I just co-authored a piece for The New Humanitarian, which I think was published this week. In that we talked about the need for a paradigm shift in the concept of data ownership. Whose data is this? You know, Lars Peter and Sarah are now empowered, on the back of some very stringent legislation, to believe that that data belongs to us, and we can tell companies to forget us. We have the right to be forgotten. And we have the right to consent. You know, as much as everyone, particularly in the UK, finds cookies very annoying, having to give or reject consent at every single website, that is part of trying to figure out who owns your data and who doesn't. There's a very interesting piece in the Financial Times today, actually, around how car manufacturers are collaborating with, basically, big data owners: companies like LexisNexis who pool data, assess risk, and share that with insurance companies. So now there is a whole group of drivers in the United States who, unbeknownst to them, have been sharing data on braking and acceleration in their driving with LexisNexis, who have then shared it with insurance companies, and insurance companies have taken collective action to raise the premiums for these drivers, without them knowing that they were effectively being surveilled.

31:00 Lars Peter Nissen:

For the individual driver?

31:05 Sarah Spencer:

Yeah. So I feel like there is a parallel here. There's a very interesting parallel about pooling data resources. On the one hand, like I said before, there is good intent. There is really good intent behind some of these ideas, like the cholera example I gave before, right… pooling all these data sets and bringing them together so that we can deliver our interventions in a more targeted way and in less time. But what are the risks in doing so? And I think, fundamentally, this idea of aid for data still exists. There are stories, anecdotes, that I've heard quite recently coming out of some of the camps in Kenya that even if people try to say, I want to be forgotten, please don't hold my data, my name, it's not a reality. It is not a reality. And thinking about, you know, the hacks and the ability for humanitarian agencies to be targeted in the cyber world, it makes them seem incredibly naive, frankly. I mean, if you're holding the data of tens of millions of people who are arguably some of the most vulnerable on the planet, and you're constantly under threat of attack and cyber attack, I think it's just a naive approach.

32:38 Lars Peter Nissen:

I just couldn't agree more, and I think we fundamentally have to rethink the way we deal with the privacy of the people we serve. I basically think that if you deliver services to an individual, you cannot be the same agency providing a digital identity for that person. I think there has to be an independent provider of the digital identity that works for that person, not for UNHCR or for WFP.

33:07 Sarah Spencer:

Well, so going back to that example with the cars, with the drivers, and how their data is shared with LexisNexis. I'm not a lawyer, but the legislation essentially requires you to say whether you're consenting to data being shared with third parties, and also to name who those parties are. And the article in the Financial Times indicated that that wasn't happening consistently. But it's a principle or practice that we could easily adopt. And this is where you could say, as a UN agency who operates with an MOU that's been agreed with the state in which they are operating: if the MOU says they have to disclose all the data they collect to the Ministry of Interior in that state, which most of those MOUs say, then why aren't we saying: right, Lars Peter, I'm giving you assistance, and by the way, I'm collecting all your data through your refugee status determination interview, as part of the agreement to receive health services at our clinic, etcetera. But it's going to go straight to the Ministry of Health, by the way. And also, by the way, this country in which you are currently, you know, seeking asylum or refuge or whatever, also has a data sharing agreement with the neighboring country, which may be your point of origin, etcetera, etcetera. We're not even disclosing that, let alone giving people the right to opt out.

34:34 Lars Peter Nissen:

And why don't we have some humanitarian actors who beat up the system for not being transparent on this? And how come data collected with taxpayers' money suddenly becomes the property of WFP or whoever, right? I'm not shooting at WFP in particular, I think you're all equally bad. How come? We simply need to govern data much better.

34:50 Sarah Spencer:

Correct. In fact, you know what, with our anonymous ICRC witness here: Robert Mardini, I think at the end of 2023, said in a public forum that we just need to stop collecting data, period. We need to imagine a humanitarian system where data, actually individual-linked or linkable data, is not collected from anyone. And even if it is, for the purpose of verification, it's destroyed within 48 hours or something.

35:28 Lars Peter Nissen:

But also because the only parts of these organisations that really have sophisticated data operations are their own fundraising departments.

35:31 Sarah Spencer:

Yeah, yeah.

35:33 Lars Peter Nissen:

Arms race for data. Yeah. We shouldn't end on that note, that's a bit depressing. So maybe the last question, Sarah: what are you going to tell me two years from now, when we have the third AI episode? What do you see happening? What's the best-case scenario? Let's keep it positive.

35:56 Sarah Spencer:

Well, can I do a not-so-great case and then some better, best cases? I'm going to anyway. So I think the way in which algorithms, broadly, are being integrated into military and defence operations is not being tracked closely enough by humanitarian actors. When I started my career post-9/11, mainly just after 9/11, everyone in the sector… I was so young, and everyone in the humanitarian system and in my agencies could talk very knowingly about cluster bombs. It was amazing, the detail with which humanitarians could speak about specific types of artillery back then, because of the impact it was having on civilians, and children in particular. And we're not having the same level of detailed discussion around how algorithms are informing decision making. There's a lot of really good work about autonomous artillery, autonomous weapon systems, and, you know, very public campaigns like Stop Killer Robots, and people like Stuart Russell speaking passionately about automated artillery and automated weapon systems. But there's a whole other side to this as well. The way in which algorithms are informing target selection, are supporting intelligence and surveillance, will support detention, and who is detained and who is not detained as a prisoner of war: it pushes up against the whole corpus of IHL in ways that humanitarians should be thinking about. So I think, with what's going on in Europe right now with Russia and Ukraine, and what's going on in the Middle East with Gaza, target identification and the fact that the IDF is quite actively using algorithms to identify quote-unquote legitimate military targets needs to, and I hope will, come to the center of attention in two years. I did a piece of work for a client last year around nuclear threat and nuclear weapon systems, and believe it or not, there are lots of discussions, some of them very public, obviously from think tanks, but some of them very private among nation-states, about the role algorithms and AI should play in the command and control of nuclear weapon systems. And also communication around potential nuclear events. Now the one...

38:36 Lars Peter Nissen:

Let me just check with you Sarah, is this your best or your worst-case scenario?

38:38 Sarah Spencer:

This is still the worst case. This is what you asked: where are we going to be in two years' time? What should we be talking about? And I'm saying we need to be talking about how AI is being integrated into defence and military systems. But just on the nuclear side of things, what's interesting is that… and again, just to clarify, just as I'm not a computer scientist or a lawyer, I'm also not a strategist. But what those experts say is that what you're banking on in terms of nuclear deterrence is more time for decision making, not less. You don't want to speed up that time, which is a very interesting paradigm in terms of integrating any kind of algorithmic decision-making into any part of the nuclear weapon system.

39:24 Lars Peter Nissen:

Exactly. And this may be a bit far out, but we're dealing with an industry that hates friction. It wants to scale and thinks about how to build systems that are more and more frictionless so it can scale. And that is not a great strategy in a crisis. And I think, actually, just as AI will solve a lot of problems for us, there are some new vulnerabilities being created by these feedback mechanisms, where suddenly decisions are being made without a human being in the loop… and that is really frightening.

39:55 Sarah Spencer:

So that's the bad side, but here is where it could deliver some great things in two years, right. It could be that it makes the maintenance of fleets of vehicles and aircraft and ships vastly more efficient for ICRC, WFP, etcetera. It could invent or identify new forms of fuel so that the carbon footprint of humanitarian operations, or of any of us human beings, is much lower. It could invent a new drug. The thing about precision agriculture that you mentioned is actually quite relevant and important. In the way in which we are using pesticides and chemicals to increase crop productivity, there are some very, very interesting, not very glamorous things, nothing that's going to put you on the front page of the Guardian or the front page of the New York Times, but areas where AI is very good at managing and finding efficiencies in the system, so you use fewer pesticides and increase yields, etcetera… supply chain management, volume or demand forecasting, etcetera.

40:54 Lars Peter Nissen:

OK that's your best case scenario?

40:56 Sarah Spencer:

I think that's… I mean, that leads to a world that's a little bit more automated and controlled and surveilled, etcetera. But I think that that's where the AI for good advocates would say we're going: that we're creating new tools that will help us support the world's growing population and address and reduce our carbon footprint.

41:20 Lars Peter Nissen:

It's interesting, because one thing we have not talked about today… we have spoken about how we can develop new technologies, how it can make our fleets run even more seamlessly than they do today. We have not talked about how it can enhance the agency of crisis-affected populations themselves.

41:39 Sarah Spencer:

Well, it can, amazingly. But I think in ways that maybe humanitarians aren't necessarily thinking about at the moment. I mean, how are people using ChatGPT right now? They're using it for job applications, they're using it for, potentially, immigration forms and applications. There was an argument by someone in The New Humanitarian last year, I think, saying that generative AI will level the playing field and local organisations will now be able to write better proposals. You can see people using GPT-3.5 or GPT-4 from OpenAI to build or fine-tune their own models for very specific purposes, like: help me understand this 250-page request for proposals from USAID. I mean, those documents are very dense. So yeah, maybe it will level the playing field. I have no evidence to suggest one way or the other, but if I were putting money on it, I suspect that donors will very soon start to request that people disclose when they have used generative models to write donor reports and/or proposals. Because otherwise you just get ChatGPT writing 15 proposals for the same money. And so USAID is evaluating 15 proposals that have essentially been written by ChatGPT. And how does that distinguish any of the agencies?

43:02 Lars Peter Nissen:

Unless they develop an AI themselves to read the proposals.

Sarah Spencer:

Well, correct, they have. The Australians have done that. So then you get into a very weird sort of future where computers are writing proposals for computers.

43:12 Lars Peter Nissen:

It's been a wonderful conversation, with potentially quite a depressing conclusion, but I've really enjoyed this discussion. It's fantastic to see you again. Thank you so much for coming back to Trumanitarian. Thank you for dragging ICRC into the conversation. We enjoyed the listening ears in the corner very much.

43:33 Sarah Spencer:

Thank you. Thank you for having me. It was such a pleasure and so fun to talk about this stuff as always.