In this conversation, host Lars Peter Nissen and Pierrick Devidal, Senior Policy Adviser in the Law, Policy and Humanitarian Diplomacy Division at the ICRC, debate whether the sector's excitement about AI is a progressive step or a dangerous diversion.
We discuss ethical considerations and the potential for tech to overshadow fundamental humanitarian principles. How do we distinguish meaningful innovation from harmful overreliance? What are the pitfalls of datafication and AI fixation in humanitarian efforts, and when should we not take part in the race?
Join a conversation that navigates strategies for evaluating whether AI technologies bring real added value to humanitarian efforts.
Transcript
00:21 Lars Peter Nissen
Welcome to Trumanitarian. I'm your host, Lars Peter Nissen. How should the humanitarian sector use information technology, and what is information technology doing to the humanitarian sector? Those are the key questions in this conversation with Pierrick Devidal, who works with the ICRC in their policy department. He has thought long and deep about these issues and tried to grasp the wider implications of the information technology revolution we have seen over the past decades. Pierrick is sceptical towards the technology, as you may have guessed from the title of this episode. I chose to call it 'The Technophobe' because I once heard him introduce himself like that – as a joke, of course. Pierrick himself would actually have preferred 'Strategic Luddism' or 'Humanitarian Luddism,' but no matter which title we had chosen, you get the point. In good old ICRC style, Pierrick goes back to basics and analyzes tech in the light of the four basic humanitarian principles: humanity, impartiality, neutrality, and independence. It's a really useful frame for the discussion, and I'm sure that you will find his perspective thought-provoking and useful. The conversation is based on an article Pierrick has written, and you will find the reference to that article in the show notes. As always, please leave a review of the show, promote it on social media, share it with colleagues and friends, but the most important thing, as always: enjoy the conversation.
Pierrick Devidal, welcome to Trumanitarian.
02:13 Pierrick Devidal
Hello!
02:16 Lars Peter Nissen
It's great to have you here. You are the senior policy advisor at the Law, Policy and Humanitarian Diplomacy Division at the International Committee of the Red Cross, the ICRC, here in Geneva, Switzerland. And that was quite a mouthful. What do you actually do?
02:37 Pierrick Devidal
I'm still trying to figure it out, basically. I think that's what policy advisors do. They're trying to understand things and figure it out. What is it that I do? Over the years, I've worked on many different subjects. For the past two years, let's say, I've been focusing on the intersection between digital technologies and people affected by conflict, but also humanitarian action and humanitarian organizations, and how we deal with all this. So that's what I've been doing. Policy is a big word. It means many different things, but it's basically trying to reflect, understand, make analyses, and support critical thinking in humanitarian organizations so that we can come up with positions and guidance for operational activities, or for external engagement at the diplomatic level and other things like this. We are the critical thinkers within organized humanitarian action.
03:37 Lars Peter Nissen
And today, we will be discussing an article you wrote for the International Review of the Red Cross called "Lost in Digital Translation: The Humanitarian Principles in the Digital Age," and we should say that this is an article you have written in your own name, so whatever you say today represents your opinion, of course informed by your role at the ICRC. But it is Pierrick who's talking today.
04:05 Pierrick Devidal
Exactly. That's exactly right. No, I am very privileged to have the opportunity to work at the ICRC, also because it allows me to develop my own thinking on slightly more academic issues. And that's the format of that article. So it's of course very much taken from my experience at the ICRC and the work I do there, but it's also my own thinking, where I try to develop a few arguments that maybe go a little bit beyond what the ICRC usually does or works on.
04:36 Lars Peter Nissen
We met at a conference where we were both doing panels on different topics. Yours was on AI. And you introduced yourself at that conference as a technophobe. You're not on LinkedIn, you didn't even have a picture… there was just an ICRC logo. You have a very low profile. Are you sure you're the right person to talk about digital issues?
04:57 Pierrick Devidal
No, I'm sure, but what I'm also sure of is that I used that label of technophobe just to create a bit of attention in the conversation, because we were talking about AI. So obviously, you know, that's the way I tried to get a little bit of attention at the beginning of the chat. But I'm very much not a technophobe. I don't like these labels that we put on things and on people, and that is so true around technology. So the technophobe thing was kind of there for me to say: yeah, I'm more schizophrenic around technology. I am absolutely fascinated by it. I am absolutely convinced that it can bring so much to humanity, to humanitarian action, and all of these things. But I'm also very concerned about some of the trends that we see, what it's doing to people and to societies. So I'm sceptical, but in the good sense of the term. I think we need to ask ourselves questions… there's no so-called techno-determinism. We are not just the victims of technology; we can shape it if we understand it, and that's kind of what I'm trying to do, because I'm not a technologist. I don't have a technical background, but it's everywhere in my life. It's everywhere around me, and so I'm just curious and I like to understand.
06:33 Lars Peter Nissen
And we should mention that the Internet got back at you today. You managed to get lost. Google Maps took you in the wrong direction, to the old ACAPS office, not the new one. So you were a bit late for this. Let's take that as the Internet getting back at the technophobe.
06:48 Pierrick Devidal
But I think it's actually an interesting example of how much… so, I love being in the mountains. I need to have a sense of orientation, otherwise I get lost, and I've lost my sense of orientation because I rely too much on apps and things like this to find my way to things. And that's what interests me: it's funny how all these tools that are bringing us more capacity and capabilities are also undoing some of our capabilities, including very natural ones. My grandparents had a much better sense of orientation than I have, and they didn't have these tools. So it's just an example. Yes, I got lost. Sorry about that.
07:11 Lars Peter Nissen
So, in this article, summarize for the listener: what is your basic argument? Are we lost in digital translation? What is the issue? You hinted a bit at it here: the loss of situational awareness because we have better tools to aid us. But what's the basic argument? How would you lay it out?
07:28 Pierrick Devidal
So it's two things. On the one hand, I think we are a little bit overwhelmed by technology. It's spreading, it's going very fast, and I think there's a general tendency, and that was very much my first reaction, of people in general, but humanitarians also, to be like, 'Whoa, it's going too fast, I don't understand what's going on, you know what? I'm just going to give up and I'm going to focus on my work.' Yeah, that's great. But technology is catching up, and technology is getting into your work, and it's getting into your tools, and it's getting into your thinking, and sometimes you don't even realize. So we can't give up on trying to understand technology. We don't have to be AI engineers. That's not the point. But we can find our own way as humans, as humanitarians, as citizens, to understand technology. It's like, you know, when you drive a car you need a driving license. Well, when you use technology you should have a basic digital literacy. And I think that's what's missing, and that's very much true for society in general, not specifically for the humanitarian sector, but also in the humanitarian sector. And I think one of the things I find really useful, to not get excluded from that conversation as a humanitarian worker, is to take over the conversation and build the language to talk about technology in terms that we understand: the humanitarian language. And we can do that. There are many ways to do this, and I myself simply found that the fundamental humanitarian principles were very useful to deconstruct the technological difficulties, the big problems around technologies. They really help me analyze and think. And I was just like, 'Oh, it's funny how we don't think about mixing technology with the humanitarian principles.' And when I made that connection, I was just like, 'What? That's really helpful.' So maybe I can raise that. It's not the perfect solution, but it's one we have at our disposal.
09:28 Lars Peter Nissen
Yeah. So basically, what you're saying is: it's a roller coaster. We can't get away from it. It's infiltrating everything. The danger is that the tech uses us instead of us using the tech. And we need to shift the discourse back to basics, to the humanitarian principles. How do they relate to tech?
09:44 Pierrick Devidal
Exactly, exactly. Tech is a little bit happening to us, and it's doing a lot of things for humanitarian action. What we're not looking into too much is what it's doing to humanitarian action. And I think that's one of the things we also have to look into. You know, we know technology is dual in nature; it has two sides, a good one and a bad one. It's always the case with every technology that has been invented. You can always use it for good, and you can always use it for bad, and that creates dilemmas. Dilemmas in the humanitarian sector, we know about those – in the field, in politics, in diplomacy. And one of the tools that we have to deal with those dilemmas is the fundamental principles, the humanitarian principles. And so I thought, 'Oh, maybe we can try to look at technological dilemmas through that prism.'
10:33 Lars Peter Nissen
What are you most worried about with technology and the way it influences humanitarian action?
10:39 Pierrick Devidal
I don't know if I'm necessarily super worried. What really concerns me is that there is a risk that we give up on trying to understand because it seems too complex and it goes too fast, and I don't think we can do that. I think we have a duty as humanitarians, a professional duty, to increase our knowledge and our awareness and to make an effort to understand what's going on and how it impacts the people we work with and for, and how it impacts the things we do and how we do them. To me, that's really a matter of ethics and professionalism, and I think humanitarians have a duty to do a little bit better in trying to bridge the gap that there is right now between most professionals and the technological issues.
11:31 Lars Peter Nissen
You know, when I was preparing for this interview, one of the things I was thinking about was: maybe it's just the ICRC being difficult and grumpy because they're losing the monopoly they used to have. If we take something like restoring family links, that used to be something the ICRC did really well. Red Cross messages, contacting families.
I mean today you put a post on Facebook, 'I'm OK, I'm safe, I'm here on the other side of the border,' and people know where you are. So isn't it also just us as humanitarians being threatened because we don't have the monopoly, we're not the gatekeepers we used to be?
12:04 Pierrick Devidal
No, I don't see it this way at all. First of all, the ICRC never had a monopoly on establishing family links. We never work alone in the true sense; we are always there working with people who are affected. They are the first responders, and we work with national societies, so we are part of an ecosystem, or different ecosystems actually, in the plural. So it's definitely not about that.
I think the ICRC, just like many other organizations, focuses on where we have added value. It just happens that within the history of the Red Cross and Red Crescent Movement, the establishment of family links was something very important where we could make a difference. And so we continue to try to make that difference. But if and when we are not needed, because tech can replace us or because there is no such need, well, so be it. Great. That's less suffering for people. That's fantastic. Now, it's also true, since you mentioned Facebook: Facebook was born as a thing to connect people. Yes, it does connect people, but in a particular way. And I don't think the aims of those platforms are the same as humanitarian objectives. So there's probably also value in complementarity between what these tech companies can do and what humanitarians do.
13:29 Lars Peter Nissen
But I could argue: why does that matter? I just want to talk to my grandmother.
13:29 Pierrick Devidal
Yes, so talk to your grandmother. Whatever is more convenient for you and whatever you think is right. If you don't need a Red Cross message but you have access to a phone, just do that. And I think that's also where humanitarians have been challenged. They have to keep up with the opportunities that come along to make their work easier and more efficient. That's why it's important to, again… we're not here to try to keep up with technology just because it's moving fast, but we have a duty to explore how technology can help us do our job better, how to be more effective, how to use the finite resources we have in a better way. And I think technology can go a long way to do that. But it's more complex than it seems, and that's why I want to use, yeah, again, the principles, but also critical thinking, to look a little bit further. Because there are a lot of, let's say, misconceptions around technology and what it does, and technology is surrounded by a lot of marketing, so we need to be smarter than to just fall for the marketing stories and narratives that we get. There's a lot of grey, and we have a duty to look into that grey zone.
14:38 Lars Peter Nissen
And we will dive into the grey, but let's first look at the principles. Let's go through them one by one and just hear your analysis of how tech relates to them. If we start with humanity, I think that's a good one to start with.
14:52 Pierrick Devidal
Yes, that's a very good one to start with. It's the most important one. Just as a side note, we have a tendency to put the principles all together and then separate them one by one. It's very important we understand there's one principle that is above the others, and that's humanity. The other ones are just instrumental in supporting that aim and that goal. The principle of humanity is about trying to do everything you can to alleviate suffering and protect the dignity of people. And so the other ones are tools that allow us in certain circumstances to do that mission a little bit better. So yes, it's a good idea to start with the humanity principle.
15:32 Lars Peter Nissen
Who am I to disagree with the ICRC policy division? But I might actually introduce a slight distinction there and say I agree with you on humanity, but I think impartiality is different from neutrality and independence, where the latter two are more sort of operational...
15:52 Pierrick Devidal
Yeah, absolutely right, absolutely right.
15:55 Lars Peter Nissen
Whereas impartiality is about non-discrimination, right? That's... I don't think that's just a...
15:58 Pierrick Devidal
Yes, exactly, but people tend to refer to the humanitarian principles as a bunch of things that are all at the same level. No, actually, there's a little bit of an order and a hierarchy. Humanity comes first, impartiality is the second one, and the other two, neutrality and independence, come in support of these two objectives. But there's definitely an order. You're right to point this out here.
16:19 Lars Peter Nissen
Cool, humanity.
16:21 Pierrick Devidal
So let's go with humanity. There, I think, as always when we talk about tech, there are two sides. And here, the first side is: can technology help us alleviate suffering? The answer is absolutely yes. There's no question about this, and we come back to what I just said: if technology can help us alleviate more suffering, find better solutions to alleviate suffering, or go faster in alleviating suffering, we absolutely have a duty to explore that.
16:54 Lars Peter Nissen
So do more with less.
16:56 Pierrick Devidal
That's one of the promises that come with technology. We need to be careful, because sometimes it's a trap. You have the impression that you are going to do more with less, but actually, you don't do much more. And sometimes doing with less could have a different impact. So again, you need to look further than that. But on the principle, I think as humanitarians we have a duty to explore technology, and to see if and how it can help us. We should not confuse that with a duty to innovate. I've heard that many times from colleagues: 'Oh, we should keep up with technology. We should keep up with innovation.' That's not what we're here for. That's the job of other organizations. That's the job of the private sector. That's fantastic. They can do that. We can't. First, we don't have the means to do so. You know, every 10 minutes there's a new AI system that comes up... We will never be able to keep up with technology. And also, again, we are not here to innovate.
Innovation is not an end. It's a means to an end, and the end is humanity, alleviating suffering. So if technology can help, we have to use it, when it makes sense. But we're not here to just innovate, and that's where I've seen, and many have seen, in the humanitarian sector a bit of a trend toward the innovation hype. If you remember the World Humanitarian Summit a few years back, that's when innovation came kind of on top of the agenda and there were competitions to, you know, innovate, come up with digital applications and blah blah blah. And I think there were hundreds of digital applications. I'm not sure any of them were actually used by people. I'm not sure any of them actually helped in alleviating suffering. So that's where we need to keep that tendency under control, because the risk there is to actually lose the finite resources that we have, to waste our money and energy on innovation that doesn't make a difference for people.
18:48 Lars Peter Nissen
I couldn't agree more. I sometimes call it happy-clapping tech fetishism, right? Simply looking at the new shiny thing I have developed. Isn't it wonderful? But there is no serious consideration of what this does for anybody, right? How does this link to the purpose of what we're doing? Is there even a use case here? Or is it just because you were able to do this new cool thing with your computer? I mean, the lack of seriousness in a lot of what I see being developed is just staggering.
19:18 Pierrick Devidal
Yeah. And I think we also need to be a little bit kind to ourselves. Look, we're humanitarians. We are faced with unbearable suffering of people. We don't have the means to do what we need to do and what needs to be done. We don't have the means to address all these needs. So when something comes up like this and you have hope that maybe tech can help… we will look at it with a little bit of fascination and hope. And so it's not necessarily ill-intentioned, but it is a bias that we have and that we need to mitigate.
19:46 Lars Peter Nissen
I think the only softening I would do of that statement is that I think we do need to innovate, but we need to have others do the heavy lifting for us, and then what we can do is figure out how to apply these technologies in the very specific contexts we work in.
20:01 Pierrick Devidal
Yeah. So there are two kinds of innovation, right? There's the indigenous, local, bottom-up innovation that has always existed in humanitarian action. Humanitarians have always had to innovate. They had no choice. They had to find solutions, build things from the makeshift materials they had around, and move on. So I think we are very strong in that kind of innovation.
And that's the good kind of innovation. Why? Because it responds to needs. It responds to gaps that you face immediately. And so you innovate with what you have to respond to that need. That's good. That's bottom-up. It's needs-driven. It's problem-driven. The other kind of innovation that we see more and more is the other way around. It's innovation driven by solutions. You know: 'Oh, you have AI. Alright, that's a new solution. What can we do with it? Let's innovate.' That's the wrong way to go about it. We have innovation that is top-down, that comes from the leadership of organizations, and it's just like, 'Oh, why don't we do this with that? It looks cool.' I understand that's a natural tendency that everyone has, but we need to manage it, because that's not the right way to go. So yes, absolutely, to tech and innovation. But let's do it in a problem-driven way, so that when we use innovation it responds to problems that we face, that people face, and it helps us solve those problems. Not innovation for the sake of it, because that's not our job.
21:25 Lars Peter Nissen
So humanity tells you that we have to use the resources we have in the best possible way. Alleviate as much suffering as we can. Not go down some innovation rabbit hole, but really keep our eye on the ball and use tech to be more efficient, in a sense.
21:46 Pierrick Devidal
Yes, and that's not enough. The root of the word humanity is humans, and we need to be human. And there's a tendency, a risk rather, of losing the human in humanity, because it is sometimes being overly replaced by technology. I always use the same example, but I really think it's an illustrative one: the multiplication of humanitarian organizations using chatbots to communicate with affected populations.
Look, I understand. Of course, you will be able to deal with a very high number of requests, and you can provide information at scale, reach more people, and do that in a more cost-effective way. A chatbot doesn't get tired. It's a little bit of an investment at the beginning, but then it doesn't cost as much as having a team of humans doing that work.
So that's fine. But then it really depends on what you use it for. I mean, I don't know about you, but when you call a customer service line and you end up speaking to a chatbot, it's not exactly a great feeling, and you don't necessarily always get the kind of attention and assistance that you need. So put ourselves in the shoes of people who are affected by conflict, who are faced with trauma, with tremendous needs... if the first interaction they have is with a chatbot? I'm not sure that's the best way to go about understanding their individual situation, or showing the empathy that we need to show to respect their dignity and things like this. So chatbots, yes, they can help in certain circumstances, but they're not a magic tool. And I think humans need to stay at the centre of the interactions that we need to have with affected people.
23:36 Lars Peter Nissen
I was trying to make up my mind as you described this about what I actually think, and… I think on one side I don't hear you saying that chatbots are a no-go. They could be a useful tool for certain things…
23:48 Pierrick Devidal
Oh yeah absolutely. It depends on what. But it's not the silver bullet that it's presented to be in terms of managing interactions and communication and engagement with affected populations.
24:00 Lars Peter Nissen
Yeah, I think I agree with that. But something else popped into my mind: I think sometimes these days we almost use humans as chatbots. Right? The scripts that people have to follow, the mechanical way we go through different procedures, the clipboards people sit with… 'then I ask the next question that pops up on my screen.' That's almost worse to me, because that's not even a human… it's actually an algorithm you're talking to, a procedure you're talking to, and not a human being. It's just…
24:33 Pierrick Devidal
The creation of chatbots since…
26:25 Lars Peter Nissen
So the two key points coming out here around humanity are: let's use it to be more efficient and able to alleviate more suffering, but let's make sure that technology doesn't dehumanize humanitarian action.
26:39 Pierrick Devidal
Absolutely. And I think that's particularly concerning nowadays with the spread of artificial intelligence. Artificial intelligence by definition is a process that is meant to replace… or rather to help machines do human tasks or human activities. And so, in a way, artificial intelligence is there to dehumanize certain activities, certain processes, and that's fine.
But there are some activities that we should not dehumanize. Empathy should not be dehumanized. It should not be given up to computers and robots. And humanitarian action should not be dehumanized, because that would affect our ability to act and operate in line with the fundamental principle of humanity. So we need to make sure that we build safeguards around our use of technology, so that we don't end up, you know, pushing robots to talk to people who need to be understood, who are in very difficult situations and have to deal with a lot of complexity in their suffering. And I think humans are better placed to show that empathy. If you're trained, of course. Again, we need to be good humanitarians to do that, and that takes a little bit of training and a little bit of experience.
27:56 Lars Peter Nissen
Yeah, I was actually thinking: what is the problem here? Is the problem technology, or is the problem the industry we have become, rather than the solidarity-based, sort of activist movement that we used to be…
28:19 Pierrick Devidal
It's all of that at the same time. There's not one problem. There's a combination of problems. Technology is also not just, you know… you often hear: 'Technology, well, it's just a tool.' No, it's not. Technology is a representation and a combination of many different socioeconomic and political factors, and they come together. It's never just the machine. It's a machine that was made by a human, that is used or misused by another human, and that affects, at the end of the day, a human. So it's not that simple. All these things interact, these trends intersect, and we need to keep an eye on them to keep control.
28:45 Lars Peter Nissen
Right. So is that more or less humanity, or do you have more points?
28:49 Pierrick Devidal
No I think that that captures it.
28:52 Lars Peter Nissen
Impartiality.
28:54 Pierrick Devidal
Impartiality. So, I'm going to take a shortcut and focus on the key discussion around impartiality. Well, there are different ones, but let's take the main one, and then we can come back to the others. I don't want to focus on AI, there's already too much conversation around AI, but in this particular case I think it's important. The sector has been focused for the past 15 years on trying to use what was called, back in the day, 'big data'.
To find solutions, to bridge the gaps that we have in our analysis, to fix some of the lack of information we had, some of the human biases that we had around the analysis of humanitarian situations and humanitarian needs. And big data came with the promise that data can be so helpful to complement that vision and to help you have a better understanding of those things. So that was very promising, and I think there was a clear investment, if not overinvestment, in those solutions for predictive analysis, for better placement of response systems and mechanisms, and for understanding of needs. So we have invested a lot in this. But now, with algorithms coming in, there's this whole issue of data and algorithmic biases. And that's where the concern is with the principle of impartiality.
30:19 Lars Peter Nissen
And maybe I can just inject, because of course, working at ACAPS, that whole discussion is something that's really close to my heart. There was an investment and there was a lot of hype around data and what it could do for us, not unlike the hype we have around AI today. But what we never invested in, or at least initially really struggled to invest in, was analysts. Right? We had information managers left, right, and centre. But analysts only came later, and for me that has always been one of the key problems in this whole use of technology by the sector. If you go to any other industry, the military, the private sector… you name it… they invest massively, yes, in data, but also in the analysis of that data. It doesn't just appear magically because you have the numbers; you need somebody to make sense of it, and we never made that investment. And I think that's part of the reason why it has not fulfilled its promise.
31:21 Pierrick Devidal
I think you're right, and this is even another problem. But first of all, the data hype, let's be clear, was not specific to the humanitarian sector. That was just what was happening everywhere, in every sector, in the economy, in politics. We were particularly prone to it. Why? Because we are aware of our limitations and because we had that bias of trying to do more with less. So we were just like, OK, if data can help, that's great. But then, on your point of not investing enough in analysis: I think before that we had a problem with data collection itself. We don't understand data, there's no data literacy within the sector, and we don't know how data works, how its validity can be a problem, how its quality can be a problem, how its quantity can be a problem. And so we were kind of launched into this. It was just like, OK, let's collect as much data as we can, and we'll figure out later what we do with it. That's a catastrophe. That's a catastrophe because it's completely out of line with the principles of data protection. And to me, if there was a key lesson to be learned from the big data experience, it's data minimization. Just get what you need, make sure you understand it, know why you're going for a certain type of data, and keep control of it, because otherwise you're overwhelmed.
Not only in analysing it, but just in managing it. It gets lost, it doesn't get updated, it doesn't get curated, it doesn't get deleted. And then that's a massive liability issue in terms of data protection, in terms of cybersecurity, and in terms of your ability to make sense of it. We've seen in other sectors how big data turned into bad data: you are just completely overwhelmed by it, it's not good quality, and it's going to lead you to wrong solutions or wrong interpretations. So we need to find a better balance, but to me the lesson there is: of course we need to invest in the ability to analyse, as you rightly said, but the first step is to invest in our data literacy. We need to make sure that humanitarians, for whom data is so key, actually understand what data is, how it works, and what the limits and opportunities around it are.
33:30 Lars Peter Nissen
Yeah. And I might even take it a step further back and say we have to focus on decision-making. What are you actually trying to change? What are you going to use that data for? I think we have a great tendency to separate data collection and analysis from the actual decision-making, and those two processes seem to operate pretty much in isolation from each other.
33:50 Pierrick Devidal
Absolutely. And that's where we come back to the conversation on solution-driven versus problem-driven. Solution-driven was: 'Big data is the solution to your problems. Just go, collect data, we'll figure it out, and we'll find a solution.' That's not right. The first step is actually: what exactly is your problem? What are you trying to figure out? What are you trying to understand? What is your gap in knowledge, analysis, and information? Only then do you know what kind of data you need to go for. And only then can you actually manage it, because it's not overwhelming and because it makes sense. And then the analysis will be much easier. There's this tendency, you know… people very often say, 'I don't want to hear about problems, I want to hear about solutions.' Yeah, great. But if you don't know what your problem is, how are you going to find the right solution? And I think that was very much the case with data and big data.
34:39 Lars Peter Nissen
We boiled it down to three golden rules that sort of guide our work. Rule #1: know what you need to know. So have clarity on what your decision-maker needs to know. Rule #2: make sense, not data. If you're collecting data but you don't know why, stop doing that and figure out your analytical framework. And rule #3: don't be precisely wrong, be approximately right. When you work in the sorts of situations we work in – very complex, fast-moving, ambiguous – a high level of precision turns into an outdated snapshot in two days flat. And that good-enough analysis, that sweet spot you need to hit of being approximately right, that's really key. And that is very difficult in a risk-averse sector such as ours.
35:27 Pierrick Devidal
Yes. And I think over the past few years, for different reasons, there was also another focus on the quantitative aspects of what we do. A lot of questions are being asked of humanitarians now to show that they are effective and impactful, and these are good questions, legitimate questions. But I think the tendency was to respond to those questions with data and quantitative measurements, which are very important, but they are not enough. They are only half of the picture, and I think the qualitative analysis that used to be very much at the centre of things has kind of been overwhelmed by the quantitative elements. And we need to find a better balance. So absolutely yes to data, absolutely yes to quantitative elements, but they are not enough. And if you don't want them to bias your analysis and understanding, you need to bring qualitative analysis and assessment in there, yeah.
36:25 Lars Peter Nissen
…you how to prioritize between…
37:13 Pierrick Devidal
Absolutely. That's what I meant by balancing the two. Because the story you mentioned, of the person who's able to do nice storytelling, coming back from an anecdotal three-day trip somewhere, and telling you 'I understand the humanitarian problems'. Yeah, no, that doesn't work. That's not right. And it's good when we can countercheck those perspectives with data, because that helps us identify and reduce our human biases. Now, can we do the same with data biases and algorithmic biases? Most of the time we cannot. Because we don't understand those biases. We don't have a clear understanding of how the data interacts or how the algorithms work, because we can't open the black box, and so you're not able to explain what potential biases come into the picture, how you mitigate them, how you address them. That's much easier, to some extent, with human biases, because we know them and they are documented by psychology, by sociology, by all of these things. And we also have a lot of humanitarian experience, so by now we know the traps we systematically fall into, and we should be able to manage them with better use of other sources of quantitative information and data. So it's all very much an equilibrium exercise. But I think we have a tendency to move from one extreme to another. For way too long we were purely in the qualitative. That was not enough, not sufficient, not satisfactory. And then we moved to the overly quantitative and data-driven, and I think that's also not right. So let's go towards the balance.
38:49 Lars Peter Nissen
Yeah, I think I agree with you. And what I hear you saying on impartiality is: on one side, we thought big data was going to solve the problem. It didn't. We have probably swung too far to the quantitative side. I would argue we probably did not have enough human resources to really leverage big data the way it should have been leveraged, especially in very uneven information spaces such as the ones we operate in in the humanitarian sector. And then the last thing you bring out here is that the big problem with AI is that there we don't even see the biases. We just get the results coming out of the machine. You push the button, the results come out, and you actually don't know how it worked.
39:29 Pierrick Devidal
Exactly. And I think there it's very much linked. So impartiality is this whole point of being able to explain why the responses you provide are positioned in a particular place, for a particular type of population, for a particular type of need, and that this is done without discrimination based on any other consideration. That's what impartiality is. In order to be faithful to the principle of impartiality, you also need to be accountable, which means you need to be able to explain how you measured, how you took that decision, how you came up with the prioritisation, and how you organised your response accordingly. Nowadays, with the introduction of AI-driven systems in the analysis of needs or the design of programmatic responses, this is done by a mathematical formula, an algorithm, that we haven't built. It is based on the use of data that we may or may not have collected. We can't open that box most of the time, because it's commercial and belongs to companies that protect it, because these are their assets. And if you can't open that box, you're not able to explain the relationship between the input and the output. And then you lose your ability to explain how you respected the principle of impartiality. And we also know, it's very well documented, that AI has a huge bias problem. And it's always the same bias, always the same discrimination. We talk about algorithmic biases, but what are they? They are forms of discrimination, and they affect, of course, disproportionately women, disproportionately people of colour, and so on. It's already highly complex to try to be faithful to the principle of impartiality. If we systematically add these algorithmic biases to the work we do and to our assessments, it's going to become a nightmare, and it's going to become impossible to be faithful to that principle, but also to be accountable, to explain how and why we do things. And that's probably not a good thing for humanitarians. We know we have a problem with accountability. We're making efforts, and I think AI, even though it can also be done the right way, has the potential to make that impartiality objective and ambition more difficult to achieve.
41:43 Lars Peter Nissen
So when you spoke about humanity, you were like: OK, it can help us be more efficient and help more people, so that's good, but we have to be careful that we don't dehumanize. On impartiality, all I hear you saying is it'll get worse.
41:57 Pierrick Devidal
Not necessarily, because I think… I hope, with the amount of money, billions not millions, billions that are being invested by tech companies and so on in the AI world… I hope we can eventually get there: where we have ways to better identify the famous problems with the data, the garbage-in-garbage-out problem that comes with algorithms. And I also hope that the ability of AI engineers to understand, themselves, how the algorithms they created work and interact and move from input to output is going to improve. So I don't think it's necessarily a lost battle. I'm just saying we're not there yet. And if we're not there yet, then maybe we should be more prudent about using these tools. It's also the same as, you know, people asking, 'Oh, is AI fit for the humanitarian sector?' Yeah, that's a valid question. But is the humanitarian sector fit for AI? That's another one. We have always had a problem with data. Data is the root of AI systems. And if we don't manage that problem, we will face more problems, because AI is going to amplify, multiply, and accelerate those problems. And our ability to manage them is limited.
43:12 Lars Peter Nissen
Yeah, I couldn't agree more. I think the fundamental problem we have is the relationship between data, evidence, decision-making, and the accountability of that decision-making. That's messed up the way it is right now. And the problem, as you say, is that if you then sprinkle some AI on top of that, it makes the actual decisions invisible. Or actually there is no decision, just the result. Right? There's only one way to do this. That will just make it worse. But the fundamental problem is that we're not in great shape today.
43:46 Pierrick Devidal
Exactly. And that's also the point: maybe we need to be a little bit more strategically patient about using all these technologies, because we need to be in a better position to understand and control them. And right now I think the gap is really a little bit problematic. So let's work on our data and digital literacy. Let's focus on where we know AI can really make a big difference. And let's refrain from using these tools where we're not sure, and where we're not sure we are able to manage them responsibly. And I think that's where we have another duty. A duty to explore technology, yes, but a duty to do that responsibly and ethically. And if you cannot do it responsibly and ethically, it's probably better not to do it at all.
44:34 Lars Peter Nissen
Maybe just to be a bit of a contrarian.
44:36 Pierrick Devidal
Please.
44:38 Lars Peter Nissen
You know, we're going to be faced with a significant increase in needs in the coming decades. I think we can all see that, right? Climate change alone will drive a significantly higher number of people to be displaced and on the move. We have this do-no-harm principle we talk about. Maybe it's time to replace that with sort of a net-positive way of thinking about things, right? I mean, if AI really can help us do so much more and reach so many more people by really scaling our interventions… because I don't think what we do today is really scalable, right? And so… you sit there in your corner and say, oh, we must be ethical. But you don't scale. Isn't it unethical not to meet the challenge, and shouldn't we take off the gloves and start focusing on net positive?
45:28 Pierrick Devidal
So it can be. First of all, I want to be contrarian to your contrarianism. On these increasing needs: yes, without being doom and gloom, it's really scary and concerning where we are right now. When you open your eyes in the morning and look at the newspaper or whatever it is, it's just really, really scary. So I think it's right to be afraid of that massive increase in needs, in particular because of the combination of different factors. International conflicts are back. Violence is maintained at least at the same level as before, if not higher. And then climate change is going to be a major amplifier of vulnerabilities and suffering. So absolutely on that. But when was the last time you heard the humanitarian sector say that humanitarian needs were decreasing? It never happened. So it's also part of a legitimate humanitarian narrative to say 'needs are increasing'. And I think it's right, because we want to raise awareness of the need to do more for people who suffer. But at the same time, we need to be able to identify when this increase is due to different factors, a complex multiplicity of factors. So that was just on the point of increasing needs. But generally speaking, it's very true. On do no harm versus, what do you call it, net positive? First of all, do no harm. Let's deconstruct that one too. We love it, right? It's really something identitarian in the sector, and I think that's great. But we do harm. Every single humanitarian action does harm to some extent, to someone, somewhere. Hopefully it's not much harm, and I think what we mean by do no harm is actually: do as little harm as you can and mitigate that harm the best you can. But that's a completely different approach than coming in, you know, with the humanitarian toolbox and saying, 'Well, we're here, we're good, because we do no harm.' It's more complex than that, and we need to be humble about it. So it's good as an aspirational goal to do no harm. But in reality it's more complicated and sophisticated than that, and we need to be aware of that reality. And on net positive, then we come back to: who's doing it? There are different kinds of humanitarian responses, right? The humanitarian sector is a combination of different ecosystems. Some actors are so-called principled, others are not. Some abide by the principles of neutrality and independence, others do not. It's not a question of who's the best. Whoever can respond, provide responses to needs, and alleviate the suffering of people, that's great. If we can combine those different ecosystems and those different approaches and responses, that's fantastic. Why? Because it responds to the first principle, which everyone has in common, which is humanity: alleviate that suffering. So I think we should see it this way, as a combination. Now, if the private sector or the tech companies want to chip in and also contribute because they have the ability to respond at scale, that's also fine; they can contribute and do their thing. But it is not the same as the humanitarian response that we are working on, because we have to be principled, we have to abide by those principles. Why? Because we are operating in situations that absolutely require it. We don't do it because we like it. We do it because we have no choice. We want to operate on both sides of the front lines; well, try doing that without being independent. That will make it impossible. You want to reach different communities in very polarised settings; you have to be neutral, because if you're not, you won't get there. And if you do, you will probably be in danger. So there are these different elements, and I think net positive is one consideration, yes, but let's not be overly utilitarian. Sometimes it's not just how many needs you respond to, but also how you respond to those needs, who is getting the aid, and how it's reaching them. So yeah, do no harm and net positive are probably both useful things to keep in mind, but let's avoid being overly simplistic.
49:36 Lars Peter Nissen
Yeah, but what I also hear you saying is that it might actually be useful to have some less principled actors do some of the heavy lifting, and then the principled ones, such as the ICRC, can pick up some of the residual needs.
49:48 Pierrick Devidal
All of that is true at the same time. There are situations where the ICRC will not be needed, because others are doing their job, according to their own principles or their own methodologies, and that can also be fine. And if there is no need for us to come in, well, so be it. But we know from over 160 years of history that there are always places where you need neutral, independent, impartial actors to come in, because the situation is too complicated and polarised, and local actors don't have the ability to reach everyone, to go beyond front lines and things like this. So let's not throw principled humanitarian action away. It is still very relevant. Unfortunately, I want to say. But it's still very relevant, and so different kinds of responses can coexist within the humanitarian ecosystems. And I think principled humanitarian action is still a very important one. And we see nowadays, with the return of international armed conflict, that neutral, impartial, independent humanitarian actors are maybe not the only ones who will be able to make a difference, but they definitely can.
50:57 Lars Peter Nissen
Good. Neutrality?
50:59 Pierrick Devidal
Neutrality. So now we're getting to the slightly more complex ones, neutrality and independence. Neutrality is about not taking sides, to oversimplify again, between parties to a conflict, or in political dynamics and things like that. Now, we have a tendency to say, 'Well, technology, that's just a tool. It depends on what you do with it.' Well, no, it's slightly more complicated than that. Technology is not inherently good or bad, but it is also not neutral. When you aim for neutrality as a guiding principle of your action and you use something that is by definition not neutral, there's a problem. The problem is not being aware of that tension. Too many people think that technology is neutral, but it's not. Technology by definition represents the values of those who create, design, develop, use, promote, and fund it. These values can be good or less so, but they are sometimes politically motivated and politically interested. Let's not be naive about technology, especially today with AI. You see the international so-called race for AI, where private companies and states are competing for control of the entire supply chain: data, microchips, and all that. So technology is about power and politics. We need to relate to technology as we relate to politics.
52:33 Lars Peter Nissen
So what are you saying? Is it a problem if I have a Mac or a PC, does that send different signals? Or what are we talking about?
52:39 Pierrick Devidal
So not necessarily, but it could be. It could be now, and it could be in the future. The first point is that technology companies used to be mostly commercial companies driven by profit, like all other private sector companies. But nowadays, if you read the newspapers, you see their behaviour and their posture, and they're moving a little bit away from that: they are more and more active in the political sphere. They are taking positions on societal debates, mental health, protection of data, trusts and monopolies, and things like this. They're also taking positions on international issues, politics, security, and conflict. And so now we're actually dealing with companies that are providers of the technology we use, but that are at the same time political actors and sometimes conflict actors, and that completely transforms the way we should relate to them according to the principle of neutrality.
Lars Peter Nissen
So let's be concrete. If we take Ukraine: on one side we have Elon Musk, who owns Starlink and X/Twitter, and he geofences Starlink so that it can't be used in Crimea. Does that mean that the ICRC should stop using Twitter?
53:52 Pierrick Devidal
That's a very complicated question. I really don't have an answer to it. It's probably a shortcut and a difficult question, but I think it's not useless to ask it and to be just like, 'Whoa, hold on. We used to use that technical tool as it was, but now we realize it's very heavily politically connotated. It's forbidden in some countries.' So if we can use it in some countries and not in others, should we continue to use it? Are we going to be perceived as neutral if we do? Do we have a choice? I'm not sure we have a choice. So we become hostages of the politicization of technology, and then we come back to the problem: what alternative do we have? Can we continue to operate at scale and all of this without this technology? Are we in control of the fact that it belongs to a few companies and a few individuals? Absolutely not. So ideally, I would say yes, there is an alternative: build your own tech. Build humanitarian tech. That would be fantastic. But how do you do that? You need funds, you need technology.
54:55 Lars Peter Nissen
And you just told us not to waste our money innovating, right? So we can't really, right? And it's serious, because you can take Microsoft and the role they played in helping the Ukrainian government evacuate all their data into the cloud before the full-scale conflict. I mean, does that mean that we can't have PCs and Microsoft operating systems on ICRC computers or on ACAPS computers? What are the implications?
55:19 Pierrick Devidal
Exactly. That's what is emerging and coming at us. And we're waking up. I think we are not behind at the ICRC in thinking about this; we have been thinking about it for years. That's also why we set up a so-called delegation to cyberspace, based in Luxembourg, which is dedicated to thinking about these things and to trying to see where and if we can do R&D ourselves to manage the tensions and dependencies that we have. Yes, we are independent, but we are not independent. We are always dependent on many things: we depend on donors for money, for people, for access and trust, and all of this, and we also depend on technology to do our work and to do it at scale. That dependency is becoming problematic, because it used to not come with such heavy political baggage, and now it's there. And when tech companies that you are depending on actually take a position for one side versus the other, that creates problems of perception. It's not enough to be neutral and independent; you need to be perceived as such. So what does it say when we come in with a technological tool that may be perceived by local communities, or mostly local authorities, as 'Oh, you are not on the right side because of your tech tools'? Well, we need to pay attention to that. And then there are other factors, and one of them is the increasing use of sanctions in the technological domain. When you see the EU or the US taking measures against particular tech companies in China or something like this, that restricts our ability to choose. And so the fragmentation of the digital landscape at the political level will turn into something at the operational level. How are we going to navigate this? It could become a nightmare, and so I think it's not too early to have a discussion about it and to talk to those responsible, which are tech companies and states, and say, 'Hey, whatever you are doing at the political level, we understand that's your problem, not ours. But we don't want it to affect our ability to do our mission and to do it the way we should do it, which is in line with the fundamental principles.' So all these problems are emerging, they're coming at us, and we are a little bit overwhelmed, but we can't give up. We have no choice. That's why we need to think, and to think critically. And that's where the humanitarian principles are helpful for us. It's not to find solutions; it's to anticipate the problems, to prepare, to think, and to try to build solutions as we move forward.
57:48 Lars Peter Nissen
And when it comes to neutrality, the main issue you see is around the shifting role of tech companies, as they become incredibly powerful and have this direct impact on conflict, that we, by association, are tainted by that?
58:03 Pierrick Devidal
There is a risk... We are not tainted by a particular tech company, whose services or products we use, taking a stand on a conflict; that's not our decision, and we are not tainted by it. But there is a risk that the perception of us will be tainted by it, and that could lead to problems, and that's what we want to prevent and mitigate. So let's do that right now. Let's talk about it so that we don't run into those problems.
58:31 Lars Peter Nissen
Great. Independence.
Pierrick Devidal
So independence, to me, is closely linked to neutrality, because it's about where we stand: we should not be seen or perceived as being associated with particular political agendas. And then we come back to the question of technology being a tool for politics as well, a tool for power. And so we need to navigate those things. Now I'm making a little bit of a stretch with the principle of independence, but in practice, how do you measure independence? It's when you are able to take your own decisions, to decide how you do things and how you organize your operations to achieve your own objectives. And nowadays, coming back to what we discussed at the beginning, we are more and more dependent on technology to do these things. And that dependency, when a technology is politically connotated… for instance, some people say AI is a tool for neoliberalism, for neo-capitalism. Some people see it or conceive of it that way. So if we use AI, are we going to be perceived by those people as neoliberal capitalists and things like this? I don't know. Maybe. But we need to think about this. And it also reduces our ability to operate. For instance, we depend on connectivity nowadays. If there is no connectivity, most people can't communicate, because they use smartphones, we can't transmit data back to HQ, and all these kinds of problems. But we know connectivity has also become a huge political issue. I mean, look at the number of Internet shutdowns we see everywhere, in particular in places where we work: Sudan, Myanmar, and many others. And so when we depend on something that has become so politicized, we threaten our own ability to operate autonomously. If the Internet is shut down, can we run our operations? Probably, because we have backups and stuff. But is that always going to be the case? I don't know. And if we don't have that operational autonomy, can we really consider ourselves independent? So again, it's all adding complexity to the interpretation and implementation of the fundamental principles, and we need to realize that. It comes with technology. There's no other choice. We don't have perfect solutions right now, but we absolutely have a duty to ask ourselves these questions, because there is a risk in ignoring them and continuing to move forward at full speed, as we have seen in the sector for the past few years, and I find this very dangerous.
We use the expression… it's a little bit much, but we've been binging on technology, and I'm very afraid that we could have a massive digital hangover if we continue like that without being more responsible. So yes, as with drinking, let's be responsible.
Lars Peter Nissen
Pierrick, thank you for coming on Trumanitarian. It is so nice to have a conversation with somebody who is a bit of a Luddite, I guess (I don't think you're insulted by that label), who is careful with the political implications of technology, who doesn't just jump at the new shiny app or portal or whatever, but really tries to very carefully and deliberately think through what technology is doing to us and what we can do with technology to be better humanitarians, to have a better impact, to be more principled in the way we act. I wish we had more thinking like what you have presented today in the sector, because I do think that we are a bit too much on autopilot, at least for my comfort. So thank you for taking us back to basics and for coming on Trumanitarian.