The ethos of ‘move fast and break things’ doesn’t work for humanitarians. If we break things, we break people.
But technology is changing the nature of conflict. International Humanitarian Law cannot evolve to meet these challenges without input from the private tech actors shaping the battlefield.
This week’s guest, Philippe Stoll, Senior Techplomacy Delegate at the ICRC, works to connect humanitarians to tech entrepreneurs and other relevant minds over the dilemmas presented by new technologies in conflict.
From biometric systems to the ethical risks of data misuse, Philippe shares how the ICRC is developing cautious, problem-driven tech policies aimed at protecting vulnerable populations. He also discusses his obsession with giving concrete meaning to abstract ideas and how immersive “Digital Dilemmas” installations can help tech developers and humanitarians understand each other’s worlds.
Questions about how to handle tech in conflict zones aren’t going anywhere. For anyone interested in the future of humanitarianism, this conversation is essential.
Transcript
Recently, I've begun noticing LinkedIn posts with hand-drawn notes and mind maps about AI and humanitarian action. The person behind the posts turned out to be Philippe Stoll, who works with the International Committee of the Red Cross (ICRC).
The discord between my mental image of a rather traditional, hierarchical humanitarian organization that I worked with some 20 years ago and this untraditional concept of a techplomat, combined with an effective, innovative way of communicating around AI, piqued my interest, so I contacted Philippe, and he agreed to come on the show. It's a great conversation about the potential of AI to do good and to do harm, and about how we as humanitarians can use AI responsibly.
As always, the ICRC has done its homework, and the perspective that Philippe lays out is principled, thoughtful and smart. I should mention that the episode was recorded before the dramatic shift in US policy on foreign aid, so you will not hear any reflections on that issue. Once you have listened, I'd like to encourage you to share the episode and your perspective with your network. It's important, especially in these times, when we need to find new solutions quickly, that we as a community have a robust conversation about a technology that has the potential to disrupt us for good and for bad. Enjoy the conversation.
Lars Peter Nissen
Philippe Stoll, welcome to Trumanitarian.

Philippe Stoll
Thank you.

Lars Peter Nissen
You are a tech diplomacy delegate with the ICRC. And to be honest, you are the first one I've ever met. So maybe let's begin with that: what is tech diplomacy, and what does a tech diplomacy delegate do?
Philippe Stoll
I have to say it's a bit of a provocation to call myself a “techplomat”, in order to create some reactions. This is the first objective. But we didn't invent this terminology. It comes from Denmark, which a few years ago appointed an ambassador for digital technology. And I think the simple fact that technology has such an impact on armed conflict is the main and first reason for having such a position. Let me explain a little bit what I do on a regular, daily basis. I'm trying to connect several different spheres, different sectors: the humanitarian sector and the diplomatic sector, which are quite traditional, but let's bring in technology and academic institutions. I'm sitting at the intersection of these four sectors, trying to bring in the perspective of affected people and the impact of digital technologies in armed conflict.
Lars Peter Nissen
It's interesting that you were inspired by Denmark, and being Danish, of course, I'm really happy that we do something useful sometimes. But what strikes me also is that the ambassador that Denmark appointed to the tech companies actually struggled to get traction. I don't think he found the same respect with the tech companies that he would find with member states. Is that an experience that you have had as well?

Philippe Stoll
Given the fact that it's pretty new, I'm convinced that everyone is struggling. But I want to give you another interesting figure: we now have 45 countries around the world with an ambassador dedicated to digital technology. It shows that there is a need, there is traction, there are people working on this. And I think it's a period where everyone is trying to understand who does what. That's why we try to invest a little bit. So we are two people,
one of my colleagues, who is based in San Francisco, and myself, and we try to find the right space to have these sometimes difficult discussions with companies about their role and the impact they have in places where they sometimes don't understand how things function, and we try to bring in this perspective. I think this is not different from what the ICRC has been doing for 150 or 160 years now.
I worked in the ICRC in the early 2000s, and I'm surprised to see that there is still this level of... The word delegate was invented because we were delegated to work in a context. And here, I think it's the same. There is a little bit of latitude to explore, to test things. But there is also the simple fact that everyone agrees digital technologies have an impact, the same way...
business companies had an impact in the past, and there is the Montreux Document. The same way we know that religion has an impact on conflict, et cetera. So there was always a space for exploration. There were other initiatives, such as the Health Care in Danger initiative, or the initiative on women in war, et cetera. So maybe this is part of this natural desire to try to follow the evolution of conflict, and this is one of them.

Lars Peter Nissen
Yeah, and I think it's really fantastic. But at the same time I also have this experience of working with the ICRC.
And that was in the good old Lotus Notes days, when you couldn't just send a message from the field to headquarters. You had to submit it for sending, and the head of delegation and the deputy head controlled what could be said to headquarters. A very, very controlled use of technology, a very careful and deliberate approach, and rightly so. I fully get that. You've been there for more than 20 years now. Do you see a change in the ICRC's approach to technology? Have we become a bit more loose in the way we use technology?

Philippe Stoll
I think there is still this strong mindset that we have a responsibility, and that element remains. So there is an internal debate all the time, and I think this was already the case in the past. There were people agreeing and disagreeing. For me, what matters, and this is something my role is also about, is to create this internal dialogue. In the past, before I started, there were some discussions, but they were very polarized. So one of the things that is not visible from the outside is that my role implies bringing everyone around the table: someone from the ICT department, a very tech person, with our legal colleagues, the protection delegates in charge of digital technology, sitting with the data protection officer, et cetera. This is where we get better, cleverer, I think, because we have all the different opinions. And as you know very well, there are never easy answers to these complex questions. I think this process has helped to allow a good discussion, but also to make the ICRC a little bit more, how to say,
not modern, but at least going in the right direction: embracing technology, but not cluelessly.

Lars Peter Nissen
Yeah, I guess in a sense we have all been disrupted. Institutions have been disrupted by the way information flows freely today. And if you can't control it, you can't sit in a corner and pretend that you can. So you have to evolve. And I guess that's also what's happening internally in the ICRC.

Philippe Stoll
Absolutely. And again, you have,
like in any organization or in any society, the progressive forces against the more conservative, et cetera. What matters to us is to have the right balance between those who have experience from the field, those who have tech experience, et cetera, and to find the space where we can have this dialogue and thereafter take an informed decision.
For me, the worst case would be a decision taken by one group of people, or one person, based on his or her own experience, and that's it. Whereas we know that the world is complex, and the space where we work, especially conflict, is even more complex. So bringing all these different voices, ideas and intelligence together is what matters. And I can add another element: of the four areas that I'm trying to interconnect, the academic world is where we have also noticed our own limits. As a humanitarian organization, we don't have, how to say, the financial resources to do much research ourselves. So working with the academic sector allows us to bring in another level of understanding and knowledge that helps us to take the right decisions.
Lars Peter Nissen
One thing that has struck me as I've followed you for a while, your work and your posting on social media and so on, is the language, or the look and feel, the brand if you want, of the tech diplomacy you promote. It's very often organic: hand-drawn sketches of something from your notebook. It's very not tech, actually, the way you try to communicate about this. I know you've explored art installations with the UN. Could you just share with us a bit your thinking around not just what you say, but how you say it?

Philippe Stoll
I have this obsession with trying to make things as concrete as possible. In my previous posting, I was working on the question of how do we expand neutrality?
So this was already kind of abstract. And I come from a background of trying to explain these elements to people, to weapon bearers, to governments. With digital technologies, I took myself as an example: there are many things I don't understand. The way to make things as concrete as possible is through simplification, not making them simplistic, but making them accessible. Working with artists, working with installations, small tricks like a choose-your-own-adventure type of experience, or the drawings, makes these topics much more accessible, I think, for audiences. And it's also a new language. Speaking with the tech sector means learning a new language too. They don't understand us. If I come with the classic jargon, they will run away. So I have to listen to them, I have to understand them.
So these are the kind of small elements I started to work on at the beginning of my position, to understand what will work and what will not.

Lars Peter Nissen
And so where do you get traction?

Philippe Stoll
I think the element of bringing in the conflict lens, but not in a scary way: making them understand things that they don't know. Very often the tech people have a very, I would say, black and white vision of conflict: the good persons, the bad persons. We know very well that the world there is much more complex than that. And if you bring them, in a simple way, and again not in a simplistic way, elements that they don't know, a door is open. And when the door is open, you have a little bit more time to explain things. The fact that my title is Techplomacy is already opening a door, because people are saying... And this is the first trick. And then you bring one or two examples: okay, we work in South Sudan. Will your product work there? Because maybe there is not always electricity, or there is dust, or there is no data, or the data exists but only in handwriting. People like challenges, and that's how we try to bring it in. The other element is to try to make it as
personal and concrete as possible. The installation called Digital Dilemmas is about that. For my colleagues, speaking about biometrics can seem abstract, so we wrote a small story that makes it very concrete for colleagues who don't understand the technologies. At the same time, for tech companies who do understand biometrics, the situation is also simple, because they understand the tension. It's a physical installation. We have had it in small conferences, where it's just a screen and you engage, but we have also had big ones, such as at the War Memorial Museum in Seoul, for 10 weeks on 500 square meters. That was much more immersive, and people were really dragged into this universe of conflict and technology. The principle is quite simple: you get into the shoes
of a person affected by an armed conflict or of a humanitarian worker, and you have to take a decision based on a small story. This is the dilemma. Very often it shows the complexity, and sometimes how little choice people have. And this gives a perspective that allows for a stronger dialogue. We had it at the World Economic Forum, we had it in New York at the UN headquarters, we had it in many places around the world. And every time the reaction is: wow, I didn't understand this dimension.

Lars Peter Nissen
And I think it's really brilliant, right? Because one of the problems we're facing is that the people, the tech bros if you want, who are developing this technology have probably very rarely felt as vulnerable as the people we work with in conflict. They have not experienced that loss of agency and that threat to their physical integrity. So making it personal for them is, I think, really very powerful.
Absolutely, and what you are saying is exactly the genesis of the idea. When we speak about torture, when we speak about hunger, when we speak about bombing, people feel it. Even if you haven't lived through such a traumatic experience, you can understand it. But if I speak about cyber attacks, disinformation, et cetera, in a conflict setting, it's totally abstract. So the objective is to make it as concrete as possible, to have a good basis for that.
Lars Peter Nissen
One of the things you have been working on in addition is a new policy for the ICRC on AI. Just walk us through what's in the box.

Philippe Stoll
On top of the fact that it's a societal landslide, where you can see AI everywhere, we noticed that both humanitarian organizations and armed groups, armed forces, fighting parties were using AI. And we thought that we have to find the right balance: using it to take the benefit, or the potential benefit, but doing so in a careful and, I hope, clever manner. That's why we created this document, which is what I call a common grammar for our own internal purposes. And we started from pretty basic things. First is who we are.
We are not a business; we are not here to make money. Therefore the use of AI is not to generate more profit, but simply to be sure that we are using it to help populations. So we look at it from a problem-driven, not a solution-oriented, angle, which is often the case elsewhere. We very often heard people coming and saying: oh, there is this great AI application, we should use it. And then we try to find the problem to...

Lars Peter Nissen
Yeah, yeah, exactly, a solution looking for a problem.

Philippe Stoll
Exactly.
So this is something which is important. The second element is the people we work for. They are not clients; this is something we need to be clear about. They are vulnerable people, people who have lost, most of them, everything, including their dignity. And this is something we have to be careful about. If we test things, if we try and use tools that are not properly understood, there is a risk of creating more harm than benefit. And that dimension of understanding a tool, as opposed to just deploying it, is getting more and more complex with AI, because even specialists don't understand how these tools function. So there is this second dimension. The third element is the places where we work. In sub-Saharan Africa, for example, there are
places where the amount of data concerning the contexts where we work is very small, and it comes from one, two or three sources. So the representativity is not there. Sometimes it's only oral information, or it's handwritten. It's not the clean data that we see, for example, in weather forecasting in Europe, which works very well there but not in some other places. So using tools that were developed in the West might not work straight away in the places where we work. And the last element is that we need to be coherent with our external position on the use of AI in armed conflict. We have a strict position that AI, especially in autonomous weapon systems or in decision-making, should have a human in the loop, and that the human should understand the
decision that is prompted by a machine. And we don't want to contradict ourselves by deploying, let's say, a chatbot with no human answering another human. These are the kinds of things that pushed us to create this document.

Lars Peter Nissen
Yeah, so really it's about beginning by understanding your identity, understanding the people we work with, or serve, or whatever you want to say, and then walking the talk. I like that a lot.

Philippe Stoll
It's not easy. I'm not saying that this document will solve all problems, but at least it's a good basis for dialogue between our colleagues. And as I mentioned before, what matters is to have all the people around the table to take these decisions with this common grammar.

Lars Peter Nissen
Yeah, I totally get that it's a tool for internal alignment in the ICRC.
One of the things we have been struggling with at ACAPS is that this technology moves so quickly. One of my concerns is that we start something and fall so much in love with what we're doing that we stop looking out the window, while technology is moving at 120 km an hour, and we end up with some kind of proprietary bricolage project internally while something better exists outside. How do you write a policy for a technology that is unfolding as rapidly as this?

Philippe Stoll
The policy, if you read it, is really at a high level right now; we are aware that this is a big frame. It means that potentially, for new developments, if there is something specific, let's say the use of AI for medical diagnosis,
we need to have something maybe more granular. If we want something for the use of AI to detect landmines, for example, we need maybe something more specific. So we have the big frame, which was very important, because it helps us to set the cursor and then look at the different technologies. The first step, getting this framework, was long and intense, but at least we have it, and it is a very good basis for discussing the more specific technologies. Now, I agree with you that it goes fast. But when you discuss more specifically with experts from the tech sector or the academic world: no, it doesn't go so fast. There is a lot of marketing, people want to sell their products, but the, how to say, the actual evolution is not that fast. So there is space for discussing it, for taking a little bit of a step back. And there is something else we want: to own the timing. We don't want to be pressured by the fact that things go fast. We decide the way we want things to go.

Lars Peter Nissen
So I like a lot the position you outlined and the approach to
leveraging this technology. But a question in the back of my mind is this: do no harm is very central to what you do. I think any humanitarian would be petrified if we actually harmed the people we're trying to help. But at the same time, we can't really explore this technology without experimentation. So how do you test and experiment without doing harm to the people we serve?

Philippe Stoll
Clearly. And back then, I think, we found some solutions. On one side, we have signed partnership agreements with leading academic institutions around the world. We have a very strong agreement with the two Swiss Federal Institutes of Technology, in Zurich and Lausanne, and there is a common platform for doing research called Engineering Humanitarian Action. There we can find researchers who are ready, and they are often
ultra-enthusiastically ready to help, to find solutions, to do research. It's intense, but at the same time the results are very interesting. The other element is that we have decided to create a kind of sandbox, a space where we can do this research. This is the creation of our office in Luxembourg, and I know it sounds very strange to have the International Committee of the Red Cross sitting in Luxembourg, but there is a space there where we have servers and where we can explore things. So we do all this experimentation outside our own system; it's a system that works by itself. We might use data which are totally anonymized, or random, or coming from outside our own data. And once we are quite happy with
the discoveries we have made, we bring them into our own system.

Lars Peter Nissen
How come Luxembourg, I have to ask?

Philippe Stoll
Luxembourg is a kind of coincidence of several elements. First, the country itself was very keen to finance something different from classic humanitarian action. They have means, technological means, and we were interested in discussing with them, and that's how it happened.

Lars Peter Nissen
Because, I mean, of course you could have picked Nairobi or Brazil or...

Philippe Stoll
That's the next step, absolutely. We had to start somewhere where the good energy was, to have a kind of first use case. But definitely this is something we are looking at. Right now we are developing small partnerships, direct engagement, maybe with a university, et cetera. But this is in the making.
Clearly, we want to go there too, and we know that the discoveries, the constraints that we bring, will also be beneficial in the places where we develop these tests and this research.

Lars Peter Nissen
Maybe to jump to a different part of ICRC's business: you are sort of the custodians of International Humanitarian Law. Every four years there is an international conference where all the signatories to the Geneva Conventions, the National Societies, the Federation of Red Cross and Red Crescent Societies and the ICRC are present. Now, when we talk about this, of course, the presence or absence of OpenAI, Anthropic, Meta, Google, SpaceX, what do I know. Isn't that a problem in this space?
Really, the battlefield is being reshaped by private actors who are developing these incredibly powerful technologies, yet they are not present at the platform we have for discussing how to evolve IHL.

Philippe Stoll
Yes, that's the status quo; it's how it is. The Geneva Conventions and International Humanitarian Law are state-driven. It means that only states can sign conventions, adopt new resolutions, et cetera. Within the framework of the conference last October, we managed to pass a resolution with specific elements related to the protection of civilians and civilian objects against the potential harm of ICT activities. So we are trying to continue to push the boundaries of International Humanitarian Law, to say that the way it applies in the classic physical world, it also applies in the ICT world. That's the first thing. Now, we have been discussing with several tech companies, given the fact that they have a more proactive role than in the past, and there is a provision in International Humanitarian Law: the notion of direct participation in hostilities. The moment you are directly participating in hostilities, you lose
your civilian protection. And some tech companies approach us to say: if we provide some technicians, if we do this, what is our status? This is a question that has been addressed by a colleague in a blog post. And it's interesting: while engaging with tech companies was sometimes a bit difficult before, now they come to us with that question. If I do this, will my staff lose their protection? What is my liability as an employer towards them, et cetera?

Lars Peter Nissen
I totally had not thought about that, but that's really interesting. So hypothetically speaking, if a company, you know, geofences some satellites so they can only be used in part of a territory, or somehow influences the conflict, they could actually be a legitimate target in that conflict?

Philippe Stoll
That's a legal debate that is happening now. If you provide some technologies in favor of one government or one party to a conflict, to what extent is this giving a military advantage, and might you therefore become a target?

Lars Peter Nissen
That will make them listen.

Philippe Stoll
That's what we hope. But not only that; definitely, this is a way to engage in a substantial dialogue.
Lars Peter Nissen
Great conversation. It's always fantastic to talk to the ICRC. You guys really dot all of the Ts and cross all of the Is, so to speak. And we only have one thing left: you have to give us a prediction. Do you have a prediction for something consequential that will happen in the world, the humanitarian world if you want, over the next six months?

Philippe Stoll
One of my big worries, and in that sense what I see as the big challenge for the next six months or even more, is how do we build trust with communities, with governments, in a world that is more and more polarized, where truth is not always seen the same way, where information is manipulated, et cetera. So for me, that's the biggest challenge for not only the next six months, but the next years.

Lars Peter Nissen
And do you have a prediction?

Philippe Stoll
One of my predictions, or maybe it's a hope, is that we can find a tool that validates, that says that what we are saying is true. A kind of watermarking for pictures, or something that will allow us to be stronger in our communication.

Lars Peter Nissen
Fantastic. We will check in with you in six months, Philippe, to hear whether you have found your magic marker.
And so finally, thank you so much, Philippe, for coming on Trumanitarian. It's fantastic to meet you in person. I'm a huge fan of the work you do, and of the ICRC, by the way. So thank you for helping all of us position ourselves in this extremely difficult space of how to leverage AI as the powerful technology it is.

Philippe Stoll
Thank you to you. And don't hesitate to challenge us also, because that's how we grow. Having contradicting opinions is always important, so don't hesitate.