The International Federation of Red Cross and Red Crescent Societies (IFRC) has launched the World Disasters Report 2026, which frames harmful information as a de facto humanitarian crisis — one that can undermine access to aid, erode trust, and destabilize social cohesion, ultimately affecting safety and principled humanitarian action.
The report also includes contributions from WITNESS on key approaches to addressing AI and harmful information, including “Detection of synthetic content in critical contexts” (Contributor Insight 1.9) and “Verifiable provenance and the challenge of trust in the digital age” (Contributor Insight 2.9).
To mark the report’s launch, the IFRC hosted an expert panel featuring WITNESS Executive Director Sam Gregory alongside other experts. The full discussion can be accessed here.
The following is a Q&A with Sam Gregory from the panel discussion:
Question: Please tell us a bit about your organization and, from the technology perspective, particularly with the rise of AI-enabled harmful information, what tools are available today for humanitarian actors?
WITNESS is a global organization that works with frontline human rights defenders, journalists, humanitarians, and civil society actors who are trying to show, share, and defend the factual reality of what is happening in this age of AI. We support them directly, run a global rapid response mechanism on high-stakes deepfakes, and work on the technical standards and policies that enable us, at a global level, to understand what is real and what is synthetic, thereby better supporting frontline actors to defend the factual reality of what they experience.
As my co-panelist from the Spanish Red Cross flagged, we’re seeing an increase in how AI is being used to direct coordinated campaigns of misinformation, disinformation, and harmful information on social networks. We’re also seeing an escalation of AI-generated content. This is very visible. Over the past year, it has gone from a small percentage of the content that fact-checkers engage with to — potentially, in the current conflicts we’re seeing in the Middle East — a majority.
This increase involves the impersonation of people, a critical risk in the humanitarian space given how much rests on the credibility of local leaders and community organizations. It also involves the falsification and modification of scenes: not only creating something from whole cloth, but subtly changing elements of real footage. We have seen a growing number of cases where such subtle changes to images and videos are used to compromise the credibility of actors in a space.
I also want to place this in the context of compromised access to information and internet shutdowns: in these environments, the ability to share accurate information is further reduced. In addition, in the current conflicts in the Middle East, we are seeing AI itself spread confusion when it is deployed to counter AI deception: used as a fact-checking tool, it produces inaccurate results and misrepresents factual reality and the actions of humanitarian agencies.
In terms of tools, at WITNESS, we don’t start with the technologies, and I think this reflects the themes of this year’s World Disasters Report: you start with community-based knowledge, understanding, and community-based verification. With AI-generated content, this is not about telling communities to “look harder at images” and make a guess. That is terrible guidance. It places the blame on the wrong people, faulting them for failing to spot technical glitches that are rapidly disappearing.
The basics of media literacy remain essential: stopping, checking the source, and what we describe as OSINT practices, open-source investigation practices that ask whether something is corroborated by the other information that exists on the open web and in our social media environment.
Grounded in the foundations of media literacy and OSINT, let me talk about two technologies that are relevant. They are both flawed but necessary, and they could be improved. And I think there is a political onus for us all to push on this.
First: AI detection tools. Even in the best of circumstances, they are about 85–90% accurate. They work best with high-quality content that has not been passed through compressed, low-bandwidth messaging environments, and that is in English, Spanish, or another major language. That is not the majority of the world, nor the context in which we’re all working. The detection tools are flawed and could be better, but they are necessary; it is important to know that they only work in a limited set of circumstances.
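To see why a roughly 90%-accurate detector cannot carry a verification decision on its own, consider the base rates. The sketch below uses Bayes' rule with hypothetical sensitivity, specificity, and prevalence figures; none of these numbers come from the panel, they only illustrate how quickly false positives dominate when synthetic content is a small share of what circulates:

```python
# Illustrative only: why a "90% accurate" AI detector is unreliable alone.
# All numbers below are hypothetical assumptions, not measured figures.

def posterior_synthetic(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Probability that content flagged as AI-generated really is, via Bayes' rule."""
    true_pos = sensitivity * prevalence            # synthetic items correctly flagged
    false_pos = (1 - specificity) * (1 - prevalence)  # authentic items wrongly flagged
    return true_pos / (true_pos + false_pos)

# If only 5% of circulating media is synthetic, a detector that is
# 90% sensitive and 90% specific produces flags that are wrong
# most of the time:
p = posterior_synthetic(0.90, 0.90, 0.05)
print(f"{p:.2f}")  # prints 0.32
```

In other words, under these assumed conditions two out of three "AI-generated" flags would be false alarms, which is why detector output needs to be combined with OSINT corroboration and human reporting rather than published as a verdict.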
Second: content provenance technologies. These include watermarks, invisible to the eye, that show whether something was generated with AI. A notable example is SynthID, a watermarking tool from Google. If I run much of the AI-generated content coming out of the conflict in Iran through that tool, it will show that it is AI-generated; it is quite transparent and effective.
Then there is the C2PA: content provenance standards that show you, essentially, the recipe underlying a piece of content. This matters because the question is rarely just “is it AI or not?” A piece of content may mix human and AI elements, it may be a real camera image that was later modified, or you may be trying to prove that something was made with a camera and contains no AI elements at all. The C2PA standard surfaces that recipe, that set of ingredients, so you can decide: Is this creative? Is this humorous? Or is it harmful?
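Concretely, a C2PA manifest records that recipe as a list of assertions attached to the file, including an edit history under the `c2pa.actions` assertion. The sketch below illustrates the idea on a simplified, unsigned structure; real manifests are cryptographically signed binary data read with dedicated C2PA tooling (such as the open-source c2pa SDKs), and the sample fields here are an illustrative assumption, not a captured manifest:

```python
# A sketch of reading the "recipe" a C2PA manifest exposes.
# The manifest dict below is a simplified illustration; real manifests
# are signed structures parsed by C2PA tooling, not plain dicts.

manifest = {
    "claim_generator": "ExampleCamera/1.0",   # hypothetical capture device
    "assertions": [
        {
            "label": "c2pa.actions",          # the edit-history assertion
            "data": {
                "actions": [
                    {"action": "c2pa.created"},
                    {"action": "c2pa.color_adjustments"},
                ]
            },
        },
    ],
}

def list_actions(manifest: dict) -> list[str]:
    """Pull the edit history (the 'ingredients') out of the actions assertion."""
    for assertion in manifest.get("assertions", []):
        if assertion["label"] == "c2pa.actions":
            return [a["action"] for a in assertion["data"]["actions"]]
    return []

print(list_actions(manifest))  # prints ['c2pa.created', 'c2pa.color_adjustments']
```

A recipe like this one would support a claim such as “captured with a camera, then color-adjusted, with no generative step,” which is exactly the kind of determination the panel describes.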
I would note that the platforms are still not taking seriously their responsibility to enable us to discern what is synthetic and what is real, and to make good determinations about that.
Practical Steps for Building Trust
Question: What practical steps can humanitarian organizations take to strengthen trust and confidence in humanitarian information?
First, we have to acknowledge the environment that is described in the World Disasters Report. We’re in an environment where trust is low, including in institutional actors. The information environment is being weaponized, and AI is complicating that. That environment is not going to change. So we have to start from that assumption as we think about how to build trust.
We also have to realize it’s not just about debunking individual claims (yes, we’re going to have to do that when the stakes are high); it’s about a broader fog of doubt that is in some actors’ interest to maintain, including the actors behind the case the Spanish Red Cross described, where harmful information circulated in relation to the floods in Valencia.
This includes scenarios where someone outright denies that something happened, as in Valencia, and claims it is AI-generated. Or they simply say, “We don’t know, and we can’t know in this environment.” A third of the cases that come to our Deepfakes Rapid Response Force are examples of the so-called liar’s dividend, or of plausible deniability: people flatly deny that an event occurred and put the onus on human rights and humanitarian actors to prove that it happened or is real.
We also sometimes see doubt being perpetuated because of the ways people are interacting with content in normal communicative ways. People use AI to create playful content. They create a meme from a protest. Then it gets recirculated as real, and people say, “Wait — that’s not real, so that whole protest didn’t happen.” Or someone uses AI to enhance an image because they think it’ll make it easier to understand, and then it gives a false result on a detection tool, and people claim it’s AI and weaponize that.
So, in that environment, I think there are four key steps:
- Invest in community-level verification capacity so that communities can use the AI detection tools and the OSINT approaches I described. Support that capacity within the communities you work with, and bring it closer to them; that is what we work on with WITNESS’ community-based guides to verification and our accessible AI detection guidance. We can’t be in a world where only a few people can prove what’s real, where that power lies only with institutions and not with communities. Investing in community capability to verify content and detect deceptive AI is critical.
- Strengthen factual reality. This is about knowing that your information enters a weaponized environment and making choices about how to present it in a way that is transparent and strong, using emerging tools like provenance. Then you can say, “Look, you’ve just claimed this was made with AI. No — we can categorically show this is a real image from Valencia or from another location,” or “We can show that AI was used, but in a non-malicious way.”
- Collaborate with trusted intermediaries. Think closely about who your audience trusts. That could be influencers, it could be local media — and I should note that the role of local and independent media is critically important. This is all so tied together with the collapse of local media and the independent media ecosystem. The danger is that people think, “Just do it with TikTok influencers.” It’s not about whether it’s a TikTok influencer — it’s whether that TikTok influencer is trustworthy to the audience.
- Be transparent rather than certain. With detection tools, we might say, “We got to 85% confidence on the technical side. Here’s why there’s a 15% gap in our confidence, and here’s how we filled it with human reporting.” Accepting that remaining 15% comes down to the trust you have in us, so you have to be transparent. That is particularly important for actors in our space, because we rely on trust, and when we make a mistake, it is weaponized against us. Being transparent is better than being certain, because if you’re wrong, it will be used against you.
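The last step above, pairing a detector score with human review and publishing both, can be sketched as a simple triage rule. The thresholds, score convention, and wording below are hypothetical assumptions for illustration, not WITNESS practice:

```python
# A sketch of triaging verification work by AI-detection score, so any
# public claim states both the technical confidence and how the gap was
# filled. Thresholds are illustrative assumptions, not WITNESS practice.

def route(detector_score: float) -> str:
    """Decide the next verification step from a detection score in [0, 1],
    where 1.0 means the detector is certain the content is synthetic."""
    if detector_score >= 0.85:
        return "flag as likely synthetic; publish the score alongside the claim"
    if detector_score <= 0.15:
        return "treat as likely authentic; still cite corroborating sources"
    return "inconclusive: escalate to human OSINT review before any public claim"

for score in (0.92, 0.50, 0.05):
    print(f"{score:.2f} -> {route(score)}")
```

The point of the middle branch is the panel’s point: when the tool cannot close the gap, human reporting does, and the published claim says so.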
Further Interventions During Q&A
Why is this harmful information shared – who has a stake in destabilizing trust?
There is a diverse range of actors who have an interest in this, and it really is context-specific, though it includes many actors who predate the current crises. However, one thing I’ll point to — and I think this is enabled by AI — is that a lot of this is about monetization of attention. We’ve seen this consistently in different contexts. It follows on from patterns we saw with so-called “cheap fakes” and “shallow fakes” before, where people are sharing content because they can raise visibility and money off it, and the platforms are very slow to respond to that. AI has made that easier. It’s much easier to spin up a channel that looks like conflict footage, and you have no vested interest in the truth or the integrity of the organizations you might undermine with that content. You have an interest in rapidly building up a profile or generating money from it, and the platforms enable that.
The platforms are also failing in another way. What we see now is that fact-checking is being deferred to chatbots. People ask Grok on X, and those chatbots are fundamentally flawed as sources of information. It is of a piece with the rush of AI platforms to push out AI overviews and fact checks as the way we access information. So you can bundle together individual motivations to monetize, platform motivations to push forward AI, and more traditional motivations of actors who want to shape an information space in their own interests.
Why do we have to have cross-sectoral collaboration?
None of us wants the potential future where AI creates 100 versions of every situation or event, and 100 clones of each public figure speaking, and we have no idea which is real or true. That is not a good vision. There is a cross-sectoral need to work together on those shared building blocks of reality and the ability to discern them. That need isn’t unique to the humanitarian sector; it is shared with other sectors that also care about verifiable reality.
That brings me back to some of the solutions I talked about earlier, from detection to provenance. Making authenticity and provenance architecture accessible to local media, humanitarian actors, and human rights actors, rather than repeating the flawed deployments we have seen so far, is a question of political will, a technical question, and a question of companies getting on board and doing it. This architecture lets you know the recipe of a piece of content, what is human and what is synthetic. We should all have a stake in it: governments, civil society, and the humanitarian sector alike. Our cross-sectoral interests are aligned.
A final takeaway
We are in an information environment that is increasingly mediated by AI and full of AI content, and we’re not ready. We can’t have the question of “Is it real?” become completely unknowable. We must maintain the ability to know what is real. These challenges are relevant to all of us, and the solutions are practical, they are urgent, and they are achievable.