In Minneapolis, United States, earlier this year, videos recorded by bystanders and independent observers played a decisive role in exposing the violence of federal immigration agents and dismantling false narratives constructed against the victims. The footage documented confrontations, killings, and abuses, directly contradicting official statements and forcing political and institutional responses. In this case, video functioned as evidence, truth, and accountability.
This moment, however, is defined by a deep contradiction. Never before has it been so easy to document reality, and never before has it been so easy to fabricate it using generative media. This tension lies at the heart of a recent feature by The New York Times, titled “More Than Ever, Videos Expose the Truth. And Cloud It, Too.” The article opens with a fundamental question for our time: Is seeing still believing?
It describes a paradoxical moment. Documentary evidence continues to reveal the truth and produce real political, legal, and social consequences. At the same time, AI-generated images and videos are eroding the credibility of visual media at an unprecedented pace.
The piece features WITNESS’ work and reflections from Sam Gregory, the organization’s Executive Director. For Gregory, the Minneapolis case is “clearly an affirmation that we can still show what’s real with video.” He also warns of a greater, structural danger: the political power of doubt itself.
After the killing of Alex Pretti, one of the Minneapolis residents shot by federal agents, AI-altered images circulated widely online. One such image was based on real footage but enhanced to appear clearer and more dramatic, and its circulation became a tool for discrediting authentic evidence. As Gregory explains, "you just need to cast doubt." This reflects the logic of the liar's dividend, in which the mere existence of synthetic media is enough to undermine trust in truth itself.
Each manipulated clip, synthetic recording, and viral fabrication does more than mislead. It corrodes the basic assumption that audiovisual content has a meaningful relationship to reality. When trust collapses, disengagement follows. People withdraw from public debate, civic participation, and demands for accountability.
This is why we at WITNESS argue that, as it becomes harder to film, verify, and preserve a trustworthy record, individual efforts alone are not enough. The defense of truth must be structural, collective, and sustained. In this context, defending reality also means building systems that allow evidence to remain credible, verifiable, and usable. It requires creating the technical, legal, and institutional foundations that support trust in public information.
Through the Coalition for Content Provenance and Authenticity (C2PA), we focus on building global technical standards that allow people to verify where content comes from, how it was created, and whether it has been altered. This work creates the infrastructure for provenance, enabling transparency about the role of AI and human intervention in images, video, and audio.
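To make the provenance idea concrete, here is a minimal, purely illustrative sketch in Python. It is not the C2PA specification or API; the function names, the manifest fields, and the unsigned JSON format are all hypothetical simplifications (real C2PA manifests are cryptographically signed and far richer). The core intuition it shows is the same: bind a cryptographic hash of the content to a record of who made it and what was done to it, so that any later alteration is detectable.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of raw content bytes as hex."""
    return hashlib.sha256(data).hexdigest()

def make_manifest(content: bytes, creator: str, actions: list[str]) -> str:
    """Build a toy provenance manifest binding a content hash to an edit history.
    (Hypothetical format; real C2PA manifests are signed, this sketch omits signatures.)"""
    return json.dumps({
        "content_hash": sha256_hex(content),
        "creator": creator,
        "actions": actions,  # e.g. ["captured", "cropped"]
    })

def verify_provenance(content: bytes, manifest_json: str) -> bool:
    """Check that the content still matches the hash recorded in its manifest."""
    manifest = json.loads(manifest_json)
    return manifest["content_hash"] == sha256_hex(content)

video = b"raw video bytes"
manifest = make_manifest(video, "bystander-camera", ["captured"])
assert verify_provenance(video, manifest)              # untouched content verifies
assert not verify_provenance(video + b"x", manifest)   # any alteration is detected
```

Even this stripped-down version captures why provenance infrastructure matters: verification becomes a mechanical check rather than a judgment call about whether footage "looks real."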
Through the TRIED framework, we assess AI detection tools based on real-world conditions. TRIED evaluates whether these tools are trustworthy, robust, interpretable, effective, and deployable in the contexts where they are most needed, including journalism, human rights documentation, conflict settings, and civic accountability.
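TRIED's published methodology is richer than any checklist, but the all-criteria logic described above can be sketched in a few lines (every name here is hypothetical, not part of the actual framework): a detection tool should clear a bar on each of the five criteria, because a tool that excels in the lab yet cannot be deployed in the field fails where it is needed most.

```python
from dataclasses import dataclass

# The five criteria named in the text; scoring scheme is a hypothetical illustration.
CRITERIA = ("trustworthy", "robust", "interpretable", "effective", "deployable")

@dataclass
class ToolAssessment:
    name: str
    scores: dict[str, float]  # criterion -> score in [0.0, 1.0]

    def passes(self, threshold: float = 0.5) -> bool:
        """A tool must meet every criterion, not merely average well overall."""
        return all(self.scores.get(c, 0.0) >= threshold for c in CRITERIA)

strong = ToolAssessment("detector-a", {c: 0.8 for c in CRITERIA})
weak = ToolAssessment("detector-b", {**{c: 0.9 for c in CRITERIA}, "deployable": 0.2})
assert strong.passes()       # clears every criterion
assert not weak.passes()     # high averages cannot rescue a non-deployable tool
```

The design choice worth noting is `all(...)` rather than an average: it encodes the framework's insistence that real-world usability is a requirement, not a tiebreaker.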
At WITNESS, we continue to invest in these foundations because preserving a trustworthy record remains possible. It requires infrastructure, standards, legal frameworks, and collective action that protect evidence, strengthen verification, and limit the harms of synthetic media. Defending truth in the age of AI is now a systemic challenge, and meeting it is essential for accountability, democracy, and human rights.