By Nkem Agunwa and Joojo Cobbinah

A political firestorm erupted in Ghana after a video surfaced of Dr. Matthew Opoku Prempeh, the Vice Presidential Candidate of the former ruling New Patriotic Party (NPP), making a disturbing promise to small-scale miners. Speaking at a campaign event, he assured them that the government would return all seized excavators and allow them to operate without interference.

“We will give all seized excavators back to you so you can do your work peacefully,” he declared, flanked by party supporters. “This will help everyone get money to fend for themselves and contribute to the development of their communities. Dr. Mahamudu Bawumia will not collapse small-scale mining.”

Within minutes, the clip went viral, sparking intense debate. Could this really be true? Given Ghana’s long-standing battle against illegal mining, known locally as galamsey, and its devastating impact on forests and water bodies, the implications of his statement were enormous. For environmental activists, the video was nothing short of a nightmare. However, before journalists could investigate further, Dr. Opoku Prempeh’s campaign team issued a swift response, dismissing the footage as a doctored fake. Journalists across the country were stunned. One, clearly exasperated, remarked, “What kind of gaslighting is this? Are they seriously trying to make us question what we just saw in multiple videos?”

The challenge of verifying the truth

The statement by Dr. Opoku Prempeh, recorded from a campaign stage with a sign language interpreter translating his words, presented a unique challenge. Seeking clarity, Ghanaian media outlet GHOne consulted two independent sign language interpreters. Their findings? Two different interpretations. This only deepened the confusion. Determined to uncover the truth, journalists in Ghana analyzed multiple versions of the video uploaded to social media, each showing a different angle. In every version, Dr. Opoku Prempeh's words remained unchanged. Yet the claim of manipulation persisted, forcing journalists to consider forensic video analysis to establish the video's authenticity.

In Ghana, newsrooms faced a critical obstacle: they lacked the tools and expertise for such a detailed investigation. The need for expert verification became urgent. That’s when they turned to WITNESS’ Deepfakes Rapid Response Force, a global team of media forensic specialists who provide timely assessments of suspected deepfakes.

Verification outcome in the age of political manipulation 

As soon as the campaign team labeled the video as fake, the stakes grew even higher. This was no longer just about one political statement; it was about the lengths politicians (or their opponents) might go to distort reality and manipulate public perception.

Four expert teams reviewed the footage. Their findings were unanimous: there was no evidence of AI manipulation. Each team noted that current AI technology is still incapable of generating full-body deepfakes from multiple camera angles with lips, audio, and hand movements in perfect synchronization. Two of the teams also ran AI detection tools over the footage; neither found any signs of tampering.
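To make that synchronization point concrete, the sketch below shows one simple consistency signal an analyst can compute: whether mouth-region motion in the footage rises and falls with the energy of the audio track. This is a minimal illustration, not the Rapid Response Force's actual tooling; the file names, the mouth-region crop coordinates, and the reliance on ffmpeg, OpenCV, NumPy, and SciPy are all assumptions made for the example, and the result is at most a weak signal to hand to human experts.

```python
# Illustrative sketch only: correlate mouth-region motion with audio energy.
# File names and the mouth crop are hypothetical placeholders.
import subprocess
import cv2                      # pip install opencv-python
import numpy as np
from scipy.io import wavfile

VIDEO = "campaign_clip.mp4"     # hypothetical input file
AUDIO = "campaign_clip.wav"

# 1. Extract a 16 kHz mono audio track with ffmpeg (assumed installed).
subprocess.run(["ffmpeg", "-y", "-i", VIDEO, "-vn", "-ac", "1",
                "-ar", "16000", AUDIO], check=True)

# 2. Measure frame-to-frame motion in a (hypothetical) mouth region.
cap = cv2.VideoCapture(VIDEO)
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0     # fall back if metadata is missing
motion, prev = [], None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mouth = gray[200:260, 300:380]          # placeholder crop coordinates
    if prev is not None:
        motion.append(float(np.mean(cv2.absdiff(mouth, prev))))
    prev = mouth
cap.release()

# 3. Compute per-frame audio energy (RMS) aligned to the frame rate.
rate, samples = wavfile.read(AUDIO)
samples = samples.astype(np.float64)
hop = int(rate / fps)
energy = []
for i in range(len(motion)):
    seg = samples[i * hop:(i + 1) * hop]
    energy.append(float(np.sqrt(np.mean(seg ** 2))) if len(seg) else 0.0)

# 4. In authentic speech footage, mouth motion tends to track audio energy;
# a near-zero or negative correlation is one weak signal worth escalating
# to human forensic experts, never a verdict on its own.
m = np.asarray(motion)
e = np.asarray(energy)
print("motion/energy correlation:", np.corrcoef(m, e)[0, 1])
```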

The silence from Dr. Opoku Prempeh’s camp following these findings was telling. By dismissing authentic footage as fake, they had exposed a growing political tactic: using AI’s mere existence to cast doubt on real events. In fact, in 2024 alone, a third of cases escalated to WITNESS’ force involved politicians attempting to discredit authentic content by falsely claiming it was AI-generated.

VIDEO: Deepfake or Not? Findings from WITNESS Deepfakes Rapid Response Force 

How mis/disinformation impacts newsrooms and communities

This case underscored a troubling reality: newsrooms bear an unfair burden in defending facts against political disinformation. At the same time, communities that rely on these news outlets for credible reporting are left vulnerable to deception. A major gap exists in access to advanced detection tools and capabilities. While AI verification technology is improving, many of these tools remain expensive, and the expertise remains inaccessible to the journalists who need them most. Underscoring that gap, experts from the Deepfakes Rapid Response Force swiftly assessed the Vice Presidential candidate's video even without relying on AI detection tools. Their knowledge of current AI capabilities, combined with meticulous content analysis, enabled them to verify its authenticity. By examining the multiple angles, the sign language interpreter, the crowd, and other visual cues, they reached a confident conclusion, then used AI detection tools to further validate their findings.

In Ghana, ‘news cards’, visually appealing news graphics branded with media house logos, have become a popular way to disseminate information quickly. These snapshots help readers stay informed, particularly during high-stakes moments like the 2024 election season. However, as the campaign intensified, news cards became a weapon of mis/disinformation. Fake versions, complete with forged logos of trusted media houses, spread across social media, creating confusion and shaping public opinion with misleading content.

To counter this, many media outlets began stamping fake news cards with bold “FAKE NEWS” labels, posting corrections on their official social media handles. But an unexpected challenge emerged: politicians and their supporters co-opted this strategy. Whenever an unfavorable news card surfaced, whether true or not, partisan actors quickly labeled it as fake and spread their own “FAKE NEWS” stamps. The result was a deliberate blurring of truth and falsehood, making it increasingly difficult for the public to distinguish real reporting from political spin.

Screenshot: Example of a “FAKE NEWS” stamp in circulation on social media

Employing voice notes against disinformation

Recognizing the limitations of the “FAKE NEWS” stamp, GHOne introduced a new strategy: voice notes embedded in news cards, marked with a prominent “SOUND ON” tag. This innovation gave the public direct access to unfiltered audio recordings of politicians’ actual statements. By allowing audiences to hear words exactly as they were spoken, the tactic made it harder for political actors to dismiss or distort their own remarks.

Yet while audio verification has helped counter mis/disinformation, the challenge is far from over. Deceptive audio remains a powerful last-minute disinformation tool, strategically deployed just hours before elections to create confusion and influence voter decisions. During Ghana’s election season, several false audio claims surfaced. For example, this audio clip, uploaded to the Nana Kakra YouTube channel, allegedly features President John Mahama ridiculing Ghanaian voters and attempting to influence the Electoral Commission. Some of these were shallow fakes and consequently easier to debunk, but others were more sophisticated.

According to Dubawa, one of Africa’s leading fact-checking organizations, existing AI tools for detecting manipulated audio vary in effectiveness. While some paid tools offer robust verification, many newsrooms simply cannot afford them. The challenge is even greater when manipulated audio is embedded within videos, making verification more complex and time-consuming.

The use and misuse of AI in elections worldwide

WITNESS has identified key ways synthetic media is being used in elections, categorizing them into two broad types: identity-based deepfakes and context-driven deepfakes.

Identity-based deepfakes represent a new wave of AI-driven risk: deception that is both hyper-personalized and highly sophisticated. In elections, this shows up in dangerous ways that target political candidates or parties, often aiming to manipulate public perception. These fabrications falsely attribute actions to individuals, create misleading evidence of events that never happened, and seek to sway voter decisions.

Context-driven deepfakes are more insidious in their approach. In many ways they amplify existing mis/disinformation, but at scale and unprecedented speed. This deceptive use of AI spreads false information about specific groups, coordinates influence operations, reinforces divisive narratives, and even undermines election integrity by discouraging voter participation or distorting public discourse.

We continue to observe the deliberate use of AI-generated content to target women in politics. A striking example is the deepfake video falsely showing independent candidate Abdullah Nahid Nigar withdrawing from an election, an attempt to mislead voters and manipulate electoral outcomes.

Similarly, in Bangladesh and Pakistan, AI-generated non-consensual sexually explicit images have been weaponized against female politicians. Notable figures such as Nipun Roy Chowdhury, a member of the BNP Central Executive Committee, and Rashed Iqbal Khan, Acting President of Bangladesh Jatiotabadi Chatra Dal (JCD), have been targeted with deepfake images designed to shame them and expose them to violent physical harm, in contexts where modesty is deeply ingrained in societal expectations. In cases like these, detection alone isn’t enough: since the primary intent is reputational harm, the damage is often immediate and irreversible.

Bridging the gaps in Africa’s AI and tech governance 

Africa stands at a crucial juncture, navigating both the opportunities and risks of AI. While some governments are focused on harnessing AI’s potential, civil society organizations are calling for stronger safeguards. Yet AI governance across the continent remains weak. Only a handful of African nations have dedicated AI strategies. In many cases, regulation is simply folded into broad cybersecurity laws, while other countries take a risk-based approach that does not guarantee human rights protections.

Big Tech’s response has been inadequate. Compared to wealthier nations, African elections receive little investment from social media platforms in terms of content moderation, AI labeling, or political ad transparency. Encouragingly, organizations like the African Association of Election Management Bodies are stepping up, pushing for transparency in digital election practices. But will tech companies and governments act with urgency?

Building resiliency in media: a key step to fortifying the truth

In 2024, WITNESS launched the Fortifying Community Truth project, which supports local and community-based journalists in using video-based strategies, including OSINT, to build local, national, and cross-border accountability for perpetrators of human rights violations and to defend the claims of communities facing abuse. Under this project, WITNESS brought together 17 cohort members from West and Central Africa for a workshop on digital verification using a community-based approach.

While we grapple with the new and emerging threats that AI poses to information integrity, it is important to note that the journalistic principles of verification, accuracy, fairness, transparency, and accountability still apply. Frameworks such as the SIFT method (Stop, Investigate the source, Find better coverage, Trace claims to the original context) are still effective in authenticating suspected AI-generated content, as the sketch below illustrates.
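As a rough illustration of how a news desk might operationalize SIFT before republishing a suspect clip, the sketch below models the four steps as a simple pre-publication checklist in Python. Every field name and example entry is hypothetical, loosely based on the excavator case above; this is not a WITNESS or GHOne tool.

```python
# Hypothetical sketch: the SIFT steps as a pre-publication checklist.
from dataclasses import dataclass, field

@dataclass
class SiftCheck:
    claim: str
    stopped: bool = False                  # Stop: pause before sharing
    source_notes: str = ""                 # Investigate the source
    better_coverage: list[str] = field(default_factory=list)  # Find better coverage
    original_context: str = ""             # Trace to the original context

    def verdict(self) -> str:
        if not self.stopped:
            return "Stop first: do not amplify until checked."
        if not (self.source_notes and self.better_coverage
                and self.original_context):
            return "Incomplete: keep verifying before publishing."
        return "Checklist complete: publish alongside the evidence gathered."

# Example entries are illustrative, drawn from the excavator case above.
check = SiftCheck(claim="Candidate promised to return seized excavators")
check.stopped = True
check.source_notes = "Multiple independent uploads from attendees at the rally"
check.better_coverage = ["GHOne report", "Second-angle footage of the speech"]
check.original_context = "Full speech recording from the campaign stage"
print(check.verdict())
```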

At the same time, ensuring that local journalists and fact-checkers have timely access to assessments of the most difficult deepfake cases and claims of deepfakes remains critical. WITNESS’ Deepfakes Rapid Response Force is expanding its election support in Africa, moving beyond Ghana to Gabon in 2025 and Uganda in 2026.

As threat actors gain increased capabilities to target, censor, and surveil dissenting voices, initiatives like the Fortifying Community Truth project play a crucial role in bridging the gap between contextually relevant community responses and systems-level advocacy with far-reaching influence. Long-term investment in newsrooms, strengthening their capacity for visual verification and AI resilience, builds stronger resistance to mis/disinformation, particularly during sensitive times like elections.

Africa must adopt a human rights-based approach to AI regulation to address harms that a purely risk-based framework might overlook. Privacy and data protection must be at the forefront, ensuring that at-risk groups are safeguarded. More critically, those most at risk from the malicious use of deepfakes and synthetic media, including journalists and human rights defenders, must be protected, and their voices must be centered in shaping an AI-powered information landscape.

 

RESOURCES ON IDENTIFYING ELECTION-RELATED MISINFORMATION AND DISINFORMATION

Tipsheet: Combating Misinformation and Disinformation in Elections

Video: How to spot AI Generated Content in Elections

Video: Understanding the Dangers of Gendered Disinformation

Webinar: AI, BIG TECH and THE BALLOT: Shaping Africa’s Election in the Digital Age

 

Joojo Cobbinah is the Group Managing News Editor at EIB Network Ltd in Ghana

Nkem Agunwa is the Africa Program Manager at WITNESS

 
