Co-authored by Temiloluwa Alalade

One of the targets of SDG 5 is to “ensure women’s full and effective participation and equal opportunities for leadership at all levels of decision-making in political, economic and public life.” Over the years, there has been a steady increase in women’s leadership and participation in politics and governance. On the flip side, cultural barriers, patriarchal structures, sexism, and economic inequality are some of the challenges that continue to hamper women’s participation in politics. Equally challenging is the threat that gendered disinformation poses to women in politics and to democracy.

Gendered disinformation can be understood as the spread of deceptive or inaccurate information and images that draw on misogyny and gender stereotypes. It typically frames women as untrustworthy or unintelligent, or pits women against other women. It also paints ordinary human emotions as undesirable traits in women, while sexualising or hypersexualising girls and women. Among those caught in the crossfire are women and gender-diverse human rights defenders, voters, activists, government office-holders, political leaders, and female public figures.

Gendered disinformation not only disadvantages women in politics; it is also a threat to democracy, as it puts women interested in politics, elections, and governance at risk. In the report #ShePersisted. Women, Politics & Power in the New Media World, a majority of the participants interviewed reported being extremely concerned about the pervasiveness of gender-based abuse and disinformation online. They described it as a key barrier for women who want to engage in politics, and a deterrent for young women considering a political career.

Globally, gendered disinformation has encouraged the harassment and abuse of female politicians and public office holders, especially in the digital space. In Kenya, female politicians are targets of image-based disinformation, in which manipulated media are used to sexualise them, create false narratives, and sway the conversation away from political discourse. In the case of Esther Passaris, the Nairobi County woman representative in the National Assembly of Kenya, alleged text messages and an audio recording were released by a male politician to push a sexualised narrative intended to bring her into disrepute and shame. Though Passaris pushed back, the allegation fed into online and offline abuse and harassment, as such attacks often do for women targeted by disinformation.

The same tactic of sexualised distortion came into play in the 2017 Rwandan elections, primarily targeting the only female presidential aspirant, Diane Shima Rwigara. Just days after Rwigara announced her run for the highest office, photoshopped nudes of her were circulated online in a bid to discredit her. Similarly, in South Africa, an image of the former Public Protector, Thuli Madonsela, was manipulated to show her posing while dressed in the old South African flag used during apartheid. The shallowfake image, shared on Facebook, portrayed her as untrustworthy and supportive of the apartheid regime, and damaged her reputation. These targeted attacks often make it difficult for women in politics to engage the electorate on serious issues, as the focus shifts away from their experience and expertise and onto sexual and moral judgements. The result is undue scrutiny of the competence of women in politics and leadership.

Forms of gendered disinformation in elections

In elections, gendered disinformation often uses narratives that target women’s gender and sexuality. It aims to keep women out of politics, distort views about the participation of women and gender-diverse people in politics and governance, erode women’s rights, and undermine women’s free and equal participation in political life. Gendered disinformation in elections manifests in a broad range of forms.

  • Sexist & misogynistic content

This content preys on existing gender stereotypes and cultural norms that characterise leadership as a masculine trait. It seeks to portray women as untrustworthy, unintelligent, vulnerable, and thus unfit for public life or civic participation. It can take the form of ideas and comments such as “You know how women are”, “A woman belongs in the kitchen”, or “Women are weak and cannot balance leadership with looking after their family.”

  • Use of pseudoscience to denounce women

This tactic uses a collection of beliefs wrongly presented as being based on scientific methods to claim that women are not intelligent. Some populist leaders have irresponsibly propagated such non-factual assertions in order to silence female critics. For example, in 2017, a Polish Member of the European Parliament went on a rant in Parliament about the supposed weakness of women, saying: “Do you know how many women are in the first hundred of chess players? I tell you – no one. Of course women must earn less than men because they are weaker, they are smaller, they are less intelligent.” Such tactics give drivers of gendered disinformation an avenue to paint women as incompetent for public office and incapable of expertise.

  • Sexualised distortion

Sexualised distortion is common in disinformation campaigns against women – from sharing doctored nude photos of women politicians, to sexualised deepfake videos (“deepnudes”) that use artificial intelligence to insert an individual’s likeness into sexual videos and photos without their consent, to screenshots purporting to be from sex tapes, to accusations of illicit affairs. These tactics thrive on exploiting socio-cultural norms about women’s sexuality and sexual purity. Sexualised content appeals directly to norms and ideals about how women ‘should’ behave, which can influence voter behaviour.

  • Demonising feminism

Gendered disinformation agents often accuse individuals with feminist views of having a hidden agenda and of being funded to oppose the state. This supports the narrative that women’s rights advocacy is a foreign concept funded by the West. In this way, gendered disinformation agents frame women’s rights advocacy as illegitimate, out of place, and dangerous.

Drivers of gendered disinformation

To counter gendered disinformation, it is essential to identify the sources from which it emerges and the channels through which it spreads. This helps in spotting red flags and challenging them.

  • Religious fundamentalists

Religious zealots are known to oppose the participation of women in politics and governance, hiding behind religion to achieve their aims. This makes their messaging difficult to challenge, because religion is rarely scrutinised. They draw on religious symbols and themes to make their content more persuasive.

  • Trolls

Trolls intentionally antagonise others online by posting inflammatory, irrelevant, and offensive comments or other disruptive content. They usually create new anonymous or fake accounts solely for the purpose of trolling, discarding them afterwards so they can spread disinformation undetected, and they often use images of real people as profile pictures to make the accounts look legitimate. Troll armies are often deployed and injected into online community spaces and forums by authoritarian governments to propagate a misleading political cause. An account that is new, anonymous, and constantly shares false and offensive content is a warning sign that it might be a troll account.

  • Non-state actors

These are individuals who exploit digital spaces to disseminate misogynistic and harmful messages. They may be lone actors or part of a coordinated disinformation effort, and they bypass content moderation by using memes and other images that are difficult for moderation mechanisms to detect. Their content often circulates on mainstream digital spaces; however, there is also a robust network of male-dominated virtual spaces, sometimes referred to collectively as the “manosphere,” where these harmful gendered messages can garner overwhelming support before jumping to mainstream social media platforms.

The “manosphere” includes online blogs, forums, and image boards that host a variety of anonymous, misogynistic, racist, and extremist content creators and audiences. These actors can be organised into troll farms, with community members carrying out coordinated attacks against political opponents, female public figures, and women journalists. They also weaponise the content moderation tools made available by platforms to further demonise women in the public sphere. For example, when a person blocks, mutes, or limits interaction with a particular account, a screenshot of the prompt is taken and shared publicly to suggest cowardice, secrecy, or an inability to engage in civic debate.

Talking back and speaking up

A key way to push back against gendered disinformation is to adopt a proactive approach and prevent its spread through critical thinking and analysis. The ESCAPE framework helps you evaluate the information you encounter by asking six critical questions about its Evidence, Source, Context, Audience, Purpose, and Execution.

If disinformation is shared online, you can be an active bystander by safely intervening, confronting the inappropriate behaviour, and doing something to improve the situation.

The following are steps that could be taken:

  • Report the account and content;
  • Encourage people to disengage from the disinformation;
  • Post comments calling out the sexist and misogynistic nature of the content;
  • Remind people of the active and passive roles they play in the spread of content that trivialises harassment, entrenches gender stereotypes, and denounces women’s capability in governance and elections.

If the disinformation is offline, you can also safely intervene by not laughing at sexist or violent jokes and, where safe, talking to the individual constructively about the problematic nature of their behaviour.

Where do we go from here?

While tech platforms have created community guidelines to help curb online harassment, much remains to be done to better protect users from harassment and other forms of online violence. Some existing platform features amplify the spread of gendered disinformation. For example, while the quote-tweet and retweet features on Twitter are put to positive use in public discourse, they have also produced a “pile-on” effect, whereby content targeting a user in a disinformation attack is amplified by waves of other users. Platform features should continue to evolve to respond to the needs of users, particularly of vulnerable groups.

Furthermore, while the privacy settings that allow people to lock their accounts serve a protective function, they can simultaneously drive women out of online public spaces, making them less visible – a measure their male counterparts do not necessarily need to impose on themselves. Leaving it to users to protect themselves against gendered disinformation and other forms of online violence is a burden too heavy to bear. Tech platforms must step up to their responsibilities to users. Likewise, adopting a generic approach to content moderation without adequate investment in understanding local contexts provides loopholes for disinformation agents to exploit, especially in the Global South.

Published 5th August 2022.
