How to avoid disinformation traps on Twitter

Bell Pottinger is dead, but disinformation that preys on divisions in South Africa remains. Some say social media users should ignore disinformation – the deliberate spread of false information to cause harm – because any engagement helps malicious actors spread their messages. But is doing nothing really the only option, particularly when disengagement is what some of these campaigns hope to achieve? Liesl Pretorius looked for answers.

In a polluted information ecosystem, our actions – even if well-intentioned – can make the disinformation problem worse.

“Online, everyday actions like responding to a falsehood in order to correct it or posting about a conspiracy theory in order to make fun of it – case in point, QAnon – can send pollution flooding just as fast as the falsehoods and conspiracy theories themselves,” writes Whitney Phillips. She is the co-author of You Are Here: A Field Guide for Navigating Polarized Speech, Conspiracy Theories, and Our Polluted Media Landscape.

“Once we publish or send or retweet, our messages are no longer ours; in an instant, they can ricochet far beyond our own horizons, with profound risks to the [information] environment. At least potentially.”

In part two of this three-part series: How a South African comedian’s tweet became a politically motivated campaign. Missed part one? Read it here

Is Phillips saying we should stop tweeting?

No, she says: some things are worth saying even if they draw attention to polluted information, which includes disinformation.

Not responding to polluted information “prevents people from telling the truth, educating those around them … and pushing back against bigots, manipulators, and chaos agents”.

Instead, her advice is to be more strategic about who and what we amplify. We should question what we don’t know, whether we might be giving free publicity to malicious actors, and if the possible benefits of our actions outweigh the pollution we might cause.

Who and what you're amplifying is an important consideration for journalists too.

Dr Claire Wardle, co-founder of anti-misinformation nonprofit First Draft, warns that reporting on disinformation prematurely – before it reaches a “tipping point” where it is increasingly visible – could boost misleading content. “All journalists and their editors should now understand the risks of legitimising a rumour and spreading it further than it might have otherwise travelled …”

The flipside is also true: wait too long and a falsehood may turn into a “zombie rumour” that refuses to die. 

Why we fall for disinformation

Herman Wasserman, professor of media studies at the University of Cape Town, has researched “fake news” in South Africa, Kenya and Nigeria. What surprised him about surveys in these countries, he told Africa Check, was how many social media users shared false information despite suspecting that it was unverified or made up.

Mandy Jenkins, who studied consumers of disinformation during her John S Knight journalism fellowship at Stanford University in the US, found that the people she interviewed overestimated their ability to distinguish between “fake” and real. They tended to rely on search engines for verification but, unfortunately, search engines often “give you what you want to see”.

They were also overwhelmed with information. “It’s very tempting to close it off and just say: ‘You know what … I only want this stuff from my friends and my circle,’” Jenkins says.

The challenge for those who want to counter disinformation is that the way we process information isn’t always rational.

Many factors are at play, including our biases, the “illusory truth” effect – familiarity with a claim can make it seem true – and our need to belong.

In their report on information disorder, Wardle and media researcher Hossein Derakhshan argue that when we share news on social media, we’re not simply transmitting information. We become performers for “tribes” of followers or friends.

“This tribal mentality partly explains why many social media users distribute disinformation when they don’t necessarily trust the veracity of the information they are sharing: they would like to conform and belong to a group, and they ‘perform’ accordingly.”

How best to deal with people who fall for disinformation is not yet clear, says Ben Nimmo, former director of investigations at network analysis company Graphika.

“A lot of the time, what we see is that people will share the false content, either because they believe that it’s true, or because they want to believe that it’s true – it’ll confirm some kind of political leaning or political bias that they already have. 

“And so part of the question is: Who is [going to tell them that] they’ve shared a piece of disinformation? Because if it’s somebody from what is seen as the other side … then there’s a danger that you’ll actually reinforce their resistance to the truth …”

Correcting false information doesn’t always work.

The News Literacy Project’s advice for rectifying falsehoods spread by friends and family includes trying to find common ground and using “an empathetic and respectful tone”.

With health information, researchers Leticia Bode and Emily K Vraga recommend including a link to a credible source in your correction to increase its chances of success.

Disinformation researcher Nina Jankowicz’s book, How To Lose the Information War: Russia, Fake News, and the Future of Conflict, makes a case for solutions that consider the divisions in society that make us vulnerable to disinformation in the first place.

She writes that in countries where disinformation has long existed, “empowering people to be active and engaged members of society through investments in the information space and in people themselves is always part of the solution”. 

Estonia, for example, focused on education and invested in both media and contact between people to “repair the gaps in trust and crises of identity” that made the country’s Russian-speakers an “easy target” of Russian disinformation campaigns.


Don’t be an accidental co-conspirator 

People who spread disinformation rely on unsuspecting social media users to amplify their content. Renée DiResta, research manager at the Stanford Internet Observatory in the US, noticed a shift in the tactics used on Twitter in 2018. “Twitter’s self-imposed product tweaks have already largely relegated automated bots to the tactical dustbin. Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead.”

So how can we avoid becoming accidental co-conspirators in a disinformation campaign?

UCT’s Wasserman offers this advice: 

  • Actively look for “good information”. Examples include “independent, critical, rigorous journalism” and – in the case of Covid-19 – official sources of health information.
  • Don’t share unverified information “just in case” it might be true. “It’s like passing on a virus – your fake post or false information can go on to multiply, infect many others, and do real harm.” 
  • Verify before you share, and develop the necessary skills to do so. 

These skills include knowing how to do a reverse image search.

A reverse image search caught out an instance of false context – where an old photo taken in a different country was redistributed as part of the #PutSouthAfricansFirst campaign. Read more here.

Prof Camaren Peter and Yossabel Chetty of the Centre for Analytics and Behavioural Change, a nonprofit that tracks narrative manipulation on social media, urge Twitter users to carefully look at the tweeter’s account before interacting with its content.

“Check the account out to see how recently it has been set up, how many followers it has, and the frequency and types of posts they put out. Scrutinise the bio. There are many parody accounts using the names of well-known South Africans or celebrities to broker trust.”

A new account that tweets about the same topic every few minutes could be a red flag. (See part two of this series.)
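For readers who collect account data programmatically, these checks can also be automated. The sketch below is purely illustrative: the `AccountInfo` fields mirror what Twitter's API exposes (`created_at` and `public_metrics`), but the thresholds are assumptions for demonstration, not Twitter policy or a rule from the researchers quoted above.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AccountInfo:
    # Fields correspond to Twitter API user data (created_at, public_metrics)
    username: str
    created_at: datetime
    tweet_count: int
    followers_count: int

def red_flags(account: AccountInfo, now: datetime) -> list[str]:
    """Return heuristic warning signs. Thresholds are illustrative only."""
    flags = []
    age_days = max((now - account.created_at).days, 1)
    tweets_per_day = account.tweet_count / age_days
    if age_days < 30:
        flags.append("account is less than a month old")
    if tweets_per_day > 50:
        flags.append(f"very high posting rate ({tweets_per_day:.0f} tweets/day)")
    if account.followers_count < 10 and account.tweet_count > 1000:
        flags.append("prolific account with almost no followers")
    return flags

# Example: a week-old account that has already posted 2,000 times
suspect = AccountInfo(
    username="example_user",  # hypothetical account
    created_at=datetime(2021, 3, 1, tzinfo=timezone.utc),
    tweet_count=2000,
    followers_count=3,
)
print(red_flags(suspect, now=datetime(2021, 3, 8, tzinfo=timezone.utc)))
```

A heuristic like this only surfaces accounts worth a closer manual look – by itself it proves nothing, as legitimate new or chatty accounts will also trip it.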

A blue tick is useful when you’re trying to verify someone’s identity, but Peter and Chetty warn that it doesn’t guarantee the accuracy of the account’s tweets.

Disinformation research scientist Richard Ngamita encourages social media users to report suspected disinformation “so that the social media companies log that information and this will be used in their algorithms in the long run”. Peter and Chetty also recommend flagging disinformation with Twitter, by clicking on the three dots above a tweet and reporting it as “suspicious or spam”.

Ngamita’s tips for Twitter users who want to identify disinformation include checking if the same text has been used by multiple accounts. “You can copy parts of the text and search to see if there are other users using this same content.”
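Ngamita's copy-and-search check can be sketched in code. Assuming you have already gathered a batch of tweet texts (for example via Twitter's search API or a data export – the collection step is omitted here), normalising the text and grouping identical copies reveals copy-paste amplification:

```python
from collections import Counter
import re

def normalise(text: str) -> str:
    """Lowercase, strip URLs/mentions and collapse whitespace so near-identical copies match."""
    text = re.sub(r"https?://\S+", "", text)   # drop links (often varied between copies)
    text = re.sub(r"@\w+", "", text)           # drop @mentions
    return re.sub(r"\s+", " ", text).strip().lower()

def copy_paste_clusters(tweets: list[str], min_copies: int = 3) -> dict[str, int]:
    """Return normalised texts posted at least `min_copies` times across the batch."""
    counts = Counter(normalise(t) for t in tweets)
    return {text: n for text, n in counts.items() if n >= min_copies}

# Toy batch: three posts of the same line with cosmetic variations
batch = [
    "South Africans first! https://example.com/a",
    "@user1 South Africans FIRST!",
    "south africans first!  ",
    "An unrelated tweet about the weather",
]
print(copy_paste_clusters(batch))  # → {'south africans first!': 3}
```

As with the manual check, a cluster of identical text is a signal of coordination, not proof – popular quotes and retweet-style sharing also produce duplicates.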

To counter disinformation, he says, you should “think twice before you retweet, comment or like a tweet”.

Says Phillips: “We can’t control social media platform policies. We can’t control government regulation. We can’t control the various industrial polluters who make a killing by killing democracy. What we can control is how and when we choose to post; and by extension, the amount of pollution we filter into the landscape.”

This is the final part in a three-part explainer about disinformation on Twitter – the result of a collaboration between Africa Check and the Atlantic Council’s Digital Forensic Research Lab (DFRLab). Part one focuses on disinformation actors, their behaviour and content. In part two, we ask: How much damage can a hashtag do?
