
AI-powered disinformation: deepfakes, detection technology and the weaponisation of doubt

Despite the hype around artificial intelligence, the bigger threat in Africa remains “cheap fakes” and other low-effort forms of information deception.

Human rights organisation Amnesty International was criticised in early 2023 for using images generated by artificial intelligence (AI) that appeared to be photographs of widespread police brutality during protests in Colombia in 2021. 

These events were well documented by Amnesty International and others when they took place. But the organisation opted to create images of protesters and police officers that did not exist. 

Amnesty said its intention was not to endanger protesters by potentially revealing their identities. But in a climate of distrust of the news media, there were concerns that using AI undermined the organisation’s credibility and the events it documented.

Experts have warned of a dystopian or frightening future powered by AI. One example is election tampering. As the US’s Public Broadcasting Service reported, some concerns are: “Automated robocall messages, in a candidate’s voice, instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave.”

But the peddling of altered content for disinformation purposes is not a new phenomenon. And some of the best tools for spotting AI-generated fakes are the same ones that have long been used to debunk other forms of manipulated media. (See our guide to spotting AI-generated images and videos.)

To clear the smog, we spoke to an expert in disinformation and AI technology about the future of AI, what tools are being developed to detect it, and what we should and shouldn’t be worrying about in 2023. 

Deepfakes and cheap fakes 

Deepfakes are videos that have been digitally manipulated, often to combine or create human faces and bodies. According to Jean le Roux, a research associate at the Atlantic Council’s Digital Forensic Research Lab, this audiovisual manipulation exists on a spectrum. At one end are true deepfakes, which are highly realistic but still require specialised tools and skills to pull off convincingly.

In 2017, for example, researchers at the University of Washington created a now-famous deepfake of former US president Barack Obama. They used video footage of him speaking and trained a tool to map individual features of the audio to their corresponding mouth shapes. This allowed them to create a convincingly realistic, entirely different video of Obama speaking.

This was a long and technical process by skilled university researchers. Despite significant advances in technology since then (see some recent deepfake examples here), this type of hyper-realistic manipulation is still resource-intensive and technical, according to Le Roux.

On the other end of the spectrum, “cheap fakes” are quicker and less resource-intensive. They can be similarly misleading, though less realistic. Cheap fakes range from videos taken out of context, to simple edits such as speeding up or slowing down video or audio to misrepresent events. Cruder face-swapping and lip-syncing methods also fall into this category. 

According to Le Roux, a fake being completely convincing isn’t as important as you might think. Take this video, shared on social media in March 2023, which shows South African president Cyril Ramaphosa appearing to announce controversial changes to tackle the country's energy crisis. Despite being unrealistic, this cheap fake went viral, and seemed to be believed by some social media users. (See our fact-checking report on the video here.) 

At the moment, cheap fakes are more of a problem for disinformation than deepfakes, Le Roux told Africa Check. It takes lots of time, effort and resources to make a really convincing deepfake. And if, after all that, people still identified it as a fake, that investment would have been wasted. It is easier to produce large numbers of quick cheap fakes. Although individually less convincing, these provide more opportunities to fool people.

Because of various psychological mechanisms, something may not need to be convincing for it to be shared as real on social media. Some research suggests that people are quick to share false information if it confirms or fits with their existing beliefs. Other studies suggest that the social media environment itself may distract people from prioritising the accuracy of what they share. 

Detecting AI … with AI?

With all the talk of AI-powered disinformation campaigns and their potentially catastrophic impact on society, companies are scrambling to develop effective detection tools to identify AI-generated images, videos and text.

Software is trained using machine learning to distinguish between real and AI-generated content. These tools, as the magazine Scientific American put it, could in theory perform better than people, as “algorithms are better equipped than humans to detect some of the tiny, pixel-scale fingerprints of robotic creation”.
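To make the approach concrete, here is a minimal, purely illustrative sketch, not any company's actual detector, of how such a classifier might be trained: a small neural network is shown labelled examples of real photographs and AI-generated images and learns to tell them apart. The folder layout, network size and training settings below are assumptions chosen for brevity.

# Illustrative sketch only: a toy binary classifier, not a production detector.
# Assumed folder layout: data/train/real/... and data/train/generated/...
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)  # hypothetical path
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# A deliberately small convolutional network; real detectors are much larger,
# but the principle is the same: learn pixel-level patterns that betray generation.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 1),  # one output logit: generated vs real
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(5):
    for images, labels in train_loader:
        logits = model(images).squeeze(1)
        loss = loss_fn(logits, labels.float())  # labels come from the folder names
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")

Real detectors are trained on vastly larger collections of images, but the underlying idea is the same, which is part of why their documented failure rates are so striking.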

But they still have major limitations. For example, OpenAI, creators of the popular ChatGPT text generator, admitted that even their own detection tool had a dismal 26% success rate in identifying when text had been generated by an AI tool. 

Tools for detecting AI-generated images and videos don’t seem much more promising. Experts say that because image detectors are trained to identify content from one specific generator, they may not be able to detect content generated by other algorithms. They are also vulnerable to generating false positives, where real images are labelled as AI-generated. 

Another major limitation is that these detectors have difficulty identifying AI-generated images that are low-quality or have been edited. When an image is generated, the information in its pixels carries subtle clues about its origin. But if those pixels are altered, for example by lowering the image’s resolution or adding grain, even images that look obviously fake to humans can fool the software.
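As a hypothetical illustration of how fragile those pixel-level clues are, the short sketch below applies exactly the kinds of edits described above, lowering the resolution and adding grain. The filenames are assumptions; the point is that a few lines of routine image processing can wash out the statistical traces a detector depends on.

# Illustrative sketch only; filenames are hypothetical.
import numpy as np
from PIL import Image

image = Image.open("generated_image.png").convert("RGB")

# Lower the resolution, then scale back up: fine pixel-level patterns are lost.
small = image.resize((image.width // 4, image.height // 4), Image.BILINEAR)
degraded = small.resize(image.size, Image.BILINEAR)

# Add mild random "grain", further masking any generator fingerprints.
pixels = np.asarray(degraded).astype(np.float32)
noise = np.random.normal(0, 8, pixels.shape)
noisy = np.clip(pixels + noise, 0, 255).astype(np.uint8)

Image.fromarray(noisy).save("edited_image.png")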

But the core problem, some experts say, is that the very nature of detecting AI-generated content means that it will always be a game of cat and mouse. Detection tools will always need to be reactive, constantly adapting to advances in image generators. 

The ‘liar’s dividend’ – when doubt is all you need 

Lack of public awareness of deepfake technologies has been identified as a challenge in the fight against disinformation. There is little research on this in Africa, but a 2023 survey of 800 adults in five African countries, including South Africa, found that around half of respondents were unaware of deepfakes. 

According to KnowBe4, the cybersecurity awareness company that conducted the survey, participants had some awareness of visual disinformation, with 72% saying they did not believe every photo or video they had seen was real. However, the company also pointed out that the remaining 28% “believed that ‘the camera never lies’”. This suggests a possible vulnerability to this type of deception. 

On the flipside, public awareness poses its own challenge. The dilemma, identified as the liar’s dividend in a 2018 research paper, is this: the more people are aware of AI and its ability to generate convincing content, the more people might doubt the authenticity of something real. 

“The fact we know AI generation technology is there means it’s a useful excuse,” Le Roux told Africa Check. 

The concept is not new – we saw a similar tactic play out in the political arena during Donald Trump’s US presidency. Almost any piece of information he deemed unflattering was denounced as “fake news”. As the BBC wrote in 2018: “What began as a way to describe misinformation was quickly diverted into a propaganda tool.”

The concept of evidence

The liar’s dividend further raises the bar for what counts as convincing evidence – or undermines the concept of “evidence” altogether. 

This isn’t theoretical, either. At a conference in 2016, Elon Musk, billionaire tech entrepreneur and chief executive of Tesla Motors, claimed that Tesla’s Model S and Model X self-driving cars could “drive autonomously with greater safety than a person. Right now”. A video recording of this statement has been available on YouTube since 2016.

Two years later, a person was killed when a Model X car in autopilot mode crashed into a safety barrier. The victim’s family sued Tesla, with lawyers claiming he was killed because the car’s autopilot mode failed, citing Musk’s 2016 statement about its safety. 

In response, Musk’s lawyers tried to cast doubt on the accuracy of the statement, saying Musk did not remember making it. They cited examples where deepfakes had been made using Musk’s likeness before. In court, Tesla reportedly said: “[Musk], like many public figures, is the subject of many ‘deepfake’ videos and audio recordings that purport to show him saying and doing things he never actually said or did.” 

The judge in the case was not convinced and expressed concern, saying that this kind of argument could allow famous people to “avoid taking ownership of what they did actually say and do”. 

Similar examples of doubt have emerged in politics. In January 2019, a small group of soldiers in the Central African state of Gabon attempted a coup, motivated in part by the poor health of president Ali Bongo Ondimba. He had suffered a stroke the previous year, and a lack of public appearances or details about his health in the following months sparked rumours that he was unfit to govern.

When Bongo finally released a video addressing the nation on New Year’s Day, something was off. The Washington Post noted that the president looked very different from his previous appearances and barely moved his face.

While these features are consistent with the appearance of someone who has had a stroke, they also led some to speculate that the video was not authentic. Opposition members reportedly called the video a deepfake, and social media users suggested it could have been created by “machine learning software”. This contributed to a general state of confusion and controversy, culminating in a group of soldiers taking control of the national radio station before being overpowered.

A balancing act

Deepfakes, cheap fakes, and everything in between will all become part of the disinformation landscape on the continent, especially as AI technologies become more accessible and less resource-intensive. When they do, the public will have to contend with an erosion of the concept of evidence as we know it. 

But it’s also important to balance these fears with an awareness of current threats. AI is one in a long list of tools used to mislead. And at Africa Check, it’s overwhelmingly outweighed by good old-fashioned forms of deception, such as images and videos taken out of context or crudely manipulated visuals. 

While we keep an eye on developments in AI, and hope we don’t have to update this report too soon, this is where our current focus as fact-checkers should be.
