ANALYSIS: Deepfakes threaten the public’s faith in facts

By Raymond Joseph

New technology is making the manipulation of video easier, and the faked results more convincing and thus more likely to be shared widely online. But how worried are the experts, and how concerned should you be?

Speaking in a sinister tone, Facebook CEO Mark Zuckerberg boasts in an online video that “whoever controls the data, controls the future”.

In another video, former US president Barack Obama calls his successor, Donald Trump, a “total and complete dipshit”.

Both videos are fake. They were made to highlight the dangers posed by fake videos featuring well-known people saying and doing outrageous things.

Welcome to the world of “deepfakes” – a merging of “deep learning” and “fake” – which use machine learning and artificial intelligence to create fake videos.

From the world of porn to the Pentagon

Deepfakes first came to wide public attention in 2018. But they had their origin – as did much now commonly used technology, like e-commerce, live streaming and video cams – in the murky world of pornography.

A basic internet search for “deepfakes” plus the names of celebrities like Daisy Ridley, Emma Watson, Taylor Swift or Katy Perry returns multiple “not safe for work” links to a wide variety of pornography websites with famous women allegedly involved in sex acts.

Deepfake porn first surfaced on the internet in 2017. Since then, the release of free software has made it relatively easy for anyone to fake a video.

But it’s not only video that can be altered: a tool developed by a group of scientists can alter dialogue in a video, simply by editing a script.

So concerned is the US government about the implications for national security that the House Intelligence Committee recently held hearings into deepfakes, while the US Department of Defense has stepped up efforts to combat them.

The emergence of deepfakes has set off an “arms race” among researchers and technicians to build tools to combat faked videos.

AI researchers outnumbered

But many top artificial intelligence researchers say they are outgunned. University of California, Berkeley computer science professor Hany Farid told the Washington Post that researchers are “lagging behind, largely because there are so few of us”. He said the good guys are outnumbered “probably to the tune of 100 to 1”.

Farid is leading research to develop a biometric tool that maps facial data. This includes mannerisms that are distinct to an individual, like how they move their heads, bodies and hands while speaking. But it is time-consuming work.

While deepfakes are not yet a major problem, Farid said it is “only a matter of time” before they are widely deployed in politics.

“If you look at … how sophisticated and convincing and compelling these fake videos are, it’s just a matter of time. Whether it’s [the] 2020 [US election], then the next election.”

But he said a bigger problem is the issue of trust. “What happens when we enter a future where we simply don’t believe anything we read, hear or see online? How do we have a democracy, how do we agree on basic facts of what’s happening in the world?”

Other experts urge caution

Farid’s team is just one of several around the world building tools to fight deepfakes, even as the technology used to make them continually improves.

Claire Wardle, the head of research at First Draft News, an organisation aiming to address challenges relating to trust and truth in the digital age, has said that she is not yet overly concerned about deepfakes. 

“Maybe I’m being naive, but this isn’t what I’m worried about at all,” she wrote in a blog for Nieman Lab earlier this year. “Academics and technologists agree that we’re roughly four years away from the level of sophistication that could do real harm, and there is currently an arms race afoot to produce tools to effectively detect this type of content.”

What she said she is “very worried” about is “the drip, drip, drip of divisive hyper partisan memes on society”.

“I’m particularly worried because most of this content is being shared in closed or ephemeral spaces, like Facebook or WhatsApp groups, Snapchat, or Instagram Stories. As we spend more time in these types of spaces online, inhabited by our closest friends and family, I believe we’re even more susceptible to these emotive, disproportionately visual messages.”

An escalation in information warfare

Her sentiments were echoed by Ben Nimmo, a senior fellow for information defense at the Atlantic Council’s Digital Forensic Research Lab, who was at the forefront of unmasking Russian bots that interfered in the US elections.

“At the moment, we haven’t seen deepfakes used,” he said in a recent email interview. “The Russian government has run plenty of shallow fakes, like manipulated images, which have been caught out. Deepfakes would be yet another escalation in the information warfare. It’s probably only a matter of time.”

But deepfakes are nevertheless a risk because they could lead to journalists making mistakes, he warned. “Journalists must be aware of the problem of deepfakes and always look for corroborating sources,” he said.

“Ultimately, though, they’ll need to develop a stronger relationship with the tech platforms, who have the best technical expertise and who have a big stake in not letting their platforms be taken over by fakes.”

Journalists go back to basics

Kyle Findlay, who played a key role in identifying Twitter bots which helped sow racial tensions in South Africa, told Africa Check: “For now, deepfakes have statistical patterns present in them that make them identifiable by machines. Over time, these might be smoothed over by the makers.”

He said the war against deepfakes “will turn into an evolutionary arms race. Tools for detection will arise and be circumvented.”

“We might need to supply journalists with automated ‘image provenance’ tools, like the plug-ins that you use for reverse image search to automatically trace back the ‘share’ trail of all media to their sources.”

But his ultimate advice is non-technical: good old-fashioned journalism.

“Treat everything with suspicion. Focus on names you trust and insist on visible trails linking the media that you are viewing back to those trusted sources.”

© Copyright Africa Check 2020. Read our republishing guidelines. You may reproduce this piece or content from it for the purpose of reporting and/or discussing news and current events. This is subject to: Crediting Africa Check in the byline, keeping all hyperlinks to the sources used and adding this sentence at the end of your publication: “This report was written by Africa Check, a non-partisan fact-checking organisation. View the original piece on their website", with a link back to this page.
