Back in March 2023, a Reddit user uploaded a clip, generated using artificial intelligence (AI) tools, of Hollywood star Will Smith eating spaghetti. The video shows him shovelling heaps of pasta into his mouth, sometimes with both hands.
But his movements looked stiff and unnatural. The clumsy result, made with an early AI tool, was easy for anyone online to spot as fake.
Still, the clip took on a life of its own. It has become the internet’s benchmark for tracking AI video progress, with people recreating it again and again using newer tools to show just how far the technology has come.
Fast-forward two years, and Google’s Veo 3 model transforms the same prompt into something far more convincing: a lifelike Will Smith eating spaghetti, moving naturally against a warm and realistic backdrop. The leap is striking.
Veo 3 isn’t flawless, but it’s good enough to fool many people online. It serves as a reminder that the visual gap between real and fake video is disappearing, fast.
How Veo 3 works
The tool generates short videos, complete with synchronised audio, directly from text or image prompts, making it one of the first to do so.
Compared with earlier versions, Veo 1 and 2, the new model offers major upgrades: smoother motion, more accurate lip-syncing, natural dialogue and a higher level of overall realism.
Other platforms, like Sora and Midjourney, can also create AI videos, but they require users to add audio manually.
Veo 3 has quickly become popular, with users testing its limits. But there’s a catch: access is only through Google’s paid AI Pro and AI Ultra plans.
Veo 3 is currently available in 159 countries, including Kenya, Nigeria, Senegal and South Africa.
The potential for misinformation
Veo 3 has quickly emerged as one of the most powerful AI video generation tools. On TikTok, users are producing content that looks strikingly authentic, even in highly localised settings.
The model can recreate African contexts, including distinctive housing styles, traditional clothing and local languages. One clip, for example, shows Nigerian women speaking fluent Pidgin English. Another depicts a woman selling local foods in a market. The realism is both convincing and culturally relevant.
But this same capability also carries risks. While some videos have been created for fun, others have crossed into deception, showing extreme weather events that never happened, promoting bogus health product reviews or staging fabricated news broadcasts.
Identifying videos generated by Veo
While it’s becoming increasingly difficult to catch AI-generated videos, there are methods to identify those produced by Veo.
1. Look for the Veo tag
Since May 2025, Google has added a subtle watermark to videos generated with Veo. It appears in the bottom right corner, but it's small, easy to miss and can be cropped before a video is shared.

Videos made by AI Ultra subscribers in Flow, Google’s premium AI filmmaking tool, won’t display the watermark. Google’s solution to this is an invisible watermarking system called SynthID.
2. Use SynthID
Google created SynthID to help identify its AI-generated content and promote transparency and trust in generative AI. SynthID embeds digital watermarks inside AI-generated images, videos and text. These watermarks are invisible to the naked eye and detectable only with SynthID’s own technology.
For now, SynthID is limited to early testers and hasn’t been rolled out to the public.
Silas Jonathan, who heads the Digital Technology, Artificial Intelligence, and Information Disorder Analysis Centre at the Centre for Journalism Innovation and Development in Abuja, Nigeria, told Africa Check that the lack of access to such advanced detection tools puts fact-checkers and journalists at a disadvantage.
Without tools like SynthID, he said, most fact-checkers on the continent had to rely on open-source techniques, mobile devices and often unstable internet connections to investigate misleading content.
“Tech companies like Google should prioritise local integration,” Jonathan said, calling for training partnerships and stronger support for fact-checking networks in the Global South.
He also said that companies releasing lightweight application programming interfaces (APIs) would make a difference. An API is a set of rules that lets different software or apps communicate and exchange data. Lightweight APIs are designed to be efficient and easy to use in low-resource environments.
For Google, lightweight APIs would encourage global adoption of SynthID across different systems, potentially putting AI content detection at the fingertips of journalists, media researchers and even social media users.
“If they’re serious about fighting information disorder, then equity in access has to be part of the plan,” said Jonathan.
3. Check for errors in the text
Even with major improvements, Veo – like most AI video generators – still struggles with adding text to videos, especially in African contexts. In one video featuring an AI news presenter, the text on the background screen, meant to show a breaking news headline, was illegible. Later in the same clip, a podium sign reads “Minister of Inforimation”.


4. Listen out for native tongue disparities
Africa’s linguistic diversity still poses difficulties for AI. Veo often makes mistakes with local languages, as seen in some videos. One shows a model mispronouncing the name of a local Yoruba food, “amala”, and in another, the Yoruba word “babalawo”, meaning “high priest”, is mispronounced. The model couldn’t deal with the highly tonal language.
5. Short videos could be a red flag
Most Veo clips run for only about eight seconds. Users can add more prompts to build a storyline and make longer videos, but each scene remains brief, a telltale sign that it was made with Veo.
6. Technical glitches
Keep an eye out for technical glitches, particularly objects suddenly appearing and disappearing. For example, in one video an umbrella suddenly appears in a reporter’s hands. In another, the wooden spoon a woman was using to gesture suddenly disappears.


‘Lack of legal framework is a serious concern’
Victor Famubode, an AI policy researcher and member of the United Nations Development Programme Reference Group on AI Development, told Africa Check that while countries such as Nigeria and Kenya had some digital rights laws, they fell short when it came to regulating synthetic media, especially considering the speed and scale of AI-generated content.
He stressed the need for stronger coordination among stakeholders, citing the European Union's General Purpose AI Code of Practice as an example.
“[Africa] needs to learn from that,” he said, “and create frameworks that don't discourage innovation, but encourage better ways to improve on the current ecosystem.” This included how these tools were developed and used.