IN SHORT: In early August 2023, there was a public outcry when mainstream and social media reported that Zoom's terms of service allowed it to use customers' data to train artificial intelligence models without an option to opt out. Days later, the company updated its terms to say it would not use data for this purpose. But the claim continued to circulate.
In August 2023, social media was abuzz with fears of privacy invasion after an article on tech site Stack Diary gained traction. The article, titled “Zoom’s Updated Terms of Service Permit Training AI on User Content Without Opt-Out”, was published on 6 August.
It quickly made its way to social media, with one particularly popular tweet linking to the article saying: “Well time to retire @Zoom, who is basically wants to use/abuse you to train their AI”. The tweet was viewed over two million times. The link was also posted here, here, here, here, here, here and here.
The posts and the article claim that teleconferencing company Zoom’s terms of service (ToS) state that the company can use customers’ data, including the content of calls, to train artificial intelligence (AI) models, with no way for users to “opt out”. We looked into it.
Zoom’s policy up to 6 August
According to Stack Diary, Zoom’s ToS explicitly stated it could use customer data “for machine learning and artificial intelligence, including training and tuning of algorithms and models”. The author concluded that the company did not have to provide an “opt-out option” for users who did not want their data used.
We found an archived version of Zoom’s website from 6 August and can confirm that in section 10.4 of its ToS, the company did assert that users agreed to have their content data used “for the purpose of … machine learning” and “artificial intelligence”. Associated Press News reported that this data could include audio or chat transcripts, according to internet privacy experts.
The Associated Press was told that the language in the ToS was “wide-reaching and could have opened the door for the company to use that data without additional permission if it wanted to”.
Large media organisations took notice of this. As of 6 August, the claim that Zoom could use customer data to train its AI models without an explicit opt-out system was correct.
Zoom backtracks, starting 7 August
The same day the Stack Diary article was published and social media crowds gathered, Zoom published a blog post attempting to address the issue. But, Stack Diary reported, “the updated terms were immediately shot down as they still didn't address the specific wording of the terms”. Zoom’s chief executive then assured the public that the issue would be resolved.
On 11 August, Zoom updated its ToS again, now saying: “Zoom does not use any of your audio, video, chat, screen sharing, attachments or other communications-like Customer Content (such as poll results, whiteboard and reactions) to train Zoom or third-party artificial intelligence models.”
As of 11 August, the claim became incorrect. But this didn’t stop social media users from making it, including here, here, here, here and here.