NPR, or National Public Radio, is an American nonprofit media organization headquartered in Washington, D.C. In March 2022, it exposed a fake video depicting Ukrainian President Volodymyr Zelenskyy telling his soldiers to lay down their arms and surrender the fight against Russia. Known as a “deepfake,” the short video circulated on social media and was planted on a Ukrainian news website by hackers.
A New York Times article published on February 7, 2023, “The People Onscreen Are Fake. The Disinformation Is Real,” describes two broadcasters, purportedly anchors for a news outlet called Wolf News, who are not real people. They are computer-generated avatars created by artificial intelligence software.
“In one video, a news anchor with perfectly combed dark hair and a stubbly beard outlined what he saw as the United States’ shameful lack of action against gun violence. In another video, a female news anchor heralded China’s role in geopolitical relations at an international summit meeting. But something was off. Their voices were stilted and failed to sync with the movement of their mouths.”
The New York Times said the videos of the broadcasters were distributed by pro-China bot accounts on Facebook and Twitter, in the first known instance of “deepfake” video technology being used to create fictitious people as part of a state-aligned information campaign.
“This is the first time we’ve seen this in the wild,” said Jack Stubbs, the vice president of intelligence at Graphika, a research firm that studies disinformation.
Graphika said it discovered the deepfake videos while following social media accounts linked to a pro-China misinformation campaign known as “spamouflage.” In these campaigns, political spam accounts plant content online and then use other accounts that are part of a network to amplify the material across platforms.
AI software, which can easily be purchased online, can create “videos in a matter of minutes and subscriptions start at just a few dollars a month,” Stubbs said. “That makes it easier to produce content at scale.” Graphika linked the two fake Wolf News presenters to technology made by Synthesia, an AI company based above a clothing shop in London’s Oxford Circus. The five-year-old startup makes software for creating deepfake avatars. A customer simply needs to type up a script, which is then read by one of the digital actors made with Synthesia’s tools.
Victor Riparbelli, Synthesia’s co-founder and CEO, said those who used its technology to create the avatars discovered by Graphika had violated its terms of service, which bar the use of the company’s technology for “political, sexual, personal, criminal and discriminatory content.”
Although the use of deepfakes in the recently discovered pro-China disinformation campaign was ham-handed, it opens a new chapter in information warfare. The New York Times said: “In China, AI companies have been developing deepfake tools for more than five years. In a 2017 publicity stunt at a conference, the Chinese firm iFlytek made a deepfake video of the US president at the time, Donald Trump, speaking in Mandarin. IFlytek has since been added to a US blacklist that limits the sale of American-made technology for national security reasons.”
“Deepfake technology has the ability to create talking digital puppets. The AI software is sometimes used to distort public figures, like the Zelenskyy video announcing surrender. But the software can also create characters out of whole cloth, going beyond traditional editing software and expensive special effects tools used by Hollywood, blurring the line between fact and fiction to an extraordinary degree.”
With few laws to manage the spread of the technology, disinformation experts have long warned that deepfake videos could further sever people’s ability to discern reality from forgeries online. Those predictions have now become reality.
Experts said deepfakes have been used to create fake news, false pornographic videos and malicious hoaxes, usually targeting well-known people such as politicians and celebrities. Given people’s natural inclination to believe what they see on social media, deepfakes can “potentially distort democratic discourse; manipulate elections; erode trust in institutions; jeopardize public safety and national security; and damage reputations.”
Our lawmakers would do well to set clearer rules about how artificial intelligence, including deepfake technology, may be used in the country.