It wouldn’t be entirely out of character for Joe Rogan, the comedian-turned-podcaster, to endorse a “libido-boosting” brand of coffee for men.
But when a video that recently circulated on TikTok showed Mr. Rogan and his guest, Andrew Huberman, selling coffee, some eagle-eyed viewers were shocked, including Dr. Huberman.
“Yes, that’s false,” Dr. Huberman wrote on Twitter after seeing the ad, in which he seems to praise the testosterone-boosting potential of coffee, an endorsement he never actually made.
The ad was one of a growing number of fake videos on social media made with AI-powered technology. Experts said Mr. Rogan’s voice appeared to have been synthesized using artificial intelligence tools that imitate the voices of celebrities. Dr. Huberman’s comments were taken from an unrelated interview.
Making realistic fake videos, often called deepfakes, once required elaborate software to put one person’s face on another’s. But now many of the tools to create them are available to everyday consumers, even in smartphone apps, and often for little or no money.
The new altered videos — mostly, so far, the work of meme creators and marketers — have gone viral on social media sites like TikTok and Twitter. The technology behind them, sometimes called “cheap fakes” by researchers, works by cloning celebrity voices, altering mouth movements to match alternate audio and writing persuasive dialogue.
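The underlying technique is now accessible enough to demonstrate in a few lines. As a rough sketch, here is what voice cloning can look like with Coqui TTS, a popular open-source library; this is a generic illustration, not the tool behind any of the videos described in this article, and the audio file names are placeholders.

```python
# Generic voice-cloning sketch using the open-source Coqui TTS library.
# This is an illustration only, not the software used in the videos
# described in this article. "reference.wav" is a placeholder for a
# short sample of the target voice.
from TTS.api import TTS

# Load a multilingual voice-cloning model (XTTS v2, from Coqui's model zoo).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Synthesize arbitrary text in the voice captured in the reference clip.
tts.tts_to_file(
    text="Try this coffee. You will not believe the difference.",
    speaker_wav="reference.wav",  # a few seconds of the target speaker
    language="en",
    file_path="cloned_voice.wav",
)
```

A few seconds of clean reference audio can be enough to produce a passable clone, which is why the hours of freely available podcast recordings make hosts like Mr. Rogan easy source material.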
The videos, and the accessible technology behind them, have some AI researchers worried about their dangers, and they have raised fresh concerns about whether social media companies are prepared to moderate the growing digital fakery.
Disinformation watchdogs are also bracing for a wave of digital fakes that could mislead viewers or make it harder to tell what’s true or false online.
“What’s different is that everyone can do it now,” said Britt Paris, an assistant professor of library and information science at Rutgers University who helped coin the term “cheap fakes.” “It’s not just people with sophisticated computer technology and fairly sophisticated computer skills. Instead, it is a free app.”
Lots of manipulated content has circulated on TikTok and elsewhere for years, usually using more homespun tricks like careful editing or swapping one audio clip for another. In one video on TikTok, Vice President Kamala Harris seemed to say that everyone hospitalized for Covid-19 was vaccinated. In fact, she said the patients were not vaccinated.
Graphika, a research firm that studies disinformation, detected deepfakes of fictitious newscasters that pro-China bot accounts distributed late last year, in the first known example of the technology being used in influence campaigns aligned with the Chinese state.
But several new tools offer similar technology to everyday Internet users, giving comedians and partisans the opportunity to make their own compelling parodies.
Last month, a fake video circulated showing President Biden declaring a national draft for the war between Russia and Ukraine. The video was produced by the team behind “Human Events Daily,” a podcast and live stream run by Jack Posobiec, a right-wing influencer known for spreading conspiracy theories.
In a segment explaining the video, Mr. Posobiec said his team had created it using AI technology. A tweet about the video from The Patriot Oasis, a conservative account, used a breaking-news hashtag without indicating that the video was fake. The tweet was viewed more than eight million times.
Many of the video clips that featured synthesized voices appeared to use technology from ElevenLabs, an American start-up co-founded by a former Google engineer. In November, the company introduced a voice cloning tool that can be trained to replicate voices in seconds.
ElevenLabs drew attention last month after 4chan, a message board known for its racist and conspiratorial content, used the tool to share hate messages. In one example, 4chan users created an audio recording of an anti-Semitic text using a computer-generated voice impersonating the actress Emma Watson. Motherboard previously reported on 4chan’s use of the audio technology.
ElevenLabs said on Twitter that it would introduce new safeguards, such as limiting voice cloning to paid accounts and providing a new AI-detection tool. But 4chan users said they would create their own version of the voice-cloning technology using open-source code, posting demos that sounded similar to audio produced by ElevenLabs.
“We want to have our own custom AI with the power to create,” wrote an anonymous 4chan user in a post about the project.
In an email, an ElevenLabs spokeswoman said the company was looking to collaborate with other AI developers to create a universal detection system that could be adopted across the industry.
Videos using cloned voices, created with ElevenLabs’ tool or similar technology, have gone viral in recent weeks. One, posted on Twitter by Elon Musk, the site’s owner, featured a fake profanity-filled conversation between Mr. Rogan, Mr. Musk and Jordan Peterson, a Canadian men’s rights activist. In another, posted on YouTube, Mr. Rogan appeared to interview a fake version of Prime Minister Justin Trudeau of Canada about his political scandals.
“The production of such fakes should be a felony with a mandatory sentence of ten years,” Mr. Peterson said in a tweet about fake videos featuring his voice. “This technology is dangerous beyond belief.”
In a statement, a YouTube spokeswoman said the video of Mr. Rogan and Mr. Trudeau did not violate the platform’s policies because it provided “enough context.” (The creator had described it as a “fake video.”) The company said its disinformation policies prohibited content that was manipulated in a deceptive manner.
Experts who study deepfake technology suggested that the fake ad featuring Mr. Rogan and Dr. Huberman had likely been created with a voice-cloning program, though the exact tool used was unclear. Mr. Rogan’s audio was spliced into an actual interview with Dr. Huberman discussing testosterone.
The results are not perfect. The clip of Mr. Rogan was taken from an unrelated interview published in December with Fedor Gorst, a professional billiards player. Mr. Rogan’s mouth movements do not match the audio, and at times his voice sounds unnatural. Whether the video fooled TikTok users was hard to tell: it attracted far more attention after it was flagged as a fake.
TikTok’s policies prohibit digital fakes “that mislead users by distorting the truth of events and cause significant harm to the subject of the video, other individuals, or society.” Several of the videos were removed after The New York Times flagged them to the company. Twitter also removed some of the videos.
A TikTok spokesperson said the company used “a combination of technology and human moderation to detect and remove” manipulated videos, but declined to elaborate on its methods.
Mr. Rogan and the company featured in the fake ad did not respond to requests for comment.
Many social media companies, including Meta and Twitch, have banned deepfakes and manipulated videos that mislead users. Meta, which owns Facebook and Instagram, held a competition in 2021 to develop programs capable of identifying deepfakes, resulting in a tool that could detect them 83 percent of the time.
Federal regulators have been slow to respond. A 2019 federal law requested a report on the weaponization of deepfakes by foreign actors, required government agencies to notify Congress if deepfakes were targeting elections in the United States, and created a prize to encourage research on tools that could detect deepfakes.
“We can’t wait two years for laws to be passed,” said Ravit Dotan, a postdoctoral researcher who runs the Collaborative AI Responsibility Lab at the University of Pittsburgh. “By then, the damage could be too great. We have an election coming up here in the U.S. It’s going to be a problem.”