For almost 30 years, Oren Etzioni was among the most optimistic artificial intelligence researchers.
But in 2019, Dr. Etzioni, a professor at the University of Washington and founding executive director of the Allen Institute for AI, became one of the first researchers to warn that a new generation of A.I.-based forgery tools could accelerate the spread of misinformation online. And in the middle of last year, he said, he grew concerned that A.I.-generated deepfakes could sway a major election. In January he founded a nonprofit organization, TrueMedia.org, hoping to combat that threat.
On Tuesday, the organization launched free tools to identify digital misinformation, with a plan to put them in the hands of journalists, fact-checkers and anyone else trying to figure out what's real online.
The tools, available on the TrueMedia.org website to anyone approved by the nonprofit, are designed to detect fake and manipulated images, audio and video. They review links to media files and quickly determine whether they should be trusted.
Dr. Etzioni sees these tools as an improvement over the patchwork of defenses currently used to detect misleading or deceptive A.I. content. But in a year when billions of people around the world are set to vote in elections, he continues to paint a bleak picture of what lies ahead.
“I'm terrified,” he said. “There's a good chance we'll see a tsunami of misinformation.”
In just the first few months of the year, A.I. technologies helped create fake voice calls from President Biden, fake images and audio ads involving Taylor Swift, and an entire fake interview that appeared to show a Ukrainian official taking credit for a terrorist attack in Moscow. Detecting such misinformation is already difficult, and the tech industry continues to release increasingly powerful A.I. systems that will generate increasingly convincing deepfakes and make detection even harder.
Many artificial intelligence researchers warn that the threat is gaining strength. Last month, more than a thousand people, including Dr. Etzioni and several other prominent A.I. researchers, signed an open letter calling for laws that would hold developers and distributors of A.I. audio and visual services accountable if their technology is easily used to create harmful deepfakes.
At an event organized by Columbia University on Thursday, Hillary Clinton, the former secretary of state, interviewed Eric Schmidt, the former Google chief executive, who warned that videos, even fake ones, could "drive voting behavior, human behavior, moods, everything."
"I don't think we're prepared," Mr. Schmidt said. "This problem is going to get much worse over the next few years. Maybe not by November, but certainly in the next cycle."
The technology industry is well aware of the threat. Even as companies rush to advance generative A.I. systems, they are struggling to limit the damage these technologies can do. Anthropic, Google, Meta and OpenAI have announced plans to limit or label election-related uses of their A.I. services. In February, 20 tech companies, including Amazon, Microsoft, TikTok and X, signed a voluntary pledge to prevent misleading A.I. content from disrupting voting.
That could be a challenge. Companies often release their technologies as open-source software, meaning anyone is free to use and modify them without restriction. And experts say the technology used to create deepfakes, the product of enormous investment by many of the world's largest companies, will always outpace technology designed to detect disinformation.
Last week, during an interview with The New York Times, Dr. Etzioni showed how easy it is to create a deepfake. Using a service from CivAI, a sister nonprofit that draws on artificial intelligence tools readily available on the internet to demonstrate the dangers of these technologies, he instantly created photos of himself in prison, a place he has never been.
“When you see yourself being fooled, it's a lot scarier,” he said.
He later generated a deepfake of himself in a hospital bed, the kind of image he believes could sway an election if it were applied to President Biden or former President Donald J. Trump just before the vote.
TrueMedia tools are designed to detect fakes like these. More than a dozen startups offer similar technology.
But Dr. Etzioni, while touting the effectiveness of his group's tools, said no detector was perfect because they are driven by probabilities. Deepfake detection services have been tricked into declaring images of kissing robots and giant Neanderthals to be real photographs, raising concerns that such tools could further damage society's trust in facts and evidence.
When Dr. Etzioni fed TrueMedia's tools a known deepfake of Mr. Trump sitting on a porch with a group of young Black men, they labeled it "highly suspicious," their highest level of confidence. When he uploaded another well-known deepfake of Mr. Trump with blood on his fingers, the tools were not sure whether it was real or fake.
“Even using the best tools, you can't be sure,” he said.
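The labeling behavior described above suggests a simple thresholding scheme: a probabilistic detector estimates how likely a file is to be synthetic, and the interface maps score ranges to verdicts. The function name, labels and cutoffs below are hypothetical, a minimal sketch of how such scores might be bucketed rather than a description of TrueMedia's actual system:

```python
def label_from_score(p_fake: float) -> str:
    """Map a detector's probability that media is A.I.-generated to a verdict.

    Thresholds are illustrative assumptions, not TrueMedia's real values.
    """
    if not 0.0 <= p_fake <= 1.0:
        raise ValueError("p_fake must be a probability between 0 and 1")
    if p_fake >= 0.9:
        return "highly suspicious"      # strong evidence of manipulation
    if p_fake >= 0.6:
        return "suspicious"             # likely manipulated, but less certain
    if p_fake >= 0.4:
        return "uncertain"              # the detector cannot commit either way
    return "little evidence of manipulation"
```

A scheme like this makes the article's point concrete: because the underlying score is a probability, some genuine images will land in the "suspicious" buckets and some fakes in the middle "uncertain" band, so no verdict is ever a guarantee.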
The Federal Communications Commission recently banned A.I.-generated robocalls. Some companies, including OpenAI and Meta, are now watermarking A.I.-generated images. And researchers are exploring additional ways to separate the real from the fake.
The University of Maryland is developing a cryptographic system based on QR codes to authenticate unaltered live recordings. A study published last month asked dozens of adults to breathe, swallow and think while speaking so that their speech-pause patterns could be compared with the rhythms of cloned audio.
But like many other experts, Dr. Etzioni warns that image watermarks are easily removed. And though he has dedicated his career to fighting deepfakes, he acknowledges that detection tools will struggle to keep pace with new generative A.I. technologies.
Since he created TrueMedia.org, OpenAI has unveiled two new technologies that promise to make his job even harder. One can recreate a person's voice from a 15-second recording. Another can generate full-motion videos that look like something out of a Hollywood movie. OpenAI is not yet sharing those tools with the public, as it works to understand the potential dangers.
(The Times has sued OpenAI and its partner, Microsoft, over allegations of copyright infringement involving artificial intelligence systems that generate text.)
Ultimately, Dr. Etzioni said, combating the problem will require broad cooperation among government regulators, the companies creating A.I. technologies and the tech giants that control the web browsers and social media networks where misinformation spreads. He said, though, that the likelihood of that happening before the fall elections was slim.
“We're trying to give people the best technical assessment of what's in front of them,” he said. “They still have to decide if it's real.”