Last month at the World Economic Forum in Davos, Switzerland, Nick Clegg, president of global affairs at Meta, called a nascent effort to detect artificially generated content “the most urgent task” facing the tech industry today.
On Tuesday, Clegg proposed a solution. Meta said it would promote technological standards that companies across the industry could use to recognize markers in photo, video and audio material that would signal that the content was generated using artificial intelligence.
The standards could allow social media companies to quickly identify A.I.-generated content that has been posted to their platforms and to add a label to that material. If widely adopted, the standards could help identify A.I.-generated content from companies like Google, OpenAI, Microsoft, Adobe, Midjourney and others that offer tools that let people create artificial posts quickly and easily.
“While this is not a perfect answer, we didn't want the perfect to be the enemy of the good,” Clegg said in an interview.
He added that he hoped this effort would be a rallying cry for companies across the industry to adopt standards for detecting and flagging content as artificial, so it would be easier for everyone to recognize it.
As the United States enters a presidential election year, industry observers believe that artificial intelligence tools will be widely used to spread false content to misinform voters. Over the past year, people have used A.I. to create and spread fake videos of President Biden making false or inflammatory statements. The New Hampshire attorney general's office is also investigating a series of robocalls that appeared to use an A.I.-generated Biden voice urging people not to vote in a recent primary.
Meta, which owns Facebook, Instagram, WhatsApp and Messenger, is in a unique position because it is developing technology to spur widespread consumer adoption of A.I. tools while also being the world's largest social network capable of distributing A.I.-generated content. Clegg said Meta's position gave it particular insight into both the generation and the distribution sides of the problem.
Meta is focusing on a set of technological specifications known as the IPTC and C2PA standards, which record in a piece of content's metadata whether that digital media is authentic. Metadata is the underlying information embedded in digital content that provides a technical description of it. Both standards are already widely used by news organizations and photographers to describe photos or videos.
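To make the idea concrete, here is a minimal, hypothetical sketch of what such a provenance record could look like. The field names loosely echo the C2PA and IPTC vocabularies ("c2pa.actions", "digitalSourceType"), but the tool name and the flat JSON layout are illustrative assumptions; real C2PA manifests are cryptographically signed structures embedded in the file itself rather than a sidecar JSON document.

```python
import json

# Illustrative sketch only: a simplified, hypothetical provenance record
# in the spirit of the C2PA standard. Real manifests are signed binary
# structures with a much richer schema than shown here.
provenance = {
    "claim_generator": "ExampleImageTool/1.0",  # hypothetical tool name
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "created",
                        # IPTC value indicating media produced by a
                        # trained A.I. model rather than a camera.
                        "digitalSourceType": "trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
}

# In practice this record would be embedded in the file's own metadata
# (e.g., XMP/JUMBF), not printed; shown here only for readability.
print(json.dumps(provenance, indent=2))
```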
Adobe, which makes the Photoshop editing software, and many other technology and media companies have spent years pressing their peers to adopt the C2PA standard and have formed the Content Authenticity Initiative. The initiative is a partnership among dozens of companies, including The New York Times, to combat misinformation and “add a tamper-proof layer of provenance to all types of digital content, starting with photos, videos and documents,” according to the initiative.
Companies offering A.I. generation tools could add the standards to the metadata of the videos, photos or audio files those tools helped create. That would signal to social networks like Facebook, Twitter and YouTube that such content was artificial when it was uploaded to their platforms. Those companies, in turn, could add labels noting that those posts were generated by A.I. to inform users who saw them on social media.
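As a sketch of the platform side of that workflow, the hypothetical function below inspects a provenance record like the one above and decides whether an upload should carry an A.I. label. The schema and names are assumptions carried over from the earlier illustrative example, not any platform's actual code.

```python
# IPTC digital source types that indicate algorithmically generated media.
AI_SOURCE_TYPES = {
    "trainedAlgorithmicMedia",
    "compositeWithTrainedAlgorithmicMedia",
}

def should_label_as_ai(provenance: dict) -> bool:
    """Return True if the metadata declares the content A.I.-generated."""
    for assertion in provenance.get("assertions", []):
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") in AI_SOURCE_TYPES:
                return True
    return False

# Example: a record like the one sketched earlier would be flagged.
record = {
    "assertions": [{
        "label": "c2pa.actions",
        "data": {"actions": [{"action": "created",
                              "digitalSourceType": "trainedAlgorithmicMedia"}]},
    }]
}
print(should_label_as_ai(record))  # True
```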
Meta and others also require users who post A.I. content to disclose whether they have done so when uploading it to the companies' apps. Failing to do so results in penalties, though the companies have not detailed what those penalties might be.
Clegg also said that if the company determined that a digitally created or altered post “creates a particularly high risk of materially misleading the public on a matter of importance,” Meta could add a more prominent label to the post to give the public more information and context about its origin.
A.I. technology is advancing rapidly, and researchers are racing to keep up by developing tools that can detect fake content online. Though companies like Meta, TikTok and OpenAI have developed ways to flag such content, technologists have quickly found ways to circumvent those tools. Artificially generated video and audio have proved even more difficult to detect than A.I.-generated photos.
(The New York Times Company is suing OpenAI and Microsoft for copyright infringement over their use of Times articles to train artificial intelligence systems.)
“Bad actors will always try to get around any standard we create,” Clegg said. He described the technology as a “sword and shield” for the industry.
Part of that difficulty stems from the fragmented way tech companies are approaching the problem. Last fall, TikTok announced a new policy that would require its users to add labels to videos or photos they uploaded that were created using A.I. YouTube announced a similar initiative in November.
Meta's new proposal would try to tie some of those efforts together. Other industry efforts, like the Partnership on A.I., have brought together dozens of companies to discuss similar solutions.
Clegg said he hoped more companies would agree to participate in the standard, especially ahead of the presidential election.
“We feel particularly strongly that during this election year, it would not be justified to wait for all the pieces of the puzzle to fall into place before acting,” he said.