Google, whose work in artificial intelligence helped make A.I.-generated content far easier to create and spread, now wants to ensure that such content is also traceable.
The tech giant said Thursday that it would join an effort to develop credentials for digital content, a sort of "nutrition label" that identifies when and how a photograph, video, audio clip or other file was produced or altered, including with A.I. The company will collaborate with companies such as Adobe, the BBC, Microsoft and Sony to refine the technical standards.
The announcement follows a similar pledge made Tuesday by Meta, which, like Google, has enabled the easy creation and distribution of artificially generated content. Meta said it would promote standardized labels identifying such material.
Google, which has spent years pouring money into its artificial intelligence initiatives, said it would explore how to incorporate digital certification into its own products and services, although it did not specify its timeline or scope. Its Bard chatbot is connected to some of the company's most popular consumer services, such as Gmail and Docs. On YouTube, which is owned by Google and will be included in the digital credential effort, users can quickly find videos featuring realistic digital avatars pontificating about current events with voices powered by text-to-speech services.
Knowing where online content originates, and how it has been altered, is a high priority for lawmakers and technology watchdogs in 2024, when billions of people will vote in major elections around the world. After years of misinformation and polarization, realistic images and audio produced by artificial intelligence, along with unreliable A.I. detection tools, have made people doubt the authenticity of what they see and hear on the internet even more.
Equipping digital files with a verified record of their history could make the digital ecosystem more trustworthy, according to supporters of a universal certification standard. Google will join the steering committee of one such group, the Coalition for Content Provenance and Authenticity, or C2PA. The C2PA standards have been supported by news organizations such as The New York Times, as well as by camera manufacturers, banks and advertising agencies.
Laurie Richardson, Google's vice president of trust and safety, said in a statement that the company hoped its work would "provide important context for people, helping them make more informed decisions." She highlighted Google's other efforts to give users more information about the online content they encounter, including labeling A.I. material on YouTube and offering details about images in Search.
Efforts to attach credentials to metadata (the underlying information embedded in digital files) are not foolproof.
OpenAI said this week that its A.I. image-generation tools would soon add watermarks to images in line with C2PA standards. Starting Monday, the company said, images generated by its online chatbot, ChatGPT, and its stand-alone image-generation technology, DALL-E, will include both a visual watermark and hidden metadata designed to identify them as created by artificial intelligence. The move, however, is "not a silver bullet for addressing provenance issues," OpenAI said, adding that labels "can be easily removed, whether accidentally or intentionally."
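OpenAI's caveat is easy to demonstrate, because provenance labels stored in metadata live in a layer that is separable from the image itself. The sketch below is an illustration in plain Python, not the actual C2PA format (which uses cryptographically signed manifests): it builds a minimal PNG carrying a hypothetical provenance label in an ancillary tEXt chunk, then strips every non-essential chunk, leaving the pixel data intact but the label gone.

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    # A PNG chunk: 4-byte big-endian length, 4-byte type, payload,
    # then a CRC-32 computed over the type and payload.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_png_with_label(label: str) -> bytes:
    # A 1x1 grayscale PNG carrying a made-up provenance label in an
    # ancillary tEXt chunk (a stand-in for C2PA-style metadata).
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
    idat = zlib.compress(b"\x00\xff")  # filter byte + one white pixel
    text = b"Source\x00" + label.encode()
    return (b"\x89PNG\r\n\x1a\n"
            + chunk(b"IHDR", ihdr)
            + chunk(b"tEXt", text)
            + chunk(b"IDAT", idat)
            + chunk(b"IEND", b""))

def strip_metadata(png: bytes) -> bytes:
    # Re-emit only the chunks every decoder requires (IHDR, IDAT, IEND).
    # Ancillary chunks like tEXt, and any label they carry, vanish.
    out, pos = [png[:8]], 8
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        end = pos + 12 + length  # length + type + payload + CRC
        if ctype in (b"IHDR", b"IDAT", b"IEND"):
            out.append(png[pos:end])
        pos = end
    return b"".join(out)

labeled = make_png_with_label("generated-by-ai")
clean = strip_metadata(labeled)
print(b"generated-by-ai" in labeled)  # True
print(b"generated-by-ai" in clean)    # False: image still decodes, label gone
```

Real C2PA credentials are signed, so tampering is detectable whenever the manifest is present; the weakness OpenAI describes is that nothing prevents the manifest from being removed outright.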
(The New York Times Company is suing OpenAI and Microsoft for copyright infringement, accusing the tech companies of using Times articles to train A.I. systems.)
There is "a shared sense of urgency" to reinforce trust in digital content, according to a blog post last month from Andy Parsons, senior director of Adobe's Content Authenticity Initiative. The company released A.I. tools last year, including its Firefly A.I. art-generation software and a Photoshop tool known as Generative Fill, which uses A.I. to expand a photo beyond its borders.
“The stakes have never been higher,” Parsons wrote.
Cade Metz contributed reporting.