Google is attempting to be more transparent about whether content was created or modified using generative AI (GAI) tools. After joining the Coalition for Content Provenance and Authenticity (C2PA) as a steering committee member earlier this year, Google has revealed how it will begin implementing the group's digital watermarking standard.
The coalition, which alongside Google includes Amazon, Meta, and OpenAI, has spent the past few months working out how to improve the technology used to watermark content created or modified by GAI. Google says it helped develop the latest version of Content Credentials, a technical standard used to protect metadata detailing how an asset was created, as well as what was modified and how. According to the company, the current version of Content Credentials is more secure and tamper-proof thanks to stricter validation methods.
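For a sense of what Content Credentials actually carries, here is a rough sketch of the manifest data a C2PA-signed image can embed, written as a Python dict. The field names loosely follow the shape of the public C2PA specification and the JSON emitted by the open-source c2patool; every concrete value below is an illustrative assumption, not Google's data.

```python
# Illustrative shape of a C2PA manifest store, loosely modeled on the public
# C2PA spec / c2patool JSON output. All concrete values are made up.
example_manifest_store = {
    "active_manifest": "urn:uuid:1234",           # points at the newest manifest
    "manifests": {
        "urn:uuid:1234": {
            "claim_generator": "ExampleEditor/1.0",  # tool that produced the claim
            "assertions": [
                {
                    # The "actions" assertion records how the asset was
                    # created and/or modified.
                    "label": "c2pa.actions",
                    "data": {
                        "actions": [
                            {
                                "action": "c2pa.created",
                                # IPTC digital source type: this value is the
                                # standard marker for generative-AI output.
                                "digitalSourceType": (
                                    "http://cv.iptc.org/newscodes/"
                                    "digitalsourcetype/trainedAlgorithmicMedia"
                                ),
                            }
                        ]
                    },
                },
            ],
            # Cryptographic signature details; stricter validation of this
            # section is what makes the latest version more tamper-evident.
            "signature_info": {
                "issuer": "Example CA",
                "time": "2024-09-17T00:00:00Z",
            },
        }
    },
}
```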
In the coming months, Google will begin incorporating the current version of Content Credentials into some of its core products, which means it will soon be easier to tell whether an image in Google search results was created or modified using GAI. If an image carries C2PA metadata, you should be able to find out what impact GAI had on it through the About this image tool, which is also available in Google Images, Lens, and Circle to Search.
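To make concrete how a surface like search results could use that metadata, here is a minimal sketch that classifies an image's GAI involvement from a parsed manifest of the shape above. The heuristic (keying off the c2pa.actions assertion and the IPTC trainedAlgorithmicMedia source type) is our assumption for illustration; Google hasn't published how its tools interpret the data.

```python
from typing import Optional

GENAI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def classify_gai_impact(manifest_store: Optional[dict]) -> str:
    """Roughly classify GAI involvement from a parsed C2PA manifest store.

    Returns one of "no metadata", "created with GAI", "edited with GAI",
    or "no GAI recorded". Hypothetical heuristic, not Google's actual logic.
    """
    if not manifest_store:
        return "no metadata"
    active = manifest_store["manifests"][manifest_store["active_manifest"]]
    for assertion in active.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion["data"].get("actions", []):
            if action.get("digitalSourceType") == GENAI_SOURCE_TYPE:
                # c2pa.created => the whole asset was generated;
                # anything else (e.g. c2pa.edited) => GAI modified it.
                if action.get("action") == "c2pa.created":
                    return "created with GAI"
                return "edited with GAI"
    return "no GAI recorded"
```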
The company is also looking into using C2PA metadata to tell YouTube viewers when footage was captured with a real camera. More information on this is expected later this year.
Google also plans to use C2PA metadata in its ad systems. It hasn't shared many details yet, saying only that it will use “C2PA signals to inform how we enforce key policies” and that it will do so gradually.
Of course, the effectiveness of all this depends on whether companies such as camera manufacturers and GAI tool developers actually adopt the C2PA watermarking system. The approach also won't stop someone from simply stripping an image's metadata, which could make it harder for systems like Google's to detect any use of GAI.
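In code terms, a stripped image simply yields no manifest, so a C2PA-based checker can only report "unknown provenance" rather than flag GAI use. Continuing the hypothetical sketch above:

```python
# A signed original surfaces its provenance, but a screenshot or re-exported
# copy that dropped the embedded manifest degrades to "no metadata": absence
# of Content Credentials is not evidence that GAI wasn't involved.
print(classify_gai_impact(example_manifest_store))  # -> "created with GAI"
print(classify_gai_impact(None))                    # -> "no metadata"
```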
Meanwhile, throughout this year, we've seen Meta talk about revealing whether images on Facebook, Instagram, and Threads were created with GAI. The company has just changed its policy to make labels less visible on images that were merely edited with AI tools. Starting this week, if C2PA metadata indicates that someone used (for example) Photoshop's AI tools to retouch a genuine photo, the “AI info” label will no longer appear in the foreground; instead, it will be tucked away in the post's menu.