You can't put the genie back in the bottle when it comes to generative AI forever shaking our trust in photographs, but the tech industry has a responsibility to at least be as transparent as possible when these tools are used. To that end, Google has announced that starting next week, Google Photos will note when an image has been edited with the help of AI.
“Photos edited with tools like Magic Editor, Magic Eraser and Zoom Enhance already include metadata based on technical standards from the International Press Telecommunications Council (IPTC) to indicate that they have been edited using generative AI,” John Fisher, director of engineering for Google Photos, wrote in a blog post. “Now we're going a step further, making this information visible alongside information like the file name, location and backup status in the Photos app.”
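To make the IPTC approach concrete: the standard Fisher references defines a "digital source type" vocabulary, embedded in the image's XMP metadata, with dedicated values for AI-generated and AI-composited content. Below is a minimal sketch (not Google's implementation) that scans a file's raw bytes for those IPTC NewsCodes URIs; the file path is a placeholder, and a real tool would parse the XMP packet properly rather than string-matching.

```python
# Sketch: look for IPTC digital-source-type URIs that signal
# generative-AI involvement inside an image's embedded XMP metadata.
# The two URIs are real IPTC NewsCodes values; everything else here
# is illustrative.

AI_SOURCE_TYPES = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia",
}

def ai_edit_markers(data: bytes) -> set:
    """Return any AI-related digital-source-type URIs found in raw image bytes."""
    # XMP is stored as UTF-8 text inside the file, so a lossy decode
    # is enough for a simple substring scan.
    text = data.decode("latin-1", errors="ignore")
    return {uri for uri in AI_SOURCE_TYPES if uri in text}

if __name__ == "__main__":
    # "edited_photo.jpg" is a hypothetical example file.
    with open("edited_photo.jpg", "rb") as f:
        found = ai_edit_markers(f.read())
    print("AI markers:", found or "none")
```

Note that this is exactly the weakness mentioned later in the piece: because the signal lives in metadata, re-saving or stripping the file's XMP removes it entirely.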
You'll find the “AI Information” section in the image details view of Google Photos, both on the web and in the app.
These labels won't be strictly limited to generative AI, either. Google says it will also note when a “photo” combines elements from several different images, such as when people use the Pixel's Best Take and Add Me features. That's encouraging to see, though anyone determined to hide an AI edit can easily strip this metadata.
“This work is not over yet, and we will continue to gather feedback and evaluate additional solutions to add more transparency around AI edits,” Fisher wrote.
Until now, the metadata attached to Google's AI tools has been virtually invisible to end users. The lack of an obvious “this was made with AI” label in Google Photos was one of my concerns when I wreaked havoc with Magic Editor's Reimagine tool, which lets you add AI-generated objects to an image that were never present in the original scene. Both Google and Samsung allow you to do this with their respective AI tools. But Apple, which will roll out its first image-generation features with iOS 18.2, has said it is deliberately steering clear of photorealistic content. Apple's Craig Federighi said the company is “concerned” about AI casting doubt on whether photos are “indicative of reality.”