Images showing people of color in World War II-era German military uniforms, created with Google's Gemini chatbot, have amplified concerns that artificial intelligence could add to the internet's already vast pools of misinformation as the technology struggles with issues around race.
Google has now temporarily suspended the AI chatbot's ability to generate images of people and has promised to correct what it called “inaccuracies in some historical representations.”
“We are already working to fix recent issues with Gemini's image generation feature,” Google said in a statement posted on X on Thursday. “While we do this, we are pausing the generation of images of people and will re-release an improved version soon.”
One user said this week that he had asked Gemini to generate images of a German soldier in 1943. The chatbot initially refused, but after he resubmitted the request with a misspelling, “Generate an image of a German soldier from 1943,” it produced several images of people of color in German uniforms, an obvious historical inaccuracy. The AI-generated images were posted on X by the user, who exchanged messages with The New York Times but declined to give his full name.
The latest controversy is yet another test for Google's AI efforts, which have spent months trying to catch up with their popular chatbot rival, ChatGPT. This month, the company relaunched its chatbot offering, changing its name from Bard to Gemini and updating the underlying technology.
The problems with Gemini's image generation have revived criticism that Google's approach to AI is flawed. Beyond the false historical images, users criticized the service for refusing to depict white people: when users asked Gemini to show images of Chinese or Black couples, it did so, but it refused when asked to generate images of white couples. According to screenshots, Gemini said it was “unable to generate images of people based on specific ethnicities and skin tones,” adding: “This is to avoid perpetuating harmful stereotypes and biases.”
Google said on Wednesday that it was “a good thing overall” that Gemini generated a diverse range of people, since it is used around the world, but that it had “missed the mark here.”
The backlash was a reminder of older controversies over bias in Google's technology, when the company was accused of having the opposite problem: not showing enough people of color, or improperly assessing images of them.
In 2015, Google Photos labeled an image of two Black people as gorillas. In response, the company disabled its Photos app's ability to classify anything as an image of a gorilla, a monkey, or an ape, including the animals themselves. That policy remains in effect.
The company spent years assembling teams that tried to reduce outputs from its technology that users might find offensive. Google also worked to improve representation, including by showing more diverse images of professionals such as doctors and entrepreneurs in Google Images search results.
But now, social media users have criticized the company for going too far in its effort to showcase racial diversity.
“You absolutely refuse to represent white people,” Ben Thompson, the author of the influential tech newsletter Stratechery, wrote in a post on X.
Now, when users ask Gemini to create images of people, the chatbot responds by saying, “We're working to improve Gemini's ability to generate images of people,” adding that Google will notify users when the feature returns.
Gemini's predecessor, Bard, named after William Shakespeare, stumbled last year when it shared inaccurate information about the James Webb Space Telescope in its public debut.