Google has apologized for what it describes as “inaccuracies in some historical image generation depictions” with its Gemini AI tool, saying its attempts to create a “wide range” of results missed the mark. The statement follows criticism that it depicted specific white figures (such as the Founding Fathers of the United States) or groups such as Nazi-era German soldiers as people of color, possibly as an overcorrection for long-standing racial bias problems in AI.
“We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,” said Google’s statement, posted this afternoon on X. “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
Google began offering image generation through its Gemini (formerly Bard) artificial intelligence platform earlier this month, matching offerings from competitors like OpenAI. Over the past few days, however, social media posts have questioned whether it fails to produce historically accurate results in an attempt at racial and gender diversity.
As the Daily Dot chronicles, the controversy has been promoted largely, though not exclusively, by right-wing figures attacking a tech company that is perceived as liberal. Earlier this week, a former Google employee posted on X a series of Gemini queries for images such as “a Swedish woman” or “an American woman.” The results seemed to overwhelmingly or exclusively show AI-generated people of color. (Of course, the places listed do have women of color living in them, and none of the AI-generated women exist in any country.) The criticism was taken up by right-wing accounts that requested images of historical groups or figures such as the Founding Fathers and purportedly got overwhelmingly non-white AI-generated results. Some of these accounts positioned Google’s results as part of a conspiracy to avoid depicting white people, and at least one used a coded antisemitic reference to assign blame.
Google did not reference specific images that it considered errors; in a statement to The Verge, it reiterated the contents of its post on X. But it is plausible that Gemini has made a general attempt to boost diversity because of a chronic lack of it in generative AI. Image generators are trained on large corpora of pictures and written captions to produce the “best” fit for a given prompt, which means they are often prone to amplifying stereotypes. A Washington Post investigation last year found that prompts like “a productive person” resulted in pictures of entirely white and almost entirely male figures, while a prompt for “a person at social services” uniformly produced what looked like people of color. It is a continuation of trends that have appeared in search engines and other software systems.
Some of the accounts that criticized Google defended its core goals. “It is good to portray diversity ** in certain cases **,” noted one person who posted the image of racially diverse 1940s German soldiers. “The stupid move here is that Gemini isn’t doing it in a nuanced way.” And while entirely white-dominated results for something like “a German soldier from 1943” would make historical sense, that is much less true for prompts like “an American woman,” where the question is how to represent a diverse real-life group in a small batch of invented portraits.
For now, Gemini appears to simply refuse some image generation tasks. It wouldn’t generate an image of Vikings for one Verge reporter, although I was able to get a response. On desktop, it resolutely refused to give me images of German soldiers or officials from Germany’s Nazi period or to offer an image of “a 19th-century American president.”
But some historical requests still end up misrepresenting the past. A colleague managed to get the mobile app to deliver a version of the “German soldier” prompt, which exhibited the same problems described on X.
And while a query for images of “the Founding Fathers” returned group shots of almost exclusively white men who vaguely resembled real figures like Thomas Jefferson, a request for “a 19th-century U.S. senator” returned a list of results Gemini promoted as “diverse,” including what appeared to be Black and Native American women. (The first female senator, a white woman, served in 1922.) It’s a response that ends up erasing a real history of racial and gender discrimination: “inaccuracy,” as Google puts it, is about right.
Additional reporting by Emilia David