Google’s Gemini Generates Racially Inaccurate Historical Depictions

We have yet to see an AI image-generation tool that does not produce racially biased content. Even with one of the most advanced AI models yet, Google's Gemini, the racial issue persists.


Gemini's Inaccurate Image Generation

If certain AI tools worsen racial stereotypes in the images they create, Gemini appears to have the opposite problem. In an attempt to eliminate that flaw, the tool seems to have overcorrected what should be a straightforward depiction of a particular race.

For instance, asking the tool to generate German soldiers from the Nazi era produced images of various people of color, as reported by The Verge. While diversity plays an important part in making an AI tool inclusive, the tool still needs to work as intended.

Google said that it is aware of the "inaccuracies in some historical image generation depictions" and that it is working to fix them immediately. The search engine giant addressed the issue on X.

The post on social media states that "Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here."

A former Google employee even said that it's "embarrassingly hard to get Google Gemini to acknowledge that white people exist," noting that the images generated for prompts like "an American woman" were either mostly or entirely people of color.

It might still be hard for the chatbot to understand certain historical contexts, such as why Nazi Germany's army would have consisted of white soldiers rather than a diverse group, or why politicians in the 1800s were rarely people of color.


General Racial Bias in AI

What's important to understand is that developers do not intend for AI models and tools to behave this way. Whatever the outcome, artificial intelligence learns from the training data it was given, meaning its behavior and knowledge ultimately come from a wide variety of people.

For instance, some AI models are trained on social media posts and online forums to achieve a more conversational tone, but given the vast amount of data involved, it is neither easy nor practical to review every post to determine which ones might be racist or prejudiced.

With some of that content reflecting such perspectives, the bias can carry over into the text or images an AI tool generates. Unfortunately, text and image generation are not the only functions affected.

Facial recognition has also been heavily criticized for failing to distinguish between people with darker complexions. This can be a serious problem when facial recognition is used to aid criminal investigations.

According to Interesting Engineering, a study found that facial recognition achieved a 99% accuracy rate for white male faces, while the error rate for women and people of color reached as high as 35%.

