Experts Found That AI Tools Are Becoming More Racist As They Advance

AI tools are continuously being developed to make them safer across a range of applications and scenarios, but given the technology's complexity, complete safety is still a long way off. In some respects, the problem may even get worse before it gets better.


(Photo: Jonathan Raa/NurPhoto via Getty Images)

AI's Discriminatory Tendencies

Large language models have shown signs of discrimination in many cases, and developers try to fix each one as it surfaces. Those efforts still fall short, however: experts have found that not only do AI tools remain racist in some respects, the problem is actually getting worse.

Sure, AI companies have managed to eliminate overtly racist outputs and block prompts that explicitly request such content, but covert racism expressed through stereotypes remains. This can be harmful, especially when users are unaware of its potential effects.

For instance, AI technology is already being used to screen job applicants. Because companies have only ever addressed overt racial biases, AI systems can still react to other markers, such as dialect, including African American Vernacular English (AAVE), which is spoken by many Black Americans.

The paper detailing the findings points out that Black people who use AAVE speech are "known to experience racial discrimination in a wide range of contexts, including education, employment, housing, and legal outcomes."

Allen Institute for Artificial Intelligence researcher Valentin Hofmann found that AI models like OpenAI's ChatGPT and Google's Gemini were significantly more likely to assign AAVE speakers to lower-paying jobs because the models characterized them as "lazy" and "stupid," as per The Guardian.

There are many ways this can affect job applicants; even their social media posts could play a role in hiring decisions. "It's not unreasonable to think that the language model will not select the candidate because they used the dialect in their online presence," said Hofmann.

Even in hypothetical criminal cases, the AI models were more likely to recommend the death penalty for defendants who used AAVE in their court statements. While we are still far from using AI in the justice system, being aware of its flaws is a good starting point should it ever come to that.

Overcorrection in AI Models

Even as developers work to make AI less exclusionary and less racially biased, there is such a thing as overcorrecting. Google Gemini's problematic image generation and its historically inaccurate depictions proved as much.

Just a month ago, the AI chatbot faced criticism for what Google described as "missing the mark." When asked to generate images of Nazi-era German soldiers, Gemini produced images of soldiers who were also people of color, as reported by The Verge.

When asked to generate a photo of an American woman, the results were also either mostly or exclusively people of color. The same happened when a user tried to generate images of historical figures such as America's Founding Fathers.

A former Google employee even stated that it was "embarrassingly hard to get Google Gemini to acknowledge that white people exist." Google promptly took down the chatbot's image generation feature to fix the issue, and the feature is already back up.
