Explicit Deepfake Images Remain a Problem, but Barely Anyone Is Talking About It

Explicit and sexual deepfake images of real people remain a rampant problem online, yet efforts to stop them have barely moved forward.

Despite several instances of people, often women and children, falling victim to the nearly unmanageable technology, awareness and advocacy efforts to protect them are often short-lived.

Taylor Swift AI Deepfakes Are Only the Tip of the Iceberg

Although the problem has existed for quite some time, deepfakes' impact on women and children became glaringly apparent after explicit AI-generated images of Taylor Swift were uploaded to social media.

The explicit fake images showed the American pop singer being sexually harassed by men depicted as San Francisco 49ers fans.

The images and the accounts sharing them quickly drew the internet's attention as Swift's fans demanded that online platforms, as well as AI firms, prevent the blatant sexualization of women through their technology.

The ensuing controversy has brought to light past cases of deepfakes being used to harass people, including one that ended in the victim's suicide, and has renewed calls for better guidelines against such abuse.

Explicit Deepfakes: Still a Problem on Social Media

Despite repeated instances of AI endangering children and women online, actions to resolve the problem may still be years away from implementation.

Although the US government has pledged to act on the issue following the AI-generated images of Swift, a concrete response remains locked behind deliberations amid increased lobbying around the technology.

As of writing, the US has only the presidential executive order issued in October, which proposes safety measures and guidelines for handling AI growth in the country. Actual laws and standards to regulate AI have yet to be finalized.

AI firms, on the other hand, have been rolling out new policies to prevent their technologies from being abused for sexual exploitation.

Just last January, Reuters reported that the US government had received 88.3 million reports of AI-generated child abuse content since the technology became widely accessible in 2023.

Some of these reports reportedly even came from AI firms themselves, as users are still able to bypass the safety measures meant to block such content.

In response, the US government is planning to require tech companies to disclose more information about the development of their new AI projects so that regulators can look for vulnerabilities.

At the moment, however, the proposal remains on paper, as the agencies tasked with its implementation have yet to finalize the guidelines and rules for how it will be enforced.

Most AI companies, like OpenAI, conduct safety testing and development behind closed doors, leaving flaws and zero-day vulnerabilities open for hackers and exploiters to abuse once discovered.
