Pope Francis Expresses Concerns Over AI Deepfakes After the Viral ‘Balenciaga Pope’ Photo

You will probably never see the head of the Roman Catholic Church in a puffy white coat, but thanks to generative AI, conjuring such an image is as easy as typing a few prompts. While some may find this entertaining, the Pope himself has addressed the dangers of such capabilities.

Pope Francis (Photo: Vatican Media via Vatican Pool/Getty Images)

Pope Francis on Deepfakes

It has been a while since the "Balenciaga Pope" deepfake circulated on social media in March last year, when many people were fooled into thinking it was a real photo. Only now is the Pope addressing the matter and speaking out about the dangers of AI.

While an image of the Pope in a puffy white jacket might be harmless enough, the ease with which anyone can create convincing fake photos is what concerns many people. Bad actors have already taken advantage of AI tools to spread misinformation.

Addressing deepfakes in his message for the 58th World Day of Social Communications, the Pope wrote: "We need but think of the long-standing problem of disinformation in the form of fake news, which today can employ deepfakes," as reported by Ars Technica.

It is not just fake AI images that the church leader is concerned about. There has also been an uptick in circulating deepfake audio, and the Pope warned against AI tools that can "use a person's voice to say things which that person never said."

"The technology of simulation behind these programs can be useful in certain specific fields, but it becomes perverse when it distorts our relationship with others and with reality," the Pope continued.

Read Also: OpenAI Bans Political Chatbot Developer for Mimicking Presidential Candidate

The Dangers of Deepfakes

When the photo of the Pope in a puffy white jacket went viral, many admitted that they thought it was real. Given its quality, the generated image could easily pass for a genuine photograph taken with a camera.

Anyone without the expertise to distinguish AI-generated photos from real ones can be fooled, and unfortunately, only a few people know which small details give a fake away.

Just recently, a robocall in the voice of US President Joe Biden reached residents of New Hampshire, encouraging them not to vote. Experts say the robocall was most likely created with an AI voice-cloning tool.

The New Hampshire attorney general's office said the call appeared to be an unlawful attempt to suppress voters by discouraging them from writing in Biden's name in Tuesday's Democratic presidential primary, as reported by NBC News.

So far, investigators have not identified the creator of the calls or the AI tool used to make them. Bad actors need only a few audio samples to mimic someone's voice, and such samples are easy to acquire when the target is a politician with hours of public recordings.

While there are now tools that try to detect whether content is AI-generated, they are not as foolproof as one would hope. Generative models keep improving, which means the obvious signs that something is AI-generated may no longer be there.

Related: Deepfake Audio of Biden Tells Democrats Not to Vote, Alarms Watchdogs

