YouTube Creators Need to Disclose Whether Videos Are AI-Generated or Manipulated

Generative AI tools have reached the point where they can create realistic images and videos, so much so that the untrained eye can easily mistake generated or manipulated content for the real thing. To prevent that confusion, YouTube now requires its creators to disclose whether AI was used in making their videos.

(Photo: Mehmet Futsi/Anadolu via Getty Images)

Disclosure for AI Use

This may come as good news to creators who have already adopted AI tools to edit or generate videos, since YouTube is not banning AI-manipulated or AI-generated content outright. Instead, the policy targets content that is passed off as real.

Deepfakes are a growing concern online. Not everyone can tell authentic content from altered footage, and bad actors have exploited AI to make false information built on photos and videos more convincing.

With that said, YouTube will now require creators to disclose the use of AI so that videos can be labeled "altered content" as a warning, as reported by Ars Technica. The rule mainly covers videos that depict real people or events.

The instructions could use a bit more clarity: they say disclosure is only required when a viewer could easily mistake the content for a real person, place, or event. That suggests a label may not be needed if the video is obviously fake.

Examples for creators appear in the new upload questionnaire, which states that using the likeness of a realistic person, altering footage of real events or places, or generating realistic scenes all fall under the new policy.

Under the "Yes" and "No" options, the questionnaire states that, to follow the new rule, "you're required to tell us if your content is altered or synthetic and seems real. This includes realistic sounds or visuals made with AI or other tools. Selecting 'yes' adds a label to your content."

Google announced that the labels will begin rolling out "across all YouTube surfaces and formats in the weeks ahead, beginning with the YouTube app on your phone, and soon on your desktop and TV," although it did not specify exact dates.


Deepfakes in Election Season

The new rule may be Google's way of curbing the proliferation of deepfakes, particularly around elections. Misinformation has long been used to sway voters, and AI-generated content can easily fuel smear campaigns.

In fact, Google has taken the extra step of restricting its AI chatbot, Gemini, from answering election-related questions. According to 9To5Google, the restriction applies not only to the US elections but to elections around the globe.

The search giant says the restriction is in "preparation for the many elections happening around the world in 2024." Users can still ask, but Gemini will respond that it is "still learning how to answer this question" and suggest trying Google Search instead.

