Meta’s Updated Political Advertising Policies Now Address AI-Generated Content

Meta has always been strict about the advertising content that appears on its social media platforms, maintaining several policies that tell advertisers which rules they must follow. With the emergence of AI, Meta has had to update those policies as well.

(Photo: Omar Marques/SOPA Images/LightRocket via Getty Images)

Disclosure of AI-Generated Content

Elections are nearing, and many campaigns are already running political ads on social media. Alongside them, false or fabricated content designed to spread misinformation and pull voters away from certain candidates is already circulating.

Now that AI tools are widely accessible, anyone can generate realistic scenes from text prompts that are hard to distinguish from real events. Meta aims to address this by requiring advertisers to disclose when they use AI to create political ads.

The requirement also covers other digital techniques used to create ads about social issues. Content that counts as digitally generated or manipulated includes photorealistic images or videos, as well as realistic-sounding audio.

It likewise applies to digital creation or alteration that depicts a realistic-looking person who does not exist or a realistic-looking event that never happened, as well as to altered footage of real events and fabricated images, audio, or video of alleged events.

Meta will apply the same vigilance to other election-related concerns. For instance, it will enforce its Community Standards more strictly against election and voter interference, hate speech, coordinating harm and publicizing crime, and bullying and harassment.

Meta says it has around 40,000 people working solely on the safety and security of its platforms. The company has invested $20 billion in teams and technology since 2016 and reviews misinformation in as many as 60 languages.

These teams have taken down more than 200 malicious campaigns on Meta's platforms under the category it calls Coordinated Inauthentic Behavior. They have also assessed and removed over 700 hate groups and 400 supremacist organizations.


It Was Worse Three Years Ago

With Facebook's newer policies and detection tools, misinformation has been reduced, though it is unclear by how much. Users can now see when a post has been fact-checked, with labels specifying which part of the content is false.

Back in 2020, the social networking platform was still scrambling to contain misinformation. Reports at the time found that misinformation drew six times more engagement than factual news, an alarming figure during an election period.

According to the Washington Post, Facebook countered that the report's numbers only measured how many people engaged with the content, which is not the same as how many people viewed it.

Facebook spokesperson Joe Osborne said that the content reaching the most people on Facebook looks nothing like what the study suggests. However, impression data was not available to researchers at the time, making the numbers impossible to confirm.


