EU Urges Tech Companies to Label AI-Generated Content to Prevent Disinformation

The European Union has been vigilant about the rapid development of AI, setting out measures to ensure that the technology does more good than harm. One of its latest proposed solutions is labeling AI-generated content to prevent disinformation.

European Commission Vice President Vera Jourova
(Photo : KENZO TRIBOUILLARD/AFP via Getty Images)

Labeling AI-Generated Content

With generative AI becoming more widely used, experts are starting to see how the tools can be used to spread false information. The technology has advanced to the point where it is now difficult to distinguish AI-generated content from authentic material.

With AI's capabilities, threat actors may start using it to spread both disinformation and misinformation, with the former being an act of spreading false information on purpose, and the latter doing it unknowingly.

Vera Jourova, the European Commission's Vice President, stated that the ability of a new generation of chatbots to create complex content and visuals creates "fresh challenges for the fight against disinformation," as reported by PBS.

Several tech giants, such as TikTok, Google, Meta, and Microsoft, have been asked to work on the issue, especially platforms that have integrated generative AI into their products and services, like Microsoft's search engine Bing and Google's chatbot Bard.

Although no specifics were given and no policies have yet been made, Jourova said in a briefing that the companies should build safeguards to stop those who intend to use AI from generating content meant for disinformation.

While advocating for and protecting free speech, Jourova argued that machines have no right to freedom of speech, and that content generated by AI should therefore carry a clear label stating that artificial intelligence was used to create it.

For instance, it was easy enough to generate a photo showing black smoke near the Pentagon, implying that there had been an explosion near the government headquarters. The photo briefly affected the stock market, proving how dangerous fake content can be.

Read Also: OpenAI Launches AI-Written Text Detection Tool: AI Text Classifier

AI Could Even Affect Elections

Disinformation and misinformation can damage people or organizations, but they become a wide-scale problem when generative AI is used for smear campaigns, which have been a problematic part of politics for years.

Through AI, the opposition can generate fake audio, videos, images, or other content that could ruin an election candidate's reputation. In turn, it could have a huge effect on the final outcome of an election.

Even Sam Altman, CEO of AI giant OpenAI, said that the models behind the latest generation of AI technology can be used to manipulate users, calling this a "significant area of concern."

Altman added that regulation would be wise, something both the US and the UK already have in the works. People need to know whether they are talking to AI or whether the content they are looking at is generated, the OpenAI CEO said, as mentioned in The Guardian.

This idea aligns with the EU Commission Vice President's proposal, which, if implemented, could curb disinformation on a massive scale, especially since AI can also easily be exploited by threat actors to conduct fraudulent activities.

Related: Experts Say Advanced AI Could Affect US Elections

© 2024 iTech Post All rights reserved. Do not reproduce without permission.
