Developer Creates OpenAI-Powered ‘Disinformation Machine’ to Show Dangers of AI

Artificial intelligence has given us the opportunity to develop technologies that could benefit humans in many ways. Unfortunately, not everyone has good intentions; some aim to misuse AI for their own gain. One developer proved the point by building a machine that spreads fake information.


Impacts of Online Misinformation

The developer behind the tool, who chose to stay anonymous, calls it CounterCloud. It is powered by OpenAI technology, the same that runs ChatGPT. All that is known about the creator of the misinformation machine is that they are a cybersecurity professional.

Going by the pseudonym Nea Paw, the developer claims the project was created in two months and has the ability to produce mass propaganda. It takes only $400 to operate, as mentioned in Business Insider, making it easy for others to replicate.

The purpose of the project is to see how AI disinformation would "work in the real world," also showing how "strong language competencies of large language models are perfectly suited to reading and writing fake news articles."

All the user needs are articles from the web, and CounterCloud can generate opposing articles. These are written in different styles and from different viewpoints, all while negating the original article's accurate, factual content.
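The source does not publish CounterCloud's code, but the workflow described above can be sketched in outline. The following is a hypothetical illustration only: the function name, prompt wording, and style parameter are assumptions, not CounterCloud's actual implementation.

```python
# Hypothetical sketch of the kind of prompt pipeline a tool like
# CounterCloud might use. This is NOT CounterCloud's actual code;
# all names and prompt wording here are illustrative assumptions.

def build_counter_prompt(article_text: str, style: str = "skeptical op-ed") -> str:
    """Assemble an LLM prompt asking for an article that disputes
    the source article, written in a chosen style and viewpoint."""
    return (
        f"Rewrite the following news article as a {style} that "
        "disputes its central claims and argues the opposing viewpoint:\n\n"
        f"{article_text}"
    )

# The assembled prompt would then be sent to a chat-completion API
# (such as OpenAI's) to generate the counter-article text.
prompt = build_counter_prompt("Example article text about topic X.")
```

The point of the sketch is how little machinery is involved: scrape an article, wrap it in an instruction, and let a large language model do the writing.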

To further sell the fake articles, the AI tool can also create fake journalist profiles along with fake comments below the articles generated. In just two months, the system managed to create content that was convincing "90% of the time, 24 hours a day, seven days a week."

Of course, the developer of CounterCloud did not actually release the false content on the web, since it would then spread disinformation. However, the tool can be used as an educational example of what AI is capable of in fabricating stories.


AI Being Misused

AI, especially generative AI, can produce convincingly realistic content. In the hands of a bad actor, that capability could prove very harmful, as most people still lack the knowledge to tell AI-generated fakes from real content.

With the 2024 presidential election approaching, generative AI has already been used in political ads containing misleading information. Given that AI-generated images can look strikingly realistic, there is a good chance people will believe what they see.

One proof is how quickly the "Balenciaga Pope" photo spread, showing the pope in a white puffer jacket. Many people were convinced it was real because the signs that it was AI-generated were barely noticeable.

Only a few people in the field could point out the telltale signs that it was generated by artificial intelligence. As CounterCloud shows, it is easy enough to create convincing articles that might push people in a certain direction.

For instance, such articles might convince voters that a certain candidate would not be a fitting leader, making for an effective smear campaign benefiting the opposition. All a person needs is access to generative AI tools to create hundreds, if not thousands, of fake articles.


© 2024 iTech Post All rights reserved. Do not reproduce without permission.
