AI Tool ‘WormGPT’ from the Dark Web Can Be Used to Conduct Cyberattacks

Just when AI companies and government agencies have started addressing copyright issues surrounding AI, another threat is emerging: AI can now potentially be weaponized to conduct fraud and create malware.

WormGPT as a Weapon

Like the mainstream AI tools offered by companies such as Microsoft and OpenAI, this dark web AI tool has advanced features. The key difference is that it is not limited by content moderation or other guardrails, meaning threat actors can use it for whatever they need.

It offers unlimited character support, chat memory retention, and code-generation capabilities. Without the usual restrictions, hackers can create malware with the help of AI, and those with access can even modify its source code, as reported by Interesting Engineering.

Threat actors and criminals have already paid for access to WormGPT and used it to launch cyberattacks involving phishing emails, identity theft, and malware, according to Patrick Butler, managing partner at the Australian cybersecurity firm Tesserent.

What's more terrifying is that it does a significantly better job than humans at crafting convincing scams. Since it is essentially a machine with access to vast knowledge, it can produce phishing emails in several languages with no mistakes in grammar or spelling.

With little effort, hackers can use the AI tool to create new malware variants that detection and antivirus software cannot yet defend against. It can also be used to search systems for vulnerabilities that hackers can then exploit.

WormGPT's creator, a 23-year-old Portuguese hacker who goes by "Last," says it can do "everything blackhat related that you can think of." The tool is advertised on the dark web, and anyone can pay for access.

As if that weren't bad enough, WormGPT is not the only rogue AI tool on the dark web. Butler says there are several others, such as FraudGPT, EvilGPT, DarkBard, WolfGPT, and XXXGPT. Because they are spreading fast, finding ways to stop them is becoming more difficult.

Generative AI Used for Fraud

Generative AI can play a role in many kinds of fraudulent activity. The malware generation discussed above is just one instance of how the technology can be abused.

In fact, threat actors are also getting creative with AI when it comes to luring victims. Through deepfakes, hackers can craft more convincing scenarios that persuade people to download certain apps or click on links.

For example, threat actors can appeal to people's sympathy by generating fake images of casualties in the ongoing Israel-Palestine conflict, then provide a malicious link claiming to lead to a site where users can donate to those in need.

Not only does this make a scam look legitimate, it also adds to the misinformation already rampant across social media and other sites. It proves yet again that the advantages and convenience AI provides come with risks and threats.
