Researchers have sounded the alarm about a dangerous AI tool known as “WormGPT” that is being marketed on the dark web as a potent weapon for hackers.
Unlike legitimate AI models such as ChatGPT and Bard, WormGPT operates without ethical boundaries or limitations, making it an alluring choice for cybercriminals.
Designed specifically for malicious activities, the tool generates human-like text for phishing attacks and writes code for malware campaigns.
Tests conducted by researchers revealed its uncanny ability to deceive and strategize, highlighting its potential for sophisticated cyberattacks.
WormGPT, reportedly trained on a diverse range of data sources with a particular focus on malware-related material, poses a significant threat in the hands of cybercriminals. Whereas legitimate AI tools incorporate safeguards against misuse, WormGPT is purpose-built to facilitate criminal activity: it generates persuasive emails and text that can easily dupe unsuspecting recipients, making it a potent weapon for phishing campaigns.
The anonymous developer behind WormGPT pitches it as a rival to well-known AI models like ChatGPT and openly emphasizes its capacity to help users carry out illegal activities.
Law enforcement agencies, including Europol, have expressed deep concern about the potential misuse of large language models (LLMs) such as ChatGPT by cybercriminals.
These models can generate highly authentic, convincing text, making them valuable tools for phishing schemes. In the past, grammatical and spelling errors often gave away basic phishing scams.
However, with LLMs like ChatGPT, cybercriminals can now impersonate organizations or individuals in an incredibly realistic manner, even with minimal English language skills.
This enables hackers to execute cyberattacks faster, with greater authenticity, and on a significantly larger scale.