
Anthropic’s latest Threat Intelligence Report warns that hackers, scammers, and state-backed groups are increasingly using its Claude chatbot to carry out sophisticated cyberattacks.
The report, published this week, outlines how criminals have used AI to automate data theft, extortion, fraudulent employment, and ransomware development, posing new challenges for cybersecurity defenders.
One of the most serious cases highlighted by Anthropic involved a cybercriminal operation codenamed GTG-2002. According to the company, the actor used Claude Code to carry out large-scale data theft and extortion against at least 17 organizations, including hospitals, emergency services, government agencies, and religious institutions.
Rather than encrypting files, as typical ransomware does, the attacker threatened to leak stolen information unless victims paid ransoms, which in some cases exceeded $500,000.
Anthropic said the attacker used its AI to an “unprecedented degree”, automating tasks such as scanning for vulnerable systems, harvesting credentials, and deciding which stolen files were most valuable. The chatbot also generated ransom notes and analyzed victims’ financial data to suggest “psychologically targeted extortion demands.”
North Korean job scams supercharged by AI
Another worrying trend flagged in the report involves North Korean IT operatives using Claude to secure remote jobs at US Fortune 500 companies. By generating convincing resumes, passing coding tests, and even performing technical tasks, these operatives allegedly funneled salaries back to Pyongyang in violation of international sanctions.
Anthropic said the use of AI has removed long-standing barriers for such schemes.
“Operators who cannot otherwise write basic code or communicate professionally in English are now able to pass technical interviews at reputable technology companies,” the report noted.
The FBI has previously warned about similar schemes, but Anthropic’s findings suggest that generative AI is making such operations more accessible and harder to detect.
AI-generated ransomware for sale
Another case involved a cybercriminal with limited coding skills who used Claude to create multiple ransomware strains. These were marketed on underground forums for between $400 and $1,200, each featuring encryption and anti-recovery functions.
Anthropic said the actor was “dependent on AI to develop functional malware”, highlighting how advanced cyberweapons are now within reach of low-skill criminals.
Beyond individual hackers, Anthropic said nation-state actors also exploited its tools. A China-linked group allegedly used Claude to enhance cyber operations against Vietnamese critical infrastructure, integrating the chatbot across nearly all MITRE ATT&CK tactics during a nine-month campaign.
The group is believed to have compromised telecom providers, government databases, and agricultural systems, suggesting the campaign had national security implications.
Anthropic’s response and industry implications
Anthropic said it has banned the accounts tied to these operations, implemented new "preventative safety measures," and shared its findings with authorities. The company also acknowledged that AI-assisted cybercrime is advancing more rapidly than many had expected.
“Agentic AI tools are now being used to provide both technical advice and active operational support for attacks that would otherwise have required a team of operators,” Anthropic warned.
Anthropic’s August report is the latest indication that AI misuse is no longer theoretical, as cybercriminals are incorporating it into their playbooks in ways that make attacks faster, cheaper, and more difficult to defend against.