Anthropic admits its AI is being used to conduct cybercrime

Anthropic’s agentic AI, Claude, has been "weaponized" in high-level cyberattacks, according to a new report published by the company. It claims to have successfully disrupted a cybercriminal whose "vibe hacking" extortion scheme targeted at least 17 organizations, including some related to healthcare, emergency services and government.

Anthropic says the hacker attempted to extort some victims into paying six-figure ransoms to prevent their personal data from being made public, with an "unprecedented" reliance on AI assistance. The report claims that Claude Code, Anthropic’s agentic coding tool, was used to "automate reconnaissance, harvest victims' credentials, and penetrate networks." The AI was also used to make strategic decisions, advise on which data to target and even generate "visually alarming" ransom notes.

As well as sharing information about the attack with relevant authorities, Anthropic says it banned the accounts in question after discovering criminal activity, and has since developed an automated screening tool. It has also introduced a faster and more efficient detection method for similar future cases, but doesn’t specify how that works.

The report (which you can read in full here) also details Claude's involvement in a fraudulent employment scheme run from North Korea and in the development of AI-generated ransomware. The common theme across the three cases, according to Anthropic, is that the adaptive, self-learning nature of AI means cybercriminals now use it operationally, not just for advice. AI can also perform roles that would once have required a team of people, so technical skill is no longer the barrier it once was.

Claude isn’t the only AI that has been used for nefarious ends. Last year, OpenAI said that its generative AI tools were being used by cybercriminal groups with ties to China and North Korea, with hackers using generative AI for code debugging, researching potential targets and drafting phishing emails. OpenAI, whose models Microsoft uses to power its own Copilot AI, said it had blocked the groups' access to its systems.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-admits-its-ai-is-being-used-to-conduct-cybercrime-170735451.html?src=rss
Published August 27, 2025, 18:50:45