Anthropic admits its AI is being used to conduct cybercrime

Anthropic’s agentic AI, Claude, has been "weaponized" in high-level cyberattacks, according to a new report published by the company. It claims to have successfully disrupted a cybercriminal whose "vibe hacking" extortion scheme targeted at least 17 organizations, including some related to healthcare, emergency services and government.

Anthropic says the hacker attempted to extort some victims into paying six-figure ransoms to prevent their personal data from being made public, with an "unprecedented" reliance on AI assistance. The report claims that Claude Code, Anthropic’s agentic coding tool, was used to "automate reconnaissance, harvest victims' credentials, and penetrate networks." The AI was also used to make strategic decisions, advise on which data to target and even generate "visually alarming" ransom notes.

As well as sharing information about the attack with relevant authorities, Anthropic says it banned the accounts in question after discovering criminal activity, and has since developed an automated screening tool. It has also introduced a faster and more efficient detection method for similar future cases, but doesn’t specify how that works.

The report (which you can read in full here) also details Claude’s involvement in a fraudulent employment scheme in North Korea and the development of AI-generated ransomware. The common thread across the three cases, according to Anthropic, is that the adaptive, self-learning nature of AI means cybercriminals now use it operationally, not just for advice. AI can also do work that would once have required a team of people, so technical skill is no longer the barrier it once was.

Claude isn’t the only AI that has been put to nefarious ends. Last year, OpenAI said its generative AI tools were being used by cybercriminal groups with ties to China and North Korea, with hackers using them to debug code, research potential targets and draft phishing emails. OpenAI, whose models Microsoft uses to power its own Copilot AI, said it had blocked the groups' access to its systems.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-admits-its-ai-is-being-used-to-conduct-cybercrime-170735451.html?src=rss