Anthropic admits its AI is being used to conduct cybercrime

Anthropic’s agentic AI, Claude, has been "weaponized" in high-level cyberattacks, according to a new report published by the company. It claims to have successfully disrupted a cybercriminal whose "vibe hacking" extortion scheme targeted at least 17 organizations, including some related to healthcare, emergency services and government.

Anthropic says the hacker attempted to extort some victims into paying six-figure ransoms to prevent their personal data from being made public, with an "unprecedented" reliance on AI assistance. The report claims that Claude Code, Anthropic’s agentic coding tool, was used to "automate reconnaissance, harvest victims' credentials, and penetrate networks." The AI was also used to make strategic decisions, advise on which data to target and even generate "visually alarming" ransom notes.

As well as sharing information about the attack with relevant authorities, Anthropic says it banned the accounts in question after discovering criminal activity, and has since developed an automated screening tool. It has also introduced a faster and more efficient detection method for similar future cases, but doesn’t specify how that works.

The report (which you can read in full here) also details Claude’s involvement in a fraudulent employment scheme linked to North Korea and the development of AI-generated ransomware. The common theme across the three cases, according to Anthropic, is that the reactive, self-learning nature of AI means cybercriminals now use it operationally, not just for advice. AI can also perform roles that once would have required a team of people, so technical skill is no longer the barrier it used to be.

Claude isn’t the only AI that has been used for nefarious purposes. Last year, OpenAI said that its generative AI tools were being used by cybercriminal groups with ties to China and North Korea, with hackers using them to debug code, research potential targets and draft phishing emails. OpenAI, whose models Microsoft uses to power its own Copilot AI, said it had blocked the groups' access to its systems.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-admits-its-ai-is-being-used-to-conduct-cybercrime-170735451.html?src=rss