Anthropic admits its AI is being used to conduct cybercrime

Anthropic’s agentic AI, Claude, has been "weaponized" in high-level cyberattacks, according to a new report published by the company. It claims to have successfully disrupted a cybercriminal whose "vibe hacking" extortion scheme targeted at least 17 organizations, including some related to healthcare, emergency services and government.

Anthropic says the hacker attempted to extort some victims into paying six-figure ransoms to prevent their personal data from being made public, with an "unprecedented" reliance on AI assistance. The report claims that Claude Code, Anthropic’s agentic coding tool, was used to "automate reconnaissance, harvest victims' credentials, and penetrate networks." The AI was also used to make strategic decisions, advise on which data to target and even generate "visually alarming" ransom notes.

As well as sharing information about the attack with relevant authorities, Anthropic says it banned the accounts in question after discovering criminal activity, and has since developed an automated screening tool. It has also introduced a faster and more efficient detection method for similar future cases, but doesn’t specify how that works.

The report also details Claude's involvement in a fraudulent employment scheme in North Korea and the development of AI-generated ransomware. The common theme across the three cases, according to Anthropic, is that the highly reactive and self-learning nature of AI means cybercriminals now use it operationally, not just for advice. AI can also perform a role that would once have required a team of individuals, with technical skill no longer being the barrier it once was.

Claude isn't the only AI that has been used for nefarious means. Last year, OpenAI said that its generative AI tools were being used by cybercriminal groups with ties to China and North Korea, with hackers using them for code debugging, researching potential targets and drafting phishing emails. OpenAI, whose models Microsoft uses to power its own Copilot AI, said it had blocked the groups' access to its systems.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-admits-its-ai-is-being-used-to-conduct-cybercrime-170735451.html?src=rss
Created Aug 27, 2025, 18:50:45
