AI firm claims Chinese spies used its tech to automate cyber attacks


The makers of artificial intelligence (AI) chatbot Claude claim to have caught Chinese government hackers using the tool to perform automated cyber attacks against around 30 global organisations.

Anthropic said hackers tricked the chatbot into performing automated tasks under the guise of cyber security research.

The company claimed in a blog post this was the “first reported AI-orchestrated cyber espionage campaign”.

But sceptics are questioning the accuracy of that claim – and the motive behind it.

Anthropic said it discovered the hacking attempts in mid-September.

Pretending they were legitimate cyber security workers, hackers gave the chatbot small automated tasks which, when strung together, formed a “highly sophisticated espionage campaign”.

Researchers at Anthropic said they had “high confidence” the people carrying out the attacks were “a Chinese state-sponsored group”.

They said humans chose the targets – large tech companies, financial institutions, chemical manufacturing companies, and government agencies – but the company would not be more specific.

Hackers then used Claude’s coding assistance to build an unspecified program designed to “autonomously compromise a chosen target with little human involvement”.

Anthropic claims the chatbot was able to successfully breach various unnamed organisations, extract sensitive data and sort through it for valuable information.

The company said it had since banned the hackers from using the chatbot and had notified affected companies and law enforcement.

Anthropic’s announcement is perhaps the most high-profile example of a company claiming bad actors are using AI tools to carry out automated hacks.

It is the kind of danger many have been worried about, and Anthropic is not alone: other AI companies have also claimed that nation-state hackers have used their products.

In February 2024, OpenAI published a blog post in collaboration with cyber experts from Microsoft saying it had disrupted five state-affiliated actors, including some from China.

“These actors generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks,” the firm said at the time.

Anthropic has not said how it concluded the hackers in this latest campaign were linked to the Chinese government.

It comes as some cyber security companies have been criticised for over-hyping cases where AI was used by hackers.

Critics say the technology is still too unwieldy to be used for automated cyber attacks.

In November, cyber experts at Google released a research paper which highlighted growing concerns about AI being used by hackers to create brand new forms of malicious software.

But the paper concluded the tools were not all that successful – and were only in a testing phase.

The cyber security industry, like the AI business, has an incentive to say hackers are using the technology to target companies, as such claims boost interest in its own products.

In its blog post, Anthropic argued that the answer to stopping AI attackers is to use AI defenders.

“The very abilities that allow Claude to be used in these attacks also make it crucial for cyber defence,” the company claimed.

And Anthropic admitted its chatbot made mistakes. For example, it made up fake login usernames and passwords, and claimed to have extracted secret information that was in fact publicly available.

“This remains an obstacle to fully autonomous cyberattacks,” Anthropic said.
