Microsoft claims Chinese, Russian, Iranian, North Korean hackers are using its Gen AI tools to attack the US

Feb 15, 2024 - 17:30

In a statement released on Wednesday, tech giant Microsoft revealed that a number of US adversaries, including Iran, North Korea, Russia and China, have begun using generative artificial intelligence (AI) to orchestrate offensive cyber operations.

The company, headquartered in Redmond, Washington, disclosed that, in collaboration with its partner OpenAI, it had detected and disrupted threat actors who used or attempted to exploit the AI technology the two companies have developed.

According to a blog post, Microsoft described these techniques as “early-stage” and not particularly novel, but emphasized the importance of exposing them publicly. The move comes as rival nations increasingly leverage large language models (LLMs) to bolster their ability to breach networks and conduct influence operations.

While cybersecurity firms have traditionally employed machine learning for defence purposes, the emergence of LLMs, notably led by OpenAI’s ChatGPT, has intensified the cat-and-mouse game between defenders and malicious actors.

Microsoft, which has invested significantly in OpenAI, timed its announcement to coincide with the release of a report highlighting the potential of generative AI to enhance malicious social engineering, enabling more sophisticated deepfakes and voice cloning. This poses a significant threat to democratic processes, especially with elections due in more than 50 countries.

The report detailed instances in which Microsoft disabled the generative AI accounts and assets of various groups, including North Korea’s cyberespionage group Kimsuky, which used the models to research foreign think tanks and to support spear-phishing campaigns.

Iran’s Revolutionary Guard also leveraged LLMs for social engineering, troubleshooting software errors, and studying how intruders might evade detection inside compromised networks. The Russian GRU military intelligence unit Fancy Bear focused on researching satellite and radar technologies related to the conflict in Ukraine, while China’s cyberespionage groups Aquatic Panda and Maverick Panda explored ways to augment their technical operations using LLMs.

In a separate statement, OpenAI said its current GPT-4 chatbot offers only limited capabilities for malicious cybersecurity tasks beyond what is already achievable with non-AI tools. However, cybersecurity researchers anticipate advancements in this area.

Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency, has previously highlighted the dual threats posed by China and artificial intelligence, emphasizing the need for AI development with security in mind.

Critics argue that the public release of large language models, including ChatGPT, without adequate consideration of security was hasty and irresponsible. Some cybersecurity professionals urge Microsoft to focus on making LLMs themselves more secure rather than selling defensive tools to address their vulnerabilities.

Experts warn that while the immediate threat from AI and LLMs may not be apparent, they could become powerful weapons in the arsenal of every nation-state military.

(With inputs from agencies)
