OpenAI and Microsoft Terminate State-Backed Hacker Accounts
Hackers Used LLMs to Perform Tasks That Non-AI Tools Can Perform

Nation-state hackers including Russian military intelligence and hackers backed by China have used OpenAI large language models for research and to craft phishing emails, the artificial intelligence company disclosed Tuesday in conjunction with major financial backer Microsoft.
Threat actors linked to Iran and North Korea also used GPT-4, OpenAI said. Nation-state hackers primarily used the chatbot to query open-source information, such as satellite communication protocols, and to translate content into victims' local languages, find coding errors and run basic coding tasks.
"The identified OpenAI accounts associated with these actors were terminated," OpenAI said, adding that it conducted the operation in collaboration with Microsoft.
"Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors' usage of AI," the Redmond, Washington-based technology giant said. Microsoft's relationship with OpenAI is under scrutiny by multiple national antitrust authorities.
A British government study published earlier this month concluded that large language models may boost the capabilities of novice hackers but so far are of little use to advanced threat actors (see: Large Language Models Won't Replace Hackers).
China-affiliated Charcoal Typhoon used ChatGPT to research companies and cybersecurity tools, debug code, generate scripts and create content likely for use in phishing campaigns. The group targets sectors including government, higher education, communications infrastructure, oil and gas, and information technology. It primarily focuses on Asian countries and those that oppose China's policies. It is also called Aquatic Panda, ControlX, RedHotel and Bronze University.
Salmon Typhoon, which also has links to China, used ChatGPT to translate and summarize technical papers, retrieve publicly available information on intelligence agencies and regional threat actors, assist with coding, and research common ways to hide processes on a system. Also called Sodium, APT4 and Maverick Panda, the threat actor has previously targeted U.S. defense contractors, government agencies and entities within the cryptographic technology sector. It recently resurfaced after having been dormant for over a decade.
"This tentative engagement with LLMs could reflect both a broadening of their intelligence-gathering toolkit and an experimental phase in assessing the capabilities of emerging technologies," Microsoft said.
Iran-linked Crimson Sandstorm used ChatGPT for scripting support related to app and web development, to generate content likely for spear-phishing campaigns and to research common ways for malware to evade detection. Potentially connected to the Islamic Revolutionary Guard Corps and active since at least 2017, Crimson Sandstorm targets victims in the defense, maritime shipping, transportation, healthcare and technology sectors. It is also called Tortoiseshell, Imperial Kitten and Yellow Liderc.
North Korea's Emerald Sleet used ChatGPT to find experts and organizations focused on defense issues in the Asia-Pacific region, seek publicly available information about vulnerabilities, obtain help with basic scripting tasks and draft content that could be used in phishing campaigns. "Highly active" in 2023, the hacker group impersonated academic institutions and nongovernmental organizations to lure victims into replying with expert insights and commentary about foreign policies related to North Korea. The group is also known as Kimsuky and Velvet Chollima.
Russian Forest Blizzard used GPT-4 for open-source research into satellite communication protocols and radar imaging technology and for support with scripting tasks - such as file manipulation, data selection, regular expressions and multiprocessing - to potentially automate or optimize technical operations. The military intelligence actor, also called APT28 and Fancy Bear, focuses on victims in defense, transportation, government, energy, nongovernmental organizations and information technology. The threat group has been "extremely active" in targeting organizations involved in and related to the Russia-Ukraine war.
OpenAI said it "will not be able to stop every instance" of illicit activity, but it has set up a team to detect and neutralize threats and it works with the broader AI industry to exchange information. "Defenders are only beginning to recognize and apply the power of generative AI to shift the cybersecurity balance in their favor and keep ahead of adversaries," Microsoft said.