The Growing Threat of Generative Artificial Intelligence in Cyber Operations

Microsoft recently revealed that U.S. adversaries, chiefly Iran and North Korea, are using generative artificial intelligence (AI) to mount offensive cyber operations, with Russia and China also beginning to use the technology, albeit to a lesser extent. While these techniques are still in their early stages and not particularly novel, Microsoft believes it is crucial to expose them publicly as these countries leverage large language models to breach networks and conduct influence operations.

Generative AI, epitomized by OpenAI’s ChatGPT, has intensified the cat-and-mouse game between cybersecurity firms and criminal and state-backed hackers. Microsoft, which has invested heavily in OpenAI, warned that generative AI is likely to enhance malicious social engineering, enabling more sophisticated deepfakes and voice cloning. By amplifying the spread of disinformation, this poses a significant threat to democratic processes, especially in a year when numerous countries are holding elections.

Microsoft provided several examples of how these U.S. rivals have used generative AI. The North Korean group Kimsuky used the models to research foreign think tanks and generate content for spear-phishing campaigns. Iran’s Revolutionary Guard employed large language models for social engineering, including crafting phishing emails, as well as for troubleshooting software errors and studying how intruders might evade detection in compromised networks. Russia’s GRU military intelligence unit, known as Fancy Bear, researched satellite and radar technologies related to the war in Ukraine. The Chinese cyberespionage groups Aquatic Panda and Maverick Panda have likewise explored how large language models can enhance their technical operations.

However, OpenAI clarified that its current GPT-4 model offers only limited capabilities for malicious cybersecurity tasks beyond what is already achievable with non-AI-powered tools. Nevertheless, cybersecurity researchers anticipate that this will change in the future.

Jen Easterly, Director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), described both China and artificial intelligence as epoch-defining threats and challenges, and stressed the need for AI to be built with security in mind. Critics have argued that security was an afterthought in the development and public release of large language models.

Moving forward, it is crucial for organizations like Microsoft not only to patch vulnerabilities in large language models but also to make security a design priority. Generative AI and large language models, though not an immediate threat, could become one of the most potent weapons in the arsenal of every nation-state military. The cybersecurity community must stay vigilant and actively develop stronger security measures in the face of evolving AI technology.

An FAQ covering the main topics presented in the article:

Q: What is generative artificial intelligence (AI)?
A: Generative AI refers to the use of AI technology to create new and original content, such as text, images, or videos.

Q: How are U.S. adversaries, specifically Iran and North Korea, using generative AI?
A: These countries are using generative AI, specifically large language models, to breach networks, conduct influence operations, engage in social engineering, and craft phishing campaigns.

Q: Which other countries have started using generative AI?
A: Russia and China have also begun using this technology, albeit to a lesser extent than Iran and North Korea.

Q: What are the concerns associated with the use of generative AI?
A: Generative AI can enhance malicious social engineering, enabling more sophisticated deepfakes, voice cloning, and disinformation. This poses a significant threat to democratic processes, especially during election periods.

Q: How has OpenAI’s ChatGPT affected the cybersecurity landscape?
A: ChatGPT, as the most prominent generative AI model, has intensified the cat-and-mouse game between cybersecurity firms and offensive hackers, as its text-generation capabilities can be repurposed for tasks such as social engineering and phishing.

Q: What examples did Microsoft provide of how U.S. rivals have used generative AI?
A: North Korea’s Kimsuky group used the models to research foreign think tanks and create content for spear-phishing campaigns. Iran’s Revolutionary Guard used large language models for social engineering, including phishing emails, and for troubleshooting software errors. Russia’s GRU military intelligence unit, known as Fancy Bear, researched satellite and radar technologies related to the war in Ukraine. The Chinese cyberespionage groups Aquatic Panda and Maverick Panda explored how large language models can enhance their technical operations.

Q: Can generative AI enable malicious cybersecurity tasks beyond what non-AI-powered tools allow?
A: Currently, OpenAI says its GPT-4 model offers only limited capabilities for such tasks. However, cybersecurity researchers anticipate that this will change in the future.

Q: What is the stance of the U.S. Cybersecurity and Infrastructure Security Agency (CISA) on the use of artificial intelligence?
A: CISA Director Jen Easterly described both China and artificial intelligence as epoch-defining threats and challenges and stressed the need for AI to be built with security in mind.

Q: What should organizations like Microsoft prioritize regarding large-language models?
A: Organizations like Microsoft should patch vulnerabilities in large language models and make security a design priority, since generative AI and large language models could become one of the most potent weapons in the arsenal of nation-state militaries.

Q: How should the cybersecurity community respond to evolving AI technology?
A: The cybersecurity community should stay vigilant and actively work to develop stronger security measures as AI technology evolves.

Definitions:

– Generative AI: AI technology used to create new and original content, such as text, images, or videos.
– Large language models: AI models trained on vast amounts of text data to generate human-like text.
– Spear-phishing campaigns: Targeted email phishing campaigns that aim to trick specific individuals into revealing sensitive information or performing certain actions.
– Social engineering: Manipulating individuals into granting unauthorized access to information or systems.
– Deepfakes: Manipulated or synthetic media, such as videos or images, that appear genuine.
– Voice cloning: Mimicking a person’s voice using AI technology.
– Disinformation: False or misleading information spread deliberately to deceive or manipulate public opinion.
– Non-AI-powered tools: Tools or methods that do not rely on artificial intelligence.
– Chatbot: An AI-powered program designed to simulate human conversation through text or voice interactions.
– Cyberespionage: The use of computer networks to gain unauthorized access to confidential information for intelligence or military purposes.

