Hackers Use AI Chatbots to Enhance Malicious Online Activity, NSA Official Warns

According to a senior official at the National Security Agency (NSA), hackers and propagandists are increasingly using generative artificial intelligence (AI) chatbots to appear more convincing to native English speakers. At the International Conference on Cyber Security, NSA Cybersecurity Director Rob Joyce said that cybercriminals and hackers affiliated with foreign intelligence agencies are leveraging AI chatbots such as ChatGPT to produce fluent English-language communications.

Joyce highlighted that these chatbots' English-language capabilities have improved significantly, making it easier for attackers to produce grammatically correct and persuasive content. This is a significant concern because hacking operations often rely on phishing schemes to trick individuals into divulging personal information. With the assistance of generative AI, hackers can generate and refine their communications, making malicious online activity increasingly difficult to identify.
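To illustrate the detection problem Joyce describes, the following Python sketch shows the kind of naive heuristic that once helped flag phishing: scoring a message by obvious misspellings and urgency keywords. It is purely illustrative, not any agency's or vendor's tool, and the word lists and sample messages are invented for the example. A fluent, AI-polished message passes the spelling check, stripping away a signal defenders used to rely on.

```python
# Illustrative only: a naive phishing heuristic of the kind that AI-polished
# text undermines. The word lists and sample messages are made up for this sketch.

KNOWN_WORDS = {
    "your", "account", "has", "been", "suspended", "please", "verify",
    "the", "to", "we", "detected", "unusual", "activity", "on", "click",
    "link", "below", "within", "hours", "avoid", "permanent", "closure",
    "dear", "customer", "kindly", "urgent", "immediately", "password",
}
URGENCY_KEYWORDS = {"urgent", "immediately", "suspended", "verify", "password"}


def phishing_score(message: str) -> float:
    """Score a message by counting misspellings and urgency keywords, normalized by length."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    if not words:
        return 0.0
    misspelled = sum(1 for w in words if w.isalpha() and w not in KNOWN_WORDS)
    urgent = sum(1 for w in words if w in URGENCY_KEYWORDS)
    return (misspelled + urgent) / len(words)


# A clumsy, error-ridden lure scores high; an AI-polished rewrite of the same
# lure scores far lower, because the spelling and grammar signal has vanished.
clumsy = "Dear custmer, your acount has been suspnded, kindly verifey password immediately"
polished = "Dear customer, we detected unusual activity on your account. Please verify your password within 24 hours to avoid permanent closure."

print(f"clumsy:   {phishing_score(clumsy):.2f}")
print(f"polished: {phishing_score(polished):.2f}")
```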

Generative AI has particularly facilitated online propaganda campaigns orchestrated by nation-states such as Russia. These campaigns involve creating deceptive accounts that pose as American users. By leveraging AI, the operators behind them can seamlessly generate compelling English-language content, further amplifying their malign influence.

Although Joyce did not name any specific AI company, he emphasized that the issue is widespread and affects all major generative AI models. Notably, companies such as OpenAI and Google offer AI chatbots, ChatGPT and Bard respectively, both of which have been shown to produce convincing phishing emails.

While generative AI services prohibit the use of their products for criminal activity, enforcement remains challenging; reports have shown that it is relatively easy to manipulate AI chatbots into generating deceptive content.

However, AI is not just a tool for malicious actors; it is shaping up to be a valuable asset in cybersecurity defense. The NSA recently established the AI Security Center, aimed at developing best practices and risk frameworks to promote the secure adoption of AI capabilities within the national security enterprise and defense industrial base. By leveraging AI, machine learning, and deep learning, cybersecurity agencies can identify and thwart malicious activities more effectively.
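As a purely illustrative sketch of how machine learning can support this kind of defensive detection, the following Python example trains a simple TF-IDF plus logistic-regression classifier on a tiny, invented set of labeled messages and uses it to estimate whether a new message looks like phishing. This is not the NSA's tooling or any specific agency product; real deployments train on large labeled corpora and draw on many more signals than message text alone.

```python
# Illustrative sketch of ML-based phishing detection: TF-IDF features feeding a
# logistic-regression classifier. The training data below is invented for the
# example; real systems use far larger datasets and richer features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: 1 = phishing, 0 = benign.
messages = [
    "Your account has been suspended, verify your password immediately",
    "Unusual sign-in detected, click the link to confirm your identity",
    "Invoice attached, wire the payment today to avoid penalties",
    "Lunch meeting moved to 1pm, see you in the usual room",
    "The quarterly report draft is ready for your review",
    "Reminder: the team offsite is next Thursday, agenda to follow",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF turns each message into a sparse vector of word weights; the
# classifier learns which terms correlate with the phishing label.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

new_message = "We detected unusual activity, please verify your account within 24 hours"
probability = model.predict_proba([new_message])[0][1]
print(f"Estimated phishing probability: {probability:.2f}")
```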

As AI continues to evolve, it is crucial for both security experts and AI developers to recognize the implications and continually improve safeguards against AI-enabled cyber threats.
