US National Institute of Standards and Technology Establishes Strategies to Combat AI Cyber-Threats

The US National Institute of Standards and Technology (NIST) is taking significant steps to address cyber-threats targeting AI-powered chatbots and self-driving cars. In a recent paper released on January 4, 2024, NIST outlined a standardized approach for characterizing and defending against cyber-attacks on AI.

Titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” the paper was a collaborative effort between NIST, academia, and industry. Its purpose was to document various types of adversarial machine learning (AML) attacks and propose mitigation techniques.

NIST’s taxonomy categorizes AML attacks into two groups: attacks targeting “predictive AI” and attacks targeting “generative AI.” Predictive AI refers to AI and machine learning systems that predict behaviors and phenomena, such as computer vision devices and self-driving cars. Generative AI, a subset of predictive AI, includes generative adversarial networks, generative pre-trained transformers, and diffusion models.

The paper highlights evasion, poisoning, and privacy attacks as the primary types of attacks against predictive AI systems. Evasion attacks aim to generate adversarial examples that can change the classification of testing samples with minimal perturbation. Poisoning attacks occur during the training stage of the AI algorithm, while privacy attacks attempt to learn sensitive information to misuse it.

Meanwhile, attacks targeting generative AI systems, classified as abuse attacks, involve inserting incorrect information into legitimate sources to deceive the AI. Examples of abuse attacks include AI supply chain attacks, direct prompt injection attacks, and indirect prompt injection attacks.

The paper provides mitigation techniques for each category of attack, but NIST admits that these strategies are still insufficient. There is a pressing need for more robust defenses against AI vulnerabilities, as catastrophic failures can occur with dire consequences.

NIST’s efforts align with the US government’s focus on developing trustworthy AI. This paper supports the implementation of NIST’s AI Risk Management Framework, which was released in January 2023. Additionally, the creation of the US AI Safety Institute within NIST aims to establish standards for AI safety, security, and testing, as well as foster an environment for researchers to address emerging AI risks.

Overall, NIST’s groundbreaking work in developing strategies to combat AI cyber-threats showcases their commitment to ensuring the security and reliability of AI systems in today’s technology-driven world.

The source of the article is from the blog scimag.news

Privacy policy
Contact