Beware of False Claims About AI Security, Warns NIST

The US National Institute of Standards and Technology (NIST) has warned against trusting vendor claims about the security of artificial intelligence (AI) systems. In a recent publication, NIST highlighted the lack of foolproof defenses available to AI developers, emphasizing that any AI program that interacts with websites or the public can be manipulated by attackers who feed it false data. The report also cautioned that powerful attacks can be mounted across the different modes in which AI systems operate, such as images, text, speech, and tabular data, and stressed that modern AI systems are particularly exposed through their public APIs and the platforms on which they are deployed.

In its taxonomy of AI attacks and mitigations, NIST identifies four main classes of attack: evasion, poisoning, privacy, and abuse. Evasion attacks manipulate the inputs to a deployed model in order to change its behavior, which could, for example, cause an autonomous vehicle to misinterpret what it sees. Poisoning attacks target the training phase, with attackers slipping corrupted data, such as inappropriate language, into the training set to influence the model's later responses. Privacy attacks attempt to extract private information from AI models, while abuse attacks aim to mislead AI systems by feeding them incorrect information.
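The NIST report contains no code, but a toy illustration may help make the evasion category concrete. The sketch below is our own illustration, not drawn from the publication: it trains a small logistic-regression classifier in NumPy on synthetic data, then applies an FGSM-style perturbation (stepping in the sign of the input gradient) to show how a modest change to an input can flip the model's decision. All data, parameters, and the perturbation budget are arbitrary assumptions made for the example.

```python
# Hypothetical sketch (not from the NIST report): an FGSM-style evasion attack
# against a toy logistic-regression classifier, showing how a small input
# perturbation can flip a model's decision.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: the class is decided by the sign of feature 0.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)

# Train a logistic-regression model with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

def predict(x):
    return int(1.0 / (1.0 + np.exp(-(x @ w + b))) > 0.5)

# Take one training sample and craft an adversarial version of it by stepping
# in the direction that increases the loss: the sign of the input gradient.
x0, label = X[0], y[0]
p0 = 1.0 / (1.0 + np.exp(-(x0 @ w + b)))
grad_x = (p0 - label) * w          # gradient of the loss w.r.t. the input
epsilon = 0.6                      # perturbation budget (arbitrary)
x_adv = x0 + epsilon * np.sign(grad_x)

print("clean prediction:      ", predict(x0), "(true label:", int(label), ")")
print("adversarial prediction:", predict(x_adv))
```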

Co-author Alina Oprea, a professor at Northeastern University, noted that most of these attacks are relatively easy to mount; a poisoning attack, for instance, can be carried out by manipulating just a few dozen training samples. That ease of execution raises concerns about the security of AI systems and underscores the need for more robust defenses against such threats.
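To illustrate how little poisoned data can matter, the following sketch (again our own illustration, not taken from the report) retrains the same kind of toy classifier after an attacker injects roughly forty mislabeled points near the decision boundary. The dataset sizes, the number of poisoned points, and their placement are assumptions chosen purely for demonstration.

```python
# Hypothetical sketch (not from the NIST report): a label-flipping poisoning
# attack. A few dozen mislabeled training samples placed near the decision
# boundary are enough to visibly shift a simple classifier and degrade accuracy.
import numpy as np

rng = np.random.default_rng(1)

def train_logreg(X, y, steps=2000, lr=0.1):
    """Plain gradient-descent logistic regression; returns weights and bias."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0).astype(float) == y)

# Clean training and test sets: the class is decided by the sign of feature 0.
X_train = rng.normal(size=(500, 2))
y_train = (X_train[:, 0] > 0).astype(float)
X_test = rng.normal(size=(500, 2))
y_test = (X_test[:, 0] > 0).astype(float)

# The attacker injects ~40 points that sit just inside class 1's region
# but are deliberately labeled as class 0.
X_poison = np.column_stack([rng.uniform(0.0, 0.8, size=40), rng.normal(size=40)])
y_poison = np.zeros(40)

w_clean, b_clean = train_logreg(X_train, y_train)
w_pois, b_pois = train_logreg(np.vstack([X_train, X_poison]),
                              np.concatenate([y_train, y_poison]))

print("test accuracy, clean training set:   ", accuracy(w_clean, b_clean, X_test, y_test))
print("test accuracy, poisoned training set:", accuracy(w_pois, b_pois, X_test, y_test))
```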

The NIST report serves as a valuable resource for AI developers and users alike, urging caution toward any claim of foolproof AI security. As the field continues to expand, it is crucial to prioritize effective security measures that safeguard against such attacks and preserve the integrity of AI systems.
