Predictive and Generative AI Systems Face Significant Security Risks, Warns New Research

New research in the field of artificial intelligence (AI) warns that predictive and generative AI systems are vulnerable to a range of attacks, challenging the assumption that these technologies are secure. Apostol Vassilev, a computer scientist with the US National Institute of Standards and Technology (NIST), emphasizes that despite progress in AI and machine learning, significant security challenges remain. Vassilev co-authored a paper titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations” with researchers from Northeastern University and security shop Robust Intelligence. The paper examines the security risks of AI systems, focusing on evasion, poisoning, privacy, and abuse attacks.

Evasion attacks aim to generate adversarial examples: inputs altered with minimal perturbation so that an AI model misclassifies them. Such attacks have been traced back to research conducted in 1988 and can result in dangerous misclassifications, such as an autonomous vehicle failing to recognize a stop sign. Poisoning attacks, by contrast, involve injecting corrupted data into a model's training set so that the model produces attacker-chosen responses when it later encounters specific inputs. Privacy attacks attempt to reconstruct training data that should be inaccessible, extract memorized data, or infer protected information.
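To make the evasion idea concrete, the sketch below implements the fast gradient sign method (FGSM), a standard technique for crafting adversarial examples by nudging each input feature in the direction that increases the model's loss. The toy network, epsilon value, and random input are illustrative assumptions; the paper describes the attack category, not this particular code.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a real model (hypothetical architecture).
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

loss_fn = nn.CrossEntropyLoss()

def fgsm_example(x, true_label, epsilon=0.1):
    """Craft an adversarial example with FGSM: shift each input feature
    by epsilon in the sign of the loss gradient, so a small perturbation
    pushes the model toward a misclassification."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x.unsqueeze(0)), torch.tensor([true_label]))
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

x = torch.randn(4)
x_adv = fgsm_example(x, true_label=0)
# Compare the model's prediction before and after the perturbation.
print(model(x.unsqueeze(0)).argmax().item(),
      model(x_adv.unsqueeze(0)).argmax().item())
```

Poisoning can be illustrated just as simply. The following sketch flips a small fraction of training labels, one common poisoning strategy among several the taxonomy surveys; the dataset and parameters here are invented for illustration.

```python
import torch

# Hypothetical training set of (features, label) pairs.
features = torch.randn(100, 4)
labels = torch.randint(0, 3, (100,))

def poison_labels(labels, target_class=0, flip_to=1, fraction=0.05):
    """Simulate a label-flipping poisoning attack: relabel a small
    fraction of one class so a model trained on the poisoned set
    systematically misclassifies that class."""
    poisoned = labels.clone()
    idx = (poisoned == target_class).nonzero().squeeze(1)
    n_flip = max(1, int(fraction * len(idx)))
    poisoned[idx[:n_flip]] = flip_to
    return poisoned

poisoned_labels = poison_labels(labels)
```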

Finally, abuse attacks repurpose generative AI systems for malicious ends, such as promoting hate speech, generating media that incites violence, or facilitating cyber attacks. The researchers aim to give practitioners a working understanding of these attack categories and their variations, and to propose mitigation methods that strengthen AI system defenses. They also caution that optimizing an AI system currently involves tradeoffs among security, fairness, and accuracy.

This research serves as a critical reminder that the security vulnerabilities of AI systems cannot be overlooked. As AI technologies continue to advance, addressing these security concerns is crucial to preventing potentially catastrophic consequences.

Source: trebujena.net
