Rapid Development of Artificial Intelligence Faces Multiple Threats, Warns NIST

A recent publication from the National Institute of Standards and Technology (NIST) highlights the various threats faced by the rapid development of artificial intelligence (AI). According to the report, AI systems are vulnerable to attacks such as misdirection, data poisoning, and privacy breaches.

One major concern raised in the report is the inability to fully protect AI systems against hostile actors who can attack and confuse them. This poses a significant challenge as it is crucial to promote responsible development and deployment of AI tools across industries.

The report also focuses on the potential risks associated with the training of AI models using large language datasets. Due to the sheer volume of data involved, it is difficult to audit every piece of data being fed to AI systems. This creates vulnerabilities in terms of accuracy, content, and response to queries. Some AI systems trained on poisoned data have exhibited racist and derogatory behavior in their responses.

Another area of concern highlighted in the publication is evasion attacks. These post-deployment attacks aim to change how AI systems recognize inputs or respond to them. For instance, by adding additional markings to a stop sign, attackers can cause self-driving cars to fail in recognizing the sign, which could lead to accidents.

The NIST report also draws attention to the possibility of reverse engineering AI systems to identify their sources and injecting malicious examples or information. This can prompt inappropriate responses from the AI, compromising its intended use.

What makes these attacks even more alarming is that they can be executed with minimal knowledge of AI systems. This black-box nature of the attacks makes it challenging to defend against them effectively.

NIST computer scientist, Apostol Vassilev, who contributed to the publication, emphasized the need for better defenses. While there are some mitigation strategies available, they currently lack robust assurances in fully mitigating the risks posed by these attacks.

The report serves as a wake-up call for the AI community to prioritize the development of stronger defenses and ensure responsible usage of AI technologies to protect against evolving threats.

The source of the article is from the blog macholevante.com

Privacy policy
Contact