Israeli Military’s Use of Artificial Intelligence for Targeting Raises Concerns

A recent investigation by +972 Magazine and Local Call has revealed that the Israeli military has been using artificial intelligence (AI) to help identify bombing targets in Gaza. The revelation has raised serious concerns about the accuracy and ethical implications of relying heavily on AI for such decisions. According to six Israeli intelligence officials involved in the alleged program, the AI-based tool, dubbed “Lavender,” has a reported error rate of 10%.
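To put that figure in perspective, the sketch below works through the arithmetic of a 10% error rate at different scales. The error rate is the only number taken from the report; the counts of flagged individuals are purely hypothetical and are used only for illustration.

```python
# Illustrative arithmetic only: the 10% error rate is the figure cited in the
# investigation; the numbers of flagged individuals below are hypothetical.
ERROR_RATE = 0.10

def expected_misidentifications(num_flagged: int, error_rate: float = ERROR_RATE) -> float:
    """Rough expected number of people wrongly flagged at a given error rate."""
    return num_flagged * error_rate

for flagged in (1_000, 10_000, 30_000):  # hypothetical counts, not reported figures
    print(f"{flagged:>6,} flagged -> ~{expected_misidentifications(flagged):,.0f} misidentified")
```

Even under these simple assumptions, a 10% error rate implies that roughly one in ten people flagged by such a system would be wrongly identified, which is why the degree of human review matters.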

The Israel Defense Forces (IDF) acknowledged the existence of the tool but denied using AI to identify suspected terrorists, saying instead that information systems serve as tools for analysts during the target identification process. The IDF emphasized its commitment to minimizing harm to civilians, stating that analysts must independently examine and verify identified targets in accordance with international law and IDF directives.

Contrary to the IDF’s claims, one official involved in the program said that human personnel often played a minimal role, spending only around 20 seconds assessing each target before authorizing an airstrike. This raises concerns about how much scrutiny and consideration is given to potential civilian casualties.

The investigation comes as international scrutiny of Israel’s military campaign intensifies, with targeted airstrikes that killed foreign aid workers drawing significant attention. The Gaza Ministry of Health has reported more than 32,916 deaths during Israel’s siege of Gaza, which, according to the United Nations, has caused a humanitarian crisis and widespread hunger affecting three-quarters of the population in northern Gaza.

Yuval Abraham, the author of the investigation, had previously reported on Israel’s heavy reliance on AI to generate targets with minimal human oversight. The Israeli military denies using an AI system to identify terrorists but acknowledges using a database to cross-reference intelligence sources on the military operatives of terrorist organizations.

The investigation also found that the Israeli army systematically carried out night attacks on targets in residential homes, often resulting in the deaths of innocent Palestinians, including women and children. When targeting alleged junior militants, the army reportedly preferred unguided “dumb” bombs, which can cause significant collateral damage.

According to a CNN report, nearly half of the munitions dropped on Gaza in a recent campaign were unguided “dumb” bombs, which pose a greater threat to civilians, particularly in densely populated areas such as Gaza. The IDF maintains that strikes are conducted only when the expected collateral damage is not excessive in relation to the anticipated military advantage, and that efforts are made to minimize harm to civilians.

The utilization of AI in the Israeli military’s targeting process raises critical ethical questions. There is a need for greater transparency, accountability, and human involvement in decision-making to ensure the protection of innocent lives. The international community must engage in dialogue and scrutiny to address these concerns and safeguard the principles of humanitarian law.

FAQ

What is artificial intelligence (AI)?

AI refers to the development of computer systems capable of performing tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and problem-solving.

What are unguided “dumb” bombs?

Unguided “dumb” bombs are conventional explosive devices that lack guidance systems and simply follow a ballistic trajectory after release. Because of their imprecision, they carry a higher likelihood of causing collateral damage and harm to civilians.

What is the goal of the Israeli military’s targeting process?

The goal of the Israeli military’s targeting process is to identify and strike targets associated with terrorist organizations, with the intention of disrupting their operations and ensuring national security. However, concerns have been raised about the accuracy of target identification and the potential harm to civilians.

For more information about artificial intelligence (AI), you can visit the Brookings Institution.

To understand more about unguided “dumb” bombs, you can visit the Arms Control Association.

Further insights on the Israeli military’s targeting process and the ethical implications can be found at Al Jazeera.

For information on the humanitarian crisis in Gaza and the impact of Israel’s siege, you can visit the United Nations.
