The Reality of AI-driven Warfare: Questioning the Promise of Precision

Artificial intelligence (AI) in warfare has provoked intense public concern. The prospect of human will and agency being displaced by remorseless AI killing machines is deeply disconcerting. Yet the deployment of AI-driven automated systems in conventional warfare, directed by human operators, has already become a chilling reality.

The misleading notion is the belief that autonomous systems will render human agency obsolete as they grow more algorithmically sophisticated. The promise of precision in modern warfare holds that civilian casualties will fall even as lethality against combatants rises dramatically.

A recent case study exposes the flaw in this vision: the pulverizing campaign waged by Israel in Gaza. The Israel Defense Forces (IDF) incorporated AI into their operations to identify and eliminate targets, yet the outcome has been neither accurate nor morally informed. Rather than limiting harm to civilians, Israel’s use of AI appears to have bolstered the ability to identify and locate targets and to expand target sets, producing maximum damage.

The investigation into this issue references a book titled “The Human-Machine Team: How to Create Synergy Between Human and Artificial Intelligence That Will Revolutionize Our World,” written under the pen name Brigadier General Y.S., identified as the current commander of Israeli intelligence Unit 8200. The book advocates for a system capable of swiftly generating thousands of potential targets in wartime. The sinister aim of such a machine is to eliminate the human bottleneck: the need for a person to approve each target and verify its viability.

The joint investigation by +972 and Local Call found that such a system, known to the Israeli forces as Lavender, is already at an advanced stage of development. Lavender goes beyond earlier AI systems that identified military buildings and structures allegedly used by militants: it marks the people themselves. Far from curtailing the death toll, this approach has fed what some describe as a “mass assassination factory.”

Six Israeli intelligence officers who served during the recent Gaza war revealed Lavender’s central role in the unprecedented bombing of Palestinians, particularly in the early stages of the conflict. The machine effectively diminished the human element in targeting while lending the results a false veneer of human credibility.

In the initial weeks of the war, the IDF relied heavily on Lavender, which marked approximately 37,000 Palestinians as suspected militants for possible airstrikes. This was a departure from the previous doctrine of targeting only specific senior military operatives: under the new policy, all operatives in Hamas’s military wing, regardless of rank, were designated as human targets.

Officers were given wide-ranging authority to accept the kill lists without scrutiny, with as little as 20 seconds allotted per target before a bombing was authorized. Permission was granted despite awareness that roughly 10 percent of cases involved targeting errors, with the system at times marking individuals who had a loose connection, or none at all, to militant groups. Applied to some 37,000 marked individuals, an error rate of that order implies thousands of people flagged by mistake.

Additionally, the Lavender system worked in conjunction with another automated platform called “Where’s Daddy?”, which tracked marked individuals to their family residences so that they could be bombed there. The result was mass civilian casualties: thousands of Palestinians, including women, children, and other non-combatants, were killed by Israeli airstrikes during the conflict’s early stages.

One intelligence officer described the grim logic: there was little interest in killing Hamas operatives only when they were inside military facilities or engaged in military activity. The focus was on bombing their homes, because that proved a far simpler task, and the system was built to find them in precisely those situations.

The use of this system involved chilling calculations. Two sources revealed that, during the initial weeks of the war, the IDF authorized the killing of up to 15 or 20 civilians for every junior Hamas operative marked by Lavender. For more senior Hamas officials, the permitted toll could reach 100 civilians.

In response to these revelations, the IDF continues to assert that it does not use an AI system to identify terrorists or to predict whether an individual is a terrorist. It claims instead that the tool in question is a database that cross-references intelligence sources on the military operatives of terrorist organizations.

UN Secretary-General António Guterres expressed deep concern over reports of Israel’s use of AI to identify targets, particularly in densely populated residential areas. Used in this manner, AI has contributed to a high number of civilian casualties, raising profound ethical questions about the conduct of AI-driven warfare.

FAQ:

Q: What is the Lavender system?
A: Lavender is an AI-based system used by the Israel Defense Forces to mark individuals as potential targets for attack during the war in Gaza.

Q: How accurate is Lavender in targeting?
A: According to intelligence sources, roughly 10 percent of the individuals Lavender marks have a loose connection, or none at all, to militant groups.

Q: How were civilian casualties affected by the use of Lavender?
A: The use of Lavender, alongside companion platforms such as “Where’s Daddy?”, resulted in significant civilian casualties in Gaza, particularly during the initial stages of the conflict.

Sources:
– +972: [URL]
– Local Call: [URL]
– Times of Israel: [URL]

