Artificial Intelligence Program Identifies Thousands of Suspects in Gaza

The data analytics program Lavender has reportedly flagged approximately 37,000 individuals as potential Hamas affiliates in Gaza, home to roughly 2.3 million people. According to sources who spoke to the Tel Aviv-based outlets +972 and Local Call, the algorithm scored the likelihood of a person’s connection to Hamas using opaque criteria.

During the early stages of the conflict, the Israeli military reportedly adhered strictly to the program’s output: flagged individuals could be targeted with little further vetting, so long as they were male.

The contentious category of “human targets,” historically reserved for high-ranking military figures, was notably broadened after October 7 to include a far wider array of individuals perceived as having ties to Hamas. This expansion of the target pool made individual verification impractical at scale, prompting increased reliance on artificial intelligence.

One source expressed consternation over incidents in which a house was bombed to eliminate a “non-significant” name, an account that amounts to an acknowledgment of mass civilian killings in Gaza. The sources also describe operations against low-level targets in which up to 20 civilian casualties were permitted, a departure from the principle of proportionality; for high-value targets, the reported threshold rose to as many as 100 civilians.

Simple behaviors, such as frequently changing cell phones or residences, or membership in a WhatsApp group deemed anti-Israel, were reportedly sufficient for inclusion on a list with potentially lethal consequences, a stark reflection of the broad criteria Israeli forces applied to Gaza’s residents.
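The reporting describes Lavender only in broad strokes; its actual features, weights, and thresholds are not public. Purely as an illustration of the kind of feature-based scoring the sources describe, the sketch below flags a person when crude behavioral signals cross an invented cutoff. Every field name, weight, and threshold here is a hypothetical stand-in, not a claim about the real system.

```python
from dataclasses import dataclass

@dataclass
class PersonRecord:
    # Hypothetical signals echoing those named in the reporting:
    # phone changes, residence changes, group membership, gender.
    phone_changes_per_year: int
    residence_changes_per_year: int
    in_flagged_whatsapp_group: bool
    is_male: bool

def suspicion_score(person: PersonRecord) -> float:
    """Toy linear score: each behavioral signal adds an invented weight."""
    score = 0.0
    score += 0.2 * min(person.phone_changes_per_year, 5)
    score += 0.2 * min(person.residence_changes_per_year, 5)
    if person.in_flagged_whatsapp_group:
        score += 0.4
    return score

FLAG_THRESHOLD = 0.5  # invented cutoff, for illustration only

def flag_for_review(person: PersonRecord) -> bool:
    # Mirrors the reported pattern: weak signals plus a gender filter
    # are enough to place a name on the list.
    return person.is_male and suspicion_score(person) >= FLAG_THRESHOLD

# Example: two phone changes plus group membership already clears the bar.
print(flag_for_review(PersonRecord(2, 0, True, True)))  # True
```

The sketch makes the sources’ concern concrete: a pipeline like this needs very little information, none of it individually incriminating, before a name lands on a lethal list.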

Artificial intelligence (AI) technologies are increasingly incorporated into military and surveillance operations, including for the identification of potential threats. Their use in such contexts raises important questions, challenges, and controversies, as well as advantages and disadvantages worth exploring in connection with the use of AI to identify suspects in Gaza.

Key Questions:
1. What are the ethical implications of using AI for military targeting decisions?
2. How does the use of such an AI program comply with international law, including the principles of distinction and proportionality?
3. What safeguards exist to ensure the accuracy of AI-generated insights?
4. How does the use of AI in military operations affect the civilian population in conflict zones?

Key Challenges and Controversies:
1. Accuracy and Bias: AI systems inherit the quality and biases of the data they are trained on. There is a risk of misidentification and of disproportionate targeting of certain groups or individuals; at population scale, even a seemingly accurate classifier produces large numbers of false positives (see the sketch after this list).
2. Ethical Concerns: The application of AI for military purposes, especially in targeting decisions, brings into question the responsibility for civilian casualties and the morality of relying on algorithms for life-and-death decisions.
3. Transparency: The criteria and algorithms used in AI programs like Lavender are often opaque, making it difficult to assess their fairness and to hold operators accountable.
4. Compliance with International Laws: There are concerns about whether AI-targeted operations comply with international humanitarian law, which mandates the protection of civilians in conflict.
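To make the accuracy concern in point 1 concrete, consider the base-rate arithmetic. The only figure below taken from the article is the roughly 2.3 million population; the prevalence, sensitivity, and false-positive rate are assumptions chosen solely for illustration, since neither Lavender’s error rates nor the true number of affiliates is public.

```python
# Base-rate illustration with assumed numbers.
population = 2_300_000            # from the article
true_affiliates = 25_000          # assumed prevalence (~1%)
sensitivity = 0.90                # assumed share of true affiliates flagged
false_positive_rate = 0.01       # assumed share of non-affiliates flagged

true_positives = sensitivity * true_affiliates
false_positives = false_positive_rate * (population - true_affiliates)
flagged = true_positives + false_positives

print(f"Flagged: {flagged:,.0f}")                  # ~45,250
print(f"Wrongly flagged: {false_positives:,.0f}")  # ~22,750
print(f"Share of flags that are wrong: {false_positives / flagged:.0%}")  # ~50%
```

Under these assumed numbers, roughly half the names on the list would be misidentified despite a false-positive rate of only 1 percent, which is why accuracy claims about population-scale screening deserve close scrutiny.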

Advantages:
1. Efficiency: AI can process vast amounts of data much faster than humans, potentially identifying threats that might otherwise go unnoticed.
2. Consistency: AI programs can apply the same criteria consistently, whereas human judgment may vary.

Disadvantages:
1. Civilian Casualties: The lack of precise verification can lead to higher risks of civilian casualties, especially when a high casualty threshold is set for strikes on particular targets.
2. Accountability: It is challenging to attribute responsibility for mistakes or misconduct when decisions are made or assisted by AI.

Readers interested in learning more about artificial intelligence and its implications may consult reputable AI research and policy organizations. Two leading examples:
AI Now Institute – Focuses on the social implications of artificial intelligence.
Partnership on AI – A partnership between leading technology companies, civil society, and academics to establish best practices for AI.
