Algorithms, Lavender, and the Art of War

In 2023, the conflict between Israel and armed groups in Gaza saw the large-scale use of artificial intelligence in military operations. One program, called Lavender, became a central tool for target selection. It relies on machine learning algorithms that cross-reference multiple sources – images, intercepted communications, and administrative records – to identify profiles of potential fighters.

Lavender analyzes data in real time, producing daily lists of several hundred potential targets for commanders. The AI’s recommendations are based on statistical patterns detected across different information sources. For instance, someone repeatedly seen near activities deemed hostile – or linked to other suspects – may be automatically flagged as a priority target.

This technological development follows a long line of innovations that have shaped military operations throughout history. Yet giving a machine a major role in the targeting process raises new challenges. What role does human judgment play when targets are identified through statistical analysis? How much leeway do officers have when faced with a steady flow of such recommendations? According to the Israeli army, these tools are intended to assist decision-making, with each strike requiring validation by a human operator, in line with the international law of armed conflict.

Officially, analysts assess each target on the list to ensure it meets legal and military criteria before any action is authorized, with the AI making no autonomous decisions. In theory, the chain of command, as well as the principles of distinction and proportionality, remain in effect: only military objectives are to be targeted, and civilian harm is to be avoided whenever possible. Nevertheless, the complexity of the operational environment on the ground can make strict adherence to these principles difficult, as documented in multiple reports and analyses.

The Lavender program was able to generate a large number of targets by combining intelligence from diverse sources – surveillance data, electronic intercepts, administrative databases, and so on – to evaluate individuals’ likely affiliations.

Technically, the AI identifies patterns in the data – such as faces, locations, or phone calls – that match profiles already linked to fighters. This process produces suspect lists ranked by statistical similarity. While it saves considerable time, the approach has inherent limitations: even the most advanced models can generate many false positives, especially when attempting to distinguish a fighter from a civilian, a task that challenges even human analysts.
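
To make the idea of statistical ranking concrete, here is a minimal, purely hypothetical sketch in Python. The feature vectors (counts of calls, visits, and shared contacts) and the reference profile are invented, and nothing here reflects Lavender's actual features, code, or thresholds; the sketch simply ranks candidates by cosine similarity to a reference pattern and flags those above an arbitrary cutoff.

```python
# Hypothetical illustration only: the features, values, and threshold are invented
# and do not describe any real system.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Invented feature order: [calls to flagged numbers, visits to flagged sites, shared contacts]
reference_profile = np.array([8.0, 5.0, 10.0])  # averaged from already-labelled cases (made up)

candidates = {
    "person_A": np.array([7.0, 6.0, 9.0]),  # activity closely matching the reference pattern
    "person_B": np.array([1.0, 0.0, 2.0]),  # very little activity, but a similar pattern
    "person_C": np.array([6.0, 4.0, 1.0]),  # plenty of activity, but a different pattern
}

THRESHOLD = 0.9  # arbitrary cutoff for flagging

# Rank candidates by similarity to the reference profile, highest first.
ranked = sorted(
    ((name, cosine_similarity(vec, reference_profile)) for name, vec in candidates.items()),
    key=lambda item: item[1],
    reverse=True,
)

for name, score in ranked:
    status = "FLAGGED" if score >= THRESHOLD else "not flagged"
    print(f"{name}: similarity = {score:.2f} -> {status}")
```

The toy example makes one point: the score measures resemblance to a pattern, not guilt. person_B is flagged (similarity around 0.91) despite minimal activity, because cosine similarity ignores how much activity there is and looks only at its shape, which is exactly the kind of false positive described above.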

AI systems operate on the data they receive and on probabilistic models; they have no genuine understanding and offer no certainty. Making lethal decisions on the basis of such analyses therefore means accepting a degree of uncertainty that is easy to underestimate in practice.
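
A short worked example shows why that uncertainty is easy to underestimate. The figures below are entirely invented for illustration (a 1% base rate of actual fighters, a 90% detection rate, a 5% false-alarm rate) and describe no real system; applying Bayes' rule to them shows that even a seemingly accurate classifier mostly flags the wrong people when what it looks for is rare.

```python
# Invented figures, purely to illustrate the base-rate effect;
# they do not describe any real system's performance.
prevalence = 0.01            # assumed share of actual fighters in the scanned population
true_positive_rate = 0.90    # assumed chance the model flags an actual fighter
false_positive_rate = 0.05   # assumed chance the model wrongly flags a civilian

# Bayes' rule: probability that a flagged person really is a fighter.
p_flagged = prevalence * true_positive_rate + (1 - prevalence) * false_positive_rate
p_fighter_given_flag = (prevalence * true_positive_rate) / p_flagged

print(f"Share of population flagged: {p_flagged:.1%}")             # roughly 6%
print(f"P(fighter | flagged):        {p_fighter_given_flag:.1%}")  # roughly 15%
```

Under these made-up assumptions, only about 15% of flagged individuals would actually be fighters; the rest would be false alarms, a gap between statistical confidence and operational certainty that is easy to overlook.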

A phenomenon known in cognitive ergonomics as “automation bias” describes the tendency of human operators to place undue trust in machine-generated information, particularly when it carries the credibility of advanced technology.

In the case of Lavender, several military personnel reported approving proposed targets within seconds, making few, if any, changes. One source noted that they spent only about twenty seconds reviewing each target, viewing themselves more as an administrative “buffer.” Faced with a high daily volume of targets, analysts tend to validate recommendations rather than critically review them, and human oversight becomes automatic and less thorough. This partial transfer of control to the AI raises concerns among experts – particularly in international humanitarian law – who warn it could undermine human accountability and increase the likelihood of serious errors.

In the event of a machine error – for instance, if a civilian is mistakenly identified as a fighter – the question of responsibility becomes complex. Accountability may be distributed among the operator who approves the target, the software developers, and the command structure that deployed the system. Responsibility thus becomes shared and fragmented between the algorithm, its users, and its designers, making any clear attribution of blame difficult when an incident occurs.

Under international humanitarian law, this dilution of responsibility raises serious concerns. The Geneva Conventions and their Additional Protocols require parties to a conflict to distinguish between combatants and civilians – targeting only military objectives – and to respect proportionality, avoiding civilian harm that would be excessive in relation to the anticipated military advantage.

Using algorithms capable of generating large numbers of targets – combined with a high pace of strikes – can make it harder to carefully verify each target or accurately assess collateral risks. Organizations such as Human Rights Watch have noted that these digital tools may rely on incomplete or flawed data and stress that their deployment does not necessarily improve civilian protection – and could even heighten civilian vulnerability.

This new model underscores the tension between advances in military applications of AI and enduring ethical and legal obligations. While armed forces seek greater efficiency and reduced risk to their personnel, transparency, meaningful human oversight, and respect for civilian rights remain essential. This AI-assisted form of warfare calls for renewed reflection on accountability and regulation, to ensure that automation does not undermine the human role in life-and-death decision-making.
