The power held by these tech giants is now being harnessed and applied in more explicitly violent and increasingly unaccountable ways. In 2021, Google and Amazon jointly signed Project Nimbus, a $1.2 billion agreement to provide the Israeli government and the IDF with cloud computing and AI services. In previous years, contracts involving the militarisation of Google’s technology were met with fierce internal protest from employees: in 2018, a wave of employee walkouts forced Google not to renew its Pentagon contract for the controversial Project Maven. By contrast, the current internal resistance to Nimbus has resulted in mass firings, the silencing of dissent, and no change of course from Google. Google claims that the cloud services provided through Project Nimbus are ‘not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services’, yet it has worked directly with the IDF to supply access to its AI systems, and in late September 2025 Microsoft cut off an Israeli military intelligence unit’s access to a set of Azure services after it was revealed that the platform was being used in the mass surveillance of Palestinians.
Regardless of the degree to which, from a purely technical standpoint, the machine vision systems trained through reCAPTCHA are used by the IDF in the ongoing genocide in Gaza, the centralisation of power Google attained through reCAPTCHA is an inextricable, tangible link to the data centres and computational power it is currently supplying to Israel. The worker inside the Mechanical Turk reinforces the power of the machine they are trapped within. In a sense, each of us has, in our own small part, directly contributed to this future, and the wrath this awareness provokes must be directed at the tech monoliths, like Google, that transformed a system for proving one’s humanity into a killing machine for profit.
The prescriptive nature of machine learning systems encourages the offloading of cognitive work at every stage of their rollout; the promised productivity gains depend on it. These systems inherently resist human intervention or, worse, allow the workers in their pipelines to absolve themselves of guilt, while subjective human decisions are masked as objective, calculated fact.
In the weeks following October 7th, during the early stages of its invasion of Gaza, the IDF drastically increased its use of a ‘target creation’ tool called Lavender. The tool has always been far from faultless at its stated goal of identifying members of Hamas, yet since the invasion began it has been repeatedly tweaked to allow the creation of more and more targets, with no requirement that a human verify the machine’s decision, or the data underlying it, before a strike is carried out.
‘In a day without targets [whose feature rating was sufficient to authorize a strike], we attacked at a lower threshold. We were constantly being pressured … We finished [killing] our targets very quickly.’ (+972 Magazine)
Far from the myth of precise warfare claimed by their proponents, prescriptive technologies in practice give cover to inhuman abuses, making vastly more collateral damage permissible. While systems such as Lavender are touted as advances in automated decision making, the targets they generate represent fundamentally human decisions: their parameters are adjusted to create arbitrarily more targets on demand. The kill lists generated by Lavender are human desires masquerading as the calculated decisions of a robot, the operator pulling strings hidden behind the veneer of the technology enabling these atrocities.
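To make that point concrete, the sketch below is a purely illustrative toy in Python, not Lavender’s actual code, data, or scoring method. It assumes only a generic threshold-based scoring system: the model’s scores never change, yet the number of people ‘created’ as targets is decided entirely by the threshold an operator chooses.

```python
# Illustrative toy only: not Lavender's code, data, or methodology.
# It shows how, for a fixed set of model scores, the number of people
# flagged as "targets" is set entirely by a human-chosen threshold.
import random

random.seed(0)

# Hypothetical population of scored individuals (random scores here).
scores = [random.random() for _ in range(100_000)]

def targets_at(threshold: float, scores: list[float]) -> int:
    """Count how many individuals the system flags at a given threshold."""
    return sum(s >= threshold for s in scores)

# The model never changes; only the operator's threshold does.
for threshold in (0.99, 0.95, 0.90, 0.80):
    print(f"threshold {threshold:.2f} -> {targets_at(threshold, scores):,} targets")
```

In this toy, lowering the threshold from 0.99 to 0.80 multiplies the number of flagged people roughly twentyfold without the underlying ‘intelligence’ changing at all: the size of the list is a human parameter dressed up as a machine output.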