The Foreign Service Journal, May-June 2026

battle scenarios even as tension between the company and Pentagon ratcheted up."

This debate hits at the heart of autonomy and culpability in action. One of the long-feared and long-warned-about dangers of AI tools in military use specifically is that AI functions as a permission structure, possibly untraceable and likely unaccountable, for generating targets and effectively signing death warrants. Target selection is an inevitable part of war, and the laws of war account for humans operating under orders, issuing bad orders, and the flaws, hazards, and limits of bad intelligence.

But the danger of AI in targeting isn't just hypothetical. As +972 Magazine reported in April 2024, Israel used an AI target generation tool called Lavender to increase the tempo of authorized targets and attacks in Gaza, an automated process that privileged machine inference over the difficult, verifiable work of other intelligence gathering. The danger is real and realized: "The result, the sources testify, was that the role of human personnel in incriminating Palestinians as military operatives was pushed aside, and AI did most of the work instead. … Lavender—which was developed to create human targets in the current war—has marked some 37,000 Palestinians as suspected 'Hamas militants,' most of them junior, for assassination (the Israeli Defense Forces spokesperson denied the existence of such a kill list in a statement to +972 and Local Call)."

This is admittedly a long digression, but Anthropic was perfectly fine working within a broad set of lawful bounds for the military, and it drew a line, I think, at least as much out of reputational risk and fear of culpability for undeniable war crimes as anything else. This is a defense contractor looking to outlast the present administration and avoid being jailed for aiding and abetting its crimes, not necessarily a stalwart defender of human interests in the face of a new conflict. Given that the secretary has gone on to say the war will not be conducted with "stupid rules of engagement," anyone hedging their bets on the potential for future consequences from these actions is likely to want to distance themselves from Operation Epic Fury.

A good starting point would be to document how processes are done before implementing an AI tool, and then check in three to six months after adoption to see what has changed.
