AI, Nimbus, Lavender … News of a New War Machine

Artificial Intelligence as a Tool of War …

AI Tracks to Target and Bomb …

Since launching its war on Gaza last October, Israel has killed more than 34,000 Palestinians and injured over 77,000. Bombing, shelling, missiles, and drones have destroyed large sections of the narrow enclave and displaced millions who now face famine and starvation.


AI Goes to War

Lavender, Nimbus, The Gospel, Unit 8200, Where’s Daddy?




April 3

‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza

The Israeli army has marked tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human oversight and a permissive policy for casualties



April 16

Israel accused of using AI to target thousands in Gaza, as killer algorithms outpace international law


Meanwhile, at Google offices, employees are protesting that the work the company contracted with the Israeli army is ‘doing bad’ — this from a company famously known for its motto “Don’t be evil.”

The original Google motto from 2000 was a mainstay of public relations, often highlighted by Google founders Larry Page and Sergey Brin. In 2015 the motto changed to “Do the right thing”.


“Don’t be evil” is Google’s former motto, and a phrase used in Google’s corporate code of conduct. Following Google’s corporate restructuring under the conglomerate Alphabet Inc. in October 2015, Alphabet took “Do the right thing” as its motto, also forming the opening of its corporate code of conduct



A few years later the two founders ‘retired’ from their key roles, but in 2023 Brin returned to push Google’s artificial intelligence projects forward.


There is no doubt that Google is ‘all hands on deck’ with AI. Its investments span the globe in the rush to be a first mover in an AI market that has rapidly risen to the top of investors’ minds. AI is a juggernaut, and the future of artificial intelligence is now at war. The AI rush has gone deadly.

The question is how deadly, and whether this battlefield test of a veritable AI Skynet out of Hollywood imagination is now a marketable application.

To what extent has Google’s Nimbus AI Cloud program been put into effect in the Israeli war in Gaza?

To what extent are AI cloud services, scoped, developed, engineered, and tested in Israel’s war, doing the “right thing”? The ethics motto that Google founders Page and Brin once championed is now under fire.

Google engineers, coders, and employees working on the company’s AI programs are protesting that Google has strayed far from doing the “right thing.”

The story that has emerged in recent weeks about Google’s AI and cloud services being used to target thousands in Gaza — AI targeting of buildings and families, AI tracking, AI eyes in the sky with vast databases compiling and delivering kill targets — reverberates in the field and in the factory.


Google Workers Revolt Over $1.2 Billion Contract With Israel



Google’s response was to fire the employees.



Google followed with a statement about interference and “unacceptable behavior”.

“Physically impeding other employees’ work and preventing them from accessing our facilities is a clear violation of our policies, and completely unacceptable behavior… After refusing multiple requests to leave the premises, law enforcement was engaged to remove them to ensure office safety. We have so far concluded individual investigations that resulted in the termination of employment for 28 employees, and will continue to investigate and take action as needed.”


According to news reports, the Google employees’ protest came a day before the Israeli government approved a five-year strategic plan to transition to the cloud under Project Nimbus and expand digital services.

Israel’s Defense Ministry and military were listed in a government statement as partners in Project Nimbus, along with other government offices.

A Google representative responded that the Nimbus contract is “not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.”


At the same time, more reports are circulating widely about AI programs such as The Gospel, Lavender, and Where’s Daddy? being used for mass surveillance to identify tens of thousands of Gazans as targets, to “track and strike people specifically in their homes, and essentially run a ‘mass assassination factory’ that works with minimal human oversight.”


The whistle-blowing +972 article that broke this story has been trending over the past week, and its charges of AI machine tracking and bombing have spawned a host of follow-on stories and an international Internet phenomenon. The investigative journalism reads like revelatory Sy Hersh. The +972 story is impossible to ignore and, like Hersh’s My Lai story, could change the course of the war…


Witness the opening of the +972 article:

In 2021, a book titled “The Human-Machine Team: How to Create Synergy Between Human and Artificial Intelligence That Will Revolutionize Our World” was released in English under the pen name “Brigadier General Y.S.” In it, the author — a man who we confirmed to be the current commander of the elite Israeli intelligence unit 8200 — makes the case for designing a special machine that could rapidly process massive amounts of data to generate thousands of potential “targets” for military strikes in the heat of a war. Such technology, he writes, would resolve what he described as a “human bottleneck for both locating the new targets and decision-making to approve the targets.”

Such a machine, it turns out, actually exists. A new investigation by +972 Magazine and Local Call reveals that the Israeli army has developed an artificial intelligence-based program known as “Lavender,” unveiled here for the first time. According to six Israeli intelligence officers, who have all served in the army during the current war on the Gaza Strip and had first-hand involvement with the use of AI to generate targets for assassination, Lavender has played a central role in the unprecedented bombing of Palestinians, especially during the early stages of the war. In fact, according to the sources, its influence on the military’s operations was such that they essentially treated the outputs of the AI machine “as if it were a human decision.”

Formally, the Lavender system is designed to mark all suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad (PIJ), including low-ranking ones, as potential bombing targets. The sources told +972 and Local Call that, during the first weeks of the war, the army almost completely relied on Lavender, which clocked as many as 37,000 Palestinians as suspected militants — and their homes — for possible air strikes.


The article then goes into details drawn from multiple interviews…


The Israeli army systematically attacked the targeted individuals while they were in their homes — usually at night while their whole families were present — rather than during the course of military activity. According to the sources, this was because, from what they regarded as an intelligence standpoint, it was easier to locate the individuals in their private houses. Additional automated systems, including one called “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.



AI, Nimbus, Lavender — News of a New War Machine


In 2016, Heather Roff and Richard Moyes, then writing for Article 36, a non-profit focused on the issue, cautioned that a person “simply pressing a ‘fire’ button in response to indications from a computer, without cognitive clarity or awareness”, does not meaningfully qualify as “human control”.


The ethics here are more than troubling. The potential profit-taking from delivering AI to military combat systems is now being demonstrated. Although Google is pushing back and denying, the consequences of battlefield use seem, at this point, undeniable.

The tens of thousands of dead, and the tens of thousands more injured, crippled, and missing, are delivering a message.

The AI war machine being tested in war is already close to out of control.

Humanity has trials ahead if computer-directed AI war systems continue to proliferate.

More to come…