Project Maven Comes Out of the Shadows

Project Maven, after a decade of formal DoD development… Maven-style artificial intelligence, machine learning, AI targeting, AI in space, satellite monitoring, AI full-spectrum battlefield command and control… Now is the time to draw a line: ‘Never Give Nuclear Launch Codes to AI’



You want algorithmic systems? The US National Geospatial-Intelligence Agency has them and is rolling them out in locations worldwide.

The implications of militarized AI are only beginning to unfold, and if past lessons are any guide, this new war-making capability is bound to be misused. AI in war is already being used by Israel (with US support) in Gaza. The ethical and moral consequences, and the blowback, of this first use have no doubt already opened a modern Pandora’s box.

Reactions are coming in worldwide, and amid a larger geopolitical arms race, the development of AI tools for war is escalating into an AI-in-war race, whatever the ethical implications.


There are concerns about the ethical implications of using AI in warfare, with critics arguing that giving machines the discretion to kill is morally repugnant. The UN Secretary-General is leading a group of over 80 countries calling for a ban on autonomous weapons systems.

Despite these concerns, the US Department of Defense issued a directive instructing commanders and operators to exercise “appropriate levels of human judgment” over the use of force, suggesting that human supervision, rather than initiation of decision-making, may be seen as sufficient.


As news media report that the International Criminal Court in The Hague may issue arrest warrants against Israeli leaders, and as Benjamin Netanyahu comes under political pressure from his government’s religious/right-wing coalition, the AI-assisted bombing of Gaza continues. The vast destruction reported has brought condemnation, the AI role in the bombing campaign is out in the open, and pushback is growing.

In the US, the presidential campaign has heated up, and the risks to President Biden are growing as his policy of support for Netanyahu comes under increasing scrutiny.


Let’s step back, though, and ask where Maven fits into ‘battlefield testing’.


The Maven Smart System fuses various data sources – satellite imagery, geolocation data, and communications intercepts – into a unified interface for battlefield analysis.

Maven made a crucial transition from a developmental program to a formal Pentagon ‘program of record’, signifying long-term backing and increased funding.


Where are the Maven systems being used, field ‘tested’, analyzed, then developmentally updated?


(Maven) algorithms are now actively used in places like Yemen, Iraq, and Syria – scanning for targets and informing strike decisions. But the US is not alone. Israel’s use of AI in combat targeting and Ukraine’s employment of AI in its defense against Russia underscore that this technology is reshaping the rules of engagement.


What comes next in AI autonomous weapons? What about nuclear weapons, nuclear codes, nuclear weapons use decision-making?


Autonomous weaponry is the third revolution in warfare, following gunpowder and nuclear arms. The evolution from land mines to guided missiles was just a prelude to true AI-enabled autonomy—the full engagement of killing: searching for, deciding to engage, and obliterating another human life, completely without human involvement.

An example of an autonomous weapon in use today is the Israeli Harpy drone, which is programmed to fly to a particular area, hunt for specific targets, and then destroy them using a high-explosive warhead nicknamed “Fire and Forget.” But a far more provocative example is illustrated in the dystopian short film Slaughterbots, which tells the story of bird-sized drones that can actively seek out a particular person and shoot a small amount of dynamite point-blank through that person’s skull. These drones fly themselves and are too small and nimble to be easily caught, stopped, or destroyed…


Targeting individuals … targeting buildings, city blocks, cities, regions, underground facilities, other countries…


Autonomous weapons can target individuals, using facial or gait recognition, and the tracing of phone or IoT signals. This enables not only the assassination of one person but a genocide of any group of people.

Greater autonomy without a deep understanding of meta issues will further boost the speed of war (and thus casualties) and will potentially lead to disastrous escalations, including nuclear war. AI is limited by its lack of common sense and human ability to reason across domains. No matter how much you train an autonomous-weapon system, the limitation on domain will keep it from fully understanding the consequences of its actions.

In 2015, the Future of Life Institute published an open letter on AI weapons, warning that “a global arms race is virtually inevitable.”

Nuclear weapons are an existential threat, but they’ve been kept in check and have even helped reduce conventional warfare on account of the deterrence theory. Because a nuclear war leads to mutually assured destruction, any country initiating a nuclear first strike likely faces reciprocity and thus self-destruction.

But autonomous weapons are different. The deterrence theory does not apply, because a surprise first attack may be untraceable. As discussed earlier, autonomous-weapon attacks can quickly trigger a response, and escalations can be very fast, potentially leading to nuclear war. The first attack may not even be triggered by a country but by terrorists or other non-state actors. This exacerbates the level of danger of autonomous weapons.

The Atlantic, September 2021


StratDem: A newly published book, “Nuclear War: A Scenario”, has been drawing headline reviews. Politico calls it “Seventy-Two Minutes Until the End of the World”.

As Strategic Demands looks at a Nuclear Arms Race 3.0, we need to prevent 4.0 escalation.

It’s time to look at those who rail against “precautionary principles”, especially those who tout technology as a savior promising seemingly infinite ROI.

What should be clear today will only become more evident every day: the dangers of runaway technology, of a technology race, an AI race, a new nuclear arms race producing AI+nukes, are upon us.

Maven and its successors are proving the need for human intervention, for precaution and prevention. Whether through international negotiations, national laws, or court decisions, it should be resolved, as a starting point, that AI use in war must never extend to ‘precision’ nuclear targeting under AI direction:

‘Never Give Nuclear Launch Codes to AI’