AI Weapons, Today & Tomorrow

“Lethal Autonomous Robotics” (LARs): artificial intelligence programmed to destroy on command, and network-on-network warfare. Autonomous weapon systems are being designed and prototyped to fight future wars, programmed to go beyond where humans can go. These autonomous AI weapon systems are coming.

It doesn’t take a genius to recognize the threat of weapons systems that go out of control. At every level of system guidance, a necessary component is redundancy, plus alternate directives in the event of system failure. At the height of the Cold War, amid fears of accidental or unforeseen nuclear events, “fail-safe” controls were designed and deployed as essential. It did not take Peter Sellers and the movie Dr. Strangelove to make it clear in the popular imagination that fail-safe was less than safe. In the military, as declassified history now demonstrates, the nuclear incidents, “close calls,” and “near misses” were adding up; on all sides of international affairs and military decision-making, the risk of cataclysmic war drove negotiations for sane nuclear arms reduction.
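
As a software analogy only (a minimal sketch, not any actual weapons-control design; every name, threshold, and behavior here is invented), the fail-safe principle those Cold War engineers were reaching for can be expressed as a command gate in which every failure path resolves to the safe default of aborting:

```python
import time

SAFE_ABORT = "ABORT"
CONFIRM_WINDOW_SECONDS = 10.0  # a human confirmation goes stale after this

class FailSafeGate:
    """Pass a command through only while a recent, positive human
    confirmation exists; every failure path defaults to abort."""

    def __init__(self):
        self._last_confirm = None  # timestamp of the last human confirmation

    def confirm(self):
        """A human operator positively re-authorizes action."""
        self._last_confirm = time.monotonic()

    def decide(self, command):
        try:
            if self._last_confirm is None:
                return SAFE_ABORT  # never confirmed: fail safe
            if time.monotonic() - self._last_confirm > CONFIRM_WINDOW_SECONDS:
                return SAFE_ABORT  # confirmation stale: fail safe
            return command  # fresh authorization: proceed
        except Exception:
            return SAFE_ABORT  # an internal fault also resolves to abort

# Usage: absent a fresh human confirmation, the gate refuses to act.
gate = FailSafeGate()
assert gate.decide("PROCEED") == SAFE_ABORT
gate.confirm()
assert gate.decide("PROCEED") == "PROCEED"
```

The design choice to notice: action requires a fresh positive signal, while silence, staleness, and internal faults all converge on the same safe default. The declassified record of close calls is, in effect, a history of how hard that convergence is to guarantee.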

In the current era, as nuclear weapons modernization again drives nations toward a new age of nuclear escalation and risk, a new potential category of weapons threat arises. Those in the know say it’s time to take notice, and take action.

Pentagon exploring AI-human warfare teams

Comments by Robert Work, Deputy Secretary, US Department of Defense / Atlantic Council Conference / May 2, 2016

On the “Third Offset”
 
“R&D is going down in the public sector, but up in the private sector. Most things that have to do with AI [artificial intelligence] and autonomy are happening in the private sector. And so all competitors are going to have access to it, it’s going to be a world of fast-followers. You’re going to have an instance where you’re not going to have a lasting advantage.”
 
Anti-access/area denial, known as A2AD and characterized as network-on-network warfare, is a primary focus of Third Offset discussions. “It’s when your battlefield network collides against another one. And what the Third Offset says is how do you put learning machines, AI, and autonomous systems into the network to allow your network to prevail over an enemy’s network.”


Via Defense News:

Work Outlines Key Steps in Third Offset Tech Development

The deputy was up front about why he and others at the Pentagon have been talking openly about the AI technologies the department is looking at, citing the openness as part of the conventional deterrence strategy against near-peer competitors.

“We will reveal to deter and conceal for war-fighting advantage. I want our competitors to wonder what’s behind the black curtain,” Work said.

The Third Offset strategy was allocated, according to public records, $18 billion in the U.S. defense budget for 2017. “$3 billion is intended specifically for human-machine collaboration with another $1.7 billion earmarked for cyber and electronic warfare. Third Offset as originally conceived in November 2014 was a comprehensive method of extending US military advantage using long-term planning and technological advances in areas as diverse as extended-range air, undersea warfare, and complex system engineering, integration and operation.”

In recent statements, Work has given greater insight into five key points he will focus on over the next year:

  • Autonomous “deep learning” machines and systems, which the Pentagon wants to use to improve early warning of events. As an example, Work pointed to the influx of “little green men” from Russia into Ukraine as simply a big data problem that could be crunched to predict what was about to happen (a toy sketch of this idea follows the list).
  • Human-machine collaboration, specifically the ways machines can help humans with decision-making. Work pointed to the advanced helmet on the F-35 joint strike fighter, which fuses data from multiple systems into one layout for the pilot.
  • Assisted-human operations, or ways machines can make the human operate more effectively: think park assist on a car, or the “Iron Man” exoskeleton suit DARPA has been experimenting with. Work was careful here to differentiate between this point and what he called “enhanced human operations,” for which he did not offer an example, but warned “our adversaries are pursuing [enhanced human operations] and it scares the crap out of us, frankly.”
  • Advanced human-machine teaming, where a human is working with an unmanned system. This is already going on with the Army’s Apache and Grey Eagle teaming, or the Navy’s P-8 and Triton systems. “We’re actively looking at a large number of very, very advanced things,” Work said, including swarms of unmanned systems.
  • Semi-autonomous weapons that are hardened to operate in an electronic warfare environment. Work has been raising the alarm for the past year about weapons needing to be hardened against such attacks, and noted the Pentagon has been modifying the small diameter bomb design to operate without GPS if denied.
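
As a toy illustration of the “deep learning” early-warning idea in the first bullet (a hedged sketch only; the function, thresholds, and data below are invented and bear no relation to any actual Pentagon system), the simplest version of treating early warning as a big data problem is flagging when an activity signal spikes far above its recent baseline:

```python
from statistics import mean, stdev

def early_warning(counts, window=30, z_threshold=3.0):
    """Flag the indices where a daily event count spikes far above its
    trailing baseline -- a crude stand-in for crunching bulk activity
    data to predict that something unusual is about to happen."""
    alerts = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (counts[i] - mu) / sigma > z_threshold:
            alerts.append(i)
    return alerts

# Usage: sixty unremarkable days of activity, then a surge on day 60.
series = [10] * 30 + [11, 9, 10] * 10 + [55]
print(early_warning(series))  # [60]
```

Real systems would fuse many such feeds with far more sophisticated models; the point is only that “prediction” in this context reduces to statistical anomaly detection over streams of observations.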

The next 25 years of military concern, the deputy has stated, will be primarily about competition among the ‘great powers’: the U.S., Russia, and China.

Autonomous Weapons

First Up to Point Out Next-Gen AI Weapon System Threats Are the Geniuses with Doubts about Systems Thinking

http://futureoflife.org/open-letter-autonomous-weapons/

Hawking, Musk, et al.: An Open Letter

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

http://www.futureoflife.org/awos-signatories (20,000+ signatures as of April 2016)

Hawking, Musk, Wozniak: Ban AI Autonomous Weapons Now

Thousands of Experts Urge Ban on Autonomous Weapons

Boston Globe

Ban lethal autonomous weapons