For robots to survive on future battlefields, they must go wherever people go.

Practically, this means the robots must have legs, or backpackable flying bodies, or other ways to tag along with infantry. When it comes to sensors and software, it means robots must have the freedom to operate almost as independently as humans.

“How do you access an area that is GPS denied, electronically denied with equipment that is heavily reliant on [those services],” asked Brandon Tseng, chief operating officer and co-founder of Shield AI. “You can’t, but with AI for maneuver, you open up a set of operations that give freedom to maneuver on the battlefield, to gain the intelligence you need, to conduct operations as required in these traditionally denied areas.”

Tseng was speaking as part of a Nov. 20 panel on AI and autonomous capabilities at the 2019 AI and Autonomy symposium, put on by the Association of the United States Army in Detroit, Michigan. As Tseng outlined it, AI for maneuver is specifically about the software and sensors that grant autonomy in denied spaces. This is one of the driving forces behind the military adoption of autonomy writ large.

The greatest promise of machine autonomy is that it will lead to greater freedom for the humans commanding and fighting alongside the robots. Tseng said the goal is to shift from 50 soldiers supporting a single drone, or ground robot, or water robot, to a paradigm where one human supports 50 robots.

The ability of machines to operate despite GPS or electromagnetic denial means they can move through risky areas with some assurance. Autonomy does not remove the risk of a kinetic response, of drones shot down or blown apart by missiles, but it does make that outcome explicit and harder to deny. That may be preferable to the ambiguity of a drone lost to jamming, which could read like mechanical failure.

“In this era of massive political risk, what AI for maneuver does is opens up aperture of what missions we can accept because they are inherently very low political risk because they involve unmanned systems,” Tseng said.

Unspoken, but underlying the remarks, was the Navy’s loss of a Global Hawk over the Strait of Hormuz in June. That incident did not escalate into a broader kinetic war or cost a human life, a fact that has reinforced the perception of drones as more expendable assets.

Trusting AI to maneuver surveillance and reconnaissance platforms into place means giving commanders, and ultimately policymakers, information despite jamming and despite the risk of loss. For drones, it is a shift from operating in climates of aerial superiority to ones of aerial expendability.

What is missing from the emphasis on denied environments, or asset projection, is what happens when those machines need to communicate back with human controllers. An ISR asset that can navigate denied space but cannot transmit what it observes is of limited use at best, and a liability at worst. And if the uncrewed, autonomous platform carries payloads deadlier than sensors, more human control is needed, so autonomy in maneuver alone is insufficient to meet both its operational needs and its likely battlefield uses.

Still, the concept is useful for orienting how policymakers and force planners think about what they want robots to do in battle. If autonomy is fundamentally about maneuver, then what autonomous machines do depends, in large part, on how those robots respond to command and how they operate when beyond control.

Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.
