“In the loop” is a phrase with a wicked irony. It is a categorization of human control over the autonomous processes of a machine, and also an inevitable reference to a cult-classic comedy about stumbling into war through miscommunication.
As military planners and designers inch forward with autonomous features in military machines, where the human sits in the loop matters more than ever. The “where” of in the loop sets the range of human control, and it determines when an action falls outside that human control. It also vexes definitions of lethal autonomy, especially when applied to thinking machines in flight. If policy makers are to understand what, exactly, weapons are capable of doing, they might need a new way of understanding how these weapons work.
I’m Kelsey D. Atherton, reporting from Socorro, New Mexico, and I’m here to talk about classification. (No, not that kind of classification).
Let me step back a moment.
In our last issue, Tomorrow Wars dove into the muddy distinctions between armed drones, loitering munitions, and cruise missiles. It was an attempt at classification by physical form: the propulsion system behind the weapon, and the ability (or lack thereof) to disarm and land.
What if a classification-by-propulsion metric is wrong?
An alternative is to classify a weapon by when it selects a target. For cruise missiles, this is generally at the time of launch. For remotely piloted drones, the choice is made when a human selects a target and fires one of the drone’s onboard weapons. A loitering munition like the Harop, meanwhile, might fly a whole mission and see no targets, or might pick up a radar signature, arm its warhead, and crash down into the anti-air system it’s designed to destroy.
The “when” of the selection matters, and while it roughly matches the physical characteristics of the weapon, it does so imperfectly. The Long Range Anti-Ship Missile, or LRASM, is a perpetual bugaboo in the lethal autonomous weapons debate. (This is ably documented in a whole section of Paul Scharre’s “Army of None,” an ur-text for the thorny classifications of lethal autonomy). Fired as a cruise missile, the LRASM’s onboard sensors allow it to find a different target if the first target is no longer valid. The human is in the loop at the moment of firing and initial target selection, and the machine itself may change what target it hits, within some parameters.
Putting the focus on how human control and machine autonomy factor into target selection might lead to better-designed countermeasures. If the threat model is cheap armed drones actively piloted by humans, then disrupting the communication between pilot and platform is a viable countermeasure. If instead the threat is autonomous machines that select targets based on pre-programmed options, then masking the signature of a target, say a warship disguised to read as a tanker, becomes a viable defense.
This debate over how, exactly, to parse the differences between armed flying machines without people on board is bound to continue. A focus on targeting, and on where, exactly, the human sits in the loop, is an opportunity to look at the deeply human nature of even such inhuman things as armed robots.