Editor’s note: On Nov. 15, the New York Times Magazine published a story, “Are Killer Robots the Future of War? Parsing the Facts on Autonomous Weapons.” This story, also written by C4ISRNET Staff Writer Kelsey D. Atherton, serves as a companion to that article.
Remotely piloted vehicles are an anomaly of open skies. For as much as the wars of the United States have been defined by drones and drone strikes, those missions are possible only because the sky is empty of hostile aircraft and the electromagnetic spectrum is free of interference. That permissiveness is hardly guaranteed in the future, or even in certain theaters in the present. (Think: Syria.) If the machines that are today remotely operated are to take part in the wars of the future, they will need to operate on their own, with only minimal human control. The technologies that will make that possible are broadly grouped together under the heading of autonomy.
We already see autonomy in cyber defense, where automated defenses counter automated attacks at speeds faster and scales larger than humans can manage on their own. (The Defense Information Systems Agency, for example, processes more than 1 billion defensive cyber operations thanks to machines.) And there is autonomy in guided munitions, especially loitering weapons, which operate on their own from launch until impact. Waging electronic warfare will require both approaches: machines that can automatically counter the actions of other machines, and vehicles that can navigate through fields of interference on their own.
“There's no way that a human is going to be able to keep up with these new generations of cognitive electronic warfare systems that are constantly scanning the electromagnetic spectrum and jamming where it can,” said Bob Work, a senior fellow at the Center for a New American Security and former undersecretary of defense. “Humans just won't be able to keep up with that. The expectation is once again, for electronic warfare, machines will fight against machines.”
Electronic warfare is a data-rich field, as signals can be captured, recorded and studied in a way that most information on a battlefield cannot. That data, combined with machine learning that trains AI to interpret, understand and counter those signals, makes cognitive electronic warfare an area where iteration on software is likely to yield outsized results. It’s already part of the Army’s strategy, as the service has encountered a contested spectrum in Europe.
That there’s enough bandwidth available to allow pilots in Nevada to directly control drones flying over countries on the opposite side of the globe is already something of a logistical triumph. As cognitive electronic warfare improves, and as the cost of putting that interference in place drops, direct piloting is going to get hard, especially from across the planet but even from closer bases.
Autonomy greatly reduces the amount of data an uncrewed vehicle needs to send back to the humans supervising it. As sensors get cheaper, collecting that information will be easier, but the bottleneck isn’t in the collection. It’s in the transmission.
“Getting the bandwidth to send it back will be an enormous challenge,” says Alexander Kott, chief scientist of the Army Research Laboratory. “No question about that. The communication environments of the future battlefield will really be challenged by congestion and it will be challenged by active and probably effective interference by the adversary. So teleoperation is a challenge.”
DARPA’s CODE program, short for “Collaborative Operations in Denied Environment,” is an effort to build the software that will allow drones to operate autonomously as a swarm in areas where electronic interference or other factors make remote control impossible.
“According to CODE’s technical specifications, developers should count on no more than 50 kilobits per second of communications back to a human commander,” said Paul Scharre, author of “Army of None: Autonomous Weapons and the Future of War.” “Not super high traffic, but like a 56k modem from the 1990s. In principle, it could send back snapshots of military objects maybe every other second over that kind of bandwidth.”
These photos would be low resolution, but they might be useful enough for humans supervising the machines to keep track of what the drones are doing.
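The arithmetic behind Scharre’s estimate is straightforward. A minimal sketch, using the 50-kilobit-per-second figure from the CODE specification and a two-second snapshot interval as stated above (the variable names are illustrative, not from any official document):

```python
# Back-of-the-envelope link budget for the CODE scenario described above.
# Figures: 50 kilobits per second of communications, one snapshot roughly
# every other second. Everything else here is an illustrative assumption.

LINK_KILOBITS_PER_SEC = 50   # budgeted downlink to the human commander
SNAPSHOT_INTERVAL_SEC = 2    # one image "every other second"

# Total bits available per snapshot window, converted to kilobytes.
bits_per_snapshot = LINK_KILOBITS_PER_SEC * 1000 * SNAPSHOT_INTERVAL_SEC
kilobytes_per_snapshot = bits_per_snapshot / 8 / 1000

print(f"{kilobytes_per_snapshot:.1f} kB per snapshot")  # 12.5 kB
```

About 12.5 kilobytes every two seconds: enough for a heavily compressed thumbnail of a military object, which is why the images would be low resolution, but not enough for live video or teleoperation.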
But this is where the shift happens: lower bandwidth in the field encourages autonomy, and autonomy in vehicles then means that humans move from dedicated pilot or sensor-operator roles into a sort of supervisory position, commanders of robots who can communicate reliably only in low-data messages.
Underlying the debate over the ethics of autonomous machines, especially lethal autonomous machines, is the changed nature of the battlefield. A contested and denied electromagnetic spectrum makes areas once open to remotely operated vehicles hostile, and possibly impossible for them to operate in at all. Designing machines that can get around those barriers, that can perform military tasks and missions even when out of contact with the humans that ordered them into action, is an adaptation to that environment. It is a way to preserve the utility of uncrewed vehicles without sending humans into the same danger. Or it is a way to make sure that, when soldiers or marines find themselves trapped in a fight, rescue can still come in robotic form.
American forces are unlikely to ever again operate in as permissive an environment as they had between 1991 and 2003. Nations developing autonomous military machines, notably Russia and China but also Israel, South Korea, the United Kingdom, and even states like Belarus, Estonia and Slovakia, are doing so with an eye to the persistent challenge of jammed communications, unreliable transmissions and intrusions through computer systems.
Autonomous machines are the tangible, science-fiction edge of what future war might look like. It’s the electromagnetic spectrum, invisible and omnipresent and causing interference, that is the cyberpunk backdrop.