WASHINGTON – The Department of Defense is considering whether delaying the use of autonomous robots in combat is unethical, as they could keep troops out of the line of fire, according to a top official at the United States Army Training and Doctrine Command.
“Is it immoral not to rely on certain robots to execute on their own … given that a smart weapon can potentially limit collateral damage?” Tony Cerri, director of data science, models and simulations at TRADOC, asked an audience at The Atlantic Festival Oct. 4.
Cerri said Defense Department policy currently requires a human in the loop when smart weapons are used, but allowing robots and other smart tools to operate more independently is “being discussed” as the ethical path to preventing troops’ deaths and injuries.
Ethics is one of the DoD’s six core values, alongside duty, integrity, honor, courage and loyalty. And Secretary of Defense James Mattis, a retired general, is eager to employ virtual reality technology quickly, Cerri said. He wants infantry soldiers to fight “25 bloodless battles” before they ever enter a combat zone, according to Cerri.
“[Mattis] is frustrated because we’re not moving quick enough for him,” said Cerri.
But Cerri is hopeful the DoD will support virtual reality because he believes it can improve the world’s view of U.S. military operations by shining a light on the military’s core values.
“Our algorithms reflect our national ethics and our national morality,” said Cerri.
When asked for an example of “ethical” or “moral” virtual reality, he pointed to a private exercise run by his program in which algorithms projected time frames that would protect human combatants.
“Every decision to pair a target with a shooter also had an algorithm to limit collateral damage,” said Cerri.
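The pairing Cerri describes has the general shape of a constrained assignment problem: choose shooter-target pairings that maximize effectiveness while ruling out any pairing whose projected collateral damage exceeds a threshold. The toy sketch below illustrates that shape only; every name, score, and threshold in it is hypothetical and does not reflect TRADOC’s actual models or data.

```python
# Illustrative sketch of a collateral-damage-constrained shooter-target
# assignment. All values here are invented for demonstration; nothing is
# drawn from the TRADOC exercise described above.
from itertools import permutations

# Hypothetical effectiveness scores: rows = shooters, columns = targets.
effectiveness = [
    [0.9, 0.4, 0.6],
    [0.5, 0.8, 0.3],
    [0.6, 0.5, 0.7],
]
# Hypothetical estimated collateral-damage risk for each shooter-target pair.
collateral_risk = [
    [0.10, 0.30, 0.05],
    [0.20, 0.05, 0.40],
    [0.15, 0.25, 0.10],
]
RISK_CEILING = 0.20  # pairings above this projected risk are ruled out

def best_pairing():
    """Brute-force the assignment of 3 shooters to 3 targets, discarding any
    plan with a shot over the collateral-risk ceiling, then maximizing total
    effectiveness among the plans that survive the constraint."""
    best, best_score = None, float("-inf")
    for targets in permutations(range(3)):
        # Constraint first: reject the whole plan if any shot is too risky.
        if any(collateral_risk[s][t] > RISK_CEILING
               for s, t in enumerate(targets)):
            continue
        score = sum(effectiveness[s][t] for s, t in enumerate(targets))
        if score > best_score:
            best, best_score = targets, score
    return best, best_score

if __name__ == "__main__":
    pairing, score = best_pairing()
    print("shooter->target:", pairing, "total effectiveness:", score)
```

The key design point the sketch is meant to surface is ordering: the collateral-damage limit acts as a hard filter applied before any effectiveness comparison, rather than as one more weight traded off against mission value.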