A prominent group of national security thinkers is questioning whether the Pentagon’s policy on developing autonomous weapons needs to be updated to better reflect current technology and the greater role artificial intelligence is expected to play in future conflicts.

In 2012, the Pentagon offered its first formal guidance for developing autonomous and semi-autonomous weapon systems, a document that reflected how the military used artificial intelligence on the battlefield.

The resulting policy directive has been hailed by U.S. leaders as a claim to the moral high ground, on the argument that Russia and China would have no similar qualms about robots targeting humans, and it is commonly misunderstood as an ethical or legal framework for using AI.

But now, among Washington’s think tank community, there is a rumbling that the directive alone is inadequate as a stand-in for official Pentagon policy on the ethical use of autonomous weapons and that changes may be necessary. Simmering in the discussion is a debate about whether the document, as written, is broad enough to encompass present and future developments, and if it isn’t, what needs to be changed.

At a private meeting Dec. 10 in a conference room at the Center for Security and Emerging Technology in Washington, leaders asked whether the Pentagon policy hamstrings industry in developing innovative systems, whether it puts commanders on the battlefield at a disadvantage, or whether it lacks the ethical and legal weight popularly attributed to it. Among the attendees considering a change was Bob Work, the former deputy secretary of defense from 2014 to 2017 and the vice chair of the National Security Commission on Artificial Intelligence. Representatives from Brookings, the Center for a New American Security and CSET were also in attendance.

The think tank world serves as the nation’s most direct storehouse of policy formulation experience. Meetings like this are a routine way for the policy-adjacent to collaboratively work through the trickier issues of a subject. Multiple attendees previously held positions within the Department of Defense, and administrations looking for staff and policy will likely draw from the talent and ideas already on hand in the think tank community.

“The question was: is it time for DoD 3000.09 to be updated?” Work told C4ISRNET in a Dec. 11 interview. “We came to no conclusion.”

Formally known as Directive 3000.09, the guidance was created by Pentagon leaders in November 2012 and updated in May 2017. The language of the document has remained largely the same over the last seven years, even as the range and capabilities of weapons with autonomous features in the United States’ arsenal have evolved.

In the intervening years, the unclassified directive has become the predominant touchstone for understanding how the military, as an institution, feels about developing autonomous weapons. Even a recommendation to update or clarify the language of the directive would entail a resetting of expectations within the Pentagon about lethal autonomy.

Shortly after it was published, the publicly available directive caught on in the press as an affirmative sign of restraint from a science fiction future where machines would have carte blanche authority to target humans. The Campaign to Stop Killer Robots, an initiative launched in 2013 with the explicit mission of keeping humans in charge of lethal decisions in war, cited it favorably in April 2013 as “the first publicly available policy by any country on fully autonomous weapons.”

Still, even among the national security community, strategists have long questioned the exact nature of the language and, to some leaders, that has prompted the need for change.

“The directive lays out this process for how the DoD should think through this kind of challenge and review weapons systems going forward,” said Paul Scharre, a senior fellow at the Center for a New American Security and a former Pentagon policy hand who led the DoD working group that drafted DoD Directive 3000.09. “But it doesn’t answer this sort of overarching policy guidance about what is the Department of Defense’s policy when it comes to autonomy and human control.”

Specifically, Scharre referred to the role humans play when systems identify and ultimately destroy a target. Should the soldier oversee the process, a role known as being “on the loop”? Or should they make the final decision, referred to as being “in the loop”?

The current directive sets out a process for developing and employing autonomous and semi-autonomous weapons that allow “commanders and operators to exercise appropriate levels of human judgment over the use of force.” It also parses out the exact rules for how semi-autonomous weapons can “apply lethal or non-lethal, kinetic or non-kinetic force.” It specifically notes that “autonomous weapon systems may be used to apply non-lethal, non-kinetic force, such as some forms of electronic attack, against materiel targets” in accordance with DoD guidance.

Some experts have read the policy as a distinction about limits on using autonomy to kill, but Work argued instead that this is a misunderstanding of the context in which the language was written.

“We're saying, should there be a policy directive in the department on autonomy?” said Work. “You know, should we rewrite DoDD 3000.09? You know, what are the things we could do?”

By his reading, the directive was more of a descriptive, rather than a prescriptive, document. The guidance was primarily concerned with outlining a procedure for testing new systems. If a contractor pitched a system that resembled an existing autonomous or semi-autonomous system already in use by the Pentagon, then that system was exempt from these new rules.

“It only pertained to weapons and weapons systems that were in existence in 2012,” he said. “It describes an autonomous weapon system that applied non-lethal, non-kinetic force against a materiel target. The only autonomous weapon system that was in the Department at the time was the Miniature Air-Launched Decoy. And it was a jammer. So they described: this is what an autonomous weapon system can do today.”

Whether the directive is a constraint on the development of lethal autonomy became a problem for the Army in February. The service posted a solicitation Feb. 11 for the Advanced Targeting and Lethality Automated System, or ATLAS, a tool designed to help Army ground vehicles “acquire, identify, and engage targets at least 3X faster than the current manual process.” That suggested a great deal of machine autonomy in the process, and “engage targets” specifically pointed to lethal autonomy.

By Feb. 22, the public solicitation for ATLAS had been updated to include a paragraph clarifying that it was in accordance with the standards outlined in the autonomy directive. Furthermore, it went on to say “Nothing in this notice should be understood to represent a change in DoD policy towards autonomy in weapon systems. All uses of machine learning and artificial intelligence in this program will be evaluated to ensure that they are consistent with DoD legal and ethical standards.”

That the Army instinctively turned to 3000.09 primarily suggests the paucity of other official Department of Defense guidance on autonomous weapons. Balancing the demands of battlefield speed, compliance with the laws of war, and the unknown constraints or standards built into rivals’ autonomous machines are all problems that think tankers have wrestled with publicly.

The directive itself does not, fundamentally, make any claim about DoD legal and ethical standards regarding lethal autonomous weapons.

The broader text of the directive goes into detail on many principles of design, from safeties to anti-tamper mechanisms to specifying the need for deactivation procedures. The guidelines are about the development of autonomous systems, and about ensuring that this development is done in a way to, according to the text of the directive, “minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.”

If the gathering of experts to debate changes to 3000.09 was inconclusive, it is perhaps because all the focus on the directive has been misplaced.

Should the Pentagon want clarity in the legal and ethical standards regarding lethal autonomous weapons, it cannot keep relying on a process document built to guide acquisitions seven years ago.

Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.
