In the near future, engineers are going to have to figure out ways to prevent robots from killing unlawfully. This is part of the broader work on the edges of lethal autonomous weapons systems, a proposed legal category that does not quite match the function of any weapons at present, but may soon encompass a range of machines.

In preparation for a diplomatic gathering where nations will debate how to govern or prohibit these systems, countries such as the United Kingdom are outlining how, specifically, humans will exercise meaningful control over the weapons they bring to war. The contours of this debate are political and legal; the execution will be largely up to the engineers.

While the legal category is new, a framework is starting to take shape. In advance of the upcoming meeting in Geneva, scheduled for Aug. 27-31, the United Kingdom released its working paper on rules for autonomous weapons. Titled “Human Machine Touchpoints: The United Kingdom’s perspective on human control over weapon development and targeting cycles,” the paper outlines the government’s stance on a range of issues, from the importance of human-in-the-loop control to the applicability of existing humanitarian law.

What matters most for the companies and contractors that will be tasked with building these machines is likely the paper’s first annex, which details a framework for considering human control throughout the design process. The framework begins with top-level policy decisions from politicians and military leaders, then explains how designers must account for human control at every stage that follows.

That emphasis on human control begins at the research and development stage, with the white paper pointing to “R&D to help the military define their capability needs, the users of the weapon system, and the context of use. These activities underpin consideration of human control later in the acquisition process.” It continues through the program and project management stage and into the definition of requirements. Human control is to remain an integral part of design, never a feature added in later. Most important is how this will be interpreted for detailed system design, down to human factors integration.

“In order to enable operators to exercise human control during use,” outlines the white paper, “the users’ needs and the context in which the system will be used must be considered throughout the system design.”

In other words, to use common security parlance, human control is “baked in,” not “bolted on.”
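As a minimal, purely illustrative sketch, assuming nothing about any real weapon system or about the UK paper’s actual specifications, “baked in” control might mean the engagement routine cannot even be invoked without a human-approved authorization object, while “bolted on” control is a freestanding check that a later edit could quietly weaken:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class HumanAuthorization:
    """Record of a human sign-off. In any real system this would be a
    signed, auditable artifact, not a plain object anyone can build."""
    operator_id: str
    approved_target_class: str


def engage(target_class: str, auth: HumanAuthorization) -> bool:
    """'Baked in': the signature itself demands a human authorization,
    so there is no code path that fires without one."""
    return target_class == auth.approved_target_class


def engage_bolted_on(target_class: str,
                     auth: Optional[HumanAuthorization]) -> bool:
    """'Bolted on': the human check is an afterthought that a later
    edit could remove without changing how the function is called."""
    if auth is None:
        return False
    return target_class == auth.approved_target_class
```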

Following that stage, the weapon will be subject to legal review to make sure it still complies with international law. Once the machine is in the hands of the people who will ultimately use it in combat, those operators will undergo training that is specifically mindful of “the capabilities and limitations of the system and any modes of operation.”

At every stage of the process, the way the human controls the machine and the way the machine complies with lawful rules of engagement are meant to be hard-coded into how the weapon operates.

This is the politics of the machine. In a section acknowledging autonomy in existing defense weapon systems, such as Counter Rocket, Artillery, and Mortar (C-RAM) systems, the white paper specifies that the parameters for engaging a target are set by humans beforehand, even if the necessary speed of counter-fire is too fast for direct human control. And when it comes to guided offensive weapons like air-to-air or Hellfire missiles, the white paper specifies that while these weapons have some autonomy in navigating to a target beyond the visual line of sight of the human who launches them, “in all instances the parameters for an attack are subject to information received from trusted sources and cannot be arbitrarily changed by the weapon after launch.”

Synthesizing the existing autonomy of offensive and defensive weapons, the white paper specifies that “any new weapon system must allow operators to comply with human-set ROE and targeting policies.”
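As a hypothetical sketch of what that could look like in practice (the parameter names, values, and checks below are assumptions for illustration, not anything specified in the working paper), the requirement that parameters be set by humans beforehand and remain unchangeable after launch, together with compliance with human-set ROE, might be modeled as an immutable configuration that every engagement decision is tested against:

```python
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen: fields cannot be mutated after launch
class EngagementParameters:
    """Human-set limits fixed at launch; the fields are illustrative."""
    allowed_target_class: str   # e.g. "incoming_mortar"
    geofence_radius_m: float    # engagement area set beforehand
    max_engagements: int        # hard cap on rounds fired


def within_parameters(params: EngagementParameters,
                      target_class: str,
                      range_m: float,
                      engagements_so_far: int) -> bool:
    """ROE gate: a target may be engaged only if it matches the
    human-set parameters; the weapon cannot widen them itself."""
    return (target_class == params.allowed_target_class
            and range_m <= params.geofence_radius_m
            and engagements_so_far < params.max_engagements)


# Usage: parameters are built once from human input, then frozen.
params = EngagementParameters("incoming_mortar", 2000.0, 10)
assert within_parameters(params, "incoming_mortar", 1500.0, 3)
assert not within_parameters(params, "vehicle", 1500.0, 3)
```

Because the dataclass is frozen, any attempt by downstream code to rewrite the limits after launch raises an error, which is one plausible way to express “cannot be arbitrarily changed by the weapon after launch” in software.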

So what does all of this mean?

This is not quite a “human-in-the-loop” standard of meaningful control over weapons, in which an active uniformed member of the military must authorize each specific targeting and firing decision made by a machine, nor is it even a “human-on-the-loop” process, in which a human supervises the machine and can intervene before it fires. Instead, it is almost a “human-in-the-code” model, where human control is inferred from how the designers built the system, defined the rules of engagement, and programmed the machine to select targets.
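A rough sketch of where the human decision sits in each model (the mode names and the may_fire function are my own illustration, not terms of art from the paper):

```python
from enum import Enum, auto


class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = auto()   # human must approve each engagement
    HUMAN_ON_THE_LOOP = auto()   # machine acts; human can veto in time
    HUMAN_IN_THE_CODE = auto()   # only design-time constraints apply


def may_fire(mode: ControlMode,
             meets_coded_constraints: bool,
             human_approved: bool,
             human_vetoed: bool) -> bool:
    """Where the human decision sits in each notional mode."""
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        return meets_coded_constraints and human_approved
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        return meets_coded_constraints and not human_vetoed
    # HUMAN_IN_THE_CODE: the only check left is what designers encoded.
    return meets_coded_constraints
```

In the in-the-code case, the only remaining safeguard is whatever the designers encoded, which is exactly why the design process itself becomes the locus of accountability.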

Designing autonomous systems in accordance with this view of international law is a potential legal minefield. It remains to be seen whether the final decision from the Group of Governmental Experts will reflect the vision set out by the United Kingdom’s working paper, or whether the standards for design and human control will demand something stricter than incorporating human input into programmed rules of engagement and targeting policies.

Whatever the outcome, anyone looking to make an autonomous military machine that might be involved in target selection should be looking at how that control is baked into the design process. There’s a nonzero chance a coding error could end up interpreted as a war crime.

Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.
