In February, after more than a year of consulting with a range of experts, the Department of Defense (DoD) released five ethical principles for artificial intelligence (AI). If AI doesn’t meet these standards, the department has said, it won’t be fielded.

“The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order,” Secretary Mark Esper said in the news release.

The principles, which apply to both combat and non-combat functions, state that AI must be responsible, equitable, traceable, reliable, and governable. Such guidelines are relatively high level, though, leaving individual departments and agencies on their own to work out what each adjective means for a specific use case. This article aims to add depth to each of the DoD’s principles so they can actually be put into practice.

Responsibility depends on where agencies are in the AI cycle. The official guidelines define responsibility as exercising “appropriate levels of judgment and care,” but it’s important to recognize that this means different things during development versus deployment. During development, responsibility entails properly defining the bounds of the system and ensuring those bounds are adhered to. AI-driven autonomous vehicles, for instance, shouldn’t exhibit erratic behaviors that could cause an accident. Those testing the AI, then, are responsible for setting the upper and lower bounds within which the system must operate so the vehicle behaves safely. When AI is fielded, on the other hand, this principle is about accountability, which only comes via a clearly articulated chain of command. Who are the operators, and who are the decision-makers who decide when to act on information obtained from an AI-based system?
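To make the development-time half of this concrete, here is a minimal sketch of what enforcing operator-defined bounds on an AI driving system might look like. The SteeringCommand fields, the limit values, and the enforce_bounds helper are hypothetical illustrations, not a DoD-specified interface.

```python
# Minimal sketch: clamp AI-issued commands to a tested envelope (hypothetical values).
from dataclasses import dataclass

@dataclass
class SteeringCommand:
    steering_angle_deg: float   # requested wheel angle
    acceleration_mps2: float    # requested acceleration

# Operator-defined envelope the AI system must stay within (illustrative limits).
LIMITS = {
    "steering_angle_deg": (-30.0, 30.0),
    "acceleration_mps2": (-5.0, 3.0),
}

def enforce_bounds(cmd: SteeringCommand) -> SteeringCommand:
    """Clamp an AI-issued command to the tested envelope and flag violations."""
    for field, (low, high) in LIMITS.items():
        value = getattr(cmd, field)
        if not low <= value <= high:
            # Record the violation so the accountable operators can review it.
            print(f"WARNING: {field}={value} outside [{low}, {high}]; clamping.")
            setattr(cmd, field, min(max(value, low), high))
    return cmd

if __name__ == "__main__":
    risky = SteeringCommand(steering_angle_deg=45.0, acceleration_mps2=2.0)
    print(enforce_bounds(risky))
```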

Equitability isn’t possible without accepting that humans are inherently biased. This second principle, in the DoD’s own words, means minimizing unintended bias. But what does that actually look like? Agencies must remember that machine learning is powerful but ultimately learns from the data it’s fed, so practitioners must work to prevent bias in the data set. If a facial recognition system isn’t shown enough photographs of people of a particular race or gender, the model will perform poorly in those cases, even if its overall accuracy is fairly high. Tools to prevent unintentional bias range from simple checklists (Deon), to statistical checks on the data (IBM AI Fairness 360), to detailed visualizations of “what if” scenarios that explore how changes to the dataset impact model decisions and fairness (Google’s What-If Tool).
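As a simple illustration of the kind of statistical check these tools perform, the sketch below flags demographic groups that are underrepresented in a training set. The group labels and the 10 percent threshold are hypothetical, and real audits would go much further than raw counts.

```python
# Minimal sketch: flag groups that make up too little of the training data.
from collections import Counter

def check_representation(group_labels, min_fraction=0.10):
    """Return any group whose share of the dataset falls below min_fraction."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    return {
        group: count / total
        for group, count in counts.items()
        if count / total < min_fraction
    }

if __name__ == "__main__":
    labels = ["group_a"] * 900 + ["group_b"] * 80 + ["group_c"] * 20
    print(check_representation(labels))
    # {'group_b': 0.08, 'group_c': 0.02} -> collect more data before training
```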

Traceability requires transparent documentation across the AI model lifecycle. The ABOUT ML initiative, led by the Partnership on AI, seeks to standardize documentation of AI models in a way that “can shape practice because by asking the right question at the right time in the AI development process, teams will become more likely to identify potential issues and take appropriate mitigating actions.” An early example of such documentation is the model card: the model cards Google released in 2019 help define the context in which models are intended to be used and offer details on performance evaluation procedures, increasing transparency. Tracking and versioning of the model can help agencies ensure results are replicable and the system hasn’t been tampered with, which relates back to the first principle of responsibility. A tool like ModelDB from Verta AI is useful in this regard. While the DoD’s call for “transparent and auditable methodologies, data sources, and design procedure and documentation” is on the right track, mandating transparent documentation like model cards would give this principle more teeth.
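The sketch below shows the basic idea behind this kind of tracking, without relying on any particular tool: record a content hash of the model artifact plus training metadata so results can be replicated and tampering detected. The file names, registry format, and metadata fields are illustrative assumptions, not ModelDB’s actual API.

```python
# Minimal sketch: append a versioned, hash-stamped record for each trained model.
import hashlib
import json
import time

def register_model(model_path: str, registry_path: str, metadata: dict) -> str:
    """Hash the model file and log it with metadata so lineage is auditable."""
    with open(model_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    record = {
        "sha256": digest,
        "registered_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        **metadata,
    }
    with open(registry_path, "a") as registry:
        registry.write(json.dumps(record) + "\n")
    return digest

# Example usage (paths and metadata are hypothetical):
# register_model("detector_v3.onnx", "model_registry.jsonl",
#                {"dataset": "training_set_2020_01", "intended_use": "vehicle detection"})
```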

Reliability must be demonstrated through tracking. The DoD defines reliability as consisting of “explicit, well-defined uses” plus “safety, security, and effectiveness.” The first aspect of this is guarding against model drift. Many models run very well on training data, but accuracy often dips when predictions are made on data captured from a different camera or sensor. Thus, agencies should track statistics of the systems and sensors used to collect the data and measure how similar or different those statistics are in the field. Uncertainty about predictions should be tracked as well, particularly for data points that are close to a decision boundary. Let’s say a system is identifying animals; it must communicate when it’s only 51 percent certain a particular image is a dog.
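Both checks can start out very simple, as in the sketch below: compare a summary statistic of fielded sensor data against the training data, and flag predictions whose confidence sits near the decision boundary. The thresholds, the brightness statistic, and the probabilities are illustrative assumptions.

```python
# Minimal sketch: a drift check on sensor statistics and an uncertainty flag.
import statistics

def drift_score(train_values, field_values):
    """Rough z-score of how far fielded data has drifted from the training data."""
    mu, sigma = statistics.mean(train_values), statistics.stdev(train_values)
    return abs(statistics.mean(field_values) - mu) / sigma if sigma else 0.0

def flag_uncertain(probabilities, margin=0.10):
    """Return indices of predictions whose probability sits near the 0.5 boundary."""
    return [i for i, p in enumerate(probabilities) if abs(p - 0.5) < margin]

if __name__ == "__main__":
    train_brightness = [120, 118, 125, 122, 119]   # image brightness from training camera
    field_brightness = [80, 83, 78, 81, 79]        # darker camera in the field
    print("drift z-score:", round(drift_score(train_brightness, field_brightness), 2))

    dog_probabilities = [0.98, 0.51, 0.43, 0.95]   # "51 percent certain it's a dog"
    print("review these predictions:", flag_uncertain(dog_probabilities))
```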

Governability must be built into the design. This last principle is about worst-case scenarios: being able to detect when the system isn’t behaving as intended and being able to disengage it. The Department frames this as “the ability to detect and avoid unintended consequences.” What it doesn’t say, though, is that governability must be built into the design from the start. Autonomous vehicles offer another prime example. In theory, a steering wheel shouldn’t be needed because the vehicle operates itself. And yet, a steering wheel is required for sufficient governability: for disengaging if something goes wrong.
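In software terms, the “steering wheel” can be a wrapper that monitors the AI controller and hands control back when its output leaves a defined envelope. The sketch below is a hypothetical illustration of that idea; the class, the limit, and the fallback behavior are assumptions, not any fielded system’s design.

```python
# Minimal sketch: a governed controller that disengages the AI on bad output.
class GovernedController:
    def __init__(self, ai_policy, max_steering_deg=30.0):
        self.ai_policy = ai_policy
        self.max_steering_deg = max_steering_deg
        self.engaged = True

    def step(self, observation):
        if not self.engaged:
            return self.fallback(observation)
        command = self.ai_policy(observation)
        if abs(command) > self.max_steering_deg:
            # Unintended behavior detected: disengage and hand control back.
            self.engaged = False
            return self.fallback(observation)
        return command

    @staticmethod
    def fallback(observation):
        """Safe default: hold the wheel straight and alert the operator."""
        print("AI disengaged; manual control required.")
        return 0.0

if __name__ == "__main__":
    erratic_policy = lambda obs: 90.0 * obs        # hypothetical misbehaving model
    controller = GovernedController(erratic_policy)
    print(controller.step(1.0))                    # triggers disengagement, returns 0.0
```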

The DoD’s release of principles to guide AI ethics is an important step toward encouraging dialogue around new technology and its implementation. But as they currently stand, the guidelines tell agencies what they should be striving for, not how to get there.

The tools, processes, and examples outlined here aim to give some much-needed detail to the principles released by the DoD in February. As progress is made—from model cards to the widespread use of what-if tools—even more detailed guidelines should be shared and followed.

Sean McPherson is a deep learning scientist at Intel Corp.
