A military advisory committee has endorsed a list of principles for the use of artificial intelligence by the Department of Defense, contributing to an ongoing discussion on the ethical use of AI and AI-enabled technology for combat and noncombat purposes.
“We do need to provide clarity to people who will use these systems, and we need to provide clarity to the public so they understand how we want the department to use AI in the world as we move forward,” Michael McQuade, vice president for research at Carnegie Mellon University, said during an Oct. 31 discussion on AI ethics.
McQuade also sits on the Defense Innovation Board, an independent federal committee made up of members of academia and industry that offers policy advice and recommendations to DoD leadership. Recommendations made by the DIB are not automatically adopted by the Pentagon.
“When all is said and done, the adoption of any principles needs to be the responsibility of the secretary of the department,” McQuade said.
The report is the result of a 15-month study conducted by the board, which included collecting public commentary, holding listening sessions and facilitating roundtable discussions with AI experts. The DoD also formed the DoD Principles and Ethics Working Group to facilitate the board’s efforts.
Those principles were also pressure-tested in a classified environment, including a red team (adversarial) session, to see how they stood up against what the military perceives as the current applications of AI on the battlefield.
For the purpose of the report, AI was defined as “a variety of information processing techniques and technologies used to perform a goal-oriented task and the means to reason in the pursuit of that task,” which the DIB said is comparable to how the department has thought about AI over the last four decades.
Here are the five principles endorsed by the board:
1. Responsible: Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use and outcomes of AI systems.
2. Equitable: The DoD should take deliberate steps to avoid unintended bias in the development and deployment of combat or noncombat AI systems that would inadvertently cause harm to individuals.
3. Traceable: The DoD’s AI engineering discipline should be sufficiently advanced such that technical experts possess an appropriate understanding of the technology, development processes and operational methods of its AI systems. That includes transparent, auditable methodologies, data sources, design procedures and documentation.
4. Reliable: AI systems should have an explicit, well-defined domain of use, and the safety, security and robustness of such systems should be tested and assured across their entire life cycle within that domain of use.
5. Governable: The DoD’s AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption. Human-executed or automatic means to disengage or deactivate deployed systems that demonstrate unintended escalatory or other behavior should exist.
The language of the final principle was amended at the last minute to emphasize the importance of having a way for humans to deactivate the system if it is causing unintended harm or other undesired behaviors.
According to McQuade, the AI ethics recommendations were built upon other ethical standards the DoD has already adopted.
“We are not starting from an unfertile ground here,” McQuade said.
“It is very heartening to see a department … that has taken this as seriously as it has,” he continued. “It’s an opportunity to lead a global dialogue founded in the basics of who we are and how we operate as a country and as a department.”
The board recommended that the Pentagon’s Joint Artificial Intelligence Center formalize these principles within the department, and that the government establish an AI steering committee to ensure military AI projects are held to ethical standards.
Beyond those recommendations, the report also calls on the DoD to increase investment in AI research, training, ethics and evaluation.
AI ethics have become an increasingly hot topic within the military and the intelligence community over the past year. In June, the inspector general of the intelligence community emphasized in a report that there is not enough investment being put into AI accountability. And, at the Pentagon, the newly established Joint AI Center announced it will hire an AI ethicist.
Nathan Strout covers space, unmanned and intelligence systems for C4ISRNET.