How the Pentagon acquires, develops, and employs AI in the future has been the subject of intense scrutiny within the press and Silicon Valley. So, following the lead of technology companies like Google and IBM that have released principles of AI use to reassure their employees and the public that the work won’t be bent towards harm, the military establishment too is looking to set out guidance for machine intelligence.
“As we move out, our focus will include ethics, humanitarian considerations, long-term and short-term AI safety,” said Brendan McCord, head of machine learning at the Defense Innovation Unit Experimental (DIUx). “To that end, DoD is asking the Defense Innovation Board to assist us in the development of AI principles for defense for later this year, developed together with multiple stakeholders.”
McCord’s announcement of these principles came during one of the quarterly public meetings of the Defense Innovation Board, one of the Pentagon’s latest attempts to harness the skill and innovation of Silicon Valley. The meeting took place on the afternoon of July 11th and was streamed online as well as open to members of the public in Mountain View, California. It is an exercise in transparency, with video recorded and available for anyone who cares to watch it afterwards, and some of the reports available beforehand.
The announcement about a principles document came as part of McCord’s longer presentation on the Joint Artificial Intelligence Center (JAIC, pronounced “Jake”), the new AI coordination and development body the Pentagon stood up just last month.
Also in June, DoD delivered its 2018 Artificial Intelligence Strategy to Congress, an annex to the 2018 National Defense Strategy. An unclassified version of the report has not yet been released, but according to a DoD spokesperson, “The AI Strategy emphasizes the need to increase the speed and agility with which we deliver AI-enabled capabilities and adapt our way of working, the importance of evolving our partnerships with industry and academia, and the Department's commitment to lead in military ethics and AI safety.”
McCord’s remarks seemed to follow directly from that, framing the project of correctly applying AI to the Pentagon’s mission as “having a long-term linkage to [the] military[’s] fundamental role: to keep peace, deter war,” and, said McCord, “protect that set of values that came out of the Enlightenment.”
It remains to be seen if the AI strategy itself, or the Department’s principles of AI development, will adopt quite the same overarching tone. After a quarter in which internal discontent at Google led the company’s leadership to set an end date for its work on the Pentagon’s Project Maven, it appears that the military is starting to make an affirmative case for its use of AI, beyond just looking out for the people presently serving in the United States’ long-running wars abroad.
Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.