The potential for artificial intelligence in the military is vast, but the Pentagon does not have a coherent strategy to use it. That’s the conclusion from an exhaustive study by RAND, published Dec. 17, and commissioned by the Department of Defense to understand how, exactly, the military is planning for a future of greater machine intelligence.

The Department of Defense’s “posture in AI is significantly challenged across all dimensions of the authors’ assessment,” the RAND study read. That challenge is emphasized in specific ways, the wording of which is often unsubtle.

“DoD faces multiple challenges in data, including the lack of data,” the report summary read. “When data do exist, impediments to their use include lack of traceability, understandability, access, and interoperability of data collected by different systems.”

The report also found that the Pentagon’s AI strategy “lacks baselines and metrics to meaningfully assess progress toward its vision,” and notes specifically that the Joint Artificial Intelligence Center was not provided with the needed resources and authority to fulfill its mandate. In addition, RAND found that despite the services centralizing their AI organizations, the roles, mandates, and authorities of those organizations within the services remain unclear.

The short list of major obstacles to military AI continues, noting that even in a tight AI market, the Department of Defense lacks a clear path to developing and training its own AI talent. If there’s a silver lining, it’s that while RAND found the Pentagon’s verification, validation, testing, and evaluation of AI is nowhere close to ensuring performance and safety, that problem is not unique to the DoD.

That the Pentagon is struggling with AI writ large is perhaps unsurprising. At high levels, the Pentagon can’t really figure out how to talk about data without stumbling into clumsy and inept metaphors. Getting data right is a prerequisite for AI, though it’s not the whole of the picture. But to truly plan for a future where AI aids humans in everything from bookkeeping to logistics to targeting and autonomous navigation, the Pentagon has to take it seriously, and have a vision of not just what AI looks like in 2050, but what steps it can take in 2020 to be on track by 2025.

The RAND report, commissioned by the JAIC under the authorities of the 2019 defense policy bill, outlines four major strategic steps the Department can take to meet the challenges of planning for AI. Those steps include new governance structures to acquire and scale AI, better testing and evaluation of AI processes, treating data as a critical resource, and an openness to new hiring practices to acquire and retain AI talent.

Those strategic goals are paired with smaller, tactical goals, like five-year AI roadmaps for the services. RAND’s report also focuses narrowly on action the military itself can take, with a careful note of what would need to go through Congress first.

Released almost concurrently, the Center for a New American Security offered “The American AI Century: A Blueprint for Action,” a document that takes a holistic approach to managing and developing AI policy across the United States. The recommendations there span from the fiscal, such as funding increases and tax incentives, to the social, such as relaxing immigration restrictions for skilled AI personnel. Should Congress want to take up AI readiness on its own, it will find a wealth of policy recommendations to choose from.

The stakes, as outlined by both CNAS and RAND, are straightforward: if the United States fails to take the lead in AI, and through that lead establish norms and ethics that maintain a global order of security hinged on the durability of American military supremacy, then rivals will instead reap the benefits.

Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.
