WASHINGTON — The U.S. Department of Defense released a new strategy on its use of data analytics and artificial intelligence as it pushes for additional investment in AI, advanced pattern recognition and autonomous technologies including drones.

The document is a more mature version of a blueprint first published in 2018, in which the Pentagon predicted AI would “transform every industry” and impact all facets of national security. It takes into account AI’s significant growth in the defense industrial base, according to Chief Digital and AI Officer Craig Martell.

“Accelerating the adoption of advanced data, analytics, and artificial intelligence technologies presents an unprecedented opportunity to equip department leaders at all levels with the data they need to make better decisions faster,” Martell told reporters Nov. 2 at the Pentagon.


Among other goals outlined in the strategy are better data sets, improved infrastructure, more partnerships with groups outside the department and reform of internal barriers, which often leave the department unable to adopt technology as fast as it advances.

With the document, the Pentagon is further explaining its AI thinking as it builds internal structures to govern it.

The CDAO position was established in 2021. Billed as an overseer and expeditor of all things AI and analytics, it subsumed the Joint Artificial Intelligence Center, the Defense Digital Service, the Advana data platform and the chief data officer’s role.

“The Secretary and I are ensuring CDAO is empowered to lead change with urgency,” said Deputy Secretary of Defense Kathleen Hicks in prepared remarks from the Pentagon briefing room Thursday.

The employment of generative AI within the military is controversial. Its core benefit is the ability to streamline simple or mundane tasks, such as finding files, digging up contact information and answering simple questions. But the technology has also been used to fuel cyberattacks, spoofing attempts and disinformation campaigns.

Hicks cautioned that humans will stay responsible for the use of deadly force and, as stated in the Pentagon’s latest review of its nuclear weapons, will remain in control of all decisions regarding the strategic arsenal.

“We are mindful of AI’s potential dangers and determined to avoid them,” she said.

Development and deployment of semi-autonomous or fully autonomous weapons is governed by what’s known as directive 3000.09, originally signed a decade ago and updated this January.

The directive is meant to reduce the risks of failed autonomy and firepower. It does not apply to cyber, a field in which leaders are increasingly advocating for autonomous capabilities.

AI has advanced rapidly this year, in part through the growth of large language models, such as ChatGPT, which are trained on enormous tranches of data to generate responses that appear human. These commercial programs don’t yet meet departmental standards, and Hicks conceded that much of the innovation in this space is “happening outside DoD and government.”

Still, in her remarks Hicks said the Pentagon is already using models of its own. She cited “DoD components” that were working on similar programs before ChatGPT grew popular. These models were trained on Pentagon data, Hicks said, and are at varying levels of maturity.

“Some are actively being experimented with and even used as part of people’s regular workflows,” she said.


Were they to become more operational, the Pentagon has identified problems they could solve. Hicks said the DoD has flagged “over 180 instances” that could benefit from the use of AI, from analyzing battlefield assessments to summarizing data sets, including classified ones. The Defense Department was already juggling more than 685 AI-related ventures as of early 2021, according to a tally made public by the Government Accountability Office.

Task Force Lima, overseen by the CDAO, was created earlier this year to evaluate and guide the application of generative AI for national security purposes.

The new strategy arrives alongside an AI summit in London this week, which Vice President Kamala Harris attended. Just beforehand, the Biden administration issued an executive order on AI security and privacy. Congress has not yet acted on the issue, despite attempts by Senate Majority Leader Chuck Schumer, D-N.Y.

The Pentagon requested $1.4 billion for AI in fiscal 2024, which started Oct. 1. A continuing resolution, which maintains the prior fiscal year’s funding levels, is in effect through mid-November.

Of those projects, the Army led the pack with at least 232, the federal watchdog said, while the Marine Corps was dealing with at least 33.

Noah Robertson is the Pentagon reporter at Defense News. He previously covered national security for the Christian Science Monitor. He holds a bachelor’s degree in English and government from the College of William & Mary in his hometown of Williamsburg, Virginia.

Colin Demarest was a reporter at C4ISRNET, where he covered military networks, cyber and IT. Colin had previously covered the Department of Energy and its National Nuclear Security Administration — namely Cold War cleanup and nuclear weapons development — for a daily newspaper in South Carolina. Colin is also an award-winning photographer.
