WASHINGTON – The Pentagon’s new roadmap for ensuring the ethical use of artificial intelligence is a good start and may eventually help build trust in the technology, according to Chris Meserole, an AI research director at Brookings.

The U.S. Department of Defense released the Responsible Artificial Intelligence Strategy and Implementation Pathway framework on June 22. The document outlines a plan to mitigate unintended consequences that could result from AI as use of the technology becomes more widespread in military systems.

Although AI has the potential to improve efficiency and accuracy, plans for widespread adoption raise concerns about shifting decision-making away from humans. The pathway is the latest move by the Pentagon to increase trust between the department and the many actors involved with AI programs.

“I was very glad to see that they’re not just taking building warfighter trust internally seriously,” Meserole told C4ISRNET in an interview. “It seems like they’re interested in trying to develop a global set of norms around what that should look like.”

Meserole is a foreign policy fellow at Brookings and head of research for the nonprofit public policy organization’s Artificial Intelligence and Emerging Technology Initiative.

Jane Pinelis, chief of AI assurance at DoD’s Chief Digital and Artificial Intelligence Office, will offer executive-level guidance as the pathway is implemented. The office took over as the Pentagon’s AI authority this month.

At an event hosted by the Center for Strategic and International Studies in Washington on June 28, Pinelis emphasized the role the pathway will play in building trust between military service members and the technologies they will be using.

“Responsible artificial intelligence is really not a destination,” she said. “It’s a journey and we all have to be part of that journey so we have to train our workforce.”

People mistakenly assume that incorporating AI into programs diminishes the human role, she said. In reality, human factors become more important in decision-making when the technology is used, according to Pinelis.

The Pentagon is making AI a priority in a bid to keep up with China’s growing cyber capabilities. Despite advances in the technology, some commanders in the field are reluctant to use it because they don’t necessarily trust it, Meserole said.

The pathway, he said, establishes a framework for testing and evaluation, which is designed to ensure that members of the military can trust the technologies that they’re given. He added that the pathway also lays the groundwork for other countries working on implementing responsible AI.

“What I’m hoping for is that the kinds of testing and evaluation processes that the Pentagon is developing for internal use, they’ll be able to advocate for and socialize and normalize among other militaries around the world,” Meserole said in the interview.

Ideally, it wouldn’t just be U.S. allies working on responsible AI. By including China or Russia in public discussions on ethical AI implementation, officials could have some measure of confidence that what scientists are building is going to work the way that it is intended, he said.

The pathway builds on years of discussions about what values and principles the Pentagon should have when working with AI.

What are the rules for responsible AI?

In 2018, after learning about their company’s involvement in Project Maven, a military effort to develop AI to analyze surveillance videos, thousands of Google employees protested. The protests ended with around a dozen employee resignations. Google did not renew the contract in 2019.

The department sought to establish clearer ethical guidelines following the protests.

In 2020, the Pentagon published a document outlining its core values for AI. Under the principles, the Pentagon seeks to make AI responsible, equitable, traceable, reliable and governable in combat and non-combat situations.

Based on those values, the department created foundational tenets that act as general guidance for all the Pentagon’s AI programs. The tenets include responsible AI governance, warfighter trust, AI product and acquisition lifecycle, requirements validation, responsible AI ecosystem and AI workforce.

The recently released pathway builds on those tenets, listing a series of executable actions for each area. Under the tenet of responsible AI governance, for example, the pathway calls for the department to identify methods for users and developers to report concerns about the implementation of ethical practices.

Meserole said it is likely the department will now look to operationalize the pathway and create individual guidelines for specific AI programs, a process he said needs to look closely at testing.

Most AI systems used by the Pentagon are machine learning-based systems, which are trained on data. Meserole said testing and evaluation methods for such systems remain unclear at times, so scientists don’t always know how they fail.

“You can imagine a wide variety of ways in which one of these systems — if it fails in the wrong way at the wrong time — can lead to some kind of conflict onset or rapidly escalating geopolitical crisis,” he said.

Catherine Buchaniec is a reporter at C4ISRNET, where she covers artificial intelligence, cyber warfare and uncrewed technologies.
