The Pentagon is eager to plug artificial intelligence into lethality. How the benefits of modern information processing, so far mostly realized in the commercial sector, will be applied to the use of weapons in war remains unclear, but it is a problem the military is interested in solving.

“We are ready to start our first lethality project next year in the joint war fighter targeting space,” said Department of Defense Chief Information Officer Dana Deasy in a December exclusive interview with sister brand Defense News.

This vision will be carried out by the Joint Artificial Intelligence Center, the military’s coordinating and development body for AI. As for the specifics of how, exactly, it will bring the benefits of algorithmic processing to the fight, the JAIC is still too early in the process to offer much concrete information.

The project will be part of a mission initiative under JAIC called Joint Warfighting.

While joint war fighting could in theory encompass every part of combat that involves more than one branch of the military, JAIC spokesperson Arlo Abrahamson clarified that the initiative encompasses, somewhat more narrowly, “Joint All-Domain Command and Control; autonomous ground reconnaissance and surveillance; accelerated sensor-to-shooter timelines; operations center workflows; and deliberate and dynamic targeting solutions.”

In other words, when the JAIC pairs AI with tools that aid in the use of force, it will come through communication tools, scout robots, battlefield targeting tools, workforce management software, or other targeting tools.

“The JAIC is participating in dialogue with a variety of commercial tech firms through industry days and other industry engagement activities to help accelerate the Joint Warfighting initiative,” said Abrahamson. “Contracting information for this mission initiative is under development.”

And while the JAIC is still figuring out whether the first lethality project will be a robot, a sensor system, or logistics software, the center is explicitly interested in making sure that, whatever the use of AI, it ultimately serves the interests of the humans relying on it in a fight.

As plainly as the JAIC can put it, the initiative is looking for “AI solutions that help manage information so humans can make decisions safely and quickly in battle,” said Abrahamson.

Humans, then, will still be the authors of any lethal action. Those humans will just have some AI help.

Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.
