WASHINGTON — Artificial intelligence is all the rage within the military right now, with the services working to integrate machine learning algorithms into their processes to automate tasks and operate at machine speed. But even as military leaders express hope that AI can give their forces an edge on the battlefield, there’s growing recognition that these algorithms can introduce unintended biases into military systems.

“If the automated processes that we create are limited in scope or scale or rely on bad sets of data, then we’re introducing bias to that limited perspective,” Lt. Gen. Mary O’Brien, deputy chief of staff for intelligence, surveillance, reconnaissance and cyber effects operations, said during a Nov. 17 virtual presentation hosted by AFCEA’s Alamo chapter.

Machine learning algorithms work by ingesting massive amounts of training data. If that data is flawed or isn’t representative of the full spectrum of information the algorithm needs to work properly, that training process can introduce unintended biases.
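To make that concrete, here is a minimal, hypothetical sketch in Python. Everything in it is synthetic and invented for illustration: a classifier trained on data dominated by one group scores well on that group and markedly worse on the under-represented one.

```python
# Hypothetical sketch: a model trained on unrepresentative data
# performs worse on the under-sampled group. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic two-class data; `shift` moves this group's feature distribution."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    # Labels split around each group's own center, with some noise.
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is badly under-represented
# and drawn from a different distribution.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluating each group separately exposes the learned bias.
Xa_test, ya_test = make_group(500, shift=0.0)
Xb_test, yb_test = make_group(500, shift=2.0)
print("group A accuracy:", model.score(Xa_test, ya_test))
print("group B accuracy:", model.score(Xb_test, yb_test))
```

On this synthetic setup, accuracy on the dominant group stays high while accuracy on the under-represented group falls toward chance, because the learned decision boundary reflects the majority group’s distribution rather than both.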

The commercial world is full of examples. O’Brien pointed to the often frustrating voice recognition software used in customer service. Citing studies from the medical and automotive fields, she explained that voice recognition algorithms generate more errors for women and for those who speak English as a second language than for men. That’s because the voices used to develop and train the algorithms are typically male, resulting in a bias against higher-pitched voices.

In a national security context, the consequences could be dire. For example, what if the military builds an intelligence algorithm that is unintentionally biased toward Russian intrusion methods over Chinese or Iranian ones, O’Brien asked. What if the military builds an algorithm to locate ballistic missiles, but developers use only North Korean imagery data to train it? Will it be able to accurately locate ballistic missiles originating from other adversaries?

“Will it be able to respond quickly enough? Or will we fail to predict our adversary’s actions in time to preserve the maneuver space that we need to defend ourselves?” O’Brien said. “It’s a critical way to think about this challenge, but as we move into the competition, we have to be cognizant of how we build in these decision calculus tools … to ensure we’re competing with the right tools that we need.”

Mark Pomerleau is a reporter for C4ISRNET, covering information warfare and cyberspace.
