Despite how often artificial intelligence and machine learning are discussed within the context of the future of war, the field has its limits.
While great strides have been made in those fields, Giorgio Bertoli, senior scientific technical manager for offensive cyber and acting chief scientist for the Army, said perhaps the advances have been a bit over-marketed and oversold. Bertoli was speaking during a Nov. 29 presentation at the Association of Old Crows Symposium in Washington, D.C.
“There are two things that artificial intelligence/machine learning can do well; two types of problems,” he said. These include classification/object recognition and search optimization.
Classification refers to identifying objects, pictures or signals in the environment and cueing humans to potential anomalies they would otherwise be unaware of, or too slow to discover themselves, amid the flood of data. However, this capability is limited by the need for training data sets from which the machine can learn.
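The training-data limitation can be illustrated with a minimal sketch. Below is a toy nearest-centroid classifier; it can only recognize signal classes it has seen labeled examples of. The emitter names and feature values are invented for illustration, not drawn from any real system.

```python
import math

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(labeled_examples):
    """labeled_examples: dict mapping label -> list of feature vectors."""
    return {label: centroid(vecs) for label, vecs in labeled_examples.items()}

def classify(model, features):
    """Return the label whose training centroid is nearest to the features."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda label: dist(model[label], features))

# Hypothetical training set: (frequency MHz, pulse width us) for two emitter types.
training = {
    "radar_A": [[9500.0, 1.0], [9480.0, 1.1]],
    "radar_B": [[2900.0, 20.0], [2950.0, 19.5]],
}
model = train(training)
print(classify(model, [9490.0, 1.05]))  # prints "radar_A"
```

A signal from an emitter type absent from `training` would still be forced into one of the known labels, which is exactly the dependence on training data the quote describes.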
Search optimization refers to a machine’s ability to steer toward a “good enough” solution within a large search space.
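A common way to sketch this "good enough" behavior is random-restart hill climbing, which settles for the first solution clearing a threshold rather than exhaustively finding the true optimum. The objective function here is a made-up example, not anything from the presentation.

```python
import math
import random

def objective(x):
    # A bumpy 1-D function with many local peaks (invented for illustration).
    return math.sin(3 * x) + 0.5 * math.cos(7 * x) - 0.01 * x * x

def hill_climb(start, step=0.01, iters=1000):
    """Greedy local search: accept only moves that improve the score."""
    x, best = start, objective(start)
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        score = objective(candidate)
        if score > best:
            x, best = candidate, score
    return x, best

def good_enough_search(threshold, restarts=50):
    """Restart hill climbing until a solution clears the threshold."""
    best_x, best_score = None, float("-inf")
    for _ in range(restarts):
        x, score = hill_climb(random.uniform(-10, 10))
        if score > best_score:
            best_x, best_score = x, score
        if best_score >= threshold:
            break  # settle for "good enough" instead of the true optimum
    return best_x, best_score
```

The early `break` is the point: the search trades optimality for speed, which is the sense in which Bertoli describes the technique as steering toward a good-enough answer.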
“That’s it. Those are the two things AI can do,” Bertoli said. He added that he hates to disagree with Elon Musk but he doesn’t think the “Terminator” scenario and a world of cyborg assassins is on its way. Musk has repeatedly warned of the dangers of creating machines that are smarter than humans.
One additional application of AI or machine learning that could benefit military commanders is in deception, said JD McCreary, chief of disruptive technology programs at Georgia Tech Research Institute. Speaking at the same conference, McCreary referred to the observe, orient, decide and act, or OODA, loop, suggesting AI could be used to keep adversaries stuck in the “orient” phase.
He explained that merely having access to a bevy of sensor data is not in itself advantageous, because the data must be turned into some type of decision and action. By the same token, the U.S. could deceive or confound adversaries by flooding them with information so they never reach the proper output, or by feeding them falsified information.