If you want to watch a movie at home, streaming services like Netflix and Amazon have a suggestion for you.

These companies aggregate massive amounts of user data and then leverage artificial intelligence algorithms to offer highly personalized recommendations to those very same users, all in a matter of seconds. Now, the Army wants to take a page out of Netflix’s playbook and use artificial intelligence to recommend decisions to soldiers on the battlefield.

Researchers at the Army Research Laboratory (ARL) have developed a new approach to “collaborative filtering,” the AI technique Amazon and Netflix use to generate personalized recommendations. The new approach enables machines to learn 13 times faster than current AI methods allow and will soon become part of “an adaptive computing/processing system” for the Army, according to one of the head researchers on the project.

“It’s possible to help soldiers decipher hints of information faster and more quickly deploy solutions, such as recognizing threats like a vehicle-borne improvised explosive device, or potential danger zones from aerial war zone images,” an Army press release on the technology reads.

The technique focuses on accelerating stochastic gradient descent, an optimization algorithm widely used to train machine learning models, and is outlined in an award-winning paper titled “FASTCF: FPGA-based Accelerator for Stochastic-Gradient-Descent-based Collaborative Filtering.”
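
For readers unfamiliar with the technique, SGD-based collaborative filtering is typically framed as matrix factorization: the algorithm learns a short vector of latent factors for every user and every item, then nudges those vectors, one observed rating at a time, until their dot products approximate the known ratings. The Python sketch below is a minimal, illustrative CPU version of that general idea, not the FPGA-based FASTCF design described in the paper; the function name, hyperparameters, and toy data are assumptions made purely for the example.

```python
# Illustrative sketch of SGD-based collaborative filtering via matrix
# factorization -- the general technique FASTCF accelerates, NOT the
# paper's FPGA implementation. Names and hyperparameters are assumptions.
import numpy as np

def train_cf(ratings, num_users, num_items, rank=16,
             lr=0.01, reg=0.05, epochs=20, seed=0):
    """Learn latent user/item factors from (user, item, rating) triples."""
    rng = np.random.default_rng(seed)
    # One latent-factor row per user and per item.
    P = rng.normal(scale=0.1, size=(num_users, rank))
    Q = rng.normal(scale=0.1, size=(num_items, rank))
    for _ in range(epochs):
        rng.shuffle(ratings)  # "stochastic": visit observed ratings in random order
        for u, i, r in ratings:
            pu, qi = P[u].copy(), Q[i].copy()
            err = r - pu @ qi                    # prediction error for this rating
            P[u] += lr * (err * qi - reg * pu)   # gradient step on the
            Q[i] += lr * (err * pu - reg * qi)   # regularized squared error
    return P, Q

# Toy usage: 3 users, 3 items, a handful of observed ratings.
data = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (2, 0, 1.0), (2, 2, 2.0)]
P, Q = train_cf(data, num_users=3, num_items=3)
print(P[0] @ Q[2])  # model's predicted rating of item 2 for user 0
```

Because each update touches only one user row and one item row, the inner loop is dominated by many small, independent memory accesses, which is exactly the kind of workload that specialized hardware can parallelize far more efficiently than a general-purpose processor.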

This sped-up AI could eventually be used on the Army’s Next-Generation Combat Vehicle and other cognitive toolkits for soldiers, according to Rajgopal Kannan, one of the head ARL researchers working on the technology.

“The goal is to develop [machine learning] algorithms and models as part of a tactical computing framework for making localized decisions to enable intelligent edge-computing in contested environments under resource constraints,” Kannan told C4ISRNET.

The new technique is also far more power efficient. Running on lightweight, battlefield-ready hardware known as a field-programmable gate array (FPGA), it consumed only 13.8 watts, compared to the 130 watts burned by its closest competitor, a graphics processing unit.

Researchers said their new algorithm may even be more accurate than its competitors.

“The method we developed in the paper speeds up training but not at the cost of accuracy,” Kannan said. “In other words, training is faster; moreover, the accuracy of the final algorithm is the same or better than previous versions.”
