Human trust is a byproduct of shared experience, where the environment dictates the spectrum of that experience. In a military environment, shared experience is nurtured by displays of instinct: your battle buddy doing what they are supposed to do when it matters — and vice versa. These displays are known doctrinally as tactics, techniques and procedures (TTPs).

The intersection of instinct and TTPs surfaces in questions such as: Are you confident the person next to you can hit their target? How many times did they have to hit that target before you assumed they would, rather than assuming they wouldn’t? How quickly should a soldier react to a change in their environment? How many times have you seen a soldier react too slowly? Just fast enough? Would you have the same level of trust if it were their first mission or their 100th?

In all these cases, the variables contributing to trust can be computationally factored: range qualifications, go/no-go testing, training exercise outcomes, after action reviews, ranking, etc. Each variable uses readily identifiable metrics to support the transfer of individual interest to the collective group or team.
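As a rough illustration of what "computationally factored" could look like, the sketch below combines such readiness metrics into a single score. The metric names, scales and weights are hypothetical assumptions for illustration, not doctrine or any fielded model.

```python
# Hypothetical sketch: combining normalized readiness metrics into one trust score.
# Metric names, 0-1 scales, and weights are illustrative assumptions only.

def trust_score(range_qualification, go_no_go_pass_rate, exercise_outcome, aar_rating):
    """Return a 0-1 trust score from readiness metrics, each already normalized to 0-1."""
    weights = {
        "range_qualification": 0.35,   # fraction of targets hit at qualification
        "go_no_go_pass_rate": 0.25,    # fraction of go/no-go tasks passed
        "exercise_outcome": 0.25,      # graded training-exercise performance
        "aar_rating": 0.15,            # normalized after action review rating
    }
    metrics = {
        "range_qualification": range_qualification,
        "go_no_go_pass_rate": go_no_go_pass_rate,
        "exercise_outcome": exercise_outcome,
        "aar_rating": aar_rating,
    }
    return sum(weights[name] * value for name, value in metrics.items())

print(round(trust_score(0.6, 0.9, 0.75, 0.8), 2))  # 0.74
```

The point is not the particular weights but that each input is an observable, recorded metric, which is exactly what makes human trust tractable to measure in the first place.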

The problem with adding a machine into military teaming arrangements is not doctrinal or numeric, though; it is psychological. It requires rethinking the instinctual threshold for trust to exist between soldier and machine. Put another way: how many successful, instinctual exchanges must occur before the relationship reaches a trust tipping point?

To answer that, take two ends of the military spectrum: marksmanship and analysis. Marksmanship represents a traditionally proof-positive problem set; that is, it can be defined by largely binary, predetermined qualifications. Applying those qualifications to soldier-machine teaming yields questions such as: How precisely does a machine need to shoot to be considered valuable by the soldiers around it?

Given the current debate on lethal autonomous weapons, the answer is likely to be “precise enough to not miss.” Yet a soldier is considered a contributing marksman if they can hit little more than half of their targets. Further, a soldier is given 15 minutes at each shooting position. That leads to another question: would a squad of soldiers even consider teaming with a machine if they knew its response time was 15 minutes? Probably not.
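To make the asymmetry concrete, here is a back-of-the-envelope comparison. The figures are assumptions drawn from the paragraph above, not official thresholds, and the machine latency is simply a plausible teaming expectation.

```python
# Back-of-the-envelope comparison of human standards vs. expected machine standards.
# All figures are assumptions for illustration, not official qualification criteria.

soldier_hit_rate = 0.55              # "little more than half" of targets
soldier_time_per_position = 15 * 60  # 15 minutes per shooting position, in seconds

machine_expected_hit_rate = 1.0      # "precise enough to not miss"
machine_expected_latency = 1.0       # seconds; an assumed teaming expectation

print(f"Accuracy gap: {machine_expected_hit_rate - soldier_hit_rate:.0%}")
print(f"Machine expected to respond ~{soldier_time_per_position / machine_expected_latency:.0f}x faster")
```

The machine is held to a standard dramatically tighter than the one its human teammates must meet, which is precisely the disparity at issue.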

Setting lethality aside, military analysis represents the complex, soft-science problems at the other end of the spectrum. These raise questions such as: How much available intelligence would a machine have to absorb and process for the decision chain to act on its output?

According to the Army’s own Field Manual for Intelligence, the answer to that question is all of it. Further, how certain would you have to be that the machine identified an object, person or area of interest correctly? And how important is it that all instances of that object, person or area in the available intelligence were recalled? Again, according to established doctrine, that level of certainty is extremely important, as the Department of Defense sets minimum thresholds for signature identification. Yet high-ranking officials conceded just this year that the DoD exploits, at best, only 12 percent of its intelligence, surveillance and reconnaissance (ISR) information, and most of that exploitation is prone to human error.
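The two doctrinal questions above map onto the standard precision and recall metrics used to evaluate detection systems. A minimal sketch, with hypothetical detection counts chosen purely for illustration:

```python
# Minimal sketch mapping the two questions above to precision and recall.
# The detection counts below are hypothetical, for illustration only.

true_positives = 90    # objects of interest the machine identified correctly
false_positives = 10   # detections that were not actually objects of interest
false_negatives = 30   # objects of interest the machine missed entirely

# "How certain are you the machine identified it correctly?" -> precision
precision = true_positives / (true_positives + false_positives)

# "Were all instances in the available intelligence recalled?" -> recall
recall = true_positives / (true_positives + false_negatives)

print(f"precision = {precision:.2f}, recall = {recall:.2f}")  # 0.90, 0.75
```

Framing the doctrinal questions this way matters because a machine can score well on one metric while failing the other, and the trust threshold a unit sets will depend on which failure it can least afford.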

So, while these are just two cases of many, they demonstrate the type of questions being considered under the DoD’s Third Offset Strategy, a strategy that attempts to outmaneuver the technological advantages gained by adversaries. Yet, as the questions posed in this editorial demonstrate, the military still has a long way to go if that technology is ever to be integrated with our forces successfully.

Specifically, stakeholders contend that policy, legality and ethics decelerate strategic integration. However, the type of technology considered by the Third Offset will live and breathe at tactical touchpoints throughout the force. Thus, the real hurdle lies in overcoming the individual psychological and sociological barriers to the assumption of risk that algorithmic warfare presents.

To do so requires a rewiring of military culture across several mental and emotional domains. Addressing the prevailing culture requires first accepting that artificial intelligence is not something on the horizon; it is already here. Near-peer threats are real and U.S. technology superiority is no longer guaranteed.

There must then be a conscious focus on human understanding of artificial intelligence during military training and exercises. To that end, AI trainers should partner with traditional military subject matter experts to develop the psychological sense of safety that new technology does not inherently provide. Through this exchange, soldiers will develop with machines the same instinctual trust that comes naturally in the human-human war-fighting paradigm.

Finally, the military must actually standardize its standards across agents, human or artificial. Until machine performance expectations are brought into parity, or near parity, with those of soldiers, the Army will continue to out-policy any meaningful solution for deploying valuable technology to the tactical edge. Our instinct to be slow and deliberate when considering the breadth and depth of these technologies cannot come at the expense of the criticality and timeliness of our missions, or allow our adversaries’ advancements to outpace us.

Courtney is a lead scientist for Booz Allen Hamilton’s Strategic Innovation Group. In her current role, Courtney employs artificial intelligence and data science solutions across government. She is also a Ph.D. candidate, with a dissertation on DoD autonomous battlefield systems. Courtney’s previous experience includes policy analysis and social science efforts in support of Army campaigns, both CONUS and deployed.

The thoughts and opinions in this editorial are those of the author and not those of any associated company or organization.
