When any new technology with enormous potential emerges, there is a tendency to swing from utopian expectation to profound disillusionment. It is easy to be blinded by the technology’s promise while ignoring the hard work of thinking through the most appropriate method of implementation. It is easier still to see the predictable failures as damning of the technology itself, rather than its inapt use.

We are exposed to this risk in the current autonomy hype cycle, where we see an impulse toward implementing something called “autonomy” anywhere and everywhere as quickly as possible. The impulse is understandable. But instead of assuming that autonomy offers an obvious panacea for perennial challenges, it is smarter to take a step back and determine when and how to leverage different types of autonomy for the enterprise.

A good place to start is by asking the right questions:

  • What goals do I want to achieve?
  • What new autonomous capabilities can I deploy to achieve those goals simply, quickly and at a relatively low cost?
  • What new goals does autonomous technology make possible that were previously difficult or impossible to contemplate?

Asking these types of seemingly simple questions matters because there is a tendency to conflate the different goals that autonomy can help achieve, which in turn obscures choices and trade-offs.

For example, one commonly cited reason to use autonomy is to reduce the role of the warfighter and increase the role of the machine where this is safe, practicable, and to the advantage of both the health of the warfighter and the achievement of military objectives.

But what is the optimal path to achieve this objective?

There is a meaningful difference between using autonomy to reduce the cognitive burden on crew members, to reduce or eliminate the crew required to operate a platform, and to reduce or eliminate the number of military personnel across a formation. Autonomy can advance all of these goals, but in different ways, and some of these objectives may conflict.

For example, unmanned drones may require support from large numbers of humans, which means more people, more vehicles, more radio emissions, and more fuel supplies, all of which must be moved by more vehicles and more cargo aircraft to deploy. The paradoxical result is that some types of unmanned formations may require more personnel than manned formations, and in turn face different kinds of significant threats from enemy forces.

In addition to obscuring our goals for autonomy, we tend to conflate the different types of autonomy. This is unsurprising, as the power of autonomy and artificial intelligence stems from the fact that they are the type of general-purpose technologies that can be applied to many problems. It is therefore important to ask questions like:

  • What is the actual scenario in which we seek to apply autonomy?
  • What type of autonomy is best suited to this scenario?
  • What are the correct measures of success in evaluating this type of autonomy?

For example, we may focus on the autonomy of a single platform, such as a robotic vehicle, or the autonomy of a collection of platforms, ranging from loyal wingmen for crewed aircraft to massive drone swarms. In the latter case, success will be a function of the orchestration of different platforms together, rather than the performance of a single machine. In the case of autonomous loyal wingmen supporting crewed aircraft, this means evaluating autonomy as the sum of the technologies distributed across both crewed and uncrewed aircraft, such as the configurable mix of hardware and software that allows rule sets established by the crewed aircraft to be executed by the uncrewed aircraft.

These types of questions may lead to different, even contradictory conclusions about where to invest. But considered in tandem, they should help replace the typical hype cycle with measured expectations and concrete goals, and generate a full spectrum of autonomy solutions that are not myopically constrained by legacy technology investments or traditional operational concepts. The result should help align internal stakeholders, make context-sensitive technology investment decisions and maximize the return on investment.

As seen in Ukraine, the strategic use of autonomy can yield entirely new categories of outcomes that previously were hard to fathom. For those of us working to enhance the deployment of safe and intelligent machines, the stakes of getting this right could not be higher.

Ahmed Humayun leads federal growth at Applied Intuition, which provides software for autonomous vehicle development.
