COLOGNE, Germany — NATO officials are kicking around a new set of questions for member states on artificial intelligence in defense applications, as the alliance seeks common ground ahead of a strategy document planned for this summer.

The move comes amid a grand effort to sharpen NATO’s edge in what officials call emerging and disruptive technologies, or EDT. Autonomous and artificial intelligence-enabled weaponry is a key element in that push, aimed at ensuring tech leadership on a global scale.

Exactly where the alliance falls on the spectrum between permitting AI-powered defense technology in some applications and disavowing it in others is expected to be a hotly debated topic in the run-up to the June 14 NATO summit.

“We have agreed that we need principles of responsible use, but we’re also in the process of delineating specific technologies,” David van Weel, the alliance’s assistant secretary-general for emerging security challenges, said at a web event earlier this month organized by the Estonian Defence Ministry.

Different rules could apply to different systems depending on their intended use and the level of autonomy involved, he said. For example, an algorithm sifting through data as part of a back-office operation at NATO headquarters in Brussels would be subjected to a different level of scrutiny than an autonomous weapon.

In addition, rules are in the works to help industry understand the requirements for making systems adhere to a future NATO policy on artificial intelligence. The idea is to present a menu of quantifiable principles against which companies can determine whether their products measure up, van Weel said.

For now, alliance officials are teeing up questions to guide the upcoming discussion, he added.

Those range from basic introspection about whether AI-enabled systems fall under NATO’s “legal mandates,” van Weel explained, to whether a given system is free of bias, meaning whether its decision-making tilts in a particular direction.

Accountability and transparency are two more buzzwords expected to loom large in the debate. Accidents with autonomous vehicles, for example, will raise the question of who is responsible: manufacturers or operators.

The level of visibility into how systems make decisions also will be crucial, according to van Weel. “Can you explain to me as an operator what your autonomous vehicle does, and why it does certain things? And if it does things that we didn’t expect, can we then turn it off?” he asked.

NATO’s effort to hammer out common ground on artificial intelligence follows a push by the European Union to do the same, albeit without considering military applications. In addition, the United Nations has long been a forum for discussing the implications of weaponizing AI.

Some of those organizations have essentially reinvented the wheel every time, according to Frank Sauer, a researcher at the Bundeswehr University in Munich.

Regulators tend to focus too much on slicing and dicing various definitions of autonomy and pairing them with potential use cases, he said.

“You have to think about this in a technology-agnostic way,” Sauer argued, suggesting that officials place greater emphasis on the precise mechanics of human control. “Let’s just assume the machine can do everything it wants — what role are humans supposed to play?”

Sebastian Sprenger is associate editor for Europe at Defense News, reporting on the state of the defense market in the region, and on U.S.-Europe cooperation and multi-national investments in defense and global security. Previously he served as managing editor for Defense News. He is based in Cologne, Germany.
