The United States has long relied on technology to ensure our security and that of our allies. So naturally the Pentagon and its extensive research enterprise are investing heavily in the next wave of technology ― artificial intelligence and autonomous weapons. These systems promise to increase our security and reduce the risk to our service men and women.

But unlike in the era when the United States was the undisputed technology superpower, other countries are competitive in AI and robotics, and much of the relevant skill and technology resides in the private sector, outside government control.

As a result, there will be no American monopoly on intelligent, armed, self-controlling weapon systems, increasingly referred to as lethal autonomous weapon systems, or LAWS. And right now, neither the wider public nor the government has shown much interest in a sustained debate about the ethics and legality of such systems, leaving the Department of Defense to develop and exploit these capabilities in a vacuum.

And that’s dangerous. Not limited by questions of what we should build, offices tasked with protecting our troops and security are understandably racing ahead, focused only on what we can build. If you can think of it ― swarms of armed mini-drones that can dominate an enemy’s airspace with no risk to pilots, or armed sentries on the border of South Korea that can defeat incoming North Korean troops, or micro-drones that can be dropped by the millions over an enemy’s country to destroy its power grid ― you can bet someone at the DoD is working to make it a reality.

The possible advantages to the United States are endless. But so too are the risks. Multiple artificial intelligence experiments in the United States, and even in China, have been shut down when paired “entities” began communicating in a new language their controllers could not understand. Even if you engage in hubris and assume we can control what we develop, considering how the United States could be put at a disadvantage as adversaries develop their own LAWS to go after our troops, weapons, computer systems and satellites should give you at least some pause before diving into the age of autonomy.

And this is where the discussion usually ends. The systems we and others are developing are in the classified space, or inside secure proprietary company or university complexes. And in many cases, we are not even sure what we are talking about. Is it computer code? Is it a nonlethal system that could become lethal? What about a driverless car that gets hacked? Defining what we are developing, what others are doing, and what we might want to control even if we could is hard, both in public and in classified settings.

If the U.S. is to maintain an edge over our adversaries while developing this technology to our advantage, concrete steps are needed.

First, the National Security Council needs a serious process to assess what the United States might want to do, and what our adversaries can do in the coming years. We began such a process in the last administration, but it needs to be continued and expanded. Process is not this administration’s strong suit, but it has to up its game in this critical area.

Second, Congress, currently absent from this debate, must take a stronger hand in developing policy. The money for development comes from Congress, and the bill for defending against these systems won’t be far behind. The Senate and House Armed Services committees need to be more active in promoting the debate and forcing the executive branch to take a more holistic view of these developments.

Third, President Donald Trump should staff his Office of Science and Technology Policy in the White House. It is unconscionable that this office remains vacant; OSTP’s role in this space should be crucial. Trump’s denial of climate change should not require him to bury his head in the sand on LAWS as well.

Fourth, the U.S. government needs help from people who understand the various technical, military, legal and ethical implications of what we are doing. The panel created to advise the government on biological engineering is one model, but any one government expert who thinks they know it all is the last person who should be advising on this issue. We need to accept that this is new territory and engage a broad set of experts from private companies, universities and elsewhere.

Lastly, Secretary of Defense Jim Mattis and Chairman of the Joint Chiefs Gen. Joseph Dunford need to engage the policy and technical communities in a serious way to promote discussion and engagement with the DoD. This is unlike anything the Washington policy community has managed before, and we need a cooperative approach with the technical community and even ethicists, like those working on biological weapons and genetic engineering, to ensure we have all of the information and insight available. This won’t guarantee a good result, but it reduces the risks of a bad outcome.

We may not be able to stop lethally armed systems with artificial intelligence from coming online. Maybe we should not even try. But we have to be more thoughtful as we enter this landscape. The risks are incredibly high, and it is hard to imagine an issue more worthy of informed, national debate than this.

Jon Wolfsthal is a fellow at Harvard’s Managing the Atom Project and former senior director at the National Security Council for Arms Control and Nonproliferation. You can follow him on Twitter at @JBWolfsthal.
