Nuclear deterrence depends on fragile human perceptions of credibility.

As states armed with nuclear weapons turn to machine learning techniques to enhance their nuclear command, control, and communications (NC3) systems, the United States and its competitors should take care that these new tools do not inadvertently exacerbate crisis instability or accelerate an arms race.

NC3 Systems and Credibility

Stability between competing nations largely relies on ascertaining the credibility of threats, capabilities, and decisions in order to decrease uncertainty and reduce the risk of conflict. Throughout the Cold War, NC3 systems served as mechanisms for signaling intent and capability, ensuring the credibility of deterrence postures, and decreasing the risk of nuclear war. In the early 21st century, technological developments such as machine learning techniques are introducing new dynamics and capabilities that increase uncertainty and contribute to “strategic instability.”

Without credible NC3 systems, nuclear weapons cannot and will not deter or prevent nuclear war. Credibility underwrites strategic stability. If the NC3 systems that underpin the United States’ nuclear weapons capability are not credible, neither are the weapons themselves nor the doctrines behind them. Yet NC3 systems at present are highly complex and inherently opaque. Modernization and digitization of NC3 systems introduce novel and heightened uncertainty, which fundamentally threatens credibility and increases the risk of nuclear conflict.

Novel Digital Uncertainties

New machine learning-based systems erode the credibility of previously trusted systems and intentions, further compounding instability and increasing the risk of conflict. In conventional warfare, autonomous systems could heighten the perceptions of uncertainty and insecurity that drive security dilemmas. In complex U.S. NC3 systems, these dynamics could be even more destabilizing.

The introduction of novel machine learning techniques not only compounds existing digital vulnerabilities but also introduces new uncertainties, in a sense rapidly eating away at credibility. Cyber and cyber-physical systems fused with machine learning capabilities will face an intensifying cat-and-mouse game over the coming years. Offense and defense will constantly test the boundaries of what is acceptable, penetrating networks, exploiting vulnerabilities, catching anomalies, and meddling in adversaries’ systems, including the massive edifice that is NC3, while staying below the threshold of conflict.

These dynamics take cyber operations to a new level of complexity, and into the very core elements of the NC3 system of systems. While these challenges are “known knowns,” machine learning techniques introduce “known unknowns” as well. For example, Ram Shankar Siva Kumar and Microsoft colleagues have identified attack vectors unique to machine learning, including model inversion, ML backdoors, perturbation attacks, and reward hacking.
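
To make the perturbation category concrete, the sketch below shows a minimal evasion attack in the style of the fast gradient sign method (FGSM) against a toy classifier. The model, data, and perturbation budget are illustrative assumptions only and do not represent any actual NC3 component.

```python
# Minimal sketch of a perturbation (evasion) attack in the style of the
# fast gradient sign method (FGSM). Toy model and data, for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for an arbitrary ML-based classifier (e.g., a sensor-data model).
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

x = torch.randn(1, 16)     # a benign input
y = torch.tensor([0])      # its assumed true label
loss_fn = nn.CrossEntropyLoss()

# Gradient of the loss with respect to the input, not the weights.
x_adv = x.clone().requires_grad_(True)
loss_fn(model(x_adv), y).backward()

# Nudge the input in the direction that most increases the loss.
epsilon = 0.25             # perturbation budget (assumed)
perturbed = x_adv.detach() + epsilon * x_adv.grad.sign()

print("original prediction: ", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(perturbed).argmax(dim=1).item())
```

A small, carefully chosen perturbation of this kind can flip a model’s output even though the input still looks benign to a human operator, which is precisely the sort of failure that complicates credibility.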

These machine learning attacks are relatively novel challenges for those working to build credible nuclear deterrence. In particular, the inability of nuclear weapon states to signal their cyber capabilities and intentions makes it even more difficult to establish credibility. Signaling is likely to be more difficult in cyberspace due to unclear attribution of cyber-attacks, trouble communicating through the “noisy” medium, and the secrecy of digital operations.

Deep Learning in NextGen NC3

Little open-source research has probed the potential integration of new deep learning tools into U.S. NC3 systems. Estimates of the number of subsystems that make up NC3 range from 107 to 240. Based on publicly available information, analysts at Technology for Global Security have specified around 99 of the 200-plus systems. Of those 99, approximately 39 percent are plausible candidates for deep learning integration during “NextGen” NC3 modernization efforts. Extrapolating that rate to the high-end estimate of 240 systems in the U.S. NC3 enterprise, we conclude that 93-plus NC3 systems could see the integration of deep learning into their software and hardware.
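
The arithmetic behind the 93-plus figure is a straightforward extrapolation; the snippet below reconstructs it under the assumption that the 39 percent rate observed among the specified systems holds across the full enterprise.

```python
# Reconstruction of the extrapolation behind the "93-plus" estimate,
# assuming the 39 percent rate generalizes to the full enterprise.
candidate_rate = 0.39     # share of specified systems judged plausible candidates
high_end_total = 240      # high-end estimate of systems in the U.S. NC3 enterprise

estimate = candidate_rate * high_end_total
print(f"estimated deep learning candidates: {estimate:.0f}")  # ~94, hence "93-plus"
```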

To be clear: none of the NextGen architects have indicated that they intend to integrate deep learning into these systems. However, we estimate that 93-plus systems could benefit from deep learning techniques. For example, integrating deep learning into systems responsible for modulating frequency, processing signals, and transmitting voice and data could increase the reliability and accuracy of communications. Human operators in NC3 systems contend with large volumes of complicated data and considerable system complexity, so decision-support tools could reduce that burden and improve accuracy.
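
As a purely illustrative example of the signal-processing use case, the sketch below trains a tiny one-dimensional convolutional denoiser to recover a clean waveform from a noisy one. It is a hypothetical toy, assuming PyTorch, and does not reflect any actual NextGen design.

```python
# Toy 1-D convolutional denoiser illustrating how deep learning could, in
# principle, clean up a noisy transmitted signal. Hypothetical example only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic "transmission": a clean waveform plus channel noise.
t = torch.linspace(0, 6.28, 256)
clean = torch.sin(3 * t).reshape(1, 1, -1)       # shape (batch, channel, length)
noisy = clean + 0.3 * torch.randn_like(clean)

denoiser = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(8, 1, kernel_size=5, padding=2),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Train briefly to map the noisy signal back toward the clean one.
for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(denoiser(noisy), clean)
    loss.backward()
    optimizer.step()

print(f"mean squared error after training: {loss.item():.4f}")
```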

The implications of novel machine learning techniques in NC3 remain uncertain. What is clear, however, is that NC3 systems that incorporate new machine learning tools are likely to be less credible, particularly in the short term. As such, a nation armed with nuclear weapons could face difficulties signaling the credibility of its systems to an adversary, and vice versa. Even the slightest perceived gap in credibility creates uncertainty, which could prompt countries to act more decisively. Further, the fear that competitors will innovate more rapidly could motivate a race to be the first to develop these sophisticated capabilities.

Nuclear deterrence relies on NC3 to ensure credibility across all interactions between countries armed with nuclear weapons. While it remains unclear exactly how machine learning will be integrated into NC3 infrastructure, asking the right questions early in this process is the only way to maintain and increase the credibility of critical systems.

This publication was made possible (in part) by a grant to the Center for a New American Security from Carnegie Corporation of New York.
