Opinion

Protecting people from disinformation requires a cognitive security proving ground

Cognitive security is old and new. The concept is old in the sense of Sun Tzu’s famous saying, “All warfare is based on deception,” with cognitive security essential to not being deceived. It is new because of the exponential expansion of the internet, which has boosted our productivity to incredible heights but has also opened avenues for wrongdoers to try to influence us and imperil our cognitive security at extraordinary scale and speed. Thus, cognitive security is about providing assurance in the form of strong tools and techniques to detect, characterize and thwart malicious influence conveyed via social media and other communication channels.

An infamous online influence campaign conducted by a nation-state is disclosed in the 2018 indictment of the Russian Internet Research Agency, or IRA, for using fraud and deceit to interfere with the U.S. political and electoral processes. The indictment describes how the defendants and co-conspirators, “created hundreds of social media accounts and used them to develop fictitious personas into ‘leader[s] of public opinion’ in the United States.” In other words, the IRA conducted online disinformation operations to interfere with or influence U.S. political and electoral processes, where disinformation is deliberately false information.

For Russia, information operations play a critical role in both nonmilitary and military means of influence. The current Russian view is that nonmilitary means, such as economic, political and diplomatic avenues, serve as important ways to achieve strategic objectives, with military means often in supporting roles. Information operations span both nonmilitary and military means, where “information effects distort facts and facilitate the imposition of an emotional perception on the subject, or target, that is advantageous to the side delivering the effects” as reported by the noted expert on Russian information warfare, Timothy Thomas. These information effects are seen clearly in the actions described in the IRA indictment.

The U.S. government recognizes that malicious influence operations constitute threats to cognitive security, and it is organizing capabilities to address these threats. The private sector must be part of this effort because malicious influence campaigns target private individuals and institutions, such as social media companies, the stock market, and listed companies, as well as government entities. In short, cognitive security is foundational to national security, encompassing both the public and private sectors.

The techniques and technologies of cognitive security aim to strengthen individuals and populations against malicious influence, and make malicious influencers ineffective. Cognitive security does this by way of three core elements.

First, increase cognitive resilience against malicious influence. This includes cultivation of critical thinking and media literacy through education, as well as development of tools that can provide real-time identification and defense for people and organizations encountering sophisticated influence efforts. These technologies must operate at the scale and speed of the internet, for instance, through the automated identification of deepfakes and other manipulated media.

Second, achieve broad cognitive situational awareness. This includes rapid detection and characterization of malicious influence campaigns, and virtual personal assistants that help individuals and organizations identify the sources and goals of deceptive online messaging.

Third, create accurate and robust cognitive engagement capabilities to counter malicious influence, such as that sown online by software-based agents or bots.

In contrast to cybersecurity, which emphasizes the protection of devices, computers, networks and other machines, cognitive security focuses on the protection of the human, which requires a socio-technical approach that integrates a number of disciplines including social/behavioral sciences, AI, data science and advanced computing. This approach involves not only holistic integration of tools, models and datasets, but the translation of cognitive security needs into problems that can be addressed collaboratively by government, industry and academia.

Fundamental to the realization of this goal is the creation of a cognitive security proving ground, which will mature and validate cognitive security techniques and technologies with a rigor seen in other proving grounds and test ranges established to address hard problems like cyber. The cognitive security proving ground is envisioned to be a comprehensive and flexible live-virtual-constructive simulated environment scalable from the individual to large populations for research, engineering, test/evaluation, training, and development of strategies to defeat malicious influence campaigns.

We want the cognitive security proving ground to tackle key questions, such as, “How do we protect populations against large-scale mis- and disinformation campaigns?” or “How do we measure the effectiveness of cognitive security techniques and technologies?” Addressing ethical, legal and societal issues (ELSI) is critical to the development of cognitive security techniques and technologies, and a cognitive security proving ground will need to establish an ELSI panel to help guide the broad range of projects seeking to provide assurance that one’s thinking is not being manipulated by people with malicious intent, whether they communicate online or offline.

The protection of the human against malicious influence encompasses the public and private sectors, which means that the development of cognitive security techniques and technologies must be conducted in a neutral environment where government, industry, and academia can work together in the holistic integration of tools, models and datasets to overcome the cognitive security challenges of resilience, situational awareness and engagement. Answering this need is a cognitive security proving ground for national security.

Brian M. Pierce is a visiting research scientist at the University of Maryland Applied Research Laboratory for Intelligence and Security, and a mediaX distinguished visiting scholar at Stanford University. He is a former director of the Information Innovation Office at the Defense Advanced Research Projects Agency.
