Disinformation has become a central feature of the COVID-19 crisis. According to a recent poll, false or misleading information about the pandemic reaches close to half of all online news consumers in the U.K. Because this kind of malign information and high-tech “deepfake” imagery can spread so quickly online, it poses a risk to democratic societies worldwide by deepening public mistrust in governments and public authorities, a phenomenon referred to as “truth decay.” New research, however, highlights ways to detect and dispel disinformation online.

There are several factors that may account for the rapid spread of disinformation during the COVID-19 pandemic. Given the global nature of the crisis, more groups are using disinformation to further their agendas. Advances in machine learning also contribute to the problem, as disinformation campaigns powered by artificial intelligence extend the reach of malign information online and on social media platforms.

Research from Carnegie Mellon University suggests that social media “bots” may account for 45 to 60 percent of all reviewed Twitter activity related to COVID-19, compared with 10 to 20 percent for other events such as U.S. elections and natural disasters. These bots can automatically generate messages, advocate ideas, follow other users and use fake accounts to gain followers of their own.

The university’s research identified more than 100 inaccurate COVID-19 theories, including misleading reporting on prevention, cures and emergency measures implemented by state and local authorities. In this context, disinformation can have harmful effects on individuals, communities, society and democratic governance. False or misleading claims concerning the coronavirus may encourage people to take greater risks, threatening their own health and that of others, for example by consuming harmful substances or disregarding social distancing guidelines.

Disinformation may also be used to target vulnerable populations including migrants and refugees, heightening the risk of xenophobic violence and hate crime.

Public and private sector groups, as well as civil society organizations, have already introduced various countermeasures to tackle online disinformation. These include content moderation initiatives and the use of social media algorithms to identify disinformation. Online media-literacy programs designed to enhance users’ ability to recognize false or misleading information can also help strengthen public resilience to disinformation.

Facebook-owned WhatsApp has also imposed new limits on message forwarding to curb the spread of false information over its messaging channels.

The findings of a new Rand Europe study could help strengthen these efforts further. Commissioned by the U.K. Defence Science and Technology Laboratory, or DSTL, the study shows how machine-learning models can be used to detect malicious actors online, such as Russian-sponsored trolls.

The Kremlin’s disinformation tactics have continued apace during COVID-19 and include coordinated narratives with China claiming that the coronavirus was caused by migrants or originated as a biological weapon developed in a U.S. military lab.

Disinformation has also included false claims regarding Russian “humanitarian aid” to countries including the U.S. and Italy. These efforts all serve to undermine the resilience, recovery and crisis responses of national governments.

In the study for DSTL, researchers drew on Twitter data from the 2016 U.S. presidential election, and they used a computer model to distinguish between the narratives of Russian “trolls” and authentic political supporters.

The model was able to successfully identify the trolls by detecting the manipulative “us versus them” language used to target Democratic and Republican partisans.

The analysis explains how specific language tactics can be used to identify trolls in real time, and it highlights whom those manipulation tactics target. Trolls stoke discord online by using repeated linguistic patterns to amplify emotive issues for each side.
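As a rough illustration of how repeated linguistic patterns might be flagged automatically, the sketch below trains a simple text classifier to score posts for divisive “us versus them” framing. It is a minimal sketch, not the model from the Rand study; the example posts, labels and scoring step are invented for illustration.

```python
# Minimal sketch: flagging divisive "us versus them" framing with a bag-of-words
# classifier. The training examples below are illustrative placeholders, not the
# Twitter dataset used in the Rand/DSTL study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled posts: 1 = troll-style divisive framing, 0 = ordinary partisan speech.
texts = [
    "They want to destroy everything we stand for, real patriots know better",
    "Those people will never respect us, share if you agree",
    "Our campaign volunteers registered 400 new voters this weekend",
    "I disagree with the senator's vote on the budget bill",
]
labels = [1, 1, 0, 0]

# TF-IDF word features feed a simple logistic regression classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(),
)
model.fit(texts, labels)

# Score a new message; a higher probability suggests divisive "us vs. them" framing.
new_post = "Wake up, they are laughing at people like us"
print(model.predict_proba([new_post])[0][1])
```

In practice, a real system would need far larger labeled datasets and richer linguistic features than this toy example, but the basic pattern of learning from labeled posts and scoring new ones is the same.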

To raise awareness and build resilience to these tactics, government bodies could surface these patterns to members of targeted groups so they can recognize social media manipulation techniques.

Because the model was built by examining how trolls targeted online debates around the 2016 U.S. presidential election, its community detection, text analysis, machine learning and visualization components could be reconfigured in the future to create a robust, general-purpose social media monitoring tool. Such a tool could help focus public sector efforts to counter online disinformation in relation to COVID-19, among other issues of public importance.
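To make the idea of combining these components concrete, here is a minimal sketch of how community detection on a retweet graph could be paired with per-account text scores in a simple monitoring step. The graph, scores and threshold are hypothetical assumptions, not outputs of the DSTL study, and the sketch assumes the networkx library.

```python
# Minimal sketch: combine community detection on a retweet graph with per-account
# text scores to flag possible coordinated clusters. All data below is invented.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy retweet graph: an edge means one account retweeted another.
G = nx.Graph([
    ("acct_a", "acct_b"), ("acct_b", "acct_c"), ("acct_a", "acct_c"),
    ("acct_d", "acct_e"), ("acct_e", "acct_f"),
])

# Hypothetical per-account scores from a text model like the one sketched above
# (probability that an account's posts use divisive "us vs. them" framing).
divisive_score = {
    "acct_a": 0.91, "acct_b": 0.88, "acct_c": 0.85,
    "acct_d": 0.12, "acct_e": 0.20, "acct_f": 0.15,
}

# Group accounts into communities, then flag communities with a high average score.
for community in greedy_modularity_communities(G):
    avg = sum(divisive_score[a] for a in community) / len(community)
    if avg > 0.8:  # illustrative threshold
        print("Possible coordinated cluster:", sorted(community), round(avg, 2))
```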

Understanding how online actors can target countries’ vulnerabilities could serve as a first step toward building wider resilience to disinformation. Further developing such approaches to defend against these manipulation tactics could be instrumental in fighting disinformation at scale, a problem that has become central to the COVID-19 crisis.

Kate Cox is a senior analyst and Linda Slapakova is an analyst in the defense, security and infrastructure group at the Europe research unit of the think tank Rand. William Marcellino is a senior behavioral scientist at Rand.
