
Disinformation campaigns from hostile state and non-state actors continue to thrive and to undermine democracies, leveraging a vast array of communications platforms to exploit elections, referendums, the Covid-19 pandemic, and more. 

The fundamental goals of disinformation remain the same: to undermine democracy, international cohesion, and trust in institutions; and to increase polarisation and promote geopolitical goals. 

This threat reached a dramatic height when angry mobs stormed the US Capitol on 6 January 2021. That violence, incited by a web of conspiracies and domestic and foreign disinformation efforts, demonstrates how online disinformation and conspiracy theories can shake democracy to its core and cause real-world harm to people and to the democratic process. 

Western democracies must take notice of the real and growing threat of disinformation-fuelled radicalisation and violence, which hostile state and non-state actors will continue to exploit to achieve their geopolitical goals. Fortunately, there is a vast body of research and practised methodology that counter-disinformation practitioners can borrow from the counter-radicalisation and counterterrorism playbooks to curb it. 

Disinformation and radicalisation experts alike study how offline and online behaviour and discourse might presage real-world harm. At the onset of the mass migration of foreign fighters to Iraq and Syria in 2013, and the accompanying rise in domestic terrorism, radicalisation and counterterrorism practitioners began considering ‘push’ and ‘pull’ factors as indicators or vulnerabilities around which to shape their prevention programmes. 

Push factors are the socio-economic, psychological, ideological, and circumstantial conditions (such as discrimination or marginalisation) that might make some people more likely to consider, or physically mobilise towards, violence. Pull factors are the influences, messages, and groups that exploit these vulnerabilities. With these factors in mind, experts and practitioners could begin to understand the drivers, external influences, and stages of the radicalisation process, and tailor prevention programmes accordingly. 

Practitioners and global institutions should think about disinformation through a similar lens. Identifying predispositions or push factors will help governments and global institutions avoid one-size-fits-all approaches, instead grounding disinformation-related prevention programmes in local-level drivers and influences and enlisting appropriate, credible influencers as part of the resilience-building process. 

As Peter Kreko, Director of the Political Capital Institute in Budapest, states: “Vulnerabilities are easily exploited by malign state and non-state actors who then tailor influence operations to each audience by tapping into these underlying complexities.” If prevention programmes are tailored to address these known vulnerabilities, the impact of such actors’ disinformation campaigns may wane.

Just as social and traditional media have become accelerators of terrorist recruitment and radicalisation, artificial intelligence (AI)-enabled disinformation could, if not adequately addressed, serve as an accelerant of disinformation-fuelled mobilisation to violence. 

The number of deepfake videos online is increasing dramatically. A report from Deeptrace indicates that over a ten-month period in 2019, the number of deepfakes circulating online grew from 7,964 to 14,678. The creation and distribution of sophisticated deepfakes, forged documents, and doctored images presents yet another tool for nefarious actors to exploit. 

Anne Neuberger, White House Deputy National Security Advisor for Cyber and Emerging Technology, echoed this concern by saying that “artificial intelligence could generate disinformation at scale in a way that brings real concern.” 

Governments must be proactive: investing in detection tools and technologies, undertaking domestic and global planning, and putting processes in place for information-sharing with social media companies when AI-enabled disinformation has the potential to cause real-world harm. 

Institutions should break down the silos between counterterrorism and counter-disinformation efforts to ensure real-time information sharing, analysis, and future planning, as counter-radicalisation, counterterrorism, and counter-disinformation efforts increasingly overlap. While elections, referendums, and the Covid-19 pandemic all present vectors for disinformation-fuelled violence, global institutions and governments should begin anticipating the future threats and vectors that could lead to another 6 January-style event.

While it is imperative for global institutions and governments to take swift, decisive steps to combat the rising tide of disinformation-related violence, there are important limitations. Not all disinformation or extreme discourse leads to violence. Curbing, or appearing to curb, speech in the name of countering terrorism or extremism could infringe on protected expression, causing irreparable damage to democracies while exacerbating distrust in governing institutions. 

Finally, governments cannot curb the spread of disinformation alone, nor should they be solely responsible. Social media and communications companies must make their platforms less hospitable to the spread of disinformation. 

As disinformation enters this new phase and poses an increasing risk to democracy, institutions must act quickly to create plans and programmes that build resilience against this threat. Leveraging the broad array of lessons learned, resources, and tools from recent counterterrorism and counter-radicalisation programmes is a good place to start. 

Lauren Protentis is a national security and communications expert. This article first appeared in our Centre Write magazine Target secured?. Views expressed in this article are those of the author, not necessarily those of Bright Blue.