
It is the distribution, not just generation, of falsehoods that we should be worried about, argues Baroness Joanna Shields.

With more than 50% of the global population going to the polls this year, global attention has intensified around the potential for AI to generate sophisticated disinformation. Yet a crucial aspect of the crisis remains under-examined: the role of online platform advertising models in disseminating such information. While the technological capability of AI deepfakes to mimic real-life candidates and create convincing falsehoods represents a significant challenge, the delivery mechanisms inherent in the business models of major social media platforms, which propagate those messages, pose a more insidious threat to the fabric of democracies worldwide.

Recognising the potential risks of AI, organisations like OpenAI have started to implement measures to mitigate them, developing guidelines to prevent the misuse of AI in political campaigns. These efforts are crucial steps toward ensuring that advancements in AI are not misused to subvert democracy. But the essence of this threat lies not only in the generation of disinformation, which OpenAI can try to prevent, but also in its targeted distribution to audiences already fragmented by social divisions, over which OpenAI has far less control.

Online platforms, driven by advertising models that prioritise user engagement above all, have mastered the art of harnessing algorithms to feed content that resonates with individuals’ existing biases and passions. This approach, while commercially lucrative, has facilitated the rapid and unchecked spread of misinformation. The consequence is the deepening of societal divides and the undermining of the principles of informed discourse that are essential to a healthy democracy.

The advertising model’s fundamental flaw is its indifference to the veracity of content, treating information as merely another commodity to be optimised for maximum engagement. This model incentivises sensationalism, controversy and emotional provocation, creating fertile ground for disinformation to flourish. As a result, quality journalism and fact-based discourse are not merely disadvantaged, but systematically sidelined in favour of content that can more effectively capture and retain user attention. The consequences of this dynamic are profound: fact-checked, quality journalism is pushed behind paywalls and becomes a luxury item rather than a public good, while misinformation proliferates freely.

The focus on AI-generated disinformation, while important, must not overshadow the critical examination of these delivery mechanisms. Accordingly, legislative and regulatory efforts must prioritise reforms that challenge these advertising models. Policies that encourage transparency in the algorithms that run social media platforms, alongside initiatives that support the economic viability of quality journalism, must be essential components of a comprehensive strategy to combat disinformation.

For politicians and policymakers, the task ahead involves not only addressing the symptoms of the disinformation crisis but also confronting its root causes. By focusing on social media platforms’ advertising models, we can begin to tackle the incentives behind the spread of misinformation. This approach offers a pathway toward restoring the integrity of our information ecosystem, ensuring that democracies remain resilient in the face of both the technological advancements and the economic incentives that threaten to undermine them.

While the world grapples with the challenges posed by AI, it is imperative that we refocus our efforts on understanding and addressing the delivery mechanisms that allow AI-facilitated disinformation to thrive. Only by confronting the economic models that prioritise engagement over accuracy can we hope to mitigate the impact of disinformation and safeguard the future of democratic discourse.


Baroness Joanna Shields is the former Minister for Internet Safety and Security and the Founder and CEO of Precognition.

This article was published in the latest edition of Centre Write. Views expressed in this article are those of the author, and not necessarily those of Bright Blue. 

Read more from our June 2024 Centre Write magazine, ‘Generation AI?’ here.