If you received a phone call from your local mayor, how would you feel? “Great!”, you may think. “What a personal touch – I always did like them.”
Now suppose the call came in your native Spanish, even though you are fairly certain the mayor does not speak Spanish fluently. Then you turn on the news: it turns out they used Artificial Intelligence (AI) software to make their voice speak languages they do not know. How would you feel then?
Most of us would find this uncanny, unethical and intrusive – and with good reason. For the people of New York, this was not just a hypothetical situation.
In October of last year, the Mayor of New York City, Eric Adams, used AI to ‘fake’ his way into speaking Spanish, Yiddish, Mandarin, Cantonese and more in thousands of automated interactions with citizens across the city. Adams is a serial user of AI, having trialled an automated subway police robot in 2023 (with woeful results) and, notably, a faulty chatbot designed to offer 24/7 guidance to local businesses.
It takes just three seconds of audio to produce an 85% match to a person’s voice, and one in four people have been targeted by some form of AI scam, many impersonating friends or family. To use AI and deepfake technology as Adams did is, simply, to lie. Though he passed it off as trivial, had the calls gone undetected he would have gained significant standing in the eyes of many New Yorkers. What is more, it sets a dangerous precedent for a public official, and there is a very thin line between that and electoral fraud. Earlier this year, the technology that enabled Adams’ phone calls was used to impersonate President Biden in scam calls fraudulently discouraging voters from voting in the primary.
Deepfakes are a form of Generative AI that uses machine learning to create fake audio or visual likenesses of real people, often without their consent. Whether constituents believe deepfaked content because of its realism, or the public begins to distrust genuine media, the mere existence of AI and deepfaked content undermines our epistemic basis for believing what we see and hear online.
Deepfakes are a present political issue, not a dystopian premonition. And in a year when 49% of the world’s population is set to vote, the question of deepfake fraud has become ever more salient. ChatGPT boasts more than 180 million users, and the truth is that Generative AI is going to be an omnipresent aspect of society for the foreseeable future.
This is not to ignore the benefits of AI, both political and otherwise. It can be used to help voters articulate their political beliefs, to generate advertising material at a fraction of the cost and to synthesise large amounts of data to better understand political trends. As such, the question ought to be how we can effectively regulate AI and retain its advantages.
In light of this, effective regulation needs to address three imminent problems. First, the use of political deepfakes to lie to the public. Second, the generative use of unethical data. And third, the rise of misinformation in the political sphere.
Inspiration can be drawn from the ELVIS Act, passed in Tennessee this year. It criminalised the unauthorised use of a person’s voice or image, as well as the dissemination of tools, services and algorithms designed to create deepfaked voices and images without authorisation. This assuages certain ethical qualms insofar as a deepfake may only be made with the consent of the person concerned.
However, this measure does nothing to show the recipient that the content was artificially created: it remains lawful to authorise a deepfake of your own voice and present it as though it were really you (recall Adams).
On this front, regulation should require that any deepfaked content carry an AI ‘stamp’, awarded by a relevant authority, notifying recipients that it is artificial yet authorised. This might be a soundbite at the beginning of an audio message, or a visual cue akin to a trademark on an image or video. Such a stamp should be granted only when the software that produced the content is entirely transparent about the origin of its data, ensuring that the data was ethically sourced. And as detection software becomes as sophisticated as the AI itself, such a scheme grows increasingly feasible; it must be attempted if we want to retain trust in politics.
As education and awareness of the capabilities of deepfakes grows, the need for strict regulation will become less pressing. But, in the meantime, it is paramount that we implement legislation to make it explicit when a piece of content is made with AI. It is important to stress that regulation is not designed to prohibit AI, but instead encourage its ethical use.
Crawford Sawyer is a Masters student at University College London (UCL).
Views expressed in this article are those of the author, and not those of Bright Blue.