How AI-generated disinformation might impact this year’s elections and how journalists should report on it

 


Marina Adami

A phone call from the US president, covert recordings of politicians, false video clips of newsreaders, and surprising photographs of celebrities. A wide array of media can now be generated or altered with artificial intelligence, sometimes mimicking real people, often very convincingly. 

These kinds of deepfakes are cropping up on the internet, particularly on social media. AI-generated fake intimate images of pop star Taylor Swift recently led to X temporarily blocking all searches of the singer. And it’s not just celebrities whose likeness is faked: deepfakes of news presenters and politicians are also making the rounds. 

In a year in which around 2 billion people are eligible to vote in 50 elections, can AI-generated disinformation impact democratic outcomes? And how is this going to affect journalists reporting on the campaigns? I contacted journalists and experts from Spain, Mexico and India to find out.

The issue

If we were to take a snapshot of the deepfakes circulating in recent months, we would find a range of individuals whose likenesses have been faked, for a variety of purposes. Popular targets are those with abundant examples of their (real) appearance and voice available online: celebrities, newsreaders and politicians. These fakes are used for everything from satire to scams to disinformation. 

Politicians have appeared in deepfakes peddling financial scams. The UK Prime Minister Rishi Sunak, for example, was impersonated in a range of video ads that appeared on Facebook. TV newsreaders have been impersonated too, their images used to advertise fake ‘investment opportunities’, sometimes alongside celebrities who may themselves have been faked.

There have also been examples of deepfakes of politicians created to achieve a political outcome, including some tied to elections. An early example was a (quite unconvincing) video showing a clone of Ukraine’s President Volodymyr Zelensky calling on his troops to lay down their arms only days into Russia’s full-scale invasion. 

A high-profile and higher-quality recent example was an AI-generated audio message in which a fake Joe Biden attempted to dissuade people from voting in the New Hampshire primary. Another was a video of Muhammad Basharat Raja, a candidate in Pakistan’s elections, altered so that he appeared to tell voters to boycott the vote. 

AI for satire: a view from Spain

Creating AI-generated images has never been easier. With popular and easily accessible tools such as Midjourney, OpenAI’s DALL-E and Microsoft’s Copilot Designer, users can obtain images from their prompts in a matter of seconds. However, the companies behind these platforms have put in place some restrictions on how their products can be used. 

DALL-E doesn’t allow users to create images of real people, and Microsoft’s tool prohibits ‘deceptive impersonation.’ Midjourney only mentions ‘offensive or inflammatory images of celebrities or public figures’ as examples of content that would breach its community guidelines. Violent and pornographic images are barred on all of these platforms. 

Other tools allow a wider range of creations, and some people are using them. The Spanish collective United Unknown describes itself as a group of ‘visual guerrilla, video and image creators.’ They use deepfakes to create satirical images, often portraying politicians, as in the images in this piece published by Rodrigo Terrasa at El Mundo. 

Continue reading: Reuters Institute

Image by Stefan Keller on Pixabay
