Over the past few weeks, a number of improbable images went viral: former US President Donald Trump getting arrested; Pope Francis wearing a stylish white puffer coat; Elon Musk walking hand in hand with General Motors CEO Mary Barra.
These pictures are not that improbable though: Trump was indeed facing arrest; popes are known to wear ostentatious outfits; and Elon Musk has been one half of an unconventional pairing before. What is peculiar is that all of them are fake, created by generative artificial intelligence software.
AI image generators like DALL-E and Midjourney are popular and easy to use: anyone can create new images from simple text prompts. Both applications are attracting substantial attention. DALL-E claims more than 3 million users; Midjourney has not published figures, but it recently halted free trials, citing a massive influx of new users.
While the most popular uses of generative AI so far are satire and entertainment, the sophistication of the underlying technology is growing fast. A number of prominent researchers, technologists and public figures have signed an open letter calling for a moratorium of at least six months on the training of AI systems more powerful than GPT-4, a large language model created by US company OpenAI. “Should we let machines flood our information channels with propaganda and untruth?” they ask.
I spoke to several journalists, experts and fact-checkers to assess the dangers posed by visual generative AI. When seeing is no longer believing, what implications does this technology have for misinformation? How will it affect the journalists and fact-checkers who debunk hoaxes? Will our information channels be flooded with “propaganda and untruth”?
A fake Trump gets out of jail
On 20 March, journalist Eliot Higgins, founder of Bellingcat, tweeted a series of images he had made using Midjourney. The pictures depicted a fictional narrative around former US President Donald Trump, from his arrest to his escape from prison. They quickly went viral, and Higgins was subsequently locked out of the AI image generator’s server.
“The thread I posted proves how quickly images that appeal to individuals’ interests and biases can become viral,” Higgins says. “Fact-checking is something that takes a lot more time than a retweet.”
For those who work to debunk disinformation, the rise of AI-generated images is a growing concern, since a large proportion of their fact-checking is image- or video-based. Marilín Gonzalo writes a technology column at Newtral, an independent Spanish fact-checking organisation. She says visual disinformation is a particular worry because images are especially compelling and can have a strong emotive impact on audiences.
“You can talk to a person for an hour and give him 20 arguments for one thing, but if you show him an image that makes sense to him, it is going to be very difficult to convince him that’s not true,” Gonzalo says.
Is a tsunami on its way?
Chilean journalist Valentina de Marval, a professor of journalism at Universidad Diego Portales with previous fact-checking experience at agencies such as AFP, Chicas Poderosas and LaBot Chequea, is also worried about the rise of AI-generated images. While these images still carry telltale signs that they are fake, such as malformed hands, teeth or ears, De Marval is concerned that the rapid improvement of these models will render such indicators obsolete.
“Maybe in a couple of months or days artificial intelligence will have learned, for example, to draw hands well, to outline the eyes well, to put teeth or ears, to make the skin less smooth and make it more real with imperfections,” she says.
Despite concerns that AI-generated imagery might lead to a truth crisis, experts like Felix Simon, a communication researcher and PhD student at the Oxford Internet Institute, warn against taking an alarmist view of these new technologies, arguing that their proliferation does not necessarily mean more people will believe such images.
Read more: Reuters Institute