Based on an online survey examining whether and how people use generative artificial intelligence (AI), and what they think about its application in journalism and other areas of work and life, conducted across six countries (Argentina, Denmark, France, Japan, the UK, and the USA), we present the following findings.
Findings on the public’s use of generative AI
ChatGPT is by far the most widely recognised generative AI product – around 50% of the online population in the six countries surveyed have heard of it. It is also by far the most widely used generative AI tool in the six countries surveyed. That being said, frequent use of ChatGPT is rare, with just 1% using it on a daily basis in Japan, rising to 2% in France and the UK, and 7% in the USA. Many of those who say they have used generative AI have used it just once or twice, and it is yet to become part of people’s routine internet use.
In more detail, we find:
- While there is widespread awareness of generative AI overall, a sizable minority of the public – between 20% and 30% of the online population in the six countries surveyed – have not heard of any of the most popular AI tools.
- In terms of use, ChatGPT is by far the most widely used generative AI tool in the six countries surveyed, two or three times more widespread than the next most widely used products, Google Gemini and Microsoft Copilot.
- Younger people are much more likely to use generative AI products on a regular basis. Averaging across all six countries, 56% of 18–24s say they have used ChatGPT at least once, compared to 16% of those aged 55 and over.
- Roughly equal proportions across the six countries say that they have used generative AI for getting information (24%) and for creating various kinds of media, including text but also audio, code, images, and video (28%).
- Just 5% across the six countries covered say that they have used generative AI to get the latest news.
Findings on public opinion about the use of generative AI in different sectors
Most of the public expect generative AI to have a large impact on virtually every sector of society in the next five years, ranging from 51% expecting a large impact on political parties to 66% for both news media and science. But there is significant variation in whether people expect different sectors to use AI responsibly – ranging from around half trusting scientists and healthcare professionals to do so, to less than one-third trusting social media companies, politicians, and news media to use generative AI responsibly.
In more detail, we find:
- Expectations around the impact of generative AI in the coming years are broadly similar across age, gender, and education, except for expectations around what impact generative AI will have for ordinary people – younger respondents are much more likely to expect a large impact in their own lives than older people are.
- Asked if they think that generative AI will make their life better or worse, a plurality in four of the six countries covered answered ‘better’, but many have no strong views, and a significant minority believe it will make their life worse. People’s expectations when asked whether generative AI will make society better or worse are generally more pessimistic.
- Asked whether generative AI will make different sectors better or worse, there is considerable optimism around science, healthcare, and many daily routine activities, including in the media space and entertainment (where there are 17 percentage points more optimists than pessimists), and considerable pessimism for issues including cost of living, job security, and news (8 percentage points more pessimists than optimists).
- When asked their views on the impact of generative AI, between one-third and half of our respondents opted for middle options or answered ‘don’t know’. While some have clear and strong views, many have not made up their mind.
Findings on public opinion about the use of generative AI in journalism
Asked to assess what they think news produced mostly by AI with some human oversight might mean for the quality of news, people tend to expect it to be less trustworthy and less transparent, but more up to date and (by a large margin) cheaper for publishers to produce. Very few people (8%) think that news produced by AI will be more worth paying for compared to news produced by humans.
In more detail, we find:
- Much of the public think that journalists are currently using generative AI to complete certain tasks, with 43% thinking that they always or often use it for editing spelling and grammar, 29% for writing headlines, and 27% for writing the text of an article.
- Around one-third (32%) of respondents think that human editors check AI outputs to make sure they are correct or of a high standard before publishing them.
- People are generally more comfortable with news produced by human journalists than by AI.
- Although people are generally wary, there is somewhat more comfort with using news produced mostly by AI with some human oversight when it comes to soft news topics like fashion (+7 percentage point difference between comfortable and uncomfortable) and sport (+5) than with ‘hard’ news topics, including international affairs (-21) and, especially, politics (-33).
- Asked whether news that has been produced mostly by AI with some human oversight should be labelled as such, the vast majority of respondents want at least some disclosure or labelling. Only 5% of our respondents say none of the use cases we listed need to be disclosed.
- There is less consensus on what uses should be disclosed or labelled. Around one-third think ‘editing the spelling and grammar of an article’ (32%) and ‘writing a headline’ (35%) should be disclosed, rising to around half for ‘writing the text of an article’ (47%) and ‘data analysis’ (47%).
- Again, when asked their views on generative AI in journalism, between a third and half of our respondents opted for neutral middle options or answered ‘don’t know’, reflecting a large degree of uncertainty and/or recognition of complexity.
Introduction
The public launch of OpenAI’s ChatGPT in November 2022 and subsequent developments have spawned huge interest in generative AI. Both the underlying technologies and the range of applications and products involving at least some generative AI have developed rapidly (though unevenly), especially since the publication in 2017 of the breakthrough ‘transformers’ paper (Vaswani et al. 2017) that helped spur new advances in what foundation models and Large Language Models (LLMs) can do.
These developments have attracted much important scholarly attention, ranging from computer scientists and engineers trying to improve the tools involved, to scholars testing their performance against quantitative or qualitative benchmarks, to lawyers considering their legal implications. Wider work has drawn attention to built-in limitations, issues around the sourcing and quality of training data, and the tendency of these technologies to reproduce and even exacerbate stereotypes and thus reinforce wider social inequalities, as well as the implications of their environmental impact and political economy.
One important area of scholarship has focused on public use and perceptions of AI in general, and generative AI in particular (see, for example, Ada Lovelace Institute 2023; Pew 2023). In this report, we build on this line of work by using online survey data from six countries to document and analyse public attitudes towards generative AI, its application across a range of different sectors in society, and, in greater detail, in journalism and the news media specifically.
We go beyond already published work on countries including the USA (Pew 2023; 2024), Switzerland (Vogler et al. 2023), and Chile (Mellado et al. 2024), both in terms of the questions we cover and specifically in providing a cross-national comparative analysis of six countries that are all relatively privileged, affluent, free, and highly connected, but have very different media systems (Humprecht et al. 2022) and degrees of platformisation of their news media system in particular (Nielsen and Fletcher 2023).
The report focuses on the public because we believe that – in addition to economic, political, and technological factors – public uptake and understanding of generative AI will be among the key factors shaping how these technologies are being developed and used, and what they, over time, will come to mean for different groups and different societies (Nielsen 2024). There are many powerful interests at play around AI, and much hype – often positive salesmanship, but sometimes wildly pessimistic warnings about possible future risks that might even distract us from already present issues. But there is also a fundamental question of whether and how the public at large will react to the development of this family of products. Will it be like blockchain, virtual reality, and Web3 – all promoted with much bombast but little popular uptake so far? Or will it be more like the internet, search, and social media – hyped, yes, but also quickly becoming part of billions of people’s everyday media use?