Call for Papers: Human-Centered Generative AI (International Journal of Information Management)

Deadline: 30/10/2024

The rapid advancement of Generative Artificial Intelligence (GAI) has opened up a plethora of possibilities for how we live, work and learn (Brynjolfsson et al., 2023; Dwivedi et al., 2023a; Nah et al., 2023). GAI refers to a class of algorithms capable of generating new content. Based on questions and prompts, GAI tools can generate text and code (e.g., ChatGPT) or images (e.g., MidJourney). Other GAIs specialize in producing audio and video (e.g., Synthesia). The quality of the output and the widespread availability of these tools were unimaginable only a few years ago.

On the one hand, these developments have spurred a range of innovations, business opportunities and efficiencies (Gartner, 2023), drawing researchers to study the role and impact of GAI on business, society, and individuals (Budhwar et al., 2023; Dwivedi et al., 2023b; Richey et al., 2023; Susarla et al., 2023; Van Dis et al., 2023; Wamba et al., 2023). On the other hand, the accelerated pace of development has made it difficult for business and society to adapt, leading to uncertainty and unanticipated dilemmas. These include concerns about the misuse of GAI, issues with data privacy, overreliance on GAI output as fact, the challenges of verifying GAI-generated content, and the inherent biases and “hallucinations” of such systems (Ji et al., 2023; Mukherjee and Chang, 2023). More broadly, there are significant concerns about the impact of GAI on job displacement and its potential to exacerbate divides between people who can make use of the technology and those who cannot.

Theorizing these technology-related opportunities and tensions is at the very core of Information Systems (IS) scholarship. It comes as no surprise, then, that there are calls for more research on the challenges and opportunities of GAI and on understanding how it creates new forms of value.

The goal of this special issue is to emphasize Human-Centered AI (HCAI) approaches that prioritize human values, needs, and abilities throughout the design, development, deployment and situated use of GAI. HCAI refers to the prospect that digital technologies which tremendously amplify human abilities can also empower people in remarkable ways while ensuring human control (Shneiderman, 2020). HCAI can be seen as a two-dimensional framework of automation and control. It has evolved into an approach that combines AI-based algorithms with human-centered design thinking, and it influences methods, processes, and outcomes. Capel and Brereton (2023) provide a mapping of the HCAI literature, highlighting, among others, Shneiderman’s work, which has been the foundation for many studies that take an HCAI perspective. Based on their review, they define HCAI as:

Human-Centered Artificial Intelligence utilizes data to empower and enable its human users, while revealing its underlying values, biases, limitations, and the ethics of its data gathering and algorithms to foster ethical, interactive, and contestable use (Capel and Brereton, 2023).

This special issue recognizes that a human-centered mindset is crucial for the responsible design, development, and deployment of AI (Vassilakopoulou et al., 2022). GAI can play a dual role, at times being part of the problem and at times facilitating solutions to existing problems (Veit and Thatcher, 2023; Pappas et al., 2023). Because HCAI puts humans at the center, it emphasizes that the next frontier of AI is not just technological but also humanistic and ethical (Stahl & Eke, 2024). This ensures that the GAI models developed and deployed are designed with human values, ethics, and user experience in mind, and that they are used in ways that are socially beneficial and responsible. It involves considering the implications of the technology from multiple perspectives, including those of the user, the developer, and society as a whole. Placing humans at the center allows the creation of AI systems that are more inclusive, trustworthy, and aligned with human values and goals (Schoenherr et al., 2023; Shneiderman, 2020). For research, studying and advancing human-centered GAI ensures that IS research has societal impact and is relevant and meaningful (Burton-Jones et al., 2023; Karanasios, 2022; Majchrzak et al., 2014). Given the nascent stage of GAI in practice, promoting a human-centered approach allows IS research to go beyond studying the technology only after it has been institutionalized. Rather, it encourages a proactive exploration of the theoretical understanding of how to ‘do’ human-centered GAI and of its practical applications and benefits.

We invite researchers, practitioners, and policymakers to submit articles for a special issue dedicated to exploring the opportunities, challenges, and implications of human-centered GAI. Topics of interest include, but are not limited to, the situated use and design of human-centered GAI, ethical considerations, data protection and privacy, content moderation, and the development of policies and frameworks for human-centered GAI.

  • Designing GAI systems for a positive user experience: What are the principles and best practices for creating generative models that are user-friendly, accessible, and beneficial?
  • The role of human-centered design in preventing misuse of GAI: How can a human-centered approach help to prevent the creation and propagation of harmful or misleading content?
  • Ensuring inclusivity in human-centered GAI: How can we ensure that these systems are designed to be inclusive and equitable for users from diverse backgrounds and abilities?
  • The impact of human-centered GAI on content creation and consumption: How will these systems transform the way we create and consume content, and what are the implications for industries such as media, entertainment, and marketing?
  • The role of user feedback and control in human-centered GAI systems: How can we give users more control over the content generated by AI, and how can user feedback be used to improve these systems?
  • Regulatory considerations for human-centered GAI: What regulatory challenges are associated with the development and deployment of these systems, and how can they be addressed?
  • The future of Human-Computer Interaction (HCI) with GAI: How can human-centered GAI transform the way we interact with computers and other digital devices?
  • Ethical considerations in the development and deployment of human-centered AI systems: How can we ensure that these systems respect human values, privacy, and autonomy?
  • Developing new theories or enhancing existing theoretical frameworks: How do humans interact with Generative Artificial Intelligence?