Call for Papers: Artificial Intelligence in Participatory Environments: Technologies, Ethics, and Literacy Aspects (Societies)

Deadline: 30/06/2025

Organizing Entity:

Societies

Location:

Artificial intelligence (AI) has been a significant object of scholarly attention for decades, profoundly impacting a broad spectrum of academic and industrial fields. In recent years, the area has expanded rapidly alongside participatory tools and media environments. In this age of fragmented information flows and vast amounts of raw data, computational developments and socio-economic changes are facilitating the incorporation of AI technologies into areas ranging from mathematics, engineering, and medical science to psychology, education, media, and communications. Diverse aspects of people’s daily lives are likewise shaped by AI tools and systems. Applications based on machine/deep learning (ML/DL) and natural language processing (NLP) techniques play an increasingly considerable role in living, learning, working, and co-situating in collaborative and participatory environments.

Although such algorithmic approaches and technologies offer significant benefits for society, the risks and challenges they raise demand serious consideration. The demand for ethical codes in the use of AI concerns not only model training and the design of targeted functionalities but also the deployment and implementation of the envisioned services. For example, Cambridge Analytica’s acquisition of Facebook users’ personal data and the role of Twitter bots in the 2016 United States presidential election stand as milestones in the ongoing discussion about AI misuse. Likewise, disinformation problems have intensified substantially with the proliferation of generative content and deep learning models, giving rise to so-called deepfakes, which pose severe threats to our societies and democracies. More broadly, issues of transparency, accountability, and justice deserve consideration. Data integrity, privacy, and security protocols must always be in place whenever users and (crowdsourced) datasets are involved. In this vein, national and international authorities have taken initial steps towards a necessary framework; however, the development of precise regulatory guidelines remains of great importance for security, data protection, and the avoidance of bias and discrimination, among other concerns. Against this background, and since the implications of AI are increasingly omnipresent, literacy and educational initiatives should be prioritized for all actors involved (stakeholders, developers, targeted end users, media and communication professionals, journalists, practitioners, etc.). A multidisciplinary approach can thus shape the context for a deeper understanding and harmless use of AI without overlooking the constantly evolving (technological) landscape.

This call for papers (CfP) aims to shed further light on the perspectives outlined above. We invite researchers to submit original research works related, but not limited, to the following multidisciplinary topics:

  • AI techniques in participatory tools and collaborative environments;
  • AI ethics;
  • AI education and multidisciplinary literacy needs;
  • Audience engagement in data crowdsourcing and annotation tasks;
  • Dataset utilization, ethics, and legal concerns in AI;
  • Participatory media, journalism, and AI perspectives;
  • Hate speech detection using AI;
  • Hate crime prevention using AI;
  • AI tools in misinformation and disinformation detection;
  • AI-assisted forensics tools: legal and ethical concerns;
  • AI-assisted management of media assets and/or use rights: technological and ethical concerns;
  • Technological and ethical concerns of big data;
  • Smart systems for education and collaborative working environments;
  • AI-assisted citizen science: technological limitations, ethics, and training concerns.