Call for Papers: AI Use in Marginalized Media Markets (Media and Communication)

Opens: 01/11/2025 Closes: 15/11/2025

Organizing Entity:

Media and Communication

Uncivil online communication, such as personal harassment, hate speech, or hateful misinformation, poses a pressing challenge in Western democracies and beyond. Most users regularly encounter such incidents, with marginalised groups and professional communicators like journalists being particularly affected. The consequences for individuals, digital discourse, and society at large are profound. Identifying these incidents and responding with counterspeech or reporting can help mitigate their impact. In addition to, or in support of, interventions by internet users, AI could play a crucial role in detecting and addressing uncivil online communication.

For example, journalists and fact-checkers can leverage AI tools to identify uncivil online comments, enabling them to manually moderate, verify, and debunk harmful content (Dierickx & Lindén, 2023; Stoll et al., 2019). Likewise, citizens who regularly engage in counterspeech can benefit from AI tools that provide factual support, help maintain emotional detachment, and offer assistance when they face harmful speech in response to their counterspeech efforts (Mun et al., 2024; Obermaier et al., 2023). However, counterspeakers themselves are concerned about the potentially negative effects of AI on people’s perceptions of the authenticity of counterspeech, their own agency, and the functionality of counterspeech (Mun et al., 2024). Similarly, users’ willingness to engage with innovative counterspeech technologies varies with the specific characteristics of the technology, such as the risk of further depleting already limited resources (Frischlich et al., 2024).

This thematic issue aims to consolidate cutting-edge research on the use of AI for detecting and countering uncivil online communication, user perceptions of AI use in counterspeech, and the associated risks and opportunities of this AI application. Potential contributions can include, but are not limited to, articles that:

  • Develop, test, or employ AI to detect or respond to uncivil communication or counterspeech;
  • Study the perspectives of senders, targets, bystanders, moderators, etc., on the employment of AI;
  • Present or discuss theoretical frameworks for understanding human–AI relationships in the context of counterspeech;
  • Reflect on normative or regulatory frameworks around AI and counterspeech;
  • Employ qualitative, quantitative, or computational measures.
