Potential Research Topics
We specifically call for original empirical research that engages contexts and topics
exploring both how algorithmic organizing affects organizational actions, activities, and
interactions, and how organizations respond to algorithmic technologies. Research
questions might include, but are not limited to, the following:
- Organizational and Material Agency. Recent theoretical work highlights the unique
ways that deep learning technologies reshape human and organizational agency
(Cameron, 2024; Murray, Rhymer & Sirmon, 2021). For example, recent research
shows how AI can help individuals learn new skills (Gaessler & Piezunka,
forthcoming) and develop their creativity (Amabile, 2019). However, the fundamental
properties of artificial intelligence technologies, such as their ability to learn, adapt,
and make autonomous decisions, require us to critically re-examine the basic
ontological assumptions that have driven prior theorizing. The interactions between
humans and AI differ significantly from those with more traditional technologies due
to AI’s capacity for independent agency and its increasing role in organizational
decision-making. Similarly, scholars may have to critically examine current
epistemological approaches to studying the interface between algorithmic
technologies and human/organizational agency. Digging deeper into the ways in
which people give meaning to this interface, how they shape it and are being shaped
by it, particularly in the context of decision-making processes, requires practice-
based and interpretive approaches that can capture the situated, distributed,
contextual, and often tacit nature of human-algorithm interactions. How have the
deployment and implementation of algorithms and AI reshaped the assembling of
agency in organizational activities and decision-making? How is this changing our
understanding of the effects of technology on organizations and organizing? Which
established conceptual assumptions should be revisited, and what does this mean
for extant theories? What are the epistemological and methodological consequences
of the ways in which algorithmic technologies can be studied, and how and why
would this be different from ways to study other forms of technology?
- Professional Skills, Autonomy and Expertise. Scholars have begun to conduct
empirical research investigating the ways that algorithmic organizing influences how
professionals enact their jobs, tasks and roles and respond to increased
centralization in organizations. What insights can we gain from ethnographic
investigations of how professional work is enacted with these tools in AI-infused
contexts (Anthony, Bechky & Fayard, 2023)? How do algorithms shape or constrain
professional decision-making autonomy? How do algorithms alter professional
problem-solving strategies and decision-making processes? How do organizational
structures and cultures evolve to accommodate the integration of algorithmic
technologies (e.g., Monteiro, 2024)? How do professionals perceive the effects of
algorithms on their autonomy and expertise? How do they develop new skills to
effectively work with algorithmic technologies? How do professionals ensure ethical
and responsible use of algorithms in organizational settings? And how do the
implications of algorithmic technologies for professional expertise differ in distinct
professional and institutional fields?
- Practices and Routines. Big data and the corresponding algorithmic processing of
data typically occur in the context of organizational practices or routines (Glaser,
Valadao & Hannigan, 2021; Omidvar, Safavi & Glaser, 2023). How does the materiality
of algorithms and digital data shape routine dynamics (D’Adderio, 2008)? How do
practices or routines and their outcomes evolve over time after the integration of
algorithms and data analytics? How do organizations determine how transparent or
opaque to be with respect to algorithmic inputs and calculations? How do data-driven insights and predictions modify routine dynamics? How do practices or
routines evolve over time during the implementation of algorithmic initiatives? How
does the adoption of ‘algorithmic routines’ influence an organization’s cultural
dynamics? How do organizations assess the effectiveness of ‘algorithmic routines’,
and how do they monitor routine performance on an ongoing basis?
- Privacy and Surveillance. Management scholars have raised significant concerns
about the organizational use of algorithms and data for purposes of surveillance
(Zuboff, 2019). Newlands (2021) has developed theory about these concepts, while
recent work on the ride-sharing industry (Cameron, 2020) begins to explore these
dynamics empirically by showing how individuals resist surveillance. Other
questions may include: How are organizations navigating the challenges of privacy
and consent in their surveillance practices? How does algorithmic surveillance vary
across distinct cultural contexts? How do organizations resist surveillance
measures?
- AI and Automation. The advent of increasingly capable generative autonomous
agents is expected to have significant effects on those organizations relying on
automation. The increasing sophistication and autonomy of AI-powered systems
necessitate a deeper examination of how these technologies reshape the organizing
dimension of work. Beyond simply replacing human labor, AI and automation have
the potential to fundamentally transform organizational processes, structures, and
decision-making (Brynjolfsson & McAfee, 2014). For example, the integration of
generative AI into robotic systems may enable more adaptive and responsive
automation, allowing organizations to dynamically reconfigure their operations in
response to changing demands or conditions. The hybrid and fluid enmeshing of
humans and machines (Haraway, 1991), embedded in practice as a sociotechnical or
sociomaterial assemblage (Glaser, Pollock, & D’Adderio, 2021), has been developed
theoretically. However, more empirical explorations are needed to develop and
unpack this concept from an organizational perspective. Could hybrid and fluid
agency potentially lead to the emergence of new organizational forms and control
mechanisms, such as algorithmically managed teams or decentralized decision-making enabled by AI? Or might it even change the very nature of control, as when
the separation between controller and controlled becomes fuzzy and AI becomes an
active agent? At the same time, the increasing reliance on AI and automation may
also create new challenges for coordination, communication, and culture within
organizations, as human workers navigate new roles and relationships with their
algorithmic counterparts. To date, empirical investigation of the effects of robotic
telepresence on the coordination of distributed knowledge work and its outcomes is
limited (but see Beane & Orlikowski, 2015), and some important questions remain
unanswered: How
will generative AI, robotic technologies—and their combination—influence
organizational coordination and communication? How will the use of AI-powered
robotic technologies affect organization/customer relationships? What is the role of
AI in task and process automation, and how does this shape the design and
enactment of organizational practices and routines?
- Strategy-as-Practice. Scholars have begun to theorize the relationship between AI
technologies and competitive advantage (Kemp, 2023), and there have been calls for
more work to investigate this empirically (Berg, Raj & Seamans, 2023). How are
organizations integrating algorithmic routines and artificial intelligence into their
strategy-making practices? How do organizations use algorithms to gather
competitive intelligence or survey their competitive landscape? Given the immediacy
of data in algorithmic technologies, how do organizations deal with the ontological
issues of legitimizing knowledge claims? To what extent do algorithmic technologies
enable social processes where insights begin as forms of “provisional knowledge”
(Hannigan, Seidel & Yakis-Douglas, 2018)? How do organizations invoke algorithmic
analysis to conceptualize the future (Wenzel et al., 2020)? How do algorithmic
technologies inform organizational possibilities (Hannigan, 2023)?
- Organizational Values and Ethics. The implementation of AI has substantial
implications for the values and ethics enacted by organizations. Algorithmic
technologies can potentially lead to a mechanization of values that privileges
rationality and efficiency over all other values (Lindebaum, Moser, Ashraf & Glaser,
2023). We know that values can propel entrepreneurship and organizations in
powerful ways (Woolley, Pozner & DeSoucey, 2022), but less is known about how AI
shapes these dynamics. While some attention has been given to the ways in which AI
can encode and reinforce existing organizational values, it is also important to
consider how these technologies may create spaces for the emergence of new
values. For example, the use of AI-powered tools for decision-making or resource
allocation may help surface previously unrecognized biases or blind spots in
organizational processes, prompting a re-invigoration of core values (Lindebaum,
Moser, Ashraf & Glaser, 2023). Similarly, the deployment of AI systems that can
interact with humans in more natural and empathetic ways may lead to a greater
emphasis on values such as trust, care, and emotional intelligence in organizations.
Similar to values, the way that organizations and their members enact ethics and
morality may change as a consequence of AI implementation (Moser, den Hond,
Lindebaum, & Ashraf, 2022). While scholars have begun to problematize the
intersection of ethics and AI (e.g., Dignum, 2018; Martin, 2019), little is known so far
about the ways in which this new technology performs in heterogeneous
assemblages, particularly with regards to the values and interests that algorithms
may embody (den Hond & Moser, 2022). This could lead to interesting potential
research questions: What are the implications of AI for organizational identity as a
result of its transformational effect on values and ethics? How can organizations
design AI systems that align with and support their articulated (ethical) values, while
also remaining open to the possibility of value change and evolution? How do
organizations “digitize” values to embed them in mathematical parameters and how
do they provide space for creating them? How do organizations manage the tension
between maintaining value plurality and adopting analytic and algorithmic
techniques that prioritize efficiency and rationalization? How do organizations deal
with conflicts that might arise between traditional values and newer AI-driven values?
How do employees and managers perceive shifts in organizational values and ethics
associated with AI implementation?
- The Emergence of Algorithmic Affordances. It is increasingly important to
understand the material affordances of the algorithmic technologies used to store
data and facilitate algorithmic processing. Recent empirical work has begun to look
at the processes undergirding the selection of AI tools in the context of work practices
(Lebovitz, Lifshitz-Assaf & Levina, 2023). What are the implications of the selection of
these tools for practitioners, practices, and organizations? How do political dynamics
shape the selection and subsequent implementation of these tools (Glaser, Pollock
& D’Adderio, 2021)? How does organizational culture influence the selection and
acceptance of AI-related tools? What are the cultural barriers or facilitators that may
impact the adoption of AI-specific tools? How do perceptions of AI tools differ among
individuals at distinct levels of the organization? What role do external consultants
play in the AI selection process? And how do external consultants guide their clients
to select and implement AI-related tools?
These suggestions do not provide a comprehensive set of research questions, and we
actively encourage any submission that delves into the mechanisms, processes, practices,
and perceptions associated with empirical phenomena related to data, algorithms, and
artificial intelligence.
Submission
Manuscripts should be submitted through the journal’s online submission system
(http://mc.manuscriptcentral.com/orgstudies). You will need to create a user account if you
do not already have one, and you must select the appropriate Special Issue under the
“Manuscript Type” option. The Guest Editors handle all manuscripts in accordance with the
journal’s policies and procedures; we expect that the authors will follow the journal’s
submission guidelines (http://journals.sagepub.com/home/oss).
- Submissions to the Special Issue will be possible between 15 and 30 November 2025.
- Informal substantive questions can be addressed to Vern Glaser (vglaser@ualberta.ca) or, alternatively, to Christine Moser (c.moser@vu.nl). For administrative support and general queries, please contact
John Kokkonakis, Assistant Managing Editor of Organization Studies, at
orgstudassist@gmail.com.