Exploring the dark side of human-AI interaction
Rezzani A.
Department of Computer Science
Free University of Bozen-Bolzano
Bolzano, BZ, 39100
andrea.rezzani@unibz.it
Menendez Blanco M.
Department of Computer Science
Free University of Bozen-Bolzano
Bolzano, BZ, 39100
maria.menendezblanco@unibz.it
De Angeli A.
Department of Computer Science
Free University of Bozen-Bolzano
Bolzano, BZ, 39100
antonella.deangeli@unibz.it
Abstract
Research on human-AI interaction has received increasing attention in recent years
because of its potential widespread application in critical decision-making contexts
in our society. An essential aspect is how to enable collaboration with AI systems.
This new stream of research has mainly focused on the effects and establishment
of positive human-machine collaboration, i.e., when the relationship is marked
by effective, trusting, fair, and transparent interaction. However, this relationship
can also be characterised by negative interactions, for example when the system
fails or is unable to respond adequately to human needs. In this position paper,
we discuss the importance of investigating the dark side of human-AI interaction,
conceptualised in the broadest sense of the term, from algorithms to robots. This
contribution could inform future research on design choices that promote
collaboration with AI systems.
1 Social interactions with objects, computers, and Artificial Intelligence
In 2021, the European Commission proposed to establish a regulatory framework on Artificial
Intelligence (AI) that aims to ensure the protection of fundamental human rights, as well as safe
development and adoption (https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence).
This framework proposes a human-centred approach to AI to guarantee trust and a high level of
safety when interacting with AI systems. The framework is aligned with many other worldwide
efforts that seek to bring human-centred aspects into AI systems. The bottom line for such efforts
is that the fast-paced technological advances in the development of AI systems for different
applications (e.g., decision-making, risk assessment, healthcare) often stand in contrast to a limited
understanding of the types of interactions produced between AI and humans. Research that
investigates social interactions between humans and computers assumes that they are different from
interactions with inanimate objects. For example, the Computers Are Social Actors (CASA)
approach [1] proposes that people perceive computers as social agents, and therefore tend to adopt
social scripts when interacting with them. This approach highlights that people attribute human
qualities to computers even when they know that they are machines. However, Muller [2] suggested
that this is not a distinctive feature of social interactions with computers, arguing that people also
create relationships with objects such as cars or boats, and often attribute character and meaning to
them, thereby challenging the assumption that computers are the only inanimate social partners
to humans. What could truly distinguish social interaction with a computer from interaction with an
object is the computer's agency, represented as an ability to interact with humans, perceive their
actions and emotions, and respond to them [3]. This opens up interesting opportunities for social
interaction in collaborative settings with intelligent systems. Zooming into the distinctive
characteristics of social interactions with artificial systems brings a paradox to the fore. On the
one hand, people tend to respond to artificial systems with social scripts learned in human-human
interaction, which could seem inappropriate for human-computer interaction. Indeed, people ignore
cues that reveal the essential material nature of a computer and overuse social categories [4]. On
the other hand, they also behave in abusive ways towards artificial systems, in ways they would not
usually do with other people. A possible explanation for this kind of abusive behaviour is that social
interaction is characterised by a sort of power game elicited between humans and computers, in which
the user plays the role of the master and the computer that of the slave [5]. Social scripts and abusive
behaviours become especially relevant when investigating how to establish effective collaboration
between humans and AI systems, particularly considering that the asymmetric power relationship
between humans and computers is being challenged by potential applications of AI systems that can
actually support, or even replace, humans in some complex decision-making processes
(e.g., employment, worker management, educational training).
2 The dark side of human-AI interaction: abusive behaviours
The dark side of interaction usually refers to a phenomenon in which the user's values are replaced by
other stakeholders' values [6]. Dark patterns, interactions, and algorithms deceive or nudge users into
decisions, leading to digital addiction, digital persuasion, data exploitation, and dark models [7]. The
increasing pervasiveness of dark patterns and the difficulty of spotting them in AI systems have triggered
several efforts in HCI and related fields to investigate and propose fairer, more ethically grounded, and
more transparent systems [8]. With the aim of working towards establishing successful collaborations
between humans and artificial agents, we add a perspective that also considers users' behaviour and
reactions. Adopting a perspective that considers the relevance of AI agents as social partners in our
society entails investigating the determinants of this collaboration. Research on HCI has demonstrated
the impact of computer characteristics on the quality of interaction, such as aesthetics and usability [9],
interactivity, and liveliness [10]. However, there is also a negative side to this interaction. Interacting
with computers can evoke errors and frustration due to poor interface design or poor implementation of
human-like features in the system, a process referred to as anthropomorphisation [11]. As a result
of poor interaction and frustration, the user may exhibit antisocial, hostile, and uninhibited behaviour
towards computers [12, 13]. On the topic of social reactions between humans and AI systems, De
Angeli et al. [12, 13] conducted a study on natural conversations with chatterbots, which highlighted
how strongly people may dislike machines. More concretely, they found widespread verbal
abuse in social interactions between humans and chatbots. Interestingly, these results
pointed to distinctive interactions with computers that were different from interactions with both
people and objects. Similarly, in a reproduction of Milgram's experiment on obedience using
a robot, Bartneck et al. [14] found that people had fewer concerns about abusing robots than about
abusing other humans. In this workshop paper, we propose that abusive behaviours could therefore
represent another stream of research that contributes to successful collaborations between humans and
AI systems. Such abusive behaviours may be manifested verbally or physically and are characterised by
negative affect, such as frustration and anger. Hypotheses on physiological aggression, for example
the neurobehavioural fight-or-flight response [15], argue that humans tend to become aggressive
when threat and power conditions are simultaneously present. Consequently, if we consider artificial
systems as inanimate objects, i.e., human-made and human-used tools whose operating mechanism
might be opaque to the user, abusive behaviours can easily emerge.
3 Future directions
During the workshop, we would like to discuss the opportunities and challenges of integrating
psychological perspectives into the analysis and design of AI systems. These are the key questions
we bring to the dialogue. What methodological approaches are best suited to investigating abusive
behaviour? From a psychological perspective, one of the main challenges in the study of abuse is
obtaining unfiltered reactions from users, which may emerge more readily in a private and personal
context than in a laboratory. How can a system recognise abusive behaviour? What are the techniques
adopted in affective computing to identify adverse reactions? How well suited for this purpose are
psychology and neuroscience methodologies, such as questionnaires (e.g., PANAS) and physiological
measures of brain activity (e.g., EEG, fMRI)? Finally, how should artificial systems react to abusive
behaviour? What social impact might these reactions have? For example, it might emerge that the
reactions of artificial systems reinforce possible gender, age, or race stereotypes.
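
As a purely illustrative starting point for the question of how a system might recognise verbal abuse,
the following minimal Python sketch flags abusive or negatively valenced user utterances with a simple
word-list heuristic. The word lists and function names are hypothetical assumptions introduced here for
illustration only; they do not represent a validated affective-computing method, which would instead
rely on validated affect lexicons or trained classifiers.

    import re

    # Hypothetical mini-lexicons, chosen only for illustration; not a
    # validated instrument from the affective-computing literature.
    ABUSE_TERMS = {"stupid", "useless", "idiot", "hate", "shut"}
    NEGATIVE_AFFECT_TERMS = {"angry", "frustrated", "annoyed", "furious"}

    def flag_utterance(text):
        """Return simple lexical evidence of abuse or negative affect."""
        tokens = set(re.findall(r"[a-z']+", text.lower()))
        abuse_hits = sorted(tokens & ABUSE_TERMS)
        affect_hits = sorted(tokens & NEGATIVE_AFFECT_TERMS)
        return {"abusive": bool(abuse_hits),
                "negative_affect": bool(affect_hits),
                "evidence": abuse_hits + affect_hits}

    for utterance in ["You are a stupid, useless machine!",
                      "Thanks, that answer was helpful."]:
        print(utterance, "->", flag_utterance(utterance))

Even such a crude lexical signal makes the methodological questions above concrete: unfiltered
reactions would be needed to build realistic word lists, and any automatic flag raises the further
question of how the system should then react.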
References
[1] Nass, C., Steuer, J., Tauber, E. R. (1994). Computers are social actors. In Proceedings of the SIGCHI
conference on Human factors in computing systems (pp. 72-78).
[2] Muller, M. (2004). Multiple paradigms in affective computing. Interacting with Computers, 16(4), (pp.
759-768).
[3] Picard, R.W., and Klein, J. (2002). Computers that recognize and respond to user emotion: Theoretical and
practical implications. Interacting with Computers, 14(2), (pp. 141-169).
[4] Nass, C., Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of social
issues, 56(1), (pp. 81-103).
[5] De Angeli, A., Brahnam, S. (2008). I hate you! Disinhibition with virtual partners. Interacting with
Computers, 20(3), (pp. 302-310).
[6] Gray, C. M., Kou, Y., Battles, B., Hoggatt, J., Toombs, A. L. (2018). The dark (patterns) side of UX design.
In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-14).
[7] Rogers, Y., Brereton, M., Dourish, P., Forlizzi, J., Olivier, P. (2021). The dark side of interaction design. In
Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-2).
[8] Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International
Journal of Human–Computer Interaction, 36(6), (pp. 495-504).
[9] Tractinsky, N., Lowengart, O. (2007). Web-store aesthetics in e-retailing: A conceptual framework and
some theoretical implications. Academy of Marketing Science Review, 2007, (1).
[10] Sheng, H., Joginapelly, T. (2012). Effects of web atmospheric cues on users’ emotional responses in
e-commerce. AIS Transactions on Human-Computer Interaction, 4(1), (pp.1-24).
[11] Kim, Y., Sundar, S. S. (2012). Anthropomorphism of computers: Is it mindful or mindless?. Computers in
Human Behavior, 28(1), (pp. 241-250).
[12] De Angeli, A., Brahnam, S., Wallis, P. (2005). ABUSE: The dark side of human-computer interaction.
Interact Adjunct Proceedings. (pp. 91-92).
[13] Brahnam, S., De Angeli, A. (2008). Editorial - Abuse and Misuse of Social Agents. Interacting with
Computers, 20(3), (pp. 287-432).
[14] Bartneck, C., Brahnam, S., De Angeli, A., Pelachaud, C. (2008). Editorial - Abuse and Misuse of Interactive
Technologies. Interaction studies: Social Behaviour and Communication in Biological and Artificial Systems,
9(3), (pp. 397-401).
[15] Cannon, W.B. (1929). Bodily changes in pain, hunger, fear and rage: An account of recent researches into
the function of emotional excitement (2nd edition). D. Appleton & Company. https://doi.org/10.1037/10013-000