Using Nonverbal Cues in Empathic Multi-Modal LLM-Driven Chatbots for Mental Health Support
Despite their popularity in providing digital mental health support, mobile conversational agents primarily rely on verbal input, which limits their ability to respond to emotional expressions. We therefore envision using the sensory equipment of today's devices to increase the nonverbal, empathic capabilities of chatbots. We initially validated that multi-modal LLMs (MLLMs) can infer emotional expressions from facial expressions with high accuracy. In a user study (N=200), we then investigated the effects of such multi-modal input on response generation and perceived system empathy in emotional support scenarios. We found significant effects on cognitive and affective dimensions of linguistic expression in system responses, yet no significant increases in perceived empathy. Our research demonstrates the general potential of using nonverbal context to adapt LLM response behavior, providing input for future research on augmented interaction in empathic MLLM-based systems.
2025 | Matthias Schmidmaier et al. | MobileHCI
Topics: Motion Sickness & Passenger Experience; Conversational Chatbots; Human-LLM Collaboration

User Understanding of Privacy Permissions in Mobile Augmented Reality: Perceptions and Misconceptions
Mobile Augmented Reality (AR) applications leverage various sensors to provide immersive user experiences. However, their reliance on diverse data sources introduces significant privacy challenges. This paper investigates user perceptions and understanding of privacy permissions in mobile AR apps through an analysis of existing applications and an online survey of 120 participants. Findings reveal common misconceptions, including confusion about how permissions relate to specific AR functionalities (e.g., location and measurement of physical distances), and misinterpretations of permission labels (e.g., conflating camera and gallery access). We identify a set of actionable implications for designing more usable and transparent privacy mechanisms tailored to mobile AR technologies, including contextual explanations, modular permission requests, and clearer permission labels. These findings offer practical guidance for developers, researchers, and policymakers working to enhance privacy frameworks in mobile AR.
2025 | Viktorija Paneva et al. | MobileHCI
Topics: AR Navigation & Context Awareness; Privacy by Design & User Control

European Users' In-Depth Privacy Concerns with Smartphone Data Collection
Today's context-aware mobile phones allow developers to build intelligent and adaptive applications. The data demand induced by context awareness leads to decreased trust and increased privacy concerns. However, users' deeper reasons and real-world fears that underlie these concerns are not fully understood. We conducted an online survey (N=100) and semi-structured interviews (N=20) to understand users' concerns about smartphone data privacy. We investigated three key areas: general user understanding and misconceptions, specific in-depth concerns, and mitigation strategies. We found that effective transparency and control are the central themes across all areas. Users are concerned about privacy issues negatively impacting their lives, especially through financial loss, physical harm, or manipulation. We show that privacy measures should be implemented with a stronger focus on the user by keeping the user in the loop through transparency and control.
2025 | Florian Bemmann et al. | MobileHCI
Topics: Privacy by Design & User Control; Privacy Perception & Decision-Making

Us-Reflection: Designing for Meaningful Social Interactions
Technology increasingly shapes our social interactions, both online and in person. Strong social connections and face-to-face interactions are vital for wellbeing, especially with close relationships. In this context, technology can play an ambivalent role: whereas it often has a negative impact on the quality of these interactions, it carries potential to enrich conversations and improve social interactions if used in a meaningful way. We design a prototype that implements subtle intervention strategies to foster meaningful technology use, specifically aimed at enhancing close relationships during in-person interactions. We evaluate the prototype within an exploratory, two-week in-the-wild user study with 6 tandems (N=12). Our findings suggest that the strategy of "us-reflection", a social approach to reflection, contributes to mutual awareness of participants' shared time. Our prototype encouraged more meaningful interactions by proposing conversation topics or suggesting activities, ultimately strengthening close relationships and fostering more intentional, engaging, and rewarding social experiences.
2025 | Sophia Sakel et al. | MobileHCI
Topics: Cyberbullying & Online Harassment; Technology Ethics & Critical HCI

When AI Joins the Negotiation Table: Evaluating AI as a Moderator
Negotiation is a crucial decision-making process where parties seek to resolve differences and optimize outcomes. While prior research has focused on maximizing negotiation outcomes, fostering a collaborative atmosphere is essential for long-term relationship-building. This study explores the role of AI-assisted moderation in negotiations that emulate high-stress environments. We developed a text-based AI moderator and evaluated its usability and effectiveness in a two-phase study: a pilot study with 14 participants followed by a final user study with 16 participants. To provide an initial point of comparison, we assessed trust, respect, and equitability in AI-moderated versus non-moderated negotiations. Quantitative findings indicate a negative effect of AI-assisted moderation on relationship-building, while qualitative insights suggest that AI moderation fosters collaboration. However, the cognitive load of text-based facilitation hinders its effectiveness. These results highlight the importance of seamless AI integration and contribute to the broader discourse on AI's role in behavior change and mediated communication.
2025 | Charlotte Kobiella et al. | CUI
Topics: Agent Personality & Anthropomorphism; AI-Assisted Decision-Making & Automation

Situated Artifacts Amplify Engagement in Physical Activity
In the context of rising sedentary lifestyles, this paper investigates the efficacy of "Situated Artifacts" in promoting physical activity. We designed two artifacts that display users' physical activity data within their homes: one physical and one digital. We conducted a 9-week, counterbalanced, within-subject field study with N=24 participants to assess the impact of these artifacts on physical activity, reflection, and motivation. We collected quantitative data on physical activity, administered daily and weekly questionnaires employing individual Likert items and standardized instruments, and conducted interviews after prototype usage. Our findings indicate that while both artifacts act as reminders for physical activity, the physical artifact was superior in terms of user engagement. The study revealed that this can be attributed to the higher perceived presence and, thereby, enhanced social interaction, which acts as a motivational source for activity. In this sense, situated artifacts gently nudge toward sustainable health behavior change.
2025 | Jonas Keppel et al. | DIS
Topics: Fitness Tracking & Physical Activity Monitoring; Sleep & Stress Monitoring

One Does Not Simply Meme Alone: Evaluating Co-Creativity Between LLMs and Humans in the Generation of Humor
Collaboration has been shown to enhance creativity, leading to more innovative and effective outcomes. While previous research has explored the abilities of Large Language Models (LLMs) to serve as co-creative partners in tasks like writing poetry or creating narratives, the collaborative potential of LLMs in humor-rich and culturally nuanced domains remains an open question. To address this gap, we conducted a user study to explore the potential of LLMs in co-creating memes, a humor-driven and culturally specific form of creative expression. We conducted a user study with three groups of 50 participants each: a human-only group creating memes without AI assistance, a human-AI collaboration group interacting with a state-of-the-art LLM, and an AI-only group where the LLM autonomously generated memes. We assessed the quality of the generated memes through crowdsourcing, with each meme rated on creativity, humor, and shareability. Our results showed that LLM assistance increased the number of ideas generated and reduced the effort participants felt. However, it did not improve the quality of the memes when humans collaborated with the LLM. Interestingly, memes created entirely by AI performed better than both human-only and human-AI collaborative memes in all areas on average. However, when looking at the top-performing memes, human-created ones were better in humor, while human-AI collaborations stood out in creativity and shareability. These findings highlight the complexities of human-AI collaboration in creative tasks. While AI can boost productivity and create content that appeals to a broad audience, human creativity remains crucial for content that connects on a deeper level.
2025 | Zhikun Wu et al. | IUI
Topics: Generative AI (Text, Image, Music, Video); AI-Assisted Creative Writing

CreepyCoCreator? Investigating AI Representation Modes for 3D Object Co-Creation in Virtual Reality
Generative AI in Virtual Reality offers the potential for collaborative object-building, yet challenges remain in aligning AI contributions with user expectations. In particular, users often struggle to understand and collaborate with AI when its actions are not transparently represented. This paper thus explores the co-creative object-building process through a Wizard-of-Oz study, focusing on how AI can effectively convey its intent to users during object customization in Virtual Reality. Inspired by human-to-human collaboration, we focus on three representation modes: the presence of an embodied avatar, whether the AI's contributions are visualized immediately or incrementally, and whether the areas modified are highlighted in advance. The findings provide insights into how these factors affect user perception and interaction with object-generating AI tools in Virtual Reality as well as satisfaction and ownership of the created objects. The results offer design implications for co-creative world-building systems, aiming to foster more effective and satisfying collaborations between humans and AI in Virtual Reality.
2025 | Julian Rasch et al. (LMU Munich) | CHI
Topics: Mixed Reality Workspaces; Creative Collaboration & Feedback Systems

The TaPSI Research Framework - A Systematization of Knowledge on Tangible Privacy and Security Interfaces
This paper presents a comprehensive Systematization of Knowledge on tangible privacy and security interfaces (TaPSI). Tangible interfaces provide physical forms for digital interactions. They can offer significant benefits for privacy and security applications by making complex and abstract security concepts more intuitive, comprehensible, and engaging. Through a literature survey, we collected and analyzed 80 publications. We identified the terminology used in these publications and analyzed the addressed usable privacy and security domains, contributions, applied methods, implementation details, and opportunities or challenges inherent to TaPSI. Based on our findings, we define TaPSI and propose the TaPSI Research Framework, which guides future research by offering insights into when and how to conduct research on privacy and security involving TaPSI, as well as a design space of TaPSI.
2025 | Sarah Delgado Rodriguez et al. (University of the Bundeswehr Munich) | CHI
Topics: Privacy by Design & User Control; Passwords & Authentication; Privacy Perception & Decision-Making

Exploring the Effect of Music on User Typing and Identification through Keystroke Dynamics
This paper explores the relationship between music and keyboard typing behavior. In particular, we focus on how music affects keystroke-based authentication systems. To this end, we conducted an online experiment (N=43), where participants were asked to replicate paragraphs of text while listening to music at varying tempos and loudness levels across two sessions. Our findings reveal that listening to music leads to more errors, and to faster typing if the music is fast. Identification through a biometric model was improved when music was played either during its training or testing. This hints at the potential of music for increasing identification performance, and at a tradeoff between this benefit and user distraction. Overall, our research sheds light on typing behavior and introduces music as a subtle and effective tool to influence user typing behavior in the context of keystroke-based authentication.
2025 | Lukas Mecke et al. (LMU Munich; University of the Bundeswehr Munich) | CHI
Topics: Vibrotactile Feedback & Skin Stimulation; Explainable AI (XAI); Passwords & Authentication

Understanding the Influence of Electrical Muscle Stimulation on Motor Learning: Enhancing Motor Learning or Disrupting Natural Progression?
Electrical Muscle Stimulation (EMS) induces muscle movement through external currents, offering a novel approach to motor learning. Researchers have investigated EMS as an alternative to conventional non-movement-inducing feedback techniques, such as vibrotactile and electrotactile feedback. While EMS shows promise in areas such as dance, sports, and motor skill acquisition, neurophysiological models of motor learning make conflicting predictions about the impact of externally induced movements on sensorimotor representations. This study evaluated EMS against electrotactile feedback and a control condition in a two-session experiment assessing fast learning, consolidation, and learning transfer. Our results suggest an overall positive impact of EMS on motor learning. Although traditional electrotactile feedback had a higher learning rate, EMS increased the learning plateau, as measured by a three-factor exponential decay model. This study provides empirical evidence supporting EMS as a viable method for motor augmentation and skill transfer, contributing to understanding its role in motor learning.
2025 | Steeven Villa et al. (LMU Munich) | CHI
Topics: Vibrotactile Feedback & Skin Stimulation; Electrical Muscle Stimulation (EMS)

PrivacyHub: A Functional Tangible and Digital Ecosystem for Interoperable Smart Home Privacy Awareness and Control
Hubs are at the core of most smart homes. Modern cross-ecosystem protocols and standards enable smart home hubs to achieve interoperability across devices, offering the unique opportunity to integrate universally available smart home privacy awareness and control features. To date, such privacy features mainly focus on individual products or prototypical research artifacts. We developed a cross-ecosystem hub featuring a tangible dashboard and a digital web application to deepen our understanding of how smart home users interact with functional privacy features. The ecosystem allows users to control the connectivity states of their devices and raises awareness by visualizing device positions, states, and data flows. We deployed the ecosystem in six households for one week and found that it increased participants' perceived control, awareness, and understanding of smart home privacy. We further found distinct differences between tangible and digital mechanisms. Our findings highlight the value of cross-ecosystem hubs for effective privacy management.
2025 | Maximiliane Windl et al. (LMU Munich; Munich Center for Machine Learning (MCML)) | CHI
Topics: Privacy by Design & User Control; Privacy Perception & Decision-Making; Smart Home Privacy & Security

AR You on Track? Investigating Effects of Augmented Reality Anchoring on Dual-Task Performance While Walking
With the increasing spread of AR head-mounted displays suitable for everyday use, interaction with information becomes ubiquitous, even while walking. However, this requires constant shifts of our attention between walking and interacting with virtual information to fulfill both tasks adequately. Accordingly, we as a community need a thorough understanding of the mutual influences of walking and interacting with digital information to design safe yet effective interactions. Thus, we systematically investigate the effects of different AR anchors (hand, head, torso) and task difficulties on user experience and performance. We engage participants (n=26) in a dual-task paradigm involving a visual working memory task while walking. We assess the impact of dual-tasking on both virtual and walking performance, and subjective evaluations of mental and physical load. Our results show that head-anchored AR content least affected walking while allowing for fast and accurate virtual task interaction, while hand-anchored content increased reaction times and workload.
2025 | Julian Rasch et al. (LMU Munich) | CHI
Topics: Full-Body Interaction & Embodied Input; AR Navigation & Context Awareness

An Approach to Elicit Human-Understandable Robot Expressions to Support Human-Robot Interaction
Understanding the intentions of robots is essential for natural and seamless human-robot collaboration. Ensuring that robots have means for non-verbal communication is a basis for intuitive and implicit interaction. For this, we describe an approach to elicit and design human-understandable robot expressions. We outline the approach in the context of non-humanoid robots. We paired human mimicking and enactment with research from gesture elicitation in two phases: first, to elicit expressions, and second, to ensure they are understandable. We present an example application through two studies (N=16 & N=260) of our approach to elicit expressions for a simple 6-DoF robotic arm. We show that the approach enabled us to design robot expressions that signal curiosity and interest in getting attention. Our main contribution is an approach to generate and validate understandable expressions for robots, enabling more natural human-robot interaction.
2025 | Jan Leusmann et al. (LMU Munich) | CHI
Topics: Hand Gesture Recognition; Social Robot Interaction; Human-Robot Collaboration (HRC)

Developing and Validating the Perceived System Curiosity Scale (PSC): Measuring Users' Perceived Curiosity of Systems
Like humans, today's systems, such as robots and voice assistants, can express curiosity to learn and engage with their surroundings. While curiosity is a well-established human trait that enhances social connections and drives learning, no existing scales assess the perceived curiosity of systems. Thus, we introduce the Perceived System Curiosity (PSC) scale to determine how users perceive curious systems. We followed a standardized process of developing and validating scales, resulting in a validated 12-item scale with 3 individual sub-scales measuring explorative, investigative, and social dimensions of system curiosity. In total, we generated 831 items based on literature and recruited 414 participants for item selection and 320 additional participants for scale validation. Our results show that the PSC scale has inter-item reliability as well as convergent and construct validity. Thus, this scale provides an instrument to systematically explore how perceived curiosity influences interactions with technical systems.
2025 | Jan Leusmann et al. (LMU Munich) | CHI
Topics: Brain-Computer Interface (BCI) & Neurofeedback; Agent Personality & Anthropomorphism; Generative AI (Text, Image, Music, Video)

The Illusion of Privacy: Investigating User Misperceptions in Browser Tracking Protection
Third parties track users' web browsing activities, raising privacy concerns. Tracking protection extensions prevent this, but how narratives about such extensions shape users' beliefs about their protection remains uncertain. This paper investigates users' misperception of the tracking protection offered by browser plugins. Our study explores how different narratives influence users' perceived privacy protection by examining three tracking protection extension narratives: no protection, functional protection, and a placebo. In a study (N=36), participants evaluated their anticipated protection during a hotel booking process, influenced by the narrative about the plugin's functionality. However, participants viewed the same website without tracking protection adaptations. We show that users feel more protected when informed they use a functional or placebo extension, compared to no protection. Our findings highlight the deceptive nature of misleading privacy tools, emphasizing the need for greater transparency to prevent users from developing a false sense of protection, as such misleading tools negatively affect user study results.
2025 | Maximiliane Windl et al. (LMU Munich; Munich Center for Machine Learning (MCML)) | CHI
Topics: Privacy by Design & User Control; Privacy Perception & Decision-Making

Preventing Harmful Data Practices by using Participatory Input to Navigate the Machine Learning Multiverse
In light of inherent trade-offs regarding fairness, privacy, interpretability and performance, as well as normative questions, the machine learning (ML) pipeline needs to be made accessible for public input, critical reflection and engagement of diverse stakeholders. In this work, we introduce a participatory approach to gather input from the general public on the design of an ML pipeline. We show how people's input can be used to navigate and constrain the multiverse of decisions during both model development and evaluation. We highlight that central design decisions should be democratized rather than "optimized" to acknowledge their critical impact on the system's output downstream. We describe the iterative development of our approach and its exemplary implementation on a citizen science platform. Our results demonstrate how public participation can inform critical design decisions along the model-building pipeline and combat widespread lazy data practices.
2025 | Jan Simson et al. (LMU Munich, Institute of Statistics; Munich Center for Machine Learning (MCML)) | CHI
Topics: AI Ethics, Fairness & Accountability; Algorithmic Fairness & Bias; Participatory Design

Designing Effective Consent Mechanisms for Spontaneous Interactions in Augmented Reality
Ubiquitous computing devices like Augmented Reality (AR) glasses allow countless spontaneous interactions, all serving different goals. AR devices rely on data transfer to personalize recommendations and adapt to the user. Today's consent mechanisms, such as privacy policies, are suitable for long-lasting interactions; however, how users can consent to fast, spontaneous interactions is unclear. We first conducted two focus groups (N=17) to identify privacy-relevant scenarios in AR. We then conducted expert interviews (N=11) with co-design activities to establish effective consent mechanisms. Based on that, we contribute (1) a validated scenario taxonomy to define privacy-relevant AR interaction scenarios, (2) a flowchart to decide on the type of mechanisms considering contextual factors, (3) a design continuum and design aspects chart to create the mechanisms, and (4) a trade-off and prediction chart to evaluate the mechanism. Thus, we contribute a conceptual framework fostering a privacy-preserving future with AR.
2025 | Maximiliane Windl et al. (LMU Munich; Munich Center for Machine Learning (MCML)) | CHI
Topics: Context-Aware Computing; Smart Home Privacy & Security

"When Two Wrongs Don't Make a Right" - Examining Confirmation Bias and the Role of Time Pressure During Human-AI Collaboration in Computational Pathology
Artificial intelligence (AI)-based decision support systems hold promise for enhancing diagnostic accuracy and efficiency in computational pathology. However, human-AI collaboration can introduce and amplify cognitive biases, such as confirmation bias through false confirmation, which arises when erroneous human opinions are reinforced by inaccurate AI output. This bias may increase under time pressure, a ubiquitous factor in routine pathology, as it strains practitioners' cognitive resources. We quantified confirmation bias triggered by AI-induced false confirmation and examined the role of time constraints in a web-based experiment, where trained pathology experts (n=28) estimated tumor cell percentages. Our results suggest that AI integration fuels confirmation bias, evidenced by a statistically significant positive linear mixed-effects model coefficient linking AI recommendations that mirrored flawed human judgment to alignment with system advice. Conversely, time pressure appeared to weaken this relationship. These findings highlight potential risks of AI in healthcare and aim to support the safe integration of clinical decision support systems.
2025 | Emely Rosbach et al. (Technische Hochschule Ingolstadt) | CHI
Topics: Explainable AI (XAI); AI-Assisted Decision-Making & Automation; AI Ethics, Fairness & Accountability

Investigating LLM-Driven Curiosity in Human-Robot Interaction
Integrating curious behavior traits into robots is essential for them to learn and adapt to new tasks over their lifetime and to enhance human-robot interaction. However, the effects of robots expressing curiosity on user perception, user interaction, and user experience in collaborative tasks are unclear. In this work, we present a Multimodal Large Language Model-based system that equips a robot with non-verbal and verbal curiosity traits. We conducted a user study (N=20) to investigate how these traits modulate the robot's behavior and the users' impressions of sociability and quality of interaction. Participants prepared cocktails or pizzas with a robot, which was either curious or non-curious. Our results show that we could create user-centric curiosity, which users perceived as more human-like, inquisitive, and autonomous while resulting in a longer interaction time. We contribute a set of design recommendations allowing system designers to take advantage of curiosity in collaborative tasks.
2025 | Jan Leusmann et al. (LMU Munich) | CHI
Topics: Human-LLM Collaboration; Social Robot Interaction; Human-Robot Collaboration (HRC)