Using Nonverbal Cues in Empathic Multi-Modal LLM-Driven Chatbots for Mental Health Support
Despite their popularity in providing digital mental health support, mobile conversational agents primarily rely on verbal input, which limits their ability to respond to emotional expressions. We therefore envision using the sensory equipment of today's devices to increase the nonverbal, empathic capabilities of chatbots. We initially validated that multi-modal LLMs (MLLMs) can infer emotions from facial expressions with high accuracy. In a user study (N=200), we then investigated the effects of such multi-modal input on response generation and perceived system empathy in emotional support scenarios. We found significant effects on cognitive and affective dimensions of linguistic expression in system responses, yet no significant increase in perceived empathy. Our research demonstrates the general potential of using nonverbal context to adapt LLM response behavior, providing input for future research on augmented interaction in empathic MLLM-based systems.
2025 · Matthias Schmidmaier et al. · MobileHCI
European Users' In-Depth Privacy Concerns with Smartphone Data Collection
Today's context-aware mobile phones allow developers to build intelligent and adaptive applications. The data demand induced by context awareness leads to decreased trust and increased privacy concerns. However, users' deeper reasons and real-world fears that underlie these concerns are not fully understood. We conducted an online survey (N=100) and semi-structured interviews (N=20) to understand users' concerns about smartphone data privacy. We investigated three key areas: general user understanding and misconceptions, specific in-depth concerns, and mitigation strategies. We found that effective transparency and control are the central themes across all areas. Users are concerned about privacy issues negatively impacting their lives, especially through financial loss, physical harm, or manipulation. We show that privacy measures should be implemented with a stronger focus on the user by keeping the user in the loop through transparency and control.
2025 · Florian Bemmann et al. · MobileHCI
Situated Artifacts Amplify Engagement in Physical Activity
In the context of rising sedentary lifestyles, this paper investigates the efficacy of "Situated Artifacts" in promoting physical activity. We designed two artifacts that display users' physical activity data within their homes - one physical and one digital. We conducted a 9-week, counterbalanced, within-subject field study with N=24 participants to assess the impact of these artifacts on physical activity, reflection, and motivation. We collected quantitative data on physical activity, administered daily and weekly questionnaires employing individual Likert items and standardized instruments, and conducted interviews after prototype usage. Our findings indicate that while both artifacts act as reminders for physical activity, the physical artifact was superior in terms of user engagement. The study revealed that this can be attributed to its higher perceived presence and, thereby, enhanced social interaction, which acts as a motivational source for activity. In this sense, situated artifacts gently nudge toward sustainable health behavior change.
2025 · Jonas Keppel et al. · DIS
An Approach to Elicit Human-Understandable Robot Expressions to Support Human-Robot Interaction
Understanding the intentions of robots is essential for natural and seamless human-robot collaboration. Ensuring that robots have means for non-verbal communication is a basis for intuitive and implicit interaction. For this, we describe an approach to elicit and design human-understandable robot expressions. We outline the approach in the context of non-humanoid robots. We paired human mimicking and enactment with research from gesture elicitation in two phases: first, to elicit expressions, and second, to ensure they are understandable. We present an example application of our approach through two studies (N=16 & N=260) to elicit expressions for a simple 6-DoF robotic arm. We show that the approach enabled us to design robot expressions that signal curiosity and interest in getting attention. Our main contribution is an approach to generate and validate understandable expressions for robots, enabling more natural human-robot interaction.
2025 · Jan Leusmann et al. · LMU Munich · CHI
Developing and Validating the Perceived System Curiosity Scale (PSC): Measuring Users' Perceived Curiosity of Systems
Like humans, today's systems, such as robots and voice assistants, can express curiosity to learn and engage with their surroundings. While curiosity is a well-established human trait that enhances social connections and drives learning, no existing scales assess the perceived curiosity of systems. Thus, we introduce the Perceived System Curiosity (PSC) scale to determine how users perceive curious systems. We followed a standardized process of developing and validating scales, resulting in a validated 12-item scale with three sub-scales measuring explorative, investigative, and social dimensions of system curiosity. In total, we generated 831 items based on the literature and recruited 414 participants for item selection and 320 additional participants for scale validation. Our results show that the PSC scale has inter-item reliability as well as convergent and construct validity. Thus, this scale provides an instrument to systematically explore how perceived curiosity influences interactions with technical systems.
2025 · Jan Leusmann et al. · LMU Munich · CHI
Designing Effective Consent Mechanisms for Spontaneous Interactions in Augmented Reality
Ubiquitous computing devices like Augmented Reality (AR) glasses allow countless spontaneous interactions - all serving different goals. AR devices rely on data transfer to personalize recommendations and adapt to the user. Today's consent mechanisms, such as privacy policies, are suitable for long-lasting interactions; however, how users can consent to fast, spontaneous interactions is unclear. We first conducted two focus groups (N=17) to identify privacy-relevant scenarios in AR. We then conducted expert interviews (N=11) with co-design activities to establish effective consent mechanisms. Based on that, we contribute (1) a validated scenario taxonomy to define privacy-relevant AR interaction scenarios, (2) a flowchart to decide on the type of mechanism considering contextual factors, (3) a design continuum and design aspects chart to create the mechanisms, and (4) a trade-off and prediction chart to evaluate the mechanism. Thus, we contribute a conceptual framework fostering a privacy-preserving future with AR.
2025 · Maximiliane Windl et al. · LMU Munich; Munich Center for Machine Learning (MCML) · CHI
Investigating LLM-Driven Curiosity in Human-Robot Interaction
Integrating curious behavior traits into robots is essential for them to learn and adapt to new tasks over their lifetime and to enhance human-robot interaction. However, the effects of robots expressing curiosity on user perception, user interaction, and user experience in collaborative tasks are unclear. In this work, we present a Multimodal Large Language Model-based system that equips a robot with non-verbal and verbal curiosity traits. We conducted a user study (N=20) to investigate how these traits modulate the robot's behavior and the users' impressions of sociability and quality of interaction. Participants prepared cocktails or pizzas with a robot, which was either curious or non-curious. Our results show that we could create user-centric curiosity, which users perceived as more human-like, inquisitive, and autonomous while resulting in a longer interaction time. We contribute a set of design recommendations allowing system designers to take advantage of curiosity in collaborative tasks.
2025 · Jan Leusmann et al. · LMU Munich · CHI
Privacy Slider: Fine-Grain Privacy Control for Smartphones
Today, users are constrained by binary choices when configuring permissions. These binary choices contrast with the complexity of the collected data, limiting user control and transparency. For instance, weather applications do not need exact user locations when merely inquiring about local weather conditions. We envision sliders to empower users to fine-tune permissions. First, we ran two online surveys (N=123 & N=109) and a workshop (N=5) to develop the initial design of Privacy Slider. After the implementation phase, we evaluated our functional prototype in a lab study (N=32). The results show that our slider design for permission control outperforms today's system on all measures, including control and transparency.
2024 · Florian Bemmann et al. · MobileHCI
Understanding the Impact of the Reality-Virtuality Continuum on Visual Search using Physiological Measures
While Mixed Reality allows the seamless blending of digital content into users' surroundings, it is not clear whether such a fusion of digital and physical information impacts users' perceptual and cognitive resources differently. Although the fusion of real and virtual objects provides numerous opportunities to present additional information, it also introduces undesirable side effects, such as split attention and increased visual complexity. We conducted a visual search study in three manifestations of mixed reality to understand the effects of the environment on visual search behavior. We performed a multimodal evaluation using EEG and eye-tracking correlates of search efficiency, distractor suppression, and attention allocation, alongside behavioral measures. We found that, independently of the perceptual load, Augmented Reality environments reduce users' capacity to identify target information and suppress irrelevant stimuli. Participants also reported AR as more demanding and distracting. We discuss design implications for MR interfaces based on physiological inputs for adaptive interactions.
2024 · Francesco Chiossi et al. · MobileHCI
Exploring Users' Mental Models and Privacy Concerns During Interconnected Interactions
Users frequently use their smartphones in combination with other smart devices, for example, when streaming music to smart speakers or controlling smart appliances. During these interconnected interactions, user data gets handled and processed by several entities that employ different data protection practices or are subject to different regulations. Users need to understand these processes to inform themselves in the right places and make informed privacy decisions. We conducted an online survey (N=120) to investigate whether users have correct mental models about interconnected interactions. We found that users consider scenarios more privacy-concerning when multiple devices are involved. Yet, we also found that most users do not fully comprehend the privacy-relevant processes in interconnected interactions. Our results show that current privacy information methods are insufficient and that users must be better educated to make informed privacy decisions. Finally, we advocate for restricting data processing to the app layer and better encryption to reduce users' data protection responsibilities.
2024 · Maximiliane Windl et al. · MobileHCI
The Impact of Data Privacy on Users' Smartphone App Adoption Decisions
Mobile smartphone applications can fuel themselves with a large and diverse set of data. Apps thereby become aware of the user and their context, enabling intelligent and adaptive applications. However, such data poses severe privacy risks. Although users are only partially aware of these risks, awareness increases with the proliferation of privacy-enhancing technologies. This leads to a lower adoption rate of data-heavy smartphone apps, as non-usage is often the user's only option for self-protection. How exactly privacy concerns affect app adoption is unclear. Prior studies established that privacy concerns are an issue and that the lack of sufficient privacy-enhancing technologies lowers app adoption. However, it is unclear which privacy-relevant aspects are mainly responsible for this effect and to what extent it matters to users. We conducted a survey (N=100) to investigate the relationship between privacy-relevant app and publisher characteristics and users' intention to install and use an app. We found that users are especially critical of content-rich data types and of apps that have rights to perform actions on their behalf. On the other hand, the expectation of a productivity benefit induced by the app can increase adoption intention. Our findings show which aspects designers of privacy-enhancing technologies should focus on to meet the demand for more user-centered privacy.
2024 · Florian Bemmann et al. · MobileHCI
Sitting Posture Recognition and Feedback: A Literature Review
Extensive sitting is unhealthy; thus, countermeasures are needed to react to the ongoing trend toward more prolonged sitting. A variety of studies and guidelines have long addressed the question of how we can improve our sitting habits. Nevertheless, sitting time is still increasing. Here, smart devices can provide a general overview of sitting habits and more nuanced feedback on the user's sitting posture. Based on a literature review (N=223), including publications from engineering, computer science, medical sciences, electronics, and more, our work guides developers of posture systems. There is a large variety of approaches, with pressure-sensing hardware and visual feedback being the most prominent. We found factors such as environment, cost, privacy concerns, portability, and accuracy to be important when deciding on hardware and feedback types. Further, one should consider the user's capabilities, preferences, and tasks. Regarding user studies on sitting posture feedback, there is a need for better comparability and for investigating long-term effects.
2024 · Christian Krauter et al. · University of Stuttgart · CHI
Perceived Empathy of Technology Scale (PETS): Measuring Empathy of Systems Toward the User
Affective computing is improving rapidly, allowing systems to process human emotions. This enables systems such as conversational agents or social robots to show empathy toward users. While there are various established methods to measure the empathy of humans, there is no reliable and validated instrument to quantify the perceived empathy of interactive systems. Thus, we developed the Perceived Empathy of Technology Scale (PETS) to assess and compare how empathic users perceive technology. We followed a standardized multi-phase process of developing and validating scales. In total, we invited 30 experts for item generation, 324 participants for item selection, and 396 additional participants for scale validation. We developed our scale using 22 scenarios with opposing empathy levels, ensuring the scale is universally applicable. This resulted in the PETS, a 10-item, 2-factor scale. The PETS allows designers and researchers to rapidly evaluate and compare the perceived empathy of interactive systems.
2024 · Matthias Schmidmaier et al. · LMU Munich · CHI
Exploring Smart Standing Desks to Foster a Healthier Workplace
Sedentary behavior is endemic in modern workplaces, contributing to negative physical and mental health outcomes. Although adjustable standing desks are increasing in popularity, people still avoid standing. We developed an open-source plug-and-play system to remotely control standing desks and investigated three system modes in a three-week in-the-wild user study (N=15). Interval mode forces users to stand once per hour, causing frustration. Adaptive mode nudges users to stand every hour unless the user has already stood. Smart mode, which raises the desk during breaks, was rated best, contributing to increased standing time with the most positive qualitative feedback. However, non-computer activities need to be accounted for in the future. Our results therefore indicate that a smart standing desk that shifts modes at opportune times has the most potential to reduce sedentary behavior in the workplace. We contribute our open-source system and insights for future intelligent workplace well-being systems. https://doi.org/10.1145/3596260
2023 · Luke Haliburton et al. · UbiComp
A Mixed-Method Exploration into the Mobile Phone Rabbit Hole
Smartphones provide various functions supporting users in their daily lives. However, the temptation to get distracted and tune out is high, leading to so-called rabbit holes. To quantify rabbit hole behavior, we developed an Android tracking application that collects smartphone usage data enriched with experience sampling questionnaires. We analyzed 14,395 smartphone use sessions from 21 participants, collected over two weeks, showing that rabbit hole sessions are significantly longer and contain more user interaction, revealing a certain level of restlessness in use. The context of rabbit hole sessions and subjective results revealed different triggers for spending more time on the phone. We then conducted an expert focus group (N=6) to put the gained insights into perspective and formulate a definition of the mobile phone rabbit hole. Our results form the foundation for predicting and communicating the mobile phone rabbit hole, especially when prolonged smartphone use results in regret.
2023 · Nađa Terzimehić et al. · MobileHCI
Adapting Visual Complexity Based on Electrodermal Activity Improves Performance in Virtual Reality
Biocybernetic loops encompass users' state detection and system adaptation based on physiological signals. Current adaptive systems limit the adaptation to task features such as task difficulty or multitasking demands. However, virtual reality allows the manipulation of task-irrelevant elements in the environment. We present a physiologically adaptive system that adjusts the virtual environment based on physiological arousal, i.e., electrodermal activity. We conducted a user study with our adaptive system in social virtual reality to verify improved performance. Here, participants completed an n-back task, and we adapted the visual complexity of the environment by changing the number of non-player characters. Our results show that an adaptive virtual reality can control users' comfort, performance, and workload by adapting the visual complexity based on physiological arousal. Thus, our physiologically adaptive system improves task performance and reduces perceived workload. Finally, we embed our findings in physiological computing and discuss applications in various scenarios.
2023 · Francesco Chiossi et al. · MobileHCI
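The biocybernetic loop this abstract describes - mapping electrodermal arousal to the number of non-player characters - can be sketched as a simple threshold controller. The target arousal band, step size, and NPC bounds below are illustrative assumptions, not parameters from the study.

```python
def adapt_npc_count(eda_arousal: float, low: float, high: float,
                    current_npcs: int, min_npcs: int = 0,
                    max_npcs: int = 10) -> int:
    """One step of a biocybernetic loop: nudge visual complexity so that
    physiological arousal stays inside a target band [low, high].

    eda_arousal is a normalized electrodermal activity measure; thresholds
    and the +/-1 step size are hypothetical choices for illustration.
    """
    if eda_arousal > high:
        # Over-aroused: lower visual complexity by removing an NPC.
        return max(min_npcs, current_npcs - 1)
    if eda_arousal < low:
        # Under-aroused: raise visual complexity by adding an NPC.
        return min(max_npcs, current_npcs + 1)
    # Within the target band: keep the environment stable.
    return current_npcs
```

Called once per sensing window, such a controller gradually settles the environment at a complexity level where the user's measured arousal stays within the band.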
SensCon: Embedding Physiological Sensing into Virtual Reality Controllers
Virtual reality experiences increasingly use physiological data for virtual environment adaptations to evaluate user experience and immersion. Previous research required complex medical-grade equipment to collect physiological data, limiting real-world applicability. To overcome this, we present SensCon for skin conductance and heart rate data acquisition. To identify the optimal sensor location in the controller, we conducted a first study investigating users' controller grasp behavior. In a second study, we evaluated the performance of SensCon against medical-grade devices in six scenarios regarding user experience and signal quality. Users subjectively preferred SensCon in terms of usability and user experience. Moreover, the signal quality evaluation showed satisfactory accuracy across static, dynamic, and cognitive scenarios. Therefore, SensCon reduces the complexity of capturing physiological data and adapting the environment to it in real time. By open-sourcing SensCon, we enable researchers and practitioners to adapt their virtual reality environment effortlessly. Finally, we discuss possible use cases for virtual reality-embedded physiological sensing.
2023 · Francesco Chiossi et al. · MobileHCI
Using Pseudo-Stiffness to Enrich the Haptic Experience in Virtual Reality
Providing users with a haptic sensation of the hardness and softness of objects in virtual reality is an open challenge. While physical props and haptic devices help, their haptic properties do not allow for dynamic adjustments. To overcome this limitation, we present a novel technique for changing the perceived stiffness of objects based on a visuo-haptic illusion. We achieved this by manipulating the hand's Control-to-Display (C/D) ratio in virtual reality while pressing down on an object with fixed stiffness. In the first study (N=12), we determined the detection thresholds of the illusion. Our results show that we can exploit a C/D ratio from 0.7 to 3.5 without user detection. In the second study (N=12), we analyzed the illusion's impact on perceived stiffness. Our results show that participants perceive objects to be up to 28.1% softer and 8.9% stiffer, allowing for various haptic applications in virtual reality.
2023 · Yannick Weiss et al. · LMU Munich · CHI
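The C/D-ratio manipulation behind this pseudo-stiffness illusion can be illustrated with a minimal sketch. The naive stiffness model below - assuming perception fully follows the displayed displacement of a linear spring - is our simplifying assumption for illustration, not the paper's psychophysical model or reported effect sizes.

```python
def virtual_displacement(physical_mm: float, cd_ratio: float) -> float:
    """Scale the real hand displacement by the Control-to-Display (C/D) ratio.

    A ratio > 1 makes the virtual hand travel farther than the physical
    hand; a ratio < 1 makes it travel less.
    """
    return physical_mm * cd_ratio


def perceived_stiffness(real_stiffness: float, cd_ratio: float) -> float:
    """Naive model of the illusion for a linear spring (F = k * d).

    The prop's force is unchanged, but the displayed compression is scaled
    by cd_ratio, so the visually implied stiffness is k' = F / (d * ratio)
    = k / ratio: ratios > 1 make the object appear softer, ratios < 1
    stiffer. Real perception blends visual and haptic cues, so actual
    effects are smaller than this model suggests.
    """
    return real_stiffness / cd_ratio
```

For example, pressing a fixed prop 10 mm while the virtual hand is shown moving 20 mm (ratio 2.0) visually halves the implied stiffness under this model.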
Deep Learning Super-Resolution Network Facilitating Fiducial Tangibles on Capacitive Touchscreens
In recent years, we have seen many approaches using tangibles to address the limited expressiveness of touchscreens. Mainstream tangible detection uses fiducial markers embedded in the tangibles. However, the coarse sensor resolution of capacitive touchscreens makes tangibles bulky, limiting their usefulness. We propose a novel deep-learning super-resolution network to better support fiducial tangibles on capacitive touchscreens. In detail, our network super-resolves the markers, enabling off-the-shelf detection algorithms to track tangibles reliably. Our network generalizes to unseen marker sets, such as AprilTag, ArUco, and ARToolKit. Therefore, we are not limited to a fixed number of distinguishable objects and do not require data collection and network training for new fiducial markers. With an extensive evaluation including real-world users and five showcases, we demonstrate the applicability of our open-source approach on commodity mobile devices and further highlight the potential of tangibles on capacitive touchscreens.
2023 · Marius Rusu et al. · LMU Munich · CHI
VRception: Rapid Prototyping of Cross-Reality Systems in Virtual Reality
Cross-reality systems empower users to transition along the reality-virtuality continuum or collaborate with others experiencing different manifestations of it. However, prototyping these systems is challenging, as it requires sophisticated technical skills, time, and often expensive hardware. We present VRception, a concept and toolkit for quick and easy prototyping of cross-reality systems. By simulating all levels of the reality-virtuality continuum entirely in Virtual Reality, our concept overcomes the asynchronicity of realities, eliminating technical obstacles. Our VRception Toolkit leverages this concept to allow rapid prototyping of cross-reality systems and easy remixing of elements from all continuum levels. We replicated six cross-reality papers using our toolkit and presented them to their authors. Interviews with them revealed that our toolkit sufficiently replicates their core functionalities and allows quick iterations. Additionally, remote participants used our toolkit in pairs to collaboratively implement prototypes in about eight minutes that they would have otherwise expected to take days.
2022 · Uwe Gruenefeld et al. · University of Duisburg-Essen · CHI