Us-Reflection: Designing for Meaningful Social Interactions
Technology increasingly shapes our social interactions, both online and in person. Strong social connections and face-to-face interactions are vital for wellbeing, especially within close relationships. In this context, technology can play an ambivalent role: while it often degrades the quality of these interactions, it also carries the potential to enrich conversations and improve social interactions when used meaningfully. We design a prototype that implements subtle intervention strategies to foster meaningful technology use, specifically aimed at enhancing close relationships during in-person interactions. We evaluate the prototype in an exploratory, two-week in-the-wild user study with 6 tandems (N=12). Our findings suggest that the strategy of "us-reflection" – a social approach to reflection – contributes to mutual awareness of participants' shared time. Our prototype encouraged more meaningful interactions by proposing conversation topics or suggesting activities, ultimately strengthening close relationships and fostering more intentional, engaging, and rewarding social experiences.
2025 · Sophia Sakel et al. · MobileHCI · Topics: Cyberbullying & Online Harassment; Technology Ethics & Critical HCI
When AI Joins the Negotiation Table: Evaluating AI as a Moderator
Negotiation is a crucial decision-making process where parties seek to resolve differences and optimize outcomes. While prior research has focused on maximizing negotiation outcomes, fostering a collaborative atmosphere is essential for long-term relationship-building. This study explores the role of AI-assisted moderation in negotiations that emulate high-stress environments. We developed a text-based AI moderator and evaluated its usability and effectiveness in a two-phase study: a pilot study with 14 participants followed by a final user study with 16 participants. To provide an initial point of comparison, we assessed trust, respect, and equitability in AI-moderated versus non-moderated negotiations. Quantitative findings indicate a negative effect of AI-assisted moderation on relationship-building, while qualitative insights suggest that AI moderation fosters collaboration. However, the cognitive load of text-based facilitation hinders its effectiveness. These results highlight the importance of seamless AI integration and contribute to the broader discourse on AI's role in behavior change and mediated communication.
2025 · Charlotte Kobiella et al. · CUI · Topics: Agent Personality & Anthropomorphism; AI-Assisted Decision-Making & Automation
PrivacyHub: A Functional Tangible and Digital Ecosystem for Interoperable Smart Home Privacy Awareness and Control
Hubs are at the core of most smart homes. Modern cross-ecosystem protocols and standards enable smart home hubs to achieve interoperability across devices, offering the unique opportunity to integrate universally available smart home privacy awareness and control features. To date, such privacy features mainly focus on individual products or prototypical research artifacts. We developed a cross-ecosystem hub featuring a tangible dashboard and a digital web application to deepen our understanding of how smart home users interact with functional privacy features. The ecosystem allows users to control the connectivity states of their devices and raises awareness by visualizing device positions, states, and data flows. We deployed the ecosystem in six households for one week and found that it increased participants' perceived control, awareness, and understanding of smart home privacy. We further found distinct differences between tangible and digital mechanisms. Our findings highlight the value of cross-ecosystem hubs for effective privacy management.
2025 · Maximiliane Windl et al. · LMU Munich; Munich Center for Machine Learning (MCML) · CHI · Topics: Privacy by Design & User Control; Privacy Perception & Decision-Making; Smart Home Privacy & Security
An Approach to Elicit Human-Understandable Robot Expressions to Support Human-Robot Interaction
Understanding the intentions of robots is essential for natural and seamless human-robot collaboration. Ensuring that robots have means for non-verbal communication is a basis for intuitive and implicit interaction. For this, we describe an approach to elicit and design human-understandable robot expressions. We outline the approach in the context of non-humanoid robots. We paired human mimicking and enactment with research from gesture elicitation in two phases: first, to elicit expressions, and second, to ensure they are understandable. We present an example application through two studies (N=16 & N=260) of our approach to elicit expressions for a simple 6-DoF robotic arm. We show that the approach enabled us to design robot expressions that signal curiosity and interest in getting attention. Our main contribution is an approach to generate and validate understandable expressions for robots, enabling more natural human-robot interaction.
2025 · Jan Leusmann et al. · LMU Munich · CHI · Topics: Hand Gesture Recognition; Social Robot Interaction; Human-Robot Collaboration (HRC)
Investigating LLM-Driven Curiosity in Human-Robot Interaction
Integrating curious behavior traits into robots is essential for them to learn and adapt to new tasks over their lifetime and to enhance human-robot interaction. However, the effects of robots expressing curiosity on user perception, user interaction, and user experience in collaborative tasks are unclear. In this work, we present a Multimodal Large Language Model-based system that equips a robot with non-verbal and verbal curiosity traits. We conducted a user study (N=20) to investigate how these traits modulate the robot's behavior and the users' impressions of sociability and quality of interaction. Participants prepared cocktails or pizzas with a robot, which was either curious or non-curious. Our results show that we could create user-centric curiosity, which users perceived as more human-like, inquisitive, and autonomous while resulting in a longer interaction time. We contribute a set of design recommendations allowing system designers to take advantage of curiosity in collaborative tasks.
2025 · Jan Leusmann et al. · LMU Munich · CHI · Topics: Human-LLM Collaboration; Social Robot Interaction; Human-Robot Collaboration (HRC)
Pixel Memories: Do Lifelog Summaries Fail to Enhance Memory but Offer Privacy-Aware Memory Assessments?
We explore the metaphorical "daily memory pill" concept – a brief pictorial lifelog recap aimed at reviving and preserving memories. Leveraging psychological strategies, we explore the potential of such summaries to boost autobiographical memory. We developed an automated lifelogging memory prosthesis and a research protocol (Automated Memory Validation, "AMV") for conducting privacy-aware, in-situ evaluations. We conducted a real-world lifelogging experiment for a month (n=11). We also designed a browser, "Pixel Memories", for browsing one week's worth of lifelogs. The results suggest that daily timelapse summaries, while not yielding significant memory augmentation effects, also do not lead to memory degradation. Participants' confidence in recalled content remains unaltered, but the study highlights the challenge of users' overestimation of memory accuracy. Our core contributions, the AMV protocol and "Pixel Memories" browser, advance our understanding of memory augmentations and offer a privacy-preserving method for evaluating future ubicomp systems.
2025 · Passant ElAgroudy et al. · German Research Centre for Artificial Intelligence (DFKI); RPTU Kaiserslautern · CHI · Topics: Context-Aware Computing; Ubiquitous Computing
The AI Ghostwriter Effect: When Users do not Perceive Ownership of AI-Generated Text but Self-Declare as Authors
Human-AI interaction in text production increases complexity in authorship. In two empirical studies (n1 = 30 & n2 = 96), we investigate authorship and ownership in human-AI collaboration for personalized language generation. We show an AI Ghostwriter Effect: Users do not consider themselves the owners and authors of AI-generated text but refrain from publicly declaring AI authorship. Personalization of AI-generated texts did not impact the AI Ghostwriter Effect, and higher levels of participants' influence on texts increased their sense of ownership. Participants were more likely to attribute ownership to supposedly human ghostwriters than AI ghostwriters, resulting in a higher ownership-authorship discrepancy for human ghostwriters. Rationalizations for authorship in AI ghostwriters and human ghostwriters were similar. We discuss how our findings relate to psychological ownership and human-AI interaction to lay the foundations for adapting authorship frameworks and user interfaces in AI text-generation tasks.
2024 · Fiona Draxler et al. · DIS · Topics: Generative AI (Text, Image, Music, Video); AI Ethics, Fairness & Accountability; AI-Assisted Creative Writing
"If the Machine Is As Good As Me, Then What Use Am I?" – How the Use of ChatGPT Changes Young Professionals' Perception of Productivity and Accomplishment
Large language models (LLMs) like ChatGPT have been widely adopted in work contexts. We explore the impact of ChatGPT on young professionals' perception of productivity and sense of accomplishment. We collected LLMs' main use cases in knowledge work through a preliminary study, which served as the basis for a two-week diary study with 21 young professionals reflecting on their ChatGPT use. Findings indicate that ChatGPT enhanced some participants' perceptions of productivity and accomplishment by enabling greater creative output and satisfaction from efficient tool utilization. Others experienced decreased perceived productivity and accomplishment, driven by a diminished sense of ownership, perceived lack of challenge, and mediocre results. We found that the suitability of task delegation to ChatGPT varies strongly depending on the nature of the task. It is especially suitable for comprehending broad subject domains, generating creative solutions, and uncovering new information. It is less suitable for research tasks due to hallucinations, which necessitate extensive validation.
2024 · Charlotte Kobiella et al. · Center for Digital Technology & Management, LMU Munich · CHI · Topics: Human-LLM Collaboration; AI-Assisted Decision-Making & Automation
The Social Journal: Investigating Technology to Support and Reflect on Social Interactions
Social interaction is a crucial part of what it means to be human. Maintaining a healthy social life is strongly tied to positive outcomes for both physical and mental health. While we use personal informatics data to reflect on many aspects of our lives, technology-supported reflection for social interactions is currently under-explored. To address this, we first conducted an online survey (N=124) to understand how users want to be supported in their social interactions. Based on this, we designed and developed an app for users to track and reflect on their social interactions and deployed it in the wild for two weeks (N=25). Our results show that users are interested in tracking meaningful in-person interactions that are currently untraced and that an app can effectively support self-reflection on social interaction frequency and social load. We contribute insights and concrete design recommendations for technology-supported reflection for social interaction.
2024 · Sophia Sakel et al. · LMU Munich · CHI · Topics: Mental Health Apps & Online Support Communities; Social Platform Design & User Behavior
A Longitudinal In-the-Wild Investigation of Design Frictions to Prevent Smartphone Overuse
Smartphone overuse is hyper-prevalent in society, and developing tools to prevent this overuse has become a focus of HCI. However, there is a lack of work investigating smartphone overuse interventions over the long term. We collected usage data from N=1,039 users of the intervention app "one sec" over an average of 13.4 weeks, along with qualitative insights from 249 of the users through an online survey. We found that users overwhelmingly choose to target Social Media apps. We found that the short design frictions introduced by "one sec" effectively reduce how often users attempt to open target apps and lead to more intentional app-openings over time. Additionally, we found that users take periodic breaks from "one sec" interventions, and quickly rebound from a pattern of overuse when returning from breaks. Overall, we contribute findings from a longitudinal investigation of design frictions in the wild and identify usage patterns from real users in practice.
2024 · Luke Haliburton et al. · LMU Munich; Munich Center for Machine Learning (MCML) · CHI · Topics: Privacy by Design & User Control; Notification & Interruption Management
Society's Attitudes Towards Human Augmentation and Performance Enhancement Technologies (SHAPE) Scale
Human augmentation technologies (ATs) are a subset of ubiquitous on-body devices designed to improve cognitive, sensory, and motor capacities. Although there is a large corpus of knowledge concerning ATs, less is known about societal attitudes towards them and how they shift over time. To that end, we developed The Society's Attitudes Towards Human Augmentation and Performance Enhancement Technologies (SHAPE) Scale, which measures how users of ATs are perceived. To develop the scale, we first created a list of possible scale items based on past work on how people respond to new technologies. The items were then reviewed by experts. Next, we performed exploratory factor analysis to reduce the scale to its final length of thirteen items. Subsequently, we confirmed test-retest validity of our instrument, as well as its construct validity. The SHAPE scale enables researchers and practitioners to understand elements contributing to attitudes toward augmentation technology users. The SHAPE scale assists designers of ATs in designing artifacts that will be more universally accepted.
2023 · Steeven Villa et al. · UbiComp · Topics: Brain-Computer Interface (BCI) & Neurofeedback; Motor Impairment Assistive Input Technologies · https://doi.org/10.1145/3610915
Exploring Smart Standing Desks to Foster a Healthier Workplace
Sedentary behavior is endemic in modern workplaces, contributing to negative physical and mental health outcomes. Although adjustable standing desks are increasing in popularity, people still avoid standing. We developed an open-source plug-and-play system to remotely control standing desks and investigated three system modes with a three-week in-the-wild user study (N=15). Interval mode forces users to stand once per hour, causing frustration. Adaptive mode nudges users to stand every hour unless the user has stood already. Smart mode, which raises the desk during breaks, was the best rated, contributing to increased standing time with the most positive qualitative feedback. However, non-computer activities need to be accounted for in the future. Therefore, our results indicate that a smart standing desk that shifts modes at opportune times has the most potential to reduce sedentary behavior in the workplace. We contribute our open-source system and insights for future intelligent workplace well-being systems.
2023 · Luke Haliburton et al. · UbiComp · Topics: Workplace Wellbeing & Work Stress · https://doi.org/10.1145/3596260
Technical Design Space Analysis for Unobtrusive Driver Emotion Assessment Using Multi-Domain Context
Driver emotions play a vital role in driving safety and performance. Consequently, regulating driver emotions through empathic interfaces has been investigated thoroughly. However, the prerequisite – driver emotion sensing – is a challenging endeavor: Body-worn physiological sensors are intrusive, while facial and speech recognition only capture overt emotions. In a user study (N=27), we investigate how emotions can be unobtrusively predicted by analyzing a rich set of contextual features captured by a smartphone, including road and traffic conditions, visual scene analysis, audio, weather information, and car speed. We derive a technical design space to inform practitioners and researchers about the most indicative sensing modalities, the corresponding impact on users' privacy, and the computational cost associated with processing this data. Our analysis shows that contextual emotion recognition is significantly more robust than facial recognition, leading to an overall improvement of 7% using a leave-one-participant-out cross-validation.
2023 · David Bethge et al. · UbiComp · Topics: Automated Driving Interface & Takeover Design; Privacy by Design & User Control; Context-Aware Computing · https://dl.acm.org/doi/10.1145/3569466
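The evaluation scheme named in the abstract above – leave-one-participant-out cross-validation – splits data by person rather than by sample, so a model is always tested on a participant it never saw during training. The following is a minimal illustrative sketch of that splitting logic with a toy majority-class baseline; the data format and the baseline are stand-ins for demonstration, not the authors' actual pipeline.

```python
# Sketch of leave-one-participant-out (LOPO) cross-validation.
# Each sample is a (participant_id, features, label) tuple; the
# majority-class "model" is a placeholder for a real classifier.
from collections import Counter

def leave_one_participant_out(samples):
    """Yield (held_out_id, train, test) splits, one per participant."""
    participants = sorted({pid for pid, _, _ in samples})
    for held_out in participants:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield held_out, train, test

def lopo_majority_baseline_accuracy(samples):
    """Mean per-participant accuracy of a majority-class baseline."""
    accuracies = []
    for _, train, test in leave_one_participant_out(samples):
        # "Train": pick the most frequent label among other participants.
        majority = Counter(label for _, _, label in train).most_common(1)[0][0]
        correct = sum(1 for _, _, label in test if label == majority)
        accuracies.append(correct / len(test))
    return sum(accuracies) / len(accuracies)
```

Because every test fold contains only unseen participants, LOPO accuracy estimates how well a model generalizes to new people – the relevant question for in-car emotion sensing, where per-user calibration data is rarely available.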
Feeling the Temperature of the Room: Unobtrusive Thermal Display of Engagement during Group Communication
Thermal signals have been explored in HCI for emotion-elicitation and enhancing two-person communication, showing that temperature invokes social and emotional signals in individuals. Yet, extending these findings to group communication is missing. We investigated how thermal signals can be used to communicate group affective states in a hybrid meeting scenario to help people feel connected over a distance. We conducted a lab study (N=20 participants) and explored wrist-worn thermal feedback to communicate audience emotions. Our results show that thermal feedback is an effective method of conveying audience engagement without increasing workload and can help a presenter feel more in tune with the audience. We outline design implications for real-world wearable social thermal feedback systems for both virtual and in-person communication that support group affect communication and social connectedness. Thermal feedback has the potential to connect people across distances and facilitate more effective and dynamic communication in multiple contexts.
2023 · Luke Haliburton et al. · UbiComp · Topics: Haptic Wearables; Full-Body Interaction & Embodied Input · https://dl.acm.org/doi/10.1145/3580820
Towards a Haptic Taxonomy of Emotions: Exploring Vibrotactile Stimulation in the Dorsal Region
The implicit communication of emotional states between persons is a key use case for novel assistive and augmentation technologies. It can serve to expand individuals' perceptual capabilities and assist neurodivergent individuals. Notably, vibrotactile rendering is a promising method for delivering emotional information with minimal interference with visual or auditory perception. To date, the subjective individual association between vibrotactile properties and emotional states remains unclear. Previous approaches relied on analogies or arbitrary variations, limiting generalization. To address this, we conducted a study with 40 participants, analyzing associations between attributes of self-generated vibrotactile patterns (amplitude, frequency, spatial location of stimulation) and four emotional states (Anger, Happiness, Neutral, Sadness). We find a preference for symmetrically arranged patterns, as well as distinct amplitude and frequency profiles for different emotions. These insights can aid in creating standardized vibrotactile patterns for universal emotional communication.
2023 · Steeven Villa et al. · UbiComp · Topics: Vibrotactile Feedback & Skin Stimulation
VR-Hiking: Physical Exertion Benefits Mindfulness and Positive Emotions in Virtual Reality
Exploring the great outdoors offers physical and mental health benefits. Hiking is healthy, provides a sense of accomplishment, and offers an opportunity to relax. However, a nature trip is not always possible, and there is a lack of evidence showing how these beneficial experiences can be replicated in Virtual Reality (VR). In response, we recruited participants (N=24) to explore a virtual mountain landscape in a within-subjects study with different levels of exertion: walking, using a chairlift, and teleporting. We found that physical exertion when walking produced significantly more positive emotions and mindfulness than the other conditions. Our research shows that physically demanding outdoor activities in VR can be beneficial for the user and that the achievement of hiking up a virtual mountain on a treadmill positively impacts wellbeing. We demonstrate how physical exertion can be used to add mindfulness and positive affect to VR experiences and discuss consequences for VR designers.
2023 · Luke Haliburton et al. · MobileHCI · Topics: Social & Collaborative VR; Immersion & Presence Research
[Don't] Let The Bodies HIIT The Floor: Fostering Body Awareness in Fast-Paced Physical Activity Using Body-Worn Sensors
Technologies have become an integral part of physical activity. Yet, the majority of popular programs do not focus on promoting a genuine understanding of how sport affects our bodies. As apps and trackers persuade users to exercise more, a lack of body awareness can be detrimental to health. In this work, we propose and evaluate the concept of in-session reflective feedback as a means to support informed exercise routines by design. We designed and implemented REPLAY, a system that presents users with a visualization of physiological signals (heart rate, movement) from body-worn sensors during high-intensity interval training (HIIT). Our evaluation showed that participants gained a better understanding of how their body reacted to physical activity, allowing them to understand its effect and recognize their own weaknesses. Further, our work demonstrates how the type of feedback can significantly moderate a user's perceived exhaustion. We highlight how in-session reflective feedback using bodily signals can promote healthy and effective workouts by creating a deeper understanding of one's own body physiology and limits.
2023 · Bettina Eska et al. · MobileHCI · Topics: Fitness Tracking & Physical Activity Monitoring; Biosensors & Physiological Monitoring
Relevance, Effort, and Perceived Quality: Language Learners' Experiences with AI-Generated Contextually Personalized Learning Material
Artificial intelligence has enabled scalable auto-creation of context-based personalized learning materials. However, it remains unclear how content personalization shapes the learners' experience. We developed one personalized and two non-personalized, crowdsourced versions of a mobile language learning app: (1) with personalized auto-generated photo flashcards, (2) the same flashcards provided through crowdsourcing, and (3) manually generated flashcards based on the same photos. A two-week in-situ study (n=64) showed that learners assessed the quality of the non-personalized auto-generated material to be on par with manually generated material, which means that auto-generation is viable. However, when the auto-generation was personalized, the learners' quality rating was significantly lower. Further analyses suggest that aspects such as prior expectations and required efforts must be addressed before learners can actually benefit from context-based personalization with auto-generated material. We discuss resulting design implications and provide an outlook on the role of content personalization in AI-supported learning.
2023 · Fiona Draxler et al. · DIS · Topics: Generative AI (Text, Image, Music, Video); Programming Education & Computational Thinking; Intelligent Tutoring Systems & Learning Analytics
Your Text Is Hard to Read: Facilitating Readability Awareness to Support Writing Proficiency in Text Production
Allowing users of interactive systems to reflect on their task proficiency is often incidental. This is unfortunate, as communicating meaningful task-related proficiency feedback could improve users' awareness of their abilities and their willingness to improve. To highlight the feasibility of this concept, we evaluated how different methods of readability feedback impacted users during a text production task. In general, our results showed that having access to readability feedback allowed participants to reflect on their task-solving approach, facilitating the users' understanding of their proficiency. Revision-based methods are less distracting for the user than continuous feedback methods, while still offering high efficacy. Further, feedback should be paired with subtle gamification elements. We envision this reflection-oriented approach to user proficiency to be applicable to a variety of interactive systems, allowing for an improved and engaging user experience.
2023 · Jakob Karolus et al. · DIS · Topics: Visualization Perception & Cognition; Gamification Design
EyePiano: Leveraging Gaze For Reflective Piano Learning
Mastering skills that involve high dexterity, such as playing the piano, requires extensive guidance through personal teaching. Understanding how we can leverage data from sensor-based systems to improve the learning process allows us to build interactive systems that effectively facilitate skill acquisition. To explore such possibilities, we developed EyePiano – a gaze-assisted tool for reflective piano playing. EyePiano guides the practice process of learning piano scores by analyzing the pianist's gaze behavior. We based the design of EyePiano on requirements identified through interviews with piano teachers and a feasibility evaluation of gaze metrics. Our system illustrates that basic gaze metrics are sufficient to predict difficult regions for students. Thus, highlighting sections of the music piece that are particularly difficult for the pianist allows EyePiano to support piano rehearsals for students. Our work showcases the feasibility of using gaze data for reflective music education, enabling effective instrument practice.
2023 · Jakob Karolus et al. · DIS · Topics: Eye Tracking & Gaze Interaction; STEM Education & Science Communication