Leveraging Learner Errors in Digital Argumentation Learning: How ALure Helps Students Learn from their Mistakes and Write Better Arguments
Providing argumentation feedback is considered helpful for students preparing to work in collaborative environments, as it supports them in writing higher-quality argumentative texts. Domain-independent natural language processing (NLP) methods, such as generative models, can utilize learner errors and fallacies in argumentation learning to help students write better argumentative texts. To test this, we collect design requirements, and then design and implement two different versions of our system, called ALure, to improve students' argumentation skills. We test how ALure helps students learn argumentation in a university lecture with 305 students and compare the learning gains of the two versions of ALure with a control group using video tutoring. We discuss differences in learning gains in argument structure and fallacies across the two ALure groups and the control group. Our results shed light on the applicability of computer-supported systems using recent advances in NLP to help students learn argumentation as a necessary skill for collaborative working settings.
2025 · Seyed Parsa Neshaei et al. · Fighting Misinformation, Building Believability · CSCW
Emotionally Aware Moderation: The Potential of Emotion Monitoring in Shaping Healthier Social Media Conversations
Social media platforms increasingly employ proactive moderation techniques, such as detecting and curbing toxic and uncivil comments, to prevent the spread of harmful content. Despite these efforts, such approaches are often criticized for creating a climate of censorship and failing to address the underlying causes of uncivil behavior. Our work makes both theoretical and practical contributions by proposing and evaluating two types of emotion monitoring dashboards to raise users' emotional awareness and mitigate hate speech. In a study involving 211 participants, we evaluate the effects of the two mechanisms on user commenting behavior and emotional experiences. The results reveal that these interventions effectively increase users' awareness of their emotional states and reduce hate speech. However, our findings also indicate potential unintended effects, including increased expression of negative emotions (Angry, Fear, and Sad) when discussing sensitive issues. These insights provide a basis for further research on integrating proactive emotion regulation tools into social media platforms to foster healthier digital interactions.
2025 · Xiaotian Su et al. · Toxic and Anti-Social Behavior · CSCW
Vocalizing User Feedback: The Impact of Input Modality on Self-Disclosure
This study explores how input modality — voice versus text — affects self-disclosure in user feedback, leveraging a novel approach that uses transformer-based models to detect self-disclosure per token embedded in context. In an online experiment with 122 participants, results indicate that participants using voice input engaged significantly less in self-disclosure than those using text, a finding associated with reduced perceived anonymity in voice interactions. This effect persisted after accounting for response length, suggesting that the influence of voice input on self-disclosure is not merely due to brevity but also reflects unique psychological responses to voice-based communication. These findings contribute to a deeper theoretical understanding of input modality's role in shaping disclosure behavior in user feedback contexts. Practical implications offer design guidance for voice-based feedback systems to encourage more open and authentic feedback in sensitive settings.
2025 · Marc Christopher Grau et al. · Voice Technology · CSCW
To Cuddle, Mingle, Venture, or Guide: How Architectural Affordances Influence the Experience of Social VR Places
Social virtual reality (VR) encompasses a growing network of three-dimensional virtual worlds where users interact in a shared, embodied way. While research has focused on the social interactions between the users themselves, less is known about how the design of virtual spaces influences these interactions. Our study combines interviews with 15 social VR users logging over 1,000 hours and a 20-hour spatial protocol of a purposeful sampling of VR worlds. We analysed how spatial characteristics (including proportion, sightlines, materiality, atmosphere, and navigation) influence meaningful user interaction to turn space into place. We synthesised four place types for a new social VR typology: Cuddle worlds that encourage cosy conversations; Mingle worlds that facilitate new encounters; Venture worlds that promote exploration; and Guided worlds that elicit a sense of belonging with the online community. By relating architectural affordances to social patterns, we contribute insights towards the purposeful design of social VR places.
2025 · Jihae Han et al. · Social & Collaborative VR · Immersion & Presence Research · Visualization Perception & Cognition · DIS
Towards Societally Beneficial Personalized Realities: A Conceptual Foundation for Responsible Ubiquitous Personalization Systems
Personalization of online realities is today ubiquitous, supporting decision making and reducing information overload. Recently, through the expanding capabilities and pervasiveness of Mixed Reality and Ubiquitous Computing technologies, we observe increasing personalization also of physical reality. This might yield more convenient, efficient and inclusive everyday interactions. However, it may readily lead to serious societal consequences such as the loss of shared worlds and the emergence of perceptual filter bubbles. To mitigate such harms while retaining the benefits of personalization, it is important to understand how ubiquitous personalization systems may operate responsibly. Responding to this need, we propose a conceptual model that overcomes the limitations of established personalization models and expands their applicable scope to physical, virtual, and hybrid environments. We validate our model in relation to existing literature and show how it provides a conceptual foundation for the analysis and study of responsible personalization systems that create individually and societally beneficial Personalized Realities.
2025 · Jannis Strecker-Bischoff et al. · AI-Assisted Decision-Making & Automation · AI Ethics, Fairness & Accountability · Ubiquitous Computing · DIS
Experiencing the World through Imperfect Lenses: An Autoethnography of Living in Mixed Reality
Mixed reality (MR) technologies are evolving to become more portable, incorporating video see-through capabilities, which enable a shift from stationary to mobile use. This development allows MR headsets to be used in various everyday contexts, including eating, travelling, and exercising. Before MR technologies reshape how we live and seamlessly integrate into our daily activities, we must understand the lived experiences of using MR in our personal lives and their influences and implications on our day-to-day activities. This paper presents an autoethnographic study that adopts an exploratory first-person perspective to uncover challenges and opportunities within this intimate context. We present the experiences and challenges of living in mixed reality, including on-the-go scenarios and social interactions. Our findings reveal issues such as social and ethical concerns and offer lessons learned to inform the design of future interactive systems for mobile mixed reality.
2025 · Yu Sun et al. · Mixed Reality Workspaces · Immersion & Presence Research · Context-Aware Computing · DIS
UrbAI: Exploring the Possibilities of Generative AI Image Processing to Promote Citizen Participation
Giving citizens a voice in urban development processes is crucial for enabling socially sustainable cities and communities. However, citizens' opportunities to express ideas are often limited to communication channels that offer poor incentives for participation. In this paper, we conducted an in-the-wild technology probe study (N=16) using a generative AI (GenAI) tool to allow citizens to visualise and submit urban development ideas by taking pictures and manipulating them with GenAI. The results highlight the potential of GenAI to empower, engage, and inspire citizens' creativity. We then conducted additional expert interviews (N=6) with city representatives and community associates. They voiced GenAI's value in early-stage citizen participation but raised concerns about excluding senior citizens. Building on these insights, we present the design and evaluation (N=10) of UrbAI, a co-creative system tailored to urban development participation, and conclude with lessons learned to inform how GenAI could be embedded in future citizen participation processes.
2025 · Adrian Preussner et al. · University of St. Gallen · Generative AI (Text, Image, Music, Video) · Community Engagement & Civic Technology · CHI
LifeInsight: Design and Evaluation of an AI-Powered Assistive Wearable for Blind and Low Vision People Across Multiple Everyday Life Scenarios
Assistive technologies (ATs) have the potential to empower blind and low vision (BLV) people. Yet, they often remain underutilised due to their immobility and limited applicability across scenarios. This paper presents LifeInsight, an AI-powered assistive wearable for BLV people that uses a wearable camera, microphone and single-click interface for goal-oriented visual querying. To inform the design of LifeInsight, we first collected a corpus of BLV people's daily experiences using video probes and interviews. Ten BLV people recorded their daily experiences over one week using GoPro cameras, providing empirical insights. Based on these, we report on LifeInsight and its evaluation with 13 BLV people across six scenarios. LifeInsight effectively responded to visual queries, such as distinguishing between jars or identifying the status of a candle. Drawing on our work, we conclude with key lessons and practical recommendations to guide future research and advance the development and evaluation of AI-powered assistive wearables.
2025 · Florian Mathis et al. · University of St. Gallen; University of Applied Sciences of the Grisons · Haptic Wearables · Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille) · CHI
Ad-Blocked Reality: Evaluating User Perceptions of Content Blocking Concepts Using Extended Reality
Inspired by the concepts of diminishing reality and ad-blocking in browsers, this study investigates the perceived benefits and concerns of blocking physical, real-world content, particularly ads, through Extended Reality (XR). To understand how users perceive this concept, we first conducted a user study (N=18) with an ad-blocking prototype to gather initial insights. The results revealed a mixed willingness to adopt XR blockers, with participants appreciating aspects such as customizability, convenience, and privacy. Expected benefits included enhanced focus and reduced stress, while concerns centered on missing important information and increased feelings of isolation. Hence, we investigated the user acceptance of different ad-blocking visualizations through a follow-up online survey (N=120), comparing six concepts based on related work. The results indicated that the XR ad-blocker visualizations play a significant role in how and for what kinds of advertisements such a concept might be used, paving the path for future feedback-driven prototyping.
2025 · Christopher Katins et al. · HU Berlin · Privacy by Design & User Control · Social Platform Design & User Behavior · CHI
Moving Beyond the Simulator: Interaction-Based Drunk Driving Detection in a Real Vehicle Using Driver Monitoring Cameras and Real-Time Vehicle Data
Alcohol consumption poses a significant public health challenge, presenting serious risks to individual health and contributing to over 700 daily road fatalities worldwide. Digital interventions can play a crucial role in reducing these risks. However, reliable drunk driving detection systems are vital to effectively deliver these interventions. To develop and evaluate such a system, we conducted an interventional study on a test track to collect real vehicle data from 54 participants. Our system reliably identifies non-sober driving with an area under the receiver operating characteristic curve (AUROC) of 0.84 ± 0.11 and driving above the WHO-recommended blood alcohol concentration limit of 0.05 g/dL with an AUROC of 0.80 ± 0.10. Our models rely on well-known physiological drunk driving patterns. To the best of our knowledge, we are the first to (1) rigorously evaluate the potential of (2) driver monitoring cameras and real-time vehicle data for detecting drunk driving in a (3) real vehicle.
2025 · Robin Deuber et al. · ETH Zürich · Teleoperated Driving · Human Pose & Activity Recognition · CHI
Describing Explored Places through OpenStreetMap Data
Mobile navigation applications are good at providing efficient navigation instructions. However, they currently lack the capability to facilitate free exploration. Therefore, users are limited to encountering only places close to the shortest paths, neglecting places that could diversify navigation and foster spatial learning. To better understand the characteristics of places that users like to explore, we collected a dataset with a mobile application that encourages free exploration using gamification (n = 39, t = 455 days, 106.50 km²). Using OpenStreetMap data, we found highly frequented freely explored places comprising office, educational, retail, touristic and commercial places. When comparing the characteristics of the freely explored places to those along the shortest path, the category distributions differed. Based on our findings, we propose that implementing more diverse routing algorithms can enhance navigation diversity, improve spatial learning, and optimise the utilisation of urban spaces for travel.
2025 · Eve Schade et al. · University of St. Gallen · Geospatial & Map Visualization · Public Transit & Trip Planning · CHI
InterFACE: Establishing a Facial Action Unit Input Vocabulary for Hands-Free Extended Reality Interactions, From VR Gaming to AR Web Browsing
Extended Reality (XR) interactions often rely on spatial hand or controller inputs, necessitating dexterous wrist, hand and finger movements including pressing virtual buttons, pinching to select, and performing hand gestures. However, there are scenarios where such dependencies may render XR devices and apps inaccessible to users, from situational/temporary impairments such as encumbrance, to physical motor impairments. In this paper, we contribute to a growing literature considering facial input as an alternative. In a user study (N=20) we systematically evaluate the usability of 53 Facial Action Units (FAUs) in VR, deriving a set of optimal (comfort, effort, performance) FAUs for interaction. We then use these facial inputs to drive and evaluate (N=10) two demonstrator apps: VR locomotion, and AR web browsing, showcasing how close facial interaction can get to existing baselines, and demonstrating that FAUs offer a viable, generalizable input modality for XR devices.
2025 · Graham Wilson et al. · University of Glasgow, School of Computing Science · Hand Gesture Recognition · Full-Body Interaction & Embodied Input · Eye Tracking & Gaze Interaction · CHI
Real-Time Adaptive Industrial Robots: Improving Safety And Comfort In Human-Robot Collaboration
Industrial robots are becoming increasingly prevalent, resulting in a growing need for intuitive, comfortable human-robot collaboration. We present a user-aware robotic system that adapts to operator behavior in real time while non-intrusively monitoring physiological signals to create a more responsive and empathetic environment. Our prototype dynamically adjusts robot speed and movement patterns based on proxemics while measuring operator pupil dilation. Our user study compares this adaptive system to a non-adaptive counterpart and demonstrates that the adaptive system significantly reduces both perceived and physiologically measured cognitive load while enhancing usability. Participants reported increased feelings of comfort, safety, trust, and a stronger sense of collaboration when working with the adaptive robot. This highlights the potential of integrating real-time physiological data into human-robot interaction paradigms. This novel approach creates more intuitive and collaborative industrial environments where robots effectively 'read' and respond to human cognitive states, and we release all data and code for future use.
2025 · Damian Hostettler et al. · University of St. Gallen, ICS-HSG · Biosensors & Physiological Monitoring · Human-Robot Collaboration (HRC) · CHI
A Digital Companion Architecture for Ambient Intelligence
Garcia et al. propose a digital companion architecture for ambient intelligence that integrates multimodal sensing with intelligent interaction to provide users with personalised, continuous assistance.
2024 · Kimberly Garcia et al. · Context-Aware Computing · Social Robot Interaction · UbiComp
DIY Digital Interventions: Behaviour Change with Trigger-Action Programming
Whether it is sleep, diet, or procrastination, changing behaviours can be challenging. Individuals could design and build their own personalised digital interventions to help them reach their goals, but little is known about this process. Building upon previous research, we propose the Behaviour Change with Trigger-Action Programming (BC-TAP) model, which describes how individuals could bridge the gap between their current and desired behaviour through the creation of 'Do-It-Yourself' (DIY) digital interventions. We conducted a two-day participatory workshop based on the BC-TAP model with 28 participants. Participants articulated plans to change a behaviour of their choice and represented these plans in mobile device automations. After using their interventions for up to three weeks, participants reflected on their experience. Our findings report opportunities and challenges at each stage of the process. While formulating a digital proxy for certain behaviours was challenging, both failures and successes facilitated participants' awareness of their behaviour, and their ability to change it.
2024 · Ava Elizabeth Scott et al. · Creative Collaboration & Feedback Systems · Knowledge Worker Tools & Workflows · MobileHCI
MoodShaper: A Virtual Reality Experience to Support Managing Negative Emotions
Negative emotions such as sadness or anger are often seen as something to be avoided. However, recognising, processing and regulating challenging emotional experiences can facilitate personal growth and is essential for long-term well-being. To support people in regulating and reflecting on negative emotions, we designed MoodShaper — a VR experience where participants autonomously create a virtual environment combined with emotion regulation (ER) interventions. Our system included three different interventions designed based on interviews with psychotherapists. We evaluated MoodShaper in a mixed-method between-subject study with n = 60 participants. Participants experienced one of the three ER interventions, allowing them to manipulate visual representations of negative emotions through either externalisation, seclusion, or appreciation. We found that MoodShaper significantly increased positive affect while decreasing difficulties in ER and negative affect. Our work demonstrates how VR can provide technology-mediated support to reflect on, engage with and manage negative emotions. We contribute insights for future VR systems which support ER for challenging situations.
2024 · Nadine Wagener et al. · Immersion & Presence Research · VR Medical Training & Rehabilitation · Mental Health Apps & Online Support Communities · DIS
Exploring Mobile Devices as Haptic Interfaces for Mixed Reality
Dedicated handheld controllers facilitate haptic experiences of virtual objects in mixed reality (MR). However, as mobile MR becomes more prevalent, we observe the emergence of controller-free MR interactions. To retain immersive haptic experiences, we explore the use of mobile devices as a substitute for specialised MR controllers. In an exploratory gesture elicitation study (n = 18), we examined users' (1) intuitive hand gestures performed with prospective mobile devices and (2) preferences for real-time haptic feedback when exploring haptic object properties. Our results reveal three haptic exploration modes for the mobile device — as an object, a hand substitute, or an additional tool — and emphasise the benefits of incorporating the device's unique physical features into the object interaction. This work expands the design possibilities of using mobile devices for tangible object interaction, guiding the future design of mobile devices for haptic MR experiences.
2024 · Carolin Stellmacher et al. · University of Bremen · Shape-Changing Interfaces & Soft Robotic Materials · Mixed Reality Workspaces · CHI
Playing with Perspectives and Unveiling the Autoethnographic Kaleidoscope in HCI – A Literature Review of Autoethnographies
Autoethnography is a valuable methodological approach bridging the gap between personal experiences and academic inquiry, enabling researchers to gain deep insights into various dimensions of technology use and design. While its adoption in Human-Computer Interaction (HCI) continues to grow, a comprehensive investigation of its function and role within HCI research is still lacking. This paper examines the evolving landscape of autoethnographies within HCI over the past two decades through a systematic literature review. We identify prevalent themes, methodologies, and contributions emerging from autoethnographies by analysing a corpus of 31 HCI publications. Furthermore, we detail data collection techniques and analysis methods and describe reporting standards. Our literature review aims to inform future (HCI) researchers, practitioners, and designers. It encourages them to embrace autoethnography's rich opportunities by providing examples across domains (e.g., embodiment or health and wellbeing) to advance our understanding of the complex relationships between humans and technology.
2024 · Annika Kaltenhauser et al. · University of St. Gallen · User Research Methods (Interviews, Surveys, Observation) · Field Studies · CHI
Predicting early user churn in a public digital weight loss intervention
Digital health interventions (DHIs) offer promising solutions to the rising global challenges of noncommunicable diseases by promoting behavior change, improving health outcomes, and reducing healthcare costs. However, high churn rates are a concern with DHIs, with many users disengaging before achieving desired outcomes. Churn prediction can help DHI providers identify and retain at-risk users, enhancing the efficacy of DHIs. We analyzed churn prediction models for a weight loss app using various machine learning algorithms on data from 1,283 users and 310,845 event logs. The best-performing model, a random forest model that only used daily login counts, achieved an F1 score of 0.87 on day 7 and identified an average of 93% of churned users during the week-long trial. Notably, higher-dimensional models performed better at low false positive rate thresholds. Our findings suggest that user churn can be forecasted using engagement data, aiding in timely personalized strategies and better health results.
2024 · Robert Jakob et al. · ETH Zurich · Mental Health Apps & Online Support Communities · Chronic Disease Self-Management (Diabetes, Hypertension, etc.) · Telemedicine & Remote Patient Monitoring · CHI
Listening to the Voices: Describing Ethical Caveats of Conversational User Interfaces According to Experts and Frequent Users
Advances in natural language processing and understanding have led to a rapid growth in the popularity of conversational user interfaces (CUIs). While CUIs introduce novel benefits, they also yield risks that may exploit people's trust. Although research looking at unethical design deployed through graphical user interfaces (GUIs) established a thorough taxonomy of so-called dark patterns, there is a need for an equally in-depth understanding in the context of CUIs. Addressing this gap, we interviewed 27 participants from three cohorts: researchers, practitioners, and frequent users of CUIs. Applying thematic analysis, we develop five themes reflecting each cohort's insights about ethical design challenges and introduce the CUI Expectation Cycle, bridging system capabilities and user expectations while respecting each theme's ethical caveats. This research aims to inform future work to consider ethical constraints while adopting a human-centred approach.
2024 · Thomas Mildner et al. · University of Bremen · Voice User Interface (VUI) Design · AI Ethics, Fairness & Accountability · Dark Patterns Recognition · CHI