ChairPose: Pressure-based Chair Morphology Grounded Sitting Pose Estimation through Simulation-Assisted Training
Prolonged seated activity is increasingly common in modern environments, raising concerns around musculoskeletal health, ergonomics, and the design of responsive interactive systems. Existing posture sensing methods, such as vision-based or wearable approaches, face limitations including occlusion, privacy concerns, user discomfort, and restricted deployment flexibility. We introduce ChairPose, the first full-body, wearable-free seated pose estimation system that relies solely on pressure sensing and operates independently of chair geometry. ChairPose employs a two-stage generative model trained on pressure maps captured from a thin, chair-agnostic sensing mattress. Unlike prior approaches, our method explicitly incorporates chair morphology into the inference process, enabling accurate, occlusion-free, and privacy-preserving pose estimation. To support generalization across diverse users and chairs, we introduce a physics-driven data augmentation pipeline that simulates realistic variations in posture and seating conditions. Evaluated across eight users and four distinct chairs, ChairPose achieves a mean per joint position error (MPJPE) of 89.4 mm when both the user and the chair are unseen, demonstrating robust generalization to novel real-world conditions. ChairPose expands the design space for posture-aware interactive systems, with potential applications in ergonomics, healthcare, and adaptive user interfaces. All code and data are publicly available on Kaggle at https://www.kaggle.com/datasets/lalaray/chairpose.
2025 · Lala Shakti Swarup Ray et al. · Human Pose & Activity Recognition · Biosensors & Physiological Monitoring · UIST
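The headline metric above, mean per joint position error (MPJPE), is straightforward to reproduce; below is a minimal sketch under the usual convention of averaging Euclidean joint distances over frames. The array shapes and the 24-joint skeleton are illustrative assumptions, not details from the ChairPose release.

```python
import numpy as np

def mpjpe(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean per joint position error, in the same unit as the inputs.

    pred, gt: arrays of shape (num_frames, num_joints, 3) holding
    predicted and ground-truth 3D joint positions (e.g., in mm).
    """
    assert pred.shape == gt.shape
    # Euclidean distance per joint, then average over joints and frames.
    per_joint_error = np.linalg.norm(pred - gt, axis=-1)
    return float(per_joint_error.mean())

# Illustrative usage: a uniform 5 mm offset on each axis yields
# sqrt(3) * 5 ≈ 8.66 mm MPJPE.
frames, joints = 100, 24  # assumed skeleton size
gt = np.random.rand(frames, joints, 3) * 1000.0  # mm
print(mpjpe(gt + 5.0, gt))
```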
Delusionized? Potential Harms of Proprioceptive Manipulations through Hand Redirection in Virtual Reality
To enhance interactions in VR, hand redirection (HR)-based illusion techniques apply offsets between the virtual and real-world position of users’ hands. While adaptation to such HR offsets is recognized, their impact on proprioception accuracy remains unexplored. However, deploying HR without understanding its potential effects on proprioception accuracy may pose risks to users in real-life situations. To investigate this, we conducted an experiment with 22 participants, studying the influence of prolonged exposure to unnoticeable HR offsets on proprioceptive accuracy during hand-reaching in VR. Our results show that proprioceptive accuracy declines significantly after prolonged exposure to redirected hand interactions. However, short-term exposure to unaltered hand interactions can – yet only partially – restore normal levels. Thus, we advocate being aware of potential risks arising from prolonged exposure to visual-proprioceptive offsets to ensure users’ safety.
2025 · Martin Feick et al. · Haptic Wearables · Hand Gesture Recognition · UIST
iBreath: Usage of Breathing Gestures as Means of Interactions
Breathing is a spontaneous but controllable body function that can be used for hands-free interaction. Our work introduces "iBreath", a novel system to detect click-like breathing gestures using bio-impedance. We evaluated iBreath's accuracy and user experience in two lab studies (n=34). Our results show high detection accuracy (F1-scores > 95.2%). Furthermore, users found the gestures easy to use and comfortable. Thus, we developed eight practical guidelines for the future development of breathing gestures. For example, designers can train users on new gestures within just 50 seconds (five trials) and achieve robust performance with both user-dependent and user-independent models trained on data from 21 participants, each yielding accuracies above 90%. Users preferred single clicks and disliked triple clicks. The median gesture duration is 3.5-5.3 seconds. Our work provides solid ground for researchers to experiment with creating breathing gestures and interactions.
2025 · Mengxi Liu et al. · Full-Body Interaction & Embodied Input · Biosensors & Physiological Monitoring · MobileHCI
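The abstract does not spell out iBreath's detection pipeline. As a rough illustration of how click-like breathing gestures might be segmented from a 1-D bio-impedance stream, here is a peak-grouping sketch; the sampling rate, prominence threshold, and gap parameters are all assumed for illustration, not the system's actual parameters.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 50  # assumed sampling rate in Hz

def detect_breath_clicks(signal: np.ndarray, max_gap_s: float = 1.5):
    """Group breathing peaks into click-like gestures.

    Peaks closer together than `max_gap_s` are merged into one gesture,
    so two quick exhalation peaks form a 'double click'. All thresholds
    are illustrative assumptions.
    """
    peaks, _ = find_peaks(signal, prominence=0.5, distance=int(0.3 * FS))
    gestures, current = [], []
    for p in peaks:
        if current and (p - current[-1]) / FS > max_gap_s:
            gestures.append(len(current))  # clicks in the finished gesture
            current = []
        current.append(p)
    if current:
        gestures.append(len(current))
    return gestures  # e.g., [1, 2] = one single click, then a double click
```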
LLMs Enable Context-Aware Augmented Reality in Surgical Navigation
Wearable Augmented Reality (AR) technologies are gaining recognition for their potential to transform surgical navigation systems. As these technologies evolve, selecting the right interaction method to control the system becomes crucial. Our work introduces a voice user interface (VUI) for surgical AR assistance systems (ARAS), designed for pancreatic surgery, that integrates Large Language Models (LLMs). Employing a mixed-method research approach, we assessed the usability of our LLM-based design in both simulated surgical tasks and during pancreatic surgeries, comparing its performance against a conventional speech-command VUI for surgical ARAS. Our findings demonstrated the usability of our proposed LLM-based VUI, yielding a significantly lower task completion time and cognitive workload compared to speech commands. Additionally, qualitative insights from interviews with surgeons aligned with the quantitative data, revealing a strong preference for the LLM-based VUI. Surgeons emphasized its intuitiveness and highlighted the potential of LLM-based VUIs in expediting decision-making in surgical environments.
2025 · Hamraz Javaheri et al. · Eye Tracking & Gaze Interaction · AR Navigation & Context Awareness · Human-LLM Collaboration · DIS
Towards Trustable Intelligent Clinical Decision Support Systems: A User Study with Ophthalmologists
Integrating Artificial Intelligence (AI) into Clinical Decision Support Systems (CDSS) presents significant opportunities for improving healthcare delivery, particularly in fields like ophthalmology. This paper explores the usability and trustworthiness of an AI-driven CDSS designed to assist ophthalmologists in treating diabetic retinopathy and age-related macular degeneration. To this end, we created a CDSS and evaluated its impact on efficiency, informedness, and user experience through task-based semi-structured interviews and questionnaires with 11 ophthalmologists. The usability of the CDSS was rated highly, with a System Usability Scale (SUS) score of 81.75. Results show that participants felt the CDSS would improve their efficiency and informedness, with one major factor being the integration of Electronic Health Records (EHR) and Optical Coherence Tomography (OCT) data into a single interface. Additionally, we explored the trustworthiness of individual AI components, specifically OCT segmentation, treatment recommendation, and visual acuity forecasting. Through thematic analysis, we identified key factors influencing trustworthiness and clinical adoption. Results show that a larger degree of abstraction from a model's input to its output correlates with decreased trust. From our findings, we propose two guidelines for designing trustworthy CDSS.
2025 · Robert Andreas Leist et al. · Explainable AI (XAI) · Telemedicine & Remote Patient Monitoring · IUI
PromptMap: An Alternative Interaction Style for AI-Based Image Generation
Recent technological advances have popularized the use of image generation among the general public. Crafting effective prompts can, however, be difficult for novice users. To tackle this challenge, we developed PromptMap, a new interaction style for text-to-image AI that allows users to freely explore a vast collection of synthetic prompts through a map-like view with semantic zoom. PromptMap groups images visually by their semantic similarity, allowing users to discover relevant examples. We evaluated PromptMap in a between-subjects online study (n=60) and a qualitative within-subject study (n=12). We found that PromptMap supported users in crafting prompts by providing them with examples. We also demonstrated the feasibility of using LLMs to create vast example collections. Our work contributes a new interaction style that supports users unfamiliar with prompting in achieving a satisfactory image output.
2025 · Krzysztof Adamkiewicz et al. · Generative AI (Text, Image, Music, Video) · Interactive Data Visualization · IUI
From Concept to Clinic: Multidisciplinary Design, Development, and Clinical Validation of Augmented Reality-Assisted Open Pancreatic Surgery
Wearable augmented reality (AR) systems have significant potential to enhance surgical outcomes through in-situ visualization of patient-specific data. Yet, efforts to develop AR-based systems for open surgery have been limited, lacking comprehensive interdisciplinary research and actual clinical evaluations in real surgical environments. Our research addresses this gap by presenting a user-centered design and development process for ARAS, an AR assistance system for open pancreatic surgery. ARAS provides in-situ visualization of critical structures, such as the vascular system and the tumor, while offering a robust dual-layer registration method that ensures accurate registration during the relevant phases of the surgery. We evaluated ARAS in clinical trials with 20 patients with pancreatic tumors. Accuracy validation and postoperative surgeon interviews confirmed its successful deployment, supporting surgeons in vascular localization and critical decision-making. Our work showcases AR's potential to fundamentally transform procedures for complex surgical operations, advocating a research shift toward ecological validation in open surgery.
2025 · Hamraz Javaheri et al. · German Research Center for Artificial Intelligence (DFKI) · VR Medical Training & Rehabilitation · Surgical Assistance & Medical Training · CHI
The Effect of Gender De-biased Recommendations – A User Study on Gender-specific Preferences
Recommender systems treat users inherently differently. Sometimes, however, personalization turns into discrimination. Gender bias occurs when a system treats users differently based on gender. While most research discusses measures and countermeasures for gender bias, one recent study explored whether users enjoy gender de-biased recommendations. However, its methodology has significant shortcomings: it fails to validate its de-biasing method appropriately and compares biased and unbiased models that differ in key properties. We reproduce the study in a 2x2 between-subjects design with n=800 participants. Moreover, we examine the authors' hypothesis that educating users on gender bias improves their attitude towards de-biasing. We find that the genders perceive de-biasing differently. The female users (the majority group) rate biased recommendations significantly higher, while the male users (the minority group) indicate no preference. Educating users on gender bias increased acceptance, but not significantly. We consider our contribution vital towards understanding how gender de-biasing affects different user groups.
2025 · Thorsten Krause et al. · German Research Center for Artificial Intelligence, Smart Enterprise Engineering; Radboud University · Explainable AI (XAI) · AI Ethics, Fairness & Accountability · Algorithmic Fairness & Bias · CHI
Imprinto: Enhancing Infrared Inkjet Watermarking for Human and Machine Perception
Hybrid paper interfaces leverage augmented reality to combine the desired tangibility of paper documents with the affordances of interactive digital media. Typically, virtual content can be embedded through direct links (e.g., QR codes); however, this impacts the aesthetics of the paper print and limits the available visual content space. To address this problem, we present Imprinto, an infrared inkjet watermarking technique that allows for invisible content embeddings using only off-the-shelf IR inks and a camera. Imprinto was established through a psychophysical experiment, studying how much IR ink can be used while remaining invisible to users regardless of background color. We demonstrate that we can detect invisible IR content through our machine learning pipeline, and we developed an authoring tool that optimizes the amount of IR ink on the color regions of an input document for machine and human detectability. Finally, we demonstrate several applications, including augmenting paper documents and objects.
2025 · Martin Feick et al. · DFKI and Saarland University, Saarland Informatics Campus; MIT CSAIL · Electronic Textiles (E-textiles) · On-Skin Display & On-Skin Input · CHI
On-device Learning of EEGNet-based Network for Wearable Motor Imagery Brain-Computer Interface
Electroencephalogram (EEG)-based Brain-Computer Interfaces (BCIs) have garnered significant interest across various domains, including rehabilitation and robotics. Despite advancements in neural network-based EEG decoding, maintaining performance across diverse user populations remains challenging due to feature distribution drift. This paper presents an effective approach to address this challenge by implementing a lightweight and efficient on-device learning engine for wearable motor imagery recognition. The proposed approach, applied to the well-established EEGNet architecture, enables real-time and accurate adaptation to EEG signals from unregistered users. Leveraging GAP9, the newly released low-power parallel RISC-V-based processor from GreenWaves Technologies, and the PhysioNet EEG Motor Imagery dataset, we demonstrate a remarkable accuracy gain of up to 7.31% over the baseline with a memory footprint of 15.6 KByte. Furthermore, by optimizing the input stream, we achieve enhanced real-time performance without compromising inference accuracy. Our tailored approach exhibits an inference time of 14.9 ms and 0.76 mJ per single inference, and 20 µs and 0.83 µJ per single update during online training. These findings highlight the feasibility of our method for edge EEG devices as well as other battery-powered wearable AI systems suffering from subject-dependent feature distribution drift.
2024 · Sizhen Bian et al. · Electrical Muscle Stimulation (EMS) · Brain-Computer Interface (BCI) & Neurofeedback · UbiComp
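For readers unfamiliar with the architecture, an EEGNet-style decoder can be sketched compactly in PyTorch. The block below is an illustrative approximation: filter counts, kernel sizes, and the head-only adaptation step are assumptions in the spirit of the paper, not the authors' GAP9 implementation.

```python
import torch
import torch.nn as nn

class EEGNetLike(nn.Module):
    """Compact EEGNet-style decoder (a sketch, not the paper's exact code).

    Input: (batch, 1, channels, samples) raw EEG windows.
    """
    def __init__(self, n_channels=64, n_samples=480, n_classes=4,
                 f1=8, d=2, f2=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, f1, (1, 64), padding=(0, 32), bias=False),  # temporal filters
            nn.BatchNorm2d(f1),
            nn.Conv2d(f1, f1 * d, (n_channels, 1), groups=f1, bias=False),  # spatial depthwise
            nn.BatchNorm2d(f1 * d),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(0.25),
            nn.Conv2d(f1 * d, f1 * d, (1, 16), padding=(0, 8),
                      groups=f1 * d, bias=False),  # separable conv: depthwise part
            nn.Conv2d(f1 * d, f2, 1, bias=False),  # separable conv: pointwise part
            nn.BatchNorm2d(f2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(0.25),
        )
        with torch.no_grad():  # infer the flattened feature size
            n_flat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_flat, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# On-device adaptation in the paper's spirit: fine-tune only the small
# classifier head on a new user's data to counter distribution drift.
model = EEGNetLike()
opt = torch.optim.SGD(model.classifier.parameters(), lr=1e-2)
```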
iKAN: Global Incremental Learning with KAN for Human Activity Recognition Across Heterogeneous Datasets
This work proposes an incremental learning (IL) framework for wearable-sensor human activity recognition (HAR) that tackles two challenges simultaneously: catastrophic forgetting and non-uniform inputs. The scalable framework, iKAN, pioneers IL with Kolmogorov-Arnold Networks (KAN), replacing multi-layer perceptrons as the classifier to leverage the local plasticity and global stability of splines. To adapt KAN for HAR, iKAN uses task-specific feature branches and a feature redistribution layer. Unlike existing IL methods that primarily adjust the output dimension or the number of classifier nodes to adapt to new tasks, iKAN focuses on expanding the feature extraction branches to accommodate new inputs from different sensor modalities while maintaining consistent dimensions and the number of classifier outputs. Continual learning across six public HAR datasets demonstrated the iKAN framework's incremental learning performance, with a final performance of 84.9% (weighted F1 score) and an average incremental performance of 81.34%, significantly outperforming two existing incremental learning methods, EWC (51.42%) and experience replay (59.92%).
2024 · Mengxi Liu et al. · Human Pose & Activity Recognition · UbiComp
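A minimal sketch of the KAN idea referenced above: every input-output edge carries its own learnable 1-D function instead of a scalar weight. This toy version uses a fixed Gaussian basis with learnable coefficients rather than the B-splines of full KAN implementations, and it is not iKAN's actual classifier.

```python
import torch
import torch.nn as nn

class KANLayer(nn.Module):
    """Simplified Kolmogorov-Arnold layer (an illustrative sketch).

    Each (input, output) edge applies its own learnable 1-D function,
    parameterized here as a weighted sum of fixed Gaussian basis
    functions over a grid on [-1, 1].
    """
    def __init__(self, in_dim, out_dim, n_basis=8):
        super().__init__()
        self.centers = nn.Parameter(torch.linspace(-1, 1, n_basis),
                                    requires_grad=False)
        # One coefficient vector per (input, output) edge.
        self.coef = nn.Parameter(torch.randn(in_dim, out_dim, n_basis) * 0.1)

    def forward(self, x):  # x: (batch, in_dim)
        # Gaussian basis response of every input to every grid center.
        phi = torch.exp(-((x.unsqueeze(-1) - self.centers) ** 2) / 0.1)
        # Sum the per-edge functions over inputs: (batch, out_dim).
        return torch.einsum('bin,ion->bo', phi, self.coef)

# Drop-in replacement for an MLP classifier head over extracted features.
clf = nn.Sequential(nn.LayerNorm(64), KANLayer(64, 10))
```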
Enhancing Inertial Hand-based HAR through Joint Representation of Language, Pose and Synthetic IMUs
Due to the scarcity of labeled sensor data in HAR, prior research has turned to video data to synthesize Inertial Measurement Unit (IMU) data, capitalizing on its rich activity annotations. However, generating IMU data from videos presents challenges for HAR in real-world settings, attributed to the poor quality of synthetic IMU data and its limited efficacy for subtle, fine-grained motions. In this paper, we propose Multi³Net, a novel multi-modal, multi-task, contrastive-based framework that addresses the issue of limited data. Our pretraining procedure uses videos from online repositories to learn joint representations of text, pose, and IMU simultaneously. By employing video data and contrastive learning, our method seeks to enhance wearable HAR performance, especially in recognizing subtle activities. Our experimental findings validate the effectiveness of our approach in improving HAR performance with IMU data. We demonstrate that models trained with synthetic IMU data generated from videos using our method surpass existing approaches in recognizing fine-grained activities.
2024 · Vitor Fortes Rey et al. · Human Pose & Activity Recognition · Biosensors & Physiological Monitoring · UbiComp
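Contrastive alignment of two modalities is typically implemented as a symmetric InfoNCE loss; a sketch for the IMU-text pair is below. Multi³Net's actual objective, which also covers pose, may differ in details such as temperature and additional pairwise terms.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(z_imu, z_text, temperature=0.07):
    """Symmetric InfoNCE loss aligning IMU and text embeddings.

    z_imu, z_text: (batch, dim) embeddings where row i of each tensor
    comes from the same video clip. Aligning pose as well would add
    further pairwise terms of the same form.
    """
    z_imu = F.normalize(z_imu, dim=-1)
    z_text = F.normalize(z_text, dim=-1)
    logits = z_imu @ z_text.t() / temperature  # (batch, batch) similarities
    targets = torch.arange(z_imu.size(0), device=z_imu.device)
    # Matching pairs sit on the diagonal; treat rows and columns as
    # classification problems over the batch.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```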
Predicting the Limits: Tailoring Unnoticeable Hand Redirection Offsets in Virtual Reality to Individuals’ Perceptual Boundaries
Many illusion and interaction techniques in Virtual Reality (VR) rely on Hand Redirection (HR), which has proved to be effective as long as the introduced offsets between the position of the real and virtual hand do not noticeably disturb the user experience. Yet calibrating HR offsets is a tedious and time-consuming process involving psychophysical experimentation, and the resulting thresholds are known to be affected by many variables, limiting HR's practical utility. As a result, there is a clear need for alternative methods that allow tailoring HR to the perceptual boundaries of individual users. We conducted an experiment with 18 participants combining movement, eye gaze and EEG data to detect HR offsets Below, At, and Above individuals' detection thresholds. Our results suggest that we can distinguish HR At and Above from no HR. Our exploration provides a promising new direction with potentially strong implications for the broad field of VR illusions.
2024 · Martin Feick et al. · Full-Body Interaction & Embodied Input · Eye Tracking & Gaze Interaction · Brain-Computer Interface (BCI) & Neurofeedback · UIST
Head ’n Shoulder: Gesture-Driven Biking Through Capacitive Sensing Garments to Innovate Hands-Free Interaction
Distractions caused by digital devices are increasingly causing dangerous situations on the road, particularly for more vulnerable road users like cyclists. While researchers have been exploring ways to enable richer interaction scenarios on the bike, safety concerns are frequently neglected and compromised. In this work, we propose Head ’n Shoulder, a gesture-driven approach to bike interaction that does not affect bike control, based on a wearable garment that allows hands- and eyes-free interaction with digital devices through integrated capacitive sensors. It achieves an average accuracy of 97% in the final iteration, evaluated with 14 participants. Head ’n Shoulder does not rely on direct pressure sensing, allowing users to wear their everyday garments on top or underneath without affecting recognition accuracy. Our work introduces a promising research direction: easily deployable smart garments with a minimal set of gestures suited for most bike interaction scenarios, sustaining the rider’s comfort and safety.
2024 · Daniel Geißler et al. · Motion Sickness & Passenger Experience · Haptic Wearables · Foot & Wrist Interaction · MobileHCI
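As a rough illustration of a window-based recognition pipeline for capacitive garment channels: the window length, statistical features, and random-forest classifier below are assumptions for the sketch, not the paper's published pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def windows(x, size=100, step=50):
    """x: (samples, channels) capacitive readings -> list of windows."""
    return [x[i:i + size] for i in range(0, len(x) - size + 1, step)]

def features(w):
    # Simple per-channel statistics for one window.
    return np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)])

# Placeholder data standing in for recorded sensor streams and
# per-window gesture annotations.
X_raw = np.random.rand(1000, 4)                 # 4 capacitive channels
X = np.stack([features(w) for w in windows(X_raw)])
y = np.random.randint(0, 3, len(X))             # e.g., nod / shrug / none
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```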
Ghost Readers of the Nile: Decrypting Password Sharing Habits in Chatting Applications among Egyptian Women
Password sharing is a convenient means to access shared resources, save on subscription costs, provide emergency access, and avoid forgetting vital account details. However, it also raises significant privacy concerns, especially in digital communication contexts where content may be inadvertently exposed to unintended recipients. In this paper, we investigate this duality through a survey of 86 Egyptian women to understand their sharing behavior, and through the design and evaluation of a chat application used by 60 participants. This application issues warnings based on content sensitivity, leading to increased user awareness of privacy risks. Our findings indicate that, while many participants initially shared passwords, they were surprised to discover others doing the same. Furthermore, our application effectively reduced password sharing, reflecting improved awareness of the associated risks. This research acknowledges the cultural aspects of password sharing while striving to enhance the experience, enabling participants to make informed choices that improve their information control.
2024 · Mennatallah Saleh et al. · Privacy by Design & User Control · Passwords & Authentication · MobileHCI
Improving Conversational User Interfaces for Citizen Complaint Management through Enhanced Contextual Feedback
As cities transform, disrupting citizens' lives, their participation in urban development is often undervalued despite its importance. Citizen complaint systems exist but are often limited in fostering meaningful dialogue with municipalities. Meanwhile, smart cities aim to improve living standards, efficiency, and sustainability by integrating digital twins with physical infrastructures, potentially enhancing transparency and enriching communication between cities and their inhabitants with real-time data. Complementing these developments, technologies realizing Conversational User Interfaces (CUIs) are becoming more capable of supporting conversational, feedback-oriented processes such as complaint management. This work explores the improvement of CUIs for citizen complaint management through enhanced contextual feedback. We define contextual feedback as all information related to a complaint and/or the underlying problem that could be relevant to the user, for example background, conditions, explanations, timelines, and the existence of similar complaints. The solution proposed in this paper gathers data from users about their issues via a CUI, which then queries various data sources to obtain relevant contextual information. A Large Language Model subsequently processes the collected data to produce the corresponding feedback. In the study, a static CUI without contextual data, serving as the baseline, was compared to a CUI that includes contextual data, analyzing their impact on pragmatic and hedonic quality, reuse intention, and potential influence on citizens' trust in their municipality. The study was conducted in cooperation with the German municipality of Wadgassen. The good performance of the baseline system shows the general potential of LLMs in the citizen complaint domain even without additional data sources. The results show that contextual feedback performed better overall, with significant improvements in pragmatic and hedonic quality, attractiveness, reuse intention, the feeling that the complaint is taken seriously, and citizens' trust in their municipality.
2024 · Kai Karren et al. · Human-LLM Collaboration · Crowdsourcing Task Design & Quality Control · Smart Cities & Urban Sensing · CUI
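The described flow (the CUI collects the complaint, data sources are queried, an LLM composes the feedback) can be sketched as follows. `Complaint`, `query_city_data_sources`, and the `llm` callable are hypothetical placeholders for this sketch, not the system's actual API.

```python
from dataclasses import dataclass

@dataclass
class Complaint:
    text: str
    location: str

def query_city_data_sources(c: Complaint) -> list[str]:
    # Placeholder: would consult digital-twin state, maintenance
    # timelines, and records of similar complaints.
    return [f"3 similar reports near {c.location} in the last 30 days"]

def contextual_feedback(c: Complaint, llm) -> str:
    """Compose contextual feedback from the complaint plus retrieved context."""
    context = "\n".join(query_city_data_sources(c))
    prompt = (f"Citizen complaint: {c.text}\n"
              f"Known context:\n{context}\n"
              "Write an empathetic, factual status response.")
    return llm(prompt)  # `llm` is any text-completion callable
```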
Touching the Moon: Leveraging Passive Haptics, Embodiment and Presence for Operational Assessments in Virtual Reality
Space agencies are in the process of drawing up carefully thought-out Concepts of Operations (ConOps) for future human missions on the Moon. These are typically assessed and validated through costly and logistically demanding analogue field studies. While interactive simulations in Virtual Reality (VR) offer a comparatively cost-effective alternative, they have faced criticism for lacking the fidelity of real-world deployments. This paper explores the applicability of passive haptic interfaces in bridging the gap between simulated and real-world ConOps assessments. Leveraging passive haptic props (equipment mockup and astronaut gloves), we virtually recreated the Apollo 12 mission procedure and assessed it with experienced astronauts and other space experts. Quantitative and qualitative findings indicate that haptics increased presence and embodiment, thus improving perceived simulation fidelity and validity of user reflections. We conclude by discussing the potential role of passive haptic modalities in facilitating early-stage ConOps assessments for human endeavours on the Moon and beyond.
2024 · Florian Dufresne et al. · Arts et Métiers Institute of Technology; European Space Agency · Full-Body Interaction & Embodied Input · Immersion & Presence Research · CHI
Beyond the Blink: Investigating Combined Saccadic & Blink-Suppressed Hand Redirection in Virtual Reality
In pursuit of hand redirection techniques that are ever more tailored to human perception, we propose the first algorithm for hand redirection in virtual reality that makes use of saccades, i.e., fast ballistic eye movements that are accompanied by the perceptual phenomenon of change blindness. Our technique combines the previously proposed approaches of gradual hand warping and blink-suppressed hand redirection with the novel approach of saccadic redirection in one unified yet simple algorithm. We compare three variants of the proposed Saccadic & Blink-Suppressed Hand Redirection (SBHR) technique with the conventional approach to redirection in a psychophysical study (N=25). Our results highlight the great potential of our proposed technique for comfortable redirection by showing that SBHR allows for significantly greater magnitudes of unnoticeable redirection while being perceived as significantly less intrusive and less noticeable than commonly employed techniques that only use gradual hand warping.
2024 · André Zenner et al. · Saarland University, Saarland Informatics Campus; German Research Center for Artificial Intelligence (DFKI) · Hand Gesture Recognition · Full-Body Interaction & Embodied Input · Eye Tracking & Gaze Interaction · UIST
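The two ingredients SBHR unifies, gradual hand warping and suppression-masked offset injection, can be sketched as a per-frame update. The gains and blending rule below are illustrative assumptions about how such a combination might look, not the published algorithm.

```python
import numpy as np

def redirect_hand(real_pos, target_offset, applied_offset,
                  hand_speed, dt, saccade=False, blink=False,
                  warp_gain=0.05, suppressed_step=0.01):
    """One update step of a combined redirection scheme (illustrative sketch).

    real_pos, target_offset, applied_offset: 3-vectors in meters.
    - Gradual warping: inject offset in proportion to hand movement,
      so the drift stays below noticeable velocity.
    - Suppression: inject an extra instant step while a saccade or
      blink masks the visual change.
    Returns the rendered virtual hand position and the updated offset.
    """
    remaining = target_offset - applied_offset
    step = warp_gain * hand_speed * dt * remaining
    if saccade or blink:
        step = step + suppressed_step * remaining
    applied_offset = applied_offset + step
    return real_pos + applied_offset, applied_offset
```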
The Impact of Avatar Completeness on Embodiment and the Detectability of Hand Redirection in Virtual Reality
To enhance interactions in VR, many techniques introduce offsets between the virtual and real-world position of users’ hands. Nevertheless, such hand redirection (HR) techniques are only effective as long as they go unnoticed by users, not disrupting the VR experience. While several studies consider how much unnoticeable redirection can be applied, these focus on mid-air floating hands that are disconnected from users’ bodies. Increasingly, however, VR avatars are embodied as directly connected with the user’s body, providing stronger visual anchoring cues that may reduce the threshold for unnoticeable redirection. In this work, we studied more complete avatars and their effect on the sense of embodiment and the detectability of HR. We found that higher avatar completeness increases embodiment, and we provide evidence for the absence of practically relevant effects on the detectability of HR.
2024 · Martin Feick et al. · DFKI, Saarland Informatics Campus · Force Feedback & Pseudo-Haptic Weight · Full-Body Interaction & Embodied Input · Identity & Avatars in XR · CHI
FARPLS: A Feature-Augmented Robot Trajectory Preference Labeling System to Assist Human Labelers’ Preference Elicitation
Preference-based learning aims to align robot task objectives with human values. One of the most common methods to infer human preferences is pairwise comparison of robot task trajectories. Traditional comparison-based preference labeling systems seldom support labelers in digesting and identifying critical differences between complex trajectories recorded in videos. Our formative study (N = 12) suggests that individuals may overlook non-salient task features and establish biased preference criteria during their preference elicitation process because of partial observations. In addition, they may experience mental fatigue when given many pairs to compare, causing their label quality to deteriorate. To mitigate these issues, we propose FARPLS, a Feature-Augmented Robot trajectory Preference Labeling System. FARPLS highlights potential outliers in a wide variety of task features that matter to humans and extracts the corresponding video keyframes for easy review and comparison. It also dynamically adjusts the labeling order according to users’ familiarity, the difficulty of the trajectory pair, and the level of disagreement. At the same time, the system monitors labelers’ consistency and provides feedback on labeling progress to keep labelers engaged. A between-subjects study (N = 42, 105 pairs of robot pick-and-place trajectories per person) shows that FARPLS can help users establish preference criteria more easily and notice more relevant details in the presented trajectories than the conventional interface. FARPLS also improves labeling consistency and engagement, mitigating challenges in preference elicitation without significantly raising cognitive load.
2024 · Hanfang Lyu et al. · Human-Robot Collaboration (HRC) · Prototyping & User Testing · IUI