Move with Style! Enhancing Avatar Embodiment in Virtual Reality through Proprioceptive Motion Feedback
In virtual reality (VR), users slip into a variety of roles, represented by a rich diversity of avatars that each exhibit specific visual attributes and motion styles. While users can see their avatar's motion in VR, they usually cannot feel it. To enhance avatar embodiment, we propose active proprioceptive feedback that aligns users' physical movements with the expected motion style of their avatar, for instance, by mimicking the avatar's weight, typical motion speed, or motion range. We introduce a conceptual space of relevant motion properties that enables designers to create expressive proprioceptive motion styles for avatars. We instantiate this concept with MotionStyler: a system for designing customized motion styles and rendering them in real-time with an arm-based exoskeleton that is synchronized with the VR avatar. Results from a survey confirmed the expressiveness of the proposed conceptual space. A user study demonstrated the system's capability to create diverse proprioceptive motion styles which enhance users' self-identification with their avatar and thereby positively contribute to avatar embodiment in VR.
2025 · David Wagmann et al. · Force Feedback & Pseudo-Haptic Weight; Identity & Avatars in XR · UIST
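As a purely illustrative sketch for the MotionStyler entry above (the control law, gain, and names are our assumptions, not the authors' implementation), one such motion property, avatar weight, could be rendered as velocity-proportional resistance at the exoskeleton joint:

```python
def weight_resistance_torque(joint_velocity_rad_s, avatar_weight):
    """Render an avatar's 'weight' as velocity-proportional drag at an
    exoskeleton joint. The gain and the law itself are illustrative
    assumptions, not taken from the paper."""
    damping = 0.8 * avatar_weight           # heavier avatar -> stronger drag
    return -damping * joint_velocity_rad_s  # torque opposing the movement

# e.g., a heavy avatar (weight=2.0) during a fast arm swing (1.5 rad/s)
print(weight_resistance_torque(1.5, avatar_weight=2.0))  # -2.4 N*m
```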
Imaginary Joint: Proprioceptive Feedback for Virtual Body Extensions via Skin Stretch
Virtual body extensions such as a wing or tail have the potential to offer users new bodily experiences and capabilities in virtual and augmented reality. To use these extensions as naturally as one's own body, particularly for body parts that are normally hard to see, such as a tail, it is essential to provide proprioceptive feedback that allows users to perceive the position, orientation, and force exerted by these parts, rather than relying solely on visual cues. In this study, we propose a novel approach by introducing an "Imaginary Joint" at the interface between the user's actual body and the virtual extension, delivering information about joint flexion and force through skin-stretch feedback. We present a wearable device for skin-stretch feedback and explore mappings that convey the bending rotation and torque of the Imaginary Joint. The final system presents both types of information simultaneously by superimposing these skin deformations. Results from a controlled experiment demonstrate that users could identify tail position and force without relying on visual cues, and did so more effectively than in a vibrotactile condition. Furthermore, the tail was perceived as more embodied than with vibrotactile feedback, resulting in a more naturalistic and intuitive sensation. Finally, we introduce several application scenarios, including Perception of Extended Bodies, Enhanced Bodily Expression, and Body-Mediated Communication, and discuss the potential for future extensions of this system.
2025 · Shuto Takashita et al. · Haptic Wearables; Shape-Changing Interfaces & Soft Robotic Materials; Dance & Body Movement Computing · UIST
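A minimal sketch of the superposition idea described in the Imaginary Joint entry, as we read it (the gains, clamping range, and function name are hypothetical, not the authors' code): joint bend and torque each map to a skin-stretch displacement, and the device renders their sum:

```python
def skin_stretch_mm(bend_deg, torque_nm,
                    bend_gain=0.05, torque_gain=2.0, max_mm=6.0):
    """Superimpose two mappings into one skin-stretch displacement:
    joint bend sets a positional offset, torque adds stretch on top.
    All gains and the clamping range are illustrative assumptions."""
    offset = bend_gain * bend_deg     # stretch encoding joint rotation
    load = torque_gain * torque_nm    # stretch encoding exerted torque
    return max(-max_mm, min(max_mm, offset + load))

# e.g., tail bent 40 degrees while pushing with 1.5 N*m of torque
print(skin_stretch_mm(40.0, 1.5))  # 5.0 mm of skin stretch
```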
eTactileKit: A Toolkit for Design Exploration and Rapid Prototyping of Electro-Tactile Interfaces
Electro-tactile interfaces are becoming increasingly popular due to their unique advantages, such as fast and localised tactile response, thin and flexible form factors, and the potential to create novel tactile experiences. However, insights from a formative study with designers highlighted a lack of resources, limited access to information, and the complexity of software and hardware tools. Together, these establish a high barrier to entry and limit the ability to rapidly prototype and experiment with electro-tactile interfaces. To address these challenges, we propose eTactileKit, a scalable and accessible toolkit providing end-to-end support for designing and prototyping electro-tactile interfaces. eTactileKit comprises a hardware platform and a software framework for designing, simulating, and exploring electro-tactile stimuli. We evaluated the impact and usability of eTactileKit through a three-week-long take-home study, which demonstrated increased accessibility, ease of use, and a positive impact on design workflows. Additionally, we implemented a set of use cases to demonstrate the toolkit's practicality and effectiveness across various applications.
2025 · Praneeth Bimsara Perera et al. · Electrical Muscle Stimulation (EMS); Prototyping & User Testing · UIST
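To make the kind of stimulus design such a toolkit supports concrete, here is a hypothetical sketch (the parameter names, values, and API are our assumptions, not eTactileKit's actual interface) of describing an electro-tactile pattern as data:

```python
from dataclasses import dataclass

@dataclass
class TactilePulse:
    """One electro-tactile stimulus, described by the parameters such
    interfaces typically expose. Values are placeholders and are not
    safety-checked; real systems enforce strict current limits."""
    electrode: int        # index of the electrode to drive
    amplitude_ma: float   # pulse amplitude in milliamperes
    pulse_width_us: int   # pulse width in microseconds
    frequency_hz: int     # pulse repetition rate in hertz

def sweep(electrodes, amplitude_ma=1.0, pulse_width_us=200, frequency_hz=100):
    """A simple spatial pattern: step one pulse across an electrode array."""
    return [TactilePulse(e, amplitude_ma, pulse_width_us, frequency_hz)
            for e in electrodes]

pattern = sweep(electrodes=range(8))  # left-to-right sweep over 8 electrodes
```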
Texergy: Textile-based Harvesting, Storing, and Releasing of Mechanical Energy for Passive On-Body Actuation
Humans instinctively manipulate and "actuate" their clothing, for instance, to adapt to the environment or to modify aesthetics. However, such manual actuation remains inflexible and directly tied to user action. We introduce Texergy, a textile-based technical framework that decouples user input from actuated output to make passive on-body actuation interactive and programmable. Texergy achieves this by harvesting energy from user interactions with a set of input modules, storing it mechanically on the body in elastic materials, later releasing the energy on demand, and finally connecting to output end-effectors that realize the actuation. We present a fabrication approach based almost entirely on textile materials, using laser-cutting and simple manual assembly, to enable integration into clothing and easy prototyping. We report the results of technical experiments and provide a design tool to support customizing the actuation's force and distance, the type of harvesting, and the deployment of Texergy mechanisms. We practically demonstrate the capabilities of Texergy with four applications, including a quick-release belt, a passive exosuit with dynamic assistance, a haptic feedback top powered by implicit user actions in VR, and a dance-driven shape-changing costume.
2025 · Yu Jiang et al. · Force Feedback & Pseudo-Haptic Weight; Haptic Wearables; Shape-Changing Interfaces & Soft Robotic Materials · UIST
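The harvest-store-release budget of an elastic energy store can be estimated with a first-order spring model; the sketch below is our own back-of-the-envelope aid, not part of Texergy's design tool, and real textile elastics deviate from ideal spring behavior:

```python
def stored_energy_j(stiffness_n_per_m, stretch_m):
    """Elastic energy stored in a stretched band, ideal-spring model:
    E = 1/2 * k * x^2. First-order estimate only; textile elastics
    are nonlinear and lossy in practice."""
    return 0.5 * stiffness_n_per_m * stretch_m ** 2

def release_force_n(stiffness_n_per_m, stretch_m):
    """Peak force available at release for the same ideal spring: F = k * x."""
    return stiffness_n_per_m * stretch_m

# e.g., a 200 N/m elastic stretched by 15 cm stores 2.25 J at a 30 N peak
print(stored_energy_j(200.0, 0.15), release_force_n(200.0, 0.15))
```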
GestureCoach: Rehearsing for Engaging Talks with LLM-Driven Gesture Recommendations
This paper introduces GestureCoach, a system designed to help speakers deliver more engaging talks by guiding them to gesture effectively during rehearsal. GestureCoach combines an LLM-driven gesture recommendation model with a rehearsal interface that proactively cues speakers to gesture appropriately. Trained on experts' gesturing patterns from TED talks, the model consists of two modules: an emphasis proposal module, which predicts when to gesture by identifying gesture-worthy text segments in the presenter notes, and a gesture identification module, which determines what gesture to use by retrieving semantically appropriate gestures from a curated gesture database. Results of a model performance evaluation and user study (N=30) show that the emphasis proposal module outperforms off-the-shelf LLMs in identifying suitable gesture regions, and that participants rated the majority of these predicted regions and their corresponding gestures as highly appropriate. A subsequent user study (N=10) showed that rehearsing with GestureCoach encouraged speakers to gesture and significantly increased gesture diversity, resulting in more engaging talks. We conclude with design implications for future AI-driven rehearsal systems.
2025 · Ashwin Ram et al. · Hand Gesture Recognition; Human-LLM Collaboration; Creative Collaboration & Feedback Systems · UIST
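A schematic sketch of the two-stage recommendation flow described above (the function names and data shapes are our assumptions; GestureCoach's actual modules are learned models, not these stubs):

```python
from dataclasses import dataclass

@dataclass
class GestureCue:
    segment: str      # gesture-worthy span of the presenter notes
    gesture_id: str   # gesture retrieved from the gesture database
    start_word: int   # where in the notes to cue the speaker

def recommend_gestures(notes, propose_emphasis, retrieve_gesture):
    """Stage 1 (WHEN): an emphasis model proposes gesture-worthy
    segments as (start_word, segment) pairs. Stage 2 (WHAT): a
    retrieval model looks up a semantically fitting gesture for each."""
    cues = []
    for start_word, segment in propose_emphasis(notes):
        cues.append(GestureCue(segment, retrieve_gesture(segment), start_word))
    return cues
```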
Learn, Explore and Reflect by Chatting: Understanding the Value of an LLM-Based Voting Advice Application Chatbot
Voting advice applications (VAAs), which have become increasingly prominent in European elections, are seen as a successful tool for boosting electorates' political knowledge and engagement. However, VAAs' complex language and rigid presentation limit their utility for less politically sophisticated voters. While previous work enhanced VAAs' click-based interaction with scripted explanations, a conversational chatbot's potential for tailored discussion and deliberate political decision-making remains untapped. Our exploratory mixed-method study investigates how LLM-based chatbots can support voting preparation. We deployed a VAA chatbot to 331 users before Germany's 2024 European Parliament election, gathering insights from surveys, conversation logs, and 10 follow-up interviews. Participants found the VAA chatbot intuitive and informative, citing its simple language and flexible interaction. We further uncovered the VAA chatbot's role as a catalyst for reflection and rationalization. Expanding on participants' desire for transparency, we provide design recommendations for building interactive and trustworthy VAA chatbots.
2025 · Jianlong Zhu et al. · Conversational Chatbots; Human-LLM Collaboration; AI Ethics, Fairness & Accountability · CUI
Towards Trustable Intelligent Clinical Decision Support Systems: A User Study with Ophthalmologists
Integrating Artificial Intelligence (AI) into Clinical Decision Support Systems (CDSS) presents significant opportunities for improving healthcare delivery, particularly in fields like ophthalmology. This paper explores the usability and trustworthiness of an AI-driven CDSS designed to assist ophthalmologists in treating diabetic retinopathy and age-related macular degeneration. To this end, we created a CDSS and evaluated its impact on efficiency, informedness, and user experience through task-based semi-structured interviews and questionnaires with 11 ophthalmologists. The usability of the CDSS was rated highly, with a System Usability Scale (SUS) score of 81.75. Results also show that participants felt the CDSS would improve their efficiency and informedness, with one major contributing factor being the integration of Electronic Health Records (EHR) and Optical Coherence Tomography (OCT) data into a single interface. We further explored the trustworthiness of individual AI components, specifically OCT segmentation, treatment recommendation, and visual acuity forecasting. Through thematic analysis, we identified key factors influencing trustworthiness and clinical adoption. Results show that a larger degree of abstraction from a model's input to its output correlates with decreased trust. From our findings, we propose two guidelines for designing trustworthy CDSS.
2025 · Robert Andreas Leist et al. · Explainable AI (XAI); Telemedicine & Remote Patient Monitoring · IUI
CreepyCoCreator? Investigating AI Representation Modes for 3D Object Co-Creation in Virtual Reality
Generative AI in Virtual Reality offers the potential for collaborative object-building, yet challenges remain in aligning AI contributions with user expectations. In particular, users often struggle to understand and collaborate with AI when its actions are not transparently represented. This paper thus explores the co-creative object-building process through a Wizard-of-Oz study, focusing on how AI can effectively convey its intent to users during object customization in Virtual Reality. Inspired by human-to-human collaboration, we focus on three representation modes: the presence of an embodied avatar, whether the AI's contributions are visualized immediately or incrementally, and whether the areas to be modified are highlighted in advance. The findings provide insights into how these factors affect user perception of and interaction with object-generating AI tools in Virtual Reality, as well as satisfaction and ownership of the created objects. The results offer design implications for co-creative world-building systems, aiming to foster more effective and satisfying collaborations between humans and AI in Virtual Reality.
2025 · Julian Rasch et al., LMU Munich · Mixed Reality Workspaces; Creative Collaboration & Feedback Systems · CHI
ExoKit: A Toolkit for Rapid Prototyping of Interactions for Arm-based Exoskeletons
Exoskeletons open up a unique interaction space that seamlessly integrates users' body movements with robotic actuation. Despite this potential, human-exoskeleton interaction remains an underexplored area in HCI, largely due to the lack of accessible prototyping tools that enable designers to easily develop exoskeleton designs and customized interactive behaviors. We present ExoKit, a do-it-yourself toolkit for rapid prototyping of low-fidelity, functional exoskeletons, targeted at novice roboticists. ExoKit includes modular hardware components for sensing and actuating shoulder and elbow joints, which are easy to fabricate and (re)configure for customized functionality and wearability. To simplify the programming of interactive behaviors, we propose functional abstractions that encapsulate high-level human-exoskeleton interactions. These can be readily accessed through ExoKit's command-line or graphical user interface, a Processing library, or microcontroller firmware, each targeted at a different experience level. Findings from implemented application cases and two usage studies demonstrate the versatility and accessibility of ExoKit for early-stage interaction design.
2025 · Marie Muehlhaus et al., Saarland Informatics Campus, Saarland University · Force Feedback & Pseudo-Haptic Weight; Shape-Changing Interfaces & Soft Robotic Materials; Circuit Making & Hardware Prototyping · CHI
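ExoKit's actual programming interfaces are a command-line and graphical interface, a Processing library, and microcontroller firmware; the Python-style sketch below only illustrates the flavor of functional abstraction the abstract describes, and every name in it is hypothetical:

```python
class Exo:
    """Hypothetical facade over an arm exoskeleton's shoulder and
    elbow joints; method names and semantics are illustrative only."""
    def __init__(self, port):
        self.port = port                     # serial link to the microcontroller

    def read_angle(self, joint):             # sensing abstraction
        ...
    def move_to(self, joint, deg, speed):    # actuation abstraction
        ...
    def resist(self, joint, torque):         # render joint resistance
        ...

# High-level interaction: guide the elbow through a motion sequence
exo = Exo("/dev/ttyUSB0")
for angle in (10, 45, 90):
    exo.move_to(joint="elbow", deg=angle, speed=30)
```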
3HANDS Dataset: Learning from Humans for Generating Naturalistic Handovers with Supernumerary Robotic Limbs
Supernumerary robotic limbs (SRLs) are robotic structures integrated closely with the user's body, which augment human physical capabilities and necessitate seamless, naturalistic human-machine interaction. For effective assistance in physical tasks, enabling SRLs to hand over objects to humans is crucial. Yet, designing heuristic-based handover policies for robots is time-consuming, difficult to generalize across tasks, and results in less human-like motion. When trained with proper datasets, generative models are powerful alternatives for creating naturalistic handover motions. We introduce 3HANDS, a novel dataset of object handover interactions between a participant performing a daily activity and another participant enacting a hip-mounted SRL in a naturalistic manner. 3HANDS captures the unique characteristics of SRL interactions: operating in intimate personal space with asymmetric object origins, implicit motion synchronization, and the user's engagement in a primary task during the handover. To demonstrate the effectiveness of our dataset, we present three models: one that generates naturalistic handover trajectories, another that determines the appropriate handover endpoints, and a third that predicts the moment to initiate a handover. In a user study (N=10), we compared handover interactions generated with our method to a baseline. The findings show that our method was perceived as significantly more natural, less physically demanding, and more comfortable.
2025 · Artin Saberpour Abadian et al., Saarland University, Saarland Informatics Campus · Teleoperated Driving; Human-Robot Collaboration (HRC) · CHI
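The three models chain naturally at inference time; here is a schematic sketch under assumed interfaces (none of these names come from the paper, and 'scene' stands in for whatever state the learned models actually consume):

```python
def run_handover(scene, timing_model, endpoint_model, trajectory_model):
    """Chain the three 3HANDS-style models: WHEN to initiate the
    handover, WHERE to hand the object over, and HOW to move there."""
    if not timing_model(scene):               # not a good moment yet
        return None
    endpoint = endpoint_model(scene)          # appropriate handover point
    return trajectory_model(scene, endpoint)  # naturalistic motion path
```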
Motion-Coupled Asymmetric Vibration for Pseudo Force Rendering in Virtual Reality
In Virtual Reality (VR), rendering realistic forces is crucial for immersion, but traditional vibrotactile feedback fails to convey force sensations effectively. Studies of asymmetric vibrations that elicit pseudo forces show promise, but such vibrations are inherently tied to unwanted vibration sensations, reducing realism. Leveraging sensory attenuation, the reduced perceived intensity of self-generated sensations during movement, we present a novel algorithm that couples asymmetric vibrations with user motion to mimic self-generated sensations. Our psychophysics study with 12 participants shows that motion-coupled asymmetric vibration attenuates the experience of vibration (equivalent to a ~30% reduction in vibration amplitude) while preserving the experience of force, compared to continuous asymmetric vibrations (the state of the art). We demonstrate the effectiveness of our approach in VR through three scenarios: shooting arrows, lifting weights, and simulating haptic magnets. Results revealed that participants preferred forces elicited by motion-coupled asymmetric vibration for tasks like shooting arrows and lifting weights. This research highlights the potential of motion-coupled asymmetric vibrations, offers new insights into sensory attenuation, and advances force rendering in VR.
2025 · Nihar Sabnis et al., Max Planck Institute for Informatics, Saarland Informatics Campus, Sensorimotor Interaction · Force Feedback & Pseudo-Haptic Weight · CHI
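A toy sketch of motion coupling as we understand it (the waveform, gain, and threshold are illustrative assumptions, not the authors' algorithm): the asymmetric drive signal is amplitude-gated by hand speed, so vibration only occurs during self-generated movement and is perceptually attenuated:

```python
import numpy as np

def asymmetric_wave(t, freq_hz=40.0):
    """Sawtooth-like asymmetric waveform (slow rise, sharp fall), the
    class of signal known to elicit a directional pseudo force."""
    return 2.0 * ((t * freq_hz) % 1.0) - 1.0

def motion_gate(hand_speed_m_s, threshold=0.05, gain=2.0, max_amp=1.0):
    """Amplitude gate: zero while the hand is at rest, ramping up
    with hand speed so the vibration reads as self-generated."""
    if hand_speed_m_s < threshold:
        return 0.0
    return min(max_amp, gain * (hand_speed_m_s - threshold))

# 0.5 s of drive signal at 1 kHz while the hand moves at 0.4 m/s
t = np.arange(0.0, 0.5, 0.001)
drive = motion_gate(0.4) * asymmetric_wave(t)
```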
Understanding the Security Advice Mechanisms of Low Socioeconomic Pakistanis
People of low socioeconomic status face severe security challenges while being unable to access traditional written advice resources. We present the first study to explore the security advice landscape of people of low socioeconomic status in Pakistan. Through 20 semi-structured interviews, we uncover how they learn and share security advice and what factors enable or limit their advice sharing. Our findings highlight that they rely heavily on community advice and intermediation to establish and maintain security-related practices (such as passwords). We uncover how shifting social environments shape advice dissemination, e.g., across different workplaces. Participants leverage their social structures to protect each other against threats that exploit their financial vulnerability and lack of digital literacy. However, we also uncover barriers to these social advice mechanisms that limit their effectiveness and may lead to increased security and privacy risks. Our results lay the foundation for rethinking security paradigms and advice for this vulnerable population.
2025 · Sumair Ijaz Hashmi et al., CISPA Helmholtz Center for Information Security and Saarland University · Privacy by Design & User Control; Dark Patterns Recognition; Empowerment of Marginalized Groups · CHI
Curious Shorts: Curiosity-Driven Exploration and Learning on Short-Form Video Platforms
Short-form video platforms like YouTube Shorts captivate users with engaging content, but their potential for promoting incidental learning remains underexplored. We present Curious Shorts, a conceptual framework that extends the Hook Model, designed to enhance curiosity-driven exploration and incidental learning on these platforms. In Study 1, we empirically tested two designs that incorporate "curiosity nudges" (interactive prompts that spark curiosity and encourage further exploration) with follow-up videos to satisfy that curiosity. Results show that specific, question-driven prompts proved most effective, significantly boosting curiosity and encouraging more focused and intentional viewing compared to the baseline. Study 2 examined whether this design enhances incidental learning without compromising engagement. Findings confirmed improved learning outcomes. However, when applied to a realistic viewing environment interspersed with entertainment videos, engagement remained high while learning benefits diminished. We conclude with implications for balancing learning and engagement on short-form video platforms and propose directions for future research.
2025 · Felicia Fang-Yi Tan et al., National University of Singapore, Augmented Human Lab, School of Computing · Human-LLM Collaboration; Data Storytelling; Online Learning & MOOC Platforms · CHI
IrOnTex: Using Ironable 3D Printed Objects to Fabricate and Prototype Customizable Interactive Textiles
Yu et al. propose IrOnTex, which uses ironable 3D-printed objects to fabricate customizable interactive textiles and enables rapid prototyping.
2024 · Jiakun Yu et al. · Desktop 3D Printing & Personal Fabrication; Textile Art & Craft Digitization · UbiComp
MediKnit: Soft Medical Making for Personalized and Clinician-Designed Wearable Devices for Hand Edema
Kim et al. develop MediKnit, a soft medical making system that enables clinicians to rapidly design and customize personalized wearable compression devices for patients with hand edema, providing a low-cost clinical solution.
2024 · Heather Jin Hjee Kim et al. · Haptic Wearables; Shape-Changing Interfaces & Soft Robotic Materials; Chronic Disease Self-Management (Diabetes, Hypertension, etc.) · UbiComp
CoplayingVR: Understanding User Experience in Shared Control in Virtual Reality
Zhou et al. study shared-control mechanisms in VR through user experiments and find that the mode of collaborative control significantly affects user engagement and task completion efficiency.
2024 · Hongyu Zhou et al. · Social & Collaborative VR; Mixed Reality Workspaces · UbiComp
Embrogami: Shape-Changing Textiles with Machine Embroidery
Machine embroidery is a versatile technique for creating custom and entirely fabric-based patterns on thin and conformable textile surfaces. However, existing machine-embroidered surfaces remain static, limiting the interactions they can support. We introduce Embrogami, an approach for fabricating textile structures with versatile shape-changing behaviors. Inspired by origami, we leverage machine embroidery to form fingertip-scale mountain-and-valley structures on textiles with customized shapes, bistable or elastic behaviors, and modular composition. The structures can be actuated by the user or the system to modify the local textile surface topology, creating interactive elements like toggles and sliders or textile shape displays with an ultra-thin, flexible, and integrated form factor. We provide a dedicated software tool and report results of technical experiments that allow users to flexibly design, fabricate, and deploy customized Embrogami structures. With four application cases, we showcase Embrogami's potential for creating functional and flexible shape-changing textiles with diverse visuo-tactile feedback.
2024 · Yu Jiang et al. · Haptic Wearables; Shape-Changing Interfaces & Soft Robotic Materials · UIST
Who did it? How User Agency is influenced by Visual Properties of Generated Images
The increasing proliferation of AI and GenAI requires new interfaces tailored to where their specific affordances meet human requirements. As GenAI is capable of taking over tasks from users on an unprecedented scale, designing the experience of agency (if and how users experience control over the process and responsibility for the outcome) is crucial. As an initial step towards design guidelines for shaping agency, we present a study that explores how features of AI-generated images influence users' experience of agency. We use two measures: temporal binding to implicitly estimate pre-reflective agency, and magnitude estimation to assess explicit user judgments of agency. We observe that abstract images lead to more temporal binding than images with semantic meaning. In contrast, the closer an image aligns with what a user might expect, the higher the agency judgment. When comparing the experiment results with objective metrics of image differences, we find that temporal binding results correlate with semantic differences, while agency judgments are better explained by local differences between images. This work contributes towards a future where agency is considered an important design dimension for GenAI interfaces.
2024 · Johanna K. Didion et al. · Generative AI (Text, Image, Music, Video); Explainable AI (XAI) · UIST
Improving Conversational User Interfaces for Citizen Complaint Management through enhanced Contextual Feedback
As cities transform, disrupting citizens' lives, their participation in urban development is often undervalued despite its importance. Citizen complaint systems exist but are often limited in fostering meaningful dialogue with municipalities. Meanwhile, smart cities aim to improve living standards, efficiency, and sustainability by integrating digital twins with physical infrastructures, potentially enhancing transparency and enriching communication between cities and their inhabitants with real-time data. Complementing these developments, Conversational User Interfaces (CUIs) are becoming increasingly capable of supporting conversational, feedback-oriented processes such as complaint management. This work explores how enhanced contextual feedback can improve CUIs for citizen complaint management. We define contextual feedback as all information related to a complaint and/or the underlying problem that could potentially be relevant for the user (for example, background, conditions, explanations, timelines, and the existence of similar complaints). The solution proposed in this paper gathers data from users about their issues via a CUI, which subsequently queries various data sources to obtain relevant contextual information. Following this, a Large Language Model processes the collected data to produce the corresponding feedback. In the study, a static CUI without contextual data (the baseline) was compared to a CUI that includes contextual data, analyzing their impact on pragmatic and hedonic quality, reuse intention, and citizens' trust in their municipality. The study was conducted in cooperation with the German municipality of Wadgassen. The good performance of the baseline system shows the general potential of LLMs in the citizen complaint domain even without additional data sources. The results show that contextual feedback performed better overall, with significant improvements in pragmatic and hedonic quality, attractiveness, reuse intention, the feeling that the complaint is taken seriously, and citizens' trust in their municipality.
2024 · Kai Karren et al. · Human-LLM Collaboration; Crowdsourcing Task Design & Quality Control; Smart Cities & Urban Sensing · CUI
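A schematic sketch of the described pipeline (the data-source interface, prompt wording, and all names are our assumptions, not the deployed system):

```python
def contextual_feedback(complaint, data_sources, llm):
    """Collect contextual information for a complaint, then have an
    LLM compose feedback that weaves the context into the reply."""
    context = []
    for source in data_sources:                   # e.g., digital twin,
        context.extend(source.lookup(complaint))  # similar complaints, ...
    prompt = (
        "Citizen complaint:\n" + complaint + "\n\n"
        "Relevant context (background, timelines, similar complaints):\n"
        + "\n".join(context) + "\n\n"
        "Write a transparent, respectful response for the citizen."
    )
    return llm(prompt)
```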
SoftBioMorph: Fabricating Sustainable Shape-changing Interfaces using Soft Biopolymers
Bio-based and biodegradable materials have shown promising results for sustainable Human-Computer Interaction (HCI) applications, including shape-changing interfaces. However, the diversity of shape-changing behaviors achievable with these materials remains unclear, as the fabrication knowledge is scattered across multiple research fields. This paper introduces SoftBioMorph, a fabrication framework that integrates the fabrication know-how of sustainable soft shape-changing interfaces with biopolymers. Using Sodium Alginate as its example material, the framework contributes (1) a set of material synthesis processes that modify the biopolymer's properties to fulfill different functions; (2) a set of DIY crafting-based assembly techniques that tune material and assembly properties to achieve three primitive types of shape change; and (3) a series of application cases that demonstrate the versatility of the framework. We further discuss limitations, research questions, and fabrication challenges, presenting a comprehensive approach to sustainable prototyping in HCI.
2024 · Madalina Nicolae et al. · Shape-Changing Interfaces & Soft Robotic Materials; Shape-Changing Materials & 4D Printing; Sustainable HCI · DIS