"Can I Decorate My Teeth With Diamonds?": Exploring Multi-Stakeholder Perspectives on Using VR to Reduce Children's Dental Anxiety
Dental anxiety is prevalent among children, often leading to missed treatment and potential negative effects on their mental well-being. While several interventions (e.g., pharmacological and psychotherapeutic techniques) have been introduced for anxiety alleviation, the recently emerged virtual reality (VR) technology, with its immersive and playful nature, opens new opportunities for complementing and enhancing the therapeutic effects of existing interventions. In this light, we conducted a series of co-design workshops with 13 children aged 10-12 to explore how they envisioned using VR to address their fear and stress associated with dental visits, followed by interviews with parents (n = 13) and two dentists. Our findings revealed that children expected VR to provide immediate relief, social support, and a sense of control during dental treatment; parents sought educational opportunities for their children to learn about oral health; and dentists prioritized treatment efficiency and safety. Drawing from the findings, we discuss multi-stakeholder considerations for developing VR-assisted anxiety management applications for children within and beyond dental settings.
2025 · Yaxuan MAO et al. · CSCW · Tags: Perspectives on VR
Extendlibur: Dynamic Haptic Retargeting for Length-Mismatched Proxies in Co-Located VR
Haptic retargeting is an effective technique for delivering realistic haptic feedback from a single physical proxy to multiple virtual objects. Previous studies have mainly focused on single-user scenarios involving virtual objects with varying shapes or locations, but few have explored how to retarget multiple virtual objects that are jointly manipulated or interacted with by multiple users. This paper presents a new haptic retargeting technique for co-located VR. It allows two users to interact using shape-mismatched virtual tools, specifically tools whose lengths differ from the physical props they hold. By gradually offsetting the virtual tools, the technique ensures appropriate haptic feedback and creates the illusion of using tools with different lengths. We conducted a user study to examine how much the virtual tool's length can be altered with our approach without breaking the illusion for users. Based on the findings, we proposed two example uses and validated them in a follow-up application study. The results show that our method can provide more realistic and enjoyable experiences in shared VR environments.
2025 · Junyu Chen et al. · UIST · Tags: Mid-Air Haptics (Ultrasonic); Haptic Wearables
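The "gradual offsetting" that underlies this family of haptic-retargeting work can be illustrated with a minimal sketch. This is a generic linear blend under assumed names (`retargeted_position`, a straight-line reach), not the paper's actual warping function:

```python
# Minimal sketch of gradual haptic-retargeting offset. As the real hand moves
# from its start point toward the physical prop, an offset toward the virtual
# target is blended in, so the virtual tool tip and the real prop are reached
# at the same moment. All names and the linear blend are illustrative.

def retargeted_position(physical_pos, start_pos, physical_target, virtual_target):
    """Rendered (virtual) position for the current real hand position."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    total = dist(start_pos, physical_target)
    remaining = dist(physical_pos, physical_target)
    # alpha ramps from 0 (start of reach) to 1 (at the physical target).
    alpha = 0.0 if total == 0 else min(1.0, max(0.0, 1.0 - remaining / total))

    offset = [v - p for v, p in zip(virtual_target, physical_target)]
    return [p + alpha * o for p, o in zip(physical_pos, offset)]

# At the start of the reach the rendered hand matches the real one;
# at the physical prop it coincides with the (longer) virtual tool's target.
print(retargeted_position([0, 0, 0], [0, 0, 0], [1, 0, 0], [1.3, 0, 0]))  # [0.0, 0.0, 0.0]
print(retargeted_position([1, 0, 0], [0, 0, 0], [1, 0, 0], [1.3, 0, 0]))  # [1.3, 0.0, 0.0]
```

Because the offset is introduced gradually over the reach, the visual-proprioceptive mismatch at any instant stays below the user's detection threshold, which is exactly the quantity the paper's first study measures.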
RAGTrace: Understanding and Refining Retrieval-Generation Dynamics in Retrieval-Augmented Generation
Retrieval-Augmented Generation (RAG) systems have emerged as a promising solution for enhancing large language models (LLMs) by integrating external knowledge retrieval with generative capabilities. While significant advancements have been made in improving retrieval accuracy and response quality, a critical challenge remains: the internal knowledge integration and retrieval-generation interactions in RAG systems are largely opaque. This paper introduces RAGTrace, a system designed to analyze retrieval and generation dynamics in RAG-based systems. Informed by a comprehensive literature review and expert interviews, the system supports a multi-level analysis approach, ranging from high-level performance evaluation to fine-grained examination of retrieval relevance, generation fidelity, and cross-component interactions. Unlike conventional evaluation practices that focus on isolated retrieval or generation quality assessments, RAGTrace enables an integrated exploration of retrieval-generation relationships, allowing users to trace knowledge sources and identify potential failure cases. The system's workflow allows users to build, evaluate, and iterate on retrieval processes tailored to their specific domains of interest. The effectiveness of the system is demonstrated through case studies and expert evaluations on real-world RAG applications.
2025 · Jiaping Li et al. · UIST · Tags: Human-LLM Collaboration; Explainable AI (XAI); AI-Assisted Decision-Making & Automation
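Tracing knowledge sources of this kind presupposes a per-query record that links retrieved evidence to the generated answer. The sketch below shows one plausible shape for such a record; the field names, the score threshold, and `cited_chunks` are assumptions for illustration, not RAGTrace's actual data model:

```python
# Illustrative per-query trace record for a RAG pipeline: it ties the query,
# the retrieved chunks (with relevance scores), and the final answer together
# so an analyst can ask which sources plausibly grounded the generation.
from dataclasses import dataclass, field

@dataclass
class RetrievedChunk:
    doc_id: str
    text: str
    score: float  # retrieval relevance score in [0, 1]

@dataclass
class RAGTraceRecord:
    query: str
    chunks: list = field(default_factory=list)
    answer: str = ""

    def cited_chunks(self, min_score=0.5):
        """Chunks relevant enough to count as likely knowledge sources."""
        return [c for c in self.chunks if c.score >= min_score]

trace = RAGTraceRecord(query="What is haptic retargeting?")
trace.chunks = [
    RetrievedChunk("d1", "Haptic retargeting warps hand motion toward a proxy.", 0.91),
    RetrievedChunk("d2", "An unrelated passage about display calibration.", 0.12),
]
trace.answer = "Haptic retargeting redirects real hand movements so one prop serves many virtual objects."
print([c.doc_id for c in trace.cited_chunks()])  # ['d1']
```

A record like this is what makes the paper's "integrated exploration" possible: low-score chunks that nonetheless shaped the answer, or high-score chunks the answer ignored, both surface as candidate failure cases.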
HapticWings: Enhancing the Experience of Extra Wing Motions in Virtual Reality through Dynamic 2D Weight Shifting
In virtual reality (VR), our virtual body can have characteristics different from our real body, such as appearance, size, and even extra body parts. Previous research shows that haptic feedback enhances users' perceived embodiment of such dissimilar avatars; in particular, weight-shifting devices have shown potential to enhance the perception of arm deformation. However, there has been no exploration of using such techniques to enhance embodiment with extra body parts, like wings. We introduce HapticWings, a back-wearable 2D weight-shifting device that provides haptic feedback for wing motions, enhancing user embodiment of avatars with extra wings. In three user studies, we explored (1) users' ability to recognize different weight-shifting motions provided by HapticWings, (2) users' perceived embodiment of avatars with extra wings when haptic feedback accompanies wing motions, and (3) four possible applications, two of which we used to evaluate users' sense of realism and enjoyment in VR.
2025 · Yingjie Chang et al. · DIS · Tags: Force Feedback & Pseudo-Haptic Weight; Shape-Changing Interfaces & Soft Robotic Materials; Immersion & Presence Research
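A back-worn moving-mass device needs some mapping from the avatar's wing motion to a 2D mass position. The sketch below is one simple way to phrase such a mapping; the function name, the sinusoidal stroke profile, and the 0.1 m travel radius are all assumptions for illustration, not the paper's control scheme:

```python
# Illustrative mapping from a wing stroke to a 2D weight-shift target on a
# back-worn device: the mass moves out along the stroke direction and returns,
# so its inertia hints at the reaction force of the wing beat.
import math

def weight_target(stroke_phase, direction_deg, radius=0.1):
    """2D mass position (meters, device-local x/y) for one wing stroke.

    stroke_phase in [0, 1] sweeps a full flap; displacement peaks at
    mid-stroke (phase 0.5) and vanishes at the endpoints.
    """
    extent = radius * math.sin(math.pi * stroke_phase)  # out at mid-stroke
    theta = math.radians(direction_deg)
    return (extent * math.cos(theta), extent * math.sin(theta))

# Upward stroke (90 degrees): mass fully displaced along +y at mid-stroke.
print(tuple(round(v, 6) for v in weight_target(0.5, 90.0)))  # (0.0, 0.1)
```

Study (1) in the abstract, recognizing different weight-shifting motions, corresponds to varying `direction_deg` here and asking whether users can tell the resulting mass trajectories apart.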
ClassComet: Exploring and Designing AI-generated Danmaku in Educational Videos to Enhance Online Learning
Danmaku, users' live comments synchronized with and overlaid on videos, has recently shown potential in promoting online video-based learning. However, user-generated danmaku can be scarce, especially in newer or less viewed videos, and its quality is unpredictable, limiting its educational impact. This paper explores how large multimodal models (LMMs) can be leveraged to automatically generate effective, high-quality danmaku. We first conducted a formative study to identify the desirable characteristics of content- and emotion-related danmaku in educational videos. Based on the obtained insights, we developed ClassComet, an educational video platform with novel LMM-driven techniques for generating relevant types of danmaku to enhance video-based learning. Through user studies, we examined the quality of generated danmaku and their influence on learning experiences. The results indicate that our generated danmaku is comparable to human-created danmaku, and videos with both content- and emotion-related danmaku showed significant improvements in viewers' engagement and learning outcomes.
2025 · Zipeng Ji et al. · DIS · Tags: Human-LLM Collaboration; Online Learning & MOOC Platforms
DobbyEar: Inducing Body Illusion of Ear Deformation with Haptic Retargeting
The use of haptic and visual stimuli to create body illusions and enhance body ownership of virtual avatars in virtual reality (VR) has been extensively studied in psychology and human-computer interaction (HCI). However, previous studies have relied on mechanical devices or corresponding proxies to provide haptic feedback. In this paper, we applied haptic retargeting to induce body illusions by redirecting users' hand movements, altering their perception of the shape of body parts when touched. Our technique allows for more precise and complex deformations. We implemented a mapping of the ear's contour, thereby creating illusions of different ear shapes, such as elf ears and dog ears. To determine the scope of retargeting, we conducted a user study to identify the maximum tolerable deviation angle for virtual ears. Subsequently, we explored the impact of haptic retargeting on body ownership of virtual avatars.
2025 · Han Shi et al. · Southern University of Science and Technology; Fudan University · CHI · Tags: Mid-Air Haptics (Ultrasonic); Identity & Avatars in XR
CalliSence: An Interactive Educational Tool for Process-based Learning in Chinese Calligraphy
Process-based learning is crucial for the transmission of intangible cultural heritage, especially in complex arts like Chinese calligraphy, where techniques cannot be mastered by merely observing the final work. To explore the challenges faced in calligraphy heritage transmission, we conducted semi-structured interviews (N=8) as a formative study. Our findings indicate that the lack of calligraphy instructors and tools makes it difficult for students to master brush techniques, and teachers struggle to convey the intricate details and rhythm of brushwork. To address this, we collaborated with calligraphy instructors to develop an educational tool that integrates writing-process capture and visualization, showcasing writing rhythm, hand force, and brush posture. Through empirical studies conducted in multiple teaching workshops, we evaluated the system's effectiveness with teachers (N=4) and students (N=12). The results show that the tool significantly enhances teaching efficiency and helps learners better understand brush techniques.
2025 · Xinya Gong et al. · Southern University of Science and Technology · CHI · Tags: STEM Education & Science Communication; Special Education Technology; Museum & Cultural Heritage Digitization
Influencer: Empowering Everyday Users in Creating Promotional Posts via AI-infused Exploration and Customization
Creating promotional posts on social platforms enables everyday users to disseminate their creative outcomes, engage in community exchanges, or generate additional income from micro-businesses. However, crafting eye-catching posts with appealing images and effective captions can be challenging and time-consuming for everyday users, who are mostly design novices. We propose Influencer, an interactive tool that helps novice creators quickly generate ideas and create high-quality promotional post designs through AI. Influencer offers a multi-dimensional recommendation system for ideation through example-based image and caption suggestions. Further, Influencer implements a holistic promotional post design system supporting context-aware exploration that considers brand messages and user-specified design constraints, flexible fusion of content, and a mind-map-like layout for idea tracking. Our user study comparing the system with industry-standard tools, along with two real-life case studies, indicates that Influencer effectively assists design novices in generating ideas and creating diverse, creative promotional posts through user-friendly interaction.
2025 · Xuye Liu et al. · University of Waterloo · CHI · Tags: Generative AI (Text, Image, Music, Video); Recommender System UX
GenComUI: Exploring Generative Visual Aids as Medium to Support Task-Oriented Human-Robot Communication
This work investigates the integration of generative visual aids in human-robot task communication. We developed GenComUI, a system powered by large language models (LLMs) that dynamically generates contextual visual aids, such as map annotations, path indicators, and animations, to support verbal task communication and facilitate the generation of customized task programs for the robot. The system was informed by a formative study that examined how humans use external visual tools to assist verbal communication in spatial tasks. To evaluate its effectiveness, we conducted a user experiment (n = 20) comparing GenComUI with a voice-only baseline. Qualitative and quantitative analyses demonstrate that generative visual aids enhance verbal task communication by providing continuous visual feedback, thus promoting natural and effective human-robot communication. Additionally, the study offers a set of design implications, emphasizing how dynamically generated visual aids can serve as an effective communication medium in human-robot interaction. These findings underscore the potential of generative visual aids to inform the design of more intuitive and effective human-robot communication, particularly for complex communication scenarios and LLM-based end-user development.
2025 · Yate Ge et al. · Tongji University, College of Design and Innovation · CHI · Tags: Generative AI (Text, Image, Music, Video); Human-LLM Collaboration; AI-Assisted Decision-Making & Automation
ReachPad: Interacting with Multiple Virtual Screens using a Single Physical Pad through Haptic Retargeting
The advancement of virtual reality (VR) has expanded 2D user interfaces into 3D space. This change has introduced richer interaction modalities but also brought challenges, especially the lack of haptic feedback in mid-air interactions. Previous research has explored various methods to provide feedback for interface interactions, but most approaches require specialized haptic devices. We introduce haptic retargeting to enable users to control multiple virtual screens in VR using a simple flat pad, which serves as a single physical proxy to support seamless interaction across multiple virtual screens. We conducted user studies to explore the appropriate virtual screen size and positioning under our retargeting method and then compared various drag-and-drop methods for cross-screen interaction. Finally, we compared our method with controller-based interaction in application scenarios.
2025 · Han Shi et al. · Southern University of Science and Technology; Fudan University · CHI · Tags: In-Vehicle Haptic, Audio & Multimodal Feedback; Mixed Reality Workspaces; Immersion & Presence Research
PalateTouch: Enabling Palate as a Touchpad to Interact with Earphones Using Acoustic Sensing
This paper introduces PalateTouch, a hands-free earphone interaction system that leverages acoustic sensing to detect gestures produced by the tongue touching the palate. By transmitting Zadoff-Chu signals and analyzing ear canal transfer function features, PalateTouch captures subtle ear canal deformations and recognizes various palate gestures used for interaction. Our proposed palate-touch screening method keeps the system unaffected by unintended gestures from daily activities, and the calibration mechanism enables user-independent recognition. Using only the earphone's built-in microphone and speaker, our system distinguishes nine gestures with an average F1 score of 0.92 and a false alarm rate of 0.02 across diverse conditions with 16 participants. Additionally, we enabled real-time functionality and conducted a user study with 11 participants to evaluate PalateTouch's effectiveness in a demo application. The results demonstrate the superior performance and high usability of PalateTouch.
2025 · Yankai Zhao et al. · Southern University of Science and Technology, Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering · CHI · Tags: Haptic Wearables; Foot & Wrist Interaction
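Zadoff-Chu sequences are a standard choice for channel probing because every sample has unit magnitude and the sequence has ideal cyclic autocorrelation. A minimal sketch of generating one follows; the root and length here are illustrative, not the paper's actual parameters:

```python
# Minimal sketch of a Zadoff-Chu probe sequence, the kind of constant-amplitude
# signal transmitted for acoustic channel sensing. Root u and length N are
# illustrative; any odd N coprime with u gives the same two properties.
import cmath
import math

def zadoff_chu(u, N):
    """Root-u Zadoff-Chu sequence of odd length N."""
    return [cmath.exp(-1j * math.pi * u * n * (n + 1) / N) for n in range(N)]

seq = zadoff_chu(u=7, N=353)
# Every sample has unit magnitude, keeping the speaker's output power flat.
print(max(abs(abs(s) - 1.0) for s in seq) < 1e-9)  # True
```

The constant envelope keeps the probe inaudible-friendly and power-efficient, while the sharp autocorrelation lets the receiver estimate the ear canal transfer function cleanly from each repetition.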
VRCaptions: Design Captions for DHH Users in Multiplayer Communication in VR
Accessing auditory information remains challenging for deaf and hard-of-hearing (DHH) individuals in real-world situations and in multiplayer VR interactions. To improve this, we investigated caption designs specialized for the needs of DHH users in multiplayer VR settings. First, we conducted three co-design workshops with DHH participants, social workers, and designers to gather insights into specific needs and design directions for DHH users in the context of a VR room escape game. We further refined our designs with 13 DHH users to determine the most preferred features. Based on this, we developed VRCaptions, a caption prototype for DHH users to better experience multiplayer conversations in VR. Finally, we invited two mixed-hearing groups to play the VR room escape game with VRCaptions to validate the design. The results demonstrate that VRCaptions enhances DHH participants' ability to access information and lowers barriers to communication in VR.
2025 · Tianze Xie et al. · Southern University of Science and Technology · CHI · Tags: Conversational Chatbots; Social & Collaborative VR; Deaf & Hard-of-Hearing Support (Captions, Sign Language, Vibration)
HarmonyCut: Supporting Creative Chinese Paper-cutting Design with Form and Connotation Harmony
Chinese paper-cutting, an Intangible Cultural Heritage (ICH), faces challenges from the erosion of traditional culture amid the prevalence of realism, alongside limited public access to cultural elements. While generative AI can enhance paper-cutting design with its extensive knowledge base and efficient production capabilities, it often struggles to align content with cultural meaning because users and models alike lack comprehensive paper-cutting knowledge. To address these issues, we conducted a formative study (N=7) to identify the workflow and design space, including four core factors (Function, Subject Matter, Style, and Method of Expression) and a key element (Pattern). We then developed HarmonyCut, a generative AI-based tool that translates abstract intentions into creative and structured ideas. The tool facilitates exploration of suggested related content (knowledge, works, and patterns), enabling users to select, combine, and adjust elements for creative paper-cutting design. A user study (N=16) and an expert evaluation (N=3) demonstrated that HarmonyCut effectively provides relevant knowledge, aiding the ideation of diverse paper-cutting designs and maintaining design quality within the design space to ensure alignment between form and cultural connotation.
2025 · Huanchen Wang et al. · Southern University of Science and Technology, Department of Computer Science and Engineering; City University of Hong Kong, Department of Computer Science · CHI · Tags: Generative AI (Text, Image, Music, Video); Museum & Cultural Heritage Digitization
LumaDreams: Designing Positive Dream Meaning-Making for Daily Empowerment
Dreams contribute to cognitive and emotional health, yet tools for everyday dream engagement remain largely underexplored outside clinical settings. In this paper, we introduce LumaDreams, a mobile application designed to foster daily empowerment through positive dream transformation using generative AI. Informed by meaning-making theories, LumaDreams enables users to journal dreams through sketches and text, which are then transformed into positive images and stories for users to revisit and reflect on. We conducted a mixed-method study with 14 participants over 14 days. Our findings show that LumaDreams strengthened participants' daily empowerment through cognitive and emotional shifts that arise from the positive meaning-making process. Qualitative insights further revealed how users' perceptions and trust of AI-driven dream transformation were shaped through their interactions. In conclusion, we propose an approach that enables users to co-create positive meanings in dream experiences with generative AI, promoting cognitive and emotional shifts, fostering positive mindsets, and ultimately strengthening daily empowerment.
2025 · Bolin Lyu et al. · Southeast University, School of Computer Science and Engineering · CHI · Tags: Generative AI (Text, Image, Music, Video); Mental Health Apps & Online Support Communities
Breaking Barriers or Building Dependency? Exploring Team-LLM Collaboration in AI-infused Classroom Debate
Classroom debates are a unique form of collaborative learning characterized by fast-paced, high-intensity interactions that foster critical thinking and teamwork. Despite the recognized importance of debates, the role of AI tools, particularly LLM-based systems, in supporting this dynamic learning environment has been under-explored in HCI. This study addresses this opportunity by investigating the integration of LLM-based AI into real-time classroom debates. Over four weeks, 22 students in a Design History course participated in three rounds of debates with support from ChatGPT. The findings reveal how learners prompted the AI to offer insights, collaboratively processed its outputs, and divided labor in team-AI interactions. The study also surfaces key advantages of AI usage, such as reducing social anxiety, breaking communication barriers, and providing scaffolding for novices, alongside risks such as information overload and cognitive dependency, which could limit learners' autonomy. We thereby discuss a set of nuanced implications for future HCI exploration.
2025 · Zihan Zhang et al. · Southern University of Science and Technology, School of Design · CHI · Tags: Human-LLM Collaboration; Collaborative Learning & Peer Teaching
Walk in Their Shoes to Navigate Your Own Path: Learning About Procrastination Through A Serious Game
Procrastination, the voluntary delay of tasks despite potential negative consequences, has prompted numerous time and task management interventions in the HCI community. While these interventions have shown promise in addressing specific behaviors, psychological theories suggest that learning about procrastination itself may help individuals develop their own coping strategies and build mental resilience. However, little research has explored how to support this learning process through HCI approaches. We present ProcrastiMate, a text adventure game where players learn about procrastination's causes and experiment with coping strategies by guiding in-game characters in managing relatable scenarios. Our field study with 27 participants revealed that ProcrastiMate facilitated learning and self-reflection while maintaining psychological distance, motivating players to integrate newly acquired knowledge in daily life. This paper contributes empirical insights on leveraging serious games to facilitate learning about procrastination and offers design implications for addressing psychological challenges through HCI approaches.
2025 · Runhua ZHANG et al. · Tongji University, College of Design and Innovation; Hong Kong University of Science and Technology, IIP (Human-Computer Interaction) · CHI · Tags: Serious & Functional Games; STEM Education & Science Communication; Mental Health Apps & Online Support Communities
Vision-Based Multimodal Interfaces: A Survey and Taxonomy for Enhanced Context-Aware System Design
The recent surge in artificial intelligence, particularly in multimodal processing technology, has advanced human-computer interaction by altering how intelligent systems perceive, understand, and respond to contextual information (i.e., context awareness). Despite such advancements, there is a significant gap in comprehensive reviews examining these advances, especially from a multimodal data perspective, which is crucial for refining system design. This paper addresses a key aspect of this gap by conducting a systematic survey of data modality-driven Vision-based Multimodal Interfaces (VMIs). VMIs are essential for integrating multimodal data, enabling more precise interpretation of user intentions and complex interactions across physical and digital environments. Unlike previous task- or scenario-driven surveys, this study highlights the critical role of the visual modality in processing contextual information and facilitating multimodal interaction. Adopting a design framework moving from the whole to the details and back, it classifies VMIs across dimensions, providing insights for developing effective, context-aware systems.
2025 · Yongquan 'Owen' Hu et al. · University of New South Wales · CHI · Tags: Context-Aware Computing; Ubiquitous Computing
DeepBreath: Breathing Exercise Assessment with a Depth Camera
Xie et al. developed DeepBreath, a depth-camera system that automatically assesses breathing exercises by analyzing changes in chest and abdominal contours, providing users with real-time feedback and guidance.
2024 · Wentao Xie et al. · UbiComp · Tags: Vibrotactile Feedback & Skin Stimulation; Biosensors & Physiological Monitoring
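The general idea behind depth-based breathing assessment can be sketched in a few lines: average the depth over a chest/abdomen region in each frame, then read the breathing rhythm off that 1-D signal. The ROI handling and breath counting below are deliberately simplistic stand-ins for the paper's contour analysis, and all names are illustrative:

```python
# Illustrative depth-based breathing signal: mean depth inside a chest ROI per
# frame rises and falls with respiration; mean crossings give a crude cycle count.

def breathing_signal(depth_frames, roi):
    """Mean depth inside the ROI for each frame (one sample per frame)."""
    (r0, r1), (c0, c1) = roi
    sig = []
    for frame in depth_frames:
        vals = [frame[r][c] for r in range(r0, r1) for c in range(c0, c1)]
        sig.append(sum(vals) / len(vals))
    return sig

def count_breaths(sig):
    """Count full breath cycles as pairs of mean crossings."""
    mean = sum(sig) / len(sig)
    crossings = sum(1 for a, b in zip(sig, sig[1:]) if (a - mean) * (b - mean) < 0)
    return crossings // 2

# Synthetic 2x2-pixel ROI frames: depth oscillates over two breath cycles.
frames = [[[d, d], [d, d]] for d in [0.9, 1.2, 0.9, 1.2, 0.9]]
sig = breathing_signal(frames, ((0, 2), (0, 2)))
print(count_breaths(sig))  # 2
```

A real system would additionally segment the torso, separate chest from abdominal motion, and smooth the signal before any rate or quality estimate; the point here is only that a depth stream reduces naturally to a respiratory waveform.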
SIDA: Self-Supervised Imbalanced Domain Adaptation for Sound Enhancement and Cross-Domain WiFi Sensing
Zhang et al. propose SIDA, a framework that uses self-supervised learning to address data imbalance in sound enhancement and cross-domain WiFi sensing, improving model generalization.
2024 · Jin Zhang et al. · UbiComp · Tags: Context-Aware Computing
AirPush: A Pneumatic Wearable Haptic Device Providing Multi-Dimensional Force Feedback on a Fingertip
Finger-wearable haptic devices enrich virtual reality experiences by offering haptic feedback corresponding to the virtual environment. However, despite their effectiveness in delivering haptic feedback, many current devices are constrained in their ability to provide force feedback across a diverse range of directions or to sustain it. We therefore present AirPush, a finger-wearable haptic device capable of generating continuously adjustable force feedback in multiple directions using compressed air. To evaluate its usability, we conducted a technical evaluation and four user studies: (1) we obtained users' perceptual thresholds for angles in different directions on the horizontal and vertical planes; (2) in perception studies, we found that users can identify five different magnitudes of force and eight different motions when using AirPush; and (3) in VR applications, we confirmed that users felt greater realism and immersion with AirPush than with the HTC VIVE Controller or AirPush with a fixed nozzle.
2024 · Yuxin Ma et al. · Southern University of Science and Technology · CHI · Tags: Force Feedback & Pseudo-Haptic Weight