Guaranteeing Equitable Musical Collaboration: Lessons Learned from the Music-Making Activities in Mixed-Hearing Groups
Integrating mixed-hearing groups in musical collaboration presents unique challenges and opportunities for communication and equal contribution. This observational study explores their collaborative work, focusing on pathways toward equitable music-making. We observed two music-making workshops to identify the potential and dynamics of musical collaboration in such groups. While the first workshop proceeded in a traditional manner of music-making, the second used an assistive tool with multimodality. Our findings highlight the dynamics in musical collaboration that foster engagement and bridge interaction gaps. In turn, sensory inclusion through multimodal music-making promoted role transitions in mixed-hearing groups and more equal contributions, leading to the embrace of diverse cultural perspectives. Based on insights derived from the observations, we propose design guidelines and future research directions for harnessing group dynamics and building equitable musical collaborations toward an inclusive environment for mixed-hearing groups.
2025 · ChungHa Lee et al. · Deaf and Hard-of-Hearing Research · CSCW
AttraCar: Multisensory In-Car VR with Thermal, Airflow, and Motion Feedback through Built-In Vehicle Systems
We introduce AttraCar, a novel multisensory in-car Virtual Reality (VR) platform that delivers thermal, airflow, and motion feedback using built-in vehicle systems. Leveraging the Heating, Ventilation, and Air Conditioning (HVAC) system for airflow and thermal variation, and the power seat for motion feedback, we determined perceptual thresholds through Just Noticeable Difference (JND) experiments. A user study evaluated six feedback conditions (Baseline, Ambient Airflow, Thermal Airflow, Seat Motion, Ambient Airflow + Seat Motion, Thermal Airflow + Seat Motion) during on-road VR scenarios. A subsequent on-road study demonstrated that different combinations of feedback are not only perceptually distinct but also highly effective in a dynamic VR context, significantly mitigating motion sickness and enhancing presence and haptic experience. We conclude with reflections on design considerations, integration challenges, and real-world applicability for scalable multisensory in-car VR systems utilizing existing vehicle components.
2025 · Dohyeon Yeo et al. · In-Vehicle Haptic, Audio & Multimodal Feedback · Motion Sickness & Passenger Experience · UIST
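The JND experiments mentioned in the AttraCar abstract above suggest a standard adaptive psychophysics procedure. Below is a minimal 1-up/1-down staircase sketch in Python; the step size, reversal count, and `respond` callback are hypothetical placeholders, not the paper's actual protocol.

```python
# 1-up/1-down staircase sketch for a JND experiment (hypothetical protocol).
def staircase(respond, start, step, n_reversals=8):
    """respond(level) -> True if the participant detects the stimulus."""
    level, last, history = start, None, []
    while len(history) < n_reversals:
        detected = respond(level)
        if last is not None and detected != last:
            history.append(level)             # record level at each reversal
        last = detected
        level += -step if detected else step  # step down when detected, up otherwise
    return sum(history) / len(history)        # threshold = mean reversal level
```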
EarPressure VR: Ear Canal Pressure Feedback for Enhancing Environmental Presence in Virtual Reality
This study presents EarPressure VR, a system that modulates ear canal pressure to simulate atmospheric pressure changes in virtual reality (VR). EarPressure VR employs sealed earbuds and a linear stepper motor–driven syringe to generate controlled pressure variations within safe limits (±40 hPa relative to ambient pressure). Through two user studies, we evaluate (1) perceptual thresholds for detecting ear pressure in terms of direction (inward or outward) and intensity differences, and (2) the effect of ear pressure feedback on users' sense of environmental presence across two VR scenarios involving gradual and discrete changes in ambient pressure. Results show that participants reliably identified pressure direction at thresholds of +14.4 hPa (inward) and –23.8 hPa (outward), and intensity differences at ±14.6% and ±34.9%, respectively. Pressure feedback significantly improved presence ratings, particularly when pressure variation was continuously adjusted to reflect environmental transitions. We conclude by discussing the broader applicability of ear canal pressure feedback in areas such as training, simulation, and everyday experiences.
2025 · Seongjun Kang et al. · Mid-Air Haptics (Ultrasonic) · Immersion & Presence Research · UIST
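As a rough illustration of the control loop EarPressure VR describes (a stepper-driven syringe kept within ±40 hPa of ambient), here is a hedged Python sketch; the `SyringePump` class, its calibration constant, and the proportional gain are invented stand-ins for the real hardware interface.

```python
MAX_DELTA_HPA = 40.0  # safe limit relative to ambient, per the abstract

class SyringePump:                      # hypothetical hardware stand-in
    steps_per_hpa = 12                  # invented calibration constant
    _pressure_hpa = 1013.25             # simulated in-ear sensor reading

    def read_pressure_hpa(self):
        return self._pressure_hpa

    def move_steps(self, n):            # idealized syringe response
        self._pressure_hpa += n / self.steps_per_hpa

def step_pressure_control(pump, target_delta_hpa, ambient_hpa, gain=0.5):
    """One proportional control step toward a target pressure offset."""
    # Never command a setpoint outside the +/-40 hPa safety window.
    target = max(-MAX_DELTA_HPA, min(MAX_DELTA_HPA, target_delta_hpa))
    error = target - (pump.read_pressure_hpa() - ambient_hpa)
    pump.move_steps(int(gain * error * pump.steps_per_hpa))
```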
EI-Lite: Electrical Impedance Sensing for Micro-gesture Recognition and Pinch Force Estimation
Micro-gesture recognition and fine-grained pinch-force sensing enable intuitive and discreet control of devices, offering significant potential for enhancing human-computer interaction (HCI). In this paper, we present EI-Lite, a lightweight wrist-worn electrical impedance sensing device for micro-gesture recognition and continuous pinch force estimation. We elicit an optimal and simplified device architecture through an ablation study on electrode placement with 13 users, and implement the elicited designs through 3D printing. We capture data from 15 participants on (1) six common micro-gestures (plus idle state) and (2) index finger pinch forces, then develop machine learning models that interpret the impedance signals generated by these micro-gestures and pinch forces. Our system accurately recognizes micro-gesture events (96.33% accuracy) and continuously estimates the pinch force of the index finger in physical units (Newtons), with a mean squared error (MSE) of 0.3071 (or a mean force variance of 0.55 Newtons) over 15 participants. Finally, we demonstrate EI-Lite's applicability via three applications in AR/VR, gaming, and assistive technologies.
2025 · Junyi Zhu et al. · Vibrotactile Feedback & Skin Stimulation · Foot & Wrist Interaction · UIST
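EI-Lite's abstract describes two learning tasks: seven-class micro-gesture recognition and continuous pinch-force regression in Newtons. The sketch below illustrates that split with off-the-shelf scikit-learn models; the feature windows and model family are assumptions, not the paper's reported implementation.

```python
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import mean_squared_error

def train_ei_lite(X_gesture, y_gesture, X_pinch, y_force_n):
    """X_*: (n_windows, n_features) impedance features; y_force_n: force in N."""
    clf = RandomForestClassifier(n_estimators=200).fit(X_gesture, y_gesture)
    reg = RandomForestRegressor(n_estimators=200).fit(X_pinch, y_force_n)
    return clf, reg

def force_mse(reg, X_test, y_test):
    # The paper reports an MSE of 0.3071 over 15 participants.
    return mean_squared_error(y_test, reg.predict(X_test))
```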
BandEI: A Flexible Electrical Impedance Sensing Bandage for Deep Muscles and Tendons
Monitoring deep muscles and tissues is critical for rehabilitation, training, and fine motor control. In this work, we propose BandEI, a flexible, bandage-like wearable sensor for electrical impedance sensing. BandEI utilizes woven conductive fabric as the core material for its electrodes and leverages digital fabrication, including laser cutting, to enable scalable and customizable fabrication. To streamline the design process, we provide a user interface that allows users to freely select the deployment location of BandEI. The interface automatically generates fabrication-ready design files that accommodate the curvature and shape of the selected area. We evaluate BandEI and validate its ability to detect signals from actively engaged large muscles, such as the biceps and triceps. Additionally, it can capture signals from deep or passively activated muscles, like those in the hand, which are typically difficult to detect with conventional surface electromyography (sEMG). We design and implement BandEI for muscles in the fingers, neck, and ankle, demonstrating its capability for diverse applications, including real-time gesture recognition, neck motion monitoring, and gait tracking.
2025 · Hongrui Wu et al. · Vibrotactile Feedback & Skin Stimulation · Haptic Wearables · Human Pose & Activity Recognition · UIST
Meta-antenna: Mechanically Frequency Reconfigurable Metamaterial Antennas
We introduce Meta-antenna, a design and fabrication pipeline for creating frequency-reconfigurable antennas using a single type of mechanical metamaterial structure. Unlike traditional static antenna systems with fixed radiation patterns and frequency responses per geometry, Meta-antenna leverages mechanical reconfiguration to alter the radiation characteristics and geometry of the antenna, making it more versatile for sensing and communication. Meta-antenna provides a resonance-frequency design space from 500 MHz to 6.3 GHz (≥10 dB) upon compression, bending, or rotation of the structure. Additionally, we provide an Ansys-based editor that allows users to generate metamaterial antenna geometries and simulate their resonance frequency. We also provide a code template for Meta-antenna-based sensing interactions. Our technical evaluation demonstrates that our fabricated Meta-antenna structures remain functional even after 10,000 compression cycles. Finally, we contribute three example applications showcasing Meta-antenna's potential in adaptive personal devices, smart home systems, and tangible user interfaces.
2025 · Marwa AlAlawi et al. · Circuit Making & Hardware Prototyping · Customizable & Personalized Objects · UIST
Over the Mouse: Navigating across the GUI with Finger-Lifting Operation Mouse
Modern GUIs often have a hierarchical structure, i.e., the z-axis of the GUI interaction space. However, conventional mice do not support effective navigation along the z-axis, leading to increased physical movements and cognitive load. To address this inefficiency, we present the OtMouse, a novel mouse that supports finger-lifting operations by detecting finger height through proximity sensors embedded beneath the mouse buttons, and the 'Over the Mouse' (OtM) interface, a set of interaction techniques along the z-axis of the GUI interaction space with the OtMouse. We first evaluated the performance of finger-lifting operations (n = 8) with the OtMouse for two- and three-level lifting discrimination tasks. Subsequently, we conducted a user study (n = 16) to compare the usability of the OtM interface and a traditional mouse interface for three representative tasks: 'Context Switch,' 'Video Preview,' and 'Map Zooming.' The results showed that the OtM interface was both qualitatively and quantitatively superior to the traditional mouse interface in the Context Switch and Video Preview tasks. This research contributes to the ongoing efforts to enhance mouse-based GUI navigation experiences.
2025 · YoungIn Kim et al. · School of Computing, KAIST, HCI Lab · Force Feedback & Pseudo-Haptic Weight · Prototyping & User Testing · CHI
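The finger-lifting discrimination OtMouse performs can be pictured as thresholding a proximity reading into discrete lift levels. The following sketch, with invented thresholds and a small hysteresis band to suppress jitter, shows one plausible mapping; it is not the paper's algorithm.

```python
LEVEL_THRESHOLDS_MM = [2.0, 8.0]  # invented boundaries between levels 0|1|2
HYSTERESIS_MM = 0.5               # invented dead band around each boundary

def lift_level(height_mm, prev_level):
    """Map a proximity-sensed finger height to a discrete lift level."""
    level = sum(height_mm > t for t in LEVEL_THRESHOLDS_MM)
    if level != prev_level:
        boundary = LEVEL_THRESHOLDS_MM[min(level, prev_level)]
        if abs(height_mm - boundary) < HYSTERESIS_MM:
            return prev_level     # too close to the boundary; hold the level
    return level
```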
I Want to Break Free: Enabling User-Applied Active Locomotion in In-Car VR through Contextual Cues
We explore the feasibility of active user-applied locomotion in virtual reality (VR) within in-car environments, diverging from previous in-car VR research that synchronized virtual motion with the car's movement. Through a two-step study, we examined the effects of locomotion methods on user experience in dynamic vehicle environments and evaluated contextual cues designed to mitigate sensory mismatch caused by vehicle motion. The first study evaluated five locomotion methods, identifying joystick-based navigation as the most suitable for in-car use due to its low physical demand and stability. The second study focused on designing and testing contextual cues that translate physical sensations of vehicle motion into virtual effects without limiting the user's freedom of movement, with results demonstrating their effectiveness in reducing motion sickness and enhancing presence. We conclude with initial insights and design considerations for expanding on our findings with regard to enabling active locomotion in in-car VR.
2025 · Bocheon Gim et al. · Gwangju Institute of Science and Technology, Human-Centered Intelligent Systems Lab · Motion Sickness & Passenger Experience · Social & Collaborative VR · Immersion & Presence Research · CHI
OnomaCap: Making Non-speech Sound Captions Accessible and Enjoyable through Onomatopoeic Sound Representation
Non-speech sounds play an important role in setting the mood of a video and aiding comprehension. However, current non-speech sound captioning practices focus primarily on sound categories, which fails to provide a rich sound experience for d/Deaf and hard-of-hearing (DHH) viewers. Onomatopoeia, which succinctly captures expressive sound information, offers a potential solution but remains underutilized in non-speech sound captioning. This paper investigates how onomatopoeia benefits DHH audiences in non-speech sound captioning. We collected 7,962 sound-onomatopoeia pairs from listeners and developed a sound-onomatopoeia model that automatically transcribes sounds into onomatopoeic descriptions indistinguishable from human-generated ones. A user evaluation with 25 DHH participants using the model-generated onomatopoeia demonstrated that onomatopoeia significantly improved their video viewing experience. Participants most favored captions with onomatopoeia and category, and expressed a desire to see such captions across genres. We discuss the benefits and challenges of using onomatopoeia in non-speech sound captions, offering insights for future practices.
2025 · JooYeong Kim et al. · Gwangju Institute of Science and Technology, School of Integrated Technology/Soft Computing & Interaction Laboratory · Voice Accessibility · Deaf & Hard-of-Hearing Support (Captions, Sign Language, Vibration) · Universal & Inclusive Design · CHI
MVPrompt: Building Music-Visual Prompts for AI Artists to Craft Music Video Mise-en-scène
Music videos have traditionally been the domain of experts, but with text-to-video generative AI models, AI artists can now create them more easily. However, accurately reflecting the desired music-visual mise-en-scène remains challenging without specialized knowledge, highlighting the need for supportive tools. To address this, we conducted a design workshop with seven music video experts, identified design goals, and developed MVPrompt, a tool for generating music-visual mise-en-scène prompts. In a user study with 24 AI artists, MVPrompt outperformed the Baseline, effectively supporting the collaborative creative process. Specifically, the Visual Theme stage facilitated the exploration of tone and manner, while the Visual Scene & Grammar stage refined prompts with detailed mise-en-scène elements. By enabling AI artists to specify mise-en-scène creatively, MVPrompt enhances the experience of making music video scenes with text-to-video generative AI.
2025 · ChungHa Lee et al. · Gwangju Institute of Science and Technology, School of Integrated Technology/Soft Computing & Interaction Laboratory · Generative AI (Text, Image, Music, Video) · AI-Assisted Creative Writing · Video Production & Editing · CHI
Understanding the Potentials and Limitations of Prompt-based Music Generative AI
Prompt-based music generative artificial intelligence (GenAI) offers an efficient way to engage in music creation through language. However, it faces limitations in conveying artistic intent through language alone, highlighting the need for more research on AI-creator interactions. This study evaluates three interaction modes (prompt-based, preset-based, and motif-based) of commercial music AI tools with 17 participants of varying musical expertise to examine how prompt-based GenAI can better support creative intent. Our findings revealed that user groups preferred prompt-based music GenAI for distinct purposes: experts used it to validate musical concepts, novices to generate reference samples, and nonprofessionals to transform abstract ideas into musical compositions. We identified its potential for enhancing compositional efficiency and creativity through intuitive interaction, while also noting limitations in handling temporal and musical nuances solely through prompts. Based on these insights, we present design guidelines to ensure users can effectively engage in the creative process, considering their musical expertise.
2025 · Youjin Choi et al. · Gwangju Institute of Science and Technology, School of Integrated Technology/Soft Computing & Interaction Laboratory · Generative AI (Text, Image, Music, Video) · Music Composition & Sound Design Tools · CHI
I-Scratch: Independent Slide Creation With Auditory Comment and Haptic Interface for the Blind and Visually Impaired
Presentation software still poses barriers to independent creation for blind and visually impaired users (BVIs) due to its visual-centric interface. To address this gap, we introduce I-Scratch, a multimodal system that empowers BVIs to independently create, explore, and edit PowerPoint slides. We initially designed I-Scratch to tackle the practical challenges faced by BVIs and refined it to improve its usability and accessibility through iterative participatory sessions involving a blind user. I-Scratch integrates a graphical tactile display with auditory guidance for multimodal feedback, simplifies the user interface, and leverages AI technologies for visual assistance in image generation and content interpretation. A user study with ten BVIs demonstrated that I-Scratch enables them to produce visually coherent and aesthetically pleasing slides independently, achieving a 91.25% rate of full and partial successes with a CSI score of 85.07. We present five guidelines and future directions to support the creative work of BVIs using presentation software.
2025 · Gyeongdeok Kim et al. · Gwangju Institute of Science and Technology · Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille) · Deaf & Hard-of-Hearing Support (Captions, Sign Language, Vibration) · Universal & Inclusive Design · CHI
MoWa: An Authoring Tool for Refining AI-Generated Human Avatar Motions Through Latent Waveform Manipulation
Creating expressive and realistic motion animations is a challenging task. Generative artificial intelligence (AI) models have emerged to address this challenge, offering the capability to synthesize human motion animations from text prompts. However, the effective integration of AI-generated motion into professional designer workflows remains uncertain. This study proposes MoWa, an authoring tool designed to refine AI-generated human motions to meet professional standards. A formative study with six professional motion designers identified the strengths and weaknesses of AI-generated motions. To address these weaknesses, MoWa utilizes latent space to enhance the expressiveness of motions, making them suitable for use in professional workflows. A user study involving twelve professional motion designers was conducted to evaluate MoWa's effectiveness in refining AI-generated motions. The results indicated that MoWa streamlines the motion design process and improves the quality of the outcomes. These findings suggest that incorporating latent space into motion design tasks can improve efficiency.
2025 · Jeongseok Oh et al. · Gwangju Institute of Science and Technology, Human-Centered Intelligent Systems Lab · Generative AI (Text, Image, Music, Video) · 3D Modeling & Animation · CHI
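One way to picture MoWa's latent-space refinement is blending a generated motion's latent code toward a reference code. The sketch below is a loose illustration under that assumption; the encoder, decoder, and blending rule of the actual tool are not specified in the abstract.

```python
import numpy as np

def refine_motion_latent(z_generated, z_reference, strength=0.3):
    """Blend two motion latent codes; strength in [0, 1] sets how far to move."""
    z = (1.0 - strength) * z_generated + strength * z_reference
    # Rescale to the original norm so the decoder sees a familiar magnitude.
    return z * (np.linalg.norm(z_generated) / np.linalg.norm(z))
```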
TelePulse: Enhancing the Teleoperation Experience through Biomechanical Simulation-Based Electrical Muscle Stimulation in Virtual Reality
This paper introduces TelePulse, a system integrating biomechanical simulation with electrical muscle stimulation (EMS) to provide precise haptic feedback for robot teleoperation tasks in virtual reality (VR). TelePulse has two components: a physical simulation part that calculates joint torques based on real-time force data from remote manipulators, and an electrical stimulation part that converts these torques into muscle stimulation. Two experiments were conducted to evaluate the system. The first experiment assessed the accuracy of EMS generated through biomechanical simulations by comparing it with electromyography (EMG) data during force-directed tasks, while the second experiment evaluated the impact of TelePulse on teleoperation performance during sanding and drilling tasks. The results suggest that TelePulse provided more accurate stimulation across all arm muscles, thereby enhancing task performance and user experience in the teleoperation environment. In this paper, we discuss the effect of TelePulse on teleoperation, its limitations, and areas for future improvement.
2025 · Seokhyun Hwang et al. · University of Washington, Information School · Teleoperated Driving · Electrical Muscle Stimulation (EMS) · CHI
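TelePulse's pipeline maps simulated joint torques to per-muscle EMS intensities. The sketch below shows that mapping with an invented linear calibration and safety clamp; muscle names, gains, and current limits are all hypothetical.

```python
CALIBRATION = {
    # muscle: (gain in mA per N*m, max safe current in mA) -- invented values
    "biceps":  (2.0, 30.0),
    "triceps": (1.8, 30.0),
}

def ems_currents(muscle_torques_nm):
    """Map each muscle's share of simulated joint torque to an EMS current."""
    currents = {}
    for muscle, torque in muscle_torques_nm.items():
        gain, limit = CALIBRATION[muscle]
        # Stimulate only positive (load-resisting) torque; clamp for safety.
        currents[muscle] = min(max(torque, 0.0) * gain, limit)
    return currents
```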
ChatHAP: A Chat-Based Haptic System for Designing Vibrations through Conversation
In contrast to design tools for generating graphics and audio from text prompts, haptic design tools lag behind due to the challenge of constructing large-scale, high-quality datasets that pair vibrations with text descriptions. To address this gap, we propose ChatHAP, a conversational haptic system for designing vibrations. ChatHAP integrates various haptic design approaches using a large language model, including generating vibrations from signal parameters, navigating through libraries, and modifying existing vibrations. To further improve vibration navigation, we present an algorithm that adaptively learns user preferences for vibration features. A user study with novices (n=20) demonstrated that ChatHAP can serve as a practical design tool, and the proposed algorithm significantly reduced task completion time (38%), prompt quantity (25%), and verbosity (36%). The study found ChatHAP easy to use and identified requirements for chat-based haptic design as well as features for further improvement. Finally, we present key findings with ChatHAP and discuss implications for future work.
2025 · Chungman Lim et al. · Gwangju Institute of Science and Technology · Vibrotactile Feedback & Skin Stimulation · Conversational Chatbots · CHI
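ChatHAP's abstract mentions an algorithm that adaptively learns user preferences over vibration features. A minimal sketch of that idea, assuming a preference vector nudged toward accepted vibrations and used to rank a library, might look like this; the actual update rule is not described in the abstract.

```python
import numpy as np

def update_preference(pref, vib_features, liked, lr=0.2):
    """Nudge the preference vector toward (or away from) a rated vibration."""
    direction = 1.0 if liked else -1.0
    return pref + lr * direction * (vib_features - pref)

def rank_library(pref, library):
    """Sort library vibrations by feature-space distance to the preference."""
    return sorted(library, key=lambda v: np.linalg.norm(v - pref))
```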
BIASsist: Empowering News Readers via Bias Identification, Explanation, and Neutralization
Biased news articles can distort readers' perceptions by presenting information in a way that favors or disfavors a particular point of view. Subtly embedded in the text, such bias can shape readers' views daily without them even realizing it. To address this issue, we propose BIASsist, an LLM-based approach designed to mitigate bias in news articles. Based on existing research, we defined six types of bias and introduced three assistive components (identification, explanation, and neutralization) to provide a broader range of bias information and enhance readers' bias awareness. We conducted a mixed-method study with 36 participants to evaluate the effectiveness of BIASsist. The results show that participants' bias awareness significantly improved and their interest in identifying bias increased. Participants also tended to engage more actively in critically evaluating articles. Based on these findings, we discuss its potential to improve media literacy and critical thinking in today's era of information overload.
2025 · Yeo-Gyeong Noh et al. · Gwangju Institute of Science and Technology, School of Integrated Technology · AI-Assisted Decision-Making & Automation · AI Ethics, Fairness & Accountability · Misinformation & Fact-Checking · CHI
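BIASsist's three assistive components (identification, explanation, neutralization) read naturally as a staged LLM pipeline. The sketch below expresses that staging; the `llm` callable and the prompt wordings are placeholders, since the paper's prompt designs are not shown here.

```python
def assist(llm, article):
    """llm: callable str -> str (placeholder for the underlying model API)."""
    spans = llm(f"List the biased passages in this article:\n{article}")
    explanations = llm(f"Explain the type of bias in each passage:\n{spans}")
    neutralized = llm(f"Rewrite each passage in neutral wording:\n{spans}")
    return spans, explanations, neutralized
```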
Exploring the Potential of Music Generative AI for Music-Making by Deaf and Hard of Hearing People
Recent advancements in text-to-music generative AI (GenAI) have significantly expanded access to music creation. However, deaf and hard of hearing (DHH) individuals remain largely excluded from these developments. This study explores how music GenAI could enhance the music-making experience of DHH individuals, who often rely on hearing people to translate sounds and music. We developed a multimodal music-making assistive tool informed by focus group interviews. This tool enables DHH users to create and edit music independently through language interaction with music GenAI, supported by integrated visual and tactile feedback. Our findings from the music-making study revealed that the system empowers them to engage in independent and proactive music-making activities, increasing their confidence, fostering musical expression, and positively shifting their attitudes toward music. Contributing to inclusive art by preserving the unique sensory characteristics of DHH individuals, this study demonstrates how music GenAI can benefit a marginalized community, fostering independent creative expression.
2025 · Youjin Choi et al. · Gwangju Institute of Science and Technology, School of Integrated Technology/Soft Computing & Interaction Laboratory · Generative AI (Text, Image, Music, Video) · Deaf & Hard-of-Hearing Support (Captions, Sign Language, Vibration) · Music Composition & Sound Design Tools · CHI
Working Together Toward Interdependence: Chatbot-Based Support for Balanced Social Interactions Between Neurodivergent and Neurotypical Individuals
While many technologies have been developed for facilitating interaction between neurodivergent and neurotypical people to bridge communication differences and reduce social exclusion, most focus on supporting and teaching neurodivergent people to adapt to neurotypical standards and norms. To promote a more balanced approach to bridging the social gap, we conducted a 5-day diary study and semi-structured interviews with 16 participants (8 neurotypical and 8 with intellectual disability) to examine the current factors and barriers to their social interactions and to explore the design of social support chatbot systems. Our findings revealed diverging views between the groups on factors they valued in their interaction, and identified social uncertainty and differing social expectations as the main barriers to successful interactions. Based on the results, we outline three pitfalls that social support chatbots can fall into if not designed mindfully, and suggest design approaches that promote bidirectional social support and interdependence.
2025 · Ha-Kyung Kong et al. · Rochester Institute of Technology, School of Information · Conversational Chatbots · Cognitive Impairment & Neurodiversity (Autism, ADHD, Dyslexia) · Participatory Design · CHI
WatchCap: Improving Scanning Efficiency in People with Low Vision through Compensatory Head Movement Stimulation
Jo et al. propose WatchCap, a method that improves the visual scanning efficiency of people with low vision through compensatory head-movement stimulation.
2024 · Taewoo Jo et al. · Eye Tracking & Gaze Interaction · Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille) · UbiComp
TimelyTale: A Multimodal Dataset Approach to Assessing Passengers' Explanation Demands in Highly Automated Vehicles
Kim et al. construct TimelyTale, a multimodal dataset for assessing passengers' demands for system explanations in highly automated vehicles, providing data support for improving the human-machine interaction experience in automated driving.
2024 · Gwangbin Kim et al. · Automated Driving Interface & Takeover Design · Explainable AI (XAI) · UbiComp