Mod2Hap: Two-Level Modular Haptic System for Body- and Hand-Scale Interactions using Magnetorheological Fluid
We present Mod2Hap, a two-level modular haptic system that enables both body-scale and hand-scale interactions using magnetorheological (MR) fluid. The system comprises (1) configurable frame modules for constructing spatial interaction structures, which can be flexibly assembled to suit various body postures and environments, and (2) haptic modules—rotary (e.g., for dials or pedals) and linear (e.g., for sliders or handles)—that provide customizable hand-scale feedback interfaces. Each haptic module utilizes MR fluid, whose viscosity varies with applied magnetic fields, to generate tunable resistive feedback. Our design achieves a wide feedback range (0.12–0.52 N·m torque, 41–212 N force) by optimizing the solenoid coil for power efficiency and validating magnetic field distribution via simulation. We demonstrate Mod2Hap through three interactive scenarios—cycling, kayaking, and fishing—and evaluate its performance in a user study with 12 participants. Results show high perceived realism and engagement, supporting the system’s versatility, scalability, and effectiveness as an immersive haptic interaction platform.
2025 · Yong Hae Heo et al. · Topics: In-Vehicle Haptic, Audio & Multimodal Feedback; Haptic Wearables · UIST
StabilizAR: Enabling Hands-Free Head Pointing while Mobile
Wearable Augmented Reality headsets are inherently mobile: they enable hands-free and immersive interaction while on the go. Despite this, research into input methods that cater to mobility issues, such as the instabilities introduced by canonical tasks like walking, remains in its infancy. This paper addresses this omission by presenting StabilizAR, a technique to enhance head cursor input while walking. It introduces a novel cursor velocity limit activated by the mutual alignment of head and eye vectors that enhances fine-grained targeting without compromising input speed during large-scale cursor motion. It integrates this with a target scoring system that reduces the precision required during selection by accruing proximity-based estimates of a user's intended target. Two studies show these combined techniques dramatically increase targeting performance---boosting success rates from 6% to 91% while mobile---and elevate measures of usability and user preference. They show StabilizAR's potential to enable genuinely mobile HMD use.
2025 · Yonghwan Shin et al. · Topics: Hand Gesture Recognition; Immersion & Presence Research · UIST
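The cursor velocity limit at the core of StabilizAR, a clamp that activates only when head and eye vectors are mutually aligned, can be sketched as a small gating function. This is a toy illustration: the alignment threshold and speed cap below are assumed values, not parameters from the paper.

```python
import math

def angle_between(v1, v2):
    """Angle in radians between two 3D unit vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    return math.acos(max(-1.0, min(1.0, dot)))

def limit_cursor_velocity(head_dir, gaze_dir, cursor_velocity,
                          align_thresh_deg=5.0, speed_cap=0.2):
    """Clamp cursor speed only while head and eye vectors are mutually
    aligned (the user appears to be homing in on a target); large-scale
    cursor motion with diverging vectors passes through untouched."""
    if angle_between(head_dir, gaze_dir) < math.radians(align_thresh_deg):
        speed = math.sqrt(sum(c * c for c in cursor_velocity))
        if speed > speed_cap:
            scale = speed_cap / speed
            return tuple(c * scale for c in cursor_velocity)
    return cursor_velocity
```

The gating condition is what preserves input speed: when the user sweeps the cursor across the scene, head and gaze diverge and the clamp never engages.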
Pati & Bellio: Coordinating Face-to-Face Interruptions via Availability Expressions and Proximal Notifications in Open-Plan Offices
In open-plan offices, face-to-face (F2F) interruptions frequently occur to facilitate collaboration and cooperation with colleagues. We designed Pati & Bellio to support the coordination of F2F interruptions in open-plan offices. Pati is a partition-style personal device that visualizes availability for F2F interruptions, and Bellio is a shared interface that allows users to send notifications to their colleagues from a distance. Our three-week in-field study with four groups of participants reveals that examining the process of F2F interruptions helps determine the importance of an interruption, and that the physical distance provided by Pati and Bellio naturally allowed time to prepare for conversations. We also identified how the visualized availability is considered after an interruption begins. Our findings suggest considerations for designing systems that support coordinating social interaction in work environments.
2025 · Nari Kim et al. · Topics: Notification & Interruption Management · DIS
Lino: An Interactive System for Daily Mood Recordings Supporting Meaning-Making through a Single-Stroke Drawing Approach
Mood is influenced by complex factors and involves subjective interpretation, leading to diverse methods of recording it. While existing tools provide customizable features, they often fall short in promoting deep reflection and meaningful engagement. We developed Lino, an interactive system comprising single-stroke drawing records created in a mobile app and a desktop frame designed to archive these drawings and support the attachment of optional voice recordings. Through a three-week field study with six participants, we found that participants made meaning in the process of reframing their daily moods into single-stroke drawings and continuously refined these recordings through interactions in their everyday spaces. Our findings suggest considerations for empowering users through personal interpretation in the meaning-making process of data collection and visualization for effective personal informatics systems, and for supporting evolving personal reflective practices.
2025 · Nanum Kim et al. · Topics: Interactive Data Visualization; Data Storytelling; Mental Health Apps & Online Support Communities · DIS
Diversifying Grain-Based Compliance Illusion by Varying Base Compliance
The grain-based compliance illusion mimics the mechanical vibrations produced when a compliant object deforms, using grain-like, short (~15 ms) impulse-response vibrations. Previous work has demonstrated its robust effect across various types of devices. However, the impact of a device's inherent compliance (i.e., base compliance) on perceived compliance remains unclear. This paper investigates the influence of base compliance on the perception of illusory compliance through three psychophysical experiments. The results show that (1) the compliance illusion remained effective in the presence of base compliance, (2) descriptions of compliance were affected by both illusory and base compliance, and (3) it is possible to render compliance of the same magnitude but with multiple distinct feelings.
2025 · Buyoung Mun et al. (UNIST, TACT Lab, Computer Science & Engineering) · Topics: Vibrotactile Feedback & Skin Stimulation; Force Feedback & Pseudo-Haptic Weight · CHI
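The grain-like impulse responses described above are short, decaying vibration bursts. As a rough sketch (the frequency, decay rate, and sample rate below are illustrative assumptions, not values from the paper), one ~15 ms grain can be synthesized as an exponentially decaying sinusoid:

```python
import math

def grain(duration=0.015, freq=250.0, decay=200.0, sample_rate=8000):
    """One grain: a ~15 ms exponentially decaying sinusoid, the kind
    of short impulse-response burst used to evoke illusory compliance.
    Returns a list of amplitude samples in [-1, 1]."""
    n_samples = int(duration * sample_rate)
    return [math.exp(-decay * i / sample_rate)
            * math.sin(2 * math.pi * freq * i / sample_rate)
            for i in range(n_samples)]
```

Roughly speaking, grain-based compliance rendering plays one such burst per increment of contact force or displacement; modulating the per-grain amplitude is one way to vary the perceived compliance.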
BudsID: Mobile-Ready and Expressive Finger Identification Input for Earbuds
Wireless earbuds are an appealing platform for wearable computing on the go. However, their small size and out-of-view location mean they support only a limited set of inputs. We propose finger identification input on earbuds as a novel technique to resolve these problems. This technique involves associating touches by different fingers with different responses. To enable it on earbuds, we adapted prior work on smartwatches to develop a wireless earbud featuring a magnetometer that detects fields from a magnetic ring. A first study reveals participants achieve rapid, precise earbud touches with different fingers, even while mobile (time: 0.98s, errors: 5.6%). Furthermore, touching fingers can be accurately classified (96.9%). A second study shows strong performance with a more expressive technique involving multi-finger double-taps (inter-touch time: 0.39s, errors: 2.8%) while maintaining high accuracy (94.7%). We close by exploring and evaluating the design of earbud finger identification applications and demonstrating the feasibility of our system on low-resource devices.
2025 · Jiwan Kim et al. (KAIST, School of Electrical Engineering) · Topics: Vibrotactile Feedback & Skin Stimulation; Haptic Wearables; Foot & Wrist Interaction · CHI
Unveiling High-dimensional Backstage: A Survey for Reliable Visual Analytics with Dimensionality Reduction
Dimensionality reduction (DR) techniques are essential for visually analyzing high-dimensional data. However, visual analytics using DR often faces unreliability, stemming from factors such as inherent distortions in DR projections. This unreliability can lead to analytic insights that misrepresent the underlying data, potentially resulting in misguided decisions. To tackle these reliability challenges, we review 133 papers that address the unreliability of visual analytics using DR. Through this review, we contribute (1) a workflow model that describes the interaction between analysts and machines in visual analytics using DR, and (2) a taxonomy that identifies where and why reliability issues arise within the workflow, along with existing solutions for addressing them. Our review reveals ongoing challenges in the field, whose significance and urgency were validated by five expert researchers. It also finds that the current research landscape is skewed toward developing new DR techniques rather than interpreting or evaluating them; we discuss how the HCI community can contribute to broadening this focus.
2025 · Hyeon Jeon et al. (Seoul National University, Department of Computer Science and Engineering) · Topics: Interactive Data Visualization; Uncertainty Visualization; Visualization Perception & Cognition · CHI
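The projection distortions this survey targets can be made concrete with one simple reliability measure: normalized stress, which quantifies how badly a projection distorts pairwise distances. This is a generic illustration of such a measure, not code or a metric taken from the paper:

```python
import math

def pairwise_dists(points):
    """All pairwise Euclidean distances, in a fixed (i < j) order."""
    n = len(points)
    return [math.dist(points[i], points[j])
            for i in range(n) for j in range(i + 1, n)]

def normalized_stress(high_dim, low_dim):
    """Normalized stress: squared mismatch between pairwise distances
    in the original space and in the projection, scaled by the total
    squared original distance. 0 means distances are perfectly kept."""
    dh = pairwise_dists(high_dim)
    dl = pairwise_dists(low_dim)
    num = sum((a - b) ** 2 for a, b in zip(dh, dl))
    den = sum(a ** 2 for a in dh)
    return num / den

# Dropping a constant third coordinate preserves every pairwise
# distance, so this projection has zero stress:
high = [(0, 0, 1), (1, 0, 1), (0, 1, 1)]
low = [(0, 0), (1, 0), (0, 1)]
```

Real DR pipelines use many such measures (trustworthiness, continuity, stress variants); the point here is only that projection reliability is quantifiable, which is what makes the survey's taxonomy of where distortions arise actionable.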
Journey to My Past: Exploring and Journaling Past Memories Evoked by Questions Framed as Proud Moments
Accumulating a life history is a valuable resource for understanding the self and reflecting on personal historical experience, and can be developed through diary writing. To facilitate the recording of past events in a diary, we designed and implemented Rebulb, a system that enables users to engage with reflective questions about proud moments and document the memories they evoke, accumulating a life history. Our four-month field study with three participants showed that users intentionally and spontaneously recalled vague, wide-ranging memories during their daily activities and then concretized these memories by writing them down in a journal. The study also revealed that, regardless of whether the memories were positive or negative, the user's current state played a crucial role in how these memories were processed and reflected upon. Our findings suggest considerations for designing tools that support the recall and documentation of past experiences.
2025 · Sangsu Jang et al. (UNIST, Department of Design) · Topics: Context-Aware Computing; User Research Methods (Interviews, Surveys, Observation); Interactive Narrative & Immersive Storytelling · CHI
Expanding the Design Space of Computer Vision-based Interactive Systems for Group Dance Practice
Group dance, a sub-genre characterized by intricate motions made by a cohort of performers in tight synchronization, has a longstanding and culturally significant history and, in modern forms such as cheerleading, a broad base of current adherents. However, despite its popularity, learning group dance routines remains challenging. Based on the prior success of interactive systems to support individual dance learning, this paper argues that group dance settings are fertile ground for augmentation by interactive aids. To better understand these design opportunities, this paper presents a sequence of user-centered studies of and with amateur cheerleading troupes, spanning from the formative (interviews, observations) through the generative (an ideation workshop) to concept validation (technology probes and speed dating). The outcomes are a nuanced understanding of the lived practice of group dance learning, a set of interactive concepts to support those practices, and design directions derived from validating the proposed concepts. Through this empirical work, we expand the design space of interactive dance practice systems from the established context of single-user practice (primarily focused on gesture recognition) to a multi-user, group-based scenario focused on feedback and communication.
2024 · SooHwan Lee et al. · Topics: Full-Body Interaction & Embodied Input; Dance & Body Movement Computing · DIS
QuadStretcher: A Forearm-Worn Skin Stretch Display for Bare-Hand Interaction in AR/VR
The paradigm of bare-hand interaction has become increasingly prevalent in Augmented Reality (AR) and Virtual Reality (VR) environments, propelled by advancements in hand tracking technology. However, a significant challenge arises in delivering haptic feedback to users’ hands, due to the necessity for the hands to remain bare. In response to this challenge, recent research has proposed an indirect solution of providing haptic feedback to the forearm. In this work, we present QuadStretcher, a skin stretch display featuring four independently controlled stretching units surrounding the forearm. While achieving rich haptic expression, our device also eliminates the need for a grounding base on the forearm by using a pair of counteracting tactors, thereby reducing bulkiness. To assess the effectiveness of QuadStretcher in facilitating immersive bare-hand experiences, we conducted a comparative user evaluation (n = 20) with a baseline solution, Squeezer. The results confirmed that QuadStretcher outperformed Squeezer in terms of expressing force direction and heightening the sense of realism, particularly in 3-DoF VR interactions such as pulling a rubber band, hooking a fishing rod, and swinging a tennis racket. We further discuss the design insights gained from qualitative user interviews, presenting key takeaways for future forearm-haptic systems aimed at advancing AR/VR bare-hand experiences.
2024 · Taejun Kim et al. (School of Computing, KAIST) · Topics: Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS); Haptic Wearables · CHI
SkullID: Through-Skull Sound Conduction based Authentication for Smartglasses
This paper investigates the use of through-skull sound conduction to authenticate smartglass users. We mount a surface transducer on the right mastoid process to play cue signals and capture skull-transformed audio responses through contact microphones on various skull locations. We use the resultant bio-acoustic information as classification features. In an initial single-session study (N=25), we achieved mean Equal Error Rates (EERs) of 5.68% and 7.95% with microphones on the brow and left mastoid process. Combining the two signals substantially improves performance (to 2.35% EER). A subsequent multi-session study (N=30) demonstrates EERs are maintained over three recalls and, additionally, shows robustness to donning variations and background noise (achieving 2.72% EER). In a follow-up usability study over one week, participants report high levels of usability (as expressed by SUS scores) and that only modest workload is required to authenticate. Finally, a security analysis demonstrates the system's robustness to spoofing and imitation attacks.
2024 · Hyejin Shin et al. (Samsung Research) · Topics: Passwords & Authentication; Biosensors & Physiological Monitoring · CHI
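The Equal Error Rate reported above (and in the related WristAcoustic work below) is the operating point where the false accept and false reject rates coincide. A minimal estimator, assuming higher similarity scores indicate the genuine user, looks like the following; this is a generic sketch, not the paper's evaluation code:

```python
def equal_error_rate(genuine, impostor):
    """Estimate the Equal Error Rate: the threshold at which the false
    reject rate (genuine scores below threshold) equals the false
    accept rate (impostor scores at or above threshold). Assumes
    higher scores mean "more likely the genuine user"."""
    best_gap, eer = 1.0, 1.0
    for t in sorted(set(genuine + impostor)):
        frr = sum(s < t for s in genuine) / len(genuine)
        far = sum(s >= t for s in impostor) / len(impostor)
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2
    return eer
```

In practice the EER is read off a DET or ROC curve built from many genuine and impostor trials; the candidate thresholds here are simply the observed scores themselves.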
ThumbAir: In-Air Typing for Head Mounted Displays
Typing while wearing a standalone Head Mounted Display (HMD)---systems without external input devices or sensors to support text entry---is hard. To address this issue, prior work has used external trackers to monitor finger movements to support in-air typing on virtual keyboards. While performance has been promising, current systems are practically infeasible: finger movements may be visually occluded from inside-out HMD-based tracking systems or, otherwise, awkward and uncomfortable to perform. To address these issues, this paper explores an alternative approach. Taking inspiration from the prevalence of thumb-typing on mobile phones, we describe four studies exploring, defining and validating the performance of ThumbAir, an in-air thumb-typing system implemented on a commercial HMD. The first study explores viable target locations, ultimately recommending eight target sites. The second study collects performance data for taps on pairs of these targets to both inform the design of a target selection procedure and also support a computational design process to select a keyboard layout. The final two studies validate the selected keyboard layout in word repetition and phrase entry tasks, ultimately achieving final WPMs of 27.1 and 13.73. Qualitative data captured in the final study indicate that the discreet movements required to operate ThumbAir, in comparison to the larger scale finger and hand motions used in a baseline design from prior work, lead to reduced levels of perceived exertion and physical demand and are rated as acceptable for use in a wider range of social situations.
https://dl.acm.org/doi/10.1145/3569474
2023 · Hyunjae Gil et al. · Topics: Eye Tracking & Gaze Interaction; Voice User Interface (VUI) Design; Immersion & Presence Research · UbiComp
WristAcoustic: Through-Wrist Acoustic Response Based Authentication for Smartwatches
PINs and pattern locks are difficult to enter accurately on small watch screens, and are vulnerable to guessing attacks. To address these problems, this paper proposes a novel implicit biometric scheme based on through-wrist acoustic responses. A cue signal is played on a surface transducer mounted on the dorsal wrist and the acoustic response recorded by a contact microphone on the volar wrist. We build classifiers using these recordings for each of three simple hand poses (relax, fist and open), and use an ensemble approach to make final authentication decisions. In an initial single-session study (N=25), we achieve an Equal Error Rate (EER) of 0.01%, substantially outperforming prior on-wrist biometric solutions. A subsequent five-recall-session study (N=20) shows reduced performance, with 5.06% EER. We attribute this to increased variability in how participants perform hand poses over time. However, after retraining the classifiers, performance improved substantially, ultimately achieving 0.79% EER. We observed the most variability with the relax pose. Consequently, we achieve the most reliable multi-session performance by combining the fist and open poses: 0.51% EER. Further studies elaborate on these basic results. A usability evaluation reveals users experience low workload as well as reporting high SUS scores and fluctuating levels of perceived exertion: moderate during initial enrollment, dropping to slight during authentication. A final study examining performance in various poses and in the presence of noise demonstrates the system is robust to such disturbances and likely to work well in a wide range of real-world contexts.
https://dl.acm.org/doi/10.1145/3569473
2023 · Jun Ho Huh et al. · Topics: Foot & Wrist Interaction; Motor Impairment Assistive Input Technologies · UbiComp
Stubbi: An Interactive Device for Enhancing Remote Text and Voice Communication in Small Intimate Groups through Simple Physical Movements
Remote communication within intimate groups can be challenging because non-verbal messages are either lacking or excessive, and in any case lack physicality. In this paper, we introduce Stubbi, a device designed to enhance remote text and voice communication in small intimate groups by enabling the exchange of simple physical movements. Each group member is equipped with a Stubbi, which consists of three stubs, one representing each person, that express physical movements through height changes and rotation. Our in-lab study involving six groups of three friends revealed that Stubbi effectively maintained the flow of IM conversations by increasing attentiveness and responsiveness. Additionally, in group voice calls, the exchange of personified messages through the physical movements of the stubs assisted group interactions such as turn-taking. Our findings suggest further implications for designing interactive systems that support improved remote communication by seamlessly connecting IM and voice call conversations for small intimate groups.
2023 · Jin-young Moon et al. · Topics: Mixed Reality Workspaces; Collaborative Learning & Peer Teaching · DIS
Design and Field Trial of Tunee in Shared Houses: Exploring Experiences of Sharing Individuals’ Current Noise-level Preferences with Housemates
Being a little more careful about the sounds one produces is difficult in shared houses, because individuals can generate many unintended living noises. We designed Tunee to help each housemate better understand the others’ context and desired noise level. It is an interactive speaker that allows people to share noise-level preferences through the position changes of nodes. Our three-week in-field study with four groups of participants revealed that expressing noise-level preferences through nodes reduced the burden of verbally raising issues about the trivial noises of everyday life, and that the intentions behind lowered preferences were consulted and deemed significant. We also identified how participants figured out what behavior was acceptable to others at each noise level. Our findings suggest considerations for designing interfaces that support coordinating behaviors and awareness of social contexts in shared spaces.
2023 · Nari Kim et al. (UNIST) · Topics: Context-Aware Computing; Smart Home Interaction Design · CHI
Augmenting On-Body Touch Input with Tactile Feedback Through Fingernail Haptics
The key assumption behind on-body touch input is that the skin being touched provides natural tactile feedback. In this paper, we systematically explore, for the first time, augmenting on-body touch input with computer-generated tactile feedback. We employ vibrotactile actuation on the fingernail to couple on-body touch input with tactile feedback. Results from our first experiment show that users prefer tactile feedback for on-body touch input. In our second experiment, we determine the frequency thresholds for rendering realistic tactile “click” sensations for on-body touch buttons at three different body locations. Finally, in our third experiment, we dig deeper to render highly expressive tactile effects with a single actuator. Our non-metric multi-dimensional analysis shows that haptic augmentation of on-body buttons enhances the expressivity of on-body touch input. Overall, results from our experiments reinforce the need for tactile feedback for on-body touch input and show that actuation on the fingernail is a promising approach.
2023 · Peter Khoa Duc Tran et al. (University of Calgary) · Topics: Vibrotactile Feedback & Skin Stimulation; Foot & Wrist Interaction · CHI
GestureMeter: Design and Evaluation of a Gesture Password Strength Meter
Gestures drawn on touchscreens have been proposed as an authentication method to secure access to smartphones. They provide good usability and a theoretically large password space. However, recent work has demonstrated that users tend to select simple or similar gestures as their passwords, rendering them susceptible to dictionary based guessing attacks. To improve their security, this paper describes a novel gesture password strength meter that interactively provides security assessments and improvement suggestions based on a scoring algorithm that combines a probabilistic model, a gesture dictionary, and a set of novel stroke heuristics. We evaluate this system in both online and offline settings and show it supports creation of gestures that are significantly more resistant to guessing attacks (by up to 67%) while also maintaining performance on usability metrics such as recall success rate and time. We conclude that gesture password strength meters can help users select more secure gesture passwords.
2023 · Eunyong Cheon et al. (UNIST) · Topics: Passwords & Authentication · CHI
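A toy version of such a scoring pipeline could look like the following. Everything here is a hypothetical illustration of how a dictionary check and a stroke heuristic might combine into one score; the paper's actual meter additionally uses a probabilistic model, and none of these weights or heuristics are taken from it.

```python
import math

def gesture_strength(points, dictionary):
    """Hypothetical gesture strength score in [0, 1].
    A gesture found in the dictionary of common gestures scores 0;
    otherwise strength grows with total turning angle, a simple
    stroke-complexity heuristic (straight swipes score low).
    `points` is a list of (x, y) tuples."""
    if tuple(points) in dictionary:
        return 0.0
    turning = 0.0
    for i in range(2, len(points)):
        ax, ay = (points[i-1][0] - points[i-2][0],
                  points[i-1][1] - points[i-2][1])
        bx, by = (points[i][0] - points[i-1][0],
                  points[i][1] - points[i-1][1])
        turning += abs(math.atan2(ax * by - ay * bx, ax * bx + ay * by))
    return min(1.0, turning / (2 * math.pi))

# a plain horizontal swipe, the kind of weak gesture a dictionary would hold
common = {((0, 0), (1, 0), (2, 0))}
```

The interactive part of a strength meter then maps such a score to feedback and improvement suggestions (e.g., prompting users toward gestures with more turns) as they draw.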
Sad or just jealous? Using Experience Sampling to Understand and Detect Negative Affective Experiences on Instagram
Social Network Services (SNSs) evoke diverse affective experiences. While most are positive, many authors have documented both the negative emotions that can result from browsing SNSs and their impact: Facebook depression is a common term for the more severe results. However, while the importance of the emotions experienced on SNSs is clear, methods to catalog them, and systems to detect them, are less well developed. Accordingly, this paper reports on two studies using a novel contextually triggered Experience Sampling Method to log surveys immediately after using Instagram, a popular image-based SNS, thus minimizing recall biases. The first study improves our understanding of the emotions experienced while using SNSs. It suggests that common negative experiences relate to appearance comparison and envy. The second study captures smartphone sensor data during Instagram sessions to detect these two emotions, ultimately achieving peak accuracies of 95.78% (binary appearance comparison) and 93.95% (binary envy).
2022 · Mintra Ruensuk et al. (UNIST) · Topics: Social Platform Design & User Behavior; Cyberbullying & Online Harassment; Online Identity & Self-Presentation · CHI
SonarID: Using Sonar to Identify Fingers on a Smartwatch
The diminutive size of wrist wearables has prompted the design of many novel input techniques to increase expressivity. Finger identification, or assigning different functionality to different fingers, has been frequently proposed. However, while the value of the technique seems clear, its implementation remains challenging, often relying on external devices (e.g., worn magnets) or explicit instructions. Addressing these limitations, this paper explores a novel approach to natural and unencumbered finger identification on an unmodified smartwatch: sonar. To do this, we adapt an existing finger tracking smartphone sonar implementation---rather than extract finger motion, we process raw sonar fingerprints representing the complete sonar scene recorded during a touch. We capture data from 16 participants operating a smartwatch and use their sonar fingerprints to train a deep learning recognizer that identifies taps by the thumb, index, and middle fingers with an accuracy of up to 93.7%, sufficient to support meaningful application development.
2022 · Jiwan Kim et al. (UNIST) · Topics: Foot & Wrist Interaction; Biosensors & Physiological Monitoring · CHI
The Trial of Posit in Shared Offices: Controlling Disclosure Levels of Schedule Data for Privacy by Changing the Placement of a Personal Interactive Calendar
When expressing personal data on the displays of personal IoT devices, it is important for users to be intuitively aware of privacy settings and to perform ready-to-hand interactions that respond appropriately to the various situations occurring in shared spaces. In this paper, we developed Posit, an interactive calendar in which the disclosure level of schedule content can be changed in three stages according to where the user places the object. The results of our three-week in-field study with six participants revealed that Posit’s interaction was considered a simple way of hiding personal schedules quickly, and we identified the role positional messages play in shaping others’ gazes on the display. Additionally, we confirmed that social relationships and trust between colleagues affect the use of Posit. Our findings suggest new opportunities for designing interactions for the management of personal privacy by applying physical state-changing interaction and understanding social factors in shared spaces.
2021 · Nari Kim et al. · Topics: Privacy by Design & User Control; Notification & Interruption Management · DIS