HapPalm: Providing Rich Spatio-Temporal Vibrotactile Feedback on the Palm for Laptop Gaming

While many modern gaming environments provide haptic feedback, laptop keyboard gaming remains largely without rich tactile interaction, despite a rapidly growing audience. In this paper, we propose HapPalm, a novel laptop interface concept that delivers rich spatio-temporal vibrotactile feedback through the palmrest area, allowing players to feel game events with their palms. Our prototype uses dual 4×6 linear resonant actuator arrays. To render various game events with HapPalm, our first study aimed to create a haptic pattern dataset: iterative design workshops identified 11 haptic pattern templates, and our second study validated that these templates convincingly convey diverse game events. Our final study embedded the patterns into a custom game, showing that spatial haptics significantly improved fun, immersion, realism, and presence compared to non-spatial or no-haptic conditions. HapPalm demonstrates that palmrest-based haptics can enrich keyboard-only laptop gaming, providing an expressive and immersive tactile channel for future laptop interfaces.

Yohan Yun et al. (School of Computing, KAIST). CHI 2026. Tags: Haptic, Touch, and Physical Display.

Redirected Pinch: Efficient and Comfortable Bare-Hand Interaction for 2D Windows in VR

Virtual Reality (VR) offers portable and flexible workspaces. However, enabling efficient and comfortable interactions without external input devices remains challenging. We propose leveraging redirected input to enable comfortable and touch-like interaction for quick and intuitive control. Our design study revealed that while touch interaction performs well with direct input, its performance degrades significantly under input redirection. In contrast, using pinch improves redirected input by providing self-haptic feedback and reducing input dimensionality, thereby compensating for spatial discrepancies. Based on these findings, we introduce Redirected Pinch, a bare-hand interaction technique that combines input redirection with pinch confirmation. It creates a virtual plane at waist height, remapping hand movements on the plane to a vertical window, with pinch gestures used for confirmation. A user study demonstrated that Redirected Pinch achieves a strong balance of accuracy, efficiency, comfort, and sense of agency across fundamental interactions.

Wen Ying et al. (University of Virginia). CHI 2026. Tags: XR Selection.

ConCon: A Wrist-Worn Clutch-Coupled Force-Feedback Device for VR Controller

Effective force feedback is critical for user immersion in VR. However, current solutions have limitations: ungrounded devices using propellers or air jets often suffer from slow response times and bulky hardware, body-worn devices tend to hinder hand movement, and wrist-worn force-feedback devices usually restrict free wrist movement. To address these challenges, we present ConCon, a wrist-worn 3-DoF force-feedback device utilizing motors with electromagnetic clutches. ConCon's three actuation units apply force to the wrist along the radial/ulnar-deviation, flexion/extension, and proximal/distal directions. The clutches can control force transmission continuously, discretely, or impulsively by suddenly releasing a loaded state. They also enable unimpeded free movement by minimizing mechanical resistance. We first evaluated ConCon's technical performance, including force output, wrist manipulation range, wrist impedance during free movement, and clutch response time. Subsequently, a user study (N=12) across six VR scenarios (Slingshot, Door, Fishing, Handfan, Pistol, Spray) showed that ConCon provided significantly higher fun, immersion, and realism than vibrotactile feedback.

YoungIn Kim et al. (School of Computing, KAIST). CHI 2026. Tags: Haptic, Touch, and Physical Display.

Typing Haptically: Towards Enabling Non-auditory Smartphone Text Entry with Haptic Feedback for Blind and Low Vision Users

Text entry on smartphones remains challenging for Blind and Low Vision (BLV) users, particularly in environments where audio feedback is impractical due to noise, privacy, or social stigma. We present TypeHap, a new system that enables BLV users to type confidently on smartphones using only haptic feedback, without relying on audio. Through formative interviews (N=20), we identified key user needs and iteratively designed a compact, attachable system combining phoneme-based haptic cues, delivered through piezo actuators embedded on both sides of the smartphone, with a tactile overlay on the touchscreen for differentiating keyboard rows. In a four-day study (N=11), BLV participants trained with TypeHap achieved text entry speeds and accuracies comparable to typing with conventional audio feedback. Participants described TypeHap as liberating in public, noisy, and private contexts where audio feedback falls short. Our findings highlight haptic feedback as a promising alternative to audio-based interaction, enabling more private, accessible smartphone use for BLV users in diverse everyday contexts.

Jisu Yim et al. UIST 2025. Tags: Vibrotactile Feedback & Skin Stimulation; Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille).

TwinSpin: A Virtual Ball in a VR Controller Enabling In-Hand 3DoF Rotation

In-hand rotation is a natural motor skill of humans, yet current VR controllers mainly rely on arm and wrist movements to rotate virtual objects, leading to significant arm motion and fatigue. To address this, we propose TwinSpin, a VR controller employing two embedded mini-trackballs manipulated by the thumb and index finger. Its design is based on the intuitive metaphor of rolling a virtual ball in-hand to achieve three degrees-of-freedom (3DoF) rotation, leveraging finger dexterity to reduce arm movement and improve task efficiency in VR object manipulation tasks. Through docking tasks in both direct and distant object manipulations, our evaluation showed that TwinSpin significantly reduced arm travel distance, arm rotation, and task completion time compared to conventional arm-based rotation techniques. In line with the objective metrics, participants reported lower perceived physical demand, effort, and less perceived fatigue in the wrist, arm, and shoulder. We also share deeper analyses of the parallel control of translation and rotation, as well as optimal rotation trajectories, to gain further insights into user behavior with TwinSpin. To the best of our knowledge, this is the first attempt to enable full in-hand 3DoF rotation in a power-grip style VR controller.

Changsung Lim et al. UIST 2025. Tags: Shape-Changing Interfaces & Soft Robotic Materials; Full-Body Interaction & Embodied Input; Mixed Reality Workspaces.

StringTouch: A Non-occlusive 3DoF Haptic Interface Using String Structures for Modulating Finger Sensations

To provide realistic and diverse tactile feedback during interactions with objects in virtual and augmented reality, various studies have explored the use of tangible proxies. However, tangible proxies face limitations due to their fixed physical properties, restricting the expression of various stiffnesses, weights, and shapes. To address these issues, we propose StringTouch, a device that modulates sensations from proxies without obstructing the fingers, preserving finger sensitivity. StringTouch modulates sensations using a tactor of 0.2 mm thin nylon threads to deform the fingers with 3DoF. In a user study (n=12), our string structure showed better performance in distinguishing orientation, roughness, and weight than conditions using a 0.1 mm latex finger cot, and was comparable to bare fingers in some of the discrimination tasks. Another experiment (n=12) verified the device's capability to modulate orientation, stiffness, and weight perceptions. Finally, in a user study (n=10) in proxy-based VR scenarios (pouring water, touching a teddy bear, touching a bottle), participants preferred StringTouch over bare-finger interactions, with most of them reporting enhanced presence.

YoungIn Kim et al. UIST 2025. Tags: Shape-Changing Interfaces & Soft Robotic Materials; Full-Body Interaction & Embodied Input.

Over the Mouse: Navigating across the GUI with Finger-Lifting Operation Mouse

Modern GUIs often have a hierarchical structure, i.e., a z-axis in the GUI interaction space. However, conventional mice do not support effective navigation along this z-axis, leading to increased physical movement and cognitive load. To address this inefficiency, we present OtMouse, a novel mouse that supports finger-lifting operations by detecting finger height through proximity sensors embedded beneath the mouse buttons, and the 'Over the Mouse' (OtM) interface, a set of interaction techniques along the z-axis of the GUI interaction space using the OtMouse. We first evaluated the performance of finger-lifting operations (n=8) with the OtMouse for two- and three-level lifting discrimination tasks. Subsequently, we conducted a user study (n=16) comparing the usability of the OtM interface and a traditional mouse interface for three representative tasks: 'Context Switch,' 'Video Preview,' and 'Map Zooming.' The results showed that the OtM interface was both qualitatively and quantitatively superior to the traditional mouse interface in the Context Switch and Video Preview tasks. This research contributes to ongoing efforts to enhance mouse-based GUI navigation experiences.

YoungIn Kim et al. (School of Computing, KAIST, HCI Lab). CHI 2025. Tags: Force Feedback & Pseudo-Haptic Weight; Prototyping & User Testing.

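Multi-level lifting discrimination of the kind OtMouse performs can be sketched as thresholding a proximity reading with hysteresis, so the reported level does not flicker when the finger hovers near a boundary. The function name and all threshold values below are illustrative assumptions, not the paper's implementation:

```python
def lift_level(height_mm, prev_level, up=(4.0, 12.0), down=(3.0, 10.0)):
    """Map a finger height reading to a discrete lift level (0, 1, or 2).

    Hysteresis: rising transitions use the `up` thresholds and falling
    transitions use the slightly lower `down` thresholds, so small
    sensor noise near a boundary cannot toggle the level. All numbers
    here are made up for illustration.
    """
    if prev_level < 1 and height_mm >= up[0]:
        prev_level = 1
    if prev_level < 2 and height_mm >= up[1]:
        prev_level = 2
    if prev_level > 1 and height_mm < down[1]:
        prev_level = 1
    if prev_level > 0 and height_mm < down[0]:
        prev_level = 0
    return prev_level
```

Feeding it a rising-then-falling height sequence steps the level up through 1 and 2 and back down, with the gap between `up` and `down` thresholds absorbing jitter.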
Pro-Tact: Hierarchical Synthesis of Proprioception and Tactile Exploration for Eyes-Free Ray Pointing on Out-of-View VR Menus

We introduce Pro-Tact, a novel eyes-free pointing technique for interacting with out-of-view (OoV) VR menus. This technique combines rapid rough pointing using proprioception with fine-grain adjustments through tactile exploration, enabling menu interaction without visual attention. Our user study demonstrated that Pro-Tact allows users to select menu items accurately (95% accuracy for 54 items) in an eyes-free manner, with reduced fatigue and sickness compared to eyes-engaged interaction. Additionally, we observed that participants voluntarily interacted with OoV menus eyes-free when Pro-Tact's tactile feedback was provided in practical VR application usage contexts. This research contributes by introducing the novel interaction technique, Pro-Tact, and quantitatively evaluating its benefits in terms of performance, user experience, and user preference in OoV menu interactions.

Yeonsu Kim et al. UIST 2024. Tags: Mid-Air Haptics (Ultrasonic); Immersion & Presence Research.

Palmrest+: Expanding Laptop Input Space with Shear Force on Palm-Resting Area

The palmrest area of laptops has potential as an additional input space, given its consistent palm contact during keyboard interaction. We propose Palmrest+, which leverages shear force exerted on the palmrest area. We suggest two input techniques: Palmrest Shortcut, for instant shortcut execution, and Palmrest Joystick, for continuous value input. These allow seamless and subtle input amidst keyboard typing. Evaluation of Palmrest Shortcut against conventional keyboard shortcuts revealed faster performance for applying shear force in both unimanual and bimanual manners, with a significant reduction in gaze shifting. Additionally, assessment of Palmrest Joystick against the laptop touchpad demonstrated comparable performance in selecting one- and two-dimensional targets with low-precision pointing, i.e., for short distances and large target sizes. Maximal hand displacement significantly decreased for both Palmrest Shortcut and Palmrest Joystick compared to the conventional methods. These findings verify the feasibility and effectiveness of leveraging the palmrest area as an additional input space on laptops, promising enhanced typing-related user interaction experiences.

Jisu Yim et al. UIST 2024. Tags: Foot & Wrist Interaction; Prototyping & User Testing.

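Continuous value input from a force sensor, as in Palmrest Joystick, is commonly realized as rate control: force beyond a deadzone maps to a velocity. The sketch below shows that generic mapping under that assumption; the function name, deadzone, and gain are illustrative, not taken from the paper:

```python
import math

def shear_to_velocity(fx, fy, deadzone=0.5, gain=80.0):
    """Map a 2D shear-force vector (N) to a cursor velocity (px/s).

    Forces within the deadzone produce no motion, suppressing the
    resting pressure of a palm; beyond it, speed grows linearly with
    the excess force magnitude, preserving the force direction.
    Constants are made up for illustration.
    """
    mag = math.hypot(fx, fy)
    if mag <= deadzone:
        return (0.0, 0.0)
    speed = gain * (mag - deadzone)
    return (speed * fx / mag, speed * fy / mag)
```

A deadzone is the standard way to keep incidental palm pressure from drifting the cursor while still allowing deliberate shear input to take effect immediately.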
QuadStretcher: A Forearm-Worn Skin Stretch Display for Bare-Hand Interaction in AR/VR

The paradigm of bare-hand interaction has become increasingly prevalent in Augmented Reality (AR) and Virtual Reality (VR) environments, propelled by advancements in hand tracking technology. However, a significant challenge arises in delivering haptic feedback to users' hands, due to the necessity for the hands to remain bare. In response to this challenge, recent research has proposed an indirect solution of providing haptic feedback to the forearm. In this work, we present QuadStretcher, a skin stretch display featuring four independently controlled stretching units surrounding the forearm. While achieving rich haptic expression, our device also eliminates the need for a grounding base on the forearm by using a pair of counteracting tactors, thereby reducing bulkiness. To assess the effectiveness of QuadStretcher in facilitating immersive bare-hand experiences, we conducted a comparative user evaluation (n=20) with a baseline solution, Squeezer. The results confirmed that QuadStretcher outperformed Squeezer in terms of expressing force direction and heightening the sense of realism, particularly in 3-DoF VR interactions such as pulling a rubber band, hooking a fishing rod, and swinging a tennis racket. We further discuss the design insights gained from qualitative user interviews, presenting key takeaways for future forearm-haptic systems aimed at advancing AR/VR bare-hand experiences.

Taejun Kim et al. (School of Computing, KAIST). CHI 2024. Tags: Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS); Haptic Wearables.

Lattice Menu: A Low-Error Gaze-Based Marking Menu Utilizing Target-Assisted Gaze Gestures on a Lattice of Visual Anchors

We present Lattice Menu, a gaze-based marking menu utilizing a lattice of visual anchors that helps perform accurate gaze pointing for menu item selection. Users who know the location of the desired item can leverage target-assisted gaze gestures for multilevel item selection by looking at visual anchors over the gaze trajectories. Our evaluation showed that Lattice Menu exhibits a considerably low error rate (~1%) and a quick menu selection time (1.3-1.6 s) for expert usage across various menu structures (4 × 4 × 4 and 6 × 6 × 6) and sizes (8, 10 and 12°). In comparison with a traditional gaze-based marking menu that does not utilize visual targets, Lattice Menu showed remarkably (~5 times) fewer menu selection errors for expert usage. In a post-interview, all 12 subjects preferred Lattice Menu, and most subjects (8 out of 12) commented that the provisioning of visual targets facilitated more stable menu selections with reduced eye fatigue.

Taejun Kim et al. (School of Computing, KAIST). CHI 2022. Tags: Eye Tracking & Gaze Interaction.

SGToolkit: An Interactive Gesture Authoring Toolkit for Embodied Conversational Agents

Non-verbal behavior is essential for embodied agents like social robots, virtual avatars, and digital humans. Existing behavior authoring approaches including keyframe animation and motion capture are too expensive to use when there are numerous utterances requiring gestures. Automatic generation methods show promising results, but their output quality is not satisfactory yet, and it is hard to modify outputs as a gesture designer wants. We introduce a new gesture generation toolkit, named SGToolkit, which gives a higher quality output than automatic methods and is more efficient than manual authoring. For the toolkit, we propose a neural generative model that synthesizes gestures from speech and accommodates fine-level pose controls and coarse-level style controls from users. The user study with 24 participants showed that the toolkit is favorable over manual authoring, and the generated gestures were also human-like and appropriate to input speech. The SGToolkit is platform agnostic, and the code is available at https://github.com/ai4r/SGToolkit.

Youngwoo Yoon et al. UIST 2021. Tags: Agent Personality & Anthropomorphism; Human-Robot Collaboration (HRC).

AtaTouch: Robust Finger Pinch Detection for a VR Controller Using RF Return Loss

Handheld controllers are an essential part of VR systems. Modern sensing techniques enable them to track users' finger movements to support natural interaction using hands. The sensing techniques, however, often fail to precisely determine whether two fingertips touch each other, which is important for the robust detection of a pinch gesture. To address this problem, we propose AtaTouch, which is a novel, robust sensing technique for detecting the closure of a finger pinch. It utilizes a change in the coupled impedance of an antenna and human fingers when the thumb and finger form a loop. We implemented a prototype controller in which AtaTouch detects the finger pinch of the grabbing hand. A user test with the prototype showed a finger-touch detection accuracy of 96.4%. Another user test with the scenarios of moving virtual blocks demonstrated low object-drop rate (2.75%) and false-pinch rate (4.40%). The results and feedback from the participants support the robustness and sensitivity of AtaTouch.

Daehwa Kim et al. (KAIST). CHI 2021. Tags: Hand Gesture Recognition; Full-Body Interaction & Embodied Input.

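Turning an impedance-coupled signal like AtaTouch's return loss into a binary pinch state is, at its simplest, change detection against an open-hand baseline with debouncing. This is a generic sketch, not the paper's detector; the margin, hold count, and dB convention are all assumptions:

```python
def detect_pinch(samples_db, baseline_db, margin_db=3.0, hold=3):
    """Flag a pinch when the return-loss reading deviates from its
    open-hand baseline by at least `margin_db` for `hold` consecutive
    samples.

    Requiring `hold` consecutive deviating samples rejects transient
    spikes that a single-sample threshold would misread as a pinch.
    Returns one boolean per input sample. Parameter values are
    illustrative only.
    """
    states, run = [], 0
    for s in samples_db:
        run = run + 1 if abs(s - baseline_db) >= margin_db else 0
        states.append(run >= hold)
    return states
```

In practice the baseline would be re-estimated per user and per grip, since the coupled impedance depends on hand geometry.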
ThroughHand: 2D Tactile Interaction to Simultaneously Recognize and Touch Multiple Objects

Users with visual impairments find it difficult to enjoy real-time 2D interactive applications on the touchscreen. Touchscreen applications such as sports games often require simultaneous recognition of and interaction with multiple moving targets through vision. To mitigate this issue, we propose ThroughHand, a novel tactile interaction that enables users with visual impairments to interact with multiple dynamic objects in real time. We designed the ThroughHand interaction to utilize the potential of the human tactile sense that spatially registers both sides of the hand with respect to each other. ThroughHand allows interaction with multiple objects by enabling users to perceive the objects using the palm while providing a touch input space on the back of the same hand. A user study verified that ThroughHand enables users to locate stimuli on the palm with a margin of error of approximately 13 mm and effectively provides a real-time 2D interaction experience for users with visual impairments.

Jingun Jung et al. (KAIST). CHI 2021. Tags: Vibrotactile Feedback & Skin Stimulation; Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Accessible Gaming.

OddEyeCam: A Sensing Technique for Body-Centric Peephole Interaction using WFoV RGB and NFoV Depth Cameras

The space around the body not only expands the interaction space of a mobile device beyond its small screen, but also enables users to utilize their kinesthetic sense. Therefore, body-centric peephole interaction has gained considerable attention. To support its practical implementation, we propose OddEyeCam, which is a vision-based method that tracks the 3D location of a mobile device in an absolute, wide, and continuous manner with respect to the body of a user in both static and mobile environments. OddEyeCam tracks the body of a user using a wide-view RGB camera and obtains precise depth information using a narrow-view depth camera from a smartphone close to the body. We quantitatively evaluated OddEyeCam through an accuracy test and two user studies. The accuracy test showed that the average tracking accuracy of OddEyeCam was 4.17 cm and 4.47 cm in 3D space when a participant was standing and walking, respectively. In the first user study, we implemented various interaction scenarios and observed that OddEyeCam was well received by the participants. In the second user study, we observed that the peephole target acquisition task performed using our system followed Fitts' law. We also analyzed the performance of OddEyeCam using the obtained measurements and observed that the participants completed the tasks with sufficient speed and accuracy.

Daehwa Kim et al. UIST 2020. Tags: Full-Body Interaction & Embodied Input; Human Pose & Activity Recognition; Context-Aware Computing.

DeepFisheye: Near-Surface Multi-Finger Tracking Technology Using Fisheye Camera

Near-surface multi-finger tracking (NMFT) technology expands the input space of touchscreens by enabling novel interactions such as mid-air and finger-aware interactions. We present DeepFisheye, a practical NMFT solution for mobile devices that utilizes a fisheye camera attached at the bottom of a touchscreen. DeepFisheye acquires the image of an interacting hand positioned above the touchscreen using the camera and employs deep learning to estimate the 3D position of each fingertip. We created two new hand pose datasets comprising fisheye images, on which our network was trained. We evaluated DeepFisheye's performance for three device sizes. DeepFisheye showed an average fingertip-tracking error of approximately 20 mm across the different device sizes. Additionally, we created simple rule-based classifiers that estimate the contact finger and hand posture from DeepFisheye's output. The contact finger and hand posture classifiers showed accuracies of approximately 83% and 90%, respectively, across the device sizes.

Keunwoo Park et al. UIST 2020. Tags: Hand Gesture Recognition; Eye Tracking & Gaze Interaction; Knowledge Worker Tools & Workflows.

FS-Pad: Video Game Interactions with a Force Feedback Gamepad

Force feedback has not been fully explored in modern gaming environments where a gamepad is the main interface. We developed various game interaction scenarios where force feedback through the thumbstick of the gamepad can be effective, and categorized them into five themes. We built a haptic device and control system that can support all presented interactions. The resulting device, FS-Pad, has sufficient fidelity to be used as a haptic game interaction design tool. To verify the presented interactions and the effectiveness of the FS-Pad, we conducted a user study with game players, developers, and designers. The subjects used an FS-Pad while playing a demo game and were then interviewed. Their feedback revealed the actual needs for the presented interactions as well as insight into the potential design of game interactions when applying FS-Pad.

Youngbo Aram Shim et al. UIST 2020. Tags: Force Feedback & Pseudo-Haptic Weight; Serious & Functional Games.

MirrorPad: Mirror on Touchpad for Direct Pen Interaction in the Laptop Environment

There is demand for pen interaction on laptops, and many pen-enabled laptop products are on the market. Many of these laptops can be transformed into tablets when pen interaction is needed. In real situations, however, a workflow often requires both keyboard and pen interactions, and such a convertible feature may not be effective. In this study, we introduce MirrorPad, a novel interface device contained in a laptop for direct pen interaction. It is both a normal touchpad and a viewport for pen interaction with a mirrored region on the screen. We report findings and decisions obtained from the design iterations that we conducted with users to refine MirrorPad toward the final design. In the user study, MirrorPad showed the same performance as that of the laptop configuration during keyboard interaction and a performance similar to that of the tablet configuration during pen interaction. The user study results confirmed that MirrorPad effectively supports a workflow that requires interspersed keyboard and pen interactions, thereby achieving its initial goal.

Sangyoon Lee et al. (Korea Advanced Institute of Science and Technology). CHI 2020. Tags: 360° Video & Panoramic Content; Prototyping & User Testing.

MagTouch: Robust Finger Identification for a Smartwatch Using a Magnet Ring and a Built-in Magnetometer

Completing tasks on smartwatches often requires multiple gestures due to the small size of the touchscreens and the lack of a sufficient number of touch controls that are easily accessible with a finger. We propose increasing the number of functions that can be triggered with a touch gesture by enabling a smartwatch to identify which finger is being used. We developed MagTouch, a method that uses a magnetometer embedded in an off-the-shelf smartwatch. It measures the magnetic field of a magnet fixed to a ring worn on the middle finger. By combining the measured magnetic field and the touch location on the screen, MagTouch recognizes which finger is being used. The tests demonstrated that MagTouch can differentiate among the three fingers used to make contact at a success rate of 95.03%.

Keunwoo Park et al. (Korea Advanced Institute of Science and Technology). CHI 2020. Tags: Hand Gesture Recognition; Smartwatches & Fitness Bands.

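The core idea of distinguishing fingers by the ring magnet's field can be illustrated with a nearest-centroid classifier over calibrated field readings. This is a simplified stand-in for MagTouch's actual model (which also uses the touch location); the template values below are made up for illustration:

```python
import math

# Per-finger magnetic-field templates (x, y, z, in microtesla) captured
# during a short per-user calibration. All values are fabricated for
# illustration; the ring magnet sits on the middle finger, so its
# template has the strongest field.
TEMPLATES = {
    "index":  (12.0, -3.0, 40.0),
    "middle": (55.0,  8.0, 90.0),
    "ring":   (30.0, 20.0, 60.0),
}

def identify_finger(field, templates=TEMPLATES):
    """Return the finger whose calibrated field template is nearest
    (Euclidean distance) to the measured magnetometer reading."""
    return min(templates, key=lambda f: math.dist(field, templates[f]))
```

Because the magnetometer reading at touch time depends on the magnet's pose relative to the watch, a practical system would recalibrate per user and fuse the touch coordinates as an additional feature, as the paper describes.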
Voice+Tactile: Augmenting In-vehicle Voice User Interface with Tactile Touchpad Interaction

In-vehicle interaction is increasingly adopting Voice User Interfaces (VUIs), which let drivers use diverse applications with little effort. However, VUIs have innate usability issues, such as turn-taking problems, short-term memory workload, inefficient controls, and difficulty correcting errors. To overcome these weaknesses, we explored supplementing the VUI with tactile interaction. As an early result, we present Voice+Tactile interactions that augment the VUI via multi-touch inputs and high-resolution tactile outputs. We designed various Voice+Tactile interactions to support different VUI interaction stages and derived four Voice+Tactile interaction themes: Status Feedback, Input Adjustment, Output Control, and Finger Feedforward. A user study showed that the Voice+Tactile interactions improved VUI efficiency and user experience without incurring significant additional distraction overhead on driving. We hope these early results open new research questions on improving in-vehicle VUIs with a tactile channel.

Jingun Jung et al. (Korea Advanced Institute of Science and Technology). CHI 2020. Tags: In-Vehicle Haptic, Audio & Multimodal Feedback; Voice User Interface (VUI) Design.