JettingPointer: Enabling Skin-to-Pointer Midair Touch Interaction on Minimal Wearables Using Integrated Airflow Haptic CuesWe introduce JettingPointer, a skin-to-pointer interaction technique that enables accurate near-surface 2D touch input on minimal wearable devices, such as smart glasses. The core component is an airflow jet, embedded in the glasses frame, that functions as a haptic pointer by providing localized feedback to the finger skin during touch interactions performed above the frame. Users activate functions by aligning their finger phalanx with the airflow stream, guided by proprioception and a distinct point sensation. We optimized the airflow using fluid dynamics principles and characterized the required flow rate for stable tactile perception. In Study 1, we validated its perceptual clarity, confirming that a perceptible point sensation could be reliably achieved within 20 mm of the nozzle. In Study 2, participants performed eyes-free touch tasks with nearly three times greater accuracy when supported by haptic feedback (7.49 vs. 21.85 error). These findings demonstrate the potential of JettingPointer as a practical method for enabling proprioception-guided, near-surface interaction on compact wearables, with implications for expanding dense input in space-constrained form factors.2025YFYuan-Ling Feng et al.Mid-Air Haptics (Ultrasonic)Haptic WearablesMobileHCI
Surrogate Avatar: Enhancing Situated Co-Presence and User Mobility in Symmetric Telepresence ConversationsWe present Surrogate Avatar, an adaptive telepresence method that enhances user mobility and situated co-presence in symmetric avatar-mediated communication. The system enables a remote user’s avatar to autonomously position itself in socially and environmentally appropriate locations within the local user’s space—based on spatial affordances, interactional norms, and environmental constraints—supporting fluid interaction without requiring a shared environmental context. Through a formative study, we derived key adaptation objectives and implemented them using a distributed optimization framework based on the AUIT system. The framework distributes adaptation tasks across server and client to balance responsiveness and computational efficiency. A user study involving both stationary and nomadic scenarios demonstrated consistently high usability and presence, with some limitations observed under walking conditions. An additional exploratory field study in a semi-structured public setting demonstrated the system’s viability beyond controlled lab conditions. These findings motivate future designs of mobile telepresence systems that dynamically adapt to spatial and conversational context while mitigating misunderstandings that can arise from asymmetric environmental awareness and supporting privacy-sensitive interaction.2025SLSheng-Cian Lee et al.Teleoperation & TelepresenceMobileHCI
SeeThroughBody: Mitigating Occlusion through Body Transparency to Enhance Touch Interaction between the Foot and Interactive FloorOcclusion, often caused by the user's body or fingers, can significantly reduce the efficiency and usability of touch interfaces. As foot-based interactions in HMDs become more prevalent, self-occlusion becomes a more pronounced issue due to the involvement of the body and legs. This work presents SeeThroughBody, a body-rendering approach designed to mitigate occlusion and enhance touch interactions between the foot and interactive floor in virtual environments. Our user study unveiled twofold results. First, changing VisualizationStyles and BodyPartsVisibility can improve objective performance (e.g., time, movement) by reducing occlusion. Second, these modifications also affect the subjective user experience (e.g., embodiment, usability). Different VisualizationStyles and BodyPartsVisibility have varying impacts, presenting trade-offs between performance and experience. Based on these insights, we recommend Transparent-Foot and Outline-Foot for interactions focused on efficiency, and Transparent-All and Transparent-Thigh for enhancing overall user experience. Finally, we demonstrate the application of these recommendations in a map browsing scenario using foot touch.2025MSMeng Ting Shih et al.National Yang Ming Chiao Tung University, Institute of Computer Science and EngineeringFull-Body Interaction & Embodied InputFoot & Wrist InteractionCHI
BodyTouch: Investigating Eye-Free, On-Body and Near-Body Touch Interactions with HMDsCheng et al. present the BodyTouch system, exploring eye-free on-body and near-body touch interaction while wearing an HMD.2024WCWen-Wei Cheng et al.Hand Gesture RecognitionFull-Body Interaction & Embodied InputUbiComp
Seated-WIP: Enabling Walking-in-Place Locomotion for Stationary Chairs in Confined SpacesWe introduce Seated-WIP, a footstep-based locomotion technique tailored for users seated in confined spaces such as on an airplane. It emulates real-world walking using forefoot or rearfoot in-place stepping, enhancing embodiment while reducing fatigue for prolonged interactions. Our footstep-locomotion maps users’ footstep motions to four locomotion actions: walking forward, turning-in-place, walking backward, and sidestepping. Our first study examined embodiment and fatigue levels across various sitting positions using forefoot, rearfoot, and fullfoot stepping methods. While all these methods effectively replicated walking, users favored the forefoot and rearfoot methods due to reduced fatigue. In our second study, we compared the footstep-locomotion to leaning- and controller-locomotion on a multitasking navigation task. Results indicate that footstep locomotion offers the best embodied sense of walking and has fatigue levels comparable to controller-locomotion, albeit with slightly lower efficiency. In seated VR environments, footstep locomotion offers a harmonious blend of embodiment, fatigue mitigation, and efficiency.2024LCLiwei Chan et al.National Chiao Tung UniversityFull-Body Interaction & Embodied InputImmersion & Presence ResearchCHI
LapTouch: Using the Lap for Seated Touch Interaction with HMDsUse of virtual reality while seated is common, but studies on seated interaction beyond the use of controllers or hand gestures have been sparse. This work presents LapTouch, which makes use of the lap as a touch interface and includes two user studies to inform the design of direct and indirect touch interaction using the lap with visual feedback that guides the user's touch, as well as eye-free interaction in which users are not provided with such visual feedback. The first study suggests that direct interaction can provide effective layouts with 95% accuracy with up to a 4×4 layout and a shorter completion time, while indirect interaction can provide effective layouts with up to a 4×5 layout but a longer completion time. Considering user experience, which revealed that 4-row and 5-column layouts are not preferred, it is recommended to use both direct and indirect interaction with a maximum of a 3×4 layout. According to the second study, supporting eye-free interaction with a support vector machine (SVM) classifier allows for a 2×2 layout with a generalized model and 2×2, 2×3 and 3×2 layouts with personalized models. https://doi.org/10.1145/3610878 2023TMTzu-Wei Mi et al.Full-Body Interaction & Embodied InputUbiComp
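The SVM-based eye-free recognition above can be illustrated with a minimal linear SVM. The sketch below is an assumption-laden stand-in, not the authors' implementation: it trains with Pegasos-style sub-gradient descent on invented 2D touch coordinates (hypothetical normalized lap positions), with the bias absorbed into a constant feature.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style training of a linear SVM.
    X: feature vectors (e.g., normalized touch coordinates);
    y: labels in {-1, +1}."""
    rng = random.Random(seed)
    data = [list(x) + [1.0] for x in X]  # constant feature absorbs the bias
    w = [0.0] * len(data[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(data)), len(data)):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, data[i]))
            w = [(1.0 - eta * lam) * wj for wj in w]  # regularization shrink
            if margin < 1:  # hinge-loss violator: step toward its side
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, data[i])]
    return w

def predict(w, x):
    score = sum(wj * xj for wj, xj in zip(w, list(x) + [1.0]))
    return 1 if score >= 0 else -1
```

In practice, a multi-class formulation (e.g., one-vs-rest over the layout's grid cells) trained on per-user data would play the role of the paper's personalized models.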
"A feeling of déjà vu": The Effects of Avatar Appearance-Similarity on Persuasiveness in Social Virtual RealityThe similarity effect refers to the tendency for people to be more easily influenced by others who resemble them in appearance. This phenomenon has been found to have positive impacts, including on the building of trust, that enrich the quality of communication (e.g., fluency or collaboration performance). While research has shown that the similarity effect occurs in screen-based communication platforms, it remains unclear how this phenomenon impacts user perceptions, especially of others' persuasiveness, in immersive environments such as virtual reality (VR). In this study, we adopted a mixed-methods approach to exploring how interaction with avatars of similar appearance to one's own self-representation influences conversations. Such similarity was operationalized as having three levels: identicality, moderate similarity, and dissimilarity. The study found that avatars of moderate similarity have the greatest persuasiveness; moreover, in both the identicality and moderate-similarity conditions, participants found the avatars easier to communicate with and gave them lower eeriness ratings than in the dissimilarity condition. Multiple linear regression further revealed that users who had relatively low self-esteem and/or were relatively conscientious were more susceptible to the positive effect of appearance similarity on persuasiveness. We conclude that the similarity effect, especially when the similarity in question is moderate, could be leveraged to support persuasiveness in VR-based communication.2023MSFaye Shih et al.AR/VRCSCW
RealityLens: A User Interface for Blending Customized Physical World View into Virtual RealityResearch has enabled virtual reality (VR) users to interact with the physical world by blending the physical world view into the virtual environment. However, current solutions are designed for specific use cases and hence are not capable of covering users' varying needs for accessing information about the physical world. This work presents RealityLens, a user interface that allows users to peep into the physical world in VR with the reality lenses they deployed for their needs. For this purpose, we first conducted a preliminary study with experienced VR users to identify users' needs for interacting with the physical world, which led to a set of features for customizing the scale, placement, and activation method of a reality lens. We evaluated the design in a user study (n=12) and collected the feedback of participants engaged in two VR applications while encountering a range of interventions from the physical world. The results show that users' VR presence tends to be better preserved when interacting with the physical world with the support of the RealityLens interface.2022CWPeng-Jui Wang et al.Mixed Reality WorkspacesUIST
Predicting Opportune Moments to Deliver Notifications in Virtual RealityVirtual reality (VR) has increasingly been used in many areas, and the need to deliver notifications in VR is also expected to increase accordingly. However, untimely interruptions could largely impact the experience in VR. Identifying opportune times to deliver notifications to users allows for notifications to be scheduled in a way that minimizes disruption. We conducted a study to investigate the use of sensor data available on an off-the-shelf VR device and additional contextual information, including current activity and engagement of users, to predict opportune moments for sending notifications using deep learning models. Our analysis shows that using mainly sensor features could achieve 72% recall, 71% precision and 0.86 area under receiver operating characteristic (AUROC); performance can be further improved to 81% recall, 82% precision, and 0.93 AUROC if information about activity and summarized user engagement is included.2022KCTze-Yu Chen et al.National Yang Ming Chiao Tung UniversityNotification & Interruption ManagementCHI
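The recall, precision, and AUROC figures reported above are standard binary-classification metrics for an interruptibility model. As a generic reference sketch (not the paper's evaluation code), they can be computed from ground-truth labels and classifier scores as follows, with AUROC via its rank interpretation: the probability that a random positive outranks a random negative.

```python
def precision_recall(y_true, y_pred):
    """y_true/y_pred: 0/1 labels (1 = opportune moment)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def auroc(y_true, scores):
    """Mann-Whitney formulation of the area under the ROC curve:
    ties between a positive and a negative score count half."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Unlike precision and recall, AUROC is threshold-free, which is why it is commonly reported alongside them for notification-timing models.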
Investigating Positive and Negative Qualities of Human-in-the-Loop Optimization for Designing Interaction TechniquesDesigners reportedly struggle with design optimization tasks where they are asked to find a combination of design parameters that maximizes a given set of objectives. In HCI, design optimization problems are often exceedingly complex, involving multiple objectives and expensive empirical evaluations. Model-based computational design algorithms assist designers by generating design examples during design; however, they assume a model of the interaction domain. Black-box methods for assistance, on the other hand, can work with any design problem. However, virtually all empirical studies of this human-in-the-loop approach have been carried out by either researchers or end-users. It remains an open question whether such methods can help designers in realistic tasks. In this paper, we study Bayesian optimization as an algorithmic method to guide the design optimization process. It operates by proposing to a designer which design candidate to try next, given previous observations. We report observations from a comparative study with 40 novice designers who were tasked to optimize a complex 3D touch interaction technique. The optimizer helped designers explore larger proportions of the design space and arrive at a better solution; however, they reported lower agency and expressiveness. Designers guided by an optimizer reported lower mental effort but also felt less creative and less in charge of the progress. We conclude that human-in-the-loop optimization can support novice designers in cases where agency is not critical.2022LCLiwei Chan et al.National Chiao Tung UniversityForce Feedback & Pseudo-Haptic WeightComputational Methods in HCICHI
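Bayesian optimization, as studied above, fits a probabilistic surrogate (commonly a Gaussian process) to past design evaluations and proposes the next candidate by maximizing an acquisition function. The sketch below is a generic illustration, not the study's implementation: an RBF-kernel GP with a UCB acquisition over a single 1D design parameter, where the length-scale, noise, and beta values are arbitrary choices.

```python
import math

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel over a 1D design parameter."""
    return math.exp(-((a - b) ** 2) / (2 * ls ** 2))

def solve(A, y):
    """Gaussian elimination with partial pivoting (solves Ax = y)."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, xq, noise=1e-6):
    """GP posterior mean and variance at query point xq,
    given observed designs xs with measured objectives ys."""
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    k_star = [rbf(x, xq) for x in xs]
    mean = sum(a * k for a, k in zip(alpha, k_star))
    v = solve(K, k_star)
    var = rbf(xq, xq) - sum(k * vi for k, vi in zip(k_star, v))
    return mean, max(var, 0.0)

def propose(xs, ys, candidates, beta=2.0):
    """UCB acquisition: predicted value plus an uncertainty bonus."""
    def ucb(xq):
        m, v = gp_posterior(xs, ys, xq)
        return m + beta * math.sqrt(v)
    return max(candidates, key=ucb)
```

In the human-in-the-loop setting, each iteration would show the designer the pick from `propose`, record the empirically measured objective, and refit the surrogate with the enlarged observation set.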
Slice of Light: Transparent and Integrative Transition Among Realities in a Multi-HMD-User EnvironmentThis work presents Slice of Light, a visualization design created to enhance transparency and integrative transition between realities of Head-Mounted Display (HMD) users sharing the same physical environment. Targeted at reality-guests, Slice of Light’s design enables guests to view other HMD users’ interactions contextualized in their own virtual environments while allowing the guests to navigate among these virtual environments. In this paper, we detail our visualization design and the implementation. We demonstrate Slice of Light with a block-world construction scenario that involves a multi-HMD-user environment. VR developer and HCI expert participants were recruited to evaluate the scenario, and responded positively to Slice of Light. We discuss their feedback, our design insights, and the limitations of this work.2020CWChiu-Hsuan Wang et al.Social & Collaborative VRMixed Reality WorkspacesImmersion & Presence ResearchUIST
HMD Light: Sharing In-VR Experience via Head-Mounted Projector for Asymmetric InteractionWe present HMD Light, a proof-of-concept Head-Mounted Display (HMD) implementation that reveals the Virtual Reality (VR) user’s experience in the physical environment to facilitate communication between VR and external users in a mobile VR context. While previous work externalized the VR user’s experience through an on-HMD display, HMD Light places the display into the physical environment to enable larger display and interaction area. This work explores the interaction design space of HMD Light and presents four applications to demonstrate its versatility. Our exploratory user study observed participant pairs experience applications with HMD Light and evaluated usability, accessibility and social presence between users. From the results, we distill design insights for HMD Light and asymmetric VR collaboration.2020CWChiu-Hsuan Wang et al.Social & Collaborative VRMixed Reality WorkspacesImmersion & Presence ResearchUIST
OmniGlobeVR: A Collaborative 360-Degree Communication System for VRIn this paper, we present a novel collaboration tool, OmniGlobeVR, which is an asymmetric system that supports communication and collaboration between a VR user (occupant) and multiple non-VR users (designers) across the virtual and physical platform. OmniGlobeVR allows designer(s) to explore the VR space from any point of view using two view modes: a 360° first-person mode and a third-person mode. In addition, a shared gaze awareness cue is provided to further enhance communication between the occupant and the designer(s). Finally, the system has a face window feature that allows designer(s) to share their facial expressions and upper body view with the occupant for exchanging and expressing information using nonverbal cues. We conducted a user study to evaluate the OmniGlobeVR, comparing three conditions: (1) first-person mode with the face window, (2) first-person mode with a solid window, and (3) third-person mode with the face window. We found that the first-person mode with the face window required significantly less mental effort, and provided better spatial presence, usability, and understanding of the partner’s focus. We discuss the design implications of these results and directions for future research.2020ZLZhengqing Li et al.Social & Collaborative VRImmersion & Presence ResearchDIS
A Skin-Stroke Display on the Eye-Ring Through Head-Mounted DisplaysWe present the Skin-Stroke Display, a system mounted on the lens inside the head-mounted display, which exerts subtle yet recognizable tactile feedback on the eye-ring using a motorized air jet. To inform our design of noticeable air-jet haptic feedback, we conducted a user study to identify absolute detection thresholds. Our results show that tactile sensitivity differed around the eyes, and we determined a standard intensity (8 mbar) to prevent turbulent airflow blowing into the eyes. In the second study, we asked participants to adjust the intensity around the eye for equal sensation based on the standard intensity. Next, we investigated the recognition of point and stroke stimuli, with and without induced cognitive load, in eight directions on the eye-ring. Our longStroke stimulus can achieve an accuracy of 82.6% without cognitive load and 80.6% with cognitive load simulated by the Stroop test. Finally, we demonstrate example applications using the skin-stroke display as an off-screen indicator, tactile I/O progress display, and tactile display.2020WTWen-Jie Tseng et al.National Chiao Tung University & Institut Polytechnique de ParisMid-Air Haptics (Ultrasonic)Eye Tracking & Gaze InteractionCHI
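Absolute detection thresholds like the 8 mbar standard intensity above are typically estimated with an adaptive psychophysical procedure. One common choice, offered purely as an illustration and not as the authors' actual protocol, is a 1-up-1-down staircase that lowers the stimulus after each detection, raises it after each miss, and averages the intensities at response reversals:

```python
def staircase_threshold(respond, start, step, reversals_needed=6):
    """Simple 1-up-1-down adaptive staircase.
    respond(level) -> True if the participant detects the stimulus.
    Returns the mean intensity over the recorded reversal points."""
    level = start
    last_detected = None
    reversal_levels = []
    while len(reversal_levels) < reversals_needed:
        detected = respond(level)
        if last_detected is not None and detected != last_detected:
            reversal_levels.append(level)  # response flipped: a reversal
        last_detected = detected
        level += -step if detected else step  # down on hit, up on miss
    return sum(reversal_levels) / len(reversal_levels)
```

A 1-up-1-down rule converges on the 50% detection point; transformed rules (e.g., 2-down-1-up) target higher detection probabilities.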
TilePoP: Tile-type Pop-up Prop for Virtual RealityWe present TilePoP, a new type of pneumatically-actuated interface deployed as floor tiles which dynamically pop up into large shapes to construct proxy objects for whole-body interactions in Virtual Reality. TilePoP consists of a 2D array of stacked cube-shaped airbags designed with specific folding structures, enabling each airbag to be inflated into a physical proxy and deflated back to a tile when not in use. TilePoP is capable of providing haptic feedback for the whole body and can even support human body weight. Thus it affords new interaction possibilities in VR. We describe the design and implementation in detail. Finally, we demonstrate applications and report a preliminary user evaluation conducted to understand the experience of using TilePoP.2019STShan-Yuan Teng et al.Shape-Changing Interfaces & Soft Robotic MaterialsFull-Body Interaction & Embodied InputUIST
Pull-Ups: Enhancing Suspension Activities in Virtual Reality with Body-Scale Kinesthetic Force FeedbackWe present Pull-Ups, a suspension kit that can suggest a range of body postures and thus enables various exercise styles, with users perceiving kinesthetic force feedback by suspending their weight with arm exertion during the interaction. Pull-Ups actuates the user's body to move up to 15 cm by pulling his or her hands using a pair of pneumatic artificial muscle groups. Our studies identified discernible levels of kinesthetic force feedback, which were then exploited in the design of feedback for three physical activities: kitesurfing, paragliding, and space invader. Our final study on user experiences suggested that a passive suspension kit alone added substantially to users' perceptions of realism and enjoyment (all above neutral) with passive physical support, while sufficient active feedback can raise them further. In addition, we found that both passive and active feedback of the suspension kit significantly reduced motion sickness in simulated kitesurfing and paragliding compared to when no suspension kit (and thus no feedback) was provided. This work suggests that a passive suspension kit is cost-effective as a home exercise kit, while active feedback can further enhance the user experience, though at the cost of the installation (e.g., an air compressor in our prototype).2019YYYuan-Syun Ye et al.Force Feedback & Pseudo-Haptic WeightFull-Body Interaction & Embodied InputUIST
FaceWidgets: Exploring Tangible Interaction on Face with Head-Mounted DisplaysWe present FaceWidgets, a device integrated with the backside of a head-mounted display (HMD) that enables tangible interactions using physical controls. To allow for near range-to-eye interactions, our first study suggested displaying the virtual widgets at 20 cm from the eye positions, which is 9 cm from the HMD backside. We propose two novel interactions, widget canvas and palm-facing gesture, that can help users avoid double vision and allow them to access the interface as needed. Our second study showed that displaying a hand reference improved performance of face widgets interactions. We developed two applications of FaceWidgets, a fixed-layout 360 video player and a contextual input for smart home control. Finally, we compared four hand visualizations against the two applications in an exploratory study. Participants considered the transparent hand as the most suitable and responded positively to our system.2019WTWen-Jie Tseng et al.Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS)In-Vehicle Haptic, Audio & Multimodal FeedbackEye Tracking & Gaze InteractionUIST
ThermalBracelet: Exploring Thermal Haptic Feedback Around the WristSmartwatches enable the wrist to be used as an ideal location to provide always-available haptic notifications, as they are worn constantly in direct contact with the skin. With the wrist straps, the haptic feedback can be extended to the full space around the wrist to provide more spatial and enriched feedback. With ThermalBracelet, we investigate thermal feedback as a haptic feedback modality around the wrist. We present three studies that lead to the development of a smartwatch-integratable thermal bracelet that stimulates six locations around the wrist. Our initial evaluation reports on the selection of the thermal module configurations. Secondly, with the selected six-module configuration, we explore its usability in real-world scenarios such as walking and reading. Thirdly, we investigate its capability of providing spatiotemporal feedback while engaged in distracting tasks. Finally, we present application scenarios that demonstrate its usability.2019RPRoshan Lalitha Peiris et al.Keio University Graduate School of Media DesignFoot & Wrist InteractionBiosensors & Physiological MonitoringContext-Aware ComputingCHI
PuPoP: Pop-up Prop on Palm for Virtual RealityThe sensation of being able to feel the shape of an object when grasping it in Virtual Reality (VR) enhances a sense of presence and the ease of object manipulation. Though most prior works focus on force feedback on fingers, the haptic emulation of grasping a 3D shape requires the sensation of touch using the entire hand. Hence, we present Pop-up Prop on Palm (PuPoP), a light-weight pneumatic shape-proxy interface worn on the palm that pops several airbags up with predefined primitive shapes for grasping. When a user's hand encounters a virtual object, an airbag of appropriate shape, ready for grasping, is inflated via air pumps; the airbag then deflates when the object is no longer in play. Since PuPoP is a physical prop, it can provide the full sensation of touch to enhance the sense of realism for VR object manipulation. For this paper, we first explored the design and implementation of PuPoP with multiple shape structures. We then conducted two user studies to further understand its applicability. The first study shows that, when in conflict, visual sensation tends to dominate over touch sensation, allowing a prop with a fixed size to represent multiple virtual objects with similar sizes. The second study compares PuPoP with controllers and free-hand manipulation in two VR applications. The results suggest that utilization of dynamically-changing PuPoP, when grasped by users in line with the shapes of virtual objects, enhances enjoyment and realism. We believe that PuPoP is a simple yet effective way to convey haptic shapes in VR.2018STShan-Yuan Teng et al.Shape-Changing Interfaces & Soft Robotic MaterialsImmersion & Presence ResearchUIST
FacePush: Introducing Normal Force on Face with Head-Mounted DisplaysThis paper presents FacePush, a Head-Mounted Display (HMD) integrated with a pulley system to generate normal forces on a user’s face in virtual reality (VR). The mechanism of FacePush is obtained by shifting torques provided by two motors that press upon a user’s face via utilization of a pulley system. FacePush can generate normal forces of varying strengths and apply those to the surface of the face. To inform our design of FacePush for noticeable and discernible normal forces in VR applications, we conducted two studies to identify the absolute detection threshold and the discrimination threshold for users’ perception. After further consideration in regard to user comfort, we determined that two levels of force, 2.7 kPa and 3.375 kPa, are ideal for the development of the FacePush experience via implementation with three applications which demonstrate use of discrete and continuous normal force for the actions of boxing, diving, and 360 guidance in virtual reality. In addition, with regard to the virtual boxing application, we conducted a user study evaluating the user experience in terms of enjoyment and realism and collected the users’ feedback.2018HCHong-Yu Chang et al.Mid-Air Haptics (Ultrasonic)Immersion & Presence ResearchUIST