ArmDeformation: Inducing the Sensation of Arm Deformation in Virtual Reality Using Skin-Stretching
With the development of virtual reality (VR) technology, research is being actively conducted on how multisensory feedback can create the illusion that virtual avatars are perceived as an extension of the body in VR. In line with this direction, we introduce ArmDeformation, a wearable device that employs skin-stretching to enhance virtual forearm ownership during an arm-deformation illusion. We conducted five user studies with 98 participants. Using a custom tabletop device, we identified the number of actuators and the skin-stretching design that most effectively increase the user's body ownership. Additionally, we explored the maximum visual threshold for forearm bending and the minimum detectable bending-direction angle when using skin-stretching in VR. Finally, our study demonstrates that using ArmDeformation in VR applications enhances user realism and enjoyment compared to relying on visual feedback alone.
2024 · Yilong Lin et al. (Southern University of Science and Technology) · Mid-Air Haptics (Ultrasonic); Immersion & Presence Research; Identity & Avatars in XR · CHI

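The paper does not include control code, so the following Python sketch is only a hedged illustration of how a forearm bending direction might be mapped onto a ring of skin-stretch actuators; the cosine falloff, the actuator count, and the function name are assumptions, not the authors' design.

```python
import math

def actuator_intensities(bend_angle_deg, num_actuators=4, max_stretch=1.0):
    # Ring of evenly spaced skin-stretch actuators around the forearm.
    # Each actuator is driven by the cosine of its angular distance to the
    # bend direction; actuators facing away from the bend contribute nothing.
    intensities = []
    for i in range(num_actuators):
        actuator_angle = 2 * math.pi * i / num_actuators
        diff = math.radians(bend_angle_deg) - actuator_angle
        intensities.append(max(0.0, max_stretch * math.cos(diff)))
    return intensities

print(actuator_intensities(45))  # two adjacent actuators share the stretch
```
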
Big or Small, It’s All in Your Head: Visuo-Haptic Illusion of Size-Change Using Finger-Repositioning
Haptic perception of physical sizes increases the realism and immersion in Virtual Reality (VR). Prior work rendered sizes by exerting pressure on the user's fingertips or employing tangible, shape-changing devices. These interfaces are constrained by the physical shapes they can assume, making it challenging to simulate objects growing larger or smaller than the perceived size of the interface. Motivated by literature on pseudo-haptics describing the strong influence of visuals over haptic perception, this work investigates modulating the perception of size beyond this range. We developed a fixed-size VR controller leveraging finger-repositioning to create a visuo-haptic illusion of dynamic size-change of handheld virtual objects. Through two user studies, we found that with an accompanying size-changing visual context, users can perceive virtual object sizes up to 44.2% smaller to 160.4% larger than the perceived size of the device. Without the accompanying visuals, a constant size (141.4% of device size) was perceived.
2024 · Myung Jin Kim et al. (KAIST) · Force Feedback & Pseudo-Haptic Weight; Shape-Changing Interfaces & Soft Robotic Materials; Full-Body Interaction & Embodied Input · CHI

Beyond Audio: Towards a Design Space of Headphones as a Site for Interaction and Sensing
Via Research through Design (RtD), we explore the potential of headphones as a general-purpose input device for both foreground motion gestures and background sensing of user activity. As a familiar wearable, headphones offer a compelling site for head-situated interaction and sensing. Using emerging sensing modalities such as inertial motion, capacitive touch sensing, and depth cameras, our prototypes explore sensing and interaction techniques with a range of compelling capabilities. User scenarios include context-aware privacy, gestural audio-visual control, co-opting natural body language as context to drive animated avatars in "camera-off" scenarios for remote work, and harnessing (oft-subconscious) head movements, such as dodging attacks in video games, to enhance the gameplay experience. Drawing from literature and other frameworks, we situate our prototypes and related techniques in a design space across the dual dimensions of (1) the type of input (touch, mid-air, or head orientation) and (2) the context of user action (application, body, or environment). In particular, interactions that combine multiple inputs and contexts at the same time offer a rich design space of headphone-situated wearable interaction and sensing techniques.
2023 · Payod Panda et al. · Haptic Wearables; Full-Body Interaction & Embodied Input; Context-Aware Computing · DIS

Exploring Levels of Control for a Navigation Assistant for Blind Travelers
Only a small percentage of blind and low-vision people use traditional mobility aids such as a cane or a guide dog. Various assistive technologies have been proposed to address the limitations of traditional mobility aids. These devices often give the majority of the control to either the user or the device. In this work, we explore how varying levels of control affect the user's sense of agency, trust in the device, confidence, and navigation success. We present Glide, a novel mobility aid with two control modes: Glide-directed and User-directed. We employed Glide in a study (N=9) in which blind or low-vision participants used both modes to navigate through an indoor environment. Overall, participants found that Glide was easy to use and learn. Most participants trusted Glide despite its current limitations, and their confidence and performance increased as they continued to use it. Control-mode preferences varied across situations; no single mode "won" in all of them.
2023 · Vinitha Ranganeni et al. · Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Augmentative & Alternative Communication (AAC); Context-Aware Computing · HRI

Embodying Physics-Aware Avatars in Virtual Reality
Embodiment toward an avatar in virtual reality (VR) is generally stronger when there is a high degree of alignment between the user's and the self-avatar's motion. However, one-to-one mapping between the two is not always ideal when the user interacts with the virtual environment. On these occasions, user input often leads to unnatural behavior that lacks physical realism (e.g., objects penetrating the virtual body, or the body failing to respond to impacts). We investigate how adding physics correction to self-avatar motion impacts embodiment. A physics-aware self-avatar preserves the physical plausibility of movement but introduces discrepancies between the user's and the self-avatar's motion, whose contingency is a determining factor for embodiment. To understand this impact, we conducted an in-lab study (n = 20) in which participants interacted with obstacles on their upper bodies in VR with and without physics correction. Our results showed that, rather than compromising embodiment, a physics-responsive self-avatar improved embodiment compared to the no-physics condition in both active and passive interactions.
2023 · Yujie Tao et al. (Stanford University) · Immersion & Presence Research; Identity & Avatars in XR · CHI

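As a rough illustration of the kind of correction involved, here is a minimal Python sketch that projects a tracked joint out of a spherical obstacle; the real system relies on a full physics engine, so the sphere and the one-shot projection are simplifying assumptions.

```python
import numpy as np

def physics_corrected_position(tracked, center, radius):
    # If the tracked joint penetrates a spherical obstacle, push it back to
    # the sphere's surface; otherwise follow the tracked pose one-to-one.
    tracked, center = np.asarray(tracked, float), np.asarray(center, float)
    offset = tracked - center
    dist = np.linalg.norm(offset)
    if dist >= radius or dist == 0.0:
        return tracked                      # no penetration: follow tracking
    return center + offset / dist * radius  # project joint to the surface

print(physics_corrected_position([0.1, 0.0, 0.0], [0.0, 0.0, 0.0], 0.3))
```
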
AdHocProx: Sensing Mobile, Ad-Hoc Collaborative Device Formations using Dual Ultra-Wideband Radios
We present AdHocProx, a system that uses device-relative, inside-out sensing to augment co-located collaboration across multiple devices, without recourse to externally-anchored beacons or even reliance on WiFi connectivity. AdHocProx achieves this via sensors including dual ultra-wideband (UWB) radios for sensing distance and angle to other devices in dynamic, ad-hoc arrangements, plus capacitive grip sensing to determine where the user's hands hold the device and to partially correct for the resulting UWB signal attenuation. All spatial sensing and communication take place via the side-channel capability of the UWB radios, suitable for small-group collaboration across up to four devices (eight UWB radios). Together, these sensors detect proximity and natural, socially meaningful device movements to enable contextual interaction techniques. We find that AdHocProx obtains 95% accuracy in recognizing various ad-hoc device arrangements in an offline evaluation, with participants particularly appreciative of interaction techniques that automatically leverage proximity-awareness and relative orientation among multiple devices.
2023 · Richard Li et al. (University of Washington) · Context-Aware Computing; Ubiquitous Computing · CHI

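A hedged sketch of the core geometry: converting one UWB range-and-angle reading into a device-relative position. The grip-attenuation correction and filtering that AdHocProx performs are omitted, and the function name and frame convention are illustrative.

```python
import math

def peer_position(range_m, aoa_deg):
    # Polar (range, angle-of-arrival) -> device-relative Cartesian position,
    # with x pointing forward from the sensing device and y to its left.
    theta = math.radians(aoa_deg)
    return (range_m * math.cos(theta), range_m * math.sin(theta))

print(peer_position(1.2, 30.0))  # a peer ~1.2 m away, 30 degrees to the left
```
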
RemoteLab: Virtual Reality Remote Study Toolkit
User studies play a critical role in human-subject research, including human-computer interaction. Virtual reality (VR) researchers tend to conduct user studies in person at their laboratory, where participants experiment with novel equipment to complete tasks in a simulated environment that is often new to many. However, due to social-distancing requirements in recent years, VR research has been disrupted by participants being unable to attend in-person laboratory studies. At the same time, affordable head-mounted displays are becoming common, enabling access to VR experiences and interactions outside traditional research settings. Recent research has shown that unsupervised remote user studies can yield reliable results; however, setting up experiment software designed for remote studies can be technically complex and convoluted. We present RemoteLab, a novel open-source Unity toolkit designed to facilitate the preparation of remote experiments by providing a set of tools that synchronize experiment state across multiple computers, record and collect data from various multimedia sources, and replay the accumulated data for analysis. The toolkit helps VR researchers conduct remote experiments when in-person experiments are not feasible, increase the sampling variety of a target population, and reach participants who otherwise would not be able to attend in person.
2022 · Jaewook Lee et al. · Social & Collaborative VR; Remote Work Tools & Experience · UIST

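RemoteLab itself is a Unity (C#) toolkit; the Python sketch below only illustrates the state-synchronization idea (serialize the current experiment state, push it to peer machines) and is not the toolkit's actual API.

```python
import json
import socket

def broadcast_state(state: dict, peers: list):
    # Serialize the experiment state and send it to every peer machine so
    # all computers in the study converge on the same trial and condition.
    payload = json.dumps(state).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for host, port in peers:
            sock.sendto(payload, (host, port))

broadcast_state({"trial": 3, "condition": "B"}, [("192.0.2.10", 9000)])
```
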
HapticBots: Distributed Encountered-type Haptics for VR with Multiple Shape-changing Mobile Robots
HapticBots introduces a novel encountered-type haptic approach for Virtual Reality (VR) based on multiple tabletop-size shape-changing robots. These robots move on a tabletop and change their height and orientation to haptically render various surfaces and objects on demand. Compared to previous encountered-type haptic approaches such as shape displays or robotic arms, our approach has advantages in deployability, scalability, and generalizability: the robots can be easily deployed due to their compact form factor, and they can support multiple concurrent touch points over a large area thanks to their distributed nature. We propose and evaluate a novel set of interactions enabled by these robots, including: 1) rendering haptics for VR objects by providing just-in-time touch points on the user's hand, 2) simulating continuous surfaces through concurrent height and position changes, and 3) enabling the user to pick up and move VR objects through graspable proxy objects. Finally, we demonstrate HapticBots with various applications, including remote collaboration, education and training, design and 3D modeling, and gaming and entertainment.
2021 · Ryo Suzuki et al. · Mid-Air Haptics (Ultrasonic); Mixed Reality Workspaces; Immersion & Presence Research · UIST

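As a sketch of the dispatching problem this raises, the snippet below greedily assigns the nearest free robot to a predicted touch point. The paper's actual scheduler is not described in the abstract, so treat this as an assumption-laden illustration.

```python
import math

def assign_robot(touch_point, robots):
    # Greedy nearest-neighbor dispatch: pick the closest non-busy robot to
    # intercept the predicted touch point. `robots` maps id -> (x, y, busy).
    best_id, best_dist = None, math.inf
    for rid, (x, y, busy) in robots.items():
        if busy:
            continue
        d = math.hypot(touch_point[0] - x, touch_point[1] - y)
        if d < best_dist:
            best_id, best_dist = rid, d
    return best_id

print(assign_robot((0.3, 0.2), {"a": (0.0, 0.0, False), "b": (0.5, 0.5, True)}))
```
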
AirConstellations: In-Air Device Formations for Cross-Device Interaction via Multiple Spatially-Aware Armatures
AirConstellations supports a unique semi-fixed style of cross-device interaction via multiple self-spatially-aware armatures to which users can easily attach (or detach) tablets and other devices. In particular, AirConstellations affords highly flexible and dynamic device formations where users can bring multiple devices together in-air (with 2-5 armatures poseable in 7DoF within the same workspace) to suit the demands of their current task, social situation, app scenario, or mobility needs. This affords an interaction metaphor where relative orientation, proximity, attaching (or detaching) devices, and continuous movement into and out of ad-hoc ensembles can drive context-sensitive interactions. Yet all devices remain self-stable in useful configurations even when released in mid-air. We explore flexible physical arrangement, feedforward of transition options, and layering of devices in-air across a variety of multi-device app scenarios. These include video conferencing with flexible arrangement of the person-space of multiple remote participants around a shared task-space, layered and tiled device formations with overview+detail and shared-to-personal transitions, and flexible composition of UI panels and tool palettes across devices for productivity applications. A preliminary interview study highlights user reactions to AirConstellations, such as appreciation for minimally disruptive device formations, easier physical transitions, and balancing "seeing and being seen" in remote work.
2021 · Nicolai Marquardt et al. · Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS); Knowledge Management & Team Awareness; Ubiquitous Computing · UIST

X-Rings: A Hand-mounted 360 Degree Shape Display for Grasping in Virtual Reality
X-Rings is a novel hand-mounted 360-degree shape display for Virtual Reality that renders objects in 3D and responds to user-applied touch and grasping force. Designed as a modular stack of motor-driven expandable rings (5.7-7.7 cm diameter), X-Rings renders radially-symmetric surfaces graspable by the user's whole hand. The device is strapped to the palm, allowing the fingers to freely make and break contact with the device. Capacitance sensors and motor current sensing provide estimates of finger touch states and gripping force. We present the results of a user study evaluating participants' ability to associate device-rendered shapes with visually rendered objects, as well as a demo application that allows users to freely interact with a variety of objects in a virtual environment.
2021 · Eric J. Gonzalez et al. · Shape-Changing Interfaces & Soft Robotic Materials; Identity & Avatars in XR · UIST

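A hedged sketch of the motor-current sensing idea: grip force estimated as roughly linear in current above the idle draw. Both constants are invented calibration values, not figures from the paper.

```python
def grip_force_estimate(motor_current_a, idle_current_a=0.12, k_n_per_a=9.5):
    # Force applied by the user loads the ring's motor, raising its current
    # draw; subtract the idle draw and scale by a calibration constant.
    return max(0.0, (motor_current_a - idle_current_a) * k_n_per_a)

print(grip_force_estimate(0.45))  # ~3.1 N for a 0.45 A reading
```
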
A Taxonomy of Sounds in Virtual Reality
Virtual reality (VR) leverages the human senses of sight, hearing, and touch to convey virtual experiences. For d/Deaf and hard of hearing (DHH) people, information conveyed through sound may not be accessible. To help with the future design of accessible VR sound representations for DHH users, this paper contributes a consistent language and structure for representing sounds in VR. Across two studies, we report on the design and evaluation of a novel taxonomy for VR sounds. Study 1 included interviews with 10 VR sound designers to develop our taxonomy along two dimensions: sound source and intent. To evaluate this taxonomy, we conducted a second study (Study 2) in which eight HCI researchers used our taxonomy to document sounds in 33 VR apps. We found that our taxonomy successfully categorized nearly all sounds (265/267) in these apps. We also uncovered additional insights for designing accessible visual and haptic sound substitutes for DHH users.
2021 · Dhruv Jain et al. · Social & Collaborative VR; Immersion & Presence Research; Deaf & Hard-of-Hearing Support (Captions, Sign Language, Vibration) · DIS

Haptic PIVOT: On-Demand Handhelds in VR
We present PIVOT, a wrist-worn haptic device that renders virtual objects into the user's hand on demand. Its simple design comprises a single actuated joint that pivots a haptic handle into and out of the user's hand, rendering the haptic sensations of grasping, catching, or throwing an object anywhere in space. Unlike existing hand-held haptic devices and haptic gloves, PIVOT leaves the user's palm free when not in use, allowing users to make unencumbered use of their hand. PIVOT can also render forces acting on held virtual objects, such as gravity, inertia, or air drag, by actively driving its motor while the user is firmly holding the handle. When worn on both hands, PIVOT devices can add haptic feedback to bimanual interaction, such as lifting larger objects. In our user study, participants (n=12) rated the realism of grabbing and releasing objects of different shapes and sizes with a mean score of 5.19 on a 1-7 scale, rated the ability to catch and throw balls in different directions at different velocities (mean=5.5), and verified the ability to render the comparative weight of held objects with 87% accuracy for ~100 g increments.
2020 · Robert Kovacs et al. · Force Feedback & Pseudo-Haptic Weight; Haptic Wearables · UIST

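A back-of-the-envelope companion: the joint torque needed to render a given virtual mass at the handle. The 7 cm lever arm is an assumed handle-to-joint distance, not a published spec.

```python
def handle_torque_nm(object_mass_kg, lever_arm_m=0.07, g=9.81):
    # Torque the pivot joint must exert so the handle presses into the palm
    # with the weight of the virtual object: tau = m * g * r.
    return object_mass_kg * g * lever_arm_m

print(handle_torque_nm(0.1))  # ~0.069 Nm for a ~100 g virtual object
```
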
Virtual Reality Without Vision: A Haptic and Auditory White Cane to Navigate Complex Virtual Worlds
Current Virtual Reality (VR) technologies focus on rendering visuospatial effects and thus are inaccessible to blind or low-vision users. We examine the use of a novel white-cane controller that enables navigation, without vision, of large virtual environments with complex architecture, such as winding paths and occluding walls and doors. The cane controller employs a lightweight three-axis brake mechanism to convey the large-scale shape of virtual objects. Its multiple degrees of freedom enable users to adapt the controller to their preferred techniques and grip. In addition, surface textures are rendered with a voice-coil actuator driven by contact vibrations, and spatialized audio is computed from the propagation of sound through the geometry around the user. We designed a scavenger-hunt game that demonstrates how our device enables blind users to navigate a complex virtual environment. Seven of eight users were able to successfully navigate the virtual room (6 x 6 m) to locate targets while avoiding collisions. We conclude with design considerations for creating immersive non-visual VR experiences based on user preferences for cane techniques and cane material properties.
2020 · Alexa F. Siu et al. (Stanford University) · Social & Collaborative VR; Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille) · CHI

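As a sketch of the texture channel, the snippet below maps cane-tip speed to a voice-coil drive signal using a generic contact-vibration model; the paper's actual rendering parameters are not reproduced here, and both constants are illustrative.

```python
def texture_vibration(tip_speed_m_s, surface_roughness, base_freq_hz=250.0):
    # Drive a voice-coil actuator from cane-tip motion over a virtual
    # surface: amplitude grows with speed and roughness, capped at full
    # scale; frequency stays at a fixed carrier.
    amplitude = min(1.0, surface_roughness * tip_speed_m_s)
    return base_freq_hz, amplitude

print(texture_vibration(0.8, 0.6))  # (250.0 Hz carrier, 0.48 amplitude)
```
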
CapstanCrunch: A Haptic VR Controller with User-supplied Force Feedback
We introduce CapstanCrunch, a force-resisting, ungrounded haptic controller that renders haptic feedback for touching and grasping both rigid and compliant objects in a VR environment. In contrast to previous controllers, CapstanCrunch renders human-scale forces without large, high-force, power-hungry, and expensive actuators. Instead, it integrates a friction-based capstan-plus-cord variable-resistance brake mechanism that is dynamically controlled by a small internal motor. The capstan mechanism magnifies the motor's force by a factor of around 40. Compared to active force-control devices, it is low-cost, low-power, robust, safe, fast, and quiet, while providing fine force control over user interaction. We describe the design and implementation of CapstanCrunch and demonstrate its use in a series of VR scenarios. Finally, we evaluate its performance in two user studies and compare it against an active haptic controller on its ability to simulate convincing levels of object rigidity and compliance.
2019 · Mike Sinclair et al. · Force Feedback & Pseudo-Haptic Weight; Full-Body Interaction & Embodied Input · UIST

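The force magnification of a capstan brake follows the classical capstan equation, T_out = T_in * e^(mu * phi), where phi is the wrap angle in radians. The sketch below shows that, with an assumed friction coefficient of about 0.3, roughly two full wraps yield the ~40x amplification the paper reports; the specific mu and wrap count are illustrative assumptions, not published values.

```python
import math

def capstan_gain(mu, wrap_turns):
    # Capstan equation: holding force amplified by e^(mu * phi),
    # with phi the total wrap angle in radians.
    phi = 2 * math.pi * wrap_turns
    return math.exp(mu * phi)

print(capstan_gain(0.3, 2.0))  # ~43x, close to the reported factor of ~40
```
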
RealityCheck: Blending Virtual Environments with Situated Physical Reality
Today's virtual reality (VR) systems offer chaperone rendering techniques that prevent the user from colliding with physical objects. Without a detailed geometric model of the physical world, however, these techniques offer limited possibilities for more advanced compositing between the real world and the virtual. We explore this opportunity using a real-time 3D reconstruction of the real world that can be combined with a virtual environment. RealityCheck allows users to freely move, manipulate, observe, and communicate with people and objects situated in their physical space without losing the sense of immersion or presence inside their virtual world. We demonstrate RealityCheck with seven existing VR titles and describe compositing approaches that address the potential conflicts when rendering the real world and a virtual environment together. A study with frequent VR users demonstrates the affordances provided by our system and how it can be used to enhance current VR experiences.
2019 · Jeremy Hartmann et al. (University of Waterloo & Microsoft Research) · Immersion & Presence Research; Context-Aware Computing · CHI

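A toy version of the underlying compositing step, assuming aligned per-pixel depth for both the room reconstruction and the rendered VR frame; RealityCheck's actual compositing policies are considerably richer.

```python
import numpy as np

def composite(real_rgb, real_depth, virtual_rgb, virtual_depth):
    # Per-pixel depth test: the nearer surface (real reconstruction or
    # virtual scene) wins at each pixel of the output frame.
    mask = (real_depth < virtual_depth)[..., None]
    return np.where(mask, real_rgb, virtual_rgb)
```
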
Mise-Unseen: Using Eye Tracking to Hide Virtual Reality Scene Changes in Plain Sight
Creating or arranging objects at runtime is needed in many virtual reality applications, but such changes are noticed when they occur inside the user's field of view. We present Mise-Unseen, a software system that applies such scene changes covertly, even inside the user's field of view. Mise-Unseen leverages gaze tracking to create models of user attention, intention, and spatial memory to determine if and when to inject a change. We present seven applications of Mise-Unseen that unnoticeably modify the scene within view: (i) hiding that task difficulty is adapted to the user, (ii) adapting the experience to the user's preferences, (iii) timing the use of low-fidelity effects, (iv) detecting user choice for passive haptics even when physical props are lacking, (v) sustaining physical locomotion despite a lack of physical space, (vi) reducing motion sickness during virtual locomotion, and (vii) verifying user understanding during story progression. We evaluated Mise-Unseen and our applications in a user study with 15 participants and found that while gaze data indeed supports obfuscating changes inside the field of view, changes are rendered unnoticeable by combining gaze with common masking techniques.
2019 · Sebastian Marwecki et al. · Eye Tracking & Gaze Interaction; Immersion & Presence Research · UIST

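A minimal sketch of the gaze gating this implies: only inject a change while its location sits far enough into the visual periphery. The 20-degree threshold and the bare eccentricity test are illustrative; Mise-Unseen additionally models attention, intention, and spatial memory.

```python
import math

def safe_to_change(gaze_dir, change_dir, eccentricity_deg=20.0):
    # Both arguments are unit 3D vectors from the eye; allow the change
    # only while its angular distance from gaze exceeds the threshold.
    cos_angle = sum(g * c for g, c in zip(gaze_dir, change_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle > eccentricity_deg

print(safe_to_change((0, 0, 1), (0.5, 0, 0.866)))  # ~30 degrees off-gaze: True
```
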
SeeingVR: A Set of Tools to Make Virtual Reality More Accessible to People with Low Vision
Current virtual reality applications do not support people who have low vision, i.e., vision loss that falls short of complete blindness but is not correctable by glasses. We present SeeingVR, a set of 14 tools that enhance a VR application for people with low vision by providing visual and audio augmentations. A user can select, adjust, and combine different tools based on their preferences. Nine of our tools modify an existing VR application post hoc via a plugin, without developer effort. The rest require simple inputs from developers, supported by a Unity toolkit we created that allows all 14 low-vision support tools to be integrated during development. Our evaluation with 11 participants with low vision showed that SeeingVR enabled users to better enjoy VR and complete tasks more quickly and accurately. Developers also found our Unity toolkit easy and convenient to use.
2019 · Yuhang Zhao et al. (Cornell University & Microsoft Research) · Social & Collaborative VR; Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille) · CHI

DreamWalker: Substituting Real-World Walking Experiences with a Virtual Reality
We explore a future in which people spend considerably more time in virtual reality, even while walking between locations in the real world. We present DreamWalker, a VR system that enables such real-world walking while users explore large virtual environments in a headset. Provided with a real-world destination, DreamWalker finds a similar path in a preauthored VR environment and then guides the user's real-world walking through VR. DreamWalker's tracking system fuses GPS locations, inside-out tracking, and RGBD frames to 1) continuously and accurately position the user in the real world, 2) sense walkable paths and obstacles in real time, and 3) represent paths through a dynamically changing scene in VR to redirect the user toward the chosen destination. We show DreamWalker's versatility through users walking three paths across a large campus while enjoying preauthored VR worlds, supplemented with a variety of obstacle-avoidance and redirection techniques. In our evaluation, 8 participants walked through campus on a 15-minute route, experiencing a virtual Manhattan full of animated cars, people, and other objects.
2019 · Jackie (Junrui) Yang et al. · AR Navigation & Context Awareness; Immersion & Presence Research · UIST

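A hedged sketch of one fusion ingredient: a complementary blend of absolute-but-noisy GPS with smooth-but-drifting inside-out tracking. DreamWalker's real pipeline also incorporates RGBD sensing, and the blend weight here is an arbitrary illustrative value.

```python
def fuse_position(gps_xy, tracked_xy, gps_weight=0.02):
    # Complementary filter, applied per frame: the small GPS weight slowly
    # pulls the drifting inside-out estimate toward the absolute fix
    # without introducing visible jitter.
    return tuple(g * gps_weight + t * (1 - gps_weight)
                 for g, t in zip(gps_xy, tracked_xy))

print(fuse_position((120.0, 45.0), (118.6, 44.2)))
```
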
I'm a Giant: Walking in Large Virtual Environments at High Speed Gains
Advances in tracking technology and wireless headsets enable walking as a means of locomotion in Virtual Reality. When exploring virtual environments larger than room scale, it is often desirable to increase users' perceived walking speed, for which we investigate three methods. (1) Ground-Level Scaling increases users' avatar size, allowing them to walk farther. (2) Eye-Level Scaling enables users to walk through a World in Miniature while maintaining a street-level view. (3) Seven-League Boots amplifies users' movements along their walking path. We conduct a study comparing these methods and find that users feel most embodied using Ground-Level Scaling and consequently increase their stride length. Using Seven-League Boots, unlike the other two methods, diminishes positional accuracy at high gains, and users modify their walking behavior to compensate for the lack of control. We conclude with a discussion of each technique's strengths and weaknesses and the types of situations for which each might be appropriate.
2019 · Parastoo Abtahi et al. (Stanford University) · Full-Body Interaction & Embodied Input; Immersion & Presence Research · CHI

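Seven-League Boots amplifies only the movement component along the walking direction, which can be sketched in a few lines; the gain value and unit-vector convention below are assumptions for illustration.

```python
import numpy as np

def seven_league_offset(head_delta, walk_dir, gain=5.0):
    # Amplify only the component of per-frame head movement along the
    # walking direction; lateral sway and head bob pass through at 1:1.
    head_delta = np.asarray(head_delta, dtype=float)
    walk_dir = np.asarray(walk_dir, dtype=float)  # unit vector
    along = np.dot(head_delta, walk_dir) * walk_dir
    return head_delta + (gain - 1.0) * along

print(seven_league_offset([0.02, 0.01, 0.0], [1.0, 0.0, 0.0]))
```
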
TORC: A Virtual Reality Controller for In-Hand High-Dexterity Finger Interaction
Recent hand-held controllers have explored a variety of haptic feedback sensations for users in virtual reality by producing both kinesthetic and cutaneous feedback from virtual objects. These controllers are grounded to the user's hand and can only manipulate objects through arm and wrist motions, not with the dexterity of the fingers as in real life. In this paper, we present TORC, a rigid haptic controller that renders virtual object characteristics and behaviors such as texture and compliance. Users hold and squeeze TORC using their thumb and two fingers and interact with virtual objects by sliding their thumb on TORC's trackpad. During the interaction, vibrotactile motors deliver sensations to each finger that represent the haptic feel of squeezing, shearing, or turning an object. Our evaluation showed that with TORC, participants could manipulate virtual objects more precisely (e.g., position and rotate objects in 3D) than with a conventional VR controller.
2019 · Jaeyeon Lee et al. (Microsoft Research & Korea Advanced Institute of Science and Technology) · Vibrotactile Feedback & Skin Stimulation; Force Feedback & Pseudo-Haptic Weight · CHI

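As a sketch of one standard way such texture feedback is derived, the snippet below converts thumb slide speed into a grating-crossing vibration frequency; TORC's actual mapping is not published in the abstract, and the grating period is an assumed value.

```python
def texture_drive(slide_speed_mm_s, grating_period_mm=1.0):
    # Sliding across a virtual grating at speed v crosses v / period ridges
    # per second, giving the vibrotactile carrier frequency in Hz.
    return slide_speed_mm_s / grating_period_mm

print(texture_drive(80.0))  # 80 mm/s over a 1 mm grating -> 80 Hz
```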