HandyCast: Phone-based Bimanual Input for Virtual Reality in Mobile and Space-Constrained Settings via Pose-and-Touch Transfer

Despite the potential of Virtual Reality as the next general-purpose computing platform, current systems are tailored to stationary settings that support expansive mid-air interaction. In mobile scenarios, however, the physical space surrounding the user may be prohibitively small for spatial VR interaction with classical controllers. In this paper, we present HandyCast, a smartphone-based input technique that enables full-range 3D input with two virtual hands in VR while requiring little physical space, allowing users to operate large virtual environments in mobile settings. HandyCast defines a pose-and-touch transfer function that fuses the phone's position and orientation with touch input to derive two individual 3D hand positions. Holding their phone like a gamepad, users can thus move and turn it to independently control their virtual hands. Touch input using the thumbs fine-tunes the respective virtual hand position and controls object selection. We evaluated HandyCast in three studies, comparing its performance with that of Go-Go, a classic bimanual controller technique. In our open-space study, participants required significantly less physical motion using HandyCast with no decrease in completion time or body ownership. In our space-constrained study, participants achieved significantly faster completion times, smaller interaction volumes, and shorter path lengths with HandyCast compared to Go-Go. In our technical evaluation, HandyCast's fully standalone inside-out 6D tracking likewise incurred no decrease in completion time compared to an outside-in tracking baseline.

Mohamed Kari et al. · ETH Zürich, Porsche AG · CHI 2023 · Full-Body Interaction & Embodied Input; Mixed Reality Workspaces
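The abstract describes the pose-and-touch transfer function only at a high level. As a rough illustration of the general shape such a function could take, here is a minimal Python sketch: the hand positions are derived from the phone's 6D pose plus amplified thumb offsets expressed in the phone's local frame. The constants REST_OFFSET, LATERAL, and TOUCH_GAIN and the exact mapping are assumptions for illustration, not HandyCast's actual implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical constants; the paper's actual gains and offsets are not given here.
REST_OFFSET = np.array([0.0, 0.0, 0.4])   # hands rest 40 cm in front of the phone
LATERAL = np.array([0.15, 0.0, 0.0])      # left/right hand separation
TOUCH_GAIN = 2.0                          # amplifies thumb travel into hand travel

def hand_positions(phone_pos, phone_quat, left_touch, right_touch):
    """Pose-and-touch transfer sketch: map the phone's 6D pose plus per-thumb
    touch offsets (metres, phone-local x/y) to two 3D hand positions."""
    R = Rotation.from_quat(phone_quat)  # phone orientation in the world frame
    def one_hand(sign, touch):
        # Thumb input pans the hand along the phone's local x (sideways) and z (depth) axes.
        local = REST_OFFSET + sign * LATERAL + TOUCH_GAIN * np.array([touch[0], 0.0, touch[1]])
        return phone_pos + R.apply(local)  # rotate into world space, then translate
    return one_hand(-1, left_touch), one_hand(+1, right_touch)

left, right = hand_positions(np.zeros(3), [0, 0, 0, 1], (0.01, 0.03), (-0.02, 0.0))
```

Because both hands are expressed in the phone's frame, moving or turning the phone transports both virtual hands at once, while each thumb fine-tunes its own hand, which matches the interaction the abstract describes.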
Haptic PIVOT: On-Demand Handhelds in VR

We present PIVOT, a wrist-worn haptic device that renders virtual objects into the user's hand on demand. Its simple design comprises a single actuated joint that pivots a haptic handle into and out of the user's hand, rendering the haptic sensations of grasping, catching, or throwing an object anywhere in space. Unlike existing handheld haptic devices and haptic gloves, PIVOT leaves the user's palm free when not in use, allowing unencumbered use of the hand. PIVOT can also render forces acting on a held virtual object, such as gravity, inertia, or air drag, by actively driving its motor while the user is firmly holding the handle. Worn on both hands, PIVOT devices add haptic feedback to bimanual interaction, such as lifting larger objects. In our user study, participants (n=12) evaluated the realism of grabbing and releasing objects of different shapes and sizes (mean score 5.19 on a 7-point scale), rated the ability to catch and throw balls in different directions at different velocities (mean 5.5), and verified the device's ability to render the comparative weight of held objects with 87% accuracy for ~100 g increments.

Robert Kovacs et al. · UIST 2020 · Force Feedback & Pseudo-Haptic Weight; Haptic Wearables
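To make the force-rendering idea concrete, here is a minimal sketch of how the torque on PIVOT's single motorized joint could be computed so that gravity, inertia, and air drag of the held virtual object are passed to the palm. The lever-arm length, drag coefficient, and the single-axis projection are illustrative assumptions, not the paper's control law.

```python
import numpy as np

ARM_LENGTH = 0.07      # hypothetical pivot-to-handle lever arm, in metres
GRAVITY = np.array([0.0, -9.81, 0.0])

def handle_torque(mass, handle_vel, handle_acc, pivot_axis, lever_dir, drag_coeff=0.3):
    """Rough sketch: the torque the motor must exert so the handle transmits
    the held object's weight, inertial reaction, and air drag to the user."""
    force = mass * (GRAVITY - handle_acc)                          # weight + inertia
    force -= drag_coeff * handle_vel * np.linalg.norm(handle_vel)  # quadratic air drag
    lever = ARM_LENGTH * np.asarray(lever_dir)    # vector from pivot joint to handle
    torque = np.cross(lever, force)               # resulting moment about the pivot
    return float(np.dot(torque, pivot_axis))      # project onto the one motor axis
```

Driving the motor with this torque while the handle is firmly grasped is what lets a single actuated joint simulate a heavy, accelerating, or wind-resisting object.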
Virtual Reality Without Vision: A Haptic and Auditory White Cane to Navigate Complex Virtual Worlds

Current Virtual Reality (VR) technologies focus on rendering visuospatial effects and are thus inaccessible to blind or low-vision users. We examine the use of a novel white-cane controller that enables navigation, without vision, of large virtual environments with complex architecture, such as winding paths and occluding walls and doors. The cane controller employs a lightweight three-axis brake mechanism to render the large-scale shape of virtual objects. Its multiple degrees of freedom let users adapt the controller to their preferred cane techniques and grip. In addition, surface textures are rendered with a voice-coil actuator based on contact vibrations, and spatialized audio is determined by the progression of sound through the geometry around the user. We designed a scavenger-hunt game that demonstrates how our device enables blind users to navigate a complex virtual environment: seven of eight users successfully navigated the virtual room (6 x 6 m) to locate targets while avoiding collisions. We conclude with design considerations for immersive non-visual VR experiences based on user preferences for cane techniques and cane material properties.

Alexa F. Siu et al. · Stanford University · CHI 2020 · Social & Collaborative VR; Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille)
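As a rough illustration of how a three-axis brake might convey large-scale shape, here is a sketch of a contact rule that engages each brake axis only when that axis's motion pushes the virtual cane tip deeper into geometry. The penetration test, gain, and velocity threshold are all illustrative assumptions; the paper's actual brake control is not reproduced here.

```python
import numpy as np

def brake_commands(tip_vel, surface_normal, penetration, vel_threshold=0.0):
    """Sketch of a contact rule for a three-axis brake: when the virtual cane
    tip penetrates geometry, brake each axis whose motion drives the tip
    further into the surface. Names and gains are illustrative."""
    if penetration <= 0.0:
        return np.zeros(3)                      # no contact: the cane swings freely
    into_surface = -np.asarray(surface_normal)  # direction of deeper penetration
    cmds = np.zeros(3)
    for axis in range(3):
        # Brake an axis only if its velocity component points into the surface.
        if tip_vel[axis] * into_surface[axis] > vel_threshold:
            cmds[axis] = min(1.0, 5.0 * penetration)  # stiffer with deeper contact
    return cmds  # per-axis brake duty cycles in [0, 1]
```

Braking per axis rather than locking the whole cane is what lets the tip slide along a wall while being stopped from passing through it, which is how a real cane traces large-scale shape.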
Cross-Device Taxonomy: Survey, Opportunities and Challenges of Interactions Spanning Across Multiple Devices

Designing interfaces or applications that move beyond the bounds of a single device screen enables new ways to engage with digital content. Research addressing the opportunities and challenges of interactions with multiple devices in concert remains a continued focus of HCI research. To inform the future research agenda of this field, we contribute an analysis and taxonomy of a corpus of 510 papers in the cross-device computing domain. For both new and experienced researchers in the field we provide: an overview, historic trends, and unified terminology of cross-device research; a discussion of major and under-explored application areas; a mapping of enabling technologies; a synthesis of key interaction techniques spanning multiple devices; and a review of common evaluation strategies. We close with a discussion of open issues. Our taxonomy aims to create a unified terminology and common understanding for researchers in order to facilitate and stimulate future cross-device research.

Frederik Brudy et al. · University College London · CHI 2019 · Knowledge Management & Team Awareness; Context-Aware Computing; Ubiquitous Computing
RealityCheck: Blending Virtual Environments with Situated Physical Reality

Today's virtual reality (VR) systems offer chaperone rendering techniques that prevent the user from colliding with physical objects. Without a detailed geometric model of the physical world, however, these techniques offer limited possibilities for more advanced compositing between the real world and the virtual. We explore this using a real-time 3D reconstruction of the real world that can be combined with a virtual environment. RealityCheck allows users to freely move, manipulate, observe, and communicate with people and objects situated in their physical space without losing the sense of immersion or presence inside their virtual world. We demonstrate RealityCheck with seven existing VR titles and describe compositing approaches that address the potential conflicts of rendering the real world and a virtual environment together. A study with frequent VR users demonstrates the affordances provided by our system and how it can be used to enhance current VR experiences.

Jeremy Hartmann et al. · University of Waterloo & Microsoft Research · CHI 2019 · Immersion & Presence Research; Context-Aware Computing
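To give a feel for what "compositing approaches" can mean here, the sketch below shows one simple per-pixel rule: real-world pixels win when their reconstructed depth is closer than the virtual scene, and real geometry that comes near a virtual surface is ghosted in semi-transparently. The blend factor and near-band threshold are illustrative; the paper discusses several compositing strategies, not this specific one.

```python
import numpy as np

def composite(vr_rgb, vr_depth, real_rgb, real_depth, blend=0.6, near_band=0.5):
    """Sketch of a depth-based compositing rule between a real-time 3D
    reconstruction of the room and a rendered VR frame. Inputs are HxWx3
    colour images and HxW per-pixel depth maps in metres (an assumption)."""
    real_in_front = real_depth < vr_depth
    out = np.where(real_in_front[..., None], real_rgb, vr_rgb).astype(float)
    # Ghost nearby real geometry into the virtual view to warn of collisions.
    nearby = (~real_in_front) & (real_depth - vr_depth < near_band)
    out[nearby] = blend * real_rgb[nearby] + (1 - blend) * vr_rgb[nearby]
    return out.astype(vr_rgb.dtype)
```

A rule like this is only possible once a geometric model of the room exists, which is why plain chaperone grids (which have no such model) cannot blend the two worlds.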
Mise-Unseen: Using Eye Tracking to Hide Virtual Reality Scene Changes in Plain Sight

Creating or arranging objects at runtime is needed in many virtual reality applications, but such changes are noticed when they occur inside the user's field of view. We present Mise-Unseen, a software system that applies such scene changes covertly inside the user's field of view. Mise-Unseen leverages gaze tracking to create models of user attention, intention, and spatial memory to determine if and when to inject a change. We present seven applications of Mise-Unseen that unnoticeably modify the scene within view: (i) hiding that task difficulty is adapted to the user, (ii) adapting the experience to the user's preferences, (iii) timing the use of low-fidelity effects, (iv) detecting user choice for passive haptics even when physical props are lacking, (v) sustaining physical locomotion despite a lack of physical space, (vi) reducing motion sickness during virtual locomotion, and (vii) verifying user understanding during story progression. We evaluated Mise-Unseen and our applications in a user study with 15 participants and found that while gaze data indeed supports obfuscating changes inside the field of view, a change is rendered unnoticeable by using gaze in combination with common masking techniques.

Sebastian Marwecki et al. · UIST 2019 · Eye Tracking & Gaze Interaction; Immersion & Presence Research
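As a minimal illustration of gaze-gated change injection, the sketch below applies a pending scene change only when the change target sits far enough into the visual periphery or while a saccade suppresses perception. The eccentricity and saccade-speed thresholds are illustrative assumptions; Mise-Unseen additionally models attention, intention, and spatial memory and combines gaze with masking techniques, none of which this toy rule captures.

```python
import numpy as np

def may_apply_change(gaze_dir, change_dir, gaze_speed_deg_s,
                     ecc_threshold_deg=20.0, saccade_speed_deg_s=180.0):
    """Sketch of a gaze-based gating rule: inject a scene change inside the
    field of view only when it is peripheral to the gaze, or mid-saccade.
    Directions are unit vectors; thresholds are illustrative."""
    cosang = np.clip(np.dot(gaze_dir, change_dir), -1.0, 1.0)
    eccentricity = np.degrees(np.arccos(cosang))   # angle between gaze and target
    in_periphery = eccentricity > ecc_threshold_deg
    in_saccade = gaze_speed_deg_s > saccade_speed_deg_s  # vision is suppressed
    return in_periphery or in_saccade
```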
Sensing Posture-Aware Pen+Touch Interaction on Tablets

Many status-quo interfaces for tablets with pen + touch input capabilities force users to reach for device-centric UI widgets at fixed locations, rather than sensing and adapting to the user's posture. To address this problem, we propose sensing techniques that transition between various nuances of mobile and stationary use via postural awareness. These postural nuances include shifting hand grips, varying screen angle and orientation, planting the palm while writing or sketching, and the direction from which the hands approach. To achieve this, our system combines three sensing modalities: (1) raw capacitance touchscreen images, (2) inertial motion, and (3) electric-field sensors around the screen bezel for grasp and hand-proximity detection. We show how these sensors enable posture-aware pen+touch techniques that adapt interaction and morph user-interface elements to suit fine-grained contexts of body-, arm-, hand-, and grip-centric frames of reference.

Yang Zhang et al. · Microsoft Research & Carnegie Mellon University · CHI 2019 · Hand Gesture Recognition; Human Pose & Activity Recognition
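To make the three-modality fusion concrete, here is a toy posture classifier that combines the same signal types the paper names. The features (contact-blob size, tilt from the accelerometer, strongest bezel electrode) and all thresholds are illustrative stand-ins, not the paper's actual sensing pipeline.

```python
import numpy as np

def classify_posture(cap_image, accel, ef_bezel):
    """Toy fusion of the three modalities: 1) a raw capacitance image
    (is a palm planted?), 2) inertial motion (held vs. flat on a table),
    3) electric-field bezel readings (which side a hand grips/approaches).
    Thresholds are illustrative."""
    palm_planted = (cap_image > 0.5).sum() > 400           # large contact blob
    tilt = np.degrees(np.arccos(np.clip(accel[2] / 9.81, -1.0, 1.0)))
    handheld = tilt > 15                                   # screen is not lying flat
    grip_side = int(np.argmax(ef_bezel))                   # strongest bezel electrode
    if palm_planted:
        return ("writing", grip_side)
    return ("handheld" if handheld else "on-table", grip_side)
```

A classifier of this shape is what lets the UI morph, for example anchoring widgets near the gripping hand or under the non-dominant hand while the palm is planted for writing.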
SeeingVR: A Set of Tools to Make Virtual Reality More Accessible to People with Low Vision

Current virtual reality applications do not support people who have low vision, i.e., vision loss that falls short of complete blindness but is not correctable by glasses. We present SeeingVR, a set of 14 tools that enhance a VR application for people with low vision by providing visual and audio augmentations. A user can select, adjust, and combine different tools based on their preferences. Nine of our tools modify an existing VR application post hoc via a plugin, without developer effort. The rest require simple inputs from developers using a Unity toolkit we created that allows integrating all 14 low-vision support tools during development. Our evaluation with 11 participants with low vision showed that SeeingVR enabled users to better enjoy VR and complete tasks more quickly and accurately. Developers also found our Unity toolkit easy and convenient to use.

Yuhang Zhao et al. · Cornell University & Microsoft Research · CHI 2019 · Social & Collaborative VR; Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille)
TORC: A Virtual Reality Controller for In-Hand High-Dexterity Finger Interaction

Recent hand-held controllers have explored a variety of haptic feedback sensations for users in virtual reality by producing both kinesthetic and cutaneous feedback from virtual objects. These controllers are grounded to the user's hand, so objects can be manipulated only through arm and wrist motions, not with the finger dexterity users have in real life. In this paper, we present TORC, a rigid haptic controller that renders virtual object characteristics and behaviors such as texture and compliance. Users hold and squeeze TORC using their thumb and two fingers and interact with virtual objects by sliding their thumb on TORC's trackpad. During the interaction, vibrotactile motors deliver sensations to each finger that represent the haptic feel of squeezing, shearing, or turning an object. Our evaluation showed that with TORC, participants could manipulate virtual objects more precisely (e.g., positioning and rotating objects in 3D) than with a conventional VR controller.

Jaeyeon Lee et al. · Microsoft Research & Korea Advanced Institute of Science and Technology · CHI 2019 · Vibrotactile Feedback & Skin Stimulation; Force Feedback & Pseudo-Haptic Weight
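One common way to synchronize texture vibration with a sliding finger, which may help illustrate what "vibrations synchronized with finger movement" involves, is to drive the actuators at the rate the fingertip would cross the texture's ridges (f = v / spatial period). The sketch below uses that standard grating model with illustrative constants; TORC's actual rendering model may differ.

```python
def texture_vibration(slide_velocity_mm_s, grating_period_mm, base_amp=0.3):
    """Sketch of velocity-synchronized texture rendering: vibration frequency
    equals the ridge-crossing rate of the sliding thumb. Constants are
    illustrative, not TORC's actual parameters."""
    v = abs(slide_velocity_mm_s)
    freq_hz = v / grating_period_mm if grating_period_mm > 0 else 0.0
    amp = min(1.0, base_amp + 0.002 * v)  # faster strokes feel slightly stronger
    return freq_hz, amp                   # drive signal for the vibrotactile motors

# e.g. sliding at 60 mm/s over 2 mm ridges -> a 30 Hz ridge-crossing vibration
print(texture_vibration(60.0, 2.0))
```

Because frequency tracks velocity, the vibration stops the instant the thumb stops, which is what makes the sensation read as a surface property rather than a generic buzz.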
Project Zanzibar: A Portable and Flexible Tangible Interaction Platform

We present Project Zanzibar: a flexible mat that can locate, uniquely identify, and communicate with tangible objects placed on its surface, as well as sense a user's touch and hover gestures. We describe the underlying technical contributions: efficient and localised Near Field Communication (NFC) over a large surface area; object tracking that combines NFC signal strength with capacitive footprint detection; and manufacturing techniques for a rollable form factor that enables portability while providing a sizable interaction area when unrolled. In addition, we detail design patterns for tangibles of varying complexity and interactive capability, including the ability to sense orientation on the mat, harvest power, provide additional input and output, stack, or extend sensing outside the bounds of the mat. We illustrate these capabilities and interaction modalities with example applications. Finally, we report on the experience of professional game developers building novel physical/digital experiences using the platform.

Nicolas Villar et al. · Microsoft Research · CHI 2018 · Shape-Changing Interfaces & Soft Robotic Materials; Data Physicalization; Desktop 3D Printing & Personal Fabrication
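The core tracking idea, combining NFC signal strength with capacitive footprints, can be sketched simply: NFC identifies *which* tangible is present and gives a coarse position via per-antenna signal strength, while the capacitive footprint gives a precise position, so the NFC estimate is snapped to the nearest capacitive blob. The data layout below (a per-antenna RSSI dict and a list of blob centroids) is an assumption for illustration, not Zanzibar's actual pipeline.

```python
import numpy as np

def locate_tangible(nfc_rssi, cap_blobs, tag_id):
    """Sketch of two-modality object tracking. nfc_rssi maps antenna (x, y)
    grid positions to signal strengths for this tag's last read; cap_blobs is
    a list of (x, y) capacitive footprint centroids."""
    coords = np.array(list(nfc_rssi.keys()), dtype=float)
    weights = np.array(list(nfc_rssi.values()), dtype=float)
    rough = (coords * weights[:, None]).sum(0) / weights.sum()  # RSSI centroid
    if not cap_blobs:
        return tag_id, rough                     # fall back to NFC alone
    blobs = np.array(cap_blobs, dtype=float)
    nearest = blobs[np.argmin(np.linalg.norm(blobs - rough, axis=1))]
    return tag_id, nearest                       # identity from NFC, position refined

print(locate_tangible({(0, 0): 0.2, (1, 0): 0.9}, [(0.8, 0.1), (3.0, 3.0)], "car"))
```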
CLAW: A Multifunctional Handheld Haptic Controller for Grasping, Touching, and Triggering in Virtual Reality

CLAW is a handheld virtual reality controller that augments typical controller functionality with force feedback and actuated movement of the index finger. The controller enables three distinct interactions (grasping virtual objects, touching virtual surfaces, and triggering) and changes its haptic rendering accordingly by sensing differences in the user's grasp. A servo motor coupled with a force sensor renders controllable forces to the index finger during grasping and touching. Using position tracking, a voice-coil actuator at the index fingertip generates vibrations for various textures synchronized with finger movement. CLAW also supports haptic force feedback in trigger mode, for example when the user holds a virtual gun. We describe the design considerations for CLAW and evaluate its performance through two user studies. The first obtained qualitative user feedback on the naturalness, effectiveness, and comfort of using the device. The second investigated the ease of transitioning between grasping and touching.

Inrak Choi et al. · Stanford University · CHI 2018 · Force Feedback & Pseudo-Haptic Weight; Haptic Wearables
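A servo motor paired with a force sensor suggests a closed-loop force rendering scheme, which the following sketch illustrates for a grasp: the virtual object is modeled as a spring (F = k * compression), and the servo command is nudged until the sensor at the index finger reads that target force. The spring model, gains, and command convention are all illustrative assumptions rather than CLAW's published controller.

```python
def servo_step(measured_force, finger_pos, surface_pos, servo_cmd,
               stiffness=400.0, kp=0.8, dt=0.005):
    """Sketch of one closed-loop force-rendering step for grasping: drive the
    servo until the fingertip force sensor reads the virtual spring force.
    Positions in metres, forces in newtons; constants are illustrative."""
    compression = max(0.0, surface_pos - finger_pos)  # finger pressing into object
    target_force = stiffness * compression            # virtual spring force (N)
    error = target_force - measured_force             # what the sensor should read
    return servo_cmd + kp * error * dt                # integrate toward the target

# Run at the haptic loop rate, e.g. 200 Hz (dt = 0.005 s in this sketch).
```

Closing the loop on measured force rather than position is what makes the rendered stiffness feel consistent regardless of how hard the user squeezes, and sensing the grasp lets the same hardware switch between the grasp, touch, and trigger behaviors the abstract lists.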