GaussianNexus: Room-Scale Real-Time AR/VR Telepresence with Gaussian Splatting
Telepresence systems with AR/VR immerse a remote user in a local physical environment, enabling virtual travel, remote guidance, and collaborative design. Contemporary systems typically rely on 360° video or RGB-D reconstruction, each with trade-offs between visual fidelity and spatial perception. Emerging rendering techniques like Gaussian Splatting unify these strengths, offering photo-realistic scene representations with spatial interactivity. However, due to the long training times required, updating such scenes in real time is still largely infeasible. We present GaussianNexus, a system that applies Gaussian Splatting to room-scale telepresence. Our system uses Gaussian Splatting as the primary scene representation and a 360° camera to stream and track 2D and 3D dynamic changes. For live 2D interaction, the system overlays rectified video onto user-selected surfaces. For live 3D interaction, users identify dynamic objects in the environment, which are then segmented, tracked, and synchronized as real-time updates to the Gaussian Splatting environment, enabling smooth, low-latency telepresence without retraining. We demonstrate the utility of GaussianNexus through two example applications and evaluate it in a usability test.
2025 · Xincheng Huang et al. · Mixed Reality Workspaces · Immersion & Presence Research · Teleoperation & Telepresence · UIST
NFCGest: Gesture Interaction with NFC Terminals Enabled by a Super-High-Speed ADC
Near Field Communication (NFC) is a widely applied technology embedded in credit cards, smartphones, and identity credentials like passports. By tapping a medium to an NFC terminal, users can authorize payment transactions, gain access to spaces, and authenticate their identity. However, beyond tapping, current NFC application protocols define no other interactions, leaving significant room for exploration. In this work, we show that additional interactions can be enabled by analyzing the raw RX analog signals, which operate at high frequency (848 kHz), using the test functions of an NFC terminal. We sample these signals using a custom low-cost, high-speed streaming ADC, enabling real-time streaming and visualization of the signals on an amplitude-phase plot. As users move the NFC medium over the antenna, the raw signals trace characteristic 2D curves on this plot. From these, we identified three categories of card interactions: swipe, tap, and shake. By introducing asymmetric interference coils, we further enable directional interactions. We showcase a set of nine gestures based on these interaction categories and evaluate them in a ten-participant user study. Our classification model achieves a cross-user accuracy of 91.8%, validating both our real-time processing pipeline and our gesture design. To demonstrate the practical value of NFCGest, we propose applications for both display-based and display-less NFC terminals, highlighting purely contactless gesture interaction for the former and an enriched interaction space for the latter.
2025 · Bu Li et al. · Hand Gesture Recognition · Voice User Interface (VUI) Design · Passwords & Authentication · UIST
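The abstract above describes demodulating a high-rate RX signal against the 848 kHz subcarrier into points on an amplitude-phase plot. The paper's actual pipeline is not reproduced here; the following is a minimal sketch of how such a plot could be computed from raw ADC samples via IQ demodulation, assuming a hypothetical 10 MHz sampling rate and window length.

```python
import numpy as np

FS = 10_000_000   # hypothetical ADC sampling rate (10 MHz) -- an assumption
F_SC = 848_000    # ISO/IEC 14443 subcarrier frequency (848 kHz)

def to_amplitude_phase(samples: np.ndarray, fs: float = FS, f_sc: float = F_SC):
    """IQ-demodulate raw RX samples against the 848 kHz subcarrier and
    return one (amplitude, phase) point per averaging window."""
    t = np.arange(len(samples)) / fs
    i = samples * np.cos(2 * np.pi * f_sc * t)    # in-phase mixdown
    q = samples * -np.sin(2 * np.pi * f_sc * t)   # quadrature mixdown
    win = int(fs / f_sc) * 8                      # average over ~8 subcarrier cycles
    n = len(samples) // win
    iq = (i[: n * win].reshape(n, win).mean(axis=1)
          + 1j * q[: n * win].reshape(n, win).mean(axis=1))
    return np.abs(iq), np.angle(iq)
```

Streaming these (amplitude, phase) pairs onto a 2D scatter plot would trace the kinds of curves the abstract attributes to swipe, tap, and shake motions.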
HaloTouch: Using IR Multi-Path Interference to Support Touch Interactions with General Surfaces
Sensing touch on arbitrary surfaces has long been a goal of ubiquitous computing, but often requires instrumenting the surface. Depth camera-based systems have emerged as a promising solution for minimizing instrumentation, but at the cost of high touch-down detection error rates, high touch latency, and high minimum hover distance, limiting them to basic tasks. We developed HaloTouch, a vision-based system which exploits a multipath interference effect from an off-the-shelf time-of-flight depth camera to enable fast, accurate touch interactions on general surfaces. HaloTouch achieves a 99.2% touch-down detection accuracy across various materials, with a motion-to-photon latency of 150 ms. With a brief (20s) user-specific calibration, HaloTouch supports millimeter-accurate hover sensing as well as continuous pressure sensing. We conducted a user study with 12 participants, including a typing task demonstrating text input at 26.3 AWPM. HaloTouch shows promise for more robust, dynamic touch interactions without instrumenting surfaces or adding hardware to users.
2025 · Ziyi Xia et al. · University of British Columbia, Department of Computer Science · On-Skin Display & On-Skin Input · Ubiquitous Computing · CHI
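HaloTouch's own detection pipeline is not described in the abstract; as a generic illustration of baseline-referenced touch-down detection on a per-pixel IR amplitude image (the kind of signal a ToF camera's multipath brightening would perturb), one might sketch it as follows. All names and the z-score threshold are assumptions, not the paper's method.

```python
import numpy as np

def calibrate(frames):
    """Build a per-pixel baseline (mean, std) from IR amplitude frames
    captured while the surface is empty."""
    stack = np.stack(frames)
    return stack.mean(axis=0), stack.std(axis=0)

def detect_touch_down(frame, baseline, k=4.0):
    """Flag candidate touch pixels where the IR amplitude rises well
    above the empty-surface baseline (e.g., multipath 'halo' brightening)."""
    mean, std = baseline
    z = (frame - mean) / (std + 1e-6)   # per-pixel z-score vs. baseline
    return z > k                        # boolean touch-candidate mask
```

A real system would additionally filter the mask spatially and temporally before declaring a touch-down event.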
TravelGalleria: Supporting Remembrance and Reflection of Travel Experiences through Digital Storytelling in Virtual Reality
Travel is a powerful yet fleeting experience that can shape personal perspectives and support self-reflection. To recapture the essence of travel, we explored the use of VR as a medium for immersive re-experiencing with an emphasis on storytelling. We developed TravelGalleria, a VR authoring tool that allows users to curate personalized digital galleries. TravelGalleria encourages creative expression, enabling users to use audio narration, annotations, spatially arranged photos, and more to recount their travel stories. A probing user study with TravelGalleria (n = 20) showed promising trends toward emotional resonance and introspective learning. Our findings illustrate how our tool supports users in remembering, reliving, and deriving new insights regarding past experiences, as they were able to reconnect with emotions and themes central to their travels. We discuss these findings in the context of meaningful digital experiences and storytelling in reflective digital practices, highlighting design suggestions and open areas for future research.
2025 · Michael Yin et al. · University of British Columbia, Department of Computer Science · Immersion & Presence Research · Identity & Avatars in XR · Interactive Narrative & Immersive Storytelling · CHI
PatternTrack: Multi-Device Tracking Using Infrared, Structured-Light Projections from Built-in LiDAR
As augmented reality devices (e.g., smartphones and headsets) proliferate in the market, multi-user AR scenarios are set to become more common. Co-located users will want to share coherent and synchronized AR experiences, but this is surprisingly cumbersome with current methods. In response, we developed PatternTrack, a novel tracking approach that repurposes the structured infrared light patterns emitted by VCSEL-driven depth sensors, like those found in the Apple Vision Pro, iPhone, iPad, and Meta Quest 3. Our approach is infrastructure-free, requires no pre-registration, works on featureless surfaces, and provides the real-time 3D position and orientation of other users' devices. In our evaluation, tested on six different surfaces and with inter-device distances of up to 260 cm, we found a mean 3D positional tracking error of 11.02 cm and a mean angular error of 6.81°.
2025 · Daehwa Kim et al. · Carnegie Mellon University, Human-Computer Interaction Institute · AR Navigation & Context Awareness · Context-Aware Computing · Ubiquitous Computing · CHI
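Recovering another device's 3D position and orientation from where its projected dots land is, at its core, a rigid alignment problem. PatternTrack's full pipeline is not reproduced here; as an illustration of the alignment step alone, assuming 3D point correspondences between the emitter's dot pattern and its observed landing points are already known, the least-squares rotation and translation can be found with the Kabsch algorithm:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) aligning point set P onto Q,
    i.e., minimizing ||(P @ R.T + t) - Q||. P, Q are (N, 3) arrays of
    corresponded 3D points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # proper rotation (det = +1)
    t = cq - R @ cp
    return R, t
```

The sign correction `d` guards against reflections, which can otherwise appear when the point sets are noisy or near-planar.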
How We See Changes How We Feel: Investigating the Effect of Visual Point-of-View on Decision-Making in VR Environments
Virtual reality (VR) can immerse users into engaging experiences, affording opportunities to study behaviour in simulated contexts such as decision-making processes. However, methodological research into designing meaningful VR experiences - experiences that promote appreciation and deeper understanding of a work - is still underdeveloped. In this two-part study, we investigate how visual point-of-view (POV) in VR impacts feelings of meaningfulness and empathy as well as objective decision-making processes. Our study revolves around a VR application that situates users in moral dilemmas from three different POVs. Data from the choices made is augmented with self-reported subjective data. We find that users' subjective feelings do change across POVs: users show greater empathy for virtual agents and have a more meaningful experience from a first-person perspective, even if this is not always reflected in changes in their decisions. Finally, we discuss the implications of our findings in the context of VR application design.
2024 · Michael Yin et al. · Session 3f: Embodiment and Experience: Social Behavior and Decision-Making in VR · CSCW
SurfShare: Lightweight Spatially Consistent Physical Surface and Virtual Replica Sharing with Head-mounted Mixed-Reality
Huang et al. developed SurfShare, a head-mounted mixed-reality system that enables lightweight, spatially consistent sharing of physical surfaces and virtual replicas, improving the efficiency of multi-user collaborative interaction.
2024 · Xincheng Huang et al. · Mixed Reality Workspaces · UbiComp
VirtualNexus: Enhancing 360-Degree Video AR/VR Collaboration with Environment Cutouts and Virtual Replicas
Asymmetric AR/VR collaboration systems bring a remote VR user to a local AR user's physical environment, allowing them to communicate and work within a shared virtual/physical space. Such systems often display the remote environment through 3D reconstructions or 360° videos. While 360° cameras stream an environment in higher quality, they lack spatial information, making them less interactable. We present VirtualNexus, an AR/VR collaboration system that enhances 360° video AR/VR collaboration with environment cutouts and virtual replicas. VR users can define cutouts of the remote environment to interact with as a world-in-miniature, and their interactions are synchronized to the local AR perspective. Furthermore, AR users can rapidly scan and share 3D virtual replicas of physical objects using neural rendering. We demonstrated our system's utility through three example applications and evaluated our system in a dyadic usability test. VirtualNexus extends the interaction space of 360° telepresence systems, offering improved physical presence, versatility, and clarity in interactions.
2024 · Xincheng Huang et al. · Social & Collaborative VR · Immersion & Presence Research · 360° Video & Panoramic Content · UIST
Lies, Deceit, and Hallucinations: Player Perception and Expectations Regarding Trust and Deception in Games
Lying and deception are important parts of social interaction; when applied to storytelling mediums such as video games, such elements can add complexity and intrigue. We developed a game, "AlphaBetaCity", in which non-playable characters (NPCs) made various false statements, and used this game to investigate perceptions of deceptive behaviour. We used a mix of human-written dialogue incorporating deliberate falsehoods and LLM-written scripts with (human-approved) hallucinated responses. The degree of falsehood varied from believable but untrue statements to outright fabrications. 29 participants played the game and were interviewed about their experiences. Participants discussed methods for developing trust and gauging NPC truthfulness. Whereas perceived intentional false statements were often attributed to narrative and gameplay effects, seemingly unintentional false statements generally mismatched participants' mental models and lacked inherent meaning. We discuss how the perception of intentionality, the audience demographic, and the desire for meaning are major considerations when designing video games with falsehoods.
2024 · Michael Yin et al. · University of British Columbia · Game UX & Player Behavior · Role-Playing & Narrative Games · CHI
GestureCanvas: A Programming by Demonstration System for Prototyping Compound Freehand Interaction in VR
As the use of hand gestures becomes increasingly prevalent in virtual reality (VR) applications, prototyping Compound Freehand Interactions (CFIs) effectively and efficiently has become a critical need in the design process. A Compound Freehand Interaction (CFI) is a sequence of freehand interactions in which each sub-interaction conditions the next. Despite the need for interactive CFI prototypes in the early design stage, creating them is effortful and remains a challenge for designers, since it requires a highly technical workflow that involves programming the recognizers, system responses, and conditionals for each sub-interaction. To bridge this gap, we present GestureCanvas, a freehand-interaction-based immersive prototyping system that enables a rapid, end-to-end, code-free workflow for designing, testing, refining, and subsequently deploying CFIs by leveraging three pillars of interaction models: event-driven state machines, trigger-action authoring, and programming by demonstration. The design of GestureCanvas includes three novel design elements: (i) appropriating the multimodal recording of freehand interaction into a CFI authoring workspace called Design Canvas, (ii) semi-automatic identification of input trigger logic from demonstration to reduce the manual effort of setting up triggers for each sub-interaction, and (iii) on-the-fly testing for independently validating input conditionals in situ. We validate the workflow enabled by GestureCanvas through an interview study with professional designers and evaluate its usability through a user study with non-experts. Our work lays the foundation for advancing research on immersive prototyping systems, allowing even highly complex gestures to be easily prototyped and tested within VR environments.
2023 · Anika Sayara et al. · Hand Gesture Recognition · Full-Body Interaction & Embodied Input · Mixed Reality Workspaces · UIST
Drifting Off in Paradise: Why People Sleep in Virtual Reality
Sleep is important for humans, and past research has considered methods of improving sleep through technologies such as virtual reality (VR). However, there has been limited research on how such VR technology may affect the experiential and practical aspects of sleep, especially outside of a clinical lab setting. We consider this research gap through the lens of individuals who voluntarily engage in the practice of sleeping in VR. Semi-structured interviews with 14 participants who have slept in VR reveal insights regarding the motivations, actions, and experiential factors that uniquely define this practice. We find that participant motives can be largely categorized through either the experiential or social affordances of VR. We tie these motives into findings regarding the unique customs of sleeping in VR, involving set-up both within the physical and virtual space. Finally, we identify current and future challenges for sleeping in VR, and propose prospective design directions.
2023 · Michael Yin et al. · University of British Columbia · Immersion & Presence Research · Sleep & Stress Monitoring · CHI
The Reward for Luck: Understanding the Effect of Random Reward Mechanisms in Video Games on Player Experience
Random Reward Mechanisms (RRMs) in video games are systems in which rewards are issued probabilistically upon certain trigger conditions, such as completing gameplay tasks, exceeding a playtime quota, or making in-game purchases. We investigated the relationship between RRM implementations and user experience. Video analysis of 35 RRM systems allowed for the creation of a classification system based on contrasting observed dimensions. Interviews with 14 video game players provided insights into how factors such as the affordances of non-optimal rewards and the trade-off between random luck and skill impact player perception and interaction with RRMs. We additionally investigated the relationship between auditory, visual, and gameplay design decisions and player expectations for RRM reward presentations, finding that the resources required to obtain the reward and the relative value of the reward impact its expected presentation. Finally, we applied our findings to propose design methodologies for creating engaging and significant RRM systems.
2022 · Michael Yin et al. · University of British Columbia · Game UX & Player Behavior · Serious & Functional Games · Gamification Design · CHI
Phasking on Paper: Accessing a Continuum of PHysically Assisted SKetchING
When sketching, we must choose between paper (expressive ease, ruler and eraser) and computational assistance (parametric support, a digital record). PHysically Assisted SKetching provides both, with a pen that displays force constraints with which the sketcher interacts as they draw on paper. Phasking provides passive, "bound" constraints (like a ruler); or actively "brings" the sketcher along a commanded path (e.g., a curve), which they can violate for creative variation. The sketcher modulates constraint strength (control sharing) by bearing down on the pen-tip. Phasking requires untethered, graded force-feedback, achieved by modifying a ballpoint drive that generates force through rolling surface contact. To understand phasking's viability, we implemented its interaction concepts, related them to sketching tasks, and measured device performance. We assessed the experience of 10 sketchers, who could understand, use, and delight in phasking, and who valued its control-sharing and digital twinning for productivity, creative control, and learning to draw.
2020 · Soheil Kianzad et al. · University of British Columbia · Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS) · Force Feedback & Pseudo-Haptic Weight · CHI