Virtual Voyages: Evaluating the Role of Real-Time and Narrated Virtual Tours in Shaping User Experience and Memories

Immersive technologies are capable of transporting people to distant or inaccessible environments that they might not otherwise visit. Practitioners and researchers alike are discovering new ways to replicate and enhance existing tourism experiences using virtual reality, yet few controlled experiments have studied how users perceive virtual tours of real-world locations. In this paper we present an initial exploration of a new system for virtual tourism, measuring the effects of real-time experiences and storytelling on presence, place attachment, and user memories of the destination. Our results suggest that narrative plays an important role in inducing presence within and attachment to the destination, while livestreaming can further increase place attachment and enable flexible, tailored experiences. We discuss the design and evaluation of our system, including feedback from our tourism partners, and provide insights into current limitations and further opportunities for virtual tourism.

2025 · Lillian Maria Eagan et al. (University of Otago, School of Computing) · Immersion & Presence Research; Interactive Narrative & Immersive Storytelling · CHI

Visual Noise Cancellation: Exploring Visual Discomfort and Opportunities for Vision Augmentations

Acoustic noise control or cancellation (ANC) is a commonplace component of modern audio headphones. ANC aims to actively mitigate disturbing environmental noise for a quieter and improved listening experience, digitally controlling the frequency and amplitude characteristics of sound. Much less explored is visual noise and active visual noise control, which we address here. We first explore visual noise and the scenarios in which it arises, based on findings from four workshops we conducted. We then introduce the concept of visual noise cancellation (VNC) and how it can be used to reduce the identified effects of visual noise. In addition, we developed head-worn demonstration prototypes to practically explore the concept of active VNC in selected scenarios in a user study. Finally, we discuss the application of VNC, including vision augmentations that moderate the user's view of the environment to address perceptual needs and to provide augmented reality content.

2024 · Junlei Hong et al. · AR Navigation & Context Awareness; Immersion & Presence Research · MobileHCI

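The paper develops VNC conceptually through workshops and prototypes; as a purely illustrative stand-in, the sketch below treats strong frame-to-frame change as visual noise and derives a per-pixel attenuation mask that a head-worn display capable of dimming could apply. The noise model and thresholds are assumptions, not the paper's method.

```python
# Purely illustrative stand-in for active VNC (not the paper's method):
# flag pixels with strong temporal change as "visual noise" and build a
# dimming mask for a head-worn display that can attenuate the real view.
import numpy as np

def visual_noise_mask(prev_frame, frame, threshold=0.15, strength=0.6):
    """Frames: HxW luminance arrays in [0, 1].
    Returns per-pixel attenuation in [0, 1] (0 = leave unchanged)."""
    change = np.abs(frame - prev_frame)      # crude temporal-noise estimate
    noisy = change > threshold               # rapidly changing regions
    return np.where(noisy, strength, 0.0)    # dim only the noisy regions
```
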
A Survey On Measuring Presence in Mixed Reality

Presence is a defining element of virtual reality (VR), but it is also increasingly used when assessing mixed reality (MR) experiences. The increased interest in measuring presence in MR, and recent works underpinning the specific nature of presence in MR, raise the question of the current state and practice of assessing presence in MR. To address this question, we present an analysis of more than 320 studies that report on presence measurements in MR. Our analysis shows that questionnaires are the dominant measurement instrument, but it also identifies problematic trends that stem from the lack of a generally agreed-upon concept or measurement for presence in MR. More specifically, we show that it is commonplace to use measurements that have not been validated in MR, or custom questionnaires that limit the comparability of results, which could contribute to a looming replication crisis in an increasingly relevant field.

2024 · Tanh Quang Tran et al. (University of Otago) · Immersion & Presence Research · CHI

A Design Space for Vision Augmentations and Augmented Human Perception using Digital Eyewear

Head-mounted displays were originally introduced to directly present computer-generated information to the human eye. More recently, the potential of this kind of technology to support human vision and augment human perception has been actively pursued, with applications such as compensating for visual impairments or aiding unimpaired vision. Unfortunately, a systematic analysis of the field is missing. Within this work, we close that gap by presenting a design space for vision augmentations that allows research to systematically explore the field of digital eyewear for vision aid and how it can augment the human visual system. We test our design space against currently available solutions and conceptually develop new solutions. The design space and findings can guide future development and can lead to a consistent categorisation of the many existing approaches.

2024 · Tobias Langlotz et al. (University of Otago) · Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS); Eye Tracking & Gaze Interaction · CHI

Spatial Gaze Markers: Supporting Effective Task Switching in Augmented Reality

Task switching can occur frequently in daily routines involving physical activity. In this paper, we introduce Spatial Gaze Markers, an augmented reality tool to support users in immediately returning to the last point of interest after an attention shift. The tool is task-agnostic, using only eye-tracking information to infer distinct points of visual attention and to mark the corresponding area in the physical environment. We present a user study that evaluates the effectiveness of Spatial Gaze Markers in simulated physical repair and inspection tasks against a no-marker baseline. The results give insights into how Spatial Gaze Markers affect performance, task load, and user experience across varying task types and levels of distraction. Our work is relevant for assisting physical workers with simple AR techniques, making task switching faster and less effortful.

2024 · Mathias N. Lystbæk et al. (Aarhus University) · Eye Tracking & Gaze Interaction; AR Navigation & Context Awareness · CHI

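The abstract does not detail how points of visual attention are inferred from eye tracking; the sketch below uses a dispersion-threshold (I-DT-style) fixation detector as an assumed stand-in, returning the centroid of the most recent fixation as the candidate marker location. Thresholds and units are illustrative.

```python
# Assumed stand-in for inferring the last point of visual attention: a
# dispersion-threshold (I-DT-style) fixation detector over a gaze stream.
# Thresholds and coordinates are illustrative, not the paper's values.
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float   # timestamp in seconds
    x: float   # gaze point in display/world coordinates
    y: float

def _dispersion(window):
    xs = [s.x for s in window]
    ys = [s.y for s in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def last_fixation(samples, max_dispersion=1.0, min_duration=0.15):
    """Return the centroid (x, y) of the most recent fixation, or None."""
    marker, window = None, []
    for s in samples:
        window.append(s)
        while len(window) > 1 and _dispersion(window) > max_dispersion:
            window.pop(0)                      # drop samples until compact
        if window[-1].t - window[0].t >= min_duration:
            marker = (sum(p.x for p in window) / len(window),
                      sum(p.y for p in window) / len(window))
    return marker
```
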
Flicker Augmentations: Rapid Brightness Modulation for Real-World Visual Guidance using Augmented Reality

Providing attention guidance, such as assisting in search tasks, is a prominent use for Augmented Reality. Typically, this is achieved by graphically overlaying geometrical shapes such as arrows. However, providing visual guidance in this way can cause side effects such as attention tunnelling or scene occlusion, and introduces additional visual clutter. Alternatively, visual guidance can adjust saliency, but this comes with different challenges such as hardware requirements and environment-dependent parameters. In this work we advocate for using flicker as an alternative for real-world guidance using Augmented Reality. We provide evidence for the effectiveness of flicker from two user studies. The first compared flicker against alternative approaches in a highly controlled setting, demonstrating efficacy (N = 28). The second investigated flicker in a practical task, demonstrating feasibility with higher ecological validity (N = 20). Finally, our discussion highlights the opportunities and challenges of using flicker to provide real-world visual guidance with Augmented Reality.

2024 · Jonathan Sutton et al. (University of Copenhagen, University of Otago) · AR Navigation & Context Awareness · CHI

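The core mechanism is rapid brightness modulation of a target region; the sketch below shows the basic idea as a sinusoidally flickering additive overlay. The modulation frequency, amplitude, and renderer interface are assumptions, not the published parameters.

```python
# Minimal sketch of flicker-based guidance: an additive overlay whose opacity
# oscillates over the target region. Frequency and amplitude are assumptions;
# the published studies may use different waveforms and parameters.
import math

def flicker_alpha(t, frequency_hz=10.0, amplitude=0.3):
    """Overlay opacity in [0, amplitude] at time t (seconds)."""
    return amplitude * 0.5 * (1.0 + math.sin(2.0 * math.pi * frequency_hz * t))

def draw_flicker(draw_rect, target_bounds, t):
    """Brighten the target region with time-varying opacity.
    `draw_rect` stands in for the host renderer's draw call."""
    draw_rect(target_bounds, color=(1.0, 1.0, 1.0), alpha=flicker_alpha(t))
```
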
Eye-Perspective View Management for Optical See-Through Head-Mounted Displays

Optical see-through (OST) head-mounted displays (HMDs) enable users to experience Augmented Reality (AR) support in the form of helpful real-world annotations. Unfortunately, the blend of the environment with virtual augmentations on semitransparent OST displays often deteriorates the contrast and legibility of annotations. View management algorithms adapt the annotations' layout to improve legibility based on real-world information, typically captured by built-in HMD cameras. However, the camera views differ from the user's view through the OST display, which decreases the final layout quality. We present eye-perspective view management, which synthesizes high-fidelity renderings of the user's view to optimize annotation placement. Our method significantly improves over traditional camera-based view management in terms of annotation placement and legibility. Eye-perspective optimizations open up opportunities for further research on use cases relying on the user's true view through OST HMDs.

2023 · Gerlinde Emsenhuber et al. (Salzburg University of Applied Sciences) · AR Navigation & Context Awareness · CHI

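Once the user's view is synthesized, annotation placement reduces to scoring candidate positions against that image. The scoring below, which prefers uniform, dark backgrounds since additive OST displays render most legibly against dark regions, is an illustrative assumption rather than the paper's actual objective function.

```python
# Hypothetical placement scoring over a synthesized eye-perspective image:
# prefer uniform (low-variance) and dark backgrounds, where annotations on an
# additive OST display remain legible. Weights are illustrative assumptions.
import numpy as np

def placement_score(eye_view, rect):
    """eye_view: HxW luminance array in [0, 1]; rect: (x, y, w, h)."""
    x, y, w, h = rect
    patch = eye_view[y:y + h, x:x + w]
    uniformity = -float(patch.std())    # penalise cluttered backgrounds
    darkness = -float(patch.mean())     # penalise bright backgrounds
    return uniformity + 0.5 * darkness

def best_placement(eye_view, candidate_rects):
    """Pick the most legible candidate position for one annotation."""
    return max(candidate_rects, key=lambda r: placement_score(eye_view, r))
```
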
Look over there! Investigating Saliency Modulation for Visual Guidance with Augmented Reality Glasses

Augmented Reality has traditionally been used to display digital overlays in real environments. Many AR applications, such as remote collaboration, picking tasks, or navigation, require highlighting physical objects for selection or guidance. These highlights use graphical cues such as outlines and arrows. Whilst effective, they greatly contribute to visual clutter, can occlude scene elements, and can be problematic for long-term use. Substituting such overlays, we explore saliency modulation to accentuate objects in the real environment and guide the user's gaze. Instead of manipulating video streams, as is done in perception and cognition research, we investigate saliency modulation of the real world using optical see-through head-mounted displays. This is a new challenge, since we do not have full control over the view of the real environment. In this work we provide our specific solution to this challenge, including built prototypes and their evaluation.

2022 · Jonathan Sutton et al. · Eye Tracking & Gaze Interaction; AR Navigation & Context Awareness · UIST

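Because an OST display can only add light to the scene, one plausible form of saliency modulation is an additive, colour-matched boost over the target that raises its local contrast relative to its surroundings. The sketch below is an assumed illustration, not the published technique.

```python
# Assumed illustration of saliency modulation on an additive OST display:
# overlay a colour-matched brightness boost on the target region so its local
# contrast (and thus saliency) increases. Not the paper's actual pipeline.
import numpy as np

def additive_boost(camera_view, target_mask, gain=0.25):
    """camera_view: HxWx3 linear RGB in [0, 1]; target_mask: HxW bool.
    Returns the overlay image to present on the OST display."""
    overlay = np.zeros_like(camera_view)
    # Scale the target's own colours so hue is preserved while brightness rises.
    overlay[target_mask] = np.clip(gain * camera_view[target_mask], 0.0, 1.0)
    return overlay
```
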
Mixed Reality Light Fields for Interactive Remote Assistance

Remote assistance represents an important use case for mixed reality. With the rise of handheld and wearable devices, remote assistance has become practical in the wild. However, spontaneous provisioning of remote assistance requires an easy, fast, and robust approach for capturing and sharing unprepared environments. In this work, we make a case for utilizing interactive light fields for remote assistance. We demonstrate the advantages of object representation using light fields over conventional geometric reconstruction. Moreover, we introduce an interaction method for quickly annotating light fields in 3D space without requiring surface geometry to anchor annotations. We present results from a user study demonstrating the effectiveness of our interaction techniques, and we report feedback on the usability of our overall system.

2020 · Peter Mohr et al. (Graz University of Technology & VRVis GmbH) · Mixed Reality Workspaces; Teleoperation & Telepresence · CHI

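Anchoring an annotation in 3D without surface geometry can be done from the light field's viewing rays alone: if the same point is marked from two viewpoints, the annotation can sit at the closest approach of the two rays. The sketch below shows that standard two-ray triangulation; it is one plausible realization, not necessarily the paper's interaction method.

```python
# One plausible geometry-free anchor (standard two-ray triangulation, not
# necessarily the paper's method): place the annotation at the midpoint of the
# closest approach between two viewing rays marked from different viewpoints.
import numpy as np

def anchor_from_two_rays(o1, d1, o2, d2, eps=1e-9):
    """o1, o2: ray origins (3,); d1, d2: ray directions (3,). Returns (3,)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < eps:            # near-parallel rays: no unique midpoint
        return o1
    s = (b * e - c * d) / denom     # parameter along ray 1
    t = (a * e - b * d) / denom     # parameter along ray 2
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))
```
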
Resolving Target Ambiguity in 3D Gaze Interaction through VOR Depth Estimation

Target disambiguation is a common problem in gaze interfaces, as eye tracking has accuracy and precision limitations. In 3D environments this is compounded by objects overlapping in the field of view as a result of their positioning at different depths, with partial occlusion. We introduce VOR depth estimation, a method based on the vestibulo-ocular reflex (VOR), by which the eyes compensate for head movement, and explore its application to resolving target ambiguity. The method estimates gaze depth by comparing the rotations of the eye and the head when users look at a target and deliberately rotate their head. We show that VOR eye movement presents an alternative to vergence for gaze depth estimation, one that is also feasible with monocular tracking. In an evaluation of its use for target disambiguation, our method outperforms vergence for targets presented at greater depth.

2019 · Diako Mardanbegi et al. (Lancaster University) · Eye Tracking & Gaze Interaction · CHI

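The geometry behind the comparison: because the eye sits a distance r in front of the head's rotation axis, keeping fixation on a target at depth d during a head rotation requires a compensatory eye rotation slightly larger than the head rotation, with gain g ≈ 1 + r/d to first order, so d ≈ r/(g − 1). The sketch below implements this first-order model as an illustration; the paper's actual estimator may differ.

```python
# First-order illustration of VOR depth estimation (assumed model, not
# necessarily the paper's estimator): the VOR gain g = eye_rotation /
# head_rotation is roughly 1 + r/d for a target at depth d, where r is the
# offset of the eye from the head's rotation axis, so d ~ r / (g - 1).

def estimate_depth(eye_rotation_deg, head_rotation_deg, eye_offset_m=0.1):
    """Estimate fixation depth in metres from matched eye/head rotations."""
    gain = eye_rotation_deg / head_rotation_deg
    if gain <= 1.0:
        return float("inf")   # gain near 1 implies a very distant target
    return eye_offset_m / (gain - 1.0)

# Example: a 10 degree head turn compensated by an 11 degree eye rotation
# (gain 1.1) suggests a target roughly 0.1 / 0.1 = 1 metre away.
print(estimate_depth(11.0, 10.0))   # ~1.0
```
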
TrackCap: Enabling Smartphones for 3D Interaction on Mobile Head-Mounted Displays

The latest generation of consumer-market head-mounted displays (HMDs) includes self-contained inside-out tracking of head motions, which makes them suitable for mobile applications. However, 3D tracking of input devices is either not included at all or requires keeping the device in sight, so that it can be observed from a sensor mounted on the HMD. Both approaches make natural interactions cumbersome in mobile applications. TrackCap, a novel approach for 3D tracking of input devices, turns a conventional smartphone into a precise 6DOF input device for an HMD user. The device can be conveniently operated both inside and outside the HMD's field of view, while it provides additional 2D input and output capabilities.

2019 · Peter Mohr et al. (Graz University of Technology & VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH) · Hand Gesture Recognition; Mixed Reality Workspaces · CHI

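The abstract implies the tracking direction is reversed: the phone observes a target on the HMD rather than being observed by it, which is why the phone keeps working outside the HMD's field of view. Under that reading, the remaining step is pose arithmetic, sketched below; the matrix conventions and the assumption of a fiducial target on the HMD are ours, and any standard marker tracker could supply the input pose.

```python
# Assumed pose arithmetic for a phone that tracks a target mounted on the HMD
# (our reading of the abstract; conventions are illustrative): inverting the
# tracked marker pose yields the phone's 6DOF pose in the HMD's frame.
import numpy as np

def invert_pose(T):
    """Invert a 4x4 rigid transform [R | t; 0 0 0 1]."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def phone_pose_in_hmd(T_marker_in_cam, T_marker_in_hmd):
    """T_marker_in_cam: marker pose from the phone camera's tracker.
    T_marker_in_hmd: fixed, calibrated mounting of the marker on the HMD.
    Returns the phone camera's pose expressed in the HMD frame."""
    return T_marker_in_hmd @ invert_pose(T_marker_in_cam)
```
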
ChromaGlasses: Computational Glasses for Compensating Colour Blindness

Prescription glasses are used by many people as a simple, and even fashionable, way to correct refractive problems of the eye. However, there are other visual impairments that cannot be treated with an optical lens in conventional glasses. In this work we present ChromaGlasses, computational glasses using optical head-mounted displays for compensating colour vision deficiency. Unlike prior work that required users to look at a screen in their visual periphery rather than at the environment directly, ChromaGlasses allow users to see the environment directly, using a novel head-mounted display design that analyzes the environment in real time and changes its appearance with pixel precision to compensate for the user's impairment. We present the first prototypes of ChromaGlasses and report results from several studies showing that ChromaGlasses are an effective method for managing colour blindness.

2018 · Tobias Langlotz et al. (University of Otago) · Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille) · CHI

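The abstract does not specify the recolouring itself; a widely used family of compensation algorithms is daltonization, which simulates the deficient colour response, computes the lost information, and redistributes it into channels the viewer can still discriminate. The sketch below applies the common protanopia variant with widely circulated approximation matrices, as an illustration rather than the confirmed ChromaGlasses method.

```python
# Daltonization-style recolouring for protanopia (a common compensation
# family, shown for illustration; not confirmed as the ChromaGlasses method).
# Matrices are widely circulated linear-RGB approximations.
import numpy as np

SIM_PROTANOPIA = np.array([[0.567, 0.433, 0.000],    # simulate protanopic
                           [0.558, 0.442, 0.000],    # colour perception
                           [0.000, 0.242, 0.758]])

ERROR_SPREAD = np.array([[0.0, 0.0, 0.0],            # push lost red info
                         [0.7, 1.0, 0.0],            # into green and blue
                         [0.7, 0.0, 1.0]])

def daltonize(rgb):
    """rgb: HxWx3 linear RGB in [0, 1]. Returns the compensated image."""
    simulated = rgb @ SIM_PROTANOPIA.T    # what a protanope would perceive
    error = rgb - simulated               # information lost to the deficiency
    return np.clip(rgb + error @ ERROR_SPREAD.T, 0.0, 1.0)
```
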