There Is More to Dwell Than Meets the Eye: Toward Better Gaze-Based Text Entry Systems With Multi-Threshold Dwell
Aunnoy K Mutasim et al., Simon Fraser University, School of Interactive Arts and Technology. CHI 2025.
Topics: Eye Tracking & Gaze Interaction; Motor Impairment Assistive Input Technologies.
Dwell-based text entry seems to peak at 20 words per minute (WPM). Yet, little is known about the factors contributing to this limit, except that it requires extensive training. Thus, we conducted a longitudinal study, broke the overall dwell-based selection time into six different components, and identified several design challenges and opportunities. Subsequently, we designed two novel dwell keyboards that use multiple yet much shorter dwell thresholds: Dual-Threshold Dwell (DTD) and Multi-Threshold Dwell (MTD). The performance analysis showed that MTD (18.3 WPM) outperformed both DTD (15.3 WPM) and the conventional Constant-Threshold Dwell (12.9 WPM). Notably, absolute novices achieved these speeds within just 30 phrases. Moreover, MTD's speed is also the fastest average text entry speed reported to date for gaze-based keyboards. Finally, we discuss how our chosen parameters can be further optimized to pave the way toward more efficient dwell-based text entry.

RetroSketch: A Retrospective Method for Measuring Emotions and Presence in Virtual Reality
Dominic Potts et al., University of Bath, Computer Science. CHI 2025.
Topics: Immersion & Presence Research.
Virtual Reality (VR) designers and researchers often need to measure emotions and presence as they evolve over time. The experience sampling method (ESM) is a common way to achieve this; however, ESM disrupts the experience and lacks granularity. We propose RetroSketch, a new method for measuring subjective emotions and presence in VR, where users watch back their VR experience and retrospectively sketch a plot of their feelings. RetroSketch leaves the VR experience undisturbed and yields highly granular data, including information about salient events and qualitative descriptions of their feelings. We compared RetroSketch and ESM in a large study (n=140) using five different VR experiences over one-hour sessions. Our results show that RetroSketch and ESM measures are highly correlated with each other, as well as with physiological measures indicative of emotion. The correlations are robust across different VR experiences and user demographics. They also highlight the impact of ESM on users' experience.

Sweating the Details: Emotion Recognition and the Influence of Physical Exertion in Virtual Reality Exergaming
Dominic Potts et al., University of Bath. CHI 2024.
Topics: Immersion & Presence Research; Fitness Tracking & Physical Activity Monitoring.
There is great potential for adapting Virtual Reality (VR) exergames based on a user's affective state. However, physical activity and VR interfere with physiological sensors, making affect recognition challenging. We conducted a study (n=72) in which users experienced four emotion-inducing VR exergaming environments (happiness, sadness, stress and calmness) at three different levels of exertion (low, medium, high). We collected physiological measures through pupillometry, electrodermal activity, heart rate, and facial tracking, as well as subjective affect ratings. Our validated virtual environments, data, and analyses are openly available. We found that the level of exertion influences the way affect can be recognised, as well as affect itself. Furthermore, our results highlight the importance of data cleaning to account for environmental and interpersonal factors interfering with physiological measures. The results shed light on the relationships between physiological measures and affective states and inform design choices about sensors and data cleaning approaches for affective VR.

Watch This! Observational Learning in VR Promotes Better Far Transfer than Active Learning for a Fine Psychomotor Task
Isabel Sophie Fitton et al., University of Bath. CHI 2024.
Topics: Full-Body Interaction & Embodied Input; Immersion & Presence Research; STEM Education & Science Communication.
Virtual Reality (VR) holds great potential for psychomotor training, with existing applications using almost exclusively a 'learning-by-doing' active learning approach, despite the possible benefits of incorporating observational learning. We compared active learning (n=26) with different variations of observational learning in VR for a manual assembly task. For observational learning, we considered three levels of visual similarity between the demonstrator avatar and the user: dissimilar (n=25), minimally similar (n=26), or a self-avatar (n=25), as similarity has been shown to improve learning. Our results suggest observational learning can be effective in VR when combined with 'hands-on' practice and can lead to better far skill transfer to real-world contexts that differ from the training context. Furthermore, we found self-similarity in observational learning can be counterproductive when focusing on a manual task, and skills decay quickly without further training. We discuss these findings and derive design recommendations for future VR training.

Dancing with the Avatars: Minimal Avatar Customisation Enhances Learning in a Psychomotor Task
Isabel Sophie Fitton et al., University of Bath. CHI 2023.
Topics: Immersion & Presence Research; Identity & Avatars in XR; Dance & Body Movement Computing.
Virtual environments can support psychomotor learning by allowing learners to observe instructor avatars. Instructor avatars that look like the learner hold promise in enhancing learning; however, it is unclear whether this works for psychomotor tasks and how similar avatars need to be. We investigated 'minimal' customisation of instructor avatars, approximating a learner's appearance by matching only key visual features: gender, skin tone, and hair colour. These avatars can be created easily and avoid problems of highly similar avatars. Using modern dancing as a skill to learn, we compared the effects of visually similar and dissimilar avatars, considering both learning on a screen (n=59) and in VR (n=38). Our results indicate that minimal avatar customisation leads to significantly more vivid visual imagery of the dance moves than dissimilar avatars. We analyse variables affecting interindividual differences, discuss the results in relation to theory, and derive design implications for psychomotor training in virtual environments.

FakeForward: Using Deepfake Technology for Feedforward Learning
Christopher Clarke et al., University of Bath. CHI 2023.
Topics: Mental Health Apps & Online Support Communities; Deepfake & Synthetic Media Detection.
Videos are commonly used to support learning of new skills, to improve existing skills, and as a source of motivation for training. Video self-modelling (VSM) is a learning technique that improves performance and motivation by showing a user a video of themselves performing a skill at a level they have not yet achieved. Traditional VSM is very data- and labour-intensive: a lot of video footage needs to be collected and manually edited in order to create an effective self-modelling video. We address this by presenting FakeForward -- a method which uses deepfakes to create self-modelling videos from videos of other people. FakeForward turns videos of better-performing people into effective, personalised training tools by replacing their face with the user's. We investigate how FakeForward can be effectively applied and demonstrate its efficacy in rapidly improving performance in physical exercises, as well as confidence and perceived competence in public speaking.

Realism and Field of View Affect Presence in VR but Not the Way You Think
Crescent Jicol et al., University of Bath. CHI 2023.
Topics: Immersion & Presence Research.
Presence is one of the most studied and most important variables in immersive virtual reality (VR) and it influences the effectiveness of many VR applications. Separate bodies of research indicate that presence is determined by (1) technical factors such as the visual realism of a virtual environment (VE) and the field of view (FoV), and (2) human factors such as emotions and agency. However, it remains unknown how technical and human factors may interact in the presence formation process. We conducted a user study (n=360) to investigate the effects of visual realism (high/low), FoV (high/low), emotions (focusing on fear) and agency (yes/no) on presence. Counter to previous assumptions, technical factors did not affect presence directly but were moderated through human factors. We propose TAP-Fear, a structural equation model that describes how design decisions, technical factors and human factors combine and interact in the formation of presence.

Imagine That! Imaginative Suggestibility Affects Presence in Virtual Reality
Crescent Jicol et al., University of Bath. CHI 2023.
Topics: Social & Collaborative VR; Immersion & Presence Research; Identity & Avatars in XR.
Personality characteristics can affect how much presence an individual experiences in virtual reality, and researchers have explored how it may be possible to prime users to increase their sense of presence. A personality characteristic that has yet to be explored in the VR literature is imaginative suggestibility, the ability of an individual to successfully experience an imaginary scenario as if it were real. In this paper, we explore how suggestibility and priming affect presence when consulting an ancient oracle in VR as part of an educational experience -- a common VR application. We show for the first time how imaginative suggestibility is a major factor which affects presence and emotions experienced in VR, while priming cues have no effect on participants' (n=128) user experience, contrasting results from prior work. We consider the impacts of these findings for VR design and provide guidelines based on our results.

Designing and Assessing a Virtual Reality Simulation to Build Resilience to Street Harassment
Crescent Jicol et al., University of Bath. CHI 2022.
Topics: Social & Collaborative VR; Online Harassment & Counter-Tools; Social Platform Design & User Behavior.
Street harassment is a widespread problem that can constrain people's freedom to enjoy public spaces safely and cause many other negative psychological impacts. However, very little research has looked at how immersive technology can help in addressing it. We conducted three studies to investigate the design decisions, ethical issues and efficacy of an immersive simulation of street harassment: an online design study (n=20), an interview study with experts working in the area (n=9), and a comparative lab study investigating design, ethics and efficacy (n=44). Our results deepen understanding of the design decisions that contribute to a realistic psychological experience, such as the effects of screen-based video vs. passive VR vs. interactive VR. They also highlight important ethical issues such as traumatisation and potential for victim blaming, and how they can be approached in an ethical manner. Finally, they provide insights into efficacy in terms of perceived usefulness, competence and empathy.

TapGazer: Text Entry with Finger Tapping and Gaze-directed Word Selection
Zhenyi He et al., New York University. CHI 2022.
Topics: Hand Gesture Recognition; Full-Body Interaction & Embodied Input; Eye Tracking & Gaze Interaction.
While using VR, efficient text entry is a challenge: users cannot easily locate standard physical keyboards, and keys are often out of reach, e.g. when standing. We present TapGazer, a text entry system where users type by tapping their fingers in place. Users can tap anywhere as long as the identity of each tapping finger can be detected with sensors. Ambiguity between different possible input words is resolved by selecting target words with gaze. If gaze tracking is unavailable, ambiguity is resolved by selecting target words with additional taps. We evaluated TapGazer for seated and standing VR: seated novice users using touchpads as tap surfaces reached 44.81 words per minute (WPM), 79.17% of their QWERTY typing speed. Standing novice users tapped on their thighs with touch-sensitive gloves, reaching 45.26 WPM (71.91%). We analyze TapGazer with a theoretical performance model and discuss its potential for text input in future AR scenarios.

Augmented Reality and Older Adults: A Comparison of Prompting Types
Thomas J. Williams et al., University of Bath. CHI 2021.
Topics: AR Navigation & Context Awareness; Aging-in-Place Assistance Systems.
Older adults can benefit from technologies that help them to complete everyday tasks. However, they are an often under-represented population in augmented reality (AR) research. We present the results of a study in which people aged 50 years or older were asked to perform actions by interpreting visual AR prompts in a lab setting. Our results show that users were less successful at completing actions when using ARROW and HIGHLIGHT augmentations than when using ghosted OBJECT or GHOSTHAND augmentations. We found that user confidence in performing actions varied according to action and augmentation type. Users preferred combined AUDIO+TEXT prompts (our control condition) overall, but the GHOSTHAND was the most preferred visual prompt. We discuss reasons for these differences and provide insight for developers of AR content for older adults. Our work provides the first comparative study of AR with older adults in a non-industrial context.

ReverseORC: Reverse Engineering of Resizable User Interface Layouts with OR-Constraints
Yue Jiang et al., Max Planck Institute for Informatics. CHI 2021.
Topics: 360° Video & Panoramic Content; Algorithmic Transparency & Auditability.
Reverse engineering (RE) of user interfaces (UIs) plays an important role in software evolution. However, the large diversity of UI technologies and the need for UIs to be resizable make this challenging. We propose ReverseORC, a novel RE approach able to discover diverse layout types and their dynamic resizing behaviours independently of their implementation, and to specify them by using OR constraints. Unlike previous RE approaches, ReverseORC infers flexible layout constraint specifications by sampling UIs at different sizes and analyzing the differences between them. It can create specifications that replicate even some non-standard layout managers with complex dynamic layout behaviours. We demonstrate that ReverseORC works across different platforms with very different layout approaches, e.g., for GUIs as well as for the Web. Furthermore, it can be used to detect and fix problems in legacy UIs, extend UIs with enhanced layout behaviours, and support the creation of flexible UI layouts.

Effects of Emotion and Agency on Presence in Virtual Reality
Crescent Jicol et al., University of Bath. CHI 2021.
Topics: Immersion & Presence Research.
Arguably one of the most important characteristics of virtual reality (VR) is its ability to induce higher feelings of presence. Still, research has remained inconclusive on how presence is affected by human factors such as emotion and agency. Here we adopt a novel design to investigate their effects by testing virtual environments inducing either happiness or fear, with or without user agency. Results from 121 participants showed that the dominant emotion induced by a virtual environment is positively correlated with presence. In addition, agency had a significant positive effect on presence and, furthermore, moderated the effect of emotion on presence. We show for the first time that the effects of emotion and agency on presence are not straightforward but they can be modelled by separating design factors from subjective measures. We discuss how these findings can explain seemingly conflicting results of related work and their implications for VR design.

ORCSolver: An Efficient Solver for Adaptive GUI Layout with OR-Constraints
Yue Jiang et al., University of Maryland. CHI 2020.
Topics: Prototyping & User Testing; Computational Methods in HCI.
OR-constrained (ORC) graphical user interface layouts unify conventional constraint-based layouts with flow layouts, which enables the definition of flexible layouts that adapt to screens with different sizes, orientations, or aspect ratios with only a single layout specification. Unfortunately, solving ORC layouts with current solvers is time-consuming and the needed time increases exponentially with the number of widgets and constraints. To address this challenge, we propose ORCSolver, a novel solving technique for adaptive ORC layouts, based on a branch-and-bound approach with heuristic preprocessing. We demonstrate that ORCSolver simplifies ORC specifications at runtime and our approach can solve ORC layout specifications efficiently at near-interactive rates.

Affect Recognition using Psychophysiological Correlates in High Intensity VR Exergaming
Soumya C. Barathi et al., University of Bath. CHI 2020.
Topics: Human Pose & Activity Recognition; Immersion & Presence Research; Sleep & Stress Monitoring.
User experience estimation of VR exergame players by recognising their affective state could enable us to personalise and optimise their experience. Affect recognition based on psychophysiological measurements has been successful for moderate intensity activities. High intensity VR exergames pose challenges as the effects of exercise and VR headsets interfere with those measurements. We present two experiments that investigate the use of different sensors for affect recognition in a VR exergame. The first experiment compares the impact of physical exertion and gamification on psychophysiological measurements during rest, conventional exercise, VR exergaming, and sedentary VR gaming. The second experiment compares underwhelming, overwhelming and optimal VR exergaming scenarios. We identify gaze fixations, eye blinks, pupil diameter and skin conductivity as psychophysiological measures suitable for affect recognition in VR exergaming and analyse their utility in determining affective valence and arousal. Our findings provide guidelines for researchers of affective VR exergames.

Race Yourselves: A Longitudinal Exploration of Self-Competition Between Past, Present, and Future Performances in a VR Exergame
Alexander Michael et al., University of Bath. CHI 2020.
Topics: Full-Body Interaction & Embodied Input; Game UX & Player Behavior.
Participating in competitive races can be a thrilling experience for athletes, involving a rush of excitement and sensations of flow, achievement, and self-fulfilment. However, for non-athletes, the prospect of competition is often a scary one which affects intrinsic motivation negatively, especially for less fit, less competitive individuals. We propose a novel method making the positive racing experience accessible to non-athletes using a high-intensity cycling VR exergame: by recording and replaying all their previous gameplay sessions simultaneously, including a projected future performance, players can race against a crowd of "ghost" avatars representing their individual fitness journey. The experience stays relevant and exciting as every race adds a new competitor. A longitudinal study over four weeks and a cross-sectional study found that the new method improves physical performance, intrinsic motivation, and flow compared to a non-competitive exergame. Additionally, the longitudinal study provides insights into the longer-term effects of VR exergames.

Touché: Data-Driven Interactive Sword Fighting in Virtual Reality
Javier Dehesa et al., University of Bath. CHI 2020.
Topics: Game UX & Player Behavior; Multiplayer & Social Games; Role-Playing & Narrative Games.
VR games offer new freedom for players to interact naturally using motion. This makes it harder to design games that react to player motions convincingly. We present a framework for VR sword fighting experiences against a virtual character that simplifies the necessary technical work to achieve a convincing simulation. The framework facilitates VR design by abstracting from difficult details on the lower "physical" level of interaction, using data-driven models to automate both the identification of user actions and the synthesis of character animations. Designers are able to specify the character's behaviour on a higher "semantic" level using parameterised building blocks, which allow for control over the experience while minimising manual development work. We conducted a technical evaluation, a questionnaire study and an interactive user study. Our results suggest that the framework produces more realistic and engaging interactions than simple hand-crafted interaction logic, while supporting a controllable and understandable behaviour design.

Me vs. Super(wo)man: Effects of Customization and Identification in a VR Exergame
Jordan Koulouris et al., University of Bath. CHI 2020.
Topics: Identity & Avatars in XR; Game UX & Player Behavior; Gamification Design.
Customised avatars are a powerful tool to increase identification, engagement and intrinsic motivation in digital games. We investigated the effects of customisation in a self-competitive VR exergame by modelling players and their previous performance in the game with customised avatars. In a first study we found that, similar to non-exertion games, customisation significantly increased identification and intrinsic motivation, as well as physical performance in the exergame. In a second study we identified a more complex relationship with the customisation style: idealised avatars increased wishful identification but decreased exergame performance compared to realistic avatars. In a third study, we found that 'enhancing' realistic avatars with idealised characteristics increased wishful identification, but did not have any adverse effects. We discuss the findings based on feedforward and self-determination theory, proposing notions of intrinsic identification (fostering a sense of self) and extrinsic identification (drawing away from the self) to explain the results.

CodeGazer: Making Code Navigation Easy and Natural With Gaze Input
Asma Shakil et al., Media Design School & University of Auckland. CHI 2019.
Topics: Eye Tracking & Gaze Interaction; Immersion & Presence Research.
Navigating source code, an activity common in software development, is time consuming and in need of improvement. We present CodeGazer, a prototype for source code navigation using eye gaze for common navigation functions. These functions include actions such as "Go to Definition" and "Find All Usages" of an identifier, navigating to files and methods, moving back and forth between visited points in code, and scrolling. We present user study results showing that many users liked and even preferred the gaze-based navigation, in particular the "Go to Definition" function. Gaze-based navigation also holds up well in completion time when compared to traditional methods. We discuss how eye gaze can be integrated into traditional mouse & keyboard applications in order to make "look up" tasks more natural.

Virtual Performance Augmentation in an Immersive Jump & Run Exergame
Christos Ioannou et al., University of Bath. CHI 2019.
Topics: Full-Body Interaction & Embodied Input; Interactive Narrative & Immersive Storytelling.
Human performance augmentation through technology has been a recurring theme in science and culture, aiming to increase human capabilities and accessibility. We investigate a related concept: virtual performance augmentation (VPA), using VR to give users the illusion of greater capabilities than they actually have. We propose a method for VPA of running and jumping, based on in-place movements, and studied its effects in a VR exergame. We found that in-place running and jumping in VR can be used to create a somewhat natural experience and can elicit medium to high physical exertion in an immersive and intrinsically motivating manner. We also found that virtually augmenting running and jumping can increase intrinsic motivation, perceived competence and flow, and may also increase motivation for physical activity in general. We discuss implications of VPA for safety and accessibility, with initial evidence suggesting that VPA may help users with physical impairments enjoy the benefits of exergaming.