SafeScreen: Evaluating a Screen-Detecting Smartphone Camera App under Benign and Adversarial Use

Camera-equipped smartphones pose security risks to organisations by allowing intentional or accidental leaks of confidential on-screen information. We introduce SafeScreen, an Android camera app that detects and obfuscates screen content in real time, using deep-learning recognition for distant screens and Moiré pattern detection for close-up screen captures. Our mixed-methods, ecologically focused study compared "benign" (ordinary photography) and "malign" (circumventing detection) uses. Results show that SafeScreen effectively prevents accidental leaks, but that most users were able to circumvent it by discovering workarounds such as partial screen occlusion. Our work contributes (1) a novel screen-blocking camera system, and (2) insights from real-world, unguided interactions. We show how evaluating security systems in authentic settings uncovers user-driven vulnerabilities and frustrations that inform future researchers and organisations. We close by discussing future technical features that could offer usability or security improvements, and emphasise the benefits of unscripted and adversarial user evaluations.

Shaun Macdonald et al. (2025). MobileHCI. Topics: Privacy by Design & User Control; Deepfake & Synthetic Media Detection; IoT Device Privacy.

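SafeScreen's close-up detection relies on Moiré patterns, the interference fringes that appear when photographing a screen's pixel grid. As a hedged sketch of the general idea only (not the app's actual detector), such fringes concentrate energy into isolated high-frequency peaks of the image's 2D Fourier spectrum, so a simple peak-to-mean spectral ratio can separate a screen-like periodic pattern from natural content; `moire_score` and its `low_freq_cut` parameter are illustrative names of our own:

```python
import numpy as np

def moire_score(gray, low_freq_cut=4):
    """Score periodic interference in a grayscale image patch.

    Moiré fringes from photographing a screen show up as strong,
    isolated peaks in the frequency domain. This returns the ratio
    of the strongest non-DC spectral magnitude to the mean spectral
    magnitude; higher values suggest a periodic (screen-like) pattern.
    """
    f = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = f.shape
    cy, cx = h // 2, w // 2
    # Zero out the DC / low-frequency centre so smooth natural image
    # content does not dominate the score.
    f[cy - low_freq_cut:cy + low_freq_cut + 1,
      cx - low_freq_cut:cx + low_freq_cut + 1] = 0.0
    return float(f.max() / (f.mean() + 1e-9))
```

Comparing a synthetic sinusoidal grating against white noise shows the score gap; a deployable detector would of course need tuned thresholds and robustness to textured natural scenes.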
Stretch Gaze Targets Out: Experimenting with Target Sizes for Gaze-Enabled Interfaces on Mobile Devices

Users hold their mobile phones at varying distances depending on their posture, the application being used, and the nature of the task. Failing to consider such variation when designing UI target sizes limits the applicability of gaze selection for everyday interaction with mobile devices. To this end, we conducted a user study (N=24) to investigate the implications of different target sizes and viewing distances across different screen regions. While larger targets generally improve accuracy and decrease precision, accuracy is significantly higher in the horizontal than in the vertical direction. This led us to find that increasing the tracking area in the vertical direction only, while maintaining the same visual target size, significantly improves accuracy. This suggests that visually smaller targets with larger vertical tracking areas enhance accuracy. Based on our results, we present concrete design guidelines for developers to optimise target sizes on gaze-enabled mobile devices and improve accuracy across varying user-to-screen distances.

Omar Namnakani et al., University of Glasgow (2025). CHI. Topics: Eye Tracking & Gaze Interaction; Voice User Interface (VUI) Design.

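The finding above, that a vertically stretched tracking area improves accuracy while the visual target stays the same size, can be sketched as a simple hit test. This is an illustrative interpretation, not the paper's implementation, and `v_stretch=1.5` is an arbitrary example value:

```python
def hit_target(gaze_x, gaze_y, target, v_stretch=1.5):
    """Return True if a gaze sample lands in a target's tracking area.

    The target is drawn at its visual size, but the selectable region
    keeps the visual width while its height is scaled by `v_stretch`,
    i.e. the tracking area is enlarged in the vertical direction only.
    `target` is (cx, cy, width, height) in screen pixels.
    """
    cx, cy, w, h = target
    half_w = w / 2                    # horizontal extent unchanged
    half_h = (h * v_stretch) / 2      # vertically enlarged tracking area
    return abs(gaze_x - cx) <= half_w and abs(gaze_y - cy) <= half_h
```

With `v_stretch=1.0` the tracking area collapses back to the visual bounds, which makes the two conditions easy to compare in a study harness.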
Exploring the Perspectives of Social VR-Aware Non-Parent Adults and Parents on Children’s Use of Social Virtual Reality

Social Virtual Reality (VR), where people meet in virtual spaces via 3D avatars, is used by children and adults alike. Children experience new forms of harassment in social VR, which is often inaccessible to parental oversight. To date, there is limited understanding of how parents and non-parent adults within the child social VR ecosystem perceive the appropriateness of social VR for different age groups and the measures in place to safeguard children. We present the results of a mixed-methods questionnaire (N=149 adults, including 79 parents) focusing on encounters with children in social VR and perspectives on children's use of social VR. We draw novel insights into the frequency of social VR use by children under 13 and the current use of, and future aspirations for, child protection interventions. Compared to non-parent adults, parents familiar with social VR propose lower minimum ages and are more likely to allow social VR without supervision. Adult users experience immaturity from children in social VR, while children face abuse, encounter age-inappropriate behaviours and self-disclose to adults. We present directions to enhance the safety of social VR through pre-planned controls, real-time oversight, post-event insight, and the need for evidence-based guidelines to support parents and platforms around age-appropriate interventions.

Cristina Fiani et al. (2024). CSCW, Session 2a: Navigating Family Dynamics and Youth Health Journeys.

Out-of-Device Privacy Unveiled: Designing and Validating the Out-of-Device Privacy Scale (ODPS)

This paper proposes the Out-of-Device Privacy Scale (ODPS), a reliable, validated psychometric scale that measures the importance users attach to out-of-device privacy. In contrast to existing scales, ODPS is designed to capture the importance individuals attribute to protecting personal information from out-of-device threats in the physical world, which is essential when designing privacy protection mechanisms. We iteratively developed and refined ODPS in three high-level steps: item development, scale development, and scale validation, with a total of N=1378 participants. We ensured content validity by following multiple approaches to item generation, and collected insights from experts and target audiences to understand response variability. We then explored the underlying factor structure using multiple methods and performed dimensionality, reliability, and validity tests to finalise the scale. We discuss how ODPS can support future work predicting user behaviours and designing protection methods to mitigate privacy risks.

Habiba Farzand et al., University of Glasgow (2024). CHI. Topics: Privacy by Design & User Control; Privacy Perception & Decision-Making.

"Pikachu would electrocute people who are misbehaving": Expert, Guardian and Child Perspectives on Automated Embodied Moderators for Safeguarding Children in Social Virtual Reality

Automated embodied moderation has the potential to create safer spaces for children in social VR, providing a protective figure that takes action to mitigate harmful interactions. However, little is known about how such moderation should be employed in practice. Through interviews with 16 experts in online child safety and psychology, and workshops with 8 guardians and 13 children, we contribute a comprehensive overview of how Automated Embodied Moderators (AEMs) can safeguard children in social VR. We explore perceived concerns, benefits and preferences across the stakeholder groups and gather first-of-their-kind recommendations and reflections around AEM design. The results stress the need to adapt AEMs to children, whether victims or harassers, based on age and development, emphasising empowerment, psychological impact and humans/guardians-in-the-loop. Our work provokes new participatory design-led directions to consider in the development of AEMs for children in social VR, taking child, guardian, and expert insights into account.

Cristina Fiani et al., University of Glasgow (2024). CHI. Topics: Social & Collaborative VR; Online Harassment & Counter-Tools; Online Identity & Self-Presentation.

What You Experience is What We Collect: User Experience Based Fine-Grained Permissions for Everyday Augmented Reality

Everyday Augmented Reality (AR) headsets pose significant privacy risks, potentially allowing prolonged collection of sensitive data about both users and bystanders (e.g. members of the public). While users control data access through permissions, current AR systems inherit smartphone permission prompts, which may be less appropriate for all-day AR. This constrains informed choices and risks over-privileged access to sensors. We propose a novel AR permission control system that allows better-informed privacy decisions and evaluate it (N=20) using five mock application contexts. Our system's novelty lies in enabling users to experience the varying impacts of permission levels not only on a) privacy, but also b) application functionality. This empowers users to better understand what data an application depends on and how its functionality is affected by limiting said data. Participants found that our method allows them to make better-informed privacy decisions, and deemed it more transparent and trustworthy than state-of-the-art AR and smartphone permission systems taken from Android and iOS. Our results offer insights into new and necessary AR permission systems, improving user understanding and control over data access.

Sophia J. Abraham et al., University of Glasgow (2024). CHI. Topics: AR Navigation & Context Awareness; Privacy by Design & User Control; IoT Device Privacy.

Comparing Dwell Time, Pursuits and Gaze Gestures for Gaze Interaction on Handheld Mobile Devices

Gaze is promising for hands-free interaction on mobile devices. However, it is not clear how gaze interaction methods compare to each other in mobile settings. This paper presents the first experiment in a mobile setting that compares three of the most commonly used gaze interaction methods: Dwell time, Pursuits, and Gaze gestures. In our study, 24 participants selected one of 2, 4, 9, 12 and 32 targets via gaze while sitting and while walking. Results show that input using Pursuits is faster than Dwell time and Gaze gestures, especially when there are many targets. Users prefer Pursuits when stationary, but prefer Dwell time when walking. While selection using Gaze gestures is more demanding and slower when there are many targets, it is suitable for contexts where accuracy is more important than speed. We conclude with guidelines for the design of gaze interaction on handheld mobile devices.

Omar Namnakani et al., University of Glasgow (2023). CHI. Topics: Eye Tracking & Gaze Interaction; Human Pose & Activity Recognition.

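Of the three methods compared, Dwell time is the simplest to sketch: a selection fires once gaze has rested on the same target for a set duration. The following is a minimal illustration, not the study's implementation, and the 600 ms default is an arbitrary example threshold:

```python
class DwellSelector:
    """Minimal dwell-time trigger: fires once gaze has stayed on the
    same target for `dwell_ms` milliseconds (600 is illustrative, not
    a threshold taken from the study)."""

    def __init__(self, dwell_ms=600):
        self.dwell_ms = dwell_ms
        self.current = None   # target id currently under gaze
        self.entered = None   # timestamp when gaze entered that target

    def update(self, target_id, t_ms):
        """Feed one gaze sample; return the selected target id or None."""
        if target_id != self.current:
            # Gaze moved to a different target (or off all targets):
            # restart the dwell timer.
            self.current, self.entered = target_id, t_ms
            return None
        if target_id is not None and t_ms - self.entered >= self.dwell_ms:
            self.entered = t_ms  # reset so the target is not re-fired immediately
            return target_id
        return None
```

Pursuits and Gaze gestures need motion-correlation and path-matching logic respectively, which is why a dwell trigger is usually the baseline condition.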
Re-Evaluating VR User Awareness Needs During Bystander Interactions

Virtual reality (VR) users are often around bystanders, i.e. people in the real world the VR user may want to interact with. To facilitate bystander-VR user interactions, technology-mediated awareness systems have been introduced to increase a user's awareness of bystanders. However, while prior work has found effective means of facilitating bystander-VR user interactions, it is unclear when and why one awareness system should be used over another. We reviewed and selected a breadth of bystander awareness systems from the literature, investigated their usability, and examined how they could be used together holistically to support varying awareness needs across 14 bystander-VR user interactions. Our results demonstrate that VR users do not manage bystander awareness based solely on the usability of awareness systems, but rather on the demands of the social context weighed against their desired immersion in VR (something existing evaluations fail to capture), and show the need for socially intelligent bystander awareness systems.

Joseph O'Hagan et al., University of Glasgow (2023). CHI. Topics: Social & Collaborative VR; Immersion & Presence Research.

Impact of Privacy Protection Methods of Lifelogs on Remembered Memories

Lifelogging is traditionally used for memory augmentation. However, recent research shows that users' trust in the completeness and accuracy of lifelogs might skew their memories. Privacy-protection alterations such as body blurring and content deletion are commonly applied to photos to avoid capturing sensitive information. However, their impact on how users remember memories remains unclear. To this end, we conduct a white-hat memory attack and report on an iterative experiment (N=21) comparing the impact of viewing 1) unaltered lifelogs, 2) blurred lifelogs, and 3) a subset of the lifelogs after deleting private ones, on confidently remembering memories. Findings indicate that all the privacy methods impact the quality of memories similarly, and that users tend to change their answers in recognition more than in recall scenarios. Results also show that users have high confidence in their remembered content across all privacy methods. Our work raises awareness about the mindful design of technological interventions.

Passant ElAgroudy et al., German Research Centre for Artificial Intelligence (DFKI), LMU Munich (2023). CHI. Topics: Privacy by Design & User Control; Privacy Perception & Decision-Making.

"Your Eyes Say You Have Used This Password Before": Identifying Password Reuse from Gaze Behavior and Keystroke Dynamics

A significant drawback of text passwords for end-user authentication is password reuse. We propose a novel approach to detecting password reuse by leveraging gaze as well as typing behavior, and study its accuracy. We collected gaze and typing behavior from 49 users while they created accounts for 1) a webmail client and 2) a news website. While most participants came up with a new password, 32% reported having reused an old password when setting up their accounts. We then compared different machine-learning models for detecting password reuse from the collected data. Our models achieve an accuracy of up to 87.7% in detecting password reuse from gaze, 75.8% from typing, and 88.75% when considering both types of behavior. We demonstrate that, using gaze, password reuse can already be detected during the registration process, before users enter their password. Our work paves the way for developing novel interventions to prevent password reuse.

Yasmeen Abdrabou et al., Bundeswehr University Munich, University of Glasgow (2022). CHI. Topics: Eye Tracking & Gaze Interaction; Passwords & Authentication.

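On the keystroke-dynamics side, classifiers like those described above typically start from simple timing features computed per password entry. As a hedged sketch only (the paper's exact feature set is not reproduced here, and `typing_features` is an illustrative helper of our own), inter-key intervals might be summarised like so:

```python
from statistics import mean, pstdev

def typing_features(key_times_ms):
    """Summarise keystroke dynamics for one password entry.

    `key_times_ms` is the list of key-press timestamps in milliseconds.
    Returns (mean inter-key interval, interval standard deviation,
    total entry duration); illustrative features only, not the
    feature set used in the paper.
    """
    gaps = [b - a for a, b in zip(key_times_ms, key_times_ms[1:])]
    return (mean(gaps), pstdev(gaps), key_times_ms[-1] - key_times_ms[0])
```

The intuition such features capture is that a familiar (reused) password tends to be typed more fluently, so lower interval variance and shorter duration could feed a classifier alongside gaze features.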
The Dark Side of Perceptual Manipulations in Virtual Reality

"Virtual-Physical Perceptual Manipulations" (VPPMs) such as redirected walking and haptics expand the user's capacity to interact with Virtual Reality (VR) beyond what would ordinarily be physically possible. VPPMs leverage knowledge of the limits of human perception to effect changes in the user's physical movements, making it possible to (perceptibly and imperceptibly) nudge their physical actions to enhance interactivity in VR. We explore the risks posed by the malicious use of VPPMs. First, we define, conceptualize and demonstrate the existence of VPPMs. Next, using speculative design workshops, we explore and characterize the threats and risks posed, proposing mitigations and preventative recommendations against the malicious use of VPPMs. Finally, we implement two sample applications to demonstrate how existing VPPMs could be trivially subverted to create the potential for physical harm. This paper aims to raise awareness that the current way we apply and publish VPPMs can lead to malicious exploits of our perceptual vulnerabilities.

Wen-Jie Tseng et al., Institut Polytechnique de Paris (2022). CHI. Topics: Immersion & Presence Research; Dance & Body Movement Computing.

VRception: Rapid Prototyping of Cross-Reality Systems in Virtual Reality

Cross-reality systems empower users to transition along the reality-virtuality continuum or collaborate with others experiencing different manifestations of it. However, prototyping these systems is challenging, as it requires sophisticated technical skills, time, and often expensive hardware. We present VRception, a concept and toolkit for quick and easy prototyping of cross-reality systems. By simulating all levels of the reality-virtuality continuum entirely in Virtual Reality, our concept overcomes the asynchronicity of realities, eliminating technical obstacles. Our VRception Toolkit leverages this concept to allow rapid prototyping of cross-reality systems and easy remixing of elements from all continuum levels. We replicated six cross-reality papers using our toolkit and presented them to their authors. Interviews with them revealed that our toolkit sufficiently replicates their core functionalities and allows quick iterations. Additionally, remote participants used our toolkit in pairs to collaboratively implement prototypes in about eight minutes that they would otherwise have expected to take days.

Uwe Gruenefeld et al., University of Duisburg-Essen (2022). CHI. Topics: Mixed Reality Workspaces; Immersion & Presence Research.

RepliCueAuth: Validating the Use of a Lab-Based Virtual Reality Setup for Evaluating Authentication Systems

Evaluating novel authentication systems is often costly and time-consuming. In this work, we assess the suitability of using Virtual Reality (VR) to evaluate the usability and security of real-world authentication systems. To this end, we conducted a replication study and built a virtual replica of CueAuth [52], a recently introduced authentication scheme, and report on results from: (1) a lab-based in-VR usability study (N=20) evaluating user performance; (2) an online security study (N=22) evaluating the system's observation resistance through virtual avatars; and (3) a comparison between our results and those previously reported in the real-world evaluation. Our analysis indicates that VR can serve as a suitable test-bed for human-centred evaluations of real-world authentication schemes, although the VR technology used can affect the evaluation. Our work is a first step towards augmenting the design and evaluation spectrum of authentication systems and offers groundwork for more research to follow.

Florian Mathis et al., University of Glasgow, University of Edinburgh (2021). CHI. Topics: Privacy by Design & User Control; Passwords & Authentication.

The Role of Eye Gaze in Security and Privacy Applications: Survey and Future HCI Research Directions

For the past 20 years, researchers have investigated the use of eye tracking in security applications. We present a holistic view of gaze-based security applications. In particular, we canvassed the literature and classified the utility of gaze in security applications into a) authentication, b) privacy protection, and c) gaze monitoring during security-critical tasks. This allows us to chart several research directions, most importantly: 1) conducting field studies of implicit and explicit gaze-based authentication, enabled by recent advances in eye tracking; 2) research on gaze-based privacy protection and gaze monitoring in security-critical tasks, which are under-investigated yet very promising areas; and 3) understanding the privacy implications of pervasive eye tracking. We discuss the most promising opportunities and most pressing challenges of eye tracking for security that will shape research in gaze-based security applications for the next decade.

Christina Katsini et al., Human Opsis (2020). CHI. Topics: Eye Tracking & Gaze Interaction; Passwords & Authentication; Privacy Perception & Decision-Making.

Virtual Field Studies: Conducting Studies on Public Displays in Virtual Reality

Field studies on public displays can be difficult, expensive, and time-consuming. We investigate the feasibility of using virtual reality (VR) as a test-bed to evaluate deployments of public displays. Specifically, we investigate whether results from virtual field studies, conducted in a virtual public space, would match the results from a corresponding real-world setting. We report on two empirical user studies in which we compared audience behavior around a virtual public display in the virtual world to audience behavior around a real public display. We found that virtual field studies can be a powerful research tool, as in both studies we observed largely similar behavior between the settings. We discuss the opportunities, challenges, and limitations of using virtual reality to conduct field studies, and provide lessons learned from our work that can help researchers decide whether to employ VR in their research and what factors to account for if doing so.

Ville Mäkelä et al., Ludwig Maximilian University of Munich & Tampere University (2020). CHI. Topics: Social & Collaborative VR; Field Studies.