Robot, Avatar, or Human: The Impact of Partner Representation and Task on the Communication Experience
Avatars and telepresence robots have long received attention for remote communication. However, the specific nature of their physicality, expressiveness, and mobility may affect their usefulness for different tasks. This work compares using an avatar (presented in augmented reality) and a telepresence robot to Face-to-Face (F2F) communication. We focus on the influence of different communication tasks: free conversation, negotiation, and referential communication with movement. Thus, we conducted a user study (split-plot design, N=54) with the type of representation of the conversational partner as the within variable and the communication task as the between variable. Our results show that type of task, especially referential communication with movement, influenced attention to nonverbal cues and perceived closeness. Generally, gestures and body movements received the least focus with telepresence robots. Gestures in avatars and F2F drew similar attention, attributed to the avatar’s tracking fidelity. Gaze received less attention in both mediums than in F2F, while facial expressions on the robot’s screen heightened attention compared to avatars. These findings advance the fundamental understanding of mediated communication and support researchers and practitioners in shaping the design of communication applications beyond today’s video calls.
2025 · Stephanie Arevalo Arboleda et al. · CSCW · Topics: Collaborating in Virtual Environments

Metaphors for Good Digital Identities
Digital identities are often discussed or explained as digital versions of physical documents such as passports. This metaphor tends to ignore, intentionally or not, the social challenges associated with real-world implementation of these technologies. This paper presents eight alternative metaphors for “good” digital identities which are derived from a 12-month Research-through-Design process. This process is presented as an annotated portfolio showcasing insights from a variety of design activities and stakeholder engagements, including design sprints, workshops, an artist residency and an exhibition, with the metaphors operating as “meta-annotations” on the portfolio. The eight metaphors intend to provoke and enable wider conversation with various stakeholders including academics, non-profits, industry professionals and policy makers about what “good” digital identities might mean, by focusing on societal rather than common technical concerns.
2025 · Kim Snooks et al. · DIS · Topics: Online Identity & Self-Presentation; Inclusive Design; Participatory Design

Adopting Mixed Reality Product Customisation in Brick-and-Mortar Retail: Stakeholder Insights for Commercialisation Challenge
Although increasing attention is being paid to implementing mixed reality (MR) technology in retail purchases, integrating MR with brick-and-mortar shops has been overlooked. This study focuses on adopting MR product customisation interfaces in brick-and-mortar shops using fashion retail as an example. It evaluates the designed Mixed Reality Product Configurator (MRPC) through prototype testing (n=15) and in-depth stakeholder interviews (n=26). The study provides recommendations for software developers, designers, and retail managers in four respects: 1) it addresses the significance of MRPC for brick-and-mortar retailing, 2) identifies four aspects of MR retail mass adoption, namely system development, interior environment design, marketability, and management strategy, 3) defines prospective retail shop genres that adopt MRPC, and 4) defines prospective consumer genres that adopt MRPC. The findings define the challenges and requirements of MR retail commercialisation, facilitating stakeholders to develop MR-based retail for commercial mass customisation in the marketplace.
2025 · Lingyao Jin · DIS · Topics: Mixed Reality Workspaces; Motor Impairment Assistive Input Technologies; Customizable & Personalized Objects

"Suits as Masculine and Flowers as Feminine": Investigating Gender Expression in AI-Generated Imagery
Generative AI’s growing use in content creation significantly impacts societal perceptions by perpetuating and reinforcing gender stereotypes. The amplification of stereotypes in AI-generated content can lead to increased discrimination, exclusion, misinformation, and contribute to racial and gender disparities. To address this challenge, we explore the direct impact of generative AI on gender attribution and stereotype reinforcement in digital imagery through a survey with 111 participants, analysing interpretations of gender expression in 216 AI-generated images. Findings reveal a pronounced bias toward masculine-leaning attributions, particularly in images where gender identity is labelled as androgyne. This research provides three key contributions: (1) an in-depth understanding of how people perceive gender expressions in AI-generated images; (2) a dataset of 216 images evaluated by participants for masculinity, femininity, and neutrality; and (3) two key challenges to consider in order to address the stereotyped representations of gender expressions in AI-generated content, highlighting the need for more inclusive AI practices.
2025 · Gail Collyer-Hoar et al. · DIS · Topics: Generative AI (Text, Image, Music, Video); AI Ethics, Fairness & Accountability; Gender & Race Issues in HCI

Towards Holistic Prompt Craft
We present an account of an ongoing practice-based Design Research programme that explores real-time AI image generation. Based on three installations, we reflect on the design of PromptJ, a user interface built around the concept of a prompt ‘mixer’. We present a series of strong concepts based on the design and deployment of PromptJ. Later, we cohere and abstract our strong concepts into the notion of Holistic Prompt Craft, which describes the importance of considering all relevant parameters concurrently. Finally, we present PromptTank, a prototype design which exemplifies these principles. Our contributions are articulated as strong concepts or intermediate knowledge, intended to be used generatively by informing and inspiring practitioners and researchers working in this space.
2025 · Joseph Lindley et al. · DIS · Topics: Generative AI (Text, Image, Music, Video); Prototyping & User Testing

Online-EYE: Multimodal Implicit Eye Tracking Calibration for XR
Unlike other inputs for extended reality (XR) that work out of the box, eye tracking typically requires custom calibration per user or session. We present a multimodal-input approach for implicit calibration of eye trackers in VR, leveraging UI interaction for continuous, background calibration. Our method analyzes gaze data alongside controller interaction with UI elements and employs machine-learning techniques to continuously refine the calibration matrix without interrupting users’ current tasks, potentially eliminating the need for explicit calibration. We demonstrate the accuracy and effectiveness of this implicit approach across various tasks and real-time applications, achieving eye tracking accuracy comparable to native, explicit calibration. While our evaluation focuses on VR and controller-based interactions, we anticipate the broader applicability of this approach to various XR devices and input modalities.
2025 · Baosheng James HOU et al. (Google; Lancaster University, Computing and Communications) · CHI · Topics: Eye Tracking & Gaze Interaction; Immersion & Presence Research

Child Centred Ethics (CCE): A Practical Framework for Enhanced Child Participation in HCI
Following a review of papers in the ACM DL on ethics and children, this paper shows the growth of interest in this area, summarises the literature found, and then, using detail from 26 papers that offer practical advice, distils a Child Centred Ethics Framework that maps literature onto ethical concerns in relation to the practical application of ethics with children. The framework offers questions and solutions for researchers from the first inception of a project to the dissemination of the results back to the children. The framework is offered as an adjunct to an ethics/IRB document in that it places the child's experience at the centre of decision-making allowing fuller exploration of aspects like assent, anonymity, inclusion and contribution. As a practical resource that researchers can use, the framework is presented as a living document waiting to be owned by the community.
2025 · Janet C. Read et al. (ChiCI Lab, University of Central Lancashire) · CHI · Topics: Participatory Design

On-body Icons: Designing a 3D Interface for Launching Apps in Augmented Reality
On-body tapping provides a quick way to launch augmented reality (AR) apps using virtual shortcuts placed on the user’s skin, clothes, and jewelry. While prior work has focused on tapping performance, social acceptance, and sensing techniques, users’ behaviour in placing shortcuts on their body has been underexplored. In this work, we propose On-body Icons — a novel interface for launching apps via touching virtual icons placed across the user’s entire body, and use it to investigate locations, reasons for chosen icon placement, and users’ attitudes towards the feature. Results of the qualitative study conducted with 24 participants demonstrated that people employ a wide variety of placement strategies that balance memorability of the locations with accuracy and comfort of reaching the icons. We discuss these findings in regard to current understanding of memorability of icon placement, placement appropriateness, and privacy, and offer design implications for similar features in spatial applications.
2025 · Uliana Tsimbalistaia et al. (HSE University) · CHI · Topics: AR Navigation & Context Awareness; On-Skin Display & On-Skin Input

Making Hardware Devices at Scale is Still Hard: Challenges and Opportunities for the HCI Community
Embedded systems and interactive devices form an essential interface between the physical and digital world and are understandably an important focus for the HCI research community. However, scaling an interactive prototype of a new device concept to enable effective evaluation or to support the transition to a production-ready device is incredibly challenging. To better understand the issues innovators face when scaling up interactive device prototypes we report the results from 22 interviews with practitioners in the interactive device field, including eight academics involved in the HCI and manufacturing research communities. In our two-phase analysis we identify and validate the following four recurring themes. First and foremost is the observation that “creating relationships with industry” is hard. Second, “effective communication requires a lot of effort” despite the availability of modern collaboration tools. Third, we observed that “understanding the manufacturer's perspective” can be difficult. Finally, “prototyping is nothing like production”: the vast difference between these two activities still surprises many. Additionally, our university-based participants gave us further insights and helped us to identify challenges specific to the academic context, pointing to a number of opportunities relating to hardware device scaling.
2025 · Bo Kang et al. (University of Cambridge) · CHI · Topics: Circuit Making & Hardware Prototyping

The World is Not Enough: Growing Waste in HPC-enabled Academic Practice
Most research, including and perhaps especially HCI, depends to some extent on technologies and computational infrastructures. Despite the noted environmental impacts associated with information and communication technology (ICT) globally, to date little consideration has been given to how to limit the impact of research and innovation processes themselves. Working to understand the technical and cultural drivers of this impact within the specific but resource-intensive domain of High Performance Computing (HPC), we conducted 25 interviews with academic researchers, providers, funders, and commissioners of HPC. We find intersecting socio-cultural and technical dimensions, linked to research institutions such as conferences, funders, and universities, that reinforce and embed, rather than challenge, expectations of growth and waste. At a time when large-scale cloud systems, generative AI and ever larger models are multiplying, we argue for de-escalating demand for computing, aiming for more moderate, responsible and meaningful use of computational infrastructures, including within HCI itself.
2025 · Carolynne Lord et al. (UKCEH; Lancaster University, School of Computing and Communications) · CHI · Topics: Generative AI (Text, Image, Music, Video); Sustainable HCI; Ecological Design & Green Computing

Hidden Opportunities for Elder Living: Understanding Shared Technology Troubles and Benefits for Older Adults in the UK Cost of Living Crisis
The uptake of digital technology by older adults and service-providers has been partly driven by the pandemic but more recently by the erosion of in-person services because of increasing austerity and a harsher global economic climate. Against the backdrop of the UK’s cost of living crisis, we examine technology used frequently within five older adults’ households. Through two rounds of interviews and participant diaries, we show benefits and struggles of participants’ costly technology use, reflecting on what ‘cost of living’ means when technology designed to simplify older people’s lives encounters problems. For HCI practitioners, we provide evidence of how personal smart devices can be better tailored to help older adults support themselves both economically and practically, during the cost of living crisis. We propose avenues for future research and design that better support indirect costs and reflect on how personal devices can be made self-sustaining, integrated and repairable.
2025 · Ewan Soubutts et al. (University College London, UCL Interaction Centre) · CHI · Topics: Aging-Friendly Technology Design; Aging-in-Place Assistance Systems

Stretch Gaze Targets Out: Experimenting with Target Sizes for Gaze-Enabled Interfaces on Mobile Devices
Users hold their mobile phones at varying distances depending on their posture, the application being used, and the task's nature. Not accounting for such variation when designing UI target sizes limits the applicability of gaze selection for everyday interaction with mobile devices. To this end, we conducted a user study (N=24) to investigate the implications of different target sizes and viewing distances across different screen regions. While larger targets generally improve accuracy and decrease precision, accuracy is significantly higher in the horizontal than in the vertical direction. This subsequently led us to find that increasing the tracking area in the vertical direction only, while maintaining the same visual target size, significantly improves accuracy. This suggests that visually smaller targets with larger vertical tracking areas enhance accuracy. Based on our results, we present concrete design guidelines for developers to optimise target sizes on gaze-enabled mobile devices to improve accuracy across varying user-to-screen distances.
2025 · Omar Namnakani et al. (University of Glasgow) · CHI · Topics: Eye Tracking & Gaze Interaction; Voice User Interface (VUI) Design

Beyond the 'Unofficial Proxy' - Navigating Technology Support for Older Adults' Banking Activities with Close Others
In the context of extensive bank branch closures, and a rapidly ageing population, older adults’ (OAs’) reluctance to adopt digital banking platforms by themselves is concerning. However, many OAs rely on the support of close others (COs) to complete banking activities with them. This support is mostly provided through “unofficial” mechanisms such as sharing online banking credentials, which risk an OA’s privacy and security. This paper replicates a Canadian study with OAs in a UK context and extends it with co-design workshops focused on novel banking solutions for OAs and COs, helping to formalise the role of unofficial proxies within online platforms. Results show that unofficial proxy banking also occurs with COs in a UK context and co-design reveals barriers to OAs’ use of banking technology independently. We discuss recommendations for flexible, easily authenticated and easy-to-learn digital banking solutions for OAs in the future.
2025 · Polly Barber et al. (University College London) · CHI · Topics: Aging-Friendly Technology Design; Privacy by Design & User Control; Passwords & Authentication

It’s Not Always the Same Eye That Dominates: Effects of Viewing Angle, Handedness and Eye Movement in 3D
Understanding eye dominance, the subconscious preference for one eye, has significant implications for 3D user interfaces in VR and AR, particularly in interface design and rendering. Although HCI recognizes eye dominance, little is known about what causes it to switch from one eye to another. To explore this, we studied eye dominance in VR, where 28 participants manually aligned a cursor with a distant target across three tasks. We manipulated the horizontal viewing angle, the hand used for alignment, and eye movement induced by target behaviour. Our results confirm the dynamic nature of eye dominance, though with fewer switches than expected and varying influences across tasks. This highlights the need for adaptive HCI techniques, which account for shifts in eye dominance in system design, such as gaze-based interaction, visual design, or rendering, and can improve accuracy, usability, and experience.
2025 · Franziska Prummer et al. (Lancaster University, School of Computing and Communications) · CHI · Topics: Eye Tracking & Gaze Interaction; Immersion & Presence Research

A Systematic Review and Meta-Analysis of Research on Goals for Behavior Change
HCI research on goals and behavior change has significantly increased over the past decade. However, while emerging work has synthesized personal informatics goals, fewer efforts have focused on also integrating HCI research on behavior change to chart future research directions. We conducted a systematic review of 180 papers focused on goals and behavior change from over 10 years of SIGCHI journals and conference proceedings. We further analyzed 37 papers from the data set that included evaluations of interventions’ effectiveness in-the-wild. We also reported on the effectiveness of 76 such technology-based interventions and the meta-analysis of 28 of these interventions. We find that most research has focused on goals in the health and wellbeing domains, centered on the individual, low intrinsic goals, and partial use of theoretical constructs in technology-based interventions. We highlight opportunities for supporting multiple-domain, social, high intrinsic, and qualitative goals in HCI research for behavior change, and for more effective technology-based interventions with stronger theoretical underpinning, supporting users’ awareness of deep motives for qualitative goals.
2025 · Jun Zhu et al. (University of California, Irvine, Informatics) · CHI · Topics: Mental Health Apps & Online Support Communities; Fitness Tracking & Physical Activity Monitoring; Privacy by Design & User Control

Of Ironies and Agency: Energy Professionals’ Views on Digital Interventions and Their Users
The efficacy of digital solutions to increase energy efficiency, including technical optimisations and behavioural influence, has long been a subject of debate within sustainable HCI (SHCI). While the viewpoints of policymakers and academics are frequently published (and often contradictory), less is known about the views of those on the ground. In this paper we ask: What are energy professionals' views of digital energy-saving interventions and their users? What are the challenges they face implementing these interventions? Based on a university campus case study with twelve semi-structured interviews and a focus group with energy and facilities professionals, we illustrate how they strongly advocate digital efficiency as a pathway to sustainability; yet, this optimism is in apparent tension with key barriers they identify to realising 'their seamless visions', particularly the complexities of the human behaviour they are seeking to optimise. These findings underscore the seductiveness of techno-optimism and the need for more systemic change.
2025 · Christina Bremer et al. (Lancaster University, School of Computing and Communications) · CHI · Topics: Sustainable HCI; Energy Conservation Behavior & Interfaces

How to Design with Ambiguity: Insights from Self-tracking Wearables
Nearly 20 years ago, Gaver et al. introduced ambiguity as a design resource, proposing tactics to reflect everyday uncertainty into interactive systems. This approach is especially relevant for self-tracking wearables, which often obscure the inherent ambiguity of system design and tracked phenomena with seemingly clear, prescriptive data and insights. Although scholars recognize the importance of ambiguity, its practical application in the design process remains underexplored. To address this, we conducted a two-week workshop with 60 designers, examining the application of Gaver et al.’s tactics in 11 design concepts, and performed interviews with 16 participants. Our findings reveal eight relevant ambiguity tactics for self-tracking and offer insights into participants' experiences with designing using ambiguity. We discuss prescription and overlooked ambiguity as levers for the operationalization of ambiguity, the potential benefits and downsides of ambiguity tactics for users, future directions for HCI research and practice, and the study limitations.
2025 · Chiara Di Lodovico et al. (Politecnico di Milano, Design Department) · CHI · Topics: Visualization Perception & Cognition; Biosensors & Physiological Monitoring; User Research Methods (Interviews, Surveys, Observation)

Hands-on, Hands-off: Gaze-Assisted Bimanual 3D Interaction
Extended Reality (XR) systems with hand-tracking support direct manipulation of objects with both hands. A common interaction in this context is for the non-dominant hand (NDH) to orient an object for input by the dominant hand (DH). We explore bimanual interaction with gaze through three new modes of interaction where the input of the NDH, DH, or both hands is indirect based on Gaze+Pinch. These modes enable a new dynamic interplay between our hands, allowing flexible alternation between and pairing of complementary operations. Through applications, we demonstrate several use cases in the context of 3D modelling, where users exploit occlusion-free, low-effort, and fluid two-handed manipulation. To gain a deeper understanding of each mode, we present a user study on an asymmetric rotate-translate task. Most participants preferred indirect input with both hands for lower physical effort, without a penalty on user performance. Otherwise, they preferred modes where the NDH oriented the object directly, supporting preshaping of the hand, which is more challenging with indirect gestures. The insights gained are of relevance for the design of XR interfaces that aim to leverage eye and hand input in tandem.
2024 · Mathias N. Lystbæk et al. · UIST · Topics: Hand Gesture Recognition; Eye Tracking & Gaze Interaction; Mixed Reality Workspaces

Eye-Hand Movement of Objects in Near Space Extended Reality
Hand-tracking in Extended Reality (XR) enables moving objects in near space with direct hand gestures, to pick, drag and drop objects in 3D. In this work, we investigate the use of eye-tracking to reduce the effort involved in this interaction. As the eyes naturally look ahead to the target for a drag operation, the principal idea is to map the translation of the object in the image plane to gaze, such that the hand only needs to control the depth component of the operation. We have implemented four techniques that explore two factors: the use of gaze only to move objects in X-Y vs. extra refinement by hand, and the use of hand input in the Z axis to directly move objects vs. indirectly via a transfer function. We compared all four techniques in a user study (N=24) against baselines of direct and indirect hand input. We detail user performance, effort and experience trade-offs and show that all eye-hand techniques significantly reduce physical effort over direct gestures, pointing toward effortless drag-and-drop for XR environments.
2024 · Uta Wagner et al. · UIST · Topics: Hand Gesture Recognition; Eye Tracking & Gaze Interaction

Understanding the Impact of the Reality-Virtuality Continuum on Visual Search using Physiological Measures
While Mixed Reality allows the seamless blending of digital content into users' surroundings, it is not clear whether such a fusion of digital and physical information impacts users' perceptual and cognitive resources differently. While the fusion of real and virtual objects provides numerous opportunities to present additional information, it also introduces undesirable side effects, such as split attention and increased visual complexity. We conducted a visual search study in three manifestations of mixed reality to understand the effects of the environment on visual search behavior. Our multimodal evaluation combined EEG and eye-tracking correlates of search efficiency, distractor suppression, and attention allocation with behavioral measures. We found that, independently of the perceptual load, Augmented Reality environments reduce users' capacity to identify target information and suppress irrelevant stimuli. Participants reported AR as more demanding and distracting. We discuss design implications for MR interfaces based on physiological inputs for adaptive interactions.
2024 · Francesco Chiossi et al. · MobileHCI · Topics: Eye Tracking & Gaze Interaction; Brain-Computer Interface (BCI) & Neurofeedback; AR Navigation & Context Awareness