SoilSense: Appropriating Soil-based Microbial Fuel Cells to Create Tangible Interfaces
Soil-based Microbial Fuel Cells (SMFCs) offer a sustainable method for powering low-energy computing devices by harnessing electricity from microbial activity in soil. In this paper, we introduce SoilSense, a novel approach that repurposes SMFCs as tangible interfaces rather than energy sources, transforming soil into an interactive, computationally responsive medium. We explore the voltage variations that occur when pressure is applied to the cathode and systematically characterize this mechanism across different electrode configurations and soil moisture levels. To demonstrate the feasibility of SMFC-based interfaces, we present a series of modular, proof-of-concept prototypes that support diverse interaction modalities. We further illustrate how SoilSense enables interaction through example applications, and we offer design implications and a vision for future studies that employ soil as an ecologically compatible material in interactive system design.
2025 · Shuto Takashita et al. · UIST · Tags: Shape-Changing Materials & 4D Printing; Ecological Design & Green Computing; Energy Conservation Behavior & Interfaces

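The sensing principle behind SoilSense, a measurable voltage dip when the cathode is pressed, lends itself to a simple event-detection loop. The sketch below is a minimal illustration under assumed numbers (open-circuit voltage, dip magnitude, thresholds) and a simulated signal; it is not the authors' implementation.

```python
# Illustrative only: turning SMFC voltage dips into press/release events.
# The ~0.4 V baseline, 0.08 V dip, and thresholds are assumptions.
import random

def read_voltage(t: float) -> float:
    """Simulated SMFC terminal voltage with noise; a 'press' dips the
    voltage between t = 2 s and t = 3 s. Replace with a real ADC read."""
    v = 0.40 + random.gauss(0.0, 0.002)
    if 2.0 <= t <= 3.0:
        v -= 0.08
    return v

PRESS_DROP_V = 0.05   # dip below baseline that counts as a press
ALPHA = 0.01          # smoothing factor for the slowly drifting baseline

baseline = read_voltage(0.0)
pressed = False
for step in range(200):                               # 10 s at 20 Hz
    t = step * 0.05
    v = read_voltage(t)
    if not pressed and baseline - v > PRESS_DROP_V:
        pressed = True
        print(f"{t:4.2f}s press")
    elif pressed and baseline - v < PRESS_DROP_V / 2:  # hysteresis on release
        pressed = False
        print(f"{t:4.2f}s release")
    if not pressed:
        baseline = (1 - ALPHA) * baseline + ALPHA * v  # track slow drift
```

The slowly adapting baseline matters here because SMFC output drifts with moisture and microbial activity, so a fixed absolute threshold would be a poor fit.
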
eTactileKit: A Toolkit for Design Exploration and Rapid Prototyping of Electro-Tactile Interfaces
Electro-tactile interfaces are becoming increasingly popular due to their unique advantages, such as fast and localised tactile response, thin and flexible form factors, and the potential to create novel tactile experiences. However, insights from a formative study with typical designers highlighted a lack of resources, limited access to information, and the complexity of software and hardware tools. This establishes a high barrier to entry and limits the ability to rapidly prototype and experiment with electro-tactile interfaces. To address these challenges, we propose eTactileKit, a scalable and accessible toolkit providing end-to-end support for designing and prototyping electro-tactile interfaces. eTactileKit comprises a hardware platform and a software framework for designing, simulating, and exploring electro-tactile stimuli. We evaluated the impact and usability of eTactileKit through a three-week-long take-home study, which demonstrated increased accessibility, ease of use, and the toolkit's positive impact on design workflows. Additionally, we implemented a set of use cases to demonstrate the toolkit's practicality and effectiveness across various applications.
2025 · Praneeth Bimsara Perera et al. · UIST · Tags: Electrical Muscle Stimulation (EMS); Prototyping & User Testing

Weight-Induced Consumed Endurance (WICE): A Model to Quantify Shoulder Fatigue with Weighted Objects
Fatigue is a major challenge in mid-air interactions, often producing a sensation of heaviness, particularly when users carry weighted objects on their arms. Existing models for characterising shoulder fatigue were primarily developed for bare-hand scenarios, limiting their applicability in situations involving encumbrance. In this paper, we introduce Weight-Induced Consumed Endurance (WICE), a novel model that accurately estimates shoulder fatigue when additional weight is attached at various locations on the arm. WICE enhances the calculation of instantaneous shoulder torque by incorporating information about the attached weight, integrates individual arm mass for more personalised fatigue estimation, and uses a Bayesian framework to simulate the distribution of shoulder fatigue. Our evaluation shows that WICE correlates strongly with both experimentally measured endurance time and subjective Borg CR10 ratings, demonstrating its reliability as an objective fatigue metric in both encumbered and no-weight conditions. We further demonstrate how WICE can be applied to examine the effects of controller and haptic devices on user fatigue. WICE provides a foundation for developing fatigue-aware systems that can sense and adapt to encumbrance, allowing for more tailored, ergonomic mixed reality (MR) interactions.
2025 · Tinghui Li et al. · UIST · Tags: Force Feedback & Pseudo-Haptic Weight; Full-Body Interaction & Embodied Input; Biosensors & Physiological Monitoring

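To make the torque component concrete, here is a minimal statics sketch of how an attached weight raises instantaneous shoulder torque, the quantity WICE builds on. The rigid-arm simplification and the segment parameters are assumptions for illustration; the paper's personalised Bayesian formulation is not reproduced here.

```python
# Simplified rigid-arm statics, illustrative only: not the WICE model itself.
import math

G = 9.81  # gravitational acceleration, m/s^2

def shoulder_torque(arm_mass: float, arm_com: float,
                    weight_mass: float, weight_pos: float,
                    elevation_deg: float) -> float:
    """Static shoulder torque (N*m) for a straight arm raised
    `elevation_deg` from the body, with an extra weight attached
    `weight_pos` metres from the shoulder."""
    lever = math.sin(math.radians(elevation_deg))
    return (arm_mass * G * arm_com + weight_mass * G * weight_pos) * lever

# A 3.5 kg arm (centre of mass 0.30 m from the shoulder) held horizontally:
print(shoulder_torque(3.5, 0.30, 0.0, 0.0, 90))   # ~10.3 N*m, bare hand
# Same pose with a 1.0 kg device strapped at the wrist (0.60 m out):
print(shoulder_torque(3.5, 0.30, 1.0, 0.60, 90))  # ~16.2 N*m
```

Even in this toy version, where the weight sits on the arm changes the torque markedly, which is why the model conditions on attachment location.
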
At a Glance to Your Fingertips: Enabling Direct Manipulation of Distant Objects Through SightWarp
In 3D user interfaces, reaching out to grab and manipulate something works well until it is out of reach. Indirect techniques like gaze and pinch offer an alternative for distant interaction but do not provide the same immediacy or proprioceptive feedback as direct gestures. To support direct gestures for faraway objects, we introduce SightWarp: an interaction technique that exploits eye-hand coordination to seamlessly summon object proxies to the user's fingertips. After looking at a distant object, users either shift their gaze to the hand or move their hand into view, triggering the creation of a scaled near-space proxy of the object and its surrounding context. The proxy remains active until the eye-hand pattern is released. The key benefit is that users always have the option to immediately operate on the distant object through a natural, direct hand gesture. Through a user study of a 3D object docking task, we show that users can easily employ SightWarp and that subsequent direct manipulation improves performance over gaze and pinch. Application examples illustrate its utility for 6DOF manipulation, overview-and-detail navigation, and world-in-miniature interaction. Our work contributes to expressive and flexible object interactions across near and far spaces.
2025 · Yang Liu et al. · UIST · Tags: Hand Gesture Recognition; Immersion & Presence Research; 3D Modeling & Animation

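SightWarp's eye-hand trigger reads naturally as a small state machine: look at a distant object, bring gaze and hand together to summon the proxy, and break the pattern to dismiss it. The sketch below is a speculative reconstruction; the event names, one-second window, and callback API are assumptions, not the paper's code.

```python
# Speculative sketch of the eye-hand trigger pattern, not the authors' code.
import time
from dataclasses import dataclass

GAZE_TO_HAND_WINDOW_S = 1.0   # assumed gap allowed between the two cues

@dataclass
class GazeHit:
    kind: str            # "distant_object", "hand", "proxy", or "other"
    obj: object = None   # the object looked at, if any

class SightWarpTrigger:
    """Summons a near-space proxy when gaze moves from a distant object
    to the hand; releases it when the eye-hand pattern is broken."""

    def __init__(self, spawn_proxy, release_proxy):
        self.spawn_proxy = spawn_proxy      # callback: create scaled proxy
        self.release_proxy = release_proxy  # callback: remove it
        self.target = None
        self.target_at = 0.0
        self.active = False

    def on_gaze(self, hit: GazeHit) -> None:
        if self.active:
            if hit.kind not in ("hand", "proxy"):     # pattern released
                self.active = False
                self.target = None
                self.release_proxy()
        elif hit.kind == "distant_object":
            self.target, self.target_at = hit.obj, time.time()
        elif (hit.kind == "hand" and self.target is not None
              and time.time() - self.target_at <= GAZE_TO_HAND_WINDOW_S):
            self.active = True
            self.spawn_proxy(self.target)             # proxy plus context

# Hypothetical wiring into an engine's per-frame gaze samples:
# trigger = SightWarpTrigger(engine.spawn_proxy, engine.despawn_proxy)
# engine.on_gaze_sample(trigger.on_gaze)
```

The paper's second cue, moving the hand into view, would feed the same state machine from a hand-tracking event rather than a gaze event.
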
AbstractExplorer: Leveraging Structure-Mapping Theory to Enhance Comparative Close Reading at Scale
Individual flagship conferences today can have over a thousand papers; even reading just the abstract of every paper at the latest relevant conference to keep up with the research is prohibitive in time and memory. Previous visualizations in this domain have ubiquitously followed Shneiderman's Visual Information-Seeking Mantra, with details available on demand. Recently, however, system designers in other domains have leveraged Structure-Mapping Theory (SMT) to support seeing both the overview and the details at the same time, enabling abstraction without losing context. We compose and evaluate a system, called AbstractExplorer, with analogous SMT-derived characteristics for the domain of scientific abstract corpus familiarization. AbstractExplorer offers a unique combination of LLM-powered (1) faceted comparative close reading with (2) role highlighting, enhanced by (3) structure-based ordering and (4) alignment. An ablation study (N=24) validated that these features work best together. A summative study (N=16) describes how these features support users in familiarizing themselves with a corpus of paper abstracts from a single large conference with over 1000 papers.
2025 · Ziwei Gu et al. · UIST · Tags: Human-LLM Collaboration; Interactive Data Visualization

Uncertainty on Display: The Effects of Communicating Confidence Cues in Autonomous Vehicle-Pedestrian Interactions
Uncertainty is inherent in the decision-making of autonomous vehicles (AVs), yet it is rarely communicated to pedestrians, hindering transparency. This study explored approaches and outcomes of communicating AV uncertainty to pedestrians. Two communication approaches (explicit and implicit) were developed to convey different confidence levels (high and low) of AVs. Through a within-subject virtual reality experiment (n=26), we evaluated these approaches in a crossing scenario, examining their impact on participants’ perceptions of safety, trust, and user experience. Our results show that explicit communication is more effective and preferred for conveying uncertainty, fostering safer, more trusting, and positive interactions. Conversely, implicit communication introduces ambiguity, especially when AV confidence levels are low. This research advances the understanding of how uncertainty communication influences pedestrians and provides valuable guidance for designing future eHMIs to effectively communicate uncertainty.
2025 · Yue Luo et al. · AutoUI · Tags: External HMI (eHMI) — Communication with Pedestrians & Cyclists; Explainable AI (XAI); Algorithmic Transparency & Auditability

Animal Interaction with Autonomous Mobility Systems: Designing for Multi-Species Coexistence
Autonomous mobility systems increasingly operate in environments shared with animals, from urban pets to wildlife. However, their design has largely focused on human interaction, with limited understanding of how non-human species perceive, respond to, or are affected by these systems. Motivated by research in Animal-Computer Interaction (ACI) and more-than-human design, this study investigates animal interactions with autonomous mobility through a multi-method approach combining a scoping review (45 articles), online ethnography (39 YouTube videos and 11 Reddit discussions), and expert interviews (8 participants). Our analysis surfaces five key areas of concern: Physical Impact (e.g., collisions, failures to detect), Behavioural Effects (e.g., avoidance, stress), Accessibility Concerns (particularly for service animals), Ethics and Regulations, and Urban Disturbance. We conclude with design and policy directions aimed at supporting multi-species coexistence in the age of autonomous systems. This work underscores the importance of incorporating non-human perspectives to ensure safer, more inclusive futures for all species.
2025 · Tram Thi Minh Tran et al. · AutoUI · Tags: Ubiquitous Computing; Community Engagement & Civic Technology; Human-Nature Relationships (More-than-Human Design)

LoRA-Based Pattern Generation for Yi Ethnic Embroidery Heritage Preservation
Alive Yi 2.0 combines cultural heritage, design innovation, and artificial intelligence (AI) to preserve and reimagine Yi minority embroidery patterns. Using a curated database of traditional Yi embroidery patterns, we implemented LoRA-based AI models to generate new designs that maintain cultural authenticity while enabling contemporary interpretations. This work transforms traditional patterns into modern variations through fine-tuned stable diffusion models, creating designs that respect cultural elements while appealing to younger generations. Our approach demonstrates the potential of AI-assisted design in cultural heritage preservation and provides a framework for using computational creativity to revitalize traditional heritage in the digital era.
2025 · Mengyao Guo et al. · C&C · Tags: Generative AI (Text, Image, Music, Video); Digital Art Installations & Interactive Performance; Museum & Cultural Heritage Digitization

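For a rough sense of the inference side of a pipeline like Alive Yi 2.0's, the snippet below loads a LoRA adapter into a Stable Diffusion pipeline with Hugging Face diffusers. The base-model id, adapter path, and prompt are placeholders; the paper's training data and fine-tuning setup are not reproduced here.

```python
# Sketch: sampling a pattern variant from a LoRA-adapted Stable Diffusion
# pipeline. Model id, adapter path, and prompt are hypothetical.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # placeholder base model
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("./yi-embroidery-lora")      # hypothetical adapter

image = pipe(
    "traditional Yi embroidery pattern, symmetric floral motif, "
    "bold red and black palette",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("yi_pattern_variant.png")
```

Keeping the adapter separate from the base model is what makes this approach practical for heritage work: a small, curated pattern corpus can steer a large pretrained model without retraining it.
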
How your Physical Environment Affects Spatial Presence in Virtual Reality
Virtual reality (VR) is often used in small physical environments, requiring users to remain aware of their environment to avoid injury or damage. However, this can reduce their spatial presence in VR. Previous work and theory lack an account of how the physical environment (PE) affects spatial presence. To address this gap, we investigated the effect on spatial presence of (1) the degree of spatial knowledge of the PE and (2) knowledge of and (3) collisions with obstacles in the PE. Estimates from Bayesian regression models suggest that limiting spatial knowledge of the PE increases spatial presence initially but amplifies the detrimental effect of obstacle collisions. Repeatedly avoiding obstacles further decreases spatial presence, but removing them from the user's path yields a partial recovery. Our work contributes empirical evidence to theories of spatial presence formation and highlights the need to consider the physical environment when designing for presence in VR.
2025 · Thomas van Gemert et al. (University of Copenhagen, Department of Computer Science) · CHI · Tags: Mixed Reality Workspaces; Immersion & Presence Research; Context-Aware Computing

Theorising in HCI using Causal Models
Although the literature on Human-Computer Interaction (HCI) catalogues many theories, it offers surprisingly few tools for theorising. This paper critiques dominant approaches to engaging with theory and proposes a working model for theorising in HCI. We then present graphical causal modelling as an effective theorising tool. This includes a step-by-step guide to building causal models and examples of their use in different stages of the research process. We explain how causal models help develop method-agnostic representations of research problems using directed acyclic graphs, identify potential confounders, and construct alternative interpretations of data. Finally, we discuss their limitations and challenges for adoption by the HCI community.
2025 · Eduardo Velloso et al. (University of Sydney, School of Computer Science) · CHI · Tags: Explainable AI (XAI); Computational Methods in HCI

"It’s Not the AI’s Fault Because It Relies Purely on Data": How Causal Attributions of AI Decisions Shape Trust in AI SystemsHumans naturally seek to identify causes behind outcomes through causal attribution, yet Human-AI research often overlooks how users perceive causality behind AI decisions. We examine how this perceived locus of causality—internal or external to the AI—influences trust, and how decision stakes and outcome favourability moderate this relationship. Participants (N=192) engaged with AI-based decision-making scenarios operationalising varying loci of causality, stakes, and favourability, evaluating their trust in each AI. We find that internal attributions foster lower trust as participants perceive the AI to have high autonomy and decision-making responsibility. Conversely, external attributions portray the AI as merely "a tool" processing data, reducing its perceived agency and distributing responsibility, thereby boosting trust. Moreover, stakes moderate this relationship—external attributions foster even more trust in lower-risk, low-stakes scenarios. Our findings establish causal attribution as a crucial yet underexplored determinant of trust in AI, highlighting the importance of accounting for it when researching trust dynamics.2025SPSaumya Pareek et al.University of Melbourne, School of Computing and Information SystemsExplainable AI (XAI)AI Ethics, Fairness & AccountabilityPrivacy by Design & User ControlCHI
Responsibility Attribution in Human Interactions with Everyday AI Systems
How do individuals perceive AI systems as responsible entities in everyday collaborations between humans and AI? Drawing on psychological literature from attribution theory, praise-blame asymmetries, and negativity bias, this study investigated the effects of perspective (actor vs observer) and outcome favorability (positive vs negative) on how participants (N=321) attributed responsibility for outcomes resulting from shared human-AI decision-making. Both Bayesian modelling and reflexive thematic analysis of results revealed that, overall, participants were more likely to attribute greater responsibility to the AI systems. When the outcome was positive, participants were more likely to ascribe shared responsibility to both human and AI systems, rather than either separately. When the outcome was negative, participants were more likely to attribute responsibility to a single entity, but not consistently the human or the AI. These results build on the understanding of how individuals cast blame and praise for shared interactions involving AI systems.
2025 · Joe Brailsford et al. (The University of Melbourne, School of Computing and Information Systems) · CHI · Tags: AI Ethics, Fairness & Accountability; Algorithmic Fairness & Bias

Affective Interactions in Therapeutic Virtual Reality: A Critical Perspective
Therapeutic virtual reality (VR) can serve as a tool, a platform, or an experiential imaginary in healthcare settings. Emotions and affective interaction are integral to care needs and experiences in digital healthcare, yet they remain under-investigated in therapeutic VR. We reflect on four cases involving therapeutic VR to critically examine the function and value of affective interaction. Through a synthesis of the cases, we identify how affective interaction can enhance or diminish healthcare outcomes and even cause potential harm. We offer five recommendations for the design and evaluation of therapeutic VR that challenge assumptions about: (1) knowledge holders and knowledge co-production in design, (2) the hyper-visibility of the medical gaze and the invisibility of affective experiences, (3) the diverse utilities of VR, (4) weaving the assessment of affective benefits and harms into the evaluation of therapeutic VR, and (5) the implementation of therapeutic VR in collaboration with caregivers.
2025 · Naseem Ahmadpour et al. (The University of Sydney, Affective Interactions lab, School of Architecture, Design and Planning) · CHI · Tags: Brain-Computer Interface (BCI) & Neurofeedback; VR Medical Training & Rehabilitation; Mental Health Apps & Online Support Communities

“They’re Scamming Me”: How Children Experience and Conceptualize Harm in Game Monetization
Regulatory shifts are increasingly placing the onus on online service providers, such as digital game developers and platforms, to ensure that their services do not harm children. This creates an urgent need to examine how children experience and conceptualize harm in digital contexts, which may differ from adult-driven perceptions of harm. In this paper, we present the results of a study of children’s experiences with game monetization, which included a ‘think-aloud’ method in which children were given an AU$20 voucher to spend. Through our participants’ (aged 7-14) vernacular of feeling ‘scammed’ or ‘tricked’, we argue that children experience harm principally through being misled or deceived by monetization features, rather than through what parents perceive as a misattribution of value toward digital items or overspending. Based on these results, we make game design recommendations to minimize children’s harmful experiences with game monetization strategies.
2025 · Taylor Hardwick et al. (The University of Sydney) · CHI · Tags: Universal & Inclusive Design; Gamification Design; Game Accessibility

Peek into the ‘White-Box’: A Field Study on Bystander Engagement with Urban Robot Uncertainty
Uncertainty is inherent in the autonomous decision-making of robots. Involving humans in resolving this uncertainty not only helps robots mitigate it but is also crucial for improving human-robot interactions. However, in public urban spaces filled with unpredictability, robots often face heightened uncertainty without direct human collaborators. This study investigates how robots can engage bystanders for assistance in public spaces when encountering uncertainty, and examines how these interactions shape bystanders' perceptions of and attitudes towards robots. We designed and tested a speculative ‘peephole’ concept that engages bystanders in resolving urban robot uncertainty. Our design is guided by considerations of non-intrusiveness and of eliciting initiative in an implicit manner, reflecting bystanders' unique role as non-obligated participants in relation to urban robots. Drawing on field study findings, we highlight the potential of involving bystanders in mitigating urban robots' technological imperfections, both to address operational challenges and to foster public acceptance. Furthermore, we offer design implications for encouraging bystanders' involvement in mitigating these imperfections.
2025 · Xinyan Yu et al. (School of Architecture, Design and Planning, The University of Sydney, Design Lab) · CHI · Tags: Human-Robot Collaboration (HRC); Community Engagement & Civic Technology; Technology Ethics & Critical HCI

Raising Awareness of Location Information Vulnerabilities in Social Media Photos using LLMs
Location privacy leaks can lead to unauthorised tracking, identity theft, and targeted attacks, compromising personal security and privacy. This study explores LLM-powered location privacy leaks associated with photo sharing on social media, focusing on user awareness, attitudes, and opinions. We developed and introduced an LLM-powered location privacy intervention app to 19 participants, who used it over a two-week period. The app prompted users to reflect on potential privacy leaks that a widely available LLM could easily detect, such as visual landmarks and cues that could reveal their location, and provided ways to conceal this information. Through in-depth interviews, we found that our intervention effectively increased users’ awareness of location privacy and the risks posed by LLMs. It also encouraged users to consider the importance of maintaining control over their privacy data and sparked discussions about the future of location privacy-preserving technologies. Based on these insights, we offer design implications to support the development of future user-centred, location privacy-preserving technologies for social media photos.
2025 · Ying Ma et al. (The University of Melbourne, School of Computing and Information Systems) · CHI · Tags: Human-LLM Collaboration; Privacy by Design & User Control; Privacy Perception & Decision-Making

Spatial Heterogeneity in Distributed Mixed Reality Collaboration
Collaborative Mixed Reality (MR) enables embodied meetings for distributed collaborators working across a variety of locations. However, providing a coherent experience for all users regardless of the spatial configurations of their respective physical environments is a central challenge. We present the Spatial Heterogeneity Framework, which breaks the problem into four core components: the activity zones, heterogeneity ladder, blended proxemics, and MR solutions matrix. We explain the interplay between these components, demonstrating their interconnectivity via a case study. Our framework enables researchers to navigate differences and trade-offs between solutions for distributed MR collaboration. It also supports designers to think about the role of space, technology, and social behaviours in MR collaboration. Ultimately, our contributions advance the field by conceptualising the challenges of spatial heterogeneity and strategies to overcome them.
2025 · Emily Wong et al. (The University of Sydney, School of Computer Science; The University of Melbourne, School of Computing and Information Systems) · CHI · Tags: Mixed Reality Workspaces; Context-Aware Computing

Estimating the Effects of Encumbrance and Walking on Mixed Reality Interaction
This paper investigates the effects of two situational impairments, encumbrance (i.e., carrying a heavy object) and walking, on interaction performance in canonical mixed reality tasks. We built Bayesian regression models of movement time, pointing offset, error rate, and throughput for a target acquisition task, and of throughput, uncorrected error rate (UER), and corrected error rate (CER) for a text entry task, to estimate these effects. Our results indicate that a 1.0 kg encumbrance increases selection movement time by 28%, decreases text entry throughput by 17%, and increases UER by 50%, but does not affect pointing offset. Walking led to a 63% increase in ray-cast movement time and a 51% reduction in text entry throughput; it also increased selection pointing offset by 16%, ray-cast pointing offset by 17%, and error rate by 8.4%. The interaction of 1.0 kg encumbrance and walking resulted in a 112% increase in ray-cast movement time. Our findings enhance the understanding of the effects of encumbrance and walking on mixed reality interaction and contribute to the accumulating knowledge of situational impairments research in mixed reality.
2025 · Tinghui Li et al. (University of Sydney, School of Computer Science) · CHI · Tags: Full-Body Interaction & Embodied Input; Mixed Reality Workspaces

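For readers unfamiliar with the modelling approach, the sketch below shows one way a Bayesian regression of log movement time on encumbrance and walking could be specified in PyMC. The toy data, priors, and variable names are assumptions for illustration, not the paper's model specification.

```python
# Illustrative Bayesian regression on the log scale, so exp(coefficient)
# reads as a multiplicative effect (e.g. exp(b) = 1.28 is a +28% change).
import numpy as np
import pymc as pm

# toy data: encumbered (0/1), walking (0/1), movement time in seconds
enc  = np.array([0, 0, 1, 1, 0, 1, 0, 1])
walk = np.array([0, 1, 0, 1, 0, 0, 1, 1])
mt   = np.array([0.9, 1.4, 1.1, 2.0, 0.8, 1.2, 1.5, 2.1])

with pm.Model():
    a      = pm.Normal("intercept", 0.0, 1.0)
    b_enc  = pm.Normal("b_encumbrance", 0.0, 0.5)
    b_walk = pm.Normal("b_walking", 0.0, 0.5)
    b_int  = pm.Normal("b_interaction", 0.0, 0.5)
    sigma  = pm.HalfNormal("sigma", 0.5)
    mu = a + b_enc * enc + b_walk * walk + b_int * enc * walk
    pm.Normal("log_mt", mu=mu, sigma=sigma, observed=np.log(mt))
    trace = pm.sample(1000, tune=1000, chains=2, random_seed=1)
```

Modelling on the log scale is a common choice for response times because it keeps predictions positive and makes percentage-style effect statements, like those reported above, fall directly out of the coefficients.
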
Juggling Extra Limbs: Identifying Control Strategies for Supernumerary Multi-Arms in Virtual Reality
Using supernumerary multi-limbs for complex tasks is a growing research focus in Virtual Reality (VR) and robotics. Understanding how users integrate extra limbs with their own to achieve shared goals is crucial for developing efficient supernumeraries. This paper presents an exploratory user study (N=14) investigating strategies for controlling virtual supernumerary limbs with varying autonomy levels in VR object manipulation tasks. Using a Wizard-of-Oz approach to simulate semi-autonomous limbs, we collected both qualitative and quantitative data. Results show participants adapted control strategies based on task complexity and system autonomy, affecting task delegation, coordination, and body ownership. Based on these findings, we propose guidelines—commands, demonstration, delegation, and labeling instructions—to improve multi-limb interaction design by adapting autonomy to user needs and fostering better context-aware experiences.
2025 · Hongyu Zhou et al. (The University of Sydney, School of Computer Science) · CHI · Tags: Shape-Changing Interfaces & Soft Robotic Materials; Full-Body Interaction & Embodied Input

Wearable AR in Everyday Contexts: Insights from a Digital Ethnography of YouTube Videos
With growing investment in consumer augmented reality (AR) headsets and glasses, wearable AR is moving from niche applications to everyday use. However, current research primarily examines AR in controlled settings, offering limited insight into its use in real-world daily life. To address this gap, we adopt a digital ethnographic approach, analysing 27 hours of footage across 112 YouTube videos featuring early adopters. These videos capture usage ranging from continuous periods of hours to intermittent use over weeks and months. Our analysis shows that wearable AR is currently used primarily for media consumption and gaming. While productivity is a desired use case, frequent use is constrained by current hardware limitations and the nascent application ecosystem. Users seek continuity in their digital experience, desiring functionality similar to that of smartphones, tablets, or computers. We propose implications for everyday AR development that promote adoption while ensuring safe, ethical, and socially aware integration into daily life.
2025 · Tram Thi Minh Tran et al. (School of Architecture, Design and Planning, The University of Sydney, Design Lab) · CHI · Tags: AR Navigation & Context Awareness; Mixed Reality Workspaces; Context-Aware Computing