Haru in the Care Network: Stakeholder Perspectives on Privacy with Social Robots in Pediatrics
Social robots are beginning to be introduced as technologies to support the collective networks involved in pediatric treatment, but few studies have examined children's perceptions of privacy with robots in hospitals. Through a mixed-method approach, we introduced hypothetical vignettes and engaged in discussion with 15 youth who are either receiving cancer treatment or are in remission (ages 6-25), 11 of their parents, and 5 of 8 of their clinical staff to learn how stakeholders in pediatric oncology discuss privacy concerns in child-robot interactions. Our thematic analysis shows how stakeholders perceive robots as social, non-authoritative extensions of the hospital's care network. As 1) mediators of social interaction among various stakeholders, 2) companions for children, and 3) informational tools for clinicians when consent is given by the family, social robots can maximize their social utility within care systems while critically engaging with the comfort and privacy preferences of stakeholders. We emphasize that assistive technologies in pediatrics should be co-designed within communities to identify appropriate roles and return agency to stakeholders as they navigate the blurry boundaries of privacy in healthcare.
2025 · Leigh M Levinson et al. · Perspectives on Data Privacy · CSCW
Animal Interaction with Autonomous Mobility Systems: Designing for Multi-Species Coexistence
Autonomous mobility systems increasingly operate in environments shared with animals, from urban pets to wildlife. However, their design has largely focused on human interaction, with limited understanding of how non-human species perceive, respond to, or are affected by these systems. Motivated by research in Animal-Computer Interaction (ACI) and more-than-human design, this study investigates animal interactions with autonomous mobility through a multi-method approach combining a scoping review (45 articles), online ethnography (39 YouTube videos and 11 Reddit discussions), and expert interviews (8 participants). Our analysis surfaces five key areas of concern: Physical Impact (e.g., collisions, failures to detect), Behavioural Effects (e.g., avoidance, stress), Accessibility Concerns (particularly for service animals), Ethics and Regulations, and Urban Disturbance. We conclude with design and policy directions aimed at supporting multi-species coexistence in the age of autonomous systems. This work underscores the importance of incorporating non-human perspectives to ensure safer, more inclusive futures for all species.
2025 · Tram Thi Minh Tran et al. · Ubiquitous Computing · Community Engagement & Civic Technology · Human-Nature Relationships (More-than-Human Design) · AutoUI
TeamVision: An AI-powered Learning Analytics System for Supporting Reflection in Team-based Healthcare Simulation
Healthcare simulations help learners develop teamwork and clinical skills in a risk-free setting, promoting reflection on real-world practices through structured debriefs. However, despite its potential, video is hard to use in practice, leaving a gap in providing concise, data-driven summaries to support effective debriefing. Addressing this, we present TeamVision, an AI-powered multimodal learning analytics (MMLA) system that captures voice presence, automated transcriptions, body rotation, and positioning data, offering educators a dashboard to guide debriefs immediately after simulations. We conducted an in-the-wild study with 56 teams (221 students) and recorded debriefs led by six teachers using TeamVision. Follow-up interviews with 15 students and five teachers explored perceptions of its usefulness, accuracy, and trustworthiness. This paper examines: i) how TeamVision was used in debriefing, ii) what educators found valuable and challenging, and iii) perceptions of its effectiveness. Results suggest TeamVision enables flexible debriefing and highlights the challenges and implications of using AI-powered systems in healthcare simulation.
2025 · Vanessa Echeverria et al. · Monash University, Department of Human Centred Computing · Intelligent Tutoring Systems & Learning Analytics · Telemedicine & Remote Patient Monitoring · Surgical Assistance & Medical Training · CHI
AppAgent: Multimodal Agents as Smartphone Users
Recent advancements in large language models (LLMs) have led to the creation of intelligent agents capable of performing complex tasks. This paper introduces a novel LLM-based multimodal agent framework designed to operate smartphone applications. Our framework allows the agent to mimic human-like interactions such as tapping and swiping through a simplified action space, eliminating the need for system back-end access and enhancing its versatility across various apps. Central to the agent's functionality is an innovative in-context learning method, where it either autonomously explores or learns from human demonstrations, creating a knowledge base used to execute complex tasks across diverse applications. We conducted extensive testing with our agent on over 50 tasks spanning 10 applications, ranging from social media to sophisticated image editing tools. Additionally, a user study confirmed the agent's superior performance and practicality in handling a diverse array of high-level tasks, demonstrating its effectiveness in real-world settings. Our project page is available at https://appagent-official.github.io/.
2025 · Chi Zhang et al. · Westlake University, School of Engineering · Human-LLM Collaboration · CHI
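The "simplified action space" the AppAgent abstract describes — taps and swipes on numbered UI elements rather than raw coordinates or back-end calls — can be sketched minimally as follows. All names, the text command format, and the parser are illustrative assumptions, not AppAgent's actual API:

```python
# Minimal sketch of a simplified smartphone action space of the kind the
# AppAgent abstract describes. Every name here is illustrative, not taken
# from the paper's implementation.
from dataclasses import dataclass


@dataclass
class Tap:
    element_id: int          # numbered UI element, not raw pixel coordinates


@dataclass
class Swipe:
    element_id: int
    direction: str           # "up", "down", "left", "right"
    distance: str            # "short", "medium", "long"


@dataclass
class TypeText:
    text: str


def parse_action(line: str):
    """Parse an LLM output line like 'tap(3)' or 'swipe(5, up, short)'
    into a structured action. Unrecognised lines return None."""
    line = line.strip()
    if line.startswith("tap(") and line.endswith(")"):
        return Tap(int(line[4:-1]))
    if line.startswith("swipe(") and line.endswith(")"):
        el, direction, distance = [p.strip() for p in line[6:-1].split(",")]
        return Swipe(int(el), direction, distance)
    if line.startswith("text(") and line.endswith(")"):
        return TypeText(line[5:-1].strip('"'))
    return None
```

Constraining the model to a small, element-indexed vocabulary like this is what lets an agent operate arbitrary apps without system back-end access: the executor only ever needs to resolve an element number to a screen region.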
AEGIS: Human Attention-based Explainable Guidance for Intelligent Vehicle Systems
Improving decision-making capabilities in Autonomous Intelligent Vehicles (AIVs) has been a heavily researched topic in recent years. Despite advancements, training machines to capture regions of interest for comprehensive scene understanding, as human perception and reasoning do, remains a significant challenge. This study introduces a novel framework, Human Attention-based Explainable Guidance for Intelligent Vehicle Systems (AEGIS). AEGIS uses a pre-trained human attention model, built from eye-tracking data, to guide reinforcement learning (RL) models to identify critical regions of interest for decision-making. By collecting 1.2 million frames from 20 participants across six scenarios, AEGIS pre-trains a model to predict human attention patterns. The learned human attention guides the RL agent's focus on task-relevant objects, prioritizes critical instances, enhances robustness in unseen environments, and leads to faster learning convergence. This approach improves interpretability by making machine attention more comparable to human attention, thus enhancing the RL agent's performance in diverse driving scenarios. The code is available at https://github.com/ALEX95GOGO/AEGIS.
2025 · Zhuoli Zhuang et al. · University of Technology Sydney, School of Computer Science, FEIT · Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS) · Eye Tracking & Gaze Interaction · Explainable AI (XAI) · CHI
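One common way to realise the kind of attention guidance the AEGIS abstract describes is to add a term to the training loss that penalises divergence between the agent's attention map and the predicted human attention. The toy sketch below illustrates that idea only; the function names, the KL-based form, and the weighting are assumptions, not the paper's actual objective:

```python
# Toy sketch of attention-guided training in the spirit of the AEGIS
# abstract: the task loss is augmented with a term that pulls the agent's
# attention map toward a (pre-trained) human-attention prediction.
# The loss form and all names are illustrative assumptions.
import math


def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) for two discrete attention maps given as equal-length
    lists of non-negative weights; both maps are normalised first."""
    sp, sq = sum(p), sum(q)
    p = [x / sp for x in p]
    q = [x / sq for x in q]
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))


def guided_loss(task_loss, agent_attention, human_attention, weight=0.1):
    """Task loss plus a weighted attention-alignment penalty: zero extra
    cost when the agent already attends where humans do."""
    return task_loss + weight * kl_divergence(human_attention, agent_attention)
```

When the two maps agree, the penalty vanishes and training reduces to the plain RL objective; the `weight` hyperparameter trades off task reward against imitating human gaze.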
A Longitudinal Study on the Effects of Circadian Fatigue on Sound Source Identification and Localization using a Heads-Up Display
Circadian fatigue, largely caused by sleep deprivation, significantly diminishes alertness and situational awareness. This issue becomes critical in environments where auditory awareness — such as responding to verbal instructions or localizing alarms — is essential for performance and safety. While head-mounted displays have demonstrated potential in enhancing situational awareness through visual cues, their effectiveness in supporting sound localization under the influence of circadian fatigue remains under-explored. This study addresses this knowledge gap through a longitudinal study (N=19) conducted over 2–4 months, tracking participants' fatigue levels through daily assessments. Participants were called in to perform non-line-of-sight sound source identification and localization tasks in a virtual environment under high- and low-fatigue conditions, both with and without head-up display assistance. The results show task-dependent effects of circadian fatigue. Unexpectedly, reaction times were shorter across all tasks under high-fatigue conditions. Yet, in sound localization, where precision is key, the HUD offered the greatest performance enhancement by reducing pointing error. The results suggest the auditory channel is a robust means of enhancing situational awareness and provide support for incorporating spatial audio cues and HUDs as standard features in augmented reality platforms for fatigue-prone scenarios.
2025 · Alexander G Minton et al. · University of Technology Sydney, School of Computer Science, FEIT · Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS) · Eye Tracking & Gaze Interaction · CHI
Nudging with Narrative Visualization: Communicating to a Young Adult Audience in the Pandemic
Effective narrative visualization communicates information by integrating storytelling and data visualization in a comprehensible, compelling manner. The compelling aspect of effective narrative visualization consequently gives it the potential to shift the attitude of an audience. However, there is much to understand about how narrative visualization can best be designed to influence target audiences. This paper focuses on an empirical experiment where we examined the effects of two communication strategies — anthropomorphism and personal identification — on a young adult audience. In particular, we wanted to understand which strategy, when integrated into narrative visualization, can nudge a specific audience's attitude towards greater consideration in the context of the COVID-19 pandemic. Our results indicated that the personal identification communication strategy was the most successful in nudging participants. This study contributes a better grasp of how technologies such as narrative visualization, using different communication strategies, can deliver more targeted messaging.
2024 · Nina Errey et al. · Session 3d: Teens in the Digital Age: Safety, Creativity, and Well-Being · CSCW
Encouraging Bystander Assistance for Urban Robots: Introducing Playful Robot Help-Seeking as a Strategy
Robots in urban environments will inevitably encounter situations beyond their capabilities (e.g., delivery robots unable to press traffic light buttons), necessitating bystander assistance. These spontaneous collaborations pose challenges distinct from traditional human-robot collaboration, requiring design investigation and tailored interaction strategies. This study investigates playful help-seeking as a strategy to encourage such bystander assistance. We compared our designed playful help-seeking concepts against two existing robot help-seeking strategies: verbal speech and emotional expression. To assess these strategies and their impact on bystanders' experience and attitudes towards urban robots, we conducted a virtual reality evaluation study with 24 participants. Playful help-seeking enhanced people's willingness to help robots, a tendency more pronounced in scenarios requiring greater physical effort. Verbal help-seeking was perceived as less polite and raised stronger discomfort. Emotional expression help-seeking elicited empathy while leading to lower cognitive trust. The triangulation of quantitative and qualitative results highlights considerations for robot help-seeking from bystanders.
2024 · Mengxia Yu et al. · Social Robot Interaction · Human-Robot Collaboration (HRC) · DIS
Shared Bodily Fusion: Leveraging Inter-Body Electrical Muscle Stimulation for Social Play
Traditional games like "Tag" rely on shared control via inter-body interactions (IBIs) – touching, pushing, and pulling – that foster emotional and social connection. Digital games largely limit IBIs, with players using their bodies as input to control virtual avatars instead. Our "Shared Bodily Fusion" approach addresses this by fusing players' bodies through a mediating computer, creating a shared input and output system. We demonstrate this approach with "Hidden Touch", a game where a novel social electrical muscle stimulation system transforms touch (input) into muscle actuations (output), facilitating IBIs. Through a study (n=27), we identified three player experience themes. Informed by these findings and our design process, we mapped their trajectories across our three experiential spaces – threshold, tolerance, and precision – which collectively form our design framework. This framework facilitates the creation of future digital games where IBIs are intrinsic, ultimately promoting the many benefits of social play.
2024 · Rakesh Patibanda et al. · Electrical Muscle Stimulation (EMS) · Serious & Functional Games · Multiplayer & Social Games · DIS
"This is the kind of experience I want to have": Supporting the experiences of queer young men on social platforms through design
Queer young men (similar to others in the LGBTQ+ community) depend heavily on social platforms, but their use can often be problematic. Their needs are often not adequately considered in the design of general platforms, and they can be exposed to intra-community harms on LGBTQ+-specific platforms such as dating apps. To explore how social platform design could be improved to better support the needs of queer young men, we conducted a co-design study. We recruited 13 queer men working in technology design to generate new concepts for social platform features. We then refined these concepts and evaluated them in group sessions with end users, a different cohort of 15 queer young men. Here we present mockups of the concepts and findings from evaluations. Our findings show specific ways that providing more agency to social platform users could improve their experiences, and we discuss implications for design.
2024 · Tommaso Armstrong et al. · Social Platform Design & User Behavior · Gender & Race Issues in HCI · LGBTQ+ Community Technology Design · DIS
Go-Go Biome: Evaluation of a Casual Game for Gut Health Engagement and Reflection
Experts emphasise that maintaining a healthy gut microbial balance requires the public to understand factors beyond diet, such as physical activity, lifestyle, and other real-world influences. Games as experiential systems are known to foster playful engagement and reflection. We propose a novel approach to promoting engagement with, and reflection on, gut health through the design of the Go-Go Biome game. The game simulates the interplay between friendly and unfriendly gut microbes, encouraging real-world activity engagement for gut-microbial balance through interactive visuals, unstructured play mechanics, and reflective design principles. A field study with 14 participants revealed that important facets of our game design led to awareness, playful visualisation, and reflection on factors influencing gut health. Our findings suggest four design lenses – bio-temporality, visceral conversations, wellness comparison, and inner discovery – to aid future playful design explorations that foster gut health engagement and reflection.
2024 · Nandini Pasumarthy et al. · RMIT University · Serious & Functional Games · Diet Tracking & Nutrition Management · CHI
A Design Space for Intelligent and Interactive Writing Assistants
In our era of rapid technological advancement, the research landscape for writing assistants has become increasingly fragmented across various research communities. We seek to address this challenge by proposing a design space as a structured way to examine and explore the multidimensional space of intelligent and interactive writing assistants. Through community collaboration, we explore five aspects of writing assistants: task, user, technology, interaction, and ecosystem. Within each aspect, we define dimensions and codes by systematically reviewing 115 papers while leveraging the expertise of researchers in various disciplines. Our design space aims to offer researchers and designers a practical tool to navigate, comprehend, and compare the various possibilities of writing assistants, and aid in the design of new writing assistants.
2024 · Mina Lee et al. · Microsoft Research · Human-LLM Collaboration · AI-Assisted Creative Writing · Creative Collaboration & Feedback Systems · CHI
Grand Challenges in SportsHCI
The field of Sports Human-Computer Interaction (SportsHCI) investigates interaction design to support a physically active human being. Despite growing interest and dissemination of SportsHCI literature over the past years, many publications still focus on solving specific problems in a given sport. We believe in the benefit of generating fundamental knowledge for SportsHCI more broadly to advance the field as a whole. To achieve this, we aim to identify the grand challenges in SportsHCI, which can help researchers and practitioners in developing a future research agenda. Hence, this paper presents a set of grand challenges identified in a five-day workshop with 22 experts who have previously researched, designed, and deployed SportsHCI systems. Addressing these challenges will drive transformative advancements in SportsHCI, fostering better athlete performance, athlete-coach relationships, and spectator engagement, as well as immersive experiences for recreational sports and exercise motivation, and ultimately improving human well-being.
2024 · Don Samitha Elvitigala et al. · Monash University · Game UX & Player Behavior · Serious & Functional Games · Mental Health Apps & Online Support Communities · CHI
My Eyes Speak: Improving Perceived Sociability of Autonomous Vehicles in Shared Spaces Through Emotional Robotic Eyes
The ability of autonomous vehicles (AVs) to interact socially with pedestrians has a significant impact on their integration into urban traffic. This is particularly important for vehicle-pedestrian shared spaces due to increased social requirements in comparison to vehicular roads. Current pedestrian experience in shared spaces suffers from negative attitudes towards AVs and the consequently low acceptability of AVs in these spaces. HRI work shows that the acceptability of robots in public spaces can be positively impacted by their perceived sociability (i.e., possessing social skills), which can be enhanced by their ability to express emotions. Inspired by this approach, we follow a systematic process to design emotional expressions for AVs using the headlight ("eye") area and investigate their impact on the perceived sociability of AVs in shared spaces, by conducting expert focus groups (N=12) and an online video-based user study (N=106). Our findings confirm that the perceived sociability of AVs can be enhanced by emotional expressions indicated through emotional eyes. We further discuss implications of our findings for improving pedestrian experience and attitudes in shared spaces and highlight opportunities to use AVs' emotional expressions as a new external communication strategy for future research.
2023 · Yiyuan Wang et al. · External HMI (eHMI) — Communication with Pedestrians & Cyclists · MobileHCI
Towards designing for everyday embodied remembering: Findings from a diary study
Our bodies play an important part in our remembering practices, for example when we can remember passwords by typing, even if we cannot verbalise them. An increasing number of technologies are being developed to support remembering. However, so far they do not seem to have taken the opportunity to support remembering through bodily movements. To better understand how to design for such embodied remembering, we conducted a diary study with 12 participants who recorded their embodied remembering experiences in everyday life over a three-week period. Our thematic analysis of the diaries and interviews led to the creation of a framework that helps understand embodied remembering experiences (ERXs) based on the level of skilled and conscious movements used. We describe how this ERX framework could help with the design of technologies to support embodied remembering.
2023 · Nathalie Overdevest et al. · Full-Body Interaction & Embodied Input · Human Pose & Activity Recognition · DIS
A Neural Network-based Low-cost Soft Sensor for Touch Recognition and Deformation Capture
We propose a novel, cost-effective soft sensor capable of detecting contact force, multiple touch points, and reflecting sensor interaction in real-time with a 3D virtual surface representation. Our fabrication process has been optimized for cost efficiency through careful material selection, utilization of automated machinery, and low-cost hardware. The sensor can be easily replicated without the need for complex laboratory equipment. The sensor employs trained neural network models for real-time signal translation into localization, force measurement, and deformation mapping. We have also developed an efficient data collection system that captures accurate 2D localization, force measurement, and 3D surface data to generate a high-quality pre-validated data set. This data set is filtered using prior knowledge before being fed to two neural network models. Our interactive prototype demonstrates the stability and accuracy of the low-cost soft sensor, delivering reliable results in both single-point and multi-point contact scenarios.
2023 · Yifan Fan et al. · Shape-Changing Interfaces & Soft Robotic Materials · Computational Methods in HCI · DIS
A Study of Creative Development with an IoT-based Audiovisual System: Creative Strategies and Impacts for System Design
In this paper we describe a qualitative study investigating how artists work with a scalable and distributed audio-visual installation system that utilises IoT technology. With no prior experience of the system, the invited artists incorporated the new technology according to a creative brief for a public performance. We examined how they (i) built an understanding of the technology's affordances, (ii) refined their creative goals, and (iii) deployed collaborative strategies to achieve creative outcomes. We examine how the artists worked from the examples we provided to integrate our audio-visual system and develop their creative work. We identify three distinct creative strategies and use these to suggest ways that the design of examples, presets and ready-made configurations can be successfully integrated into interfaces for new creative technologies.
2023 · Kurt Mikolajczyk et al. · Context-Aware Computing · Digital Art Installations & Interactive Performance · C&C
Classroom Dandelions: Visualising Participants' Position, Trajectories and Body Orientation Augments Teachers' Sensemaking
Despite the digital revolution, physical space remains the site for teaching and learning embodied knowledge and skills. Both teachers and students must develop spatial competencies to effectively use classroom spaces, enabling fluid verbal and non-verbal interaction. While video permits rich activity capture, it provides no support for quickly seeing activity patterns that can assist learning. In contrast, position tracking systems permit the automated modelling of spatial behaviour, opening new possibilities for feedback. This paper introduces the design rationale for "Dandelion Diagrams" that integrate participant location, trajectory and body orientation over a variable period. Applied in two authentic teaching contexts (a science laboratory and a nursing simulation), we show how heatmaps showing only teacher/student location led to misinterpretations that were resolved by overlaying Dandelion Diagrams. Teachers also identified a variety of ways in which the diagrams could aid professional development. We conclude that Dandelion Diagrams assisted sensemaking, but discuss the ethical risks of over-interpretation.
2022 · Gloria Fernandez-Nieto et al. · University of Technology Sydney · Visualization Perception & Cognition · Collaborative Learning & Peer Teaching · User Research Methods (Interviews, Surveys, Observation) · CHI
“It’s A Drag”: Exploring How To Improve Parents’ Experiences of Managing Mobile Device Use During Family Time
Research reveals that managing mobile device use during family time can be a source of stress for parents. In particular, it can create conflict in their relationships. As such, there is a need to understand how these problematic experiences might be addressed by new approaches to technology design. This paper presents a study in which 14 parents were prompted to reflect on how their experiences and relationships could be improved by four design proposals. These proposals resulted from ideation workshops involving 12 professional designers, and were presented as scenario-based storyboards during interviews. Our interviews revealed three design approaches that appealed to parents. We describe seven benefits that parents imagined these approaches would have, and discuss ways in which they should be further explored. Thus, we contribute to a more complete understanding of how technology design might better support parents’ aspirations for how devices are used within the family.
2022 · Eleanor Chin Derix et al. · UTS · Smart Home Interaction Design · Aging-in-Place Assistance Systems · CHI
What Can Analytics for Teamwork Proxemics Reveal About Positioning Dynamics In Clinical Simulations?
Effective teamwork is critical to improve patient outcomes in healthcare. However, achieving this capability requires that pre-service nurses develop the spatial abilities they will need in their clinical placements, such as: learning when to remain close to the patient and to other team members; positioning themselves correctly at the right time; and deciding on specific team formations (e.g. face-to-face or side-by-side) to enable effective interaction or avoid disrupting clinical procedures. However, positioning dynamics are ephemeral and can easily become occluded by the multiple tasks nurses have to accomplish. Digital traces automatically captured by indoor positioning sensors can be used to address this problem for the purpose of improving nurses’ reflection, learning and professional development. This paper presents a modelling approach that transforms nurses’ low-level position traces into higher-order proxemics constructs in simulation-based teamwork training. To illustrate our approach, we conducted an in-the-wild study with 55 undergraduate students and five educators from whom positioning traces were captured in eleven authentic nursing education classes. Low-level x-y data was used in models of three proxemics constructs: i) co-presence in interactional spaces, ii) socio-spatial formations (i.e. f-formations), and iii) presence in spaces of interest. Through a number of vignettes, we illustrate how indoor positioning analytics can be used to address questions that educators and researchers have about teamwork in healthcare simulation settings.
2021 · Gloria Fernandez-Nieto et al. · Computer-Supported Teamwork and Collaboration · CSCW