Walking with robots: Video analysis of human-robot interactions in transit spaces
The proliferation of robots in public spaces necessitates a deeper understanding of how these robots can interact with those they share the space with. In this paper, we present findings from video analysis of publicly deployed cleaning robots in a transit space (a major commercial airport), using their navigational troubles as a tool to document what robots currently lack in interactional competence. We demonstrate that these robots, while technically proficient, can disrupt the social order of a space due to their inability to understand core aspects of human movement: mutual adjustment to others, the significance of understanding social groups, and the purpose of different locations. In discussion we argue for exploring a new design space of movement: socially-aware movement. By developing strong concepts that treat movement as an interactional and collaborative accomplishment, we can create systems that better integrate into the everyday rhythms of public life.
2026 · Barry Brown et al. · Stockholm University · Social Robot Interaction · Human-Robot Collaboration (HRC) · Teleoperation & Telepresence · CHI
MarioChart: Autonomous Tangibles as Active Proxy Interfaces for Embodied Casual Data Exploration
We introduce the notion of an Active Proxy interface, i.e. tangible models as proxies for physical data referents, supporting interactive exploration of data through active manipulation. We realise an active proxy data visualisation system, "MarioChart", using robot carts relocating themselves on a tabletop, e.g., to align with their data referents in a map or other visual layout. We consider a casual-data exploration scenario involving a multivariate campus sustainability dataset, using scale models as proxies for their physical building data referents. Our empirical study (n=12) compares active proxy use with conventional tablet interaction, finding that our active proxy system enhances short-term spatial memory of data and enables faster completion of certain data analytic tasks. It shows no significant differences compared to traditional touchscreens in long-term memory, physical fatigue, mental workload, or user engagement. Our study offers an initial baseline for active proxy techniques and advances understanding of tangible interfaces in situated data visualisation.
2026 · Shaozhang Dai et al. · Monash University · Physical-Digital Hybrid Interaction · Tabletop Tangible Interaction · Interactive Data Visualization · CHI
A Critical Reflection on the Values and Assumptions in Data Visualization
Visualization has matured into an established research field, producing widely adopted tools, design frameworks, and empirical foundations. As the field has grown, ideas from outside computer science have increasingly entered visualization discourse, questioning the fundamental values and assumptions on which visualization research stands. In this short position paper, we examine a set of values that we see underlying the seminal works of Jacques Bertin, John Tukey, Leland Wilkinson, Colin Ware, and Tamara Munzner. We articulate three prominent values in these texts — universality, objectivity, and efficiency — and examine how these values permeate visualization tools, curricula, and research practices. We situate these values within a broader set of critiques that call for more diverse priorities and viewpoints. By articulating these tensions, we call for our community to embrace a more pluralistic range of values to shape our future visualization tools and guidelines.
2026 · Shehryar Saharan et al. · University of Toronto · Visualization Perception & Cognition · Research Ethics & Open Science · CHI
Opening Up Human-Robot Collaboration
As we see robots being deployed into new places in everyday life, questions arise about what the features of 'human-robot collaboration' (HRC) might look like. Recent work anticipates the need for more CSCW-oriented studies of HRC, given the increasing adoption of CSCW concepts in HRC research. We address this via an ethnomethodological study of encounters between pedestrians and food delivery robots on public streets. Our analysis, using video-recorded fragments of what happens on the street, demonstrates how passers-by manage walking trajectories in ways that account for robot actions, contributing an analysis that articulates how people accomplish practices of following and overtaking robots, passing by and crossing paths with them. We show that the picture of human-robot collaboration is drawn with 'unequal' asymmetries of action and intelligibility, with humans contributing considerable work to get something that looks like collaboration achieved. This raises questions for how we talk about collaboration in HRC from a CSCW perspective, and how this notion can and should be applied to groups and teams which include robots.
2025 · Stuart Reeves et al. · Human-AI (and Robot!) Collaboration · CSCW
InSituTale: Enhancing Augmented Data Storytelling with Physical Objects
Augmented data storytelling enhances narrative delivery by integrating visualizations with physical environments and presenter actions. Existing systems predominantly rely on body gestures or speech to control visualizations, leaving interactions with physical objects largely underexplored. We introduce augmented physical data storytelling, an approach enabling presenters to manipulate visualizations through physical object interactions. To inform this approach, we first conducted a survey of data-driven presentations to identify common visualization commands. We then conducted workshops with nine HCI/VIS researchers to collect mappings between physical manipulations and these commands. Guided by these insights, we developed InSituTale, a prototype that combines object tracking via a depth camera with Vision-LLM for detecting real-world events. Through physical manipulations, presenters can dynamically execute various visualization commands, delivering cohesive data storytelling experiences that blend physical and digital elements. A user study with 12 participants demonstrated that InSituTale enables intuitive interactions, offers high utility, and facilitates an engaging presentation experience.
2025 · Kentaro Takahira et al. · Interactive Data Visualization · Context-Aware Computing · Interactive Narrative & Immersive Storytelling · UIST
Exploring Assumptions about Sustainability: Towards a Constructive Framework for Action in Sustainable HCI
The global environmental crises continue to get worse, fast approaching various irreversible thresholds. While a vast array of approaches to solving sustainability problems are found under the umbrella of Sustainable HCI, their contributions are sometimes hard to compare. In this essay, we describe a set of assumptions that influence what is considered meaningful and important areas of sustainability research, along four dimensions of sustainability: 1) the depth and nature of the sustainability challenges; 2) the role of technological innovation in sustainability; 3) what gets defined as "externalities" to a design or system; and 4) the time perspective used to consider sustainability. We argue that what one assumes within each of these dimensions directly influences what one means by the term "sustainability", which is then reflected in the questions that are asked, the methods chosen, the proposed solutions and the developed systems. By describing these assumptions and some of their commensurate actions, we offer a framework that may enable members of the SHCI community to reflect on and better position their own work and that of others in the field. Our intention is for the framework to lead to better transparency and more constructive conversations about where we might collectively direct our efforts moving forward.
2025 · Minna K Laurell Thorslund et al. · KTH Royal Institute of Technology, Media Technology and Interaction Design · Sustainable HCI · Ecological Design & Green Computing · CHI
Unveiling High-dimensional Backstage: A Survey for Reliable Visual Analytics with Dimensionality Reduction
Dimensionality reduction (DR) techniques are essential for visually analyzing high-dimensional data. However, visual analytics using DR often faces unreliability, stemming from factors such as inherent distortions in DR projections. This unreliability can lead to analytic insights that misrepresent the underlying data, potentially resulting in misguided decisions. To tackle these reliability challenges, we review 133 papers that address the unreliability of visual analytics using DR. Through this review, we contribute (1) a workflow model that describes the interaction between analysts and machines in visual analytics using DR, and (2) a taxonomy that identifies where and why reliability issues arise within the workflow, along with existing solutions for addressing them. Our review reveals ongoing challenges in the field, whose significance and urgency are validated by five expert researchers. This review also finds that the current research landscape is skewed toward developing new DR techniques rather than toward their interpretation or evaluation; we discuss how the HCI community can contribute to broadening this focus.
2025 · Hyeon Jeon et al. · Seoul National University, Department of Computer Science and Engineering · Interactive Data Visualization · Uncertainty Visualization · Visualization Perception & Cognition · CHI
Uncovering How Scatterplot Features Skew Visual Class Separation
Multi-class scatterplots are essential for visually comparing data, such as examining class distributions in dimensionality reduction and evaluating classification models. Visual class separation (VCS) measures quantify human perception but are largely derived from and evaluated with datasets reflecting limited types of scatterplot features (e.g., data distribution, similar class densities). Quantitatively identifying which scatterplot features are influential to VCS tasks can enable more robust guidance for future measures. We analyze the alignment between VCS measures and people's perceptions of class separation through a crowdsourced study using 70 scatterplot features relevant to class separation. To cover a wide range of scatterplot features, we generated a set of multi-class scatterplots from 6,947 real-world datasets. Our results highlight that multiple combinations of features are needed to best explain VCS. From our analysis, we develop a composite feature model that identifies key scatterplot features for measuring VCS task performance.
2025 · S. Sandra Bae et al. · University of Colorado Boulder, ATLAS Institute · Interactive Data Visualization · Visualization Perception & Cognition · CHI
The People Behind the Robots: How Wizards Wrangle Robots in Public Deployments
In the Wizard-of-Oz study paradigm, human "wizards" perform not-yet-implemented system behavior, simulating, among other things, how autonomous robots could interact in public to see how unwitting bystanders respond. This paper analyzes a 60-minute video recording of two wizards in a public plaza who are operating two trash-collecting robots within their line of sight. We take an ethnomethodology and conversation analysis perspective to scrutinize interactions between the wizards and the people in the plaza, focusing on critical instances where one robot gets stuck and requires collaborative intervention by the wizards. Our analysis unpacks how the wizards deal with emergent problems by pushing one robot into the other, how they manage front and backstage interactions, and how they monitor the location of each other's robots. We discuss how scrutinizing the work of wizards can inform explorative Wizard-of-Oz paradigms, the design of multi-agent robot systems, and the operation of urban robots from a distance.
2025 · Hannah RM Pelikan et al. · Linköping University, Department of Culture and Society · Social Robot Interaction · Teleoperation & Telepresence · CHI
Identifying Critical Points of Departure for the Design of Self-Fashioning Technologies
Designing technologies that clothe, adorn, or are otherwise placed on the body raises questions concerning the role they will play in dressing ourselves. We situate self-fashioning – or the process through which we stylise and present our bodies – as a complex practice where a series of social, material, and contextual factors shape how we present ourselves. Informed by reflective discussions and projective design tools, we contribute three critical points of departure for self-fashioning technologies: (i) purposefully examining discomfort as an ongoing phenomenon, (ii) supporting mimesis and visibility as qualities to be negotiated, and (iii) envisioning the multiplicity of the body. We call for the design community to help devise fashionable technologies that are sensitive, caring, and responsive to the complexities of fashioning our bodies.
2025 · Rebeca Blanco Cardozo et al. · KTH Royal Institute of Technology · Haptic Wearables · Inclusive Design · CHI
TangibleNet: Synchronous Network Data Storytelling through Tangible Interactions in Augmented Reality
Synchronous data-driven storytelling with network visualizations presents significant challenges due to the complexity of real-time manipulation of network components. While existing research addresses asynchronous scenarios, there is a lack of effective tools for live presentations. To address this gap, we developed TangibleNet, a projector-based AR prototype that allows presenters to interact with node-link diagrams using double-sided magnets during live presentations. The design process was informed by interviews with professionals experienced in synchronous data storytelling and workshops with 14 HCI/VIS researchers. Insights from the interviews helped identify key design considerations for integrating physical objects as interactive tools in presentation contexts. The workshops contributed to the development of a design space mapping user actions to interaction commands for node-link diagrams. Evaluation with 12 participants confirmed that TangibleNet supports intuitive interactions and enhances presenter autonomy, demonstrating its effectiveness for synchronous network-based data storytelling.
2025 · Kentaro Takahira et al. · Department of Computer Science and Engineering, The Hong Kong University of Science and Technology · Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS) · Interactive Data Visualization · Data Storytelling · CHI
Honkable Gestalts: Why Autonomous Vehicles Get Honked At
This paper analyzes honks directed at autonomous vehicles (AVs) by other drivers. As honks often mark problems, this focus allows us to better understand the challenges that AVs face in real traffic. Performing a sequential video analysis of 63 honk incidents uploaded by Tesla beta testers on YouTube, we identify how problematic situations emerge as honkable Traffic Gestalts. We identify four types of situated problems with AV driving performance marked by other drivers' honks: they may wait too long, steer inconsistently, stop instead of going, and go too fast. We further show how a honk may be understandable as a warning, a nudge or a reprimand. Our work suggests designing honks for AVs to focus on relevant contexts, supported by developing bidirectional interfaces and audio analysis methods that consider the interplay of auditory and visual information in traffic.
2024 · Sergio Passero et al. · Automated Driving Interface & Takeover Design · V2X (Vehicle-to-Everything) Communication Design · AutoUI
Changing Lanes Toward Open Science: Openness and Transparency in Automotive User Research
We review the state of open science and the perspectives on open data sharing within the automotive user research community. Openness and transparency are critical not only for judging the quality of empirical research, but also for accelerating scientific progress and promoting an inclusive scientific community. However, there is little documentation of these aspects within the automotive user research community. To address this, we report two studies that identify (1) community perspectives on motivators and barriers to data sharing, and (2) how openness and transparency have changed in papers published at AutomotiveUI over the past 5 years. We show that while open science is valued by the community and openness and transparency have improved, overall compliance is low. The most common barriers are legal constraints and confidentiality concerns. Although research published at AutomotiveUI relies more on quantitative methods than research published at CHI, openness and transparency are not as well established. Based on our findings, we provide suggestions for improving openness and transparency, arguing that the motivators for open science must outweigh the barriers. All supporting materials are freely available at: https://osf.io/zdpek/
2024 · Patrick Ebel et al. · Research Ethics & Open Science · AutoUI
Beyond Text and Speech in Conversational Agents: Mapping the Design Space of Avatars
Conversational agents have gained widespread popularity due to their ability to simulate and sustain contextual conversations. Prior works predominantly focused on computational challenges. However, avatars — the representation of the agent — impact user interactions and perception of conversational agents' trustworthiness and usefulness. Despite their importance, we lack a holistic understanding of the conversational agent avatar design space. In this work, we address this gap by defining a categorization of 10 dimensions that is based on the analysis and iterative coding of 266 conversational agent papers from 160 venues spanning 2003 to the present. In addition, we built an interactive browser to facilitate exploration and interaction with these dimensions and their interrelationships. Our categorization lays the groundwork for researchers, designers, and practitioners to discern task-specific and contextual aspects of conversational agent avatar design. Our work fosters innovative ideas to facilitate new interactions with avatars by surfacing current patterns and highlighting open challenges.
2024 · Mashrur Rashik et al. · Conversational Chatbots · Agent Personality & Anthropomorphism · DIS
The Human Behind the Robot: Rethinking the Low Social Status of Service Robots
Robots in our society are commonly perceived as subordinate servants with a lower social status than humans. This often leads to humans prioritizing themselves during conflict situations. This becomes problematic when robots start to directly represent humans as proxies if people do not think of the human operator behind them. This could be considered a cognitive bias of human representation in HRI. To explore the extent of this problem, we conducted a user study featuring several conflict situations. Participants granted more priority to the robot when the human representation was visible. This paper explores the societal consequences and emerging inequities such as potentially deprioritizing humans by deprioritizing a robot in certain situations. Possible strategies to address potential negative consequences are discussed on a design level while acknowledging that a societal change in how we perceive and treat robots that represent humans might be necessary.
2024 · Franziska Babel et al. · Privacy by Design & User Control · Social Robot Interaction · Human-Robot Collaboration (HRC) · HRI
A Robot Jumping the Queue: Expectations About Politeness and Power During Conflicts in Everyday Human-Robot Encounters
Increasing encounters between people and autonomous service robots may lead to conflicts due to mismatches between human expectations and robot behaviour. This interactive online study (N = 335) investigated human-robot interactions at an elevator, focusing on the effect of communication and behavioural expectations on participants' acceptance and compliance. Participants evaluated a humanoid delivery robot primed as either submissive or assertive. The robot either matched or violated these expectations by using a command or appeal to ask for priority and then entering either first or waiting for the next ride. The results highlight that robots are less accepted if they violate expectations by entering first or using a command. Interactions were more effective if participants expected an assertive robot which then asked politely for priority and entered first. The findings emphasize the importance of power expectations in human-robot conflicts for the robot's evaluation and effectiveness in everyday situations.
2024 · Franziska Babel et al. · Linköping University · Social Robot Interaction · Empowerment of Marginalized Groups · CHI
Encountering Autonomous Robots in Public Streets
Robots deployed in public settings enter spaces that humans live and work in. Understandings of robots, such as those in HRI and beyond, tend to prioritise direct, obvious and deliberate interactions with robots. Yet this fails to identify the most common form of mundane, everyday response to robots in public, which ranges from the unobvious and subtle to virtually ignoring them. Drawing on a collection of video recordings, we show how public delivery robots encounter the lived-in spaces of urban streets both from a perspective of the social assembly of the physical environment and the socially organised nature of everyday street life that such robots are entering. Ultimately we show how such robots are effectively 'granted passage' through these spaces as a result of the mundane, practical work of the streets' human inhabitants. We demonstrate the importance of studying robots during their whole deployment, in the spaces that they enter, and challenge the current understanding of what 'counts' as human-robot interaction, highlighting that we may want to re-think who we consider as 'user' as well as the conceptualisation of human-robot interaction itself.
2024 · Hannah RM Pelikan et al. · Social Robot Interaction · Smart Cities & Urban Sensing · HRI
Feminist Human-Robot Interaction: Disentangling Power, Principles and Practice for Better, More Ethical HRI
Human-Robot Interaction (HRI) is inherently a human-centric field of technology. The role of feminist theories in related fields (e.g. Human-Computer Interaction, Data Science) is taken as a starting point to present a vision for Feminist HRI which can support better, more ethical HRI practice every day, as well as a more activist research and design stance. We first define feminist design for an HRI audience and use a set of feminist principles from neighboring fields to examine existing HRI literature, showing the progress that has been made already alongside some additional potential ways forward. Following this we identify a set of reflexive questions to be posed throughout the HRI design, research and development pipeline, encouraging a sensitivity to power and to individuals' goals and values. Importantly, we do not look to present a definitive, fixed notion of Feminist HRI, but rather demonstrate the ways in which bringing feminist principles to our field can lead to better, more ethical HRI, and discuss how we, the HRI community, might do this in practice.
2023 · Katie Winkle et al. · Human-Robot Collaboration (HRC) · Gender & Race Issues in HCI · Technology Ethics & Critical HCI · HRI
Designing Robot Sound-In-Interaction: The Case of Autonomous Public Transport Shuttle Buses
Horns and sirens are important tools for communicating on the road, yet they remain understudied in autonomous vehicles. While HRI has explored different ways in which robots could sound, we focus on the range of actions that a single sound can accomplish in interaction. In a Research through Design study involving autonomous shuttle buses in public transport, we explored sound design with the help of voice-overs to video recordings of the buses on the road and Wizard-of-Oz tests in live traffic. The buses are slowed down by (unnecessary) braking in response to people getting close. We found that prolonged jingles draw attention to the bus and invite interaction, while repeated short beeps and bell sounds can instruct the movement of others away from the bus. We highlight the importance of designing sound in sequential interaction and describe a new method for embedding video interaction analysis in the design process.
2023 · Hannah RM Pelikan et al. · In-Vehicle Haptic, Audio & Multimodal Feedback · Teleoperated Driving · HRI
Working with Forensic Practitioners to Understand the Opportunities and Challenges for Mixed-Reality Digital Autopsy
Forensic practitioners analyse intrinsically 3D data daily on 2D screens. We explore novel immersive visualisation techniques that enable digital autopsy through analysis of 3D imagery. We employ a user-centred design process involving four rounds of user feedback: (1) formative interviews eliciting opportunities and requirements for mixed-reality digital autopsies; (2) a larger workshop identifying our prototype's limitations and further use-cases and interaction ideas; (3+4) two rounds of qualitative user validation of successive prototypes of novel interaction techniques for pathologist sensemaking. Overall, we find MR holds great potential to enable digital autopsy, initially to supplement physical autopsy, but ultimately to replace it. We found that experts were able to use our tool to perform basic virtual autopsy tasks, that the MR setup promotes exploration and sensemaking of cause of death, and that, subject to the limitations of current MR technology, the proposed system is a valid option for digital autopsies according to experts' feedback.
2023 · Vahid Pooryousef et al. · Monash University · Mixed Reality Workspaces · VR Medical Training & Rehabilitation · Medical & Scientific Data Visualization · CHI