Opening Up Human-Robot Collaboration
As we see robots being deployed into new places in everyday life, questions arise about what the features of 'human-robot collaboration' (HRC) might look like. Recent work anticipates the need for more CSCW-oriented studies of HRC, given the increasing adoption of CSCW concepts in HRC research. We address this via an ethnomethodological study of encounters between pedestrians and food delivery robots on public streets. Our analysis, using video-recorded fragments of what happens on the street, demonstrates how passers-by manage walking trajectories in ways that account for robot actions, contributing an analysis that articulates how people accomplish practices of following and overtaking robots, passing by and crossing paths with them. We show that the picture of human-robot collaboration is drawn with 'unequal' asymmetries of action and intelligibility, with humans contributing considerable work to get something that looks like collaboration achieved. This raises questions for how we talk about collaboration in HRC from a CSCW perspective, and how this notion can and should be applied to groups and teams which include robots.
2025 · Stuart Reeves et al. · Topics: Human-AI (and Robot!) Collaboration · CSCW

Exploring Assumptions about Sustainability: Towards a Constructive Framework for Action in Sustainable HCI
The global environmental crises continue to get worse, fast approaching various irreversible thresholds. While a vast array of approaches to solving sustainability problems are found under the umbrella of Sustainable HCI, their contributions are sometimes hard to compare. In this essay, we describe a set of assumptions that influence which areas of sustainability research are considered meaningful and important, along four dimensions of sustainability: 1) the depth and nature of the sustainability challenges; 2) the role of technological innovation in sustainability; 3) what gets defined as "externalities" to a design or system; and 4) the time perspective used to consider sustainability. We argue that what one assumes within each of these dimensions directly influences what one means by the term "sustainability", which is then reflected in the questions that are asked, the methods chosen, the proposed solutions, and the developed systems. By describing these assumptions and some of their commensurate actions, we offer a framework that may enable members of the SHCI community to reflect on and better position their own work and that of others in the field. Our intention is for the framework to lead to better transparency and more constructive conversations about where we might collectively direct our efforts moving forward.
2025 · Minna K Laurell Thorslund et al. · KTH Royal Institute of Technology, Media Technology and Interaction Design · Topics: Sustainable HCI; Ecological Design & Green Computing · CHI

Unveiling High-dimensional Backstage: A Survey for Reliable Visual Analytics with Dimensionality Reduction
Dimensionality reduction (DR) techniques are essential for visually analyzing high-dimensional data. However, visual analytics using DR often faces unreliability, stemming from factors such as inherent distortions in DR projections. This unreliability can lead to analytic insights that misrepresent the underlying data, potentially resulting in misguided decisions. To tackle these reliability challenges, we review 133 papers that address the unreliability of visual analytics using DR. Through this review, we contribute (1) a workflow model that describes the interaction between analysts and machines in visual analytics using DR, and (2) a taxonomy that identifies where and why reliability issues arise within the workflow, along with existing solutions for addressing them. Our review reveals ongoing challenges in the field, whose significance and urgency were validated by five expert researchers. It also finds that the current research landscape is skewed toward developing new DR techniques rather than interpreting or evaluating them; we discuss how the HCI community can contribute to broadening this focus.
2025 · Hyeon Jeon et al. · Seoul National University, Department of Computer Science and Engineering · Topics: Interactive Data Visualization; Uncertainty Visualization; Visualization Perception & Cognition · CHI

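The projection distortions that drive this unreliability can be quantified with standard reliability measures. A minimal NumPy sketch of one such measure, trustworthiness, which penalizes "intruders" (points that look like neighbors only in the projection), is shown below; this is an illustrative example of the genre, not code from the survey:

```python
import numpy as np

def trustworthiness(X, Y, k=2):
    """Trustworthiness of a projection Y of high-dimensional data X:
    penalizes intruders -- points that appear among the k nearest
    neighbors in the projection but not in the original space.
    1.0 means no intrusions; lower values signal distortion."""
    n = len(X)

    def neighbor_ranks(Z):
        # rank of each point j among i's neighbors (0 = nearest), self excluded
        d = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        return d.argsort(axis=1).argsort(axis=1)

    r_orig, r_proj = neighbor_ranks(X), neighbor_ranks(Y)
    penalty = 0.0
    for i in range(n):
        # neighbors in the projection that are NOT neighbors in the original
        intruders = (r_proj[i] < k) & (r_orig[i] >= k)
        penalty += np.sum(r_orig[i][intruders] - (k - 1))
    return 1.0 - 2.0 / (n * k * (2 * n - 3 * k - 1)) * penalty
```

A perfect projection scores 1.0; a projection that brings originally distant points together scores lower.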
Uncovering How Scatterplot Features Skew Visual Class Separation
Multi-class scatterplots are essential for visually comparing data, such as examining class distributions in dimensionality reduction and evaluating classification models. Visual class separation (VCS) measures quantify human perception but are largely derived from and evaluated with datasets reflecting limited types of scatterplot features (e.g., data distribution, similar class densities). Quantitatively identifying which scatterplot features are influential to VCS tasks can enable more robust guidance for future measures. We analyze the alignment between VCS measures and people's perceptions of class separation through a crowdsourced study using 70 scatterplot features relevant to class separation. To cover a wide range of scatterplot features, we generated a set of multi-class scatterplots from 6,947 real-world datasets. Our results highlight that multiple combinations of features are needed to best explain VCS. From our analysis, we develop a composite feature model that identifies key scatterplot features for measuring VCS task performance.
2025 · S. Sandra Bae et al. · University of Colorado Boulder, ATLAS Institute · Topics: Interactive Data Visualization; Visualization Perception & Cognition · CHI

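To illustrate what a VCS measure computes, here is a minimal sketch of distance consistency, a classic separation measure that scores how many points lie closer to their own class centroid than to any other; it is an example of the genre, not the paper's composite feature model:

```python
import numpy as np

def distance_consistency(points, labels):
    """Distance consistency (DSC): the share of points that lie closer to
    their own class centroid than to any other class centroid.
    1.0 suggests visually well-separated classes."""
    classes = np.unique(labels)
    centroids = np.array([points[labels == c].mean(axis=0) for c in classes])
    # distance from every point to every class centroid
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=-1)
    own = np.searchsorted(classes, labels)  # index of each point's own class
    return float(np.mean(d.argmin(axis=1) == own))
```

Two compact, well-separated clusters score 1.0, while interleaved classes score lower, mirroring the perceived separability the paper's measures try to predict.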
Identifying Critical Points of Departure for the Design of Self-Fashioning Technologies
Designing technologies that clothe, adorn, or are otherwise placed on the body raises questions concerning the role they will play in dressing ourselves. We situate self-fashioning – or the process through which we stylise and present our bodies – as a complex practice where a series of social, material, and contextual factors shape how we present ourselves. Informed by reflective discussions and projective design tools, we contribute three critical points of departure for self-fashioning technologies: (i) purposefully examining discomfort as an ongoing phenomenon; (ii) supporting mimesis and visibility as qualities to be negotiated; and (iii) envisioning the multiplicity of the body. We call for the design community to help devise fashionable technologies that are sensitive, caring, and responsive to the complexities of fashioning our bodies.
2025 · Rebeca Blanco Cardozo et al. · KTH Royal Institute of Technology · Topics: Haptic Wearables; Inclusive Design · CHI

TangibleNet: Synchronous Network Data Storytelling through Tangible Interactions in Augmented Reality
Synchronous data-driven storytelling with network visualizations presents significant challenges due to the complexity of real-time manipulation of network components. While existing research addresses asynchronous scenarios, there is a lack of effective tools for live presentations. To address this gap, we developed TangibleNet, a projector-based AR prototype that allows presenters to interact with node-link diagrams using double-sided magnets during live presentations. The design process was informed by interviews with professionals experienced in synchronous data storytelling and workshops with 14 HCI/VIS researchers. Insights from the interviews helped identify key design considerations for integrating physical objects as interactive tools in presentation contexts. The workshops contributed to the development of a design space mapping user actions to interaction commands for node-link diagrams. Evaluation with 12 participants confirmed that TangibleNet supports intuitive interactions and enhances presenter autonomy, demonstrating its effectiveness for synchronous network-based data storytelling.
2025 · Kentaro Takahira et al. · Department of Computer Science and Engineering, The Hong Kong University of Science and Technology · Topics: Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS); Interactive Data Visualization; Data Storytelling · CHI

Honkable Gestalts: Why Autonomous Vehicles Get Honked At
This paper analyzes honks directed at autonomous vehicles (AVs) by other drivers. As honks often mark problems, this focus allows us to better understand the challenges that AVs face in real traffic. Performing a sequential video analysis of 63 honk incidents uploaded by Tesla beta testers on YouTube, we identify how problematic situations emerge as honkable Traffic Gestalts. We identify four types of situated problems with AV driving performance marked by other drivers' honks: they may wait too long, steer inconsistently, stop instead of going, and go too fast. We further show how a honk may be understandable as a warning, a nudge or a reprimand. Our work suggests designing honks for AVs to focus on relevant contexts, supported by developing bidirectional interfaces and audio analysis methods that consider the interplay of auditory and visual information in traffic.
2024 · Sergio Passero et al. · Topics: Automated Driving Interface & Takeover Design; V2X (Vehicle-to-Everything) Communication Design · AutoUI

Changing Lanes Toward Open Science: Openness and Transparency in Automotive User Research
We review the state of open science and the perspectives on open data sharing within the automotive user research community. Openness and transparency are critical not only for judging the quality of empirical research, but also for accelerating scientific progress and promoting an inclusive scientific community. However, there is little documentation of these aspects within the automotive user research community. To address this, we report two studies that identify (1) community perspectives on motivators and barriers to data sharing, and (2) how openness and transparency have changed in papers published at AutomotiveUI over the past 5 years. We show that while open science is valued by the community and openness and transparency have improved, overall compliance is low. The most common barriers are legal constraints and confidentiality concerns. Although research published at AutomotiveUI relies more on quantitative methods than research published at CHI, openness and transparency are not as well established. Based on our findings, we provide suggestions for improving openness and transparency, arguing that the motivators for open science must outweigh the barriers. All supporting materials are freely available at: https://osf.io/zdpek/
2024 · Patrick Ebel et al. · Topics: Research Ethics & Open Science · AutoUI

The Human Behind the Robot: Rethinking the Low Social Status of Service Robots
Robots in our society are commonly perceived as subordinate servants with a lower social status than humans. This often leads to humans prioritizing themselves during conflict situations. This becomes problematic when robots start to directly represent humans as proxies if people do not think of the human operator behind them. This could be considered a cognitive bias of human representation in HRI. To explore the extent of this problem, we conducted a user study featuring several conflict situations. Participants granted more priority to the robot when the human representation was visible. This paper explores the societal consequences and emerging inequities such as potentially deprioritizing humans by deprioritizing a robot in certain situations. Possible strategies to address potential negative consequences are discussed on a design level while acknowledging that a societal change in how we perceive and treat robots that represent humans might be necessary.
2024 · Franziska Babel et al. · Topics: Privacy by Design & User Control; Social Robot Interaction; Human-Robot Collaboration (HRC) · HRI

A Robot Jumping the Queue: Expectations About Politeness and Power During Conflicts in Everyday Human-Robot Encounters
Increasing encounters between people and autonomous service robots may lead to conflicts due to mismatches between human expectations and robot behaviour. This interactive online study (N = 335) investigated human-robot interactions at an elevator, focusing on the effect of communication and behavioural expectations on participants' acceptance and compliance. Participants evaluated a humanoid delivery robot primed as either submissive or assertive. The robot either matched or violated these expectations by using a command or appeal to ask for priority and then entering either first or waiting for the next ride. The results highlight that robots are less accepted if they violate expectations by entering first or using a command. Interactions were more effective if participants expected an assertive robot which then asked politely for priority and entered first. The findings emphasize the importance of power expectations in human-robot conflicts for the robot's evaluation and effectiveness in everyday situations.
2024 · Franziska Babel et al. · Linköping University · Topics: Social Robot Interaction; Empowerment of Marginalized Groups · CHI

Encountering Autonomous Robots in Public Streets
Robots deployed in public settings enter spaces that humans live and work in. Understandings of robots, such as those in HRI and beyond, tend to prioritise direct, obvious and deliberate interactions with robots. Yet this fails to identify the most common form of mundane, everyday response to robots in public, which ranges from the unobvious and subtle to virtually ignoring them. Drawing on a collection of video recordings, we show how public delivery robots encounter the lived-in spaces of urban streets both from a perspective of the social assembly of the physical environment and the socially organised nature of everyday street life that such robots are entering. Ultimately we show how such robots are effectively 'granted passage' through these spaces as a result of the mundane, practical work of the streets' human inhabitants. We demonstrate the importance of studying robots during their whole deployment, in the spaces that they enter, and challenge the current understanding of what 'counts' as human-robot interaction, highlighting that we may want to re-think who we consider as 'user' as well as the conceptualisation of human-robot interaction itself.
2024 · Hannah RM Pelikan et al. · Topics: Social Robot Interaction; Smart Cities & Urban Sensing · HRI

Feminist Human-Robot Interaction: Disentangling Power, Principles and Practice for Better, More Ethical HRI
Human-Robot Interaction (HRI) is inherently a human-centric field of technology. The role of feminist theories in related fields (e.g. Human-Computer Interaction, Data Science) is taken as a starting point to present a vision for Feminist HRI which can support better, more ethical everyday HRI practice, as well as a more activist research and design stance. We first define feminist design for an HRI audience and use a set of feminist principles from neighboring fields to examine existing HRI literature, showing the progress that has been made already alongside some additional potential ways forward. Following this, we identify a set of reflexive questions to be posed throughout the HRI design, research and development pipeline, encouraging a sensitivity to power and to individuals' goals and values. Importantly, we do not look to present a definitive, fixed notion of Feminist HRI, but rather demonstrate the ways in which bringing feminist principles to our field can lead to better, more ethical HRI, and discuss how we, the HRI community, might do this in practice.
2023 · Katie Winkle et al. · Topics: Human-Robot Collaboration (HRC); Gender & Race Issues in HCI; Technology Ethics & Critical HCI · HRI

Designing Robot Sound-In-Interaction: The Case of Autonomous Public Transport Shuttle Buses
Horns and sirens are important tools for communicating on the road, which are still understudied in autonomous vehicles. While HRI has explored different ways in which robots could sound, we focus on the range of actions that a single sound can accomplish in interaction. In a Research through Design study involving autonomous shuttle buses in public transport, we explored sound design with the help of voice-overs to video recordings of the buses on the road and Wizard-of-Oz tests in live traffic. The buses are slowed down by (unnecessary) braking in response to people getting close. We found that prolonged jingles draw attention to the bus and invite interaction, while repeated short beeps and bell sounds can instruct the movement of others away from the bus. We highlight the importance of designing sound in sequential interaction and describe a new method for embedding video interaction analysis in the design process.
2023 · Hannah RM Pelikan et al. · Topics: In-Vehicle Haptic, Audio & Multimodal Feedback; Teleoperated Driving · HRI

Working with Forensic Practitioners to Understand the Opportunities and Challenges for Mixed-Reality Digital Autopsy
Forensic practitioners analyse intrinsic 3D data daily on 2D screens. We explore novel immersive visualisation techniques that enable digital autopsy through analysis of 3D imagery. We employ a user-centred design process involving four rounds of user feedback: (1) formative interviews eliciting opportunities and requirements for mixed-reality digital autopsies; (2) a larger workshop identifying our prototype's limitations and further use-cases and interaction ideas; (3+4) two rounds of qualitative user validation of successive prototypes of novel interaction techniques for pathologist sensemaking. Overall, we find MR holds great potential to enable digital autopsy, initially to supplement physical autopsy, but ultimately to replace it. We found that experts were able to use our tool to perform basic virtual autopsy tasks, that the MR setup promotes exploration and sensemaking about cause of death, and that, within the limitations of current MR technology, the proposed system is a valid option for digital autopsies, according to experts' feedback.
2023 · Vahid Pooryousef et al. · Monash University · Topics: Mixed Reality Workspaces; VR Medical Training & Rehabilitation; Medical & Scientific Data Visualization · CHI

Troubling Collaboration: Matters of Care for Visualization Design Study
A common research process in visualization is for visualization researchers to collaborate with domain experts to solve particular applied data problems. While there is existing guidance and expertise around how to structure collaborations to strengthen research contributions, there is comparatively little guidance on how to navigate the implications of, and power produced through, the socio-technical entanglements of collaborations. In this paper, we qualitatively analyze reflective interviews of past participants of collaborations from multiple perspectives: visualization graduate students, visualization professors, and domain collaborators. We juxtapose the perspectives of these individuals, revealing tensions about the tools that are built and the relationships that are formed, a complex web of competing motivations. Through the lens of 'matters of care', we interpret this web, concluding with considerations that both trouble and necessitate reformation of current patterns around collaborative work in visualization design studies to promote more equitable, useful, and care-ful outcomes.
2023 · Derya Akbaba et al. · Linköping University · Topics: Interactive Data Visualization; User Research Methods (Interviews, Surveys, Observation) · CHI

Dirty Data in the Newsroom: Comparing Data Preparation in Journalism and Data Science
The work involved in gathering, wrangling, cleaning, and otherwise preparing data for analysis is often the most time consuming and tedious aspect of data work. Although many studies describe data preparation within the context of data science workflows, there has been little research on data preparation in data journalism. We address this gap with a hybrid form of thematic analysis that combines deductive codes derived from existing accounts of data science workflows and inductive codes arising from an interview study with 36 professional data journalists. We extend a previous model of data science work to incorporate detailed activities of data preparation. We synthesize 60 dirty data issues from 16 taxonomies on dirty data and our interview data, and we provide a novel taxonomy to characterize these dirty data issues as discrepancies between mental models. We also identify four challenges faced by journalists: diachronic, regional, fragmented, and disparate data sources.
2023 · Stephen Kasica et al. · University of British Columbia · Topics: Interactive Data Visualization; User Research Methods (Interviews, Surveys, Observation); Computational Methods in HCI · CHI

The Halting problem: Video analysis of self-driving cars in traffic
Using publicly uploaded videos of the Waymo and Tesla FSD self-driving cars, this paper documents how self-driving vehicles still struggle with some basics of road interaction. To drive safely, self-driving cars need to interact in traffic with other road users. Yet traffic is a complex, long-established social domain. We focus on one core element of road interaction: when road users yield for each other. Yielding – slowing down for others in traffic – involves communication between different road users to decide who will ‘go’ and who will ‘yield’. Videos of the Waymo and Tesla FSD self-driving cars show how these systems fail both to yield for others and to go when yielded to. In discussion, we explore how these ‘problems’ illustrate the complexity of designing for road interaction, and also how the space of physical machine/human social interactions more broadly can be designed for.
2023 · Barry Brown et al. · Stockholm University · Topics: External HMI (eHMI) — Communication with Pedestrians & Cyclists · CHI

"Are You Sad, Cozmo?" How Humans Make Sense of a Home Robot’s Emotion DisplaysThis paper explores how humans interpret displays of emotion produced by a social robot in real world situated interaction. Taking a multimodal conversation analytic approach, we analyze video data of families interacting with a Cozmo robot in their homes. Focusing on one happy and one sad robot animation, we study, on a turn-by-turn basis, how participants respond to audible and visible robot behavior designed to display emotion. We show how emotion animations are consequential for interactional progressivity: While displays of happiness typically move the interaction forward, displays of sadness regularly lead to a reconsideration of previous actions by humans. Furthermore, in making sense of the robot animations people may move beyond the designer’s reported intentions, actually broadening the opportunities for their subsequent engagement. We discuss how sadness functions as an interactional "rewind button" and how the inherent vagueness of emotion displays can be deployed in design.2020HPHannah RM Pelikan et al.Agent Personality & AnthropomorphismSocial Robot InteractionHRI
Unwind: Interactive Fish Straightening
The ScanAllFish project is a large-scale effort to scan all the world's 33,100 known species of fishes. It has already generated thousands of volumetric CT scans of fish species which are available on open access platforms such as the Open Science Framework. To achieve the scanning rate required for a project of this magnitude, many specimens are grouped together into a single tube and scanned all at once. The resulting data contain many fish which are often bent and twisted to fit into the scanner. Our system, Unwind, is a novel interactive visualization and processing tool which extracts, unbends, and untwists volumetric images of fish with minimal user interaction. Our approach enables scientists to interactively unwarp these volumes to remove the undesired torque and bending using a piecewise-linear skeleton extracted by averaging isosurfaces of a harmonic function connecting the head and tail of each fish. The result is a volumetric dataset of an individual, straight fish in a canonical pose defined by the marine biologist expert user. We have developed Unwind in collaboration with a team of marine biologists: our system has been deployed in their labs, and is presently being used for dataset construction, biomechanical analysis, and the generation of figures for scientific publication.
2020 · Francis Williams et al. · New York University · Topics: Data Physicalization; Prototyping & User Testing · CHI

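The skeleton-extraction idea, averaging isosurfaces of a harmonic function connecting head and tail, can be sketched on a 2D mask. The toy NumPy version below is illustrative only; Unwind itself operates on 3D CT volumes and is not reproduced here:

```python
import numpy as np

def harmonic_skeleton(mask, head, tail, iters=2000, n_levels=5):
    """Toy 2D version of the idea: solve Laplace's equation on a mask with
    u=0 pinned at the head pixel and u=1 at the tail pixel, then average
    the coordinates of pixels near evenly spaced isolevels to obtain a
    piecewise-linear centerline. (Sketch; not the Unwind implementation.)"""
    u = np.zeros(mask.shape)
    for _ in range(iters):
        # Jacobi relaxation: each cell moves toward the mean of its 4 neighbors
        avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        u = np.where(mask, avg, 0.0)      # keep u = 0 outside the mask
        u[head], u[tail] = 0.0, 1.0       # re-impose the head/tail pins
    ys, xs = np.nonzero(mask)
    skeleton = []
    for level in np.linspace(0.05, 0.95, n_levels):
        near = np.abs(u[ys, xs] - level) < 0.05   # pixels near this isolevel
        if near.any():
            skeleton.append((xs[near].mean(), ys[near].mean()))
    return skeleton
```

Each averaged isolevel contributes one skeleton joint, so the joints trace the body from head to tail even when the shape is bent.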
Towards an Understanding of Augmented Reality Extensions for Existing 3D Data Analysis Tools
We present an observational study with domain experts to understand how augmented reality (AR) extensions to traditional PC-based data analysis tools can help particle physicists to explore and understand 3D data. Our goal is to allow researchers to integrate stereoscopic AR-based visual representations and interaction techniques into their tools, and thus ultimately to increase the adoption of modern immersive analytics techniques in existing data analysis workflows. We use Microsoft's HoloLens as a lightweight and easily maintainable AR headset and replicate existing visualization and interaction capabilities on both the PC and the AR view. We treat the AR headset as a second yet stereoscopic screen, allowing researchers to study their data in a connected multi-view manner. Our results indicate that our collaborating physicists appreciate a hybrid data exploration setup with an interactive AR extension to improve their understanding of particle collision events.
2020 · Xiyao Wang et al. · Université Paris-Saclay, CNRS, Inria, LRI · Topics: Mixed Reality Workspaces; Medical & Scientific Data Visualization · CHI