The Making of Performative Accuracy in AI Training: Precision Labor and Its Consequences
Accuracy and precision are central values in AI communities and the technology sector. This paper provides empirical evidence on the construction and organizational management of technical accuracy, demonstrating how technology companies' preoccupation with these values leads to harm. Drawing on nine months of multi-sited ethnographic fieldwork in China, we document how AI trainers' everyday work practices, challenges, and harms stem from clients' demands for high levels of technical accuracy. We introduce the concept of precision labor to unpack the labor dimension of constructing and performing accuracy in AI training. This concept highlights the hidden and excessive labor required to reconcile the ambiguity and uncertainty involved in this process. We argue that precision labor offers a new lens to illuminate three critical aspects of AI training: 1) the negative health and financial impacts of hidden and excessive labor on AI workers; 2) emerging harms, including workers' subordination to machines and financial precarity; and 3) a conceptual contribution to contexts beyond AI training. This contribution re-centers arbitrariness in technical production, highlights the excessive demands of precision labor, and examines the legitimization of labor and harm. Our study also contributes to existing scholarship on the prevailing values and invisible labor in AI production, underscoring that accuracy is performative rather than self-evident and unambiguous. A precision labor lens challenges the legitimacy and sustainability of relentlessly pursuing technical accuracy, raising new questions about its consequences and ethical implications. We conclude by proposing recommendations and alternative approaches to enhance worker agency and well-being.
2025 · Ben Zefeng Zhang et al. (University of Michigan) · CHI
Topics: AI-Assisted Decision-Making & Automation; Privacy by Design & User Control; Technology Ethics & Critical HCI

Who is Trusted for a Second Opinion? Comparing Collective Advice from a Medical AI and Physicians in Biopsy Decisions After Mammography Screening
Artificial Intelligence (AI) is increasingly integrated into clinical practice, but its influence on patient decision-making, particularly when AI and physicians disagree, remains unclear. To examine collective advice, we investigated a breast cancer screening scenario using (1) a qualitative interview study (N=9) and (2) a quantitative experiment (N=339) where participants received either consistent or conflicting biopsy recommendations. Qualitative findings include the need for empathetic care, the importance of patient autonomy, and a desire for a four-eyes principle. Quantitative findings accordingly show that patients generally trust physicians more than AI but still tend to follow AI recommendations due to risk aversion. When both advised a biopsy, 99% adhered; if both advised against it, 25% still proceeded. In conflicting scenarios, 97% followed the physician's advice, whereas 66% followed the AI if it recommended the biopsy. These results underscore the need for careful interaction design of collective healthcare advice to prevent unnecessary healthcare procedures.
2025 · Henrik Detjen et al. (Fraunhofer Institute for Digital Medicine MEVIS) · CHI
Topics: AI-Assisted Decision-Making & Automation; AI Ethics, Fairness & Accountability; Privacy by Design & User Control

Responding to Generative AI Technologies with Research-through-Design: The Ryelands AI Lab as an Exploratory Study
Generative AI technologies demand new practical and critical competencies, which design must both respond to and foster. We present an exploratory study guided by Research-through-Design, in which we partnered with a primary school to develop a constructionist curriculum centered on students interacting with a generative AI technology. We provide a detailed account of the design of and outputs from the curriculum and learning materials, finding that the reflexive and prolonged 'hands-on' approach led to a co-development of students' practical and critical competencies. From the study, we contribute guidance for designing constructionist approaches to generative AI technology education, further arguing to do so with 'critical responsivity.' We then discuss how HCI researchers may leverage constructionist strategies in designing interactions with generative AI technologies, and suggest that Research-through-Design can play an important role as a 'rapid response methodology' capable of reacting to fast-evolving, disruptive technologies such as generative AI.
2024 · Jesse Josua Benjamin et al. · DIS
Topics: Generative AI (Text, Image, Music, Video); Programming Education & Computational Thinking; Participatory Design

Explaining It Your Way - Findings from a Co-Creative Design Workshop on Designing XAI Applications with AI End-Users from the Public Sector
Human-Centered AI prioritizes end-users' needs like transparency and usability. This is vital for applications that affect people's everyday lives, such as social assessment tasks in the public sector. This paper discusses our pioneering effort to involve public sector AI users in XAI application design through a co-creative workshop with unemployment consultants from Estonia. The workshop's objectives were identifying user needs and creating novel XAI interfaces for the AI system they use. As a result of our user-centered design approach, consultants were able to develop AI interface prototypes that would support them in creating success stories for their clients by getting detailed feedback and suggestions. We present a discussion on the value of co-creative design methods with end-users working in the public sector to improve AI application design and provide a summary of recommendations for practitioners and researchers working on AI systems in the public sector.
2024 · Katharina Weitz et al. (University of Augsburg) · CHI
Topics: Explainable AI (XAI); Participatory Design

Experiencing Dynamic Weight Changes in Virtual Reality Through Pseudo-Haptics and Vibrotactile Feedback
Virtual reality (VR) objects react dynamically to users' touch interactions in real-time. However, experiencing changes in weight through the haptic sense remains challenging with consumer VR controllers due to their limited vibrotactile feedback. While prior work successfully applied pseudo-haptics to convey absolute weight by manipulating the control-display (C/D) ratio, we continuously adjusted the C/D ratio to mimic weight changes. Vibrotactile feedback additionally emphasises the modulation in the virtual object's physicality. In a study (N=18), we compared our multimodal technique with pseudo-haptics alone and a baseline condition to assess participants' experiences of weight changes. Our findings demonstrate that participants perceived varying degrees of weight change when the C/D ratio was adjusted, validating its effectiveness for simulating dynamic weight in VR. However, the additional vibrotactile feedback did not improve weight change perception. This work extends the understanding of designing haptic experiences for lightweight VR systems by leveraging perceptual mechanisms.
2024 · Carolin Stellmacher et al. (University of Bremen) · CHI
Topics: Vibrotactile Feedback & Skin Stimulation; Force Feedback & Pseudo-Haptic Weight

Imagination vs. Reality: Investigating the Acceptance and Preferred Anthropomorphism in Service HRI
While the use of robots in public spaces is increasing, few studies explore the resulting everyday human-robot interactions (HRI). The present study sought to bridge the disparity between real-world interactions and the frequently examined hypothetical interactions. To do so, we investigated the imagined and actual interaction with an ice cream serving robot. In two studies and an exploratory study comparison, we investigated user acceptance and preference for the degree of anthropomorphic appearance. Although a typical human service task was taken over by a robot, an industrial robot was preferred according to participants' ratings in both studies. Moreover, both studies demonstrated that robot enthusiasm significantly relates to participants' acceptance of the robot for the task. Besides these commonalities, the results also showed that while humans were preferred over robots in the imagined setting, no clear preference was found in the real-life setting. Additional analyses compared the free-text answers of the two studies and provided insights into participants' general attitudes toward robots in the workforce. In line with the higher preference for humans over robots in the imagined setting, considerably more participants mentioned a better customer experience with humans as important in the imagined study compared to the participants who actually interacted with the robot. The studies strikingly demonstrated that imaginary settings yield similar outcomes to those where participants physically engage with the robot in certain aspects, such as their preference for anthropomorphism. However, this phenomenon does not appear to hold for other facets, such as their favored service agent.
2024 · Katharina Wzietek et al. · HRI
Topics: Agent Personality & Anthropomorphism; AI Ethics, Fairness & Accountability; Social Robot Interaction

Exploring Millions of User Interactions with ICEBOAT: Big Data Analytics for Automotive User Interfaces
User Experience (UX) professionals need to be able to analyze large amounts of usage data on their own to make evidence-based design decisions. However, the design process for In-Vehicle Information Systems (IVISs) lacks data-driven support and effective tools for visualizing and analyzing user interaction data. Therefore, we propose ICEBOAT, an interactive visualization tool tailored to the needs of automotive UX experts to effectively and efficiently evaluate driver interactions with IVISs. ICEBOAT visualizes telematics data collected from production line vehicles, allowing UX experts to perform task-specific analyses. Following a mixed-methods User-Centered Design (UCD) approach, we conducted an interview study (N=4) to extract the domain-specific information and interaction needs of automotive UX experts and used a co-design approach (N=4) to develop an interactive analysis tool. Our evaluation (N=12) shows that ICEBOAT enables UX experts to efficiently generate knowledge that facilitates data-driven design decisions.
2023 · Patrick Ebel et al. · AutoUI
Topics: Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS); Interactive Data Visualization

The God-I-Box: Iteratively Provotyping Technology-Mediated Worship Services
The COVID-19 pandemic accelerated the development of alternative formats for religious rituals, such as Protestant online worship services. However, current design approaches focus on problem-solving, and the resulting online solutions merely imitate the offline status quo. To overcome these limitations, we suggest adopting a provotype approach that allows for a more holistic, open-ended dialogue with those affected. We iteratively developed a first provotype in response to tensions found in observation-based field research, aiming to test whether and how it can trigger productive impulses for exploring future technology-mediated worship services based on existing experiences and perspectives. The resulting God-I-Box exaggerates individuality and allows congregants to act almost like liturgists. An analysis of congregants' and pastors' (online) first encounters with the God-I-Box revealed three reaction modes: spontaneous emotions, reflective coping, and exploratory imagination. We conclude with reflections and recommendations for provocative research and design in this context and beyond.
2023 · Sara Wolf et al. · DIS
Topics: Mental Health Apps & Online Support Communities; Design Fiction

Towards an Implicit Metric of Sensory-Motor Accuracy: Brain Responses to Auditory Prediction Errors in Pianists
While listening to music, the brain expects specific acoustic events based on learned musical rules. During music performance, expectancy is additionally created by motor action, which links keypresses to their sounds. We investigated EEG (electroencephalography) responses to auditory expectancy violations in piano performance and perception. In our study, pianists experienced manipulations of different acoustic features, such as pitch and loudness, while playing and listening to piano sequences. We found that manipulations during performance elicited deflections with stronger amplitudes compared to manipulations during perception, indicating that the action of producing sounds strengthens auditory expectancy. Loudness manipulations, which violate musical regularity, elicited deflections with smaller latencies compared to pitch manipulations, which violate harmonic expectancy, suggesting that the brain processes expectancy violations of distinct acoustic features differently. These EEG signatures may prove useful for applications in intelligent music interfaces by providing information about sensory-motor accuracy.
2023 · Elisabeth Pangratz et al. · C&C
Topics: Brain-Computer Interface (BCI) & Neurofeedback; Biosensors & Physiological Monitoring

Designing for Uncontrollability: Drawing Inspiration from the Blessing Companion
This paper presents an inspirational concept for companion technology design, uncontrollability, and a corresponding artefact, the Blessing Companion. Both originated from a research through design project exploring companion technologies for blessing rituals. We established an exchange with Protestant theologians, explored believers' experiences of blessings, co-speculated on potential technologies, and refined the resulting ideas through ideation, prototyping, and testing. Inspired by believers' descriptions of blessing experiences as not plannable, predictable, controllable, or enforceable, we adopted the concept of uncontrollability, explored how it might be implemented in companion technologies, and designed the Blessing Companion. The Blessing Companion embodies uncontrollability through its ambiguous appearance and (partly) uncontrollable behaviour. It thus stands in contrast to the prevailing on-demand and user-driven interaction paradigms. We discuss how uncontrollability can be reflected in content, form, and interaction, highlight respective possibilities for companion technologies, and reflect on the Blessing Companion as an example of designing for religious rituals.
2023 · Sara Wolf et al. (Institute Human-Computer Media, Julius-Maximilians-Universität) · CHI
Topics: Design Fiction; Human-Nature Relationships (More-than-Human Design); Interactive Narrative & Immersive Storytelling

What Makes Civic Tech Initiatives To Last Over Time? Dissecting Two Global Cases
Civic tech initiatives dedicated to environmental issues have become a worldwide phenomenon and made invaluable contributions to data, community building, and publics. However, many of them stop after a relatively short time. Therefore, we studied two long-lasting civic tech initiatives of global scale to understand what makes them sustain over time. To this end, we conducted two mixed-method case studies, combining social network analysis and qualitative content analysis of Twitter data with insights from expert interviews. Drawing on our findings, we identified a set of key factors that help the studied civic tech initiatives to grow and last. Contributing to Digital Civics in HCI, we argue that the civic tech initiatives' scaling and sustaining are configured through the entanglement of (1) civic data both captured and owned by the citizens for the citizens, (2) the use of open and accessible technology, and (3) the initiatives' public narrative, giving them a voice on the environmental issue.
2021 · Andrea Hamm et al. (Weizenbaum Institute for the Networked Society, Technical University Berlin) · CHI
Topics: Citizen Science & Crowdsourced Data; Community Engagement & Civic Technology; Sustainable HCI

Between Subjectivity and Imposition: Power Dynamics in Data Annotation for Computer Vision
The interpretation of data is fundamental to machine learning. This paper investigates practices of image data annotation as performed in industrial contexts. We define data annotation as a sense-making practice, where annotators assign meaning to data through the use of labels. Previous human-centered investigations have largely focused on annotators' subjectivity as a major cause of biased labels. We propose a wider view on this issue: guided by constructivist grounded theory, we conducted several weeks of fieldwork at two annotation companies. We analyzed which structures, power relations, and naturalized impositions shape the interpretation of data. Our results show that the work of annotators is profoundly informed by the interests, values, and priorities of other actors above their station. Arbitrary classifications are vertically imposed on annotators, and through them, on data. This imposition is largely naturalized. Assigning meaning to data is often presented as a technical matter. This paper shows it is, in fact, an exercise of power with multiple implications for individuals and society.
2020 · Milagros Miceli et al. · CSCW
Topics: Crowds and Collaboration

Useful Uselessness? Teaching Robots to Knit with Humans
This pictorial uses imagery of human-robot collaboration, or cobots, as a site to examine the potential of queer use within design research. Through close documentation of our process, we reflect on acts of teaching a commercially available robot to knit with us—a messy and seemingly unproductive process. However, this uselessness of the chosen task allows us to re-consider the idealization of robotic collaboration. We question the optimization of a largely human labor force and the associated drive to increase efficiency within a range of sectors, from the service industry to industrial production. Building on non-use literatures examining technological limits, and drawing on performative explorations and critique, we show how knitting enlarges our capacity to visualize what might be a suitable use case for cobots.
2020 · Pat Treusch et al. · DIS
Topics: Human-Robot Collaboration (HRC); Shape-Changing Materials & 4D Printing; Design Fiction

Detecting Visuo-Haptic Mismatches in Virtual Reality using the Prediction Error Negativity of Event-Related Brain Potentials
Designing immersion is the key challenge in virtual reality; this challenge has driven advancements in displays, rendering, and recently, haptics. To increase our sense of physical immersion, for instance, vibrotactile gloves render the sense of touching, while electrical muscle stimulation (EMS) renders forces. Unfortunately, the established metric to assess the effectiveness of haptic devices relies on the user's subjective interpretation of unspecific, yet standardized, questions.
Here, we explore a new approach to detect a conflict in visuo-haptic integration (e.g., inadequate haptic feedback based on poorly configured collision detection) using electroencephalography (EEG). We propose analyzing event-related potentials (ERPs) during interaction with virtual objects. In our study, participants touched virtual objects in three conditions and received either no haptic feedback, vibration, or vibration and EMS feedback. To provoke a brain response in unrealistic VR interaction, we also presented the feedback prematurely in 25% of the trials.
We found that the early negativity component of the ERP (the so-called prediction error) was more pronounced in the mismatch trials, indicating that we successfully detected haptic conflicts using our technique. Our results are a first step towards using ERPs to automatically detect visuo-haptic mismatches in VR, such as those that can cause a loss of the user's immersion.
2019 · Lukas Gehrke et al. (Technical University of Berlin) · CHI
Topics: Brain-Computer Interface (BCI) & Neurofeedback

The Mental Image Revealed by Gaze Tracking
Humans involuntarily move their eyes when retrieving an image from memory. This motion is often similar to actually observing the image. We suggest exploiting this behavior as a new modality in human-computer interaction, using the motion of the eyes as a descriptor of the image. Interaction requires the user's eyes to be tracked but no voluntary physical activity. We perform a controlled experiment and develop matching techniques using machine learning to investigate whether images can be discriminated based on the gaze patterns recorded while users merely think about an image. Our results indicate that image retrieval is possible with an accuracy significantly above chance. We also show that this result generalizes to images not used during training of the classifier and extends to uncontrolled settings in a realistic scenario.
2019 · Xi Wang et al. (Technische Universität Berlin) · CHI
Topics: Eye Tracking & Gaze Interaction; Human Pose & Activity Recognition

Personalized Motivation-supportive Messages for Increasing Participation in Crowd-civic Systems
In crowd-civic systems, citizens form groups and work towards shared goals, such as discovering social issues or reforming official policies. Unfortunately, many real-world systems have been unsuccessful in continually motivating large numbers of citizens to participate voluntarily, despite various approaches such as gamification and persuasion techniques. In this paper, we examine the influence of personalized messages designed to support motivation as asserted by Self-Determination Theory (SDT). We designed a crowd-civic platform for collecting community issues with personalized motivation-supportive messages and conducted two studies: a pair-comparison experiment with 150 participants on Amazon's Mechanical Turk and a live deployment study with 120 university members. Results of the pair-comparison study indicate the applicability of SDT's perspective in crowd-civic systems. While applying it in the live system surfaced several challenges, including recruiting participants without interfering with general motivations, the collected data exhibited similar promising trends.
2018 · Paul Grau et al. · CSCW
Topics: Motivation in Online Collaboration

OptiSpace: Automated Placement of Interactive 3D Projection Mapping Content
We present OptiSpace, a system for the automated placement of perspectively corrected projection mapping content. We analyze the geometry of physical surfaces and the viewing behavior of users over time using depth cameras. Our system measures user view behavior and simulates the virtual projection mapping scene users would see if content were placed in a particular way. OptiSpace evaluates the simulated scene according to perceptual criteria, including visibility and visual quality of virtual content. Finally, based on these evaluations, it optimizes content placement using a two-phase procedure involving adaptive sampling and the covariance matrix adaptation algorithm. With our proposed architecture, projection mapping applications can be developed without any knowledge of the physical layouts of the target environments. Applications can be deployed in different uncontrolled environments, such as living rooms and office spaces.
2018 · Andreas Fender et al. (Aarhus University) · CHI
Topics: Mixed Reality Workspaces; 3D Modeling & Animation