What Lies Beneath? Exploring the Impact of Underlying AI Model Updates in AI-Infused Systems
AI models are constantly evolving, with new versions released frequently. Human-AI interaction guidelines encourage notifying users about changes in model capabilities, ideally supported by thorough benchmarking. However, as AI systems integrate into domain-specific workflows, exhaustive benchmarking can become impractical, often resulting in silent or minimally communicated updates. This raises critical questions: Can users notice these updates? What cues do they rely on to distinguish between models? How do such changes affect their behavior and task performance? We address these questions through two studies in the context of facial recognition for historical photo identification: an online experiment examining users' ability to detect model updates, followed by a diary study exploring perceptions in a real-world deployment. Our findings highlight challenges in noticing AI model updates, their impact on downstream user behavior and performance, and how they lead users to develop divergent folk theories. Drawing on these insights, we discuss strategies for effectively communicating model updates in AI-infused systems.
2025 | Vikram Mohanty et al. | Bosch Research North America | Generative AI (Text, Image, Music, Video); Explainable AI (XAI); AI-Assisted Decision-Making & Automation | CHI
Connecting Home: Human-Centric Setup Automation in the Augmented Smart Home
Controlling smart homes via vendor-specific apps on smartphones is cumbersome. Augmented Reality (AR) offers a promising alternative by enabling direct interactions with Internet of Things (IoT) devices. However, using AR for smart home control requires knowledge of each device's 3D position. In this paper, we introduce and evaluate three concepts for identifying IoT device positions with varying degrees of automation. Our mixed-methods laboratory study with 28 participants revealed that, despite recognizing it as the most efficient option, the majority of participants opted against fast, fully automated detection, favoring a balance between efficiency and perceived autonomy and control. We link this decision to psychological needs grounded in self-determination theory and discuss the strengths and weaknesses of each alternative, motivating a user-adaptive solution. Additionally, we observed a "wow-effect" in response to AR interaction for smart homes, suggesting potential benefits of a human-centric approach to the smart home of the future.
2024 | Marius Schenkluhn et al. | Karlsruhe Institute of Technology, Robert Bosch GmbH | AR Navigation & Context Awareness; Smart Home Interaction Design | CHI
Towards In-vehicle Driver Fainting Detection
Current interior sensing systems already enable the detection of critical driver states such as drowsiness or inattention. To extend these capabilities, this work investigates the detection of driver fainting via an interior sensing camera. An approach that supports the simulation of driver fainting is developed and realized in a parked vehicle as well as during manual and automated driving, with 61 participants in total. Moreover, multiple instructed intentional forward and sideward body movements are recorded. Classification models are developed based on features derived from head and body pose data. These models are then applied to the complete video streams, which include various waiting and driving scenarios. The best classification results are achieved with Random Forest classifiers, with up to 84% true positive detections and 0.33 false positive detections per hour. The majority of false positive detections occurred during automated driving. Implications and options for future research are discussed.
2023 | Moritz Gebert et al. | Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS); Human Pose & Activity Recognition | AutoUI
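To make the evaluation concrete, below is a minimal Python sketch of the kind of pipeline this abstract describes: a Random Forest trained on windowed head/body-pose features, then applied to a continuous stream and scored in false positives per hour. The feature set, window length, and synthetic data are illustrative assumptions, not the authors' actual setup.

    # Sketch: Random Forest on per-window pose features, evaluated as
    # false positives per hour on a continuous stream. All data here is
    # synthetic; real features would come from head/body pose estimates.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Assumed features per 1 s window, e.g. head pitch/roll, torso lean,
    # and their short-term velocities (placeholders, not the paper's set).
    X_train = rng.normal(size=(2000, 6))
    y_train = rng.integers(0, 2, size=2000)  # 1 = fainting, 0 = normal

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)

    # Apply to one hour of 1 s windows with no true fainting episode and
    # count false alarms, mirroring the per-hour false positive metric.
    X_stream = rng.normal(size=(3600, 6))
    y_stream = np.zeros(3600, dtype=int)
    pred = clf.predict(X_stream)
    fp_this_hour = int(((pred == 1) & (y_stream == 0)).sum())
    print(f"false positives in this hour: {fp_this_hour}")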
How to Make Reading in Fully Automated Vehicles a Better Experience? Effects of Active Seat Belt Retractions and a 2-Step Driving Profile on Subjective Motion Sickness, Ride Comfort and Acceptance
The paper presents a study on motion sickness mitigation while driving with a fully automated vehicle on a test track. 31 participants who were susceptible to motion sickness experienced a 25-minute drive with multiple motion-sickness-provoking decelerations and accelerations while reading a text on a tablet. The participants experienced three different conditions in separate sessions: 1) a control condition without countermeasure, 2) a drive with an active seat belt tensioner, and 3) a drive with a two-step driving profile. The participants rated their motion sickness on the MSTT scale (during the drive) and on the MSAQ (pre and post drive). After each drive, they rated their subjective experience of the vehicle behavior and the countermeasures. On the MSTT, the results showed no significant differences in the development of motion sickness across the three conditions. However, the two-step driving profile reduced the development of motion sickness as assessed via the MSAQ. Furthermore, both countermeasures appear to have the potential to positively influence the perception of the automation as safer, more trustworthy, and more reliable.
2023 | Markus Tomzig et al. | Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS); Motion Sickness & Passenger Experience | AutoUI
Exploring Driver Responses to Authoritative Control Interventions in Highly Automated Driving
Future automated driving systems (ADS) are discussed as having the ability to "override" driver control inputs. Yet, little is known about how drivers respond to this, nor how a human-machine interface (HMI) for such interventions should be designed. This work identifies intervention types associated with an ADS that can change control authority and outlines an experimental method that simulates a deficit in driver situation awareness, enabling the study of driver responses to interventions in a controlled environment. In a simulator study (N = 18), it was found that drivers express more negative valence when their control input is blocked (p = .046) than when it is taken away. In safety-critical scenarios, drivers respond more positively to interventions (p = .021) and are willing to give the automation more control (p = .018). An experimental method and HMI design insights are presented, and ethical questions about the development of automated driving are raised.
2023 | Liza Dixon et al. | Automated Driving Interface & Takeover Design | AutoUI
Designing an Interaction Concept for Assisted Cooking in Smart Kitchens: Focus on Human Agency, Proactivity, and Multimodality
Connected homes and smart assistants shape the future practices of humans, but they do not yet perfectly fit users' needs and processes. Our research explores how smart assistants can effectively support users during cooking. First, we conducted an observational study with ten participants to understand their needs for competence and autonomy in relation to their individual cooking. Following the empirical results, we prototyped a multimodal assistant that interactively provides stepwise guidance for a multi-part recipe. We evaluated the prototype in a Wizard-of-Oz approach with ten participants. Classifying participants according to cooking competence and need for autonomy turned out to be an efficient way to understand the different user perspectives on the prototype. We observed under which conditions users prefer graphical or voice interaction and how the assistant's proactivity affects human agency, and we derived general insights for the design and co-performance of smart assistants in other domains.
2023 | Johanna Weber et al. | Smart Home Interaction Design; Food Culture & Food Interaction | DIS
ESCAPE: Countering Systematic Errors from Machine's Blind Spots via Interactive Visual Analysis
Classification models learn to generalize the associations between data samples and their target classes. However, researchers have increasingly observed that machine learning practice easily leads to systematic errors in AI applications, a phenomenon referred to as "AI blindspots". Such blindspots arise when a model is trained with samples (e.g., cat/dog classification) in which important patterns (e.g., black cats) are missing or peripheral/undesirable patterns (e.g., dogs with grass backgrounds) mislead the model towards a certain class. Even sophisticated techniques cannot guarantee to capture, reason about, and prevent spurious associations. In this work, we propose ESCAPE, a visual analytic system that promotes a human-in-the-loop workflow for countering systematic errors. By allowing human users to easily inspect spurious associations, the system helps users spontaneously recognize concepts associated with misclassifications and evaluate mitigation strategies that can reduce biased associations. We also propose two statistical approaches: relative concept association, to better quantify the associations between a concept and instances, and a debiasing method to mitigate spurious associations. We demonstrate the utility of the proposed ESCAPE system and statistical measures through extensive evaluation, including quantitative experiments, usage scenarios, expert interviews, and controlled user experiments.
2023 | Yongsu Ahn et al. | University of Pittsburgh | Explainable AI (XAI); AI-Assisted Decision-Making & Automation; Uncertainty Visualization | CHI
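The abstract names a "relative concept association" statistic but does not define it. As a rough intuition only, one plausible (assumed) instantiation is the lift of misclassification given concept presence, sketched below in Python; the function and data are hypothetical, not the authors' formulation.

    # Assumed illustration: lift of misclassification given a concept,
    # i.e. P(misclassified | concept) / P(misclassified). Values well
    # above 1 flag a concept suspiciously tied to errors.
    import numpy as np

    def concept_lift(has_concept: np.ndarray, misclassified: np.ndarray) -> float:
        return misclassified[has_concept].mean() / misclassified.mean()

    rng = np.random.default_rng(0)
    has_concept = rng.random(1000) < 0.3                # concept present?
    base_error = rng.random(1000) < 0.1                 # background errors
    spurious = has_concept & (rng.random(1000) < 0.25)  # concept-linked errors
    misclassified = base_error | spurious

    print(f"lift: {concept_lift(has_concept, misclassified):.2f}")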
Would you do it? Enacting Moral Dilemmas in Virtual Reality for Understanding Ethical Decision-Making
A moral dilemma is a decision-making paradox without unambiguously acceptable or preferable options. This paper investigates if and how the virtual enactment of two renowned moral dilemmas, the Trolley and the Mad Bomber, influences decision-making compared with mentally visualizing such situations. We conducted two user studies with two gender-balanced samples of 60 participants in total, comparing paper-based and virtual-reality (VR) conditions while simulating 5 distinct scenarios for the Trolley dilemma and 4 storyline scenarios for the Mad Bomber dilemma. Our findings suggest that the VR enactment of moral dilemmas further fosters utilitarian decision-making, while it amplifies biases such as sparing juveniles and seeking retribution. Ultimately, we theorize that the VR enactment of renowned moral dilemmas can yield ecologically valid data for training future Artificial Intelligence (AI) systems on ethical decision-making, and we elicit early design principles for the training of such systems.
2020 | Evangelos Niforatos et al. | Norwegian University of Science and Technology | Immersion & Presence Research; AI Ethics, Fairness & Accountability | CHI
The Case for Implicit External Human-Machine Interfaces for Autonomous Vehicles
Autonomous vehicles' (AVs) interactions with pedestrians remain an ongoing uncertainty. Several studies have claimed the need for explicit external human-machine interfaces (eHMI) such as lights or displays to replace the lack of eye contact with and explicit gestures from drivers; however, this need is not thoroughly understood. We review literature on explicit and implicit eHMI and discuss results from a field study with a Wizard-of-Oz driverless vehicle that tested pedestrians' reactions in everyday traffic without explicit eHMI. While some pedestrians were surprised by the vehicle, others did not notice its autonomous nature, and all crossed in front of it without explicit signaling. This suggests that pedestrians may not need explicit eHMI in routine interactions: the car's implicit eHMI (its motion) may suffice.
2019 | Dylan James Moore et al. | External HMI (eHMI) — Communication with Pedestrians & Cyclists | AutoUI
Visualizing Implicit eHMI for Autonomous Vehicles
Autonomous vehicles' (AVs) interactions with pedestrians remain an ongoing uncertainty. Studies claim the need for explicit external human-machine interfaces (eHMI) such as lights to replace the lack of eye contact with and explicit gestures from drivers. To further explore this area, we conducted a naturalistic field study using the Ghostdriver protocol to explore how pedestrians react to a simulated driverless vehicle stopping at a crosswalk in real traffic on real roads. All pedestrians crossed in front of the vehicle with little hesitation, even though we did not signal anything beyond the vehicle's stopping motion. A few were surprised by the vehicle's novelty; however, most paid little attention to its autonomous appearance. The video includes demonstrative examples of the kinds of reactions we observed, which we hope will further a dialogue on the role of eHMI in AV-pedestrian interactions.
2019 | Dylan James Moore et al. | External HMI (eHMI) — Communication with Pedestrians & Cyclists | AutoUI
Designing a Guardian Angel: Giving an Automated Vehicle the Possibility to Override its Driver
An automated driving function that can override a human driver during manual driving could act as a guardian angel in the car. It can take over control if it detects an imminent accident and sees a possibility to avoid it. Because of the urgency of such an intervention, there is not enough time to warn the driver in advance. In our simulator study, we collected feedback from users about how they perceived such an intervention, as well as about the general design and user interface of such a system. From an ethical point of view, we discovered discrepancies in our participants' views regarding automated driving functions that need to be addressed in future development.
2018 | Steffen Maurer et al. | Automated Driving Interface & Takeover Design; AI-Assisted Decision-Making & Automation | AutoUI
Workshop: The Industrial Internet of Things: New Perspectives on HCI and CSCW ...
Digital products and services are commonplace in our personal lives, where software and its algorithms provide assistance and amenities. However, interactive systems within industrial settings have yet to catch up with consumer products, especially with regard to the quality of interaction and user experience. With the rise of automation and data exchange on massive scales, the role of human work is challenged and the importance of cooperation is emphasised. New concepts of smart factories, in which machines and software take over parts of the work tasks, are emerging, drastically altering the nature of work in industrial settings from manual labor to increasingly complex tasks. HCI and especially CSCW offer concepts, technical tools, and methods to cope with this disruptive shift towards an Industrial Internet of Things (IIoT).
2018 | Henrik Mucha et al. | CSCW