Domain Experience and Expertise in Explainable AI Applications: A Bearing Fault Diagnosis Case Study

The importance of human-centred explainable artificial intelligence (XAI) has been widely recognised, leading to a growing focus on users and practitioners during the explanation design and deployment processes. Previous studies have identified that users with different domain expertise may have diverse needs for explainability. However, current XAI research often conflates a user's practical experience with domain expertise, ignoring the distinctions between the two; generally, experience relates to acquiring skill and insight through active participation or observation, while domain expertise denotes a high level of (often highly local and/or specific) knowledge. This paper investigates the impact of users' practical experience and domain expertise on how AI recommendations are considered in a high-risk decision-making context, using the example of ball-bearing fault diagnosis in the manufacturing sector. As an interdisciplinary team of human-computer interaction (HCI) researchers and mechanical engineers, we co-design an XAI-based simulated ball-bearing fault diagnosis task. We conduct task-led interviews with several professionals, structured around three distinct decision processes, and use an innovative sketch-based exercise to gather data to demonstrate how their decision-making behaviours change under ML recommendations and AI explanations. Our results show that highly experienced and knowledgeable practitioners understand but rely less on LIME-based explanations, while those with high experience but low expertise are more easily misled. Practitioners with high expertise but low experience trust XAI but struggle to use LIME-based explanations effectively. Based on these observations, we reflect on our methods and argue for considering both domain expertise and practical experience when designing and deploying AI explanations.

2025 · Zibin Zhao et al. · Explainable AI (XAI) · CSCW
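The LIME-based explanations discussed above attribute a model's prediction to input features by fitting a local, proximity-weighted linear surrogate around the instance being explained. As a minimal illustration of that idea only (not the paper's actual pipeline, and using a hypothetical toy classifier rather than a real diagnosis model):

```python
import numpy as np

def lime_style_explanation(predict_fn, x, n_samples=500, width=0.75, seed=0):
    """Minimal LIME-style local surrogate: perturb x, weight samples by
    proximity, fit a weighted linear model; its coefficients serve as
    per-feature attributions for the prediction at x."""
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = predict_fn(X)                        # e.g. probability of a fault class
    d = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(d ** 2) / width ** 2)       # exponential proximity kernel
    A = np.hstack([X, np.ones((n_samples, 1))])  # linear model with intercept
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]                          # drop intercept: attributions

# toy "diagnosis model": fault probability driven mainly by feature 0
predict = lambda X: 1 / (1 + np.exp(-(3 * X[:, 0] + 0.2 * X[:, 1])))
attr = lime_style_explanation(predict, np.array([0.1, -0.2, 0.05]))
```

With this toy model, the attribution for feature 0 dominates, mirroring how a LIME explanation surfaces the features a classifier leans on locally.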
ConvBoost: Boosting ConvNets for Sensor-based Activity Recognition

Human activity recognition (HAR) is one of the core research themes in ubiquitous and wearable computing. With the shift to deep learning (DL) based analysis approaches, it has become possible to extract high-level features and perform classification in an end-to-end manner. Despite their promising overall capabilities, DL-based HAR may suffer from overfitting due to the notoriously small, often inadequate, amounts of labeled sample data that are available for typical HAR applications. In response to such challenges, we propose ConvBoost, a novel, three-layer, structured model architecture and boosting framework for convolutional network based HAR. Our framework generates additional training data from three different perspectives for improved HAR, aiming to alleviate the shortage of labeled training data in the field. Specifically, with the introduction of three conceptual layers (Sampling Layer, Data Augmentation Layer, and Resilient Layer), we develop three "boosters" (R-Frame, Mix-up, and C-Drop) to enrich the per-epoch training data by dense-sampling, synthesizing, and simulating, respectively. These new conceptual layers and boosters, which are universally applicable to any kind of convolutional network, have been designed based on the characteristics of the sensor data and the concept of frame-wise HAR. In our experimental evaluation on three standard benchmarks (Opportunity, PAMAP2, GOTOV) we demonstrate the effectiveness of our ConvBoost framework for HAR applications based on variants of convolutional networks: vanilla CNN, ConvLSTM, and Attention Models. We achieved substantial performance gains for all of them, which suggests that the proposed approach is generic and can serve as a practical solution for boosting the performance of existing ConvNet-based HAR models. This is an open-source project; the code can be found at https://github.com/sshao2013/ConvBoost and the paper at https://dl.acm.org/doi/10.1145/3596234.

2023 · Shuai Shao et al. · Human Pose & Activity Recognition · Context-Aware Computing · UbiComp
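Of the three boosters, Mix-up builds on a previously published augmentation idea: blend pairs of training windows and their labels using a Beta-distributed mixing coefficient. A minimal sketch for sensor windows, assuming one-hot labels and a (batch, time, channels) layout (the paper's exact formulation may differ):

```python
import numpy as np

def mixup_windows(X, y, alpha=0.4, seed=0):
    """Mix-up style augmentation for sensor data.
    X: (batch, time, channels) sensor windows
    y: (batch, classes) one-hot activity labels
    Returns convex combinations of shuffled pairs of windows and labels."""
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    idx = rng.permutation(len(X))         # random pairing of windows
    X_mix = lam * X + (1 - lam) * X[idx]
    y_mix = lam * y + (1 - lam) * y[idx]  # soft labels preserve probability mass
    return X_mix, y_mix

# toy batch: 8 windows of 100 samples from a 3-axis accelerometer, 4 classes
X = np.random.randn(8, 100, 3)
y = np.eye(4)[np.random.randint(0, 4, 8)]
X_mix, y_mix = mixup_windows(X, y)
```

Applied per epoch, this synthesizes fresh training windows from the same labeled pool, which is the shortage-of-labels problem the booster targets.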
From Asymptomatics to Zombies: Visualization-Based Education of Disease Modeling

Throughout the COVID-19 pandemic, visualizations became commonplace in public communications to help people make sense of the world and the reasons behind government-imposed restrictions. Though the adult population was the main target of these messages, children were also affected by restrictions, through not being able to see friends and through virtual schooling. However, through these daily models and visualizations, the pandemic response provided a way for children to understand what data scientists really do and opened new routes for engagement with STEM subjects. In this paper, we describe the development of an interactive and accessible visualization tool to be used in workshops for children to explain computational modeling of diseases, in particular COVID-19. We detail our design decisions based on approaches evidenced to be effective and engaging, such as unplugged activities and interactivity. We share reflections and learnings from delivering these workshops to 140 children and assess their effectiveness.

2023 · Graham McNeill et al. · University of Warwick, King's College London · Medical & Scientific Data Visualization · STEM Education & Science Communication · CHI
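Disease-modeling education of this kind typically builds on compartmental models such as SIR, in which the population is split into susceptible, infected, and recovered fractions. A minimal discrete-time SIR sketch (illustrative only, with assumed parameters; not necessarily the model used in the workshop tool):

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """One day of a discrete SIR model over population fractions.
    beta: transmission rate, gamma: recovery rate (both assumed values)."""
    new_inf = beta * s * i   # susceptibles infected today
    new_rec = gamma * i      # infected who recover today
    return s - new_inf, i + new_inf - new_rec, r + new_rec

# start with 1% of the population infected and simulate 160 days
s, i, r = 0.99, 0.01, 0.0
history = [(s, i, r)]
for _ in range(160):
    s, i, r = sir_step(s, i, r)
    history.append((s, i, r))
peak_i = max(inf for _, inf, _ in history)
```

The epidemic curve children see in such visualizations is just `i` plotted over time: it rises, peaks, and falls as susceptibles are depleted.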
Beyond Skin Deep: Generative Co-Design for Aesthetic Prosthetics

There is a trend for handcrafting bespoke prostheses that embody their wearers’ aesthetic tastes and identities. We explore how this might be extended by enabling users to co-design with algorithms. We report a design-led exploration (Figure 1) in which professional disabled dancers danced with a generative design algorithm to create personalised designs called aesthetic seeds. Further algorithms applied these to prosthetic greaves, rendering them in various materials before optimising for additive manufacture. Interviews with our dancers revealed that the aesthetics of prosthetics reach beyond visual decoration to encompass form, function, bodily experience, body image, and identity; that interactions with generative design algorithms can harness people's expressive and aesthetic skills; and that we must redesign supporting technologies for diverse bodies. We generalise our findings into a process for how people may co-design 3D printable products with algorithms.

2023 · Feng Zhou et al. · University of Nottingham · Shape-Changing Interfaces & Soft Robotic Materials · Desktop 3D Printing & Personal Fabrication · Customizable & Personalized Objects · CHI
Microblog Analysis as a Programme of Work

Inspired by a European project, PHEME, which requires the close analysis of Twitter-based conversations in order to look at the spread of rumors via social media, this paper has two objectives. The first is to take the analysis of microblogs back to first principles and lay out what microblog analysis should look like as a foundational programme of work. The second is to describe how this is of fundamental relevance to Human-Computer Interaction’s interest in grasping the constitution of people’s interactions with technology within the social order. Our critical finding is that, despite some surface similarities, Twitter-based conversations are a wholly distinct social phenomenon requiring an independent analysis that treats them as unique phenomena in their own right, rather than as another species of conversation that can be handled within the framework of existing Conversation Analysis. This motivates the argument that Microblog Analysis be established as a foundationally independent programme, examining the organizational characteristics of microblogging from the ground up. We articulate how aspects of this approach have already begun to shape our design activities within the PHEME project.

2018 · Peter Tolmie et al. · Universität Siegen · Social Platform Design & User Behavior · Misinformation & Fact-Checking · CHI
Evaluating How Interfaces Influence the User Interaction with Fully Autonomous Vehicles

With increasing automation, occupants of fully autonomous vehicles are likely to be completely disengaged from the driving task. However, even with no driving involved, there are still activities that will require interfaces between the vehicle and passengers. This study evaluated different configurations of screens providing operational-related information to occupants for tracking the progress of journeys. Surveys and interviews were used to measure trust, usability, workload and experience after users were driven in an autonomous low-speed pod. Results showed that participants want to monitor the state of the vehicle and see details about the ride, including a map of the route and related information. There was a preference for this information to be displayed via an onboard touchscreen device combined with an overhead letterbox display rather than via a smartphone-based interface. This paper provides recommendations for the design of devices with the potential to improve the user interaction with future autonomous vehicles.

2018 · Luis Oliveira et al. · Automated Driving Interface & Takeover Design · Motion Sickness & Passenger Experience · AutoUI
Selection Facilitation Schemes for Predictive Touch with Mid-air Pointing Gestures in Automotive Displays

Predictive touch is an HMI technology that relies on inferring, early in the pointing gesture, the interface item a driver or passenger intends to select on an in-vehicle display [1, 2]. It simplifies and expedites the selection task, thereby reducing the associated interaction effort. This paper presents two studies on drivers using predictive touch and focuses on evaluating the best means to facilitate selecting the intended on-display item. These include immediate mid-air selection, with the system autonomously auto-selecting the predicted interface component; hover/dwell; and drivers pressing a button on the steering wheel to execute the selection action. These schemes were arrived at in an expert workshop study with twelve participants. The results of the subsequent evaluation study with twenty-four participants demonstrate, using quantitative and qualitative measures, that immediate mid-air selection is a promising assistive scheme in which drivers need not touch a physical surface to select interface components, thus enabling touch-free control.

2018 · Bashar I. Ahmad et al. · Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS) · Hand Gesture Recognition · AutoUI