Varif.ai to Vary and Verify User-Driven Diversity in Scalable Image Generation
Diversity in image generation is essential to ensure fair representations and support creativity in ideation. Hence, many text-to-image models have implemented diversification mechanisms. Yet, after a few iterations of generation, a lack of diversity becomes apparent, because each user has their own diversity goals (e.g., different colors or brands of cars) and there are diverse attributes to be specified. To support user-driven diversity control, we propose Varif.ai, which employs text-to-image and large language models to iteratively i) (re)generate a set of images, ii) verify whether user-specified attributes have sufficient coverage, and iii) vary existing or new attributes. Through an elicitation study, we uncovered user needs for diversity in image generation. A pilot validation showed that Varif.ai made achieving diverse image sets easier. In a controlled evaluation with 20 participants, Varif.ai proved more effective than baseline methods across various scenarios. This supports user control of diversity in image generation for creative ideation and scalable image generation.
2025. Mario Michelessa et al. Tags: Generative AI (Text, Image, Music, Video); Recommender System UX; AI-Assisted Creative Writing. Venue: DIS.
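The generate-verify-vary loop from the abstract can be pictured with a short sketch. This is not Varif.ai's published implementation; `generate_image` and `classify_attribute` are hypothetical stand-ins for a text-to-image model and an LLM/VLM attribute verifier, and the replacement heuristic is illustrative.

```python
from collections import Counter

def generate_image(prompt):
    """Placeholder for a text-to-image model call (hypothetical)."""
    raise NotImplementedError

def classify_attribute(image, attribute):
    """Placeholder for an LLM/VLM query returning the attribute value an image shows (hypothetical)."""
    raise NotImplementedError

def vary_and_verify(base_prompt, attribute, target_values, n_images=8, max_rounds=3):
    images = [generate_image(base_prompt) for _ in range(n_images)]
    for _ in range(max_rounds):
        labels = [classify_attribute(img, attribute) for img in images]
        counts = Counter(labels)
        missing = [v for v in target_values if counts[v] == 0]
        if not missing:
            return images                       # verify: all requested values covered
        for value in missing:                   # vary: steer regeneration toward gaps
            over = counts.most_common(1)[0][0]  # sacrifice an over-represented slot
            idx = labels.index(over)
            images[idx] = generate_image(f"{base_prompt}, {attribute}: {value}")
            labels[idx] = value
            counts = Counter(labels)
    return images
```

The design point the sketch captures is that verification drives regeneration: only attribute values with insufficient coverage trigger new, steered prompts.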
Robust Relatable Explanations of Machine Learning with Disentangled Cue-specific Saliency
Concept-based explanations help users understand the relation between model predictions and meaningful cues. However, under noisy real-world conditions, data perturbations lead to distorted and deviated explanations. We hypothesize that these corruptions affect specific cues rather than all of them, so disentangling them may help reduce model dependency on degraded cues. For the application of explaining emotional speech recognition, we propose RobustRexNet to explain with disentangled and discretized saliency maps for separate speech cues (e.g., loudness, pitch) to improve robustness against noise. Modeling evaluations show that RobustRexNet improved both model performance and explanation faithfulness under noisy and privacy-preserving distortions. User studies further indicate that the robust explanations aligned better with human intuition and improved user emotion labeling under noise. This work contributes toward robust explainable AI to improve user trust under real-world conditions.
2025. Harshavardhan Sunil Abichandani et al. Tags: Explainable AI (XAI); AI Ethics, Fairness & Accountability. Venue: IUI.
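As a rough illustration of cue-specific saliency, the sketch below computes a separate gradient saliency map for each acoustic cue channel of a toy speech classifier. The model, cue layout, and lack of discretization are assumptions for illustration, not RobustRexNet's actual architecture.

```python
import torch

# Hypothetical emotion classifier over stacked cue features:
# input shape (batch, cues, time), e.g. cue 0 = loudness, cue 1 = pitch.
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(2 * 100, 4),   # 2 cues x 100 frames -> 4 emotions
)

def cue_saliency(x, target_class):
    """Gradient saliency, disentangled per cue channel."""
    x = x.clone().requires_grad_(True)
    model(x)[:, target_class].sum().backward()
    sal = x.grad.abs()             # shape (batch, cues, time)
    # RobustRexNet further discretizes its maps; raw magnitudes are kept here.
    return {f"cue_{c}": sal[:, c] for c in range(sal.shape[1])}

x = torch.randn(1, 2, 100)                 # one utterance: 2 cues x 100 frames
maps = cue_saliency(x, target_class=2)     # one saliency map per cue
```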
Diagrammatization and Abduction to Improve AI Interpretability With Domain-Aligned Explanations for Medical Diagnosis
Many visualizations have been developed for explainable AI (XAI), but they often require further reasoning by users to interpret. Investigating XAI for high-stakes medical diagnosis, we propose improving domain alignment with diagrammatic and abductive reasoning to reduce the interpretability gap. We developed DiagramNet to predict cardiac diagnoses from heart auscultation, select the best-fitting hypothesis based on criteria evaluation, and explain with clinically-relevant murmur diagrams. The ante-hoc interpretable model leverages domain-relevant ontology, representation, and reasoning process to increase trust in expert users. In modeling studies, we found that DiagramNet not only provides faithful murmur shape explanations, but also has better performance than baseline models. We demonstrate the interpretability and trustworthiness of diagrammatic, abductive explanations in a qualitative user study with medical students, showing that clinically-relevant, diagrammatic explanations are preferred over technical saliency map explanations. This work contributes insights into providing domain-aligned explanations for user-centric XAI in complex domains.
2025. Brian Y Lim et al. National University of Singapore, Department of Computer Science. Tags: Explainable AI (XAI); Medical & Scientific Data Visualization. Venue: CHI.
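Abductive selection of a best-fitting hypothesis via criteria evaluation can be sketched in miniature. The candidate diagnoses, criteria, and weights below are invented toy values, not DiagramNet's clinical ontology or scoring.

```python
# Toy sketch of abductive hypothesis selection: score each candidate diagnosis
# by how well its expected murmur profile fits the observed evidence, then pick
# the best-fitting hypothesis and keep the per-criterion scores for explanation.

HYPOTHESES = {
    "aortic_stenosis": {"shape": "crescendo-decrescendo", "timing": "systolic"},
    "mitral_regurg":   {"shape": "plateau",               "timing": "systolic"},
    "aortic_regurg":   {"shape": "decrescendo",           "timing": "diastolic"},
}

def criterion_scores(observation, hypothesis):
    """Return per-criterion fit scores in [0, 1] (toy exact-match criteria)."""
    return {
        "shape_fit":  1.0 if observation["shape"] == hypothesis["shape"] else 0.0,
        "timing_fit": 1.0 if observation["timing"] == hypothesis["timing"] else 0.0,
    }

def abduce(observation, weights={"shape_fit": 0.7, "timing_fit": 0.3}):
    scored = {
        name: sum(weights[c] * s for c, s in criterion_scores(observation, h).items())
        for name, h in HYPOTHESES.items()
    }
    best = max(scored, key=scored.get)
    return best, scored   # best-fitting hypothesis plus evidence for explanation

print(abduce({"shape": "decrescendo", "timing": "diastolic"}))
```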
Incremental XAI: Memorable Understanding of AI with Incremental Explanations
Many explainable AI (XAI) techniques strive for interpretability by providing concise salient information, such as sparse linear factors. However, users either only see inaccurate global explanations, or highly-varying local explanations. We propose to provide more detailed explanations by leveraging the human cognitive capacity to accumulate knowledge by incrementally receiving more details. Focusing on linear factor explanations (factors × values = outcome), we introduce Incremental XAI to automatically partition explanations for general and atypical instances by providing Base + Incremental factors to help users read and remember more faithful explanations. Memorability is improved by reusing base factors and reducing the number of factors shown in atypical cases. In modeling, formative, and summative user studies, we evaluated the faithfulness, memorability and understandability of Incremental XAI against baseline explanation methods. This work contributes towards more usable explanations that users can better ingrain to facilitate intuitive engagement with AI.
2024. Jessica Y Bo et al. University of Toronto, National University of Singapore. Tags: Explainable AI (XAI); Algorithmic Transparency & Auditability; Algorithmic Fairness & Bias. Venue: CHI.
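The factors × values = outcome form and the Base + Incremental partition can be made concrete with a small worked sketch. The housing-price factors and numbers below are invented for illustration, not taken from the paper's models.

```python
# Worked sketch of Base + Incremental linear factor explanations: a shared base
# model explains typical instances, and atypical instances reuse the same base
# factors plus a small increment, so fewer new numbers must be remembered.

base = {"size": 100, "rooms": 5000}            # base factors (weight per unit)
incremental = {"waterfront": {"size": 40}}     # extra weight for an atypical group

def predict(x, group=None):
    factors = dict(base)
    if group in incremental:                   # reuse base factors, add increment
        for k, delta in incremental[group].items():
            factors[k] = factors.get(k, 0) + delta
    outcome = sum(factors[k] * x.get(k, 0) for k in factors)
    return outcome, factors                    # factors x values = outcome

home = {"size": 80, "rooms": 3}
price, used = predict(home)                    # typical: base factors only
wf_price, wf_used = predict(home, "waterfront")  # atypical: base + increment
print(price, wf_price)                         # 23000 vs 26200 in this toy example
```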
RePrompt: Automatic Prompt Editing to Refine AI-Generative Art Towards Precise Expressions
Generative AI models have shown impressive ability to produce images with text prompts, which could benefit creativity in visual art creation and self-expression. However, it is unclear how precisely the generated images express contexts and emotions from the input texts. We explored the emotional expressiveness of AI-generated images and developed RePrompt, an automatic method to refine text prompts toward precise expression of the generated images. Inspired by crowdsourced editing strategies, we curated intuitive text features, such as the number and concreteness of nouns, and trained a proxy model to analyze the feature effects on the AI-generated image. With model explanations of the proxy model, we curated a rubric to adjust text prompts to optimize image generation for precise emotion expression. We conducted simulation and user studies, which showed that RePrompt significantly improves the emotional expressiveness of AI-generated images, especially for negative emotions.
2023. Yunlong Wang et al. National University of Singapore. Tags: Generative AI (Text, Image, Music, Video); AI-Assisted Creative Writing. Venue: CHI.
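The feature-then-rubric idea can be sketched with two small functions: score intuitive text features of a prompt, then apply a rubric-style edit toward feature values expected to improve expressiveness. The toy concreteness lexicon, the noun heuristic, and the threshold are illustrative stand-ins, not RePrompt's curated features or trained proxy model.

```python
# Sketch of the RePrompt idea: compute intuitive text features, then edit the
# prompt per a rubric (here: drop abstract nouns, keep concrete ones).

CONCRETENESS = {"dog": 4.9, "freedom": 1.8, "rain": 4.6, "sadness": 2.0}  # toy lexicon

def prompt_features(prompt):
    words = prompt.lower().split()
    nouns = [w for w in words if w in CONCRETENESS]   # crude noun proxy
    return {
        "num_nouns": len(nouns),
        "mean_concreteness": (
            sum(CONCRETENESS[w] for w in nouns) / len(nouns) if nouns else 0.0
        ),
    }

def rubric_edit(prompt, min_concreteness=3.5):
    """Rubric-style edit: remove nouns below a concreteness threshold."""
    kept = [w for w in prompt.lower().split()
            if w not in CONCRETENESS or CONCRETENESS[w] >= min_concreteness]
    return " ".join(kept)

print(prompt_features("rain sadness dog"))   # 3 nouns, mean concreteness ~3.83
print(rubric_edit("rain sadness dog"))       # -> "rain dog"
```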
Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning
Model explanations such as saliency maps can improve user trust in AI by highlighting important features for a prediction. However, these become distorted and misleading when explaining predictions of images that are subject to systematic error (bias) by perturbations and corruptions. Furthermore, the distortions persist despite model fine-tuning on images biased by different factors (blur, color temperature, day/night). We present Debiased-CAM to recover explanation faithfulness across various bias types and levels by training a multi-input, multi-task model with auxiliary tasks for explanation and bias level predictions. In simulation studies, the approach not only enhanced prediction accuracy, but also generated highly faithful explanations about these predictions as if the images were unbiased. In user studies, debiased explanations improved user task performance, perceived truthfulness and perceived helpfulness. Debiased training can provide a versatile platform for robust performance and explanation faithfulness for a wide range of applications with data biases.
2022. Haimo Zhang et al. School of Computing, National University of Singapore. Tags: Explainable AI (XAI); Algorithmic Transparency & Auditability. Venue: CHI.
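The multi-task training idea can be expressed as a combined objective. The sketch below is a minimal illustration assuming a model with three heads (class logits, bias-level regression, and a CAM output); the head layout and loss weights are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Sketch of a Debiased-CAM-style multi-task objective: predict the label,
# regress the bias level, and train the CAM on the perturbed image to match
# the CAM computed from the same image without perturbation.

def debiased_cam_loss(model, x_biased, y, bias_level, cam_unbiased,
                      w_task=1.0, w_bias=0.1, w_cam=1.0):
    logits, bias_pred, cam = model(x_biased)       # multi-output forward pass
    loss_task = F.cross_entropy(logits, y)         # primary prediction task
    loss_bias = F.mse_loss(bias_pred, bias_level)  # auxiliary: bias level
    loss_cam = F.mse_loss(cam, cam_unbiased)       # auxiliary: faithful CAM
    return w_task * loss_task + w_bias * loss_bias + w_cam * loss_cam
```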
Interpretable Directed Diversity: Leveraging Model Explanations for Iterative Crowd Ideation
Feedback in creativity support tools can help crowdworkers to improve their ideations. However, current feedback methods require human assessment from facilitators or peers, which is not scalable to large crowds. We propose Interpretable Directed Diversity to automatically predict ideation quality and diversity scores, and to provide AI explanations (Attribution, Contrastive Attribution, and Counterfactual Suggestions) as feedback on why ideations were scored low and how to get higher scores. These explanations provide multi-faceted feedback as users iteratively improve their ideations. We conducted formative and controlled user studies to understand the usage and usefulness of explanations for improving ideation diversity and quality. Users appreciated that explanation feedback helped focus their efforts and provided directions for improvement. As a result, explanations improved diversity compared to no feedback or feedback with scores only. Hence, our approach opens opportunities for explainable AI towards scalable, rich feedback for iterative crowd ideation and creativity support tools.
2022. Yunlong Wang et al. National University of Singapore. Tags: Generative AI (Text, Image, Music, Video); Human-LLM Collaboration; Crowdsourcing Task Design & Quality Control. Venue: CHI.
Towards Relatable Explainable AI with the Perceptual Process
Machine learning models need to provide contrastive explanations, since people often seek to understand why a puzzling prediction occurred instead of some expected outcome. Current contrastive explanations are rudimentary comparisons between examples or raw features, which remain difficult to interpret, since they lack semantic meaning. We argue that explanations must be more relatable to other concepts, hypotheticals, and associations. Inspired by the perceptual process from cognitive psychology, we propose the XAI Perceptual Processing framework and RexNet model for relatable explainable AI with Contrastive Saliency, Counterfactual Synthetic, and Contrastive Cues explanations. We investigated the application of vocal emotion recognition, and implemented a modular multi-task deep neural network to predict and explain emotions from speech. From think-aloud and controlled studies, we found that counterfactual explanations were useful and further enhanced with semantic cues, but not saliency explanations. This work provides insights into providing and evaluating relatable contrastive explainable AI for perception applications.
2022. Wencan Zhang et al. School of Computing, National University of Singapore. Tags: Eye Tracking & Gaze Interaction; Explainable AI (XAI). Venue: CHI.
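One of the named explanation types, Contrastive Saliency, lends itself to a compact sketch: the saliency for the predicted class minus the saliency for a contrast class, highlighting what makes the input "sad rather than neutral". The stand-in classifier and gradient saliency below are illustrative assumptions, not RexNet's modules.

```python
import torch

# Toy classifier over a prosody feature vector (stand-in for a speech model).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(100, 4))

def saliency(x, cls):
    """Gradient saliency of class `cls` with respect to the input."""
    x = x.clone().requires_grad_(True)
    model(x)[:, cls].sum().backward()
    return x.grad.detach()

def contrastive_saliency(x, predicted, contrast):
    """Features that favor the prediction over the contrast outcome."""
    return saliency(x, predicted) - saliency(x, contrast)

x = torch.randn(1, 100)                       # e.g., one utterance's features
cs = contrastive_saliency(x, predicted=2, contrast=0)
```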
Directed Diversity: Leveraging Language Embedding Distances for Collective Creativity in Crowd Ideation
Crowdsourcing can collect many diverse ideas by prompting ideators individually, but this can generate redundant ideas. Prior methods reduce redundancy by presenting peers' ideas or peer-proposed prompts, but these require much human coordination. We introduce Directed Diversity, an automatic prompt selection approach that leverages language model embedding distances to maximize diversity. Ideators can be directed towards diverse prompts and away from prior ideas, thus improving their collective creativity. Since there are diverse metrics of diversity, we present a Diversity Prompting Evaluation Framework that consolidates metrics from several research disciplines to analyze the ideation chain: prompt selection, prompt creativity, prompt-ideation mediation, and ideation creativity. Using this framework, we evaluated Directed Diversity in a simulation study and four user studies for the use case of crowdsourcing motivational messages to encourage physical activity. We show that automated diverse prompting can variously improve collective creativity across many nuanced metrics of diversity.
2021. Samuel Rhys Cox et al. National University of Singapore. Tags: Generative AI (Text, Image, Music, Video); Crowdsourcing Task Design & Quality Control. Venue: CHI.
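Selecting prompts that are far from prior ideas in embedding space can be illustrated with greedy farthest-point selection. This is a generic sketch, not the paper's exact selection procedure; real embeddings would come from a language model, with random vectors standing in here.

```python
import numpy as np

def select_diverse(candidates, prior, k):
    """Greedily pick k candidates far from prior ideas and from each other."""
    chosen, pool = [], list(range(len(candidates)))
    anchors = list(prior)                        # steer away from prior ideations
    for _ in range(k):
        # Distance of each remaining candidate to its nearest anchor.
        dists = [min(np.linalg.norm(candidates[i] - a) for a in anchors)
                 if anchors else np.inf for i in pool]
        best = pool.pop(int(np.argmax(dists)))   # farthest-point choice
        chosen.append(best)
        anchors.append(candidates[best])         # chosen prompts repel later picks
    return chosen

candidates = np.random.randn(50, 384)            # 50 candidate prompt embeddings
prior = [np.random.randn(384) for _ in range(5)] # embeddings of prior ideas
print(select_diverse(candidates, prior, k=3))
```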
TableChat: Mobile Food Journaling to Facilitate Family Support for Healthy Eating
Support from family members is an important determinant of health. In this work, we probe opportunities for facilitating family support with TableChat, a chat-based mobile application for food journaling. Leveraging food as a test case of family support, TableChat virtually extends the experience of bonding over the dinner table. We surveyed 158 people about their existing family support practices and deployed TableChat with 10 families in the field. We found that tangible support was the most common form of support shared in TableChat and also the most appreciated by participants. However, we found that participants valued not only supportive actions taken by their family members, but also those deliberately not taken (e.g., not buying junk food). Finally, families reported that journaling meals eaten apart aided the exchange of support, satisfied curiosity, and provided a "check-in" that everything was alright, whereas journaling meals eaten together felt redundant. We conclude with a framework that illustrates how informatics tools can be designed to complement rather than compete with existing family interactions.
2018. Kai Lukoff et al. Tags: Chatting and Livestreaming. Venue: CSCW.
Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda
Advances in artificial intelligence, sensors and big data management have far-reaching societal impacts. As these systems augment our everyday lives, it becomes increasingly important for people to understand them and remain in control. We investigate how HCI researchers can help to develop accountable systems by performing a literature analysis of 289 core papers on explanations and explainable systems, as well as 12,412 citing papers. Using topic modeling, co-occurrence and network analysis, we mapped the research space from diverse domains, such as algorithmic accountability, interpretable machine learning, context-awareness, cognitive psychology, and software learnability. We reveal fading and burgeoning trends in explainable systems, and identify domains that are closely connected or mostly isolated. The time is ripe for the HCI community to ensure that the powerful new autonomous systems have intelligible interfaces built-in. From our results, we propose several implications and directions for future research towards this goal.
2018. Ashraf Abdul et al. National University of Singapore. Tags: Explainable AI (XAI); AI-Assisted Decision-Making & Automation; Algorithmic Transparency & Auditability. Venue: CHI.
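The topic-modeling step of such a literature analysis can be sketched generically with scikit-learn. The corpus below is a three-line stand-in for the papers' abstracts, and the parameters are illustrative, not those used in the paper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Generic sketch of the topic-modeling step for mapping a literature corpus.
abstracts = [
    "explainable machine learning saliency interpretability",
    "context aware systems intelligibility sensors",
    "algorithmic accountability transparency audit",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)                       # term-count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for t, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-3:][::-1]]  # top terms per topic
    print(f"topic {t}: {top}")
```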