Will Health Experts Adopt a Clinical Decision Support System for Game-Based Digital Biomarkers? Investigating the Impact of Different Explanations on Perceived Ease-of-Use, Perceived Usefulness, and Trust

This paper explores the adoption of a clinical decision support system (cDSS) utilizing game-based digital biomarkers for diagnosing mild cognitive impairment (MCI). Specifically, it investigates how different explanation methods, with a focus on data-centric explanations, impact perceived ease-of-use, perceived usefulness, and trust among healthcare professionals (HCPs). Through a qualitative study with 12 HCPs, we assess their interactions with an explainable AI (XAI)-enriched cDSS. The findings indicate that HCPs are open to adopting XAI-enriched cDSS to communicate the outcomes of game-based digital biomarkers. HCPs preferred to receive key diagnostic information in an easily digestible format. Both local explanations of intra-personal evolutionary data and a global overview of normative data were found to be valuable for interpreting digital biomarkers. HCPs tended to trust the machine learning algorithms as a black box, but they considered the dataset used for training the model and the outcome prediction to be crucial. Therefore, presenting the uncertainty alongside the prediction was deemed important. These insights underscore the importance of designing cDSS tools that foster trust through clear, actionable explanations, paving the way for improved decision-making in clinical contexts.

2025 · Chen Yu et al. · Explainable AI (XAI); Mental Health Apps & Online Support Communities · IUI

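The abstract's recommendation to present uncertainty alongside the prediction could look roughly like the sketch below. This is a generic illustration, not the study's cDSS: the random-forest model, the synthetic "game-derived" features, and the tree-agreement proxy for uncertainty are all assumptions.

```python
# Hypothetical sketch: surfacing prediction uncertainty next to a cDSS outcome.
# Features and thresholds are invented; they are not the study's biomarker set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))  # stand-ins for game-derived features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def report(sample):
    """Return a clinician-facing line: predicted class plus uncertainty cues."""
    sample = sample.reshape(1, -1)
    proba = model.predict_proba(sample)[0]
    positive = proba[1] >= 0.5
    label = "MCI indicators present" if positive else "no MCI indicators"
    # Fraction of trees agreeing with the ensemble: a rough uncertainty proxy.
    votes = np.array([tree.predict(sample)[0] for tree in model.estimators_])
    agreement = votes.mean() if positive else 1 - votes.mean()
    return f"{label} (confidence {max(proba):.0%}, tree agreement {agreement:.0%})"

print(report(X[0]))
```
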
Explanatory Debiasing: Involving Domain Experts in the Data Generation Process to Mitigate Representation Bias in AI Systems

Representation bias is one of the most common types of biases in artificial intelligence (AI) systems, causing AI models to perform poorly on underrepresented data segments. Although AI practitioners use various methods to reduce representation bias, their effectiveness is often constrained by insufficient domain knowledge in the debiasing process. To address this gap, this paper introduces a set of generic design guidelines for effectively involving domain experts in representation debiasing. We instantiated our proposed guidelines in a healthcare-focused application and evaluated them through a comprehensive mixed-methods user study with 35 healthcare experts. Our findings show that involving domain experts can reduce representation bias without compromising model accuracy. Based on our findings, we also offer recommendations for developers to build robust debiasing systems guided by our generic design guidelines, ensuring more effective inclusion of domain experts in the debiasing process.

2025 · Aditya Bhattacharya et al. (KU Leuven, Computer Science) · AI Ethics, Fairness & Accountability; Algorithmic Fairness & Bias · CHI

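As a rough illustration of the mechanical core of representation debiasing (not the authors' guideline-based system), the sketch below oversamples an underrepresented segment to parity and compares per-segment accuracy before and after. The segment attribute, data, and resampling choice are invented for the example; in the paper, domain experts steer and validate the data generation itself.

```python
# A minimal sketch, assuming representation bias shows up as one small segment.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

rng = np.random.default_rng(1)
n_major, n_minor = 900, 100                        # representation bias: 9:1 split
X = rng.normal(size=(n_major + n_minor, 3))
segment = np.array([0] * n_major + [1] * n_minor)  # 1 = underrepresented group
y = (X[:, 0] + 0.8 * segment * X[:, 1] > 0).astype(int)  # group-dependent signal

def accuracy_by_segment(model):
    return {g: round(model.score(X[segment == g], y[segment == g]), 3) for g in (0, 1)}

baseline = LogisticRegression().fit(X, y)

# Resample the minority segment to parity; experts would decide *which*
# segment to augment and vet the generated samples.
idx = np.where(segment == 1)[0]
boost = resample(idx, n_samples=n_major - n_minor, random_state=1)
debiased = LogisticRegression().fit(np.vstack([X, X[boost]]),
                                    np.concatenate([y, y[boost]]))

print("baseline:", accuracy_by_segment(baseline))
print("debiased:", accuracy_by_segment(debiased))
```
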
EXMOS: Explanatory Model Steering through Multifaceted Explanations and Data Configurations

Explanations in interactive machine-learning systems facilitate debugging and improving prediction models. However, the effectiveness of various global model-centric and data-centric explanations in aiding domain experts to detect and resolve potential data issues for model improvement remains unexplored. This research investigates the influence of data-centric and model-centric global explanations in systems that support healthcare experts in optimising models through automated and manual data configurations. We conducted quantitative (n=70) and qualitative (n=30) studies with healthcare experts to explore the impact of different explanations on trust, understandability and model improvement. Our results reveal the insufficiency of global model-centric explanations for guiding users during data configuration. Although data-centric explanations enhanced understanding of post-configuration system changes, a hybrid fusion of both explanation types demonstrated the highest effectiveness. Based on our study results, we also present design implications for effective explanation-driven interactive machine-learning systems.

2024 · Aditya Bhattacharya et al. (KU Leuven) · Explainable AI (XAI); AI-Assisted Decision-Making & Automation · CHI

The effect of personalizing a psychotherapy conversational agent on therapeutic bond and usage intentions

While 33.6% of college students suffer from mental health problems, only 24.6% of these students with symptoms would seek professional help, due to their personal attitudes or the costs associated with therapy. Psychotherapy chatbots may offer a solution as they are always available, anonymous, and cost-effective. Research has shown that these chatbots can significantly reduce symptoms of anxiety and depression. However, there is a lack of understanding about the personalization preferences of users and the effects of personalization on health outcomes. To investigate this, we developed a personalizable psychotherapy chatbot designed to provide personalized help. In a randomized controlled trial (n=54), participants were assigned to either a personalizable condition or a non-personalizable control condition. After 1 week of usage, participants had a significantly higher therapeutic bond with the personalized version compared to the baseline. In fact, the therapeutic bond was similar to that between a psychologist and their client. This is a promising result, as a high therapeutic bond has been linked to therapeutic success in psychotherapy. Participants reported that the therapy style, personality, and avatar were the most important personalizable aspects of the chatbot. Participants also liked the chatbot's usage of their name and the transparency about what the chatbot had learned about them. These features are likely important for establishing a strong therapeutic bond with users. However, the ability to personalize the chatbot had no impact on the usage intentions of the participants. This can be explained by the fact that users from both conditions equally reported that the chatbot was able to help them with their mental health. 53 participants also indicated that they would be willing to use a psychotherapy chatbot when integrated with a human therapist. These findings indicate the potential of psychotherapy chatbots and the need for further research on their integration with traditional psychotherapy.

2024 · Wout Vossen et al. · Conversational Chatbots; Mental Health Apps & Online Support Communities · IUI

Towards Tangible Algorithms: Exploring the Experiences of Tangible Interactions with Movie Recommender Algorithms

Artificial Intelligence (AI) supports many of our everyday activities and decisions. However, personalized algorithmic recommendations often produce adverse experiences due to a lack of awareness, control, or transparency. While research has directed solutions toward graphical user interfaces (GUIs), there are no explorations of Tangible User Interfaces (TUIs) to improve the experience with such systems, despite existing academic arguments in favor of this exploration. Therefore, centering on transparency and control, we analyzed how 18 users of movie recommender systems perceived four different TUIs using individual co-design sessions and post-interview questionnaires. Through thematic analysis, we identified seven design considerations while designing TUIs to interact with algorithmic movie recommender systems: (1) Distinctions between TUIs and GUIs; (2) TUIs replacing predominant interfaces; (3) Preference for single-device TUIs; (4) The relevance of granular control for TUIs; (5) Apparent transparency limitations of TUIs; (6) TUIs and algorithmic social computing; and (7) Overview of specific design choices, including advantages and disadvantages of soft, hard, rounded, cubic, and humanoid interfaces. These findings inspired Recffy: the first functional TUI designed to enhance awareness and control in personalized movie recommendations. Based on this study, we propose the concept of Tangible Algorithms: TUIs dedicated to enhancing the interaction of algorithmic systems and their profiling processes or decisions in a specific context. Furthermore, we describe the relevance of tangible algorithms and design guidelines to promote them in diverse AI contexts. Finally, we invite the HCI and CSCW community to continue exploring tangible algorithms to address the interaction with algorithmic systems, including the collaborative and social computing dynamics they can promote in diverse AI contexts.

2022 · Oscar Alvarado et al. · Users' Understanding of Algorithms · CSCW

A Systematic Review of Interaction Design Strategies for Group Recommendation Systems

Systems involving artificial intelligence (AI) are protagonists in many everyday activities. Moreover, designers are increasingly implementing these systems for groups of users in various social and cooperative domains. Unfortunately, research on personalized recommendation systems often reports negative experiences due to a lack of diversity, control, or transparency. Providing a meta-analysis of the interaction design strategies for group recommendation systems (GRS) offers designers and practitioners a point of departure to address these issues and imagine new interaction possibilities for this context. Therefore, we systematically reviewed the ACM, IEEE, and Scopus digital libraries to identify GRS interface designs, resulting in a final corpus of 142 academic papers. After a systematic coding process, we used descriptive statistics and thematic analysis to uncover the current state of the art regarding interaction design strategies for GRS in six areas: (1) application domains; (2) devices chosen to implement the systems; (3) prototype fidelity; (4) strategies for profile transparency, justification, control, and diversity; (5) strategies for group formation and final group consensus; and (6) evaluation methods applied in user studies during the design process. Based on our findings, we present an exhaustive typology of interaction design strategies for GRS and a set of research opportunities to foster human-centered interfaces for personalized recommendations in cooperative and social computing contexts.

2022 · Oscar Alvarado et al. · Online Platforms · CSCW

Explaining Recommendations in E-Learning, Effects on Adolescents' Trust

Recommender systems are increasingly supporting explanations to increase trust in their recommendations. However, studies on explaining recommendations typically target adults in low-risk e-commerce or media contexts, and using explanations in e-learning has received little research attention. To address these limits, we investigated how explanations affect adolescents' trust in an exercise recommender on a mathematical e-learning platform. In a randomized controlled experiment with 37 adolescents, we compared real explanations with placebo and no explanations. Our results show that explanations can significantly increase initial trust when measured as a multidimensional construct of competence, benevolence, integrity, intention to return, and perceived transparency. Yet, as not all adolescents in our study attached equal importance to explanations, it remains important to tailor them. To study the impact of tailored explanations, we advise researchers to include placebo baselines in their studies as they may give more insights into how much transparency people actually need, compared to no-explanation baselines.

2022 · Jeroen Ooge et al. · Multilingual & Cross-Cultural Voice Interaction; Recommender System UX; Universal & Inclusive Design · IUI

Explaining Call Recommendations in Nursing Homes: A User-Centered Design Approach for Interacting with Knowledge-Based Health Decision Support Systems

Recommender systems are increasingly used in high-risk application domains, including healthcare. It has been shown that explanations are crucial in this context to support decision-making. In this paper, we explore how to explain call recommendations to nurses in nursing homes, providing insight into call priority, notifications, and resident information that may contribute to residents' safety and quality of care. We present the design and implementation of a recommender engine, and a mobile application designed to support call recommendations and explanations of these recommendations. More specifically, we report on the results of a user-centered design approach with residents (N=12) and healthcare professionals (N=4), and a final evaluation (N=12) after four months of deployment. The results show that our design approach provides a valuable tool for more accurate and efficient decision-making. The overall system encourages nursing home staff to provide feedback and annotate, resulting in more confidence in the system. We discuss usability issues, challenges and reflections to be considered in future health recommender systems.

2022 · Francisco Gutiérrez et al. · Explainable AI (XAI); AI-Assisted Decision-Making & Automation · IUI

Visual, textual or hybrid: the effect of user experience on different explanations

As the use of AI algorithms keeps rising, so does the need for their transparency and accountability. However, literature often adopts a one-size-fits-all approach for developing explanations when in practice, the type of explanations needed depends on the type of end-user. This research looks at user expertise as a variable to see how different levels of expertise influence the understanding of explanations. The first iteration consists of developing two common types of explanations (visual and textual explanations) that explain predictions made by a general class of predictive model learners. These explanations are then evaluated by users of different expertise backgrounds to compare the understanding and ease-of-use of each type of explanation with respect to the different expertise groups. Results show strong differences between experts and lay users for both visual and textual explanations; notably, lay users preferred visual explanations even though they performed significantly worse with them. To solve this problem, the second iteration of this research focuses on the shortcomings of the first two explanations and tries to minimize the difference in understanding between both expertise groups. This is done by developing and testing a candidate solution in the form of hybrid explanations, which essentially combine both visual and textual explanations. This hybrid form of explanations shows a significant improvement in terms of correct understanding (for lay users in particular) when compared to visual explanations, whilst not compromising on ease-of-use.

2021 · Maxwell Szymanski et al. · Explainable AI (XAI); Algorithmic Transparency & Auditability · IUI

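A minimal sketch of the hybrid idea, assuming feature importances as the shared backbone of both channels: render a bar chart for the visual explanation and verbalise the same scores for the textual one. The model, dataset, and wording below are placeholders, not the explanations evaluated in the study.

```python
# Hybrid explanation sketch: one explanation source, two presentation channels.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

importances = model.feature_importances_
top = np.argsort(importances)[-5:]  # five most influential features, ascending

# Visual channel: bar chart for users who prefer graphical explanations.
plt.barh([data.feature_names[i] for i in top], importances[top])
plt.xlabel("importance")
plt.title("Why the model decided")
plt.tight_layout()
plt.savefig("explanation.png")

# Textual channel: the same information verbalised for lay users.
ranked = ", ".join(data.feature_names[i] for i in reversed(top))
print(f"The prediction was driven mainly by: {ranked}.")
```
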
Perception of Fairness in Group Music Recommender Systems

Fairness is an important aspect in group recommender systems (GRSs). They must ensure that potentially diverse preferences of all group members are taken into consideration when providing recommendations. Previous work has proposed a number of conflict elicitation and merging techniques to produce preferable recommendations for group members. However, we have yet to understand the influence of user personality on the perception of fairness in GRSs. To examine this gap, we use music recommendation as an example domain. We have developed a web-based group music recommender system using the Spotify API and two simple ranking algorithms: one based on the time the songs were voted by users (time-based) and the other based on a dissimilarity score (dissimilarity-based). A within-subjects experiment was conducted with 45 participants divided into groups of 3 (15 groups). Results showed that the openness personality trait correlates negatively with the perception that fairness is important in groups.

2021 · Nyi Nyi Htun et al. · AI Ethics, Fairness & Accountability; Recommender System UX; Algorithmic Fairness & Bias · IUI

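The two ranking strategies are simple enough to sketch. Since the abstract does not specify the exact dissimilarity score, the version below (mean distance of a song's audio features to all other voted songs) is an assumption, as are the toy feature vectors.

```python
# Sketch of the two group-ranking strategies named above; data is invented.
from dataclasses import dataclass

@dataclass
class Vote:
    song: str
    voted_at: float   # seconds since session start
    features: tuple   # e.g. (danceability, energy, valence) from Spotify

votes = [
    Vote("Song A", 12.0, (0.8, 0.7, 0.6)),
    Vote("Song B", 45.0, (0.2, 0.3, 0.9)),
    Vote("Song C", 30.0, (0.7, 0.8, 0.5)),
]

def time_based(votes):
    """Earlier votes rank higher."""
    return sorted(votes, key=lambda v: v.voted_at)

def dissimilarity(v, votes):
    """Mean Euclidean distance from v to every other voted song."""
    dists = [sum((a - b) ** 2 for a, b in zip(v.features, w.features)) ** 0.5
             for w in votes if w is not v]
    return sum(dists) / len(dists)

def dissimilarity_based(votes):
    """Songs most different from the rest rank higher."""
    return sorted(votes, key=lambda v: dissimilarity(v, votes), reverse=True)

print([v.song for v in time_based(votes)])           # ['Song A', 'Song C', 'Song B']
print([v.song for v in dissimilarity_based(votes)])  # Song B, the outlier, first
```

Ranking by dissimilarity surfaces minority tastes rather than the group average, which ties directly into the fairness perceptions the study measures.
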
Middle-Aged Video Consumers' Beliefs About Algorithmic Recommendations on YouTube

User beliefs about algorithmic systems are constantly co-produced through user interaction and the complex socio-technical systems that generate recommendations. Identifying these beliefs is crucial because they influence how users interact with recommendation algorithms. With no prior work on user beliefs about algorithmic video recommendations, practitioners lack relevant knowledge to improve the user experience of such systems. To address this problem, we conducted semi-structured interviews with middle-aged YouTube video consumers to analyze their user beliefs about the video recommendation system. Our analysis revealed different factors that users believe influence their recommendations. Based on these factors, we identified four groups of user beliefs: Previous Actions, Social Media, Recommender System, and Company Policy. Additionally, we propose a framework to distinguish the four main actors that users believe influence their video recommendations: the current user, other users, the algorithm, and the organization. This framework provides a new lens to explore design suggestions based on the agency of these four actors. It also exposes a novel aspect previously unexplored: the effect of corporate decisions on the interaction with algorithmic recommendations. While we found that users are aware of the existence of the recommendation system on YouTube, we show that their understanding of this system is limited.

2020 · Oscar Alvarado et al. · UX of AI · CSCW

Supporting job mediator and job seeker through an actionable dashboard

Job mediation services can assist job seekers in finding suitable employment through a personalised approach. Consultation or mediation sessions, supported by personal profile data of the job seeker, help job mediators understand the job seeker's personal situation and requests. Prediction and recommendation systems can directly provide job seekers with possible job vacancies. However, incorrect or unrealistic suggestions, as well as misinterpretations, can result in poor decisions or demotivate the job seeker. This paper explores how an interactive dashboard visualising prediction and recommendation output can help support the dialogue between job mediator and job seeker, by increasing "explainability" and providing mediators with control over the information that is shown to job seekers.

2019 · Sven Charleer et al. · AI-Assisted Decision-Making & Automation; Recommender System UX; Interactive Data Visualization · IUI

To Explain or not to Explain: the Effects of Personal Characteristics when Explaining Music Recommendations

Recommender systems have been increasingly used in online services that we consume daily, such as Facebook, Netflix, YouTube, and Spotify. However, these systems are often presented to users as a "black box", i.e. the rationale for providing individual recommendations remains unexplained to users. In recent years, various attempts have been made to address this black box issue by providing textual explanations or interactive visualisations that enable users to explore the provenance of recommendations. Among other things, results demonstrated benefits in terms of precision and user satisfaction. Previous research has also indicated that personal characteristics such as domain knowledge, trust propensity and persistence may play an important role in such perceived benefits. Yet, to date, little is known about the effects of personal characteristics on explaining recommendations. To address this gap, we developed a music recommender system with explanations and conducted an online study using a within-subject design. We captured various personal characteristics of participants and administered both qualitative and quantitative evaluation methods. Results indicate that personal characteristics have a significant influence on the interaction and perception of recommender systems, and that this influence changes when explanations are added. For people with a low need for cognition, the explained recommendations are the most beneficial. For people with a high need for cognition, we observed that explanations could create a lack of confidence. Based on these results, we present some design implications for explaining recommendations.

2019 · Martijn Millecamp et al. · Explainable AI (XAI); Recommender System UX · IUI