Visual-Conversational Interface for Evidence-Based Explanation of Diabetes Risk Prediction. Healthcare professionals need effective ways to use, understand, and validate AI-driven clinical decision support systems. Existing systems face two key limitations: complex visualizations and a lack of grounding in scientific evidence. We present an integrated decision support system that combines interactive visualizations with a conversational agent to explain diabetes risk assessments. We propose a hybrid prompt handling approach combining fine-tuned language models for analytical queries with general Large Language Models (LLMs) for broader medical questions, a methodology for grounding AI explanations in scientific evidence, and a feature range analysis technique to support deeper understanding of feature contributions. We conducted a mixed-methods study with 30 healthcare professionals and found that the conversational interactions helped healthcare professionals build a clear understanding of model assessments, while the integration of scientific evidence calibrated trust in the system's decisions. Most participants reported that the system supported both patient risk evaluation and recommendation. 2025. Reza Samimi et al. Explainable AI (XAI); AI-Assisted Decision-Making & Automation; Medical & Scientific Data Visualization. CUI.
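A minimal sketch of how the hybrid prompt handling described above could be wired: analytical queries about the model's assessment go to a fine-tuned model, broader medical questions to a general LLM. The keyword heuristic and model objects are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical router for the hybrid prompt-handling idea (sketch only).
ANALYTICAL_KEYWORDS = {"feature", "contribution", "risk score", "threshold", "range"}

def is_analytical(query: str) -> bool:
    """Crude intent check: does the query ask about the model's own assessment?"""
    q = query.lower()
    return any(keyword in q for keyword in ANALYTICAL_KEYWORDS)

def answer(query: str, fine_tuned_model, general_llm) -> str:
    """Dispatch to the fine-tuned analytical model or to the general LLM."""
    if is_analytical(query):
        return fine_tuned_model.generate(query)   # e.g. explains feature contributions
    return general_llm.generate(query)            # broader medical questions
```

In practice the routing step would likely be a trained classifier rather than a keyword list; the sketch only illustrates the two-path architecture named in the abstract.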
Articulation Work and Tinkering for Fairness in Machine Learning. The field of fair AI aims to counter biased algorithms through computational modelling. However, it faces increasing criticism for perpetuating the use of overly technical and reductionist methods. As a result, novel approaches appear in the field to address more socially-oriented and interdisciplinary (SOI) perspectives on fair AI. In this paper, we take this dynamic as the starting point to study the tension between computer science (CS) and SOI research. By drawing on STS and CSCW theory, we position fair AI research as a matter of 'organizational alignment': what makes research 'doable' is the successful alignment of three levels of work organization (the social world, the laboratory and the experiment). Based on qualitative interviews with CS researchers, we analyze the tasks, resources, and actors required for doable research in the case of fair AI. We find that CS researchers engage with SOI to some extent, but organizational conditions, articulation work, and ambiguities of the social world constrain the doability of SOI research. Based on our findings, we identify and discuss problems for aligning CS and SOI as fair AI continues to evolve. 2024. Miriam Fahimi et al. Session 1g: Contextualizing Fairness in AI. CSCW.
Not Only for Contact Tracing: Use of Belgium's Contact Tracing App among Young Adults. Many countries developed and deployed contact tracing apps to reduce the spread of the COVID-19 coronavirus. Prior research explored people's intent to install these apps, which is necessary to ensure effectiveness. However, adopting contact tracing apps is not enough on its own, and much less is known about how people actually use these apps. Exploring app use can help us identify additional failures or risk points in the app life cycle. In this study, we conducted 13 semi-structured interviews with young adult users of Belgium's contact-tracing app, Coronalert. The interviews were conducted approximately a year after the onset of the COVID-19 pandemic. Our findings offer potential design directions for addressing issues identified in prior work - such as methods for maintaining long-term use and for better integration with local health systems - and offer insight into existing design tensions, such as the trade-off between maintaining users' privacy (by minimizing the personal data collected) and users' desire to have more information about an exposure incident. We distill from our results and the results of prior work a framework of people's decision points in contact-tracing app use that can serve to motivate careful design of future contact tracing technology. https://doi.org/10.1145/3570348 2023. Oshrat Ayalon et al. Mental Health Apps & Online Support Communities; Privacy by Design & User Control; Privacy Perception & Decision-Making. UbiComp.
A Systematic Review of Interaction Design Strategies for Group Recommendation Systems. Systems involving artificial intelligence (AI) are protagonists in many everyday activities. Moreover, designers are increasingly implementing these systems for groups of users in various social and cooperative domains. Unfortunately, research on personalized recommendation systems often reports negative experiences due to a lack of diversity, control, or transparency. Providing a meta-analysis of the interaction design strategies for group recommendation systems (GRS) offers designers and practitioners a point of departure for addressing these issues and imagining new interaction possibilities for this context. Therefore, we systematically reviewed the ACM, IEEE, and Scopus digital libraries to identify GRS interface designs, resulting in a final corpus of 142 academic papers. After a systematic coding process, we used descriptive statistics and thematic analysis to uncover the current state of the art regarding interaction design strategies for GRS in six areas: (1) application domains; (2) devices chosen to implement the systems; (3) prototype fidelity; (4) strategies for profile transparency, justification, control, and diversity; (5) strategies for group formation and final group consensus; and (6) evaluation methods applied in user studies during the design process. Based on our findings, we present an exhaustive typology of interaction design strategies for GRS and a set of research opportunities to foster human-centered interfaces for personalized recommendations in cooperative and social computing contexts. 2022. Oscar Alvarado et al. Online Platforms. CSCW.
'Transparency is meant for control' and vice versa: Learning from co-designing and evaluating algorithmic news recommenders. Algorithmic systems that recommend content often lack transparency about how they come to their suggestions. One area in which recommender systems are increasingly prevalent is online news distribution. In this paper, we explore how a lack of transparency of (news) recommenders can be tackled by involving users in the design of interface elements. In the context of automated decision-making, legislative frameworks such as the GDPR in Europe introduce a specific conception of transparency, granting 'data subjects' specific rights and imposing obligations on service providers. An important related question is how people using personalized recommender systems relate to the issue of transparency, not as legal data subjects but as users. This paper builds upon a two-phase study on how users conceive of transparency and related issues in the context of algorithmic news recommenders. We organized co-design workshops to elicit participants' 'algorithmic imaginaries' and invited them to ideate interface elements for increased transparency. This revealed the importance of combining legible transparency features with features that increase user control. We then conducted a qualitative evaluation of mock-up prototypes to investigate users' preferences and concerns when dealing with design features to increase transparency and control. Our investigation illustrates how users' expectations and impressions of news recommenders are closely related to their news reading practices. On a broader level, we show how transparency and control are conceptually intertwined. Transparency without control leaves users frustrated. Conversely, without a basic level of transparency into how a system works, users remain unsure of the impact of controls. 2022. Elias Storms et al. Users' Understanding of Algorithms. CSCW.
Explaining Recommendations in E-Learning: Effects on Adolescents' Trust. Recommender systems are increasingly supporting explanations to increase trust in their recommendations. However, studies on explaining recommendations typically target adults in low-risk e-commerce or media contexts, and using explanations in e-learning has received little research attention. To address these limitations, we investigated how explanations affect adolescents' trust in an exercise recommender on a mathematical e-learning platform. In a randomized controlled experiment with 37 adolescents, we compared real explanations with placebo and no explanations. Our results show that explanations can significantly increase initial trust when measured as a multidimensional construct of competence, benevolence, integrity, intention to return, and perceived transparency. Yet, as not all adolescents in our study attached equal importance to explanations, it remains important to tailor them. To study the impact of tailored explanations, we advise researchers to include placebo baselines in their studies, as these may give more insight than no-explanation baselines into how much transparency people actually need. 2022. Jeroen Ooge et al. Multilingual & Cross-Cultural Voice Interaction; Recommender System UX; Universal & Inclusive Design. IUI.
Perception of Fairness in Group Music Recommender Systems. Fairness is an important aspect of group recommender systems (GRSs): they must ensure that the potentially diverse preferences of all group members are taken into consideration when providing recommendations. Previous work has proposed a number of conflict elicitation and merging techniques to produce preferable recommendations for group members. However, we have yet to understand the influence of user personality on the perception of fairness in GRSs. To examine this gap, we use music recommendation as an example domain. We developed a web-based group music recommender system using the Spotify API and two simple ranking algorithms: one based on the time the songs were voted for by users (time-based) and the other based on a dissimilarity score (dissimilarity-based). A within-subjects experiment was conducted with 45 participants divided into groups of 3 (15 groups). Results showed that the openness personality trait correlates negatively with the perception that fairness is important in groups. 2021. Nyi Nyi Htun et al. AI Ethics, Fairness & Accountability; Recommender System UX; Algorithmic Fairness & Bias. IUI.
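An illustrative sketch (not the authors' code) of the two ranking strategies the abstract names. The data shapes, the dissimilarity definition, and the sort directions are assumptions made for the example.

```python
# Hypothetical group-ranking sketch: time-based vs. dissimilarity-based ordering.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Song:
    title: str
    voted_at: float                                            # Unix timestamp of the vote
    member_scores: List[float] = field(default_factory=list)   # one preference score per member

def rank_time_based(songs: List[Song]) -> List[Song]:
    """Order songs by when they were voted for (earliest first)."""
    return sorted(songs, key=lambda s: s.voted_at)

def dissimilarity(song: Song) -> float:
    """Mean absolute deviation of member scores: high when the group disagrees."""
    mean = sum(song.member_scores) / len(song.member_scores)
    return sum(abs(x - mean) for x in song.member_scores) / len(song.member_scores)

def rank_dissimilarity_based(songs: List[Song]) -> List[Song]:
    """Order songs by dissimilarity score (direction here is an assumption)."""
    return sorted(songs, key=dissimilarity)
```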
Middle-Aged Video Consumers' Beliefs About Algorithmic Recommendations on YouTube. User beliefs about algorithmic systems are constantly co-produced through user interaction and the complex socio-technical systems that generate recommendations. Identifying these beliefs is crucial because they influence how users interact with recommendation algorithms. With no prior work on user beliefs about algorithmic video recommendations, practitioners lack relevant knowledge to improve the user experience of such systems. To address this problem, we conducted semi-structured interviews with middle-aged YouTube video consumers to analyze their user beliefs about the video recommendation system. Our analysis revealed different factors that users believe influence their recommendations. Based on these factors, we identified four groups of user beliefs: Previous Actions, Social Media, Recommender System, and Company Policy. Additionally, we propose a framework to distinguish the four main actors that users believe influence their video recommendations: the current user, other users, the algorithm, and the organization. This framework provides a new lens to explore design suggestions based on the agency of these four actors. It also exposes a previously unexplored aspect: the effect of corporate decisions on interaction with algorithmic recommendations. While we found that users are aware of the existence of the recommendation system on YouTube, we show that their understanding of this system is limited. 2020. Oscar Alvarado et al. UX of AI. CSCW.
Paper to Pixels: A Chronicle of Map Interfaces in Games. Game map interfaces provide an alternative perspective on the worlds players inhabit. Compared to navigation applications popular in day-to-day life, game maps have different affordances to match players' situated goals. To contextualize and understand these differences and how they developed, we present a historical chronicle of game map interfaces. Starting from how games came to involve maps, we trace how maps were at first separate from the game and became more and more integrated into play, until converging in smartphone-style interfaces. We synthesize several game history texts with critical engagement with 123 key games to develop this map-focused chronicle, from which we highlight trends and opportunities for future map designs. Our work contributes a record of trends in game map interfaces that can serve as a source of reference and inspiration to game designers, digital physical-world map designers, and game scholars. 2020. Z O. Toups et al. Geospatial & Map Visualization; Game UX & Player Behavior. DIS.
Responsive news summarization for ubiquitous consumption on multiple mobile devices. With the proliferation of online news read on devices ranging from desktops to smart watches, the need for meaningful summaries of long texts is growing. Manual summaries are labour-intensive and cannot be offered for all display sizes, whereas today's abstracts of most news texts are teasers designed to attract the reader's interest rather than to provide an overview of an article's content suited to the reader's information needs. We propose responsive news summarization as a technological approach for filling this gap. Responsive news summarization provides an automatically generated content summary that has the right length for the device requesting the article, plus access to the full text. We describe the system prototype available at multisizenews.com along with initial user study results and give an outlook on future work. 2018. Rocio Chongtay et al. Generative AI (Text, Image, Music, Video); Recommender System UX. IUI.
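A minimal sketch of the "responsive" idea under stated assumptions: pick a word budget from the requesting device class, then fill it with the highest-scoring sentences. The budgets and the sentence scorer are hypothetical, not the prototype's actual pipeline.

```python
# Hypothetical device-aware extractive summarizer (sketch only).
from typing import Callable, List

LENGTH_BUDGETS = {"smartwatch": 30, "phone": 80, "tablet": 160, "desktop": 300}  # words (assumed)

def responsive_summary(sentences: List[str],
                       device: str,
                       score: Callable[[str], float]) -> str:
    """Greedily keep the best-scoring sentences until the device's word budget is reached."""
    budget = LENGTH_BUDGETS.get(device, LENGTH_BUDGETS["desktop"])
    chosen, used = [], 0
    for sentence in sorted(sentences, key=score, reverse=True):
        words = len(sentence.split())
        if used + words <= budget:
            chosen.append(sentence)
            used += words
    chosen.sort(key=sentences.index)   # restore article order so the summary reads coherently
    return " ".join(chosen)
```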