"The Prophet said so!": On Exploring Hadith Presence on Arabic Social Media
Mahmoud Fawzi et al. · CSCW 2025
Topics: Diverse Uses of Social Media Platforms

Hadith, the recorded words and actions of the prophet Muhammad, is a key source of the instructions and foundations of Islam, alongside the Quran. Interpreting individual hadiths and verifying their authenticity can be difficult, even controversial, and the subject has attracted the attention of many scholars, who have established an entire science of Hadith criticism. Recent quantitative studies of hadiths focus on developing systems for automatic classification, authentication, and information retrieval that operate over existing hadith compilations. Qualitative studies, on the other hand, discuss social and political issues from the perspective of hadiths, or inspect how hadiths are used for argumentation and propaganda in specific contexts such as official communications and press releases. However, no studies attempt to understand the actual presence of hadiths among Muslims in their daily lives and interactions. In this study, we fill this gap by exploring the presence of hadiths on Twitter from January 2019 to January 2023. We highlight the challenges that quantitative methods should consider when processing texts that include hadiths, and we provide a methodology for Islamic scholars to validate their hypotheses about hadiths on big data that better represents the position of society and the influence of Hadith on it.

Illustrating Creative Applications of Data and Technology: A Visual Vocabulary
Susan Lechelt et al. · C&C 2025
Topics: Interactive Data Visualization; Visualization Perception & Cognition; Graphic Design & Typography Tools

Contemporary technologies and data-driven methods have much potential to support innovation in the creative industries, from design and craft to film and music. However, discussing and understanding the applied potential of data and technology can be especially difficult for creative practitioners who have limited previous experience with data-driven research and development. In this pictorial, we address this challenge through the design, and initial evaluation, of a ‘Visual Vocabulary’ of illustrations aimed to scaffold creative practitioners’ thinking about how they might employ a diverse range of data and technology to address their creative and business challenges. The illustrations serve as a resource for subverting common imageries of technologies and computational methods in popular media, which often fail to showcase their many creative affordances. Moreover, as an ideation card deck, they also serve to support discussion and exploration of new data-driven projects for creative practitioners.

Content Authenticities: A Discussion on the Values of Provenance Data for Creatives and Their Audiences
Caterina Moruzzi et al. · C&C 2025
Topics: Explainable AI (XAI); Algorithmic Transparency & Auditability; Privacy by Design & User Control

The proliferation of AI-generated digital content has intensified the user demand for accurate provenance information to ensure content authenticity. Technical advancements now provide tools to make the digital media content supply chain more transparent through the use of provenance data. This paper foregrounds the importance of understanding how the situated nature of user-content engagement influences perceptions and uses of this data. Insights from a workshop with experts in the creative media sector suggest that, as the adoption of provenance data becomes more common, users need richer and more nuanced information. We suggest that analyzing the increasing demand for content authenticity through the lens of multiple “authenticities”, each reflecting different user needs and contexts, can help identify and address the needs for, and uses of, provenance data by creators and audiences alike.

How Does AI Represent Social Concepts? Examining the Visual Representation of Care in Text-to-Image Tools
Zezhong Wang et al. · DIS 2025
Topics: Generative AI (Text, Image, Music, Video); AI Ethics, Fairness & Accountability; Algorithmic Transparency & Auditability

Text-to-image (T2I) generative AI tools like Midjourney are growing in capability and popularity, promising a wide range of applications. However, concerns are rising over the biases in how they represent social concepts like care and the lack of guidance for designers and users to address these in practice. This paper first presents an analysis of 140 “photos of care” generated by Midjourney, and then explores how prompting might influence the results. The findings reveal that AI-generated images reproduce stereotypical and reductive representations of care by default, neglecting the broad spectrums of care practices in everyday life. Furthermore, we find that while prompt engineering might mitigate certain biases, it requires specialised skills, knowledge, and an ongoing reflexive approach to generate meaningful outputs. We conclude by proposing a reflexive prompting framework, and discussing the implications for future T2I evaluation and its responsible use and design.

Designing Exchangeopoly: A Boardgame to Explore Value Exchange within Communities
Simran Chopra et al. · DIS 2025
Topics: Digitalization of Board & Tabletop Games; Makerspace Culture

In this pictorial, we discuss the design of Exchangeopoly, a boardgame developed to investigate exchanges between people in communities when they help each other out. Such exchanges are often acts of kindness or forms of volunteering that are not remunerated financially and are built on social capital. The boardgame scaffolded explorations of scenarios with participants where informal altruistic interactions in their communities are tokenised, rewarded and incentivised. We focus on the designed-in features and considerations that went into the visual and material production of the game and its gameplay mechanics. We discuss how Exchangeopoly was a valuable method that surfaced existing and speculated practices of exchange, and supported participants to explore the opportunities and problems of representing and rewarding such interactions. We contribute insights about the usefulness of Exchangeopoly as a tool to explore scenarios and surface tensions about tokenisation in community value exchange.

From Temporal to Spatial: Designing Spatialized Interactions with Segmented-audios in Immersive Environments for Active Engagement with Performing Arts Intangible Cultural Heritage
Yuqi Wang et al. · DIS 2025
Topics: Immersion & Presence Research; Identity & Avatars in XR; Interactive Narrative & Immersive Storytelling

Performance artforms like Peking opera face transmission challenges due to the extensive passive listening required to understand their nuance. To create engaging forms of experiencing auditory Intangible Cultural Heritage (ICH), we designed a spatial interaction-based segmented-audio (SISA) Virtual Reality system that transforms passive ICH experiences into active ones. We undertook: (1) a co-design workshop with seven stakeholders to establish design requirements, (2) prototyping with five participants to validate design elements, and (3) user testing with 16 participants exploring Peking Opera. We designed transformations of temporal music into spatial interactions by cutting sounds into short audio segments and applying the t-SNE algorithm to cluster the segments spatially. Users navigate through these sounds by their similarity in audio properties. Analysis revealed two distinct interaction patterns (Progressive and Adaptive), and demonstrated SISA's efficacy in facilitating active auditory ICH engagement. Our work illuminates the design process for enriching traditional performance artforms using spatially-tuned forms of listening.

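The spatialization step the SISA abstract describes (cut a recording into short segments, then cluster the segments with t-SNE so that acoustically similar sounds sit near each other in the virtual space) can be sketched as follows. This is a minimal reconstruction, not the authors' code: the feature matrix here is a random placeholder standing in for real per-segment audio features such as MFCCs.

```python
# Sketch of t-SNE-based audio spatialization (our reconstruction, with
# placeholder features rather than real extracted audio descriptors).
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Placeholder features: 40 audio segments x 13 MFCC-like coefficients.
# In practice each row would be computed from one segment of the recording.
segment_features = rng.normal(size=(40, 13))

# Project the segments to 2-D spatial coordinates; t-SNE keeps similar
# segments close together. perplexity must be smaller than n_samples.
coords = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(
    segment_features
)

# Each segment now has an (x, y) placement for the immersive environment.
print(coords.shape)
```

Users can then be allowed to walk among the resulting (x, y) positions, turning temporal listening into spatial exploration.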
Judging Phishing Under Uncertainty: How Do Users Handle Inaccurate Automated Advice?
Tarini Saka et al. · CHI 2025 · University of Edinburgh, School of Informatics
Topics: Privacy by Design & User Control; Privacy Perception & Decision-Making

Providing accurate and actionable advice about phishing emails is challenging. The majority of advice is generic and hard to implement. Phishing emails that pass through filters and land in user inboxes are usually sophisticated and exploit differences between how humans and computers interpret emails. Therefore, users need accurate and relevant guidance to take the right action. This study investigates the effectiveness of guidance based on features extracted from emails, which even in AI-driven systems can sometimes be inaccurate, leading to poor advice. We examined three conditions: control (generic advice), perfect advice, and realistic advice, through an online survey of 489 participants on Prolific, and measured user accuracy and confidence in phishing detection with and without guidance. Our findings indicate that having advice specific to the email is more effective than generic guidance (control). Inaccuracies in the guidance can also impact user decisions and reduce detection accuracy.

Investigating the Capabilities and Limitations of Machine Learning for Identifying Bias in English Language Data with Information and Heritage Professionals
Lucy Havens et al. · CHI 2025 · University of Edinburgh
Topics: AI Ethics, Fairness & Accountability; Algorithmic Fairness & Bias

Despite numerous efforts to mitigate their biases, ML systems continue to harm already-marginalized people. While predominant ML approaches assume bias can be removed and fair models can be created, we show that these are not always possible, nor desirable, goals. We reframe the problem of ML bias by creating models to identify biased language, drawing attention to a dataset’s biases rather than trying to remove them. Then, through a workshop, we evaluated the models for a specific use case: workflows of information and heritage professionals. Our findings demonstrate the limitations of ML for identifying bias due to its contextual nature, the way in which approaches to mitigating it can simultaneously privilege and oppress different communities, and its inevitability. We demonstrate the need to expand ML approaches to bias and fairness, providing a mixed-methods approach to investigating the feasibility of removing bias or achieving fairness in a given ML use case.

"I am not the primary focus" - Understanding the Perspectives of Bystanders in Photos Shared Online
Yuqi Niu et al. · CHI 2025 · Shanghai Jiao Tong University; The University of Edinburgh, School of Informatics
Topics: Privacy by Design & User Control; Privacy Perception & Decision-Making; Misinformation & Fact-Checking

When taking photos in a crowd, unintended individuals, such as bystanders, are often captured alongside the main subject(s). In an effort to protect bystanders' privacy, existing methods have been developed to automatically detect bystanders. However, inconsistent definitions of who qualifies as a bystander limit their effectiveness. To better understand bystanders' perceptions, we conducted an online survey with 486 participants, analyzing their responses to 864 image-based scenarios and their comfort with sharing these images online. Our results revealed no significant correlation between comfort with public photo sharing and bystander status. We identified limitations in current bystander detection methodologies, as they often fail to recognize bystanders who are not clearly in the background, hence missing individuals with privacy concerns. Moreover, comfort with public sharing varied significantly depending on the image context. Our findings highlight the importance of considering the context of captured images to address privacy concerns in image sharing.

Labour Provenance as a Lens to Reveal More-Than-Human Ecologies in Biological Design and HCI
Yuning Chen et al. · CHI 2025 · University of Edinburgh, Design Informatics
Topics: Sustainable HCI; Human-Nature Relationships (More-than-Human Design)

Efforts to integrate living organisms in the design of new technologies are often motivated by prospects of greater sustainability and increased connection with more-than-human worlds. In this paper, we critically discuss these motivations by analysing the vast and mostly hidden ecologies of more-than-human organisms implicated in a biodesign lab experiment. Through the lenses of labour theory, we investigate the extent to which organisms’ bodily functions and relationships can be subsumed into capitalist modes of production. In order to help reveal and map out the network of more-than-human contributors to biodesign, we develop a workshop method and a labour provenance analytical framework that identifies five types of more-than-human labourers, stretching from the centre to the periphery of biodesign. We conclude by discussing how sustainable approaches should account for wider more-than-human ecologies, and how the labour lens could help stress conflicting goals, implicit anthropocentric agendas and ways of improving organismal welfare in biological design and HCI.

The Role of Expertise in Effectively Moderating Harmful Social Media Content
Nuredin Ali Abdelkadir et al. · CHI 2025 · University of Minnesota, Computer Science and Engineering; The Distributed AI Research Institute
Topics: AI Ethics, Fairness & Accountability; Content Moderation & Platform Governance; Misinformation & Fact-Checking

Social media platforms played a significant role in spreading genocidal content in the 2020-2022 Tigray war, where the deadliest genocide of the 21st century was committed. While linguistic expertise is clearly needed to adequately moderate such content, we ask: What additional expertise is needed? Why and to what extent do experts disagree on what constitutes harmful content, and what is the best way to resolve these disagreements? What do social media platforms do instead? We examine these questions through a 4-month study with 7 experts labeling 340 X (formerly Twitter) posts, and by interviewing 15 commercial content moderators. We find in-depth knowledge of culture and dialects to be most important for accurate hate speech annotation – knowledge which social media platforms do not prioritize. Even amongst experts, disagreements are high (71%), dropping to 40% after deliberation meetings. Based on these results, we present 7 recommendations to improve hate speech annotation and moderation practices.

Who should set the Standards? Analysing Censored Arabic Content on Facebook during the Palestine-Israel Conflict
Walid Magdy et al. · CHI 2025 · University of Edinburgh, School of Informatics
Topics: Content Moderation & Platform Governance; Empowerment of Marginalized Groups; Technology Ethics & Critical HCI

Nascent research on human-computer interaction concerns itself with fairness of content moderation systems. Designing globally applicable content moderation systems requires considering historical, cultural, and socio-technical factors. Inspired by this line of work, we investigate Arab users' perception of Facebook's moderation practices. We collect a set of 448 deleted Arabic posts, and we ask Arab annotators to evaluate these posts based on (a) Facebook Community Standards (FBCS) and (b) their personal opinion. Each post was judged by 10 annotators to account for subjectivity. Our analysis shows a clear gap between the Arabs' understanding of the FBCS and how Facebook implements these standards. The study highlights the need for discussion about who decides the moderation guidelines on social media platforms, how these guidelines are interpreted, and how well they represent the views of marginalised user communities.

Perceptions of Sentient AI and Other Digital Minds: Evidence from the AI, Morality, and Sentience (AIMS) Survey
Jacy Reese Anthis et al. · CHI 2025 · Sentience Institute; University of Chicago
Topics: AI Ethics, Fairness & Accountability; Privacy by Design & User Control; Algorithmic Fairness & Bias

Humans now interact with a variety of digital minds, systems that appear to have mental faculties such as reasoning, emotion, and agency, and public figures are discussing the possibility of sentient AI. We present initial results from 2021 and 2023 for the nationally representative AI, Morality, and Sentience (AIMS) survey (N = 3,500). Mind perception and moral concern for AI welfare were surprisingly high and significantly increased: in 2023, one in five U.S. adults believed some AI systems are currently sentient, and 38% supported legal rights for sentient AI. People became more opposed to building digital minds: in 2023, 63% supported banning smarter-than-human AI, and 69% supported banning sentient AI. The median 2023 forecast was that sentient AI would arrive in just five years. The development of safe and beneficial AI requires not just technical study but understanding the complex ways in which humans perceive and coexist with digital minds.

The World is Not Enough: Growing Waste in HPC-enabled Academic Practice
Carolynne Lord et al. · CHI 2025 · UKCEH; Lancaster University, School of Computing and Communications
Topics: Generative AI (Text, Image, Music, Video); Sustainable HCI; Ecological Design & Green Computing

Most research, including and perhaps especially HCI, depends to some extent on technologies and computational infrastructures. Despite the noted environmental impacts associated with information communication technology (ICT) globally, to date little consideration has been given to limiting the impact of research and innovation processes themselves. Working to understand the technical and cultural drivers of this impact within the specific but resource-intensive domain of High Performance Computing (HPC), we conducted 25 interviews with academic researchers, providers, funders, and commissioners of HPC. We find intersecting socio-cultural and technical dimensions, linked to research institutions such as conferences, funders, and universities, that reinforce and embed, rather than challenge, expectations of growth and waste. At a time when large-scale cloud systems, generative AI, and ever larger models are multiplying, we argue for de-escalating demand for computing, aiming for more moderate, responsible and meaningful use of computational infrastructures, including within HCI itself.

Robots, Chatbots, Self-Driving Cars: Perceptions of Mind and Morality Across Artificial Intelligences
Ali Ladak et al. · CHI 2025 · Sentience Institute; University of Edinburgh
Topics: AI Ethics, Fairness & Accountability; Algorithmic Fairness & Bias

AI systems have rapidly advanced, diversified, and proliferated, but our knowledge of people’s perceptions of mind and morality in them is limited, despite its importance for outcomes such as whether people trust AIs and how they assign responsibility for AI-caused harms. In a preregistered online study, 975 participants rated 26 AI and non-AI entities. Overall, AIs were perceived to have low-to-moderate agency (e.g., planning, acting), between inanimate objects and ants, and low experience (e.g., sensing, feeling). For example, ChatGPT was rated as only as capable of feeling pleasure and pain as a rock. The analogous moral faculties, moral agency (doing right or wrong) and moral patiency (being treated rightly or wrongly), were higher and more varied, particularly moral agency: the highest-rated AI, a Tesla Full Self-Driving car, was rated as being as morally responsible for harm as a chimpanzee. We discuss how design choices can help manage perceptions, particularly in high-stakes moral contexts.

"Impressively Scary:" Exploring User Perceptions and Reactions to Unraveling Machine Learning Models in Social Media Applications
Jack West et al. · CHI 2025 · University of Wisconsin -- Madison, Department of Computer Sciences
Topics: AI Ethics, Fairness & Accountability; Algorithmic Transparency & Auditability

Machine learning models deployed locally on social media applications power features such as face filters, which read faces in real time, and they expose sensitive attributes to the apps. However, the deployment of machine learning models in social media applications, e.g., when, where, and how they are used, is opaque to users. We aim to address this opacity and investigate how social media users' perceptions and behaviors change once they are exposed to these models. We conducted user studies (N=21) and found that participants were unaware of both what the models output and when the models were used in Instagram and TikTok, two major social media platforms. In response to being exposed to the models' functionality, we observed long-term behavior changes in 8 participants. Our analysis uncovers the challenges and opportunities in providing transparency for machine learning models that interact with local user data.

People Attribute Purpose to Autonomous Vehicles When Explaining Their Behavior: Insights from Cognitive Science for Explainable AI
Balint Gyevnar et al. · CHI 2025 · University of Edinburgh, School of Informatics
Topics: Automated Driving Interface & Takeover Design; Explainable AI (XAI)

It is often argued that effective human-centered explainable artificial intelligence (XAI) should resemble human reasoning. However, empirical investigations of how concepts from cognitive science can aid the design of XAI are lacking. Based on insights from cognitive science, we propose a framework of explanatory modes to analyze how people frame explanations, whether mechanistic, teleological, or counterfactual. Using the complex safety-critical domain of autonomous driving, we conduct an experiment consisting of two studies on (i) how people explain the behavior of a vehicle in 14 unique scenarios (N1=54) and (ii) how they perceive these explanations (N2=382), curating the novel Human Explanations for Autonomous Driving Decisions (HEADD) dataset. Our main finding is that participants deem teleological explanations to be of significantly better quality than counterfactual ones, with perceived teleology being the best predictor of perceived quality. Based on our results, we argue that explanatory modes are an important axis of analysis when designing and evaluating XAI, and highlight the need for a principled and empirically grounded understanding of the cognitive mechanisms of explanation. The HEADD dataset and our code are available at: https://datashare.ed.ac.uk/handle/10283/8930

Seeking Inspiration through Human-LLM Interaction
Xinrui Lin et al. · CHI 2025 · Beijing Institute of Technology; University of Edinburgh
Topics: Generative AI (Text, Image, Music, Video); Human-LLM Collaboration; AI Ethics, Fairness & Accountability

Large language model (LLM) systems have been shown to stimulate creative thinking among creators, yet empirical research on whether users can seek inspiration in their everyday lives through these technologies is lacking. This paper explores which attributes of LLMs influence inspiration-seeking processes. Focusing on use cases of travel, cooking, and self-care, we interviewed 20 participants as they explored scenarios of these use cases using LLMs. Thematic analysis revealed that the vast data underlying LLMs inspires users with unexpected ideas, many of which were highly personalized, and motivated participants to act. Participants were also sensitive to the deficiencies of LLMs, and noted how ethical issues associated with these technologies could deter them from putting inspirational ideas into practice. We discuss the behavioral patterns of users actively seeking inspiration via LLMs, and provide design opportunities for LLMs that make the inspiration-seeking process more human-centric.

Human-Precision Medicine Interaction: Public Perceptions of Polygenic Risk Score for Genetic Health Prediction
Yuhao Sun et al. · CHI 2025 · University of Edinburgh
Topics: AI Ethics, Fairness & Accountability; Mental Health Apps & Online Support Communities

Precision Medicine (PM) transforms the traditional "one-drug-fits-all" paradigm by customising treatments based on individual characteristics, and is an emerging topic for HCI research on digital health. A key element of PM, the Polygenic Risk Score (PRS), uses genetic data to predict an individual's disease risk. Despite its potential, PRS faces barriers to adoption, such as data inclusivity, psychological impact, and public trust. We conducted a mixed-methods study to explore how people perceive PRS, comprising surveys (n=254) and interviews (n=11) with UK-based participants. The interviews were supplemented by interactive storyboards with the ContraVision technique to provoke deeper reflection and discussion. We identified ten key barriers to PRS adoption and five themes, and proposed design implications for a responsible PRS framework. To address the complexities of PRS and enhance broader PM practices, we introduce the term Human-Precision Medicine Interaction (HPMI), which integrates, adapts, and extends HCI approaches to better meet these challenges.

Fairness by Design: Cross-Cultural Perspectives from Children on AI and Fair Data Processing in their Education Futures
Ayça Atabey et al. · CHI 2025 · University of Edinburgh, School of Law
Topics: Multilingual & Cross-Cultural Voice Interaction; AI Ethics, Fairness & Accountability; Privacy by Design & User Control

AI-driven educational technologies (AI-EdTech) process extensive data, raising concerns about commercial exploitation of children’s data and risks to their privacy, wellbeing, agency, and legal rights. The ‘fairness principle’ in data protection law requires fair data processing that meets children’s expectations and avoids unexpected, detrimental, discriminatory, or misleading practices. However, children’s own perspectives on what fairness means in AI-EdTech are underexplored in design. This study bridges the gap between law and design research to contextualize what fairness means through co-design workshops with 72 children (aged 10–12) and 4 teachers (N=76) in Scotland and Türkiye. We examine how children's perspectives can inform the operationalization of ‘fairness by design’ for AI-EdTech. Our contributions include: (1) an understanding of children’s perspectives on how fairness manifests (or does not) in AI-EdTech and (2) recommendations for both design and legal communities to align AI-EdTech design and data practices with children's values and rights.