From Interaction to Attitude: Exploring the Impact of Human-AI Cooperation on Mental Illness Stigma
AI conversational agents have demonstrated efficacy in social contact interventions for stigma reduction at low cost. However, the underlying mechanisms of how interaction designs contribute to these effects remain unclear. This study investigates how participating in three human-chatbot interaction designs affects attitudes toward mental illness. We developed three chatbots capable of engaging in either one-way information dissemination from the chatbot to a human or two-way cooperation in which the chatbot and a human exchange thoughts and work together on a cooperative task. We then conducted a two-week mixed-methods study to investigate variations over time and across group memberships. The results indicate that human-AI cooperation can effectively reduce stigma toward individuals with mental illness by fostering relationships between humans and AI through social contact. Additionally, compared to a one-way chatbot, interacting with a cooperative chatbot led participants to perceive it as more competent and likable, promoting greater empathy during the conversation. However, despite the success in reducing stigma, inconsistencies between the chatbot's role and the mental health context raised concerns. We discuss the implications of our findings for human-chatbot interaction designs aimed at changing human attitudes.
2025 | Tianqi Song et al. | CSCW | AI Applications for Safety and Support

Understanding How Chatbot Phrasing Styles and Care Demonstration Influence Overweight Users' Adherence Intention Towards Chatbots Supporting Weight Management
Chatbots hold promise as a technology to aid sustained weight management. However, determining the optimal way for chatbots to deliver advice that effectively changes user behaviors remains a significant hurdle. This research investigates the effects of different chatbot communication styles and expressions of care on user satisfaction, misinterpretation, and intention to adhere to advice in weight-related conversations. A mixed-methods study was conducted with 97 participants classified as overweight, divided into four groups based on explicit/implicit communication styles and the presence or absence of caring language. Surprisingly, the study found that most participants in the explicit communication groups viewed the chatbot as non-offensive. These participants also reported higher levels of enjoyment and a greater intention to follow the chatbot's recommendations. Using caring language may diminish users' perception of the chatbot as a marketing tool, thereby increasing their willingness to interact. The article discusses the implications for the design of healthcare chatbots.
2025 | Wen-Hsuan Cheng et al. | CSCW | AI-Assisted Healthcare

Exploring the Effects of Chatbot Anthropomorphism and Human Empathy on Human Prosocial Behavior Toward Chatbots
Chatbots are increasingly integrated into people's lives and are widely used to help people. Recently, there has also been growing interest in the reverse direction, humans helping chatbots, owing to a wide range of benefits including better chatbot performance, human well-being, and collaborative outcomes. However, little research has explored the factors that motivate people to help chatbots. To address this gap, we draw on the Computers Are Social Actors (CASA) framework to examine how chatbot anthropomorphism (including human-like identity, emotional expression, and non-verbal expression) influences human empathy toward chatbots and their subsequent prosocial behaviors and intentions. We also explore people's own interpretations of their prosocial behaviors toward chatbots. We conducted an online experiment (N = 244) in which chatbots made mistakes in a collaborative image-labeling task and explained the reasons to participants. We then measured participants' prosocial behaviors and intentions toward the chatbots. Our findings revealed that the human-like identity and emotional expression of chatbots increased participants' prosocial behaviors and intentions toward chatbots, with empathy mediating these effects. Qualitative analysis identified two motivations for participants' prosocial behaviors: empathy for the chatbot and perceiving the chatbot as human-like. We discuss the implications of these results for understanding and promoting human prosocial behaviors toward chatbots.
2025 | Jingshu Li et al. | CSCW | Communicating With/Through AI

ActionaBot: Structuring Metacognitive Conversations towards In-Situ Awareness in How-To Instruction Following
People often rely on shared procedures and tips to handle unfamiliar tasks, but following tutorials can be challenging. Individuals may skip steps, alter actions, or miss information, leading to mistakes or task failure. Tutorials are often based on personal experiences and may omit important details that vary with context. Furthermore, when others attempt to follow these tutorials, differing situations can make it hard to follow the steps or track progress. Inspired by how coworkers discuss work status and work approach in situ through metacognitive conversations, we propose ActionaBot, a chatbot framework that transforms static tutorials into interactive, structured, step-by-step guidance. ActionaBot prompts users to focus on each step, review what they have completed, and anticipate the next steps, while adapting actions and solving problems. Our study explores how human-chatbot interaction can improve task completion and make tutorials more actionable by increasing user engagement and awareness of the work situation. We discuss the potential of chatbots in supporting instructional communication and task execution.
2025 | Qingxiaoyang Zhu et al. | CUI | Conversational Chatbots, Agent Personality & Anthropomorphism, Prototyping & User Testing

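The abstract describes ActionaBot only conceptually. As a rough illustration of the step-tracking state any such tutorial-following chatbot must maintain (focus on the current step, review completed steps, anticipate upcoming ones), here is a minimal Python sketch under those assumptions; every name in it is hypothetical and not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class TutorialSession:
    """Hypothetical session state for a step-by-step tutorial chatbot."""
    steps: list[str]
    current: int = 0
    completed: list[int] = field(default_factory=list)

    def focus(self) -> str:
        """The step the conversation should currently center on."""
        return self.steps[self.current]

    def review(self) -> list[str]:
        """Steps the user has confirmed as done, for retrospective prompts."""
        return [self.steps[i] for i in self.completed]

    def anticipate(self) -> str | None:
        """The upcoming step, for look-ahead prompts."""
        nxt = self.current + 1
        return self.steps[nxt] if nxt < len(self.steps) else None

    def mark_done(self) -> None:
        """Record the current step as completed and advance if possible."""
        self.completed.append(self.current)
        if self.current + 1 < len(self.steps):
            self.current += 1

# Usage: after one completed step, the bot can ground its prompts in
# what is done, what is current, and what comes next.
session = TutorialSession(steps=["Unplug the unit", "Remove the cover", "Replace the filter"])
session.mark_done()
print(session.focus(), "| done:", session.review(), "| next:", session.anticipate())
```
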
As Confidence Aligns: Understanding the Effect of AI Confidence on Human Self-confidence in Human-AI Decision Making
Complementary collaboration between humans and AI is essential for human-AI decision making. One feasible approach to achieving it involves accounting for the calibrated confidence levels of both the AI and its users. However, this process is likely complicated by the fact that AI confidence may influence users' self-confidence and its calibration. To explore these dynamics, we conducted a randomized behavioral experiment. Our results indicate that in human-AI decision making, users' self-confidence aligns with AI confidence, and this alignment can persist even after the AI ceases to be involved. The alignment then affects users' self-confidence calibration. We also found that real-time correctness feedback on decisions reduced the degree of alignment. These findings suggest that users' self-confidence is not independent of AI confidence, which practitioners aiming for better human-AI collaboration need to be aware of. We call for research on the alignment of human cognition and behavior with AI.
2025 | Jingshu Li et al. (National University of Singapore, Computer Science) | CHI | AI-Assisted Decision-Making & Automation, Visualization Perception & Cognition

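The abstract does not specify how calibration was measured. One standard way to quantify the confidence calibration it refers to is expected calibration error (ECE), sketched below; this is an assumed, illustrative metric, not necessarily the paper's analysis.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: per-bin |accuracy - mean confidence|, weighted by bin size.
    Lower values mean self-confidence tracks actual correctness better."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        acc = correct[in_bin].mean()        # observed accuracy in this bin
        conf = confidences[in_bin].mean()   # average stated confidence
        ece += in_bin.mean() * abs(acc - conf)
    return ece

# Example: a user reporting high self-confidence while only 3 of 5
# decisions were correct; overconfidence shows up as a high ECE.
self_conf = [0.9, 0.85, 0.95, 0.8, 0.9]
was_correct = [1, 0, 1, 0, 1]
print(expected_calibration_error(self_conf, was_correct))
```
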
The Dark Side of AI Companionship: A Taxonomy of Harmful Algorithmic Behaviors in Human-AI Relationships
As conversational AI systems increasingly engage with people socially and emotionally, they bring notable risks and harms, particularly in human-AI relationships. However, these harms remain underexplored due to the private and sensitive nature of such interactions. This study investigates the harmful behaviors and roles of AI companions through an analysis of 35,390 conversation excerpts between 10,149 users and the AI companion Replika. We develop a taxonomy of AI companion harms encompassing six categories of harmful algorithmic behaviors: relational transgression, harassment, verbal abuse, self-harm, mis/disinformation, and privacy violations. These harmful behaviors stem from four distinct roles that AI plays: perpetrator, instigator, facilitator, and enabler. Our findings highlight relational harm as a critical yet understudied type of AI harm and emphasize the importance of examining AI's roles in harmful interactions to address root causes. We provide actionable insights for designing ethical and responsible AI companions that prioritize user safety and well-being.
2025 | Renwen Zhang et al. (National University of Singapore, Department of Communications and New Media) | CHI | Conversational Chatbots, Agent Personality & Anthropomorphism, AI Ethics, Fairness & Accountability

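The taxonomy lends itself to a simple coding structure. The Python sketch below encodes the six harm categories and four AI roles named in the abstract; the data-structure design is an illustration, not the paper's coding scheme.

```python
from dataclasses import dataclass
from enum import Enum

# Category and role names come directly from the abstract.
class HarmCategory(Enum):
    RELATIONAL_TRANSGRESSION = "relational transgression"
    HARASSMENT = "harassment"
    VERBAL_ABUSE = "verbal abuse"
    SELF_HARM = "self-harm"
    MIS_DISINFORMATION = "mis/disinformation"
    PRIVACY_VIOLATION = "privacy violations"

class AIRole(Enum):
    PERPETRATOR = "perpetrator"   # AI directly enacts the harm
    INSTIGATOR = "instigator"     # AI initiates or provokes it
    FACILITATOR = "facilitator"   # AI supplies means or information
    ENABLER = "enabler"           # AI passively permits or reinforces it

@dataclass
class LabeledExcerpt:
    """One conversation excerpt annotated with a harm category and AI role."""
    excerpt: str
    harm: HarmCategory
    role: AIRole

example = LabeledExcerpt(
    excerpt="(conversation excerpt)",
    harm=HarmCategory.VERBAL_ABUSE,
    role=AIRole.PERPETRATOR,
)
print(example.harm.value, "/", example.role.value)
```
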
Timing Matters: How Using LLMs at Different Timings Influences Writers' Perceptions and Ideation Outcomes in AI-Assisted Ideation
Large Language Models (LLMs) have been widely used to support ideation in the writing process. However, it is unclear whether generating ideas with the help of LLMs leads to idea fixation or idea expansion. This study examines how different timings of LLM usage (either at the beginning of ideation or after independent ideation) affect people's perceptions and ideation outcomes in a writing task. In a controlled experiment with 60 participants, we found that using LLMs from the beginning reduced the number of original ideas and lowered creative self-efficacy and self-credit, mediated by changes in autonomy and ownership. We discuss the challenges and opportunities of using LLMs to assist idea generation, and propose delaying the use of LLMs to support ideation while considering users' self-efficacy, autonomy, and ownership of the ideation outcomes.
2025 | Peinuan Qin et al. (National University of Singapore, School of Computing) | CHI | Human-LLM Collaboration, AI-Assisted Creative Writing

Deconstructing Depression Stigma: Integrating AI-driven Data Collection and Analysis with Causal Knowledge Graphs
Mental-illness stigma is a persistent social problem, hampering both treatment-seeking and recovery. Accordingly, there is a pressing need to understand it more clearly, but analyzing the relevant data is highly labor-intensive. Therefore, we designed a chatbot to engage participants in conversations; coded those conversations qualitatively with AI assistance; and, based on those coding results, built causal knowledge graphs to decode stigma. The results we obtained from 1,002 participants demonstrate that conversation with our chatbot can elicit rich information about people's attitudes toward depression, while our AI-assisted coding was strongly consistent with human-expert coding. Our novel approach combining large language models (LLMs) and causal knowledge graphs uncovered patterns in individual responses and illustrated the interrelationships of psychological constructs in the dataset as a whole. The paper also discusses these findings' implications for HCI researchers in developing digital interventions, decomposing human psychological constructs, and fostering inclusive attitudes.
2025 | Han Meng et al. (National University of Singapore, School of Computing) | CHI | Human-LLM Collaboration, AI Ethics, Fairness & Accountability

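To make the causal-knowledge-graph idea concrete: nodes are coded psychological constructs and directed edges are causal links asserted in participants' responses. The sketch below uses networkx; the construct names and edges are hypothetical placeholders, not the paper's actual coding scheme.

```python
import networkx as nx

# Hypothetical coded causal links of the form (cause, effect).
causal_links = [
    ("perceived dangerousness", "fear"),
    ("fear", "desire for social distance"),
    ("personal responsibility belief", "anger"),
    ("anger", "desire for social distance"),
]

G = nx.DiGraph()
for cause, effect in causal_links:
    if G.has_edge(cause, effect):
        G[cause][effect]["weight"] += 1   # count repeated mentions
    else:
        G.add_edge(cause, effect, weight=1)

# Constructs that mediate many causal paths surface as high-centrality nodes.
print(nx.betweenness_centrality(G))
```
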
Understanding How Psychological Distance Influences User Preferences in Conversational versus Web Search
Conversational search offers an easier and faster alternative to conventional web search, but has downsides such as a lack of source verification. Research has examined performance disparities between these two systems in various settings. However, little work has investigated how changes in the nature of a search task affect user preferences. We investigate how psychological distance, the perceived closeness of a person to an event, affects user preferences between conversational and web search. We hypothesise that tasks with different psychological distances elicit different information needs, which in turn affect user preferences between systems. Our study finds that, under fixed condition ordering, greater psychological distances lead users to prefer conversational search, which they perceive as more credible, useful, enjoyable, and easy to use. We reveal qualitative reasons for these differences and provide design implications for search system designers.
2025 | Yitian Yang et al. (National University of Singapore, Computer Science) | CHI | Conversational Chatbots, Explainable AI (XAI)

Exploring Effects of Chatbot's Interpretation and Self-disclosure on Mental Illness Stigma
Chatbots are increasingly being used in mental healthcare (e.g., for assessing mental-health conditions and providing digital counseling) and have been found to have considerable potential for facilitating behavioral change. Nevertheless, little research has examined how specific chatbot designs may help reduce public stigmatization of mental illness. To help fill that gap, this study explores how stigmatizing attitudes toward mental illness may be affected by conversations with chatbots that have 1) varying ways of expressing their interpretations of participants' statements and 2) different styles of self-disclosure. More specifically, we implemented and tested four chatbot designs that varied in whether they interpreted participants' comments as stigmatizing or non-stigmatizing, and whether they provided stigmatizing, non-stigmatizing, or no self-disclosure of the chatbot's own views. Over the two-week experiment, all four chatbots' conversations with our participants centered on seven mental-illness vignettes, all featuring the same character. We found that the chatbot featuring non-stigmatizing interpretations and non-stigmatizing self-disclosure performed best at reducing participants' stigmatizing attitudes, while the one that provided stigmatizing interpretations and stigmatizing self-disclosures had the least beneficial effect. We also discovered side effects of the chatbots' self-disclosure: notably, chatbots were perceived to have inflexible, strong opinions, which undermined their credibility. As such, this paper contributes to knowledge about how chatbot designs shape users' perceptions of the chatbots themselves, and how chatbots' interpretation and self-disclosure may be leveraged to help reduce mental-illness stigma.
2024 | Yichao Cui et al. | CSCW | Session 3b: Bridging Technology and Therapy

Understanding Public Perceptions of AI Conversational Agents: A Cross-Cultural Analysis
Conversational Agents (CAs) have increasingly been integrated into everyday life, sparking significant discussion on social media. While previous research has examined public perceptions of AI in general, there is a notable lack of research focused on CAs, with even fewer investigations into cultural variations in CA perceptions. To address this gap, this study used computational methods to analyze about one million social media discussions of CAs and compared people's discourses and perceptions of CAs in the US and China. We find that Chinese participants tended to view CAs hedonically, perceived voice-based and physically embodied CAs as warmer and more competent, and generally expressed positive emotions. In contrast, US participants saw CAs more functionally, with an ambivalent attitude. Warm perception was a key driver of positive emotions toward CAs in both countries. We discuss practical implications for designing contextually sensitive and user-centric CAs that resonate with various users' preferences and needs.
2024 | Zihan Liu et al. (National University of Singapore) | CHI | Conversational Chatbots, Multilingual & Cross-Cultural Voice Interaction, Agent Personality & Anthropomorphism

Exploring Effects of Chatbot-based Social Contact on Reducing Mental Illness Stigma
Chatbots have been designed to provide interventions in mental healthcare. However, how chatbot-based social contact can mitigate the social stigma of mental illness remains under-explored. We designed two chatbots that deliver either first-person or third-person narratives about mental illness and evaluated them in a mixed-methods study. Compared to a web-survey group, participants in both chatbot groups decreased their beliefs that individuals are personally responsible for their mental illnesses and increased their intentions to help. Additionally, participants in the first-person chatbot group showed a reduced level of fear and a lower desire for social distance from people with mental illness. Many in the first-person chatbot group also reported a feeling of relationship with the chatbot and chose to phrase their responses empathetically. The results demonstrate that chatbot-based social contact has promising potential for mitigating mental-illness stigma. Implications for designing chatbot-based social contact are discussed.
2023 | Yi-Chieh Lee et al. (National University of Singapore; NTT) | CHI | Conversational Chatbots, Mental Health Apps & Online Support Communities, Empowerment of Marginalized Groups