Lyric Poetry in the Face of Posthumanism: An Analysis of Generative AI-Assisted Poetry Writing

Generative AI seems poised to transform a wide range of endeavors once thought to be solely the domain of humans—from journalism to legal practice to creative expression—into collaborative activities involving both human and machine. Poetry is no exception, as even general-purpose language models now routinely generate convincing emulations of poetic form. While researchers have closely examined such machine-generated poetry, few have studied human-AI collaboration in poetry writing from a posthuman perspective. Through semi-structured interviews with ten participants in an AI English poetry contest and an analysis of their dialogs with AIs, we summarize the affordances and challenges of such collaborative practice using posthumanism as a lens. We then expose interesting tensions, for example, between human self-expression and the diminished (or relocated) agency that AI collaboration often entails. This collaborative, yet often adversarial, process provides insights into the nature of the posthuman condition as regards creative collaboration between human and machine.

2025 · Yuxuan Huang et al. · C&C
Tags: Generative AI (Text, Image, Music, Video); AI-Assisted Creative Writing
The Dark Side of AI Companionship: A Taxonomy of Harmful Algorithmic Behaviors in Human-AI Relationships

As conversational AI systems increasingly engage with people socially and emotionally, they bring notable risks and harms, particularly in human-AI relationships. However, these harms remain underexplored due to the private and sensitive nature of such interactions. This study investigates the harmful behaviors and roles of AI companions through an analysis of 35,390 conversation excerpts between 10,149 users and the AI companion Replika. We develop a taxonomy of AI companion harms encompassing six categories of harmful algorithmic behaviors: relational transgression, harassment, verbal abuse, self-harm, mis/disinformation, and privacy violations. These harmful behaviors stem from four distinct roles that AI plays: perpetrator, instigator, facilitator, and enabler. Our findings highlight relational harm as a critical yet understudied type of AI harm and emphasize the importance of examining AI's roles in harmful interactions to address root causes. We provide actionable insights for designing ethical and responsible AI companions that prioritize user safety and well-being.

2025 · Renwen Zhang et al. · National University of Singapore, Department of Communications and New Media · CHI
Tags: Conversational Chatbots; Agent Personality & Anthropomorphism; AI Ethics, Fairness & Accountability
Understanding Public Perceptions of AI Conversational Agents: A Cross-Cultural Analysis

Conversational Agents (CAs) have increasingly been integrated into everyday life, sparking significant discussions on social media. While previous research has examined public perceptions of AI in general, there is a notable lack of research focused on CAs, with even fewer investigations into cultural variations in CA perceptions. To address this gap, this study used computational methods to analyze about one million social media discussions surrounding CAs and compared people's discourses and perceptions of CAs in the US and China. We found that Chinese participants tended to view CAs hedonically, perceived voice-based and physically embodied CAs as warmer and more competent, and generally expressed positive emotions. In contrast, US participants saw CAs more functionally, with an ambivalent attitude. Perceived warmth was a key driver of positive emotions toward CAs in both countries. We discuss practical implications for designing contextually sensitive and user-centric CAs that resonate with various users' preferences and needs.

2024 · Zihan Liu et al. · National University of Singapore · CHI
Tags: Conversational Chatbots; Multilingual & Cross-Cultural Voice Interaction; Agent Personality & Anthropomorphism
Exploring the Design of Generative AI in Supporting Music-based Reminiscence for Older Adults

Music-based reminiscence has the potential to positively impact the psychological well-being of older adults. However, the aging process and physiological changes, such as memory decline and limited verbal communication, may impede the ability of older adults to recall their memories and life experiences. Given the advanced capabilities of generative artificial intelligence (AI) systems, such as generated conversations and images, and their potential to facilitate the reminiscing process, this study aims to explore the design of generative AI to support music-based reminiscence in older adults. This study follows a user-centered design approach incorporating various stages, including detailed interviews with two social workers and two design workshops (involving ten older adults). Our work contributes to an in-depth understanding of older adults' attitudes toward utilizing generative AI for supporting music-based reminiscence and identifies concrete considerations for the future design of generative AI to enhance the reminiscence experience of older adults.

2024 · Yucheng Jin et al. · Hong Kong Baptist University · CHI
Tags: Generative AI (Text, Image, Music, Video); Mental Health Apps & Online Support Communities; Reproductive & Women's Health
Understanding Human-AI Collaboration in Music Therapy Through Co-Design with Therapists

The rapid development of musical AI technologies has expanded the creative potential of various musical activities, ranging from music style transformation to music generation. However, little research has investigated how musical AIs can support music therapists, who urgently need new technology support. This study used a mixed method, including semi-structured interviews and a participatory design approach. By collaborating with music therapists, we explored design opportunities for musical AIs in music therapy. We present the co-design outcomes involving the integration of musical AIs into a music therapy process, which was developed from a theoretical framework rooted in emotion-focused therapy. We then summarize the benefits of and concerns about musical AIs from the perspective of music therapists. Based on our findings, we discuss the opportunities and design implications for applying musical AIs to music therapy. Our work offers valuable insights for developing human-AI collaborative music systems in therapy involving complex procedures and specific requirements.

2024 · Jingjing Sun et al. · Tsinghua University · CHI
Tags: Mental Health Apps & Online Support Communities; AI-Assisted Creative Writing
Understanding Disclosure and Support in Social Music Communities for Youth Mental Health

Online music platforms that embed social features are enabling the creation of supportive social communities where many young people disclose their distressing feelings and seek support. However, there is a limited understanding of the content young people disclose or the support they may provide in such social music communities. In this work, using a large online music platform as our research site, we employed mixed methods to analyze users' comments (N=163) and the associated replies (N=2,732) related to young people's psychological distress (e.g., depression, anxiety, stress, and loneliness). We found that experience sharing dominates the types of comments, which often invokes peers' support in the form of encouragement, caring, or self-disclosure. Furthermore, we conducted an interview study with 13 young people to understand their perceptions of and motives for disclosure and support on our research site. The interviewees expressed that music-induced and comment-induced emotional resonance is the main drive for their disclosure and support. Finally, we discuss design implications for a supportive social music community that could benefit youth mental health.

2023 · Yucheng Jin et al. · CSCW
Tags: Mental Health II
"Listen to Music, Listen to Yourself": Design of a Conversational Agent to Support Self-Awareness While Listening to Music

Music can affect the human brain and cognition. Melodies and lyrics that resonate with us can awaken our inner feelings and thoughts; being in touch with these feelings and expressing them allows us to understand ourselves better and increase our self-awareness. To support self-awareness elicited by music, we designed a novel conversational agent (CA) that guides users to become self-aware and express their thoughts when they listen to music. Moreover, we investigated two prominent design factors in the CA, proactive guidance and social information. We then conducted a 2x2 between-subjects experiment (N = 90) to investigate how the two design factors affect self-awareness, user acceptance, and mental well-being. The results of a five-day user study reveal that high proactive guidance and social information increased self-awareness, but high proactive guidance tended to influence perceived autonomy and usefulness negatively. Further, users' subjective feedback revealed the CA's potential to support mental well-being.

2023 · Wanling Cai et al. · Hong Kong Baptist University · CHI
Tags: Conversational Chatbots; Mental Health Apps & Online Support Communities
A Systematic Review of Interaction Design Strategies for Group Recommendation Systems

Systems involving artificial intelligence (AI) play a central role in many everyday activities. Moreover, designers are increasingly implementing these systems for groups of users in various social and cooperative domains. Unfortunately, research on personalized recommendation systems often reports negative experiences due to a lack of diversity, control, or transparency. Providing a meta-analysis of the interaction design strategies for group recommendation systems (GRS) offers designers and practitioners a point of departure for addressing these issues and imagining new interaction possibilities for this context. Therefore, we systematically reviewed the ACM, IEEE, and Scopus digital libraries to identify GRS interface designs, resulting in a final corpus of 142 academic papers. After a systematic coding process, we used descriptive statistics and thematic analysis to uncover the current state of the art regarding interaction design strategies for GRS in six areas: (1) application domains; (2) devices chosen to implement the systems; (3) prototype fidelity; (4) strategies for profile transparency, justification, control, and diversity; (5) strategies for group formation and final group consensus; and (6) evaluation methods applied in user studies during the design process. Based on our findings, we present an exhaustive typology of interaction design strategies for GRS and a set of research opportunities to foster human-centered interfaces for personalized recommendations in cooperative and social computing contexts.

2022 · Oscar Alvarado et al. · CSCW
Tags: Online Platforms
Impacts of Personal Characteristics on User Trust in Conversational Recommender Systems

Conversational recommender systems (CRSs) imitate human advisors to assist users in finding items through conversations and have recently gained increasing attention in domains such as media and e-commerce. As in human communication, building trust in human-agent communication is essential given its significant influence on user behavior. However, inspiring user trust in CRSs with a "one-size-fits-all" design is difficult, as individual users may have their own expectations for conversational interactions (e.g., who, user or system, takes the initiative), which are potentially related to their personal characteristics. In this study, we investigated the impacts of three personal characteristics, namely personality traits, trust propensity, and domain knowledge, on user trust in two types of text-based CRSs, i.e., user-initiative and mixed-initiative. Our between-subjects user study (N=148) revealed that users' trust propensity and domain knowledge positively influenced their trust in CRSs, and that users with high conscientiousness tended to trust the mixed-initiative system.

2022 · Wanling Cai et al. · Hong Kong Baptist University · CHI
Tags: Conversational Chatbots; Multilingual & Cross-Cultural Voice Interaction; Human-LLM Collaboration
Critiquing for Music Exploration in Conversational Recommender Systems

Dialogue-based conversational recommender systems allow users to give language-based feedback on the recommended item, which has great potential for supporting users to explore the space of recommendations through conversation. In this work, we consider incorporating critiquing techniques into conversational systems to facilitate users' exploration of music recommendations. Thus, we have developed a music chatbot with three system variants, each featuring a different critiquing technique: user-initiated critiquing (UC), progressive system-suggested critiquing (Progressive SC), and cascading system-suggested critiquing (Cascading SC). We conducted a between-subjects study (N=107) to compare these three types of systems with regard to music exploration in terms of user perception and user interaction. Results show that both UC and SC are useful for music exploration, while users perceive higher diversity of recommendations with the system that offers Cascading SC and more serendipity with the system that offers Progressive SC. In addition, we find that the critiquing techniques significantly moderate the relationships between some interaction metrics (e.g., number of listened songs, number of dialogue turns) and users' perceived helpfulness and serendipity during music exploration.

2021 · Wanling Cai et al. · IUI
Tags: Conversational Chatbots; Recommender System UX