Script-Strategy Aligned Generation: Aligning LLMs with Expert-Crafted Dialogue Scripts and Therapeutic Strategies for Psychotherapy

Chatbots or conversational agents (CAs) are increasingly used to improve access to digital psychotherapy. Many current systems rely on rigid, rule-based designs, heavily dependent on expert-crafted dialogue scripts for guiding therapeutic conversations. Although advances in large language models (LLMs) offer potential for more flexible interactions, their lack of controllability and explainability poses challenges in psychotherapy. In this work, we explored how aligning LLMs with expert-crafted scripts can enhance psychotherapeutic chatbot performance. Our comparative Study 1 showed that LLMs aligned with expert-crafted scripts through prompting and fine-tuning significantly outperformed both pure LLMs and rule-based chatbots, achieving an effective balance between dialogue flexibility and adherence to therapeutic principles. Building on these findings, we proposed "Script-Strategy Aligned Generation (SSAG)", a more flexible alignment approach that reduces reliance on fully scripted content while maintaining LLMs' therapeutic adherence and controllability. In a 10-day field Study 2, SSAG demonstrated performance comparable to full script alignment, empirically supporting SSAG as an efficient approach for aligning LLMs with domain expertise. Our work advances LLM applications in psychotherapy by providing a controllable and scalable solution that reduces reliance on expert effort. It also provides a collaborative framework for domain experts and developers to efficiently build expertise-aligned chatbots, broadening access to psychotherapy.

2025 · Xin Sun et al. · Facilitating Support and Belonging · CSCW
Designing Multisensory Biophilic Futures: Exploring the Potential of Interaction Design to Deepen Human Connections With Nature in Indoor Environments

Advances in interaction design, architecture, and artificial intelligence offer new possibilities for built environments. Yet, current systems focus on improving physical parameters such as indoor air quality. While these enhance physical comfort, they often overlook an innate aspect of human experience: our connection with nature, which is fundamental to physical and mental health. In contrast, architecture offers a rich legacy of biophilic design that creates sensory-rich spaces evoking a connection to nature. What insights can biophilic architecture offer to guide interactive experiences in future buildings? Drawing on 13 expert interviews, we expose the gap between current biophilic practices in smart buildings and the multidimensional potential of nature-inspired design. We present eight themes reflecting expert imaginaries of biophilic futures and five design opportunities, illustrating how emerging technologies can position biophilic interaction as multi-sensory, interpretive, reciprocal, and aligned with more-than-human, justice-oriented futures.

2025 · Shruti Rao et al. · Context-Aware Computing · Sustainable HCI · Human-Nature Relationships (More-than-Human Design) · DIS
Let's Influence Algorithms Together: How Millions of Fans Build Collective Understanding of Algorithms and Organize Coordinated Algorithmic Actions

Previous research has examined how users strategically understand and consciously interact with algorithms, but mainly at the individual level, making it difficult to explore how users within communities develop a collective understanding of algorithms and organize collective algorithmic actions. Through a two-year ethnography of online fan activities, this study investigates 43 core fans who regularly organize large-scale collective fan actions, along with their corresponding general fan groups. This study aims to reveal how these core fans mobilize millions of general fans through collective algorithmic actions. These core fans reported the rhetorical strategies used to persuade general fans, the steps taken to build a collective understanding of algorithms, and the collaborative processes that adapt collective actions across platforms and cultures. Our findings highlight the key factors that enable computer-supported collective algorithmic actions and extend collective action research into the large-scale domain targeting algorithms.

2025 · Qing Xiao et al. · Carnegie Mellon University · Algorithmic Transparency & Auditability · Online Harassment & Counter-Tools · Content Moderation & Platform Governance · CHI
What Do We Design for When We Design "Smart Buildings"? - A Scoping Review of Human Experience Design Research in Buildings

Built environments increasingly incorporate new forms of intelligence, creating opportunities for enhancing human interactive experiences with and within building spaces. This scoping review examines design interventions and discourses within the domain of "Smart Buildings". The goal is to identify and characterise the type of human experiences that research in this domain aims to address. Using a hybrid deductive-inductive coding approach, we analysed 192 papers related to human experiences and smart buildings from the ACM Digital Library and Scopus published between 1996 and 2024. Our analysis revealed 11 distinct "targeted human experiences", 20 commonly used "design mechanisms" to achieve those design goals, as well as two typologies of "technological interventions". Our findings create a foundation for understanding building design research and the range of human experiences it entails.

2025 · Shruti Rao et al. · University of Amsterdam · Smart Home Interaction Design · Empowerment of Marginalized Groups · CHI
Super Kawaii Vocalics: Amplifying the “Cute” Factor in Computer Voice

"Kawaii" is the Japanese concept of cute, which carries sociocultural connotations related to social identities and emotional responses. Yet, virtually all work to date has focused on the visual side of kawaii, including in studies of computer agents and social robots. In pursuit of formalizing the new science of kawaii vocalics, we explored what elements of voice relate to kawaii and how they might be manipulated, manually and automatically. We conducted a four-phase study (grand N = 512) with two varieties of computer voices: text-to-speech (TTS) and game character voices. We found kawaii "sweet spots" through manipulation of fundamental and formant frequencies, but only for certain voices and to a certain extent. Findings also suggest a ceiling effect for the kawaii vocalics of certain voices. We offer empirical validation of the preliminary kawaii vocalics model and an elementary method for manipulating kawaii perceptions of computer voice.

2025 · Yuto Mandai et al. · Tokyo Institute of Technology, Department of Industrial Engineering and Economics · Intelligent Voice Assistants (Alexa, Siri, etc.) · Agent Personality & Anthropomorphism · CHI
"Python is for girls!": Masculinity, Femininity, and Queering Inclusion at Hackathons

This paper explores how queerness intersects with hackathon culture, reinforcing or challenging its masculine norms. By utilizing autoethnographic insights from seven UK hackathons, it reveals that while queerness is visibly celebrated, inclusion remains conditional: it is accepted only when it aligns with masculine-coded technical authority. Femininity, regardless of the queer identities of those who embody it, is devalued and associated with lesser technical competence. Beyond social dynamics, gendered hierarchies influence programming tools, roles, and physical environments, embedding exclusion within technical culture. Although gender-fluid expressions like cosplay provide moments of subversion, they remain limited by the masculine framework of hackathons. This study contributes to human-computer interaction and feminist technology studies by showing that queerness alone does not dismantle gendered hierarchies. It advocates for moving beyond visibility to actively challenge masculinized definitions of technical legitimacy, promoting alternative, non-exclusionary models of expertise.

2025 · Siân Brooke · University of Amsterdam, Digital Interactions Lab; London School of Economics, Data Science Institute · Gender & Race Issues in HCI · Empowerment of Marginalized Groups · Technology Ethics & Critical HCI · CHI
PAIRcolator: Pair Collaboration for Sensemaking and Reflection on Personal Data

This paper explores pair collaboration as a novel approach for making sense of personal data. Pair collaboration, characterized by dyadic comparison and structured roles for questioning and reasoning, has proven effective for co-constructing knowledge. However, current collaborative visualization tools primarily focus on group comparisons, overlooking the challenges of accommodating pair collaboration in the context of personal data. To address this gap, we propose a set of design rationales supporting subjective data analysis through dyadic comparison and mixed-focus collaboration styles for co-constructing personal narratives. We operationalize these principles in a tangible visualization toolkit, PAIRcolator. Our user study demonstrates that pairwise collaboration facilitated by the toolkit: 1) reveals detailed data insights that are effective for recalling personal experiences, and 2) fosters a structured, reciprocal sensemaking process for interpreting and reconstructing personal experiences beyond data insights. Our results shed light on the design rationales for, and the processes of, pair sensemaking of personal data, and their effects in fostering deep levels of reflection.

2025 · Di Yan et al. · Delft University of Technology, Faculty of Industrial Design Engineering · Data Storytelling · Visualization Perception & Cognition · CHI
Policy Sandboxing: Empathy as an Enabler Towards Inclusive Policy-Making

Digitally-supported participatory methods are often used in policy-making to develop inclusive policies by collecting and integrating citizens' opinions. However, these methods fail to capture the complexity and nuances in citizens' needs; citizens are generally unaware of others' needs, perspectives, and experiences. Consequently, policies developed with this underlying gap tend to overlook the alignment of multistakeholder perspectives and design policies based on the optimization of high-level demographic features. In our contribution, we propose a method to enable citizens to understand others' perspectives and calibrate their positions. First, we collected requirements and design principles to develop our approach by involving stakeholders and experts in policy-making in a series of workshops. Then, we conducted a crowdsourcing study with 420 participants to compare the effect of different texts and images on people's initial and final motivations and their willingness to change opinions. We observed that both modalities influenced participants' opinion change; however, the effect was more pronounced for the textual modality. Finally, we discuss overarching implications of designing with empathy to mediate the alignment of citizens' perspectives.

2024 · Andrea Mauri et al. · Session 3c: Speculative Design and Emerging Technologies · CSCW
Exploring User Engagement Through an Interaction Lens: What Textual Cues Can Tell Us about Human-Chatbot Interactions

Monitoring and maintaining user engagement in human-chatbot interactions is challenging. Researchers often use cues observed in the interactions as indicators to infer engagement. However, evaluation of these cues is lacking. In this study, we collected an inventory of potential textual engagement cues from the literature, including linguistic features, utterance features, and interaction features. These cues were subsequently used to annotate a dataset of 291 user-chatbot interactions, and we examined which of these cues predicted self-reported user engagement. Our results show that engagement can indeed be recognized at the level of individual utterances. Notably, words indicating cognitive thinking processes and motivational utterances were strong indicators of engagement. An overall negative tone could also predict engagement, highlighting the importance of nuanced interpretation and contextual awareness of user utterances. Our findings demonstrated initial feasibility of recognizing utterance-level cues and using them to infer user engagement, although further validation is needed across different content domains.

2024 · Linwei He et al. · Conversational Chatbots · Explainable AI (XAI) · AI-Assisted Decision-Making & Automation · CUI
Can a Funny Chatbot Make a Difference? Infusing Humor into Conversational Agent for Behavioral Intervention

Regular physical activity is crucial for reducing the risk of non-communicable disease (NCD). With NCDs on the rise globally, there is an urgent need for effective health interventions, with chatbots emerging as a viable and cost-effective option given limited healthcare accessibility. Although health professionals often utilize behavior change techniques (BCTs) to boost physical activity levels and enhance client engagement and motivation through affiliative humor, the efficacy of humor in chatbot-delivered interventions is not well understood. This study conducted a randomized controlled trial to examine the impact of a generative humorous communication style in a 10-day chatbot-delivered intervention for physical activity. It further investigated whether user engagement and motivation act as mediators between the communication style and changes in physical activity levels. Sixty-six participants engaged with the chatbots across three groups (humorous, non-humorous, and no-intervention) and responded to daily ecological momentary assessment questionnaires assessing engagement, motivation, and physical activity levels. Multilevel time series analyses revealed that an affiliative humorous communication style positively impacted physical activity levels over time, with user engagement acting as a mediator in this relationship, whereas motivation did not. These findings clarify the role of humorous communication style in chatbot-delivered interventions for physical activity, offering valuable insights for the future development of intelligent conversational agents incorporating humor.

2024 · Xin Sun et al. · Conversational Chatbots · Mental Health Apps & Online Support Communities · CUI
Designing a Couples-Based Conversational Agent to Promote Safe Sex in New, Young Couples: A User-Centred Design Approach

The uptake of conversational agents (CAs) to deliver digital sexual health interventions is growing. While current CAs address only one user at a time, research suggests that couples-based interventions may be more effective at promoting safe sex in non-casual relationships by improving relationship functioning. In this paper, we describe user-centred design activities undertaken towards the design of a couples-based chatbot to address safe sex in new, young couples. A two-step approach was undertaken, in which young people were interviewed about their preferences and ideas, and sexual health professionals took part in a design thinking workshop. The design activities yielded a rich set of design guidelines from both groups, as well as a paper-and-pen prototype of the proposed CA from the workshop. As expected, trust was raised by both stakeholder groups as an important determinant of use and therefore heavily informs the design guidelines.

2024 · Divyaa Balaji et al. · Conversational Chatbots · Mental Health Apps & Online Support Communities · Reproductive & Women's Health · CUI
Affective Driver-Pedestrian Interaction: Exploring Driver Affective Responses Toward Pedestrian Crossing Actions Using Camera and Physiological Sensors

Eliciting and capturing drivers' affective responses in a realistic outdoor setting with pedestrians poses a challenge when designing in-vehicle, empathic interfaces. To address this, we designed a controlled, outdoor car driving circuit where drivers (N=27) drove and encountered pedestrian confederates who performed non-verbal positive or non-positive road crossing actions towards them. Our findings reveal that drivers reported higher valence upon observing positive, non-verbal crossing actions, and higher arousal upon observing non-positive crossing actions. Drivers' heart signals (BVP, IBI, and BPM), skin conductance, and facial expressions (brow lowering, eyelid tightening, nose wrinkling, and lip stretching) all varied significantly when observing positive and non-positive actions. Our car driving study, by drawing on realistic driving conditions, further contributes to the development of in-vehicle empathic interfaces that leverage behavioural and physiological sensing. Through automatic inference of driver affect resulting from pedestrian actions, our work can enable novel empathic interfaces for supporting driver emotion self-regulation.

2023 · Shruti Rao et al. · In-Vehicle Haptic, Audio & Multimodal Feedback · Human Pose & Activity Recognition · AutoUI
Intonation in Robot Speech: Does it work the same as with people?

Human-robot interaction (HRI) research aims to design natural interactions between humans and robots. Intonation, a social signaling function in human speech investigated thoroughly in linguistics, has not yet been studied in HRI. This study investigates the effect of robot speech intonation in four conditions (no intonation, focus intonation, end-of-utterance intonation, or combined intonation) on conversational naturalness, social engagement, and people's humanlike perception of the robot, collecting objective and subjective data from participant conversations (n = 120). Our results showed that humanlike intonation partially improved subjective naturalness but not observed fluency, and that intonation partially improved social engagement but did not affect humanlike perceptions of the robot. Given that our results largely differed from our hypotheses based on human speech intonation, we discuss the implications and provide suggestions for future research to further investigate conversational naturalness in robot speech intonation.

2020 · Ella Velner et al. · Intelligent Voice Assistants (Alexa, Siri, etc.) · Agent Personality & Anthropomorphism · Social Robot Interaction · HRI