Finding Understanding and Support: Navigating Online Communities to Share and Connect at the Intersection of Abuse and Foster Care Experiences

Many children in foster care experience trauma that is rooted in unstable family relationships. Other members of the foster care system, such as foster parents and social workers, face secondary trauma. Drawing on 10 years of Reddit data, we used a mixed-methods approach to analyze how different members of the foster care system find support and similar experiences at the intersection of two Reddit communities: foster care and abuse. We found that users who cross the boundary between the two communities focus on trauma experiences specific to different roles in foster care. While representing a small number of users, cross-posters contribute heavily to both communities and, compared to other community members, receive higher scores and more replies. We explore the roles boundary-crossing users play both in the online community and in the context of foster care. Finally, we present design, practice, and policy recommendations that would help survivors of trauma find communities better suited to their personal experiences.

2025 · Tawfiq Ammari et al. · Recovering From a Crisis · CSCW
Exploring the Design Space of Real-time LLM Knowledge Support Systems: A Case Study of Jargon Explanations

Knowledge gaps often arise during communication due to diverse backgrounds, knowledge bases, and vocabularies. With recent LLM developments, providing real-time knowledge support is increasingly viable, but remains challenging due to shared and individual cognitive limitations (e.g., attention, memory, and comprehension) and the difficulty of understanding the user's context and internal knowledge. To address these challenges, we explore the key question of how people want to receive real-time knowledge support. We built StopGap, a prototype that provides real-time knowledge support by explaining jargon words in videos, to conduct a design probe study (N=24) that explored multiple visual knowledge representation formats. Our study revealed individual differences in preferred representations and highlighted the importance of user agency, personalization, and mixed-initiative assistance. Based on our findings, we map out six key design dimensions for real-time LLM knowledge support systems and offer insights for future research in this space.

2025 · Yuhan Liu et al. · Princeton University, Computer Science · Human-LLM Collaboration · Explainable AI (XAI) · CHI
Children using Tabletop Telepresence Robots for Collaboration: A Longitudinal Case Study of Hybrid and Online Intergenerational Participatory Design

Improving telepresence for children expands educational opportunities and connects faraway family. Yet research on child-centered physical telepresence systems (tangible interfaces for telepresence) remains sparse, despite the established benefits of tangible interaction for children. To address this gap, we collaborated with child designers (ages 8-12) over two years of online and one year of hybrid participatory design. Together, we adapted one approach to physical telepresence (tabletop robots) for child users. Using a case study methodology, we explore how our tabletop telepresence robot platform influenced children's connections with one another over the three-year study. In our analysis, we compare four vignettes representing cooperation and conflict between children while using the platform, centering theories of ownership, collaboration, and co-design roles. Through this exploration of children's interpersonal dynamics while using the platform, we uncover four key features of tabletop telepresence robots for children: (1) Anonymous Robot Control, (2) Robot/Material Distribution, (3) Robot Form/Size, and (4) Robot Stewardship.

2025 · Casey Lee Hunt et al. · CU Boulder, ATLAS Institute · Collaborative Learning & Peer Teaching · Special Education Technology · Teleoperation & Telepresence · CHI
CardioAI: A Multimodal AI-based System to Support Symptom Monitoring and Risk Prediction of Cancer Treatment-Induced Cardiotoxicity

Despite recent advances in cancer treatments that prolong patients' lives, treatment-induced cardiotoxicity (i.e., the various forms of heart damage caused by cancer treatments) has emerged as a major side effect. Clinical decision making around cardiotoxicity is challenging: early symptoms may occur in non-clinical settings and are often too subtle to be noticed until life-threatening events occur at a later stage, and clinicians already carry a high workload focused on the cancer treatment itself, with little additional effort to spare for this side effect. Our project starts with a participatory design study with 11 clinicians to understand their decision-making practices and gather their feedback on an initial design of an AI-based decision-support system. Based on their feedback, we then propose a multimodal AI system, CardioAI, that integrates wearables data and voice assistant data to model a patient's cardiotoxicity risk and support clinicians' decision making. We conclude our paper with a small-scale heuristic evaluation with four experts and a discussion of future design considerations.

2024 · Siyi Wu et al. · University of Toronto, Department of Computer Science · EV Charging & Eco-Driving Interfaces · AI-Assisted Decision-Making & Automation · Biosensors & Physiological Monitoring · CHI
The Impact of Perceived Tone, Age, and Gender on Voice Assistant Persuasiveness in the Context of Product Recommendations

Voice Assistants (VAs) can assist users in various everyday tasks, but many users are reluctant to rely on VAs for intricate tasks like online shopping. This study aims to examine whether the vocal characteristics of VAs can serve as an effective tool to persuade users and increase user engagement with VAs in online shopping. Prior studies have demonstrated that the perceived tone, age, and gender of a voice influence the perceived persuasiveness of the speaker in interpersonal interactions. Furthermore, persuasion in product communication has been shown to affect purchase decisions in online shopping. We investigate whether variations in a VA voice's perceived tone, age, and gender characteristics can persuade users and ultimately affect their purchase decisions. Our experimental study showed that participants were more persuaded to make purchase decisions by VA voices having positive or neutral tones as well as middle-aged male or younger female voices. Our results suggest that VA designers should offer users the ability to easily customize VA voices with a range of tones, ages, and genders. This customization can enhance user comfort and enjoyment, potentially leading to higher engagement with VAs. Additionally, we discuss the boundaries of ethical persuasion, emphasizing the importance of safeguarding users' interests against unwarranted manipulation.

2024 · Sabid Bin Habib Pias et al. · Intelligent Voice Assistants (Alexa, Siri, etc.) · Agent Personality & Anthropomorphism · CUI
Collaborative Job Seeking for People with Autism: Challenges and Design Opportunities

Successful job search results from job seekers' well-shaped social communication. While well-known differences in communication exist between people with autism and neurotypical people, little is known about how people with autism collaborate with their social surroundings to thrive in the job market. To better understand the practices and challenges of collaborative job seeking for people with autism, we interviewed 20 participants, including applicants with autism, their social surroundings, and career experts. Through the interviews, we identified the social challenges that people with autism face during job seeking, the social support they leverage to be successful, and the technological limitations that hinder their collaboration. We designed four probes representing the major collaborative features found in the interviews (executive planning, communication, stage-wise preparation, and neurodivergent community formation) and discussed their potential usefulness and impact through three focus groups. We provide implications regarding how our findings can enhance collaborative job-seeking experiences for people with autism through new designs.

2024 · Zinat Ara et al. · George Mason University · Cognitive Impairment & Neurodiversity (Autism, ADHD, Dyslexia) · Job Search & Employment Support · Empowerment of Marginalized Groups · CHI
Rethinking Human-AI Collaboration in Complex Medical Decision Making: A Case Study in Sepsis Diagnosis

Today's AI systems for medical decision support often succeed on benchmark datasets in research papers but fail in real-world deployment. This work focuses on clinical decision making for sepsis, an acute, life-threatening systemic infection that requires early diagnosis under high uncertainty. Our aim is to explore the design requirements for AI systems that can support clinical experts in making better decisions for the early diagnosis of sepsis. The study begins with a formative study investigating why clinical experts abandon an existing AI-powered sepsis prediction module in their electronic health record (EHR) system. We argue that a human-centered AI system needs to support human experts in the intermediate stages of the medical decision-making process (e.g., generating hypotheses or gathering data), instead of focusing only on the final decision. Therefore, we build SepsisLab based on a state-of-the-art AI algorithm and extend it to predict the future projection of sepsis development, visualize the prediction uncertainty, and propose actionable suggestions (i.e., which additional laboratory tests can be collected) to reduce such uncertainty. Through a heuristic evaluation with six clinicians using our prototype system, we demonstrate that SepsisLab enables a promising human-AI collaboration paradigm for the future of AI-assisted sepsis diagnosis and other high-stakes medical decision making.

2024 · Shao Zhang et al. · Northeastern University · Explainable AI (XAI) · AI-Assisted Decision-Making & Automation · Medical & Scientific Data Visualization · CHI
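The SepsisLab abstract above mentions proposing additional laboratory tests to reduce prediction uncertainty, without detailing the algorithm. One common way to realize that general idea is to rank candidate tests by expected reduction in predictive entropy. The sketch below is an illustration of that approach, not the paper's method; the `predict` interface, function names, and toy values are all hypothetical.

```python
import numpy as np

def entropy(p):
    # Shannon entropy (in nats) of a class-probability vector.
    p = np.clip(np.asarray(p, dtype=float), 1e-9, 1.0)
    return float(-(p * np.log(p)).sum())

def rank_tests(predict, patient, candidate_tests, plausible_values):
    # Rank missing lab tests by expected entropy reduction: average the
    # model's predictive entropy over plausible imputed results for each
    # test, then compare against the current (no-new-test) entropy.
    base = entropy(predict(patient))
    gain = {}
    for test in candidate_tests:
        expected = np.mean([
            entropy(predict({**patient, test: v}))
            for v in plausible_values[test]
        ])
        gain[test] = base - expected
    return sorted(candidate_tests, key=lambda t: gain[t], reverse=True)
```

With a toy model whose predictions sharpen most when a (hypothetical) lactate value is available, that test would rank first.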
An Iterative Participatory Design Approach to Develop Collaborative Augmented Reality Activities for Older Adults in Long-Term Care Facilities

Over four million older adults living in long-term care (LTC) communities experience loneliness, adversely impacting their health. Increased contact with friends and family is an evidence-based intervention to reduce loneliness, but in-person visits are not always possible. Augmented Reality (AR)-based telepresence activities can offer viable alternatives with increased immersion and presence compared to video calls. However, their feasibility as an interaction technology for older adults is not known. In this paper, we detail the design of two dyadic collaborative AR activities that accommodate the diminished physical and cognitive abilities of older adults. The findings include a general design framework based on an iterative participatory design process focusing on the preferred activities, modes of interaction, and overall AR experience of eight older adults, two family members, and five LTC staff. Results demonstrate the potential of collaborative AR as an effective means of interaction for older adults with their families, if designed to cater to their needs.

2024 · Akshith Ullal et al. · Vanderbilt University · Mixed Reality Workspaces · Universal & Inclusive Design · Aging-in-Place Assistance Systems · CHI
AttFL: A Personalized Federated Learning Framework for Time-series Mobile and Embedded Sensor Data Processing

This work presents AttFL, a federated learning framework designed to continuously improve a personalized deep neural network for efficiently analyzing time-series data generated from mobile and embedded sensing applications. To better characterize time-series data features and efficiently abstract model parameters, AttFL appends a set of attention modules to the baseline deep learning model and exchanges their feature map information to gather collective knowledge across distributed local devices at the server. The server groups devices with similar contextual goals using cosine similarity and redistributes updated model parameters for improved inference performance at each local device. Unlike previously proposed federated learning frameworks, AttFL is designed specifically to perform well with various recurrent neural network (RNN) baseline models, making it suitable for many mobile and embedded sensing applications that produce time-series sensing data. We evaluate the performance of AttFL and compare it with five state-of-the-art federated learning frameworks using three popular mobile/embedded sensing applications (physiological signal analysis, human activity recognition, and audio processing). Our results, obtained from CPU core-based emulations and a 12-node embedded platform testbed, show that AttFL outperforms all alternative approaches in terms of model accuracy and communication/computational overhead, and is flexible enough to be applied in various application scenarios exploiting different baseline deep learning model architectures.

https://doi.org/10.1145/3610917

2023 · Jaeyeon Park et al. · Human Pose & Activity Recognition · Biosensors & Physiological Monitoring · Computational Methods in HCI · UbiComp
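The server-side step described above, grouping devices by cosine similarity of their exchanged representations, can be illustrated with a minimal sketch. This is not AttFL's implementation; the greedy grouping rule, the threshold, and the `group_clients` interface are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two flattened feature-map vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def group_clients(client_vectors, threshold=0.9):
    # Greedy grouping: each client joins the first existing group whose
    # founding member's vector is cosine-similar above `threshold`,
    # otherwise it founds a new group. The server could then aggregate
    # and redistribute model parameters within each group.
    groups = []
    for cid, vec in client_vectors.items():
        for group in groups:
            if cosine_similarity(vec, client_vectors[group[0]]) >= threshold:
                group.append(cid)
                break
        else:
            groups.append([cid])
    return groups
```

Two clients with nearly parallel vectors end up in one group, while an orthogonal client is kept separate.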
AMIR: Active Multimodal Interaction Recognition from Video and Network Traffic in Connected Environments

Activity recognition using video data is widely adopted for elder care, monitoring for safety and security, and home automation. Unfortunately, using video data as the basis for activity recognition can be brittle, since models trained on video are often not robust to certain environmental changes, such as changes in camera angle and lighting. Meanwhile, network-connected devices have proliferated in home environments. Interactions with these smart devices are associated with network activity, making network data a potential source for recognizing these device interactions. This paper advocates for the synthesis of video and network data for robust interaction recognition in connected environments. We consider machine learning-based approaches for activity recognition, where each labeled activity is associated with both a video capture and an accompanying network traffic trace. We develop a simple but effective framework, AMIR (Active Multimodal Interaction Recognition), that trains independent models for video and network activity recognition respectively, and subsequently combines the predictions from these models using a meta-learning framework. Whether in the lab or at home, this approach reduces the amount of "paired" demonstrations needed to perform accurate activity recognition, where both network and video data are collected simultaneously. Specifically, the method we have developed requires up to 70.83% fewer samples to achieve an 85% F1 score than random data collection, and improves accuracy by 17.76% given the same number of samples.

https://dl.acm.org/doi/10.1145/3580818

2023 · Shinan Liu et al. · Human Pose & Activity Recognition · Context-Aware Computing · Ubiquitous Computing · UbiComp
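The abstract describes combining predictions from independent video and network models via a meta-learning framework. As a deliberately simple illustration of prediction fusion in general (not AMIR's actual meta-learner), one can weight each unimodal model by its held-out accuracy and average class probabilities; all names below are hypothetical.

```python
import numpy as np

def fit_meta_weights(video_probs, net_probs, labels):
    # Weight each unimodal model by its held-out accuracy: a very
    # simple stand-in for a learned meta-model over base predictions.
    acc_video = np.mean(np.argmax(video_probs, axis=1) == labels)
    acc_net = np.mean(np.argmax(net_probs, axis=1) == labels)
    total = acc_video + acc_net
    return acc_video / total, acc_net / total

def fuse_predictions(video_probs, net_probs, w_video, w_net):
    # Weighted average of per-class probabilities, then argmax.
    return np.argmax(w_video * video_probs + w_net * net_probs, axis=1)
```

On a toy validation set where the video model is perfect and the network model is right half the time, the video model receives two thirds of the weight and the fused prediction follows it.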
Mediated Social Support for Distress Reduction: AI Chatbots vs. Human

The emerging uptake of AI chatbots for social support calls for systematic comparisons between human and non-human entities as sources of support. In a between-subjects experimental study, a human and two types of ostensible chatbots (using a Wizard of Oz design) had supportive conversations with college students who were experiencing stressful situations during the pandemic. We found that, compared with a less ideal chatbot (i.e., the low-contingent chatbot): (1) the human support provider was perceived with more warmth, which directly reduced emotional distress among participants; and (2) the ideal chatbot (i.e., the high-contingent chatbot) was perceived to be more competent, which activated participants' cognitive reappraisal of their stressful situations and subsequently reduced emotional distress. The human provider and the ideal chatbot did not differ in users' perceived competence or warmth, although the human provider was more effective at activating participants' cognitive reappraisal. This study integrates human communication theories into human-computer interaction work and contributes by positioning and theorizing user perceptions of chatbots within a larger process, from support sources with varying communication competence to users' cognitive and emotional responses, and ultimately to the distress outcome. Theoretical and design implications are discussed.

2023 · Jingbo Meng et al. · AI Applications · CSCW
Flux Capacitors for JavaScript DeLoreans: Approximate Caching for Physics-based Data Interaction

Interactive visualizations have become an effective and pervasive way of letting users explore data in a visual, fluid, and immersive manner. While modern web, mobile, touch, and gesture-driven next-generation interfaces such as Leap Motion allow for highly interactive experiences, they pose unique and unprecedented workloads for the underlying data platform. These visualizations usually do not need precise results for most queries generated during an interaction; users need the intermediate results only as feedback to guide them toward their goal query. We present a middleware component, Flux Capacitor, that insulates the backend from bursty and query-intensive workloads. Flux Capacitor uses prefetching and caching strategies devised by exploiting the inherent physics metaphors of UI widgets, such as friction and inertia in range sliders, together with typical user-interaction patterns. This enables low interaction response times while intelligently trading off accuracy.

2019 · Meraj Ahmed Khan et al. · Interactive Data Visualization · Visualization Perception & Cognition · Computational Methods in HCI · IUI
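The physics metaphor behind this kind of prefetching, using a slider's inertia and friction to anticipate where it will come to rest, can be sketched as follows. This is an illustrative toy model, not the system's actual strategy; the geometric-decay friction constant and prefetch window are made-up parameters.

```python
def predict_rest_position(pos, velocity, friction=0.5, lo=0.0, hi=100.0, eps=1e-3):
    # Simulate an inertial slider handle after release: each frame it
    # moves by `velocity`, which then decays geometrically by `friction`.
    # The handle is treated as stopped once velocity drops below `eps`.
    while abs(velocity) > eps:
        pos += velocity
        velocity *= friction
    return min(max(pos, lo), hi)  # clamp to the slider's range

def prefetch_window(pos, velocity, margin=5.0):
    # Prefetch (approximate) query results for slider values around the
    # predicted resting point, before the handle actually stops there.
    rest = predict_rest_position(pos, velocity)
    return (rest - margin, rest + margin)
```

With friction 0.5, a release at position 10 with velocity 2 travels roughly 2 / (1 - 0.5) = 4 further, so the cache would warm results near value 14.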
Transformer: A Database-Driven Approach to Generating Forms for Constrained Interaction

Form-based data insertion or querying is often one of the most time-consuming steps in data-driven workflows. The small screens and lack of physical keyboards on devices such as smartphones and smartwatches introduce imprecision during user input. This can lead to data quality issues such as incomplete responses and errors, increasing user input time. We present Transformer, a system that leverages the contents of the database to automatically optimize forms for constrained input settings. Our cost function models the user's input effort based on the schema and data distribution. Transformer uses this to find the user interface (UI) widget and layout with the lowest input cost for each form field. We demonstrate through user studies that Transformer provides a significantly improved user experience, with up to 50% and 57% reductions in form completion time for smartphones and smartwatches, respectively.

2019 · Protiva Rahman et al. · Prototyping & User Testing · IUI
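The idea of picking a widget per field by modeled input cost can be illustrated with a deliberately simple cost model. This is not the paper's actual cost function; the log2 dropdown cost, keystroke-count text cost, and two-widget choice are assumptions for illustration only.

```python
import math

def expected_input_cost(values, widget):
    # Toy cost model over a field's observed value distribution:
    # - dropdown: ~log2 of the number of distinct options (scan/scroll steps)
    # - text: average value length in keystrokes
    if widget == "dropdown":
        return math.log2(len(set(values)) + 1)
    if widget == "text":
        return sum(len(str(v)) for v in values) / len(values)
    raise ValueError(f"unknown widget: {widget}")

def choose_widget(values):
    # Pick the widget with the lowest modeled input effort for this field.
    return min(("dropdown", "text"), key=lambda w: expected_input_cost(values, w))
```

Under this model, a field with a few long repeated values (e.g., state names) favors a dropdown, while a field with many short distinct values favors free-text entry.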