Family In The Loop: Enabling Family Involvement in Dementia Care at Long-term Facilities with Person-centered AI ToolsFamily involvement is crucial to person-centered dementia care, yet communication breakdowns, fragmented documentation, and resource constraints often hinder meaningful collaboration in long-term care facilities. This paper introduces Family In The Loop (FITL), an AI-enabled platform designed through a year of fieldwork and deployed across multiple facilities to strengthen person-centered care coordination between families and staff. FITL ingests passive data (video streams and electronic health records) along with active user feedback, integrating these into four interconnected real-time views: Essentials View provides concise AI-generated care recommendations; Reels View offers personalized video highlights to reassure families; Explanations View answers user queries via a natural-language interface with links to supporting evidence; and Archives View ensures accountability by aggregating, indexing, and presenting the underlying care data in a structured historical record. Our contributions are: (1) empirical insights into how communication breakdowns, documentation gaps, and trust barriers undermine person-centered care and family involvement; (2) field-based design principles, developed in response to these challenges and instantiated in FITL, that enhance transparency and emotional reassurance without adding staff burden; (3) deployment findings detailing practical strategies for integrating AI tools into real-world care routines; and (4) a conceptual vision for scalable, data-driven, and role-sensitive person-centered care. These contributions advance CSCW by showing how integrated AI can support person-centered care in complex, resource-limited environments.2025DMDylan Edward Moore et al.Palliative CareCSCW
IntelliLining: Activity Sensing through Textile Interlining Sensors Using TENGsWe introduce smart interlining, a novel component for smart garments, and validate its technical feasibility through a series of experiments. Our work involved the implementation of a prototype that employs a textile vibration sensor based on Triboelectric Nanogenerators (TENGs), commonly used for activity detection. We explore several unique features of smart interlining, including how sensor signals and patterns are influenced by factors such as the size and shape of the interlining sensor, the location of the vibration source within the sensor area, and various propagation media, such as airborne and surface vibrations. We present our study results and discuss how these findings support the feasibility of smart interlining. Additionally, we demonstrate that smart interlinings on a shirt can detect a variety of user activities involving the hand, mouth, and upper body, achieving an accuracy rate of 93.9% in the tested activities.2025MEMahdie Ghane Ezabadi et al.Simon Fraser University, Computing ScienceHaptic WearablesElectronic Textiles (E-textiles)CHI
An Investigation of Interaction and Information Needs for Protocol Reverse Engineering AutomationProtocol reverse engineering (ProtocolREing) consists of taking streams of network data and inferring the communication protocol. ProtocolREing is a critical task in malware and system security analysis. Several ProtocolREing automation tools have been developed; however, in practice, they are not used because they offer limited interaction. Instead, reverse engineers (ProtocolREs) perform this task manually or use less complex visualization tools. To give ProtocolREs the power of more complex automation, we must first understand ProtocolREs' processes and their information and interaction needs to design better interfaces. We interviewed 16 ProtocolREs, presenting a paper prototype ProtocolREing automation interface, and asked them to discuss their approach to ProtocolREing while using the tool and suggest missing information and interactions. We designed our prototype based on existing ProtocolREing tool features and usability guidelines from prior reverse engineering research. We found ProtocolREs follow a flexible, hypothesis-driven process and identified multiple information and interaction needs when validating the automation's inferences. We provide suggestions for future interaction design.2025SKSamantha Katcher et al.Tufts University, Department of Computer Science; MITRE CorporationExplainable AI (XAI)AI-Assisted Decision-Making & AutomationAlgorithmic Transparency & AuditabilityCHI
Enhancing the Educational Potential of Online Movement Videos: System Development and Empirical Studies with TikTok Dance ChallengesWe hypothesize that online movement videos have untapped potential for teaching physical skills, and we developed a platform that automatically generates practice plans from raw TikTok dance videos. The practice plans teach one segment at a time using fading guidance and part-learning principles and are presented using a web-based interface featuring concurrent visual aids. Two user studies (n=54, n=38) were conducted. The first showed significant improvements in learning outcomes compared to standard tutorials, underscoring the importance of well-structured practice plans and offering nuanced insights into the design and effectiveness of visual aids. The second study found that segmentation and emoji-based dual-coding only benefit learning when integrated into a well-designed lesson structure. We provide a set of practical recommendations for enhancing online movement learning, focusing on the need for substantive part-learning activities and careful use of visual aids to prevent cognitive overload.2025JBJules Brooks Blanchet et al.Dartmouth CollegeDance & Body Movement ComputingCHI
Investigating Context-Aware Collaborative Text Entry on Smartphones using Large Language ModelsText entry is a fundamental and ubiquitous task, but users often face challenges such as situational impairments or difficulties in sentence formulation. Motivated by this, we explore the potential of large language models (LLMs) to assist with text entry in real-world contexts. We propose a collaborative smartphone-based text entry system, CATIA, that leverages LLMs to provide text suggestions based on contextual factors, including screen content, time, location, activity, and more. In a 7-day in-the-wild study with 36 participants, the system offered appropriate text suggestions in over 80% of cases. Users exhibited different collaborative behaviors depending on whether they were composing text for interpersonal communication or information services. Additionally, the relevance of contextual factors beyond screen content varied across scenarios. We identified two distinct mental models: AI as a supportive facilitator or as a more equal collaborator. These findings outline the design space for human-AI collaborative text entry on smartphones.2025WCWeihao Chen et al.Tsinghua University, Department of Computer Science and TechnologyVoice User Interface (VUI) DesignHuman-LLM CollaborationContext-Aware ComputingCHI
Therapy for Therapists: Design Opportunities to Support the Psychological Well-being of Mental Health WorkersOn-demand mental health services—including counseling, crisis hotlines, and peer support programs—are vital to the healthcare system, providing acute and ongoing support through telephone, online, and text-based communication. Although such services have proven effective at reducing hopelessness, psychological pain, and suicidality, they put mental health professionals at high risk of burnout, secondary traumatic stress, and compassion fatigue. Our interviews with mental health workers across professions from four mental health organizations revealed that while mental health workers have a strong motivation to help individuals struggling to meet their mental health needs, they face various challenges, including heavy caseloads, navigating interactions with clients in crisis, and managing the impact of abuse and harassment. Although organizations spend significant time training workers prior to their involvement with clients, the training lacks components of self-compassion and self-care. To overcome their challenges, participants identify the need to be self-reliant and engage in care practices ranging from socializing with coworkers to yoga and meditation. Although technology is an integral part of their work routine, participants, irrespective of their age, had misapprehensions regarding technology use in the mental health care space and for managing one’s own psychological well-being. We recommend design guidelines for HCI researchers, including developing contextualized just-in-time adaptive interventions to promote self-compassion and educating workers regarding the use of various technologies to manage their psychological well-being.2024ACAishwarya Chandrasekaran et al.Session 1c: Care for the CaregiversCSCW
EarSE: Bringing Robust Speech Enhancement to COTS HeadphonesDuan et al. develop the EarSE system, which uses deep learning to bring robust speech enhancement to commercial off-the-shelf headphones, enabling consumer-grade headphones to achieve professional-level noise reduction and improving speech signal-to-noise ratio by 25 dB.2024DDDi Duan et al.Voice User Interface (VUI) DesignIntelligent Voice Assistants (Alexa, Siri, etc.)UbiComp
PyroSense: 3D Posture Reconstruction Using Pyroelectric Infrared SensingZeng et al. propose the PyroSense system, which uses pyroelectric infrared sensing to achieve camera-free 3D posture reconstruction, providing high-accuracy human pose sensing while preserving privacy.2024HZHuaili Zeng et al.Human Pose & Activity RecognitionBiosensors & Physiological MonitoringSmart Home Interaction DesignUbiComp
Laser-Powered Vibrotactile RenderingSu et al. propose a laser-powered vibrotactile rendering technique that uses lasers to produce thermal vibration sensations on the skin surface, enabling contactless haptic feedback and offering a new interaction approach for VR/AR.2024YSYuning Su et al.Mid-Air Haptics (Ultrasonic)UbiComp
ECSkin: Tessellating Electrochromic Films for Customizable On-skin DisplaysKu et al. propose ECSkin, which tessellates electrochromic films to create customizable on-skin displays, enabling personalized wearable display functionality.2024PKPin-Sung Ku et al.Shape-Changing Interfaces & Soft Robotic MaterialsOn-Skin Display & On-Skin InputUbiComp
Investigating Generalizability of Speech-based Suicidal Ideation Detection Using Mobile PhonesPillai et al. investigate the cross-context generalizability of speech-based suicidal ideation detection models on mobile phones, informing the practical deployment of mental health monitoring technologies.2024APArvind Pillai et al.Brain-Computer Interface (BCI) & NeurofeedbackMental Health Apps & Online Support CommunitiesUbiComp
Symptom Detection with Text Message Log Distributions for Holistic Depression and Anxiety ScreeningReisch et al. use distributional features of text message logs for symptom detection, enabling automated screening for depression and anxiety.2024MRMiranda Reisch et al.Mental Health Apps & Online Support CommunitiesUbiComp
Capturing the College Experience: A Four-year Mobile Sensing Study of Mental Health, Resilience and Behavior of College Students during the PandemicNepal et al. conduct a four-year mobile sensing study tracking college students' mental health, resilience, and behavioral changes during the pandemic, providing data to support mental health interventions in higher education.2024SNSubigya Nepal et al.Mental Health Apps & Online Support CommunitiesSleep & Stress MonitoringUbiComp
StructCurves: Interlocking Block-Based Line StructuresWe present a new class of curved block-based line structures whose component chains are flexible when separated, and provably rigid when assembled together into an interlocking double chain. The joints are inspired by traditional zippers, where a binding fabric or mesh connects individual teeth. Unlike traditional zippers, the joint design produces a rigid interlock with programmable curvature. This allows fairly strong curved structures to be built out of easily stored flexible chains. In this paper, we introduce a pipeline for generating these curved structures using a novel block design template based on revolute joints. Mesh embedded in these structures maintains block spacing and assembly order. We evaluate the rigidity of the curved structures through mechanical performance testing and demonstrate several applications.2024ZSZezhou Sun et al.Shape-Changing Interfaces & Soft Robotic MaterialsShape-Changing Materials & 4D PrintingUIST
MoodCapture: Depression Detection using In-the-Wild Smartphone ImagesMoodCapture presents a novel approach that assesses depression based on images automatically captured from the front-facing camera of smartphones as people go about their daily lives. We collect over 125,000 photos in the wild from N=177 participants diagnosed with major depressive disorder for 90 days. Images are captured naturalistically while participants respond to the PHQ-8 depression survey question: "I have felt down, depressed, or hopeless". Our analysis explores important image attributes, such as angle, dominant colors, location, objects, and lighting. We show that a random forest trained with face landmarks can classify samples as depressed or non-depressed and predict raw PHQ-8 scores effectively. Our post-hoc analysis provides several insights through an ablation study, feature importance analysis, and bias assessment. Importantly, we evaluate user concerns about using MoodCapture to detect depression based on sharing photos, providing critical insights into privacy concerns that inform the future design of in-the-wild image-based mental health assessment tools.2024SNSubigya Nepal et al.Dartmouth CollegeMental Health Apps & Online Support CommunitiesPrivacy by Design & User ControlBiosensors & Physiological MonitoringCHI
Teaching artificial intelligence in extracurricular contexts through narrative-based learnersourcingCollaborative technology provides powerful opportunities to engage young people in active learning experiences that are inclusive, immersive, and personally meaningful. In particular, interactive narratives have proven to be effective scaffolds for learning, and learnersourcing has emerged as a promising student-driven approach to enable personalized education and quality control at-scale. We introduce the first synthesis of these ideas in the context of teaching artificial intelligence (AI), which is now seen as a critical component of 21st-century education. Specifically, we explore the design of a narrative-based learnersourcing platform where engagement is centered around a learner-made choose-your-own-adventure story. In grounding our approach, we draw from pedagogical literature, digital storytelling, and recent work on learnersourcing. We report on our iterative, learner-centered design process as well as our study findings that demonstrate the platform’s positive effects on knowledge gains, interest in AI concepts, and the overall user experience of narrative-based learnersourcing technology.2024DMDylan Edward Moore et al.Dartmouth CollegeSTEM Education & Science CommunicationInteractive Narrative & Immersive StorytellingCHI
Sketching AI Concepts with Capabilities and Examples: AI Innovation in the Intensive Care UnitAdvances in artificial intelligence (AI) have enabled unprecedented capabilities, yet innovation teams struggle when envisioning AI concepts. Data science teams think of innovations users do not want, while domain experts think of innovations that cannot be built. A lack of effective ideation seems to be a breakdown point. How might multidisciplinary teams identify buildable and desirable use cases? This paper presents a first hand account of ideating AI concepts to improve critical care medicine. As a team of data scientists, clinicians, and HCI researchers, we conducted a series of design workshops to explore more effective approaches to AI concept ideation and problem formulation. We detail our process, the challenges we encountered, and practices and artifacts that proved effective. We discuss the research implications for improved collaboration and stakeholder engagement, and discuss the role HCI might play in reducing the high failure rate experienced in AI innovation.2024NYNur Yildirim et al.Carnegie Mellon UniversityGenerative AI (Text, Image, Music, Video)Human-LLM CollaborationMental Health Apps & Online Support CommunitiesCHI
The Power of Speech in the Wild: Discriminative Power of Daily Voice Diaries in Understanding Auditory Verbal Hallucinations Using Deep LearningMobile phone sensing is increasingly being used in clinical research studies to assess a variety of mental health conditions (e.g., depression, psychosis). However, in-the-wild speech analysis -- beyond conversation detection -- is a missing component of these mobile sensing platforms and studies. We augment an existing mobile sensing platform with a daily voice diary to assess and predict the severity of auditory verbal hallucinations (i.e., hearing sounds or voices in the absence of any speaker), a condition that affects people with and without psychiatric or neurological diagnoses. We collect 4809 audio diaries from N=384 subjects over a one-month-long study period. We investigate the performance of various deep-learning architectures using different combinations of sensor behavioral streams (e.g., voice, sleep, mobility, phone usage, etc.) and show the discriminative power of solely using audio recordings of speech as well as automatically generated transcripts of the recordings; specifically, our deep learning model achieves a weighted f-1 score of 0.78 solely from daily voice diaries. Our results surprisingly indicate that a simple periodic voice diary combined with deep learning is a sufficient signal to assess complex psychiatric symptoms (e.g., auditory verbal hallucinations) collected from people in the wild as they go about their daily lives. https://doi.org/10.1145/3610890 2023WWWeichen Wang et al.Brain-Computer Interface (BCI) & NeurofeedbackMental Health Apps & Online Support CommunitiesUbiComp
“It’s Not an Issue of Malice, but of Ignorance”: Towards Inclusive Video Conferencing for Presenters Who are d/Deaf or Hard of HearingAs video conferencing (VC) has become necessary for many professional, educational, and social tasks, people who are d/Deaf and hard of hearing (DHH) face distinct accessibility barriers. We conducted studies to understand the challenges faced by DHH people during VCs and found that they struggled to easily present or communicate effectively due to accessibility limitations of VC platforms. These limitations include the lack of tools for DHH speakers to discreetly communicate their accommodation needs to the group. Based on these findings, we prototyped a suite of tools, called Erato, that enables DHH speakers to be aware of their performance while speaking and remind participants of proper etiquette. We evaluated Erato by running a mock classroom case study over VC for three sessions. All participants felt more confident in their speaking ability and paid closer attention to making the classroom more inclusive while using our tool. We share implications of these results for the design of VC interfaces and human-in-the-loop assistive systems that can support users who are DHH to communicate effectively and advocate for their accessibility needs. https://doi.org/10.1145/3610901 2023JDJosh Urban Davis et al.Deaf & Hard-of-Hearing Support (Captions, Sign Language, Vibration)Universal & Inclusive DesignUbiComp
Narrative-Based Visual Feedback to Encourage Sustained Physical Activity: A Field Trial of the WhoIsZuki Mobile Health PlatformStories are a core way human beings make meaning and sense of the world and our lived experiences, including our behaviors, desires, and goals. Narrative structures, both visual and textual, help us understand and act on information, while also evoking strong emotions. Focusing on the health context, this research examines the effectiveness of narrative-based feedback in motivating physical activity behaviors and underlying attitudes over longitudinal periods. After collecting two weeks of baseline physical activity levels, N=39 participants installed our smartphone application, WhoIsZuki. The WhoIsZuki app supports goal setting and semi-automated activity tracking, and it provides an ambient display that visually encodes these tracked activities as well as progress toward goals. Half of the participants received a version of the interface that supplied behavioral feedback in the form of a multi-chapter episodic narrative, while the other half received a control condition version that provided an aesthetically-similar visualization but without any characterization, episodic structure, dramatic effect, or other narrative elements. After interacting with these versions for four months, our analysis showed that participants receiving the multi-chapter narrative feedback performed more physical activity, achieved more goals, experienced more positive psychological shifts, and overall engaged more meaningfully with the digital intervention. https://dl.acm.org/doi/10.1145/3580786 2023EMElizabeth L Murnane et al.Mental Health Apps & Online Support CommunitiesFitness Tracking & Physical Activity MonitoringInteractive Narrative & Immersive StorytellingUbiComp