“Poker with Play Money”: Exploring Psychotherapist Training with Virtual Patients
Cynthia M Baseman et al. CSCW 2025.
Role-play exercises are widely utilized for training across a variety of domains; however, they have many shortcomings, including low availability, resource intensity, and lack of diversity. Large language model-driven virtual agents offer a potential avenue to mitigate these limitations and enable lower-risk role-play. The implications, however, of shifting this human-human collaboration to human-agent collaboration are still largely unexplored. In this work we focus on the context of psychotherapy, as psychotherapists-in-training extensively engage in role-play exercises with peers and/or supervisors to practice the interpersonal and therapeutic skills required for effective treatment. We provide a case study of a realistic “virtual patient” system for mental health training, evaluated by trained psychotherapists in comparison to their previous experiences with both real role-play partners and real patients. Our qualitative, reflexive analysis generated three themes and thirteen subthemes regarding key interpersonal skills of psychotherapy, the utility of the system compared to traditional role-play techniques, and factors which impacted psychotherapist-perceived “humanness” of the virtual patient. Although psychotherapists were optimistic about the system's potential to bolster therapeutic skills, this utility was impacted by the extent to which the virtual patient was perceived as human-like. We leverage the Computers Are Social Actors framework to discuss human–virtual-patient collaboration for practicing rapport, and discuss challenges of prototyping novel human-AI systems for clinical contexts which require a high degree of unpredictability. We pull from the “SEEK” three-factor theory of anthropomorphism to stress the importance of adequately representing a variety of cultural communities within mental health AI systems, in alignment with decolonial computing.
Topics: AI Applications for Safety and Support.

Branch Explorer: Leveraging Branching Narratives to Support Interactive 360° Video Viewing for Blind and Low Vision Users
Ke Xu et al. UIST 2025.
360° videos enable users to freely choose their viewing paths, but blind and low vision (BLV) users are often excluded from this interactive experience. To bridge this gap, we present Branch Explorer, a system that transforms 360° videos into branching narratives—stories that dynamically unfold based on viewer choices—to support interactive viewing for BLV audiences. Our formative study identified three key considerations for accessible branching narratives: providing diverse branch options, ensuring coherent story progression, and enabling immersive navigation among branches. To address these needs, Branch Explorer employs a multi-modal machine learning pipeline to generate diverse narrative paths, allowing users to flexibly make choices at detected branching points and seamlessly engage with each storyline through immersive audio guidance. Evaluation with 12 BLV viewers showed that Branch Explorer significantly enhanced user agency and engagement in 360° video viewing. Users also developed personalized strategies for exploring 360° content. We further highlight implications for supporting accessible exploration of videos and virtual environments.
Topics: 360° Video & Panoramic Content; Accessible Gaming.

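The branching structure at the heart of such a system can be pictured as a small tree of choice points along the video timeline. Below is a minimal Python sketch; the names (`BranchNode`, `Branch`) and the narration text are hypothetical illustrations, not Branch Explorer's actual data model.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Branch:
    label: str                       # short spoken description of the option
    narration: str                   # audio-guidance script for this path
    next_node: Optional["BranchNode"] = None

@dataclass
class BranchNode:
    time_sec: float                  # when the branching point occurs
    branches: list[Branch] = field(default_factory=list)

# One detected branching point with two narrative paths:
node = BranchNode(12.5, [
    Branch("follow the parade", "The parade turns left down the avenue..."),
    Branch("stay at the market", "Behind you, a vendor calls out..."),
])
print([b.label for b in node.branches])
```
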
From Following to Understanding: Investigating the Role of Reflective Prompts in AR-Guided Tasks to Promote User Understanding
Nandi Zhang et al. CHI 2025. University of Calgary.
Augmented Reality (AR) is a promising medium for guiding users through tasks, yet its impact on fostering deeper task understanding remains underexplored. This paper investigates the impact of reflective prompts (strategic questions that encourage users to challenge assumptions, connect actions to outcomes, and consider hypothetical scenarios) on task comprehension and performance. We conducted a two-phase study: a formative survey and co-design sessions (N=9) to develop reflective prompts, followed by a within-subject evaluation (N=16) comparing AR instructions with and without these prompts in coffee-making and circuit-assembly tasks. Our results show that reflective prompts significantly improved objective task understanding and resulted in more proactive information-acquisition behaviors during task completion. These findings highlight the potential of incorporating reflective elements into AR instructions to foster deeper engagement and learning. Based on data from both studies, we synthesized design guidelines for integrating reflective elements into AR systems to enhance user understanding without compromising task performance.
Topics: AR Navigation & Context Awareness; Prototyping & User Testing.

Briteller: Shining a Light on AI Recommendations for Children
Xiaofei Zhou et al. CHI 2025. University of Rochester, Department of Computer Science.
Understanding how AI recommendations work can help the younger generation become more informed and critical consumers of the vast amount of information they encounter daily. However, young learners with limited math and computing knowledge often find AI concepts too abstract. To address this, we developed Briteller, a light-based recommendation system that makes learning tangible. By exploring and manipulating light beams, Briteller enables children to understand an AI recommender system's core algorithmic building block, the dot product, through hands-on interactions. Initial evaluations with ten middle school students demonstrated the effectiveness of this approach, using embodied metaphors, such as "merging light" to represent addition. To overcome the limitations of the physical optical setup, we further explored how AR could embody multiplication, expand data vectors with more attributes, and enhance contextual understanding. Our findings provide valuable insights for designing embodied and tangible learning experiences that make AI concepts more accessible to young learners.
Topics: Programming Education & Computational Thinking; STEM Education & Science Communication.

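Since the dot product is the algorithmic building block the system teaches, a toy example makes the "merging light" metaphor concrete: multiply matching attributes, then add the results. The attribute names and item vectors below are invented for illustration.

```python
import numpy as np

# Hypothetical example: a child's preference vector and two book vectors,
# each scored on the same attributes (e.g., humor, adventure, animals).
user = np.array([0.9, 0.2, 0.7])
books = {
    "Space Cats":   np.array([0.8, 0.3, 0.9]),
    "Pirate Atlas": np.array([0.1, 0.9, 0.2]),
}

# The recommendation score is the dot product: per-attribute products
# ("light beams") are summed ("merged") into one brightness value.
scores = {title: float(np.dot(user, vec)) for title, vec in books.items()}
print(max(scores, key=scores.get))  # -> "Space Cats" (1.41 vs 0.41)
```
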
Can you pass that tool?: Implications of Indirect Speech in Physical Human-Robot Collaboration
Zheng Zhang et al. CHI 2025. University of Melbourne, School of Computing and Information Systems.
Indirect speech acts (ISAs) are a natural pragmatic feature of human communication, allowing requests to be conveyed implicitly while maintaining subtlety and flexibility. Although advancements in speech recognition have enabled natural language interactions with robots through direct, explicit commands—providing clarity in communication—the rise of large language models presents the potential for robots to interpret ISAs. However, empirical evidence on the effects of ISAs on human-robot collaboration (HRC) remains limited. To address this, we conducted a Wizard-of-Oz study (N=36), engaging a participant and a robot in collaborative physical tasks. Our findings indicate that robots capable of understanding ISAs significantly improve perceived robot anthropomorphism, team performance, and trust. However, the effectiveness of ISAs is task- and context-dependent, thus requiring careful use. These results highlight the importance of appropriately integrating direct and indirect requests in HRC to enhance collaborative experiences and task performance.
Topics: Agent Personality & Anthropomorphism; Human-LLM Collaboration; Human-Robot Collaboration (HRC).

BallistoBud: Heart Rate Variability Monitoring using Earbud Accelerometry for Stress Assessment
Md Saiful Islam et al. CHI 2025. University of Rochester, Department of Computer Science; Samsung Research America, Digital Health.
This paper examines the potential of commercial earbuds for detecting physiological biomarkers like heart rate (HR) and heart rate variability (HRV) for stress assessment. Using accelerometer (IMU) and photoplethysmography (PPG) data from earbuds, we compared these estimates with reference electrocardiogram (ECG) data from 81 healthy participants. We explored using low-power accelerometer sensors for capturing ballistocardiography (BCG) signals. However, BCG signal quality can vary due to individual differences and body motion. Therefore, BCG data quality assessment is critical before extracting any meaningful biomarkers. To address this, we introduced the ECG-gated BCG heatmap, a new method for assessing BCG signal quality. We trained a Random Forest model to identify usable signals, achieving 82% test accuracy. Filtering out unusable signals improved HR/HRV estimation accuracy to levels comparable to PPG-based estimates. Our findings demonstrate the feasibility of accurate physiological monitoring with earbuds, advancing the development of user-friendly wearable health technologies for stress management.
Topics: Sleep & Stress Monitoring; Biosensors & Physiological Monitoring.

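Once usable beats have been filtered out of the BCG signal, HR and HRV reduce to standard time-domain statistics over inter-beat intervals. A minimal sketch with hypothetical interval values; the paper's beat-extraction and quality-gating pipeline is not reproduced here.

```python
import numpy as np

def hrv_metrics(ibi_ms: np.ndarray) -> dict:
    """Standard time-domain HR/HRV metrics from inter-beat intervals (ms)."""
    diffs = np.diff(ibi_ms)
    return {
        "hr_bpm": 60_000.0 / ibi_ms.mean(),        # mean heart rate
        "sdnn_ms": ibi_ms.std(ddof=1),             # overall variability
        "rmssd_ms": np.sqrt(np.mean(diffs ** 2)),  # beat-to-beat variability
    }

# Hypothetical intervals, as might be extracted from gated BCG beats:
print(hrv_metrics(np.array([812.0, 795.0, 830.0, 801.0, 818.0])))
```
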
DanmuA11y: Making Time-Synced On-Screen Video Comments (Danmu) Accessible to Blind and Low Vision Users via Multi-Viewer Audio Discussions
Shuchang Xu et al. CHI 2025. Hong Kong University of Science and Technology.
By overlaying time-synced user comments on videos, Danmu creates a co-watching experience for online viewers. However, its visual-centric design poses significant challenges for blind and low vision (BLV) viewers. Our formative study identified three primary challenges that hinder BLV viewers' engagement with Danmu: the lack of visual context, the speech interference between comments and videos, and the disorganization of comments. To address these challenges, we present DanmuA11y, a system that makes Danmu accessible by transforming it into multi-viewer audio discussions. DanmuA11y incorporates three core features: (1) augmenting Danmu with visual context, (2) seamlessly integrating Danmu into videos, and (3) presenting Danmu via multi-viewer discussions. Evaluation with twelve BLV viewers demonstrated that DanmuA11y significantly improved Danmu comprehension, provided smooth viewing experiences, and fostered social connections among viewers. We further highlight implications for enhancing commentary accessibility in video-based social media and live-streaming platforms.
Topics: Voice Accessibility; Accessible Gaming; Universal & Inclusive Design.

Modeling the Impact of Visual Stimuli on Redirection Noticeability with Gaze Behavior in Virtual Reality
Zhipeng Li et al. CHI 2025. ETH Zürich, Department of Computer Science.
While users can embody virtual avatars that mirror their physical movements in Virtual Reality, these avatars' motions can be redirected to enable novel interactions. Excessive redirection, however, can break the user's sense of embodiment due to perceptual conflicts between vision and proprioception. While prior work focused on avatar-related factors influencing the noticeability of redirection, we investigate how the visual stimuli in the surrounding virtual environment affect user behavior and, in turn, the noticeability of redirection. Given the wide variety of visual stimuli and their tendency to elicit varying individual reactions, we propose to use users' gaze behavior as an indicator of their response to the stimuli and to model the noticeability of redirection. We conducted two user studies to collect users' gaze behavior and noticeability ratings, investigating the relationship between them and identifying the most effective gaze behavior features for predicting noticeability. Based on this data, we developed a regression model that takes users' gaze behavior as input and outputs the noticeability of redirection. We then conducted an evaluation study to test our model on unseen visual stimuli, achieving a mean squared error (MSE) of 0.012. We further implemented an adaptive redirection technique and conducted a proof-of-concept study to evaluate its effectiveness with complex visual stimuli in two applications. The results indicated that participants reported lower physical demand and a stronger sense of body ownership when using our adaptive technique, demonstrating the potential of our model to support real-world use cases.
Topics: Eye Tracking & Gaze Interaction; Mixed Reality Workspaces; Immersion & Presence Research.

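The modeling step (gaze features in, noticeability out, evaluated by MSE) can be sketched generically. The feature names, synthetic data, and the random-forest choice below are stand-ins; the abstract does not specify the paper's regression family.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical per-trial gaze features (e.g., fixation duration, saccade
# rate, gaze dispersion) and a noticeability score in [0, 1].
X = rng.random((200, 3))
y = 0.6 * X[:, 0] + 0.3 * X[:, 2] + 0.05 * rng.standard_normal(200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
print("MSE:", mean_squared_error(y_te, model.predict(X_te)))
```
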
Computational Trichromacy Reconstruction: Empowering the Color-Vision Deficient to Recognize Colors Using Augmented Reality
Yuhao Zhu et al. UIST 2024.
We propose an assistive technology that helps individuals with Color Vision Deficiencies (CVD) recognize and name colors. A dichromat's color perception is a reduced two-dimensional (2D) subset of a normal trichromat's three-dimensional (3D) color perception, leading to confusion when visual stimuli that appear identical to the dichromat are referred to by different color names. Using our proposed system, CVD individuals can interactively induce distinct perceptual changes to originally confusing colors via a computational color space transformation. By combining their original 2D percepts with the discriminative changes, a 3D color space is reconstructed, where the dichromat can learn to resolve color name confusions and accurately recognize colors. Our system is implemented as an Augmented Reality (AR) interface on smartphones, where users interactively control the rotation through swipe gestures and observe the induced color shifts in the camera view or in a displayed image. Through psychophysical experiments and a longitudinal user study, we demonstrate that such rotational color shifts have discriminative power (initially confusing colors become distinct under rotation) and exhibit structured perceptual shifts that dichromats can learn with modest training. The AR app was also evaluated in two real-world scenarios (building with LEGO blocks and interpreting artistic works); all users reported positive experiences using the app to recognize object colors that they otherwise could not.
Topics: AR Navigation & Context Awareness; Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille).

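The "rotational color shift" can be illustrated as a rigid rotation of color coordinates about the gray axis; this is only a plausible stand-in, since the abstract does not specify the exact transformation, and the example colors are invented.

```python
import numpy as np

def rotate_colors(rgb: np.ndarray, angle_deg: float) -> np.ndarray:
    """Rotate colors about the gray (1,1,1) axis via Rodrigues' formula.
    A stand-in for the paper's color-space transformation."""
    t = np.radians(angle_deg)
    k = np.ones(3) / np.sqrt(3)
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    R = np.eye(3) + np.sin(t) * K + (1 - np.cos(t)) * (K @ K)
    return np.clip(rgb @ R.T, 0.0, 1.0)  # keep results inside the RGB cube

# Two colors a dichromat might confuse diverge under rotation:
pair = np.array([[0.80, 0.30, 0.30],
                 [0.30, 0.55, 0.30]])
print(rotate_colors(pair, 40.0))
```
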
Auptimize: Optimal Placement of Spatial Audio Cues for Extended Reality
Hyunsung Cho et al. UIST 2024.
Spatial audio in Extended Reality (XR) provides users with better awareness of where virtual elements are placed, and efficiently guides them to events such as notifications, system alerts from different windows, or approaching avatars. Humans, however, are inaccurate in localizing sound cues, especially with multiple sources, due to limitations in human auditory perception such as angular discrimination error and front-back confusion. This decreases the efficiency of XR interfaces because users misidentify from which XR element a sound is coming. To address this, we propose Auptimize, a novel computational approach for placing XR sound sources, which mitigates such localization errors by utilizing the ventriloquist effect. Auptimize disentangles the sound source locations from the visual elements and relocates the sound sources to optimal positions for unambiguous identification of sound cues, avoiding errors due to inter-source proximity and front-back confusion. Our evaluation shows that Auptimize decreases spatial audio-based source identification errors compared to playing sound cues at the paired visual-sound locations. We demonstrate the applicability of Auptimize for diverse spatial audio-based interactive XR scenarios.
Topics: Social & Collaborative VR; Immersion & Presence Research.

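The placement problem can be posed as minimizing a localization-confusion cost over candidate azimuths. The cost below, penalizing inter-source proximity and front-back mirror pairs, is a simplified stand-in for Auptimize's actual objective, which the abstract does not give.

```python
import itertools

def ang_dist(a: float, b: float) -> float:
    """Smallest angle between two azimuths, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def cost(azimuths: tuple) -> float:
    # Penalize pairs that are close together or front-back mirrors:
    # with 0 deg = straight ahead, a source at theta is easily
    # confused with one near 180 - theta.
    c = 0.0
    for a, b in itertools.combinations(azimuths, 2):
        c += 1.0 / (1.0 + ang_dist(a, b))        # inter-source proximity
        c += 1.0 / (1.0 + ang_dist(a, 180 - b))  # front-back confusion
    return c

# Exhaustive search over a coarse grid of candidate positions for three cues:
grid = range(0, 360, 15)
best = min(itertools.combinations(grid, 3), key=cost)
print("best azimuths:", best)
```
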
Memory Reviver: Supporting Photo-Collection Reminiscence for People with Visual Impairment via a Proactive Chatbot
Ke Xu et al. UIST 2024.
Reminiscing with photo collections offers significant psychological benefits but poses challenges for people with visual impairment (PVI). Their current reliance on sighted help restricts the flexibility of this activity. In response, we explored using a chatbot in a preliminary study and identified two primary challenges that hinder effective reminiscence with a chatbot: the scattering of information and a lack of proactive guidance. To address these limitations, we present Memory Reviver, a proactive chatbot that helps PVI reminisce with a photo collection through natural language communication. Memory Reviver incorporates two novel features: (1) a Memory Tree, which uses a hierarchical structure to organize the information in a photo collection; and (2) a Proactive Strategy, which actively delivers information to users at appropriate conversation rounds. Evaluation with twelve PVI demonstrated that Memory Reviver effectively facilitated engaging reminiscence, enhanced understanding of photo collections, and delivered natural conversational experiences. Based on our findings, we distill implications for supporting photo reminiscence and designing chatbots for PVI.
Topics: Conversational Chatbots; Cognitive Impairment & Neurodiversity (Autism, ADHD, Dyslexia).

PromptCharm: Text-to-Image Generation through Multi-modal Prompting and Refinement
Zhijie Wang et al. CHI 2024. University of Alberta.
Recent advancements in Generative AI have significantly pushed forward the field of text-to-image generation. The state-of-the-art text-to-image model, Stable Diffusion, is now capable of synthesizing high-quality images with a strong sense of aesthetics. Crafting text prompts that align with the model's interpretation and the user's intent thus becomes crucial. However, prompting remains challenging for novice users due to the complexity of the Stable Diffusion model and the non-trivial effort required to iteratively edit and refine text prompts. To address these challenges, we propose PromptCharm, a mixed-initiative system that facilitates text-to-image creation through multi-modal prompt engineering and refinement. To assist novice users in prompting, PromptCharm first automatically refines and optimizes the user's initial prompt. Furthermore, PromptCharm supports the user in exploring and selecting different image styles within a large database. To assist users in effectively refining their prompts and images, PromptCharm renders model explanations by visualizing the model's attention values. If the user notices any unsatisfactory areas in the generated images, they can further refine the images through model attention adjustment or image inpainting within PromptCharm's rich feedback loop. To evaluate the effectiveness and usability of PromptCharm, we conducted a controlled user study with 12 participants and an exploratory user study with another 12 participants. These two studies show that participants using PromptCharm were able to create images of higher quality, better aligned with their expectations, than with either of two variants of PromptCharm that lacked interaction or visualization support.
Topics: Generative AI (Text, Image, Music, Video); Explainable AI (XAI); AI-Assisted Creative Writing.

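The attention-visualization idea can be sketched as a heatmap overlaid on the generated image. The image, the attention map, and the token below are random/hypothetical stand-ins; PromptCharm's actual extraction from Stable Diffusion's attention layers is not reproduced here.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-ins: a generated image and a coarse per-token attention map.
image = np.random.rand(64, 64, 3)
attention = np.random.rand(8, 8)
attention = attention / attention.max()   # normalize to [0, 1]

plt.imshow(image)
plt.imshow(attention, cmap="inferno", alpha=0.5,
           extent=(0, 64, 64, 0))         # stretch the 8x8 map over the image
plt.title('Attention for token "castle" (hypothetical)')
plt.axis("off")
plt.show()
```
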
"It's a Fair Game", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational AgentsThe widespread use of Large Language Model (LLM)-based conversational agents (CAs), especially in high-stakes domains, raises many privacy concerns. Building ethical LLM-based CAs that respect user privacy requires an in-depth understanding of the privacy risks that concern users the most. However, existing research, primarily model-centered, does not provide insight into users' perspectives. To bridge this gap, we analyzed sensitive disclosures in real-world ChatGPT conversations and conducted semi-structured interviews with 19 LLM-based CA users. We found that users are constantly faced with trade-offs between privacy, utility, and convenience when using LLM-based CAs. However, users' erroneous mental models and the dark patterns in system design limited their awareness and comprehension of the privacy risks. Additionally, the human-like interactions encouraged more sensitive disclosures, which complicated users' ability to navigate the trade-offs. We discuss practical design guidelines and the needs for paradigm shifts to protect the privacy of LLM-based CA users.2024ZZZheng Zhang et al.Khoury College of Computer SciencesAgent Personality & AnthropomorphismHuman-LLM CollaborationAI Ethics, Fairness & AccountabilityCHI
Teachers, Parents, and Students' Perspectives on Integrating Generative AI into Elementary Literacy Education
Ariel Han et al. CHI 2024. UC Irvine.
The viral launch of new generative AI (GAI) systems, such as ChatGPT and text-to-image (TTI) generators, sparked questions about how they can be effectively incorporated into writing education. However, it is still unclear how teachers, parents, and students perceive and use GAI systems in elementary school settings. We conducted a workshop with twelve families (parent-child dyads) with children ages 8-12 and interviewed sixteen teachers to understand each stakeholder's perspectives and opinions on GAI systems for learning and teaching writing. We found that GAI systems could be beneficial for generating adaptable teaching materials for teachers, enhancing ideation, and providing students with personalized, timely feedback. However, there are concerns over authorship, students' agency in learning, and uncertainty concerning bias and misinformation. In this article, we discuss design strategies to mitigate these constraints by implementing an adult-oversight system, balancing AI-role allocation, and facilitating customization to enhance students' agency over writing projects.
Topics: Generative AI (Text, Image, Music, Video); K-12 Digital Education Tools; Online Learning & MOOC Platforms.

Getting on the Right Foot: Using Observational and Quantitative Methods to Evaluate Movement Disorders
James Spann et al. IUI 2024.
Currently, doctors rely on tools such as the Unified Parkinson's Disease Rating Scale (MDS-UPDRS) and the Scale for the Assessment and Rating of Ataxia (SARA) to diagnose movement disorders based on clinical observations of a patient's motor movement. Observation-based assessments, however, are inherently subjective and can differ from observer to observer. Moreover, different movement disorders show overlapping symptoms, making it challenging for neurologists to reach a correct diagnosis by observation alone. In this work, we create an intelligent interface that highlights, for observing doctors, movements and gestures indicative of a movement disorder. First, we analyzed the walking patterns of 43 participants with Parkinson's Disease (PD), 60 participants with ataxia, and 52 participants with no movement disorder to find 10 metrics that can be used to distinguish PD from ataxia. Next, we designed an interface that provides context for the gestures relevant to a movement disorder diagnosis. Finally, we surveyed two neurologists (one specializing in PD, the other in ataxia) on how useful this interface is for making a diagnosis. Our results not only showcase additional metrics that can be used to evaluate movement disorders quantitatively but also outline steps to take when designing interfaces for these kinds of diagnostic tasks.
Topics: Human Pose & Activity Recognition; Motor Impairment Assistive Input Technologies; Fitness Tracking & Physical Activity Monitoring.

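Two of the kinds of quantitative gait metrics such an interface might surface, step-time variability and step width (ataxic gait tends toward wider, more variable steps than parkinsonian gait), can be computed as below. The input values are invented, and these are not necessarily among the paper's ten metrics.

```python
import numpy as np

def gait_metrics(strike_times_s: np.ndarray, step_widths_m: np.ndarray) -> dict:
    """Hypothetical inputs: heel-strike timestamps and per-step widths
    derived from motion tracking."""
    step_times = np.diff(strike_times_s)
    return {
        # coefficient of variation of step time: temporal irregularity
        "step_time_cv": float(step_times.std(ddof=1) / step_times.mean()),
        # mean step width: base of support
        "mean_step_width_m": float(step_widths_m.mean()),
    }

print(gait_metrics(np.array([0.0, 0.6, 1.1, 1.8, 2.3]),
                   np.array([0.18, 0.22, 0.25, 0.20])))
```
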
Auto-Gait: Automatic Ataxia Risk Assessment with Computer Vision from Gait Task Videos
Wasifur Rahman et al. UbiComp 2023.
Many patients with neurological disorders, such as ataxia, do not have easy access to neurologists, especially those living in remote localities and in developing or underdeveloped countries. Ataxia is a degenerative disease of the nervous system that surfaces as difficulty with motor control, such as walking imbalance. Previous studies have attempted automatic diagnosis of ataxia with the help of wearable biomarkers, Kinect, and other sensors. These sensors, while accurate, do not scale well to naturalistic deployment settings. In this study, we propose a method for identifying ataxic symptoms by analyzing videos of participants walking down a hallway, captured with a standard monocular camera. In collaboration with 11 medical sites located in 8 different states across the United States, we collected a dataset of 155 videos, along with severity ratings, from 89 participants (24 controls and 65 diagnosed with, or pre-manifest for, spinocerebellar ataxia). The participants performed the gait task of the Scale for the Assessment and Rating of Ataxia (SARA). We develop a computer vision pipeline to detect, track, and separate the participants from their surroundings, and construct several features from their body-pose coordinates to capture gait characteristics such as step width, step length, swing, stability, and speed. Our system can identify and track a patient in complex scenarios, for example, when multiple people are present in the video or a passerby interrupts. Our ataxia risk-prediction model achieves 83.06% accuracy and an 80.23% F1 score, and our ataxia severity-assessment model achieves a mean absolute error (MAE) of 0.6225 and a Pearson's correlation coefficient of 0.7268. Our model performed competitively when evaluated on data from medical sites not used during training. Through feature importance analysis, we found that our models associate wider steps, decreased walking speed, and increased instability with greater ataxia severity, which is consistent with previously established clinical knowledge. Furthermore, we are releasing the models and the body-pose coordinate dataset to the research community; to our knowledge, this is the largest dataset on ataxic gait. Our models could contribute to improving health access by enabling remote ataxia assessment in non-clinical settings without requiring any sensors or special cameras. Our dataset will help the computer science community analyze different characteristics of ataxia and develop better algorithms for diagnosing other movement disorders.
https://dl.acm.org/doi/10.1145/3580845
Topics: Human Pose & Activity Recognition; Telemedicine & Remote Patient Monitoring.

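A toy version of the feature-construction step: deriving step width and length statistics from per-frame ankle keypoints. The axis conventions (x lateral, y along the walking direction) and the feature set are assumptions for illustration, not Auto-Gait's exact features.

```python
import numpy as np

def step_stats(left_ankle: np.ndarray, right_ankle: np.ndarray) -> dict:
    """Per-frame 2D ankle coordinates (N x 2) -> simple gait features."""
    lateral = np.abs(left_ankle[:, 0] - right_ankle[:, 0])   # side-to-side
    forward = np.abs(left_ankle[:, 1] - right_ankle[:, 1])   # along the walk
    return {
        "mean_step_width": float(lateral.mean()),
        "max_step_length": float(forward.max()),
        "width_variability": float(lateral.std(ddof=1)),     # instability proxy
    }

# Random stand-in for pose-tracker output over 50 frames:
rng = np.random.default_rng(0)
print(step_stats(rng.random((50, 2)), rng.random((50, 2))))
```
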
NeuralGait: Assessing Brain Health Using Your Smartphone
Huining Li et al. UbiComp 2023.
Brain health has attracted increasing attention as the population ages. Smartphone-based gait sensing and analysis can help identify risks of brain disease in daily life for prevention. Existing gait analysis approaches mainly hand-craft temporal gait features or develop CNN-based feature extractors, but they are either prone to losing inconspicuous pathological information or dedicated to screening for a single brain disease. We discover that the relationships between gait segments can serve as a principled and generic indicator for quantifying multiple pathological patterns. In this paper, we propose NeuralGait, a pervasive smartphone-cloud system that passively captures and analyzes gait-segment relationships for brain health assessment. On the smartphone end, inertial gait data are collected while the smartphone sits in the user's pants pocket. We then craft local temporal-frequency gait domain features and develop a self-attention-based gait-segment relationship encoder. Afterward, the domain features and relation features are fed to a scalable RiskNet in the cloud for brain health assessment. We also design a pathological hot-update protocol to efficiently add new brain diseases to the RiskNet. NeuralGait is practical, as it provides brain health assessment with no added burden in daily life. In our experiments, we recruited 988 healthy people and 417 patients with one or a combination of PD, TBI, and stroke, and evaluated brain health assessment using a set of specifically designed metrics, including global accuracy, exact accuracy, sensitivity, and false alarm rate. We also demonstrate the generalization (e.g., analysis of feature effectiveness and model efficiency) and inclusiveness of NeuralGait.
https://dl.acm.org/doi/10.1145/3569476
Topics: Fitness Tracking & Physical Activity Monitoring; Biosensors & Physiological Monitoring.

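The self-attention-based segment-relationship encoder can be sketched in a few lines of PyTorch. The dimensions, mean-pooling, and four-way output head are assumptions; this mirrors the idea of relating gait segments via attention, not NeuralGait's actual architecture.

```python
import torch
import torch.nn as nn

class SegmentRelationEncoder(nn.Module):
    """Self-attention over a sequence of gait-segment embeddings."""
    def __init__(self, dim: int = 32, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 4)  # e.g., healthy / PD / TBI / stroke risk

    def forward(self, segments: torch.Tensor) -> torch.Tensor:
        # segments: (batch, n_segments, dim) per-segment feature embeddings.
        # Each segment attends to every other, encoding their relationships.
        related, _ = self.attn(segments, segments, segments)
        return self.head(related.mean(dim=1))  # pool relations, then classify

logits = SegmentRelationEncoder()(torch.randn(2, 10, 32))
print(logits.shape)  # torch.Size([2, 4])
```
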
PEANUT: A Human-AI Collaborative Tool for Annotating Audio-Visual Data
Zheng Zhang et al. UIST 2023.
Audio-visual learning seeks to enhance computers' multi-modal perception by leveraging the correlation between the auditory and visual modalities. Despite many useful downstream tasks, such as video retrieval, AR/VR, and accessibility, the performance and adoption of existing audio-visual models have been impeded by the scarcity of high-quality datasets. Annotating audio-visual datasets is laborious, expensive, and time-consuming. To address this challenge, we designed and developed an efficient audio-visual annotation tool called Peanut. Peanut's human-AI collaborative pipeline separates the multi-modal task into two single-modal tasks and utilizes state-of-the-art object-detection and sound-tagging models to reduce annotators' effort on each frame and the number of manually annotated frames needed. A within-subject user study with 20 participants found that Peanut can significantly accelerate the audio-visual data annotation process while maintaining high annotation accuracy.
Topics: Conversational Chatbots; Human-LLM Collaboration; Recommender System UX.

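The decomposition into two single-modal tasks can be illustrated with a toy pairing step: match per-frame object detections with per-window sound tags, then queue the matches for human verification. The detector and tagger outputs below are hard-coded stand-ins for real model calls, and the label map is an assumption.

```python
# Stand-ins for single-modal model outputs:
visual = {"frame_12": ["dog", "person"], "frame_13": ["dog"]}
audio = {"0.0-1.0s": ["bark"], "1.0-2.0s": ["speech"]}
sound_to_object = {"bark": "dog", "speech": "person"}  # assumed label map

proposals = []
for window, tags in audio.items():
    for tag in tags:
        obj = sound_to_object.get(tag)
        frames = [f for f, objs in visual.items() if obj in objs]
        if frames:
            proposals.append((window, tag, obj, frames))

for p in proposals:
    print("verify:", p)  # the annotator confirms or corrects each proposal
```
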
A Minimalistic Approach to Predict and Understand the Relation of App Usage with Students' Academic Performance
Md Sabbir Ahmed et al. MobileHCI 2023.
Because they rely on self-reported data, which may be biased, existing studies may not reveal the true relation between academic grades and app categories such as Video. Additionally, existing systems require data collected over prolonged periods to predict grades, which may preclude early intervention. We therefore present an app that retrieves the past 7 days' actual app usage data within a second (Mean=0.31s, SD=1.1s). Our analysis of real-time data from 124 Bangladeshi students demonstrates that the number of app usage sessions has a significant (p<0.05) negative association with CGPA. The Productivity and Books categories have significant positive associations, whereas Video has a significant negative association. Moreover, high and low CGPA holders show significantly different app usage behavior. Leveraging only this instantly accessed data, our machine learning model predicts CGPA within ±0.36 of the actual CGPA. We discuss design implications that could help students improve their grades.
Topics: Online Learning & MOOC Platforms; Intelligent Tutoring Systems & Learning Analytics.

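The prediction step is a standard regression from app-usage features to CGPA, evaluated by mean absolute error. The features, coefficients, and synthetic data below only mirror the direction of the reported associations (Productivity and Books positive, Video and session count negative); they are not the study's data or model.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
# Hypothetical per-student features: weekly minutes in Productivity, Books,
# and Video apps, plus number of usage sessions.
X = rng.random((124, 4)) * np.array([600, 200, 900, 300])
cgpa = np.clip(3.0 + 0.001 * X[:, 0] + 0.002 * X[:, 1]
               - 0.0006 * X[:, 2] - 0.001 * X[:, 3]
               + 0.2 * rng.standard_normal(124), 2.0, 4.0)

pred = cross_val_predict(Ridge(), X, cgpa, cv=5)
print("MAE:", mean_absolute_error(cgpa, pred))
```
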
MeetingCoach: An Intelligent Dashboard for Supporting Effective & Inclusive Meetings
Samiha Samrose et al. CHI 2021. University of Rochester.
Video-conferencing is essential to many companies, but its limitations in conveying social cues can lead to ineffective meetings. We present MeetingCoach, an intelligent post-meeting feedback dashboard that summarizes contextual and behavioral meeting information. Through an exploratory survey (N=120), we identified important signals (e.g., turn-taking, sentiment) and used these insights to create a wireframe dashboard. The design was evaluated in situ with participants (N=16), who helped identify the components they would prefer in a post-meeting dashboard. After recording the video-conferencing meetings of eight teams over four weeks, we developed an AI system to quantify the meeting features and created personalized dashboards for each participant. Through interviews and surveys (N=23), we found that reviewing the dashboard improved attendees' awareness of meeting dynamics, with implications for improved effectiveness and inclusivity. Based on our findings, we provide suggestions for the design of future feedback systems for video-conferencing meetings.
Topics: Remote Work Tools & Experience; Notification & Interruption Management.

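One of the behavioral signals such a dashboard summarizes, turn-taking balance, can be computed directly from diarized speaker turns. The turns below are invented; MeetingCoach's actual feature extraction is richer than this.

```python
from collections import defaultdict

# Hypothetical diarized turns for one meeting: (speaker, start_s, end_s).
turns = [("Ana", 0, 42), ("Ben", 42, 55), ("Ana", 55, 120), ("Chloe", 120, 128)]

share = defaultdict(float)
for speaker, start, end in turns:
    share[speaker] += end - start
total = sum(share.values())

# A large spread in shares flags an imbalanced, potentially non-inclusive meeting.
for speaker, secs in sorted(share.items(), key=lambda kv: -kv[1]):
    print(f"{speaker}: {100 * secs / total:.0f}% of speaking time")
```
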