AI as a Bridge Across Ages: Exploring the Opportunities of Artificial Intelligence in Supporting Inter-Generational Communication in Virtual Reality (CSCW 2025)
Authors: Qiuxin Du et al.
Inter-generational communication is essential for bridging generational gaps and fostering mutual understanding. However, maintaining it is complex due to cultural, communicative, and geographical differences. Recent research indicates that while Virtual Reality (VR) creates a relaxed atmosphere and promotes companionship, it inadequately addresses the complexities of inter-generational dialogue, including variations in values and relational dynamics. To address this gap, we explored the opportunities of Artificial Intelligence (AI) in supporting inter-generational communication in VR. We developed three technology probes (i.e., Content Generator, Communication Facilitator, and Info Assistant) in VR and employed them in a probe-based participatory design study with twelve inter-generational pairs. Our results show that AI-powered VR facilitates inter-generational communication by enhancing mutual understanding, fostering conversation fluency, and promoting active participation. We also introduce several chall…
Topics: Caring at a Distance
BiaSeer: A Visual Analytics System for Identifying and Understanding Media Bias (CSCW 2025)
Authors: Guozheng Li et al.
Media bias refers to bias in news reporting and coverage, and it exists pervasively. By identifying media bias, social scientists can understand the different perspectives held by media outlets in news reporting. Existing studies focus only on the media bias of isolated incidents but neglect its sustained characteristics; thus, they cannot provide a comprehensive understanding of specific news topics. We developed BiaSeer, a visual analytics system for identifying and understanding the sustained bias of media outlets. BiaSeer employs an overview-to-detail approach for interactive identification of media bias. The overview assists users in determining the analysis scope of media outlets, and it further visualizes the variance in coverage patterns among selected media outlets using a matrix visualization to facilitate the identification of biased news articles. BiaSeer visualizes sustained bias in the context of event evolution: it first summarizes news articles into events based on a keyword co-occurrence graph and then connects events into a narrative structure using a path-aware story tree construction method. In addition, BiaSeer integrates a sustained bias computation algorithm and enables analysts to compare the narrative structures of different media outlets using a juxtaposition-based visualization approach. We conducted a user experiment to validate the effectiveness of BiaSeer in assisting social scientists in understanding news topics and the usability of its visualization designs. To further examine its effectiveness, we conducted a case study with social scientists on the topic of the Russia-Ukraine conflict. The results demonstrate the utility and usability of BiaSeer in efficiently analyzing media bias and attaining a well-rounded comprehension of news topics.
Topics: Data Visualization
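As a rough illustration of the event-summarization step mentioned in this abstract, a keyword co-occurrence graph can group articles whose keywords are transitively linked into candidate events. This is a generic sketch under simple assumptions (every keyword pair in an article forms an edge; connected keywords imply the same event), not BiaSeer's actual implementation, and all keywords below are invented:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_graph(articles, min_count=1):
    """Build a keyword co-occurrence graph: nodes are keywords, and an
    edge links two keywords that appear together in the same article."""
    edges = defaultdict(int)
    for keywords in articles:
        for a, b in combinations(sorted(set(keywords)), 2):
            edges[(a, b)] += 1
    return {pair: n for pair, n in edges.items() if n >= min_count}

def group_events(articles, min_count=1):
    """Cluster article indices into candidate 'events': articles whose
    keywords are connected in the co-occurrence graph end up together."""
    graph = cooccurrence_graph(articles, min_count)
    parent = {}  # union-find over keywords

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in graph:
        parent[find(a)] = find(b)

    events = defaultdict(list)
    for i, keywords in enumerate(articles):
        # All keywords of one article are mutually linked, so any
        # keyword's root identifies the article's component.
        events[find(sorted(set(keywords))[0])].append(i)
    return list(events.values())
```

For example, articles tagged {russia, ukraine} and {ukraine, sanctions} land in one event, while {weather, rain} forms its own.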
InReAcTable: LLM-powered Interactive Visual Data Story Construction from Tabular Data (UIST 2025)
Authors: Gerile Aodeng et al.
Insights in tabular data capture valuable patterns that help analysts understand critical information. Organizing related insights into visual data stories is crucial for in-depth analysis. However, constructing such stories is challenging because of the complexity of the inherent relations between extracted insights. Users face difficulty sifting through a vast number of discrete insights to integrate specific ones into a unified narrative that meets their analytical goals. Existing methods either rely heavily on user expertise, making the process inefficient, or employ automated approaches that cannot fully capture users' evolving goals. In this paper, we introduce InReAcTable, a framework that enhances visual data story construction by establishing both structural and semantic connections between data insights. Each user interaction triggers the Acting module, which uses an insight graph for structural filtering to narrow the search space, followed by the Reasoning module, which applies retrieval-augmented generation with large language models for semantic filtering, ultimately providing insight recommendations aligned with the user's analytical intent. Based on the InReAcTable framework, we developed an interactive prototype system that guides users in constructing visual data stories aligned with their analytical requirements. We conducted a case study and a user experiment to demonstrate the utility and effectiveness of the InReAcTable framework and the prototype system for interactively building visual data stories.
Topics: Human-LLM Collaboration; Interactive Data Visualization; Data Storytelling
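The two-stage filtering idea in this abstract (structural narrowing on an insight graph, then semantic ranking) can be sketched in miniature. Here simple token overlap stands in for the paper's LLM-based retrieval-augmented reasoning, and the insight names and graph are invented for illustration:

```python
def structural_filter(insight_graph, current_insight):
    """Structural filtering: narrow the search space to insights
    directly connected to the one the user just interacted with."""
    return insight_graph.get(current_insight, [])

def semantic_rank(candidates, descriptions, analytical_goal, top_k=3):
    """Stand-in for LLM-based semantic filtering: rank candidate
    insights by token overlap with the user's analytical goal."""
    goal_tokens = set(analytical_goal.lower().split())
    scored = sorted(
        candidates,
        key=lambda c: len(goal_tokens & set(descriptions[c].lower().split())),
        reverse=True,
    )
    return scored[:top_k]
```

A recommendation step would chain the two: structurally filter neighbors of the current insight, then semantically rank them against the user's stated goal.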
A Card-based Co-Design Toolkit for Exploring Smart Material Applications with Multiple Stakeholders: A Case Study on Automotive Interior Design (DIS 2025)
Authors: Tianyu Yu et al.
Smart materials have garnered significant attention in both academia and industry, yet identifying pragmatically impactful applications still requires contributions from multiple stakeholders, including researchers, designers, and industry professionals. Although previous research has explored novel technical approaches and user-centered applications of smart materials, this study focuses on how to stimulate effective dialogue among stakeholders to explore impactful smart material applications. To this end, we provide a card-based co-design toolkit, tailored to automotive interior design as a case study. We conducted a multi-stakeholder co-design workshop to examine the toolkit's performance and reveal the benefits of employing it in the co-design process. The workshop resulted in 16 concept designs. Qualitative interviews further revealed that the co-design process with our toolkit effectively fostered mutual understanding among stakeholders, enhanced both creativity and depth throughout the design process, and provided practical insights for each stakeholder's future work.
Topics: Shape-Changing Interfaces & Soft Robotic Materials
Seeking Inspiration through Human-LLM Interaction (CHI 2025)
Authors: Xinrui Lin et al. (Beijing Institute of Technology; University of Edinburgh)
Large language model (LLM) systems have been shown to stimulate creative thinking among creators, yet empirical research on whether users can seek inspiration in their everyday lives through these technologies is lacking. This paper explores which attributes of LLMs influence inspiration-seeking processes. Focusing on the use cases of travel, cooking, and self-care, we interviewed 20 participants as they explored scenarios from these use cases using LLMs. Thematic analysis revealed that the vast data underlying LLMs inspires users with unexpected ideas, many of which were highly personalized and motivated participants to act. Participants were also sensitive to the deficiencies of LLMs, and noted how ethical issues associated with these technologies could discourage them from putting inspirational ideas into practice. We discuss the behavioral patterns of users actively seeking inspiration via LLMs, and provide design opportunities for LLMs that make the inspiration-seeking process more human-centric.
Topics: Generative AI (Text, Image, Music, Video); Human-LLM Collaboration; AI Ethics, Fairness & Accountability
DistKey: Incorporating Physical Activities into Daily Workflow through Spatially Distributed Hotkeys (CHI 2025)
Authors: Dongjun Han et al. (Beijing Institute of Technology, School of Design and Arts)
Motivating healthy behaviors is important, especially in office environments, yet few systems integrate physical engagement mechanisms into such environments. This paper presents the design and evaluation of DistKey, a set of hotkeys distributed across different spatial interfaces in the workspace, enabling users to perform office tasks through intentional body movements. Through a within-subjects experiment with 20 office workers, we compared DistKey with a traditional keyboard to assess the health benefits and effectiveness of integrating exercise into the workflow. Our results confirmed the benefits of DistKey-led healthful interactions in enhancing physical health and reducing mental stress across different work tasks. Based on our follow-up qualitative research, we discuss a range of design insights to inform the design and development of future healthful spatial interfaces for increased office vitality.
Topics: Motion Sickness & Passenger Experience; Foot & Wrist Interaction; Mental Health Apps & Online Support Communities
Effects of Information Widgets on Time Perception during Mentally Demanding Tasks (CHI 2025)
Authors: Zengrui Li et al. (Beijing Institute of Technology, School of Design and Arts)
This article examines how different time and task management information widgets affect time perception across modalities. In mentally demanding office environments, effective countdown representations are crucial for enhancing temporal awareness and productivity. We developed TickSens, a set of information widgets with different modalities, and conducted a within-subjects experiment with 30 participants to evaluate five time-perception modes: visual, auditory, and haptic, as well as the blank and timer baseline modes. Our assessment focused on technology acceptance, cognitive performance, and emotional responses. Results indicated that, compared to the blank and timer modes, the use of modalities significantly improved cognitive performance and positive emotional responses, and was better received by participants. The visual mode yielded the best task performance, while auditory feedback was effective in boosting focus and the haptic mode significantly enhanced user acceptance. The study revealed varied user preferences that inform the integration of these widgets into office settings.
Topics: Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS); Voice User Interface (VUI) Design; Notification & Interruption Management
Hicclip: Sonification of Augmented Eating Sounds to Intervene Snacking Behaviors (DIS 2024)
Authors: Xinyue Liu et al.
In this paper, we present a field study on using sonification of augmented eating sounds to intervene in snacking behaviors in daily routines. The sonic feedback is achieved through a snack-storing device named Hicclip, which verifies snacking behaviors and produces augmented eating sounds. The study was conducted with nine participants who were commonly addicted to snacking. The effectiveness of the sonification was examined by comparing snack-related data and questionnaire responses over three study weeks: a baseline week, a Hicclip intervention week, and a post-intervention week. We also analyzed interview results to understand user experiences and opportunities for future research. Quantitative results showed that snacking patterns improved, with reduced eating duration and snack consumption. Qualitative results suggested that Hicclip may benefit self-regulation, afford easy adoption, and support data acquisition. We discuss design implications for the embodiment of augmented eating sounds for healthy snacking.
Topics: Mental Health Apps & Online Support Communities; Diet Tracking & Nutrition Management
LightSword: A Customized Virtual Reality Exergame for Long-Term Cognitive Inhibition Training in Older Adults (CHI 2024)
Authors: Qiuxin Du et al. (Beijing Institute of Technology)
The decline of cognitive inhibition significantly impacts older adults' quality of life and well-being, making it a vital public health problem in today's aging society. Previous research has demonstrated that virtual reality (VR) exergames have great potential to enhance cognitive inhibition among older adults. However, existing commercial VR exergames are unsuitable for older adults' long-term cognitive training due to inappropriate cognitive activation paradigms, unnecessary complexity, and unbefitting difficulty levels. To bridge these gaps, we developed a customized VR cognitive training exergame (LightSword) based on the Dual-task and Stroop paradigms for long-term cognitive inhibition training among healthy older adults. Subsequently, we conducted an eight-month longitudinal user study with 12 older adults aged 60 years and above to demonstrate the effectiveness of LightSword in improving cognitive inhibition. After the training, the cognitive inhibition abilities of older adults were significantly enhanced, with benefits persisting for 6 months. This result indicates that LightSword has both short-term and long-term effects in enhancing cognitive inhibition. Furthermore, qualitative feedback revealed that older adults exhibited a positive attitude toward long-term training with LightSword, which enhanced their motivation and compliance.
Topics: VR Medical Training & Rehabilitation; Aging-Friendly Technology Design; Serious & Functional Games
IntelliTex: Fabricating Low-cost and Washable Functional Textiles using a Double-coating Process (CHI 2024)
Authors: Yuecheng Peng et al. (Zhejiang University)
We present IntelliTex, a low-cost and highly accessible double-coating fabrication method for washable and reusable functional textiles with customized input functionalities. Specifically, off-the-shelf textiles are first coated with conductive carbon black using pen ink, which endows them with rich sensing capabilities such as pressure, stretch, slide, and temperature. Second, the textiles are coated with polyurethane to enhance sensing stability over wash cycles for good reusability. To support user customization, we enrich the design space of double-coating by exploring various coating methods and diverse textiles to be coated. We further contribute a comprehensive library of input components and an online document to make our approach accessible to novice users. Finally, five application examples and a user study showcase the versatile functionalities and user accessibility of our method, with which we hope to support designers, makers, and researchers in easily creating functional textiles ready to use in everyday life.
Topics: Electronic Textiles (E-textiles); Customizable & Personalized Objects
Grand Challenges in SportsHCI (CHI 2024)
Authors: Don Samitha Elvitigala et al. (Monash University)
The field of Sports Human-Computer Interaction (SportsHCI) investigates interaction design to support a physically active human being. Despite growing interest in and dissemination of SportsHCI literature over the past years, many publications still focus on solving specific problems in a given sport. We believe in the benefit of generating fundamental knowledge for SportsHCI more broadly to advance the field as a whole. To achieve this, we aim to identify the grand challenges in SportsHCI, which can help researchers and practitioners develop a future research agenda. Hence, this paper presents a set of grand challenges identified in a five-day workshop with 22 experts who have previously researched, designed, and deployed SportsHCI systems. Addressing these challenges will drive transformative advancements in SportsHCI, fostering better athlete performance, athlete-coach relationships, and spectator engagement, as well as immersive experiences for recreational sports and exercise motivation, and ultimately improve human well-being.
Topics: Game UX & Player Behavior; Serious & Functional Games; Mental Health Apps & Online Support Communities
I Know Your Intent: Graph-enhanced Intent-aware User Device Interaction Prediction via Contrastive Learning (UbiComp 2023)
Authors: Jingyu Xiao et al.
With the booming smart home market, intelligent Internet of Things (IoT) devices have become increasingly involved in home life. To improve the user experience of smart homes, prior works have explored how to use machine learning to predict interactions between users and devices. However, existing solutions have inferior User Device Interaction (UDI) prediction accuracy because they ignore three key factors: the routine, intent, and multi-level periodicity of human behaviors. In this paper, we present SmartUDI, a novel, accurate UDI prediction approach for smart homes. First, we propose a Message-Passing-based Routine Extraction (MPRE) algorithm to mine routine behaviors; a contrastive loss is then applied to pull together representations of behaviors from the same routines and push apart representations of behaviors from different routines. Second, we propose an Intent-aware Capsule Graph Attention Network (ICGAT) to encode users' multiple intents while considering complex transitions between different behaviors. Third, we design a Cluster-based Historical Attention Mechanism (CHAM) to capture multi-level periodicity by aggregating the current sequence and the semantically nearest historical sequence representations through an attention mechanism. SmartUDI can be seamlessly deployed on the cloud infrastructures of IoT device vendors and on edge nodes, enabling the delivery of personalized device service recommendations to users. Comprehensive experiments on four real-world datasets show that SmartUDI consistently outperforms state-of-the-art baselines with more accurate and highly interpretable results.
DOI: https://doi.org/10.1145/3610906
Topics: Context-Aware Computing; Smart Home Interaction Design
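The contrastive objective this abstract describes (same-routine behavior representations pulled together, different-routine representations pushed apart) can be illustrated with a classic margin-based contrastive loss. This is a generic sketch of the named technique, not the paper's exact formulation, and the embeddings below are toy values:

```python
import math

def contrastive_routine_loss(z_a, z_b, same_routine, margin=1.0):
    """Margin-based contrastive loss over two behavior embeddings:
    same-routine pairs are penalized by their squared distance (pulled
    together); different-routine pairs are penalized only when closer
    than `margin` (pushed apart)."""
    d = math.dist(z_a, z_b)  # Euclidean distance between embeddings
    if same_routine:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2
```

In training, such a loss would be averaged over sampled pairs of behavior embeddings, with routine labels supplied by the routine-extraction step.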
Toward Automatic Audio Description Generation for Accessible Videos (CHI 2021)
Authors: Yujia Wang et al. (Beijing Institute of Technology)
Video accessibility is essential for people with visual impairments. Audio descriptions (ADs) describe what is happening on-screen, e.g., physical actions, facial expressions, and scene changes. Generating high-quality audio descriptions requires substantial manual effort. To address this accessibility obstacle, we built a system that analyzes the audiovisual content of a video and generates audio descriptions. The system consists of three modules: AD insertion time prediction, AD generation, and AD optimization. We evaluated the quality of our system on five types of videos by conducting qualitative studies with 20 sighted users and 12 users who were blind or visually impaired. Our findings revealed how audio description preferences varied with user type and video type. Based on our analysis, we provide recommendations for the development of future audio description generation technologies.
Topics: Conversational Chatbots; Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille)
RCEA-360VR: Real-time, Continuous Emotion Annotation in 360° VR Videos for Collecting Precise Viewport-dependent Ground Truth Labels (CHI 2021)
Authors: Tong Xue et al. (Beijing Institute of Technology; Centrum Wiskunde & Informatica (CWI))
Precise emotion ground-truth labels for 360° virtual reality (VR) video watching are essential for fine-grained predictions under varying viewing behavior. However, current annotation techniques either rely on post-stimulus discrete self-reports, or provide real-time, continuous emotion annotation (RCEA) only for desktop/mobile settings. We present RCEA for 360° VR videos (RCEA-360VR), where we evaluate in a controlled study (N=32) the usability of two peripheral visualization techniques: HaloLight and DotSize. We furthermore develop a method that considers head movements when fusing labels. Using physiological, behavioral, and subjective measures, we show that (1) both techniques neither increase users' workload or sickness nor break presence; (2) our continuous valence and arousal annotations are consistent with discrete within-VR and original stimuli ratings; and (3) users exhibit high similarity in viewing behavior, where fused ratings perfectly align with intended labels. Our work contributes usable and effective techniques for collecting fine-grained viewport-dependent emotion labels in 360° VR.
Topics: Eye Tracking & Gaze Interaction; Social & Collaborative VR; Immersion & Presence Research
Scene-Aware Behavior Synthesis for Virtual Pets in Mixed Reality (CHI 2021)
Authors: Wei Liang et al. (Beijing Institute of Technology)
Virtual pets are an alternative to real pets, providing a substitute for people with allergies or preparing people for adopting a real pet. Recent advancements in mixed reality pave the way for virtual pets to provide a more natural and seamless experience for users. However, one key challenge is embedding environmental awareness into the virtual pet (e.g., identifying the food bowl's location) so that it can behave naturally in the real world. We propose a novel approach to synthesize virtual pet behaviors by considering scene semantics, enabling a virtual pet to behave naturally in mixed reality. Given a scene captured from the real world, our approach synthesizes a sequence of pet behaviors (e.g., resting after eating) and then assigns each behavior in the sequence to a location in the real scene. We conducted user studies to evaluate our approach, which showed its efficacy in synthesizing natural virtual pet behaviors.
Topics: Mixed Reality Workspaces; Digital Art Installations & Interactive Performance