SimSpark: Interactive Simulation of Social Media Behaviors
Understanding user behaviors on social media has garnered significant scholarly attention, enhancing our comprehension of how virtual platforms impact society and empowering decision-makers. Simulating social media behaviors provides a robust tool for capturing behavioral patterns, testing hypotheses, and predicting the effects of various interventions, ultimately contributing to a deeper understanding of social media environments. Simulation can also overcome difficulties associated with using real data for analysis, such as data accessibility issues, ethical concerns, and the complexity of processing large, heterogeneous datasets. However, researchers and stakeholders lack flexible platforms for investigating different user behaviors by simulating varied scenarios and characters. This paper therefore introduces SimSpark, an interactive system combining simulation algorithms with interactive visual interfaces that can create small simulated social media platforms with customizable characters and social environments. We address three key challenges: generating believable behaviors, validating simulation results, and supporting interactive control for generation and results analysis. A simulation workflow generates believable agent behaviors using large language models; a visual interface enables real-time parameter adjustment and process monitoring for customizing generation settings; and a set of visualizations and interactions displays the model's outputs for further analysis. Effectiveness is evaluated through case studies, quantitative simulation model assessments, and expert interviews.
2025 · Ziyue Lin et al. · Data Visualization · CSCW

Double Tap for This Post: Understanding the Communication of Data Visualization on Social Media
Data visualizations are increasingly used by news outlets on social media to communicate insights to a broad audience. However, little is known about how readers interact with and respond to data visualizations in these quick-consumption environments. In this work, we introduce a conceptual model that categorizes the kinds of visualization reading that lead to the communication effect of likes on Instagram. The model was developed through a grounded theory analysis of statements explaining the reasoning behind likes of visualizations, recorded in a preliminary study. Informed by coding the statements along two dimensions, scope and design patterns, our model consists of three levels: depicting the "look" of a visualization (e.g., artistic style and color scheme); interpreting the "flesh and bones" of a visualization (e.g., visualization and narrative); and elucidating the "heart and soul" of a visualization (e.g., insights and conclusions). We also conducted an online crowdsourcing user study with 200 participants to demonstrate how our model can be applied to improve the communication of visualizations by comparing the three levels.
2025 · Yang Shi et al. · Data Visualization · CSCW

DeMod: A Holistic Tool with Explainable Detection and Personalized Modification for Toxicity Censorship
Although automated approaches and tools support toxicity censorship for social posts, most focus on detection. Toxicity censorship is a complex process in which detection is only an initial task; users may have further needs such as understanding the rationale and modifying content. We therefore conducted a needfinding study to investigate people's diverse needs in toxicity censorship and built a ChatGPT-based censorship tool named DeMod accordingly. DeMod offers explainable Detection and personalized Modification, providing fine-grained detection results, detailed explanations, and personalized modification suggestions. We implemented the tool and recruited 35 Weibo users for evaluation. The results suggest DeMod has multiple strengths, including rich functionality, accurate censorship, and ease of use. Based on the findings, we propose several insights for the design of content censorship systems.
2025 · Yaqiong Li et al. · Explainable AI (XAI) · CSCW

WePilot: Integrating Younger Family Members and Chatbot to Support Older Adults Learning Smartphone Usage
Older adults (OAs) often face challenges when using smartphones due to limited knowledge and declines in memory and information processing capabilities. Many studies in the HCI and CSCW communities have focused on supporting OAs in using smartphones independently. Compared to independent exploration, however, support from younger family members (YFMs) has specific advantages in problem understanding, solution personalization, and security protection. Yet OAs and YFMs generally have gaps in time, knowledge, and experience, which reduce the efficiency of support and harm their experience. We conducted a formative study to gather insights into OAs' and YFMs' perspectives and expectations during the support procedure. We then introduced a chatbot to mediate the gaps between OAs and YFMs and built a system named WePilot to help them collaboratively solve smartphone usage problems. Evaluations with 12 pairs of participants (an OA and the corresponding YFM) suggest WePilot's strengths in improving problem-solving efficiency and both parties' experience. Based on these findings, we propose several insights for the future design of intergenerational technical support systems.
2025 · Haonan Zhang et al. · Enhancing Older Adults' Learning and Well-Being · CSCW

KinemaFX: A Kinematic-Driven Interactive System for Particle Effect Exploration and Customization
Particle effects are widely used in games and animation to simulate natural phenomena or stylized visual effects. However, creating effect artworks is challenging for non-expert users, who lack specialized skills, particularly for finding particle effects whose kinematic behaviors match their intent. To address these issues, we present KinemaFX, a kinematic-driven interactive system that assists non-expert users in constructing customized particle effect artworks. We propose a conceptual model of particle effects that captures both semantic features and kinematic behaviors. Based on this model, KinemaFX adopts a workflow powered by large language models (LLMs) that supports intent expression through combined semantic and kinematic inputs, while enabling implicit preference-guided exploration and the subsequent creation of customized particle effect artworks from exploration results. We also developed a kinematic-driven method for efficient interactive particle effect search within KinemaFX via structured representation and measurement of particle effects. To evaluate KinemaFX, we illustrate usage scenarios and conducted a user study with an ablation design. The results demonstrate that KinemaFX effectively supports users in efficiently creating customized particle effect artworks.
2025 · Yifei Zhang et al. · Generative AI (Text, Image, Music, Video) · 3D Modeling & Animation · Computational Methods in HCI · UIST

DobbyEar: Inducing Body Illusion of Ear Deformation with Haptic Retargeting
The use of haptic and visual stimuli to create body illusions and enhance body ownership of virtual avatars in virtual reality (VR) has been extensively studied in psychology and Human-Computer Interaction (HCI). However, previous studies have relied on mechanical devices or corresponding proxies to provide haptic feedback. In this paper, we apply haptic retargeting to induce body illusions by redirecting users' hand movements, altering their perception of the shape of body parts when touched. Our technique enables more precise and complex deformations. We implemented a mapping of the ear's contour, creating illusions of different ear shapes, such as elf ears and dog ears. To determine the scope of retargeting, we conducted a user study to identify the maximum tolerable deviation angle for virtual ears. We then explored the impact of haptic retargeting on body ownership of virtual avatars.
2025 · Han Shi et al. · Southern University of Science and Technology; Fudan University · Mid-Air Haptics (Ultrasonic) · Identity & Avatars in XR · CHI

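The core body-warping idea behind haptic retargeting can be sketched in a few lines: as the real hand travels from its start position toward a physical prop, an offset toward the virtual target is blended in proportionally, so the user physically touches the prop while seeing the virtual hand reach a differently shaped or placed target. The sketch below is a generic illustration of that idea, not the paper's implementation; the function names and the linear progress ramp are assumptions.

```python
def retargeted_hand(p_hand, p_start, p_prop, v_target):
    """Return the virtual hand position for a real hand at p_hand.

    Body-warping sketch: the offset between the virtual target and the
    physical prop is blended in as the real hand progresses from
    p_start toward p_prop, so virtual and physical contact coincide.
    """
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def norm(a): return sum(x * x for x in a) ** 0.5

    total = norm(sub(p_prop, p_start))
    travelled = norm(sub(p_hand, p_start))
    # Warping ratio: 0 at the start of the reach, 1 at the prop.
    s = 0.0 if total == 0 else min(1.0, travelled / total)
    # Where the virtual target diverges from the physical prop.
    offset = sub(v_target, p_prop)
    return add(p_hand, tuple(s * o for o in offset))
```

Per frame, a renderer would draw the virtual hand at the returned position; the maximum tolerable deviation angle studied in the paper would bound how far `v_target` may diverge from the prop before the illusion breaks.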
"AI Afterlives" as Digital Legacy: Perceptions, Expectations, and ConcernsThe rise of generative AI technology has sparked interest in using digital information to create AI-generated agents as digital legacy. These agents, often referred to as "AI Afterlives", present unique challenges compared to traditional digital legacy. Yet, there is limited human-centered research on "AI Afterlife" as digital legacy, especially from the perspectives of the individuals being represented by these agents. This paper presents a qualitative study examining users' perceptions, expectations, and concerns regarding AI-generated agents as digital legacy. We identify factors shaping people's attitudes, their perceived differences compared with the traditional digital legacy, and concerns they might have in real practices. We also examine the design aspects throughout the life cycle and interaction process. Based on these findings, we situate "AI Afterlife" in digital legacy, and delve into design implications for maintaining identity consistency and balancing intrusiveness and support in "AI Afterlife" as digital legacy.2025YLYing Lei et al.Simon Fraser University, School of Interactive Arts and TechnologyGenerative AI (Text, Image, Music, Video)Online Identity & Self-PresentationCHI
Characterizing LLM-Empowered Personalized Story Reading and Interaction for Children: Insights From Multi-Stakeholders' Perspective
Personalized interaction is highly valued by parents in their story-reading activities with children. While AI-empowered story-reading tools are increasingly used, their ability to support personalized interaction with children is still limited. Recent advances in large language models (LLMs) show promise for facilitating personalized interactions, but little is known about how to effectively and appropriately use LLMs to enhance children's personalized story-reading experiences. This work explores this question through a design-based study. Drawing on a formative study, we designed and developed StoryMate, an LLM-empowered personalized interactive story-reading tool for children, and then evaluated it in an empirical study with children, parents, and education experts. Our participants valued the personalized features in StoryMate and highlighted the need to support personalized content, guiding mechanisms, reading context variations, and interactive interfaces. Based on these findings, we propose a series of design recommendations for better using LLMs to empower children's personalized story reading and interaction.
2025 · Jiaju Chen et al. · East China Normal University · Human-LLM Collaboration · Early Childhood Education Technology · Interactive Narrative & Immersive Storytelling · CHI

'Douyin is My Nourishment of the Mind': Exploring the Infrastructuralization Process of Short Video Sharing Platforms From Rural People's Perspective
Infrastructure is a common topic in rural areas around the world. While most existing research has focused on difficulties with Internet access and the fragile infrastructure of rural areas, our study contributes an empirical understanding of digital platforms as infrastructure: short video-sharing platforms (SVSPs) in rural China. Through semi-structured interviews with 26 rural users, including content creators and regular users, we elaborate on their practices, experiences, and perceptions of SVSPs. We foreground that SVSPs have reshaped rural people's daily routines and enhanced their self-worth and identity, which in turn led to deeper and more sustained engagement with these platforms. We then situate our findings within the broader context of platform-as-infrastructure, discussing how rural people's adoption and usage intertwine with the infrastructuralization process of SVSPs. We end by discussing how to make future platform-as-infrastructure more engaging and beneficial to rural populations, meeting their practical usage and well-being requirements.
2025 · Yuling Sun et al. · Fudan University · Social Platform Design & User Behavior · Online Identity & Self-Presentation · CHI

RemiHaven: Integrating "In-Town" and "Out-of-Town" Peers to Provide Personalized Reminiscence Support for Older Drifters
With increasing social mobility and an aging society, more older adults in China are migrating to new cities; they are known as "older drifters". Owing to limited social connections and the demands of cultural adaptation, they face negative emotions such as loneliness and depression. While reminiscence-based interventions have been used to improve older adults' psychological well-being, challenges such as the lack of tangible materials and limited social resources constrain the feasibility of traditional reminiscence approaches for older drifters. To address this challenge, we designed RemiHaven, a personalized reminiscence support tool based on a two-phase formative study. Powered by Multimodal Large Language Models (MLLMs), it integrates "In-Town" and "Out-of-Town" peer agents to enhance personalization, engagement, and emotional resonance in the reminiscence process. Our evaluations show RemiHaven's strengths in supporting reminiscence while identifying potential challenges. We conclude by offering insights for the future design of reminiscence support tools for older migrants.
2025 · Xuechen Zhang et al. · Fudan University · Human-LLM Collaboration · Mental Health Apps & Online Support Communities · Elderly Care & Dementia Support · CHI

ReachPad: Interacting with Multiple Virtual Screens using a Single Physical Pad through Haptic Retargeting
The advancement of Virtual Reality (VR) has expanded 2D user interfaces into 3D space. This change has introduced richer interaction modalities but also brought challenges, especially the lack of haptic feedback in mid-air interactions. Previous research has explored various methods to provide feedback for interface interactions, but most approaches require specialized haptic devices. We introduce haptic retargeting to enable users to control multiple virtual screens in VR using a simple flat pad, which serves as a single physical proxy supporting seamless interaction across multiple virtual screens. We conducted user studies to explore the appropriate virtual screen size and positioning under our retargeting method and then compared various drag-and-drop methods for cross-screen interaction. Finally, we compared our method with controller-based interaction in application scenarios.
2025 · Han Shi et al. · Southern University of Science and Technology; Fudan University · In-Vehicle Haptic, Audio & Multimodal Feedback · Mixed Reality Workspaces · Immersion & Presence Research · CHI

YouthCare: Building a Personalized Collaborative Video Censorship Tool to Support Parent-Child Joint Media Engagement
To mitigate the negative impacts of online videos on teenagers, existing research and platforms have implemented various parental mediation mechanisms, such as Parent-Child Joint Media Engagement (JME). However, JME generally relies heavily on parents' time, knowledge, and experience. To reduce this burden, we aim to design an automatic tool that helps parents and children censor videos more effectively and efficiently in JME. Toward this goal, we first conducted a formative study to identify the needs and expectations of teenagers and parents for such a system. Based on the findings, we designed YouthCare, a personalized collaborative video censorship tool that supports parents and children in collaboratively filtering out inappropriate content and selecting appropriate content in JME. An evaluation with 10 parent-child pairs demonstrated YouthCare's strengths in supporting video censorship, while also highlighting some potential problems. These findings inspire us to propose several insights for the future design of par…
2025 · Wenxin Zhao et al. · Fudan University · Conversational Chatbots · Universal & Inclusive Design · CHI

Unlocking Scientific Concepts: How Effective Are LLM-Generated Analogies for Student Understanding and Classroom Practice?
Teaching scientific concepts is essential but challenging, and analogies help students connect new concepts to familiar ideas. Advances in large language models (LLMs) enable generating analogies, yet their effectiveness in education remains underexplored. In this paper, we first conducted a two-stage study involving high school students and teachers to assess the effectiveness of LLM-generated analogies in biology and physics through a controlled in-class test and a classroom field study. Test results suggested that LLM-generated analogies can enhance student understanding, particularly in biology, but require teachers' guidance to prevent over-reliance and overconfidence. Classroom experiments suggested that teachers could refine LLM-generated analogies to their satisfaction and derive new analogies from generated ones, encouraged by positive classroom feedback and improved homework performance. Based on these findings, we developed and evaluated a practical system to help teachers generate and refine teaching analogies. We discuss future directions for developing and evaluating LLM-supported teaching and learning by analogy.
2025 · Zekai Shao et al. · Fudan University · Human-LLM Collaboration · Intelligent Tutoring Systems & Learning Analytics · CHI

Unveiling Causal Attention in Dogs' Eyes with Smart Eyewear
Our goals are to better understand dog cognition and to support others who share this interest. Existing investigation methods predominantly rely on human-manipulated experiments to examine dogs' behavioral responses to visual stimuli such as human gestures. As a result, existing experimental paradigms are usually constrained to in-lab environments and may not reveal dogs' responses to real-world visual scenes. Moreover, visual signals pertaining to dog behavioral responses are empirically derived from observational evidence, which can be prone to subjective bias and may lead to controversies. We aim to overcome or reduce these limitations of dog cognition studies by investigating a challenging issue: identifying the visual signal(s) from dog eye motion that can be used to infer causal explanations of a dog's behaviors, i.e., estimating causal attention. To this end, we design a deep learning framework named Causal AtteNtIon NEtwork (CANINE) to unveil dogs' causal attention mechanism, inspired by recent advances in causality analysis with deep learning. Equipped with CANINE, we developed the first eyewear device that enables inference on the vision-related behavioral causality of canine wearers. We demonstrate the technical feasibility of the proposed CANINE glasses through their application in multiple representative experimental scenarios of dog cognition study. Various in-field trials also demonstrate the generality of the CANINE eyewear in real-world scenarios. With the proposed CANINE glasses, we collect the first large-scale dataset, named DogsView, which consists of automatically generated annotations of the canine wearer's causal attention across a wide range of representative scenarios. The DogsView dataset is available online to facilitate research.
https://doi.org/10.1145/3569490
2023 · Yingying Zhao et al. · Eye Tracking & Gaze Interaction · Human Pose & Activity Recognition · Computational Methods in HCI · UbiComp

CASES: A Cognition-Aware Smart Eyewear System for Understanding How People Read
The process of reading has attracted decades of scientific research. Work in this field primarily focuses on using eye gaze patterns to reveal cognitive processes during reading. However, eye gaze patterns suffer from limited resolution, jitter noise, and cognitive biases, resulting in limited accuracy in tracking cognitive reading states. Moreover, using sequential eye gaze data alone neglects the linguistic structure of the text, undermining attempts to provide semantic explanations for cognitive states during reading. Motivated by the impact of the semantic context of text on the human cognitive reading process, this work uses both the semantic context of the text and visual attention during reading to more accurately predict the temporal sequence of cognitive states. To this end, we present CASES, a Cognition-Aware Smart Eyewear System that fuses semantic context and visual attention patterns during reading. The two feature modalities are time-aligned and fed to a multi-task classification deep model based on a temporal convolutional network to automatically estimate, and further semantically explain, the reading-state time series. CASES is implemented in eyewear whose use does not interrupt the reading process, thus reducing subjective bias. Furthermore, the real-time association between visual and semantic information allows the interactions between visual attention and semantic context to be better interpreted and explained. Ablation studies with 25 subjects demonstrate that CASES improves multi-label reading state estimation accuracy by 20.90% at the sentence level compared with eye tracking alone. Using CASES, we developed an interactive reading assistance system. Three and a half months of deployment with 13 in-field studies enabled several observations relevant to the study of reading. In particular, we observed how individual visual history interacts with semantic context at different text granularities. Furthermore, CASES enables just-in-time intervention when readers encounter processing difficulties, promoting self-awareness of the cognitive processes involved in reading and helping readers develop more effective reading habits.
https://doi.org/10.1145/3610910
2023 · Xiangyao Qi et al. · Eye Tracking & Gaze Interaction · Mental Health Apps & Online Support Communities · UbiComp

Wakey-Wakey: Animate Text by Mimicking Characters in a GIF
With appealing visual effects, kinetic typography (animated text) has become prevalent in movies, advertisements, and social media. However, crafting its animation scheme remains challenging and time-consuming. We propose an automatic framework to transfer the animation scheme of a rigid body in a given meme GIF to text in vector format. First, the trajectories of key points on the GIF anchor are extracted and mapped to the text's control points based on local affine transformation. Then the temporal positions of the control points are optimized to maintain the text topology. We also develop an authoring tool that allows intuitive human control of the generation process. A questionnaire study provides evidence that the outputs are aesthetically pleasing and preserve the animation patterns of the original GIF well; participants also noted that the animated text conveys emotional semantics similar to the original GIF. In addition, we evaluate the utility and effectiveness of our approach through a workshop with general users and designers.
2023 · Zhaoyu Zhou et al. · Graphic Design & Typography Tools · 3D Modeling & Animation · UIST

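The mapping step above rests on estimating local affine transforms from point correspondences. A generic least-squares fit of a 2D affine transform can be sketched as follows; this is an illustrative, pure-Python version under assumed names (`fit_affine`, `solve3`), not the paper's implementation, which would typically use a library solver instead.

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    n = 3
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_affine(src, dst):
    """Least-squares 2D affine (a, b, tx, c, d, ty) mapping src points to dst.

    Maps (x, y) -> (a*x + b*y + tx, c*x + d*y + ty). Each design-matrix row
    is (x, y, 1); the two output coordinates decouple into two 3-unknown
    normal-equation systems sharing the same A^T A.
    """
    AtA = [[0.0] * 3 for _ in range(3)]
    Atu, Atv = [0.0] * 3, [0.0] * 3
    for (x, y), (u, v) in zip(src, dst):
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                AtA[i][j] += row[i] * row[j]
            Atu[i] += row[i] * u
            Atv[i] += row[i] * v
    a, b, tx = solve3(AtA, Atu)
    c, d, ty = solve3(AtA, Atv)
    return a, b, tx, c, d, ty
```

Given three or more keypoint correspondences in a local neighborhood, such a fit recovers the transform that carries the GIF anchor's motion onto the text's control points frame by frame.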
ContextWing: Pair-wise Visual Comparison for Evolving Sequential Patterns of Contexts in Social Media Data Streams
Understanding and comparing the evolution of public opinions on a social media event is important. However, such a task requires summarizing rich semantic information while comparing semantics and dynamics in depth at the same time, which is difficult. To tackle these challenges, we propose ContextWing, an interactive visual analytics system that supports pair-wise comparison of evolving sequential patterns of contexts between two data streams. The computational model of ContextWing generates dynamic topics and sequential patterns, and characterizes public attention and pair-wise correlations. A novel multi-layer bilateral wing metaphor is designed to intuitively visualize sequential patterns merged by different contexts, revealing the similarities and differences between two streams in both temporal and semantic aspects. Interactive tools support the selection of a central keyword and its contexts to iteratively generate patterns for focused exploration. The system supports analysis in both static and streaming settings, enabling a wider range of application scenarios. We verify the effectiveness and usability of ContextWing from multiple facets, including three case studies, two expert interviews, and a user study.
2023 · Yuheng Zhao et al. · Visualization · CSCW

IntimaSea: Exploring Shared Stress Display in Close Relationships
Automatic stress tracking has become increasingly available on wearable devices. Research has investigated its use for individual stress management, largely within the traditional data-as-care framing. However, its use for stress sharing in social relationships, particularly close relationships, is still underexplored. Inspired by the idea of "caring-through-data", which focuses on mediating the social and emotional experiences of the collective "us" with data, this paper presents a design study with a prototype called IntimaSea, a display featuring illustrative stress data in collective forms to be shared within close relationships. Field trials with nine groups of intimately connected users (N=19) highlight its potential for stress awareness, interpretation, and management, as well as for promoting intimacy. We end by discussing stress sharing as a social form of stress management, stress data as a meaningful social cue mediating relationships, and design implications for caring-through-data.
2023 · Yanqi Jiang et al. · Fudan University · Sleep & Stress Monitoring · Privacy by Design & User Control · CHI

GeoCamera: Telling Stories in Geographic Visualizations with Camera Movements
In geographic data videos, camera movements are frequently used and combined to present information from multiple perspectives. However, creating and editing camera movements requires significant time and professional skills. This work aims to lower the barrier to crafting diverse camera movements for geographic data videos. First, we analyze a corpus of 66 geographic data videos and derive a design space of camera movements with one dimension for geospatial targets and another for narrative purposes. Based on the design space, we propose a set of adaptive camera shots and develop an interactive tool called GeoCamera, which allows users to flexibly design camera movements for geographic visualizations. We verify the expressiveness of our tool through case studies and evaluate its usability with a user study. The participants found that the tool facilitates the design of camera movements.
2023 · Wenchao Li et al. · The Hong Kong University of Science and Technology · Interactive Data Visualization · Geospatial & Map Visualization · Data Storytelling · CHI

UEyes: Understanding Visual Saliency across User Interface Types
While user interfaces (UIs) display elements such as images and text in a grid-based layout, UI types differ significantly in the number of elements and how they are displayed. For example, webpage designs rely heavily on images and text, whereas desktop UIs tend to feature numerous small images. To examine how such differences affect the way users look at UIs, we collected and analyzed a large eye-tracking dataset, UEyes (62 participants and 1,980 UI screenshots), covering four major UI types: webpage, desktop UI, mobile UI, and poster. We analyze differences across UI types in biases related to factors such as color, location, and gaze direction. We also compare state-of-the-art predictive models and propose improvements to better capture typical tendencies across UI types. Both the dataset and the models are publicly available.
2023 · Yanqi Jiang et al. · Aalto University · Eye Tracking & Gaze Interaction · Visualization Perception & Cognition · CHI