Understanding the Challenges Students Face in Non-English Programming Environments Due to the Programming Language Transition: A Case Study of Keywords in the Chinese Version of Scratch
As the importance of computer science (CS) education gains global recognition, the learner population is expanding to include all manner of backgrounds. However, students from non-English backgrounds face challenges in understanding instructional material, technical communication, and reading and writing code, which further impacts their learning outcomes. These issues have attracted attention in the fields of Human-Computer Interaction (HCI), programming languages, and computer education, which have demonstrated that using programming tools in mother tongues or local languages enhances learners' ability to grasp computing concepts. Consequently, extensive efforts have been dedicated to translating English technical terms across various languages and even to developing non-English-based programming languages.
2025. Siyu Wang et al. Wuhan University, School of Computer Science. CHI.
Immersive Biography: Supporting Intercultural Empathy and Understanding for Displaced Cultural Objects in Virtual Reality
Displaced cultural objects often act as mediators of intercultural understanding due to their connection between the original and host communities. This study explores how an immersive, embodied VR biography enhances intercultural empathy and understanding of displaced cultural objects. We took the famous Chinese painting, the Admonitions Scroll, housed at the British Museum, as an example to design an Immersive Biography in VR. We conducted an empirical study with 24 participants from source and non-source communities. Findings suggested that interacting with biographical narratives of displaced cultural objects in a personified, embodied way can effectively promote intercultural empathy and understanding. Additionally, simulated intercultural scenarios and dialogues with personified cultural objects fostered intercultural empathy in both groups, with a stronger effect observed in non-source communities due to differences in cultural identity and personal connections. Our study offers practical insights into the potential of immersive technologies to inspire intercultural communication around displaced cultural objects.
2025. Ke Zhao et al. Wuhan University, School of Information Management; Duke Kunshan University. CHI.
BrickSmart: Leveraging Generative AI to Support Children's Spatial Language Learning in Family Block Play
Block-building activities are crucial for developing children's spatial reasoning and mathematical skills, yet parents often lack the expertise to guide these activities effectively. BrickSmart, a pioneering system, addresses this gap by providing spatial language guidance through a structured three-step process: Discovery & Design, Build & Learn, and Explore & Expand. This system uniquely supports parents in 1) generating personalized block-building instructions, 2) teaching spatial language during building and interactive play, and 3) tracking children's learning progress, altogether enhancing children's engagement and cognitive development. In a comparative study involving 12 parent-child pairs (children aged 6-8 years) across experimental and control groups, BrickSmart demonstrated improvements in supportiveness, efficiency, and innovation, with a significant increase in children's use of spatial vocabulary during block play, thereby offering an effective framework for fostering spatial language skills in children.
2025. Yujia Liu et al. Tsinghua University. CHI.
Reviving Mural Art through Generative AI: A Comparative Study of AI-Generated and Hand-Crafted Recreations
Virtual reality (VR) provides an immersive and interactive platform for presenting ancient murals, enhancing users' understanding and appreciation of these invaluable cultural treasures. However, traditional hand-crafted methods for recreating murals in VR are labor-intensive, time-consuming, and require significant expertise, limiting their scalability for large-scale mural scenes. To address these challenges, we propose a comprehensive pipeline that leverages generative AI to automate the mural recreation process. This pipeline is validated by the reconstruction of the Foguang Temple scene from the Dunhuang Murals. A user study comparing the AI-generated scene with a hand-crafted one reveals no significant differences in presence, authenticity, engagement and enjoyment, or emotion. Additionally, our findings identify areas for improvement in AI-generated recreations, such as enhancing historical fidelity and offering customization. This work paves the way for more scalable, efficient, and accessible methods of revitalizing cultural heritage in VR, offering new opportunities for mural preservation, demonstration, and dissemination.
2025. Shuo Zhao et al. Duke Kunshan University, Data Science Research Center. CHI.
Waffle: A Waterproof mmWave-based Human Sensing System inside Bathrooms with Running Water
Zhang et al. developed Waffle, a waterproof mmWave-based sensing system that addresses the challenge of human sensing in bathrooms with running water, enabling around-the-clock indoor monitoring.
2024. Xusheng Zhang et al. UbiComp.
Deus Ex Machina and Personas from Large Language Models: Investigating the Composition of AI-Generated Persona Descriptions
Large language models (LLMs) can generate personas based on prompts that describe the target user group. To understand what kind of personas LLMs generate, we investigate the diversity and bias in 450 LLM-generated personas with the help of internal evaluators (n=4) and subject-matter experts (SMEs) (n=5). The research findings reveal biases in LLM-generated personas, particularly in age, occupation, and pain points, as well as a strong bias towards personas from the United States. Human evaluations demonstrate that LLM persona descriptions were informative, believable, positive, relatable, and not stereotyped. The SMEs rated the personas slightly more stereotypical, less positive, and less relatable than the internal evaluators did. The findings suggest that LLMs can generate consistent personas perceived as believable, relatable, and informative while containing relatively low amounts of stereotyping.
2024. Joni Salminen et al. University of Vaasa. CHI.
Echo: Reverberation-based Fast Black-Box Adversarial Attacks on Intelligent Audio Systems
Intelligent audio systems are ubiquitous in our lives, such as speech command recognition and speaker recognition. However, deep learning-based intelligent audio systems have been shown to be vulnerable to adversarial attacks. In this paper, we propose a physical adversarial attack that exploits reverberation, a natural indoor acoustic effect, to realize imperceptible, fast, and targeted black-box attacks. Unlike existing attacks that constrain the magnitude of adversarial perturbations within a fixed radius, we generate reverberation-like perturbations that blend naturally with the original voice sample. Additionally, we can generate more robust adversarial examples even under over-the-air propagation by accounting for distortions in the physical environment. Extensive experiments were conducted on two popular intelligent audio systems in various situations, such as different room sizes, distances, and ambient noises. The results show that Echo can compromise intelligent audio systems in both digital and physical over-the-air environments. https://doi.org/10.1145/3610874
2023. Meng Xue et al. UbiComp.
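The core acoustic idea behind a reverberation-style perturbation is convolving the dry voice signal with a room impulse response (RIR), so the added energy sounds like natural room echo rather than noise. A minimal sketch of that operation, assuming NumPy and a hypothetical `apply_reverb` helper (the paper's actual attack optimizes the perturbation; this only illustrates the convolution step):

```python
import numpy as np

def apply_reverb(signal: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Convolve a dry signal with a room impulse response (RIR).

    This is only the forward acoustic model; the Echo attack itself
    searches for an adversarial RIR-like perturbation, which is not
    reproduced here.
    """
    wet = np.convolve(signal, rir)
    # Rescale to the original peak level so the result stays subtle.
    peak = np.max(np.abs(wet))
    if peak > 0:
        wet = wet * (np.max(np.abs(signal)) / peak)
    return wet
```

The convolution output has length `len(signal) + len(rir) - 1`, which is why over-the-air attacks must also account for the room's real response distorting the crafted signal.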
Predicting and Diagnosing User Engagement with Mobile UI Animation via a Data-Driven Approach
Animation, a common design element in user interfaces (UI), can impact user engagement (UE) with mobile applications. To avoid impairing UE through improper animation design, designers rely on resource-intensive evaluation methods such as user studies or expert reviews. To alleviate this burden, we propose a data-driven approach to assist designers in examining UE issues with their animation designs. We first crowdsource UE assessments of mobile UI animations. Based on the collected data, we then build a novel deep learning model that captures both spatial and temporal features of animations to predict their UE levels. Evaluations show that our model achieves reasonable accuracy. We further leverage the animation features encoded by our model and a sample set of expert reviews to derive potential UE issues of a particular animation. Finally, we develop a proof-of-concept tool and evaluate its potential usage in actual design practices with experts.
2020. Ziming Wu et al. Hong Kong University of Science and Technology. CHI.
MessageOnTap: A Suggestive Interface to Facilitate Messaging-related Tasks
Text messages are sometimes prompts that lead to information-related tasks, e.g., checking one's schedule, creating reminders, or sharing content. We introduce MessageOnTap, a suggestive interface for smartphones that uses the text in a conversation to suggest task shortcuts that can streamline likely next actions. When activated, MessageOnTap uses word embeddings to rank relevant external apps, and parameterizes associated task shortcuts using key phrases mentioned in the conversation, such as times, persons, or events. MessageOnTap also tailors the auto-complete dictionary based on text in the conversation, to streamline any text input. We first conducted a month-long study of messaging behaviors (N=22) that informed our design. We then conducted a lab study to evaluate the effectiveness of MessageOnTap's suggestive interface, and found that participants could complete tasks 3.1x faster with MessageOnTap than with their typical task flow.
2019. Fanglin Chen et al. Carnegie Mellon University. CHI.
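Ranking apps by word-embedding similarity, as the abstract describes, typically reduces to comparing an embedding of the conversation text against an embedding for each candidate app. A minimal sketch under that assumption, using cosine similarity over pre-computed vectors (the app names, vectors, and `rank_apps` helper are illustrative, not from the paper):

```python
import numpy as np

def rank_apps(message_vec: np.ndarray, app_vecs: dict) -> list:
    """Rank candidate apps by cosine similarity to the message embedding.

    app_vecs maps an app name to its pre-computed embedding vector.
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    sims = {name: cosine(message_vec, vec) for name, vec in app_vecs.items()}
    return sorted(sims, key=sims.get, reverse=True)

# Toy usage: a "schedule a meeting" message should rank the calendar app first.
msg = np.array([0.9, 0.1])
apps = {"calendar": np.array([1.0, 0.0]), "maps": np.array([0.0, 1.0])}
ranking = rank_apps(msg, apps)
```

In practice the embeddings would come from a trained word-embedding model averaged over the message's content words, but the ranking step itself is this simple comparison.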