Behind the Same Mask: Understanding the Practice of Spontaneous Collective Anonymity on Chinese Social Platforms
Anonymity plays a crucial role in social interactions online. Recently, a new phenomenon has emerged on Chinese social platforms where users collectively adopt a uniform avatar and nickname, "momo", thereby achieving anonymity. However, understanding of such spontaneous collective anonymity within Chinese cultural and contextual factors remains limited, since much of the research on anonymity focuses on Western users. It is unclear how users perceive the usage of "momo", what motivates them, and how this collective anonymity impacts their social interactions. To answer these questions, we conducted interviews with 20 "momo" users. We found that the shared identity "momo" provides an additional layer of anonymity on identity-constrained Chinese social platforms. Users adopted "momo" to engage in more inclusive discussions and to balance anonymity and self-presentation. Moreover, this collective anonymity fosters connections and forms a meaningful group identity in a loosely organized community. We also identified the benefits and risks associated with this unique collective anonymity. This work contributes to CSCW and HCI research by (1) extending knowledge of anonymity practices and privacy concerns within non-Western, mainly Chinese, contexts; (2) advancing work on anonymity models by revealing the dual role of the "momo" identity in facilitating collective anonymity and community bonds; and (3) providing design implications to support future social technologies in identity design and anonymous communities.
2025 · Suqi Lou et al. · Designing for Privacy · CSCW
CoGrader: Transforming Instructors' Assessment of Project Reports through Collaborative LLM Integration
Grading project reports is increasingly significant in today's educational landscape, where such reports serve as key assessments of students' comprehensive problem-solving abilities. However, grading them remains challenging due to the multifaceted evaluation criteria involved, such as creativity and peer-comparative achievement. Meanwhile, instructors often struggle to maintain fairness throughout the time-consuming grading process. Recent advances in AI, particularly large language models, have demonstrated potential for automating simpler grading tasks, such as assessing quizzes or basic writing quality. However, these tools often fall short when it comes to complex metrics, like design innovation and the practical application of knowledge, that require an instructor's educational insight into the class situation. To address this challenge, we conducted a formative study with six instructors and developed CoGrader, which introduces a novel grading workflow combining human-LLM collaborative metrics design, benchmarking, and AI-assisted feedback. CoGrader was found effective in improving grading efficiency and consistency while providing reliable peer-comparative feedback to students. We also discuss design insights and ethical considerations for the development of human-AI collaborative grading systems.
2025 · Zixin Chen et al. · Human-LLM Collaboration · Intelligent Tutoring Systems & Learning Analytics · STEM Education & Science Communication · UIST
From Sports Videos to Immersive Training: Augmenting Human Motion to Enrich Basketball Training Experience
Video plays a crucial role in sports training, enabling participants to analyze their movements and identify opponents' weaknesses. Despite the easy access to sports videos, the rich motion data within them remains underutilized due to the lack of clear performance indicators and discrepancies from real-game conditions. To address this, we employed advanced computer vision algorithms to reconstruct human motions in an immersive environment, where users can freely observe and interact with the movements. Basketball shooting was chosen as a representative scenario to validate this framework, given its fast pace and extensive physical contact. Collaborating with experts, we iteratively designed motion-related visualizations to improve the understanding of complex movements. A one-on-one matchup simulating real games was also provided, allowing users to compete directly with the reconstructed motions. Our user studies demonstrate that this method enhances participants' movement comprehension and engagement, while insights derived from interviews inform future immersive training designs.
2025 · Yihong Wu et al. · Full-Body Interaction & Embodied Input · Human Pose & Activity Recognition · UIST
Sensible Agent: A Framework for Unobtrusive Interaction with Proactive AR Agents
Proactive AR agents promise context-aware assistance, but their interactions often rely on explicit voice prompts or responses, which can be disruptive or socially awkward. We introduce Sensible Agent, a framework designed for unobtrusive interaction with these proactive agents. Sensible Agent dynamically adapts both "what" assistance to offer and, crucially, "how" to deliver it, based on real-time multimodal context sensing. Informed by an expert workshop (n=12) and a data annotation study (n=40), the framework leverages egocentric cameras, multimodal sensing, and Large Multimodal Models (LMMs) to infer context and suggest appropriate actions delivered via minimally intrusive interaction modes. We demonstrate our prototype on an XR headset through a user study (n=10) in both AR and VR scenarios. Results indicate that Sensible Agent significantly reduces perceived intrusiveness and interaction effort compared to a voice-prompted baseline, while maintaining high usability.
2025 · Geonsun Lee et al. · AR Navigation & Context Awareness · Mixed Reality Workspaces · Context-Aware Computing · UIST
ViseGPT: Towards Better Alignment of LLM-generated Data Wrangling Scripts and User Prompts
Large language models (LLMs) enable the rapid generation of data wrangling scripts based on natural language instructions, but these scripts may not fully adhere to user-specified requirements, necessitating careful inspection and iterative refinement. Existing approaches primarily assist users in understanding script logic and spotting potential issues themselves, rather than providing direct validation of correctness. To enhance debugging efficiency and optimize the user experience, we develop ViseGPT, a tool that automatically extracts constraints from user prompts to generate comprehensive test cases for verifying script reliability. The test results are then transformed into a tailored Gantt chart, allowing users to intuitively assess alignment with semantic requirements and iteratively refine their scripts. Our design decisions are informed by a formative study (N=8) that explores user practices and challenges. We further evaluate the effectiveness and usability of ViseGPT through a user study (N=18). Results indicate that ViseGPT significantly improves debugging efficiency for LLM-generated data-wrangling scripts, enhances users' ability to detect and correct issues, and streamlines the workflow experience.
2025 · Jiajun Zhu et al. · Human-LLM Collaboration · Explainable AI (XAI) · Interactive Data Visualization · UIST
CAnnotator: Photo-Guided Color Annotation for Degraded Ancient Paintings
Ancient paintings suffer irreversible color degradation due to aging and improper conservation. Labeling degraded paintings with authentic colors becomes vital to protect these valuable cultural heritages, yet it is challenging due to missing color information. Users typically need to investigate relevant photos to infer authentic colors and then validate these colors by mixing traditional pigments. However, such a task can be exhausting. To ease the difficulty, we propose an interactive visualization tool, namely CAnnotator, that streamlines efficient human-AI collaboration for the color annotation of degraded ancient paintings. CAnnotator consists of three views: a paint-annotation view, a photo-reference view, and a pigment-mixing view. Given an ancient painting, the paint-annotation view helps users extract its color-degraded object textures, which are then propagated to relevant photos using a texture tracking model. Based on the tracking results, the photo-reference view provides texture-color and object-posture filters to explore photos that include the given texture colors and object postures. We train a deep learning model to simulate the mixing of physical pigments and employ the chain rule to support progressive pigment mixture using a novel flow-based color visualization. We demonstrate the usage of CAnnotator through a use case and evaluate its effectiveness through model experiments and an in-lab user study. Compared to the baseline, CAnnotator improves user confidence in labeled colors and fosters user engagement at the cost of additional time.
2025 · Tan Tang et al. · Museum & Cultural Heritage Digitization · Interactive Narrative & Immersive Storytelling · UIST
ReSpark: Leveraging Previous Data Reports as References to Generate New Reports with LLMs
Creating data reports is a labor-intensive task involving iterative data exploration, insight extraction, and narrative construction. A key challenge lies in composing the analysis logic: from defining objectives and transforming data to identifying and communicating insights. Manually crafting this logic can be cognitively demanding. While experienced analysts often reuse scripts from past projects, finding a perfect match for a new dataset is rare. Even when similar analyses are available online, they usually share only results or visualizations, not the underlying code, making reuse difficult. To address this, we present ReSpark, a system that leverages large language models (LLMs) to reverse-engineer analysis logic from existing reports and adapt it to new datasets. By generating draft analysis steps, ReSpark provides a warm start for users. It also supports interactive refinement, allowing users to inspect intermediate outputs, insert objectives, and revise content. We evaluate ReSpark through comparative and user studies, demonstrating its effectiveness in lowering the barrier to generating data reports without relying on existing analysis code.
2025 · Yuan Tian et al. · Human-LLM Collaboration · Interactive Data Visualization · Data Storytelling · UIST
KiriInflate: Fabricating Cross-Scale Inflatables with Large-Magnitude Contraction and Tunable Stretchability for Tangible Interaction
We present KiriInflate, a rapid, precise, and accessible fabrication method for creating stretchable inflatables with Kirigami structures. These inflatables, fabricated at multiple scales (from fingernail-sized to body-sized), exhibit rapid, large contraction of up to 83.5% upon inflation and provide tunable stretchability. Our fabrication process leverages the electrostatic adhesion of plastic films and an off-the-shelf laser cutter to simultaneously cut and fuse the edges of inflatables, achieving ultra-narrow seals (< 0.125 mm). Our structural design enables versatile 3D morphing upon inflation and tunable stretch behavior, with experimental studies offering design guidelines for key geometric parameters. A series of applications, including an eyelid assistive device, a multi-mode game handle, a dynamic elbow brace, and breathable lamps, highlights its potential for diverse interactions in HCI.
2025 · Yue Yang et al. · Shape-Changing Interfaces & Soft Robotic Materials · Shape-Changing Materials & 4D Printing · UIST
EmbroChet: A Hybrid Textile Fabrication Approach for 3D Personalized Handicraft via Heat-Shrinking
We propose EmbroChet, a hybrid approach that bridges digital fabrication and textile craftsmanship, empowering individuals unfamiliar with intricate craft techniques to design and fabricate 3D textile handicrafts intuitively. EmbroChet allows the creation of handicrafts by embroidering chain stitches (a fundamental embroidery technique) onto a heat-shrinkable film, which subsequently self-transforms from a 2D composite to a 3D textile through a freely controllable heat-triggering process. Using a single stitch type, the method enables custom designs and intricate geometries without the complex manual skills that typically require expertise across different stitch techniques. To better demonstrate EmbroChet, we propose a design tool that includes shape-changing libraries to assist users in customizing 3D shapes. The evaluation demonstrates its unique strength in balancing geometric complexity and textile softness. Furthermore, our workshop verifies the feasibility of EmbroChet, exploring its potential for personalized textile fabrication and synergizing the precision of digital fabrication with the tactile artistry of textile craftsmanship.
2025 · Guanyun Wang et al. · Shape-Changing Interfaces & Soft Robotic Materials · Programming Education & Computational Thinking · Shape-Changing Materials & 4D Printing · UIST
FabObscura: Computational Design and Fabrication for Interactive Barrier-Grid Animations
We present FabObscura: a system for creating interactive barrier-grid animations, a classic technique that uses occlusion patterns to create the illusion of motion. Whereas traditional barrier-grid animations are constrained to simple linear occlusion patterns, FabObscura introduces a parameterization that represents patterns as mathematical functions. Our parameterization offers two key advantages over existing barrier-grid animation design methods: first, it has a high expressive ceiling by enabling the systematic design of novel patterns; second, it is versatile enough to represent all established forms of barrier-grid animations. Using this parameterization, our computational design tool enables an end-to-end workflow for authoring, visualizing, and fabricating these animations without domain expertise. Our applications demonstrate how FabObscura can be used to create animations that respond to a range of user interactions, such as translations, rotations, and changes in viewpoint. By formalizing barrier-grid animation as a computational design material, FabObscura extends its expressiveness as an interactive medium.
2025 · Ticha Sethapakdi et al. · Shape-Changing Materials & 4D Printing · Customizable & Personalized Objects · Digital Art Installations & Interactive Performance · UIST
SCENIC: A Location-based System to Foster Cognitive Development in Children During Car Rides
Car-riding is common for children in modern life, and given the repetitive nature of daily commutes, they often feel bored, which in turn leads them to rely on electronic devices for entertainment. Meanwhile, the rich and rapidly changing scenery outside the car naturally attracts children's curiosity, providing abundant resources for cognitive development. Our formative study reveals that parents' support during car rides is often fleeting, as accompanying adults may struggle to consistently provide effective guidance to nurture children's innate curiosity. Therefore, we propose SCENIC, an interactive system that guides children aged 6-11 to better perceive the external environment through location-based cognitive development strategies. Specifically, we built upon the experiential approaches used by parents, culminating in the formulation of six cognitive development strategies integrated into SCENIC. Additionally, considering the repetitive nature of car commutes, SCENIC incorporates features of dynamic POI selection and journey gallery generation to improve children's engagement. We evaluated the quality of SCENIC's generated content (N=21) and conducted an in-situ user evaluation involving seven families and ten children. Study findings suggest that SCENIC can enhance the car riding experience for children and help them better perceive the external environment through cognitive development strategies.
2025 · Liuqing Chen et al. · Motion Sickness & Passenger Experience · Micromobility (E-bike, E-scooter) Interaction · Universal & Inclusive Design · UIST
GyFoam: Fabricating Lattice Foam with Customizable Stiffness through Uniform Expansion
We present GyFoam, a fabrication method integrating foam material with lattice structures to enable controlled and uniform expansion, which supports high-quality forming in appearance and customizable stiffness in function, using standard 3D printers, filaments, commercially available thermo-expandable microspheres, and silicone. To achieve customizable stiffness, we propose two methods informed by experiments: modifying material concentration and adjusting lattice structural parameters. Additionally, we propose three shape control strategies for creating complex shapes: bending, wavy edges, and internal doming. Furthermore, a user-friendly design tool allows users to construct lattice structures, preview basic deformation, and generate mold models for printing. Finally, through a series of applications, we validate GyFoam's practical use in fabricating large objects and wearable products, enabling flexible interactions, and creating aesthetic designs.
2025 · Guanyun Wang et al. · Desktop 3D Printing & Personal Fabrication · Shape-Changing Materials & 4D Printing · Customizable & Personalized Objects · UIST
VisMimic: Integrating Motion Chain in Feedback Video Generation for Motor Coaching
Augmented video is a common medium for remote sports coaching, facilitating communication between trainees and coaches. Existing video augmentation techniques struggle to simultaneously convey both the overall motion dynamics and static key poses. This limitation hinders feedback comprehension in motor learning, making it difficult to understand where errors occur and how to correct them. To address this, we first reviewed popular video augmentation solutions. In collaboration with professional coaches, we integrated a motion chain into feedback videos to combine key poses with motion trajectories. This supports multi-view observation and feedback explanation from overview to detail. To assist coaches in creating feedback videos, we present VisMimic, a human-AI interaction system that automatically analyzes trainee videos against reference movements, generates animated feedback, and enables customization. User studies show VisMimic's usability and effectiveness in enhancing motion analysis and communication for motor coaching.
2025 · Liqi Cheng et al. · Full-Body Interaction & Embodied Input · Human Pose & Activity Recognition · UIST
Touch-n-Curl: Designing and Constructing Skeletal Form through 3D Printing Flattened Zipper Assembly
In the realm of digital fabrication, skeletal structures offer lightweight, cost-effective solutions for art installations, rapid fabrication, and large-scale construction. However, existing 3D printing methods for skeletal structures often require support structures, resulting in prolonged print time and excessive material consumption. This paper presents Touch-n-Curl, a design and construction system for rapidly prototyping 3D skeletal curved structures, covering scales from millimeters to meters, by printing 2D zipper assemblies with interlocking mechanisms using conventional 3D printers. This design process is made possible by a computational method that unrolls a 3D model into a 2D branch assembly while minimizing branch intersections, making the fabrication process both efficient and robust. A parametric design tool is developed to support this inverse design workflow, instantly generating 2D zippers and offering a preview of the 3D skeletal assembly. To ensure users can effectively utilize the system, we implement methods such as edge disjoining and tree rectification to accommodate closed mesh imports in addition to open trees across a wide range of complexity, measured by curvature and torsion. The resulting integrated and accessible workflow is evaluated in terms of fabrication speed, mechanical strength, and shape-matching accuracy, and its versatility is showcased through a series of demonstrations.
2025 · Deying Pan et al. · Desktop 3D Printing & Personal Fabrication · Circuit Making & Hardware Prototyping · UIST
Examining Cross-Cultural Differences in Intelligent Vehicle Agents: Repair Strategies after Their Failures
Anthropomorphic design in intelligent vehicle agents (IVAs) is crucial for driving safety and user experience. Cultural background may shape user preferences, as evidenced by Chinese car manufacturers offering more anthropomorphic IVAs (e.g., physical robots, human-like virtual agents) than their Western counterparts. This suggests that a universal approach to anthropomorphic design may not be feasible. While prior academic research has examined cross-cultural differences in visual anthropomorphism, behavioral anthropomorphism remains understudied. In this study, we developed a taxonomy of user requests (N = 60), evaluated the performance and responses of eight IVAs in premium-level cars in the Chinese market (five from Chinese brands, three from Western brands), and analyzed their verbal repair behaviors (e.g., apology, promise) following request failures. Overall, the five Chinese-brand IVAs and three Western-brand IVAs did not differ in their corrective responses to user requests or their likelihood of employing verbal repair strategies. However, our in-depth analysis revealed that Chinese-brand IVAs were more likely to use combined repair strategies rather than single ones and to incorporate intimacy expressions in their verbal repair behaviors compared to their Western-brand counterparts. This suggests potential cross-cultural differences in the design of social strategies for IVAs. We also observed IVA-level variations within both Chinese-brand and Western-brand groups. Future cross-cultural research is needed to inform evidence-based anthropomorphic design.
2025 · Lan Lan et al. · External HMI (eHMI) — Communication with Pedestrians & Cyclists · Voice User Interface (VUI) Design · Multilingual & Cross-Cultural Voice Interaction · AutoUI
LuciEntry: Towards Understanding the Design of Lucid Dream Induction
Lucid dreaming, a state in which people become aware that they are dreaming, is known for its many mental and physical health benefits. However, most lucid dream induction techniques, such as reality testing, require significant time and effort to master, creating a barrier for people seeking these experiences. We designed LuciEntry, a portable interactive prototype aimed at helping people induce lucid dreaming through well-timed visual and auditory cues. We conducted a lab and a field study to understand LuciEntry's user experience. The interview data allowed us to identify three themes. Building on these findings and our design practice, we derived seven considerations to guide the design of future lucid dream systems. Ultimately, this work aims to inspire further research into interactive technologies for altered states of consciousness.
2025 · Po-Yao (Cosmos) Wang et al. · Mental Health Apps & Online Support Communities · DIS
An Exploratory Study on How AI Awareness Impacts Human-AI Design Collaboration
The collaborative design process is intrinsically complicated and dynamic, and researchers have long been exploring how to enhance efficiency in this process. As Artificial Intelligence (AI) technology evolves, it has been widely used as a design tool and has exhibited potential as a design collaborator. Nevertheless, the question of how designers should communicate with AI in collaborative design remains unsolved. To address this research gap, we looked to how designers communicate fluently in human-human design collaboration, and found awareness to be an important ability for facilitating communication through understanding one's collaborators and the current situation. However, previous research has mainly studied and supported human awareness; the possible impact AI awareness would bring to the human-AI collaborative design process, and the way to realize AI awareness, remain unknown. In this study, we explored how AI awareness would impact human-AI collaboration through a Wizard-of-Oz experiment. Both quantitative and qualitative results supported that enabling AI to have awareness can enhance communication fluidity between humans and AI, thus enhancing collaboration efficiency. We further discuss the results and conclude with design implications for future human-AI collaborative design systems.
2025 · Zhuoyi Cheng et al. · Human-LLM Collaboration · AI-Assisted Decision-Making & Automation · IUI
SkinGEN: an Explainable Dermatology Diagnosis-to-Generation Framework with Interactive Vision-Language Models
With the continuous advancement of vision-language model (VLM) technology, remarkable research achievements have emerged in the dermatology field, the fourth most prevalent human disease category. However, despite these advancements, VLMs still face explainability problems in diagnosis due to the inherent complexity of dermatological conditions, and existing tools offer relatively limited support for user comprehension. We propose SkinGEN, a diagnosis-to-generation framework that leverages the Stable Diffusion (SD) model to generate reference demonstrations from diagnosis results provided by a VLM, thereby enhancing visual explainability for users. Through extensive experiments with Low-Rank Adaptation (LoRA), we identify optimal strategies for skin condition image generation. We conduct a user study with 32 participants evaluating both system performance and explainability. Results demonstrate that SkinGEN significantly improves users' comprehension of VLM predictions and fosters increased trust in the diagnostic process. This work paves the way for more transparent and user-centric VLM applications in dermatology and beyond.
2025 · Yuyu Lin et al. · Brain-Computer Interface (BCI) & Neurofeedback · Explainable AI (XAI) · IUI
DreamDirector: Designing a Generative AI System to Aid Therapists in Treating Clients' Nightmares
Nightmares can adversely affect individuals' mental health and well-being, necessitating timely psychological intervention. Current nightmare therapy places high demands on therapists, appears abstract to clients, and suffers from poor therapist-client interaction, due to its extensive information input, lack of sensory stimulation, and exclusive reliance on one-on-one conversation. We propose DreamDirector, a visual-interactive and narrative system powered by generative AI. Based on Imagery Rehearsal Therapy (IRT) and Nightmare Deconstruction and Reprocessing (NDR), the system can (1) recollect the nightmare scene, (2) interpret the dream with an LLM, and (3) reprocess the nightmare by generating therapeutic dream visuals using AI painting alongside meditative texts, providing feedback with a picture book. Finally, we verified the usability of the system in terms of efficiency enhancement and interaction promotion through a user study with 2 therapists and 16 clients. The results revealed emotional relief among clients, with a positive and impressed attitude toward visual interaction.
2025 · Yijun Zhao et al. · Generative AI (Text, Image, Music, Video) · Mental Health Apps & Online Support Communities · IUI
TactStyle: Generating Tactile Textures with Generative AI for Digital Fabrication
Recent work in Generative AI enables the stylization of 3D models based on image prompts. However, these methods do not incorporate tactile information, leading to designs that lack the expected tactile properties. We present TactStyle, a system that allows creators to stylize 3D models with images while incorporating the expected tactile properties. TactStyle accomplishes this using a modified image-generation model fine-tuned to generate heightfields for given surface textures. By optimizing 3D model surfaces to embody a generated texture, TactStyle creates models that match the desired style and replicate the tactile experience. We utilize a large-scale dataset of textures to train our texture generation model. In a psychophysical experiment, we evaluate the tactile qualities of a set of 3D-printed original textures and TactStyle's generated textures. Our results show that TactStyle successfully generates a wide range of tactile features from a single image input, enabling a novel approach to haptic design.
2025 · Faraz Faruqi et al. · MIT CSAIL · Force Feedback & Pseudo-Haptic Weight · Shape-Changing Interfaces & Soft Robotic Materials · Generative AI (Text, Image, Music, Video) · CHI