Creative Blends of Visual Concepts
Visual blends combine elements from two distinct visual concepts into a single, integrated image, with the goal of conveying ideas through imaginative and often thought-provoking visuals. Communicating abstract concepts through visual blends poses a series of conceptual and technical challenges. To address these challenges, we introduce Creative Blends, an AI-assisted design system that leverages metaphors to visually symbolize abstract concepts by blending disparate objects. Our method harnesses commonsense knowledge bases and large language models to align designers’ conceptual intent with expressive concrete objects. Additionally, we employ generative text-to-image techniques to blend visual elements through their overlapping attributes. A user study (N=24) demonstrated that our approach reduces participants’ cognitive load, fosters creativity, and enhances the metaphorical richness of visual blend ideation. We explore the potential of our method to expand visual blends to include multiple object blending and discuss the insights gained from designing with generative AI.
2025 · Zhida Sun et al. · Shenzhen University, CSSE · Generative AI (Text, Image, Music, Video); AI-Assisted Creative Writing; Graphic Design & Typography Tools · CHI
InsightBridge: Enhancing Empathizing with Users through Real-Time Information Synthesis and Visual Communication
User-centered design requires researchers to deeply understand target users throughout the design process. However, during early-stage user interviews, researchers may misinterpret users due to time constraints, incorrect assumptions, and communication barriers. To address this challenge, we introduce InsightBridge, a tool that supports real-time, AI-assisted information synthesis and visual-based verification. InsightBridge automatically organizes relevant information from ongoing interview conversations into an empathy map. It further allows researchers to specify elements to generate visual abstracts depicting the selected information, and then review these visuals with users and refine them as needed. We evaluated the effectiveness of InsightBridge through a within-subject study (N=32) from both the researchers’ and users’ perspectives. Our findings indicate that InsightBridge can assist researchers in note-taking and organization, as well as timely visual checking, thereby enhancing mutual understanding with users. Additionally, users’ discussions of the visuals prompt them to recall overlooked details and scenarios, leading to more insightful ideas.
2025 · Junze Li et al. · The Hong Kong University of Science and Technology · Generative AI (Text, Image, Music, Video); Interactive Data Visualization; Visualization Perception & Cognition · CHI
CUPID: Improving Battle Fairness and Position Satisfaction in Online MOBA Games with a Re-matchmaking System
The multiplayer online battle arena (MOBA) genre has gained significant popularity and economic success, attracting considerable research interest within the Human-Computer Interaction community. Enhancing the gaming experience requires a deep understanding of player behavior, and a crucial aspect of MOBA games is matchmaking, which aims to assemble teams of comparable skill levels. However, existing matchmaking systems often neglect important factors such as players' position preferences and team assignment, resulting in imbalanced matches and reduced player satisfaction. To address these limitations, this paper proposes CUPID, a novel framework that introduces a process called "re-matchmaking" to optimize team and position assignments, improving both fairness and player satisfaction. CUPID incorporates a pre-filtering step to ensure a minimum level of matchmaking quality, followed by a pre-match win-rate prediction model that evaluates the fairness of potential assignments. By simultaneously considering players' position satisfaction and game fairness, CUPID aims to provide an enhanced matchmaking experience. Extensive experiments were conducted on two large-scale, real-world MOBA datasets to validate the effectiveness of CUPID. CUPID surpasses all existing state-of-the-art baselines, with an average relative improvement of 7.18% in win prediction accuracy. Furthermore, CUPID has been successfully deployed in a popular online mobile MOBA game. The deployment resulted in significant improvements in match fairness and player satisfaction, as evidenced by critical Human-Computer Interaction (HCI) metrics covering usability, accessibility, and engagement, observed through A/B testing.
2024 · Ge Fan et al. · Session 4f: Multiplayer Gaming and Communication · CSCW
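The re-matchmaking idea the CUPID abstract describes can be illustrated as a small search over team and position assignments, scored by a win-rate model plus position satisfaction. The sketch below uses a hypothetical 2v2 setup; the player data, the logistic win-rate stub, and the equal weighting of fairness and satisfaction are all illustrative assumptions, not CUPID's actual model.

```python
# Toy sketch of re-matchmaking: enumerate team/position assignments for a
# matched pool and keep the one maximizing fairness + position satisfaction.
# All names and the scoring model are illustrative, not CUPID's code.
from itertools import permutations

PLAYERS = [  # (name, skill, preferred_position) -- made-up data
    ("p1", 1500, "top"), ("p2", 1480, "mid"),
    ("p3", 1520, "mid"), ("p4", 1510, "top"),
]
POSITIONS = ["top", "mid"]  # toy 2v2 game: two positions per team

def predicted_blue_win_rate(blue, red):
    """Stub win-rate model: logistic in the skill gap (a stand-in for a
    learned pre-match predictor)."""
    gap = sum(s for _, s, _ in blue) - sum(s for _, s, _ in red)
    return 1 / (1 + 2.718281828 ** (-gap / 100))

def position_satisfaction(team, positions):
    """Fraction of players assigned their preferred position."""
    return sum(p[2] == pos for p, pos in zip(team, positions)) / len(team)

def rematch(players):
    best, best_score = None, -1.0
    for perm in permutations(players):
        blue, red = perm[:2], perm[2:]
        # Fairness is 1 when the predicted win rate is exactly 0.5.
        fairness = 1 - abs(predicted_blue_win_rate(blue, red) - 0.5) * 2
        satisf = (position_satisfaction(blue, POSITIONS)
                  + position_satisfaction(red, POSITIONS)) / 2
        score = fairness + satisf  # equal weighting, purely illustrative
        if score > best_score:
            best, best_score = (blue, red), score
    return best, best_score

(blue, red), score = rematch(PLAYERS)
```

In this toy instance the search prefers an assignment where every player gets their preferred lane even at the cost of a slightly larger skill gap; a deployed system would learn the trade-off weights rather than fix them.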
ProInterAR: A Visual Programming Platform for Creating Immersive AR Interactions
AR applications commonly contain diverse interactions among different AR contents. Creating such applications requires creators to have advanced programming skills to script the interactive behaviors of AR content, repeatedly transfer and adjust virtual content between virtual and physical scenes, test by traversing between desktop interfaces and target AR scenes, and digitalize AR content. Existing immersive tools for prototyping/authoring such interactions are tailored for domain-specific applications. To support novice AR creators in programming general interactive behaviors of real and virtual objects and environments, we propose ProInterAR, an integrated visual programming platform for creating immersive AR applications with a tablet and an AR-HMD. Users can construct interaction scenes by creating virtual content and augmenting real content from the view of an AR-HMD, script interactive behaviors by stacking blocks on a tablet UI, and then execute and control the interactions in the AR scene. We showcase a wide range of AR application scenarios enabled by ProInterAR, including AR games, AR teaching, sequential animation, and AR information visualization. Two usability studies validate that novice AR creators can easily program various desired AR applications using ProInterAR.
2024 · Hui Ye et al. · City University of Hong Kong · AR Navigation & Context Awareness; Mixed Reality Workspaces · CHI
Beyond Numbers: Creating Analogies to Enhance Data Comprehension and Communication with Generative AI
Unfamiliar measurements usually hinder readers from grasping the scale of the numerical data, understanding the content, and feeling engaged with the context. To enhance data comprehension and communication, we leverage analogies to bridge the gap between abstract data and familiar measurements. In this work, we first conduct semi-structured interviews with design experts to identify design problems and summarize design considerations. Then, we collect an analogy dataset of 138 cases from various online sources. Based on the collected dataset, we characterize a design space for creating data analogies. Next, we build a prototype system, AnalogyMate, that automatically suggests data analogies, their corresponding design solutions, and generated visual representations powered by generative AI. The study results show the usefulness of AnalogyMate in aiding the creation process of data analogies and the effectiveness of data analogy in enhancing data comprehension and communication.
2024 · Qing Chen et al. · Tongji University · Generative AI (Text, Image, Music, Video); Data Storytelling · CHI
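The core move the abstract describes, re-expressing an unfamiliar quantity in terms of a familiar reference object, can be sketched in a few lines. The reference table, the "closest to a round multiple" heuristic, and the phrasing are illustrative assumptions, not taken from the AnalogyMate system.

```python
# Minimal data-analogy sketch: pick the familiar reference whose multiple of
# the input quantity lands closest to a round number. Values are assumed.
REFERENCES_M = {          # familiar lengths, in meters
    "football pitch": 105.0,
    "Olympic swimming pool": 50.0,
    "school bus": 12.0,
}

def suggest_analogy(length_m):
    """Return a readable analogy string for a length in meters."""
    name, ref = min(
        REFERENCES_M.items(),
        key=lambda kv: abs(length_m / kv[1] - round(length_m / kv[1])))
    return f"about {length_m / ref:.1f} {name}s ({length_m:,.0f} m)"

print(suggest_analogy(8848))  # e.g. the height of Mount Everest
```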
NF-Heart: A Near-field Non-contact Continuous User Authentication System via Ballistocardiogram
The increasingly remote workforce resulting from the global coronavirus pandemic has caused unprecedented cybersecurity concerns to organizations. Considerable evidence has shown that one-pass authentication fails to meet security needs when the workforce works from home. The recent advent of continuous authentication (CA) has shown the potential to solve this predicament. In this paper, we propose NF-Heart, a physiological-based CA system utilizing a ballistocardiogram (BCG). The key insight is that the BCG measures the body's micro-movements produced by the recoil force of the body in reaction to the cardiac ejection of blood, and we can infer cardiac biometrics from BCG signals. To measure BCG, we deploy a lightweight accelerometer on an office chair, turning the common chair into a smart continuous identity "scanner". We design multiple stages of signal processing to decompose and transform the distorted BCG signals so that the effects of motion artifacts and dynamic variations are eliminated. User-specific fiducial features are then extracted from the processed BCG signals for authentication. We conduct comprehensive experiments on 105 subjects in terms of verification accuracy, security, robustness, and long-term availability. The results demonstrate that NF-Heart achieves a mean balanced accuracy of 96.45% and a median equal error rate of 3.83% for CA. The proposed signal processing pipeline is effective in addressing various practical disturbances.
https://dl.acm.org/doi/10.1145/3580851
2023 · Zeyu Huang et al. · Passwords & Authentication · UbiComp
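The kind of pipeline the NF-Heart abstract outlines, detrending an accelerometer trace, detecting heartbeat peaks, and deriving a fiducial feature, can be sketched on a synthetic signal. The synthetic trace, moving-average detrend, threshold peak picker, and all constants are assumptions for the sketch, not NF-Heart's actual processing.

```python
# Illustrative BCG-style pipeline: remove baseline drift, find cardiac
# spikes, and estimate heart rate from inter-beat intervals.
import math

FS = 100  # sampling rate in Hz (assumed)

def synth_bcg(seconds=10, bpm=72):
    """Synthetic BCG-like trace: cardiac spikes plus slow respiratory drift."""
    beat = FS * 60 // bpm  # samples per beat
    return [0.2 * math.sin(2 * math.pi * t / (5 * FS))   # slow drift
            + (1.0 if t % beat == 0 else 0.0)            # cardiac spike
            for t in range(seconds * FS)]

def detrend(x, win=FS):
    """Subtract a centered moving average to remove drift."""
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - win // 2), min(len(x), i + win // 2)
        out.append(x[i] - sum(x[lo:hi]) / (hi - lo))
    return out

def peak_indices(x, thresh=0.5, refractory=FS // 3):
    """Threshold-crossing peak picker with a refractory period."""
    peaks, last = [], -refractory
    for i, v in enumerate(x):
        if v > thresh and i - last >= refractory:
            peaks.append(i)
            last = i
    return peaks

sig = detrend(synth_bcg())
peaks = peak_indices(sig)
ibi_s = [(b - a) / FS for a, b in zip(peaks, peaks[1:])]  # inter-beat intervals
bpm_est = 60 / (sum(ibi_s) / len(ibi_s))                  # fiducial feature
```

A real system would replace the threshold picker with the multi-stage decomposition the paper describes and feed many such fiducial features into a per-user classifier.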
Robust Finger Interactions with COTS Smartwatches via Unsupervised Siamese Adaptation
Wearable devices like smartwatches and smart wristbands have gained substantial popularity in recent years. However, their small interfaces create inconvenience and limit computing functionality. To fill this gap, we propose ViWatch, which enables robust finger interactions under deployment variations and relies on a single IMU sensor that is ubiquitous in COTS smartwatches. To this end, we design an unsupervised Siamese adversarial learning method. We built a real-time system on commodity smartwatches and tested it with over one hundred volunteers. Results show that the system accuracy is about 97% over a week. In addition, it is resistant to deployment variations such as different hand shapes, finger activity strengths, and smartwatch positions on the wrist. We also developed a number of mobile applications using our interactive system and conducted a user study where all participants preferred our unsupervised approach to supervised calibration. A demonstration of ViWatch is shown at https://youtu.be/N5-ggvy2qfI
2023 · Wenqiang Chen et al. · Foot & Wrist Interaction; Smartwatches & Fitness Bands; Biosensors & Physiological Monitoring · UIST
ContextWing: Pair-wise Visual Comparison for Evolving Sequential Patterns of Contexts in Social Media Data Streams
Understanding and comparing the evolution of public opinions on a social media event is important. However, such a task requires summarizing rich semantic information while comparing semantics and dynamics in depth at the same time, which makes analysis difficult. To tackle these challenges, we propose ContextWing, an interactive visual analytics system to support pair-wise comparison of evolving sequential patterns of contexts between two data streams. The computational model of ContextWing generates dynamic topics and sequential patterns, and characterizes public attention and pair-wise correlations. A novel multi-layer bilateral wing metaphor is designed to intuitively visualize sequential patterns merged from different contexts, revealing the similarities and differences in both temporal and semantic aspects between two streams. Interactive tools support the selection of a central keyword and its contexts to iteratively generate patterns for a focused exploration. The system supports analysis in both static and streaming settings, enabling a wider range of application scenarios. We verify the effectiveness and usability of ContextWing from multiple facets, including three case studies, two expert interviews, and a user study.
2023 · Yuheng Zhao et al. · Visualization · CSCW
Exploring Visual Information Flows in Infographics
Infographics are engaging visual representations that tell an informative story using a fusion of data and graphical elements. The large variety of infographic designs poses a challenge for their high-level analysis. We use the concept of Visual Information Flow (VIF), the underlying semantic structure that links graphical elements to convey the information and story to the user. To explore VIF, we collected a repository of over 13K infographics. We use a deep neural network to identify visual elements related to information, agnostic to their various artistic appearances. We construct the VIF by automatically chaining these visual elements together based on Gestalt principles. Using this analysis, we characterize the VIF design space with a taxonomy of 12 design patterns. Exploring a real-world infographic dataset, we discuss the design space and potential of VIF in light of this taxonomy.
2020 · Min Lu et al. · Shenzhen University · Interactive Data Visualization; Data Storytelling · CHI
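The chaining step the VIF abstract mentions can be illustrated with the simplest Gestalt cue, proximity: greedily link detected elements by repeatedly jumping to the nearest unvisited one. The element coordinates are made up, and this greedy chain is a stand-in for the paper's fuller Gestalt-based construction.

```python
# Toy proximity-based chaining of detected infographic elements into a flow.
# Coordinates and element names are illustrative.
import math

ELEMENTS = {  # element id -> (x, y) center of its detected bounding box
    "title": (200, 40), "icon1": (80, 120), "text1": (90, 160),
    "icon2": (300, 130), "text2": (310, 170),
}

def chain_by_proximity(elements, start):
    """Greedy nearest-neighbor chain over element centers."""
    order, remaining = [start], set(elements) - {start}
    while remaining:
        cx, cy = elements[order[-1]]
        nxt = min(remaining,
                  key=lambda e: math.hypot(elements[e][0] - cx,
                                           elements[e][1] - cy))
        order.append(nxt)
        remaining.remove(nxt)
    return order

flow = chain_by_proximity(ELEMENTS, "title")
```

Greedy proximity alone can zigzag across columns, which is one reason a full system would also weigh cues like alignment, similarity, and reading order.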