Assessing Dynamic Flow Experience from EEG Signals: A Processing-based Approach
As an interaction experience goal, the flow experience is characterized by its subjectivity and dynamism. Exploring objective methods to assess dynamic flow states is significant for user experience design, evaluation, and optimization. This study aims to model the dynamics of the flow experience and quantify its intensity using electroencephalography (EEG) signals from a process perspective. To achieve this, an interactive task was designed to induce dynamic changes in flow while participants' EEG signals were recorded simultaneously, forming a flow assessment dataset. Subsequently, a frequency-aware convolutional Transformer model (FA-ConFormer) was proposed to extract dynamic features from EEG, with particular optimization for capturing complex dynamic features in the frequency domain. Experimental results demonstrate that FA-ConFormer outperforms existing methods in flow state and intensity recognition, and the visualization of the flow process dynamically depicts the onset, development, peak, and decline of flow at varying intensities, helping to deepen the understanding of the flow experience.
2025 · Juan Liu et al. · UIST · Brain-Computer Interface (BCI) & Neurofeedback; Visualization Perception & Cognition

Libra: An Interaction Model for Data Visualization
While existing visualization libraries enable the reuse, extension, and combination of static visualizations, achieving the same for interactions remains nearly impossible. We contribute an interaction model and its implementation to achieve this goal. Our model enables the creation of interactions that support direct manipulation, enforce software modularity by clearly separating visualizations from interactions, and ensure compatibility with existing visualization systems. Interaction management is achieved through an instrument that receives events from the view, dispatches these events to graphical layers containing objects, and then triggers actions. We present a JavaScript prototype implementation of our model called Libra.js, enabling the specification of interactions for visualizations created by different libraries. We demonstrate the effectiveness of Libra by describing and generating a wide range of existing interaction techniques. We evaluate Libra.js through diverse examples, a metric-based notation comparison, and a performance benchmark analysis.
2025 · Yue Zhao et al., School of Computer Science and Technology, Shandong University · CHI · Interactive Data Visualization; Time-Series & Network Graph Visualization

Seeing Through the Overlap: The Impact of Color and Opacity on Depth Order Perception in Visualization
Semi-transparent visualizations are commonly used to reveal information in overlapped regions by applying colors and opacity. While a few studies have made recommendations on how to choose colors and opacity levels to maintain depth perception, these recommendations often conflict and overlook the interaction effect between the two factors. In this paper, we systematically explore the impact of color and opacity on depth order perception across eight colors, three opacity levels, and various layer orders and arrangements. Our inferential analysis shows that both color hue and opacity significantly influence depth order perception, with their effectiveness depending on their interaction. We also derived 12 features for predictive analysis, achieving a best mean accuracy of 80.72% and mean F1 score of 87.75%, with the opacity assigned to the front layer as the top feature for most models. Finally, we provide a small design tool and four guidelines to better align the design rules of colors and opacity in semi-transparent visualizations.
2025 · Zhiyuan Meng et al., Shandong University · CHI · Interactive Data Visualization; Uncertainty Visualization; Visualization Perception & Cognition

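The depth-order ambiguity studied above arises from standard source-over alpha compositing: a front layer F drawn at opacity α over an opaque back layer B yields αF + (1 − α)B, so the same two colors produce different blends depending on which layer is in front. A minimal sketch (the colors and opacity are illustrative, not the paper's stimuli):

```python
def composite(front, back, alpha):
    """Source-over compositing: front RGB at opacity `alpha` over an opaque back layer."""
    return tuple(alpha * f + (1 - alpha) * b for f, b in zip(front, back))

red, blue = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)

# Same two colors, same 30% opacity, opposite layer orders:
red_front = composite(red, blue, 0.3)   # bluish purple
blue_front = composite(blue, red, 0.3)  # reddish purple
```

Swapping the layer order shifts the blend between a bluish and a reddish purple; this difference is the cue viewers must invert to judge which layer is in front.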
QCM: A Curvature Manipulation Method to Suppress Discomfort in Redirected Walking
In redirected walking techniques, curvature gain and bending gain, together referred to as curvature manipulation, are important redirection gains. The applied gains can differ when multiple paths are mapped, and sudden changes in gain may cause discomfort. This study proposes quadratic curvature manipulation (QCM), based on the habituation mechanism, to effectively reduce discomfort. This method adjusts the path curvature quadratically, thereby reducing users' perception of curvature changes. Furthermore, we introduce the segmented curvature change (SCC) mode, which combines QCM with linear curvature manipulation to facilitate more natural gain transitions and further reduce discomfort. Two experiments were conducted. Experiment 1 examined the relationship between QCM parameters and the gains at which users felt discomfort. Experiment 2 further examined the effects of different curvature change modes on discomfort. The results indicate that using the SCC mode in curvature manipulations is more effective than other methods in reducing discomfort.
2025 · Xiyu Bao et al., Shandong University · CHI · Full-Body Interaction & Embodied Input; Immersion & Presence Research

RASSAR: Room Accessibility and Safety Scanning in Augmented Reality
The safety and accessibility of our homes are critical and evolve as we age, become ill, host guests, or experience life events such as having children. Researchers and health professionals have created assessment instruments such as checklists that enable homeowners and trained experts to identify and mitigate safety and access issues. With advances in computer vision, augmented reality (AR), and mobile sensors, new approaches are now possible. We introduce RASSAR, a mobile AR application for semi-automatically identifying, localizing, and visualizing indoor accessibility and safety issues such as an inaccessible table height or unsafe loose rugs using LiDAR and real-time computer vision. We present findings from three studies: a formative study with 18 participants across five stakeholder groups to inform the design of RASSAR, a technical performance evaluation across ten homes demonstrating state-of-the-art performance, and a user study with six stakeholders. We close with a discussion of future AI-based indoor accessibility assessment tools, RASSAR's extensibility, and key application scenarios.
2024 · Xia Su et al., University of Washington · CHI · AR Navigation & Context Awareness; Context-Aware Computing

Color Maker: a Mixed-Initiative Approach to Creating Accessible Color Maps
Quantitative data is frequently represented using color, yet designing effective color mappings is a challenging task, requiring one to balance perceptual standards with personal color preference. Current design tools either overwhelm novices with complexity or offer limited customization options. We present ColorMaker, a mixed-initiative approach for creating colormaps. ColorMaker combines fluid user interaction with real-time optimization to generate smooth, continuous color ramps. Users specify their loose color preferences while leaving the algorithm to generate precise color sequences, meeting both designer needs and established guidelines. ColorMaker can create new colormaps, including designs accessible for people with color-vision deficiencies, starting from scratch or with only partial input, thus supporting ideation and iterative refinement. We show that our approach can generate designs with similar or superior perceptual characteristics to standard colormaps. A user study demonstrates how designers of varying skill levels can use this tool to create custom, high-quality colormaps. ColorMaker is available at: https://colormaker.org
2024 · Amey A. Salvi et al., Indiana University · CHI · Universal & Inclusive Design; Interactive Data Visualization; Visualization Perception & Cognition

Multi-Vib: Precise Multi-point Vibration Monitoring Using mmWave Radar"Vibration measurement is vital for fault diagnosis of structures (e.g., machines and civil structures). Different structure components undergo distinct vibration patterns, which jointly determine the structure's health condition, thus demanding simultaneous multi-point vibration monitoring. Existing solutions deploy multiple accelerometers along with their power supplies or laser vibrometers on the monitored object to measure multi-point vibration, which is inconvenient and costly. Cameras provide a less expensive solution while heavily relying on good lighting conditions. To overcome these limitations, we propose a cost-effective and passive system, called Multi-Vib, for precise multi-point vibration monitoring. Multi-Vib is implemented using a single mmWave radar to remotely and separately sense the vibration displacement of multiple points via signal reflection. However, simultaneously detecting and monitoring multiple points on a single object is a daunting task. This is because most radar signals are scattered away from vibration points due to their tilted locations and shapes by nature, causing an extremely weak reflected signal to the radar. To solve this issue, we dedicatedly design a physical marker placed on the target point, which can force the direction of the reflected signal towards the radar and significantly increase the reflected signal strength. Another practical issue is that the reflected signal from each point endures interferences and noises from the surroundings. Thus, we develop a series of effective signal processing methods to denoise the signal for accurate vibration frequency and displacement estimation. Extensive experimental results show that the average errors in multi-point vibration frequency and displacement estimation are around 0.16Hz and 14μm, respectively. 
https://dl.acm.org/doi/10.1145/3569496"2023YYYanni Yang et al.Human Pose & Activity RecognitionBiosensors & Physiological MonitoringUbiComp
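The last step of the pipeline above, estimating vibration frequency from the denoised displacement signal, can be sketched with a standard FFT peak pick. The signal, sampling rate, and noise level below are synthetic stand-ins for illustration, not the paper's data or method:

```python
import numpy as np

fs = 1000.0                      # sampling rate in Hz (illustrative)
t = np.arange(0, 2.0, 1 / fs)    # 2 s observation window (2000 samples)
true_freq = 37.5                 # vibration frequency to recover (illustrative)

# Synthetic "displacement" signal with additive noise, standing in for the
# phase-derived displacement a radar would measure at one marked point.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * true_freq * t) + 0.3 * rng.standard_normal(t.size)

# Estimate the dominant frequency from the magnitude spectrum (DC bin excluded).
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
est = freqs[1 + np.argmax(spectrum[1:])]
```

With a 2 s window at 1 kHz, the bin spacing is fs/N = 0.5 Hz, so a plain argmax resolves the tone only to the nearest half hertz; a sub-0.5 Hz error such as the paper's reported 0.16 Hz would require longer windows or sub-bin interpolation beyond this sketch.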
TwinkleTwinkle: Interacting with Your Smart Devices by Eye Blink
Recent years have witnessed a rapid boom in mobile devices, interwoven with the changes the epidemic has made to people's lives. Though a tremendous number of novel human-device interaction techniques have been put forward to serve various audiences and scenarios, limitations and inconveniences remain for people who have difficulty speaking or using their fingers/hands/arms, or who wear masks/glasses/gloves. To fill the gap in such interaction contexts beyond using hands, voice, face, or mouth, in this work we take the first step toward a novel Human-Computer Interaction (HCI) system, TwinkleTwinkle, which senses and recognizes eye blink patterns in a contact-free and training-free manner, leveraging ultrasound signals on commercial devices. TwinkleTwinkle first applies a phase-difference-based approach to depict candidate eye blink motion profiles without removing any noise, and then models the intrinsic characteristics of blink motions through adaptive constraints to separate tiny patterns from interference even though blink habits and involuntary movements vary between individuals. We propose a vote-based approach to obtain final patterns, which are designed to map to number combinations that are either self-defined or based on carriers such as ASCII code and Morse code, so that interaction is seamlessly embedded in familiar language systems. We implement TwinkleTwinkle on smartphones, with all methods realized in the time domain, and conduct extensive evaluations in various settings. Results show that TwinkleTwinkle achieves about 91% accuracy in recognizing 23 blink patterns across different people.
https://dl.acm.org/doi/10.1145/3596238
2023 · Haiming Cheng et al. · UbiComp · Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS); Eye Tracking & Gaze Interaction

Interactive Context-Preserving Color Highlighting for Multiclass Scatterplots
Color is one of the main visual channels used for highlighting elements of interest in visualization. However, in multi-class scatterplots, color highlighting often comes at the expense of degraded color discriminability. In this paper, we argue for context-preserving highlighting during the interactive exploration of multi-class scatterplots to achieve desired pop-out effects, while maintaining good perceptual separability among all classes and consistent color mapping schemes under varying points of interest. We do this by first generating two contrastive color mapping schemes with large and small contrasts to the background. Both schemes maintain good perceptual separability among all classes and ensure that when colors from the two palettes are assigned to the same class, they have a high color consistency in color names. We then interactively combine these two schemes to create a dynamic color mapping for highlighting different points of interest. We demonstrate the effectiveness through crowd-sourced experiments and case studies.
2023 · Kecheng Lu et al., Shandong University · CHI · Interactive Data Visualization

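The dynamic combination of the two contrastive schemes can be pictured as per-class interpolation: the class of interest is pushed to its large-contrast color while the remaining classes keep their small-contrast variants. A hypothetical sketch in plain RGB (the paper's palettes are built with perceptual separability and color-name consistency constraints; the palettes and interpolation space here are invented):

```python
def lerp(c1, c2, t):
    """Linear interpolation between two RGB colors, t in [0, 1]."""
    return tuple((1 - t) * a + t * b for a, b in zip(c1, c2))

def highlight(small_contrast, large_contrast, focus, t=1.0):
    """Move the focused class toward its large-contrast (pop-out) color;
    all other classes stay at their small-contrast (receding) colors."""
    return [lerp(s, l, t) if i == focus else s
            for i, (s, l) in enumerate(zip(small_contrast, large_contrast))]

# Illustrative two-class palettes: muted (small contrast) vs. vivid (large contrast).
muted = [(0.8, 0.8, 0.9), (0.9, 0.8, 0.8)]
vivid = [(0.1, 0.1, 0.6), (0.6, 0.1, 0.1)]
colors = highlight(muted, vivid, focus=0)  # class 0 pops out; class 1 recedes
```

Because each class interpolates only between its own two name-consistent variants, changing the point of interest re-weights contrast without reassigning hues, which is what keeps the mapping stable during exploration.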
It's Touching: Understanding Touch-Affect Association in Shape-Change with Kinematic Features
With the proliferation of shape-change research in affective computing, there is a need to deepen understanding of affective responses to shape-change displays. Little research has focused on affective reactions to tactile experiences in shape-change, particularly in the absence of visual information. It is also rare to study responses to shape-change as it unfolds, isolated from a final shape-change outcome. We report on two studies of touch-affect associations, using the crossmodal "Bouba-Kiki" paradigm, to understand affective responses to shape-change as it unfolds. We investigate experiences with a shape-change gadget as it moves between rounded ("Bouba") and spiky ("Kiki") forms. We capture affective responses via the circumplex model, and use a motion analysis approach to understand the certainty of these responses. We find that touch-affect associations are influenced by both the size and the frequency of the shape-change and may be modality-dependent, and that certainty in affective associations is influenced by association-consistency.
2022 · Feng Feng et al., University of Bristol · CHI · Shape-Changing Interfaces & Soft Robotic Materials; Visualization Perception & Cognition

CAST: Authoring Data-Driven Chart Animations
We present CAST, an authoring tool that enables the interactive creation of chart animations. It introduces the visual specification of chart animations consisting of keyframes that can be played sequentially or simultaneously, and animation parameters (e.g., duration, delay). Building on Canis, a declarative chart animation grammar that leverages data-enriched SVG charts, CAST supports auto-completion for constructing both keyframes and keyframe sequences. It also enables users to refine the animation specification with direct manipulation (e.g., aligning keyframes across tracks to play them together, adjusting delay) and to adjust other parameters for animation effects (e.g., animation type, easing function) using a control panel. In addition to describing how CAST infers recommendations for auto-completion, we present a gallery of examples to demonstrate the expressiveness of CAST and a user study to verify its learnability and usability. Finally, we discuss the limitations and potentials of CAST as well as directions for future research.
2021 · Tong Ge et al., Shandong University · CHI · Interactive Data Visualization; 3D Modeling & Animation

Data-Driven Mark Orientation for Trend Estimation in Scatterplots
A common task for scatterplots is communicating trends in bivariate data. However, the ability of people to visually estimate these trends is under-explored, especially when the data violate assumptions required for common statistical models, or visual trend estimates are in conflict with statistical ones. In such cases, designers may need to intervene and de-bias these estimations, or otherwise inform viewers about differences between statistical and visual trend estimations. We propose data-driven mark orientation as a solution in such cases, where the directionality of marks in the scatterplot guides participants when visual estimation is otherwise unclear or ambiguous. Through a set of laboratory studies, we investigate trend estimation across a variety of data distributions and mark directionalities, and find that data-driven mark orientation can help resolve ambiguities in visual trend estimates.
2021 · Tingting Liu et al., School of Computer Science · CHI · Interactive Data Visualization; Visualization Perception & Cognition

Exploring the Weak Association between Flow Experience and Performance in Virtual Environments
Many studies conducted in non-virtual activities have shown that flow significantly influences performance, yet studies in virtual activities often reveal only a weak association. This paper begins by building a theoretical explanatory model, and then conducts three empirical studies to explore this question. Study 1 examines the mechanism of the weak association in two virtual activities. Study 2 tests the effectiveness of a potential approach to strengthen this association. In Study 3, we applied our proposed model and design approach to optimize a VR tennis game. Results show that the influence of flow on performance was not significant in virtual activities where the primary task and the operation of interactive artifacts were less congruent, such that the artifacts can lead to a flow experience that is independent of the primary task. Our research offers a theoretical and empirical basis for optimizing virtual environment design and maximizing the positive effects of the flow experience.
2018 · Yulong Bian et al., Shandong University · CHI · Immersion & Presence Research; Serious & Functional Games

Two Kinds of Novel Multi-user Immersive Display Systems
Stereoscopic display is a standard display mode for virtual reality environments. Typical 3D projection provides only a single stereoscopic video stream; thus co-located users cannot correctly perceive the virtual scene based on their own position and view. Several works have been devoted to developing multi-user stereoscopic displays, but the number of users is very limited or the technical implementation is complicated. In this paper we put forward two flexible and simple projection-based multi-user stereoscopic display systems. The first, named TPA, is based on a triple-projector array and provides 120 Hz active stereo for three users; two TPAs can be combined to form a six-user system. The second, named DPA, is a dual-projector, easily implemented system providing an individual stereoscopic video stream for two to six users. Finally, a co-located multi-user virtual fireman simulation training system and a virtual tennis simulation system were created to verify the effectiveness of our systems.
2018 · Dongdong Guan et al., Shandong University · CHI · Social & Collaborative VR; Immersion & Presence Research