Beyond Time and Accuracy: Strategies in Visual Problem-Solving
In this paper, we explore viewers' strategies in visual problem-solving tasks. We build on the traditional metrics of accuracy and time to better understand the learning that occurs as individuals interact with visualizations. We conducted an in-lab eye-tracking user study with 53 participants from diverse demographic backgrounds. Using questions from the Visualization Literacy Assessment Test (VLAT), we examined participants' problem-solving strategies. We employed a mixed-methods approach capturing quantitative data on performance and gaze patterns, as well as qualitative data through think-alouds and sketches by participants as they reported on their problem-solving approach. Our analysis reveals not only the various cognitive strategies leading to correct answers but also the nature of mistakes and the conceptual misunderstandings that underlie them. This research contributes to the enhancement of visualization design guidelines by incorporating insights into the diverse strategies and cognitive processes employed by users.
Eric Mörth et al. (Harvard Medical School, Department of Biomedical Informatics). CHI 2025. Topics: Eye Tracking & Gaze Interaction; Interactive Data Visualization; Visualization Perception & Cognition.

Reading Between the Pixels: Investigating the Barriers to Visualization Literacy
In our current visual-centric digital age, the capability to interpret, understand, and produce visual representations of data, termed visualization literacy, is paramount. However, not everyone is adept at navigating this visual terrain. This paper explores the barriers that individuals who misread a visualization encounter, aiming to understand their specific mental gaps. Utilizing a mixed-method approach, we administered the Visualization Literacy Assessment Test (VLAT) to a group of 120 participants drawn from diverse demographic backgrounds, which provided us with 1774 task completions. We augmented the standard VLAT test to capture quantitative and qualitative data on participants' errors. We collected participant sketches and open-ended text about their analysis approach, providing insight into users' mental models and rationale. Our findings reveal that individuals who incorrectly answer visualization literacy questions often misread visual channels, confound chart labels with data values, or struggle to translate data-driven questions into visual queries. Recognizing and bridging visualization literacy gaps not only ensures inclusivity but also enhances the overall effectiveness of visual communication in our society.
Carolina Nobre et al. (University of Toronto). CHI 2024. Topics: Visualization Perception & Cognition.

iBall: Augmenting Basketball Videos with Gaze-moderated Embedded Visualizations
We present iBall, a basketball video-watching system that leverages gaze-moderated embedded visualizations to facilitate game understanding and engagement of casual fans. Video broadcasting and online video platforms make watching basketball games increasingly accessible. Yet, for new or casual fans, watching basketball videos is often confusing due to their limited basketball knowledge and the lack of accessible, on-demand information to resolve their confusion. To assist casual fans in watching basketball videos, we compared the game-watching behaviors of casual and die-hard fans in a formative study and developed iBall based on the findings. iBall embeds visualizations into basketball videos using a computer vision pipeline, and automatically adapts the visualizations based on the game context and users' gaze, helping casual fans appreciate basketball games without being overwhelmed. We confirmed the usefulness, usability, and engagement of iBall in a study with 16 casual fans, and further collected feedback from 8 die-hard fans.
Zhutian Chen et al. (Harvard University). CHI 2023. Topics: Eye Tracking & Gaze Interaction; Interactive Data Visualization.

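The gaze-moderation idea lends itself to a simple illustration. The sketch below is a minimal, hedged approximation of the general principle rather than the authors' computer vision pipeline: given the viewer's gaze point and player bounding boxes for the current frame, only the player under the viewer's attention gets a detailed embedded overlay. The function name, box format, and distance threshold are illustrative assumptions.

```python
# Minimal hedged sketch of gaze-moderated overlay selection (not iBall's pipeline):
# show a detailed overlay only for the player the viewer is looking at, and keep the
# rest minimal so casual fans are not overwhelmed. Threshold and formats are assumptions.
def select_overlays(gaze, player_boxes, detail_radius=80):
    """gaze: (x, y) in pixels; player_boxes: {player_id: (x, y, w, h)}."""
    gx, gy = gaze
    overlays = {}
    for player_id, (x, y, w, h) in player_boxes.items():
        cx, cy = x + w / 2, y + h / 2
        near_gaze = (gx - cx) ** 2 + (gy - cy) ** 2 <= detail_radius ** 2
        overlays[player_id] = "detailed" if near_gaze else "minimal"
    return overlays

# Example: the viewer looks near player 23, so only that overlay is detailed.
# print(select_overlays((510, 300), {23: (480, 260, 60, 120), 11: (100, 250, 60, 120)}))
```
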
The Pattern is in the Details: An Evaluation of Interaction Techniques for Locating, Searching, and Contextualizing Details in Multivariate Matrix Visualizations
Matrix visualizations are widely used to display large-scale network, tabular, set, or sequential data. They typically encode only a single value per cell, e.g., through color. However, this can greatly limit the visualizations' utility when exploring multivariate data, where each cell represents a data point with multiple values (referred to as details). Three well-established interaction approaches are applicable to multivariate matrix visualizations (MMV): focus+context, pan&zoom, and overview+detail. However, there is little empirical knowledge of how these approaches compare in exploring MMV. We report on two studies comparing them for locating, searching, and contextualizing details in MMV. We first compared four focus+context techniques and found that the fisheye lens overall outperformed the others. We then compared the fisheye lens to pan&zoom and overview+detail. We found that pan&zoom was faster in locating and searching details, and as good as overview+detail in contextualizing details.
Yalong Yang et al. (Virginia Tech). CHI 2022. Topics: Interactive Data Visualization; Time-Series & Network Graph Visualization.

Towards an Understanding of Situated AR Visualization for Basketball Free-Throw Training
We present an observational study to compare co-located and situated real-time visualizations in basketball free-throw training. Our goal is to understand the advantages and concerns of applying immersive visualization to real-world skill-based sports training and to provide insights for designing AR sports training systems. We design both a situated 3D visualization on a head-mounted display and a 2D visualization on a co-located display to provide immediate visual feedback on a player's shot performance. Using a within-subject study design with experienced basketball shooters, we characterize user goals, report on qualitative training experiences, and compare the quantitative training results. Our results show that real-time visual feedback helps athletes refine subsequent shots. Shooters in our study achieve greater angle consistency with our visual feedback. Furthermore, AR visualization promotes an increased focus on body form in athletes. Finally, we present suggestions for the design of future sports AR studies.
Tica Lin et al. (Harvard University). CHI 2021. Topics: AR Navigation & Context Awareness; Context-Aware Computing.

Ask Me or Tell Me? Enhancing the Effectiveness of Crowdsourced Design Feedback
Crowdsourced design feedback systems are emerging resources for getting large amounts of feedback in a short period of time. Traditionally, the feedback comes in the form of a declarative statement, which often contains positive or negative sentiment. Prior research has shown that overly negative or positive sentiment can strongly influence the perceived usefulness and acceptance of feedback and, subsequently, lead to ineffective design revisions. To enhance the effectiveness of crowdsourced design feedback, we investigate a new approach for mitigating the effects of negative or positive feedback by combining open-ended and thought-provoking questions with declarative feedback statements. We conducted two user studies to assess the effects of question-based feedback on the sentiment and quality of design revisions in the context of graphic design. We found that crowdsourced question-based feedback contains more neutral sentiment than statement-based feedback. Moreover, we provide evidence that presenting feedback as questions followed by statements leads to better design revisions than question- or statement-based feedback alone.
Fritz Lekschas et al. (Harvard University). CHI 2021. Topics: Creative Collaboration & Feedback Systems; Crowdsourcing Task Design & Quality Control; Prototyping & User Testing.

reVISit: Looking Under the Hood of Interactive Visualization Studies
Quantifying user performance with metrics such as time and accuracy does not show the whole picture when researchers evaluate complex, interactive visualization tools. In such systems, performance is often influenced by different analysis strategies that statistical analysis methods cannot account for. To remedy this lack of nuance, we propose a novel analysis methodology for evaluating complex interactive visualizations at scale. We implement our analysis methods in reVISit, which enables analysts to explore participant interactions, performance metrics, and responses in the context of users' analysis strategies. Replays of participant sessions can aid in identifying usability problems during pilot studies and make individual analysis processes salient. To demonstrate the applicability of reVISit to visualization studies, we analyze participant data from two published crowdsourced studies. Our findings show that reVISit can be used to reveal and describe novel interaction patterns, to analyze performance differences between different analysis strategies, and to validate or challenge design decisions.
Carolina Nobre et al. (Harvard University). CHI 2021. Topics: Interactive Data Visualization; Visualization Perception & Cognition.

ICONATE: Automatic Compound Icon Generation and Ideation
Compound icons are prevalent on signs, webpages, and infographics, effectively conveying complex and abstract concepts, such as "no smoking" and "health insurance", with simple graphical representations. However, designing such icons requires experience and creativity, in order to efficiently navigate the semantics, space, and style features of icons. In this paper, we aim to automate the process of generating icons given compound concepts, to facilitate rapid compound icon creation and ideation. Informed by ethnographic interviews with professional icon designers, we have developed ICONATE, a novel system that automatically generates compound icons based on textual queries and allows users to explore and customize the generated icons. At the core of ICONATE is a computational pipeline that automatically finds commonly used icons for sub-concepts and arranges them according to inferred conventions. To enable the pipeline, we collected a new dataset, Compicon1k, consisting of 1000 compound icons annotated with semantic labels (i.e., concepts). Through user studies, we have demonstrated that our tool is able to automate or accelerate the compound icon design process for both novices and professionals.
Nanxuan Zhao et al. (Harvard University & City University of Hong Kong). CHI 2020. Topics: Generative AI (Text, Image, Music, Video); Human-LLM Collaboration; Graphic Design & Typography Tools.

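As a rough illustration of the retrieve-and-arrange idea behind such a pipeline (not ICONATE's actual implementation, which infers arrangement conventions from the Compicon1k annotations), the sketch below looks up an icon for each sub-concept in a label-annotated index and lays them out left to right. The index structure, function name, and fixed horizontal layout are assumptions made for the example.

```python
# Illustrative retrieve-and-arrange sketch (assumed names, not ICONATE itself):
# map each sub-concept to an annotated icon and place icons on a simple horizontal grid.
def compose_compound_icon(concepts, icon_index, cell=64, gap=8):
    """concepts: e.g. ["health", "insurance"]; icon_index: concept label -> icon file."""
    layout = []
    x = 0
    for concept in concepts:
        icon = icon_index.get(concept)
        if icon is None:
            raise KeyError(f"no icon annotated for concept '{concept}'")
        layout.append({"icon": icon, "x": x, "y": 0, "size": cell})
        x += cell + gap  # advance to the next slot in the row
    return layout

# Example with a toy index:
# print(compose_compound_icon(["health", "insurance"],
#                             {"health": "heart.svg", "insurance": "shield.svg"}))
```
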
Exploring Visual Information Flows in Infographics
Infographics are engaging visual representations that tell an informative story using a fusion of data and graphical elements. The large variety of infographic designs poses a challenge for their high-level analysis. We use the concept of Visual Information Flow (VIF), which is the underlying semantic structure that links graphical elements to convey the information and story to the user. To explore VIF, we collected a repository of over 13K infographics. We use a deep neural network to identify visual elements related to information, agnostic to their various artistic appearances. We construct the VIF by automatically chaining these visual elements together based on Gestalt principles. Using this analysis, we characterize the VIF design space by a taxonomy of 12 different design patterns. Exploring a real-world infographic dataset, we discuss the design space and potential of VIF in light of this taxonomy.
Min Lu et al. (Shenzhen University). CHI 2020. Topics: Interactive Data Visualization; Data Storytelling.

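To make the chaining step concrete, here is a hedged sketch of one Gestalt cue, proximity, applied to detected element bounding boxes. The published pipeline uses a neural detector and richer grouping rules, so the nearest-neighbor heuristic, box format, and starting-point rule below are illustrative assumptions only.

```python
# Hedged sketch: order detected infographic elements into a flow by proximity.
import math

def chain_by_proximity(boxes):
    """boxes: list of (x, y, w, h); returns element indices approximating a visual flow."""
    centers = [(x + w / 2, y + h / 2) for x, y, w, h in boxes]
    remaining = list(range(len(boxes)))
    # Start at the top-left-most element, a common reading entry point (assumption).
    current = min(remaining, key=lambda i: centers[i][0] + centers[i][1])
    order = [current]
    remaining.remove(current)
    while remaining:
        cx, cy = centers[current]
        # Greedily hop to the spatially nearest unvisited element.
        current = min(remaining, key=lambda i: math.hypot(centers[i][0] - cx, centers[i][1] - cy))
        order.append(current)
        remaining.remove(current)
    return order

# Example: three elements laid out top-to-bottom are chained in reading order.
# print(chain_by_proximity([(10, 10, 50, 30), (12, 200, 50, 30), (15, 100, 50, 30)]))
```
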
DataSelfie: Empowering People to Design Personalized Visuals to Represent Their Data
Many personal informatics systems allow people to collect and manage personal data and reflect more deeply on themselves. However, these tools rarely offer ways to customize how the data is visualized. In this work, we investigate the question of how to enable people to determine the representation of their data. We analyzed the Dear Data project to gain insights into the design elements of personal visualizations. We developed DataSelfie, a novel system that allows individuals to gather personal data and design custom visuals to represent the collected data. We conducted a user study to evaluate the usability of the system as well as its potential for individual and collaborative sensemaking of the data.
Nam Wook Kim et al. (Harvard University). CHI 2019. Topics: Interactive Data Visualization; Data Storytelling.

DataToon: Drawing Dynamic Network Comics With Pen + Touch Interaction
Comics are an entertaining and familiar medium for presenting compelling stories about data. However, existing visualization authoring tools do not leverage this expressive medium. In this paper, we seek to incorporate elements of comics into the construction of data-driven stories about dynamic networks. We contribute DataToon, a flexible data comic storyboarding tool that blends analysis and presentation with pen and touch interactions. A storyteller can use DataToon to rapidly generate visualization panels, annotate them, and position them within a canvas to produce a visually compelling narrative. In a user study, participants quickly learned to use DataToon for producing data comics.
Nam Wook Kim et al. (Microsoft Research & Harvard University). CHI 2019. Topics: Interactive Data Visualization; Data Storytelling; Creative Coding & Computational Art.

BubbleView: An Interface for Crowdsourcing Image Importance Maps and Tracking Visual Attention
In this article, we present BubbleView, an alternative methodology for eye tracking using discrete mouse clicks to measure which information people consciously choose to examine. BubbleView is a mouse-contingent, moving-window interface in which participants are presented with a series of blurred images and click to reveal "bubbles": small, circular areas of the image at original resolution, similar to having a confined area of focus like the eye fovea. Across 10 experiments with 28 different parameter combinations, we evaluated BubbleView on a variety of image types: information visualizations, natural images, static webpages, and graphic designs, and compared the clicks to eye fixations collected with eye-trackers in controlled lab settings. We found that BubbleView clicks can both (i) successfully approximate eye fixations on different images, and (ii) be used to rank image and design elements by importance. BubbleView is designed to collect clicks on static images, and works best for defined tasks such as describing the content of an information visualization or measuring image importance. BubbleView data is cleaner and more consistent than related methodologies that use continuous mouse movements. Our analyses validate the use of mouse-contingent, moving-window methodologies as approximating eye fixations for different image and task types.
Nam Wook Kim et al. (Harvard University). CHI 2018. Topics: Eye Tracking & Gaze Interaction; Visualization Perception & Cognition.

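The core mouse-contingent reveal is straightforward to sketch. Assuming Pillow is available, the snippet below blurs a stimulus image and composites original-resolution pixels back inside circular bubbles at the clicked locations; the file path, bubble radius, and blur strength are illustrative assumptions rather than the study's parameters.

```python
# Minimal illustrative sketch of a BubbleView-style reveal (not the authors' code):
# show a blurred image and, at each click location, restore the original-resolution
# pixels inside a circular "bubble".
from PIL import Image, ImageDraw, ImageFilter

def bubble_view(image_path, clicks, radius=40, blur_sigma=8):
    sharp = Image.open(image_path).convert("RGB")
    blurred = sharp.filter(ImageFilter.GaussianBlur(blur_sigma))
    # Build a mask that is opaque inside every clicked bubble.
    mask = Image.new("L", sharp.size, 0)
    draw = ImageDraw.Draw(mask)
    for x, y in clicks:
        draw.ellipse([x - radius, y - radius, x + radius, y + radius], fill=255)
    # Sharp pixels where the mask is set, blurred pixels everywhere else.
    return Image.composite(sharp, blurred, mask)

# Example: reveal two bubbles on a hypothetical stimulus image.
# bubble_view("visualization.png", [(120, 80), (300, 220)]).save("bubbleview_frame.png")
```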