VizAbility: Enhancing Chart Accessibility with LLM-based Conversational Interaction
Traditional accessibility methods like alternative text and data tables typically underrepresent data visualization's full potential. Keyboard-based chart navigation has emerged as a potential solution, yet efficient data exploration remains challenging. We present VizAbility, a novel system that enriches chart content navigation with conversational interaction, enabling users to query visual data trends in natural language. VizAbility adapts to the user's navigation context for improved response accuracy and facilitates verbal command-based chart navigation. Furthermore, it can answer queries for contextual information, addressing the needs of visually impaired users. We designed a large language model (LLM)-based pipeline to handle these user queries, leveraging chart data & encoding, user context, and external web knowledge. We conducted both qualitative and quantitative studies to evaluate VizAbility's multimodal approach. We discuss further opportunities based on the results, including improved benchmark testing, incorporation of vision models, and integration with visualization workflows.
Joshua Gorniak et al. UIST 2024. Topics: Human-LLM Collaboration; Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille).
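The VizAbility entry above describes an LLM pipeline that grounds user questions in chart data & encoding, the user's navigation context, and external knowledge. A minimal sketch of how such a prompt might be assembled, assuming a Vega-Lite chart representation (the function name and interface are hypothetical; the paper does not publish this exact API):

```python
# Hypothetical sketch of a VizAbility-style prompt assembly. The paper
# describes combining chart data/encoding, the user's navigation context,
# and the question; this exact interface is an assumption.
import json

def build_chart_qa_prompt(vega_lite_spec: dict, data_rows: list,
                          nav_context: str, question: str) -> str:
    """Assemble an LLM prompt grounding a question in the chart and
    the user's current keyboard-navigation position."""
    return "\n\n".join([
        "You answer questions about a chart for a screen-reader user.",
        f"Chart encoding (Vega-Lite): {json.dumps(vega_lite_spec)}",
        f"Underlying data: {json.dumps(data_rows[:50])}",  # truncate for context limits
        f"The user is currently focused on: {nav_context}",
        f"Question: {question}",
    ])

prompt = build_chart_qa_prompt(
    {"mark": "bar", "encoding": {"x": {"field": "year"}, "y": {"field": "sales"}}},
    [{"year": 2020, "sales": 14}, {"year": 2021, "sales": 21}],
    "bar for year 2021",
    "Which year had the highest sales?",
)
```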
Natural Language Dataset Generation Framework for Visualizations Powered by Large Language Models
We introduce VL2NL, a Large Language Model (LLM) framework that generates rich and diverse NL datasets using Vega-Lite specifications as input, thereby streamlining the development of Natural Language Interfaces (NLIs) for data visualization. To synthesize relevant chart semantics accurately and enhance syntactic diversity in each NL dataset, we leverage 1) guided discovery incorporated into prompting so that LLMs can steer themselves to create faithful NL datasets in a self-directed manner; 2) score-based paraphrasing to augment NL syntax along four language axes. We also present a new collection of 1,981 real-world Vega-Lite specifications with greater diversity and complexity than existing chart collections. When tested on our chart collection, VL2NL extracted chart semantics and generated L1/L2 captions with 89.4% and 76.0% accuracy, respectively. It also generated and paraphrased utterances and questions with greater diversity than the benchmarks. Lastly, we discuss how our NL datasets and framework can be utilized in real-world scenarios. The code and chart collection are available at https://github.com/hyungkwonko/chart-llm.
Hyung-Kwon Ko et al. KAIST. CHI 2024. Topics: Human-LLM Collaboration; Interactive Data Visualization; Time-Series & Network Graph Visualization.
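A sketch in the spirit of VL2NL's guided discovery prompting: the LLM is steered to enumerate chart semantics from the Vega-Lite spec before writing a caption. The prompt wording here is an assumption, not the authors' released code (see the repository linked above):

```python
# Illustrative guided-discovery prompt: ask the model to extract chart
# semantics first, then compose an L1 caption only from those facts.
import json

def guided_discovery_prompt(spec: dict) -> str:
    return (
        "Step 1: List the mark type, encoded fields, and any transforms "
        "in this Vega-Lite spec.\n"
        "Step 2: Using only the facts from Step 1, write a one-sentence "
        "caption describing what the chart encodes (an L1 caption).\n\n"
        f"Spec: {json.dumps(spec)}"
    )

spec = {"mark": "line",
        "encoding": {"x": {"field": "date", "type": "temporal"},
                     "y": {"field": "price", "type": "quantitative"}}}
print(guided_discovery_prompt(spec))  # send to an LLM of your choice
```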
VisLab: Enabling Visualization Designers to Gather Empirically Informed Design Feedback
When creating a visualization, designers face various conflicting design choices. They typically rely on their hunches to deal with intricate trade-offs or resort to feedback from their colleagues. Researchers, on the other hand, have long used empirical methods to derive useful quantitative insights into visualization designs. Taking inspiration from this research tradition, we developed VisLab, an open-source online system that complements existing qualitative feedback practices and helps visualization practitioners run experiments to gather empirically informed design feedback. We surveyed practitioners' perceptions of quantitative feedback and analyzed the research literature to inform VisLab's motivation and design. VisLab operationalizes the experiment process using templates and dashboards to make empirical methods amenable to practitioners, while supporting sharing and remixing experiments to aid knowledge exchange and validation. We demonstrated the validity of experiments in VisLab and evaluated its usability and potential usefulness in visualization design practice.
Jinhan Choi et al. Boston College. CHI 2023. Topics: Interactive Data Visualization; Prototyping & User Testing.
Exploring Chart Question Answering for Blind and Low Vision Users
Data visualizations can be complex or involve numerous data points, making them impractical to navigate using screen readers alone. Question answering (QA) systems have the potential to support visualization interpretation and exploration without overwhelming blind and low vision (BLV) users. To investigate if and how QA systems can help BLV users in working with visualizations, we conducted a Wizard of Oz study with 24 BLV people where participants freely posed queries about four visualizations. We collected 979 queries and mapped them to popular analytic task taxonomies. We found that retrieving value and finding extremum were the most common tasks, participants often made complex queries and used visual references, and the data topic notably influenced the queries. We compile a list of design considerations for accessible chart QA systems and make our question corpus publicly available to guide future research and development.
Jiho Kim et al. University of Wisconsin-Madison. CHI 2023. Topics: Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Interactive Data Visualization.
Visualization Accessibility in the Wild: Challenges Faced by Visualization Designers
Data visualizations are now widely used across many disciplines. However, many of them are not easily accessible to visually impaired people. In this work, we use a three-stage mixed-methods approach to understand the current practice of accessible visualization design for visually impaired people. We analyzed 95 visualizations from various venues to inspect what makes them inaccessible. To understand the rationale and context behind the design choices, we also conducted surveys with 144 practitioners in the U.S. and follow-up interviews with ten selected survey participants. Our findings include the difficulty of handling modern, complex, and interactive visualizations and the lack of accessibility support from visualization tools, in addition to personal and organizational factors that make accessible design practices challenging.
Shakila Cherise S. Joyner et al. Boston College. CHI 2022. Topics: Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Universal & Inclusive Design; Interactive Data Visualization.
How Experienced Designers of Enterprise Applications Engage AI as a Design Material
HCI research has explored AI as a design material, suggesting that designers can envision AI's design opportunities to improve UX. Recent research has claimed that enterprise applications offer an opportunity for AI innovation at the user-experience level. We conducted design workshops to explore the practices of experienced designers who work on cross-functional AI teams in the enterprise, and we examined where designers succeed and where they struggle in working with AI. Our findings revealed that designers can innovate at the system and service levels. We also discovered that making a case for an AI feature's return on investment is a barrier for designers when they propose AI concepts and ideas. Our discussions produced novel insights on designers' roles on AI teams and the boundary objects they use for collaborating with data scientists. We discuss the implications of these findings as opportunities for future research aiming to empower designers in working with data and AI.
Nur Yildirim et al. Carnegie Mellon University. CHI 2022. Topics: Generative AI (Text, Image, Music, Video); AI-Assisted Decision-Making & Automation.
Juvenile Graphical Perception: A Comparison between Children and Adults
Data visualization is pervasive in the lives of children as they encounter graphs and charts in early education and online media. In spite of this prevalence, our guidelines and understanding of how children perceive graphs stem primarily from studies conducted with adults. Previous psychology and education research indicates that children's cognitive abilities differ from adults'. Therefore, we conducted a classic graphical perception study with children aged 8–12 enrolled in the Ivy After School Program in Boston, MA and with adult computer science students at Northeastern University to determine how accurately participants judge differences in particular graphical encodings. We recorded the accuracy of participants' answers for five encodings most commonly used with quantitative data. The results of our controlled experiment show that children have remarkably similar graphical perception to adults but are consistently less accurate at interpreting the visual encodings. We found similar effectiveness rankings, relative differences in error between the encodings, and patterns of bias across encoding types. Based on our findings, we provide design guidelines and recommendations for creating visualizations for children. This paper and all supplemental materials are available at https://osf.io/ygrdv.
Liudas Panavas et al. Northeastern University. CHI 2022. Topics: Visualization Perception & Cognition.
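Graphical perception replications in this tradition commonly score judgments with Cleveland and McGill's log-absolute-error measure; whether this paper uses exactly this formula is an assumption, but a sketch illustrates the kind of accuracy metric involved:

```python
# Sketch of the log-absolute-error measure common in graphical perception
# replications (Cleveland & McGill, 1984); its use in this particular
# paper is an assumption.
import math

def log_error(judged_percent: float, true_percent: float) -> float:
    """log2(|judged - true| + 1/8); the 1/8 avoids log(0) and tempers
    very small errors."""
    return math.log2(abs(judged_percent - true_percent) + 1 / 8)

# Example: a participant judges the smaller mark as 40% of the larger
# when the true ratio is 50%.
print(round(log_error(40.0, 50.0), 3))  # ~3.34
```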
TurkEyes: A Web-Based Toolbox for Crowdsourcing Attention Data
Eye movements provide insight into what parts of an image a viewer finds most salient, interesting, or relevant to the task at hand. Unfortunately, eye tracking data, a commonly-used proxy for attention, is cumbersome to collect. Here we explore an alternative: a comprehensive web-based toolbox for crowdsourcing visual attention. We draw from four main classes of attention-capturing methodologies in the literature. ZoomMaps is a novel zoom-based interface that captures viewing on a mobile phone. CodeCharts is a self-reporting methodology that records points of interest at precise viewing durations. ImportAnnots is an "annotation" tool for selecting important image regions, and cursor-based BubbleView lets viewers click to deblur a small area. We compare these methodologies using a common analysis framework in order to develop appropriate use cases for each interface. This toolbox and our analyses provide a blueprint for how to gather attention data at scale without an eye tracker.
Anelise Newman et al. Massachusetts Institute of Technology. CHI 2020. Topics: Eye Tracking & Gaze Interaction; Interactive Data Visualization; Crowdsourcing Task Design & Quality Control.
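A common analysis step for interfaces like these is aggregating crowdsourced attention points (e.g., BubbleView clicks or CodeCharts reports) into a smoothed attention map; the grid and blur width below are illustrative choices, not TurkEyes' exact pipeline:

```python
# Sketch: accumulate (x, y) attention points on a grid and Gaussian-blur
# them into a normalized attention map. Parameters are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def attention_map(points, width: int, height: int,
                  sigma: float = 25.0) -> np.ndarray:
    """Turn a list of (x, y) points into a [0, 1] attention map."""
    grid = np.zeros((height, width))
    for x, y in points:
        if 0 <= x < width and 0 <= y < height:
            grid[y, x] += 1.0
    blurred = gaussian_filter(grid, sigma=sigma)
    return blurred / blurred.max()  # normalize to [0, 1]

amap = attention_map([(120, 80), (125, 82), (300, 200)], width=640, height=480)
```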
ICONATE: Automatic Compound Icon Generation and Ideation
Compound icons are prevalent on signs, webpages, and infographics, effectively conveying complex and abstract concepts, such as "no smoking" and "health insurance", with simple graphical representations. However, designing such icons requires experience and creativity, in order to efficiently navigate the semantics, space, and style features of icons. In this paper, we aim to automate the process of generating icons given compound concepts, to facilitate rapid compound icon creation and ideation. Informed by ethnographic interviews with professional icon designers, we have developed ICONATE, a novel system that automatically generates compound icons based on textual queries and allows users to explore and customize the generated icons. At the core of ICONATE is a computational pipeline that automatically finds commonly used icons for sub-concepts and arranges them according to inferred conventions. To enable the pipeline, we collected a new dataset, Compicon1k, consisting of 1000 compound icons annotated with semantic labels (i.e., concepts). Through user studies, we have demonstrated that our tool is able to automate or accelerate the compound icon design process for both novices and professionals.
Nanxuan Zhao et al. Harvard University & City University of Hong Kong. CHI 2020. Topics: Generative AI (Text, Image, Music, Video); Human-LLM Collaboration; Graphic Design & Typography Tools.
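The ICONATE entry describes a pipeline that retrieves commonly used icons for sub-concepts and arranges them by inferred convention; a toy sketch of that flow, with a hypothetical icon index standing in for Compicon1k lookups and a simplistic layout rule:

```python
# Toy sketch of the described pipeline: map each sub-concept to a commonly
# used icon, then place it by a simple convention. The index and layout
# rule are illustrative assumptions, not ICONATE's actual method.
ICON_INDEX = {  # sub-concept -> icon asset (stand-in for dataset lookups)
    "no": "prohibition_circle.svg",
    "smoking": "cigarette.svg",
}

def generate_compound_icon(concept: str):
    """Return (icon, placement) pairs; modifiers like 'no' overlay the base."""
    layout = []
    for part in concept.lower().split():
        icon = ICON_INDEX.get(part)
        if icon is None:
            continue  # a real system would fall back to semantic retrieval
        placement = "overlay" if part == "no" else "center"
        layout.append((icon, placement))
    return layout

print(generate_compound_icon("no smoking"))
# [('prohibition_circle.svg', 'overlay'), ('cigarette.svg', 'center')]
```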