VisTorch: Interacting with Situated Visualizations using Handheld Projectors
Spatial data is best analyzed in situ, but existing mixed reality technologies can be bulky, expensive, or unsuitable for collaboration. We present VisTorch: a handheld device for projected situated analytics consisting of a pico-projector, a multi-spectrum camera, and a touch surface. VisTorch enables viewing charts situated in physical space by simply pointing the device at a surface to reveal visualizations in that location. We evaluated the approach using both a user study and an expert review. In the former, we asked 20 participants to first organize charts in space and then refer to these charts to answer questions. We observed three spatial and one temporal pattern in participant analyses. In the latter, four experts (a museum designer, a statistical software developer, a theater designer, and an environmental educator) used VisTorch to derive practical scenarios. Results from our study showcase the utility of situated visualizations for memory and recall.
2024 · Biswaksen Patnaik et al. · University of Maryland, College Park · Geospatial & Map Visualization; Data Physicalization · CHI

The HaLLMark Effect: Supporting Provenance and Transparent Use of Large Language Models in Writing with Interactive Visualization
The use of Large Language Models (LLMs) for writing has sparked controversy among both readers and writers. On one hand, writers are concerned that LLMs will deprive them of agency and ownership, and readers are concerned about spending their time on text generated by soulless machines. On the other hand, AI assistance can improve writing as long as writers can conform to publisher policies, and as long as readers can be assured that a text has been verified by a human. We argue that a system that captures the provenance of interaction with an LLM can help writers retain their agency, conform to policies, and communicate their use of AI to publishers and readers transparently. Thus we propose HaLLMark, a tool for visualizing the writer's interaction with the LLM. We evaluated HaLLMark with 13 creative writers, and found that it helped them retain a sense of control and ownership of the text.
2024 · Md Naimul Hoque et al. · University of Maryland · Human-LLM Collaboration; Explainable AI (XAI); AI-Assisted Creative Writing · CHI

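The provenance capture that HaLLMark is built around can be illustrated with a minimal sketch. This is not HaLLMark's actual API; all class and event names are hypothetical, and it only shows the general idea of logging writer-LLM interactions and deriving a transparency metric from the log.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProvenanceEvent:
    """One recorded interaction between the writer and the LLM."""
    kind: str   # "prompt", "ai_insert", or "human_edit" (hypothetical event types)
    chars: int  # number of characters affected

@dataclass
class ProvenanceLog:
    events: List[ProvenanceEvent] = field(default_factory=list)

    def record(self, kind: str, chars: int) -> None:
        self.events.append(ProvenanceEvent(kind, chars))

    def ai_fraction(self) -> float:
        """Share of written characters that originated from the LLM."""
        written = [e for e in self.events if e.kind in ("ai_insert", "human_edit")]
        total = sum(e.chars for e in written)
        ai = sum(e.chars for e in written if e.kind == "ai_insert")
        return ai / total if total else 0.0

log = ProvenanceLog()
log.record("prompt", 40)       # writer asks the LLM for a paragraph
log.record("ai_insert", 300)   # writer accepts 300 characters of AI text
log.record("human_edit", 700)  # writer types 700 characters themselves
print(round(log.ai_fraction(), 2))  # -> 0.3
```

A log like this could back both a visualization of AI involvement over time and a disclosure statement for publishers.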
Portrayal: Leveraging NLP and Visualization for Analyzing Fictional Characters
Many creative writing tasks (e.g., fiction writing) require authors to write complex narrative components (e.g., characterization, events, dialogue) over the course of a long story. Similarly, literary scholars need to manually annotate and interpret texts to understand such abstract components. In this paper, we explore how Natural Language Processing (NLP) and interactive visualization can help writers and scholars in such scenarios. To this end, we present Portrayal, an interactive visualization system for analyzing characters in a story. Portrayal extracts natural language indicators from a text to capture the characterization process and then visualizes the indicators in an interactive interface. We evaluated the system with 12 creative writers and scholars in a one-week-long qualitative study. Our findings suggest Portrayal helped writers revise their drafts and create dynamic characters and scenes. It helped scholars analyze characters without the need for any manual annotation, and design literary arguments with concrete evidence.
2023 · Md Naimul Hoque et al. · Interactive Data Visualization; AI-Assisted Creative Writing · DIS

Code Code Evolution: Understanding How People Change Data Science Notebooks Over Time
Sensemaking is the iterative process of identifying, extracting, and explaining insights from data, where each iteration is referred to as the "sensemaking loop." However, little is known about how sensemaking behavior shifts between exploration and explanation during this process. This gap limits our ability to understand the full scope of sensemaking, which in turn inhibits the design of tools that support the process. We contribute the first mixed-method study characterizing how sensemaking evolves within computational notebooks. We study 2,574 Jupyter notebooks mined from GitHub by identifying data science notebooks that have undergone significant iterations, presenting a regression model that automatically characterizes sensemaking activity, and using this regression model to calculate and analyze shifts in activity across GitHub versions. Our results show that notebook authors participate in various sensemaking tasks over time, such as annotation, branching analysis, and documentation. We use our insights to recommend extensions to current notebook environments.
2023 · Deepthi Raghunandan et al. · University of Maryland · Interactive Data Visualization; Data Storytelling; Computational Methods in HCI · CHI

Through Their Eyes and In Their Shoes: Providing Group Awareness During Collaboration Across Virtual Reality and Desktop Platforms
Many collaborative data analysis situations benefit from collaborators utilizing different platforms. However, maintaining group awareness between team members using diverging devices is difficult, not least because common ground diminishes. A person using head-mounted VR cannot physically see a user on a desktop computer even while co-located, and the desktop user cannot easily relate to the VR user's 3D workspace. To address this, we propose the "eyes-and-shoes" principles for group awareness and abstract them into four levels of techniques. Furthermore, we evaluate these principles with a qualitative user study of 6 participant pairs synchronously collaborating across distributed desktop and VR head-mounted devices. In this study, we vary the group awareness techniques between participants and explore two visualization contexts within participants. The results of this study indicate that the more visual metaphors and views of participants diverge, the greater the level of group awareness is needed. A copy of this paper, the study preregistration, and all supplemental materials required to reproduce the study are available on OSF (osf.io/wgprb/).
2023 · David Saffo et al. · Northeastern University · Social & Collaborative VR; Mixed Reality Workspaces · CHI

Accessible Data Representation with Natural Sound
Sonification translates data into non-speech audio. Such auditory representations can make data visualization accessible to people who are blind or have low vision (BLV). This paper presents a sonification method for translating common data visualizations into a blend of natural sounds. We hypothesize that people's familiarity with sounds drawn from nature, such as birds singing in a forest, and their ability to listen to these sounds in parallel, will enable BLV users to perceive multiple data points being sonified at the same time. Informed by an extensive literature review and a preliminary study with 5 BLV participants, we designed an accessible data representation tool, Susurrus, that combines our sonification method with other accessibility features, such as keyboard interaction and text-to-speech feedback. Finally, we conducted a user study with 12 BLV participants and report on the potential and application of natural sounds for sonification compared to existing sonification tools.
2023 · Md Naimul Hoque et al. · University of Maryland · Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille) · CHI

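The core idea of mapping data values onto a palette of natural sounds can be sketched as follows. This is an illustrative mapping only, not Susurrus's actual design: the palette, banding scheme, and volume rule are all assumptions made up for the example.

```python
def sonify(values, palette=("stream", "birdsong", "wind", "thunder")):
    """Map each normalized data value to a natural sound and a volume level.

    Low values pick calmer sounds from the palette; within a sound's band,
    the residual value drives loudness. Purely illustrative.
    """
    marks = []
    for v in values:
        v = min(max(v, 0.0), 1.0)                      # clamp to [0, 1]
        idx = min(int(v * len(palette)), len(palette) - 1)
        band = 1.0 / len(palette)                      # width of each sound's band
        volume = (v - idx * band) / band               # position within the band
        marks.append((palette[idx], round(volume, 2)))
    return marks

print(sonify([0.1, 0.5, 0.95]))
# -> [('stream', 0.4), ('wind', 0.0), ('thunder', 0.8)]
```

In a real tool each tuple would drive audio playback; here it just makes the data-to-sound mapping inspectable.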
Perceptual Pat: A Virtual Human Visual System for Iterative Visualization Design
Designing a visualization is often a process of iterative refinement where the designer improves a chart over time by adding features, improving encodings, and fixing mistakes. However, effective design requires external critique and evaluation. Unfortunately, such critique is not always available on short notice and evaluation can be costly. To address this need, we present Perceptual Pat, an extensible suite of AI and computer vision techniques that forms a virtual human visual system for supporting iterative visualization design. The system analyzes snapshots of a visualization using an extensible set of filters, including gaze maps, text recognition, and color analysis, and generates a report summarizing the findings. The web-based Pat Design Lab provides a version tracking system that enables the designer to track improvements over time. We validate Perceptual Pat using a longitudinal qualitative study involving 4 professional visualization designers who used the tool over a few days to design a new visualization.
2023 · Sungbok Shin et al. · University of Maryland · Interactive Data Visualization; User Research Methods (Interviews, Surveys, Observation); Prototyping & User Testing · CHI

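The extensible-filter architecture described above can be sketched in a few lines. This is a simplified stand-in, not Perceptual Pat's implementation: the filter names, the dictionary "snapshot", and the report format are all hypothetical.

```python
from typing import Callable, Dict, List

# A filter takes a snapshot (here simplified to a dict of extracted
# chart features) and returns one finding for the report.
Filter = Callable[[Dict], str]

def color_count_filter(snapshot: Dict) -> str:
    n = len(set(snapshot.get("colors", [])))
    return f"uses {n} distinct colors" + (" (consider fewer)" if n > 7 else "")

def text_size_filter(snapshot: Dict) -> str:
    smallest = min(snapshot.get("font_sizes", [12]))
    return f"smallest text is {smallest}pt" + (" (may be illegible)" if smallest < 9 else "")

def run_report(snapshot: Dict, filters: List[Filter]) -> List[str]:
    """Apply every registered filter and collect its finding."""
    return [f(snapshot) for f in filters]

snapshot = {"colors": ["red", "blue", "red"], "font_sizes": [8, 12, 14]}
for finding in run_report(snapshot, [color_count_filter, text_size_filter]):
    print("-", finding)
```

New critiques plug in by appending another function to the filter list, which is what makes this style of pipeline extensible.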
ReLive: Bridging In-Situ and Ex-Situ Visual Analytics for Analyzing Mixed Reality User Studies
The nascent field of mixed reality is seeing an ever-increasing need for user studies and field evaluation, which are particularly challenging given device heterogeneity, diversity of use, and mobile deployment. Immersive analytics tools have recently emerged to support such analysis in situ, yet the complexity of the data also warrants an ex-situ analysis using more traditional non-immersive visual analytics setups. To bridge the gap between both approaches, we introduce ReLive: a mixed-immersion visual analytics framework for exploring and analyzing mixed reality user studies. ReLive combines an in-situ virtual reality view with a complementary ex-situ desktop view. While the virtual reality view allows users to relive interactive spatial recordings replicating the original study, the synchronized desktop view provides a familiar interface for analyzing aggregated data. We validated our concepts in a two-step evaluation consisting of a design walkthrough and an empirical expert user study.
2022 · Sebastian Hubenschmid et al. · University of Konstanz · Mixed Reality Workspaces; Interactive Data Visualization · CHI

Scents and Sensibility: Evaluating Information Olfactation
Olfaction, the sense of smell, is one of the least explored of the human senses for conveying abstract information. In this paper, we conduct a comprehensive perceptual experiment on information olfactation: the use of olfactory and cross-modal sensory marks and channels to convey data. More specifically, following the example from graphical perception studies, we design an experiment that studies the perceptual accuracy of four cross-modal sensory channels (scent type, scent intensity, airflow, and temperature) for conveying three different types of data: nominal, ordinal, and quantitative. We also present details of a 24-scent multi-sensory display and its software framework that we designed in order to run this experiment. Our results yield a ranking of olfactory and cross-modal sensory channels that follows similar principles as classic rankings for visual channels.
2020 · Andrea Batch et al. · University of Maryland · Visualization Perception & Cognition; Context-Aware Computing · CHI

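By analogy with visual encoding, the channel-to-data-type pairing studied above can be illustrated with a toy encoder. The scent palette, domain, and intensity rules below are invented for the example and do not reflect the paper's 24-scent display.

```python
def encode(value, data_type,
           scents=("lemon", "mint", "rose"),   # hypothetical palette
           domain=(0.0, 100.0), levels=5):
    """Map a data value onto an olfactory mark, depending on data type.

    Nominal data selects a scent type; ordinal and quantitative data
    modulate the intensity (0-1) of a single scent. Illustrative only.
    """
    if data_type == "nominal":
        # distinct categories map to distinct scent types
        return {"scent": scents[value % len(scents)], "intensity": 1.0}
    if data_type == "ordinal":
        # ranks map to evenly spaced intensity steps of one scent
        return {"scent": scents[0], "intensity": round((value + 1) / levels, 2)}
    # quantitative: continuous intensity over the data domain
    lo, hi = domain
    return {"scent": scents[0], "intensity": round((value - lo) / (hi - lo), 2)}

print(encode(1, "nominal"))          # -> {'scent': 'mint', 'intensity': 1.0}
print(encode(50.0, "quantitative"))  # -> {'scent': 'lemon', 'intensity': 0.5}
```

The experiment's question is essentially how accurately people can decode such mappings, per channel and per data type.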
Ranked-List Visualization: A Graphical Perception Study
Visualization of ranked lists is a common occurrence, but many in-the-wild solutions fly in the face of vision science and visualization wisdom. For example, treemaps and bubble charts are commonly used for this purpose, despite the fact that the data is not hierarchical and that length is easier to perceive than area. Furthermore, several new visual representations have recently been suggested in this area, including wrapped bars, packed bars, piled bars, and Zvinca plots. To quantify the differences and trade-offs for these ranked-list visualizations, we report on a crowdsourced graphical perception study involving six such visual representations, including the ubiquitous scrolled bar chart, in three tasks: ranking (assessing a single item), comparison (two items), and average (assessing global distribution). Results show that wrapped bars may be the best choice for visualizing ranked lists, and that treemaps are surprisingly accurate despite the use of area rather than length to represent value.
2019 · Pranathi Mylavarapu et al. · University of Maryland, College Park · Data Storytelling; Visualization Perception & Cognition · CHI

Shape Structuralizer: Design, Fabrication, and User-driven Iterative Refinement of 3D Mesh Models
Current Computer-Aided Design (CAD) tools lack proper support for guiding novice users towards designs ready for fabrication. We propose Shape Structuralizer (SS), an interactive design support system that repurposes surface models into structural constructions using rods and custom 3D-printed joints. Shape Structuralizer embeds a recommendation system that computationally supports the user during design ideation by providing design suggestions on local refinements of the design. This strategy enables novice users to choose designs that both satisfy stress constraints as well as their personal design intent. The interactive guidance enables users to repurpose existing surface mesh models, analyze them in situ for stress and displacement constraints, add movable joints to increase functionality, and attach a customized appearance. This also empowers novices to fabricate even complex constructs while ensuring structural soundness. We validate the Shape Structuralizer tool with a qualitative user study where we observed that even novice users were able to generate a large number of structurally safe designs for fabrication.
2019 · Subramanian Chidambaram et al. · Purdue University · Desktop 3D Printing & Personal Fabrication; Customizable & Personalized Objects · CHI

Vistribute: Distributing Interactive Visualizations in Dynamic Multi-Device Setups
We present Vistribute, a framework for the automatic distribution of visualizations and UI components across multiple heterogeneous devices. Our framework consists of three parts: (i) a design space considering properties and relationships of interactive visualizations, devices, and user preferences in multi-display environments; (ii) specific heuristics incorporating these dimensions for guiding the distribution for a given interface and device ensemble; and (iii) a web-based implementation instantiating these heuristics to automatically generate a distribution as well as providing interaction mechanisms for user-defined adaptations. In contrast to existing UI distribution systems, we are able to infer all required information by analyzing the visualizations and devices without relying on additional input provided by users or programmers. In a qualitative study, we let experts create their own distributions and rate both other manual distributions and our automatic ones. We found that all distributions provided comparable quality, hence validating our framework.
2019 · Tom Horak et al. · Technische Universität Dresden · Interactive Data Visualization; Context-Aware Computing · CHI

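Heuristic-guided distribution of the kind Vistribute automates can be sketched as a scoring problem. The two heuristics below (detail-heavy views prefer large displays; interaction-heavy views prefer near devices) are invented for illustration and are not Vistribute's actual heuristic set.

```python
def score(vis, device):
    """Heuristic fit of a visualization to a device (higher is better)."""
    s = 0.0
    s += device["size"] * vis["detail"]            # detailed charts need pixels
    s += device["proximity"] * vis["interaction"]  # controls want reachability
    return s

def distribute(visualizations, devices):
    """Assign each visualization to its best-scoring device."""
    return {v["name"]: max(devices, key=lambda d: score(v, d))["name"]
            for v in visualizations}

devices = [
    {"name": "wall",   "size": 1.0, "proximity": 0.2},
    {"name": "tablet", "size": 0.4, "proximity": 0.9},
]
visualizations = [
    {"name": "scatterplot",  "detail": 0.9, "interaction": 0.2},
    {"name": "filter-panel", "detail": 0.1, "interaction": 0.9},
]
print(distribute(visualizations, devices))
# -> {'scatterplot': 'wall', 'filter-panel': 'tablet'}
```

A greedy per-visualization assignment like this ignores inter-view relationships, which is one reason a fuller design space (part i of the framework) matters.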
TopoText: Context-Preserving Text Data Exploration Across Multiple Spatial Scales
TopoText is a context-preserving technique for visualizing text data across multi-scale spatial aggregates, helping users gain insight into spatial phenomena. Conventional exploration requires users to navigate across multiple scales but only presents the information related to the current scale. This limitation potentially adds interaction steps and cognitive load for the users. TopoText renders multi-scale aggregates into a single visual display, combining novel text-based encoding and layout methods that draw labels along the boundary of each aggregate or fill them within it. The text itself not only summarizes the semantics at each individual scale, but also indicates the spatial coverage of the aggregates and their underlying hierarchical relationships. We validate TopoText with both a user study as well as several application examples.
2018 · Jiawei Zhang et al. · Purdue University · Interactive Data Visualization; Visualization Perception & Cognition · CHI

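The idea of labeling each spatial aggregate, at each scale, with text that summarizes its contents can be illustrated with a frequency-based sketch. This stands in for TopoText's text-based encoding only loosely: real label selection and layout are far more involved, and the data structure here is hypothetical.

```python
from collections import Counter

def summarize(aggregates, k=2):
    """Label each spatial aggregate at each scale with its k most frequent terms.

    `aggregates` maps a scale name to {region: [documents]}. Illustrative
    stand-in for deriving label text per aggregate.
    """
    labels = {}
    for scale, regions in aggregates.items():
        labels[scale] = {
            region: [w for w, _ in
                     Counter(" ".join(docs).lower().split()).most_common(k)]
            for region, docs in regions.items()
        }
    return labels

city = {"downtown": ["coffee coffee music", "coffee food"],
        "harbor":   ["boats fish", "fish market fish"]}
print(summarize({"city": city}))
# -> {'city': {'downtown': ['coffee', 'music'], 'harbor': ['fish', 'boats']}}
```

In the actual technique these per-aggregate summaries are then laid out along boundaries or inside regions so that all scales remain visible at once.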
When David Meets Goliath: Combining Smartwatches with a Large Vertical Display for Visual Data Exploration
We explore the combination of smartwatches and a large interactive display to support visual data analysis. These two extremes of interactive surfaces are increasingly popular, but feature different characteristics: display and input modalities, personal/public use, performance, and portability. In this paper, we first identify possible roles for both devices and the interplay between them through an example scenario. We then propose a conceptual framework to enable analysts to explore data items, track interaction histories, and alter visualization configurations through mechanisms using both devices in combination. We validate an implementation of our framework through a formative evaluation and a user study. The results show that this device combination, compared to just a large display, allows users to develop complex insights more fluidly by leveraging the roles of the two devices. Finally, we report on the interaction patterns and interplay between the devices for visual exploration as observed during our study.
2018 · Tom Horak et al. · Technische Universität Dresden · Interactive Data Visualization · CHI