Redefining Activity Tracking Through Older Adults' Reflections on Meaningful Activities (CHI 2024)
Yiwen Wang et al., University of Maryland. Topics: Fitness Tracking & Physical Activity Monitoring; Elderly Care & Dementia Support.
Activity tracking has the potential to promote active lifestyles among older adults. However, current activity tracking technologies may inadvertently perpetuate ageism by focusing on age-related health risks. Advocating for a personalized approach to activity tracking technology, we sought to understand which activities older adults find meaningful to track and the values underlying those activities. We conducted a reflective interview study following 7-day activity journaling with 13 participants. We identified various underlying values motivating participants to track the activities they deemed meaningful. These values, whether competing or aligned, shape the desirability of activities. Older adults appreciate low-exertion activities, yet such activities are difficult to track. We discuss how these activities can become central to the design of activity tracking systems. Our research offers insights for creating value-driven, personalized activity trackers that resonate more fully with the meaningful activities of older adults.
MAIDR: Making Statistical Visualizations Accessible with Multimodal Data Representation (CHI 2024)
JooYoung Seo et al., University of Illinois at Urbana-Champaign. Topics: Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Interactive Data Visualization.
This paper investigates new data exploration experiences that enable blind users to interact with statistical data visualizations (bar plots, heat maps, box plots, and scatter plots), leveraging multimodal data representations. In addition to sonification and textual descriptions that are commonly employed by existing accessible visualizations, our MAIDR (multimodal access and interactive data representation) system incorporates two additional modalities (braille and review) that offer complementary benefits. It also provides blind users with the autonomy and control to interactively access and understand data visualizations. In a user study involving 11 blind participants, we found the MAIDR system facilitated the accurate interpretation of statistical visualizations. Participants exhibited a range of strategies in combining multiple modalities, influenced by their past interactions and experiences with data visualizations. This work accentuates the overlooked potential of combining refreshable tactile representation with other modalities and elevates the discussion on the importance of user autonomy when designing accessible data visualizations.
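MAIDR's braille modality targets refreshable braille displays. As a loose illustration of encoding data in braille (the four-level scheme below is invented for this sketch, not MAIDR's actual encoding), one can map normalized bar heights onto Unicode braille patterns:

```python
# Illustrative sketch only: encode a bar series as four-level Unicode
# braille characters, loosely mimicking how a braille modality might
# summarize bar heights. This is NOT MAIDR's actual encoding scheme.

def braille_bars(values):
    """Map each value to one of four braille 'fill' levels."""
    levels = ["\u2840", "\u2844", "\u2846", "\u2847"]  # 1 to 4 raised dots
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    chars = []
    for v in values:
        idx = min(3, int((v - lo) / span * 4))  # bucket into 4 levels
        chars.append(levels[idx])
    return "".join(chars)

if __name__ == "__main__":
    sales = [3, 7, 2, 9, 5, 8]
    print(braille_bars(sales))  # prints a 6-character braille "sparkline"
```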
Visual Cues for Data Analysis Features Amplify Challenges for Blind Spreadsheet Users (CHI 2024)
Minoli Perera et al., Monash University. Topics: Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Universal & Inclusive Design; Visualization Perception & Cognition.
Spreadsheets are widely used for storing, manipulating, analyzing, and visualizing data. Features such as conditional formatting, formulas, sorting, and filtering play an important role in understanding and analyzing data in spreadsheets. These features rely on visual cues, yet we have little understanding of how blind screen reader (SR) users experience them. We conducted a study with 12 blind SR users to gain insights into their challenges, workarounds, and strategies in understanding and extracting information from a spreadsheet consisting of multiple tables that incorporated data analysis features. We identified five factors that impact blind SR users' experiences: cognitive overload, time-information trade-off, lack of awareness and expertise, inadequate system feedback, and delayed and absent SR responses. Drawing on these findings, we discuss design suggestions and a future research agenda to improve SR users' spreadsheet experiences.
Investigating In-Situ Personal Health Data Queries on Smartwatches (UbiComp 2023)
Bradley Rey et al. Topics: Smartwatches & Fitness Bands.
Smartwatches enable not only the continuous collection of but also ubiquitous access to personal health data. However, exploring this data in-situ on a smartwatch is often reserved for singular and generic metrics, without the capacity for further insight. To address our limited knowledge of smartwatch data exploration needs, we collect and characterize desired personal health data queries from smartwatch users. We conducted a week-long study (N = 18), providing participants with an application for recording responses containing their query and current activity-related information throughout their daily lives. From the responses, we curated a dataset of 205 natural language queries. Upon analysis, we highlight a new preemptive and proactive data insight category, an activity-based lens for data exploration, and the desired use of a smartwatch for data exploration throughout daily life. To aid future research and the development of smartwatch health applications, we contribute the dataset and discuss the implications of our findings. https://dl.acm.org/doi/10.1145/3569481
Decorative, Evocative, and Uncanny: Reactions on Ambient-to-Disruptive Health Notifications via Plant-Mimicking Shape-Changing Interfaces (CHI 2023)
Jarrett G.W. Lee et al., University of Maryland. Topics: Shape-Changing Interfaces & Soft Robotic Materials; Sustainable HCI.
Ambient Information Systems (AIS) have shown some success in notifying users about health-related activities. But in users' actual busy lives, ambient notifications might be forgotten or missed entirely, defeating their purpose. Could a system use multiple levels of noticeability to ensure its message is received, and how could this concept be effectively portrayed? To examine these questions, we took a Research through Design approach and created plant-mimicking Shape-Changing Interface (S-CI) artifacts, then conducted interviews with 10 participants who currently used a reminder system for health-related activities. We report findings on acceptable scenarios for disrupting people for health-related activities and on participants' reactions to our design choices, including how naturalistic aesthetics led to interpretations of the uncanny and morose, and how the system's physicality affected imagined uses. We offer design suggestions for health-related notification systems and S-CIs, and discuss future work on ambient-to-disruptive technology.
OmniSense: Exploring Novel Input Sensing and Interaction Techniques on Mobile Device with an Omni-Directional Camera (CHI 2023)
Hui-Shyong Yeo et al., Huawei. Topics: Eye Tracking & Gaze Interaction; Immersion & Presence Research; 360° Video & Panoramic Content.
An omni-directional (360°) camera captures the entire viewing sphere surrounding its optical center. Such cameras are growing in use to create highly immersive content and viewing experiences. When such a camera is held by a user, the view includes the user's hand grip, fingers, body pose, and face, as well as the surrounding environment, providing a complete understanding of the visual world and context around the device. This capability opens up numerous possibilities for rich mobile input sensing. In OmniSense, we explore the broad input design space for mobile devices with a built-in omni-directional camera and categorize it into three sensing pillars: (i) near device, (ii) around device, and (iii) surrounding device. We also explore potential use cases and applications that leverage these sensing capabilities to address user needs, and we develop a working system that puts these concepts into action. We studied the system in a technical evaluation and a preliminary user study to gain initial feedback and insights. Collectively, these techniques illustrate how a single, omni-purpose sensor on a mobile device affords many compelling ways to enable expressive input, while also affording a broad range of novel applications that improve the user experience during mobile interaction.
Chart Reader: Accessible Visualization Experiences Designed with Screen Reader Users (CHI 2023)
John R Thompson et al., Microsoft Research. Topics: Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Interactive Data Visualization.
Even though screen readers are a core accessibility tool for blind and low vision individuals (BLVIs), most visualizations are incompatible with screen readers. To improve accessible visualization experiences, we partnered with 10 BLV screen reader users (SRUs) in an iterative co-design study to design and develop accessible visualization experiences that afford SRUs the autonomy to interactively read and understand visualizations and their underlying data. During the five-month study, we explored accessible visualization prototypes with our design partners for three one-hour sessions. Our results provide feedback on the synthesized design concepts we explored, why (or why not) they aid comprehension and exploration for SRUs, and how differing design concepts can fit into cohesive accessible visualization experiences. We contribute both Chart Reader, a web-based accessibility engine resulting from our design iterations, and our distilled study findings, organized by design dimensions, in the creation of comprehensive accessible visualization experiences.
PSST: Enabling Blind or Visually Impaired Developers to Author Sonifications of Streaming Sensor Data (UIST 2022)
Venkatesh Potluri et al. Topics: Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Motor Impairment Assistive Input Technologies.
We present the first toolkit that equips blind and visually impaired (BVI) developers with the tools to create accessible data displays. Called PSST (Physical computing Streaming Sensor data Toolkit), it enables BVI developers to understand the data generated by sensors, from a mouse to a micro:bit physical computing platform. By assuming visual abilities, earlier efforts to make physical computing accessible fail to address BVI developers' need to access sensor data. PSST enables BVI developers to understand real-time, real-world sensor data by providing control over what to display, as well as when and how to display it. PSST supports filtering based on raw or calculated values, highlighting, and transformation of data. Output formats include tonal sonification, nonspeech audio files, speech, and SVGs for laser cutting. We validate PSST through a series of demonstrations and a user study with BVI developers.
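As a minimal sketch of what tonal sonification of sensor data involves (illustrative only; PSST's actual pipeline is richer and interactive), the following maps a sequence of sensor readings onto pitches and renders them to a WAV file using only the Python standard library:

```python
# Minimal tonal-sonification sketch (illustrative; not PSST's actual code):
# map each sensor reading to a pitch and render the sequence to a WAV file.
import math
import struct
import wave

RATE = 44100  # samples per second

def value_to_freq(v, vmin, vmax, fmin=220.0, fmax=880.0):
    """Linearly map a sensor value onto an audible frequency range."""
    t = (v - vmin) / (vmax - vmin) if vmax > vmin else 0.5
    return fmin + t * (fmax - fmin)

def sonify(values, path="sonification.wav", tone_sec=0.15):
    vmin, vmax = min(values), max(values)
    frames = bytearray()
    for v in values:
        freq = value_to_freq(v, vmin, vmax)
        for i in range(int(RATE * tone_sec)):
            amp = 0.4 * math.sin(2 * math.pi * freq * i / RATE)
            frames += struct.pack("<h", int(amp * 32767))  # 16-bit sample
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)  # 16-bit PCM
        f.setframerate(RATE)
        f.writeframes(bytes(frames))

if __name__ == "__main__":
    # e.g., a short stream of light-sensor readings
    sonify([12, 30, 55, 80, 64, 40, 20])
```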
Trade-offs in Sampling and Search for Early-stage Interactive Machine Learning (IUI 2022)
Zachary Levonian et al. Topics: Human-LLM Collaboration; Computational Methods in HCI.
For many automated classification tasks, collecting labeled data is the key barrier to training a useful supervised model. Interfaces for interactive labeling tighten the loop between labeled data collection and model development, enabling a subject-matter expert to quickly establish the feasibility of a classifier for a problem of interest. These interactive machine learning (IML) interfaces iteratively sample unlabeled data for annotation, train a new model, and display feedback on the model's estimated performance. Different sampling strategies affect both the rate at which the model improves and the bias of its performance estimates. We compare the performance of three sampling strategies in the "early stage" of label collection, starting from zero labeled data. By simulating a user's interactions with an IML labeling interface, we demonstrate a trade-off between improving a text classifier's performance and computing unbiased estimates of that performance. We show that supplementing early-stage sampling with user-guided text search can effectively "seed" a classifier with positive documents without compromising generalization performance, particularly for imbalanced tasks where positive documents are rare. We argue for the benefits of incorporating search alongside active learning in IML interfaces and identify design trade-offs around the use of non-random sampling strategies.
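To make the simulation setup concrete, here is a minimal sketch in the same spirit, with synthetic data and scikit-learn's LogisticRegression standing in for the paper's text classifiers and corpora (so numbers will not match the paper's):

```python
# Illustrative simulation of early-stage IML sampling (synthetic data and
# scikit-learn stand in for the authors' text-classification setup).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_pool, y_pool = X[:1500], y[:1500]   # "unlabeled" pool the user draws from
X_test, y_test = X[1500:], y[1500:]   # held-out data for unbiased evaluation

def simulate(strategy, rounds=20, batch=10):
    # Seed with a few labels from each class, mirroring how search can
    # "seed" a classifier with rare positive documents.
    labeled = (list(np.where(y_pool == 1)[0][:2])
               + list(np.where(y_pool == 0)[0][:18]))
    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000).fit(X_pool[labeled],
                                                    y_pool[labeled])
        unlabeled = np.setdiff1d(np.arange(len(X_pool)), labeled)
        if strategy == "random":
            picks = rng.choice(unlabeled, size=batch, replace=False)
        else:  # uncertainty sampling: points nearest the decision boundary
            margin = np.abs(clf.predict_proba(X_pool[unlabeled])[:, 1] - 0.5)
            picks = unlabeled[np.argsort(margin)[:batch]]
        labeled.extend(picks.tolist())
    return f1_score(y_test, clf.predict(X_test), zero_division=0)

print("random      F1:", round(simulate("random"), 3))
print("uncertainty F1:", round(simulate("uncertainty"), 3))
```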
MyMove: Facilitating Older Adults to Collect In-Situ Activity Labels on a Smartwatch with Speech (CHI 2022)
Young-Ho Kim et al., University of Maryland. Topics: Fitness Tracking & Physical Activity Monitoring; Smartwatches & Fitness Bands; Biosensors & Physiological Monitoring.
Current activity tracking technologies are largely trained on younger adults, which can lead to solutions that are not well suited for older adults. To build activity trackers for older adults, it is crucial to collect training data with them. To this end, we examine the feasibility and challenges of collecting activity labels from older adults by leveraging speech. Specifically, we built MyMove, a speech-based smartwatch app that facilitates in-situ labeling with a low capture burden. We conducted a 7-day deployment study in which 13 older adults collected their activity labels and smartwatch sensor data while wearing a thigh-worn activity monitor. Participants were highly engaged, capturing 1,224 verbal reports in total. We extracted 1,885 activities with corresponding effort levels and timespans, and examined the usefulness of these reports as activity labels. We discuss the implications of our approach and the collected dataset for supporting older adults through personalized activity tracking technologies.
Understanding Multi-Device Usage Patterns: Physical Device Configurations and Fragmented Workflows (CHI 2022)
Ye Yuan et al., Microsoft Research and University of Minnesota. Topics: Remote Work Tools & Experience; Distributed Team Collaboration; Notification & Interruption Management.
To better ground technical (systems) investigation and interaction design of cross-device experiences, we contribute an in-depth survey of existing multi-device practices, including fragmented workflows across devices and the ways people physically organize and configure their workspaces to support such activity. Further, this survey documents a historically significant moment of transition to a new future of remote work, an existing trend dramatically accelerated by the abrupt switch to work-from-home (and having to contend with the demands of home-at-work) during the COVID-19 pandemic. We surveyed 97 participants, collecting photographs of home setups and open-ended answers to 50 questions categorized into 5 themes. We characterize the wide range of multi-device physical configurations and identify five usage patterns: partitioning tasks, integrating multi-device usage, cloning tasks to other devices, expanding tasks and inputs to multiple devices, and migrating between devices. Our analysis also sheds light on the benefits and challenges people face when their workflow is fragmented across multiple devices. These insights have implications for the design of multi-device experiences that support people's fragmented workflows.
AirConstellations: In-Air Device Formations for Cross-Device Interaction via Multiple Spatially-Aware Armatures (UIST 2021)
Nicolai Marquardt et al. Topics: Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS); Knowledge Management & Team Awareness; Ubiquitous Computing.
AirConstellations supports a unique semi-fixed style of cross-device interactions via multiple self-spatially-aware armatures to which users can easily attach (or detach) tablets and other devices. In particular, AirConstellations affords highly flexible and dynamic device formations where users can bring multiple devices together in-air, with 2-5 armatures poseable in 7DoF within the same workspace, to suit the demands of their current task, social situation, app scenario, or mobility needs. This affords an interaction metaphor where relative orientation, proximity, attaching (or detaching) devices, and continuous movement into and out of ad-hoc ensembles can drive context-sensitive interactions. Yet all devices remain self-stable in useful configurations even when released in mid-air. We explore flexible physical arrangement, feedforward of transition options, and layering of devices in-air across a variety of multi-device app scenarios. These include video conferencing with flexible arrangement of the person-space of multiple remote participants around a shared task-space, layered and tiled device formations with overview+detail and shared-to-personal transitions, and flexible composition of UI panels and tool palettes across devices for productivity applications. A preliminary interview study highlights user reactions to AirConstellations, such as for minimally disruptive device formations, easier physical transitions, and balancing "seeing and being seen" in remote work.
FoodScrap: Promoting Rich Data Capture and Reflective Food Journaling Through Speech Input (DIS 2021)
Yuhan Luo et al. Topics: Voice User Interface (VUI) Design; Diet Tracking & Nutrition Management.
The factors influencing people's food decisions, such as one's mood and eating environment, are important information for fostering self-reflection and developing a personalized healthy diet. However, such information is difficult to collect consistently due to the heavy data capture burden. In this work, we examine how speech input supports capturing everyday food practice through a week-long data collection study (N = 11). We deployed FoodScrap, a speech-based food journaling app that allows people to capture food components, preparation methods, and food decisions. Using speech input, participants detailed their meal ingredients and elaborated on their food decisions by describing the eating moments, explaining their eating strategies, and assessing their food practice. Participants recognized that speech input facilitated self-reflection, but expressed concerns about re-recording, mental load, social constraints, and privacy. We discuss how speech input can support low-burden and reflective food journaling, as well as opportunities for effectively processing and presenting large amounts of speech data.
DIY: Helping People Assess the Correctness of Natural Language to SQL Systems (IUI 2021)
Arpit Narechania et al. Topics: Human-LLM Collaboration; Explainable AI (XAI); AI-Assisted Decision-Making & Automation.
Designing natural language interfaces for querying databases remains an important goal pursued by researchers in natural language processing, databases, and HCI. These systems receive natural language as input, translate it into a formal database query, and execute the query to compute a result. Because the responses from these systems are not always correct, it is important to provide people with mechanisms to assess the correctness of the generated query and the computed result. However, this assessment can be challenging for people who lack expertise in query languages. We present Debug-It-Yourself (DIY), an interactive technique that enables users to assess the responses from a state-of-the-art NL2SQL system for correctness and, if possible, fix errors. DIY provides users with a sandbox where they can interact with (1) the mappings between the question and the generated query, (2) a small-but-relevant subset of the underlying database, and (3) a multi-modal explanation of the generated query, supporting a back-of-the-envelope-calculation style of end-user debugging of the system's responses. Through an exploratory study with 12 users, we investigate how DIY helps users assess the correctness of the system's answers and detect and fix errors. Our observations reveal the benefits of DIY, provide insights into end-user debugging strategies, and underscore opportunities for further improving the user experience.
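As a toy illustration of the sandboxed-subset idea (hypothetical schema and query; not the DIY system itself), one can execute the generated SQL over a few sampled rows so the result is small enough to verify by hand:

```python
# Toy sketch of sandboxed NL2SQL verification (hypothetical example, not
# the DIY system): execute a generated query over a small sampled subset
# of the database so users can check the result by hand.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
rows = [(1, "east", 120.0), (2, "west", 80.0), (3, "east", 60.0),
        (4, "west", 200.0), (5, "east", 40.0)]
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

# Query produced by a hypothetical NL2SQL system for:
# "What is the total order amount per region?"
generated_sql = "SELECT region, SUM(amount) FROM orders GROUP BY region"

# Build a tiny sandbox table with a handful of rows the user can eyeball.
conn.execute("CREATE TABLE sandbox AS SELECT * FROM orders LIMIT 3")
print("sandbox rows:  ", conn.execute("SELECT * FROM sandbox").fetchall())

# Run the generated query on the sandbox; the user can verify the sums
# against the visible rows before trusting the full-database answer.
sandbox_sql = generated_sql.replace("orders", "sandbox")
print("sandbox result:", conn.execute(sandbox_sql).fetchall())
print("full result:   ", conn.execute(generated_sql).fetchall())
```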
Collecting and Characterizing Natural Language Utterances for Specifying Data Visualizations (CHI 2021)
Arjun Srinivasan et al., Tableau Research. Topics: Voice User Interface (VUI) Design; Interactive Data Visualization.
Natural language interfaces (NLIs) for data visualization are becoming increasingly popular both in academic research and in commercial software. Yet, there is a lack of empirical understanding of how people specify visualizations through natural language. We conducted an online study (N = 102), showing participants a series of visualizations and asking them to provide utterances they would pose to generate the displayed charts. From the responses, we curated a dataset of 893 utterances and characterized the utterances according to (1) their phrasing (e.g., commands, queries, questions) and (2) the information they contained (e.g., chart types, data aggregations). To help guide future research and development, we contribute this utterance dataset and discuss its applications toward the creation and benchmarking of NLIs for visualization.
CAST: Authoring Data-Driven Chart Animations (CHI 2021)
Tong Ge et al., Shandong University. Topics: Interactive Data Visualization; 3D Modeling & Animation.
We present CAST, an authoring tool that enables the interactive creation of chart animations. It introduces the visual specification of chart animations consisting of keyframes that can be played sequentially or simultaneously, and animation parameters (e.g., duration, delay). Building on Canis, a declarative chart animation grammar that leverages data-enriched SVG charts, CAST supports auto-completion for constructing both keyframes and keyframe sequences. It also enables users to refine the animation specification (e.g., aligning keyframes across tracks to play them together, adjusting delay) with direct manipulation, and to set other parameters for animation effects (e.g., animation type, easing function) using a control panel. In addition to describing how CAST infers recommendations for auto-completion, we present a gallery of examples to demonstrate the expressiveness of CAST and a user study verifying its learnability and usability. Finally, we discuss the limitations and potential of CAST as well as directions for future research.
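To give a flavor of what a declarative keyframe specification can look like, here is a hypothetical mini-grammar and interpreter, far simpler than the actual Canis grammar, that linearly interpolates mark opacity between two keyframes and honors a delay parameter:

```python
# Hypothetical mini keyframe spec and interpreter (far simpler than the
# actual Canis grammar) to illustrate declarative chart animation.
spec = {
    "marks": "bars",
    "keyframes": [
        {"time": 0.0, "opacity": 0.0},   # bars start invisible
        {"time": 1.0, "opacity": 1.0},   # fade in over one second
    ],
    "easing": "linear",
    "delay": 0.2,  # seconds before the animation starts
}

def opacity_at(spec, t):
    """Evaluate the animated property at time t (seconds)."""
    t = max(0.0, t - spec["delay"])
    frames = spec["keyframes"]
    if t <= frames[0]["time"]:
        return frames[0]["opacity"]
    if t >= frames[-1]["time"]:
        return frames[-1]["opacity"]
    # Find the surrounding keyframe pair and interpolate linearly.
    for a, b in zip(frames, frames[1:]):
        if a["time"] <= t <= b["time"]:
            frac = (t - a["time"]) / (b["time"] - a["time"])
            return a["opacity"] + frac * (b["opacity"] - a["opacity"])

for t in (0.0, 0.5, 0.7, 1.5):
    print(f"t={t:.1f}s -> opacity {opacity_at(spec, t):.2f}")
```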
Data@Hand: Fostering Visual Exploration of Personal Data on Smartphones Leveraging Speech and Touch Interaction (CHI 2021)
Young-Ho Kim et al., University of Maryland. Topics: Voice User Interface (VUI) Design; Interactive Data Visualization; Smartwatches & Fitness Bands.
Most mobile health apps employ data visualization to help people view their health and activity data, but these apps provide limited support for visual data exploration. Furthermore, despite its huge potential benefits, mobile visualization research in the personal data context is sparse. This work aims to empower people to easily navigate and compare their personal health data on smartphones by enabling flexible time manipulation with speech. We designed and developed Data@Hand, a mobile app that leverages the synergy of two complementary modalities: speech and touch. Through an exploratory study with 13 long-term Fitbit users, we examined how multimodal interaction helps participants explore their own health data. Participants successfully adopted multimodal interaction (i.e., speech and touch) for convenient and fluid data exploration. Based on the quantitative and qualitative findings, we discuss design implications and opportunities with multimodal interaction for better supporting visual data exploration on mobile devices.
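Flexible time manipulation with speech implies resolving spoken time expressions into concrete date ranges; the sketch below shows one simple way such resolution could work (a hypothetical parser, not Data@Hand's actual implementation):

```python
# Hypothetical sketch of resolving spoken time expressions into date
# ranges (not Data@Hand's actual implementation).
from datetime import date, timedelta

def resolve(expression, today=None):
    """Map a few spoken time expressions to (start, end) date ranges."""
    today = today or date.today()
    expr = expression.lower().strip()
    if expr == "last week":
        start = today - timedelta(days=today.weekday() + 7)  # last Monday
        return start, start + timedelta(days=6)
    if expr == "this month":
        return today.replace(day=1), today
    if expr.startswith("past ") and expr.endswith(" days"):
        n = int(expr.split()[1])
        return today - timedelta(days=n - 1), today
    raise ValueError(f"unrecognized time expression: {expression!r}")

if __name__ == "__main__":
    anchor = date(2021, 5, 14)  # a Friday, used as "today" for the demo
    for phrase in ("last week", "this month", "past 7 days"):
        print(phrase, "->", resolve(phrase, today=anchor))
```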
Learning to Automate Chart Layout Configurations Using Crowdsourced Paired Comparison (CHI 2021)
Aoyu Wu et al., Hong Kong University of Science and Technology. Topics: Interactive Data Visualization.
We contribute a method to automate parameter configurations for chart layouts by learning from human preferences. Existing charting tools usually determine layout parameters using predefined heuristics, producing sub-optimal layouts. People can repeatedly adjust multiple parameters (e.g., chart size, gap) to achieve visually appealing layouts. However, this trial-and-error process is unsystematic and time-consuming, with no guarantee of improvement. To address this issue, we develop Layout Quality Quantifier (LQ2), a machine learning model that learns to score chart layouts from pairwise crowdsourced data. Combined with optimization techniques, LQ2 recommends layout parameters that improve a chart's layout quality. We apply LQ2 to bar charts and conduct user studies evaluating its effectiveness by examining the quality of the layouts it produces. Results show that LQ2 can generate more visually appealing layouts than both laypeople and baselines. This work demonstrates the feasibility and uses of quantifying human preferences and aesthetics for chart layouts.
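A standard way to learn a scorer from paired comparisons, and a plausible stand-in for this kind of training setup (not LQ2's actual model), is to fit a logistic model on feature differences, RankSVM-style: if layout A beat layout B, the feature difference f(A) - f(B) gets label 1:

```python
# Sketch of learning a layout scorer from pairwise comparisons
# (a generic RankSVM-style approach, not LQ2's actual model).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy layout features: [chart_width, chart_height, bar_gap_ratio]
def features(layout):
    return np.array(layout, dtype=float)

# Hidden "ground truth" aesthetic score, used only to simulate crowd votes.
true_w = np.array([0.2, 0.3, -1.5])

# Simulate crowdsourced paired comparisons between random layouts.
X_diff, y = [], []
for _ in range(500):
    a, b = rng.uniform(0, 1, size=3), rng.uniform(0, 1, size=3)
    prefer_a = features(a) @ true_w > features(b) @ true_w
    X_diff.append(features(a) - features(b))
    y.append(1 if prefer_a else 0)

# A logistic model on feature differences recovers a linear scoring
# function: score(layout) = w . f(layout).
model = LogisticRegression(fit_intercept=False).fit(np.array(X_diff), y)
w = model.coef_[0]

# Recommend the best of several candidate layouts by learned score.
candidates = rng.uniform(0, 1, size=(10, 3))
best = candidates[np.argmax(candidates @ w)]
print("learned weights:", np.round(w, 2))
print("recommended layout (width, height, gap):", np.round(best, 2))
```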
InChorus: Designing Consistent Multimodal Interactions for Data Visualization on Tablet Devices (CHI 2020)
Arjun Srinivasan et al., Microsoft Research & Georgia Institute of Technology. Topics: Voice User Interface (VUI) Design; Interactive Data Visualization; Notification & Interruption Management.
While tablet devices are a promising platform for data visualization, supporting consistent interactions across different types of visualizations on tablets remains an open challenge. In this paper, we present multimodal interactions that function consistently across different visualizations, supporting common operations during visual data analysis. By considering standard interface elements (e.g., axes, marks) and grounding our design in a set of core concepts, including operations, parameters, targets, and instruments, we systematically develop interactions applicable to different visualization types. To exemplify how the proposed interactions collectively facilitate data exploration, we employ them in a tablet-based system, InChorus, that supports pen, touch, and speech input. Based on a study with 12 participants performing replication and fact-checking tasks with InChorus, we discuss how participants adapted to using multimodal input and highlight considerations for future multimodal visualization systems.
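The operations/parameters/targets/instruments framing suggests that every modality normalizes into one shared command structure; the sketch below illustrates that idea with hypothetical names (not InChorus's actual code):

```python
# Illustrative sketch (hypothetical names, not InChorus's code): pen,
# touch, and speech inputs all normalize to one shared command structure,
# which is what makes interactions consistent across visualizations.
from dataclasses import dataclass

@dataclass
class Command:
    operation: str    # e.g., "filter", "sort", "select"
    target: str       # interface element acted on, e.g., an axis or mark
    parameters: dict  # operation-specific arguments
    instrument: str   # modality that issued it: "pen", "touch", "speech"

def from_speech(utterance: str) -> Command:
    # A real system would parse the utterance; here we hard-wire one case.
    assert utterance == "sort bars by price descending"
    return Command("sort", "x-axis",
                   {"field": "price", "order": "desc"}, "speech")

def from_touch_drag(axis: str, direction: str) -> Command:
    # Dragging downward on an axis label could also mean a descending sort.
    order = "desc" if direction == "down" else "asc"
    return Command("sort", axis, {"order": order}, "touch")

def execute(cmd: Command):
    # Both commands below reach the same handler: modality-independent.
    print(f"{cmd.instrument}: {cmd.operation} {cmd.target} {cmd.parameters}")

execute(from_speech("sort bars by price descending"))
execute(from_touch_drag("x-axis", "down"))
```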
TandemTrack: Shaping Consistent Exercise Experience by Complementing a Mobile App with a Smart Speaker (CHI 2020)
Yuhan Luo et al., University of Maryland. Topics: Voice User Interface (VUI) Design; Fitness Tracking & Physical Activity Monitoring.
Smart speakers such as Amazon Echo present promising opportunities for exploring voice interaction in the domain of in-home exercise tracking. In this work, we examine whether and how voice interaction complements and augments a mobile app in promoting consistent exercise. We designed and developed TandemTrack, which combines a mobile app and an Alexa skill to support exercise regimens, data capture, feedback, and reminders. We then conducted a four-week between-subjects study deploying TandemTrack to 22 participants who were instructed to follow a short daily exercise regimen: one group used only the mobile app, and the other group used both the app and the skill. We collected rich data on individuals' exercise adherence and performance, as well as their use of voice and visual interactions, while examining how TandemTrack as a whole influenced their exercise experience. Reflecting on these data, we discuss the benefits and challenges of incorporating voice interaction to assist daily exercise, and implications for designing effective multimodal systems that support self-tracking and promote consistent exercise.