Does Adding Visual Signifiers in Animated Transitions Improve Interaction Discoverability?
Smartphones support diverse inputs; however, the multitude of devices and platforms makes it challenging for people to discover when and where interactions are meaningful. Motivated by the effectiveness of visual signifiers in communicating interactivity, we explore the viability of integrating temporary visual signifiers in animated transitions between UI screens to promote the discoverability of swipe-revealed widgets. We implemented two transition types (Container Transform, Panels) and compared them to a baseline. We found that transitions with a standard duration did not impact the discovery of swipe-revealed widgets (N=33). We ran a follow-up study (N=22) with extremely slow 5000ms transitions to guarantee noticeability, but similarly found no impact on the discovery of swipe-revealed widgets, diverging from previous findings for visual signifiers. This raises interesting questions about the perception and understanding of interaction signifiers, indicates a disconnect between noticeability and discoverability, and highlights the difficulty of adapting established interface elements beyond their entrenched functionality.
2025 · Eva Mackamul et al. · Université Grenoble-Alpes, CNRS, LIG · Visualization Perception & Cognition · Prototyping & User Testing · CHI

Facilitating the Parametric Definition of Geometric Properties in Programming-Based CAD
Parametric computer-aided design (CAD) enables the creation of reusable models by integrating variables into geometric properties, facilitating customization without a complete redesign. However, creating parametric designs in programming-based CAD presents significant challenges. Users define models in a code editor using a programming language, and the application generates a visual representation in a viewport. This process involves complex programming and arithmetic expressions to describe geometric properties, linking various object properties to create parametric designs. Unfortunately, these applications lack assistance, making the process unnecessarily demanding. We propose a solution that allows users to retrieve parametric expressions from the visual representation for reuse in the code, streamlining the design process. We demonstrated this concept through a proof-of-concept implemented in the programming-based CAD application OpenSCAD, and conducted an experiment with 11 users. Our findings suggest that this solution could significantly reduce design errors, improve interactivity and engagement in the design process, and lower the entry barrier for newcomers by reducing the mathematical skills typically required in programming-based CAD applications.
2024 · J Felipe Gonzalez Avila et al. · Desktop 3D Printing & Personal Fabrication · Circuit Making & Hardware Prototyping · UIST

ChartDetective: Easy and Accurate Interactive Data Extraction from Complex Vector Charts
Extracting underlying data from rasterized charts is tedious and inaccurate; values might be partially occluded or hard to distinguish, and the quality of the image limits the precision of the data being recovered. To address these issues, we introduce a semi-automatic system leveraging vector charts to extract the underlying data easily and accurately. The system is designed to make the most of vector information by relying on a drag-and-drop interface combined with selection, filtering, and previsualization features. A user study showed that participants spent less than 4 minutes to accurately recover data from charts published at CHI with diverse styles, thousands of data points, a combination of different encodings, and elements partially or completely occluded. Compared to other approaches relying on raster images, our tool successfully recovered all data, even when hidden, with a 78% lower relative error.
2023 · Damien Masson et al. · University of Waterloo · Interactive Data Visualization · Data Storytelling · Visualization Perception & Cognition · CHI

User Preference and Performance using Tagging and Browsing for Image Labeling
Visual content must be labeled to facilitate navigation and retrieval, or to provide ground truth data for supervised machine learning approaches. The efficiency of labeling techniques is crucial to produce numerous qualitative labels, but existing techniques remain sparsely evaluated. We systematically evaluate the efficiency of tagging and browsing tasks in relation to the number of images displayed, the interaction modes, and the visual complexity of the images. Tagging consists of focusing on a single image to assign it multiple labels (image-oriented strategy), and browsing of focusing on a single label to assign it to multiple images (label-oriented strategy). In a first experiment, we focus on the nudges inducing participants to adopt one of the strategies (n=18). In a second experiment, we evaluate the efficiency of the strategies (n=24). Results suggest that an image-oriented strategy (tagging task) leads to shorter annotation times, especially for complex images, and that participants tend to adopt it regardless of the conditions they face.
2023 · Bruno Fruchard et al. · Univ. Lille, Inria, CNRS, Centrale Lille · User Research Methods (Interviews, Surveys, Observation) · Prototyping & User Testing · CHI

Charagraph: Interactive Generation of Charts for Realtime Annotation of Data-Rich Paragraphs
Documents often have paragraphs packed with numbers that are difficult to extract, compare, and interpret. To help readers make sense of data in text, we introduce the concept of Charagraphs: dynamically generated interactive charts and annotations for in-situ visualization, comparison, and manipulation of numeric data included within text. We define three Charagraph characteristics: leveraging related textual information about data; integrating textual and graphical representations; and interacting at different contexts. We contribute a document viewer to select in-text data; generate and customize Charagraphs; merge and refine a Charagraph using other in-text data; and identify, filter, compare, and sort data synchronized between text and visualization. Results of a study show that participants can easily create Charagraphs for diverse examples of data-rich text, and that when answering questions about data in text, participants answered more accurately than when only reading the text.
2023 · Damien Masson et al. · University of Waterloo · Interactive Data Visualization · Data Storytelling · CHI

Relevance and Applicability of Hardware-independent Pointing Transfer Functions
Pointing transfer functions remain predominantly expressed in pixels per input count, which can generate different visual pointer behaviors with different input and output devices; we show in a first controlled experiment that even small hardware differences impact pointing performance with functions defined in this manner. We also demonstrate the applicability of "hardware-independent" transfer functions defined in physical units. We explore two methods to maintain hardware-independent pointer performance in operating systems that require hardware-dependent definitions: scaling the functions to the resolutions of the input and output devices, or selecting the OS acceleration setting that produces the closest visual behavior. In a second controlled experiment, we adapted a baseline function to different screen and mouse resolutions using both methods, and the resulting functions provided equivalent performance. Lastly, we provide a tool to calculate equivalent transfer functions between hardware setups, allowing users to match pointer behavior across different devices, and researchers to tune and replicate experiment conditions. Our work emphasizes, and hopefully facilitates, the idea that operating systems should be able to formulate pointing transfer functions in physical units and to adjust them automatically to hardware setups.
2021 · Raiza Hanada et al. · Prototyping & User Testing · Computational Methods in HCI · UIST

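The first scaling method mentioned in this abstract can be sketched in a few lines. This is an illustrative assumption about the unit conversion involved, not the paper's tool: a gain expressed in physical units (output distance per unit of input distance) is rescaled into the pixels-per-count gain a hardware-dependent operating system expects, using the mouse resolution in counts per inch (CPI) and the screen resolution in pixels per inch (PPI).

```python
def physical_to_device_gain(gain_phys: float, mouse_cpi: float, screen_ppi: float) -> float:
    """Convert a hardware-independent gain into a device-specific one.

    gain_phys: dimensionless gain (output distance / input distance).
    mouse_cpi: input device resolution, in counts per inch.
    screen_ppi: display resolution, in pixels per inch.
    Returns the equivalent gain in pixels per input count.
    """
    # input distance (inches) = counts / mouse_cpi
    # output distance (inches) = pixels / screen_ppi
    # gain_phys = (pixels / screen_ppi) / (counts / mouse_cpi)
    # => pixels per count = gain_phys * screen_ppi / mouse_cpi
    return gain_phys * screen_ppi / mouse_cpi


# Example: a constant physical gain of 2 with a 1600 CPI mouse
# on a 100 PPI display becomes 0.125 pixels per count.
print(physical_to_device_gain(2.0, 1600, 100))
```

The same ratio explains the paper's first experiment: swapping the mouse or display changes `mouse_cpi` or `screen_ppi`, so a function frozen in pixels per count no longer produces the same physical pointer motion.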
Interaction Illustration Taxonomy: Classification of Styles and Techniques for Visually Representing Interaction Scenarios
Static illustrations are a ubiquitous means to represent interaction scenarios. Across papers and reports, these visuals demonstrate people's use of devices, explain systems, or show design spaces. Creating such figures is challenging, and very little is known about the overarching strategies for visually representing interaction scenarios. To ease this task, we contribute a unified taxonomy of the design elements that compose such figures. In particular, we provide a detailed classification of Structural and Interaction strategies, such as composition, visual techniques, dynamics, representation of users, and many others, all in the context of the type of scenario. This taxonomy can inform researchers' choices when creating new figures by providing a concise synthesis of visual strategies and revealing approaches they were not aware of before. Furthermore, to support the community in creating further taxonomies, we also provide three open-source software tools facilitating the coding process and the visual exploration of coding schemes.
2021 · Axel Antoine et al. · Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL · Interactive Data Visualization · Computational Methods in HCI · CHI

Chameleon: Bringing Interactivity to Static Digital Documents
Documents such as presentations, instruction manuals, and research papers are disseminated using various file formats, many of which barely support the incorporation of interactive content. To address this lack of interactivity, we present Chameleon, a system-wide tool that combines computer vision algorithms used for image identification with an open database format to allow for the layering of dynamic content. Using Chameleon, static documents can be easily upgraded by layering user-generated interactive content on top of static images, all while preserving the original static document format and without modifying existing applications. We describe the development of Chameleon, including the design and evaluation of vision-based image replacement algorithms, the new document-creation pipeline, as well as a user study evaluating Chameleon.
2020 · Damien Masson et al. · University of Waterloo & Inria · Interactive Data Visualization · Data Storytelling · CHI

Comparing Smartphone Speech Recognition and Touchscreen Typing for Composition and Transcription
Ruan et al. found that transcribing short phrases with speech recognition was nearly 200% faster than typing on a smartphone. We extend this comparison to a novel composition task, using a protocol that enables a controlled comparison with transcription. Results show that both composing and transcribing with speech is faster than typing. However, the magnitude of this difference is smaller for composition, and speech has a lower error rate than the keyboard during composition, but not during transcription. When transcribing, speech outperformed typing on most NASA-TLX measures, but when composing, there were no significant differences between typing and speech for any measure except physical demand.
2020 · Margaret Foley et al. · University of Waterloo · Voice User Interface (VUI) Design · Intelligent Voice Assistants (Alexa, Siri, etc.) · CHI

Manipulation, Learning, and Recall with Tangible Pen-Like Input
We examine two key human performance characteristics of a pen-like tangible input device that executes a different command depending on which corner, edge, or side contacts a surface. The manipulation time when transitioning between contacts is examined using physical mock-ups of three representative device sizes and a baseline pen mock-up. Results show the largest device is fastest overall, with minimal differences from a pen for equivalent transitions. Using a hardware prototype able to sense all 26 different contacts, a second experiment evaluates learning and recall. Results show almost all 26 contacts can be learned in a two-hour session, with an average of 94% recall after 24 hours. The results provide empirical evidence for the practicality, design, and utility of this type of tangible pen-like input.
2020 · Lisa A. Elkin et al. · University of Washington & University of Waterloo · Shape-Changing Interfaces & Soft Robotic Materials · Foot & Wrist Interaction · CHI

Next-Point Prediction for Direct Touch Using Finite-Time Derivative Estimation
End-to-end latency in interactive systems is detrimental to performance and usability, and comes from a combination of hardware and software delays. While these delays are steadily addressed by hardware and software improvements, progress is decelerating. In parallel, short-term input prediction has shown promising results in recent years, in both research and industry, as a complement to these efforts. We describe a new prediction algorithm for direct touch devices based on (i) a state-of-the-art finite-time derivative estimator, (ii) a smoothing mechanism based on input speed, and (iii) a two-step post-filtering of the prediction. Using both a pre-existing dataset of touch input as a benchmark and subjective data from a new user study, we show that this new predictor outperforms the predictors currently available in the literature and industry, based on metrics that model user-defined negative side-effects caused by input prediction. In particular, we show that our predictor can predict up to 2 or 3 times further than existing techniques with minimal negative side-effects.
2018 · Mathieu Nancel et al. · Hand Gesture Recognition · Computational Methods in HCI · UIST

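To make the idea of next-point prediction concrete, here is a deliberately naive sketch: velocity is estimated with a simple backward finite difference over the last two samples and extrapolated linearly. This is only the baseline concept; it uses none of the paper's contributions (the finite-time derivative estimator, speed-based smoothing, or two-step post-filtering), and the function and parameter names are illustrative assumptions.

```python
def predict_touch_point(samples, dt, lookahead):
    """Extrapolate the next touch position by linear prediction.

    samples: list of (x, y) positions sampled every `dt` seconds.
    lookahead: how far ahead to predict, in seconds (e.g. the
    end-to-end latency being compensated).
    """
    (x0, y0), (x1, y1) = samples[-2], samples[-1]
    # Backward finite difference: velocity from the last two samples.
    vx = (x1 - x0) / dt
    vy = (y1 - y0) / dt
    # Project the current position forward along that velocity.
    return (x1 + vx * lookahead, y1 + vy * lookahead)


# Example: moving diagonally one unit per sample, predicting
# two sample intervals ahead lands two units further along.
print(predict_touch_point([(0.0, 0.0), (1.0, 1.0)], 1.0, 2.0))
```

In practice this naive form amplifies sensor noise and overshoots at direction changes, which is exactly the class of side-effects the paper's smoothing and post-filtering steps are designed to limit.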
Characterizing Finger Pitch and Roll Orientation During Atomic Touch Actions
Atomic interactions in touch interfaces, like tap, drag, and flick, are well understood in terms of interaction design, but less is known about their physical performance characteristics. We carried out a study to gather baseline data about finger pitch and roll orientation during atomic touch input actions. Our results show differences in orientation and range for different fingers, hands, and actions, and we analyse the effect of tablet angle. Our data provides designers and researchers with a new resource to better understand what interactions are possible in different settings (e.g., when using the left or right hand), to design novel interaction techniques that use orientation as input (e.g., using finger tilt as an implicit mode), and to determine whether new sensing techniques are feasible (e.g., using fingerprints to identify specific finger touches).
2018 · Alix Goguey et al. · University of Saskatchewan, Inria · Hand Gesture Recognition · Eye Tracking & Gaze Interaction · CHI