Does Adding Visual Signifiers in Animated Transitions Improve Interaction Discoverability?

Smartphones support diverse inputs; however, the multitude of devices and platforms makes it challenging for people to discover when and where interactions are meaningful. Motivated by the effectiveness of visual signifiers in communicating interactivity, we explore the viability of integrating temporary visual signifiers into animated transitions between UI screens to promote the discoverability of swipe-revealed widgets. We implemented two transition types (Container Transform, Panels) and compared them to a baseline. We found that transitions with a standard duration did not impact the discovery of swipe-revealed widgets (N=33). We ran a follow-up study (N=22) with extremely slow 5000 ms transitions to guarantee noticeability, but similarly found no impact on the discovery of swipe-revealed widgets, diverging from previous findings for visual signifiers. This raises interesting questions about the perception and understanding of interaction signifiers, indicates a disconnect between noticeability and discoverability, and highlights the difficulty of adapting established interface elements beyond their entrenched functionality.

2025 · Eva Mackamul et al. · Université Grenoble-Alpes, CNRS, LIG · Visualization Perception & Cognition; Prototyping & User Testing · CHI
Facilitating the Parametric Definition of Geometric Properties in Programming-Based CAD

Parametric computer-aided design (CAD) enables the creation of reusable models by integrating variables into geometric properties, facilitating customization without a complete redesign. However, creating parametric designs in programming-based CAD presents significant challenges. Users define models in a code editor using a programming language, and the application generates a visual representation in a viewport. This process involves complex programming and arithmetic expressions that describe geometric properties and link various object properties to create parametric designs. Unfortunately, these applications lack assistance, making the process unnecessarily demanding. We propose a solution that allows users to retrieve parametric expressions from the visual representation for reuse in the code, streamlining the design process. We demonstrated this concept through a proof-of-concept implemented in the programming-based CAD application OpenSCAD, and conducted an experiment with 11 users. Our findings suggest that this solution could significantly reduce design errors, improve interactivity and engagement in the design process, and lower the entry barrier for newcomers by reducing the mathematical skills typically required in programming-based CAD applications.

2024 · J Felipe Gonzalez Avila et al. · Desktop 3D Printing & Personal Fabrication; Circuit Making & Hardware Prototyping · UIST
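The core idea of retrieving a parametric expression from a concrete value in the viewport can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's OpenSCAD implementation; all names (`parameters`, `expression_for`) are assumptions for the example.

```python
# Hypothetical sketch of mapping a value seen in the viewport back to the
# parametric expression that produced it, so the expression (rather than a
# hard-coded number) can be reused in the model code.

parameters = {"height": 20, "radius": 8, "wall": 2}

# Each geometric property records the expression string alongside its value.
properties = {
    "bore_radius": ("radius - wall", parameters["radius"] - parameters["wall"]),
    "lid_z": ("height", parameters["height"]),
}

def expression_for(value):
    """Return the parametric expression behind a value picked in the viewport."""
    for _name, (expr, val) in properties.items():
        if val == value:
            return expr
    return None

print(expression_for(6))   # -> "radius - wall"
print(expression_for(20))  # -> "height"
```

Writing `radius - wall` instead of the literal `6` is what keeps the model consistent when `radius` or `wall` later change; the paper's contribution is letting users obtain such expressions directly from the visual representation.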
User Preference and Performance using Tagging and Browsing for Image Labeling

Visual content must be labeled to facilitate navigation and retrieval, or to provide ground-truth data for supervised machine-learning approaches. The efficiency of labeling techniques is crucial to producing numerous qualitative labels, but existing techniques remain sparsely evaluated. We systematically evaluate the efficiency of tagging and browsing tasks in relation to the number of images displayed, the interaction modes, and the visual complexity of images. Tagging consists of focusing on a single image to assign multiple labels (image-oriented strategy), and browsing consists of focusing on a single label to assign it to multiple images (label-oriented strategy). In a first experiment, we focus on the nudges inducing participants to adopt one of the strategies (n=18). In a second experiment, we evaluate the efficiency of the strategies (n=24). Results suggest that an image-oriented strategy (tagging task) leads to shorter annotation times, especially for complex images, and that participants tend to adopt it regardless of the conditions they face.

2023 · Bruno Fruchard et al. · Univ. Lille, Inria, CNRS, Centrale Lille · User Research Methods (Interviews, Surveys, Observation); Prototyping & User Testing · CHI
Charagraph: Interactive Generation of Charts for Realtime Annotation of Data-Rich Paragraphs

Documents often have paragraphs packed with numbers that are difficult to extract, compare, and interpret. To help readers make sense of data in text, we introduce the concept of Charagraphs: dynamically generated interactive charts and annotations for in-situ visualization, comparison, and manipulation of numeric data included within text. Three Charagraph characteristics are defined: leveraging related textual information about data; integrating textual and graphical representations; and interacting in different contexts. We contribute a document viewer to select in-text data; generate and customize Charagraphs; merge and refine a Charagraph using other in-text data; and identify, filter, compare, and sort data synchronized between text and visualization. Results of a study show participants can easily create Charagraphs for diverse examples of data-rich text, and when answering questions about data in the text, participants were more accurate than when only reading the text.

2023 · Damien Masson et al. · University of Waterloo · Interactive Data Visualization; Data Storytelling · CHI
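The first step behind such a system, pulling comparable numeric values out of a data-rich sentence, can be sketched in a few lines. This is a minimal, assumed illustration of the general idea, not Charagraph's actual extraction pipeline; the example sentence and the regular expression are invented for the sketch.

```python
import re

# Minimal sketch: extract (year, value) pairs and their shared unit from a
# data-rich sentence, producing rows that could feed an in-situ chart.
text = ("The population grew by 2.4 million in 2010, 3.1 million in 2015, "
        "and 4.8 million in 2020.")

# Capture a numeric value followed by the unit 'million' and its year.
pattern = re.compile(r"(\d+(?:\.\d+)?) million in (\d{4})")
data = [(int(year), float(value)) for value, year in pattern.findall(text)]

print(data)  # -> [(2010, 2.4), (2015, 3.1), (2020, 4.8)]
```

Keeping the character offsets of each match (via `pattern.finditer`) would additionally let a viewer highlight each number in the text when its bar is hovered, which is the kind of text-chart synchronization the abstract describes.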
Relevance and Applicability of Hardware-independent Pointing Transfer Functions

Pointing transfer functions remain predominantly expressed in pixels per input count, which can generate different visual pointer behaviors with different input and output devices; we show in a first controlled experiment that even small hardware differences impact pointing performance with functions defined in this manner. We also demonstrate the applicability of "hardware-independent" transfer functions defined in physical units. We explore two methods to maintain hardware-independent pointer performance in operating systems that require hardware-dependent definitions: scaling the functions to the resolutions of the input and output devices, or selecting the OS acceleration setting that produces the closest visual behavior. In a second controlled experiment, we adapted a baseline function to different screen and mouse resolutions using both methods, and the resulting functions provided equivalent performance. Lastly, we provide a tool to calculate equivalent transfer functions between hardware setups, allowing users to match pointer behavior across devices, and researchers to tune and replicate experimental conditions. Our work emphasizes, and hopefully facilitates, the idea that operating systems should be able to formulate pointing transfer functions in physical units and adjust them automatically to hardware setups.

2021 · Raiza Hanada et al. · Prototyping & User Testing; Computational Methods in HCI · UIST
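The resolution-scaling method can be sketched for the simplest case. This is a minimal sketch under a strong assumption, a constant gain, whereas real transfer functions vary gain with input speed; the function name and example values are invented for the illustration.

```python
# Minimal sketch (constant-gain assumption): convert a hardware-independent
# gain defined in physical units (metres of pointer travel per metre of
# mouse travel) into the pixels-per-count gain an OS-level definition needs.

def px_per_count_gain(physical_gain, mouse_cpi, screen_ppi):
    """physical_gain: metres of pointer travel per metre of mouse travel.
    mouse_cpi: mouse resolution in counts per inch.
    screen_ppi: screen resolution in pixels per inch."""
    # One count corresponds to 1/CPI inch of mouse travel, and one pixel
    # spans 1/PPI inch on screen, so preserving the same physical behaviour
    # requires physical_gain * PPI / CPI pixels per count.
    return physical_gain * screen_ppi / mouse_cpi

# The same physical gain of 2.0 maps to different OS-level gains on
# different hardware, which is why pixel-based definitions do not transfer:
print(px_per_count_gain(2.0, mouse_cpi=1000, screen_ppi=100))  # -> 0.2
print(px_per_count_gain(2.0, mouse_cpi=400, screen_ppi=200))   # -> 1.0
```

Applying this scaling pointwise over a full gain curve (gain as a function of speed) is the natural extension of the same unit analysis.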
Interaction Illustration Taxonomy: Classification of Styles and Techniques for Visually Representing Interaction Scenarios

Static illustrations are a ubiquitous means of representing interaction scenarios. Across papers and reports, these visuals demonstrate people's use of devices, explain systems, or show design spaces. Creating such figures is challenging, and very little is known about the overarching strategies for visually representing interaction scenarios. To support this task, we contribute a unified taxonomy of the design elements that compose such figures. In particular, we provide a detailed classification of structural and interaction strategies, such as composition, visual techniques, dynamics, representation of users, and many others, all in the context of the type of scenario. This taxonomy can inform researchers' choices when creating new figures by providing a concise synthesis of visual strategies and revealing approaches they were not previously aware of. Furthermore, to support the community in creating further taxonomies, we also provide three open-source software tools that facilitate the coding process and the visual exploration of coding schemes.

2021 · Axel Antoine et al. · Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL · Interactive Data Visualization; Computational Methods in HCI · CHI
Next-Point Prediction for Direct Touch Using Finite-Time Derivative Estimation

End-to-end latency in interactive systems is detrimental to performance and usability, and comes from a combination of hardware and software delays. While these delays are steadily addressed by hardware and software improvements, the pace of improvement is decelerating. In parallel, short-term input prediction has shown promising results in recent years, in both research and industry, as a complement to these efforts. We describe a new prediction algorithm for direct touch devices based on (i) a state-of-the-art finite-time derivative estimator, (ii) a smoothing mechanism based on input speed, and (iii) a two-step post-filtering of the prediction. Using both a pre-existing dataset of touch input as a benchmark and subjective data from a new user study, we show that this new predictor outperforms the predictors currently available in the literature and industry, based on metrics that model user-defined negative side-effects caused by input prediction. In particular, we show that our predictor can predict up to 2 or 3 times further than existing techniques with minimal negative side-effects.

2018 · Mathieu Nancel et al. · Hand Gesture Recognition; Computational Methods in HCI · UIST
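The general principle of next-point prediction can be sketched in its simplest form. This is a deliberately simplified illustration, far cruder than the paper's predictor (which uses a finite-time derivative estimator with speed-based smoothing and two-step post-filtering): a plain finite-difference velocity with linear extrapolation, and all names are assumptions for the sketch.

```python
# Simplified sketch of next-point prediction for touch input: estimate the
# finger's velocity from the two most recent samples, then extrapolate the
# position 'latency' time units ahead to compensate for end-to-end latency.

def predict(points, timestamps, latency):
    """points: recent (x, y) samples; timestamps: their times, in the same
    unit as latency (e.g. milliseconds)."""
    (x0, y0), (x1, y1) = points[-2], points[-1]
    dt = timestamps[-1] - timestamps[-2]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt  # finite-difference velocity
    # Linear extrapolation: assumes the velocity stays constant over 'latency'.
    return (x1 + vx * latency, y1 + vy * latency)

# A finger that moved 1 px along x over the last 10 ms, predicted 50 ms ahead:
print(predict([(0.0, 0.0), (1.0, 0.0)], [0.0, 10.0], 50.0))  # -> (6.0, 0.0)
```

Such naive extrapolation overshoots on direction changes and amplifies sensor noise, which is precisely the kind of user-visible side-effect the paper's smoothing and post-filtering stages are designed to suppress.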