Does Adding Visual Signifiers in Animated Transitions Improve Interaction Discoverability?
Smartphones support diverse inputs; however, the multitude of devices and platforms makes it challenging for people to discover when and where interactions are meaningful. Motivated by the effectiveness of visual signifiers in communicating interactivity, we explore the viability of integrating temporary visual signifiers into animated transitions between UI screens to promote the discoverability of swipe-revealed widgets. We implemented two transition types (Container Transform, Panels) and compared them to a baseline. We found that transitions with a standard duration did not impact the discovery of swipe-revealed widgets (N=33). We ran a follow-up study (N=22) with extremely slow 5000 ms transitions to guarantee noticeability, but similarly found no impact on the discovery of swipe-revealed widgets, diverging from previous findings for visual signifiers. This raises interesting questions about the perception and understanding of interaction signifiers, and indicates a disconnect between noticeability and discoverability, while highlighting the difficulty of adapting established interface elements beyond their entrenched functionality.
Eva Mackamul et al. Université Grenoble-Alpes, CNRS, LIG. CHI 2025.
Studying the Simultaneous Visual Representation of Microgestures
Hand microgestures are promising for mobile interaction with wearable devices. However, they will not be adopted if practitioners cannot communicate to users the microgestures associated with the commands of their applications. This requires unambiguous representations that simultaneously show the multiple microgestures available to control an application. Using a systematic approach, we evaluate how these representations should be designed, contrasting 4 conditions depending on the microgestures (tap-swipe and tap-hold) and fingers (index and index-middle) considered. Based on the results, we design a simultaneous representation of microgestures for a given set of 14 application commands. We then evaluate the usability of the representation for novice users and its suitability for small screens compared with a baseline. Finally, we formulate 8 recommendations based on the results of all the experiments. In particular, redundant graphical and textual representations of microgestures should only be displayed for novice users.
Vincent Lambert et al. MobileHCI 2024.
Tutorial mismatches: understanding the frictions due to interface differences when following software video tutorials
Video tutorials are the main medium for learning novel software skills. However, the User Interface (UI) presented in a video tutorial may differ from the learner's UI because of customizations or differences in software versions. We investigate the frictions resulting from such differences on learners' ability to reproduce a task demonstrated in a video tutorial. Through a morphological analysis, we first identify 13 types of "interface differences" that vary in terms of availability, reachability, and spatial location of features in the interface. To better assess the frictions resulting from each of these differences, we then conduct a laboratory study with 26 participants instructed to reproduce a vector graphics editing task. Our results highlight interesting UI comparison behaviors and illustrate the various approaches employed to visually locate features.
Raphaël Perraud et al. DIS 2024.
Exploring Visual Signifier Characteristics to Improve the Perception of Affordances of In-Place Touch Inputs
Touch screens supporting different inputs such as 'Tap', 'Dwell', 'Double Tap', and 'Force Press' are omnipresent in modern devices, and yet this variety of interaction opportunities is rarely communicated to the user. Without visual signifiers, these potentially useful inputs remain unknown or underutilised. We propose a design space of visual signifier characteristics that may impact the perception of in-place one-finger inputs. We generated 36 designs and investigated their perception in an online survey (N=32) and an interactive experiment (N=24). The results suggest that visual signifiers increase the perception of input possibilities beyond 'Tap' and reduce perceived mental effort for participants, who also prefer added visual signifiers over a baseline. Our work informs how future touch-based interfaces could be designed to better communicate in-place single-finger input possibilities.
Eva Mackamul et al. MobileHCI 2023.
ChartDetective: Easy and Accurate Interactive Data Extraction from Complex Vector Charts
Extracting underlying data from rasterized charts is tedious and inaccurate; values might be partially occluded or hard to distinguish, and the quality of the image limits the precision of the data being recovered. To address these issues, we introduce a semi-automatic system leveraging vector charts to extract the underlying data easily and accurately. The system is designed to make the most of vector information by relying on a drag-and-drop interface combined with selection, filtering, and previsualization features. A user study showed that participants spent less than 4 minutes to accurately recover data from charts published at CHI with diverse styles, thousands of data points, a combination of different encodings, and elements partially or completely occluded. Compared to other approaches relying on raster images, our tool successfully recovered all data, even when hidden, with a 78% lower relative error.
Damien Masson et al. University of Waterloo. CHI 2023.
User Preference and Performance using Tagging and Browsing for Image Labeling
Visual content must be labeled to facilitate navigation and retrieval, or to provide ground truth data for supervised machine learning approaches. The efficiency of labeling techniques is crucial to produce numerous qualitative labels, but existing techniques remain sparsely evaluated. We systematically evaluate the efficiency of tagging and browsing tasks in relation to the number of images displayed, interaction modes, and image visual complexity. Tagging consists of focusing on a single image to assign it multiple labels (image-oriented strategy); browsing consists of focusing on a single label to assign it to multiple images (label-oriented strategy). In a first experiment, we focus on the nudges inducing participants to adopt one of the strategies (n=18). In a second experiment, we evaluate the efficiency of the strategies (n=24). Results suggest that an image-oriented strategy (tagging task) leads to shorter annotation times, especially for complex images, and that participants tend to adopt it regardless of the conditions they face.
Bruno Fruchard et al. Univ. Lille, Inria, CNRS, Centrale Lille. CHI 2023.
Charagraph: Interactive Generation of Charts for Realtime Annotation of Data-Rich Paragraphs
Documents often have paragraphs packed with numbers that are difficult to extract, compare, and interpret. To help readers make sense of data in text, we introduce the concept of Charagraphs: dynamically generated interactive charts and annotations for in-situ visualization, comparison, and manipulation of numeric data included within text. Three Charagraph characteristics are defined: leveraging related textual information about data; integrating textual and graphical representations; and interacting in different contexts. We contribute a document viewer to select in-text data; generate and customize Charagraphs; merge and refine a Charagraph using other in-text data; and identify, filter, compare, and sort data synchronized between text and visualization. Results of a study show that participants can easily create Charagraphs for diverse examples of data-rich text, and that when answering questions about data in text, participants were more accurate than when only reading the text.
Damien Masson et al. University of Waterloo. CHI 2023.
Relevance and Applicability of Hardware-independent Pointing Transfer Functions
Pointing transfer functions remain predominantly expressed in pixels per input count, which can generate different visual pointer behaviors with different input and output devices; we show in a first controlled experiment that even small hardware differences impact pointing performance with functions defined in this manner. We also demonstrate the applicability of "hardware-independent" transfer functions defined in physical units. We explore two methods to maintain hardware-independent pointer performance in operating systems that require hardware-dependent definitions: scaling them to the resolutions of the input and output devices, or selecting the OS acceleration setting that produces the closest visual behavior. In a second controlled experiment, we adapted a baseline function to different screen and mouse resolutions using both methods, and the resulting functions provided equivalent performance. Lastly, we provide a tool to calculate equivalent transfer functions between hardware setups, allowing users to match pointer behavior across different devices, and researchers to tune and replicate experiment conditions. Our work emphasizes, and hopefully facilitates, the idea that operating systems should be able to express pointing transfer functions in physical units, and to adjust them automatically to hardware setups.
Raiza Hanada et al. UIST 2021.
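The physical-units idea described in this abstract can be sketched in code. This is a hypothetical illustration, not the authors' tool: the gain function, CPI/PPI values, and function names are invented, and only the unit conversions (counts to metres via mouse CPI, metres to pixels via display PPI) follow standard definitions.

```python
INCH = 0.0254  # metres per inch

def counts_to_metres(counts, mouse_cpi):
    # Convert raw mouse counts to physical hand displacement (metres).
    return counts / mouse_cpi * INCH

def metres_to_pixels(metres, display_ppi):
    # Convert physical pointer displacement to output pixels.
    return metres / INCH * display_ppi

def gain(hand_speed):
    # Placeholder transfer function: pointer gain as a function of
    # hand speed (m/s). Illustrative only, not a function from the paper.
    return min(1.0 + 8.0 * hand_speed, 10.0)

def pointer_delta_px(counts, dt, mouse_cpi, display_ppi):
    # "Hardware-independent" formulation: gain is computed in physical
    # units, then scaled to the actual mouse CPI and display PPI.
    d_m = counts_to_metres(counts, mouse_cpi)
    speed = abs(d_m) / dt
    return metres_to_pixels(d_m * gain(speed), display_ppi)
```

With this formulation, doubling the mouse CPI doubles the raw counts for the same hand motion but leaves the pixel output unchanged on the same display, which is the property the abstract argues operating systems should support.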
Interaction Illustration Taxonomy: Classification of Styles and Techniques for Visually Representing Interaction Scenarios
Static illustrations are a ubiquitous means to represent interaction scenarios. Across papers and reports, these visuals demonstrate people's use of devices, explain systems, or show design spaces. Creating such figures is challenging, and very little is known about the overarching strategies for visually representing interaction scenarios. To ease this task, we contribute a unified taxonomy of the design elements that compose such figures. In particular, we provide a detailed classification of Structural and Interaction strategies, such as composition, visual techniques, dynamics, representation of users, and many others -- all in the context of the type of scenario. This taxonomy can inform researchers' choices when creating new figures by providing a concise synthesis of visual strategies and revealing approaches they were not aware of before. Furthermore, to support the community in creating further taxonomies, we also provide three open-source software tools facilitating the coding process and the visual exploration of the coding scheme.
Axel Antoine et al. Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL. CHI 2021.
Interaction Interferences: Implications of Last-Instant System State Changes
We study interaction interferences, situations where an unexpected change occurs in an interface immediately before the user performs an action, causing the corresponding input to be misinterpreted by the system. For example, a user tries to select an item in a list, but the list is automatically updated immediately before the click, causing the wrong item to be selected. First, we formally define interaction interferences and discuss their causes from behavioral and system-design perspectives. Then, we report the results of a survey examining users' perceptions of the frequency, frustration, and severity of interaction interferences. We also report a controlled experiment, based on state-of-the-art experimental protocols from neuroscience, that explores the minimum time interval, before clicking, below which participants could not refrain from completing their action. Finally, we discuss our findings and their implications for system design, paving the way for future work.
Philippe Schmid et al. UIST 2020.
Where is that Feature? Designing for Cross-Device Software Learnability
People increasingly access cross-device applications from their smartphones while on the go. Yet, they do not fully use the mobile versions for complex tasks, preferring the desktop version of the same application. We conducted a survey (N=77) to identify challenges when switching back and forth between devices. We discovered significant cross-device learnability issues, including that users often find exploring the mobile version frustrating, which leads to prematurely giving up on using the mobile version. Based on the findings, we created four design concepts as video prototypes to explore how to support cross-device learnability. The concepts vary in four key dimensions: the device involved, automation, temporality, and learning approach. Interviews (N=20) probing the design concepts identified individual differences affecting cross-device learning preferences, and that users are more motivated to use cross-device applications when offered the right cross-device learnability support. We conclude with future design directions for supporting seamless cross-device learnability.
Jessalyn Alvina et al. DIS 2020.
Chameleon: Bringing Interactivity to Static Digital Documents
Documents such as presentations, instruction manuals, and research papers are disseminated using various file formats, many of which barely support the incorporation of interactive content. To address this lack of interactivity, we present Chameleon, a system-wide tool that combines computer vision algorithms used for image identification with an open database format to allow for the layering of dynamic content. Using Chameleon, static documents can be easily upgraded by layering user-generated interactive content on top of static images, all while preserving the original static document format and without modifying existing applications. We describe the development of Chameleon, including the design and evaluation of vision-based image replacement algorithms, the new document-creation pipeline, as well as a user study evaluating Chameleon.
Damien Masson et al. University of Waterloo & Inria. CHI 2020.
Investigating the Necessity of Delay in Marking Menu Invocation
Delayed display of menu items is a core design component of marking menus, arguably to prevent visual distraction and foster the use of mark mode. We investigate these assumptions by contrasting the original marking menu design with immediately-displayed marking menus. In three controlled experiments, we fail to reveal obvious and systematic performance or usability advantages to using delay and mark mode. Only in very constrained settings, after significant training and with only two items to learn, did traditional marking menus show a time improvement of about 260 ms. Otherwise, we found an overall decrease in performance with delay, whether participants exhibited practiced or unpracticed behaviour. Our final study failed to demonstrate that an immediately-displayed menu interface is more visually disrupting than a delayed menu. These findings inform the costs and benefits of incorporating delay in marking menus, and motivate guidelines for situations in which its use is desirable.
Jay Henderson et al. University of Waterloo. CHI 2020.
Using High Frequency Accelerometer and Mouse to Compensate for End-to-end Latency in Indirect Interaction
End-to-end latency is the temporal difference between a user input and the corresponding output from a system. It has been shown to degrade user performance in both direct and indirect interaction. While latency can be reduced to some extent, it can also be compensated for in software by predicting the future position of the cursor based on previous positions, velocities, and accelerations. In this paper, we propose a hybrid hardware and software prediction technique specifically designed to partially compensate for end-to-end latency in indirect pointing. We combine a computer mouse with a high frequency accelerometer to predict the future location of the pointer using Euler-based equations. Our method yields more accurate predictions than previously introduced algorithms for direct touch. A controlled experiment also revealed that it can improve target acquisition time in pointing tasks.
Axel Antoine et al. Université de Lille. CHI 2018.
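The kind of Euler-based kinematic extrapolation this abstract refers to can be illustrated with a second-order Taylor step. This is a minimal sketch under assumed names and values, not the authors' algorithm: it simply extrapolates the pointer position one latency interval ahead from its current velocity and acceleration.

```python
def predict_position(pos, vel, acc, latency):
    # Second-order (Euler/Taylor) extrapolation of the pointer position
    # `latency` seconds ahead: x(t + L) = x + v*L + 0.5*a*L^2.
    # In the hybrid scheme, `acc` would come from the high frequency
    # accelerometer rather than being differentiated from positions.
    return pos + vel * latency + 0.5 * acc * latency * latency
```

For example, with a pointer at x = 100 px moving at 50 px/s and accelerating at 10 px/s², compensating 50 ms of latency would draw the cursor at about 102.51 px instead of 100 px.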
Pointing at a Distance with Everyday Smart Devices
Large displays are becoming commonplace at work, at home, and in public areas. However, interaction at a distance -- anything greater than arm's length -- remains cumbersome, restricts simultaneous use, and requires specific hardware augmentations of the display: touch layers, cameras, or dedicated input devices. Yet a rapidly increasing number of people carry smartphones and smartwatches, devices with rich input capabilities that can easily be used as input devices to control interactive systems. We contribute (1) the results of a survey on possession and use of smart devices, and (2) the results of a controlled experiment comparing seven distal pointing techniques on phone or watch, one- and two-handed, and using different input channels and mappings. Our results favor using a smartphone as a trackpad, but also explore performance tradeoffs that can inform the choice and design of distal pointing techniques for different contexts of use.
Shaishav Siddhpuria et al. University of Waterloo. CHI 2018.
Improving Discoverability and Expert Performance in Force-Sensitive Text Selection for Touch Devices with Mode Gauges
Text selection on touch devices can be a difficult task for users. Letters and words are often too small to select directly, and the enhanced interaction techniques provided by the OS -- magnifiers, selection handles, and methods for selecting at the character, word, or sentence level -- often lead to as many usability problems as they solve. The introduction of force-sensitive touchscreens has added another enhancement to text selection (using force for different selection modes); however, these modes are difficult to discover and many users continue to struggle with accurate selection. In this paper we report on an investigation of the design of touch-based and force-based text selection mechanisms, and describe two novel text-selection techniques that provide improved discoverability, enhanced visual feedback, and a higher performance ceiling for experienced users. Two evaluations show that one design successfully combined support for novices and experts, was never worse than the standard iOS technique, and was preferred by participants.
Alix Goguey et al. University of Saskatchewan. CHI 2018.
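Force-based selection modes of the kind this abstract describes can be sketched as a mapping from normalized contact force to selection granularity. The thresholds and names below are invented for illustration; they are not the Mode Gauges design from the paper.

```python
def selection_mode(force):
    # Map a normalized touch force (0.0-1.0) to a selection granularity.
    # Thresholds are made up for illustration purposes.
    if force < 0.33:
        return "character"
    elif force < 0.66:
        return "word"
    else:
        return "sentence"
```

A gauge rendered next to the touch point could visualize these thresholds as the user presses, which is one way to make otherwise hidden force modes discoverable.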
Introducing Transient Gestures to Improve Pan and Zoom on Touch Surfaces
Despite the ubiquity of touch-based input and the availability of increasingly computationally powerful touchscreen devices, there has been comparatively little work on enhancing basic canonical gestures such as swipe-to-pan and pinch-to-zoom. In this paper, we introduce transient pan and zoom, i.e. pan and zoom manipulation gestures that temporarily alter the view and can be rapidly undone. Leveraging typical touchscreen support for additional contact points, we design our transient gestures such that they co-exist with traditional pan and zoom interaction. We show that our transient pan-and-zoom reduces repetition in multi-level navigation and facilitates rapid movement between document states. We conclude with a discussion of user feedback, and directions for future research.
Jeff Avery et al. University of Waterloo. CHI 2018.