Strategies for the Reconciliation of Artistic Intent and Technical Constraints in Mixed Reality Performances
As immersive technologies advance, Mixed Reality Performances (MRP) increasingly integrate them, but often face technical challenges that force a balance between artistic vision and practical constraints. These constraints sometimes lead to unavoidable limitations, and the diversity of technologies and artistic goals in MRP prevents a one-size-fits-all solution to such dilemmas. This paper presents a model unifying strategies for addressing compromises between artistic intent and technological feasibility. Using reflexive thematic analysis of a performance-led case study, interviews, and independent case studies, we identify recurring strategies in Mixed Reality experiences with varying constraints. These strategies fall on an axis based on the audience's awareness of limitations and are categorized into five approaches: Avoid, Disguise, Tolerate, Integrate, or Leverage. We argue that this framework helps designers better navigate the limitations inherent in creating MRPs, offering practical pathways to align technological capabilities with creative objectives.
2025 · Pierrick Uro et al. · Digital Art Installations & Interactive Performance; Interactive Narrative & Immersive Storytelling · DIS
Facilitating the Parametric Definition of Geometric Properties in Programming-Based CAD
Parametric Computer-Aided Design (CAD) enables the creation of reusable models by integrating variables into geometric properties, facilitating customization without a complete redesign. However, creating parametric designs in programming-based CAD presents significant challenges. Users define models in a code editor using a programming language, with the application generating a visual representation in a viewport. This process involves complex programming and arithmetic expressions to describe geometric properties, linking various object properties to create parametric designs. Unfortunately, these applications offer little assistance, making the process unnecessarily demanding. We propose a solution that allows users to retrieve parametric expressions from the visual representation for reuse in the code, streamlining the design process. We demonstrated this concept through a proof-of-concept implemented in the programming-based CAD application OpenSCAD, and conducted an experiment with 11 users. Our findings suggest that this solution could significantly reduce design errors, improve interactivity and engagement in the design process, and lower the entry barrier for newcomers by reducing the mathematical skills typically required in programming-based CAD applications.
2024 · J Felipe Gonzalez Avila et al. · Desktop 3D Printing & Personal Fabrication; Circuit Making & Hardware Prototyping · UIST
Understanding the Challenges of OpenSCAD Users for 3D Printing
Direct manipulation has been established as the main interaction paradigm for Computer-Aided Design (CAD) for decades. It provides fast, incremental, and reversible actions that allow for an iterative process on a visual representation of the result. Despite its numerous advantages, some users prefer a programming-based approach, in which they describe the 3D model they design with a specific programming language such as OpenSCAD. It allows users to create complex structured geometries and facilitates abstraction. Unfortunately, most current knowledge about CAD practices focuses only on direct manipulation programs. In this study, we interviewed 20 programming-based CAD users to understand their motivations and challenges. Our findings reveal that this programming-oriented population faces difficulties in the design process, in tasks such as 3D spatial understanding, validation and code debugging, creation of organic shapes, and code-view navigation.
2024 · J Felipe Gonzalez Avila et al. · Université de Lille, Carleton University · Desktop 3D Printing & Personal Fabrication; Circuit Making & Hardware Prototyping · CHI
DirectGPT: A Direct Manipulation Interface to Interact with Large Language Models
We characterize and demonstrate how the principles of direct manipulation can improve interaction with large language models. This includes: continuous representation of generated objects of interest; reuse of prompt syntax in a toolbar of commands; manipulable outputs to compose or control the effect of prompts; and undo mechanisms. This idea is exemplified in DirectGPT, a user interface layer on top of ChatGPT that works by transforming direct manipulation actions into engineered prompts. A study shows participants were 50% faster and relied on 50% fewer and 72% shorter prompts to edit text, code, and vector images compared to baseline ChatGPT. Our work contributes a validated approach to integrating LLMs into traditional software using direct manipulation. Data, code, and demo available at https://osf.io/3wt6s.
2024 · Damien Masson et al. · University of Waterloo · Human-LLM Collaboration; Explainable AI (XAI); AI-Assisted Decision-Making & Automation · CHI
Statslator: Interactive Translation of NHST and Estimation Statistics Reporting Styles in Scientific Documents
Inferential statistics are typically reported using p-values (NHST) or confidence intervals on effect sizes (estimation). This is done using a range of styles, but some readers have preferences about how statistics should be presented and others have limited familiarity with alternatives. We propose a system to interactively translate statistical reporting styles in existing documents, allowing readers to switch between interval estimates, p-values, and standardized effect sizes, all using textual and graphical reports that are dynamic and user customizable. Forty years of CHI papers are examined. Using only the information reported in scientific documents, equations are derived and validated on simulated datasets to show that conversions between p-values and confidence intervals are accurate. The system helps readers interpret statistics in a familiar style, compare reports that use different styles, and even validate the correctness of reports. Code and data: https://osf.io/x4ue7
2023 · Damien Masson et al. · Interactive Data Visualization; Time-Series & Network Graph Visualization · UIST
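The p-value/confidence-interval conversions the abstract describes follow from standard t-distribution relationships; a minimal sketch, assuming a two-sided t-test with known degrees of freedom (the function names and signatures below are illustrative, not the paper's API):

```python
from scipy import stats

def p_to_ci(estimate, p, df, level=0.95):
    """Recover a confidence interval on an effect estimate from a reported
    two-sided p-value, assuming a t-test with `df` degrees of freedom."""
    t_obs = stats.t.ppf(1 - p / 2, df)   # |t| implied by the p-value
    se = abs(estimate) / t_obs           # back out the standard error
    half_width = stats.t.ppf(1 - (1 - level) / 2, df) * se
    return estimate - half_width, estimate + half_width

def ci_to_p(estimate, lo, hi, df, level=0.95):
    """Inverse conversion: two-sided p-value from a reported interval."""
    se = ((hi - lo) / 2) / stats.t.ppf(1 - (1 - level) / 2, df)
    t_obs = abs(estimate) / se
    return 2 * (1 - stats.t.cdf(t_obs, df))
```

The two functions are exact inverses of each other, which is the sense in which such conversions can be "validated on simulated datasets" using only the numbers reported in a document.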
Modeling and Reducing Spatial Jitter caused by Asynchronous Input and Output Rates
Jitter in interactive systems occurs when visual feedback is perceived as unstable or trembling even though the input signal is smooth or stationary. It can have multiple causes, such as sensing noise or feedback calculations introducing or exacerbating sensing imprecisions. Jitter can however occur even when each individual component of the pipeline works perfectly, as a result of the differences between the input frequency and the display refresh rate. This asynchronicity can introduce rapidly shifting latencies between the rendered feedback and its display on screen, which can result in trembling cursors or viewports. This paper contributes a better understanding of this particular type of jitter. We first detail the problem from a mathematical standpoint, from which we develop a predictive model of jitter amplitude as a function of input and output frequencies, and a new metric to measure this spatial jitter. Using touch input data gathered in a study, we developed a simulator to validate this model and to assess the effects of different techniques and settings with any output frequency. The most promising approach, when the time of the next display refresh is known, is to estimate (interpolate or extrapolate) the user's position at a fixed time interval before that refresh. When input events occur at 125 Hz, as is common in touch screens, we show that an interval of 4 to 6 ms works well for a wide range of display refresh rates. This method effectively cancels most of the jitter introduced by input/output asynchronicity, while introducing minimal imprecision or latency.
2020 · Axel Antoine et al. · Eye Tracking & Gaze Interaction; Visualization Perception & Cognition; Notification & Interruption Management · UIST
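The most promising approach above can be sketched as follows, assuming timestamped input samples at 125 Hz (the data layout and function name are illustrative, not taken from the paper):

```python
def estimate_position(samples, t_refresh, interval=0.005):
    """Estimate the input position at a fixed interval before a display
    refresh, per the strategy described in the abstract.

    samples: list of (timestamp_s, position) tuples, oldest first,
             e.g. 125 Hz touch samples spaced 8 ms apart.
    t_refresh: time of the next display refresh (seconds).
    interval: fixed offset before the refresh; 5 ms here, inside the
              4-6 ms range the paper reports works well at 125 Hz.
    """
    t_target = t_refresh - interval
    # Interpolate linearly between the two samples bracketing t_target...
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        if t0 <= t_target <= t1:
            alpha = (t_target - t0) / (t1 - t0)
            return p0 + alpha * (p1 - p0)
    # ...or extrapolate from the last two samples if t_target is newer
    # than the most recent input event.
    (t0, p0), (t1, p1) = samples[-2], samples[-1]
    alpha = (t_target - t0) / (t1 - t0)
    return p0 + alpha * (p1 - p0)
```

Because every frame samples the input stream at the same fixed offset before its refresh, the per-frame latency stops oscillating with the input/output phase difference, which is what removes the trembling.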
Have a SEAT on Stage: Restoring Trust with Spectator Experience Augmentation Techniques
When the collaboration between humans and machines happens in public, the audience can face difficulties in distinguishing the actual human contribution from the contribution of autonomous processes. In music concerts involving digital interfaces, doubts about the performer's contribution can drastically hinder audience interest. The disappearance of the direct physical link between actions and effects is one of the reasons for this confusion. Consequently, both artists and researchers have explored techniques to augment the experience of spectators. However, their respective impact on the multiple aspects of audience experience has not yet been formally compared. In this controlled study, we compare two techniques: pre-performance explanations and visual augmentations. Despite contradictory results on comprehension tasks, we show that, contrary to pre-performance explanations, visual augmentations improve the audience experience, increase their subjective comprehension, and restore trust in performers by reversing the doubt in their favour.
2020 · Olivier Capra et al. · Digital Art Installations & Interactive Performance; Interactive Narrative & Immersive Storytelling · DIS
Leveraging Distal Vibrotactile Feedback for Target Acquisition
Many touch-based interactions provide limited opportunities for direct tactile feedback; examples include multi-user touch displays, augmented-reality projections on passive surfaces, and mid-air input. In this paper, we consider distal feedback, through vibrotactile stimulation on a smartwatch placed on the user's non-dominant wrist, as an alternative to interaction-location vibrotactile feedback under the user's finger. We compare the effectiveness of interaction-location feedback vs. distal feedback through a Fitts's Law task completed on a smartphone. Results show that distal and interaction-location feedback both reduce errors in target acquisition and exhibit statistically comparable performance, suggesting that distal vibrotactile feedback is a suitable alternative when interaction-location feedback is not readily available.
2019 · Jay Henderson et al. · University of Waterloo · Vibrotactile Feedback & Skin Stimulation; Foot & Wrist Interaction · CHI
Using High Frequency Accelerometer and Mouse to Compensate for End-to-end Latency in Indirect Interaction
End-to-end latency corresponds to the temporal difference between a user input and the corresponding output from a system. It has been shown to degrade user performance in both direct and indirect interaction. While it can be reduced to some extent, latency can also be compensated for in software by predicting the future position of the cursor from previous positions, velocities, and accelerations. In this paper, we propose a hybrid hardware and software prediction technique specifically designed for partially compensating end-to-end latency in indirect pointing. We combine a computer mouse with a high frequency accelerometer to predict the future location of the pointer using Euler-based equations. Our prediction method is more accurate than previously introduced prediction algorithms for direct touch. A controlled experiment also revealed that it can improve target acquisition time in pointing tasks.
2018 · Axel Antoine et al. · Université de Lille · Prototyping & User Testing · CHI
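The Euler-based prediction mentioned above amounts to a constant-acceleration kinematic step; a minimal sketch, assuming a velocity estimated from recent mouse displacements and an acceleration read from the accelerometer (variable names are illustrative):

```python
def predict_position(pos, vel, acc, dt):
    """Predict where the pointer will be after the end-to-end latency
    `dt` using a second-order Euler step: p' = p + v*dt + 0.5*a*dt^2.

    pos: current pointer position (e.g. pixels)
    vel: current velocity estimated from mouse displacements (px/s)
    acc: current acceleration from the high-frequency accelerometer,
         mapped into the same pixel coordinate space (px/s^2)
    dt:  latency to compensate, in seconds (e.g. 0.05 for 50 ms)
    """
    return pos + vel * dt + 0.5 * acc * dt * dt
```

The hybrid aspect is that the acceleration term comes from a dedicated high-frequency sensor rather than being differentiated twice from noisy position samples, which is what makes the extrapolation more stable than purely software-based predictors.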
Ether-Toolbars: Evaluating Off-Screen Toolbars for Mobile Interaction
In mobile interaction, touchscreen input, while beneficial from the perspective of portability, has limited spatial accuracy due to the "fat finger problem". As a result, an important challenge in mobile interaction is to find solutions that balance the size of individual widgets against the number of widgets needed during interaction. In this work, to address display space limitations, we explore the design of off-screen toolbars (ether-toolbars) that leverage computer vision to expand application features by placing widgets adjacent to the display screen. We show how simple computer vision algorithms can be combined with the natural human ability to estimate physical placement to support highly accurate targeting. Our ether-toolbar design promises targeting accuracy approximating on-screen widget accuracy while significantly expanding the interaction space of mobile devices. Through two experiments, we examine off-screen content placement metaphors and the off-screen precision of participants accessing these toolbars. From the data of the second experiment, we derive and validate a basic model that reflects how users perceive mobile surroundings for ether-widgets. We also demonstrate a prototype system consisting of an inexpensive 3D-printed mount for a mirror that supports ether-toolbar implementations. Finally, we discuss the implications of our work and potential design extensions that can increase the usability and utility of ether-toolbars.
2018 · Hanae Rateau et al. · Hand Gesture Recognition; Motor Impairment Assistive Input Technologies · IUI
Introducing Transient Gestures to Improve Pan and Zoom on Touch Surfaces
Despite the ubiquity of touch-based input and the availability of increasingly computationally powerful touchscreen devices, there has been comparatively little work on enhancing basic canonical gestures such as swipe-to-pan and pinch-to-zoom. In this paper, we introduce transient pan and zoom, i.e., pan and zoom manipulation gestures that temporarily alter the view and can be rapidly undone. Leveraging typical touchscreen support for additional contact points, we design our transient gestures such that they co-exist with traditional pan and zoom interaction. We show that our transient pan-and-zoom reduces repetition in multi-level navigation and facilitates rapid movement between document states. We conclude with a discussion of user feedback and directions for future research.
2018 · Jeff Avery et al. · University of Waterloo · Hand Gesture Recognition; Prototyping & User Testing · CHI