Investigating Hand-Bound Pads for AR Input Using Hand-Tracking Only
Interaction in Augmented Reality primarily relies on raycast pointing and mid-air touch. An alternative consists of using the non-dominant hand as a touch-sensitive surface, enabling more comfortable, less fatiguing input. AR UI design guidelines have so far discouraged this alternative because of poor hand tracking performance when the hands overlap, favoring touchpads in the air near the hand, rather than on the hand. But significant improvements to the hand tracking capabilities of recent commodity headsets suggest that on-hand pads may now be feasible. We develop an on-hand touchpad prototype and conduct two studies that involve both discrete input and continuous control tasks. The first study compares such on-hand pads to baseline in-air and on-object pads, showing comparable performance despite some limitations in tracking accuracy. The second study quantifies the advantage of on-hand and in-air pads over on-object pads during transitions between touchpad input and other physical hand activities.
2025 · Camille Dupré et al. · Hand Gesture Recognition; AR Navigation & Context Awareness · MobileHCI

"Should I choose a smaller model?": Understanding ML Model Selection and Its Impact on Sustainability
The increasing accessibility of large machine learning (ML) models has resulted in their widespread adoption in everyday products, with a correspondingly negative environmental impact. Selecting more suitable ML models could not only improve training time and achievable accuracy, but also long-term sustainability. However, ML developers' model selection process remains underexplored, especially with respect to sustainability trade-offs. Our interviews with 13 ML developers showed that participants select models mainly based on familiarity, accuracy and interpretability, but often overlook sustainability. They critically reflected on the current trends of large models and the lack of available information regarding model sustainability. We present implications for the ML and HCI communities, emphasizing the importance of critical reflection on model selection in education and practice. Based on our insights, we provide initial recommendations for promoting model sustainability evaluation and how the HCI community can assist in making sustainable model alternatives more accessible.
2025 · Eya Ben chaaben et al. (Inria Paris Saclay, ExSitu) · AI-Assisted Decision-Making & Automation; Sustainable HCI; Ecological Design & Green Computing · CHI

Evaluation of a Tailored Mobile Application for Self-Management of Low Back Pain: Towards a Metamodel for Designing Behavior Change Technologies
The mobile health market is rapidly developing, but few apps follow evidence-based guidelines. Literature recommends personalized systems grounded in behavioral science, involving healthcare professionals in design to maximize effectiveness. To address this, we propose a metamodel to guide designers. This article discusses its application to low back pain self-management, focusing on four patient profiles: Unmotivated, Cautious, Depressed, and Confident. We evaluated the app over one month with 60 users. Of these, 32 users received a version of the application tailored to their profile, and 28 users received a version of the application without tailoring (no recommendations or motivational messages). We assessed user experience, engagement and psychological characteristics involved in the behavior change process. Results showed a satisfactory user experience, an impact of tailoring on user behavior, and features that reduce fears and false beliefs and increase self-efficacy. Further efforts are needed to increase user engagement and observe an impact on long-term behavior.
2025 · Florian Debackere et al. (CNRS, Laboratoire interdisciplinaire des sciences du numérique, Université Paris-Saclay) · Mental Health Apps & Online Support Communities; Chronic Disease Self-Management (Diabetes, Hypertension, etc.) · CHI

FusAIn: Composing Generative AI Visual Prompts Using Pen-based Interaction
Although current generative AI (GenAI) enables designers to create novel images, its focus on text-based and whole-image interaction limits expressive engagement with visual materials. Based on the design concept of deconstruction and reconstruction of digital visual attributes for visual prompts, we present FusAIn, a GenAI prompt composition tool that lets designers create personalized pens by loading them with objects or attributes such as color or texture. GenAI then fuses the pen's contents to create new images. Extracting and reusing inspirational material matches designers' existing work practices, making GenAI more contextualized for professional design. A study with 12 designers shows how FusAIn improves their ability to define visual details at different levels that are difficult to express with current GenAI prompts. Pen-based interaction lets them maintain fine-grained control over generated results, increasing the editability and reusability of generated images. We discuss the benefits of "composition as prompts" and directions for future research.
2025 · Xiaohan Peng et al. (Université Paris-Saclay, CNRS, Inria, ExSitu, LISN) · Generative AI (Text, Image, Music, Video) · CHI

Lost in Magnitudes: Exploring Visualization Designs for Large Value Ranges
We explore the design of visualizations for values spanning multiple orders of magnitude; we call them Orders of Magnitude Values (OMVs). Visualization researchers have shown that separating OMVs into two components, the mantissa and the exponent, and encoding them separately overcomes limitations of linear and logarithmic scales. However, only a small number of such visualizations have been tested, and the design guidelines for visualizing the mantissa and exponent separately remain under-explored. To initiate this exploration, better understand the factors influencing the effectiveness of these visualizations, and create guidelines, we adopt a multi-stage workflow. We introduce a design space for visualizing mantissa and exponent, systematically generating and qualitatively evaluating all possible visualizations within it. From this evaluation, we derive guidelines. We select two visualizations that align with our guidelines and test them using a crowdsourcing experiment, showing they facilitate quantitative comparisons and increase confidence in interpretation compared to the state-of-the-art.
2025 · Katerina Batziakoudi et al. (Berger-Levrault; Inria, Aviz) · Interactive Data Visualization; Time-Series & Network Graph Visualization; Visualization Perception & Cognition · CHI

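The mantissa/exponent split this work builds on is the standard base-10 scientific-notation decomposition. A minimal sketch of that decomposition (the function name `omv_components` is ours, not from the paper):

```python
import math

def omv_components(value):
    """Split a positive value into (mantissa, exponent) so that
    value == mantissa * 10**exponent, with 1 <= mantissa < 10.
    The two components can then be encoded by separate visual channels."""
    exponent = math.floor(math.log10(value))
    mantissa = value / 10**exponent
    return mantissa, exponent
```

For example, 4230 decomposes into a mantissa of 4.23 and an exponent of 3, so a visualization can map the exponent to one channel (e.g. position) and the mantissa to another (e.g. length).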
When Should I Lead or Follow: Understanding Initiative Levels in Human-AI Collaborative Gameplay
Dynamics in Human-AI interaction should lead to more satisfying and engaging collaboration. Key open questions are how to design such interactions and the role personal goals and expectations play. We developed three AI partners of varying initiative (leader, follower, shifting) in a collaborative game called Geometry Friends. We conducted a within-subjects experiment with 60 participants to assess personal AI partner preference and performance satisfaction as well as perceived warmth and competence of AI partners. Results show that AI partners following human initiative are perceived as warmer and more collaborative. However, some participants preferred AI leaders for their independence and speed, despite being seen as less friendly. This suggests that assigning a leadership role to the AI partner may be suitable for time-sensitive scenarios. We identify design factors for developing collaborative AI agents with varying levels of initiative to create more effective human-AI teams that consider context and individual preference.
2024 · Inês Lobo et al. · Generative AI (Text, Image, Music, Video); Creative Collaboration & Feedback Systems · DIS

DesignPrompt: Using Multimodal Interaction for Design Exploration with Generative AI
Visually oriented designers often struggle to create effective generative AI (GenAI) prompts. A preliminary study identified specific issues in composing and fine-tuning prompts, as well as needs in accurately translating intentions into rich input. We developed DesignPrompt, a moodboard tool that lets designers combine multiple modalities — images, color, text — into a single GenAI prompt and tweak the results. We ran a comparative structured observation study with 12 professional designers to better understand their intent expression, expectation alignment and transparency perception using DesignPrompt and text input GenAI. We found that multimodal prompt input encouraged designers to explore and express themselves more effectively. Designers' interaction preferences change according to their overall sense of control over the GenAI and whether they are seeking inspiration or a specific image. Designers developed innovative uses of DesignPrompt, including elaborate multimodal prompts and a multimodal prompt pattern that maximizes novelty while ensuring consistency.
2024 · Xiaohan Peng et al. · Generative AI (Text, Image, Music, Video); Graphic Design & Typography Tools; Creative Collaboration & Feedback Systems · DIS

TriPad: Touch Input in AR on Ordinary Surfaces with Hand Tracking Only
TriPad enables opportunistic touch interaction in Augmented Reality using hand tracking only. Users declare the surface they want to appropriate with a simple hand tap gesture. They can then use this surface at will for direct and indirect touch input. TriPad only involves analyzing hand movements and postures, without the need for additional instrumentation, scene understanding or machine learning. TriPad thus works on a variety of flat surfaces, including glass. It also ensures low computational overhead on devices that typically have a limited power budget. We describe the approach, and report on two user studies. The first study demonstrates the robustness of TriPad's hand movement interpreter on different surface materials. The second study compares TriPad against direct mid-air AR input techniques on both discrete and continuous tasks and with different surface orientations. TriPad achieves a better speed-accuracy trade-off overall, improves comfort and minimizes fatigue.
2024 · Camille Dupré et al. (Université Paris-Saclay, CNRS, Inria, Carl Berger-Levrault) · Shape-Changing Interfaces & Soft Robotic Materials; Full-Body Interaction & Embodied Input · CHI

PITAS: Sensing and Actuating Embedded Robotic Sheet for Physical Information Communication
This work presents PITAS, a thin-sheet robotic material composed of a reversible phase transition actuating layer and a heating/sensing layer. The synthetic sheet material enables non-expert makers to create shape-changing devices that can locally or remotely convey physical information such as shape, color, texture and temperature changes. PITAS sheets can be manipulated into various 2D shapes or 3D geometries using subtractive fabrication methods such as laser, vinyl, or manual cutting, or an optional additive 3D printing method for creating 3D objects. After presenting the design of PITAS, this paper describes a study conducted with thirteen makers to gauge the accessibility, design space, and limitations encountered when PITAS is used as a soft robotic material while designing physical information communication devices. Lastly, this work reports on the results of a mechanical and electrical evaluation of PITAS and presents application examples to demonstrate its utility.
2022 · Tingyu Cheng et al. (Interactive Computing) · Shape-Changing Interfaces & Soft Robotic Materials; Shape-Changing Materials & 4D Printing · CHI

KeyTch: Combining the Keyboard with a Touchscreen for Rapid Command Selection on Toolbars
In this paper, we address the challenge of reducing mouse pointer transitions from the working object (e.g. text document) to simple or multi-level toolbars on desktop computers. To this end, we introduce KeyTch (pronounced 'Keetch'), a novel approach for command selection on toolbars based on the combined use of the keyboard with a touchscreen. The toolbar is displayed on the touchscreen, which is positioned below the keyboard. Users can select commands by performing gestures combining a key press with the pinky finger and a screen touch with the thumb of the same hand. After analyzing the design properties of KeyTch, a preliminary experiment validates that users can perform such gestures and reach the entire touchscreen surface with the thumb. A first user study then shows that direct touch outperforms indirect pointing to reach items on a simple toolbar displayed on the touchscreen. In a second study, we validate that KeyTch interaction techniques outperform the mouse for selecting items on a multi-level toolbar displayed on the touchscreen, allowing users to select up to 720 commands with an accuracy above 95%, or 480 commands with an accuracy above 97%. Finally, two follow-up studies validate the benefits of KeyTch when used in a more integrated context.
2021 · Elio Keddisseh et al. (Universite Paul Sabatier, Oktal Sydac) · Foot & Wrist Interaction; Knowledge Worker Tools & Workflows · CHI

Physiologically Driven Storytelling: Concept and Software Tool
We put forth Physiologically Driven Storytelling, a new approach to interactive storytelling where narratives adaptively unfold based on the reader's physiological state. We first describe a taxonomy framing how physiological signals can be used to drive interactive systems both as input and output. We then propose applications to interactive storytelling and describe the implementation of a software tool to create Physiological Interactive Fiction (PIF). The results of an online study (N=140) provided guidelines towards augmenting the reading experience. PIF was then evaluated in a lab study (N=14) to determine how physiological signals can be used to infer a reader's state. Our results show that breathing, electrodermal activity, and eye tracking can help differentiate positive from negative tones, and monotonous from exciting events. This work demonstrates how PIF can support storytelling in creating engaging content and experience tailored to the reader. Moreover, it opens the space to future physiologically driven systems within broader application areas.
2020 · Jérémy Frey et al. (Ullo & Interdisciplinary Center (IDC) Herzliya) · Biosensors & Physiological Monitoring; Interactive Narrative & Immersive Storytelling · CHI

An Exploratory Study on Visual Exploration of Model Simulations by Multiple Types of Experts
Experts in different domains rely increasingly on simulation models of complex processes to reach insights, make decisions, and plan future projects. These models are often used to study possible trade-offs, as experts try to optimise multiple conflicting objectives in a single investigation. Understanding all the model intricacies, however, is challenging for a single domain expert. We propose a simple approach to support multiple experts when exploring complex model results. First, we reduce the model exploration space, then present the results on a shared interactive surface, in the form of a scatterplot matrix and linked views. To explore how multiple experts analyse trade-offs using this setup, we carried out an observational study focusing on the link between expertise and insight generation during the analysis process. Our results reveal the different exploration strategies and multi-storyline approaches that domain experts adopt during trade-off analysis, and inform our recommendations for collaborative model exploration systems.
2019 · Nadia Boukhelifa et al. (UMR GMPA, AgroParisTech, INRA, Univ. Paris-Saclay) · Interactive Data Visualization; User Research Methods (Interviews, Surveys, Observation) · CHI

Multi-plié: A Linear Foldable and Flattenable Interactive Display to Support Efficiency, Safety and Collaboration
We present the design concept of an accordion-fold interactive display to address the limits of touch-based interaction in airliner cockpits. Based on an analysis of pilot activity, tangible design principles for this design concept are identified. Two resulting functional prototypes are explored during participatory workshops with pilots, using activity scenarios. This exploration validated the design concept by revealing its ability to match pilot responsibilities in terms of safety, efficiency and collaboration. It provides an efficient visual perception of the system for real-time collaborative operations and tangible interaction to strengthen the perception of action and to manage safety through anticipation and awareness. The design work and insights enabled us to further specify our needs regarding flexible screens. They also helped to better characterize the design concept in terms of continuity of a developed surface, predictability of aligned folds and pleat-face roles, embodied interactive properties, and flexibility through affordable reconfigurations.
2019 · Sylvain Pauchet et al. (University of Toulouse - ENAC & Astrolab) · Shape-Changing Interfaces & Soft Robotic Materials; Knowledge Worker Tools & Workflows · CHI

Automation: Danger or Opportunity? Designing and Assessing Automation for Interactive Systems
This course takes a practical approach to introducing the principles, methods and tools of task modeling and how this technique can support the identification of automation opportunities, dangers and limitations. It includes a technical, interactive hands-on exercise on how to "do it right", addressing questions such as: How to go from task analysis to task models? How to identify tasks that are good candidates for automation (through analysis and simulation)? How to identify reliability and usability dangers added by automation? How to design usable automation at system, application and interaction levels? And more.
2018 · Philippe Palanque et al. (ICS-IRIT, Université Paul Sabatier Toulouse 3) · AI-Assisted Decision-Making & Automation; Impact of Automation on Work · CHI

Taking into account Sensory Knowledge: the case of Geo-technologies for children with visual impairments
This paper argues for designing geo-technologies supporting non-visual sensory knowledge. Sensory knowledge refers to the implicit and explicit knowledge guiding our uses of our senses to understand the world. To support our argument, we build on an 18-month field study on geography classes for primary school children with visual impairments. Our findings show (1) a paradox in the use of non-visual sensory knowledge: described as fundamental to the geography curriculum, it is mostly kept out of school; (2) that accessible geo-technologies in the literature mainly focus on substituting vision with another modality, rather than enabling teachers to build on children's experiences; (3) the importance of the hearing sense in learning about space. We then introduce a probe, a wrist-worn device enabling children to record audio cues during field-trips. By giving importance to children's hearing skills, it modified existing practices and actors' opinions on non-visual sensory knowledge. We conclude by reflecting on design implications, and the role of technologies in valuing diverse ways of understanding the world.
2018 · Emeline Brulé et al. (CNRS i3) · Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Geospatial & Map Visualization · CHI

Self-Reflection and Personal Physicalization Construction
Self-reflection is a central goal of personal informatics systems, and constructing visualizations from physical tokens has been found to help people reflect on data. However, so far, constructive physicalization has only been studied in lab environments with provided datasets. Our qualitative study investigates the construction of personal physicalizations in people's domestic environments over 2-4 weeks. It contributes an understanding of (1) the process of creating personal physicalizations, (2) the types of personal insights facilitated, (3) the integration of self-reflection in the physicalization process, and (4) its benefits and challenges for self-reflection. We found that in constructive personal physicalization, data collection, construction and self-reflections are deeply intertwined. This extends previous models of visualization creation and data-driven self-reflection. We outline how benefits such as reflection through manual construction, personalization, and presence in everyday life can be transferred to a wider set of digital and physical systems.
2018 · Alice Thudt et al. (University of Calgary) · Data Physicalization · CHI