Between Bulky Suits and Isolated, Deserted Landscape: Measuring the User Experience of Astronaut-Drone Interaction
Isolated, Confined, and Extreme (ICE) environments, such as those encountered in space exploration missions, pose unique physical and psychological challenges that influence user interactions with computer systems, yet remain considerably less documented than conventional settings. To investigate the impact of such environments on mobile interaction, we conducted an experiment in which a crew of analog astronauts operated a drone via a handheld controller in both a conventional Earth-based setting and an ICE environment represented by the extreme landscape of the Mars Desert Research Station. Our findings reveal how the user experience of mobile interaction evolves over multiple evaluation sessions conducted over a two-week period in the ICE environment, for which we analyze both pragmatic and hedonic dimensions, such as perceived efficiency, adaptability, novelty, usefulness, and trust. Based on our findings, we outline a set of implications for the design of mobile interaction at the intersection with space research, seen through the distinctive lens of astronaut-drone interaction.
Jean Vanderdonckt et al. MobileHCI 2025. Topics: Drone Interaction & Control, Teleoperation & Telepresence.
UX, but on Mars: Exploring User Experience in Extreme Environments with Insights from a Mars Analog Mission
Isolated, Confined, and Extreme (ICE) environments, such as those encountered in space missions, deep-sea explorations, and polar expeditions, pose unique physical and psychological challenges that influence user interaction with computer systems and have been significantly less explored than conventional environments. In this paper, we report empirical results from two experiments involving two crews of six analog astronauts each and two interactive systems with graphical and haptic user interfaces, conducted in both a conventional Earth environment and a Mars analog setting at the Mars Desert Research Station. We examine how extreme conditions affect UX and we provide implications for interaction design addressing ICE environments through adaptation, automation, and assistance-resistance mechanisms.
Jean Vanderdonckt et al. DIS 2025. Topics: Participatory Design, Human-Nature Relationships (More-than-Human Design).
Hand Gesture Recognition for an Off-the-Shelf Radar by Electromagnetic Modeling and Inversion
Microwave radar sensors for human-computer interaction have several advantages over wearable and image-based sensors, such as privacy preservation, high reliability regardless of ambient and lighting conditions, and a larger field of view. However, the raw signals produced by such radars are high-dimensional and relatively complex to interpret. Advanced data processing, including machine learning techniques, is therefore necessary for gesture recognition. While these approaches can reach high gesture recognition accuracy, artificial neural networks require a significant amount of gesture templates for training, and calibration is radar-specific. To address these challenges, we present a novel data processing pipeline for hand gesture recognition that combines advanced full-wave electromagnetic (EM) modeling and inversion with machine learning. In particular, the physical model accounts for the radar source, the radar antennas, the radar-target interactions, and the target itself, i.e., the hand in our case. To make EM processing feasible, the hand is emulated by an equivalent infinite planar reflector, for which analytical Green's functions exist. The hand, located at a specific distance from the radar, is therefore characterized by an apparent dielectric permittivity. This apparent permittivity depends on the hand only (e.g., size, electric properties, orientation) and, together with the distance, determines the wave reflection amplitude. Through full-wave inversion of the radar data, the physical distance as well as this apparent permittivity are retrieved, thereby reducing the dimension of the radar dataset by several orders of magnitude while keeping the essential information. The estimated distance and apparent permittivity as functions of time are finally used to train the machine learning algorithm for gesture recognition. This dimension reduction enables the use of simple gesture recognition algorithms, such as template-matching recognizers, that can be trained in real time and provide competitive accuracy with only a few samples. We evaluate significant stages of our pipeline on a dataset of 16 gesture classes, with 5 templates per class, recorded with the Walabot, a lightweight, off-the-shelf array radar. We also compare these results with an ultra-wideband radar made of a single horn antenna and a lightweight vector network analyzer, and with a Leap Motion Controller.
Arthur Sluÿters et al. IUI 2022. Topics: Hand Gesture Recognition.
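The last stage of this pipeline operates on a very compact representation: each gesture becomes a short two-channel time series of estimated distance and apparent permittivity, which is what makes few-shot template matching practical. As a minimal illustrative sketch of such a recognizer (our own simplification, not the paper's implementation; the names, the resampling length, and the use of dynamic time warping are assumptions), an incoming series could be matched against stored templates as follows:

```python
import numpy as np

def resample(seq: np.ndarray, n: int = 64) -> np.ndarray:
    """Linearly resample a (T, 2) series of (distance, permittivity) to n points."""
    t = np.linspace(0.0, 1.0, len(seq))
    tn = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(tn, t, seq[:, k]) for k in range(seq.shape[1])])

def normalize(seq: np.ndarray) -> np.ndarray:
    """Z-normalize each channel so distance and permittivity are comparable."""
    return (seq - seq.mean(axis=0)) / (seq.std(axis=0) + 1e-9)

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two (n, 2) feature series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(sample, templates):
    """1-nearest-neighbor matching: templates maps gesture label -> list of
    (T, 2) arrays; returns the label of the closest stored template."""
    s = normalize(resample(np.asarray(sample, dtype=float)))
    best_label, best_dist = None, np.inf
    for label, series_list in templates.items():
        for series in series_list:
            d = dtw(s, normalize(resample(np.asarray(series, dtype=float))))
            if d < best_dist:
                best_label, best_dist = label, d
    return best_label
```

With a recognizer of this kind, adding or updating a gesture class amounts to storing a few more reference series, which is consistent with the real-time training and few-samples accuracy the abstract mentions.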
Interpreting the Effect of Embellishment on Chart Visualizations
Infographics range from minimalism that aims to convey the raw data to elaborately decorated, or embellished, graphics that aim to engage readers by telling a story. Studies have shown evidence of negative, but also positive, effects of embellishments. We conducted a set of experiments to gauge more precisely how embellishments affect how people relate to infographics and make sense of the conveyed story. We analyzed questionnaires, interviews, and eye-tracking data simplified by bundling to find out how embellishments affect reading infographics, beyond engagement, memorization, and recall. We found that, within bounds, embellishments have a positive effect on how users get engaged in understanding an infographic, with very limited downside. To our knowledge, our work is the first that fuses the aforementioned three information sources gathered from the same data-and-user corpus to understand infographics. Our findings can help to design more fine-grained studies to quantify embellishment effects and also to design infographics that effectively use embellishments.
Tiffany Andry et al. CHI 2021. Topics: Data Storytelling, Visualization Perception & Cognition.
A Systematic Review of Gesture Elicitation Studies: What Can We Learn from 216 Studies?
Gesture elicitation studies represent a popular and resourceful method in HCI to inform the design of intuitive gesture commands, reflective of end-users' behavior, for controlling all kinds of interactive devices, applications, and systems. In the last ten years, an impressive body of work has been published on this topic, disseminating useful design knowledge regarding users' preferences for finger, hand, wrist, arm, head, leg, foot, and whole-body gestures. In this paper, we deliver a systematic literature review of this large body of work by summarizing the characteristics and findings of N = 216 gesture elicitation studies subsuming 5,458 participants, 3,625 referents, and 148,340 elicited gestures. We highlight the descriptive, comparative, and generative virtues of our examination to provide practitioners with an effective method to (i) understand how new gesture elicitation studies are positioned in the literature; (ii) compare studies from different authors; and (iii) identify opportunities for new research. We make our large corpus of papers accessible online as a Zotero group library at https://www.zotero.org/groups/2132650/gesture_elicitation_studies.
Santiago Villarreal et al. DIS 2020. Topics: Hand Gesture Recognition, Full-Body Interaction & Embodied Input, Prototyping & User Testing.
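The consensus measure most gesture elicitation studies rely on is the agreement rate computed per referent. As a hedged illustration (the formula follows the widely used formulation by Vatavu and Wobbrock; the code and all names are ours, not taken from this review), one could compute it as follows:

```python
from collections import Counter

def agreement_rate(proposals: list) -> float:
    """Agreement rate AR(r) for one referent.

    proposals holds one gesture label per participant; identical labels mean
    the participants proposed the same gesture. AR(r) is the probability that
    two distinct participants agree:
        AR(r) = sum_i |P_i| * (|P_i| - 1) / (|P| * (|P| - 1))
    """
    n = len(proposals)
    if n < 2:
        return 0.0
    groups = Counter(proposals)
    return sum(k * (k - 1) for k in groups.values()) / (n * (n - 1))

# Example: 6 of 10 participants propose "swipe-left", 3 "flick", 1 "tap":
# (6*5 + 3*2 + 0) / (10*9) = 36/90 = 0.40
print(agreement_rate(["swipe-left"] * 6 + ["flick"] * 3 + ["tap"]))
```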
Designing, Engineering, and Evaluating Gesture User Interfaces
This course will introduce participants to the three main stages of the development life cycle of gesture-based interactions: (i) how to design a gesture user interface (UI) by carefully considering key aspects, such as gesture recognition techniques, variability in gesture articulation, properties of invariance (sampling, direction, position, scale, rotation), and good practices for gesture set design; (ii) how to implement a gesture UI with existing recognizers, software architectures, and libraries; and (iii) how to evaluate a gesture UI with the help of various metrics of user performance. The course will also cover the wide range of gestures, such as touch, finger, wrist, hand, arm, and whole-body gestures. Participants will be invited to try out various tools on their own laptops and will leave the course with a set of useful resources for prototyping and evaluating gesture-based interactions in their own projects.
Jean Vanderdonckt et al. CHI 2018. Topics: Hand Gesture Recognition, Full-Body Interaction & Embodied Input.
Cloud Menus, a Circular Adaptive Menu for Small Screens
This paper presents Cloud Menus, a split adaptive menu for small screens where the predicted menu items are arranged in a tag cloud, with a location consistent with their corresponding position in the static menu and a font size depending on their prediction level. This layout results from a 3-step design process: (i) defining an initial design space based on Bertin's 8 visual variables and 4 quality properties, (ii) identifying the most preferred layout based on agreement rate, and (iii) implementing it into Cloud Menus, a new widget for Android with a cloud layout. A theoretical study investigates a model for estimating the item selection time. An empirical study suggests that cloud menus reduce item selection time and error rate when the prediction is correct, without penalizing them when the prediction is incorrect, compared to two baselines: a non-adaptive static menu and an adaptive linear menu, where predicted items are arranged in a vertical list. From these studies, design guidelines for cloud menus are developed.
Jean Vanderdonckt et al. IUI 2018. Topics: Data Storytelling, Computational Methods in HCI.
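The paper's theoretical selection-time model is not reproduced in this abstract. Purely as an illustrative sketch, a common way to estimate menu item selection time combines a serial visual-search term with a Fitts'-law pointing term, as below; all coefficients and names are placeholder assumptions, not the paper's fitted model.

```python
import math

def selection_time(n_items: int, distance_mm: float, width_mm: float,
                   a_search: float = 0.1, b_search: float = 0.24,
                   a_point: float = 0.03, b_point: float = 0.15) -> float:
    """Estimated selection time (s): a serial visual-search term that grows
    with the number of items, plus a Fitts'-law pointing term whose index of
    difficulty depends on target distance and width."""
    search = a_search + b_search * n_items
    pointing = a_point + b_point * math.log2(distance_mm / width_mm + 1.0)
    return search + pointing

# Enlarging a predicted item's font, as the cloud layout does, increases its
# effective width and thus lowers the Fitts index of difficulty for that item.
print(f"{selection_time(12, 40.0, 8.0):.2f} s")
```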