Between Bulky Suits and Isolated, Deserted Landscape: Measuring the User Experience of Astronaut-Drone Interaction
Isolated, Confined, and Extreme (ICE) environments, such as those encountered in space exploration missions, pose unique physical and psychological challenges that influence user interactions with computer systems, yet remain considerably less documented compared to conventional settings. To investigate the impact of such environments on mobile interaction, we conducted an experiment involving a crew of analog astronauts operating a drone via a handheld controller in both a conventional Earth-based setting and an ICE environment represented by the extreme landscape of the Mars Desert Research Station. Our findings reveal how the user experience of mobile interaction evolves over multiple evaluation sessions conducted over a two-week period in the ICE environment, for which we analyze both pragmatic and hedonic dimensions, such as perceived efficiency, adaptability, novelty, usefulness, and trust. Based on our findings, we outline a set of implications for the design of mobile interaction intersecting space research through the distinctive lens of astronaut-drone interaction.
2025 · Jean Vanderdonckt et al. · Tags: Drone Interaction & Control; Teleoperation & Telepresence · MobileHCI

Good Accessibility, Handcuffed Creativity: AI-Generated UIs Between Accessibility Guidelines and Practitioners' Expectations
The emergence of AI-powered UI generation tools presents both opportunities and challenges for accessible design, but their ability to produce truly accessible outcomes remains underexplored. In this work, we examine the effects of different prompt strategies through an evaluation of ninety interfaces generated by two AI tools across three application domains. Our findings reveal that, while these tools consistently achieve basic accessibility compliance, they rely on homogenized design patterns, which can limit their effectiveness in addressing specialized user needs. Through interviews with eight professional designers, we examine how this standardization impacts creativity and challenges the design of inclusive UIs. Our results contribute to the growing discourse on AI-powered design with (i) empirical insights into the capabilities of AI tools for generating accessible UIs, (ii) identification of barriers in this process, and (iii) guidelines for integrating AI into design workflows in ways that support both designers' creativity and design flexibility.
2025 · Alexandra-Elena Gurita et al. · Tags: Explainable AI (XAI); Universal & Inclusive Design; Privacy by Design & User Control · DIS

UX, but on Mars: Exploring User Experience in Extreme Environments with Insights from a Mars Analog Mission
Isolated, Confined, and Extreme (ICE) environments, such as those encountered in space missions, deep-sea explorations, and polar expeditions, pose unique physical and psychological challenges that influence user interaction with computer systems and have been significantly less explored compared to conventional environments. In this paper, we report empirical results from two experiments involving two crews of six analog astronauts each and two interactive systems with graphical and haptic user interfaces, conducted in both a conventional Earth environment and a Mars analog setting at the Mars Desert Research Station. We examine how extreme conditions affect UX and we provide implications for interaction design addressing ICE environments through adaptation, automation, and assistance-resistance mechanisms.
2025 · Jean Vanderdonckt et al. · Tags: Participatory Design; Human-Nature Relationships (More-than-Human Design) · DIS

Distal-Haptic Touchscreens: Understanding the User Experience of Vibrotactile Feedback Decoupled from the Touch Point
We examine the user experience of distal haptics for touchscreen input through confirmatory vibrations of on-screen touches at various on-body locations. To this end, we introduce the Distal Haptics Continuum, a conceptual framework of haptic feedback delivery across the body, organized along the dimensions of Body Laterality and Proximity to the touch point. Our results, from three experiments involving 45 participants and 16 locations across the hand, arm, and whole body, reveal a strong preference for distal haptics over no haptics at all, despite the spatial decoupling from the touch point, with the index finger yielding the highest user experience. We also identify additional on-body locations, the adjacent fingers, wrist, and abdomen, that unlock distinctive design opportunities. Building on our insights, demonstrating haptics effectiveness even when distant from the touch point, we outline implications for integrating various on-body locations, well beyond the index finger, into the user experience of touchscreen input.
2025 · Mihail Terenti et al. (Stefan cel Mare University of Suceava, MintViz Lab, MANSiD Research Center) · Tags: In-Vehicle Haptic, Audio & Multimodal Feedback; Vibrotactile Feedback & Skin Stimulation; Full-Body Interaction & Embodied Input · CHI

Intermanual Deictics: Uncovering Users' Gesture Preferences for Opposite-Arm Referential Input, from Fingers to Shoulder
We examine intermanual deictics, a distinctive class of gesture input characterized by an intermanual structure, asymmetric postural-manipulative articulation, and a deictic nature, drawing from both on-skin and bimanual mid-air gestures. To understand user preferences for gestures featuring these characteristics, we conducted a large-sample end-user elicitation study with 75 participants, who proposed intermanual deictics involving the opposite palm, forearm, and upper arm. Our results reveal a strong preference for physical-contact gestures primarily performed with the index finger, with strokes (62.4%) and touch input (28.8%) being most common, complemented by some preference for non-contact gestures (5.2%). We report similar agreement rates across gestures elicited in the three arm regions, averaging 26.3%, with higher agreement between the forearm and upper arm. We also present a consensus set of sixty gestures for effecting generic commands in interactive systems, along with design principles encompassing multiple practical implications for interactions that incorporate intermanual deictics.
2025 · Radu-Daniel Vatavu et al. (Ștefan cel Mare University of Suceava, MintViz Lab, MANSiD Research Center) · Tags: Hand Gesture Recognition; Full-Body Interaction & Embodied Input; Prototyping & User Testing · CHI

Non-Natural Interaction Design
Natural interactions, such as those based on gesture input, feel intuitive, familiar, and well-suited to user abilities in context, and have been supported by extensive research. Contrary to this conventional mainstream, we advocate for non-natural interaction design as a transformative process that produces highly effective interactions by deliberately deviating from user intuition, from expectations of physical-world naturalness, or from the contexts in which innate human modalities, such as gestures used for interaction and communication, are applied, departing from the established notion of the "natural" while still prioritizing usability. To this end, we offer four perspectives on the relationship between natural and non-natural design, and explore three prototypes addressing gesture-based interactions with digital content in the physical environment, on the user's body, and through digital devices, to challenge assumptions in natural design. Lastly, we provide a formalization of non-natural interaction, along with design principles to guide future developments.
2025 · Radu-Daniel Vatavu (Ștefan cel Mare University of Suceava, MintViz Lab, MANSiD Research Center) · Tags: Hand Gesture Recognition; Full-Body Interaction & Embodied Input · CHI

iFAD Gestures: Understanding Users' Gesture Input Performance with Index-Finger Augmentation Devices
We examine gestures performed with a class of input devices with distinctive quality properties in the wearables landscape, which we call "index-Finger Augmentation Devices" (iFADs). We introduce a four-level taxonomy to characterize the diversity of iFAD gestures, evaluate iFAD gesture articulation on a dataset of 6,369 gestures collected from 20 participants, and compute recognition accuracy rates. Our findings show that iFAD gestures are fast (1.84s on average), easy to articulate (1.52 average rating on a difficulty scale from 1 to 5), and socially acceptable (81% willingness to use them in public places). We compare iFAD gestures with gestures performed using other devices (styli, touchscreens, game controllers) from several public datasets (39,263 gestures, 277 participants), and report that iFAD gestures are two times faster than whole-body gestures and as fast as stylus and finger strokes performed on touchscreens.
2023 · Radu-Daniel Vatavu (Ștefan cel Mare University of Suceava) · Tags: Haptic Wearables; Hand Gesture Recognition · CHI

Understanding Wheelchair Users' Preferences for On-Body, In-Air, and On-Wheelchair Gestures
We present empirical results from a gesture elicitation study conducted with eleven wheelchair users, who proposed on-body, in-air, and on-wheelchair gestures to effect twenty-one referents representing common actions, types of digital content, and navigation commands for interactive systems. We report a large preference for on-body (47.6%) and in-air (40.7%) compared to on-wheelchair (11.7%) gestures, mostly represented by touch input on different parts of the body and hand poses performed in mid-air with one hand. Following an agreement analysis that revealed low consensus (<5.5%) between users, despite high perceived gesture ease, goodness, and social acceptability within users, we examine our participants' gesture characteristics in relation to their self-reported motor impairments, e.g., low strength, rapid fatigue, etc. We highlight the need for personalized gesture sets, tailored to and reflective of both users' preferences and specific motor abilities, an implication that we examine through the lenses of ability-based design.
2023 · Laura-Bianca Bilius et al. (Ștefan cel Mare University of Suceava) · Tags: Full-Body Interaction & Embodied Input; Motor Impairment Assistive Input Technologies · CHI

Fingerhints: Understanding Users' Perceptions of and Preferences for On-Finger Kinesthetic Notifications
We present "fingerhints," on-finger kinesthetic feedback represented by hyper-extension movements of the index finger, bypassing user agency, for notification delivery. To this end, we designed a custom-made finger-augmentation device, which leverages mechanical force to deliver fingerhints as programmable hyper-extensions of the index finger. We evaluate fingerhints with 21 participants, and report good usability, low technology creepiness, and moderate to high social acceptability. In a second study with 11 new participants, we evaluate the wearable comfort of our fingerhints device against four commercial finger- and hand-augmentation devices. Finally, we present insights from the experience of one participant, who wore our device for eight hours during their daily life. We discuss the user experience of fingerhints in relation to our participants' personality traits, finger dexterity levels, and general attitudes toward notifications, and present implications for interactive systems leveraging on-finger kinesthetic feedback for on-body computing.
2023 · Adrian-Vasile Catană et al. (Ștefan cel Mare University of Suceava) · Tags: Vibrotactile Feedback & Skin Stimulation; Foot & Wrist Interaction · CHI

Interactive Public Displays and Wheelchair Users: Between Direct, Personal and Indirect, Assisted Interaction
We examine accessible interactions for wheelchair users and public displays with three studies. In a first study, we conduct a Systematic Literature Review, from which we report very few scientific papers on this topic and a preponderant focus on touch input. In a second study, we conduct a Systematic Video Review using YouTube as a data source, and unveil accessibility challenges for public displays and several input modalities alternative to direct touch. In a third study, we conduct semi-structured interviews with eleven wheelchair users to understand their experience interacting with public displays and to collect their preferences for more accessible input modalities. Based on our findings, we propose the "assisted interaction" phase to extend Vogel and Balakrishnan's four-phase interaction model with public displays, and the "ability" dimension for cross-device interaction design to support, via users' personal mobile devices, independent use of interactive public displays.
2022 · Radu-Daniel Vatavu et al. · Tags: Universal & Inclusive Design; Intelligent Tutoring Systems & Learning Analytics · UIST

Understanding Gesture Input Articulation with Upper-Body Wearables for Users with Upper-Body Motor Impairments
We examine touchscreen stroke-gestures and mid-air motion-gestures articulated by users with upper-body motor impairments with devices worn on the wrist, finger, and head. We analyze users' gesture input performance in terms of production time, articulation consistency, and kinematic measures, and contrast the performance of users with upper-body motor impairments with that of a control group of users without impairments. Our results, from two datasets of 7,290 stroke-gestures and 3,809 motion-gestures collected from 28 participants, reveal that users with upper-body motor impairments take twice as much time to produce stroke-gestures on wearable touchscreens compared to users without impairments, but articulate motion-gestures equally fast and with similar acceleration. We interpret our findings in the context of ability-based design and propose ten implications for accessible gesture input with upper-body wearables for users with upper-body motor impairments.
2022 · Radu-Daniel Vatavu et al. (Ștefan cel Mare University of Suceava) · Tags: Haptic Wearables; Foot & Wrist Interaction; Motor Impairment Assistive Input Technologies · CHI

Symphony: Composing Interactive Interfaces for Machine Learning
Interfaces for machine learning (ML), which convey information and visualizations about models or data, can help practitioners build robust and responsible ML systems. Despite their benefits, recent studies of ML teams and our interviews with practitioners (n=9) showed that ML interfaces have limited adoption in practice. While existing ML interfaces are effective for specific tasks, they are not designed to be reused, explored, and shared by multiple stakeholders in cross-functional teams. To enable analysis and communication between different ML practitioners, we designed and implemented Symphony, a framework for composing interactive ML interfaces with task-specific, data-driven components that can be used across platforms such as computational notebooks and web dashboards. We developed Symphony through participatory design sessions with 10 teams (n=31), and discuss our findings from deploying Symphony to 3 production ML projects at Apple. Symphony helped ML practitioners discover previously unknown issues like data duplicates and blind spots in models while enabling them to share insights with other stakeholders.
2022 · Alex Bäuerle et al. (Ulm University, Apple) · Tags: Explainable AI (XAI); AI-Assisted Decision-Making & Automation; Creative Collaboration & Feedback Systems · CHI

GestuRING: A Web-based Tool for Designing Gesture Input with Rings, Ring-Like, and Ring-Ready Devices
Although smart rings are an exciting area with many promises for innovation in wearable interactive systems, research on interaction techniques for smart rings lacks structured knowledge and readily available resources that designers could use to systematically attain such innovations. In this work, we conduct a systematic literature review of ring-based gesture input, from which we extract key results and a large set of gesture commands for ring, ring-like, and ring-ready devices. We use these findings to deliver GestuRING, our web-based tool to support design of ring-based gesture input. GestuRING features a searchable gesture-to-function dictionary of 579 records with downloadable numerical data files and an associated YouTube video library. These resources are meant to assist the community in attaining further innovations in ring-based gesture input for interactive systems.
2021 · Daniel Li et al. · Tags: Haptic Wearables; Hand Gesture Recognition; Prototyping & User Testing · UIST

A Systematic Review of Gesture Elicitation Studies: What Can We Learn from 216 Studies?
Gesture elicitation studies represent a popular and resourceful method in HCI to inform the design of intuitive gesture commands, reflective of end-users' behavior, for controlling all kinds of interactive devices, applications, and systems. In the last ten years, an impressive body of work has been published on this topic, disseminating useful design knowledge regarding users' preferences for finger, hand, wrist, arm, head, leg, foot, and whole-body gestures. In this paper, we deliver a systematic literature review of this large body of work by summarizing the characteristics and findings of N=216 gesture elicitation studies subsuming 5,458 participants, 3,625 referents, and 148,340 elicited gestures. We highlight the descriptive, comparative, and generative virtues of our examination to provide practitioners with an effective method to (i) understand how new gesture elicitation studies position in the literature; (ii) compare studies from different authors; and (iii) identify opportunities for new research. We make our large corpus of papers accessible online as a Zotero group library at https://www.zotero.org/groups/2132650/gesture_elicitation_studies.
2020 · Santiago Villarreal et al. · Tags: Hand Gesture Recognition; Full-Body Interaction & Embodied Input; Prototyping & User Testing · DIS

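The consensus reported across the studies reviewed above is most commonly quantified with the agreement rate AR of Vatavu & Wobbrock (CHI 2015). A minimal Python sketch of that formula, assuming gestures for a referent are compared by label; actual studies group gestures by human judgment or similarity criteria:

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR for one referent (Vatavu & Wobbrock, CHI 2015).
    `proposals` is the list of gestures elicited for the referent; identical
    gestures compare equal (here: equal labels). Returns a value in [0, 1],
    where AR = 1 means every participant proposed the same gesture."""
    n = len(proposals)
    if n < 2:
        return 1.0
    groups = Counter(proposals)  # partition proposals into identical groups
    s = sum((size / n) ** 2 for size in groups.values())
    return (n / (n - 1)) * s - 1 / (n - 1)

# Example: 4 of 6 participants propose a swipe, 2 propose a tap.
print(agreement_rate(["swipe"] * 4 + ["tap"] * 2))  # ~0.467
```
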
Stroke-Gesture Input for People with Motor Impairments: Empirical Results & Research Roadmap
We examine the articulation characteristics of stroke-gestures produced by people with upper body motor impairments on touchscreens as well as the accuracy rates of popular classification techniques, such as the $-family, to recognize those gestures. Our results on a dataset of 9,681 gestures collected from 70 participants reveal that stroke-gestures produced by people with motor impairments are recognized less accurately than the same gesture types produced by people without impairments, yet still accurately enough (93.0%) for practical purposes; are similar in terms of geometrical criteria to the gestures produced by people without impairments; but take considerably more time to produce (3.4s vs. 1.7s) and exhibit lower consistency (-49.7%). We outline a research roadmap for accessible gesture input on touchscreens for users with upper body motor impairments, and we make our large gesture dataset publicly available in the community.
2019 · Radu-Daniel Vatavu et al. (University Ştefan cel Mare of Suceava) · Tags: Motor Impairment Assistive Input Technologies · CHI

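For readers unfamiliar with the $-family mentioned above: these recognizers match a candidate stroke against stored templates after resampling and normalization. The sketch below is a loose, illustrative simplification in the spirit of the $1 recognizer (Wobbrock et al., 2007); it omits $1's rotation-invariance search and is not the evaluation code used in the paper:

```python
import math

def resample(points, n=64):
    """Resample a stroke to n points spaced equally along its path."""
    pts = list(points)
    path = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    if path == 0:
        return [pts[0]] * n
    step, acc, out, i = path / (n - 1), 0.0, [pts[0]], 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= step:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # continue walking from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out

def normalize(points):
    """Translate the centroid to the origin and scale to a unit bounding box."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    w, h = (max(xs) - min(xs)) or 1.0, (max(ys) - min(ys)) or 1.0
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return [((x - cx) / w, (y - cy) / h) for x, y in points]

def path_distance(a, b):
    """Average point-wise distance between two equal-length strokes."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def classify(candidate, templates):
    """Return the label of the nearest template stroke.
    templates: dict mapping label -> list of (x, y) points."""
    c = normalize(resample(candidate))
    return min(templates,
               key=lambda t: path_distance(c, normalize(resample(templates[t]))))
```
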
The Dissimilarity-Consensus Approach to Agreement Analysis in Gesture Elicitation Studies
We introduce the dissimilarity-consensus method, a new approach to computing objective measures of consensus between users' gesture preferences to support data analysis in end-user gesture elicitation studies. Our method models and quantifies the relationship between users' consensus over gesture articulation and numerical measures of gesture dissimilarity, e.g., Dynamic Time Warping or Hausdorff distances, by employing growth curves and logistic functions. We exemplify our method on 1,312 whole-body gestures elicited from 30 children, ages 3 to 6 years, and we report the first empirical results in the literature on the consensus between whole-body gestures produced by children this young. We provide C# and R software implementations of our method and make our gesture dataset publicly available.
2019 · Radu-Daniel Vatavu (University Ştefan cel Mare of Suceava) · Tags: Full-Body Interaction & Embodied Input; Human Pose & Activity Recognition · CHI

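The abstract pairs numerical dissimilarity measures, such as Dynamic Time Warping, with logistic growth curves. Below is a loose Python sketch of that pairing, not the authors' C# and R implementations: the consensus definition and function names are illustrative assumptions, and fitting the logistic parameters (e.g., with scipy.optimize.curve_fit) is left out:

```python
import math

def dtw(a, b):
    """Dynamic Time Warping distance between two 2-D point sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def consensus_at(gestures, tolerance):
    """Illustrative consensus: fraction of gesture pairs whose DTW
    dissimilarity falls within the given tolerance."""
    pairs = [(i, j) for i in range(len(gestures))
                    for j in range(i + 1, len(gestures))]
    agree = sum(1 for i, j in pairs if dtw(gestures[i], gestures[j]) <= tolerance)
    return agree / len(pairs)

def logistic(d, L, k, d0):
    """Logistic growth curve modeling consensus as a function of tolerance d."""
    return L / (1 + math.exp(-k * (d - d0)))
```
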
Designing, Engineering, and Evaluating Gesture User Interfaces
This course will introduce participants to the three main stages of the development life cycle of gesture-based interactions: (i) how to design a gesture user interface (UI) by carefully considering key aspects, such as gesture recognition techniques, variability in gesture articulation, properties of invariance (sampling, direction, position, scale, rotation), and good practices for gesture set design, (ii) how to implement a gesture UI with existing recognizers, software architecture, and libraries, and (iii) how to evaluate a gesture user interface with the help of various metrics of user performance. The course will also cover a discussion about the wide range of gestures, such as touch, finger, wrist, hand, arm, and whole-body gestures. Participants will be engaged to try out various tools on their own laptops and will leave the course with a set of useful resources for prototyping and evaluating gesture-based interactions in their own projects.
2018 · Jean Vanderdonckt et al. (Louvain Interaction Laboratory, Université catholique de Louvain) · Tags: Hand Gesture Recognition; Full-Body Interaction & Embodied Input · CHI

KeyTime: Super-Accurate Prediction of Stroke Gesture Production Times
We introduce KeyTime, a new technique and accompanying software for predicting the production times of users' stroke gestures articulated on touchscreens. KeyTime employs the principles and concepts of the Kinematic Theory, such as lognormal modeling of stroke gestures' velocity profiles, to estimate gesture production times significantly more accurately than existing approaches. Our experimental results obtained on several public datasets show that KeyTime predicts user-independent production times that correlate r=.99 with ground truth from just one example of a gesture articulation, while delivering an average error in the predicted time magnitude that is 3 to 6 times smaller than that delivered by CLC, the best prediction technique to date. Moreover, KeyTime reports a wide range of useful statistics, such as the trimmed mean, median, standard deviation, and confidence intervals, providing practitioners with unprecedented levels of accuracy and sophistication to characterize their users' a priori time performance with stroke gesture input.
2018 · Luis A. Leiva et al. (Sciling) · Tags: Full-Body Interaction & Embodied Input; Human Pose & Activity Recognition · CHI

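The lognormal modeling that KeyTime borrows from the Kinematic Theory describes each stroke's velocity as a lognormal impulse. A minimal sketch of that standard equation with illustrative parameter values; this is not KeyTime's estimation pipeline:

```python
import math

def lognormal_velocity(t, D=1.0, t0=0.0, mu=-1.0, sigma=0.3):
    """Lognormal velocity profile of a single stroke (Kinematic Theory):
    v(t) = D / (sigma * sqrt(2*pi) * (t - t0)) * exp(-(ln(t - t0) - mu)^2 / (2*sigma^2))
    D: stroke amplitude, t0: onset time, mu/sigma: log-time scale and shape.
    Parameter defaults here are illustrative, not fitted values."""
    if t <= t0:
        return 0.0
    x = t - t0
    return (D / (sigma * math.sqrt(2 * math.pi) * x)) * \
           math.exp(-((math.log(x) - mu) ** 2) / (2 * sigma ** 2))

# A gesture's velocity profile is modeled as a sum of such lognormal impulses,
# one per stroke; production time can then be estimated from the fitted profile.
profile = [lognormal_velocity(t / 100) for t in range(1, 200)]
```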