A Declarative Human-Robot Interaction Framework: Integrating Improvisation and Materiality in Robotic Fabrication and Design
Collaborative robots, with their computational power and versatile manipulation capabilities, hold significant potential in design and fabrication. However, their reliance on predefined CAD models and algorithms limits their effectiveness in creative, dynamic, and unstructured contexts. In contrast, humans easily adapt to dynamic conditions during craft, improvise novel making strategies, and embrace emergent material properties as part of form-finding. This paper investigates how human-robot interaction (HRI) can augment human adaptability through the computational power of robotics while embracing materials as a medium for creativity. The main contribution of this study is a declarative HRI workflow and its software implementation that relates real-time sensor feedback to the robot's action selection. The results show how this workflow enabled the improvisation, reproduction, and modification of material expressions through dynamic tool path planning in robotic sand casting and clay forming. Consequently, this paper expands ongoing discussions on innovative ways to combine robotic technology with craft sensibilities.
2025 · Iremnur Tokac · Desktop 3D Printing & Personal Fabrication · Laser Cutting & Digital Fabrication · Makerspace Culture · C&C

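To make the central idea concrete, the sketch below shows what a declarative mapping from sensed material state to robot action could look like: a minimal Python illustration, assuming hypothetical sensor fields and action names (SensorState, carve_excess, etc.) that are not from the paper.

```python
# Minimal sketch of a declarative sensor-to-action mapping, in the spirit of
# relating real-time sensor feedback to the robot's action selection.
# All field names, thresholds, and actions are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SensorState:
    surface_height_mm: float   # e.g., from a depth camera over the sand bed
    moisture: float            # normalized 0..1

# Declarative rules: each pairs a condition on the sensed material state
# with a robot action; the first matching rule wins.
Rule = tuple[Callable[[SensorState], bool], str]

RULES: list[Rule] = [
    (lambda s: s.surface_height_mm > 40.0, "carve_excess"),
    (lambda s: s.moisture < 0.2,           "compact_surface"),
    (lambda s: True,                       "continue_toolpath"),  # default
]

def select_action(state: SensorState) -> str:
    """Return the first action whose condition matches the sensed state."""
    for condition, action in RULES:
        if condition(state):
            return action
    raise RuntimeError("no rule matched")

print(select_action(SensorState(surface_height_mm=42.0, moisture=0.5)))
# -> carve_excess
```

The point of the declarative style is that the making strategy lives in an inspectable rule table rather than a fixed tool path, so rules can be added or reordered while the material process unfolds.
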
Embrogami: Shape-Changing Textiles with Machine Embroidery
Machine embroidery is a versatile technique for creating custom, entirely fabric-based patterns on thin and conformable textile surfaces. However, existing machine-embroidered surfaces remain static, limiting the interactions they can support. We introduce Embrogami, an approach for fabricating textile structures with versatile shape-changing behaviors. Inspired by origami, we leverage machine embroidery to form fingertip-scale mountain-and-valley structures on textiles with customized shapes, bistable or elastic behaviors, and modular composition. The structures can be actuated by the user or the system to modify the local textile surface topology, creating interactive elements like toggles and sliders or textile shape displays with an ultra-thin, flexible, and integrated form factor. We provide a dedicated software tool and report results of technical experiments that allow users to flexibly design, fabricate, and deploy customized Embrogami structures. With four application cases, we showcase Embrogami's potential to create functional and flexible shape-changing textiles with diverse visuo-tactile feedback.
2024 · Yu Jiang et al. · Haptic Wearables · Shape-Changing Interfaces & Soft Robotic Materials · UIST

Designing for Human Operations on the Moon: Challenges and Opportunities of Navigational HUD Interfaces
Future crewed missions to the Moon will face significant environmental and operational challenges, posing risks to the safety and performance of astronauts navigating its inhospitable surface. Whilst head-up displays (HUDs) have proven effective in providing intuitive navigational support on Earth, the design of novel human-spaceflight solutions typically relies on costly and time-consuming analogue deployments, leaving the potential use of lunar HUDs largely under-explored. This paper explores an alternative approach by simulating navigational HUD concepts in a high-fidelity Virtual Reality (VR) representation of the lunar environment. In evaluating these concepts with astronauts and other aerospace experts (n=25), our mixed-methods study demonstrates the efficacy of simulated analogues in facilitating rapid design assessments of early-stage HUD solutions. We illustrate this by elaborating key design challenges and guidelines for future lunar HUDs. In reflecting on the limitations of our approach, we propose directions for future design exploration of human-machine interfaces for the Moon.
2024 · Leonie Bensch et al. · German Aerospace Center (DLR), European Space Agency (ESA) · Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS) · AR Navigation & Context Awareness · CHI

Fighting Malicious Designs: Towards Visual Countermeasures Against Dark Patterns
Dark patterns are malicious UI design strategies that nudge users towards decisions going against their best interests. To create technical countermeasures against them, dark patterns must be automatically detectable. While researchers have devised algorithms to detect some patterns automatically, there has been little work on using the obtained results to technically counter the effects of dark patterns when users face them on their devices. To address this, we tested three visual countermeasures against 13 common dark patterns in an interactive lab study. The countermeasures we tested either (a) highlighted and explained the manipulation, (b) hid it from the user, or (c) let the user switch between the original view and the hidden version. From our data, we were able to extract multiple clusters of dark patterns where participants preferred specific countermeasures for similar reasons. To support the creation of effective countermeasures, we discuss our findings in the context of a recent ontology of dark patterns.
2024 · René Schäfer et al. · RWTH Aachen University · Privacy by Design & User Control · Dark Patterns Recognition · CHI

Follow me: Anthropomorphic appearance and communication impact social perception and joint navigation behavior
This study addresses how anthropomorphic features shape users' social perception of and trust towards service robots, and whether anthropomorphic characteristics influence how people jointly navigate with them around several obstacles on a course. In an experimental study, two communication and appearance designs (humanlike vs. machinelike) were examined for a service robot that transports goods by semi-automated following. The results indicate that the humanlike robot design is rated as more competent, warmer, and less discomforting, and is generally preferred. Furthermore, participants jointly navigating with the humanlike robot walked around obstacles significantly more often, indicating more considerate navigation behavior and better recall of system limits, both probably evoked by the humanlike design characteristics. In sum, the results provide implications for how to design HRI for the service robot examined so as to support pleasant and error-free interaction.
2024 · Pia Dautzenberg et al. · Social Robot Interaction · Human-Robot Collaboration (HRC) · HRI

Navigating Real-World Complexity: A Multi-Medium System for Heterogeneous Human-Robot Interaction
Real-world robot systems are often deployed in complex and unstructured environments. These environments, coupled with multi-faceted global tasks, often lead to complicated stakeholder structures, making designing for them extremely challenging. Magnifying this difficulty, tasks performed in these environments often cannot be accomplished by a single robot, or even a single robot type, because of the broad range of needs and the physical constraints of the robots. In these cases, heterogeneous robot teams may need to be coupled with human team members to perform the global tasks. From a Human-Robot Interaction (HRI) perspective, this significantly increases the complexity of designing and deploying the system, as complicated stakeholder structures are now mixed with complex robot teams. This paper presents a novel real-world system and interface design that leverages multiple mediums to balance stakeholder needs. To this end, the UI presented here incorporates features that support shared mental models (SMMs) and trust establishment and development, and utilizes a centralized data distribution architecture to improve team performance. In addition to the interface, this paper presents a detailed look at the design process and the lessons learned from a multi-year, real-world deployed system, part of a large European project consisting of 21 partners from varying countries and backgrounds.
2024 · Pete Schroepfer et al. · Human-Robot Collaboration (HRC) · Teleoperation & Telepresence · HRI

User-Aware Rendering: Merging the Strengths of Device- and User-Perspective Rendering in Handheld AR
In handheld AR, users have only a small screen to see the augmented scene, making decisions about scene layout and rendering techniques crucial. Traditional device-perspective rendering (DPR) uses the device camera's full field of view, enabling fast scene exploration but ignoring what the user sees around the device screen. In contrast, user-perspective rendering (UPR) emulates the feeling of looking through the device like a glass pane, which enhances depth perception but severely limits the field of view in which virtual objects are displayed, impeding scene exploration and search. We introduce the notion of User-Aware Rendering: by following the principles of UPR but pretending the device is larger than it actually is, it combines the strengths of UPR and DPR. We present two studies showing that User-Aware Rendering imitating a 50% larger device successfully achieves both enhanced depth perception and fast scene exploration in typical search and selection tasks.
2023 · Sebastian Hueber et al. · AR Navigation & Context Awareness · Immersion & Presence Research · MobileHCI

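The core geometric trick, rendering through a window pretended to be larger than the physical device, can be sketched as scaling the tracked device-screen corners about their center before applying UPR. A minimal illustration under that assumption (the function name and corner layout are hypothetical, not from the paper):

```python
import numpy as np

def enlarged_window_corners(corners, scale=1.5):
    """Scale the 3D corners of the device screen about their centroid,
    yielding the virtual, 50%-larger 'glass pane' used for rendering."""
    corners = np.asarray(corners, dtype=float)
    center = corners.mean(axis=0)
    return center + scale * (corners - center)

# A 10 cm x 20 cm screen becomes 15 cm x 30 cm around the same center.
screen = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.1, 0.2, 0.0), (0.0, 0.2, 0.0)]
print(enlarged_window_corners(screen))
```
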
Handheld Tools Unleashed: Mixed-Initiative Physical Sketching with a Robotic Printer
Personal fabrication has mostly focused on handheld tools as embodied extensions of the user, and on machines like laser cutters and 3D printers that automate parts of the process without intervention. Although interactive digital fabrication has been explored as a middle ground, existing systems have a fixed allocation of user intervention vs. machine autonomy, limiting flexibility, creativity, and improvisation. We explore a new class of devices that combine the desirable properties of a handheld tool and an autonomous fabrication robot, offering a continuum from manual and assisted to autonomous fabrication, with seamless mode transitions. We exemplify this concept of mixed-initiative physical sketching with RoboSketch, a working robotic printer that can be handheld for free-hand sketching, can provide interactive assistance during sketching, or can move about to produce computer-generated sketches. We present interaction techniques to seamlessly transition between modes, and sketching techniques benefitting from these transitions to, e.g., extend (upscale, repeat) or revisit (refine, color) sketches. Our evaluation with seven sketchers illustrates that RoboSketch successfully leverages each mode's strengths, and that mixed-initiative physical sketching makes computer-supported sketching more flexible.
2023 · Narjes Pourjafarian et al. · Saarland University, Saarland Informatics Campus · Desktop 3D Printing & Personal Fabrication · Laser Cutting & Digital Fabrication · Shape-Changing Materials & 4D Printing · CHI

What's That Shape? Investigating Eyes-Free Recognition of Textile Icons
Textile surfaces, such as on sofas, cushions, and clothes, offer promising alternative locations for controls for digital devices. Textiles are a natural, even abundant part of living spaces, and support unobtrusive input. While there is solid work on technical implementations of textile interfaces, there is little guidance regarding their design, especially their haptic cues, which are essential for eyes-free use. In particular, icons easily communicate information visually in a compact fashion, but it is unclear how to adapt them to the haptics-centric textile interface experience. Therefore, we investigated the recognizability of 84 haptic icons on fabrics. Each combines a shape, a height profile (raised, recessed, or flat), and an affected area (filled or outline). Our participants clearly preferred raised icons, and identified them with the highest accuracy and at competitive speeds. We also provide insights into icons that look very different, but are hard to distinguish via touch alone.
2023 · René Schäfer et al. · RWTH Aachen University · Haptic Wearables · Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille) · CHI

Shaping Textile Sliders: An Evaluation of Form Factors and Tick Marks for Textile Sliders
Textile interfaces enable designers to integrate unobtrusive media and smart home controls into furniture such as sofas. While the technical aspects of such controllers have been the subject of numerous research projects, the physical form factor of these controls has received little attention so far. This work investigates how general design properties, such as overall slider shape, raised vs. recessed sliders, and the number and layout of tick marks, affect users' preferences and performance. Our first user study identified a preference for certain design combinations, such as recessed, closed-shaped sliders. Our second user study included performance measurements on variations of the preferred designs from study 1, and took a closer look at tick marks. Tick marks supported orientation better than slider shape. Sliders with at least three tick marks were preferred and performed well. Non-uniform, equally distributed tick marks reduced the movements users needed to orient themselves on the slider.
2022 · Oliver Nowak et al. · RWTH Aachen University · Shape-Changing Interfaces & Soft Robotic Materials · Electronic Textiles (E-textiles) · CHI

From Detectables to Inspectables: Understanding Qualitative Analysis of Audiovisual Data
Audiovisual recordings of user studies and interviews provide important data in qualitative HCI research. Even when a textual transcription is available, researchers frequently turn to these recordings due to their rich information content. However, the temporal, unstructured nature of audiovisual recordings makes them less efficient to work with than text. Through interviews and a survey, we explored how HCI researchers work with audiovisual recordings. We investigated researchers' transcription and annotation practice, their overall analysis workflow, and the prevalence of direct analysis of audiovisual recordings. We found that a key task was locating and analyzing inspectables: interesting segments in recordings. Since locating inspectables can be time-consuming, participants look for detectables: visual or auditory cues that indicate the presence of an inspectable. Based on our findings, we discuss the potential for automation in locating detectables in qualitative audiovisual analysis.
2021 · Krishna Subramanian et al. · RWTH Aachen University · Interactive Data Visualization · Computational Methods in HCI · CHI

UISketch: A Large-Scale Dataset of UI Element Sketches
This paper contributes the first large-scale dataset of 17,979 hand-drawn sketches of 21 UI element categories, collected from 967 participants, including UI/UX designers, front-end developers, and HCI and CS grad students from 10 different countries. We performed a perceptual study with this dataset and found that UI/UX designers can recognize the UI element sketches with ~96% accuracy. To compare human performance against computational recognition methods, we trained state-of-the-art DNN-based image classification models to recognize the UI element sketches. This study revealed that the ResNet-152 model outperforms the other classification networks, recognizing unseen UI element sketches with 91.77% accuracy (chance is 4.76%). We have open-sourced the entire dataset of UI element sketches to the community, aiming to pave the way for further research in utilizing AI to assist the conversion of lo-fi UI sketches to higher fidelities.
2021 · Vinoth Pandian Sermuga Pandian et al. · RWTH Aachen University · Generative AI (Text, Image, Music, Video) · Interactive Data Visualization · CHI

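As a rough sketch of the reported classification setup, the snippet below fine-tunes a torchvision ResNet-152 on 21 sketch classes. Paths, preprocessing, and hyperparameters are placeholder assumptions, not the authors' published training configuration:

```python
# Hedged sketch: fine-tuning ResNet-152 for 21 UI element sketch categories.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # sketches are line drawings
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Hypothetical directory layout: one subfolder per UI element category.
train_set = datasets.ImageFolder("uisketch/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

model = models.resnet152(weights=models.ResNet152_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 21)   # 21 UI element categories

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```
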
Heatmaps, Shadows, Bubbles, Rays: Comparing Mid-Air Pen Position Visualizations in Handheld AR
In handheld Augmented Reality, users look at AR scenes through the smartphone held in their hand. In this setting, having a mid-air pointing device like a pen in the other hand greatly expands the interaction possibilities; for example, it lets users create 3D sketches and models while on the go. However, perceptual issues in handheld AR make it difficult to judge the distance of a virtual object, making it hard to align a pen with it. To address this, we designed and compared different visualizations of the pen's position in its virtual environment, measuring pointing precision, task time, activation patterns, and subjective ratings of the helpfulness, confidence, and comprehensibility of each visualization. While all visualizations resulted in only minor differences in precision and task time, subjective ratings of perceived helpfulness and confidence favor a 'heatmap' technique that colors the objects in the scene based on their distance to the pen.
2020 · Philipp Wacker et al. · RWTH Aachen University · AR Navigation & Context Awareness · Interactive Data Visualization · CHI

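The winning 'heatmap' visualization can be approximated by a simple distance-to-color mapping. A sketch under assumed ranges and color stops (the paper's exact mapping may differ):

```python
import numpy as np

def heatmap_color(obj_pos, pen_pos, max_dist=0.30):
    """Map the pen-object distance (meters) to an RGB tint:
    warm (red) near the pen tip, cool (blue) at max_dist and beyond."""
    d = np.linalg.norm(np.asarray(obj_pos) - np.asarray(pen_pos))
    t = min(d / max_dist, 1.0)          # 0 at the pen tip, 1 at max_dist
    return (1.0 - t, 0.0, t)            # linear blend from red to blue

print(heatmap_color(obj_pos=(0.05, 0.0, 0.0), pen_pos=(0.0, 0.0, 0.0)))
```
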
HeadReach: Using Head Tracking to Increase Reachability on Mobile Touch Devices
People often operate their smartphones with only one hand, using just the thumb for touch input. With today's larger smartphones, this leads to a reachability issue: users can no longer comfortably touch everywhere on the screen without changing their grip. We investigate using the head tracking built into modern smartphones to address this reachability issue. We developed three interaction techniques, pure head (PH), head + touch (HT), and head area + touch (HA), to select targets beyond the reach of one's thumb. In two user studies, we found that selecting targets using HT and HA had higher success rates than the default direct touch (DT) while standing (by about 9%) and walking (by about 12%), while being moderately slower. HT and HA were also faster than one of the best existing techniques, BezelCursor (BC) (by about 20% while standing and 6% while walking), while having the same success rate.
2020 · Simon Voelker et al. · RWTH Aachen University · Eye Tracking & Gaze Interaction · Universal & Inclusive Design · CHI

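One plausible reading of the head-based techniques is a transfer function from head rotation to an on-screen cursor, which a touch then confirms. The sketch below illustrates such a mapping with made-up gain and screen dimensions; it is not the authors' implementation:

```python
def head_cursor(yaw_deg, pitch_deg, screen_w=375.0, screen_h=812.0, gain=12.0):
    """Map estimated head yaw/pitch (degrees) to a cursor position in
    screen points, clamped to the screen bounds."""
    x = screen_w / 2 + gain * yaw_deg      # looking right pans the cursor right
    y = screen_h / 2 - gain * pitch_deg    # looking up moves the cursor up
    return (min(max(x, 0.0), screen_w), min(max(y, 0.0), screen_h))

# A slight head turn reaches a corner target the thumb cannot.
print(head_cursor(yaw_deg=8.0, pitch_deg=12.0))
```
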
TRACTUS: Understanding and Supporting Source Code Experimentation in Hypothesis-Driven Data Science
Data scientists experiment heavily with their code, compromising code quality to obtain insights faster. We observed ten data scientists perform hypothesis-driven data science tasks, and analyzed their coding, commenting, and analysis practice. We found that they have difficulty keeping track of their code experiments. When revisiting exploratory code to write production code later, they struggle to retrace their steps and to capture the decisions made and insights obtained, and have to rerun code frequently. To address these issues, we designed TRACTUS, a system extending the popular RStudio IDE that detects, tracks, and visualizes code experiments in hypothesis-driven data science tasks. TRACTUS helps recall decisions and insights by grouping code experiments into hypotheses and structuring information like code execution output and documentation. Our user studies show how TRACTUS improves data scientists' workflows, and suggest additional opportunities for improvement. TRACTUS is available as an open-source RStudio IDE addin at http://hci.rwth-aachen.de/tractus.
2020 · Krishna Subramanian et al. · RWTH Aachen University · Interactive Data Visualization · Prototyping & User Testing · CHI

GazeConduits: Calibration-Free Cross-Device Collaboration through Gaze and Touch
We present GazeConduits, a calibration-free ad-hoc mobile interaction concept that enables users to collaboratively interact with tablets, other users, and content in a cross-device setting using gaze and touch input. GazeConduits leverages recently introduced smartphone capabilities to detect facial features and estimate users' gaze directions. To join a collaborative setting, users place one or more tablets onto a shared table and position their phone in the center, which then tracks the users present as well as their gaze directions to determine which tablets they look at. We present a series of techniques using GazeConduits for collaborative interaction across mobile devices for content selection and manipulation. Our evaluation with 20 simultaneous tablets on a table shows that GazeConduits can reliably identify which tablet or collaborator a user is looking at.
2020 · Simon Voelker et al. · RWTH Aachen University · Eye Tracking & Gaze Interaction · Knowledge Worker Tools & Workflows · CHI

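The phone's job, deciding which tablet a user looks at, boils down to intersecting an estimated gaze ray with the table plane and matching the hit point against tablet positions. A minimal geometric sketch, with invented coordinates and thresholds rather than the system's actual pipeline:

```python
import numpy as np

def gaze_hit_on_table(eye, gaze_dir, table_z=0.0):
    """Intersect a gaze ray (origin, direction) with the table plane
    z = table_z. Assumes the ray is not parallel to the table."""
    eye, gaze_dir = np.asarray(eye, float), np.asarray(gaze_dir, float)
    t = (table_z - eye[2]) / gaze_dir[2]      # ray parameter at the plane
    return eye + t * gaze_dir

def looked_at_tablet(hit, tablet_centers, max_dist=0.15):
    """Return the index of the tablet closest to the gaze hit point,
    or None if the gaze lands too far from every tablet."""
    dists = [np.linalg.norm(hit[:2] - np.asarray(c)) for c in tablet_centers]
    i = int(np.argmin(dists))
    return i if dists[i] <= max_dist else None

hit = gaze_hit_on_table(eye=(0.0, -0.4, 0.45), gaze_dir=(0.1, 0.5, -0.6))
print(looked_at_tablet(hit, tablet_centers=[(0.0, 0.0), (0.3, 0.1)]))  # -> 0
```
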
Evaluation of a Financial Portfolio Visualization using Computer Displays and Mixed Reality Devices with Domain Experts
With the advent of mixed reality devices such as the Microsoft HoloLens, developers have been faced with the challenge of utilizing the third dimension effectively in information visualization. Research on stereoscopic devices has shown that three-dimensional representation can improve accuracy in specific tasks (e.g., network visualization). Yet, so far, the field has remained silent on the underlying mechanism. Our study systematically investigates the differences in user perception between a regular monitor and a mixed reality device. In a real-life within-subject experiment in the field with twenty-eight investment bankers, we assessed subjective and objective task performance with two- and three-dimensional systems, respectively. We tested accuracy with regard to position, size, and color using single and combined tasks. Our results do not show a significant difference in accuracy between mixed reality and standard 2D monitor visualizations.
2020 · Kay Schroeder et al. · Zuyd University of Applied Sciences · Mixed Reality Workspaces · Interactive Data Visualization · CHI

ForceRay: Extending Thumb Reach via Force Input Stabilizes Device Grip for Mobile Touch Input
Smartphones are used predominantly one-handed, with the thumb for input. Many smartphones, however, have grown beyond 5 inches. Users cannot tap everywhere on these screens without destabilizing their grip. ForceRay (FR) lets users aim at an out-of-reach target by applying a force touch at a comfortable thumb location, casting a virtual ray towards the target. Varying the pressure moves a cursor along the ray; quickly lifting the thumb upon reaching the target selects it. In a first study, FR was 195 ms slower and had a 3% higher selection error than the best existing technique, BezelCursor (BC), but FR caused significantly less device movement than all other techniques, letting users maintain a steady grip and removing their concerns about device drops. A second study showed that an hour of training speeds up both BC and FR, and that both are equally fast for targets at the screen border.
2019 · Christian Corsten et al. · RWTH Aachen University · Force Feedback & Pseudo-Haptic Weight · Prototyping & User Testing · CHI

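The interaction can be summarized as a transfer function from thumb force to cursor distance along the ray. The sketch below assumes a linear mapping and an illustrative force range; the paper's calibration may differ:

```python
def forceray_cursor(touch_xy, ray_dir, force, f_min=0.1, f_max=4.0, ray_len=120.0):
    """Map thumb force (Newtons) to a cursor position (mm) along a ray
    from the touch point in the unit direction ray_dir."""
    # Clamp force into the usable range, then normalize to 0..1.
    f = min(max(force, f_min), f_max)
    t = (f - f_min) / (f_max - f_min)
    return (touch_xy[0] + t * ray_len * ray_dir[0],
            touch_xy[1] + t * ray_len * ray_dir[1])

# Pressing harder pushes the cursor further from the comfortable thumb spot.
print(forceray_cursor(touch_xy=(60.0, 130.0), ray_dir=(0.0, -1.0), force=2.0))
```
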
ARPen: Mid-Air Object Manipulation Techniques for a Bimanual AR System with Pen & Smartphone
Modeling in Augmented Reality (AR) lets users create and manipulate virtual objects in mid-air that are aligned with their real environment. We present ARPen, a bimanual input technique for AR modeling that combines a standard smartphone with a 3D-printed pen. Users sketch with the pen in mid-air while holding their smartphone in the other hand to see the virtual pen traces in the live camera image. ARPen combines the pen's higher 3D input precision with the rich interactive capabilities of the smartphone touchscreen. We studied subjective preferences for this bimanual input technique, such as how people hold the smartphone while drawing, and analyzed the performance of different bimanual techniques for selecting and moving virtual objects. Users preferred a bimanual technique that casts a ray through the pen tip for both selection and translation. We provide initial design guidelines for this new class of bimanual AR modeling systems.
2019 · Philipp Wacker et al. · RWTH Aachen University · Shape-Changing Interfaces & Soft Robotic Materials · Mixed Reality Workspaces · CHI

What's in a Review: Discrepancies Between Expert and Amateur Reviews of Video Games on Metacritic
As the video game press ("experts") and casual gamers ("amateurs") have different motivations when writing video game reviews, discrepancies in their reviews may arise. To study such potential discrepancies, we conduct a large-scale investigation of more than 1 million reviews on the Metacritic review platform. In particular, we assess the existence and nature of discrepancies in video game appraisal by experts and amateurs, and how they manifest in ratings, over time, and in review language. Leveraging these insights, we explore the predictive power of early expert vs. amateur reviews in forecasting video game reputation in the short and long term. We find that amateurs, in contrast to experts, give more polarized ratings of video games, rate games surprisingly long after release, and are positively biased towards older games. On a textual level, we observe that experts write more complex, less readable texts than amateurs, whose reviews are more emotionally charged. While in the short term amateur reviews are remarkably predictive of game reputation among other amateurs (achieving 91% ROC AUC in a binary classification), both expert and amateur reviews are equally well suited for long-term predictions. Overall, our work is the first large-scale comparative study of video game reviewing behavior, with practical implications for amateurs deciding which games to play, and for game developers planning which games to design, develop, or continuously support. More broadly, our work contributes to the discussion of the wisdom of the few vs. the wisdom of the crowds, as we uncover the limits of experts in capturing the views of amateurs in the particular context of video game reviews.
2019 · Tiago Santos et al. · Expert Work · CSCW

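The forecasting setup, early review features in and a binary reputation label out, evaluated by ROC AUC, can be mimicked in a few lines of scikit-learn. Synthetic data stands in for the Metacritic features here; the paper's actual features and model are not specified in the abstract:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Illustrative per-game features: early mean rating, rating std, review count.
X = rng.normal(size=(500, 3))
# Synthetic "good long-term reputation" label, loosely tied to early ratings.
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```
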