Embrogami: Shape-Changing Textiles with Machine Embroidery
Yu Jiang et al. UIST 2024. Topics: Haptic Wearables; Shape-Changing Interfaces & Soft Robotic Materials.
Machine embroidery is a versatile technique for creating custom, entirely fabric-based patterns on thin and conformable textile surfaces. However, existing machine-embroidered surfaces remain static, limiting the interactions they can support. We introduce Embrogami, an approach for fabricating textile structures with versatile shape-changing behaviors. Inspired by origami, we leverage machine embroidery to form fingertip-scale mountain-and-valley structures on textiles with customized shapes, bistable or elastic behaviors, and modular composition. The structures can be actuated by the user or the system to modify the local textile surface topology, creating interactive elements like toggles and sliders or textile shape displays with an ultra-thin, flexible, and integrated form factor. We provide a dedicated software tool and report results of technical experiments to allow users to flexibly design, fabricate, and deploy customized Embrogami structures. With four application cases, we showcase Embrogami's potential to create functional and flexible shape-changing textiles with diverse visuo-tactile feedback.

Fighting Malicious Designs: Towards Visual Countermeasures Against Dark Patterns
René Schäfer et al., RWTH Aachen University. CHI 2024. Topics: Privacy by Design & User Control; Dark Patterns Recognition.
Dark patterns are malicious UI design strategies that nudge users towards decisions going against their best interests. To create technical countermeasures against them, dark patterns must be automatically detectable. While researchers have devised algorithms to detect some patterns automatically, there has been little work on using the obtained results to technically counter the effects of dark patterns when users face them on their devices. To address this, we tested three visual countermeasures against 13 common dark patterns in an interactive lab study. The countermeasures we tested either (a) highlighted and explained the manipulation, (b) hid it from the user, or (c) let the user switch between the original view and the hidden version. From our data, we were able to extract multiple clusters of dark patterns for which participants preferred specific countermeasures for similar reasons. To support the creation of effective countermeasures, we discuss our findings in light of a recent ontology of dark patterns.

User-Aware Rendering: Merging the Strengths of Device- and User-Perspective Rendering in Handheld AR
Sebastian Hueber et al. MobileHCI 2023. Topics: AR Navigation & Context Awareness; Immersion & Presence Research.
In handheld AR, users have only a small screen to see the augmented scene, making decisions about scene layout and rendering techniques crucial. Traditional device-perspective rendering (DPR) uses the device camera's full field of view, enabling fast scene exploration but ignoring what the user sees around the device screen. In contrast, user-perspective rendering (UPR) emulates the feeling of looking through the device like a glass pane, which enhances depth perception but severely limits the field of view in which virtual objects are displayed, impeding scene exploration and search. We introduce the notion of User-Aware Rendering: by following the principles of UPR while pretending the device is larger than it actually is, it combines the strengths of UPR and DPR. We present two studies showing that User-Aware AR imitating a 50% larger device successfully achieves both enhanced depth perception and fast scene exploration in typical search and selection tasks.

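As a rough illustration of the rendering principle this abstract describes, the sketch below scales the physical screen rectangle about its center (here by the paper's 50% figure) and then builds a standard off-axis perspective frustum from the user's eye through the enlarged virtual "glass pane", following Kooima's well-known generalized perspective projection. All function names, and the example geometry, are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: user-perspective rendering through a virtually enlarged screen.
import numpy as np

SCALE = 1.5  # pretend the screen is 50% larger than its physical size

def enlarged_screen_corners(corners):
    """Scale the physical screen rectangle about its center.

    corners: 4x3 array of screen corner positions in world space.
    Returns the corners of the virtual, enlarged 'glass pane'.
    """
    center = corners.mean(axis=0)
    return center + SCALE * (corners - center)

def off_axis_frustum(eye, lower_left, lower_right, upper_left, near):
    """Off-axis frustum from the eye through an arbitrary screen rectangle,
    as used for user-perspective rendering. Returns (l, r, b, t) at near."""
    vr = lower_right - lower_left
    vr /= np.linalg.norm(vr)                      # screen right axis
    vu = upper_left - lower_left
    vu /= np.linalg.norm(vu)                      # screen up axis
    vn = np.cross(vr, vu)
    vn /= np.linalg.norm(vn)                      # screen normal, toward eye

    va, vb, vc = lower_left - eye, lower_right - eye, upper_left - eye
    dist = -np.dot(va, vn)                        # eye-to-screen distance
    scale = near / dist
    return (np.dot(vr, va) * scale, np.dot(vr, vb) * scale,
            np.dot(vu, va) * scale, np.dot(vu, vc) * scale)

# Example: a 10 cm x 6 cm screen at the origin, eye 40 cm in front of it.
corners = np.array([[-0.05, -0.03, 0.0], [0.05, -0.03, 0.0],
                    [-0.05, 0.03, 0.0], [0.05, 0.03, 0.0]])
big = enlarged_screen_corners(corners)
print(off_axis_frustum(np.array([0.0, 0.0, 0.4]), big[0], big[1], big[2], near=0.01))
```

With SCALE = 1.0 this reduces to plain UPR; larger values widen the rendered field of view toward DPR while keeping the user-perspective alignment.
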
Handheld Tools Unleashed: Mixed-Initiative Physical Sketching with a Robotic Printer
Narjes Pourjafarian et al., Saarland University, Saarland Informatics Campus. CHI 2023. Topics: Desktop 3D Printing & Personal Fabrication; Laser Cutting & Digital Fabrication; Shape-Changing Materials & 4D Printing.
Personal fabrication has mostly focused on handheld tools as embodied extensions of the user, and on machines like laser cutters and 3D printers that automate parts of the process without intervention. Although interactive digital fabrication has been explored as a middle ground, existing systems have a fixed allocation of user intervention vs. machine autonomy, limiting flexibility, creativity, and improvisation. We explore a new class of devices that combine the desirable properties of a handheld tool and an autonomous fabrication robot, offering a continuum from manual and assisted to autonomous fabrication, with seamless mode transitions. We exemplify the concept of mixed-initiative physical sketching with RoboSketch, a working robotic printer that can be handheld for free-hand sketching, can provide interactive assistance during sketching, or can move about for computer-generated sketches. We present interaction techniques to seamlessly transition between modes, and sketching techniques benefiting from these transitions to, e.g., extend (upscale, repeat) or revisit (refine, color) sketches. Our evaluation with seven sketchers illustrates that RoboSketch successfully leverages each mode's strengths, and that mixed-initiative physical sketching makes computer-supported sketching more flexible.

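One simple way to picture the manual-to-autonomous continuum the abstract describes is as a blend of the sketcher's hand motion with a machine-generated plan, weighted by an autonomy level. The blending scheme below is an illustrative assumption, not RoboSketch's actual controller.

```python
# Sketch of a manual/assisted/autonomous continuum as a weighted motion blend.
import numpy as np

def blended_motion(hand_velocity, planned_velocity, alpha):
    """Mix user and machine motion commands by autonomy level alpha.

    alpha = 0.0 -> free-hand sketching (device follows the hand),
    alpha ~ 0.5 -> assisted sketching (machine nudges toward the plan),
    alpha = 1.0 -> autonomous sketching (machine follows its own plan).
    """
    alpha = np.clip(alpha, 0.0, 1.0)
    return (1 - alpha) * np.asarray(hand_velocity) + alpha * np.asarray(planned_velocity)

# Example: assisted mode pulls a drifting hand halfway back onto the path.
print(blended_motion([1.0, 0.2], [1.0, 0.0], 0.5))
```
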
What's That Shape? Investigating Eyes-Free Recognition of Textile Icons
René Schäfer et al., RWTH Aachen University. CHI 2023. Topics: Haptic Wearables; Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille).
Textile surfaces, such as on sofas, cushions, and clothes, offer promising alternative locations to place controls for digital devices. Textiles are a natural, even abundant part of living spaces, and support unobtrusive input. While there is solid work on technical implementations of textile interfaces, there is little guidance regarding their design, especially their haptic cues, which are essential for eyes-free use. In particular, icons easily communicate information visually in a compact fashion, but it is unclear how to adapt them to the haptics-centric textile interface experience. Therefore, we investigated the recognizability of 84 haptic icons on fabrics. Each combines a shape, height profile (raised, recessed, or flat), and affected area (filled or outline). Our participants clearly preferred raised icons, and identified them with the highest accuracy and at competitive speeds. We also provide insights into icons that look very different, but are hard to distinguish via touch alone.

Shaping Textile Sliders: An Evaluation of Form Factors and Tick Marks for Textile Sliders
Oliver Nowak et al., RWTH Aachen University. CHI 2022. Topics: Shape-Changing Interfaces & Soft Robotic Materials; Electronic Textiles (E-textiles).
Textile interfaces enable designers to integrate unobtrusive media and smart home controls into furniture such as sofas. While the technical aspects of such controllers have been the subject of numerous research projects, the physical form factor of these controls has received little attention so far. This work investigates how general design properties, such as overall slider shape, raised vs. recessed sliders, and number and layout of tick marks, affect users' preferences and performance. Our first user study identified a preference for certain design combinations, such as recessed, closed-shaped sliders. Our second user study included performance measurements on variations of the preferred designs from study 1, and took a closer look at tick marks. Tick marks supported orientation better than slider shape. Sliders with at least three tick marks were preferred, and performed well. Non-uniform, equally distributed tick marks reduced the movements users needed to orient themselves on the slider.

From Detectables to Inspectables: Understanding Qualitative Analysis of Audiovisual Data
Krishna Subramanian et al., RWTH Aachen University. CHI 2021. Topics: Interactive Data Visualization; Computational Methods in HCI.
Audiovisual recordings of user studies and interviews provide important data in qualitative HCI research. Even when a textual transcription is available, researchers frequently turn to these recordings due to their rich information content. However, the temporal, unstructured nature of audiovisual recordings makes them less efficient to work with than text. Through interviews and a survey, we explored how HCI researchers work with audiovisual recordings. We investigated researchers' transcription and annotation practice, their overall analysis workflow, and the prevalence of direct analysis of audiovisual recordings. We found that a key task was locating and analyzing inspectables, i.e., interesting segments in recordings. Since locating inspectables can be time-consuming, participants look for detectables, i.e., visual or auditory cues that indicate the presence of an inspectable. Based on our findings, we discuss the potential for automation in locating detectables in qualitative audiovisual analysis.

Heatmaps, Shadows, Bubbles, Rays: Comparing Mid-Air Pen Position Visualizations in Handheld AR
Philipp Wacker et al., RWTH Aachen University. CHI 2020. Topics: AR Navigation & Context Awareness; Interactive Data Visualization.
In Handheld Augmented Reality, users look at AR scenes through the smartphone held in their hand. In this setting, having a mid-air pointing device like a pen in the other hand greatly expands the interaction possibilities. For example, it lets users create 3D sketches and models while on the go. However, perceptual issues in Handheld AR make it difficult to judge the distance of a virtual object, making it hard to align a pen to it. To address this, we designed and compared different visualizations of the pen's position in its virtual environment, measuring pointing precision, task time, activation patterns, and subjective ratings of helpfulness, confidence, and comprehensibility of each visualization. While all visualizations resulted in only minor differences in precision and task time, subjective ratings of perceived helpfulness and confidence favor a 'heatmap' technique that colors the objects in the scene based on their distance to the pen.

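The winning 'heatmap' idea boils down to a distance-to-color mapping. A minimal sketch follows; the distance range and the color ramp are illustrative assumptions, not values from the paper.

```python
# Sketch: tint each scene object by its distance to the pen tip,
# so nearby objects read as 'hot'.
import numpy as np

MAX_DIST = 0.30  # meters beyond which objects stay at the 'cold' color (assumed)

def heatmap_color(obj_position, pen_tip, cold=(0.2, 0.2, 1.0), hot=(1.0, 0.2, 0.2)):
    """Linearly blend from a hot to a cold RGB color with distance."""
    d = np.linalg.norm(np.asarray(obj_position) - np.asarray(pen_tip))
    t = min(d / MAX_DIST, 1.0)  # 0 at the pen tip, 1 at MAX_DIST and beyond
    return tuple((1 - t) * h + t * c for h, c in zip(hot, cold))

# Example: an object 5 cm from the pen tip is tinted mostly 'hot'.
print(heatmap_color((0.05, 0.0, 0.0), (0.0, 0.0, 0.0)))
```
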
TRACTUS: Understanding and Supporting Source Code Experimentation in Hypothesis-Driven Data Science
Krishna Subramanian et al., RWTH Aachen University. CHI 2020. Topics: Interactive Data Visualization; Prototyping & User Testing.
Data scientists experiment heavily with their code, compromising code quality to obtain insights faster. We observed ten data scientists perform hypothesis-driven data science tasks, and analyzed their coding, commenting, and analysis practice. We found that they have difficulty keeping track of their code experiments. When revisiting exploratory code to write production code later, they struggle to retrace their steps and capture the decisions made and insights obtained, and have to rerun code frequently. To address these issues, we designed TRACTUS, a system extending the popular RStudio IDE that detects, tracks, and visualizes code experiments in hypothesis-driven data science tasks. TRACTUS helps recall decisions and insights by grouping code experiments into hypotheses, and by structuring information like code execution output and documentation. Our user studies show how TRACTUS improves data scientists' workflows, and suggest additional opportunities for improvement. TRACTUS is available as an open source RStudio IDE addin at http://hci.rwth-aachen.de/tractus.

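To make the grouping idea concrete, here is a rough sketch of the kind of bookkeeping such tracking needs: executed code snippets recorded under the hypothesis they explore, so earlier experiments and their outputs can be retraced. The class and field names are illustrative; TRACTUS itself is an RStudio addin, not this Python code.

```python
# Sketch: group code experiments under hypotheses for later retracing.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Experiment:
    code: str
    output: str
    note: str = ""  # analyst's comment or documentation
    timestamp: datetime = field(default_factory=datetime.now)

@dataclass
class Hypothesis:
    statement: str
    experiments: List[Experiment] = field(default_factory=list)

    def record(self, code: str, output: str, note: str = "") -> None:
        self.experiments.append(Experiment(code, output, note))

    def history(self) -> str:
        """Chronological trace of all code tried under this hypothesis."""
        return "\n".join(f"[{e.timestamp:%H:%M:%S}] {e.code!r} -> {e.output!r}"
                         for e in self.experiments)

h = Hypothesis("Response time differs between conditions A and B")
h.record("t.test(a, b)", "p = 0.03", note="significant at alpha = .05")
print(h.history())
```
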
ForceRay: Extending Thumb Reach via Force Input Stabilizes Device Grip for Mobile Touch Input
Christian Corsten et al., RWTH Aachen University. CHI 2019. Topics: Force Feedback & Pseudo-Haptic Weight; Prototyping & User Testing.
Smartphones are used predominantly one-handed, using the thumb for input. Many smartphones, however, have grown beyond 5". Users cannot tap everywhere on these screens without destabilizing their grip. ForceRay (FR) lets users aim at an out-of-reach target by applying a force touch at a comfortable thumb location, casting a virtual ray towards the target. Varying pressure moves a cursor along the ray. When reaching the target, quickly lifting the thumb selects it. In a first study, FR was 195 ms slower and had a 3% higher selection error than the best existing technique, BezelCursor (BC), but FR caused significantly less device movement than all other techniques, letting users maintain a steady grip and removing their concerns about device drops. A second study showed that an hour of training speeds up both BC and FR, and that both are equally fast for targets at the screen border.

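A minimal sketch of one plausible reading of this mapping follows: the ray runs from a grip origin through the thumb's touch point, and pressure pushes the cursor outward along it. The grip origin, force range, and travel bound are assumptions for illustration.

```python
# Sketch: map touch force to cursor travel along a grip->touch ray.
import numpy as np

MAX_FORCE = 6.67                          # illustrative full-force value
GRIP_ORIGIN = np.array([350.0, 1000.0])   # assumed near the gripping hand's corner

def forceray_cursor(touch, force, screen_w=375, screen_h=812):
    """Return the cursor position for the current thumb force.

    Zero force keeps the cursor at the touch point; maximum force pushes
    it toward the far screen region. Quickly lifting the thumb would
    then select the target under the cursor.
    """
    touch = np.asarray(touch, dtype=float)
    direction = touch - GRIP_ORIGIN
    direction /= np.linalg.norm(direction)
    max_travel = np.hypot(screen_w, screen_h)   # coarse in-screen upper bound
    t = np.clip(force / MAX_FORCE, 0.0, 1.0)
    cursor = touch + t * max_travel * direction
    return np.clip(cursor, [0, 0], [screen_w, screen_h])

# Example: half of the maximum force moves the cursor halfway up the ray.
print(forceray_cursor((250, 700), MAX_FORCE * 0.5))
```
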
ARPen: Mid-Air Object Manipulation Techniques for a Bimanual AR System with Pen & Smartphone
Philipp Wacker et al., RWTH Aachen University. CHI 2019. Topics: Shape-Changing Interfaces & Soft Robotic Materials; Mixed Reality Workspaces.
Modeling in Augmented Reality (AR) lets users create and manipulate virtual objects in mid-air that are aligned to their real environment. We present ARPen, a bimanual input technique for AR modeling that combines a standard smartphone with a 3D-printed pen. Users sketch with the pen in mid-air, while holding their smartphone in the other hand to see the virtual pen traces in the live camera image. ARPen combines the pen's higher 3D input precision with the rich interactive capabilities of the smartphone touchscreen. We studied subjective preferences for this bimanual input technique, such as how people hold the smartphone while drawing, and analyzed the performance of different bimanual techniques for selecting and moving virtual objects. Users preferred a bimanual technique casting a ray through the pen tip for both selection and translation. We provide initial design guidelines for this new class of bimanual AR modeling systems.

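The preferred technique casts a ray through the pen tip and picks what it hits. The sketch below illustrates that with a camera-through-pen-tip ray against bounding spheres; the sphere approximation and all names are illustrative simplifications, not the paper's implementation.

```python
# Sketch: select the nearest object along the camera->pen-tip ray.
import numpy as np

def ray_sphere_t(origin, direction, center, radius):
    """Distance along a unit-direction ray to a sphere, or None on a miss."""
    oc = np.asarray(center, dtype=float) - np.asarray(origin, dtype=float)
    proj = np.dot(oc, direction)                 # along-ray distance to center
    closest_sq = np.dot(oc, oc) - proj ** 2      # squared distance ray<->center
    if proj < 0 or closest_sq > radius ** 2:
        return None
    return proj - np.sqrt(radius ** 2 - closest_sq)

def pick(camera_pos, pen_tip, objects):
    """Return the first object hit by the ray through the pen tip."""
    direction = np.asarray(pen_tip, dtype=float) - np.asarray(camera_pos, dtype=float)
    direction /= np.linalg.norm(direction)
    hits = [(t, obj) for obj in objects
            if (t := ray_sphere_t(camera_pos, direction, obj["center"], obj["radius"])) is not None]
    return min(hits, key=lambda h: h[0], default=(None, None))[1]

scene = [{"name": "cube", "center": (0, 0, 2.0), "radius": 0.1}]
print(pick((0, 0, 0), (0, 0, 0.3), scene))   # -> the 'cube' entry
```
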
Springlets: Expressive, Flexible and Silent On-Skin Tactile Interfaces
Nur Al-huda Hamdan et al., RWTH Aachen University. CHI 2019. Topics: Haptic Wearables; Shape-Changing Interfaces & Soft Robotic Materials.
We introduce Springlets, expressive, non-vibrating mechanotactile interfaces on the skin. Embedded with shape memory alloy springs, we implement Springlets as thin and flexible stickers to be worn on various body locations, thanks to their silent operation even on the neck and head. We present a technically simple and rapid technique for fabricating a wide range of Springlet interfaces and computer-generated tactile patterns. We developed Springlets for six tactile primitives: pinching, directional stretching, pressing, pulling, dragging, and expanding. A study placing Springlets on the arm and near the head demonstrates Springlets' effectiveness and wearability in both stationary and mobile situations. We explore new interactive experiences in tactile social communication, physical guidance, health interfaces, navigation, and virtual reality gaming, enabled by Springlets' unique and scalable form factor.

Tangible Awareness: How Tangibles on Tabletops Influence Awareness of Each Other's Actions
Christian Cherek et al., RWTH Aachen University. CHI 2018. Topics: Full-Body Interaction & Embodied Input; Digitalization of Board & Tabletop Games.
Tangibles on multitouch tabletops increase speed, accuracy, and eyes-free operability for individual users, and verbal and behavioral social interaction among multiple users around smaller tables with a shared focus of attention. Modern multitouch tables, however, provide sizes and resolutions that let groups work alongside each other in separate workspaces. But how aware do these users remain of each other's actions, and what impact can tangibles have on their awareness? In our study, groups of 2-4 users around the table played an individual game grabbing their attention as their primary task, while they also had to occasionally become aware of other players' actions and react to them as a secondary task. We found that players were significantly more aware of other players' actions when using tangibles than when using pure multitouch interaction, indicated by faster reaction times. This effect was especially strong with more players. We close with qualitative user feedback and design recommendations.

Use the Force Picker, Luke: Space-Efficient Value Input on Force-Sensitive Mobile Touchscreens
Christian Corsten et al., RWTH Aachen University. CHI 2018. Topics: Force Feedback & Pseudo-Haptic Weight.
Picking values from long ordered lists, such as when setting a date or time, is a common task on smartphones. However, the system pickers and tables used for this require significant screen space for spinning and dragging, covering other information or pushing it off-screen. The Force Picker reduces this footprint by letting users increase and decrease values over a wide range using force touch for rate-based control. However, changing input direction this way is difficult. We propose three techniques to address this. With our best candidate, Thumb-Roll, the Force Picker lets untrained users achieve similar accuracy as a standard picker, albeit less quickly. Shrinking it to a single table row, 20% of the iOS picker height, slightly affects completion time, but not accuracy. Intriguingly, after 70 minutes of training, users were significantly faster with this minimized Thumb-Roll Picker compared to the standard picker, at the same accuracy and only 6% of the gesture footprint. We close with application examples.

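Rate-based control here means the harder the press, the faster the value changes. A minimal sketch follows; the thresholds, the quadratic rate curve, and the direction toggle (standing in for the Thumb-Roll gesture) are illustrative assumptions.

```python
# Sketch: map touch force to a signed rate of value change.
MIN_FORCE = 0.5    # dead zone so resting touches do not scroll (assumed)
MAX_FORCE = 6.67   # illustrative maximum force value
MAX_RATE = 20.0    # values per second at full force (assumed)

def value_rate(force, direction):
    """Map touch force to a signed change rate in values/second.

    direction: +1 or -1; in the paper, the Thumb-Roll gesture flips it.
    """
    if force <= MIN_FORCE:
        return 0.0
    t = min((force - MIN_FORCE) / (MAX_FORCE - MIN_FORCE), 1.0)
    return direction * MAX_RATE * t ** 2   # quadratic easing for fine control

def step(value, force, direction, dt, lo=0, hi=59):
    """Advance the picker value by one frame of dt seconds, clamped to [lo, hi]."""
    return max(lo, min(hi, value + value_rate(force, direction) * dt))

# Example: a firm press for one 60 Hz frame nudges the value upward.
print(step(30.0, 4.0, +1, 1 / 60))
```
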
Sketch&Stitch: Interactive Embroidery for E-textiles
Nur Al-huda Hamdan et al., RWTH Aachen University. CHI 2018. Topics: Shape-Changing Interfaces & Soft Robotic Materials; Electronic Textiles (E-textiles).
E-textiles are fabrics that integrate electronic circuits and components. Makers use them to create interactive clothing, furniture, and toys. However, this requires significant manual labor and skills, and the use of technology-centric design tools. We introduce Sketch&Stitch, an interactive embroidery system to create e-textiles using a traditional crafting approach: users draw their art and circuit directly on fabric using colored pens. The system takes a picture of the sketch, converts it to embroidery patterns, and sends them to an embroidery machine. Alternating between sketching and stitching, users build and test their design incrementally. Sketch&Stitch features Circuitry Stickers representing circuit boards and components, custom stitch patterns that insulate wire crossings, and various textile touch sensors such as pushbuttons, sliders, and 2D touchpads. Circuitry Stickers serve as placeholders during design. Using computer vision, they are recognized and replaced later in the appropriate embroidery phases. We close with technical considerations and application examples.

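A first step any pipeline like this needs is to segment the colored pen strokes in the fabric photo so that each ink color can be mapped to an embroidery role. The sketch below shows plain HSV color segmentation with OpenCV; the HSV ranges, the color-to-role mapping, and the input file name are illustrative assumptions, not details from the paper.

```python
# Sketch: segment colored pen strokes in a photo of the fabric by color.
import cv2
import numpy as np

# Assumed ink colors and their HSV ranges (OpenCV hue is 0..179).
INK_RANGES = {
    "circuit": ((100, 80, 80), (130, 255, 255)),  # blue pen -> conductive thread
    "art":     ((0, 80, 80),   (10, 255, 255)),   # red pen  -> decorative thread
}

def segment_strokes(photo_bgr):
    """Return one binary mask per ink color found in the fabric photo."""
    hsv = cv2.cvtColor(photo_bgr, cv2.COLOR_BGR2HSV)
    masks = {}
    for role, (lo, hi) in INK_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        # Close small gaps in the strokes before vectorizing them.
        kernel = np.ones((5, 5), np.uint8)
        masks[role] = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return masks

photo = cv2.imread("sketch_on_fabric.jpg")  # hypothetical input image
if photo is not None:
    for role, mask in segment_strokes(photo).items():
        print(role, "stroke pixels:", int(cv2.countNonZero(mask)))
```
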