VisRing: A Display-Extended Smartring for Nano Visualizations
We introduce VisRing, the first smartring incorporating a bendable 160 × 32 4-bit grayscale organic light-emitting diode display. VisRing stands out by displaying nano visualizations while maintaining a compact design and a minimal weight of 6.6 g, with an overall cost of around $35. We exploit opportunities for a system-on-a-chip architecture to tightly integrate an inertial measurement unit, a photoplethysmograph sensor, a temperature sensor, Bluetooth, a microcontroller, and a display unit that spans 270° to 360°, depending on finger size. Our contributions include the hardware design and implementation of VisRing, along with a software library that supports visualizing various data types. A qualitative study with 12 participants demonstrated the comfort, likability, and social acceptance of VisRing’s hardware and software. The participants liked the visualizations and found the ring lightweight, but also pointed out possible improvements. All materials are shared under an open-source license to enable the community to extend and improve VisRing.
Taiting Lu et al. · UIST 2025 · Topics: Haptic Wearables; Data Physicalization; Smartwatches & Fitness Bands

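The abstract mentions a software library for visualizing various data types on the ring's 160 × 32, 4-bit grayscale display. As a minimal illustrative sketch only (not the authors' library; the function name and rendering approach are assumptions), a data series could be rasterized as a sparkline into such a framebuffer like this:

```python
WIDTH, HEIGHT, MAX_GRAY = 160, 32, 15  # display resolution, 4-bit grayscale

def render_sparkline(samples, width=WIDTH, height=HEIGHT):
    """Render a non-empty data series into a width×height framebuffer (row-major,
    values 0..MAX_GRAY), min-max normalized so the series fills the display."""
    fb = [[0] * width for _ in range(height)]
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1  # avoid division by zero for flat series
    for x in range(width):
        # Nearest-neighbor resample the series to the display width.
        s = samples[x * len(samples) // width]
        # Larger values map to rows nearer the top of the display.
        y = height - 1 - round((s - lo) / span * (height - 1))
        fb[y][x] = MAX_GRAY  # full-brightness pixel on the line
    return fb
```

A real driver would additionally pack two 4-bit pixels per byte and account for the display wrapping around the finger.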
GuitarPie: Using the Fretboard of an Electric Guitar for Audio-Based Pie Menu Interaction
Nowadays, electric guitars are often used together with digital interfaces. For instance, tablature applications can support guitar practice by rendering and playing back the tabs of individual instrument tracks of a song (guitar, drums, etc.). However, those interfaces are typically controlled via mouse and keyboard or via touch input. This means that controlling and configuring playback during practice can lead to high switching costs, as learners often need to switch between playing and interface control. In this paper, we explore the use of audio input from an unmodified electric guitar to enable interface control without letting go of the guitar. We present GuitarPie, an audio-based pie menu interaction method. GuitarPie utilizes the grid-like structure of a fretboard to spatially represent audio-controlled operations, avoiding the need to memorize note sequences. Furthermore, we implemented TabCtrl, a tablature interface that uses GuitarPie and other audio-based interaction methods for interface control.
Frank Heyen et al. · UIST 2025 · Topics: Electrical Muscle Stimulation (EMS); Shape-Changing Interfaces & Soft Robotic Materials; Food Culture & Food Interaction

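The core idea is mapping notes played on the fretboard to spatial menu entries. A minimal sketch of one plausible pipeline, assuming standard tuning and a simple fret-region-to-slot mapping (this is an illustration of the concept, not the authors' implementation; all names are hypothetical):

```python
import math

# Standard-tuning open-string MIDI note numbers (low E to high E).
OPEN_STRINGS = [40, 45, 50, 55, 59, 64]

def freq_to_midi(freq_hz: float) -> int:
    """Quantize a detected fundamental frequency to the nearest MIDI note."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def candidate_positions(midi_note: int, max_fret: int = 12):
    """All (string, fret) positions that can produce the given note."""
    return [(s, midi_note - open_midi)
            for s, open_midi in enumerate(OPEN_STRINGS)
            if 0 <= midi_note - open_midi <= max_fret]

def menu_slot(string: int, fret: int, frets_per_slot: int = 3, slots: int = 4) -> int:
    """Map a fretboard region to one of `slots` menu entries by fret range."""
    return min(fret // frets_per_slot, slots - 1)
```

Note the ambiguity this exposes: one pitch maps to several fretboard positions, which is exactly why a practical system needs a spatial layout that tolerates or disambiguates such collisions.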
Traversing Dual Realities: Investigating Techniques for Transitioning 3D Objects between Desktop and Augmented Reality Environments
Desktop environments can integrate augmented reality (AR) head-worn devices to support 3D representations, visualizations, and interactions in a novel yet familiar setting. As users navigate across the dual realities, desktop and AR, a way to move 3D objects between them is needed. We devise three baseline transition techniques based on common approaches in the literature and evaluate their usability and practicality in an initial user study (N=18). After refining both our transition techniques and the surrounding technical setup, we validate the applicability of the overall concept for real-world activities in an expert user study (N=6). In it, computational chemists followed their usual desktop workflows to build, manipulate, and analyze 3D molecular structures, but now aided with the addition of AR and our transition techniques. Based on our findings from both user studies, we provide lessons learned and takeaways for the design of 3D object transition techniques in desktop + AR environments.
Tobias Rau et al. (University of Stuttgart, Visualization Research Center) · CHI 2025 · Topics: AR Navigation & Context Awareness; Mixed Reality Workspaces

Who is in Control? Understanding User Agency in AR-assisted Construction Assembly
Adaptive AR assistance can automatically trigger content to support users based on their context. Such intelligent automation offers many benefits but also alters users' degree of control, which is seldom explored in existing research. In this paper, we compare high- and low-agency control in AR-assisted construction assembly to understand the role of user agency. We designed cognitive and physical assembly scenarios and conducted a lab study (N=24), showing that low-agency control reduced mental workloads and perceived autonomy in several tasks. A follow-up domain expert study with trained carpenters (N=8) contextualised these results in an ecologically valid setting. Through semi-structured interviews, we examined the carpenters' perspectives on AR support in their daily work and the trade-offs of automating interactions. Based on these findings, we summarise key design considerations to inform future adaptive AR designs in the context of timber construction.
Xiliu Yang et al. (Institute of Computational Design and Construction) · CHI 2025 · Topics: AR Navigation & Context Awareness; Knowledge Worker Tools & Workflows; Computational Methods in HCI

A Systematic Review of Ability-diverse Collaboration through Ability-based Lens in HCI
In a world where diversity is increasingly recognised and celebrated, it is important for HCI to embrace the evolving methods and theories for technologies to reflect the diversity of its users and be ability-centric. Interdependence Theory, an example of this evolution, highlights the interpersonal relationships between humans and technologies and how technologies should be designed to meet shared goals and outcomes for people, regardless of their abilities. This necessitates a contemporary understanding of "ability-diverse collaboration," which motivated this review. In this review, we offer an analysis of 117 papers sourced from the ACM Digital Library spanning the last two decades. We contribute (1) a unified taxonomy and the Ability-Diverse Collaboration Framework, (2) a reflective discussion and mapping of the current design space, and (3) future research opportunities and challenges. Finally, we have released our data and analysis tool to encourage the HCI research community to contribute to this ongoing effort.
Lan Xiao et al. (University College London) · CHI 2024 · Topics: Cognitive Impairment & Neurodiversity (Autism, ADHD, Dyslexia); Universal & Inclusive Design; Inclusive Design

Sitting Posture Recognition and Feedback: A Literature Review
Extensive sitting is unhealthy; thus, countermeasures are needed to react to the ongoing trend toward more prolonged sitting. A variety of studies and guidelines have long addressed the question of how we can improve our sitting habits. Nevertheless, sitting time is still increasing. Here, smart devices can provide a general overview of sitting habits for more nuanced feedback on the user's sitting posture. Based on a literature review (N=223), including publications from engineering, computer science, medical sciences, electronics, and more, our work guides developers of posture systems. There is a large variety of approaches, with pressure-sensing hardware and visual feedback being the most prominent. We found factors like environment, cost, privacy concerns, portability, and accuracy important for deciding hardware and feedback types. Further, one should consider the user's capabilities, preferences, and tasks. Regarding user studies for sitting posture feedback, there is a need for better comparability and for investigating long-term effects.
Christian Krauter et al. (University of Stuttgart) · CHI 2024 · Topics: Human Pose & Activity Recognition; Biosensors & Physiological Monitoring; Context-Aware Computing

What’s (Not) Tracking? Factors of Influence in Industrial Augmented Reality Tracking: A Use Case Study in an Automotive Environment
Augmented Reality (AR) is a key technology for digitization in enterprises. However, there is often a lack of stable tracking solutions for use inside manufacturing environments. Many different tracking technologies are available, yet it can be difficult to choose the most appropriate tracking solution for different use cases with their varying conditions. In order to shed light on common tracking requirements and conditions for automotive AR use cases, we conducted a use case study spanning 61 use cases within the complete product life-cycle of a large automotive manufacturer. By analyzing the gathered data, we were able to note the frequency of different tracking requirements and conditions within automotive AR use cases. Based on these use cases, we could also derive common factors of influence for AR tracking in the automotive industry, which show the various challenges automotive AR tracking is currently facing.
Jonas Haischt et al. · AutoUI 2023 · Topics: AR Navigation & Context Awareness; Context-Aware Computing

Accessibility for Color Vision Deficiencies: Challenges and Findings of a Large Scale Study on Paper Figures
We present an exploratory study on the accessibility of images in publications when viewed with color vision deficiencies (CVDs). The study is based on 1710 images sampled from a visualization dataset (VIS30K) over five years. We simulated four CVDs on each image. First, four researchers (one with a CVD) identified existing issues and helpful aspects in a subset of the images. Based on the resulting labels, 200 crowdworkers provided ~30,000 ratings on present CVD issues in the simulated images. We analyzed this data for correlations, clusters, trends, and free text comments to gain a first overview of paper figure accessibility. Overall, about 60% of the images were rated accessible. Furthermore, our study indicates that accessibility issues are subjective and hard to detect. On a meta-level, we reflect on our study experience to point out challenges and opportunities of large-scale accessibility studies for future research directions.
Katrin Angerbauer et al. (University of Stuttgart) · CHI 2022 · Topics: Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Universal & Inclusive Design

STRIVE: String-based Force Feedback for Automotive Engineering
The large potential of force feedback devices for interacting in Virtual Reality (VR) has been illustrated in a plethora of research prototypes. Yet, these devices are still rarely used in practice, and it remains an open challenge how to move this research into practice. To that end, we contribute a participatory design study on the use of haptic feedback devices in the automotive industry. Based on a 10-month observation process with 13 engineers, we developed STRIVE, a string-based haptic feedback device. In addition to the design of STRIVE, this process led to a set of requirements for introducing haptic devices into industrial settings, which center around a need for flexibility regarding forces, comfort, and mobility. We evaluated STRIVE with 16 engineers in five different day-to-day automotive VR use cases. The main results show an increased level of trust and perceived safety as well as further challenges towards moving haptics research into practice.
Alexander Achberger et al. · UIST 2021 · Topics: Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS); Teleoperated Driving; Force Feedback & Pseudo-Haptic Weight

A View on the Viewer: Gaze-Adaptive Captions for Videos
Subtitles play a crucial role in the cross-lingual distribution of multimedia content and help communicate information where auditory content is not feasible (loud environments, hearing impairments, unknown languages). Established methods place text at the bottom of the screen, which may distract from the video. Alternative techniques place captions closer to related content (e.g., faces) but are not applicable to arbitrary videos such as documentaries. Hence, we propose to leverage live gaze as an indirect input method to adapt captions to individual viewing behavior. We implemented two gaze-adaptive methods and compared them in a user study (N=54) to traditional captions and audio-only videos. The results show that viewers with less experience with captions prefer our gaze-adaptive methods, as they assist them in reading. Furthermore, gaze distributions resulting from our methods are closer to natural viewing behavior compared to the traditional approach. Based on these results, we provide design implications for gaze-adaptive captions.
Kuno Kurzhals et al. (ETH Zürich) · CHI 2020 · Topics: Voice User Interface (VUI) Design; Voice Accessibility; Deaf & Hard-of-Hearing Support (Captions, Sign Language, Vibration)

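One way to adapt caption placement to live gaze, sketched here purely as an illustration (the paper's two methods may differ; the function and its parameters are assumptions), is to snap the caption to the vertical band containing the gaze point, with hysteresis so micro-saccades do not cause jitter:

```python
def choose_caption_row(gaze_y: float, current_row: int, rows: int = 3,
                       frame_height: float = 1.0, hysteresis: float = 0.1) -> int:
    """Pick a caption row near the viewer's vertical gaze position.

    The frame is split into `rows` horizontal bands; the caption only moves
    when gaze leaves the current band by more than `hysteresis` (as a fraction
    of frame height), which suppresses jitter from small gaze fluctuations.
    """
    band = frame_height / rows
    lo = current_row * band - hysteresis * frame_height
    hi = (current_row + 1) * band + hysteresis * frame_height
    if lo <= gaze_y <= hi:
        return current_row  # gaze still within the (expanded) current band
    return min(int(gaze_y / band), rows - 1)
```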
Assessing 2D and 3D Heatmaps for Comparative Analysis: An Empirical Study
Heatmaps are a popular visualization technique that encodes 2D density distributions using color or brightness. Experimental studies have shown, though, that both of these visual variables are inaccurate when reading and comparing numeric data values. A potential remedy might be to use 3D heatmaps by introducing height as a third dimension to encode the data. Encoding abstract data in 3D, however, poses many problems, too. To better understand this tradeoff, we conducted an empirical study (N=48) to evaluate user performance with 2D and 3D heatmaps for comparative analysis tasks. We tested our conditions on a conventional 2D screen, but also in a virtual reality environment to allow for real stereoscopic vision. Our main results show that 3D heatmaps are superior in terms of error rate when reading and comparing single data items. However, for overview tasks, the well-established 2D heatmap performs better.
Matthias Kraus et al. (University of Konstanz) · CHI 2020 · Topics: Interactive Data Visualization; Visualization Perception & Cognition

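The trade-off studied here is between encoding a density value as brightness (2D heatmap) versus as height (3D heatmap). A minimal sketch of both encodings derived from the same min-max-normalized grid (illustrative only; the function name and parameters are hypothetical):

```python
def encode_heatmap(density, levels: int = 256, max_height: float = 1.0):
    """Map a 2D density grid to (a) brightness levels for a 2D heatmap and
    (b) bar heights for a 3D heatmap, both via min-max normalization."""
    flat = [v for row in density for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1  # avoid division by zero for constant grids
    norm = [[(v - lo) / span for v in row] for row in density]
    brightness = [[round(v * (levels - 1)) for v in row] for row in norm]  # 2D
    heights = [[v * max_height for v in row] for row in norm]              # 3D
    return brightness, heights
```

Both encodings carry the same normalized values; the study's question is which perceptual channel, luminance or stereoscopic height, lets viewers read them back more accurately.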
Neighborhood Perception in Bar Charts
In this paper, we report three user experiments that investigate to what extent the perception of a bar in a bar chart changes based on the height of its neighboring bars. We hypothesized that the perception of the very same bar might differ, for instance, when it is surrounded by the highest vs. the lowest bars of a chart. Our results show that such neighborhood effects exist: a target bar surrounded by high neighboring bars is perceived to be lower than the same bar surrounded by low neighbors. Yet, the effect size of this neighborhood effect is small compared to other data-inherent effects: judgment accuracy largely depends on the target bar's rank, the number of data items, and other characteristics of the dataset. Based on these findings, we discuss design implications for perceptually optimizing bar charts.
Mingqian Zhao et al. (Hong Kong University of Science and Technology) · CHI 2019 · Topics: Uncertainty Visualization; Visualization Perception & Cognition
