🌳-generAItor: Tree-in-the-loop Text Generation for Language Model Explainability and Adaptation. Large language models (LLMs) are widely deployed in various downstream tasks, e.g., auto-completion, aided writing, or chat-based text generation. However, the considered output candidates of the underlying search algorithm are under-explored and under-explained. We tackle this shortcoming by proposing a tree-in-the-loop approach, where a visual representation of the beam search tree is the central component for analyzing, explaining, and adapting the generated outputs. To support these tasks, we present generAItor, a visual analytics technique, augmenting the central beam search tree with various task-specific widgets, providing targeted visualizations and interaction possibilities. Our approach allows interactions on multiple levels and offers an iterative pipeline that encompasses generating, exploring, and comparing output candidates, as well as fine-tuning the model based on adapted data. Our case study shows that our tool generates new insights in gender bias analysis beyond state-of-the-art template-based methods. Additionally, we demonstrate the applicability of our approach in a qualitative user study. Finally, we quantitatively evaluate the adaptability of the model to few samples, as occurring in text-generation use cases. 2025 · Thilo Spinner et al. · Explainable AI (XAI) · Recommender System UX · Interactive Data Visualization · IUI
MiniMates: Miniature Avatars for AR Remote Meetings within Limited Physical Spaces. Remote meetings using 3D avatars in Augmented Reality (AR) allow effective communication and enable users to retain awareness of their surroundings. However, positioning 3D avatars effectively and consistently for all users in AR is challenging since most spaces, such as offices or living rooms, are not large enough to accommodate multiple life-sized avatars without interference. To address this issue, we contribute MiniMates, a novel approach leveraging miniature avatars, which make it possible to place multiple remote users in a limited physical space. We see MiniMates as complementary to traditional 2D video conferencing and immersive telepresence. Our approach automatically adjusts the formation of avatars and redirects users' head and body orientation to facilitate communication. Results from our user study (n = 24) show that participants experience a higher sense of co-presence compared to video conferencing, and that MiniMates enabled them to communicate the direction of their interactions non-verbally as well as manage multiple simultaneous conversations. 2025 · Akihiro Kiuchi et al. · The University of Tokyo · Social & Collaborative VR · Mixed Reality Workspaces · Context-Aware Computing · CHI
Eye-Hand Movement of Objects in Near Space Extended Reality. Hand-tracking in Extended Reality (XR) enables moving objects in near space with direct hand gestures, to pick, drag and drop objects in 3D. In this work, we investigate the use of eye-tracking to reduce the effort involved in this interaction. As the eyes naturally look ahead to the target for a drag operation, the principal idea is to map the translation of the object in the image plane to gaze, such that the hand only needs to control the depth component of the operation. We have implemented four techniques that explore two factors: the use of gaze only to move objects in X-Y vs. extra refinement by hand, and the use of hand input in the Z axis to directly move objects vs. indirectly via a transfer function. We compared all four techniques in a user study (N=24) against baselines of direct and indirect hand input. We detail user performance, effort and experience trade-offs and show that all eye-hand techniques significantly reduce physical effort over direct gestures, pointing toward effortless drag-and-drop for XR environments. 2024 · Uta Wagner et al. · Hand Gesture Recognition · Eye Tracking & Gaze Interaction · UIST
RoboVisAR: Immersive Authoring of Condition-based AR Robot Visualisations. We introduce RoboVisAR, an immersive augmented reality (AR) authoring tool to create in-situ robot visualisations. AR robot visualisations such as the robot's path, status, and safety zones have been shown to benefit human-robot collaboration. However, creating custom AR visualisations requires extensive skills in both robotics and AR programming. RoboVisAR allows users to create custom AR robot visualisations without programming. By recording an example robot program's behaviour, users can create and test custom visualisations in-situ within a mixed reality environment. RoboVisAR supports six types of visualisations (path, point-of-interest, safety zone, robot state, message, and force/torque) and four types of conditions (robot state, proximity, inside-box, and force/torque). These features enable users to easily combine different visualisations on demand and create context-aware assistance without visual clutter. An expert user study with three participants suggests that users appreciate the customisability of the visualisations and can create their own robot visualisations in less than ten minutes. 2024 · Rasmus Skovhus Lunding et al. · Mixed Reality Workspaces · Social Robot Interaction · Teleoperation & Telepresence · HRI
Visual Analytics of Co-Occurrences to Discover Subspaces in Structured Data. The task of identifying interesting subspaces is common but difficult due to exponential search spaces and the curse of dimensionality. One application of such a task might be identifying a cohort of patients defined by attributes such as gender, age, and diabetes type that share a common patient history, which is modeled as event sequences. Filtering the data by these attributes is common but cumbersome and often does not allow a comparison of subspaces. We present an approach that shows all relevant subspaces of categorical data condensed in a single picture. We model the categorical values of the attributes as co-occurrences with data partitions generated from the structured data using pattern mining. We show that these co-occurrences satisfy the a-priori property, allowing us to greatly reduce the search space and effectively generate the condensed picture, whereas conventional approaches filter out several subspaces as insignificant. We contribute a powerful multi-dimensional pattern exploration approach (MDPE-approach), agnostic to the structured data type, that models multiple attributes and their characteristics as co-occurrences, allowing the user to identify and compare thousands of subspaces of interest in a single picture. In our MDPE-approach, we introduce two methods to dramatically reduce the search space, outputting only the boundaries of the search space in the form of two tables. We implement the MDPE-approach in an interactive visual interface (MDPE-vis) that provides a scalable, pixel-based visualization design allowing the identification, comparison, and sense-making of subspaces in structured data. Our case studies with a gold-standard dataset and external domain experts confirm the applicability of our approach and its implementation. A third use case sheds light on the scalability of our approach, and a user study with 15 participants underlines its usefulness and power. 2024 · Wolfgang Jentner et al. · Interactive Data Visualization · Visualization Perception & Cognition · IUI
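The a-priori property mentioned in the abstract above is the anti-monotonicity that frequent-pattern mining exploits: a combination of attribute values can never occur more often than any of its subsets, so rare combinations prune all their supersets. As a rough illustration of this pruning idea (a minimal sketch with hypothetical names, not the MDPE implementation), a level-wise search over categorical subspaces might look like:

```python
from itertools import combinations

def frequent_cooccurrences(rows, min_support):
    """Level-wise (a-priori) search for frequent attribute-value co-occurrences.

    rows: list of dicts mapping attribute name -> categorical value.
    Candidates containing an infrequent subset are pruned without counting.
    """
    n = len(rows)

    def support(itemset):
        return sum(all(row.get(a) == v for a, v in itemset) for row in rows) / n

    # Level 1: single (attribute, value) pairs
    singles = {(a, v) for row in rows for a, v in row.items()}
    level = [frozenset([p]) for p in singles
             if support(frozenset([p])) >= min_support]
    frequent = {s: support(s) for s in level}

    k = 2
    while level:
        candidates = set()
        for i, a in enumerate(level):
            for b in level[i + 1:]:
                u = a | b
                if (len(u) == k                                # join yields size k
                        and len({attr for attr, _ in u}) == k  # one value per attribute
                        and all(frozenset(c) in frequent       # a-priori pruning
                                for c in combinations(u, k - 1))):
                    candidates.add(u)
        level = [c for c in candidates if support(c) >= min_support]
        frequent.update({c: support(c) for c in level})
        k += 1
    return frequent
```

On the patient-cohort example from the abstract, a frozenset like {("gender", "f"), ("diabetes", "type2")} would only ever be counted if both of its single-attribute subsets were themselves frequent.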
Effects of Human-Swarm Interaction on Subjective Time Perception: Swarm Size and Speed. Many large-scale multi-robot systems require human input during operation in different applications. To minimize the human effort, interaction is kept intermittent or restricted to a subset of the robots. Despite this reduced demand for interaction, the mental load and stress can be challenging for the human operator. One hypothesized effect of human-swarm interaction is a change in the operator's subjective time perception. In a series of simple human-swarm interaction experiments with robot swarms of up to 15 physical robots, we study whether human operators' time perception is altered by the number of controlled robots or by the robot speeds. Using data gathered by questionnaires, we found that increased swarm size shrinks perceived time, while decreased robot speeds expand it. We introduce the concept of subjective time perception to human-swarm interaction. Future research will enable swarm systems to autonomously modulate subjective timing to ease the job of human operators. 2023 · Julian Kaduk et al. · Human-Robot Collaboration (HRC) · Teleoperation & Telepresence · HRI
Interactive Context-Preserving Color Highlighting for Multiclass Scatterplots. Color is one of the main visual channels used for highlighting elements of interest in visualization. However, in multi-class scatterplots, color highlighting often comes at the expense of degraded color discriminability. In this paper, we argue for context-preserving highlighting during the interactive exploration of multi-class scatterplots to achieve desired pop-out effects, while maintaining good perceptual separability among all classes and consistent color mapping schemes under varying points of interest. We do this by first generating two contrastive color mapping schemes with large and small contrasts to the background. Both schemes maintain good perceptual separability among all classes and ensure that when colors from the two palettes are assigned to the same class, they have a high color consistency in color names. We then interactively combine these two schemes to create a dynamic color mapping for highlighting different points of interest. We demonstrate the effectiveness of our approach through crowd-sourced experiments and case studies. 2023 · Kecheng Lu et al. · Shandong University · Interactive Data Visualization · CHI
ARound the Smartphone: Investigating the Effects of Virtually-Extended Display Size on Spatial Memory. Smartphones conveniently place large information spaces in the palms of our hands. While research has shown that larger screens positively affect spatial memory, workload, and user experience, smartphones remain fairly compact for the sake of device ergonomics and portability. Thus, we investigate the use of hybrid user interfaces to virtually increase the available display size by complementing the smartphone with an augmented reality head-worn display. We thereby combine the benefits of familiar touch interaction with the near-infinite visual display space afforded by augmented reality. To better understand the potential of virtually-extended displays and the possible issues of splitting the user's visual attention between two screens (real and virtual), we conducted a within-subjects experiment with 24 participants completing navigation tasks using different virtually-augmented display sizes. Our findings reveal that a desktop monitor size represents a "sweet spot" for extending smartphones with augmented reality, informing the design of hybrid user interfaces. 2023 · Sebastian Hubenschmid et al. · University of Konstanz · AR Navigation & Context Awareness · Mixed Reality Workspaces · CHI
Producing and Consuming Instructional Material in Manufacturing Contexts: Evaluation of an AR-based Cyber-Physical Production System for Supporting Knowledge and Expertise Sharing. Fast-paced knowledge and expertise sharing (KES) is a typical demand in contemporary workplaces due to dynamic markets and ever-changing work practices. Past and current computer supported cooperative work (CSCW) research has long been investigating how computer technologies can support people with KES. Recent claims have asserted that augmented reality- (AR-)based cyber-physical production systems (CPPS) are poised to bring significant changes in the ways that KES unfolds in manufacturing contexts. This paper scrutinises such claims by implementing a short-term evaluation of an AR-based CPPS and assessing how it can potentially support (1) the generation of AR content by experienced production workers and (2) the visualisation and processing of such content by novice workers. We, therefore, contribute a user study to the CSCW community that sheds light on the use of a particular type of AR-based CPPS for KES in industrial contexts. 2022 · Sven Hoffmann et al. · XR in Place and Space · CSCW
Interpolating Happiness: Understanding the Intensity Gradations of Face Emojis Across Cultures. We frequently utilize face emojis to express emotions in digital communication. But how wholly and precisely do such pictographs sample the emotional spectrum, and are there gaps to be closed? Our research establishes emoji intensity scales for seven basic emotions: happiness, anger, disgust, sadness, shock, annoyance, and love. In our survey (N = 1195), participants worldwide assigned emotions and intensities to 68 face emojis. According to our results, certain feelings, such as happiness or shock, are visualized by manifold emojis covering a broad spectrum of intensities. Other feelings, such as anger, have limited and only very intense representative visualizations. We further emphasize that the cultural background influences emojis' perception: for instance, linear-active cultures (e.g., UK, Germany) rate the intensity of such visualizations higher than multi-active (e.g., Brazil, Russia) or reactive cultures (e.g., Indonesia, Singapore). To summarize, our manuscript promotes future research on more expressive, culture-aware emoji design. 2022 · Andrey Krekhov et al. · University of Duisburg-Essen · Multilingual & Cross-Cultural Voice Interaction · Algorithmic Fairness & Bias · CHI
ReLive: Bridging In-Situ and Ex-Situ Visual Analytics for Analyzing Mixed Reality User Studies. The nascent field of mixed reality is seeing an ever-increasing need for user studies and field evaluation, which are particularly challenging given device heterogeneity, diversity of use, and mobile deployment. Immersive analytics tools have recently emerged to support such analysis in situ, yet the complexity of the data also warrants an ex-situ analysis using more traditional non-immersive visual analytics setups. To bridge the gap between both approaches, we introduce ReLive: a mixed-immersion visual analytics framework for exploring and analyzing mixed reality user studies. ReLive combines an in-situ virtual reality view with a complementary ex-situ desktop view. While the virtual reality view allows users to relive interactive spatial recordings replicating the original study, the synchronized desktop view provides a familiar interface for analyzing aggregated data. We validated our concepts in a two-step evaluation consisting of a design walkthrough and an empirical expert user study. 2022 · Sebastian Hubenschmid et al. · University of Konstanz · Mixed Reality Workspaces · Interactive Data Visualization · CHI
Neo: Generalizing Confusion Matrix Visualization to Hierarchical and Multi-Output Labels. The confusion matrix, a ubiquitous visualization for helping people evaluate machine learning models, is a tabular layout that compares predicted class labels against actual class labels over all data instances. We conduct formative research with machine learning practitioners at Apple and find that conventional confusion matrices do not support more complex data-structures found in modern-day applications, such as hierarchical and multi-output labels. To express such variations of confusion matrices, we design an algebra that models confusion matrices as probability distributions. Based on this algebra, we develop Neo, a visual analytics system that enables practitioners to flexibly author and interact with hierarchical and multi-output confusion matrices, visualize derived metrics, renormalize confusions, and share matrix specifications. Finally, we demonstrate Neo's utility with three model evaluation scenarios that help people better understand model performance and reveal hidden confusions. 2022 · Jochen Görtler et al. · University of Konstanz · Interactive Data Visualization · Time-Series & Network Graph Visualization · Visualization Perception & Cognition · CHI
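The core idea of modeling a confusion matrix as a probability distribution can be sketched in a few lines (my own illustration of the general concept, not Neo's actual algebra or API): normalizing the count matrix yields a joint distribution, "renormalizing confusions" corresponds to conditioning on the actual or predicted class, and hierarchical labels collapse by marginalizing child classes into their parent.

```python
import numpy as np

# Treat the confusion matrix as counts of (actual, predicted) pairs.
counts = np.array([[50,  2,  3],
                   [ 4, 40,  6],
                   [ 1,  5, 39]], dtype=float)   # rows: actual, cols: predicted

joint = counts / counts.sum()                    # joint P(actual, predicted)

# "Renormalizing confusions" becomes conditioning:
recall_view = counts / counts.sum(axis=1, keepdims=True)     # P(predicted | actual)
precision_view = counts / counts.sum(axis=0, keepdims=True)  # P(actual | predicted)

# Hierarchical labels collapse by marginalizing: merging classes 1 and 2
# into one parent sums the corresponding rows and columns.
merge = np.array([[1, 0],
                  [0, 1],
                  [0, 1]], dtype=float)          # child -> parent mapping
parent_counts = merge.T @ counts @ merge         # 2x2 matrix over parent classes
```

Because every view is derived from the same underlying distribution, switching between recall, precision, and hierarchy levels never requires re-evaluating the model, only re-aggregating the counts.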
STREAM: Exploring the Combination of Spatially-Aware Tablets with Augmented Reality Head-Mounted Displays for Immersive Analytics. Recent research in the area of immersive analytics demonstrated the utility of head-mounted augmented reality devices for visual data analysis. However, it can be challenging to use the mid-air gestures supported by default to interact with visualizations in augmented reality (e.g., due to limited precision). Touch-based interaction (e.g., via mobile devices) can compensate for these drawbacks, but is limited to two-dimensional input. In this work we present STREAM: Spatially-aware Tablets combined with Augmented Reality Head-Mounted Displays for multimodal interaction with 3D visualizations. We developed a novel eyes-free interaction concept for the seamless transition between the tablet and the augmented reality environment. A user study reveals that participants appreciated the novel interaction concept, indicating the potential for spatially-aware tablets in augmented reality. Based on our findings, we provide design insights to foster the application of spatially-aware touch devices in augmented reality and research implications indicating areas that need further investigation. 2021 · Sebastian Hubenschmid et al. · University of Konstanz · AR Navigation & Context Awareness · Mixed Reality Workspaces · Interactive Data Visualization · CHI
KiTT - The Kinaesthetics Transfer Teacher: Design and Evaluation of a Tablet-based System to Promote the Learning of Ergonomic Patient Transfers. Nurses frequently transfer patients as part of their daily work. However, manual patient transfers pose a major risk to nurses' health. Although the Kinaesthetics care conception can help address this issue, existing support to learn the concept is low. We present KiTT, a tablet-based system, to promote the learning of ergonomic patient transfers based on the Kinaesthetics care conception. KiTT supports the training of Kinaesthetics-based patient transfers by two nurses. The nurses are guided by the phases of (i) interactive instructions, (ii) training of transfer conduct, and (iii) feedback and reflection. We evaluated KiTT with 26 nursing-care students in a nursing-care school. Our results indicate that KiTT provides good subjective support for the learning of Kinaesthetics. Our results also suggest that KiTT can promote the ergonomically correct conduct of patient transfers while providing a good user experience adequate to the nursing-school context, and reveal how KiTT can extend existing practices. 2021 · Maximilian Dürr et al. · University of Konstanz · Vibrotactile Feedback & Skin Stimulation · Intelligent Tutoring Systems & Learning Analytics · Fitness Tracking & Physical Activity Monitoring · CHI
Data-Driven Mark Orientation for Trend Estimation in Scatterplots. A common task for scatterplots is communicating trends in bivariate data. However, the ability of people to visually estimate these trends is under-explored, especially when the data violate assumptions required for common statistical models, or visual trend estimates are in conflict with statistical ones. In such cases, designers may need to intervene and de-bias these estimations, or otherwise inform viewers about differences between statistical and visual trend estimations. We propose data-driven mark orientation as a solution in such cases, where the directionality of marks in the scatterplot guides viewers when visual estimation is otherwise unclear or ambiguous. Through a set of laboratory studies, we investigate trend estimation across a variety of data distributions and mark directionalities, and find that data-driven mark orientation can help resolve ambiguities in visual trend estimates. 2021 · Tingting Liu et al. · School of Computer Science · Interactive Data Visualization · Visualization Perception & Cognition · CHI
A Human Touch: Social Touch Increases the Perceived Human-likeness of Agents in Virtual Reality. Virtual Reality experiences and games present believable virtual environments based on graphical quality, spatial audio, and interactivity. The interaction with in-game characters, controlled by computers (agents) or humans (avatars), is an important part of VR experiences. Pre-captured motion sequences increase the visual humanoid resemblance. However, this still precludes realistic social interactions (eye contact, imitation of body language), particularly for agents. We aim to make social interaction more realistic via social touch. Social touch is non-verbal and conveys feelings and signals (coexistence, closeness, intimacy). In our research, we created an artificial hand to apply social touch in a repeatable and controlled fashion to investigate its effect on the perceived human-likeness of avatars and agents. Our results show that social touch is effective in further blurring the boundary between computer- and human-controlled virtual characters and contributes to experiences that closely resemble human-to-human interactions. 2020 · Matthias Hoppe et al. · Ludwig Maximilian University of Munich · Haptic Wearables · Immersion & Presence Research · CHI
Next Steps for Human-Computer Integration. Human-Computer Integration (HInt) is an emerging paradigm in which computational and human systems are closely interwoven. Integrating computers with the human body is not new. However, we believe that with rapid technological advancements, increasing real-world deployments, and growing ethical and societal implications, it is critical to identify an agenda for future research. We present a set of challenges for HInt research, formulated over the course of a five-day workshop consisting of 29 experts who have designed, deployed, and studied HInt systems. This agenda aims to guide researchers in a structured way towards a more coordinated and conscientious future of human-computer integration. 2020 · Florian Floyd Mueller et al. · Monash University · Brain-Computer Interface (BCI) & Neurofeedback · Technology Ethics & Critical HCI · User Research Methods (Interviews, Surveys, Observation) · CHI
Assessing 2D and 3D Heatmaps for Comparative Analysis: An Empirical Study. Heatmaps are a popular visualization technique that encode 2D density distributions using color or brightness. However, experimental studies have shown that both of these visual variables are inaccurate when reading and comparing numeric data values. A potential remedy might be to use 3D heatmaps by introducing height as a third dimension to encode the data. Encoding abstract data in 3D, however, poses many problems, too. To better understand this tradeoff, we conducted an empirical study (N=48) to evaluate the user performance of 2D and 3D heatmaps for comparative analysis tasks. We test our conditions on a conventional 2D screen, but also in a virtual reality environment to allow for real stereoscopic vision. Our main results show that 3D heatmaps are superior in terms of error rate when reading and comparing single data items. However, for overview tasks, the well-established 2D heatmap performs better. 2020 · Matthias Kraus et al. · University of Konstanz · Interactive Data Visualization · Visualization Perception & Cognition · CHI
"It's in my other hand!" Studying the Interplay of Interaction Techniques and Multi-Tablet ActivitiesCross-device interaction with tablets is a popular topic in HCI research. Recent work has shown the benefits of including multiple devices into users' workflows while various interaction techniques allow transferring content across devices. However, users are only reluctantly using multiple devices in combination. At the same time, research on cross-device interaction struggles to find a frame of reference to compare techniques or systems. In this paper, we try to address these challenges by studying the interplay of interaction techniques, device utilization, and task-specific activities in a user study with 24 participants from different but complementary angles of evaluation using an abstract task, a sensemaking task, and three interaction techniques. We found that different interaction techniques have a lower influence than expected, that work behaviors and device utilization depend on the task at hand, and that participants value specific aspects of cross-device interaction.2020JZJohannes Zagermann et al.University of KonstanzContext-Aware ComputingUbiquitous ComputingCHI
NurseCare: Design and 'In-The-Wild' Evaluation of a Mobile System to Promote the Ergonomic Transfer of Patients. Nurses are frequently required to transfer patients as part of their daily duties. However, the manual transfer of patients is a major risk factor for injuries to the back. Although the Kinaesthetics Care Conception can help to address this issue, existing support for the integration of the concept into nursing-care practice is low. We present NurseCare, a mobile system that aims to promote the practical application of ergonomic patient transfers based on the Kinaesthetics Care Conception. NurseCare consists of a wearable and a smartphone app. Key features of NurseCare include mobile accessible instructions for ergonomic patient transfers, in-situ feedback for risky bending of the back, and long-term feedback. We evaluated NurseCare in a nine-participant 'in-the-wild' evaluation. Results indicate that NurseCare can facilitate ergonomic work while providing a good user experience adequate to the nurses' work domain, and reveal how NurseCare can be incorporated into existing practices. 2020 · Maximilian Dürr et al. · University of Konstanz · Mental Health Apps & Online Support Communities · Fitness Tracking & Physical Activity Monitoring · Smartwatches & Fitness Bands · CHI