Cluster-Based Approach for Visual Anomaly Detection in Multivariate Welding Process Data Supported by User Guidance
Welding robots are essential in modern manufacturing as they automate hazardous welding tasks, improving productivity and safety while reducing costs. However, a significant portion of total part costs comes from the manual visual inspection and rework of robot-welded seams, underlining the importance of process optimization. Production sites are increasingly digitalized, using systems to track and manage production processes, plan resources, and collect production process data. Utilizing this data, welding engineers face the challenge of analyzing extensive time series data to gain actionable insights. The complexity and volume of the data make it challenging to identify problems, while missing ground truth and labels make unsupervised approaches necessary, such as anomaly detection for short-term issues and clustering for long-term trends. To ensure that our research fits the specific needs of welding engineers, we conducted a design study with subject matter experts from industry. Based on the design study, we introduce a visual analytics approach to support domain experts in analyzing welding data, addressing the challenge of examining multiple time series datasets recorded from different welding robots that produce multiple seams on different components within a production line. The interactive tool integrates advanced visualization techniques in a human-in-the-loop approach to allow domain experts to identify, explore, and interpret anomalies and clusters. It implements directing guidance to help users navigate and focus on meaningful patterns in the data. A pair analytics user study assessed the prototype's capabilities in hypothesis generation and examined how efficiently users could learn and use the system. The study presents examples of findings, demonstrating how domain expert participants utilize the visual analytics tool to reveal patterns, leading to potentially improved decision-making and operational efficiency. We conclude the article with possible future work directions for researchers aiming to refine our tool's capabilities.
2025 | Josef Suschnigg et al. | Interactive Data Visualization; Time-Series & Network Graph Visualization | IUI
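The abstract above does not specify the detection algorithm, so purely as a generic illustration of the kind of unsupervised short-term anomaly flagging it refers to, a rolling z-score over a single sensor channel might look like this (function name, window, and threshold are invented for the sketch):

```python
from statistics import mean, stdev

def flag_anomalies(series, window=20, threshold=3.0):
    """Flag points deviating more than `threshold` standard deviations
    from the mean of the preceding `window` samples."""
    flags = []
    for i, x in enumerate(series):
        if i < window:
            flags.append(False)  # not enough history yet
            continue
        ref = series[i - window:i]
        mu, sigma = mean(ref), stdev(ref)
        flags.append(sigma > 0 and abs(x - mu) > threshold * sigma)
    return flags

# A near-flat signal with one spike at the end: only the spike is flagged.
signal = [1.0, 1.02, 0.98, 1.01] * 10 + [5.0]
print(flag_anomalies(signal)[-1])  # → True
```

In a multivariate setting one such detector would run per channel (or over a joint distance measure), with the flagged spans surfaced to the engineer for inspection rather than acted on automatically.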
CreAItive Collaboration? Users' Misjudgment of AI-Creativity Affects Their Collaborative Performance
How does generative AI affect collaborative creative work and humans' capability to carry it out? We tested 52 participant pairs in a standard creativity test, the Alternate Uses Test. The experimental AI group had access to ChatGPT-4, while the control group did not. The intervention did not lead to an improved performance overall. Further, the AI group elaborated their ideas significantly less. This effect carried over to the unaided post-test, pointing to longer-term effects of AI be(com)ing everyday technology, as how people perform a task with a tool shapes how they (learn to) perform the task without it. Analysis of the human-AI collaboration process revealed that participants were selective in using ChatGPT-4 output for the experimental task, misjudging and falsely assessing its output. This selectivity actually reduced the number of ideas they created and underscores that users need to understand a (generative AI-based) tool's capability for the specific task to support effective performance.
2025 | Mia Magdalena Bangerl et al. | Graz University of Technology, Institute of Interactive Systems and Data Science; Know Center Research GmbH, Area Digital Transformation Design | Generative AI (Text, Image, Music, Video); Human-LLM Collaboration | CHI
Does the Medium Matter? An Exploration of Voice-Interaction for Self-Explanations
This research evaluates voice-based self-explanations as a pedagogical tool in preparation for lectures, assesses user preferences between voice and text, and derives design insights. We report two studies: Study 1, a quasi-experimental field study, with 247 participants divided into voice-based (N=83), text-based (N=81), and choice (N=83) conditions. Study 2 uses semi-structured interviews (N=16) to explore perceptions of the interaction paradigms in depth. Results from the first study revealed a general preference for text, though voice users produced longer responses and more topic-related keywords. Over time, the preference for voice increased among students, from 10% to 46%, when given a choice. Study 2 suggested that factors like social presence contribute to hesitance toward voice-based explanations, with cognitive load, self-confidence, and performance anxiety also influencing medium preferences. Our findings highlight design recommendations and demonstrate the potential of voice-based self-explanations in educational settings, indicating that mixed interfaces might better meet diverse needs.
2024 | Angela Zavaleta Bernuy et al. | Voice User Interface (VUI) Design; Multilingual & Cross-Cultural Voice Interaction; User Research Methods (Interviews, Surveys, Observation) | DIS
Flicker Augmentations: Rapid Brightness Modulation for Real-World Visual Guidance using Augmented Reality
Providing attention guidance, such as assisting in search tasks, is a prominent use for Augmented Reality. Typically, this is achieved by graphically overlaying geometrical shapes such as arrows. However, providing visual guidance can cause side effects such as attention tunnelling or scene occlusions, and introduce additional visual clutter. Alternatively, visual guidance can adjust saliency, but this comes with different challenges such as hardware requirements and environment-dependent parameters. In this work, we advocate for using flicker as an alternative for real-world guidance using Augmented Reality. We provide evidence for the effectiveness of flicker from two user studies. The first compared flicker against alternative approaches in a highly controlled setting, demonstrating efficacy (N = 28). The second investigated flicker in a practical task, demonstrating feasibility with higher ecological validity (N = 20). Finally, our discussion highlights the opportunities and challenges when using flicker to provide real-world visual guidance using Augmented Reality.
2024 | Jonathan Sutton et al. | University of Copenhagen, University of Otago | AR Navigation & Context Awareness | CHI
An Asymmetric Multiplayer Learning Environment for Room-Scale Virtual Reality and a Handheld Device
Many different digital learning environments are currently in use. In combination with virtual reality (VR) technologies, these allow the creation of engaging hands-on experiences. While VR environments can deeply immerse the person wearing the headset, spectators are often not actively involved or are not even considered in the design phase. This is an issue for learning environments, as learning often takes place in pairs or groups. We propose a novel system that enables more than one person to join the VR world in a co-located space to overcome this problem. In addition to the classic VR headset, the asymmetric VR system features a position-tracked tablet. To evaluate this asymmetric VR concept, we conducted a study with 14 students to explore the user experience and motivation, the social presence, and possible further fields of application. The results indicate that users in both perspectives feel that they can control the virtual world.
2023 | Michael Holly et al. | Social & Collaborative VR; Collaborative Learning & Peer Teaching | MobileHCI
Eye-Perspective View Management for Optical See-Through Head-Mounted Displays
Optical see-through (OST) head-mounted displays (HMDs) enable users to experience Augmented Reality (AR) support in the form of helpful real-world annotations. Unfortunately, the blend of the environment with virtual augmentations due to semitransparent OST displays often deteriorates the contrast and legibility of annotations. View management algorithms adapt the annotations' layout to improve legibility based on real-world information, typically captured by built-in HMD cameras. However, the camera views are different from the user's view through the OST display, which decreases the final layout quality. We present eye-perspective view management that synthesizes high-fidelity renderings of the user's view to optimize annotation placement. Our method significantly improves over traditional camera-based view management in terms of annotation placement and legibility. Eye-perspective optimizations open up opportunities for further research on use cases relying on the user's true view through OST HMDs.
2023 | Gerlinde Emsenhuber et al. | Salzburg University of Applied Sciences | AR Navigation & Context Awareness | CHI
Territoriality in Hybrid Collaboration
Hybrid collaboration, where remote and co-located team members work together using different devices and tools, has been trending in recent years (e.g., through globalization and international cooperation) but experienced a further boost since the outbreak of the COVID-19 pandemic. The reason behind this surge in hybrid practices is probably that the crisis revealed aspects of remote collaboration which proved functional and which many decision makers (in industry as well as academia) plan to retain for the future. Thus, hybrid collaboration is an extremely timely topic which should be further studied in the context of CSCW. Territoriality is a major CSCW-anchored concept that has been researched most intensively in co-located collaboration settings, where it is usually inherently related to spatial aspects and proximity. Work on territoriality in fully distributed, remote settings has already shown that there are significant differences due to the characteristics of the scenario. In this paper, we focus on territoriality in hybrid settings, where we identified a significant research gap, and present the results of a user study with 22 teams consisting of four people each (distributed across two locations at two different universities), collaborating on a problem-solving task. Our findings reveal that more dimensions and communication channels, in addition to space, might strongly impact territoriality and territorial behavior in hybrid collaboration. Besides classical spatial territories, auditory territories also frequently emerged. In addition, visibility of and accessibility to certain territories need to be rethought. We discuss these novel findings also regarding their interplay with earlier ones and derive design implications for CSCW systems supporting hybrid collaboration.
2022 | Thomas Neumayr et al. | Remote and Hybrid Collaborations | CSCW
TiiS: Humanized Recommender Systems: State-of-the-Art and Research Issues
2022 | Thi Ngoc Trang Tran | Recommender System UX | IUI
Designing for Knowledge Construction to Facilitate the Uptake of Open Science: Laying out the Design Space
The uptake of open science resources requires knowledge construction on the part of the readers/receivers of scientific content. The design of technologies surrounding open science resources can facilitate such knowledge construction, but this has not been investigated yet. To do so, we first conducted a scoping review of the literature, from which we draw design heuristics for knowledge construction in digital environments. Subsequently, we grouped the underlying technological functionalities into three design categories: i) structuring and supporting collaboration, ii) supporting the learning process, and iii) structuring, visualising and navigating (learning) content. Finally, we mapped the design categories and associated design heuristics to core components of popular open science platforms. This mapping constitutes a design space (design implications), which informs researchers and designers in the HCI community about suitable functionalities for supporting knowledge construction in existing or new digital open science platforms.
2022 | Leonie Disch et al. | Know-Center GmbH Research Center for Data-Driven Business & Big Data Analytics | User Research Methods (Interviews, Surveys, Observation); Research Ethics & Open Science | CHI
Designing a Personalised Sensor Glove Using Deep-Learning
When designing a smart glove for gesture recognition, the set of sensors available and their layout on the glove are crucial. However, once a computational model reaches acceptable recognition accuracy, it is often not clear which sensors are more important for the task, nor whether some sensors can be strategically removed while retaining similar performance in order to save cost. Furthermore, when aiming for a personalised setup, there can be minor deviations in how gestures are performed by each participant, and so the importance of a sensor may vary between participants. In this paper, we use explainable AI to explore whether a personalised glove can be produced, and whether the set of significant sensors persists between users. We present a deep learning algorithm which utilises a layer of weights to estimate the importance of each sensor in relation to the others. Besides estimating importance in relation to recognition accuracy, we demonstrate how the importance estimates can be extended to take into account factors external to the computational model, such as costs. This allows for a cost-effective elimination of sensors to reduce hardware redundancy whilst having a controlled impact on performance. We provide two methods: generic and specific. The generic method exploits the importance estimates from all participants to select a set of sensors for removal, whereas the specific method estimates importance and removes sensors on a per-individual basis to provide a personalised setup.
2021 | Jeremy Chan et al. | Haptic Wearables; Hand Gesture Recognition; Generative AI (Text, Image, Music, Video) | IUI
Grand Challenges in Immersive Analytics
Immersive Analytics is a quickly evolving field that unites several areas such as visualisation, immersive environments, and human-computer interaction to support human data analysis with emerging technologies. This research has thrived over the past years with multiple workshops, seminars, and a growing body of publications, spanning several conferences. Given the rapid advancement of interaction technologies and novel application domains, this paper aims toward a broader research agenda to enable widespread adoption. We present 17 key research challenges developed over multiple sessions by a diverse group of 24 international experts, initiated from a virtual scientific workshop at ACM CHI 2020. These challenges aim to coordinate future work by providing a systematic roadmap of current directions and impending hurdles to facilitate productive and effective applications for Immersive Analytics.
2021 | Barrett Ens et al. | Monash University | Immersion & Presence Research; Interactive Data Visualization | CHI
Mixed Reality Light Fields for Interactive Remote Assistance
Remote assistance represents an important use case for mixed reality. With the rise of handheld and wearable devices, remote assistance has become practical in the wild. However, spontaneous provisioning of remote assistance requires an easy, fast, and robust approach for capturing and sharing unprepared environments. In this work, we make a case for utilizing interactive light fields for remote assistance. We demonstrate the advantages of object representation using light fields over conventional geometric reconstruction. Moreover, we introduce an interaction method for quickly annotating light fields in 3D space without requiring surface geometry to anchor annotations. We present results from a user study demonstrating the effectiveness of our interaction techniques, and we provide feedback on the usability of our overall system.
2020 | Peter Mohr et al. | Graz University of Technology & VRVis GmbH | Mixed Reality Workspaces; Teleoperation & Telepresence | CHI
Optimising Encoding for Vibrotactile Skin Reading
This paper proposes methods of optimising alphabet encoding for skin reading in order to avoid perception errors. First, a user study with 16 participants using two body locations serves to identify issues in recognition of both individual letters and words. To avoid such issues, a two-step optimisation method of the symbol encoding is proposed and validated in a second user study with eight participants using the optimised encoding with a seven-vibromotor wearable layout on the back of the hand. The results show significant improvements in the recognition accuracy of letters (97%) and words (97%) when compared to the non-optimised encoding.
2019 | Granit Luzhnica et al. | Know Center | Vibrotactile Feedback & Skin Stimulation | CHI
TrackCap: Enabling Smartphones for 3D Interaction on Mobile Head-Mounted Displays
The latest generation of consumer-market head-mounted displays (HMDs) now includes self-contained inside-out tracking of head motions, which makes them suitable for mobile applications. However, 3D tracking of input devices is either not included at all or requires keeping the device in sight, so that it can be observed from a sensor mounted on the HMD. Both approaches make natural interactions cumbersome in mobile applications. TrackCap, a novel approach for 3D tracking of input devices, turns a conventional smartphone into a precise 6DOF input device for an HMD user. The device can be conveniently operated both inside and outside the HMD's field of view, while it provides additional 2D input and output capabilities.
2019 | Peter Mohr et al. | Graz University of Technology & VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH | Hand Gesture Recognition; Mixed Reality Workspaces | CHI
“I think people are powerful”: The sociality of individuals managing depression
Millions of Americans struggle with depression, a condition characterized by feelings of sadness and motivation loss. To understand how individuals managing depression conceptualize their self-management activities, we conducted visual elicitations and semi-structured interviews with 30 participants managing depression in a large city in the U.S. Midwest. Many depression support tools are focused on the individual user and do not often incorporate social features. However, our analysis showed the key importance of sociality for self-management of depression. We describe how individuals connect with specific others to achieve expected support and how these interactions are mediated through locations and communication channels. We discuss factors influencing participants' sociality including relationship roles and expectations, mood state and communication channels, location and privacy, and culture and society. We broaden our understanding of sociality in CSCW through discussing diffuse sociality (being proximate to others but not interacting directly) as an important activity to support depression self-management.
2019 | Eleanor R. Burgess et al. | Health | CSCW
Evaluating Narrative-Driven Movie Recommendations on Reddit
Recommender systems have become omnipresent tools that are used by a wide variety of users in everyday life tasks, such as finding products in Web stores or online movie streaming portals. However, in situations where users already have an idea of what they are looking for (e.g., "The Lord of the Rings, but in space with a dark vibe"), most traditional recommender algorithms struggle to adequately address such a priori defined requirements. Therefore, users have built dedicated discussion boards to ask peers for suggestions, which ideally fulfill the stated requirements. In this paper, we set out to determine the utility of well-established recommender algorithms for calculating recommendations when provided with such a narrative. To that end, we first crowdsource a reference evaluation dataset from human movie suggestions. We use this dataset to evaluate the potential of five recommendation algorithms for incorporating such a narrative into their recommendations. Further, we make the dataset available for other researchers to advance the state of research in the field of narrative-driven recommendations. Finally, we use our evaluation dataset to improve not only our algorithmic recommendations, but also existing empirical recommendations of IMDb. Our findings suggest that the implemented recommender algorithms yield vastly different suggestions than humans when presented with the same a priori requirements. However, with carefully configured post-filtering techniques, we can outperform the baseline by up to 100%. This represents an important first step towards more refined algorithmic narrative-driven recommendations.
2019 | Lukas Eberhard et al. | Recommender System UX | IUI
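The post-filtering configuration used in the paper is not detailed in this abstract; a minimal, hypothetical sketch of the general idea, re-ranking a recommender's candidate list by how many a priori narrative requirements (here modeled as tags) each item satisfies, could look like this (field names and the boost scheme are invented for illustration):

```python
def post_filter(ranked_items, required_tags, boost=0.5):
    """Re-rank recommender output: each matched narrative tag adds
    `boost` to the item's base relevance score (hypothetical scheme)."""
    def score(item):
        matched = len(required_tags & item["tags"])
        return item["score"] + boost * matched
    return sorted(ranked_items, key=score, reverse=True)

candidates = [
    {"title": "Movie A", "score": 0.9, "tags": {"fantasy"}},
    {"title": "Movie B", "score": 0.7, "tags": {"space", "dark"}},
]
top = post_filter(candidates, required_tags={"space", "dark"})
print(top[0]["title"])  # → Movie B
```

The design point is that the base recommender stays untouched: the narrative requirements are applied as a cheap re-ranking step on top of its output.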
What's in a Review: Discrepancies Between Expert and Amateur Reviews of Video Games on Metacritic
As video game press ("experts") and casual gamers ("amateurs") have different motivations when writing video game reviews, discrepancies in their reviews may arise. To study such potential discrepancies, we conduct a large-scale investigation of more than 1 million reviews on the Metacritic review platform. In particular, we assess the existence and nature of discrepancies in video game appraisal by experts and amateurs, and how they manifest in ratings, over time, and in review language. Leveraging these insights, we explore the predictive power of early expert vs. amateur reviews in forecasting video game reputation in the short- and long-term. We find that amateurs, in contrast to experts, give more polarized ratings of video games, rate games surprisingly long after game release, and are positively biased towards older games. On a textual level, we observe that experts write rather complex, less readable texts than amateurs, whose reviews are more emotionally charged. While in the short-term amateur reviews are remarkably predictive of game reputation among other amateurs (achieving 91% ROC AUC in a binary classification), both expert and amateur reviews are equally well suited for long-term predictions. Overall, our work is the first large-scale comparative study of video game reviewing behavior, with practical implications for amateurs when deciding which games to play, and for game developers when planning which games to design, develop, or continuously support. More broadly, our work contributes to the discussion of wisdom of the few vs. wisdom of the crowds, as we uncover the limits of experts in capturing the views of amateurs in the particular context of video game reviews.
2019 | Tiago Santos et al. | Expert Work | CSCW
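The 91% figure above is a ROC AUC score, a standard classifier metric independent of this paper's model. As a reminder of what it measures, AUC equals the probability that a randomly chosen positive example is ranked above a randomly chosen negative one, and can be computed from score ranks alone (a small self-contained sketch, not the paper's evaluation code):

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U rank statistic: the probability
    that a random positive outranks a random negative (ties averaged)."""
    pairs = sorted(zip(scores, labels))
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    # Assign average ranks to tied scores.
    ranks = {}
    i = 0
    while i < len(pairs):
        j = i
        while j < len(pairs) and pairs[j][0] == pairs[i][0]:
            j += 1
        avg = (i + 1 + j) / 2  # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    rank_sum = sum(r for k, r in ranks.items() if pairs[k][1] == 1)
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

A score of 0.5 corresponds to random ranking and 1.0 to perfect separation, so 91% indicates a strongly predictive signal in early amateur reviews.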
Investigating Interactions for Text Recognition using a Vibrotactile Wearable Display
Vibrotactile skin-reading uses wearable vibrotactile displays to convey dynamically generated textual information. Such wearable displays have the potential to be used in a broad range of applications. Nevertheless, the reading process is passive, and users have no control over the reading flow. To compensate for such a drawback, this paper investigates what kinds of interactions are necessary for vibrotactile skin reading and the modalities of such interactions. An interaction concept for skin reading was designed taking reading as a process into account. We performed a formative study with 22 participants to assess reading behaviour in word and sentence reading using a six-channel wearable vibrotactile display. Our study shows that word-based interactions in sentence reading are used more often and preferred by users compared to character-based interactions, and that users prefer gesture-based interaction for skin reading. Finally, we discuss how such wearable vibrotactile displays could be extended with sensors that would enable recognition of such gesture-based interaction. This paper contributes a set of guidelines for the design of wearable haptic displays for text communication.
2018 | Granit Luzhnica et al. | Vibrotactile Feedback & Skin Stimulation; Haptic Wearables; Hand Gesture Recognition | IUI