Pathways of Desire: Enhancing Navigation and Sense of Community Through Player-Generated Desire Paths
Michael Lankes et al. (University of Applied Sciences Upper Austria, Department of Digital Media). CHI 2025.
Navigation is essential in many video games. However, previous work suggests that many games still suffer from navigational problems that decrease enjoyment. In this paper, we focus on "desire paths": informal trails collectively created by pedestrians that represent the most convenient route. While desire paths are known to be useful wayfinding aids, it is unclear how they affect navigation and experience in games. We therefore investigated diegetically visualized player-trajectory data in a 2D game through virtual footprints that remained persistently visible to all subsequent players. Through a mixed-methods study involving 50 participants, we found that virtual footprints improved navigation by guiding players to points of interest and reducing disorientation for early players. However, visual clutter from excessive footprints reduced their effectiveness in later stages. The footprints also fostered a sense of community, especially for late-stage players, and prompted exploration of yet-undiscovered areas. We further discuss design implications and future research directions.
Topics: Gamification Design; Multiplayer & Social Games.

Cognitive Integration of Delays: Anticipated System Delays Slow Down User Actions
Johanna Bogon et al. (University of Regensburg). CHI 2025.
There are inevitably delays between user actions and system responses, which can increase task completion times. However, it remains unclear whether this is solely due to waiting times and compensation strategies, or whether users further slow down their actions because these delays become integrated into their cognitive action structures, as suggested by cognitive psychological theories. To explore this, we examined the effects of repeated exposure to delays during point-and-click tasks. Our findings demonstrate that longer system response delays significantly slow down users' actions, even before they experience the delayed feedback from the current input. This suggests that the user's cognitive system anticipates delays based on previous interactions and adjusts actions accordingly. These results emphasize the importance of minimizing systematic delays to maintain optimal user performance and highlight the potential for system properties to become embedded in users' cognitive action structures.
Topics: Visualization Perception & Cognition; Privacy by Design & User Control.

Investigating the Impact of Customized Avatars and the Proteus Effect during Physical Exercise in Virtual Reality
Martin Kocur et al. (University of Applied Sciences Upper Austria). CHI 2025.
Virtual reality (VR) allows users to embody avatars. According to the Proteus effect, an avatar's visual appearance can influence users' behavior and perception. Recent work suggests that athletic avatars decrease perceptual and physiological responses during VR exercise. However, such effects can fail to occur when users do not experience avatar ownership and identification. While customized avatars increase body ownership and identification, it is unclear whether they strengthen the Proteus effect. We conducted a study with 24 participants to determine the effects of athletic and non-athletic avatars that were either customized or randomly assigned. We developed a customization editor that allows users to create customized avatars. We found that customized avatars reduced perceived exertion. We also found that athletic avatars decreased heart rate while holding weights, albeit only when customized. The results indicate that customized avatars can positively influence users during physical exertion. We discuss the use of avatar customization in VR exercise systems.
Topics: Identity & Avatars in XR; Fitness Tracking & Physical Activity Monitoring.

MobileGravity: Mobile Simulation of a High Range of Weight in Virtual Reality
Alexander Kalus et al. (University of Regensburg). CHI 2024.
Simulating accurate weight forces in Virtual Reality (VR) is an unsolved challenge. Providing real weight sensations by transferring liquid mass has therefore emerged as a promising approach. However, key design objectives conceptually interfere with each other: in particular, previous designs that support a high range of weight or a high flow rate lack mobility. In this work, we present MobileGravity, a system that decouples the weight-changing object from the liquid supply and the pump. It enables weight changes of up to 1 kg at a rate of 235 g/s and allows the user to walk around freely. Through a study with 30 participants, we show that the system enables users to perceive the weight of different virtual objects and enhances realism as well as enjoyment.
Topics: Force Feedback & Pseudo-Haptic Weight.

Towards Cross-Content Conversational Agents for Behaviour Change: Investigating Domain Independence and the Role of Lexical Features in Written Language Around Change
Selina Meyer et al. CUI 2023.
Valuable insights into an individual's current thoughts and stance regarding behaviour change can be obtained by analysing the language they use, which can be conceptualized using Motivational Interviewing (MI) concepts. Training conversational agents (CAs) to detect and employ these concepts could help them provide more personalized and effective assistance. This study investigates the similarity of written language around behaviour change across diverse conversational and social contexts and change objectives. Drawing on previous research that applied MI concepts to texts about health behaviour change, we evaluate the performance of existing classifiers on six newly constructed datasets from diverse contexts. To gain insight into the factors that determine the identification of change language, we explore the impact of lexical features on classification. The results suggest that patterns of change language remain stable across contexts and domains, leading us to conclude that peer-to-peer online data may be sufficient to train CAs to understand user utterances related to behaviour change.
Topics: Conversational Chatbots; Human-LLM Collaboration; Mental Health Apps & Online Support Communities.

The Effects of Avatar and Environment on Thermal Perception and Skin Temperature in Virtual Reality
Martin Kocur et al. (University of Regensburg). CHI 2023.
Humans' thermal regulation and subjective perception of temperature are highly plastic and depend on the visual appearance of the surrounding environment. Previous work shows that an environment's color temperature affects the experienced temperature. As virtual reality (VR) enables visual immersion, recent work suggests that a VR scene's color temperature also affects experienced temperature. It is, however, unclear whether an avatar's appearance also affects users' thermal perception and whether a change in thermal perception even influences body temperature. Therefore, we conducted a study with 32 participants performing a task in an ice or fire world while having ice or fire hands. We show that being in a fire world or having fire hands increases the perceived temperature. We even show that having fire hands decreases the hand temperature compared to having ice hands. We discuss the implications for the design of VR systems and future research directions.
Topics: Immersion & Presence Research; Identity & Avatars in XR.

PumpVR: Rendering Weight of Objects and Avatars through Liquid Mass Transfer in Virtual Reality
Alexander Kalus et al. (University of Regensburg). CHI 2023.
Perceiving the weight of objects and avatars in Virtual Reality (VR) is important for understanding their properties and interacting with them naturally. However, commercial VR controllers cannot render weight. Controllers presented in previous work are single-handed, slow, or only render a small mass. In this paper, we present PumpVR, which renders weight by varying the controllers' mass according to the properties of virtual objects or bodies. Using a bi-directional pump and solenoid valves, the system changes the controllers' absolute weight by transferring water in or out with an average error of less than 5%. We implemented VR use cases with objects and avatars of different weights to compare the system with standard controllers. A study with 24 participants revealed significantly higher realism and enjoyment when using PumpVR to interact with virtual objects. Using the system to render body weight had significant effects on virtual embodiment, perceived exertion, and self-perceived fitness.
Topics: Force Feedback & Pseudo-Haptic Weight; Shape-Changing Interfaces & Soft Robotic Materials.

Train as you Fight: Evaluating Authentic Cybersecurity Training in Cyber Ranges
Magdalena Glas et al. (University of Regensburg). CHI 2023.
Humans can play a decisive role in detecting and mitigating cyber attacks if they possess sufficient cybersecurity skills and knowledge. Realizing this potential requires effective cybersecurity training. Cyber range exercises (CRXs) represent a novel form of cybersecurity training in which trainees can experience realistic cyber attacks in authentic environments. Although evaluation is undeniably essential for any learning environment, it has been widely neglected in CRX research. Addressing this issue, we propose a taxonomy-based framework to facilitate a comprehensive and structured evaluation of CRXs. To demonstrate the applicability and potential of the framework, we instantiate it to evaluate Iceberg CRX, a training we recently developed to improve cybersecurity education at our university. To this end, we conducted a user study with 50 students to identify the strengths and weaknesses of the CRX.
Topics: Cybersecurity Training & Awareness.

Automating Contextual Privacy Policies: Design and Evaluation of a Production Tool for Digital Consumer Privacy Awareness
Maximiliane Windl et al. (LMU Munich). CHI 2022.
Users avoid engaging with privacy policies because they are lengthy and complex, making it challenging to retrieve relevant information. In response, research proposed contextual privacy policies (CPPs) that embed relevant privacy information directly into their affiliated contexts. To date, CPPs have been limited to concept showcases. This work evolves CPPs into a production tool that automatically extracts and displays concise policy information. We first evaluated the technical functionality on the 500 most-visited US websites with 59 participants. Based on our results, we revised the tool and deployed it in the wild with 11 participants over ten days. We found that our tool is effective at embedding CPP information on websites. Moreover, we found that using the tool led to more reflective privacy behavior, making CPPs powerful in helping users understand the consequences of their online activities. We contribute design implications around CPP presentation to inform future systems design.
Topics: Privacy by Design & User Control; Privacy Perception & Decision-Making.

Physiological and Perceptual Responses to Athletic Avatars while Cycling in Virtual Reality
Martin Kocur et al. (University of Regensburg). CHI 2021.
Avatars in virtual reality (VR) enable embodied experiences and induce the Proteus effect: a shift in behavior and attitude to mimic one's digital representation. Previous work found that avatars associated with physical strength can decrease users' perceived exertion when performing physical tasks. However, it is unknown whether an avatar's appearance can also influence the user's physiological response to exercise. Therefore, we conducted an experiment with 24 participants to investigate the effect of avatar athleticism on heart rate and perceived exertion while cycling in VR following a standardized protocol. We found that avatar athleticism has a significant and systematic effect on users' heart rate and perceived exertion. We discuss potential moderators such as body ownership and users' level of fitness. Our work contributes to the emerging area of VR exercise systems.
Topics: Motion Sickness & Passenger Experience; Full-Body Interaction & Embodied Input.

Reading in VR: The Effect of Text Presentation Type and Location
Rufat Rzayev et al. (University of Regensburg). CHI 2021.
Reading is a fundamental activity for obtaining information in both the real and the digital world. Virtual reality (VR) allows novel approaches for users to view, read, and interact with text. However, for efficient reading, it is necessary to understand how text should be displayed in VR without impairing the VR experience. Therefore, we conducted a study with 18 participants to investigate text presentation type and location in VR. We compared world-fixed, edge-fixed, and head-fixed text locations. Texts were displayed using Rapid Serial Visual Presentation (RSVP) or as a paragraph. We found that RSVP is a promising presentation type for reading short texts displayed in an edge-fixed or head-fixed location in VR. The paragraph presentation type using a world-fixed or edge-fixed location is promising for reading long texts if movement in the virtual environment is not required. Insights from our study inform the design of reading interfaces for VR applications.
Topics: Immersion & Presence Research; Interactive Data Visualization; Visualization Perception & Cognition.

Implementation and In Situ Assessment of Contextual Privacy Policies
Anna-Marie Ortloff et al. DIS 2020.
Online services collect an increasing amount of data about their users. Privacy policies are currently the only common way to inform users about the kinds of data collected, stored, and processed by online services. Previous work showed that users do not read and understand privacy policies due to their length, difficult language, and often non-prominent location. Embedding privacy-relevant information directly in the context of use could help users understand the privacy implications of using online services. We implemented Contextual Privacy Policies (CPPs) as a browser extension and provide it to the community to make privacy information accessible to end users. We evaluated CPPs through a one-week deployment and in situ questionnaires as well as pre- and post-study interviews. We found that CPPs were well received by participants. The analysis revealed that the provided information should be as compact as possible, be adjusted to user groups, and enable users to take action.
Topics: Privacy by Design & User Control; Privacy Perception & Decision-Making; IoT Device Privacy.

Improving Humans' Ability to Interpret Deictic Gestures in Virtual Reality
Sven Mayer et al. (Carnegie Mellon University & University of Stuttgart). CHI 2020.
Collaborative Virtual Environments (CVEs) offer unique opportunities for human communication. Humans can interact with each other over a distance in any environment and with any visual embodiment they want. Although deictic gestures are especially important because they can guide other humans' attention, humans make systematic errors when using and interpreting them. Recent work suggests that the interpretation of vertical deictic gestures can be significantly improved by warping the pointing arm. In this paper, we extend previous work by showing that such models can also improve the interpretation of deictic gestures at targets all around the user. Through a study with 28 participants in a CVE, we analyzed the errors users make when interpreting deictic gestures. We derived a model that rotates the arm of a pointing user's avatar to improve the observing users' accuracy. A second study with 24 participants shows that we can improve observers' accuracy by 22.9%. As our approach is not noticeable to users, it improves their accuracy without requiring them to learn a new interaction technique and without distracting from the experience.
Topics: Social & Collaborative VR; Immersion & Presence Research.

Effect of Orientation on Unistroke Touch Gestures
Sven Mayer et al. (University of Stuttgart). CHI 2019.
As touchscreens are the most successful input method of current mobile devices, touch gestures have become a widely used input technique. While gestures allow users to express themselves, they also introduce challenges regarding accuracy and memorability. In this paper, we investigate the effect of a gesture's orientation on how well the gesture can be performed. We conducted a study in which participants performed systematically rotated unistroke gestures. For straight lines as well as compound lines, we found that users tend to align gestures with the primary axes. We show that the error can be described by a Clausen function with R² = .93. Based on our findings, we suggest design implications and highlight the potential for recognizing flick gestures, visualizing gestures, and improving the recognition of compound gestures.
Topics: Hand Gesture Recognition; Full-Body Interaction & Embodied Input.

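The Clausen function used in this model has a simple series definition, Cl₂(θ) = Σ_{k≥1} sin(kθ)/k². A minimal sketch of evaluating it numerically follows; the truncation depth is our choice for illustration, not a detail from the paper:

```python
import math

def clausen_cl2(theta: float, terms: int = 100_000) -> float:
    """Clausen function Cl2(theta) = sum_{k>=1} sin(k*theta) / k**2.

    Truncated series; the tail is bounded by roughly 1/terms,
    since each summand is at most 1/k**2 in magnitude.
    """
    return sum(math.sin(k * theta) / k**2 for k in range(1, terms + 1))
```

As a sanity check, Cl₂(π/2) equals Catalan's constant (≈ 0.9160), and Cl₂(0) = 0.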
Using Presence Questionnaires in Virtual Reality
Valentin Schwind et al. (University of Stuttgart). CHI 2019.
Virtual Reality (VR) is gaining increasing importance in science, education, and entertainment. A fundamental characteristic of VR is creating presence: the experience of 'being' or 'acting' while physically situated in another place. Measuring presence is vital for VR research and development. It is typically assessed repeatedly through questionnaires completed after leaving a VR scene. Requiring participants to leave and re-enter VR costs time and can cause disorientation. In this paper, we investigate the effect of completing presence questionnaires directly in VR. Thirty-six participants experienced two immersion levels and completed three standardized presence questionnaires either in the real world or in VR. We found no effect on the questionnaires' mean scores; however, we found that the variance of those measures significantly depends on the realism of the virtual scene and on whether participants had left VR. The results indicate that, besides shortening a study and reducing disorientation, completing questionnaires in VR does not change the measured presence but can increase the consistency of the measures.
Topics: Immersion & Presence Research.

Investigating the Feasibility of Finger Identification on Capacitive Touchscreens using Deep Learning
Huy Viet Le et al. IUI 2019.
Touchscreens enable intuitive mobile interaction. However, touch input is limited to 2D touch locations, which makes it challenging to provide shortcuts and secondary actions similar to hardware keyboards and mice. Previous work presented a wide range of approaches to provide secondary actions by identifying which finger touched the display. While those approaches rely on inconvenient external sensors, we use capacitive images from mobile touchscreens to investigate the feasibility of finger identification. We collected a dataset of low-resolution fingerprints and trained convolutional neural networks that classify touches from eight combinations of fingers. We focused on combinations involving the thumb and index finger, as these are mainly used for interaction. We achieved an accuracy of over 92% for a position-invariant differentiation between left and right thumbs. We evaluated the model in two use cases, which users found useful and intuitive. We publicly share our dataset (CapFingerId), comprising 455,709 capacitive images of touches from each finger on a representative mutual-capacitance touchscreen, and our models to enable future work using and improving them.
Topics: Force Feedback & Pseudo-Haptic Weight; Hand Gesture Recognition.

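The pipeline described above (a convolutional network classifying low-resolution capacitive images) can be illustrated with a toy forward pass. The patch size (16×16), filter count, and layer shapes below are illustrative assumptions with random weights, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

def conv2d(img, kernels):
    """Valid 2D convolution: (H, W) image, (F, kH, kW) kernels -> (F, H-kH+1, W-kW+1)."""
    F, kh, kw = kernels.shape
    H, W = img.shape
    out = np.zeros((F, H - kh + 1, W - kw + 1))
    for f in range(F):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[f, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[f])
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling: (F, H, W) -> (F, H//size, W//size)."""
    F, H, W = x.shape
    x = x[:, :H - H % size, :W - W % size]
    return x.reshape(F, H // size, size, W // size, size).max(axis=(2, 4))

def classify(img, kernels, weights):
    feats = np.maximum(conv2d(img, kernels), 0.0)  # ReLU activation
    pooled = max_pool(feats)
    logits = pooled.reshape(-1) @ weights          # dense layer to 8 classes
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                         # softmax probabilities

# hypothetical 16x16 capacitive patch, 4 random 3x3 filters, untrained weights
img = rng.standard_normal((16, 16))
kernels = rng.standard_normal((4, 3, 3))
weights = rng.standard_normal((4 * 7 * 7, 8)) * 0.1
probs = classify(img, kernels, weights)  # probability over 8 finger-combination classes
```

In practice the weights would come from training on the labeled capacitive-image dataset; this sketch only shows how an image flows through convolution, pooling, and a dense softmax layer to an 8-class prediction.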
Online, VR, AR, Lab, and In-Situ: Comparison of Research Methods to Evaluate Smart Artifacts
Alexandra Voit et al. (University of Stuttgart). CHI 2019.
Empirical studies are a cornerstone of HCI research. Technical progress constantly enables new study methods. Online surveys, for example, make it possible to collect feedback from remote users. Progress in augmented and virtual reality makes it possible to collect feedback on early designs. In-situ studies enable researchers to gather feedback in natural environments. While these methods have unique advantages and disadvantages, it is unclear if and how using a specific method affects the results. Therefore, we conducted a study with 60 participants comparing five methods (online, virtual reality, augmented reality, lab setup, and in-situ) to evaluate early prototypes of smart artifacts. We asked participants to assess four smart artifacts using standardized questionnaires. We show that the method significantly affects the study results and discuss implications for HCI research. Finally, we highlight directions for overcoming the effects of the chosen method.
Topics: User Research Methods (Interviews, Surveys, Observation); Field Studies.

Investigating the Effect of Orientation and Visual Style on Touchscreen Slider Performance
Ashley Colley et al. (University of Lapland). CHI 2019.
Sliders are one of the most fundamental components of touchscreen user interfaces (UIs). When entering data using a slider, errors occur due to, e.g., visual perception, resulting in inputs that do not match what the user intended. However, it is unclear whether these errors occur uniformly across the full range of the slider or whether there are systematic offsets. We conducted a study to assess the errors occurring when entering values with horizontal and vertical sliders as well as two common visual styles. Our results reveal significant effects of slider orientation and style on the precision of the entered values. Furthermore, we identify systematic offsets that depend on the visual style and the target value. As the errors are partially systematic, they can be compensated for to improve users' precision. Our findings provide UI designers with data to optimize user experiences in the wide variety of application areas where slider-based touchscreen input is used.
Topics: 360° Video & Panoramic Content; Prototyping & User Testing.

Exploratory Analysis of the Research Literature on Evaluation of In-Vehicle Systems
Lukas Lamm et al. AutoUI 2019.
An exploratory literature review method was applied to publications from several sources on Human-Computer Interaction (HCI) for In-Vehicle Information Systems (IVIS). The novel approach to bibliographic classification uses a graph database to investigate connections between authors, papers, used methods, and investigated interface types. This allows the application of algorithms to find similarities between publications and overlaps between usability evaluation methods. Through community detection algorithms, the publications can be clustered based on similarity relationships. For the proposed approach, several thousand papers were systematically filtered, classified, and stored in a graph database. The survey shows a trend toward usability assessment methods with direct involvement of users, especially observation of users and performance-related measurements, as well as questionnaires and interviews. However, methods usually applied in early stages of development that rely on assessment through models or experts, as well as collaborative and creativity methods, do not seem very popular in automotive HCI research.
Topics: User Research Methods (Interviews, Surveys, Observation); Prototyping & User Testing.

The Mental Image Revealed by Gaze Tracking
Xi Wang et al. (Technische Universität Berlin). CHI 2019.
Humans involuntarily move their eyes when retrieving an image from memory. This motion is often similar to that of actually observing the image. We suggest exploiting this behavior as a new modality in human-computer interaction, using the motion of the eyes as a descriptor of the image. Interaction requires the user's eyes to be tracked but no voluntary physical activity. We performed a controlled experiment and developed matching techniques using machine learning to investigate whether images can be discriminated based on the gaze patterns recorded while users merely think about an image. Our results indicate that image retrieval is possible with an accuracy significantly above chance. We also show that this result generalizes to images not used during training of the classifier and extends to uncontrolled settings in a realistic scenario.
Topics: Eye Tracking & Gaze Interaction; Human Pose & Activity Recognition.

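The matching techniques in this work are learned from data. As a simple illustration of the underlying idea of comparing gaze scanpaths as point sequences, here is a dynamic-time-warping distance; DTW is a standard sequence-comparison technique, not necessarily the authors' method:

```python
import math

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two sequences of (x, y) gaze points.

    Aligns the sequences non-linearly in time and sums the Euclidean
    distances between matched fixations.
    """
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])  # distance between fixation points
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# identical scanpaths have zero distance; a shifted copy has a positive one
path_a = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
path_b = [(0.5, 0.0), (1.5, 0.0), (1.5, 1.0)]
```

A nearest-neighbor retrieval over such distances would then pick, for a recalled scanpath, the stored image whose viewing scanpath is closest.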