Simulating Multiple Road User Perspectives on Autonomous Vehicle Behaviors
This work presents a system and a study in which we have multiple road users interact simultaneously with an autonomous vehicle (AV) in a virtual reality (VR) environment. We go beyond studying dyadic interactions (e.g., AV-pedestrian or AV-driver) to involve a pedestrian, a human driver, and an AV passenger all jointly interacting with an AV in the same VR scenario. We probed multiple user perspectives with two different prototypes of AV behavior strategies in ambiguous stop-sign intersections. An efficient AV attempts to enter the intersection as soon as it can without collision, while a prosocial AV waits for other road users to pass before proceeding. We recruited 16 three-person groups (N=48), where half interacted with the first AV type and the other half interacted with the second AV type in four different traffic configurations. Our investigation demonstrates that road users in different roles can have diverging preferences and trust levels in the same AV behavior when making joint decisions. Finally, we discuss how our methods and findings can be used to guide further explorations for AV interaction research with multiple agents in different roles.
2025 · JiHyun Jeong et al. · AutoUI · Topics: Automated Driving Interface & Takeover Design; External HMI (eHMI) — Communication with Pedestrians & Cyclists; Teleoperated Driving
Shocking Realities: VR Horror Games as a Tool for Raising Wheelchair Accessibility Awareness
Wheelchair users face numerous challenges in their daily lives, such as narrow hallways, stairs, or misplaced elevator buttons. While accessible design is important for addressing these issues in the long term, societal understanding and sensitization remain crucial for overcoming these daily obstacles. Our research aims to foster such awareness through virtual reality game design. In contrast to prior works, we intentionally avoid the didactic style typical of serious or persuasive games. Instead, we embed common wheelchair challenges within naturally fitting horror game mechanics. Insights from our playtest (N = 11) highlight the synergy between horror elements (e.g., loss of control, slowdown) and manual wheelchair locomotion challenges. Most importantly, this approach helped participants recognize the daily struggles faced by wheelchair users, demonstrating the potential of our method in promoting a more inclusive and understanding society.
2025 · Andrey Krekhov et al. · DIS · Topics: Accessible Gaming; Universal & Inclusive Design; Serious & Functional Games
Situated Artifacts Amplify Engagement in Physical Activity
In the context of rising sedentary lifestyles, this paper investigates the efficacy of "Situated Artifacts" in promoting physical activity. We designed two artifacts that display users' physical activity data within their homes - one physical and one digital. We conducted a 9-week, counterbalanced, within-subject field study with N=24 participants to assess the impact of these artifacts on physical activity, reflection, and motivation. We collected quantitative data on physical activity, administered daily and weekly questionnaires employing individual Likert items and standardized instruments, and conducted interviews after prototype usage. Our findings indicate that while both artifacts act as reminders for physical activity, the physical artifact was superior in terms of user engagement. The study revealed that this can be attributed to the higher perceived presence and, thereby, enhanced social interaction, which acts as a motivational source for activity. In this sense, situated artifacts gently nudge toward sustainable health behavior change.
2025 · Jonas Keppel et al. · DIS · Topics: Fitness Tracking & Physical Activity Monitoring; Sleep & Stress Monitoring
How To Draw Commands? An Elicitation Study for Sketching on Spreadsheets
Sketching is one of the oldest techniques humans use to express themselves. We sketch to visualize concepts, externalize memory, and communicate ideas. However, we barely use sketching to interact with computers. Given how naturally sketching comes to humans, we believe untapped potential exists in being able to simply draw commands onto a user interface. In this paper, we present results of an elicitation study about expressing common operations in spreadsheets through sketching. Spreadsheets are an interesting class of applications because they are widely used, support complex data and operations, and are available on touch-enabled devices. Our results show that despite considerable variation in syntactic details, participants gravitate towards recurring patterns (e.g., enclosures and arrows, examples and cross-references, and temporal sequences of strokes). The sketch patterns we identified can be a first step towards developing interpreters of sketched commands, and thus enable new means of interacting with spreadsheets and other applications.
2025 · Marc Hesenius et al. · University of Duisburg-Essen · CHI · Topics: Prototyping & User Testing; Computational Methods in HCI
A Pandemic for the Good of Digital Literacy? An Empirical Investigation of Newly Improved Digital Skills during COVID-19 Lockdowns
This research explores whether the rapid digital transformation due to COVID-19 managed to close or exacerbate the digital divide concerning users' digital skills. We conducted a pre-registered survey with N = 1,143 German Internet users. Our findings suggest the latter: younger, male, and higher educated users were more likely to improve their digital skills than older, female, and less educated ones. According to their accounts, the pandemic helped Internet users improve their skills in communicating with others by using video conference software and reflecting critically upon information they found online. These improved digital skills exacerbated not only positive (e.g., feeling informed and safe) but also negative (e.g., feeling lonely) effects of digital media use during the pandemic. We discuss this research's theoretical and practical implications regarding the impact of challenges, such as technological disruption and health crises, on humans' digital skills, capabilities, and future potential, focusing on the second-level digital divide.
2025 · German Neubaum et al. · University of Duisburg-Essen · CHI · Topics: User Research Methods (Interviews, Surveys, Observation); Sustainable HCI
Long-Term Effects of User Expertise and Application Design on Collision Anxiety in VR Games
Virtual reality (VR) applications achieve their high immersive potential by detaching the user from the real world, replacing it with a virtual environment. This detachment also blocks real-world orientation cues, which might cause fear of colliding with the real environment and negatively impact the player experience. However, since collision anxiety (CA) is a relatively young concept, it is unclear how factors like users' VR expertise or specific game design choices may affect it. We defined expected CA profiles for five commercial VR games and conducted a longitudinal study examining how growing VR expertise and VR game design influence the users' CA. After six weeks and a total of 154 VR sessions, results indicate that CA differs between applications and generally decreases as VR expertise increases. Based on our results, we propose design implications, providing researchers and designers with guidelines on when to expect and how to avoid fear of colliding.
2025 · Patrizia Ring et al. · Faculty of Computer Science / Department of Human-centered Computing and Cognitive Science (HCCS) / Entertainment Computing Group · CHI · Topics: Immersion & Presence Research; Game UX & Player Behavior
Virtual Visits, Real Emotions: Designing Social VR Experiences for Imprisoned Fathers and their Children
The imprisonment of parents has severe consequences for their relationship with their children. Thus, ensuring valuable contact between them is crucial for parents' social rehabilitation and children's development and well-being. However, visits are often not child-friendly and lack interaction. We see social VR as a means to address these issues. In this paper, we share findings of a user-centered design process of a virtual reality application that allows imprisoned parents to meet their children. Our pilot study with four dyads of children and imprisoned fathers revealed that both appreciated the virtual visits, felt close to each other, and had a positive emotional experience, although fathers missed physical contact. Children preferred VR's playful and interactive nature compared to regular visits. Our research presents virtual visits as a suitable alternative to ensure valuable social interaction between prisoners and their children and contributes to the potential of immersive virtual social experiences for sensitive use cases.
2025 · Linda Graf et al. · University of Duisburg-Essen, Faculty of Computer Science / Department of Human-centered Computing and Cognitive Science (HCCS) / Entertainment Computing Group · CHI · Topics: Social & Collaborative VR; Immersion & Presence Research; Identity & Avatars in XR
Spatial Haptics: A Sensory Substitution Method for Distal Object Detection Using Tactile Cues
We present a sensory substitution-based method for representing locations of remote objects in 3D space via haptics. By imitating auditory localization processes, we enable vibrotactile localization abilities similar to those of some spiders, elephants, and other species. We evaluated this concept in virtual reality by modulating the vibration amplitude of two controllers depending on their locations relative to a target. We developed two implementations applying this method using either ear or hand locations. A proof-of-concept study assessed localization performance and user experience, achieving under 30° differentiation between horizontal targets with no prior training. This unique approach enables localization by using only two actuators, requires low computational power, and could potentially assist users in gaining spatial awareness in challenging environments. We compare the implementations and discuss the use of hands as ears in motion, a novel technique not previously explored in the sensory substitution literature.
2025 · Iddo Yehoshua Wald et al. · University of Bremen, Digital Media Lab · CHI · Topics: Vibrotactile Feedback & Skin Stimulation; Full-Body Interaction & Embodied Input
OptiBasePen: Mobile Base+Pen Input on Passive Surfaces by Sensing Relative Base Motion Plus Close-Range Pen Position
Digital pen input devices based on absolute pen position sensing, such as Wacom Pens, support high-fidelity pen input. However, they require specialized sensing surfaces like drawing tablets, which can have a large desk footprint, constrain the possible input area, and limit mobility. In contrast, digital pens with integrated relative sensing enable mobile use on passive surfaces, but suffer from motion artifacts or require surface contact at all times, deviating from natural pen affordances. We present OptiBasePen, a device for mobile pen input on ordinary surfaces. Our prototype consists of two parts: the "base" on which the hand rests and the pen for fine-grained input. The base features a high-precision mouse sensor to sense its own relative motion, and two infrared image sensors to track the absolute pen tip position within the base's frame of reference. This enables pen input on ordinary surfaces without external cameras while also avoiding drift from pen micro-movements. In this work, we present our prototype as well as the general base+pen concept, which combines relative and absolute sensing.
2024 · Andreas Rene Fender et al. · UIST · Topics: Circuit Making & Hardware Prototyping; Knowledge Worker Tools & Workflows
Understanding the Impact of the Reality-Virtuality Continuum on Visual Search using Physiological Measures
While Mixed Reality allows the seamless blending of digital content into the user's surroundings, it is not clear if such a fusion of digital and physical information impacts users' perceptual and cognitive resources differently. Although the fusion of real and virtual objects provides numerous opportunities to present additional information, it also introduces undesirable side effects, such as split attention and increased visual complexity. We conducted a visual search study in three manifestations of mixed reality to understand the effects of the environment on visual search behavior. We conducted a multimodal evaluation using EEG and eye-tracking correlates of search efficiency, distractor suppression, attention allocation, and behavioral measures. We found that, independently of the perceptual load, Augmented Reality environments reduce users' capacity to identify target information and suppress irrelevant stimuli. Participants reported AR as more demanding and distracting. We discuss design implications for MR interfaces based on physiological inputs for adaptive interactions.
2024 · Francesco Chiossi et al. · MobileHCI · Topics: Eye Tracking & Gaze Interaction; Brain-Computer Interface (BCI) & Neurofeedback; AR Navigation & Context Awareness
Understanding User Acceptance of Electrical Muscle Stimulation in Human-Computer Interaction
Electrical Muscle Stimulation (EMS) has unique capabilities that can manipulate users' actions or perceptions, such as actuating user movement while walking, changing the perceived texture of food, and guiding movements for a user learning an instrument. These applications highlight the potential utility of EMS, but such benefits may be lost if users reject EMS. To investigate user acceptance of EMS, we conducted an online survey (N=101). We compared eight scenarios, six from HCI research applications and two from the sports and health domain. To gain further insights, we conducted in-depth interviews with a subset of the survey respondents (N=10). The results point to the challenges and potential of EMS regarding social and technological acceptance, showing that there is greater acceptance of applications that manipulate action than those that manipulate perception. The interviews revealed safety concerns and user expectations for the design and functionality of future EMS applications.
2024 · Sarah Faltaous et al. · University of Duisburg-Essen · CHI · Topics: Electrical Muscle Stimulation (EMS)
Kinetic Signatures: A Systematic Investigation of Movement-Based User Identification in Virtual Reality
Behavioral Biometrics in Virtual Reality (VR) enable implicit user identification by leveraging the motion data of users' heads and hands from their interactions in VR. This spatiotemporal data forms a Kinetic Signature, which is a user-dependent behavioral biometric trait. Although kinetic signatures have been widely used in recent research, the factors contributing to their degree of identifiability remain mostly unexplored. Drawing from existing literature, this work systematically examines the influence of static and dynamic components in human motion. We conducted a user study (N = 24) with two sessions to reidentify users across different VR sports and exercises after one week. We found that the identifiability of a kinetic signature depends on its inherent static and dynamic factors, with the best combination allowing for 90.91% identification accuracy after one week had passed. Therefore, this work lays a foundation for designing and refining movement-based identification protocols in immersive environments.
2024 · Jonathan Liebers et al. · University of Duisburg-Essen · CHI · Topics: Eye Tracking & Gaze Interaction; Human Pose & Activity Recognition; Social & Collaborative VR
Development and Validation of the Collision Anxiety Questionnaire for VR Applications
The high degree of sensory immersion is a distinctive feature of head-mounted virtual reality (VR) systems. While the visual detachment from the real world enables unique immersive experiences, users risk collisions due to their inability to perceive physical obstacles in their environment. Even the mere anticipation of a collision can adversely affect the overall experience and erode user confidence in the VR system. However, there are currently no valid tools for assessing collision anxiety. We present the iterative development and validation of the Collision Anxiety Questionnaire (CAQ), involving an exploratory and a confirmatory factor analysis with a total of 159 participants. The results provide evidence for both discriminant and convergent validity and a good model fit for the final CAQ with three subscales: general collision anxiety, orientation, and interpersonal collision anxiety. By utilizing the CAQ, researchers can examine potential confounding effects of collision anxiety and evaluate methods for its mitigation.
2024 · Patrizia Ring et al. · University of Duisburg-Essen · CHI · Topics: Social & Collaborative VR; Immersion & Presence Research; Prototyping & User Testing
Don't Forget to Disinfect: Understanding Technology-Supported Hand Disinfection Stations
The global COVID-19 pandemic created a constant need for hand disinfection. While hand disinfection remains essential, its use is declining as perceived personal risk decreases (e.g., as a result of vaccination). Thus, this work explores using different visual cues as reminders for hand disinfection. We investigated different public display designs using (1) paper-based only, adding (2) screen-based, or (3) projection-based visual cues. To gain insights into these designs, we conducted semi-structured interviews with passersby (N=30). Our results show that the screen- and projection-based conditions were perceived as more engaging. Furthermore, we conclude that the disinfection process consists of four steps that can be supported: drawing attention to the disinfection station, supporting the (subconscious) understanding of the interaction, motivating hand disinfection, and performing the action itself. We conclude with design implications for technology-supported disinfection.
2023 · Jonas Keppel et al. · MobileHCI · Topics: Context-Aware Computing; Ubiquitous Computing
ARcoustic: A Mobile Augmented Reality System for Seeing Out-of-View Traffic
Locating out-of-view vehicles can help pedestrians to avoid critical traffic encounters. Some previous approaches focused solely on visualising out-of-view objects, neglecting their localisation and limitations. Other methods rely on continuous camera-based localisation, raising privacy concerns. Hence, we propose the ARcoustic system, which utilises a microphone array for nearby moving vehicle localisation and visualises nearby out-of-view vehicles to support pedestrians. First, we present the implementation of our sonic-based localisation and discuss the current technical limitations. Next, we present a user study (n=18) in which we compared two state-of-the-art visualisation techniques (Radar3D, CompassbAR) to a baseline without any visualisation. Results show that both techniques present too much information, resulting in below-average user experience and longer response times. Therefore, we introduce a novel visualisation technique that aligns with the technical localisation limitations and meets pedestrians' preferences for effective visualisation, as demonstrated in the second user study (n=16). Lastly, we conduct a small field study (n=8) testing our ARcoustic system under realistic conditions. Our work shows that out-of-view object visualisations must align with the underlying localisation technology and fit the concrete application scenario.
2023 · Xuesong Zhang et al. · AutoUI · Topics: External HMI (eHMI) — Communication with Pedestrians & Cyclists; AR Navigation & Context Awareness
VR Almost There: Simulating Co-located Multiplayer Experiences in Social Virtual Reality
Consumer social virtual reality (VR) applications have recently started to enable social interactions at a distance. Yet it is still relatively unknown if and to what extent such applications provide meaningful social experiences in cases where in-person leisure activities are not feasible. To explore this, we developed a custom social VR application and conducted an exploratory lab study with 25 dyads in which we compared an in-person and a virtual version of a co-located multiplayer scenario. Our mixed-methods analysis revealed that both scenarios created a socially rich atmosphere and strengthened the social closeness between players. However, the lack of facial animations, limited body language, and a low field of view led to VR's main social experiential limitations: a reduced mutual awareness and emotional understanding compared to the in-person scenario. We derive implications for social VR design and research as well as game user research.
2023 · Philipp Sykownik et al. · University of Duisburg-Essen · CHI · Topics: Social & Collaborative VR; Multiplayer & Social Games
Never Skip Leg Day Again: Training the Lower Body with Vertical Jumps in a Virtual Reality Exergame
Virtual Reality (VR) exergames can increase engagement in and motivation for physical activities. Most VR exergames focus on the upper body because many VR setups only track the users' heads and hands. To become a serious alternative to existing exercise programs, VR exergames must provide a balanced workout and train the lower limbs, too. To address this issue, we built a VR exergame focused on vertical jump training to explore full-body exercise applications. To create a safe and effective training program, nine domain experts participated in our prototype design. Our mixed-methods study confirms that the jump-centered exercises provided a worthy challenge and positive player experience, indicating long-term retention. Based on our findings, we present five design implications to guide future work: avoid an unintended forward drift, consider technical constraints, address safety concerns in full-body VR exergames, incorporate rhythmic elements with fluent movement patterns, and adapt difficulty to players' fitness progression status.
2023 · Sebastian Cmentowski et al. · University of Duisburg-Essen · CHI · Topics: Full-Body Interaction & Embodied Input; Serious & Functional Games
Literature Reviews in HCI: A Review of Reviews
This paper analyses Human-Computer Interaction (HCI) literature reviews to provide a clear conceptual basis for authors, reviewers, and readers. HCI is multidisciplinary and various types of literature reviews exist, from systematic to critical reviews in the style of essays. Yet, there is insufficient consensus on what to expect of literature reviews in HCI. Thus, a shared understanding of literature reviews and clear terminology is needed to plan, evaluate, and use literature reviews, and to further improve review methodology. We analysed 189 literature reviews published at all SIGCHI conferences and in ACM Transactions on Computer-Human Interaction (TOCHI) up until August 2022. We report on the main dimensions of variation: (i) contribution types and topics; and (ii) structure and methodologies applied. We identify gaps and trends to inform future meta work in HCI and provide a starting point on how to move towards a more comprehensive terminology system of literature reviews in HCI.
2023 · Evropi Stefanidi et al. · University of Bremen · CHI · Topics: User Research Methods (Interviews, Surveys, Observation); Research Ethics & Open Science
HotFoot: Foot-Based User Identification using Thermal Imaging
We propose a novel method for seamlessly identifying users by combining thermal and visible feet features. While it is known that users' feet have unique characteristics, these have so far been underutilized for biometric identification, as observing those features often requires the removal of shoes and socks. As thermal cameras are becoming ubiquitous, we foresee a new form of identification, using feet features and heat traces to reconstruct the footprint even while wearing shoes or socks. We collected a dataset of users' feet (N = 21), wearing three types of footwear (personal shoes, standard shoes, and socks) on three floor types (carpet, laminate, and linoleum). By combining visual and thermal features, an AUC between 91.1% and 98.9% can be achieved, depending on floor and shoe type, with personal shoes on linoleum floor performing best. Our findings demonstrate the potential of thermal imaging for continuous and unobtrusive user identification.
2023 · Alia Saad et al. · University of Duisburg-Essen · CHI · Topics: Human Pose & Activity Recognition; Biosensors & Physiological Monitoring
How to Communicate Robot Motion Intent: A Scoping Review
Robots are becoming increasingly present in our daily lives, supporting us and carrying out autonomous tasks. In Human-Robot Interaction, human actors benefit from understanding the robot's motion intent to avoid task failures and foster collaboration. Finding effective ways to communicate this intent to users has recently received increased research interest. However, no common language has been established to systematize robot motion intent. This work presents a scoping review aimed at unifying existing knowledge. Based on our analysis, we present an intent communication model that depicts the relationship between robot and human through different intent dimensions (intent type, intent information, intent location). We discuss these different intent dimensions and their interrelationships with different kinds of robots and human roles. Throughout our analysis, we classify the existing research literature along our intent communication model, allowing us to identify key patterns and possible directions for future research.
2023 · Max Pascher et al. · Westphalian University of Applied Sciences, University of Duisburg-Essen · CHI · Topics: Social Robot Interaction; Human-Robot Collaboration (HRC)