Embracer: A Wearable Encountered-Type Haptic Controller for 3 DoF Input and Feedback

The lack of haptic sensations beyond very simple vibration feedback diminishes the feeling of presence in Virtual Reality. Research has suggested various approaches to deliver haptic sensations to the user's palm. However, these approaches are typically limited in the number of actuation directions and only focus on enhancing the system's output, ignoring haptic input. We present Embracer, a wrist-mounted encountered-type haptic controller that addresses these gaps by rendering forces along three axes through a sphere-shaped end effector within the user's palm. Using modified servo motors, we sense user-performed manipulations of the end effector as an input modality. In this paper, we contribute the design and implementation of Embracer together with a preliminary technical evaluation. By providing a more comprehensive haptic feedback system, Embracer enhances realism and immersion as well as user control.

Dennis Dietz et al. 2024. UbiComp. Topics: Force Feedback & Pseudo-Haptic Weight; Haptic Wearables.
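A minimal sketch of the sensing idea mentioned in the abstract, reading the deviation between a servo's commanded and measured angle as user input on each axis; the Servo class, deadband, and all names are illustrative assumptions, not the paper's implementation.

```python
# Sketch: modified (position-feedback) servos used both to drive the end
# effector and to sense user manipulation. Class, threshold, and names are
# assumptions for illustration only.

class Servo:
    """Stand-in for a hobby servo modified to expose its internal potentiometer."""
    def __init__(self):
        self.commanded_deg = 0.0   # angle the controller last requested
        self.measured_deg = 0.0    # angle read back from the potentiometer/ADC

    def write(self, angle_deg: float) -> None:
        self.commanded_deg = angle_deg


DEADBAND_DEG = 3.0  # assumed threshold separating sensor noise from intentional input


def sense_user_input(axes: list[Servo]) -> list[float]:
    """Treat the deviation between commanded and measured angle on each axis
    as user-applied displacement of the sphere-shaped end effector."""
    deflections = []
    for servo in axes:
        error = servo.measured_deg - servo.commanded_deg
        deflections.append(error if abs(error) > DEADBAND_DEG else 0.0)
    return deflections  # one value per axis: a 3 DoF input vector
```
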
Exploring Redirection and Shifting Techniques to Mask Hand Movements from Shoulder-Surfing Attacks during PIN Authentication in Virtual Reality

The proliferation of mobile Virtual Reality (VR) headsets shifts our interaction with virtual worlds beyond our living rooms into shared spaces. Consequently, we are entrusting more and more personal data to these devices, calling for strong security measures and authentication. However, the standard authentication method of such devices - entering PINs via virtual keyboards - is vulnerable to shoulder-surfing, as movements to enter keys can be monitored by an unnoticed observer. To address this, we evaluated masking techniques to obscure VR users' input during PIN authentication by diverting their hand movements. Through two experimental studies, we demonstrate that these methods increase users' security against shoulder-surfing attacks from observers without excessively impacting their experience and performance. With these discoveries, we aim to enhance the security of future VR authentication without disrupting the virtual experience or necessitating additional hardware or training of users.

Yannick Weiss et al. 2024. MobileHCI. Topics: Passwords & Authentication.
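One way such a redirection could work is sketched below: the virtual hand is gradually offset by a fresh, per-digit displacement so that the observable physical motion no longer maps consistently onto virtual key positions. The warping scheme, offset magnitude, and all names are assumptions for illustration; the paper's exact techniques may differ.

```python
# Sketch: gradual hand redirection to decouple observable physical movement
# from the virtual key being entered. Offsets, blending, and names are
# illustrative assumptions.
import random

MAX_OFFSET_M = 0.08  # assumed maximum lateral displacement of the virtual hand


def sample_digit_offset() -> tuple[float, float, float]:
    """Draw a fresh offset for every PIN digit so repeated entries do not
    reveal a consistent mapping to an observer."""
    return (random.uniform(-MAX_OFFSET_M, MAX_OFFSET_M), 0.0,
            random.uniform(-MAX_OFFSET_M, MAX_OFFSET_M))


def redirect_hand(real_pos, start_pos, target_pos, offset):
    """Blend the offset in proportionally to the progress toward the key,
    so the warp is applied gradually instead of as a noticeable jump."""
    total = sum((t - s) ** 2 for s, t in zip(start_pos, target_pos)) ** 0.5
    covered = sum((p - s) ** 2 for s, p in zip(start_pos, real_pos)) ** 0.5
    progress = min(1.0, covered / total) if total > 0 else 1.0
    return tuple(p + progress * o for p, o in zip(real_pos, offset))
```
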
Using Pseudo-Stiffness to Enrich the Haptic Experience in Virtual Reality

Providing users with a haptic sensation of the hardness and softness of objects in virtual reality is an open challenge. While physical props and haptic devices help, their haptic properties do not allow for dynamic adjustments. To overcome this limitation, we present a novel technique for changing the perceived stiffness of objects based on a visuo-haptic illusion. We achieved this by manipulating the hands' Control-to-Display (C/D) ratio in virtual reality while pressing down on an object with fixed stiffness. In the first study (N=12), we determine the detection thresholds of the illusion. Our results show that we can exploit a C/D ratio from 0.7 to 3.5 without user detection. In the second study (N=12), we analyze the illusion's impact on the perceived stiffness. Our results show that participants perceive the objects to be up to 28.1% softer and 8.9% stiffer, allowing for various haptic applications in virtual reality.

Yannick Weiss et al., LMU Munich. 2023. CHI. Topics: Force Feedback & Pseudo-Haptic Weight; Immersion & Presence Research.
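A minimal sketch of how a C/D-ratio manipulation of this kind could be realized; only the undetectable ratio range of 0.7 to 3.5 comes from the abstract, while the linear spring model and all function names are assumptions.

```python
# Sketch: pseudo-stiffness via Control-to-Display (C/D) ratio scaling.
# Only the 0.7-3.5 undetectable range is taken from the abstract; the linear
# spring model and names are assumptions.

CD_MIN, CD_MAX = 0.7, 3.5  # range reported as undetectable by participants


def rendered_press_depth(real_depth_mm: float, cd_ratio: float) -> float:
    """Scale the physically measured press depth before drawing the virtual hand.

    cd_ratio > 1 shows a deeper virtual press than the real one, so the fixed
    prop feels softer; cd_ratio < 1 makes it feel stiffer.
    """
    cd_ratio = max(CD_MIN, min(CD_MAX, cd_ratio))
    return real_depth_mm * cd_ratio


def cd_ratio_for_target(prop_stiffness: float, target_stiffness: float) -> float:
    """Choose a C/D ratio so a prop of fixed stiffness is perceived as the target,
    assuming perceived stiffness is roughly force per *seen* displacement."""
    return max(CD_MIN, min(CD_MAX, prop_stiffness / target_stiffness))


# Example: a 300 N/m prop rendered to feel roughly like a 150 N/m object.
ratio = cd_ratio_for_target(300.0, 150.0)   # -> 2.0
print(rendered_press_depth(5.0, ratio))     # a 5 mm real press shown as 10 mm
```
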
When XR and AI Meet - A Scoping Review on Extended Reality and Artificial Intelligence

Research on Extended Reality (XR) and Artificial Intelligence (AI) is booming, which has led to an emerging body of literature at their intersection. However, the main topics in this intersection are unclear, as are the benefits of combining XR and AI. This paper presents a scoping review that highlights how XR is applied in AI research and vice versa. We screened 2619 publications from 203 international venues published between 2017 and 2021, followed by an in-depth review of 311 papers. Based on our review, we identify five main topics at the intersection of XR and AI, showing how research in the two fields can benefit from each other. Furthermore, we present a list of commonly used datasets, software, libraries, and models to help researchers interested in this intersection. Finally, we present 13 research opportunities and recommendations for future work in XR and AI research.

Teresa Hirzle et al., University of Copenhagen. 2023. CHI. Topics: Social & Collaborative VR; Generative AI (Text, Image, Music, Video).
Tailor Twist: Assessing Rotational Mid-Air Interactions for Augmented Reality

Mid-air gestures, widely used in today's Augmented Reality (AR) applications, are prone to the “gorilla arm” effect, leading to discomfort during prolonged interactions. While prior work has proposed metrics to quantify this effect and means to improve comfort and ergonomics, these works usually only consider simplistic, one-dimensional AR interactions, like reaching for a point or pushing a button. However, interacting with AR environments also involves far more complex tasks, such as turning rotational knobs, which potentially impact ergonomics. This paper advances the understanding of the ergonomics of rotational mid-air interactions in AR. For this, we contribute the results of a controlled experiment exposing the participants to a rotational task in the interaction space defined by their arms' reach. Based on the results, we discuss how novel future mid-air gesture modalities can benefit from our findings on ergonomics-aware rotational interaction.

Dominik Schön et al., Technical University of Darmstadt. 2023. CHI. Topics: Full-Body Interaction & Embodied Input; AR Navigation & Context Awareness.
TicTacToes: Assessing Toe Movements as an Input Modality

From carrying grocery bags to holding onto handles on the bus, there are a variety of situations where one or both hands are busy, hindering the vision of ubiquitous interaction with technology. Voice commands, as a popular hands-free alternative, struggle with ambient noise and privacy issues. As an alternative approach, research explored movements of various body parts (e.g., head, arms) as input modalities, with foot-based techniques proving particularly suitable for hands-free interaction. Whereas previous research only considered the movement of the foot as a whole, in this work, we argue that our toes offer further degrees of freedom that can be leveraged for interaction. To explore the viability of toe-based interaction, we contribute the results of a controlled experiment with 18 participants assessing the impact of five factors on the accuracy, efficiency and user experience of such interfaces. Based on the findings, we provide design recommendations for future toe-based interfaces.

Florian Müller et al., LMU Munich. 2023. CHI. Topics: Foot & Wrist Interaction.
In Sync: Exploring Synchronization to Increase Trust Between Humans and Non-humanoid Robots

When we go for a walk with friends, we can observe an interesting effect: From step lengths to arm movements - our movements unconsciously align; they synchronize. Prior research found that this synchronization is a crucial aspect of human relations that strengthens social cohesion and trust. Generalizing from these findings in synchronization theory, we propose a dynamical approach that can be applied in the design of non-humanoid robots to increase trust. We contribute the results of a controlled experiment with 51 participants exploring our concept in a between-subjects design. For this, we built a prototype of a simple non-humanoid robot that can bend to follow human movements and vary the movement synchronization patterns. We found that synchronized movements lead to significantly higher ratings in an established questionnaire on trust between people and automation but did not influence the willingness to spend money in a trust game.

Wieslaw Bartkowski et al., University of Warsaw. 2023. CHI. Topics: Social Robot Interaction; Empowerment of Marginalized Groups.
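A rough sketch of the coupling idea behind such a bending robot: the bend angle is pulled toward the tracked human movement with an adjustable coupling gain, so full coupling yields synchronized motion and zero coupling yields independent motion. The gains, time step, intrinsic sway, and all names are assumptions, not the prototype's actual controller.

```python
# Sketch: varying movement synchronization by blending an intrinsic sway with
# the tracked human movement. All constants and names are assumptions.
import math


def robot_bend_step(robot_angle: float, human_angle: float, t: float,
                    coupling: float, dt: float = 0.02) -> float:
    """One control step: coupling=1.0 follows the human (synchronized),
    coupling=0.0 ignores the human and sways on its own (unsynchronized)."""
    intrinsic = 20.0 * math.sin(0.8 * t)                     # idle sway in degrees
    target = coupling * human_angle + (1.0 - coupling) * intrinsic
    return robot_angle + (target - robot_angle) * min(1.0, 5.0 * dt)
```
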
UndoPort: Exploring the Influence of Undo-Actions for Locomotion in Virtual Reality on the Efficiency, Spatial Understanding and User Experience

When we get lost in Virtual Reality (VR) or want to return to a previous location, we use the same methods of locomotion for the way back as for the way forward. This is time-consuming and requires additional physical orientation changes, increasing the risk of getting tangled in the headset's cables. In this paper, we propose the use of undo actions to revert locomotion steps in VR. We explore eight different variations of undo actions as extensions of point&teleport, based on the possibility to undo position and orientation changes together with two different visualizations of the undo step (discrete and continuous). We contribute the results of a controlled experiment with 24 participants investigating the efficiency and orientation of the undo techniques in a radial maze task. We found that the combination of position and orientation undo together with a discrete visualization resulted in the highest efficiency without increasing orientation errors.

Florian Müller et al., LMU Munich. 2023. CHI. Topics: Social & Collaborative VR; Mixed Reality Workspaces; Immersion & Presence Research.
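A minimal sketch of the underlying data structure: each teleport pushes the previous pose onto a stack, and an undo pops it, restoring position and, depending on the variant, orientation. The Pose type and all names are assumptions for illustration.

```python
# Sketch: undo stack for point&teleport locomotion. Types and names are
# illustrative assumptions, not the paper's implementation.
from dataclasses import dataclass


@dataclass
class Pose:
    position: tuple[float, float, float]
    yaw_deg: float  # orientation around the vertical axis


class LocomotionHistory:
    def __init__(self):
        self._stack: list[Pose] = []

    def teleport(self, current: Pose, target: Pose) -> Pose:
        """Record the pose before the jump so it can be reverted later."""
        self._stack.append(current)
        return target

    def undo(self, current: Pose, restore_orientation: bool = True) -> Pose:
        """Revert the last teleport; optionally keep the current orientation,
        corresponding to the position-only undo variants."""
        if not self._stack:
            return current
        previous = self._stack.pop()
        if restore_orientation:
            return previous
        return Pose(previous.position, current.yaw_deg)
```

In a discrete visualization the restored pose would be applied instantly, while a continuous visualization would animate the same transition over time.
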
Going, Going, Gone: Exploring Intention Communication for Multi-User Locomotion in Virtual Reality

Exploring virtual worlds together with others adds a social component to the Virtual Reality (VR) experience that increases connectedness. In the physical world, joint locomotion comes naturally through implicit intention communication and subsequent adjustments of the movement patterns. In VR, however, discrete locomotion techniques such as point&teleport come without prior intention communication, hampering the collective experience. Related work proposes fixed groups, with a single person controlling the group movement, resulting in the loss of individual movement capabilities. To close the gap and mediate between these two extremes, we introduce three intention communication methods and explore them alongside two baseline methods. We contribute the results of a controlled experiment (n=20) investigating these methods from the perspective of a leader and a follower in a dyadic locomotion task. Our results suggest that shared visualizations support the understanding of movement intentions, increasing the group feeling while maintaining individual freedom of movement.

Julian Rasch et al., LMU Munich. 2023. CHI. Topics: Social & Collaborative VR; Immersion & Presence Research.
Smooth as Steel Wool: Effects of Visual Stimuli on the Haptic Perception of Roughness in Virtual Reality

Haptic feedback is essential for lifelike Virtual Reality (VR) experiences. To provide a wide range of matching sensations of being touched or stroked, current approaches typically need large numbers of different physical textures. However, even advanced devices can only accommodate a limited number of textures to remain wearable. Therefore, a better understanding of how expectations elicited by different visualizations affect haptic perception is necessary to balance physical constraints against a great variety of matching physical textures. In this work, we conducted an experiment (N=31) assessing how the perception of roughness is affected within VR. We designed a prototype for arm stroking and compared the effects of different visualizations on the perception of physical textures with distinct roughnesses. Additionally, we used the visualizations' real-world materials, no haptics, and vibrotactile feedback as baselines. As one result, we found that two levels of roughness can be sufficient to convey a realistic illusion.

Sebastian Günther et al., Technical University of Darmstadt. 2022. CHI. Topics: Vibrotactile Feedback & Skin Stimulation; Immersion & Presence Research.
Squeezy-Feely: Investigating Lateral Thumb-Index Pinching as an Input Modality

From zooming on smartphones and mid-air gestures to deformable user interfaces, thumb-index pinching grips are used in many interaction techniques. However, there is still a lack of systematic understanding of how the accuracy and efficiency of such grips are affected by various factors such as counterforce, grip span, and grip direction. Therefore, in this paper, we contribute an evaluation (N = 18) of thumb-index pinching performance in a visual targeting task using scales of up to 75 items. As part of our findings, we conclude that the pinching interaction between the thumb and index finger is also a promising modality for one-dimensional input on larger scales. Furthermore, we discuss and outline implications for future user interfaces that benefit from pinching as an additional and complementary interaction modality.

Martin Schmitz et al., Technical University of Darmstadt. 2022. CHI. Topics: In-Vehicle Haptic, Audio & Multimodal Feedback; Hand Gesture Recognition.