Eliciting Change Towards Better Virtual Worlds: A Workshop Process to Foster Ethical Reflection in Creative Technology Design Processes
The concept of the metaverse, recently re-emerging in public discourse, is viewed by some as the internet's next evolutionary stage, while others regard it as a dubious promise of a future where physical and digital worlds merge seamlessly. As the metaverse takes shape, it is crucial to question whether its design will embody the values of the societies it aims to serve. Emphasizing ethical and inclusive technology development is essential, particularly through educating about the social impacts and ethical challenges in computer science and design. Our research contributes to this goal by introducing a five-day workshop process designed to encourage ethically reflective technology development. The workshop integrates speculative design, service design, and digital ethics methodologies. We demonstrate its effectiveness by detailing the outcomes of its implementation in two distinct educational settings: a seminar in Germany and a summer school in Taiwan, both centered on the development of metaverse applications.
Michel Hohendanner et al. C&C 2025.

Exploring the Effect of Music on User Typing and Identification through Keystroke Dynamics
This paper explores the relationship between music and keyboard typing behavior, focusing on how music affects keystroke-based authentication systems. To this end, we conducted an online experiment (N=43) in which participants were asked to replicate paragraphs of text while listening to music at varying tempos and loudness levels across two sessions. Our findings reveal that listening to music leads to more typing errors, and to faster typing when the music is fast. Identification through a biometric model improved when music was played either during its training or its testing. This hints at the potential of music for increasing identification performance, and at a tradeoff between this benefit and user distraction. Overall, our research sheds light on typing behavior and introduces music as a subtle and effective tool to influence user typing in the context of keystroke-based authentication.
Lukas Mecke et al. (LMU Munich; University of the Bundeswehr Munich). CHI 2025.

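As a rough illustration of the kind of pipeline such keystroke-dynamics work builds on, the sketch below extracts the classic dwell-time and flight-time features from key events and trains an off-the-shelf classifier. The event format, feature set, and classifier choice are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal keystroke-dynamics identification sketch (illustrative, not the
# paper's pipeline). Assumes key events arrive as (key, press_t, release_t).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def timing_features(events):
    """Summarise a typed sample by dwell and flight time statistics."""
    dwells = [r - p for _, p, r in events]            # how long each key is held
    flights = [events[i + 1][1] - events[i][2]        # release-to-next-press gaps
               for i in range(len(events) - 1)]
    return [np.mean(dwells), np.std(dwells), np.mean(flights), np.std(flights)]

def train_identifier(samples, typist_labels):
    """samples: list of event lists; typist_labels: one identity per sample."""
    X = np.array([timing_features(s) for s in samples])
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, typist_labels)
```
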
Metaverse Perspectives from Japan: A Participatory Speculative Design Case Study
Currently, the development of the metaverse lies in the hands of industry; citizens have little influence on this process. To do justice to the pluralism of (digital) societies, we should instead strive for an open discourse that includes many different perspectives on the metaverse and its core technologies, such as AI. We utilize a participatory speculative design (PSD) approach to explore Japanese citizens' perspectives on future metaverse societies and their social and ethical implications. Our contributions are twofold. First, we demonstrate the effectiveness of PSD in engaging citizens in critical discourse on emerging technologies like the metaverse by presenting our workshop framework and participants' processes. Second, we identify key themes from participants' perspectives, providing insights for culturally sensitive design and development of virtual environments. Our analysis shows that participants imagine the metaverse as having the potential to solve a variety of societal issues, for example breaking down the barriers of physical environments for communication, social interaction, crisis preparation, and political participation, or tackling identity-related issues. Regarding future metaverse societies, participants' imaginations raise critical questions about human-AI relations, technical solutionism, politics and technology, globalization and local cultures, and immersive technologies. We discuss implications and contribute to expanding conversations on metaverse developments.
Michel Hohendanner et al. CSCW 2024, Session 3c: Speculative Design and Emerging Technologies.

PriView: Exploring Visualisations Supporting Users' Privacy Awareness
We present PriView, a concept for visualising privacy-invasive devices in the user's vicinity. PriView is motivated by the ever-increasing number of sensors in our environments that track potentially sensitive data (e.g., audio and video). At the same time, users are often unaware of this, which violates their privacy. Knowledge about potential recording would enable users to avoid such areas or to withhold certain information. We built two prototypes: a) a mobile application capable of detecting smart devices in the environment using a thermal camera, and b) VR mockups of six scenarios where PriView might be useful (e.g., a rental apartment). Both include several types of visualisation. Results of our lab study (N=24) indicate that users prefer simple, permanent indicators while wishing for detailed visualisations on demand. Our exploration is meant to support future designs of privacy visualisations for varying smart environments.
Sarah Prange et al. (Bundeswehr University Munich, LMU Munich). CHI 2021.

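To make the thermal-camera idea concrete, here is a minimal sketch of one plausible detection step: thresholding a thermal frame and grouping warm pixels into candidate device regions. The threshold value and the connected-component grouping are assumptions, not PriView's actual detection pipeline.

```python
# Illustrative hot-spot detection on a thermal frame (assumed to be a 2D
# numpy array of temperatures in degrees Celsius).
import numpy as np
from scipy import ndimage

def find_hot_devices(thermal_frame, threshold_c=35.0):
    """Return bounding slices of regions warmer than the threshold."""
    mask = thermal_frame > threshold_c     # pixels plausibly belonging to powered devices
    labeled, _ = ndimage.label(mask)       # group adjacent warm pixels into blobs
    return ndimage.find_objects(labeled)   # one (row-slice, col-slice) box per blob
```
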
SpatialProto: Exploring Real-World Motion Captures for Rapid Prototyping of Interactive Mixed Reality
Spatial computing devices that blend virtual and real worlds have the potential to soon become ubiquitous. Yet creating experiences for spatial computing is non-trivial and requires skills in programming and 3D content creation, rendering it inaccessible to a wider group of users. We present SpatialProto, an in-situ spatial prototyping system that lowers the barrier to engaging in spatial prototyping. With a depth-sensing Mixed Reality headset, SpatialProto lets users record animated objects in their real-world environment (e.g., paper, clay, people, or any other prop), extract only the relevant parts, and directly place and transform these recordings in their physical environment. We describe the design and implementation of SpatialProto, a user study evaluating the system's prototype with non-expert users (N=9), and demonstrate applications where multiple captures are fused for compelling Augmented Reality experiences.
Leon Müller et al. (LMU Munich). CHI 2021.

Hidden Interaction Techniques: Concealed Information Acquisition and Texting on Smartphones and Wearables
There are many situations where using personal devices is not socially acceptable, or where nearby people present a privacy risk. For these situations, we explore the concept of hidden interaction techniques through two prototype applications. HiddenHaptics allows users to receive information through vibrotactile cues on a smartphone, and HideWrite allows users to write text messages by drawing on a dimmed smartwatch screen. We conducted three user studies to investigate whether, and how, these techniques can be used without being exposed. Our primary findings are that (1) users can effectively hide their interactions while attending to a social situation, (2) users seek to interact when another person is speaking, and they also tend to hide the interaction using their body or furniture, and (3) users can sufficiently focus on the social situation despite their interaction, whereas non-users feel that observing the user hinders their ability to focus on the social activity.
Ville Mäkelä et al. (LMU Munich). CHI 2021.

Behavioural Biometrics in VR: Identifying People from Body Motion and Relations in Virtual Reality
Every person is unique, with individual behavioural characteristics: how one moves, coordinates, and uses their body. In this paper we investigate body motion as a behavioural biometric for virtual reality. In particular, we examine which behaviour is suitable for identifying a user. This is valuable in situations where multiple people use a virtual reality environment in parallel, for example in the context of authentication or for adapting the VR environment to users' preferences. We present a user study (N=22) in which people perform controlled VR tasks (pointing, grabbing, walking, typing) while we record their head, hand, and eye motion data over two sessions. These body segments can be arbitrarily combined into body relations, and we found that these movements and their combinations lead to characteristic behavioural patterns. We present an extensive analysis, using classification methods, of which motions and relations are useful for identifying users in which tasks. Our findings benefit researchers and practitioners alike who aim to build novel adaptive and secure user interfaces in virtual reality.
Ken Pfeuffer et al. (Bundeswehr University Munich). CHI 2019.

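The sketch below shows the general shape of such an identification pipeline: summary statistics over head motion plus a head-hand "relation" feature, fed to a classifier. The specific features and classifier are assumptions for illustration; the paper's analysis is far more extensive.

```python
# Illustrative VR motion-biometrics sketch. Assumes per-frame 3D positions
# for head and hand as (frames, 3) arrays; not the paper's exact features.
import numpy as np
from sklearn.svm import SVC

def motion_features(head, hand):
    head, hand = np.asarray(head), np.asarray(hand)
    speed = np.linalg.norm(np.diff(head, axis=0), axis=1)  # head movement per frame
    relation = np.linalg.norm(head - hand, axis=1)         # head-hand distance ("body relation")
    return [speed.mean(), speed.std(), relation.mean(), relation.std()]

def identify_users(sessions, user_ids):
    """sessions: list of (head, hand) arrays; user_ids: one identity per session."""
    X = np.array([motion_features(h, hd) for h, hd in sessions])
    return SVC(kernel="rbf").fit(X, user_ids)
```
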
Understanding Face and Eye Visibility in Front-Facing Cameras of Smartphones used in the Wild
Commodity mobile devices are now equipped with high-resolution front-facing cameras, enabling applications in biometrics (e.g., FaceID in the iPhone X), facial expression analysis, and gaze interaction. However, it is unknown how often users hold devices in a way that allows capturing their face or eyes, and how this impacts detection accuracy. We collected 25,726 in-the-wild photos taken with the front-facing cameras of smartphones, along with associated application usage logs. We found that the full face is visible about 29% of the time, and that in most cases the face is only partially visible. Furthermore, we identified an influence of users' current activity; for example, when watching videos, the eyes but not the entire face are visible 75% of the time in our dataset. We also found that a state-of-the-art face detection algorithm performs poorly on photos taken with front-facing cameras. We discuss how these findings impact mobile applications that leverage face and eye detection, and derive practical implications for addressing the limitations of the state of the art.
Mohamed Khamis et al. (LMU Munich). CHI 2018.

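For a sense of how a per-photo visibility analysis can be automated, the sketch below uses OpenCV's stock Haar cascades as a stand-in detector. The paper evaluated a different, state-of-the-art detector, so treat this purely as an illustration of the analysis step.

```python
# Illustrative face/eye visibility check with OpenCV's bundled Haar cascades.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def visibility(photo_path):
    gray = cv2.cvtColor(cv2.imread(photo_path), cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return {"face_visible": len(faces) > 0, "eyes_visible": len(eyes) > 0}
```
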
Pocket Transfers: Interaction Techniques for Transferring Content from Situated Displays to Mobile Devices
We present Pocket Transfers: interaction techniques that allow users to transfer content from situated displays to a personal mobile device while keeping the device in a pocket or bag. Existing content transfer solutions require direct manipulation of the mobile device, making interaction slower and less flexible. Our techniques employ touch, mid-air gestures, gaze, and a multimodal combination of gaze and mid-air gestures. We evaluated the techniques in a user study (N=20) featuring dynamic scenarios in which the user approaches the display, completes the task, and leaves. We show that all pocket transfer techniques are fast and seen as highly convenient. Mid-air gestures are the most efficient touchless method for transferring a single item, while the multimodal method is the fastest touchless method when multiple items are transferred. We provide guidelines to help researchers and practitioners choose the most suitable content transfer techniques for their systems.
Ville Mäkelä et al. (University of Tampere, LMU Munich). CHI 2018.

Your Eyes Tell: Leveraging Smooth Pursuit for Assessing Cognitive Workload
A common objective for context-aware computing systems is to predict how user interfaces impact user performance with respect to cognitive capabilities. Existing approaches, such as questionnaires or pupil dilation measurements, either allow only for subjective assessments or are susceptible to environmental influences and user physiology. We address these challenges by exploiting the fact that cognitive workload influences smooth pursuit eye movements. We compared three trajectories and two speeds under different levels of cognitive workload in a user study (N=20). We found higher deviations of gaze points during smooth pursuit eye movements for specific trajectory types at higher cognitive workload levels. Using an SVM classifier, we predict cognitive workload from smooth pursuit with an accuracy of 99.5% for distinguishing between low and high workload, and an accuracy of 88.1% for estimating workload across three levels of difficulty. We discuss implications and present use cases of how cognition-aware systems benefit from inferring cognitive workload in real time from smooth pursuit eye movements.
Thomas Kosch et al. (LMU Munich). CHI 2018.

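A hedged sketch of the core classification idea follows: the deviation of gaze from the moving stimulus during smooth pursuit, summarised per trial and fed to an SVM. The feature set and kernel are assumptions; the paper's exact pipeline may differ.

```python
# Illustrative workload classification from smooth pursuit deviations.
# gaze and target are (samples, 2) arrays of on-screen coordinates per trial.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def pursuit_deviation(gaze, target):
    d = np.linalg.norm(np.asarray(gaze) - np.asarray(target), axis=1)
    return [d.mean(), d.std(), d.max()]   # per-trial deviation statistics

def workload_classifier(trials, labels):
    """trials: list of (gaze, target) pairs; labels: low/high workload per trial."""
    X = np.array([pursuit_deviation(g, t) for g, t in trials])
    clf = SVC(kernel="rbf")
    print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
    return clf.fit(X, labels)
```
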
ResearchIME: A Mobile Keyboard Application for Studying Free Typing Behaviour in the Wild
We present a data logging concept, tool, and analyses to facilitate studies of everyday mobile touch keyboard use and free typing behaviour: 1) we propose a filtering concept for logging typing without recording readable text and assess reactions to the filters with a survey (N=349); 2) we release an Android keyboard app and backend that implement this concept; 3) based on a three-week field study (N=30), we present the first analyses of keyboard use and typing biometrics on such free-text typing data in the wild, including speed, postures, apps, auto-correction, and word suggestions. We conclude that research on mobile keyboards benefits from observing free typing beyond the lab, and we discuss ideas for further studies.
Daniel Buschek et al. (LMU Munich). CHI 2018.

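The filtering concept can be pictured as reducing each keystroke to timing plus a coarse category before anything is stored, so speed and rhythm remain analysable while the text itself stays unreadable. The category scheme below is an assumed simplification of the idea, not ResearchIME's actual filter.

```python
# Illustrative privacy filter: log when and what kind of key was typed,
# never the character itself.
import time

def filter_key_event(key: str) -> dict:
    if key.isalpha():
        category = "letter"
    elif key.isdigit():
        category = "digit"
    elif key.isspace():
        category = "space"
    else:
        category = "symbol"
    return {"t": time.time(), "category": category}
```
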
Which one is me? Identifying Oneself on Public Displays
While user representations are extensively used on public displays, it remains unclear how well users can recognize their own representation among those of surrounding users. We study the most widely used representations: abstract objects, skeletons, silhouettes, and mirrors. In a prestudy (N=12), we identify five strategies that users follow to recognize themselves on public displays. In a second study (N=19), we quantify users' recognition time and accuracy for each representation type. Our findings suggest a significant effect of (1) the representation type, (2) the strategies performed by users, and (3) the combination of both on recognition time and accuracy. We discuss the suitability of each representation for different settings and provide specific recommendations on how user representations should be applied in multi-user scenarios. These recommendations guide practitioners and researchers in selecting the representation that best fits the deployment's requirements and the user strategies feasible in that environment.
Mohamed Khamis et al. (LMU Munich). CHI 2018.