Heartbeat Resonance: Inducing Non-contact Heartbeat Sensations in the Chest
Perceiving and altering the sensation of internal physiological states, such as heartbeats, is key for biofeedback and interoception. Yet, wearable devices used for this purpose can feel intrusive and typically fail to deliver stimuli aligned with the heart’s location in the chest. To address this, we introduce Heartbeat Resonance, which uses low-frequency sound waves to create non-contact haptic sensations in the chest cavity, mimicking heartbeats. We conduct two experiments to evaluate the system's effectiveness. The first experiment shows that the system created realistic heartbeat sensations in the chest, with 78.05 Hz being the most effective frequency. In the second experiment, we evaluate the effects of entrainment by simulating faster and slower heart rates. Participants perceived the intended changes and reported high confidence in their perceptions for +15% and -30% heart rates. This system offers a non-intrusive solution for biofeedback while creating new possibilities for immersive VR environments.
2025 · Waseem Hassan et al. · University of Copenhagen, Department of Computer Science · Vibrotactile Feedback & Skin Stimulation · Telemedicine & Remote Patient Monitoring · Sleep & Stress Monitoring · CHI

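The abstract describes the stimulus only at a high level: a 78.05 Hz signal pulsed at a simulated heart rate that is sped up or slowed down by a percentage. The Python sketch below is a hypothetical illustration of that idea, not the authors' implementation; the pulse envelope, base heart rate, and "lub-dub" timing are assumptions.

```python
# Hypothetical sketch: gate a 78.05 Hz sine carrier with short pulses at a
# scaled heart rate. Envelope shape and timings are assumptions for illustration.
import numpy as np

def heartbeat_stimulus(base_bpm=70, rate_scale=1.15, carrier_hz=78.05,
                       duration_s=10.0, sample_rate=48000):
    """Return a mono signal simulating a heart rate of base_bpm * rate_scale."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    beat_period = 60.0 / (base_bpm * rate_scale)   # seconds per simulated beat
    phase = t % beat_period                        # time within the current beat
    envelope = np.zeros_like(t)
    # Two raised-cosine pulses per beat ("lub" at 0 ms, "dub" at ~180 ms).
    for onset, width in [(0.0, 0.10), (0.18, 0.08)]:
        in_pulse = (phase >= onset) & (phase < onset + width)
        local = (phase[in_pulse] - onset) / width
        envelope[in_pulse] += 0.5 * (1 - np.cos(2 * np.pi * local))
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    return envelope * carrier

stimulus = heartbeat_stimulus(rate_scale=0.70)  # simulate a 30% slower heart rate
```
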
Does Random Movements mean Random Results? Why Asynchrony in Experiments on Body Ownership does not Work as Intended
Effects of embodying virtual avatars are routinely validated experimentally by comparing synchronous and asynchronous movements between virtual and real bodies. This experimental paradigm, however, lacks justification, validation, and standardization. Asynchrony is currently implemented in numerous ways, such as through delayed, dislocated, or prerecorded movements, and these may impact embodiment and user experience distinctively. An online study (N = 202) revealed that variations of asynchrony cause disparate responses to embodiment and user experience, with prerecorded movements distorting embodiment the most. A think-aloud study (N = 16) revealed that asynchronous conditions lead to peculiar and oftentimes negative experiences. Furthermore, asynchronous conditions in some cases maintain, rather than break, the body ownership illusion, as participants imitate the virtual body. Our results show that asynchrony in experiments on embodiment entails profound validity issues and should therefore be used with caution.
2025 · Olga Iarygina et al. · IT University of Copenhagen · Immersion & Presence Research · Identity & Avatars in XR · CHI

Theorising in HCI using Causal Models
Although the literature on Human-Computer Interaction (HCI) catalogues many theories, it offers surprisingly few tools for theorising. This paper critiques dominant approaches to engaging with theory and proposes a working model for theorising in HCI. We then present graphical causal modelling as an effective theorising tool. This includes a step-by-step guide to building causal models and examples of their use in different stages of the research process. We explain how causal models help develop method-agnostic representations of research problems using directed acyclic graphs, identify potential confounders, and construct alternative interpretations of data. Finally, we discuss their limitations and challenges for adoption by the HCI community.
2025 · Eduardo Velloso et al. · University of Sydney, School of Computer Science · Explainable AI (XAI) · Computational Methods in HCI · CHI

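As a rough illustration of the kind of graphical causal model the paper advocates, the sketch below encodes a tiny hypothetical study as a directed acyclic graph and lists common causes of the manipulated variable and the outcome. The variable names are invented, and the simple ancestor intersection is a stand-in for a full back-door analysis.

```python
# Illustrative sketch (not from the paper): a small HCI study as a DAG, with
# candidate confounders found as common ancestors of treatment and outcome.
import networkx as nx

dag = nx.DiGraph([
    ("VR experience", "Technique used"),
    ("VR experience", "Task performance"),
    ("Technique used", "Task performance"),
])

treatment, outcome = "Technique used", "Task performance"
confounders = nx.ancestors(dag, treatment) & nx.ancestors(dag, outcome)
print(sorted(confounders))  # ['VR experience']
```
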
A Concept at Work: A Review of Motivations, Operationalizations, and Conclusions in VR Research about Presence
Presence appears to be an important concept for virtual reality (VR): It is frequently measured with questionnaires, and theory and methods about it have been discussed in numerous works. Yet, it is unclear how to actually work with this concept: Why is presence important to measure, how should one choose an appropriate questionnaire, and what can be concluded about it based on findings? To answer these questions, we review how the concept is put to work in 288 VR papers from 2023 measuring presence with questionnaires. Our findings include that measuring presence is often motivated by another construct, such as user experience; the reasons for choosing a specific questionnaire are often weak or not reported at all; and high presence values are frequently used simply to validate an interaction technique. We propose recommendations for working with presence and formulate questions to direct future research.
2025 · Cleo Xiao et al. · University of Copenhagen, Department of Computer Science · Immersion & Presence Research · CHI

Deriving Selection Techniques for GUIs based on the Multiple Process Model
Designing efficient selection techniques for graphical user interfaces (GUIs) is fundamental in HCI research. We derive selection techniques based on the multiple process model, a theory that details the motor control processes during goal-directed movements. Specifically, we deduce three theoretical assumptions on how control processes of pre-planning, impulse control, and limb-target control could influence selection movements when adjusting GUI elements, including visual feedback, cursor position, and target position. Corresponding to our assumptions, we develop three techniques that hide the cursor when a target is highlighted, snap the cursor when selection begins, and expand clustered objects during selection movements. After that, we pre-register the assumptions and research methodology and evaluate the techniques in three crowdsourcing-based pointing studies. Our results show that all techniques improved selection efficiency compared to established baselines. We further discuss the design implications and reflect on how we derived techniques from theory.
2025 · Difeng Yu et al. · University of Copenhagen, Department of Computer Science · User Research Methods (Interviews, Surveys, Observation) · Prototyping & User Testing · CHI

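The three techniques are only named in the abstract. The sketch below shows one plausible reading of their core logic: suppress the cursor when a target is highlighted, snap the cursor to the nearest target centre when selection begins, and expand clustered targets. All names and thresholds are assumptions, not the study's code.

```python
# Hypothetical sketch of the three cursor/target adjustments described above.
from dataclasses import dataclass

@dataclass
class Target:
    x: float
    y: float
    radius: float

def highlighted(cursor, target):
    dx, dy = cursor[0] - target.x, cursor[1] - target.y
    return dx * dx + dy * dy <= target.radius ** 2

def draw_cursor(cursor, targets):
    # Technique 1: hide visual feedback once any target is highlighted.
    return not any(highlighted(cursor, t) for t in targets)

def start_selection(cursor, targets):
    # Technique 2: snap the cursor to the nearest target centre.
    nearest = min(targets, key=lambda t: (cursor[0] - t.x) ** 2 + (cursor[1] - t.y) ** 2)
    return (nearest.x, nearest.y)

def expand_clustered(targets, cluster_distance=30.0, scale=1.5):
    # Technique 3: enlarge targets that sit close to other targets.
    def crowded(t):
        return any(o is not t and
                   (t.x - o.x) ** 2 + (t.y - o.y) ** 2 < cluster_distance ** 2
                   for o in targets)
    return [Target(t.x, t.y, t.radius * scale if crowded(t) else t.radius) for t in targets]
```
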
Integrated Calculators: Moving Calculation into the World
Computing devices commonly act as tools, extending our abilities and shaping how we interact with the world. We investigate one such tool, the calculator, which helps with arithmetic, but also commonly offers specialized functions for conversions, formulas, or graphing. Through an analysis of calculator apps and use cases, we describe limitations of current calculators. Crucially, calculator apps remain detached from tasks, motivating us to explore how to more closely integrate calculation with the world through augmented reality (AR). AR calculators can directly use measurements and numbers from the world in calculations as well as display results of calculations in the world. We provide a conceptual account of calculation in AR, as well as video prototypes that concretize the concept across different scenarios. These examples demonstrate how moving tools like the calculator to AR offers tighter task integration and reduces the work required in translating between the world and computational tools.
2024 · Henning Pohl et al. · AR Navigation & Context Awareness · Interactive Data Visualization · Geospatial & Map Visualization · DIS

“You Can Find a Part of my Life in Every Single App”: An Interview Study of What Makes Smartphone Applications Special to Their Users
In the 1979 book “The Meaning of Things”, Csikszentmihalyi and Rochberg-Halton studied people's perception of the significance of things in the home. They emphasized how things influence the self, and vice versa. We propose that their method and analytical framework can help to understand the analogous question for smartphones: Why are some apps special to users? Using the framework, we conduct and analyze 60 interviews with people aged 21 to 41; with participants' consent, we made the anonymized transcripts publicly available. The analysis of the interviews shows that participants find apps special because they are convenient, support personal goals and social communication, help them remember, and serve emotional functions. Participants report that their identity is intertwined with certain apps, even if they are annoying or cause dependency. Importantly, we also find that participants actively regulate their use of apps through their organization and particular use strategies.
2024 · Kasper Hornbæk et al. · University of Copenhagen · Social Platform Design & User Behavior · Online Identity & Self-Presentation · CHI

“I finally felt I had the tools to control these urges”: Empowering Students to Achieve Their Device Use Goals With the Reduce Digital Distraction Workshop
Digital self-control tools (DSCTs) help people control their time and attention on digital devices, using interventions like distraction blocking or usage tracking. Most studies of DSCTs' effectiveness have focused on whether a single intervention reduces time spent on a single device. In reality, people may require combinations of DSCTs to achieve more subjective goals across multiple devices. We studied how DSCTs can address individual needs of university students (n = 280), using a workshop where students reflect on their goals before exploring relevant tools. At 1-3 month follow-ups, 95% of respondents still used at least one type of DSCT, typically applied across multiple devices, and there was substantial variation in the tool combinations chosen. We observed a large increase in self-reported digital self-control, suggesting that providing a space to articulate goals and self-select appropriate DSCTs is a powerful way to support people who struggle to self-regulate digital device use.
2024 · Ulrik Lyngs et al. · University of Oxford · Notification & Interruption Management · CHI

Flicker Augmentations: Rapid Brightness Modulation for Real-World Visual Guidance using Augmented Reality
Providing attention guidance, such as assisting in search tasks, is a prominent use for Augmented Reality. Typically, this is achieved by graphically overlaying geometrical shapes such as arrows. However, providing visual guidance can cause side effects such as attention tunnelling or scene occlusions, and introduce additional visual clutter. Alternatively, visual guidance can adjust saliency, but this comes with different challenges such as hardware requirements and environment-dependent parameters. In this work we advocate for using flicker as an alternative for real-world guidance using Augmented Reality. We provide evidence for the effectiveness of flicker from two user studies. The first compared flicker against alternative approaches in a highly controlled setting, demonstrating efficacy (N = 28). The second investigated flicker in a practical task, demonstrating feasibility with higher ecological validity (N = 20). Finally, our discussion highlights the opportunities and challenges when using flicker to provide real-world visual guidance using Augmented Reality.
2024 · Jonathan Sutton et al. · University of Copenhagen, University of Otago · AR Navigation & Context Awareness · CHI

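The core mechanism is rapid brightness modulation of the guided object. As a hypothetical sketch (the paper's flicker frequency and modulation depth are not given here), a renderer could apply a per-frame brightness multiplier like the one below to the target's pixels.

```python
# Hypothetical sketch: a brightness multiplier that oscillates at an assumed
# flicker frequency; an AR renderer would apply it to the guided region.
import math

def flicker_gain(time_s, flicker_hz=10.0, depth=0.3):
    """Brightness multiplier oscillating between 1 - depth and 1 + depth."""
    return 1.0 + depth * math.sin(2.0 * math.pi * flicker_hz * time_s)

# Example: one second of per-frame gains at 90 fps.
frame_gains = [flicker_gain(frame / 90.0) for frame in range(90)]
```
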
Using Low-frequency Sound to Create Non-contact Sensations On and In the Body
This paper proposes a method for generating non-contact sensations using low-frequency sound waves without requiring user instrumentation. This method leverages the fundamental acoustic response of a confined space to produce predictable spatial pressure distributions at low frequencies, called modes. These modes can be used to produce sensations either throughout the body, in localized areas of the body, or within the body. We first validate the location and strength of the modes, as simulated through acoustic modeling. Next, a perceptual study is conducted to show how different frequencies produce qualitatively different sensations across and within the participants' bodies. Low-frequency sound offers a new way of delivering non-contact sensations throughout the body. The results indicate a high accuracy for predicting sensations at specific body locations.
2024 · Waseem Hassan et al. · University of Copenhagen · Mid-Air Haptics (Ultrasonic) · CHI

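The abstract does not reproduce the acoustic model, but the classic rigid-wall rectangular-room mode formula conveys how low frequencies produce predictable pressure distributions in a confined space. The sketch below computes mode frequencies for an assumed room size; the paper's actual room geometry and modelling pipeline may differ.

```python
# Standard rectangular-room mode formula, shown for illustration only:
# f = (c / 2) * sqrt((nx / Lx)^2 + (ny / Ly)^2 + (nz / Lz)^2)
import itertools
import math

def room_mode_frequencies(lx, ly, lz, max_order=1, speed_of_sound=343.0):
    """Return modal frequencies (Hz) for mode orders up to max_order per axis."""
    modes = {}
    for nx, ny, nz in itertools.product(range(max_order + 1), repeat=3):
        if (nx, ny, nz) == (0, 0, 0):
            continue
        f = (speed_of_sound / 2.0) * math.sqrt(
            (nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2)
        modes[(nx, ny, nz)] = round(f, 1)
    return modes

# Example: the lowest axial mode of a 4 m x 3 m x 2.5 m room is 343 / (2 * 4) ~ 42.9 Hz.
print(room_mode_frequencies(4.0, 3.0, 2.5))
```
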
Using the Visual Language of Comics to Alter Sensations in Augmented Reality
Augmented Reality (AR) excels at altering what we see but non-visual sensations are difficult to augment. To augment non-visual sensations in AR, we draw on the visual language of comic books. Synthesizing comic studies, we create a design space describing how to use comic elements (e.g., onomatopoeia) to depict non-visual sensations (e.g., hearing). To demonstrate this design space, we built eight demos, such as speed lines to make a user think they are faster and smell lines to make a scent seem stronger. We evaluate these elements in a qualitative user study (N=20) where participants performed everyday tasks with comic elements added as augmentations. All participants stated feeling a change in perception for at least one sensation, with perceived changes detected by between four participants (touch) and 15 participants (hearing). The elements also had positive effects on emotion and user experience, even when participants did not feel changes in perception.
2024 · Arpit Bhatia et al. · University of Copenhagen · AR Navigation & Context Awareness · Interactive Narrative & Immersive Storytelling · CHI

When XR and AI Meet - A Scoping Review on Extended Reality and Artificial Intelligence
Research on Extended Reality (XR) and Artificial Intelligence (AI) is booming, which has led to an emerging body of literature in their intersection. However, the main topics in this intersection are unclear, as are the benefits of combining XR and AI. This paper presents a scoping review that highlights how XR is applied in AI research and vice versa. We screened 2619 publications from 203 international venues published between 2017 and 2021, followed by an in-depth review of 311 papers. Based on our review, we identify five main topics at the intersection of XR and AI, showing how research in each field can benefit the other. Furthermore, we present a list of commonly used datasets, software, libraries, and models to help researchers interested in this intersection. Finally, we present 13 research opportunities and recommendations for future work in XR and AI research.
2023 · Teresa Hirzle et al. · University of Copenhagen · Social & Collaborative VR · Generative AI (Text, Image, Music, Video) · CHI

Towards a Bedder Future: A Study of Using Virtual Reality while Lying Down
Most contemporary Virtual Reality (VR) experiences are made for standing users. However, when a user is lying down, either by choice or necessity, it is unclear how they can walk around, dodge obstacles, or grab distant objects. We rotate the virtual coordinate space to study the movement requirements and user experience of using VR while lying down. Fourteen experienced VR users engaged with various popular VR applications for 40 minutes in a study using a think-aloud protocol and semi-structured interviews. Thematic analysis of captured videos and interviews reveals that using VR while lying down is comfortable and usable and that the virtual perspective produces a potent illusion of standing up. However, commonplace movements in VR are surprisingly difficult when lying down, and using alternative interactions is fatiguing and hampers performance. To conclude, we discuss design opportunities to tackle the most significant challenges and to create new experiences.
2023 · Thomas van Gemert et al. · University of Copenhagen · Full-Body Interaction & Embodied Input · Immersion & Presence Research · CHI

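The abstract only states that the virtual coordinate space is rotated. Under assumed conventions (y up, z forward), a minimal sketch of that idea is to pitch the tracked head pose by 90 degrees about the play-space origin so a supine user receives a standing, forward-facing perspective; the exact transform used in the study is not described here.

```python
# Minimal sketch under assumed axis conventions; not the study's implementation.
import numpy as np

def supine_to_standing(head_position, pitch_deg=-90.0):
    """Rotate a tracked head position about the x axis of the play-space origin."""
    a = np.radians(pitch_deg)
    rot_x = np.array([[1, 0, 0],
                      [0, np.cos(a), -np.sin(a)],
                      [0, np.sin(a),  np.cos(a)]])
    return rot_x @ np.asarray(head_position, dtype=float)

# A head raised 1.2 m above the floor and facing the ceiling (+y) is mapped
# onto the horizontal axis, i.e. a forward-facing standing perspective.
print(supine_to_standing([0.0, 1.2, 0.0]))
```
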
Feellustrator: A Design Tool for Ultrasound Mid-Air Haptics
Ultrasound mid-air haptic technology provides a large space of design possibilities, as one can modulate the ultrasound intensity in a continuous 3D space at a high speed over time. Yet, the need for programming the patterns limits rapid ideation and testing of alternatives. We present Feellustrator, a graphical design tool for quickly creating and editing ultrasound mid-air haptics. With Feellustrator, one can create custom ultrasound patterns, layer or sequence them into complex effects, project them on the user's hand, and export them for use in external programs (e.g., Unity). To create the tool, we interviewed 13 designers who had from a few months to several years of experience with ultrasound, then derived a set of requirements for supporting ultrasound design. We demonstrate the design power of Feellustrator through example applications and an evaluation with 15 participants. Then, we outline future directions for ultrasound haptic design.
2023 · Hasti Seifi et al. · Arizona State University, University of Copenhagen · Mid-Air Haptics (Ultrasonic) · CHI

OVRlap: Perceiving Multiple Locations Simultaneously to Improve Interaction in VR
We introduce OVRlap, a VR interaction technique that lets the user perceive multiple places simultaneously from a first-person perspective. OVRlap achieves this by overlapping viewpoints. At any time, only one viewpoint is active, meaning that the user may interact with objects therein. Objects seen from the active viewpoint are opaque, whereas objects seen from passive viewpoints are transparent. This allows users to perceive multiple locations at once and easily switch to the one in which they want to interact. We compare OVRlap and a single-viewpoint technique in a study where 20 participants complete object-collection and monitoring tasks. We find that participants are significantly faster and move their head significantly less with OVRlap in both tasks. We propose how the technique might be improved through automated switching of the active viewpoint and intelligent viewpoint rendering.
2022 · Jonas Schjerlund et al. · University of Copenhagen · Full-Body Interaction & Embodied Input · Immersion & Presence Research · CHI

Quantifying Proactive and Reactive Button Input
When giving input with a button, users follow one of two strategies: (1) react to the output from the computer or (2) proactively act in anticipation of the output from the computer. We propose a technique to quantify reactiveness and proactiveness to determine the degree and characteristics of each input strategy. The technique proposed in this study uses only screen recordings and does not require instrumentation beyond the input logs. The likelihood distribution of the time interval between the button inputs and system outputs, which is uniquely determined for each input strategy, is modeled. Then the probability that each observed input/output pair originates from a specific strategy is estimated along with the parameters of the corresponding likelihood distribution. In two empirical studies, we show how to use the technique to answer questions such as how to design animated transitions and how to predict a player's score in real-time games.
2022 · Hyunchul Kim et al. · KAIST · Prototyping & User Testing · CHI

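The abstract describes a mixture of strategy-specific likelihood distributions over input/output time intervals, but does not specify their form. The sketch below uses an assumed two-Gaussian mixture fit with EM as a stand-in: reactive inputs lag the output by roughly a reaction time, proactive inputs cluster around the output time, and each observed interval gets a responsibility for each strategy.

```python
# Illustrative stand-in (assumed Gaussian components, not the paper's model).
import numpy as np

def fit_strategy_mixture(intervals_s, iterations=50):
    intervals = np.asarray(intervals_s, dtype=float)
    # Initial guesses: reactive lags ~0.25 s; proactive centred near 0 s, wider.
    means = np.array([0.25, 0.0])
    stds = np.array([0.05, 0.15])
    weights = np.array([0.5, 0.5])
    for _ in range(iterations):
        # E-step: responsibility of each strategy for each observed interval.
        dens = np.stack([
            weights[k] / (stds[k] * np.sqrt(2 * np.pi))
            * np.exp(-0.5 * ((intervals - means[k]) / stds[k]) ** 2)
            for k in range(2)])
        resp = dens / dens.sum(axis=0, keepdims=True)
        # M-step: update each strategy's parameters.
        for k in range(2):
            w = resp[k]
            means[k] = np.average(intervals, weights=w)
            stds[k] = max(np.sqrt(np.average((intervals - means[k]) ** 2, weights=w)), 1e-3)
            weights[k] = w.mean()
    return {"reactive": (means[0], stds[0], weights[0]),
            "proactive": (means[1], stds[1], weights[1])}, resp

params, responsibilities = fit_strategy_mixture([0.24, 0.27, 0.30, -0.05, 0.02, 0.01])
```
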
How to Evaluate Object Selection and Manipulation in VR? Guidelines from 20 Years of Studies
The VR community has introduced many object selection and manipulation techniques during the past two decades. Typically, they are empirically studied to establish their benefits over the state-of-the-art. However, the literature contains few guidelines on how to conduct such studies; standards developed for evaluating 2D interaction often do not apply. This lack of guidelines makes it hard to compare techniques across studies, to report evaluations consistently, and therefore to accumulate or replicate findings. To build such guidelines, we review 20 years of studies on VR object selection and manipulation. Based on the review, we propose recommendations for designing studies and a checklist for reporting them. We also identify research directions for improving evaluation methods and offer ideas for how to make studies more ecologically valid and rigorous.
2021 · Joanna Bergström et al. · University of Copenhagen · Immersion & Presence Research · Prototyping & User Testing · CHI

Poros: Configurable Proxies for Distant Interactions in VR
A compelling property of virtual reality is that it allows users to interact with objects as they would in the real world. However, such interactions are limited to space within reach. We present Poros, a system that allows users to rearrange space. After marking a portion of space, the distant marked space is mirrored in a nearby proxy. Thereby, users can arrange what is within their reachable space, making it easy to interact with multiple distant spaces as well as nearby objects. Proxies themselves become part of the scene and can be moved, rotated, scaled, or anchored to other objects. Furthermore, they can be used in a set of higher-level interactions such as alignment and action duplication. We show how Poros enables a variety of tasks and applications and also validate its effectiveness through an expert evaluation.
2021 · Henning Pohl et al. · University of Copenhagen · Social & Collaborative VR · Mixed Reality Workspaces · CHI

Bad Breakdowns, Useful Seams, and Face Slapping: Analysis of VR Fails on YouTube
Virtual reality (VR) is increasingly used in complex social and physical settings outside of the lab. However, not much is known about how these settings influence use, nor how to design for them. We analyse 233 YouTube videos of VR Fails to: (1) understand when breakdowns occur, and (2) reveal how the seams between VR use and the social and physical setting emerge. The videos show a variety of fails, including users flailing, colliding with surroundings, and hitting spectators. They also suggest causes of the fails, including fear, sensorimotor mismatches, and spectator participation. We use the videos as inspiration to generate design ideas. For example, we discuss more flexible boundaries between the real and virtual world, ways of involving spectators, and interaction designs to help overcome fear. Based on the findings, we further discuss the ‘moment of breakdown’ as an opportunity for designing engaging and enhanced VR experiences.
2021 · Emily Dao et al. · Monash University · Social & Collaborative VR · Immersion & Presence Research · CHI

Iteratively Adapting Avatars using Task-Integrated Optimisation
Virtual Reality allows users to embody avatars that do not match their real bodies. Earlier work has selected changes to the avatar arbitrarily, and it therefore remains unclear how to change avatars to improve users’ performance. We propose a systematic approach for iteratively adapting the avatar to perform better for a given task based on users’ performance. The approach is evaluated in a target selection task, where the forearms of the avatar are scaled to improve performance. A comparison between the optimised and real arm lengths shows a significant reduction in average tapping time by 18.7%, for forearms multiplied in length by 5.6. Additionally, with the adapted avatar, participants moved their real body and arms significantly less, and subjective measures show reduced physical demand and frustration. In a second study, we modify finger lengths for a linear tapping task to achieve a better performing avatar, which demonstrates the generalisability of the approach.
2020 · Jess McIntosh et al. · Full-Body Interaction & Embodied Input · Identity & Avatars in XR · UIST

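The abstract does not name the optimiser, only that avatar parameters are adapted iteratively based on measured task performance. The sketch below uses a simple direct-search loop as a stand-in: try candidate forearm scales, measure mean tapping time for each, keep the best, and refine the step when no candidate improves. The function `measure_tapping_time` is a hypothetical callback that would run a block of trials with the user.

```python
# Hypothetical direct-search sketch; the paper's actual optimisation procedure
# may differ. measure_tapping_time(scale) is assumed to return mean time (s).
def optimise_forearm_scale(measure_tapping_time, start_scale=1.0, step=1.0, iterations=15):
    best_scale, best_time = start_scale, measure_tapping_time(start_scale)
    for _ in range(iterations):
        candidates = [max(0.1, best_scale - step), best_scale + step]
        times = [measure_tapping_time(c) for c in candidates]
        if min(times) < best_time:
            best_time = min(times)
            best_scale = candidates[times.index(best_time)]
        else:
            step *= 0.5  # no improvement: refine the search locally
    return best_scale, best_time

# Example with a synthetic task model whose optimum lies at a long forearm scale.
simulated = lambda s: abs(s - 5.6) * 0.05 + 1.0
print(optimise_forearm_scale(simulated))
```
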