At a Glance to Your Fingertips: Enabling Direct Manipulation of Distant Objects Through SightWarp
In 3D user interfaces, reaching out to grab and manipulate something works great until it is out of reach. Indirect techniques like gaze and pinch offer an alternative for distant interaction, but do not provide the same immediacy or proprioceptive feedback as direct gestures. To support direct gestures for faraway objects, we introduce SightWarp: an interaction technique that exploits eye-hand coordination to seamlessly summon object proxies to the user’s fingertips. The idea is that after looking at a distant object, users either shift their gaze to the hand or move their hand into view—triggering the creation of a scaled near-space proxy of the object and its surrounding context. The proxy remains active until the eye–hand pattern is released. The key benefit is that users always have an option to immediately operate on the distant object through a natural, direct hand gesture. Through a user study of a 3D object docking task, we show that users can easily employ SightWarp, and that subsequent direct manipulation improves performance over gaze and pinch. Application examples illustrate its utility for 6DOF manipulation, overview-and-detail navigation, and world-in-miniature interaction. Our work contributes to expressive and flexible object interactions across near and far spaces.
2025 · Yang Liu et al. · UIST · Topics: Hand Gesture Recognition; Immersion & Presence Research; 3D Modeling & Animation

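The trigger the abstract describes is a small piece of state logic: remember the last distant object gazed at, summon a proxy once gaze and hand converge, and release it when the pattern breaks. The sketch below is our reading of that pattern, not the paper's implementation; the distance thresholds, the `Vec3` helper, and the event names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def near(a: Vec3, b: Vec3, radius: float) -> bool:
    """True if the two points lie within `radius` metres of each other."""
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2) ** 0.5 < radius

class SightWarpTrigger:
    """Toy state machine: summon a scaled proxy on the gaze-then-hand pattern."""

    def __init__(self, proxy_scale: float = 0.1):  # scale factor is an assumption
        self.proxy_scale = proxy_scale
        self.target = None    # last distant object the user gazed at
        self.active = False   # is a proxy currently summoned?

    def update(self, gazed_object, gaze_point: Vec3, hand_pos: Vec3):
        if not self.active:
            if gazed_object is not None:
                self.target = gazed_object
            # Trigger: gaze shifts to the hand, or the hand enters view
            # near the gaze point, while a distant target is remembered.
            if self.target is not None and near(gaze_point, hand_pos, 0.15):
                self.active = True
                return ("summon_proxy", self.target, self.proxy_scale)
        elif not near(gaze_point, hand_pos, 0.25):
            # Pattern released (larger radius gives hysteresis): dismiss proxy.
            self.active, self.target = False, None
            return ("release_proxy", None, None)
        return None
```
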
Spatialstrates: Cross-Reality Collaboration through Spatial Hypermedia
Consumer-level XR hardware now enables immersive spatial computing, yet most knowledge work remains confined to traditional 2D desktop environments. These worlds exist in isolation: writing emails or editing presentations favors desktop interfaces, while viewing 3D simulations or architectural models benefits from immersive environments. We address this fragmentation by combining spatial hypermedia, shareable dynamic media, and cross-reality computing to provide (1) composability of heterogeneous content and of nested information spaces through spatial transclusion, (2) pervasive cooperation across heterogeneous devices and platforms, and (3) congruent spatial representations despite underlying environmental differences. Our implementation, the Spatialstrates platform, embodies these principles using standard web technologies to bridge 2D desktop and 3D immersive environments. Through four scenarios—collaborative brainstorming, architectural design, molecular science visualization, and immersive analytics—we demonstrate how Spatialstrates enables collaboration between desktop 2D and immersive 3D contexts, allowing users to select the most appropriate interface for each task while maintaining collaborative capabilities.
2025 · Marcel Borowski et al. · UIST · Topics: Social & Collaborative VR; Mixed Reality Workspaces; Prototyping & User Testing

HydroHaptics: High-Fidelity Force-Feedback on Soft Deformable Interfaces using Hydrostatic Transmission
Soft deformable interfaces offer unique interaction potential through input flexibility and diverse forms. However, force feedback on these devices remains limited, with pneumatic approaches lacking responsiveness and precision, while microhydraulic solutions are constrained to small form factors with limited input. We present HydroHaptics, a novel platform that enables high-fidelity force feedback on deformable interfaces via hydrostatic transmission. Surpassing current state-of-the-art methods, our approach allows fine-grained force feedback on soft interfaces, achieving a 10 N force change in < 100 ms and accurate 1 N, 10 Hz oscillation rendering. We detail the system's design and implementation, highlighting its ability to maintain the inherent interaction benefits of soft interfaces. A user study (N = 18) evaluates the system's performance, showing high accuracy in rendering distinct haptic effects (82.6% accuracy) and classifying input gestures (89.1% accuracy). To showcase the platform’s versatility, we present four applications illustrating HydroHaptics' potential to enhance interaction with deformable devices and unlock novel user experiences.
2025 · James David Nash et al. · UIST · Topics: Force Feedback & Pseudo-Haptic Weight; Shape-Changing Interfaces & Soft Robotic Materials

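As a rough illustration of what the quoted figures imply (this is a toy loop, not the authors' controller, and `send_to_actuator` is a hypothetical stand-in for the hydrostatic pressure command): rendering an accurate 1 N oscillation at 10 Hz means re-sampling the target force many times per 100 ms period, so the command loop below runs at 500 Hz.

```python
import math
import time

def force_command(t: float, base: float = 5.0, amp: float = 1.0, freq: float = 10.0) -> float:
    """Target force in newtons: a 1 N amplitude sine at 10 Hz on a 5 N bias."""
    return base + amp * math.sin(2 * math.pi * freq * t)

def render_loop(duration: float = 1.0, rate_hz: float = 500.0) -> None:
    """Sample the target at 500 Hz, i.e. 50 commands per oscillation period."""
    dt, t0 = 1.0 / rate_hz, time.monotonic()
    while (t := time.monotonic() - t0) < duration:
        f = force_command(t)
        # send_to_actuator(f)  # hypothetical hydrostatic pressure command
        time.sleep(dt)
```
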
The Co-Creative Design Framework for Hybrid Intelligence
With the rapid advancement of generative AI, co-creation has emerged as a key interaction paradigm, enabling humans and AI to collaborate in creative processes. However, despite decades of research on co-creativity, recent AI developments often lack a structured framework to integrate these insights effectively. To address this gap, we propose the Co-Creative Design Framework (CCDF), which formalizes human-AI co-creation through cognitive and interaction principles. The framework is structured around three core dimensions: agency, which defines the balance of autonomy and control between user and AI; interaction dynamics, which describe the evolving relationship between collaborators and their shared creative product; and communication, which governs information exchange between human and AI. The CCDF provides a systematic approach to modeling co-creative AI and hybrid intelligence systems, defining key dimensions of variance that shape the interaction space of co-creation. In particular, it highlights agency and interaction dynamics, which have been underexplored in recent co-creative AI frameworks. This paper details the iterative development of CCDF, synthesizing insights from co-creativity literature and AI research. We apply the framework in a comparative analysis of Traditional ChatGPT, ChatGPT Canvas Mode, and DALL-E, demonstrating its ability to capture fine-grained differences in system design and user experience.
2025 · Nicholas Davis et al. · C&C · Topics: Generative AI (Text, Image, Music, Video); Creative Collaboration & Feedback Systems

From Euclidean to Topological: Visual Exploration of Transformation Types in Shape-Changing Interfaces
This pictorial explores the challenge of distinguishing between shape-changing and actuated interfaces by applying a foundational geometry framework from mathematics to categorise different types of transformations. The framework introduces six geometric transformation types: Euclidean transformations, similarity transformations, affine transformations, projective transformations, topological transformations, and non-topological transformations. Through visual analysis, the pictorial contributes a new vocabulary for describing the transformations of shape-changing interfaces. It offers reflections on which transformations can be considered shape-changing, as well as how features of the physical design might impact the perceived shape change.
2025 · Majken Kirkegaard Rasmussen · DIS · Topics: Shape-Changing Interfaces & Soft Robotic Materials

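To make the hierarchy concrete, here is a small worked example of ours (not taken from the pictorial): in 2D homogeneous coordinates, a Euclidean transformation preserves lengths, a similarity transformation adds uniform scaling, and an affine transformation preserves only parallelism.

```python
import numpy as np

# Unit square as homogeneous 2D column vectors.
square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1],
                   [1, 1, 1, 1]], dtype=float)

theta = np.pi / 6
euclidean = np.array([[np.cos(theta), -np.sin(theta), 2.0],
                      [np.sin(theta),  np.cos(theta), 1.0],
                      [0.0, 0.0, 1.0]])   # rotation + translation: lengths preserved

similarity = euclidean.copy()
similarity[:2, :2] *= 1.5                 # uniform scale: angles kept, lengths not

affine = np.array([[1.0, 0.8, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])      # shear: only parallelism preserved

for name, M in [("euclidean", euclidean), ("similarity", similarity), ("affine", affine)]:
    print(name, (M @ square)[:2].round(2))
```
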
The ML-Machine Toolkit: Empowering Teachers and Education Professionals to Explore Embodied Approaches to Teaching Machine Learning
Most HCI studies on teaching K-12 students about machine learning (ML) through embodied interaction approaches are based on design and evaluation of one-off prototypes and are not sustained in schools after the studies. In addition, the tools are seldom theoretically positioned, which makes the overall research effort largely technology-driven. This work presents an HCI toolkit, ML-Machine, for supporting teachers and education professionals in developing and conducting embodied educational activities with ML. It encapsulates theory and intermediate-level knowledge from previous HCI research in three design principles - enacting ML practices, using ML as a design material, and embodied exploration of ML - to make them readily available to be integrated into educational contexts and practices. We evaluate the toolkit through a case study with a teacher, library employees, and content developers. Based on this, we discuss how toolkits can develop HCI research efforts on teaching digital emerging technologies in K-12 education.
2025 · Karl-Emil Kjær Bilstrup et al. · DIS · Topics: Human-LLM Collaboration; Programming Education & Computational Thinking; Collaborative Learning & Peer Teaching

Sound-O-Matic: A tool for designing and implementing 3D soundscapes
Over the last two decades, soundscape design has grown into a core topic in interaction design. Sound-O-Matic is a tool that facilitates the design of real-time three-dimensional soundscapes. Diverging from conventional track-based audio tools, Sound-O-Matic is built on top of Unity, a 3D game engine, enabling designers to address the temporal and spatial dynamics inherent in soundscape design. This study showcases three enduring soundscape instances and one transitory one, integrated into diverse settings: 1) a greenhouse, 2) a bunker, 3) a playground, and 4) a passenger train. These case studies illustrate how Sound-O-Matic manages a spectrum of design considerations encompassing the spatial configuration of speakers, the temporal dynamics of the soundscape, interaction, and user experience. The discussion compares the four cases, highlights the diversity of the designs, and concludes with a brief discussion of potential further development of the tool.
2025 · Jonas Oxenbøll Petersen et al. · DIS · Topics: Music Composition & Sound Design Tools

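Sound-O-Matic itself is built on Unity; as a language-neutral sketch of one spatial consideration the abstract mentions, the snippet below computes per-speaker gain for a virtual source using a standard inverse-distance rolloff. The rolloff model, speaker names, and positions are illustrative assumptions, not the tool's actual behaviour.

```python
import math

def speaker_gain(source, speaker, ref_dist: float = 1.0) -> float:
    """Gain in [0, 1]: flat inside ref_dist, falling off as 1/distance beyond it."""
    d = math.dist(source, speaker)
    return min(1.0, ref_dist / max(d, ref_dist))

# Hypothetical speaker layout for a greenhouse-like installation.
speakers = {"greenhouse_nw": (0.0, 0.0, 3.0), "greenhouse_se": (8.0, 0.0, 3.0)}
source = (2.0, 1.5, 2.0)  # hypothetical position of a virtual bird call

for name, pos in speakers.items():
    print(name, round(speaker_gain(source, pos), 3))
```
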
Online-EYE: Multimodal Implicit Eye Tracking Calibration for XR
Unlike other inputs for extended reality (XR) that work out of the box, eye tracking typically requires custom calibration per user or session. We present a multimodal approach for implicit eye tracker calibration in VR, leveraging UI interaction for continuous, background calibration. Our method analyzes gaze data alongside controller interaction with UI elements and employs ML techniques to continuously refine the calibration matrix without interrupting users' current tasks, potentially eliminating the need for explicit calibration. We demonstrate the accuracy and effectiveness of this implicit approach across various tasks and real-time applications, achieving eye tracking accuracy comparable to native, explicit calibration. While our evaluation focuses on VR and controller-based interactions, we anticipate the broader applicability of this approach to various XR devices and input modalities.
2025 · Baosheng James Hou et al. (Google; Lancaster University, Computing and Communications) · CHI · Topics: Eye Tracking & Gaze Interaction; Immersion & Presence Research

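The core idea is that routine UI interactions supply labelled gaze samples in the background. Here is a minimal sketch of that idea, assuming a simple affine correction fitted by least squares (the paper's actual ML pipeline is not specified in the abstract):

```python
import numpy as np

class ImplicitCalibrator:
    """Accumulates (raw gaze, UI target) pairs and fits an affine correction."""

    def __init__(self):
        self.raw, self.target = [], []

    def add_interaction(self, raw_gaze_xy, ui_element_xy):
        """Record a sample when the user clicks a UI element while looking at it."""
        self.raw.append([*raw_gaze_xy, 1.0])   # homogeneous raw gaze point
        self.target.append(list(ui_element_xy))

    def calibration_matrix(self) -> np.ndarray:
        """2x3 affine map minimising ||raw @ A.T - target||^2 (needs >= 3 pairs)."""
        A, *_ = np.linalg.lstsq(np.array(self.raw), np.array(self.target), rcond=None)
        return A.T

    def correct(self, raw_gaze_xy) -> np.ndarray:
        """Apply the current correction to a raw gaze point."""
        return self.calibration_matrix() @ np.array([*raw_gaze_xy, 1.0])
```
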
De-centering Inclusivity: Fitting Design for Aut-Ethnography
Neurodiversity perspectives have in recent years made headway in HCI, broadening the role of autistic people. Outside HCI, an essential tool of the neurodiversity movement is the use of first-person methods such as autoethnography. This paper explores how interaction design may help ease the burden of conducting autistic autoethnography (aut-ethnography), and how aut-ethnography may contribute to HCI. Taking an autoethnographic approach in the design of a set of recording devices, we identify three design sensitivities when designing for aut-ethnography: inertial, sensory, and social fit. We further nuance these in an exploratory trial with other autistic people. We conclude that designing for the context of aut-ethnography requires significant adaptability of the designed artifacts in order to facilitate maintenance of existing rhythms in practice and adhere to fine-grained idiosyncratic preferences and ideals of practicing care and fairness.
2025 · Sarah Fjelsted Alrøe et al. (Aarhus University, Department of Digital Design & Information Studies) · CHI · Topics: Cognitive Impairment & Neurodiversity (Autism, ADHD, Dyslexia); Empowerment of Marginalized Groups; Technology Ethics & Critical HCI

Micro-Phenomenology as a Method for Studying User Experience in Human-Computer Interaction
We examine how micro-phenomenology, a qualitative research method developed to attend to, articulate, and analyse lived experience in fine detail, can be employed to study the experience of using digital systems. Micro-phenomenological interviews unpack the specific experiences of interviewees in fine-grained detail and have previously been acknowledged as a potent tool for Human-Computer Interaction research. More recently, the method has been extended to comprise a structured analysis method to systematically analyse the temporal unfolding and qualitative dimensions of experiences captured by the interviews. This is the first paper demonstrating the combined use of interviews and analysis, via a case in which they were employed to examine the experience of using WeUsedTo, a website for sharing experiences related to the COVID-19 pandemic. On this basis, we discuss the potential of the method for eliciting and understanding experiential aspects of interactive systems, particularly pertaining to embodiment, temporality, attention, agency, and the systemic nature of experience.
2025 · Katrin Heimann et al. (Aarhus University, Interacting Minds Centre) · CHI · Topics: User Research Methods (Interviews, Surveys, Observation)

On-body Icons: Designing a 3D Interface for Launching Apps in Augmented Reality
On-body tapping provides a quick way to launch augmented reality (AR) apps using virtual shortcuts placed on the user’s skin, clothes, and jewelry. While prior work has focused on tapping performance, social acceptance, and sensing techniques, users’ behaviour in placing shortcuts on their body has been underexplored. In this work, we propose On-body Icons — a novel interface for launching apps by touching virtual icons placed across the user’s entire body — and use it to investigate icon locations, the reasons for chosen placements, and users’ attitudes towards the feature. Results of the qualitative study conducted with 24 participants demonstrated that people employ a wide variety of placement strategies that balance memorability of the locations with accuracy and comfort of reaching the icons. We discuss these findings in regard to the current understanding of memorability of icon placement, placement appropriateness, and privacy, and offer design implications for similar features in spatial applications.
2025 · Uliana Tsimbalistaia et al. (HSE University) · CHI · Topics: AR Navigation & Context Awareness; On-Skin Display & On-Skin Input

To Use or Not to Use: Impatience and Overreliance When Using Generative AI Productivity Support Tools
Generative AI has the potential to assist people with completing various tasks, but increased productivity is not guaranteed due to challenges such as uncertainty in output quality and unclear processing time. Through an online crowdsourced experiment (N=508), leveraging a “paint by numbers” task to simulate properties of GenAI assistance, we explore how, and how well, users decide whether or not to use automation to maximize their productivity given varying waiting times and output quality. We observed gaps between users’ actual choices and their optimal choices, and characterized these gaps as the “gulf of impatience” and the “gulf of overreliance”. We also distilled strategies that participants adopted when making their decisions. We discuss design considerations in supporting users to make more informed decisions when interacting with GenAI tools and making these tools more useful for improving users’ task performance, productivity, and satisfaction.
2025 · Han Qiao et al. (Autodesk Research) · CHI · Topics: Generative AI (Text, Image, Music, Video); AI-Assisted Decision-Making & Automation

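A toy break-even model (our framing for illustration, not the paper's experimental design) makes the two gulfs concrete: declining AI when its expected cost is below the manual cost is impatience, and accepting it when the expected cost is above is overreliance.

```python
def expected_time_with_ai(wait: float, p_good: float, fix_time: float) -> float:
    """Wait for the output, then repair it with probability (1 - p_good)."""
    return wait + (1.0 - p_good) * fix_time

def should_use_ai(wait: float, p_good: float, fix_time: float, manual_time: float) -> bool:
    """Optimal choice under the toy model: use AI iff its expected time is lower."""
    return expected_time_with_ai(wait, p_good, fix_time) < manual_time

# A 30 s wait with 70%-usable output and a 60 s repair beats 120 s of manual work:
# 30 + 0.3 * 60 = 48 < 120.
print(should_use_ai(wait=30, p_good=0.7, fix_time=60, manual_time=120))  # True
```
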
PinchCatcher: Enabling Multi-selection for Gaze+Pinch
This paper investigates multi-selection in XR interfaces based on eye and hand interaction. We propose enabling multi-selection using different variations of techniques that combine gaze with a semi-pinch gesture, allowing users to select multiple objects while on the way to a full pinch. While our exploration is based on the semi-pinch mode for activating a quasi-mode, we explore four methods for confirming subselections in multi-selection mode, varying in effort and complexity: dwell-time (SemiDwell), swipe (SemiSwipe), tilt (SemiTilt), and non-dominant hand input (SemiNDH), and compare them to a baseline technique. In the user study, we evaluate their effectiveness in reducing task completion time, errors, and effort. The results indicate the strengths and weaknesses of each technique, with SemiSwipe and SemiDwell as the most preferred methods by participants. We also demonstrate their utility in file managing and RTS gaming application scenarios. This study provides valuable insights to advance 3D input systems in XR.
2025 · Jinwook Kim et al. (KAIST, Graduate School of Culture Technology) · CHI · Topics: Hand Gesture Recognition; Eye Tracking & Gaze Interaction; Mixed Reality Workspaces

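As we read the abstract, a semi-pinch opens a quasi-mode in which each confirmation method adds a subselection, and a full pinch commits the set. Below is a sketch of the dwell variant (SemiDwell); the pinch-aperture thresholds, dwell time, and release behaviour are assumptions.

```python
class SemiDwell:
    """Quasi-mode multi-selection: semi-pinch holds the mode, dwell subselects."""

    SEMI, FULL = 0.3, 0.9   # assumed pinch-aperture thresholds in [0, 1]

    def __init__(self, dwell_s: float = 0.5):
        self.dwell_s = dwell_s
        self.selection = set()
        self.gazed, self.gaze_t = None, 0.0

    def update(self, pinch: float, gazed_object, t: float):
        if pinch >= self.FULL:                    # full pinch: commit everything
            done, self.selection = self.selection, set()
            return ("commit", done)
        if self.SEMI <= pinch < self.FULL:        # quasi-mode held
            if gazed_object is not self.gazed:    # gaze moved: restart dwell timer
                self.gazed, self.gaze_t = gazed_object, t
            elif gazed_object is not None and t - self.gaze_t >= self.dwell_s:
                self.selection.add(gazed_object)  # dwell elapsed: subselect
        else:                                     # semi-pinch released: cancel
            self.gazed = None
            self.selection.clear()
        return ("pending", set(self.selection))
```
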
Flexible Platforms? An Ethnographic Study of Flexible Scheduling in Platform-Mediated Delivery
This paper explores flexibility in platform-mediated work through a multi-sited ethnographic study of delivery workers' "flexible scheduling" in three European countries: Denmark, Finland, and Malta. While workers generally value the ability to schedule flexibly, this flexibility is constrained by structural factors such as piece-rate remuneration, demand fluctuations, surge pricing, and income dependency. The constraints result in markedly different experiences across the different instantiations of the same, standardised delivery platform: workers in Denmark benefit from the system, in Finland workers face seasonal precarity, and in Malta workers endure exploitative cycles of long hours and low pay. The findings demonstrate how the same platform's standardised design can produce divergent outcomes in local contexts. The paper highlights the need for platform designers and regulators to balance the benefits of flexible scheduling with its trade-offs, ensuring that flexibility supports worker well-being as flexible platforms manifest locally.
2025 · Kalle Kusk (Aarhus University, Digital Design & Information Studies; The University of Texas at Austin, School of Information) · CHI · Topics: Gig Economy Platforms; Empowerment of Marginalized Groups

How Do Hackathons Foster Creativity? Towards Automated Evaluation of Creativity at Scale
Hackathons have become popular collaborative events for accelerating the development of creative ideas and prototypes. There are several case studies showcasing creative outcomes across domains such as industry, education, and research. However, there are no large-scale studies on creativity in hackathons that could advance theory on how hackathon formats lead to creative outcomes. We conducted a computational analysis of 193,353 hackathon projects. By operationalizing creativity through usefulness and novelty, we refined our dataset to 10,363 projects, allowing us to analyze how participant characteristics, collaboration patterns, and hackathon setups influence the development of creative projects. The contribution of our paper is twofold: We identified means for organizers to foster creativity in hackathons. We also explore the use of large language models (LLMs) to augment the evaluation of creative outcomes and discuss challenges and opportunities of doing this, which has implications for creativity research at large.
2025 · Jeanette Falk et al. (Aalborg University, Computer Science) · CHI · Topics: Generative AI (Text, Image, Music, Video); Human-LLM Collaboration; Crowdsourcing Task Design & Quality Control

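The abstract leaves the LLM-based evaluation protocol to the paper; the following generic sketch shows what such a rater can look like, with `llm` a placeholder for any text-completion function, and the prompt and 1-5 scales our own assumptions.

```python
import json

PROMPT = (
    "Rate this hackathon project description on two 1-5 scales and reply in "
    'JSON as {{"novelty": n, "usefulness": n}}.\n\nDescription:\n{desc}'
)

def rate_creativity(desc: str, llm) -> dict:
    """`llm` is any callable taking a prompt string and returning the model's reply."""
    reply = llm(PROMPT.format(desc=desc))
    scores = json.loads(reply)
    return {k: int(scores[k]) for k in ("novelty", "usefulness")}

# Usage: rate_creativity(project_description, llm=my_completion_function)
```
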
Spatial Heterogeneity in Distributed Mixed Reality Collaboration
Collaborative Mixed Reality (MR) enables embodied meetings for distributed collaborators working across a variety of locations. However, providing a coherent experience for all users regardless of the spatial configurations of their respective physical environments is a central challenge. We present the Spatial Heterogeneity Framework, which breaks the problem into four core components: the activity zones, heterogeneity ladder, blended proxemics, and MR solutions matrix. We explain the interplay between these components, demonstrating their interconnectivity via a case study. Our framework enables researchers to navigate differences and trade-offs between solutions for distributed MR collaboration. It also supports designers to think about the role of space, technology, and social behaviours in MR collaboration. Ultimately, our contributions advance the field by conceptualising the challenges of spatial heterogeneity and strategies to overcome them.
2025 · Emily Wong et al. (The University of Sydney, School of Computer Science; The University of Melbourne, School of Computing and Information Systems) · CHI · Topics: Mixed Reality Workspaces; Context-Aware Computing

Breaking Barriers or Building Dependency? Exploring Team-LLM Collaboration in AI-infused Classroom Debate
Classroom debates are a unique form of collaborative learning characterized by fast-paced, high-intensity interactions that foster critical thinking and teamwork. Despite the recognized importance of debates, the role of AI tools, particularly LLM-based systems, in supporting this dynamic learning environment has been under-explored in HCI. This study addresses this opportunity by investigating the integration of LLM-based AI into real-time classroom debates. Over four weeks, 22 students in a Design History course participated in three rounds of debates with support from ChatGPT. The findings reveal how learners prompted the AI to offer insights, collaboratively processed its outputs, and divided labor in team-AI interactions. The study also surfaces key advantages of AI usage—reducing social anxiety, breaking communication barriers, and providing scaffolding for novices—alongside risks, such as information overload and cognitive dependency, which could limit learners' autonomy. We thereby discuss a set of nuanced implications for future HCI exploration.
2025 · Zihan Zhang et al. (Southern University of Science and Technology, School of Design) · CHI · Topics: Human-LLM Collaboration; Collaborative Learning & Peer Teaching

A Cross-Country Analysis of GDPR Cookie Banners and Flexible Methods for Scraping Them
Online tracking remains problematic, with compliance and ethical issues persisting despite regulatory efforts. Consent interfaces, the visible manifestation of this industry, have seen significant attention over the years. We present robust automated methods to study the presence, design, and third-party suppliers of consent interfaces at scale, along with the web service consent-observatory.eu for conducting such studies. We examine the top 10,000 websites across 31 countries under the ePrivacy Directive and GDPR (n = 254,148). Our findings show that 67% of websites use consent interfaces, but only 15% are minimally compliant, mostly because they lack a reject option. Consent management platforms (CMPs) are powerful intermediaries in this space: 67% of interfaces are provided by CMPs, and three organisations hold 37% of the market. There is little evidence that regulators’ guidance and fines have impacted compliance rates, but 18% of compliance variance is explained by CMPs. Researchers should take an infrastructural perspective on online tracking and study the factual control of intermediaries to identify effective leverage points.
2025 · Midas Nouwens et al. (Aarhus University, Digital Design and Information Studies) · CHI · Topics: Algorithmic Transparency & Auditability; Privacy by Design & User Control; Privacy Perception & Decision-Making

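The paper's crawler is far more robust than this, but one basic measurement step can be sketched as: fetch a page and look for script URLs belonging to known consent management platforms. The host list and regex below are illustrative assumptions, not the study's detection rules.

```python
import re
import requests

# Illustrative mapping from script-hosting domains to CMP suppliers.
CMP_HOSTS = {
    "cdn.cookielaw.org": "OneTrust",
    "consent.cookiebot.com": "Cookiebot",
    "quantcast.mgr.consensu.org": "Quantcast",
}

def detect_cmps(url: str) -> set:
    """Return the names of known CMPs whose scripts appear in the page's HTML."""
    html = requests.get(url, timeout=10).text
    srcs = re.findall(r'<script[^>]+src="([^"]+)"', html)  # naive: static HTML only
    return {name for host, name in CMP_HOSTS.items() if any(host in s for s in srcs)}
```
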
Datamancer: Bimanual Gesture Interaction in Multi-Display Ubiquitous Analytics Environments
We introduce Datamancer, a wearable device enabling bimanual gesture interaction across multi-display ubiquitous analytics environments. Datamancer addresses the gap in gesture-based interaction within data visualization settings, where current methods are often constrained by limited interaction spaces or the need to install bulky tracking setups. Datamancer integrates a finger-mounted pinhole camera and a chest-mounted gesture sensor, allowing seamless selection and manipulation of visualizations on distributed displays. By pointing at a display, users can acquire the display and engage in various interactions, such as panning, zooming, and selection, using both hands. Our contributions include (1) an investigation of the design space of gestural interaction for physical ubiquitous analytics environments; (2) a prototype implementation of the Datamancer system that realizes this model; and (3) an evaluation of the prototype through demonstration of application scenarios, an expert review, and a user study.
2025 · Biswaksen Patnaik et al. (University of Maryland College Park, Department of Computer Science) · CHI · Topics: Full-Body Interaction & Embodied Input; Interactive Data Visualization; Context-Aware Computing

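The prototype acquires displays with a finger-mounted camera; as a simplified stand-in for that step, the sketch below models acquisition as picking the display whose centre lies closest in angle to a pointing ray. All names, positions, and thresholds are assumptions.

```python
import numpy as np

def acquire_display(finger_pos, finger_dir, displays: dict, max_angle_deg: float = 10.0):
    """Return the name of the display nearest (in angle) to the pointing ray, or None."""
    d = np.asarray(finger_dir, dtype=float)
    d /= np.linalg.norm(d)
    best, best_angle = None, np.radians(max_angle_deg)
    for name, centre in displays.items():
        v = np.asarray(centre, dtype=float) - np.asarray(finger_pos, dtype=float)
        angle = np.arccos(np.clip(d @ (v / np.linalg.norm(v)), -1.0, 1.0))
        if angle < best_angle:
            best, best_angle = name, angle
    return best

print(acquire_display((0, 1.5, 0), (0, 0, 1),
                      {"wall": (0, 2, 3), "tabletop": (1, 1, 1)}))  # -> "wall"
```
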
Co-Designing Multimodal Tools for Radically Mobile Hybrid Meetings
Hybrid meetings have become common practice in collaborative work environments. However, they are constrained by the fixed spatial configurations of videoconferencing technology, which limits opportunities for mobile and spontaneous interactions, qualities that are critical to successful collaboration. In this paper, we explore the concept of radically mobile hybrid meetings. Our work investigates the design space of multimodal devices as mobile alternatives to traditional videoconferencing. We conducted three group co-design sessions, where participants prototyped mobile hybrid meeting technologies to explore how such meetings could be supported. From these workshops, we derive design fictions envisioning future uses of these technologies, which we evaluate with a questionnaire to spark reflections on future mobile hybrid collaboration tools and practices. We contribute an initial exploration of the design space for radically mobile hybrid meetings, laying the groundwork for developing tools that enable spontaneous, effective, and inclusive collaboration in hybrid mobile settings.
2025 · Julia Kleinau et al. (Aarhus University) · CHI · Topics: Remote Work Tools & Experience; Distributed Team Collaboration