Peer or Steer: A Pilot Study Exploring Human-AI Collaboration in Creative Fields
Recent years have brought immense advancements in the development of Artificial Intelligence (AI) technology and its applications. This has also led to a boost in interest in human-AI collaboration and co-creation. Specifically in the creative field, where it is usually not sufficient for an AI to solve given problems or produce deterministic output, this next-level interaction between humans and AI comes with huge potential but also major challenges, related to aspects such as role and power distribution, trust and reliance, or efficiency and effectiveness. In this paper, we present a pilot study on human-AI collaboration in three different creative fields (programming, marketing texting, and UI design), addressing User Experience, Technology Acceptance and, specifically, Perception of Collaboration. The study is based on a theoretical framework we derived from prior research through a focused, systematic literature review, and is intended to raise research questions and identify related hypotheses informing future empirical work.
2025 | David Lang et al. | CSCW | Topics: Human-AI (and Robot!) Collaboration

Visual Sampling Behavior Does not Explain Risk Perception: A Data-Driven xAI Investigation
How do drivers perceive risk? Understanding which situations and factors cause drivers to perceive situations as critical can improve our understanding of road user behavior and inform automated driving technology. To investigate the factors that shape drivers' risk perception, we conducted an eye-tracking study with 27 participants who watched dashcam videos and continuously rated the perceived risk of various driving situations. Using the resulting dataset, we developed a computer vision-based machine learning approach that generates explainable predictions of perceived risk from video and eye-tracking data. Our SHAP analysis reveals that the proximity of objects and the number of cars in a scene are the most significant contributors to perceived criticality. Most interestingly, while people tend to sample similar objects in critical situations, their risk perception remains highly personal, making visual sampling behavior a weak predictor of perceived risk. Overall, our explanations reveal non-linear insights beyond previous work, suggesting that risk perception is shaped not only by visual input but primarily by cognitive processes, which is in line with theoretical models of Situation Awareness. The dataset, source code, and a comprehensive usage guide are publicly available: https://osf.io/cwd6h/?view_only=31a8173570de4b0383f55d52dc784492
2025 | Martin Lorenz et al. | AutoUI | Topics: Eye Tracking & Gaze Interaction, Explainable AI (XAI), AI-Assisted Decision-Making & Automation

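The SHAP analysis above attributes each risk prediction to its input features using Shapley values. As a minimal, self-contained sketch of that principle (not the paper's pipeline; the model, features, and weights below are invented for illustration), exact Shapley values can be computed by averaging a feature's marginal contribution over all coalitions of the remaining features:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for `model` over len(x) features.

    Features absent from a coalition are set to their baseline value.
    Only feasible for small feature counts (2^n coalitions).
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for s in combinations(others, k):
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                with_i = [x[j] if j in s or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in s else baseline[j] for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

# Hypothetical toy "perceived risk" model: object proximity and car count
# dominate, gaze dispersion contributes little (echoing the paper's finding).
def risk(features):
    proximity, n_cars, gaze = features
    return 0.6 * proximity + 0.3 * n_cars + 0.05 * gaze

phi = shapley_values(risk, x=[0.9, 0.5, 0.4], baseline=[0.0, 0.0, 0.0])
```

For a linear model like this toy, each Shapley value reduces to the feature's weighted contribution, and the values sum to the difference between the prediction and the baseline prediction; tree-based SHAP works from the same axioms.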
Pathways of Desire: Enhancing Navigation and Sense of Community Through Player-Generated Desire Paths
Navigating is essential in many video games. However, previous work suggests that many games still suffer from navigational problems that decrease enjoyment. In this paper, we focus on "Desire Paths", informal trails collectively created by pedestrians that represent the most convenient route. While they are known to be useful wayfinding aids, it is unclear how they affect navigation and experience in games. We therefore investigated diegetically visualized player trajectory data in a 2D game through virtual footprints that remained persistently visible for all subsequent players. Through a mixed-methods study involving 50 participants, we found that virtual footprints improved navigation by guiding players to points of interest and reducing disorientation for early players. However, visual clutter from excessive footprints reduced their effectiveness in later stages. They also fostered a sense of community, especially for late-stage players, and prompted exploration of yet undiscovered areas. We further discuss design implications and future research directions.
2025 | Michael Lankes et al. | University of Applied Sciences Upper Austria, Department of Digital Media | CHI | Topics: Gamification Design, Multiplayer & Social Games

Investigating the Impact of Customized Avatars and the Proteus Effect during Physical Exercise in Virtual Reality
Virtual reality (VR) allows users to embody avatars. Known as the Proteus effect, an avatar's visual appearance can influence users' behavior and perception. Recent work suggests that athletic avatars decrease perceptual and physiological responses during VR exercise. However, such effects can fail to occur when users do not experience avatar ownership and identification. While customized avatars increase body ownership and identification, it is unclear whether they strengthen the Proteus effect. We conducted a study with 24 participants to determine the effects of athletic and non-athletic avatars that were either customized or randomly assigned. We developed a customization editor that allows users to create customized avatars. We found that customized avatars reduced perceived exertion. We also found that athletic avatars decreased heart rate while holding weights, but only when customized. The results indicate that customized avatars can positively influence users during physical exertion. We discuss the utilization of avatar customization in VR exercise systems.
2025 | Martin Kocur et al. | University of Applied Sciences Upper Austria | CHI | Topics: Identity & Avatars in XR, Fitness Tracking & Physical Activity Monitoring

Hand Grips and Mobile Menus: Exploring Perceived Usability and User Preferences
This paper investigates the relationship between menu design and hand positions with respect to end users' assessments, with a main focus on usability, user preference, and potential adaptations to different hand positions. Sixteen (N = 16) participants first took part in a co-design workshop in which they proposed menu designs for different hand grips. Based on these proposals, a selection of menu designs was derived and implemented in a mobile app prototype, with which a menu selection study was conducted to investigate the performance and perceived usability of the menus in one-handed and two-handed interaction. The results include user ratings and performance measures, which highlight the need for mobile menus to be adapted to different hand positions. Based on this, we derive design recommendations for more adaptive, user-centric, and ergonomic mobile menu designs that match users' natural interactions.
2024 | Tamara Zieher et al. | MobileHCI | Topics: Hand Gesture Recognition, Prototyping & User Testing

Changing Lanes Toward Open Science: Openness and Transparency in Automotive User Research
We review the state of open science and the perspectives on open data sharing within the automotive user research community. Openness and transparency are critical not only for judging the quality of empirical research, but also for accelerating scientific progress and promoting an inclusive scientific community. However, there is little documentation of these aspects within the automotive user research community. To address this, we report two studies that identify (1) community perspectives on motivators and barriers to data sharing, and (2) how openness and transparency have changed in papers published at AutomotiveUI over the past 5 years. We show that while open science is valued by the community and openness and transparency have improved, overall compliance is low. The most common barriers are legal constraints and confidentiality concerns. Although research published at AutomotiveUI relies more on quantitative methods than research published at CHI, openness and transparency are not as well established. Based on our findings, we provide suggestions for improving openness and transparency, arguing that the motivators for open science must outweigh the barriers. All supporting materials are freely available at: https://osf.io/zdpek/
2024 | Patrick Ebel et al. | AutoUI | Topics: Research Ethics & Open Science

Development and Evaluation of Advanced Cyclist Assistance Systems on a Bicycle Simulator
Research on cycling safety has recently gained the attention of the HCI community. While there have been multiple proposals for automated driving features on bikes, we are unaware of a project that systematically aims to translate and evaluate driver assistance systems from the automotive to the bike domain to promote cycling safety in traffic. Thus, we implemented an adaptive cruise control and a lane-keeping/centering system in hardware and software on a motion-based bicycle simulator and investigated their potential in a virtual reality experiment. Based on performance measurements and subjective ratings, the results showed significant improvements in technology acceptance, subjective workload, and driving performance for the cruise control. In contrast, the lane-centering and lane-keeping features were rated significantly worse than the baseline without such assistance. The paper concludes with a critical reflection on automated driving features for bicycles and a list of recommendations for future projects in this field.
2024 | Yu Wang et al. | AutoUI | Topics: External HMI (eHMI) — Communication with Pedestrians & Cyclists, Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS), Micromobility (E-bike, E-scooter) Interaction

What Characterizes "Situations" in Situation Awareness? Findings from a Human-centered Investigation
Situation Awareness (SA) is one of the core concepts describing drivers' interaction with vehicles, and a lack of SA has contributed to multiple incidents with automated systems. Despite existing definitions and measurements, little is known about what constitutes the concept of situations from users' perspective, i.e., do they have a similar or different understanding of situation dynamics? We therefore conducted a video-based experiment in which participants had to mark the onset of new situations from their perspective, provide a continuous criticality rating, and justify their decisions in a post-test interview. Our results indicate that the understanding of situations, their complexity, and their duration is quite diverse between people and independent of properties such as age, gender, or driving experience, while being partly influenced by the road type. Additionally, we found correlations between subjective situation durations, criticality ratings, and algorithm output, which can be exploited by future applications and experiments.
2024 | Philipp Michael Markus Peter Asteriou et al. | AutoUI | Topics: Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS), Eye Tracking & Gaze Interaction, Human Pose & Activity Recognition

Only Trust a Hidden Wizard: Investigating the Effects of Wizard Visibility in Automotive Wizard of Oz Studies
The Wizard-of-Oz method has been widely used in recent years, as it allows researchers to mimic automated vehicles with relatively few resources. In some studies, it is challenging to ensure that the wizard remains fully hidden from participants, despite this being a crucial aspect of such experiments. To determine whether participants' awareness of the wizard influences the outcomes of these studies, we conducted an experiment investigating participants' crossing behavior and subjective perception of a remote-controlled automated vehicle. Participants were exposed to two conditions: in one, they solely focused on a simulated vehicle driving autonomously; in the other, they observed a wizard with a remote control and were instructed to imagine the car was automated. Results based on scales for user experience, acceptance, and trust, as well as crossing behavior, were similar across both conditions. However, participants' knowledge of the wizard necessitates careful interpretation when system errors are simulated. We conclude with recommendations for future Wizard-of-Oz experiments.
2024 | Heike Christiane Kotsios et al. | AutoUI | Topics: Teleoperated Driving

Voice Assistants' Accountability through Explanatory Dialogues
As voice assistants (VAs) become more advanced, leveraging Large Language Models (LLMs) and natural language processing, their potential for accountable behavior expands. Yet, the long-term situational effectiveness of VAs' accounts when errors occur remains unclear. In our 19-month exploratory study with 19 households, we investigated the impact of an Alexa feature that allows users to inquire about the reasons behind its actions. Our findings indicate that Alexa's accounts are often single, decontextualized responses that, over the long term, led users to alternative repair strategies such as turning off the device rather than initiating a dialogue about what went wrong. Through role-playing workshops, we demonstrate that VA interactions should facilitate explanatory dialogues as dynamic exchanges that consider a range of speech acts, recognizing users' emotional states and the context of interaction. We conclude by discussing the implications of our findings for the design of accountable VAs.
2024 | Fatemeh Alizadeh et al. | CUI | Topics: Intelligent Voice Assistants (Alexa, Siri, etc.), Multilingual & Cross-Cultural Voice Interaction, Explainable AI (XAI)

Threads of Traceability: Textile IDs in the Fabric of Sustainable Fashion
Textile fabrication, an ancient human technology, has evolved over millennia, transitioning from a focus on affordability and speed to a current emphasis on sustainability. With Textile ID, we envision a digital garment passport that is seamlessly incorporated into textile surfaces as a design element, bridging the gap between sustainability and consumer engagement and transforming garments into interactive storytellers of their ecological journey. The visual surface of the garment can be scanned with a smartphone to access a unique identifier embedded within the fabric, which provides essential information about the product's lifecycle. This work discusses the design space of various visual and textile parameters and proposes design possibilities and insights for implementation. Finally, we showcase a set of sample garment designs and provide design recommendations for designers to use in their future work.
2024 | Mira Alida Haberfellner et al. | DIS | Topics: Customizable & Personalized Objects, Ecological Design & Green Computing

Supporting Task Switching with Reinforcement Learning
Attention management systems aim to mitigate the negative effects of multitasking. However, sophisticated real-time attention management is yet to be developed. We present a novel concept for attention management with reinforcement learning that automatically switches tasks. The system was trained with a user model based on principles of computational rationality. Due to this user model, the system derives a policy that schedules task switches by considering human constraints such as visual limitations and reaction times. We evaluated its capabilities in a challenging dual-task balancing game. Our results confirm our main hypothesis that an attention management system based on reinforcement learning can significantly improve human performance compared to humans' self-determined interruption strategies. The system raised the frequency and difficulty of task switches compared to the users while still yielding a lower subjective workload. We conclude by arguing that the concept can be applied to a great variety of multitasking settings.
2024 | Alexander Lingler et al. | University of Applied Sciences Upper Austria | CHI | Topics: Privacy by Design & User Control, Notification & Interruption Management

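The entry above derives a task-switching policy from a modeled user. The core mechanic can be illustrated with a deterministic toy: a dual-task MDP in which attending one task lets the other decay, solved here with value iteration as a stand-in for the paper's RL training (the environment, its numbers, and the solver are invented for illustration):

```python
# Toy dual-task MDP: two tasks with "health" levels 1..9. Attending a task
# raises its level by 2 (capped at 9) while the neglected task decays by 1;
# the episode fails when either level reaches 0.
GAMMA = 0.95
STATES = [(a, b) for a in range(1, 10) for b in range(1, 10)]

def step(state, action):
    """Advance one time step; returns (next_state or None if failed, reward)."""
    nxt = list(state)
    nxt[action] = min(nxt[action] + 2, 9)  # attended task recovers
    nxt[1 - action] -= 1                   # neglected task decays
    if min(nxt) <= 0:
        return None, -10.0                 # failure penalty
    return tuple(nxt), 1.0                 # reward for surviving the step

# Value iteration until convergence.
V = {s: 0.0 for s in STATES}
while True:
    delta = 0.0
    for s in STATES:
        best = max(r + (GAMMA * V[nxt] if nxt else 0.0)
                   for nxt, r in (step(s, a) for a in (0, 1)))
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < 1e-9:
        break

def policy(state):
    """Greedy switching policy: index of the task to attend next."""
    def q(a):
        nxt, r = step(state, a)
        return r + (GAMMA * V[nxt] if nxt else 0.0)
    return max((0, 1), key=q)

# The derived policy rescues whichever task is about to fail, e.g.
# policy((1, 5)) == 0 and policy((5, 1)) == 1.
```

The same switching structure emerges when the policy is learned by trial and error (e.g., Q-learning) instead of computed by dynamic programming; the paper additionally encodes human constraints such as reaction times in the user model.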
Loopsense: Low-Scale, Unobtrusive, and Minimally Invasive Knitted Force Sensors for Multi-Modal Input, Enabled by Selective Loop-Meshing
Integrating sensors into knitted input devices traditionally comes with considerable constraints on textile and UI design freedom. In this work, we demonstrate a novel, minimally invasive method for fabricating knitted sensors that overcomes this limitation. We integrate copper wire with piezoresistive enamel directly into the fabric using weft knitting to establish strain- and pressure-sensing cells that consist of only single pairs of intermeshed loops. The result is unobtrusive and potentially invisible, which provides tremendous latitude for visual and haptic design. Furthermore, we present several variations of stitch compositions, resulting in loop meshes that feature distinct responses with respect to the direction of the exerted force. Utilizing this property, we are able to infer actuation modalities and considerably expand the device's input space. In particular, we discern strain directions and surface pressure. Moreover, we provide an in-depth description of our fabrication method and demonstrate our solution's versatility in three exemplary use cases.
2024 | Roland Aigner et al. | University of Applied Sciences Upper Austria | CHI | Topics: Shape-Changing Interfaces & Soft Robotic Materials, On-Skin Display & On-Skin Input, Circuit Making & Hardware Prototyping

From Real to Virtual: Exploring Replica-Enhanced Environment Transitions along the Reality-Virtuality Continuum
Recent head-mounted displays enable users to perceive the real environment through a video-based see-through mode and the fully virtual environment within a single device. Leveraging these advancements, we present a generic concept for seamlessly transitioning between the real and virtual environment, with the goal of supporting users in engaging with and disengaging from any real environment into Virtual Reality. The transition process uses a digital replica of the real environment and incorporates various stages of Milgram's Reality-Virtuality Continuum, along with visual transitions that facilitate gradual navigation between them. We implemented the overall transition concept together with four object-based transition techniques and evaluated them in a qualitative user study, focusing on user experience, the use of the replica, and visual coherence. The results show that most participants stated that the replica facilitates the cognitive processing of the transition and supports spatial orientation.
2024 | Fabian Pointecker et al. | University of Applied Sciences Upper Austria | CHI | Topics: Mixed Reality Workspaces, Immersion & Presence Research

Spot'Em: Interactive Data Labeling as a Means to Maintain Situation Awareness
Appropriately monitoring the system and successfully intervening when automation fails are among the most critical issues in level 2 automated driving, since drivers suffer from low situation awareness when using such systems. To counter this, we present a gamified in-vehicle interface, based on ideas from previous work, in which drivers have to support the vehicle by pointing at other traffic objects in the environment. We hypothesized that this system could help drivers in the monitoring task, maintain their situation awareness, and result in lower crash rates. We implemented a prototype of this system and evaluated it in a lab study with N = 20 participants. The results indicate that participants looked more intensively at lead vehicles and performed stronger braking actions. However, there was no measurable benefit for situation awareness and intervention performance in critical situations. We conclude by discussing differences to related experiments and present ideas for future work.
2023 | Philipp Wintersberger et al. | AutoUI | Topics: Automated Driving Interface & Takeover Design, Gamification Design

A Real Bottleneck Scenario with a Wizard of Oz Automated Vehicle - Role of eHMIs
Automated vehicles (AVs) are expected to encounter various ambiguous space-sharing conflicts in urban traffic. Bottleneck scenarios, in which one of the parties needs to resolve the conflict by yielding priority to the other, can be utilized as representative ambiguous scenarios to understand human behavior in experimental settings. We conducted a controlled field experiment with a Wizard of Oz automated car in a bottleneck scenario. Twenty-four participants took part in the study, driving their own cars. They made yielding or priority-taking decisions based on the AV's implicit locomotion cues and explicit cues realized with an external display. The results indicate that acceleration and deceleration cues affected participants' driving choices and their perception of the AV's social behavior, which further serves as an ecological validation of related simulation studies.
2023 | Hatice Şahin İppoliti et al. | AutoUI | Topics: External HMI (eHMI) — Communication with Pedestrians & Cyclists

Remote Persons Are Closer Than They Appear: Home, Team and a Lockdown
Since 2020, worldwide COVID-19-related lockdowns have led to a rapid increase in remote collaboration, particularly in the domain of knowledge work. This has undoubtedly brought challenges (e.g., work-life boundary management, social isolation), but also opportunities. Practices that have proven successful (e.g., through increased task performance, efficiency, or satisfaction) are worth retaining in the future. In this qualitative empirical study, we analyzed four teams' (14 participants in total) mandatory remote collaboration over periods of several days to several months during a nationally imposed lockdown. We report results derived from questionnaires, logbooks, group interviews, and meeting recordings. We identify possible factors influencing the quality of task outcomes as well as subjective aspects like satisfaction, motivation, and team atmosphere. As a basis for our conclusions, we provide a scheme for categorizing effects of remote collaboration based on an exhaustive literature review on pandemic-induced mandatory remote work and collaboration.
2023 | Mirjam Augstein et al. | University of Applied Sciences Upper Austria | CHI | Topics: Remote Work Tools & Experience, Distributed Team Collaboration

Territoriality in Hybrid Collaboration
Hybrid collaboration, where remote and co-located team members work together using different devices and tools, has been trending for years (e.g., through globalization and international cooperation) but experienced a further boost with the outbreak of the COVID-19 pandemic. The reason behind this surge in hybrid practices is probably that the crisis revealed aspects of remote collaboration which proved functional and which many decision makers (in industry as well as academia) plan to retain for the future. Thus, hybrid collaboration is an extremely timely topic that should be further studied in the context of CSCW. One major CSCW-anchored concept that has been researched most intensively in co-located collaboration settings, where it is usually inherently related to spatial aspects and proximity, is territoriality. Work on territoriality in fully distributed, remote settings has already shown that there are significant differences due to the characteristics of the scenario. In this paper, we focus on territoriality in hybrid settings, where we identified a significant research gap, and present the results of a user study with 22 teams of four people each (distributed across two locations at two different universities) collaborating on a problem-solving task. Our findings reveal that further dimensions and communication channels, in addition to space, can strongly impact territoriality and territorial behavior in hybrid collaboration. Besides classical spatial territories, auditory territories also frequently emerged. In addition, visibility of and accessibility to certain territories need to be rethought. We discuss these novel findings with regard to their interplay with earlier ones and derive design implications for CSCW systems supporting hybrid collaboration.
2022 | Thomas Neumayr et al. | CSCW | Topics: Remote and Hybrid Collaborations

spaceR: Knitting Ready-Made, Tactile, and Highly Responsive Spacer-Fabric Force Sensors for Continuous Input
With spaceR, we present the design and implementation of a resistive force sensor based on a spacer-fabric knit. Due to its softness and elasticity, our sensor provides an appealing haptic experience. It enables continuous input with high precision due to its innate haptic feedback and can be manufactured ready-made on a regular two-bed weft knitting machine, without requiring further post-processing steps. For our multi-component knit, we add resistive yarn to the filler material in order to achieve a highly sensitive and responsive pressure-sensing textile. Sensor resistance drops by ~90% when actuated with a moderate finger pressure of 2 N, making the sensor accessible to straightforward readout electronics. We discuss the related manufacturing parameters and their effect on shape and electrical characteristics, and explore design opportunities to harness visual and tactile affordances. Finally, we demonstrate several application scenarios by implementing diverse spaceR variations, including analog rocker and four-way directional buttons, and show the possibility of mode switching by tracking temporal data.
2022 | Roland Aigner et al. | UIST | Topics: Haptic Wearables, Shape-Changing Interfaces & Soft Robotic Materials

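A ~90% resistance drop under ~2 N, as reported above, is large enough for a plain voltage-divider readout without amplification. The sketch below illustrates this; the supply voltage, reference resistor, and rest/pressed resistances are invented assumptions, not values from the paper:

```python
# Voltage-divider readout for a resistive force sensor:
#   Vout = Vcc * R_ref / (R_sensor + R_ref)
# All component values are illustrative assumptions, not from the paper.
VCC = 3.3               # supply voltage (V)
R_REF = 10_000.0        # fixed reference resistor (ohms)
R_RELAXED = 100_000.0   # assumed sensor resistance at rest (ohms)
R_PRESSED = 10_000.0    # ~90% drop under moderate (~2 N) finger pressure

def divider_voltage(r_sensor, vcc=VCC, r_ref=R_REF):
    """Voltage across the reference resistor, as seen by an ADC pin."""
    return vcc * r_ref / (r_sensor + r_ref)

def adc_counts(voltage, bits=12, vref=VCC):
    """Quantize a voltage to an ADC reading (e.g., a 12-bit MCU ADC)."""
    return round((2 ** bits - 1) * voltage / vref)

# Relaxed and pressed states map to clearly separable ADC ranges.
relaxed = adc_counts(divider_voltage(R_RELAXED))
pressed = adc_counts(divider_voltage(R_PRESSED))
```

Continuous pressure (rather than a binary press) follows from inverting the divider equation for R_sensor and mapping resistance to force via a calibration curve.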
Exploring Affordances of Surface Gestures on Textile User Interfaces
This pictorial explores the design space for communicating surface gestures to users of textile interfaces by experimenting with the interfaces' physical design and affordances. First, we created a collection of functional and non-functional textile samples. Their development was based on three aspects: design, fabrication, and sensing. The design aspect covered different visual (shape, color) and haptic (details, textures) designs, fabrication explored three textile-specific fabrication methods, and electronic sensing offered options for adding touch-sensing capabilities. Second, we reflected on the created samples and their characteristics, contrasting different designs and speculating on why some work better than others. Our main findings and insights are presented in five clusters: ergonomics, visual affordances, perception of textures, direction of movement, and the economic usage of design elements. This intermediate-level knowledge can provide a starting point for professional and novice designers alike to take inspiration from when creating their own textile user interfaces.
2021 | Sara Mlakar et al. | DIS | Topics: Haptic Wearables, Shape-Changing Interfaces & Soft Robotic Materials