VIP-Sim: A User-Centered Approach to Vision Impairment Simulation for Accessible DesignPeople with vision impairments (VIPs) often rely on their remaining vision when interacting with user interfaces. Simulating visual impairments is an effective tool for designers, fostering awareness of the challenges faced by VIPs. While previous research has introduced various vision impairment simulators, none have yet been developed with the direct involvement of VIPs or thoroughly evaluated from their perspective. To address this gap, we developed VIP-Sim. This symptom-based vision simulator was created through a participatory design process tailored explicitly for this purpose, involving N=7 VIPs. 21 symptoms, like field loss or light sensitivity, can be overlaid on desktop design tools. Most participants felt VIP-Sim could replicate their symptoms. VIP-Sim was received positively, but concerns about exclusion in design and comprehensiveness of the simulation remain, mainly whether it represents the experiences of other VIPs.2025MRMax Rädler et al.Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille)Universal & Inclusive DesignParticipatory DesignUIST
Long-Term Evolution of Driver Visual Attention during Automated Driving in Real-Traffic: Investigating the Influence of Mental Model and Dynamic Learned TrustA calibrated trust level is essential for the safe use of automated systems. In automated driving, overtrust can reduce the driver’s monitoring behavior and delay takeover times, which poses significant safety risks. This motivates the need for continuous, objective trust assessment for real-time system adaptations. Prior research identified eye-tracking as a promising approach. Therefore, this study examines the longitudinal relationship between dynamic learned trust and visual attention. Given that mental models influence both trust and visual attention, their role in this process is also explored over time. In a longitudinal study, twenty-three participants repeatedly operated an automated vehicle in real traffic while their visual attention was recorded via the vehicle’s built-in driver monitoring camera. Findings suggest an interrelation between dynamic learned trust and mental model formation, with mental models mediating the effect of dynamic learned trust on visual attention. This work contributes to advancing trust measurement during automated driving.2025SSStephanie Seupke et al.Automated Driving Interface & Takeover DesignEye Tracking & Gaze InteractionAutoUI
Unraveling Subjective ADAS Comprehension Considering Factors of Situational Complexity on the Example of Traffic Light ScenariosAdvanced driver assistance systems (ADAS) with increasing automation maturity and availability in urban contexts are entering the market. Meanwhile, the situational context has been identified to play a crucial role in system comprehension and usage, yet its subcomponents and their relation to system comprehension remain an open research question. To gain insights into the role of situational complexity in subjective system comprehension and different methodological aspects, this study applies a mixed quantitative and qualitative approach, focusing on signaled intersections as an exemplary scenario. An on-road study with forty-six participants was conducted, involving six traffic light scenarios (all experienced twice). Results indicate that while comprehension was generally high, the situational context, including environmental and traffic-related factors, affected subjective system understanding. The proposed approach sheds light on the role of mixed methods in ADAS research, which may provide insights for system developers and suggestions for user training content.2025CBClaudia Buchner et al.Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS)AutoUI
Mind Games! Exploring the Impact of Dark Patterns in Mixed Reality ScenariosMixed Reality (MR) integrates virtual objects with the real world, offering potential but raising concerns about misuse through dark patterns. This study explored the effects of four dark patterns, adapted from prior research, and applied to MR across three targets: places, products, and people. In a two-factorial within-subject study with 74 participants, we analyzed 13 videos simulating MR experiences during a city walk. Results show that all dark patterns significantly reduced user comfort, increased reactance, and decreased the intention to use MR glasses, with the most disruptive effects linked to personal or monetary manipulation. Additionally, the dark patterns of Emotional and Sensory Manipulation and Hiding Information produced similar impacts on the user in MR, suggesting a re-evaluation of current classifications to go beyond deceptive design techniques. Our findings highlight the importance of developing ethical design guidelines and tools to detect and prevent dark patterns as immersive technologies continue to evolve.2025LMLuca-Maxim Meinhardt et al.Mixed Reality WorkspacesDark Patterns RecognitionMobileHCI
Introducing ROADS: A Systematic Comparison of Remote Control Interaction Concepts for Automated Vehicles at Road WorksAs vehicle automation technology continues to mature, there is a necessity for robust remote monitoring and intervention features. These are essential for intervening during vehicle malfunctions, challenging road conditions, or in areas that are difficult to navigate. This evolution in the role of the human operator—from a constant driver to an intermittent teleoperator—necessitates the development of suitable interaction interfaces. While some interfaces have been suggested, a comparative study is missing. We designed, implemented, and evaluated three interaction concepts (path planning, trajectory guidance, and waypoint guidance) with up to four concurrent requests from automated vehicles in a within-subjects study with N=23 participants. The results showed a clear preference for the path planning concept. It also led to the highest usability but lower satisfaction. With trajectory guidance, the fewest requests were resolved. The study’s findings contribute to the ongoing development of HMIs focused on the remote assistance of automated vehicles.2025MCMark Colley et al.Ulm University; UCL Interaction CentreAutomated Driving Interface & Takeover DesignTeleoperated DrivingCHI
Light My Way. Developing and Exploring a Multimodal Interface to Assist People With Visual Impairments to Exit Highly Automated VehiclesThe introduction of Highly Automated Vehicles (HAVs) has the potential to increase the independence of blind and visually impaired people (BVIPs). However, ensuring safety and situation awareness when exiting these vehicles in unfamiliar environments remains challenging. To address this, we conducted an interactive workshop with N=5 BVIPs to identify their information needs when exiting an HAV and evaluated three prior-developed low-fidelity prototypes. The insights from this workshop guided the development of PathFinder, a multimodal interface combining visual, auditory, and tactile modalities tailored to BVIPs' unique needs. In a three-factorial within-between-subject study with N=16 BVIPs, we evaluated PathFinder against an auditory-only baseline in urban and rural scenarios. PathFinder significantly reduced mental demand and maintained high perceived safety in both scenarios, while the auditory baseline led to lower perceived safety in the urban scenario compared to the rural one. Qualitative feedback further supported PathFinder's effectiveness in providing spatial orientation during exiting.2025LMLuca-Maxim Meinhardt et al.Institute of Media Informatics, Ulm UniversityIn-Vehicle Haptic, Audio & Multimodal FeedbackVisual Impairment Technologies (Screen Readers, Tactile Graphics, Braille)CHI
Improving External Communication of Automated Vehicles Using Bayesian OptimizationThe absence of a human operator in automated vehicles (AVs) may require external Human-Machine Interfaces (eHMIs) to facilitate communication with other road users in uncertain scenarios, for example, regarding the right of way. Given the plethora of adjustable parameters, balancing visual and auditory elements is crucial for effective communication with other road users. With N=37 participants, this study employed multi-objective Bayesian optimization to enhance eHMI designs and improve trust, safety perception, and mental demand. By reporting the Pareto front, we identify optimal design trade-offs. This research contributes to the ongoing standardization efforts of eHMIs, supporting broader adoption.2025MCMark Colley et al.Ulm University; UCL Interaction CentreExternal HMI (eHMI) — Communication with Pedestrians & CyclistsExplainable AI (XAI)CHI
PlantPal: Leveraging Precision Agriculture Robots to Facilitate Remote Engagement in Urban GardeningUrban gardening is widely recognized for its numerous health and environmental benefits. However, the lack of suitable garden spaces, demanding daily schedules and limited gardening expertise present major roadblocks for citizens looking to engage in urban gardening. While prior research has explored smart home solutions to support urban gardeners, these approaches currently do not fully address these practical barriers. In this paper, we present PlantPal, a system that enables the cultivation of garden spaces irrespective of one's location, expertise level, or time constraints. PlantPal enables the shared operation of a precision agriculture robot (PAR) that is equipped with garden tools and a multi-camera system. Insights from a 3-week deployment (N=18) indicate that PlantPal facilitated the integration of gardening tasks into daily routines, fostered a sense of connection with one's field, and provided an engaging experience despite the remote setting. We contribute design considerations for future robot-assisted urban gardening concepts.2025AZAlbin Zeqiri et al.Ulm University, Institute of Media InformaticsHuman-Robot Collaboration (HRC)Community Engagement & Civic TechnologyCHI
OptiCarVis: Improving Automated Vehicle Functionality Visualizations Using Bayesian Optimization to Enhance User ExperienceAutomated vehicle (AV) acceptance relies on their understanding via feedback. While visualizations aim to enhance user understanding of AV's detection, prediction, and planning functionalities, establishing an optimal design is challenging. Traditional "one-size-fits-all" designs might be unsuitable, stemming from resource-intensive empirical evaluations. This paper introduces OptiCarVis, a set of Human-in-the-Loop (HITL) approaches using Multi-Objective Bayesian Optimization (MOBO) to optimize AV feedback visualizations. We compare conditions using eight expert and user-customized designs for a Warm-Start HITL MOBO. An online study (N=117) demonstrates the efficacy of OptiCarVis in significantly improving trust, acceptance, perceived safety, and predictability without increasing cognitive load. OptiCarVis facilitates a comprehensive design space exploration, enhancing in-vehicle interfaces for optimal passenger experiences and broader applicability.2025PJPascal Jansen et al.Ulm University, Institute of Media InformaticsHead-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS)AI-Assisted Decision-Making & AutomationCHI
When Do We Feel Present in a Virtual Reality? Towards Sensitivity and User Acceptance of Presence QuestionnairesPresence is an important and widely used metric to measure the quality of virtual reality (VR) applications. Given the multifaceted and subjective nature of presence, the most common measures for presence are questionnaires. But there is little research on their validity regarding specific presence dimensions and their responsiveness to differences in perception among users. We investigated four presence questionnaires (SUS, PQ, IPQ, Bouchard) on their responsiveness to intensity variations of known presence dimensions and asked users about their consistency with their experience. Therefore, we created five VR scenarios that were designed to emphasize a specific presence dimension. Our findings showed heterogeneous sensitivity of the questionnaires dependent on the different dimensions of presence. This highlights a context-specific suitability of presence questionnaires. Participants further judged the questionnaires' sensitivity to be lower than what they actually perceived. Based on our findings, we offer guidance on selecting these questionnaires based on their suitability for particular use cases.2025ADAnnalisa Degenhard et al.University of Ulm, Media InformaticsImmersion & Presence ResearchCHI
Scrolling in the Deep: Analysing Contextual Influences on Intervention Effectiveness during Infinite Scrolling on Social MediaInfinite scrolling on social media platforms is designed to encourage prolonged engagement, leading users to spend more time than desired, which can provoke negative emotions. Interventions to mitigate infinite scrolling have shown initial success, yet users become desensitized due to the lack of contextual relevance. Understanding how contextual factors influence intervention effectiveness remains underexplored. We conducted a 7-day user study (N=72) investigating how these contextual factors affect users' reactance and responsiveness to interventions during infinite scrolling. Our study revealed an interplay, with contextual factors such as being at home, sleepiness, and valence playing significant roles in the intervention's effectiveness. Low valence coupled with being at home slows down the responsiveness to interventions, and sleepiness lowers reactance towards interventions, increasing user acceptance of the intervention. Overall, our work contributes to a deeper understanding of user responses toward interventions and paves the way for developing more effective interventions during infinite scrolling.2025LMLuca-Maxim Meinhardt et al.Institute of Media Informatics, Ulm UniversityNotification & Interruption ManagementCHI
Bumpy Ride? Understanding the Effects of External Forces on Spatial Interactions in Moving VehiclesAs the use of Head-Mounted Displays in moving vehicles increases, passengers can immerse themselves in visual experiences independent of their physical environment. However, interaction methods are susceptible to physical motion, leading to input errors and reduced task performance. This work investigates the impact of G-forces, vibrations, and unpredictable maneuvers on 3D interaction methods. We conducted a field study with 24 participants in both stationary and moving vehicles to examine the effects of vehicle motion on four interaction methods: (1) Gaze&Pinch, (2) DirectTouch, (3) Handray, and (4) HeadGaze. Participants performed selections in a Fitts' Law task. Our findings reveal a significant effect of vehicle motion on interaction accuracy and duration across the tested combinations of Interaction Method × Road Type × Curve Type. We found a significant impact of movement on throughput, error rate, and perceived workload. Finally, we propose future research considerations and recommendations on interaction methods during vehicle movement.2025MSMarkus Sasalovici et al.Mercedes-Benz Tech Motion GmbH; Ulm University, Institute of Media InformaticsHead-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS)Motion Sickness & Passenger ExperienceCHI
Fly Away: Evaluating the Impact of Motion Fidelity on Optimized User Interface Design via Bayesian Optimization in Automated Urban Air Mobility SimulationsAutomated Urban Air Mobility (UAM) can improve passenger transportation and reduce congestion, but its success depends on passenger trust. While initial research addresses passengers' information needs, questions remain about how to simulate air taxi flights and how these simulations impact users and interface requirements. We conducted a between-subjects study (N=40), examining the influence of motion fidelity in Virtual-Reality-simulated air taxi flights on user effects and interface design. Our study compared simulations with and without motion cues using a 3-Degrees-of-Freedom motion chair. Optimizing the interface design across six objectives, such as trust and mental demand, we used multi-objective Bayesian optimization to determine the most effective design trade-offs. Our results indicate that motion fidelity decreases users' trust, understanding, and acceptance, highlighting the need to consider motion fidelity in future UAM studies to approach realism. However, minimal evidence was found for differences or equality in the optimized interface designs, suggesting that personalized interface designs may be needed.2025LMLuca-Maxim Meinhardt et al.Institute of Media Informatics, Ulm UniversityAutomated Driving Interface & Takeover DesignMotion Sickness & Passenger ExperienceCHI
Effects of Uncertain Trajectory Prediction Visualization in Highly Automated Vehicles on Trust, Situation Awareness, and Cognitive LoadColley et al. investigate how visualizing uncertain trajectory predictions in highly automated vehicles affects driver trust, situation awareness, and cognitive load.2024MCMark Colley et al.Automated Driving Interface & Takeover DesignExplainable AI (XAI)UbiComp
Hey, What's Going On? Conveying Traffic Information to People with Visual Impairments in Highly Automated Vehicles: Introducing OnBoardMeinhardt et al. design OnBoard, a system that conveys traffic information to visually impaired passengers in automated vehicles through multimodal interaction, addressing their difficulties in accessing this information.2024LMLuca-Maxim Meinhardt et al.Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS)In-Vehicle Haptic, Audio & Multimodal FeedbackVisual Impairment Technologies (Screen Readers, Tactile Graphics, Braille)UbiComp
Eco Is Just Marketing: Unraveling Everyday Barriers to the Adoption of Energy-Saving Features in Major Home Appliances2024AZAlbin Zeqiri et al.Sustainable HCIEnergy Conservation Behavior & InterfacesUbiComp
Story-Driven: Exploring the Impact of Providing Real-time Context Information on Automated StorytellingStories have long captivated the human imagination with narratives that enrich our lives. Traditional storytelling methods are often static and not designed to adapt to the listener’s environment, which is full of dynamic changes. For instance, people often listen to stories in the form of podcasts or audiobooks while traveling in a car. Yet, conventional in-car storytelling systems do not embrace the adaptive potential of this space. The advent of generative AI is the key to creating content that is not just personalized but also responsive to the changing parameters of the environment. We introduce a novel system for interactive, real-time story narration that leverages environment and user context in correspondence with estimated arrival times to adjust the generated story continuously. Through two comprehensive real-world studies with a total of 30 participants in a vehicle, we assess the user experience, level of immersion, and perception of the environment provided by the prototype. Participants' feedback shows a significant improvement over traditional storytelling and highlights the importance of context information for generative storytelling systems.2024JBJan Henry Belz et al.AR Navigation & Context AwarenessGenerative AI (Text, Image, Music, Video)Interactive Narrative & Immersive StorytellingUIST
Exploring Urban Challenges: Understanding Advanced Driver Assistance Systems in Different Situational ContextsNew Advanced Driver Assistance Systems (ADAS) are now available to support urban driving. To adequately use ADAS, especially in complex situations, drivers must comprehend them. An on-road study was conducted to investigate the mental model development while interacting with a state-of-the-art ADAS in both a rural (less complex) and an urban context (more complex). Forty-six participants experienced two rounds of each context. After each round, drivers rated their mental model, acceptance, and trust. Results indicate that for the rural context, participants learned the system functionality in the first round without further improvement. In the urban context, the mental model was generally less accurate, but improved in the second round. Trust increased from the first to the second rural round, while acceptance did not show a significant change within the context. The results provide a first glimpse into the importance of evaluating different contexts and interaction scenarios for ADAS.2024CBClaudia Buchner et al.Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS)AutoUI
Improving Driver Engagement with Level 2 Automated Systems: The Impact of Fully Shared Longitudinal ControlAccording to the Society of Automotive Engineers (SAE), in Level 2 systems (L2 systems), the system executes the longitudinal and lateral control of the vehicle, with the driver required to monitor the environment and intervene when necessary. To further improve safety and driver engagement, we compared a fully shared longitudinal control system, which permits speed adjustments via acceleration and braking without deactivation, with a conventional system that disengages upon braking. In a simulator study involving 61 participants, both systems were well-received in terms of acceptance and user experience. The fully shared longitudinal control led to more frequent and earlier braking, suggesting anticipatory driving, without compromising perceived safety. Furthermore, it outperformed the conventional system in the hedonic qualities of user experience and elicited a stronger intention to use. Our findings indicate that fully shared longitudinal control can enhance driver engagement, offering a valuable improvement for L2 automated systems.2024JIJohannes Illgner et al.Automated Driving Interface & Takeover DesignHead-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS)AutoUI
Introducing AV-Sketch: An Immersive Participatory Design Tool for Automated Vehicle — Passenger InteractionIn the emerging automated vehicle (AV)—passenger interaction domain, there is no agreed-upon set of methods to design early concepts. Non-designers may find it challenging to brainstorm interfaces for unfamiliar technology like AVs. Therefore, we explore using an immersive virtual environment to enable expert and non-expert designers to actively participate in the design phases. We built AV-Sketch, an in-situ (on-site) simulator that allows the creation of automotive interfaces while being immersed in VR depicting diverse AV-passenger interactions. First, we conducted a participatory design study (N=15) by utilizing PICTIVE (Plastic Interface for Collaborative Technology) to conceptualize human-machine interfaces for AV passengers. The findings led to the design of AV-Sketch, which we tested in a design session (N=10), assessing users’ design experiences. Overall, participants felt more engaged and confident with the in-situ experience, enabling better contextualization of design ideas in real-world scenarios, with improved spatial considerations and dynamic aspects of in-vehicle interfaces.2024AAAshratuz Zavin Asha et al.Automated Driving Interface & Takeover DesignSocial & Collaborative VRAutoUI