SPAT: Situational Prosocial and Aggressive Behavior Perception in Traffic Scale
Automated vehicles (AVs) have reached technological maturity and will soon arrive on streets as traffic participants. Human traffic participants such as drivers, pedestrians, or cyclists will be increasingly confronted with the presence of AVs within their environment, not necessarily knowing or understanding what to expect and how to interact with them. Although AVs are designed to act safely, effective interaction in mixed traffic scenarios will depend on successful communication, interaction, or even negotiation beyond static rules and regulations. Prosocial behavior, such as yielding one's right of way, will be needed to resolve unclear traffic situations or foster traffic flow. However, what are the characteristics of such prosocial behavior, and how can it be measured, not only for automated vehicles but for all road users? Here, we describe a new scale to measure perceived social behavior in urban traffic scenarios. Through an online survey of N = 318 individuals and a validation study, we developed the Situational Prosocial and Aggressive Behavior in Traffic Scale and assessed it psychometrically.
2025 · Hatice Şahin İppoliti et al. · AutoUI · Topics: Teleoperated Driving; V2X (Vehicle-to-Everything) Communication Design; AI-Assisted Decision-Making & Automation

Enhancing Smart Home User Experience: A Study of Everyday Objects for Smart Home Control
Current smart home technologies rely on touchscreens and voice assistants for interaction. These interfaces lack tactile engagement and fail to support users' daily routines and preferences, leading to poor user experiences (UX). Designing tangible user interfaces (TUIs) that align with user preferences can improve the status quo. This paper explores the potential of TUIs using everyday objects for smart home control. Four prototypes — a vase, pillow, coaster, and flower — were evaluated for UX and metaphor alignment through a within-subjects study with 25 participants. Using meCUE questionnaires and semi-structured interviews, we examined how physical and contextual attributes influence UX. Our findings indicate that everyday objects are effective TUIs and produce positive UX, provided careful consideration is given to their physical and contextual attributes. This research expands our understanding of TUIs' role in bridging the digital-physical divide and offers practical guidelines for embedding intuitive smart home controls into everyday objects.
2025 · Michael Chamunorwa et al. · MobileHCI · Topics: Smart Home Interaction Design; Customizable & Personalized Objects

My Data, My Choice, My Insights: Women's Requirements when Collecting, Interpreting and Sharing their Personal Health Data
HCI research has been instrumental in enabling self-directed health tracking. Despite a plethora of devices and data, however, users' views of their own health are often fragmented. This is a problem for women's health, where physical and mental observations and symptoms are strongly intertwined. An integrated view throughout different life stages could help to better understand these connections, facilitate symptom alleviation through lifestyle changes, and support timely diagnosis: currently, women's health issues often go under-researched and under-diagnosed. To capture the needs and worries around self-directed tracking, interpreting, and sharing of women's health data, we held workshops with 28 women. Drawing upon feminist methods, we conducted a Reflexive Thematic Analysis to identify six central themes that ground opportunities and challenges for life-long, self-directed tracking of intimate data. These themes inform the design of tools for data collection, analysis, and sharing that empower women to better understand their bodies and demand adequate health services.
2024 · Sophie Grimme et al. (OFFIS - Institute for Information Technology) · CHI · Topics: Cognitive Impairment & Neurodiversity (Autism, ADHD, Dyslexia); Universal & Inclusive Design; Reproductive & Women's Health

Why the Fine, AI? The Effect of Explanation Level on Citizens' Fairness Perception of AI-based Discretion in Public Administrations
The integration of Artificial Intelligence into decision-making processes within public administration extends to AI systems that exercise administrative discretion. This raises fairness concerns among citizens, possibly leading to the abandonment of AI systems. Uncertainty persists regarding which explanation elements impact citizens' perception of fairness and their level of technology adoption. In a video-vignette online survey (N=847), we investigated the impact of explanation levels on citizens' perceptions of informational fairness, distributive fairness, and system adoption level. We enhanced explanations in three stages: none, factor explanations, and, culminating, factor importance explanations. We found that more detailed explanations improved informational and distributive fairness perceptions but did not affect citizens' willingness to reuse the system. Interestingly, citizens with higher AI literacy expressed greater willingness to adopt the system, regardless of the explanation level. Qualitative findings revealed that greater human involvement and appeal mechanisms could positively influence citizens' perceptions. Our findings highlight the importance of citizen-centered design of AI-based decision-making in public administration.
2024 · Saja Aljuneidi et al. (OFFIS - Institute for Information Technology) · CHI · Topics: AI Ethics, Fairness & Accountability; Algorithmic Transparency & Auditability; Privacy by Design & User Control

Controlling the Rooms: How People Prefer Using Gestures to Control Their Smart Homes
Gesture interactions have become ubiquitous, and with increasingly reliable sensing technology, we can anticipate their use in everyday environments such as smart homes. Gestures must meet users' needs and constraints in diverse scenarios to gain widespread acceptance. Although mid-air gestures have been proposed in various user contexts, it is still unclear to what extent users want to integrate them into different scenarios in their smart homes, along with the motivations driving this desire. Furthermore, it is uncertain whether users will remain consistent in their suggestions when transitioning to alternative scenarios within a smart home. This study contributes methodologically by adapting a bottom-up frame-based design process. We offer insights into preferred devices and commands in different smart home scenarios. Using our results, we can assist in designing gestures in the smart home that are consistent with individual needs across devices and scenarios, while maximizing the reuse and transferability of gestural knowledge.
2024 · Masoumehsadat Hosseini et al. (University of Oldenburg) · CHI · Topics: Hand Gesture Recognition; Smart Home Interaction Design

Exploring Recognition Accuracy of Vibrotactile Stimuli in Sternoclavicular Area
The growing popularity of wearable haptic devices has encouraged researchers to implement on-body interfaces that appropriate different form factors and interaction techniques. Among vibrotactile wearable interfaces, neck-worn devices have gathered limited attention in HCI. While the "necklace area" offers wide opportunities for subtle haptic interaction, we lack knowledge of its tactile acuity to design interactive systems effectively. In this work, we present a prototype of HaptiNecklace, a vibrotactile necklace designed to study the tactile acuity of the sternoclavicular area. In an experimental study with N=19 participants, we compared recognition accuracy and cognitive load between different numbers of vibrotactile motors attached to the prototype in two scenarios: static and mobile. The results show that directional patterns ensure better recognition than single-point vibrations in both mobile and static contexts. Moreover, introducing a mobile scenario does not influence recognition accuracy but strongly increases cognitive load. In this work, we provide practical hints for designing vibrotactile necklaces.
2023 · Mikołaj P. Woźniak et al. · UbiComp · Topics: Vibrotactile Feedback & Skin Stimulation; Haptic Wearables

An Empirical Comparison of Moderated and Unmoderated Gesture Elicitation Studies on Soft Surfaces and Objects for Smart Home Control
Conducting gesture elicitation studies (GES) in personal spaces such as smart homes is crucial to achieving high ecological validity of elicited gestures. However, supervising such studies is considered intrusive and negatively affects the results' quality. The alternative is to conduct unsupervised GES under similar conditions, but more side-by-side comparisons documenting the similarities and differences between both approaches are needed. Consequently, we need more data describing the preferred approach and whether the differences or similarities in the results are significant enough to cause concern. This research distributed a DIY observation kit, which 30 participants assembled and used to propose gestures for controlling elements in a smart living room using a pillow's surface, with and without supervision. Our results show that gestures from supervised and unsupervised studies differ in quantity and max-consensus but not in gesture Agreement Scores. Our results also show that participants preferred conducting unsupervised studies but proposed fewer gesture sets in this condition.
2023 · Michael Chamunorwa et al. · MobileHCI · Topics: Hand Gesture Recognition; Smart Home Interaction Design

Please, Go Ahead! Fostering Prosocial Driving with Sympathy-Eliciting Automated Vehicle External Displays
Road traffic is strongly regulated; however, informal communication is essential whenever formal rules are treated flexibly. Consequently, conflict-avoidant automated vehicles (AVs) can be disadvantaged when humans do not behave prosocially towards them. This can lead to disruptions of mixed traffic, where human and automated driving co-exist. Equipping AVs with sympathy-eliciting external Human-Machine Interfaces (eHMIs) mimicking informal communication cues could mitigate this challenge by fostering prosocial behavior in drivers. This work contributes video vignettes that are experimentally validated in an online survey (N=90). While participants did not behave differently towards human-controlled and baseline automated vehicles, eHMIs were potent in eliciting sympathy and encouraged yielding behavior. This effect was more pronounced when the interface signaled an urgent situation or indicated prolonged waiting times. Non-yielding behavior was rationalized based on priority rules. These results emphasize how prosocial behavior in traffic can be fostered via sympathy-eliciting external displays.
2023 · Hatice Şahin İppoliti et al. · MobileHCI · Topics: External HMI (eHMI) — Communication with Pedestrians & Cyclists

A Real Bottleneck Scenario with a Wizard of Oz Automated Vehicle - Role of eHMIs
Automated vehicles (AVs) are expected to encounter various ambiguous space-sharing conflicts in urban traffic. Bottleneck scenarios, where one of the parties needs to resolve the conflict by yielding priority to the other, can be used as a representative ambiguous scenario to understand human behavior in experimental settings. We conducted a controlled field experiment with a Wizard of Oz automated car in a bottleneck scenario. 24 participants took part in the study by driving their own cars. They made yielding or priority-taking decisions based on implicit locomotion cues and explicit cues realized with an external display on the AV. Results indicate that acceleration and deceleration cues affected participants' driving choices and their perception of the AV's social behavior, which further serves as ecological validation of related simulation studies.
2023 · Hatice Şahin İppoliti et al. · AutoUI · Topics: External HMI (eHMI) — Communication with Pedestrians & Cyclists

Co-Speculating on Dark Scenarios and Unintended Consequences of a Ubiquitous(ly) Augmented Reality
The vision of a 'metaverse' may soon bring a ubiquitous(ly) Augmented Reality (UAR), delivering context-aware, geo-located, and continuous blends of real and virtual elements, into reach. This paper draws on speculative design to explore, question, and problematize the consequences of AR becoming pervasive. Elaborating on Desjardins et al.'s bespoke booklets, we co-speculate together with 12 globally dispersed participants. Each participant received a custom-made design workbook containing pictures of their immediate surroundings, which they elaborated on in situated brainstorming activities. We present an integration of their speculative ideas and lived experiences in 3 overarching themes, from which 7 'dark' scenarios caused by UAR were formed. The scenarios are indicative of deceptive design patterns that can (and likely will) be devised to misuse UAR, and anti-patterns that could cause unintended consequences. These contributions enable the timely discussion of potential antidotes and the extent to which they can mitigate imminent harms of UAR.
2023 · Chloe Eghtebas et al. · DIS · Topics: AR Navigation & Context Awareness; Technology Ethics & Critical HCI; Design Fiction

Inhabiting Interconnected Spaces: How Users Shape and Appropriate their Smart Home Ecosystems
Over the last decade, smart home technology (SHT) has become an integral part of modern households. As a result, smart home ecosystems blend with daily social life, appropriated and integrated into personalised domestic environments. The lived experience of inhabiting smart home ecosystems, however, is not yet understood, resulting in a mismatch between ecosystem design and inhabitants' needs. Drawing on contextual inquiry methods, we conducted an explorative interview study (N=20) with SHT users in their homes. Our thematic analysis reveals how users shape their smart home ecosystems (SHEs), considering social relationships at home, perceived ownership of SHTs, and expected key benefits. Notably, our analysis shows that household members consciously choose 'their' level of SHT interconnectedness, reflecting social, spatial and functional affinities between systems. Following our findings, we formulate five implications for designing future SHTs. Our work contributes insights on the dynamics and appropriation of smart home ecosystems by their inhabitants.
2023 · Mikołaj P. Woźniak et al. (University of Oldenburg) · CHI · Topics: Context-Aware Computing; Smart Home Interaction Design

Let's Face It: Influence of Facial Expressions on Social Presence in Collaborative Virtual Reality
As the world becomes more interconnected, physical separation between people increases. Existing collaborative Virtual Reality (VR) applications, designed to bridge this distance, are not yet sufficient in providing a sense of social connection comparable to face-to-face interactions. Possible reasons are the limited multimodality of VR systems and the lack of non-verbal cues in VR avatars. We systematically investigated how facial expressions influence Social Presence in two collaborative VR tasks. We explored four types of facial expressions: eyes and mouth movements, their combination, and no expressions, for two types of explanations: verbal and graphical. To examine how these expressions influence Social Presence, we conducted a controlled VR experiment (N = 48), in which participants had to explain a specific term to their counterpart. Our results demonstrate that eye and mouth movements positively influence Social Presence in VR. Particularly, combining verbal explanations and eye movements induces the highest feeling of co-presence.
2023 · Simon Kimmel et al. (OFFIS - Institute for Information Technology) · CHI · Topics: Social & Collaborative VR; Immersion & Presence Research; Identity & Avatars in XR

Towards a Consensus Gesture Set: A Survey of Mid-Air Gestures in HCI for Maximized Agreement Across Domains
Mid-air gesture-based systems are becoming ubiquitous. Many mid-air gestures control different kinds of interactive devices, applications, and systems. They are, however, still targeted at specific devices in specific domains and are not necessarily consistent across domain boundaries. A comprehensive evaluation of the transferability of gesture vocabulary between domains is also lacking. Consequently, interaction designers cannot decide which gestures to use for which domain. In this systematic literature review, we contribute to the future research agenda in this area, based on an analysis of 172 papers. As part of our analysis, we clustered gestures according to the dimensions of an existing taxonomy to identify their common characteristics in different domains, and we investigated the extent to which existing mid-air gesture sets are consistent across different domains. We derived a consensus gesture set containing 22 gestures based on agreement rate calculations and considered their transferability across different domains.
2023 · Masoumehsadat Hosseini et al. (University of Oldenburg) · CHI · Topics: Hand Gesture Recognition; Full-Body Interaction & Embodied Input

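The agreement rate calculation mentioned in the abstract above is, in gesture elicitation work, commonly the pairwise agreement rate AR(r) of Vatavu and Wobbrock: the fraction of participant pairs that proposed the same gesture for a referent. A minimal sketch, assuming that formula is the one meant here (the function name and example gesture labels are illustrative, not from the paper):

```python
from collections import Counter

def agreement_rate(proposals):
    """Pairwise agreement rate AR(r) for one referent.

    proposals: list of gesture labels, one per participant.
    Returns the fraction of participant pairs that proposed
    the same gesture: sum_i |P_i|(|P_i|-1) / (|P|(|P|-1)).
    """
    n = len(proposals)
    if n < 2:
        return 0.0  # agreement is undefined for fewer than two proposals
    counts = Counter(proposals)
    agreeing_pairs = sum(c * (c - 1) for c in counts.values())
    return agreeing_pairs / (n * (n - 1))

# Example: 3 of 4 participants propose "swipe" for the same referent.
# Agreeing pairs: 3*2 = 6 out of 4*3 = 12 total pairs, so AR = 0.5.
print(agreement_rate(["swipe", "swipe", "tap", "swipe"]))
```

A consensus set can then be derived by keeping, per referent, the most frequent proposal whenever AR(r) clears a chosen threshold; the threshold itself is a design decision, not fixed by the formula.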
HiveFive: Immersion Preserving Attention Guidance in Virtual Reality
Recent advances in Virtual Reality (VR) technology, such as larger fields of view, have made VR increasingly immersive. However, a larger field of view often results in a user focusing on certain directions and missing relevant content presented elsewhere on the screen. With HiveFive, we propose a technique that uses swarm motion to guide user attention in VR. The goal is to seamlessly integrate directional cues into the scene without losing immersiveness. We evaluate HiveFive in two studies. First, we compare biological motion (from a prerecorded swarm) with non-biological motion (from an algorithm), finding further evidence that humans can distinguish between these motion types and that, contrary to our hypothesis, non-biological swarm motion results in significantly faster response times. Second, we compare HiveFive to four other techniques and show that it not only results in fast response times but also has the smallest negative effect on immersion.
2020 · Daniel Lange et al. (University of Oldenburg) · CHI · Topics: Immersion & Presence Research; Context-Aware Computing

Social Acceptability in HCI: A Survey of Methods, Measures, and Design Strategies
With the increasing ubiquity of personal devices, the social acceptability of human-machine interactions has gained relevance and growing interest from the HCI community. Yet, there are no best practices or established methods for evaluating social acceptability. Design strategies for increasing social acceptability have been described and employed, but have so far not been holistically appraised and evaluated. We offer a systematic literature analysis (N=69) of social acceptability in HCI and contribute a better understanding of current research practices, namely the methods employed, measures, and design strategies. Our review identified an unbalanced distribution of study approaches, shortcomings in employed measures, and a lack of interweaving between empirical and artifact-creating approaches. The latter causes a discrepancy between design recommendations based on user research and design strategies employed in artifact creation. Our survey lays the groundwork for a more nuanced evaluation of social acceptability, the development of best practices, and a future research agenda.
2020 · Marion Koelle et al. (University of Oldenburg & Saarland University, Saarland Informatics Campus) · CHI · Topics: Universal & Inclusive Design; Privacy by Design & User Control; Participatory Design

NaviBike: Comparing Unimodal Navigation Cues for Child Cyclists
Navigation systems for cyclists are commonly screen-based devices mounted on the handlebar which show map information. Typically, adult cyclists have to explicitly look down for directions. This can be distracting and challenging for children, given their developmental differences in motor and perceptual-motor abilities compared with adults. To address this issue, we designed different unimodal cues and explored their suitability for child cyclists through two experiments. In the first experiment, we developed an indoor bicycle simulator and compared auditory, light, and vibrotactile navigation cues. In the second experiment, we investigated these navigation cues in-situ on an outdoor practice test track using a mid-size tricycle. To simulate road distractions, children were given an additional auditory task in both experiments. We found that auditory navigational cues were the most understandable and the least prone to navigation errors. However, light and vibrotactile cues might be useful for educating younger child cyclists.
2019 · Andrii Matviienko et al. (OFFIS - Institute for Information Technology) · CHI · Topics: In-Vehicle Haptic, Audio & Multimodal Feedback; Micromobility (E-bike, E-scooter) Interaction; Cognitive Impairment & Neurodiversity (Autism, ADHD, Dyslexia)

All about Acceptability? Identifying Factors for the Adoption of Data Glasses
Innovations often trigger objections before becoming widely accepted. This paper assesses whether a familiarisation over time can be expected for data glasses, too. While user attitudes towards these devices have been reported to be prevalently negative [14], it is still unclear to what extent this initial, negative user attitude might impede adoption. However, in-depth understanding is crucial for reducing barriers early in order to gain access to potential benefits of the technology. With this paper, we contribute to a better understanding of factors affecting data glasses adoption, as well as current trends and opinions. Our multiple-year case study (N=118) shows, against expectations, no significant change towards a more positive attitude between 2014 and 2016. We complement these findings with an expert survey (N=51) investigating prognoses and challenges and discussing the relevance of social acceptability. We elicit and contrast a controversial spectrum of expert opinions, and assess whether initial objections can be overcome. Our analysis shows that while social acceptability is considered relevant for the time being, utility and usability are valued more for long-term adoption.
2018 · Marion Koelle et al. (University of Oldenburg) · CHI · Topics: Eye Tracking & Gaze Interaction; Universal & Inclusive Design; User Research Methods (Interviews, Surveys, Observation)

Where to Look: Exploring Peripheral Cues for Shifting Attention to Spatially Distributed Out-of-View Objects
Knowing the locations of spatially distributed objects is important in many different scenarios (e.g., driving a car and being aware of other road users). In particular, it is critical for preventing accidents with objects that come too close (e.g., cyclists or pedestrians). In this paper, we explore how peripheral cues can shift a user's attention towards spatially distributed out-of-view objects. We identify a suitable technique for the visualization of these out-of-view objects and explore different cue designs to advance this technique to shift the user's attention. In a controlled lab study, we investigate non-animated peripheral cues with audio stimuli and animated peripheral cues without audio stimuli. Further, we looked into how users identify out-of-view objects. Our results show that shifting the user's attention takes only about 0.86 seconds on average when animated stimuli are used, while shifting attention with non-animated stimuli takes an average of 1.10 seconds.
2018 · Uwe Gruenefeld et al. · AutoUI · Topics: External HMI (eHMI) — Communication with Pedestrians & Cyclists; In-Vehicle Haptic, Audio & Multimodal Feedback

Automotive User Interfaces: Expert Discussion
Automation is making significant advances in vehicles, with adaptive cruise control and lane keeping assistance being prominent technologies we encounter on the road today. How should we design user interactions for vehicles with automation? Panelists will lead the audience in discussions about (a) how to design interactions for driving-related and non-driving-related activities; (b) how the designs are affected by the availability of different types of vehicle automation, and how their effectiveness can be tested; (c) how we can approach the designs from the perspective of vehicle occupants, as well as from the perspective of other traffic participants; and (d) how to guide not only practice but also theory development about human-machine interaction for automated vehicles.
2018 · Susanne Boll et al. (University of Oldenburg) · CHI · Topics: Automated Driving Interface & Takeover Design; AI-Assisted Decision-Making & Automation; Mental Health Apps & Online Support Communities