Haru in the Care Network: Stakeholder Perspectives on Privacy with Social Robots in Pediatrics
Social robots are beginning to be introduced into the collective networks that support pediatric treatment, but few studies have examined children's perceptions of privacy with robots in hospitals. Through a mixed-methods approach, we introduced hypothetical vignettes and engaged in discussion with 15 youth who are either receiving cancer treatment or in remission (ages 6-25), 11 of their parents, and 5 of their 8 clinical staff to learn how stakeholders in pediatric oncology discuss privacy concerns in child-robot interactions. Our thematic analysis shows that stakeholders perceive robots as social, non-authoritative extensions of the hospital's care network. As 1) mediators of social interaction among stakeholders, 2) companions for children, and 3) informational tools for clinicians when the family consents, social robots can maximize their social utility within care systems while engaging critically with stakeholders' comfort and privacy preferences. We emphasize that assistive technologies in pediatrics should be co-designed with these communities to identify appropriate roles and return agency to stakeholders as they navigate the blurred boundaries of privacy in healthcare.
2025 · Leigh M Levinson et al. · Perspectives on Data Privacy · CSCW

Impact of Affirmative and Negating Robot Gestures on Perceived Personality, Role, and Contribution of a Human Group Member
Robots can play a role in mediating human group interactions. This study examines how robot gestures affect the perception of a human group member’s personality, role in the group, and contribution. In a vignette study (n=96), participants imagined being in a group discussion and watched a short video of another group member presenting an argument. In one condition (affirmative gesture), a robot nodded while the member spoke; in the other (negating gesture), it shook its head. A control condition featured no robot. The affirmative gesture enhanced perceptions of the speaker’s personality and role in the group, though their contribution was not affected. The negating gesture showed no adverse effects. Additionally, participants perceived the robot as a group member when it nodded but as an onlooker when it shook its head. This suggests that positive robot gestures can improve group dynamics by fostering favorable interpersonal perceptions.
2025 · Tuan Vu Pham et al. · Social Robot Interaction · Human-Robot Collaboration (HRC) · DIS

“Teach Me About Objects!” – Experience-Driven Interaction for Teachable Robots
To adapt to specific places and people, robots must recognize objects, which are typically taught by users – a tedious process. Inspired by anecdotes of positive teaching experiences shared by educators, sports coaches, and animal trainers, we developed seven experience-driven ways to make teaching a robot more engaging. For example, one interaction involved the robot prompting users to tell personal stories about the objects. A video vignette study (N=184) showed that experience-driven teaching was perceived as more positive than current technology-driven teaching. Participants reported feeling more competent, connected to the robot, and valued. Additionally, the robot was perceived as more extroverted, open, agreeable, and conscientious. Overall, the experience-driven design of teaching interactions enhanced engagement and persistence by fostering reciprocal exchange and mutual understanding. In addition, the study lends support to an anecdotal approach to designing positive experiences through technology.
2025 · Tuan Vu Pham et al. · Social Robot Interaction · Participatory Design · DIS

An Approach to Elicit Human-Understandable Robot Expressions to Support Human-Robot Interaction
Understanding the intentions of robots is essential for natural and seamless human-robot collaboration. Ensuring that robots have means for non-verbal communication is a basis for intuitive and implicit interaction. For this, we describe an approach to elicit and design human-understandable robot expressions. We outline the approach in the context of non-humanoid robots. We paired human mimicking and enactment with research from gesture elicitation in two phases: first, to elicit expressions, and second, to ensure they are understandable. We present an example application through two studies (N=16 and N=260) of our approach to elicit expressions for a simple 6-DoF robotic arm. We show that the approach enabled us to design robot expressions that signal curiosity and interest in getting attention. Our main contribution is an approach to generate and validate understandable expressions for robots, enabling more natural human-robot interaction.
2025 · Jan Leusmann et al. · LMU Munich · Hand Gesture Recognition · Social Robot Interaction · Human-Robot Collaboration (HRC) · CHI

Developing and Validating the Perceived System Curiosity Scale (PSC): Measuring Users' Perceived Curiosity of Systems
Like humans, today's systems, such as robots and voice assistants, can express curiosity to learn and engage with their surroundings. While curiosity is a well-established human trait that enhances social connections and drives learning, no existing scales assess the perceived curiosity of systems. Thus, we introduce the Perceived System Curiosity (PSC) scale to determine how users perceive curious systems. We followed a standardized process of developing and validating scales, resulting in a validated 12-item scale with 3 individual sub-scales measuring explorative, investigative, and social dimensions of system curiosity. In total, we generated 831 items based on literature and recruited 414 participants for item selection and 320 additional participants for scale validation. Our results show that the PSC scale has inter-item reliability and convergent and construct validity. Thus, this scale provides an instrument to systematically explore how perceived curiosity influences interactions with technical systems.
2025 · Jan Leusmann et al. · LMU Munich · Brain-Computer Interface (BCI) & Neurofeedback · Agent Personality & Anthropomorphism · Generative AI (Text, Image, Music, Video) · CHI

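The abstract reports inter-item reliability for the final 12-item scale. As a rough illustration of how such reliability is commonly checked (the paper's actual analysis is not given here), a minimal Cronbach's alpha computation might look like the sketch below; the response matrix is entirely hypothetical:

```python
import numpy as np

def cronbachs_alpha(responses: np.ndarray) -> float:
    """Cronbach's alpha for an (n_participants, n_items) response matrix."""
    k = responses.shape[1]                          # number of items
    item_var = responses.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 7-point Likert responses: 320 participants x 12 items.
# Real scale data would show correlated items and hence a higher alpha.
rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(320, 12)).astype(float)
print(f"alpha = {cronbachs_alpha(responses):.2f}")
```
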
Investigating LLM-Driven Curiosity in Human-Robot Interaction
Integrating curious behavior traits into robots is essential for them to learn and adapt to new tasks over their lifetime and to enhance human-robot interaction. However, the effects of robots expressing curiosity on user perception, user interaction, and user experience in collaborative tasks are unclear. In this work, we present a Multimodal Large Language Model-based system that equips a robot with non-verbal and verbal curiosity traits. We conducted a user study (N=20) to investigate how these traits modulate the robot's behavior and the users' impressions of sociability and quality of interaction. Participants prepared cocktails or pizzas with a robot, which was either curious or non-curious. Our results show that we could create user-centric curiosity, which users perceived as more human-like, inquisitive, and autonomous while resulting in a longer interaction time. We contribute a set of design recommendations allowing system designers to take advantage of curiosity in collaborative tasks.
2025 · Jan Leusmann et al. · LMU Munich · Human-LLM Collaboration · Social Robot Interaction · Human-Robot Collaboration (HRC) · CHI

Can we enhance prosocial behavior? Using post-ride feedback to improve micromobility interactions
Micromobility devices, such as e-scooters and delivery robots, hold promise as eco-friendly and cost-effective alternatives for future urban transportation. However, their lack of societal acceptance remains a challenge. Therefore, we must consider ways to promote prosocial behavior in micromobility interactions. We investigate how post-ride feedback can encourage prosocial behavior in e-scooter riders while they interact with sidewalk users, including pedestrians and delivery robots. Using a web-based platform, we measured the prosocial behavior of e-scooter riders. Results showed that post-ride feedback can successfully promote prosocial behavior, with objective measures indicating better gap behavior, lower speeds at interaction, and longer stopping time around other sidewalk actors. The findings of this study demonstrate the efficacy of post-ride feedback and provide a step toward designing methodologies to improve the prosocial behavior of mobility users.
2024 · Sidney T Scott-Sharoni et al. · Teleoperated Driving · Micromobility (E-bike, E-scooter) Interaction · AutoUI

Prosociality Matters: How Does Prosocial Behavior in Interdependent Situations Influence the Well-being and Cognition of Road Users?
In hybrid mobility societies, where automated vehicles (AVs) and humans interact in public spaces, the significance of prosocial behaviors intensifies. These behaviors are crucial for the smooth functioning of an interdependent transportation environment, mitigating challenges from the integration of AVs and human-operated systems, and enhancing user well-being by fostering more efficient, less stressful, and inclusive environments. This study explores the impact of receiving prosocial behaviors on the cognition, riding behavior, and well-being of micromobility users in interdependent traffic situations within a simulated urban environment. Our mixed design study involved two types of social interaction as between-subject conditions, prosocial and asocial, and three categories of time constraint as within-subject conditions: relaxed, neutral, and pressed. The findings reveal that receiving prosocial and asocial behaviors can affect the state of well-being and trial performance in a mobility environment.
2024 · Sooyeon Kim et al. · External HMI (eHMI) — Communication with Pedestrians & Cyclists · Teleoperated Driving · Micromobility (E-bike, E-scooter) Interaction · AutoUI

Embodied Mediation in Group Ideation – A Gestural Robot Can Facilitate Consensus-Building
This paper explores how a gesture-based robot influences human-human interaction in group ideation. The robot was mounted on a whiteboard and responded with six different gestures (e.g., nodding, following speakers with gaze) to specific situations. We coded the participants’ interactions from videos and gathered their experience through post-session interviews. The most frequently invoked robot behavior was following the speaker with gaze. As a result, participants felt socio-emotionally supported and responded by moving the ideation ahead (individual level) and consensus-building (group level). In fact, the groups with the robot showed more consensus-building than the two reference groups without the robot. Participants had different views on the role of the robot in the group, such as active outsider, supportive group member, or assistant; those in the last group tried to use the robot as decision support. All in all, including a robot to mediate human groups seems a promising future application domain.
2024 · Tuan Vu Pham et al. · Hand Gesture Recognition · Social Robot Interaction · Human-Robot Collaboration (HRC) · DIS

So Predictable! Continuous 3D Hand Trajectory Prediction in Virtual Reality
We contribute a novel user- and activity-independent kinematics-based regressive model for continuously predicting ballistic hand movements in virtual reality (VR). Compared to prior work on end-point prediction, continuous hand trajectory prediction in VR enables an early estimation of future events such as collisions between the user’s hand and virtual objects such as UI widgets. We developed and validated our prediction model through a user study with 20 participants. The study collected hand motion data with a 3D pointing task and a gaming task with three popular VR games. Results show that our model achieves a low Root Mean Square Error (RMSE) of 0.80 cm, 0.85 cm, and 3.15 cm for hand positions 100 ms, 200 ms, and 300 ms ahead, respectively, across all users and activities. In pointing tasks, our predictive model achieves an average angular error of 4.0° and 1.5° from the true landing position at 50% and 70% of the way through the movement. A follow-up study showed that the model can be applied to new users and new activities without further training.
2021 · Nisal Menuka Gamage et al. · Hand Gesture Recognition · Full-Body Interaction & Embodied Input · Immersion & Presence Research · UIST

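The abstract does not specify the regression model, so the following is only a generic sketch of the underlying idea: extrapolating a future hand position from recent kinematics and measuring the resulting error. The sampling rate, toy trajectory, and constant-acceleration assumption are all invented for illustration:

```python
import numpy as np

def predict_ahead(positions: np.ndarray, dt: float, horizon: float) -> np.ndarray:
    """Extrapolate the latest hand sample under constant acceleration.

    positions: (n, 3) recent hand positions in metres, sampled every dt seconds.
    Returns the predicted 3D position `horizon` seconds after the last sample.
    """
    v = (positions[-1] - positions[-2]) / dt    # latest finite-difference velocity
    v_prev = (positions[-2] - positions[-3]) / dt
    a = (v - v_prev) / dt                       # latest finite-difference acceleration
    return positions[-1] + v * horizon + 0.5 * a * horizon**2

# Toy 90 Hz trace of a ballistic reach (quadratic in time, invented numbers)
dt = 1 / 90
t = np.arange(0, 0.5, dt)[:, None]
trace = np.hstack([0.8 * t**2, 0.3 * t, np.zeros_like(t)])
pred = predict_ahead(trace[:30], dt, horizon=0.1)   # predict 100 ms ahead
true = trace[29 + round(0.1 / dt)]                  # ground truth 100 ms later
print(f"prediction error: {np.linalg.norm(pred - true) * 100:.2f} cm")
```
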
CameraReady: Assessing the Influence of Display Types and Visualizations on Posture Guidance
Computer-supported posture guidance is used in sports, dance training, expression of art with movements, and learning gestures for interaction. At present, the influence of display types and visualizations has not been investigated in the literature. These factors are important as they directly impact perception and cognitive load, and hence influence the performance of participants. In this paper, we conducted a controlled experiment with 20 participants to compare the use of five display types with different screen sizes: smartphones, tablets, desktop monitors, TVs, and large displays. On each device, we compared three common visualizations for posture guidance: skeletons, silhouettes, and 3D body models. To conduct our assessment, we developed a mobile, cross-platform system that requires only a single camera. Our results show that, compared to a smartphone display, larger displays show a lower error (12%). Regarding the choice of visualization, participants rated 3D body models as significantly more usable than a skeleton visualization.
2021 · Hesham Elsayed et al. · Human Pose & Activity Recognition · Dance & Body Movement Computing · DIS

MoveAE: Modifying Affective Robot Movements Using Classifying Variational Autoencoders
We propose a method for modifying affective robot movements using neural networks. Social robots use gestures and other movements to express their internal states. However, a robot’s interactive capabilities are hindered by the predominant use of a limited set of preprogrammed or hand-animated behaviors, which can be repetitive and predictable, making sustained human-robot interactions difficult to maintain. To address this, we developed a method for modifying existing emotive robot movements using neural networks. We use hand-crafted movement samples and a classifying variational autoencoder trained on these samples. Our method then allows for adjustment of affective movement features by using simple arithmetic in the network’s latent embedding space. We present the implementation and evaluation of this approach and show that editing in the latent space can modify the emotive quality of the movements while preserving recognizability and legibility in many cases. This supports neural networks as viable tools for creating and modifying expressive robot behaviors.
2020 · Michael Suguitan et al. · Human-Robot Collaboration (HRC) · HRI

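The latent-space arithmetic the abstract describes can be sketched roughly as follows. The linear encoder/decoder and the movement data below are toy placeholders, not the paper's classifying VAE:

```python
import numpy as np

rng = np.random.default_rng(7)
W = rng.standard_normal((16, 48))   # toy linear "encoder" weights

# Toy stand-ins for the trained model's encoder/decoder; the paper's
# classifying VAE is a neural network trained on hand-crafted movements.
def encode(movement: np.ndarray) -> np.ndarray:
    return W @ movement              # movement: flattened joint trajectory

def decode(latent: np.ndarray) -> np.ndarray:
    return np.linalg.pinv(W) @ latent

def shift_affect(movement, toward_examples, away_examples, alpha=0.5):
    """Nudge a movement's affect by simple arithmetic in latent space."""
    direction = (np.mean([encode(m) for m in toward_examples], axis=0)
                 - np.mean([encode(m) for m in away_examples], axis=0))
    return decode(encode(movement) + alpha * direction)  # alpha scales the shift

# Hypothetical flattened joint trajectories (48 values each)
happy = [rng.standard_normal(48) for _ in range(5)]
sad = [rng.standard_normal(48) for _ in range(5)]
neutral = rng.standard_normal(48)
happier = shift_affect(neutral, toward_examples=happy, away_examples=sad, alpha=0.7)
```
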
Next Steps for Human-Computer Integration
Human-Computer Integration (HInt) is an emerging paradigm in which computational and human systems are closely interwoven. Integrating computers with the human body is not new; however, we believe that with rapid technological advancements, increasing real-world deployments, and growing ethical and societal implications, it is critical to identify an agenda for future research. We present a set of challenges for HInt research, formulated over the course of a five-day workshop with 29 experts who have designed, deployed, and studied HInt systems. This agenda aims to guide researchers in a structured way toward a more coordinated and conscientious future of human-computer integration.
2020 · Florian Floyd Mueller et al. · Monash University · Brain-Computer Interface (BCI) & Neurofeedback · Technology Ethics & Critical HCI · User Research Methods (Interviews, Surveys, Observation) · CHI

CORA, a Prototype for a Cooperative Speech-Based On-Demand Intersection Assistant
We present the first speech-based advanced driver assistance prototype. It is based on our previously proposed on-demand communication concept for the interaction between the driver and his or her vehicle. Using this concept, drivers can flexibly activate the system via speech whenever they want to receive assistance. We showed in driving simulator studies that an instantiation of this concept as an intersection assistant, supporting the driver in turning left, was well received by drivers and preferred over an alternative, vision-based system. In this paper, we present a prototype implementation and give details on how we adapted it to the intricacies of urban traffic as well as to the shortcomings of current sensor technology in establishing adequate environment perception. The accompanying video gives an impression of the interaction between the driver and the system when cooperatively turning left from a subordinate road into crossing traffic.
2019 · Martin Heckmann et al. · Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS) · Voice User Interface (VUI) Design · AutoUI

PageFlip: Leveraging Page-Flipping Gestures for Efficient Command and Value Selection on Smartwatches
Selecting an item of interest on smartwatches can be tedious and time-consuming as it involves a series of swipe and tap actions. We present PageFlip, a novel method that combines multiple touch operations, such as command invocation and value selection, into a single action for efficient interaction on smartwatches. PageFlip operates with a page-flip gesture that starts by dragging the UI from a corner of the device. We first design PageFlip by examining its key design factors such as corners, drag directions, and drag distances. We next compare PageFlip to a functionally equivalent radial menu and a standard swipe-and-tap method. Results reveal that PageFlip improves efficiency for both discrete and continuous selection tasks. Finally, we demonstrate novel smartwatch interaction opportunities and a set of applications that can benefit from PageFlip.
2018 · Teng Han et al. · University of Manitoba · Foot & Wrist Interaction · Smartwatches & Fitness Bands · CHI

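A rough sketch of how such a corner-initiated gesture might be classified into a command plus a continuous value in one action; the corner size, direction-to-command mapping, and distance-to-value mapping below are invented for illustration, not PageFlip's actual design:

```python
from dataclasses import dataclass

SCREEN = 320    # hypothetical square smartwatch display, px
CORNER = 48     # corner hot-zone size, px (invented)

@dataclass
class Touch:
    x: float
    y: float

def classify_pageflip(down: Touch, up: Touch, commands: list[str]):
    """Map one corner drag to (corner, command, value in [0, 1]).

    The drag must start in a corner; its direction picks the command and
    its distance picks the continuous value, in a single combined action.
    """
    corners = {"top-left": (0, 0), "top-right": (SCREEN, 0),
               "bottom-left": (0, SCREEN), "bottom-right": (SCREEN, SCREEN)}
    start = next((name for name, (cx, cy) in corners.items()
                  if abs(down.x - cx) < CORNER and abs(down.y - cy) < CORNER), None)
    if start is None:
        return None                                  # not a page-flip gesture
    dx, dy = up.x - down.x, up.y - down.y
    command = commands[0] if abs(dx) >= abs(dy) else commands[1]
    value = min((dx**2 + dy**2) ** 0.5 / SCREEN, 1.0)
    return start, command, value

print(classify_pageflip(Touch(5, 5), Touch(200, 40), ["volume", "brightness"]))
```
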
Vibrational Artificial Subtle Expressions: Conveying System’s Confidence Level to Users by Means of Smartphone Vibration
Artificial subtle expressions (ASEs) are machine-like expressions used to convey a system's confidence level to users intuitively. So far, auditory ASEs using beep sounds, visual ASEs using LEDs, and motion ASEs using robot movements have been implemented and shown to be effective. In this paper, we propose a novel type of ASE that uses vibration (vibrational ASEs). We implemented the vibrational ASEs on a smartphone and conducted experiments to confirm whether they can convey a system’s confidence level to users in the same way as the other types of ASEs. The results clearly showed that vibrational ASEs were able to accurately and intuitively convey the designed confidence level to participants, demonstrating that ASEs can be applied in a variety of applications in real environments.
2018 · Takanori Komatsu et al. · Meiji University · Vibrotactile Feedback & Skin Stimulation · Explainable AI (XAI) · CHI

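The abstract does not give the vibration parameters, so the mapping below is purely illustrative of the general idea of encoding a confidence level in a pulse pattern; the threshold and timings are invented:

```python
def confidence_to_pattern(confidence: float) -> list[int]:
    """Map system confidence in [0, 1] to alternating off/on durations (ms).

    Invented mapping for illustration: a confident system emits only the
    main buzz, while an unsure one appends a subtle trailing cue, echoing
    how other ASE modalities attach a secondary "subtle" expression.
    """
    base = [0, 200]                    # wait 0 ms, buzz 200 ms
    if confidence >= 0.5:
        return base                    # confident: plain buzz only
    return base + [120, 60, 120, 60]   # unsure: faint trailing pulses

# Such off/on lists can be fed to platform vibration APIs that accept
# waveform timings (e.g., Android's Vibrator), wiring left to the reader.
print(confidence_to_pattern(0.9))   # [0, 200]
print(confidence_to_pattern(0.3))   # [0, 200, 120, 60, 120, 60]
```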