A Framework for Efficient Development and Debugging of Role-Playing Agents with Large Language ModelsWe propose a framework that leverages large language models (LLMs) to semi-automate the development and debugging of role-playing agents, reducing the need for extensive manual effort. Role-playing agents powered by LLMs offer scalable solutions that enhance communication and interaction in various applications, such as employee training, healthcare, and software development. However, creating prompts manually is a time-consuming process, and sequential debugging increases the difficulty of anticipating conversation flow, resulting in increased cognitive load. Our framework addresses these challenges by generating and summarizing dialogue examples, providing a clearer overview of conversation flow and reducing mental workload. It also enhances role-playing quality by mitigating LLMs’ tendency to produce generic or vague responses. In a user study, the proposed method significantly improved perceived workload, with improvements on five of the six NASA-TLX dimensions. Moreover, it can generate agents comparable to those created with expertly crafted prompts. This framework is model-agnostic, enabling integration of advancements in LLM capabilities and prompting techniques, and is applicable to diverse domains.2025HTHirohane Takagi et al.Agent Personality & AnthropomorphismHuman-LLM CollaborationAI-Assisted Creative WritingIUI
Creating with Care: Co-Designing Immersive Experiences through Art-Making with People Living with DementiaThis paper explores the integration of co-design and art-making in developing technologies that support personhood in dementia care. While technologies for dementia care have advanced, there remains a gap in creating solutions that are directly informed by the experiences of people living with dementia and support their individuality. In collaboration with the specialist arts organisation Bright Shadow CIO, our work involves engaging people living with dementia in the design process. Over five weeks of co-design sessions, 44 participants worked alongside artists to craft four physical boxes that represent ``meaningful places.'' The physical boxes were then transformed into VR environments, allowing participants to immerse themselves in and interact with their creations from a first-person perspective. Our findings demonstrate that VR alone is insufficient in dementia care. For VR to be meaningful, it must be part of a broader intervention that includes trust-building, sensory engagement, and creative involvement. Within this process, art-making serves as both a method and medium, providing a means of self-expression and connection to identity. Our findings challenge conventional approaches to dementia-focused VR, advocating for a shift toward inclusive and care-driven technology design.2025SPSophia Ppali et al.CYENS Centre of ExcellenceVR Medical Training & RehabilitationEmpowerment of Marginalized GroupsCHI
Selfrionette: A Fingertip Force-Input Controller for Continuous Full-Body Avatar Manipulation and Diverse Haptic InteractionsWe propose Selfrionette, a controller that uses fingertip force input to drive avatar movements in virtual reality (VR). This system enables users to interact with virtual objects and walk in VR using only fingertip force, overcoming physical and spatial constraints. Additionally, by fixing users' fingers, it provides counterforces equivalent to the applied force, allowing for diverse, wide-dynamic-range haptic feedback by adjusting the relationship between force input and virtual movement. To evaluate the effectiveness of the proposed method, this paper focuses on hand interaction as a first step. In User Study 1, we measured usability and embodiment during reaching tasks under Selfrionette, body tracking, and finger tracking conditions. In User Study 2, we investigated whether users could perceive haptic properties such as weight, friction, and compliance under the same conditions as User Study 1. Selfrionette was found to be comparable to body tracking in realism of haptic interaction, enabling embodied avatar experiences even in limited spatial conditions.2024THTakeru Hashimoto et al.Force Feedback & Pseudo-Haptic WeightFull-Body Interaction & Embodied InputUIST
ShareYourReality: Investigating Haptic Feedback and Agency in Virtual Avatar Co-embodimentVirtual co-embodiment enables two users to share a single avatar in Virtual Reality (VR). During such experiences, the illusion of shared motion control can break during joint-action activities, highlighting the need for position-aware feedback mechanisms. Drawing on the perceptual crossing paradigm, we explore how haptics can enable non-verbal coordination between co-embodied participants. In a within-subjects study (20 participant pairs), we examined the effects of vibrotactile haptic feedback (None, Present) and avatar control distribution (25-75%, 50-50%, 75-25%) across two VR reaching tasks (Targeted, Free-choice) on participants’ Sense of Agency (SoA), co-presence, body ownership, and motion synchrony. We found (a) lower SoA in the free-choice task with haptics than without, (b) higher SoA during the shared targeted task, (c) co-presence and body ownership were significantly higher in the free-choice task, and (d) players’ hand motions synchronized more in the targeted task. We provide cautionary considerations for including haptic feedback mechanisms in avatar co-embodiment experiences.2024KVKarthikeya Puttur Venkatraj et al.Centrum Wiskunde & Informatica, Delft University of TechnologyMid-Air Haptics (Ultrasonic)Social & Collaborative VRIdentity & Avatars in XRCHI
Presentation of Robot-Intended Handover Position using Vibrotactile Interface during Robot-to-Human Handover TaskAdvancements in robot autonomy and safety enable close interactions such as object handovers with a human. During robot-to-human handovers in assembly tasks, the robot considers the human's state to determine the optimal handover position and timing. However, humans may struggle to focus on their primary tasks due to the need to track the robot's movement. This research aims to develop a vibrotactile interface that helps humans maintain focus on their primary tasks while receiving objects. The interface conveys the robot-intended handover position on the human's forearm in polar coordinates, displaying the angular direction and distance relative to the human hand via vibrations. Experimental results demonstrated that this method allowed participants to receive objects with faster reaction and completion times. Subjective evaluations revealed a perception of improved performance and reduced mental workload compared to the baseline, making the robot-to-human handover smoother and less distracting.2024MZMuhammad Akmal Bin Mohammed Zaffir et al.Vibrotactile Feedback & Skin StimulationHuman-Robot Collaboration (HRC)HRI
eat2pic: An Eating-Painting Interactive System to Nudge Users into Making Healthier Diet ChoicesGiven the complexity of human eating behaviors, developing interactions to change the way users eat or their choice of meals is challenging. In this study, we propose an interactive system called eat2pic designed to encourage healthy eating habits such as adopting a balanced diet and eating more slowly, by reframing the task of selecting meals into that of adding color to landscape pictures. The eat2pic system comprises a sensor-equipped chopstick (one of a pair) and two types of digital canvases. It provides fast feedback by recognizing a user's eating behavior in real time and displaying the result on a small canvas called "one-meal eat2pic." Moreover, it also provides slow feedback by displaying the number of colors of foods that the user consumed on a large canvas called "one-week eat2pic." The former was designed and implemented as a guide to help people eat more slowly, and the latter to encourage people to select more balanced menus. Through two user studies, we explored the experience of interaction with eat2pic, in which users' daily eating behavior was reflected in a series of "paintings," that is, images produced by the automated system. The experimental results suggest that eat2pic may provide an opportunity for reflection in meal selection and while eating, as well as assist users in becoming more aware of how they are eating and how balanced their daily meals are. We expect this system to inspire users' curiosity about different diets and ways of eating. This research also contributes to expanding the design space for products and services related to dietary support. https://doi.org/10.1145/35807842023YNYugo Nakamura et al.Haptic WearablesDiet Tracking & Nutrition ManagementUbiComp
The "Conversation" about Loss: Understanding How Chatbot Technology was Used in Supporting People in Grief.While conversational agents have traditionally been used for simple tasks such as scheduling meetings and customer service support, recent advancements have led researchers to examine their use in complex social situations, such as providing emotional support and companionship. For mourners who may be vulnerable to loneliness and disruption of self-identity, such technology offers a unique way to help them cope with grief. In this study, we explore the potential benefits and risks of such a practice through semi-structured interviews with 10 mourners who actively used chatbots at different phases of their loss. Our findings indicated seven approaches in which chatbots were used to help people cope with grief, taking on roles such as listener, simulation of the deceased, romantic partner, friend, and emotion coach. We then highlight how interacting with the chatbots impacted mourners’ grief experience, and conclude the paper with further research opportunities.2023AXAnna Xygkou et al.University of KentConversational ChatbotsMental Health Apps & Online Support CommunitiesEmpowerment of Marginalized GroupsCHI
Para Cima y Pa’ Abajo: Building Bridges Between HCI Research in Latin America and in the Global NorthThe Human-computer Interaction (HCI) community has the opportunity to foster the integration of research practices across the Global South and North to begin overcoming colonial relationships. In this paper, we focus on the case of Latin America (LATAM), where initiatives to increase the representation of HCI practitioners lack a consolidated understanding of the practices they employ, the factors that influence them, and the challenges that practitioners face. To address this knowledge gap, we employ a mixed-methods approach, comprising a survey (66 respondents) and in-depth interviews (19 interviewees). Our analyses characterize a set of research perspectives on how HCI is practiced in/about LATAM; a set of driving forces and tensions with a heavy reliance on diasporic dynamics; and a set of professional demands and associated structural limitations. We also offer a roadmap towards building connections across HCI communities, in an attempt to rebuild HCI as a pluriverse.2023PRPedro Reynolds-Cuéllar et al.MITInclusive DesignDeveloping Countries & HCI for Development (HCI4D)CHI
ModularHMD: A Reconfigurable Mobile Head-Mounted Display Enabling Ad-hoc Peripheral Interactions with the Real WorldWe propose ModularHMD, a new mobile head-mounted display concept, which adopts a modular mechanism and allows a user to perform ad-hoc peripheral interaction with real-world devices or people during VR experiences. ModularHMD comprises a central HMD and three removable module devices installed in the periphery of the HMD cowl. Each module has four main states: occluding, extended VR view, video see-through (VST), and removed/reused. Among different combinations of module states, a user can quickly set up the necessary HMD forms, functions, and real-world views for ad-hoc peripheral interactions without removing the headset. For instance, an HMD user can see her surroundings by switching a module into the VST mode. She can also physically remove a module to obtain direct peripheral views of the real world. The removed module can be reused as an instant interaction device (e.g., a touch keyboard) for subsequent peripheral interactions. Users can end the peripheral interaction and revert to a full VR experience by re-mounting the module. We design ModularHMD’s configuration and peripheral interactions with real-world objects and people. We also implement a proof-of-concept prototype of ModularHMD to validate its interaction capabilities through a user study. Results show that ModularHMD is an effective solution that enables both immersive VR and ad-hoc peripheral interactions.2021IEIsamu Endo et al.Mixed Reality WorkspacesImmersion & Presence ResearchUIST
How Information Sharing about Care Recipients by Family Caregivers Impacts Family CommunicationPrevious research has shown that tracking technologies have the potential to help family caregivers optimize their coping strategies and improve their relationships with care recipients. In this paper, we explore how sharing the tracked data (i.e., caregiving journals and patient’s conditions) with other family caregivers affects home care and family communication. Although previous works suggested that family caregivers may benefit from reading the records of others, sharing patients’ private information might fuel negative feelings of surveillance and violation of trust for care recipients. To address this research question, we added a sharing feature to the previously developed tracking tool and deployed it for six weeks in the homes of 15 family caregivers who were caring for a depressed family member. Our findings show how the sharing feature attracted the attention of care recipients and helped the family caregivers discuss sensitive issues with care recipients.2018NYNaomi Yamashita et al.NTT Communication Science LabsElderly Care & Dementia SupportAging-in-Place Assistance SystemsCHI