"Closer than Real": How Social VR Platform Features Influence Friendship Dynamics
Social virtual reality (VR) platforms offer unique features that can foster interpersonal relationships that feel "closer than real." This study investigates how these platform features influence friendship dynamics in social VR. Through semi-structured interviews with 23 Japanese VRChat users, we explored the characteristics of close relationships formed in social VR, the processes of relationship development, and the role of platform features in shaping these dynamics. Our findings reveal that social VR facilitates a form of selective self-presentation and co-presence through embodied avatars and rich environmental contexts, which can lead to rapid and intense friendship formation. Users reported developing close bonds without relying on real-life background information, instead focusing on perceived familiarity and compatibility within the virtual space, highlighted by the avatar's appearance. Furthermore, platform features such as the "join" function, which allows users to teleport to friends' locations, were assigned special meanings by users, contributing to the development of friendships.
2025 · Misato Hide et al. · The University of Tokyo · Social & Collaborative VR; Immersion & Presence Research; Identity & Avatars in XR · CHI
SealMates: Improving Communication in Video Conferencing using a Collective Behavior-Driven Avatar
The limited nonverbal cues and spatially distributed nature of remote communication make it challenging for unacquainted members to be expressive during social interactions over video conferencing. Although video conferencing lets users see others' facial expressions, this visual feedback can instead lead to unexpected self-focus, causing users to miss cues that would let others engage in the conversation equally. To support expressive communication and equal participation among unacquainted counterparts, we propose SealMates, a behavior-driven avatar that infers the engagement level of the group from collective gaze and speech patterns and then moves across interlocutors' windows in the video conference. In a controlled experiment with 15 groups of triads, we found that the avatar's movement encouraged people to experience more self-disclosure and made them perceive everyone as more equally engaged in the conversation than when there was no behavior-driven avatar. We discuss how a behavior-driven avatar influences distributed members' perceptions and the implications of avatar-mediated communication for future platforms.
2024 · Mark Armstrong et al. · Session 4f: Multiplayer Gaming and Communication · CSCW
Understanding and Influencing Responsibility Attribution when Experiencing Technical Issues in Video Conferencing
During video conferencing, technical issues such as network impairments can unexpectedly hinder remote collaboration for distributed teams. However, it remains unclear how such issues affect the impression formation process between unacquainted interlocutors in situations like job interviews or kick-off meetings. Having first encounters online without prior in-person interaction has become prevalent, so examining the impact of technical issues on unacquainted remote interlocutors, and exploring design solutions that reduce their negative impact on impressions, is essential for informing the design of future communication systems. With three controlled experiments simulating an online job interview, we discovered that technical issues made people give low credibility ratings to remote job applicants (Study 1). We then examined two intervention approaches to reduce the negative impact of technical issues in video conferencing: a virtual agent, represented by a human-like avatar, that actively takes responsibility for the technical issues (Study 2), and forewarning messages that inform people about the technical issues (Study 3). Results demonstrated that introducing an agent representing the communication system that actively takes responsibility for technical issues could reduce the responsibility people assign to remote job applicants, but did not increase their credibility ratings of the applicants. Furthermore, forewarning messages, which explicitly made people aware that the cause of the technical issues was unidentifiable and that the issues could negatively affect impression formation, enabled people to rate the credibility of the remote counterpart without being influenced by the technical issues.
We discuss how interventions can be designed to mitigate negative attribution outcomes when encountering uncontrollable technical issues in computer-mediated communication.
2024 · Chi-Lan Yang et al. · Session 3g: Collaborative Technologies: Empathy, Attribution, and Risk · CSCW
People with Disabilities Redefining Identity through Robotic and Virtual Avatars: A Case Study in Avatar Robot Cafe
Robotic avatars and telepresence technology enable people with disabilities to engage in physical work. Despite the recent popularity of the metaverse, few studies have explored the use of virtual avatars and environments by people with disabilities. In this study, seven participants with disabilities who work in a cafe providing remote customer service via robotic avatars were engaged in the development and use of personalized virtual avatars, displayed on a large in-situ screen in combination with the existing physical robots to create a hybrid cyber-physical space. We conducted longitudinal semi-structured interviews to investigate the psychological changes experienced by the participants. The results revealed that mass-produced robotic avatars allowed participants to withhold their disability if they wished, but also backgrounded their identities; by contrast, customized virtual avatars, shaped without physical constraints, highlighted their personalities. The combined use of robotic and virtual avatars complemented each other and can support pilots in redefining their identity.
2024 · Yuji Hatada et al. · The University of Tokyo · Identity & Avatars in XR; Social Robot Interaction; Teleoperation & Telepresence · CHI
Affective Profile Pictures: Exploring the Effects of Changing Facial Expressions in Profile Pictures on Text-Based Communication
When receiving text messages from unacquainted colleagues in fully remote workplaces, insufficient mutual understanding and limited social cues can lead people to misinterpret the tone of a message and further influence their impression of remote colleagues. Emojis are commonly used to support expressive communication; however, people seldom use emojis before they become acquainted with each other. Hence, we explored how changing the facial expressions in profile pictures could serve as an alternative channel for communicating socio-emotional cues. Through an online controlled experiment with 186 participants, we established that changing the facial expressions of profile pictures can influence receivers' impressions of the sender and the perceived valence of neutral messages. Furthermore, pairing incongruent profile pictures with positive messages negatively affected the interpretation of message valence, but had little effect on negative messages. We discuss the implications of affective profile pictures for supporting text-based communication.
2023 · Chi-Lan Yang et al. · The University of Tokyo · Voice User Interface (VUI) Design; Online Identity & Self-Presentation · CHI
MetamorphX: An Ungrounded 3-DoF Moment Display that Changes its Physical Properties through Rotational Impedance Control
Humans can estimate the properties of wielded objects (e.g., inertia and viscosity) from the force applied to the hand. We focused on this mechanism and aimed to represent the properties of wielded objects by dynamically changing the force applied to the hand. We propose MetamorphX, which uses control moment gyroscopes (CMGs) to generate ungrounded, 3-degrees-of-freedom moment feedback. The high-response moments obtained from the CMGs allow the inertia and viscosity of motion to be set to desired values via impedance control. A technical evaluation indicated that our device can generate a moment with a 60-ms delay, and that the inertia and viscosity of motion could be varied by 0.01 kg·m² and 0.1 N·s, respectively. Additionally, we demonstrated that our device can dynamically change the inertia and viscosity of motion in virtual reality applications.
2022 · Takeru Hashimoto et al. · Force Feedback & Pseudo-Haptic Weight; Full-Body Interaction & Embodied Input · UIST
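The impedance-control idea behind MetamorphX — commanding a corrective moment so that a wielded device feels as if it had a chosen inertia and viscosity — can be sketched as follows. This is a minimal illustration, not the paper's controller: the function name, the scalar linear model, and all numeric values are assumptions.

```python
import numpy as np

def impedance_moment(omega, alpha, inertia_target, viscosity_target,
                     inertia_device=0.0):
    """Corrective moment (N·m, 3-vector) applied by the device so the hand
    experiences tau_hand = inertia_target * alpha + viscosity_target * omega.

    omega: measured angular velocity (rad/s), alpha: angular acceleration
    (rad/s^2). inertia_device is the device's own (assumed scalar) inertia,
    which the commanded moment must compensate for.
    """
    omega = np.asarray(omega, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    # Command the extra moment beyond the device's own passive dynamics.
    return -((inertia_target - inertia_device) * alpha
             + viscosity_target * omega)
```

For example, with a target inertia of 0.5 kg·m² and a measured acceleration of 2 rad/s² about one axis, the device must resist with 1 N·m about that axis.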
Designing for Speech Practice Systems: How Do User-Controlled Voice Manipulation and Model Speakers Impact Self-Perceptions of Voice?
Can you speak the way you desire without feeling pressure to conform to standards of speaking? In this study, we investigated the impact of user-controlled voice manipulation and of listening to recordings of model speakers on self-perceptions of voice and speech. Quantitative analysis showed a significant improvement in the perceived confidence of tone from listening to model speakers, but no significant improvement from voice manipulation. Qualitative analysis of interviews revealed that participants responded positively to the visual and auditory feedback provided by the voice manipulation software. Participants also evaluated the quality of the model speakers to decide whether they wanted to refer to them for speech practice. Based on these analyses, we summarize design implications for a speech practice system that would allow further investigation of the system's impact on self-perceptions of speech performance.
2022 · Lisa Orii et al. · University of Washington · Voice User Interface (VUI) Design; Voice Accessibility · CHI
Teardrop Glasses: Pseudo Tears Induce Sadness in You and Those Around You
Emotional contagion is a phenomenon in which one's emotions are transmitted among individuals unconsciously through observing others' emotional expressions. In this paper, we propose a method for mediating people's emotions by triggering emotional contagion through artificial bodily changes such as pseudo tears. We focused on shedding tears because tears are linked to several emotions besides sadness and, being observable by others, can be expected to induce emotional contagion. We designed an eyeglasses-style wearable device, Teardrop glasses, that releases water drops near the wearer's eyes. The drops flow down the cheeks and emulate real tears. The study revealed that artificial crying with pseudo tears increased sadness in both wearers and those observing them. Moreover, artificial crying attenuated happiness and positive feelings in observers. Our findings show that actual bodily changes are not necessary for inducing emotional contagion; artificial bodily changes are sufficient.
2021 · Shigeo Yoshida et al. · The University of Tokyo, JST PRESTO · Haptic Wearables; Full-Body Interaction & Embodied Input; Mental Health Apps & Online Support Communities · CHI
Pop-up Print: Rapidly 3D Printing Mechanically Reversible Objects in the Folded State
Despite recent advancements in 3D printing technology that allow users to rapidly produce 3D objects, printing tall and/or large objects still consumes considerable time and a large amount of support material. To address these problems, we propose Pop-up Print, a method to 3D print an object in a compact "folded" state and then unfold it after printing to achieve the final artifact. This method reduces the object's print height and volume, which directly affect printing time and support material consumption. In addition, thanks to the reversibility of folding/unfolding, the printed object's volume can be minimized for storage or transportation when unused and expanded only when in use. To achieve Pop-up Print, we first conducted an experiment on selected printed sample objects with several parameters to determine crease patterns that make both the unfolded and folded states mechanically stable. Based on these results, we developed an interactive design tool to convert 3D models – such as the Stanford Bunny or Huffman's cone – into the folded shape. Our design tool allows users to decide non-intuitive parameters that may affect the form's mechanical stability, while maintaining both functional crease patterns and the object's original form factor. Finally, we demonstrate the feasibility of our method through several examples of folded objects.
2020 · Koya Narumi et al. · Desktop 3D Printing & Personal Fabrication; Shape-Changing Materials & 4D Printing; Customizable & Personalized Objects · UIST
Do You Feel Like Passing Through Walls? Effect of Self-Avatar Appearance on Facilitating Realistic Behavior in Virtual Environments
Preventing users from walking through virtual boundaries (e.g., walls) is an important issue in room-scale virtual environments (VEs), given safety and design limitations. Sensory feedback from wall collisions has been shown to be effective; however, it can disrupt immersion. We hypothesized that a greater sense of presence would discourage users from walking through walls, and conducted a two-factor between-subjects experiment (N = 92) controlling the anthropomorphism (realistic or abstract) and visibility (full-body or hands-only) of self-avatars. We analyzed the participants' behaviors and the moment they first penetrated a wall in game-like VEs that gradually instigated participants to penetrate the walls. The results showed that the realistic full-body self-avatar was the most effective at discouraging participants from penetrating the walls. Furthermore, participants with lower presence tended to walk through the walls sooner. This study can contribute to applications that require realistic user responses in VEs.
2020 · Nami Ogawa et al. · The University of Tokyo · Immersion & Presence Research; Identity & Avatars in XR · CHI
PaCaPa: A Handheld VR Device for Rendering Size, Shape, and Stiffness of Virtual Objects in Tool-based Interactions
We present PaCaPa, a handheld device that renders haptics on the user's palm when the user interacts with virtual objects through virtual tools such as a stick. PaCaPa is a cuboid device with two wings that open and close. As the user's stick makes contact with a virtual object, the wings open by a specific angle to dynamically change the pressure on the palm and fingers. The opening angle of the wings is calculated from the angle between the virtual stick and the hand direction; the deeper the stick bites into the target object, the larger the generated force. Our device enables three kinds of rendering: size, shape, and stiffness. We conducted user studies to evaluate the performance of our device and evaluated it in two application scenarios. User feedback and qualitative ratings indicated that our device can make indirect interaction with handheld tools more realistic.
2019 · Yuqian Sun et al. · The University of Tokyo · Force Feedback & Pseudo-Haptic Weight; Shape-Changing Interfaces & Soft Robotic Materials · CHI
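PaCaPa's mapping — wing opening angle derived from the angle between the virtual stick and the hand direction — could be sketched as below. This is an illustrative assumption: the function name, the clamped linear mapping, and the 60° maximum are not taken from the paper.

```python
import numpy as np

def wing_open_angle(stick_dir, hand_dir, max_angle_deg=60.0):
    """Map the angle between the virtual stick and the hand direction
    (both 3-vectors) to a wing opening angle in degrees, clamped to the
    wings' assumed mechanical limit max_angle_deg."""
    s = np.asarray(stick_dir, dtype=float)
    h = np.asarray(hand_dir, dtype=float)
    # Angle between the two directions via the normalized dot product.
    cos_t = np.dot(s, h) / (np.linalg.norm(s) * np.linalg.norm(h))
    theta = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    return min(theta, max_angle_deg)
```

With this mapping, a stick aligned with the hand produces closed wings (0°), while a stick pushed perpendicular to the hand saturates the wings at their limit.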
Transcalibur: A Weight Shifting Virtual Reality Controller for 2D Shape Rendering based on Computational Perception Model
Humans can estimate the shape of a wielded object from the illusory feeling of the object's mass properties sensed through the hand. Although the shape of hand-held objects influences immersion and realism in virtual reality (VR), it is difficult to design VR controllers that render desired shapes through such illusory mass-property perception. We propose Transcalibur, a hand-held VR controller that can render a 2D shape by changing its mass properties on a 2D planar area. We built a computational perception model using a data-driven approach on collected pairs of mass properties and perceived shapes. This enables Transcalibur to easily and effectively provide convincing shape perception based on complex illusory effects. Our user study showed that the system succeeded in providing the perception of various desired shapes in a virtual environment.
2019 · Jotaro Shigeyama et al. · The University of Tokyo · Force Feedback & Pseudo-Haptic Weight; Full-Body Interaction & Embodied Input; Immersion & Presence Research · CHI
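The data-driven step above — inverting collected (mass properties, perceived shape) pairs to find the configuration that evokes a target shape — can be sketched with a nearest-neighbor lookup. All values and names here are illustrative assumptions; the paper's actual model and data are not reproduced.

```python
import numpy as np

# Hypothetical collected pairs (illustrative values only).
# mass_props rows: weight position (x, y) in meters on the controller plane.
# perceived rows: the (length, width) in meters of the shape users reported.
mass_props = np.array([[0.05, 0.00],
                       [0.10, 0.00],
                       [0.05, 0.03]])
perceived = np.array([[0.20, 0.05],
                      [0.35, 0.05],
                      [0.20, 0.12]])

def configuration_for(target_shape):
    """Invert the perception model: return the mass-property configuration
    whose recorded perceived shape is closest to the target shape."""
    dists = np.linalg.norm(perceived - np.asarray(target_shape, dtype=float),
                           axis=1)
    return mass_props[np.argmin(dists)]
```

A regression model fitted on the pairs would generalize between samples; nearest neighbor is used here only to keep the inverse-lookup idea visible.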