When Group Spirit Meets Personal Journeys: Exploring Motivational Dynamics and Design Opportunities in Group Therapy
Psychotherapy, such as cognitive-behavioral therapy (CBT), is effective in treating various mental disorders. Technology-facilitated mental health therapy improves client engagement through methods like digitization or gamification. However, these innovations largely cater to individual therapy, ignoring the potential of group therapy, a treatment for multiple clients concurrently that exposes individual clients to various perspectives during treatment and also addresses the scarcity of healthcare practitioners, reducing costs. Notwithstanding its cost-effectiveness and unique social dynamics that foster peer learning and community support, group therapy, such as group CBT, faces the issue of attrition. While existing medical work has developed guidelines for therapists, such as establishing leadership and empathy to facilitate group therapy, an understanding of the interactions among stakeholders is still missing. To bridge this gap, this study examined a group CBT program, the Serigaya Methamphetamine Relapse Prevention Program (SMARPP), as a case study to understand stakeholder coordination and communication, along with the factors promoting and hindering continuous engagement in group therapy. In-depth interviews with eight facilitators and six former clients from SMARPP revealed the motivators and demotivators for facilitator-facilitator, client-client, and facilitator-client communication. Our investigation uncovers discernible conflicts between clients' intrapersonal and interpersonal motivations in the context of group therapy, viewed through the lens of self-determination theory. We discuss insights and research opportunities for the HCI community to mediate such tension and enhance stakeholder communication in future technology-assisted group therapy settings.
2025 · Shixian Geng et al. · Caring at a Distance · CSCW

Understanding Collaboration between Professional Designers and Decision-making AI: A Case Study in the Workplace
The rapid development of artificial intelligence (AI) has fundamentally transformed creative work practices in the design industry. Existing studies have identified both opportunities and challenges in creative practitioners' collaboration with generative AI to facilitate effective human-AI co-creation in the workplace. However, there is still a limited understanding of designers' collaboration with AI that supports creative aspects distinct from those at which generative AI excels. To address this gap, this study focuses on designers' collaboration with decision-making AI, which supports the convergent process in the creative workflow, as opposed to the divergent process supported by generative AI. Specifically, we conducted a case study at an online advertising design company that introduced an AI tool predicting the effectiveness of advertising designs and incorporated it into the design workflow as decision-making support, exploring how professional graphic designers at the company perceive AI's impact on their work practice. Findings from interviews with 12 designers identified how they trust and rely on the AI, its benefits, and its challenges, including how they navigate those challenges. Based on the findings, we discuss recommendations for designing and introducing such decision-making AI into the creative design workflow in the workplace.
2025 · Nami Ogawa et al. · Human-AI (and Robot!) Collaboration · CSCW

Poet-Weaver: Reflecting on Communication Failure in Personal Relationships With Stylized AI-Generated Conversation Digests
Interpersonal communication often involves navigating social challenges like managing expectations and conveying intentions. In practical contexts like productivity and the navigation of social boundaries, agent- and AI-mediated communication (AIMC) has served as an effective social intermediary. Communication within personal relationships, especially between intercultural friends from different backgrounds, can also face significant challenges, such as miscommunication and expressive suppression. However, AIMC remains underutilized in the context of established personal relationships due to ethical concerns about agency and authenticity. We propose designing openly interpretable AIMC output as an augmented context cue to reduce its active social involvement, balancing AI support with the preservation of relational agency. We developed and evaluated Poet-Weaver, a Discord text chat plugin that presents AI-generated insights on user conversations in a stylized, interpretable way to encourage reflection on communication challenges. We conducted a mixed-methods study with 30 intercultural friend pairs to assess Poet-Weaver's impact. Findings showed that Poet-Weaver effectively helped participants address communication failures, although AIMC still influenced users' behavior even with openly interpretable output. We recommend future uses of AIMC that support transformative, positive relationship changes while preserving individual responsibility and identity within relationships.
2025 · Seraphina Yong et al. · Communicating With/Through AI · CSCW

SoilSense: Appropriating Soil-based Microbial Fuel Cells to Create Tangible Interfaces
Soil-based Microbial Fuel Cells (SMFCs) offer a sustainable method for powering low-energy computing devices by harnessing electricity from microbial activity in soil. In this paper, we introduce SoilSense, a novel approach that repurposes SMFCs as tangible interfaces, transforming soil into an interactive, computationally responsive medium rather than merely an energy source. We explore the voltage variations that occur when pressure is applied to the cathode and systematically characterize this mechanism across different electrode configurations and soil moisture levels. To demonstrate the feasibility of SMFC-based interfaces, we present a series of modular, proof-of-concept prototypes that support diverse interaction modalities. We further illustrate how SoilSense enables interaction through example applications, and we discuss implications and a vision for future studies that employ soil as an ecologically compatible material in interactive system design.
2025 · Shuto Takashita et al. · Shape-Changing Materials & 4D Printing · Ecological Design & Green Computing · Energy Conservation Behavior & Interfaces · UIST

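The pressure-sensing idea described above (a voltage variation when the cathode is pressed) can be sketched as a simple baseline-deviation detector. This is an assumed simplification for illustration, not the authors' pipeline; the function name, window size, and threshold are hypothetical:

```python
# Toy sketch (assumed): pressing the SMFC cathode perturbs its output
# voltage, so a touch event can be detected as a deviation from a slowly
# tracked baseline voltage.

def detect_touch(samples_mv, window=5, threshold_mv=3.0):
    """Flag samples that deviate from the recent baseline by > threshold."""
    events = []
    for i in range(window, len(samples_mv)):
        baseline = sum(samples_mv[i - window:i]) / window
        events.append(abs(samples_mv[i] - baseline) > threshold_mv)
    return events

# Stable ~120 mV output, then a pressure-induced dip at the 8th sample.
trace = [120.1, 120.0, 119.9, 120.2, 120.0, 120.1, 119.8, 112.0, 120.0]
touched = detect_touch(trace)
```

In practice the paper characterizes this mechanism across electrode configurations and soil moisture levels, so any real threshold would need per-cell calibration.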
Imaginary Joint: Proprioceptive Feedback for Virtual Body Extensions via Skin Stretch
Virtual body extensions such as a wing or tail have the potential to offer users new bodily experiences and capabilities in virtual and augmented reality. To use these extensions as naturally as one's own body—particularly for body parts that are normally hard to see, such as a tail—it is essential to provide proprioceptive feedback that allows users to perceive the position, orientation, and force exerted by these parts, rather than relying solely on visual cues. In this study, we propose a novel approach that introduces an "Imaginary Joint" at the interface between the user's actual body and the virtual extension, delivering information about joint flexion and force through skin-stretch feedback. We present a wearable device for skin-stretch feedback and explore information mappings that convey the bending rotation and torque of the Imaginary Joint. The final system presents both types of information simultaneously by superimposing the corresponding skin deformations. Results from a controlled experiment demonstrate that users could identify tail position and force without relying on visual cues, and could do so more effectively than with vibrotactile feedback. Furthermore, the tail was perceived as more embodied than in the vibrotactile condition, resulting in a more naturalistic and intuitive sensation. Finally, we introduce several application scenarios, including Perception of Extended Bodies, Enhanced Bodily Expression, and Body-Mediated Communication, and discuss potential future extensions of this system.
2025 · Shuto Takashita et al. · Haptic Wearables · Shape-Changing Interfaces & Soft Robotic Materials · Dance & Body Movement Computing · UIST

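The superposition of the two mappings can be illustrated with a toy linear model. This is an assumed simplification (the function, ranges, and linearity are hypothetical, not the authors' calibration): bend angle and torque each contribute a skin-stretch displacement, and the two are summed into one actuator command.

```python
# Assumed toy mapping: bend angle and torque of the Imaginary Joint are
# each mapped linearly to a skin-stretch displacement, and the two skin
# deformations are superimposed into a single actuator command.

def stretch_command(bend_deg, torque_nm, max_bend=90.0, max_torque=1.0,
                    max_stretch_mm=6.0):
    """Superimpose angle- and torque-driven stretch into one displacement."""
    bend_part = (bend_deg / max_bend) * (max_stretch_mm / 2)
    torque_part = (torque_nm / max_torque) * (max_stretch_mm / 2)
    return bend_part + torque_part

# A half-bent tail (45 deg) pushing with half the maximum torque.
cmd = stretch_command(45.0, 0.5)
```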
MorphKeys: A Reconfigurable Keyboard System Using Switched NFC Tags
Keyboards are widely used as an input method for computers because they enable fast text entry and shortcut usage. However, their shape is fixed and often not optimized for human use; even keyboards designed based on ergonomics cannot fully accommodate individual differences and varying use cases. In this paper, we propose MorphKeys, a keyboard that allows users to freely arrange keys in three dimensions. Each key in MorphKeys is an independent, battery-free key module. These modules are powered and read via near-field communication (NFC) only while the user presses a key, enabling key input detection. Additionally, by incorporating relay resonators in the NFC reader, we achieve both a sufficient reading range and high reading performance. Furthermore, we employ a base made of clay and iron sand, which enables magnetic fixation of the keys without interfering with NFC operation. Our implemented prototype demonstrates that a large number of keys can be arranged three-dimensionally and that the system functions properly as a keyboard.
2025 · Koki Yamagami et al. · Shape-Changing Interfaces & Soft Robotic Materials · Circuit Making & Hardware Prototyping · UIST

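The sensing principle lends itself to a very small decoding sketch. This is an assumed simplification (tag IDs and the mapping are hypothetical): because each battery-free module is only powered, and hence readable, while its key is pressed, a single poll of visible NFC tags yields exactly the set of pressed keys.

```python
# Toy model of MorphKeys' sensing principle (assumed simplification): each
# key module's NFC tag is readable only while that key is pressed, so the
# set of tags the reader currently sees is the set of pressed keys.

def pressed_keys(visible_tag_ids, tag_to_key):
    """Translate the reader's currently visible tag IDs into key labels."""
    return sorted(tag_to_key[t] for t in visible_tag_ids if t in tag_to_key)

# Hypothetical layout: the user has placed three key modules on the base.
layout = {"tag01": "A", "tag02": "B", "tag03": "Enter"}
keys = pressed_keys({"tag02", "tag03"}, layout)  # tags seen this poll
```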
Ultra-low-power ring-based wireless tinymouse
Wireless mouse rings offer subtle, reliable pointing interactions for wearable computing platforms. However, the small battery (below 27 mAh) in such miniature rings restricts the ring's continuous lifespan to just 1-10 hours, because even low-power wireless communication such as BLE consumes too much power for the ring's continuous use. The ring's short lifespan frequently disrupts mouse use with the need for frequent recharging. This paper presents picoRing mouse, which enables continuous ring-based mouse interaction through ultra-low-power ring-to-wristband wireless connectivity. picoRing mouse employs coil-based impedance sensing, named semi-passive inductive telemetry, allowing a wristband coil to capture the unique frequency response of a nearby ring coil via sensitive inductive coupling between the coils. The ring coil converts the user's mouse input into this unique frequency response via a mouse-driven modulation system consuming up to 449 µW. As a result, picoRing mouse can last approximately 600 hours (8 h of use/day) to 1,000 hours (4 h of use/day) on a single charge of a 27 mAh battery while supporting subtle thumb-to-index scrolling and pressing interactions in real-world wearable computing situations.
2025 · Dongchi Li et al. · Foot & Wrist Interaction · Context-Aware Computing · UIST

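The readout side of semi-passive inductive telemetry can be sketched as peak-finding over a frequency sweep. This is a toy illustration, not the authors' implementation; the frequencies, states, and function names are hypothetical assumptions: pressing or scrolling switches the ring's resonant circuit, which shifts the peak the wristband coil observes.

```python
# Toy sketch (assumed): the wristband sweeps frequencies and measures the
# coupling magnitude; the ring's input state changes its resonant circuit,
# so the location of the resonant peak encodes the user's mouse input.

def detect_ring_input(freqs_mhz, response, states):
    """Map the peak of a measured frequency response to an input label.

    freqs_mhz: swept frequencies; response: measured coupling magnitude;
    states: dict mapping nominal resonant frequency (MHz) -> input label.
    """
    peak_idx = max(range(len(response)), key=lambda i: response[i])
    peak_freq = freqs_mhz[peak_idx]
    # Classify by the nearest nominal resonance.
    return min(states.items(), key=lambda kv: abs(kv[0] - peak_freq))[1]

# Example with two hypothetical resonances for "scroll" and "press".
freqs = [20 + 0.1 * i for i in range(100)]           # 20-30 MHz sweep
resp = [1 / (1 + (f - 27.0) ** 2) for f in freqs]    # peak near 27 MHz
label = detect_ring_input(freqs, resp, {25.0: "scroll", 27.0: "press"})
```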
Examining Input Modalities and Visual Feedback Designs in Mobile Expressive Writing
Expressive writing is an established approach for stress management. Recently, information technologies, such as smartphones, have also been explored for expressive writing. Although mobile interfaces have the potential to support various daily writing activities, interface designs for mobile expressive writing and their effects on stress relief still lack empirical understanding. We examined the interface design of mobile expressive writing by investigating, through field studies, the influence of input modalities and visual feedback designs on usability and perceived cathartic effects. While our studies confirmed the stress-relieving effects of mobile expressive writing, our results also offer important insights into interface design. We found keyboard-based text entry better suited to, and preferred over, voice messages for its privacy and reflective nature. Participants expressed different reasons for preferring different post-writing visual feedback depending on the cause and type of stress. This work advances interface design for mobile expressive writing and deepens understanding of its effects.
2025 · Shunpei Norihama et al. · Voice User Interface (VUI) Design · Mental Health Apps & Online Support Communities · MobileHCI

Multimodal Silent Speech-based Text Entry with Word-initials Conditioned LLM
Although silent speech interfaces exhibit great potential for seamless communication between humans and conversational agents, large-vocabulary recognition remains challenging for them. In this research, we propose a novel interaction technique that combines silent speech and typing to enable more efficient text entry while preserving privacy. The technique allows users to enter abbreviated phrases while still ensuring high accuracy by leveraging visual information. By fine-tuning a large language model with a visual speech encoder, we condition the model to decode the speech content with word initials as hints. Evaluations on existing datasets show that our model reduces the Word Error Rate from 20.3% to 9.19% compared to state-of-the-art visual speech recognition models. Results from a user study demonstrated significant improvements in input speed and keystroke savings. Participants reported that our prototype, LipType, leads to a lower overall perceived workload, particularly in the effort and physical demand dimensions.
2025 · Zixiong Su et al. · Electrical Muscle Stimulation (EMS) · Hand Gesture Recognition · Human-LLM Collaboration · CUI

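The idea of conditioning on word initials can be illustrated with a trivial reranker over recognition hypotheses. The paper fine-tunes an LLM with a visual speech encoder rather than filtering candidates like this, so treat the sketch below as a conceptual stand-in (the function and example sentences are hypothetical):

```python
# Toy sketch: the user silently speaks a phrase and types each word's
# initial letter; the initials disambiguate among visually similar
# lip-reading hypotheses.

def rerank_by_initials(hypotheses, initials):
    """Keep only hypotheses whose word initials match the typed initials."""
    def matches(sent):
        words = sent.lower().split()
        return (len(words) == len(initials) and
                all(w[0] == c for w, c in zip(words, initials)))
    return [h for h in hypotheses if matches(h)]

# "bet" and "pet" look nearly identical on the lips; typing "p" resolves it.
cands = ["the bet was big", "the pet was big"]
picked = rerank_by_initials(cands, "tpwb")
```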
A Framework for Efficient Development and Debugging of Role-Playing Agents with Large Language Models
We propose a framework that leverages large language models (LLMs) to semi-automate the development and debugging of role-playing agents, reducing the need for extensive manual effort. Role-playing agents powered by LLMs offer scalable solutions that enhance communication and interaction in various applications, such as employee training, healthcare, and software development. However, creating prompts manually is time-consuming, and sequential debugging makes it difficult to anticipate conversation flow, increasing cognitive load. Our framework addresses these challenges by generating and summarizing dialogue examples, providing a clearer overview of conversation flow and reducing mental workload. It also enhances role-playing quality by mitigating LLMs' tendency to produce generic or vague responses. In a user study, the proposed method significantly improved perceived workload on five of the six NASA-TLX dimensions. Moreover, it can generate agents comparable to those created with expertly crafted prompts. The framework is model-agnostic, enabling the integration of advances in LLM capabilities and prompting techniques, and is applicable to diverse domains.
2025 · Hirohane Takagi et al. · Agent Personality & Anthropomorphism · Human-LLM Collaboration · AI-Assisted Creative Writing · IUI

Dynamik: Syntactically-Driven Dynamic Font Sizing for Emphasis of Key Information
In today's globalized world, there are increasing opportunities for individuals to communicate in a common non-native language (a lingua franca). Non-native speakers often listen to foreign languages but may not comprehend them as fully as native speakers do. To aid real-time comprehension, live transcription into subtitles is frequently used in everyday life (e.g., during Zoom conversations, while watching YouTube videos, or on social networking sites). However, simultaneously reading subtitles while listening can increase cognitive load. In this study, we propose Dynamik, a system that reduces reading-induced cognitive load by decreasing the size of less important words and enlarging important ones, thereby enhancing sentence contrast. Our results indicate that Dynamik can reduce certain aspects of cognitive load, specifically perceived performance and effort, and enhance the sense of comprehension among individuals with low English proficiency. We further discuss the method's applicability to other languages, potential improvements, and further research directions.
2025 · Naoto Nishida et al. · Voice User Interface (VUI) Design · Voice Accessibility · IUI

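The core resizing idea can be sketched in a few lines. This is an assumed, minimal illustration (the function, score source, and pixel ranges are hypothetical, and the paper derives importance syntactically rather than from given scores): each word's font size is interpolated between a minimum and maximum according to an importance score in [0, 1].

```python
# Toy sketch (assumed): scale each word's font size between min_px and
# max_px by its importance score, so key words are enlarged and less
# important words shrink, increasing sentence contrast.

def dynamic_font_sizes(words, importance, min_px=10, max_px=22):
    """Return (word, font_size_px) pairs from per-word importance scores."""
    return [(w, round(min_px + s * (max_px - min_px)))
            for w, s in zip(words, importance)]

# Content words get higher scores than function words.
sized = dynamic_font_sizes(
    ["the", "train", "departs", "at", "noon"],
    [0.1, 0.9, 0.8, 0.1, 1.0])
```

A full system would obtain the scores from syntactic analysis of the live transcript and animate size changes smoothly to avoid distracting the reader.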
User-Guided Correction of Reconstruction Errors in Structure-from-Motion
We propose a user-guided method to correct reconstruction errors in Structure-from-Motion (SfM) processes. SfM takes a set of camera images as input and estimates the cameras' poses and three-dimensional point clouds based on keypoint matching. However, scenes with repetitive or similar structures often produce false matches, leading to inaccuracies in camera pose estimation. While automatic methods for removing false matches exist, achieving perfect accuracy with them remains challenging. Conversely, human intervention can ensure high accuracy, but manually identifying and eliminating false matches is a tedious and error-prone process. Our proposed method strikes a balance by introducing a more efficient user-guided approach: users provide approximate camera poses, which the system then uses to detect false matches. Specifically, the system examines overlaps between the view frustums of camera pairs after the user's adjustments, classifying a pair as a false match if no overlap is found. This method leverages the user's recollection of camera movements during scene capture to guide the reconstruction process. An evaluation with test cases and a user study confirm that our technique can efficiently remove false matches and enable accurate reconstruction of camera poses.
2025 · Sotaro Kanazawa et al. · User Research Methods (Interviews, Surveys, Observation) · Computational Methods in HCI · IUI

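The frustum-overlap check at the heart of the method can be sketched in a simplified 2D form (the paper works in 3D; the sector model, sampling scheme, and all names below are assumptions for illustration): each camera's frustum becomes an angular sector with a near/far range, and overlap is approximated by testing sampled points of one frustum against the other.

```python
# Toy 2D sketch (assumed simplification of the 3D check): a camera pair is
# flagged as a false match when their view frustums do not overlap after
# the user's approximate pose adjustment.
import math

def in_frustum(pt, cam):
    """cam = (x, y, heading_rad, half_fov_rad, near, far)."""
    x, y, heading, half_fov, near, far = cam
    dx, dy = pt[0] - x, pt[1] - y
    dist = math.hypot(dx, dy)
    if not (near <= dist <= far):
        return False
    ang = math.atan2(dy, dx) - heading
    ang = (ang + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
    return abs(ang) <= half_fov

def sample_frustum(cam, n=50):
    """Sample points spread over the frustum's angle and depth range."""
    x, y, heading, half_fov, near, far = cam
    pts = []
    for i in range(n):
        a = heading - half_fov + 2 * half_fov * (i % 10) / 9
        r = near + (far - near) * (i // 10) / 4
        pts.append((x + r * math.cos(a), y + r * math.sin(a)))
    return pts

def frustums_overlap(cam_a, cam_b):
    return (any(in_frustum(p, cam_b) for p in sample_frustum(cam_a)) or
            any(in_frustum(p, cam_a) for p in sample_frustum(cam_b)))

# Cameras facing each other share viewing volume (plausible true match);
# cameras facing away share none (flagged as a false match).
facing = frustums_overlap((0, 0, 0.0, 0.5, 0.1, 5), (4, 0, math.pi, 0.5, 0.1, 5))
away = frustums_overlap((0, 0, math.pi, 0.5, 0.1, 5), (4, 0, 0.0, 0.5, 0.1, 5))
```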
Beyond Omakase: Designing Shared Control for Navigation Robots with Blind People
Autonomous navigation robots can increase the independence of blind people but often limit user control—following what is called in Japanese an "omakase" approach where decisions are left to the robot. This research investigates ways to enhance user control in social robot navigation, based on two studies conducted with blind participants. The first study, involving structured interviews (N=14), identified crowded spaces as key areas with significant social challenges. The second study (N=13) explored navigation tasks with an autonomous robot in these environments and identified design strategies across different modes of autonomy. Participants preferred an active role, termed the "boss" mode, where they managed crowd interactions, while the "monitor" mode helped them assess the environment, negotiate movements, and interact with the robot. These findings highlight the importance of shared control and user involvement for blind users, offering valuable insights for designing future social navigation robots.
2025 · Rie Kamikubo et al. · University of Maryland, College of Information · Reproductive & Women's Health · Social Robot Interaction · CHI

MiniMates: Miniature Avatars for AR Remote Meetings within Limited Physical Spaces
Remote meetings using 3D avatars in Augmented Reality (AR) allow effective communication and enable users to retain awareness of their surroundings. However, positioning 3D avatars effectively and consistently for all users in AR is challenging, since most spaces, such as offices or living rooms, are not large enough to accommodate multiple life-sized avatars without interference. To address this issue, we contribute MiniMates, a novel approach leveraging miniature avatars, which makes it possible to place multiple remote users in a limited physical space. We see MiniMates as complementary to traditional 2D video conferencing and immersive telepresence. Our approach automatically adjusts the formation of avatars and redirects users' head and body orientation to facilitate communication. Results from our user study (n = 24) show that participants experience a higher sense of co-presence compared to video conferencing, and that MiniMates enabled them to communicate the direction of their interactions non-verbally as well as manage multiple simultaneous conversations.
2025 · Akihiro Kiuchi et al. · The University of Tokyo · Social & Collaborative VR · Mixed Reality Workspaces · Context-Aware Computing · CHI

HeadTurner: Enhancing Viewing Range and Comfort of using Virtual and Mixed-Reality Headsets while Lying Down via Assisted Shoulder and Head Actuation
Virtual and mixed reality headsets, such as the Apple Vision Pro and Meta Quest, began supporting use in reclined postures in 2024, accommodating users who prefer or require this position. However, the surfaces on which users rest restrict shoulder and head rotation, reducing viewing range and comfort. A formative study (n=16) comparing usage while standing vs. lying down showed that head rotation range decreased from 261° to 130° horizontally and from 172° to 94.9° vertically. To improve viewing range and comfort, we present HeadTurner, a novel approach that assists user-initiated head rotations by actuating the resting surface to yield along the pitch and yaw axes. In a user study (n=16), HeadTurner significantly expanded the field of view and improved comfort compared to a fixed surface. Although VR sickness was slightly reduced with HeadTurner, the difference was not statistically significant. Overall, HeadTurner was preferred by 75% of participants. Although our proof-of-concept device was prototyped as a bed, the approach can be extended to more compact and affordable form factors, such as motorized reclining chairs, offering the potential for comfortable use of VR and MR headsets over extended periods; it also inspired participants to propose further applications for back-rested scenarios.
2025 · En-Huei Wu et al. · National Taiwan University, HCI Lab · Mixed Reality Workspaces · Immersion & Presence Research · CHI

SpineLoft: Interactive Spine-based 2D-to-3D Modeling
3D artists (professionals and novices alike) often take inspiration from sketches or photos to guide their designs. Yet existing modeling systems are not tailored to fully make use of such input. Consequently, significant effort and expertise are needed when creating model prototypes or exploring design options. In this work, we introduce a system to support the exploratory modeling process by enabling the transformation of 2D image elements into geometric 3D objects. Our solution relies on a novel d2 distance function supporting a region-based lofting process, and delivers easily editable 3D geometric "spine-rib" representations. The user draws a spine, and the system generates and modifies a generalized cylinder around it, taking image edges into account. The proposed approach, driven by simple user-drawn scribbles, can robustly handle various image sources, ranging from photos to hand-drawn content.
2025 · Alexandre Thiault et al. · Institut Polytechnique de Paris, Telecom Paris · 3D Modeling & Animation · Customizable & Personalized Objects · CHI

FontCraft: Multimodal Font Design Using Interactive Bayesian Optimization
Creating new fonts requires substantial human effort and professional typographic knowledge. Despite the rapid advancement of automatic font generation models, existing methods require users to prepare pre-designed characters in the target style using font-editing software, which poses a problem for non-expert users. To address this limitation, we propose FontCraft, a system that enables font generation without relying on pre-designed characters. Our approach integrates the exploration of a font-style latent space with human-in-the-loop preferential Bayesian optimization and multimodal references, facilitating efficient exploration and enhancing user control. Moreover, FontCraft allows users to revisit previous designs, retracting earlier choices in the preferential Bayesian optimization process. Once users finish editing the style of a selected character, they can propagate it to the remaining characters and refine them further as needed. The system then generates a complete outline font in OpenType format. We evaluated the effectiveness of FontCraft through a user study comparing it to a baseline interface. Results from both quantitative and qualitative evaluations demonstrate that FontCraft enables non-expert users to design fonts efficiently.
2025 · Yuki Tatsukawa et al. · The University of Tokyo, Igarashi Lab · Graphic Design & Typography Tools · Customizable & Personalized Objects · CHI

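The human-in-the-loop exploration can be illustrated with a drastically simplified stand-in: the sketch below is not preferential Bayesian optimization (which models a surrogate over pairwise preferences) but a plain preference-guided search, with all names and parameters hypothetical. Each round presents candidate style vectors, and the user's pick pulls the search center toward it while the exploration radius shrinks.

```python
# Drastically simplified stand-in for FontCraft's human-in-the-loop search
# over a font-style latent space (illustrative only; the paper uses
# preferential Bayesian optimization, not this heuristic).
import random

def preference_search(score, dim=4, rounds=8, n_candidates=5, seed=0):
    """score: stand-in for the user's choice (higher = more preferred)."""
    rng = random.Random(seed)
    center, radius = [0.0] * dim, 1.0
    history = [list(center)]
    for _ in range(rounds):
        # Offer the current design plus fresh variations around it.
        cands = [center] + [[c + rng.uniform(-radius, radius) for c in center]
                            for _ in range(n_candidates)]
        center = max(cands, key=score)   # the simulated user's pick
        radius *= 0.7                    # narrow exploration each round
        history.append(list(center))     # past designs remain revisitable
    return center, history

# A simulated user who prefers styles near the latent point (0.5, ..., 0.5).
best, hist = preference_search(lambda v: -sum((x - 0.5) ** 2 for x in v))
```

Keeping the full history mirrors FontCraft's ability to revisit and retract earlier choices; a real preferential BO loop would instead fit a preference model and pick candidates by an acquisition function.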
XR-penter: Material-Aware and In Situ Design of Scrap Wood Assemblies
Woodworkers have to navigate multiple considerations when planning a project, including available resources, skill level, and intended effort. Do-it-yourself (DIY) woodworkers face these challenges most acutely because of tight material constraints and a desire for custom designs tailored to specific spaces. To address these needs, we present XR-penter, an extended reality (XR) application that supports in situ, material-aware woodworking for casual makers. Our system enables users to design virtual scrap wood assemblies directly in their workspace, encouraging sustainable practices through the use of discarded materials. Users register physical material as virtual twins, manipulate these twins into an assembly in XR (while receiving feedback on material usage and alignment with their surroundings), and preview the cuts needed for fabrication. We conducted a case study and feedback sessions demonstrating that XR-penter supports improvisational workflows in practice, and found that woodworkers who prioritize material-driven and adaptive workflows would benefit most from our system.
2025 · Ramya Iyer et al. · Georgia Institute of Technology · Mixed Reality Workspaces · Shape-Changing Materials & 4D Printing · CHI

Draw2Cut: Direct On-Material Annotations for CNC Milling
Creating custom artifacts with computer numerical control (CNC) milling machines typically requires mastery of complex computer-aided design (CAD) software. To eliminate this user barrier, we introduce Draw2Cut, a novel system that allows users to design and fabricate artifacts by sketching directly on physical materials. Draw2Cut employs a custom drawing language to convert user-drawn lines, symbols, and colors into toolpaths, thereby enabling users to express their creative intent intuitively. Its key features include real-time alignment between material and virtual toolpaths, a preview interface for validation, and an open-source platform for customization. Through technical evaluations and user studies, we demonstrate that Draw2Cut lowers the entry barrier for personal fabrication, enabling novices to create customized artifacts with precision and ease. Our findings highlight the system's potential to enhance creativity, engagement, and accessibility in CNC-based woodworking.
2025 · Xinyue Gui et al. · The University of Tokyo · Desktop 3D Printing & Personal Fabrication · Customizable & Personalized Objects · CHI

Cyberoception: Finding A Painlessly-Measurable New Sense In The Cyberworld Towards Emotion-awareness In Computing
In affective computing, recognizing users' emotions accurately is the basis of affective human-computer interaction. Understanding users' interoception contributes to a better understanding of individually different emotional abilities, which is essential for achieving inter-individually accurate emotion estimation. However, existing interoception measurement methods, such as the heart rate discrimination task, have several limitations, including their dependence on a well-controlled laboratory environment and precision apparatus, making it challenging to monitor users' interoception. This study aims to identify other forms of data that can explain users' interoceptive or similar states in their real-world lives, and proposes the hypothetical concept of "cyberoception," a new sense that (1) has properties similar to interoception in terms of its correlation with other emotion-related abilities, and (2) can be measured using only the sensors embedded in commodity smartphones in users' daily lives. Results from a 10-day in-lab/in-the-wild hybrid experiment reveal "Turn On" (users' subjective sensory perception of how frequently they turn on their smartphones) as a promising cyberoception type.
2025 · Tadashi Okoshi et al. · Keio University, Faculty of Environment and Information Studies · Brain-Computer Interface (BCI) & Neurofeedback · Biosensors & Physiological Monitoring · CHI
