LegisFlow: Enhancing Korean Legal Research with Temporal-Aware LLM Interfaces
In South Korea's statutory law system, legal research faces challenges such as tracking frequent amendments and understanding complex statute relationships. LegisFlow, an AI-powered system, tackles these issues with features such as interactive amendment timelines and inter-statute relationship analysis. Developed from insights gathered from Korean legal experts, it provides intuitive visualizations and context-aware search capabilities. A user study with 10 legal professionals demonstrated that LegisFlow significantly enhances efficiency, reducing task completion times by up to 36% (e.g., 440s vs. 690s in inter-statute comparison, p=0.022) and lowering cognitive load, streamlining workflows with 70% fewer manual steps. LegisFlow sets a new standard for AI-assisted statutory law research, providing a scalable, user-centered solution for professionals in Korea and beyond.
2025 · Junghwan Kim et al. · UIST · Topics: Human-LLM Collaboration, Interactive Data Visualization

BleacherBot: AI Agent as a Sports Co-Viewing Partner
Co-viewing, traditionally defined as watching content together in the same physical space, enhances emotional connections through shared experiences. With the rise of remote viewing during the COVID-19 pandemic, existing solutions, such as second-screen platforms and rule-based AI companions, struggle to facilitate meaningful social interactions. This study explores the potential of Large Language Models, which offer human-like interactions and personalization. Our formative study with ten participants revealed the importance of managing arousal levels, highlighting the need to balance between high- and low-arousal levels across different viewing contexts. Based on these insights, we developed "BleacherBot", a sports co-viewing agent with distinct interaction styles that vary in arousal levels. Our main study with 27 participants demonstrated that matching users' preferred arousal levels with the agent's interaction style significantly enhanced their engagement and overall enjoyment. We propose design guidelines for AI co-viewing agents that consider their role as complements to human social interactions.
2025 · Hyungwoo Song et al. · Seoul National University, Human Centered Computing Lab · CHI · Topics: Conversational Chatbots, Social & Collaborative VR, Human-LLM Collaboration

Cinema Multiverse Lounge: Enhancing Film Appreciation via Multi-Agent Conversations
Advancements in large language models (LLMs) enable the development of interactive systems that enhance user engagement with cinematic content. We introduce Cinema Multiverse Lounge, a multi-agent conversational system where users interact with LLM-based agents embodying diverse film-related personas. We investigate how user interactions with these agents influence their film appreciation. Thirty participants engaged in three discussion sessions, freely selecting persona agents such as film characters, filmmakers, or anonymous audiences. We explored how users composed different combinations of personas, the factors affecting their engagement and interpretation, and how diverse perspectives influenced film appreciation. Results indicate that interactions with varied agents enhanced participants' appreciation by enabling the exploration of multiple viewpoints and fostering deeper narrative engagement. Moreover, the unexpected clashes between different worldviews added a fresh and enjoyable layer to the interactions. Our findings provide empirical insights and design implications for developing multi-agent systems that support enriched media consumption experiences.
2025 · Kyusik Kim et al. · Seoul National University · CHI · Topics: Conversational Chatbots, Agent Personality & Anthropomorphism, Generative AI (Text, Image, Music, Video)

I feel being there, they feel being together: Exploring How Telepresence Robots Facilitate Long-Distance Family Communication
Many families live geographically apart from each other due to work, education, or marriage. Therefore, long-distance families frequently use computer-mediated communication (CMC) tools to stay connected. While CMC tools have significantly improved family communication, they cannot fully mediate social presence. To examine the potential of telepresence robots for improving long-distance family communication, we conducted a two-week qualitative in situ study involving eight families. We analyzed recorded videos of their family interactions and conducted pre- and post-deployment interviews. Our findings highlight telepresence robots' potential as family communication tools, enabling immersive, natural, and dynamic interactions through physical embodiment and autonomy. In particular, we identified five categories of family interaction mediated by telepresence robots: engaging in multi-party family communication, exploring the home, restoring family routines, providing support, and having joint physical activities. Based on our findings, we present design guidelines for leveraging telepresence robots as effective family communication tools.
2024 · Jiyeon Amy Seo et al. · Seoul National University · CHI · Topics: Teleoperation & Telepresence

Enhancing Auto-Generated Baseball Highlights via Win Probability and Bias Injection Method
The automatic generation of sports highlight videos is emerging in both the sports entertainment domain and the research community. Earlier methods for generating highlights rely on visual-audio cues or contextual cues, so they may not capture the overall flow of the game well. In this paper, we propose a technique based on Win Probability Added (WPA), an empirical sabermetric baseball statistic, to generate baseball highlights that better reflect in-game dynamics. Additionally, we introduce methods for generating "biased" highlights toward one team by systematically manipulating WPAs. Through a mixed-method user study with 43 baseball enthusiasts, we found that participants evaluated WPA-based highlights more favorably than existing AI highlights. For (un)favorably biased highlights, the game result (win/loss) was the most dominant factor in user perception, but bias directions and strengths also had nuanced effects. Our work contributes to the development of automated tools for generating customized sports highlights.
2024 · Kieun Park et al. · Seoul National University · CHI · Topics: Recommender System UX, Game UX & Player Behavior, Serious & Functional Games

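The core idea above — ranking plays by WPA magnitude and biasing the ranking toward one team — can be illustrated with a minimal sketch. This is not the paper's implementation; the play data, the `bias_strength` knob, and the simple top-k selection are all illustrative assumptions.

```python
# Illustrative sketch (not the authors' implementation): pick highlight plays
# by |WPA|, with an optional bias that inflates one team's plays.

def select_highlights(plays, top_k=3, bias_team=None, bias_strength=1.0):
    """plays: list of dicts with 'team', 'wpa' (signed win-probability swing),
    and 'desc'. Returns the top_k plays ranked by (optionally biased) |WPA|."""
    def score(play):
        s = abs(play["wpa"])
        # Systematically weight the favored team's plays upward.
        if bias_team is not None and play["team"] == bias_team:
            s *= bias_strength
        return s
    return sorted(plays, key=score, reverse=True)[:top_k]

plays = [
    {"team": "A", "wpa": 0.31, "desc": "go-ahead home run"},
    {"team": "B", "wpa": -0.22, "desc": "bases-loaded strikeout"},
    {"team": "A", "wpa": 0.05, "desc": "leadoff single"},
    {"team": "B", "wpa": -0.18, "desc": "inning-ending double play"},
]
neutral = select_highlights(plays, top_k=2)
biased = select_highlights(plays, top_k=2, bias_team="B", bias_strength=2.0)
```

With no bias, the two largest win-probability swings are chosen; doubling team B's scores swaps the selection toward B's plays, which is the "bias injection" idea in miniature.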
“I Want to Reveal, but I Also Want to Hide”: Understanding the Conflict of Revealing and Hiding Needs in Virtual Study Rooms
Since the COVID-19 pandemic, video conferencing platforms have given rise to new virtual activities, such as virtual study rooms where users share ambient presence via video for study motivation. In virtual study rooms, it can be challenging for users to determine what to reveal and what to hide on camera, as the video needs to strongly convey their presence without revealing more than necessary. In this paper, we investigate the conflicting needs of virtual study room users to reveal and hide on camera, as well as the methods they employ to cope with these needs using video. To this end, we conducted a three-step qualitative study. The first study involved interviews to discover the key user needs that entail the conflict between revealing and hiding. The second study used virtual study room screen analysis to identify the video features that characterize virtual study room videos. In the last study, we employed interviews to associate the video features with the key user needs. Based on these findings, we discuss the effects of studying together that could be applied to a non-physical and non-interactive co-studying environment and the need for further development of video conferencing tools to effectively share ambient presence.
2023 · Soobin Cho et al. · CSCW · Topics: Remote Learning

Trkic G00gle: Why and How Users Game Translation Algorithms
Individuals interact with algorithms in various ways. Users even game and circumvent algorithms to achieve favorable outcomes. This study aims to understand how various stakeholders interact with each other in tricking algorithms, with a focus on online review communities. We employed a mixed-method approach to explore how and why users write machine non-translatable reviews, as well as how those encrypted messages are perceived by those receiving them. We found that users devise tactics to trick the algorithms in order to avoid censorship, mitigate interpersonal burden, protect privacy, and provide authentic information that enables the formation of informative review communities. They apply several linguistic and social strategies in this regard. Furthermore, users perceive encrypted messages as both more trustworthy and more authentic. Based on these findings, we discuss implications for online review communities and content moderation algorithms.
2021 · Seonghyeon Kim et al. · CSCW · Topics: Interpreting and Explaining AI

“I wrote as if I were telling a story to someone I knew.”: Designing Chatbot Interactions for Expressive Writing in Mental Health
Writing about experiences of trauma and other challenges in life is known to provide measurable health benefits. Though writing for an audience may ensure better benefits, confiding one's most troubled memories in others risks social stigma. Conversational agents can provide a virtual audience that ensures privacy and allows social disclosure. To understand the writing experience with an agent, we created Diarybot, a chatbot assistant for expressive writing. We designed two versions, Basic and Responsive, to explore the writing experience with and without bot follow-up interactions compared to a Google Doc baseline. Findings from a 4-day user study with 30 participants reveal that social disclosure with Diarybot can encourage narrative writing, with relative ease and emotional expression in Basic chat. Responsive chat can mediate social acceptance of the bot and provide guidance for self-reflection in the process. We discuss design reflections on social disclosure with agents in pursuit of wellbeing.
2021 · SoHyun Park et al. · DIS · Topics: Conversational Chatbots, Mental Health Apps & Online Support Communities

Understanding How People Reason about Aesthetic Evaluations of Artificial Intelligence
Artificial intelligence (AI) algorithms are making remarkable achievements even in creative fields such as aesthetics. However, whether those outside the machine learning (ML) community can sufficiently interpret or agree with their results, especially in such highly subjective domains, is being questioned. In this paper, we try to understand how different user communities reason about AI algorithm results in subjective domains. We designed AI Mirror, a research probe that tells users the algorithmically predicted aesthetic scores of photographs. We conducted a user study of the system with 18 participants from three different groups: AI/ML experts, domain experts (photographers), and members of the general public. They performed tasks consisting of taking photos and reasoning about AI Mirror's prediction algorithm with think-aloud sessions, surveys, and interviews. The results showed the following: (1) Users understood the AI using their own group-specific expertise; (2) Users employed various strategies to close the gap between their judgments and AI predictions over time; (3) The difference between users' thoughts and AI predictions was negatively related to users' perceptions of the AI's interpretability and reasonability. We also discuss design considerations for AI-infused systems in subjective domains.
2020 · Changhoon Oh et al. · DIS · Topics: Explainable AI (XAI), AI-Assisted Creative Writing

Understanding User Perception of Automated News Generation System
Automated journalism refers to the generation of news articles using computer programs. Although it is widely used in practice, its user experience and interface design remain largely unexplored. To understand user perception of an automated news system, we designed NewsRobot, a research prototype that automatically generated news on major events of the PyeongChang 2018 Winter Olympic Games in real time. It produced six types of news by combining two kinds of content (general/individualized) and three styles (text, text+image, text+image+sound). A total of 30 users participated in using NewsRobot, completing surveys and interviews on their experience. Our findings are as follows: (1) Users preferred individualized news yet considered it less credible, (2) more presentation elements were appreciated, but only if their quality was assured, and (3) NewsRobot was considered factual and accurate yet shallow in depth. Based on our findings, we discuss implications for designing automated journalism user interfaces.
2020 · Changhoon Oh et al. · Carnegie Mellon University · CHI · Topics: Generative AI (Text, Image, Music, Video), AI-Assisted Decision-Making & Automation

Bot in the Bunch: Facilitating Group Chat Discussion by Improving Efficiency and Participation with a Chatbot
Although group chat discussions are prevalent in daily life, they have a number of limitations. When discussing in a group chat, reaching a consensus often takes time, members contribute unevenly to the discussion, and messages are unorganized. Hence, we aimed to explore the feasibility of a facilitator chatbot agent to improve group chat discussions. We conducted a needfinding survey to identify key features for a facilitator chatbot. We then implemented GroupfeedBot, a chatbot agent that facilitates group discussions by managing the discussion time, encouraging members to participate evenly, and organizing members' opinions. To evaluate GroupfeedBot, we performed preliminary user studies that varied across diverse tasks and group sizes. We found that groups with GroupfeedBot appeared to exhibit more diversity of opinion even though there were no differences in output quality or message quantity. Moreover, GroupfeedBot promoted even participation and effective communication in the medium-sized group.
2020 · Soomin Kim et al. · Seoul National University · CHI · Topics: Conversational Chatbots, Agent Personality & Anthropomorphism

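One of the facilitation features described above — encouraging even participation — can be sketched in a few lines. The paper does not publish GroupfeedBot's implementation, so the message format, the `threshold` parameter, and the nudging rule below are all hypothetical illustrations of the idea.

```python
# Hypothetical sketch of a GroupfeedBot-style participation check: find
# members whose share of messages falls well below an even split, so the
# facilitator bot can nudge them to contribute.
from collections import Counter

def members_to_nudge(messages, members, threshold=0.6):
    """Return members whose message share is below `threshold` times the
    even share (1 / number of members)."""
    counts = Counter(m["sender"] for m in messages)
    total = max(len(messages), 1)
    even_share = 1 / len(members)
    return [m for m in members if counts[m] / total < threshold * even_share]

msgs = [{"sender": s} for s in ["ann", "ann", "ann", "bob", "bob", "eve"]]
quiet = members_to_nudge(msgs, ["ann", "bob", "eve"])
```

Here "eve" holds 1/6 of the messages against an even share of 1/3, so she falls below the cutoff and would receive an encouragement prompt; the threshold controls how strict the bot is about imbalance.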
I Lead, You Help But Only with Enough Details: Understanding User Experience of Co-Creation with Artificial Intelligence
Recent advances in artificial intelligence (AI) have increased the opportunities for users to interact with the technology. Now, users can even collaborate with AI in creative activities such as art. To understand the user experience in this new user–AI collaboration, we designed a prototype, DuetDraw, an AI interface that allows users and the AI agent to draw pictures collaboratively. We conducted a user study employing both quantitative and qualitative methods. Thirty participants performed a series of drawing tasks with the think-aloud method, followed by post-hoc surveys and interviews. Our findings are as follows: (1) Users were significantly more content with DuetDraw when the tool gave detailed instructions. (2) While users always wanted to lead the task, they also wanted the AI to explain its intentions, but only when the users wanted it to do so. (3) Although users rated the AI relatively low in predictability, controllability, and comprehensibility, they enjoyed their interactions with it during the task. Based on these findings, we discuss implications for user interfaces where users can collaborate with AI in creative work.
2018 · Changhoon Oh et al. · Seoul National University · CHI · Topics: Generative AI (Text, Image, Music, Video), Human-LLM Collaboration, Creative Collaboration & Feedback Systems

Touch+Finger: Extending Touch-based User Interface Capabilities with “Idle” Finger Gestures in the Air
In this paper, we present Touch+Finger, a new interaction technique that augments touch input with multi-finger gestures for rich and expressive interaction. The main idea is that while one finger is engaged in a touch event, a user can leverage the remaining fingers, the “idle” fingers, to perform a variety of hand poses or in-air gestures to extend touch-based user interface capabilities. To fully understand the use of these idle fingers, we constructed a design space based on conventional touch gestures (i.e., single- and multi-touch gestures) and interaction period (i.e., before and during touch). Considering the design space, we investigated the possible movements of the idle fingers and developed a total of 20 Touch+Finger gestures. Using ring-like devices to track the motion of the idle fingers in the air, we evaluated the Touch+Finger gestures on both recognition accuracy and ease of use. They were classified with a recognition accuracy of over 99% and received both positive and negative comments from 8 participants. We suggest 8 interaction techniques with Touch+Finger gestures that demonstrate extended touch-based user interface capabilities.
2018 · Hyunchul Lim et al. · UIST · Topics: Hand Gesture Recognition, Full-Body Interaction & Embodied Input
