“HistoChat”: Leveraging AI-Driven Historical Personas for Personalized and Engaging Middle School History EducationTraditional history education often fails to cultivate historical empathy due to rigid curricula and limited opportunities for personalized, emotionally resonant engagement. We explore the potential of AI-powered historical personas to address these gaps by enabling students to engage in real-time, conversational interactions with simulated historical figures. A formative study with teachers and students surfaced key challenges and expectations around AI-mediated historical dialogue, informing the development of Baseline and Experimental HistoChat, AI persona systems featuring differing prompting strategies. A subsequent user study showed that these interactions fostered deeper inquiry, curiosity, and emotional engagement—while also revealing key limitations. From a CSCW perspective, this work expands the role of AI from task assistant to epistemic partner, contributing to ongoing discourse on how dialogic systems can support meaning-making, empathy, and co-constructed learning in educational settings. Our findings yield valuable insights into the impact of tailored AI interactions on personalized and empathetic history education.2025YKYeon Soo Kim et al.Enhancing LearningCSCW
“Hello, This is a Voice Assistant Calling” When a Human Voice Calls Claiming to Be a Machine on an Ordinary DayWith the advent of neural networks, it has become possible to generate synthetic voices that are nearly indistinguishable from real human speech (i.e., human-sounding voice). In contrast, earlier voice assistants used voices that were instantly recognizable as machine-generated, owing to their standardized, consistent, and highly intelligible qualities (i.e., artificial-sounding voice). Although people tend to prefer human-like voices, adopting human-sounding voices in voice assistants raises ethical concerns related to confusion or unintentional deception, particularly in voice-only contexts, even when their identity as systems is explicitly disclosed. To explore the voice design direction for future voice assistants, we examined how participants perceived and interacted when they were unexpectedly confronted with either an artificial-sounding or a human-sounding voice, both of which clearly identified themselves as voice assistants, through an everyday phone call. Our findings reveal participants’ experiences and conversational behaviors in each voice condition. Furthermore, we discuss how the voices of voice assistants should be designed and propose implications that emphasize transparent and responsive voices.2025JOJeesun Oh et al.Intelligent Voice Assistants (Alexa, Siri, etc.)Agent Personality & AnthropomorphismDIS
Sasha: Creative Goal-Oriented Reasoning in Smart Homes with Large Language ModelsKing et al. developed the Sasha system, which leverages large language models to enable creative goal-oriented reasoning in smart homes, providing users with intelligent decision-making support.2024EKEvan King et al.Human-LLM CollaborationSmart Home Interaction DesignUbiComp
Understanding the Initial Journey of UX Designers Toward Sustainable Interaction Design: A Focus on Digital Infrastructure Energy ReductionEnvironmental sustainability is increasingly important, and actions on “digital sustainability” are expanding to reduce energy consumption from digital infrastructures. As many digital services today have extensive user bases, exploring sustainable design features holds significant potential for reducing environmental impact. However, further exploration of foundational research is still necessary to enable broader and more effective adoption of digital sustainability in design practice. This study focuses on understanding important considerations when encouraging more designers, especially those with limited expertise in sustainability-oriented design, to integrate sustainable practices into digital services—acknowledging that embracing unfamiliar approaches presents natural challenges. We conducted design workshops and debriefing interviews with user experience (UX) designers unfamiliar with design for sustainability to explore their early encounters with sustainable interaction design (SID) in the context of digital infrastructure energy reduction. Our study provides insight into designers’ initial perceptions and challenges with sustainable design and discusses opportunities for their broader engagement.2024MLMinha Lee et al.Sustainable HCIEcological Design & Green ComputingDIS
Better to Ask Than Assume: Proactive Voice Assistants’ Communication Strategies That Respect User Agency in a Smart Home EnvironmentProactive voice assistants (VAs) in smart homes predict users’ needs and autonomously take action by controlling smart devices and initiating voice-based features to support users’ various activities. Previous studies on proactive systems have primarily focused on determining action based on contextual information, such as user activities, physiological state, or mobile usage. However, there is a lack of research that considers user agency in VAs’ proactive actions, which empowers users to express their dynamic needs and preferences and promotes a sense of control. Thus, our study aims to explore verbal communication through which VAs can proactively take action while respecting user agency. To delve into communication between a proactive VA and a user, we used the Wizard of Oz method to set up a smart home environment, allowing controllable devices and unrestrained communication. This paper proposes design implications for the communication strategies of proactive VAs that respect user agency.2024JOJeesun Oh et al.KAISTVoice User Interface (VUI) DesignSmart Home Interaction DesignHome Voice Assistant ExperienceCHI
Unlock Life with a Chat(GPT): Integrating Conversational AI with Large Language Models into Everyday Lives of Autistic IndividualsAutistic individuals often draw on insights from their supportive networks to develop self-help life strategies ranging from everyday chores to social activities. However, human resources may not always be immediately available. Recently emerging conversational agents (CAs) that leverage large language models (LLMs) have the potential to serve as powerful information-seeking tools, facilitating autistic individuals to tackle daily concerns independently. This study explored the opportunities and challenges of LLM-driven CAs in empowering autistic individuals through focus group interviews and workshops (N=14). We found that autistic individuals expected LLM-driven CAs to offer a non-judgmental space, encouraging them to approach day-to-day issues proactively. However, they raised issues regarding critically digesting the CA responses and disclosing their autistic characteristics. Based on these findings, we propose approaches that place autistic individuals at the center of shaping the meaning and role of LLM-driven CAs in their lives, while preserving their unique needs and characteristics.2024DCDasom Choi et al.KAISTHuman-LLM CollaborationCognitive Impairment & Neurodiversity (Autism, ADHD, Dyslexia)CHI
Toward a Multilingual Conversational Agent: Challenges and Expectations of Code-Mixing Multilingual UsersMultilingual speakers tend to interleave two or more languages when communicating. This communication strategy is called code-mixing, and it has surged with today’s ever-increasing linguistic and cultural diversity. Because of their communication style, multilinguals who use conversational agents have specific needs and expectations which are currently not being met by conversational systems. While research has been undertaken on code-mixing conversational systems, previous works have rarely focused on the code-mixing users themselves to discover their genuine needs. This work furthers our understanding of the challenges faced by code-mixing users in conversational agent interaction, unveils the key factors that users consider in code-mixing scenarios, and explores expectations that users have for future conversational agents capable of code-mixing. This study discusses the design implications of our findings and provides a guide on how to alleviate the challenges faced by multilingual users and how to improve the conversational agent user experience for multilingual users.2023YCYunjae Josephine Choi et al.KAISTConversational ChatbotsMultilingual & Cross-Cultural Voice InteractionCHI
Fostering Youth’s Critical Thinking Competency about AI through ExhibitionToday’s youth live in a world deeply intertwined with AI, which has become an integral part of everyday life. For this reason, it is important for youth to think critically about and examine AI to become responsible users in the future. Although recent attempts have educated youth on AI with a focus on delivering critical perspectives within a structured curriculum, youth still need opportunities to develop critical thinking competencies that can be reflected in their lives. With this background, we designed an informal learning experience through an AI-related exhibition to cultivate critical thinking competency. To explore changes before and after the exhibition, 23 participants were invited to experience the exhibition. We found that the exhibition can support youth in relating AI to their lives through critical thinking processes. Our findings suggest implications for designing learning experiences to foster critical thinking competency for better coexistence with AI.2023SLSunok Lee et al.KAISTAI Ethics, Fairness & AccountabilitySTEM Education & Science CommunicationParticipatory DesignCHI
“I Won't Go Speechless”: Design Exploration on a Real-Time Text-To-Speech Speaking Tool for VideoconferencingThe COVID-19 pandemic has shifted many business activities to non-face-to-face activities, and videoconferencing has become a new paradigm. However, conference spaces isolated from surrounding interferences are not always readily available. People frequently participate in public places with unexpected crowds or acquaintances, such as cafés, living rooms, and shared offices. These environments have surrounding limitations that potentially cause challenges in speaking up during videoconferencing. To alleviate these issues and support the users in speaking-restrained spatial contexts, we propose a text-to-speech (TTS) speaking tool as a new speaking method to support active videoconferencing participation. We derived the possibility of a TTS speaking tool and investigated the empirical challenges and user expectations of a TTS speaking tool using a technology probe and participatory design methodology. Based on our findings, we discuss the need for a TTS speaking tool and suggest design considerations for its application in videoconferencing.2023WKWooseok Kim et al.KAISTPrivacy by Design & User ControlHome Voice Assistant ExperienceCHI
“We Speak Visually”: User-generated Icons for Better Video-Mediated Mixed Group Communications Between Deaf and Hearing ParticipantsSince the outbreak of the COVID-19 pandemic, videoconferencing technology has been widely adopted as a convenient, powerful, and fundamental tool that has simplified many day-to-day tasks. However, video communication is dependent on audible conversation and can be strenuous for those who are Hard of Hearing. Communication methods used by the Deaf and Hard of Hearing community differ significantly from those used by the hearing community, and a distinct language gap is evident in workspaces that accommodate workers from both groups. Therefore, we brought together users from both groups to explore ways to alleviate obstacles in mixed-group videoconferencing by implementing user-generated icons. A participatory design methodology was employed to investigate how the users overcome language differences. We observed that individuals utilized icons within video-mediated meetings as a universal language to reinforce comprehension. Herein, we present design implications from these findings, along with recommendations for future icon systems to enhance and support mixed-group conversations.2023YKYeonsu Kim et al.KAISTConversational ChatbotsDeaf & Hard-of-Hearing Support (Captions, Sign Language, Vibration)Participatory DesignCHI
Distracting Moments in Videoconferencing: A Look Back at the Pandemic PeriodThe COVID-19 pandemic has forced workers around the world to switch their working paradigms from on-site to video-mediated communication. Despite the advantages of videoconferencing, diverse circumstances have prevented people from focusing on their work. One of the most typical problems they face is that various surrounding factors distract them during their meetings. This study focuses on conditions in which remote workers are distracted by factors that disturb, interrupt, or restrict them during their meetings. We aim to explore the various problem situations and user needs. To understand users’ pain points and needs, focus group interviews and participatory design workshops were conducted to learn about participants’ troubled working experiences over the past two years and the solutions they expected. Our study provides a unified framework of distracting factors by which to understand causes of poor user experience and reveals valuable implications to improve videoconferencing experiences.2022MLMinha Lee et al.KAISTRemote Work Tools & ExperienceNotification & Interruption ManagementCHI
Discovering the Design Challenges of Autonomous Vehicles through Exploring Scenarios in Mixed Complex Urban Traffic via an Immersive Design WorkshopA major challenge for autonomous vehicles (AVs) today is driving in a complex urban environment where various traffic participants, infrastructures, and events are mixed. A growing body of research is studying the interaction between AVs and human road users (HRUs) to mitigate this challenge. Although traffic is complicated, research has focused on limited situations, such as pedestrian crossings. This study aims to explore scenarios that have a high possibility of causing problems in mixed traffic situations. We devised a design workshop method using miniatures and small cameras, allowing participants to experience scenarios from the HRU perspective and easily create new road situations to uncover various problems. By analyzing 133 scenarios found through the workshop, we defined five factors and 51 elements of AV–HRU problem scenarios. Through qualitative analysis of the identified factors and comparison with existing studies, we identified research gaps and discussed future design challenges.2021JLJaemyung Lee et al.Automated Driving Interface & Takeover DesignExternal HMI (eHMI) — Communication with Pedestrians & CyclistsV2X (Vehicle-to-Everything) Communication DesignDIS
“Nobody Speaks that Fast!” An Empirical Study of Speech Rate in Conversational Agents for People with Vision ImpairmentsThe number of people with vision impairments using Conversational Agents (CAs) has increased because of the potential of this technology to support them. As many visually impaired people are accustomed to understanding fast speech, most screen readers or voice assistant systems offer speech rate settings. However, current CAs are designed to interact at a human-like speech rate without considering their accessibility. In this study, we tried to understand how people with vision impairments use CAs at a fast speech rate. We conducted a 20-day in-home study that examined the CA use of 10 visually impaired people at default and fast speech rates. We investigated the differences in visually impaired people's CA use at different speech rates and their perceptions toward the CA at each rate. Based on these findings, we suggest considerations for the future design of CA speech rates for those with visual impairments.2020DCDasom Choi et al.Korea Advanced Institute of Science and TechnologyIntelligent Voice Assistants (Alexa, Siri, etc.)Voice AccessibilityCHI