Exploring Parents’ Needs for Children-Centered AI to Support Preschoolers’ Storytelling and Reading Activities

Interactive storytelling is vital for preschooler development. While children's interactive partners have traditionally been their parents and teachers, recent advances in artificial intelligence (AI) have sparked a surge of AI-based storytelling technologies. As these technologies become increasingly ubiquitous in preschoolers' lives, questions arise regarding how they function in practical storytelling scenarios and, in particular, how parents, the most critical stakeholders, experience and perceive these technologies. This paper investigates these questions through a qualitative study with 17 parents of children aged 3-6. Our findings suggest that even though AI-based storytelling technologies provide more immersive and engaging interaction, they still cannot meet parents’ expectations due to a series of interactive, functional, and algorithmic challenges. We elaborate on these challenges and discuss the possible implications for future AI-based interactive storytelling technologies for preschoolers.

2024 · Yuling Sun et al. · Session 2a: Designing Technology for Parenting and Child Development · CSCW
Rethinking Human-AI Collaboration in Complex Medical Decision Making: A Case Study in Sepsis Diagnosis

Today's AI systems for medical decision support often succeed on benchmark datasets in research papers but fail in real-world deployment. This work focuses on decision making for sepsis, an acute, life-threatening systemic infection that clinicians must diagnose early under high uncertainty. Our aim is to explore the design requirements for AI systems that can support clinical experts in making better decisions for the early diagnosis of sepsis. The study begins with a formative study investigating why clinical experts abandon an existing AI-powered sepsis prediction module in their electronic health record (EHR) system. We argue that a human-centered AI system needs to support human experts in the intermediate stages of a medical decision-making process (e.g., generating hypotheses or gathering data), instead of focusing only on the final decision. Therefore, we build SepsisLab based on a state-of-the-art AI algorithm and extend it to predict the future projection of sepsis development, visualize the prediction uncertainty, and propose actionable suggestions (i.e., which additional laboratory tests to collect) to reduce such uncertainty. Through a heuristic evaluation with six clinicians using our prototype system, we demonstrate that SepsisLab enables a promising human-AI collaboration paradigm for the future of AI-assisted sepsis diagnosis and other high-stakes medical decision making.

2024 · Shao Zhang et al. · Northeastern University · Explainable AI (XAI) · AI-Assisted Decision-Making & Automation · Medical & Scientific Data Visualization · CHI
"It's a Fair Game", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents

The widespread use of Large Language Model (LLM)-based conversational agents (CAs), especially in high-stakes domains, raises many privacy concerns. Building ethical LLM-based CAs that respect user privacy requires an in-depth understanding of the privacy risks that concern users the most. However, existing research, primarily model-centered, does not provide insight into users' perspectives. To bridge this gap, we analyzed sensitive disclosures in real-world ChatGPT conversations and conducted semi-structured interviews with 19 LLM-based CA users. We found that users are constantly faced with trade-offs between privacy, utility, and convenience when using LLM-based CAs. However, users' erroneous mental models and the dark patterns in system design limited their awareness and comprehension of the privacy risks. Additionally, the human-like interactions encouraged more sensitive disclosures, which complicated users' ability to navigate the trade-offs. We discuss practical design guidelines and the need for paradigm shifts to protect the privacy of LLM-based CA users.

2024 · Zheng Zhang et al. · Khoury College of Computer Sciences · Agent Personality & Anthropomorphism · Human-LLM Collaboration · AI Ethics, Fairness & Accountability · CHI
LLMR: Real-time Prompting of Interactive Worlds using Large Language Models

We present Large Language Model for Mixed Reality (LLMR), a framework for the real-time creation and modification of interactive Mixed Reality experiences using LLMs. LLMR leverages novel strategies to tackle difficult cases where ideal training data is scarce, or where the design goal requires the synthesis of internal dynamics, intuitive analysis, or advanced interactivity. Our framework relies on text interaction and the Unity game engine. By incorporating techniques for scene understanding, task planning, self-debugging, and memory management, LLMR outperforms standard GPT-4 by 4x in average error rate. We demonstrate LLMR's cross-platform interoperability with several example worlds, and evaluate it on a variety of creation and modification tasks to show that it can produce and edit diverse objects, tools, and scenes. Finally, we conducted a usability study (N=11) with a diverse set of participants, which revealed that they had positive experiences with the system and would use it again.

2024 · Fernanda De La Torre et al. · MIT, Microsoft · Mixed Reality Workspaces · Human-LLM Collaboration · CHI
StoryBuddy: A Human-AI Collaborative Agent for Parent-Child Interactive Storytelling with Flexible Parent Involvement

Despite its benefits for children's skill development and parent-child bonding, many parents do not often engage in interactive storytelling (having story-related dialogues with their child) due to limited availability or challenges in coming up with appropriate questions. While recent advances have made AI generation of questions from stories possible, the fully automated approach excludes parent involvement, disregards educational goals, and under-optimizes for child engagement. Informed by need-finding interviews and participatory design (PD) results, we developed StoryBuddy, an AI-enabled system for parents to create interactive storytelling experiences. StoryBuddy's design highlighted the need to accommodate users' dynamic needs, balancing the desire for parent involvement and parent-child bonding against the goal of minimizing parent intervention when parents are busy. The PD revealed parents' varied assessment and educational goals, which StoryBuddy addressed by supporting configurable question types and tracking child progress. A user study validated StoryBuddy's usability and suggested design insights for future parent-AI collaboration systems.

2022 · Zheng Zhang et al. · University of Notre Dame · Participatory Design · Interactive Narrative & Immersive Storytelling · CHI
Model LineUpper: Supporting Interactive Model Comparison at Multiple Levels for AutoML

Automated Machine Learning (AutoML) is a rapidly growing set of technologies that automate the model development pipeline by automatically searching the model space and generating candidate models. A critical final step of AutoML is to have users, often data scientists, select the final model from dozens of candidates. In current AutoML systems, the selection is supported by performance metrics. Prior work has shown that in practice people choose ML models based on many criteria beyond prediction accuracy, including whether the way a model makes decisions is reasonable or reliable. It is possible that AutoML users are interested in further understanding and comparing how these candidate models work. We also hypothesize that the comparison may happen at various levels of granularity, from prediction distributions and feature importance to how the models judge selected instances. Based on these hypotheses, we developed Model LineUpper, which supports interactive model comparison for AutoML users by integrating multiple explainable AI (XAI) and visualization techniques. We conducted a user study with 14 data scientists, both to evaluate the design of Model LineUpper and to use it as a design probe to understand how users perform model comparison with an AutoML system. We discuss design implications for utilizing explainable AI techniques for model comparison and for supporting users' unique needs in comparing candidate models generated by AutoML.

2021 · Shweta Narkar et al. · Explainable AI (XAI) · AutoML Interfaces · Interactive Data Visualization · IUI
Retroactive Transfer Phenomena in Alternating User Interfaces

We investigated retroactive transfer when users alternate between different interfaces. Retroactive transfer is the influence of a newly learned interface on users' performance with a previously learned interface. In an interview study, participants described their experiences when alternating between different interfaces, e.g., different operating systems, devices, or techniques. Negative retroactive transfer related to text entry was the most frequently reported incident. We then report a laboratory experiment that investigated the impact of the similarity between two abstract keyboard layouts, and of the number of alternations between them, on retroactive interference. Results indicated that even small changes in the interfering interface produced a significant performance drop across the entire previously learned interface. The amplitude of this performance drop decreases with the number of alternations. We suggest that retroactive transfer should receive more attention in HCI, as the ubiquitous nature of interactions across applications and systems requires users to increasingly alternate between similar interfaces.

2020 · Reyhaneh Raissi et al. · Sorbonne Université, CNRS, ISIR · User Research Methods (Interviews, Surveys, Observation) · Prototyping & User Testing · CHI