Fuzzy Linkography: Automatic Graphical Summarization of Creative Activity Traces
Amy Smith et al. (C&C 2025)
Linkography, the analysis of links between the design moves that make up an episode of creative ideation or design, can be used for both visual and quantitative assessment of creative activity traces. Traditional linkography, however, is time-consuming, requiring a human coder to manually annotate both the design moves within an episode and the connections between them. As a result, linkography has not yet been much applied at scale. To address this limitation, we introduce fuzzy linkography: a means of automatically constructing a linkograph from a sequence of recorded design moves via a "fuzzy" computational model of semantic similarity, enabling wider deployment and new applications of linkographic techniques. We apply fuzzy linkography to three markedly different kinds of creative activity traces (text-to-image prompting journeys, LLM-supported ideation sessions, and researcher publication histories) and discuss our findings, as well as strengths, limitations, and potential future applications of our approach.
Topics: Interactive Data Visualization; Time-Series & Network Graph Visualization; Data Storytelling
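The core idea the abstract describes, scoring pairs of recorded design moves with a computational model of semantic similarity and linking the pairs that clear a threshold, can be sketched as follows. This is a minimal illustration, not the paper's implementation: a toy bag-of-words embedding stands in for a real semantic model, and the function names and threshold value are our own assumptions.

```python
import math
from itertools import combinations

def embed(text):
    # Toy embedding: word-count vector. A real fuzzy linkograph would use
    # a semantic embedding model here (this stand-in is purely illustrative).
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def fuzzy_linkograph(moves, threshold=0.5):
    """Link every pair of design moves whose similarity clears the threshold."""
    vecs = [embed(m) for m in moves]
    links = []
    for i, j in combinations(range(len(moves)), 2):
        sim = cosine(vecs[i], vecs[j])
        if sim >= threshold:
            links.append((i, j, round(sim, 2)))
    return links

# A tiny text-to-image "prompting journey": moves 0 and 1 are near-duplicates,
# move 2 is a topic shift, so only the (0, 1) pair is linked.
moves = [
    "a castle on a hill at sunset",
    "a castle on a hill at night",
    "portrait of a cat wearing a crown",
]
links = fuzzy_linkograph(moves)
```

Because every pairwise similarity is a graded score rather than a binary human judgment, the resulting link set is "fuzzy": changing the threshold sweeps between sparser and denser linkographs.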
Phraselette: A Poet’s Procedural Palette
Alex Calderwood et al. (DIS 2025)
According to the recently introduced theory of artistic support tools, creativity support tools exert normative influences over artistic production, instantiating a normative ground that shapes both the process and product of artistic expression. We argue that the normative ground of most existing automated writing tools is misaligned with writerly values, and we identify a potential alternative frame, material writing support, for experimental poetry tools that flexibly support the finding, processing, transforming, and shaping of text(s). Based on this frame, we introduce Phraselette, an artistic material writing support interface that helps experimental poets search for words and phrases. To provide material writing support, Phraselette is designed to counter the dominant mode of automated writing tools while offering language model affordances in line with writerly values. We further report on an extended expert evaluation involving 10 published poets that indicates support both for our framing of material writing support and for Phraselette itself.
Topics: AI-Assisted Creative Writing
Toyteller: AI-powered Visual Storytelling Through Toy-Playing with Character Symbols
John Joon Young Chung et al., Midjourney (CHI 2025)
We introduce Toyteller, an AI-powered storytelling system in which users generate a mix of story text and visuals by directly manipulating character symbols, as if playing with toys. Anthropomorphized symbol motions can convey rich and nuanced social interactions; Toyteller leverages these motions (1) to let users steer story text generation and (2) as a visual output format that accompanies story text. We enabled motion-steered text generation and text-steered motion generation by mapping motions and text onto a shared semantic space, which large language models and motion generation models can use as a translational layer. Technical evaluations showed that Toyteller outperforms a competitive baseline, GPT-4o. Our user study found that toy-playing helps users express intentions that are difficult to verbalize. However, motion alone could not express all user intentions, suggesting that it should be combined with other modalities such as language. We discuss the design space of toy-playing interactions and implications for technical HCI research on human-AI interaction.
Topics: Generative AI (Text, Image, Music, Video); AI-Assisted Creative Writing; Interactive Narrative & Immersive Storytelling
Patchview: LLM-powered Worldbuilding with Generative Dust and Magnet Visualization
John Joon Young Chung et al. (UIST 2024)
Large language models (LLMs) can help writers build story worlds by generating world elements, such as factions, characters, and locations. However, making sense of many generated elements can be overwhelming. Moreover, if the user wants to precisely control aspects of generated elements that are difficult to specify verbally, prompting alone may be insufficient. We introduce Patchview, a customizable LLM-powered system that visually aids worldbuilding by allowing users to interact with story concepts and elements through the physical metaphor of magnets and dust. Elements in Patchview are visually dragged closer to concepts with high relevance, facilitating sensemaking. The user can also steer generation with verbally elusive concepts by indicating the desired position of an element between concepts. When the user disagrees with the LLM's visualization and generation, they can correct it by repositioning the element; these corrections can be used to align the LLM's future behavior with the user's perception. With a user study, we show that Patchview supports sensemaking of world elements and steering of element generation, facilitating exploration during the worldbuilding process. Patchview offers insights into how customizable visual representations can help users make sense of, steer, and align generative AI model behaviors with their intentions.
Topics: Generative AI (Text, Image, Music, Video); Human-LLM Collaboration
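The magnet-and-dust placement described above, where an element is pulled toward concepts in proportion to its relevance to each, can be illustrated with a toy relevance-weighted layout. This is a hypothetical sketch of the metaphor, not Patchview's actual layout code; the function name, the 2D coordinates, and the weighted-average scheme are all our illustrative assumptions.

```python
def magnet_layout(relevances, concept_positions):
    """Place an element at the relevance-weighted average of concept
    'magnet' positions, so it lands nearer the concepts it matches best."""
    total = sum(relevances.values())
    x = sum(relevances[c] * concept_positions[c][0] for c in relevances) / total
    y = sum(relevances[c] * concept_positions[c][1] for c in relevances) / total
    return (x, y)

# Two concept magnets on a horizontal axis (hypothetical example).
concepts = {"heroic": (0.0, 0.0), "villainous": (10.0, 0.0)}

# An element judged 80% relevant to "heroic" is dragged toward that magnet.
pos = magnet_layout({"heroic": 0.8, "villainous": 0.2}, concepts)
```

Run in reverse, the same mapping supports the steering interaction the abstract mentions: a user-chosen position between two magnets can be read back as target relevance weights for generation.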
ShadowMagic: Designing Human-AI Collaborative Support for Comic Professionals’ Shadowing
Amrita Ganguly et al. (UIST 2024)
Shadowing allows artists to convey realistic volume and the emotion of characters in comic colorization. While AI technologies have the potential to improve professionals’ shadowing experience, current practice is manual and time-consuming. To understand how we can improve the shadowing experience, we conducted interviews with 5 professionals. We found that professionals’ level of engagement can vary depending on semantics, such as characters’ faces or hair. We also found they spent time on shadow “landscaping”: deciding where to place large shadow regions to create a realistic volumetric presentation, with final results that can vary dramatically depending on their “staging” and “attention guiding” needs. We discovered they would accept AI suggestions for less engaging semantic parts or for landscaping, while needing the capability to adjust details. Based on these observations, we developed ShadowMagic, which (1) generates AI-driven shadows based on commonly used light directions, (2) enables users to selectively choose results depending on semantics, and (3) allows users to complete shadow areas themselves for further refinement. In a summative evaluation with 5 professionals, participants were significantly more satisfied with our AI-driven results compared to a baseline. We also found that ShadowMagic’s “step by step” workflow helped participants more easily adopt AI-driven results. We conclude by discussing design implications.
Topics: Generative AI (Text, Image, Music, Video); Creative Collaboration & Feedback Systems
Find the Bot!: Gamifying Facial Emotion Recognition for Both Human Training and Machine Learning Data Collection
Yeonsun Yang et al., DGIST (CHI 2024)
Facial emotion recognition (FER) is an essential social skill for both humans and machines interacting with others. Computer interfaces serve as valuable tools for training individuals to improve FER abilities, and also for gathering labels to train FER machine learning datasets. However, existing tools are limited in the scope and methods of training non-clinical populations, and in collecting labels for machines. In this study, we introduce Find the Bot!, an integrated game that effectively engages the general population to support both human FER learning on spontaneous expressions and the collection of reliable judgment-based labels. We incorporated design guidelines from the gamification, education, and crowdsourcing literature to engage and motivate players. Our evaluation (N=59) shows that the game encourages players to learn emotional social norms for perceived facial expressions with a high agreement rate, facilitating effective FER learning and reliable label collection, all while players enjoy the gameplay.
Topics: Game UX & Player Behavior; Game Accessibility; Prototyping & User Testing
A Design Space for Intelligent and Interactive Writing Assistants
Mina Lee et al., Microsoft Research (CHI 2024)
In our era of rapid technological advancement, the research landscape for writing assistants has become increasingly fragmented across various research communities. We seek to address this challenge by proposing a design space as a structured way to examine and explore the multidimensional space of intelligent and interactive writing assistants. Through community collaboration, we explore five aspects of writing assistants: task, user, technology, interaction, and ecosystem. Within each aspect, we define dimensions and codes by systematically reviewing 115 papers while leveraging the expertise of researchers in various disciplines. Our design space aims to offer researchers and designers a practical tool to navigate, comprehend, and compare the various possibilities of writing assistants, and to aid in the design of new writing assistants.
Topics: Human-LLM Collaboration; AI-Assisted Creative Writing; Creative Collaboration & Feedback Systems