Supporting Money Management among Adults with Down Syndrome: A Multi-Technology Probe Study
Financial decision-making is critical to adult autonomy, yet many adults with Down syndrome (AwDS) have limited opportunities or support to develop money management skills, often receiving allowances while caregivers oversee financial obligations. To better understand the experiences AwDS have with budgeting and their support preferences, we designed and prototyped three cash-based budgeting technology probes: a gamified tablet application, a tablet-based augmented reality application, and a custom tangible device. Seven AwDS used all three prototypes to complete simplified money management tasks. Across probes, modality tradeoffs shaped engagement and checking: gamification increased interest but encouraged rushing; AR reduced arithmetic but encouraged users to trust the system's output and skip verification; tangible controls supported participation yet introduced coordination challenges. Error recovery relied on brief, situated prompts linking screen and cash, shaped by prior budgeting/technology experience. These findings point to three design implications: (1) surface budgeting as a stimulating multi-goal puzzle, not just a sequence of steps; (2) design error recovery that connects screen state and real money; (3) support interdependent use without collapsing autonomy.
Hailey L Johnson et al. (University of Wisconsin). CHI 2026. Topics: Co-Design and Collaboration.
Robot-Assisted Group Tours for Blind People
Group interactions are essential to social functioning, yet effective engagement relies on the ability to recognize and interpret visual cues, making such engagement a significant challenge for blind people. In this paper, we investigate how a mobile robot can support group interactions for blind people. We used the scenario of a guided tour with mixed-visual groups involving blind and sighted visitors. Based on insights from an interview study with blind people (n=5) and museum experts (n=5), we designed and prototyped a robotic system that supported blind visitors in joining group tours. We conducted a field study in a science museum where each blind participant (n=8) joined a group tour with one guide and two sighted participants (n=8). Findings indicated users' sense of safety stemming from the robot's navigational support, concerns about group participation, and preferences for obtaining environmental information. We present design implications for future robotic systems to support blind people's mixed-visual group participation.
Yaxin Hu et al. (University of Wisconsin-Madison). CHI 2026. Topics: Practical and Adaptive Accessibility.
AskNow: An LLM-powered Interactive System for Real-Time Question Answering in Large-Scale Classrooms
In large-scale classrooms, students often struggle to ask questions due to limited instructor attention and social pressure. Based on findings from a formative study with 24 students and 12 instructors, we designed AskNow, an LLM-powered system that enables students to ask questions and receive real-time, context-aware responses grounded in the ongoing lecture and that allows instructors to view students' questions collectively. We deployed AskNow in three university computer science courses for a week and tested it with 117 students. To evaluate AskNow's responses, each instructor rated the perceived correctness and satisfaction of 100 randomly sampled AskNow-generated responses. In addition, we conducted interviews with 24 students and the three instructors to understand their experience with AskNow. We found that AskNow significantly reduced students' perceived time to resolve confusion. Instructors rated AskNow's responses as highly accurate and satisfactory. Instructor and student feedback provided insights into the role of such systems in supporting real-time learning in large lecture settings.
Ziqi Liu et al. (University of Wisconsin-Madison). CHI 2026. Topics: Learning in the AI Era.
NarraGuide: an LLM-based Narrative Mobile Robot for Remote Place Exploration
Robotic telepresence enables users to navigate and experience remote environments. However, effective navigation and situational awareness depend on users' prior knowledge of the environment, limiting the usefulness of these systems for exploring unfamiliar places. We explore how integrating location-aware LLM-based narrative capabilities into a mobile robot can support remote exploration. We developed a prototype system, called NarraGuide, that provides narrative guidance for users to explore and learn about a remote place through a dialogue-based interface. We deployed our prototype in a geology museum, where remote participants (n=20) used the robot to tour the museum. Our findings reveal how users perceived the robot's role, engaged in dialogue during the tour, and expressed preferences for encountering bystanders. Our work demonstrates the potential of LLM-enabled robotic capabilities to deliver location-aware narrative guidance and enrich the experience of exploring remote environments.
Yaxin Hu et al. UIST 2025. Topics: Social & Collaborative VR; AR Navigation & Context Awareness; Teleoperation & Telepresence.
Bridging Generations using AI-Supported Co-Creative Activities
Intergenerational co-creation using technology between grandparents and grandchildren can be challenging due to differences in technological familiarity. AI has emerged as a promising tool to support co-creative activities, offering flexibility and creative assistance, but its role in facilitating intergenerational connection remains underexplored. In this study, we conducted a user study with 29 grandparent-grandchild groups engaged in AI-supported story creation to examine how AI-assisted co-creation can foster meaningful intergenerational bonds. Our findings show that grandchildren managed the technical aspects, while grandparents contributed creative ideas and guided the storytelling. AI played a key role in structuring the activity, facilitating brainstorming, enhancing storytelling, and balancing the contributions of both generations. The process fostered mutual appreciation, with each generation recognizing the strengths of the other, leading to an engaging and cohesive co-creation process. We offer design implications for integrating AI into intergenerational co-creative activities, emphasizing how AI can enhance connection across skill levels and technological familiarity.
Callie Y. Kim et al. (University of Wisconsin-Madison, Department of Computer Sciences). CHI 2025. Topics: AI-Assisted Creative Writing; Empowerment of Marginalized Groups.
VeriPlan: Integrating Formal Verification and LLMs into End-User Planning
Automated planning is traditionally the domain of experts, utilized in fields like manufacturing and healthcare with the aid of expert planning tools. Recent advancements in LLMs have made planning more accessible to everyday users due to their potential to assist users with complex planning tasks. However, LLMs face several application challenges within end-user planning, including consistency, accuracy, and user trust issues. This paper introduces VeriPlan, a system that applies formal verification techniques, specifically model checking, to enhance the reliability and flexibility of LLMs for end-user planning. In addition to the LLM planner, VeriPlan includes three core features that engage users in the verification process: a rule translator, flexibility sliders, and a model checker. Through a user study (n=12), we evaluate VeriPlan, demonstrating improvements in the perceived quality, usability, and user satisfaction of LLMs. Our work shows the effective integration of formal verification and user-control features with LLMs for end-user planning tasks.
Christine P. Lee et al. (University of Wisconsin-Madison, Department of Computer Sciences). CHI 2025. Topics: Human-LLM Collaboration; Explainable AI (XAI); Interactive Data Visualization.
SET-PAiREd: Designing for Parental Involvement in Learning with an AI-Assisted Educational Robot
AI-assisted learning companion robots are increasingly used in early education. Many parents express concerns about content appropriateness, while they also value how AI and robots could supplement their limited skills, time, and energy to support their children's learning. We designed a card-based kit, SET, to systematically capture scenarios with different extents of parental involvement. We developed a prototype interface, PAiREd, with a learning companion robot to deliver LLM-generated educational content that can be reviewed and revised by parents. Parents can flexibly adjust their involvement in the activity by determining what they want the robot to help with. We conducted an in-home field study involving 20 families with children aged 3-5. Our work contributes to an empirical understanding of the level of support parents with different expectations may need from AI and robots and a prototype that demonstrates an innovative interaction paradigm for flexibly including parents in supporting their children.
Hui-Ru Ho et al. (University of Wisconsin-Madison, Department of Computer Sciences). CHI 2025. Topics: Human-LLM Collaboration; Programming Education & Computational Thinking; Early Childhood Education Technology.
The AI-DEC: A Card-based Design Method for User-centered AI Explanations
Increasing evidence suggests that many deployed AI systems do not sufficiently support end-user interaction and information needs. Engaging end-users in the design of these systems can reveal user needs and expectations, yet effective ways of engaging end-users in AI explanation design remain under-explored. To address this gap, we developed a design method, called AI-DEC, that defines four dimensions of AI explanations critical to the integration of AI systems in the workplace (communication content, modality, frequency, and direction) and offers design examples for end-users to design AI explanations that meet their needs. We evaluated this method through co-design sessions with workers in healthcare, finance, and management industries who regularly use AI systems in their daily work. Findings indicate that the AI-DEC effectively supported workers in designing explanations that accommodated diverse levels of performance and autonomy needs, which varied depending on the AI system's workplace role and worker values. We discuss the implications of using the AI-DEC for the user-centered design of AI explanations in real-world systems.
Christine P. Lee et al. DIS 2024. Topics: Explainable AI (XAI); AI-Assisted Decision-Making & Automation; Privacy by Design & User Control.
Understanding On-the-Fly End-User Robot Programming
Novel end-user programming (EUP) tools enable on-the-fly (i.e., spontaneous, easy, and rapid) creation of interactions with robotic systems. These tools are expected to empower users in determining system behavior, although very little is understood about how end users perceive, experience, and use these systems. In this paper, we seek to address this gap by investigating end-user experience with on-the-fly robot EUP. We trained 21 end users to use an existing on-the-fly EUP tool, asked them to create robot interactions for four scenarios, and assessed their overall experience. Our findings provide insight into how these systems should be designed to better support end-user experience with on-the-fly EUP, focusing on user interaction with an automatic program synthesizer that resolves imprecise user input, the use of multimodal inputs to express user intent, and the general process of programming a robot.
Laura Stegner et al. DIS 2024. Topics: Human-Robot Collaboration (HRC); Prototyping & User Testing.
Tangible Scenography as a Holistic Design Method for Human-Robot Interaction
Traditional approaches to human-robot interaction design typically examine robot behaviors in controlled environments and narrow tasks. These methods are impractical for designing robots that interact with diverse user groups in complex human environments. Drawing from the field of theater, we present the construct of scenes (individual environments consisting of specific people, objects, spatial arrangements, and social norms) and tangible scenography as a holistic design approach for human-robot interactions. We created a design tool, the Tangible Scenography Kit (TaSK), with physical props to aid in design brainstorming. We conducted design sessions with eight professional designers to generate exploratory designs. Designers used tangible scenography and TaSK components to create multiple scenes with specific interaction goals, characterize each scene's social environment, and design scene-specific robot behaviors. From these sessions, we found that this method can encourage designers to think beyond a robot's narrow capabilities and consider how they can facilitate complex social interactions.
Amy Koike et al. DIS 2024. Topics: Social Robot Interaction; Participatory Design.
"This really lets us see the entire world:" Designing a conversational telepresence robot for homebound older adults
In this paper, we explore the design and use of conversational telepresence robots to help homebound older adults interact with the external world. An initial needfinding study (N=8) using video vignettes revealed older adults' experiential needs for robot-mediated remote experiences such as exploration, reminiscence, and social participation. We then designed a prototype system to support these goals and conducted a technology probe study (N=11) to garner a deeper understanding of user preferences for remote experiences. The study revealed user interactive patterns in each desired experience, highlighting the need for robot guidance in exploration and social engagements for reminiscence. Our work identifies a novel design space where conversational telepresence robots can be used to foster meaningful interactions in the remote physical environment. We offer design insights into the robot's proactive role in providing guidance and using dialogue to create personalized, contextualized, and meaningful experiences.
Yaxin Hu et al. DIS 2024. Topics: Aging-in-Place Assistance Systems; Teleoperation & Telepresence.
REX: Designing User-centered Repair and Explanations to Address Robot Failures
Robots in real-world environments continuously engage with multiple users and encounter changes that lead to unexpected conflicts in fulfilling user requests. Recent technical advancements (e.g., large language models (LLMs), program synthesis) offer various methods for automatically generating repair plans that address such conflicts. In this work, we examine how automated repair and explanations can be designed to improve user experience with robot failures through two user studies. In our first, online study (n=162), users expressed increased trust, satisfaction, and utility with the robot performing automated repair and explanations. However, we also identified risk factors (safety, privacy, and complexity) that require adaptive repair strategies. The second, in-person study (n=24) elucidated distinct repair and explanation strategies depending on the level of risk severity and type. Using a design-based approach, we explore automated repair with explanations as a solution for robots to handle conflicts and failures, complemented by adaptive strategies for risk factors. Finally, we discuss the implications of incorporating such strategies into robot designs to achieve seamless operation amid changing user needs and environments.
Christine P. Lee et al. DIS 2024. Topics: Explainable AI (XAI); AI-Assisted Decision-Making & Automation; Human-Robot Collaboration (HRC).
"It Is Easy Using My Apps:" Understanding Technology Use and Needs of Adults with Down Syndrome
Assistive technologies for adults with Down syndrome (DS) need designs tailored to their specific technology requirements. While prior research has explored technology design for individuals with intellectual disabilities, little is understood about the needs and expectations of adults with DS. Assistive technologies should leverage the abilities and interests of the population, while incorporating age- and context-considerate content. In this work, we interviewed six adults with DS, seven parents of adults with DS, and three experts in speech-language pathology, special education, and occupational therapy to determine how technology could support adults with DS. In our thematic analysis, four main themes emerged: (1) community vs. home social involvement; (2) misalignment of skill expectations between adults with DS and parents; (3) family limitations in technology support; and (4) considerations for technology development. Our findings extend prior literature by including the voices of adults with DS in how and when they use technology.
Hailey Johnson et al. (University of Wisconsin-Madison). CHI 2024. Topics: Cognitive Impairment & Neurodiversity (Autism, ADHD, Dyslexia); Augmentative & Alternative Communication (AAC); Universal & Inclusive Design.
"It's Not a Replacement:" Enabling Parent-Robot Collaboration to Support In-Home Learning Experiences of Young Children
Learning companion robots for young children are increasingly adopted in informal learning environments. Although parents play a pivotal role in their children's learning, very little is known about how parents prefer to incorporate robots into their children's learning activities. We developed prototype capabilities for a learning companion robot to deliver educational prompts and responses to parent-child pairs during reading sessions and conducted in-home user studies involving 10 families with children aged 3-5. Our data indicates that parents want to work with robots as collaborators to augment parental activities to foster children's learning, introducing the notion of parent-robot collaboration. Our findings offer an empirical understanding of the needs and challenges of parent-child interaction in informal learning scenarios and design opportunities for integrating a companion robot into these interactions. We offer insights into how robots might be designed to facilitate parent-robot collaboration, including parenting policies, collaboration patterns, and interaction paradigms.
Hui-Ru Ho et al. (University of Wisconsin-Madison). CHI 2024. Topics: Eye Tracking & Gaze Interaction; Special Education Technology; Robots in Education & Healthcare.
Sprout: Designing Expressivity for Robots Using Fiber-Embedded Actuation
In this paper, we explore how techniques from soft robotics can help create a new form of robotic expression. We present Sprout, a soft expressive robot that conveys its internal states by changing the shape of its body. By integrating fiber-embedded actuators into its construction, Sprout can extend, expand, twist, and bend. These movements enable Sprout to express its internal states, for example, by expanding when it is angry and bending its body forward when it is curious. Through two user studies, we investigated how Sprout's expressions were interpreted by users, how users perceived Sprout, and how users interacted with it. We propose the integration of soft actuators as a novel design space for designing robot expressions to convey emotional and internal states.
Amy Koike et al. HRI 2024. Topics: Shape-Changing Interfaces & Soft Robotic Materials; Social Robot Interaction.
Making Informed Decisions: Supporting Cobot Integration Considering Business and Worker Preferences
Robots are ubiquitous in manufacturing settings from small-scale to large-scale. While collaborative robots (cobots) have significant potential in these settings due to their flexibility and ease of use, they can only reach their full potential when properly integrated. Specifically, cobots need to be integrated in a manner that properly utilizes their strengths, improves the performance of the manufacturing process, and can be used in concert with human workers. Understanding how to properly integrate cobots into existing manufacturing workflows requires careful consideration and the knowledge of roboticists, manufacturing engineers, and business administrators. In this work, we propose an approach to collaborating with manufacturers prior to the integration process that involves planning, analysis, development, and presentation of results. This approach ultimately allows manufacturers to make an informed choice about cobot integration within their facilities. We illustrate the application of this approach through a case study with a manufacturing collaborator and discuss insights learned throughout the process.
Dakota Sullivan et al. HRI 2024. Topics: Human-Robot Collaboration (HRC).
Toward Family-Robot Interactions: A Family-Centered Framework in HRI
As robotic products are increasingly integrated into day-to-day environments, there is a greater need to understand authentic, real-world human-robot interactions to inform the design of future products. Across many domestic, educational, and public settings, robots interact with not only individuals and groups of users, but also families, including children, parents, relatives, and even pets. However, the focus of products developed to date and of research in human-robot and child-robot interaction has primarily been on the interaction with primary users, neglecting the complex and multifaceted interactions between members of families and with the robot. There is a significant gap in knowledge, methods, and theories for how to design robots to support these interactions. To inform the design of robots that can support and enhance family life, this paper provides (1) a narrative review exemplifying the research gap and opportunities for family-robot interactions and (2) an actionable family-centered framework for research and practices in human-robot and child-robot interaction.
Bengisu Cagiltay et al. HRI 2024. Topics: Domestic Robots; Social Robot Interaction.
Understanding Large-Language Model (LLM)-powered Human-Robot Interaction
Generative AI, particularly large language models (LLMs), holds significant promise for improving human-robot interaction. LLM-powered robots can not only maintain greater conversational capabilities, but also handle open-ended user requests across a wide range of tasks and domains. Despite the potential to transform human-robot interaction, very little is known about the distinctive design requirements for utilizing LLMs in robots, which may differ from other interaction modalities such as text and voice, and how these requirements might change across tasks and contexts. To better understand these requirements, we conducted a user study (n=32) that compared an LLM-powered social robot against two other agents: a text-based agent and a voice-based agent. To understand how these requirements differed across tasks, participants completed one of four conversational tasks: choose, generate, execute, and negotiate. Our findings show that LLM-powered robots elevate expectations for sophisticated non-verbal cues. While they excel in connection-building and deliberation tasks, they are less preferred for challenges in logical communication and anxiety-inducing situations. We provide design implications both for robots integrating LLMs and for fine-tuning LLMs for use with robots.
Callie Y. Kim et al. HRI 2024. Topics: Human-LLM Collaboration; Social Robot Interaction; Desktop 3D Printing & Personal Fabrication.
A System for Human-Robot Teaming through End-User Programming and Shared Autonomy
Many industrial tasks, such as sanding, installing fasteners, and wire harnessing, are difficult to automate due to task complexity and variability. We instead investigate deploying robots in an assistive role for these tasks, where the robot assumes the physical task burden and the skilled worker provides both the high-level task planning and low-level feedback necessary to effectively complete the task. In this article, we describe the development of a system for flexible human-robot teaming that combines state-of-the-art methods in end-user programming and shared autonomy and its implementation in sanding applications. We demonstrate the use of the system in two types of sanding tasks, situated in aircraft manufacturing, that highlight two potential workflows within the human-robot teaming setup. We conclude by discussing challenges and opportunities in human-robot teaming identified during the development, application, and demonstration of our system.
Michael Hagenow et al. HRI 2024. Topics: Human-Robot Collaboration (HRC); Computational Methods in HCI.
Periscope: A Robotic Camera System to Support Remote Physical Collaboration
We investigate how robotic camera systems can offer new capabilities to computer-supported cooperative work through the design, development, and evaluation of a prototype system called Periscope. With Periscope, a local worker completes manipulation tasks with guidance from a remote helper who observes the workspace through a camera mounted on a semi-autonomous robotic arm that is co-located with the worker. Our key insight is that the helper, the worker, and the robot should all share responsibility for the camera view, an approach we call shared camera control. Using this approach, we present a set of modes that distribute control of the camera between the human collaborators and the autonomous robot depending on task needs. We demonstrate the system's utility and the promise of shared camera control through a preliminary study in which 12 dyads collaboratively worked on assembly tasks, and we discuss design and research implications of our work for future robotic camera systems that facilitate remote collaboration.
Pragathi Praveena et al. CSCW 2023. Topics: Human Robot Interaction.