Unremarkable to Remarkable AI Agent: Exploring Boundaries of Agent Intervention for Adults With and Without Cognitive Impairment
CSCW 2025 · Mai Lee Chang et al.
Tags: Humanized AI: Avatars, Agents, and Voice Assistants

As the population of older adults increases, there is a growing need to support them in aging in place. This need is exacerbated by the growing number of individuals struggling with cognitive decline and the shrinking number of younger people who provide care for them. Artificially intelligent agents could provide cognitive support to older adults experiencing memory problems, and they could help informal caregivers with coordination tasks. To better understand this possible future, we conducted a speed dating study with storyboards to reveal invisible social boundaries that might keep older adults and their caregivers from accepting and using agents. We found that healthy older adults worry that accepting agents into their homes might increase their chances of developing dementia. At the same time, they want immediate access to agents that know them well if they should experience cognitive decline. Older adults in the early stages of cognitive decline expressed desire for agents that can ease the burden they saw themselves becoming for their caregivers. They also speculated that an agent who really knew them well might be an effective advocate for their needs when they were less able to advocate for themselves. That is, the agent may need to transition from being unremarkable to remarkable. Based on these findings, we present design opportunities and considerations for agents and articulate directions of future research.
Data Wagers in Worker Advocacy Research
CSCW 2025 · Franchesca Spektor et al.
Tags: Advocacy Work

This paper draws on Michel de Certeau's notion of "tactics" to explore the use of data in labor organizing research in CSCW. Taking a historical view, we first analyze a set of cases from 20th century US labor history that offer three distinct lenses on the risks of data-based advocacy campaigns: wagers, compromises, and concessions. Across our cases, we frame reformers' use of data tactics as a rhetorical move, taken to advance incremental worker gains under conditions of precarity. However, by continuing to rely on certain data-based arguments in the short term, we argue that labor reformers may have limited the frame of debate for broader arguments necessary to improve conditions in the long term. These tensions follow us into data-based advocacy research in the present, such as the emerging "digital workerism" movement. To ensure the continuation of responsible advocacy research in CSCW, we offer insights from social justice movements to suggest how members of the HCI and CSCW communities can work more intentionally alongside (or without) data methods to support worker-led direct action.
Working Together: Algorithmic Management and Peer Relationships in the Hospitality Industry
DIS 2025 · Franchesca Spektor et al.
Tags: Social Platform Design & User Behavior; Impact of Automation on Work

Algorithmic management is transforming traditional face-to-face service sectors like hospitality. To understand this phenomenon, we conducted an interview study in a unionized, mid-sized urban hotel on the West Coast of the USA. Through this work, we examine how an algorithmic management (AM) platform mediates work in a housekeeping department. Our analysis highlights the effects of AM on social processes, revealing that despite careful configuration, the tool's implementation still challenges traditional communication and coordination. This study contributes empirical evidence on AM impacts in a collaborative service environment, emphasizing the importance of organizational dynamics in AM design and implementation. We offer design opportunities for flexible workplace technologies that support, rather than frustrate, the relational aspects of service work.
Exploring the Innovation Opportunities for Pre-trained Models
DIS 2025 · Minjung Park et al.
Tags: Generative AI (Text, Image, Music, Video); Human-LLM Collaboration; Explainable AI (XAI)

Innovators transform the world by understanding where services are successfully meeting customers' needs and then using this knowledge to identify failsafe opportunities for innovation. Pre-trained models have changed the AI innovation landscape, making it faster and easier to create new AI products and services. Understanding where pre-trained models are successful is critical for supporting AI innovation. Unfortunately, the hype cycle surrounding pre-trained models makes it hard to know where AI can really be successful. To address this, we investigated pre-trained model applications developed by HCI researchers as a proxy for commercially successful applications. The research applications demonstrate technical capabilities, address real user needs, and avoid ethical challenges. Using an artifact analysis approach, we categorized capabilities, opportunity domains, data types, and emerging interaction design patterns, uncovering some of the opportunity space for innovation with pre-trained models.
Designing Aging Reflection Probes to Elicit Self-Perception of Aging (SPA) Beliefs of Older Adults in India
DIS 2025 · Neeta M Khanuja et al.
Tags: Aging-Friendly Technology Design; Participatory Design

Age-related transitions can influence older adults' internalized aging beliefs, or Self-Perception of Aging (SPA). Previous studies have shown correlations between SPA and the well-being of older adults. However, there is a lack of specific tools to gain an in-depth understanding of SPA beliefs. This pictorial provides a detailed description of a probe designed to collect SPA-related insights directly from older adults. We describe the iterative co-design process of the 7-day Aging Reflection probe kit, incorporating feedback from pilot and focus group sessions with participants to refine the final design. We also highlight the design decisions made for the cultural adaptation of the probes to ensure they resonate with Indian participants. Our probe kit was instrumental in creating dialogue with participants about various aspects of SPA. Participants used the probes to refresh their memory during follow-up interviews. Insights from the probes played a critical role in conducting semi-structured interviews, advancing our understanding of how to operationalize SPA in HCI research and design.
Making the Right Thing: Bridging HCI and Responsible AI in Early-Stage AI Concept Selection
DIS 2025 · Ji-Youn Jung et al.
Tags: AI Ethics, Fairness & Accountability; Participatory Design; Sustainable HCI

AI projects often fail due to financial, technical, ethical, or user acceptance challenges—failures frequently rooted in early-stage decisions. While HCI and Responsible AI (RAI) research emphasize this, practical approaches for identifying promising concepts early remain limited. Drawing on Research through Design, this paper investigates how early-stage AI concept sorting in commercial settings can reflect RAI principles. Through three design experiments—including a probe study with industry practitioners—we explored methods for evaluating risks and benefits using multidisciplinary collaboration. Participants demonstrated strong receptivity to addressing RAI concerns early in the process and effectively identified low-risk, high-benefit AI concepts. Our findings highlight the potential of a design-led approach to embed ethical and service design thinking at the front end of AI innovation. By examining how practitioners reason about AI concepts, our study invites HCI and RAI communities to see early-stage innovation as a critical space for engaging ethical and commercial considerations together.
AI Mismatches: Identifying Potential Algorithmic Harms Before AI Development
CHI 2025 · Devansh Saxena et al. (University of Wisconsin-Madison, The Information School)
Tags: AI Ethics, Fairness & Accountability; Algorithmic Fairness & Bias

AI systems are often introduced with high expectations, yet many fail to deliver, resulting in unintended harm and missed opportunities for benefit. We frequently observe significant "AI Mismatches", where the system's actual performance falls short of what is needed to ensure safety and co-create value. These mismatches are particularly difficult to address once development is underway, highlighting the need for early-stage intervention. Navigating complex, multi-dimensional risk factors that contribute to AI Mismatches is a persistent challenge. To address it, we propose an AI Mismatch approach to anticipate and mitigate risks early on, focusing on the gap between realistic model performance and required task performance. Through an analysis of 774 AI cases, we extracted a set of critical factors, which informed the development of seven matrices that map the relationships between these factors and highlight high-risk areas. Through case studies, we demonstrate how our approach can help reduce risks in AI development.
Dynamic Agent Affiliation: Who Should the AI Agent Work for in the Older Adult's Care Network?
DIS 2024 · Mai Lee Chang et al.
Tags: Elderly Care & Dementia Support; Aging-in-Place Assistance Systems; Human-Robot Collaboration (HRC)

The population of older adults experiencing cognitive decline is growing faster than the number of workers who can care for them. Artificially intelligent (AI) agents could assist these older adults, keeping them in their homes longer. For this to happen, older adults must be willing to adopt and rely on agents. Would they trust an agent that might need to report their decline to others? We conducted a speed dating study exploring the impact of agent affiliation (i.e., who the agent should work for). Our healthy and declining participants reacted positively to the idea of agents supporting them. They particularly recognized how the agent would reduce the burden placed on their family caregivers. They viewed affiliation to be dynamic, shifting from the declining older adult and orienting more to their caregivers over the course of cognitive decline. They envisioned the agent modifying its decision-making process to be like their caregivers'.
Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks
CHI 2024 · Hao-Ping (Hank) Lee et al. (Carnegie Mellon University)
Tags: AI Ethics, Fairness & Accountability; Privacy by Design & User Control; Privacy Perception & Decision-Making

Privacy is a key principle for developing ethical AI technologies, but how does including AI technologies in products and services change privacy risks? We constructed a taxonomy of AI privacy risks by analyzing 321 documented AI privacy incidents. We codified how the unique capabilities and requirements of AI technologies described in those incidents generated new privacy risks, exacerbated known ones, or otherwise did not meaningfully alter the risk. We present 12 high-level privacy risks that AI technologies either newly created (e.g., exposure risks from deepfake pornography) or exacerbated (e.g., surveillance risks from collecting training data). One upshot of our work is that incorporating AI technologies into a product can alter the privacy risks it entails. Yet, current approaches to privacy-preserving AI/ML (e.g., federated learning, differential privacy, checklists) only address a subset of the privacy risks arising from the capabilities and data requirements of AI.
Charting the Automation of Hospitality: An Interdisciplinary Literature Review Examining the Evolution of Frontline Service Work in the Face of Algorithmic Management
CSCW 2023 · Franchesca Spektor et al.
Tags: Platform Mediated Economies

Recent investments in automation and AI are reshaping the hospitality sector. Driven by social and economic forces affecting service delivery and an impulse to seek out efficiencies, these new technologies have transformed the labor that acts as the backbone to the industry—namely frontline service work performed by housekeepers, front desk staff, line cooks, and others. We describe the context for recent technological adoption, with particular emphasis on algorithmic management applications. Through this work, we identify gaps in existing literature and highlight areas in need of further research in the domains of worker-centered technology development. Our analysis highlights how technologies such as algorithmic management shape roles and tasks in the high-touch service sector. We outline how harms produced through automation are often due to a lack of attention to non-management stakeholders. We describe an opportunity space for researchers and practitioners to elicit worker participation at all stages of technology adoption, and offer methods for centering workers, increasing transparency, and accounting for the context of use through holistic implementation and training strategies.
Designing for Wellbeing: Worker-Generated Ideas on Adapting Algorithmic Management in the Hospitality Industry
DIS 2023 · Franchesca Spektor et al.
Tags: Workplace Wellbeing & Work Stress; Impact of Automation on Work

Labor shortages have shaped many industries over the past several years, with hospitality experiencing one of the largest rates of attrition. Workers are leaving their jobs for a variety of reasons, ranging from burnout and work intensification to a lack of meaningful employment. While some literature maintains that labor-replacing automation is poised to bridge the shortages, we argue there is an opportunity for technology design to instead improve job quality and retention. Drawing on interviews with unionized guest room attendants, we report on workers' perceptions of a widely-used algorithmic room assignment system. We then present worker-generated design ideas that adapt this system toward supporting three key facets of wellbeing: self-efficacy, transparency, and workload. We argue for the need to consider these facets of wellbeing through design across the service landscape, particularly as HCI attends to the impacts of AI and automation on frontline work.
Creating Design Resources to Scaffold the Ideation of AI Concepts
DIS 2023 · Nur Yildirim et al.
Tags: Generative AI (Text, Image, Music, Video); Human-LLM Collaboration; Prototyping & User Testing

Advances in artificial intelligence have enabled unprecedented technical capabilities, yet making these advances useful in the real world remains challenging. We engaged in a Research through Design process to improve the ideation of AI products and services. We developed a design resource capturing AI capabilities based on 40 AI features commonly used across various domains. To probe its usefulness, we created a set of slides illustrating AI capabilities and asked designers to ideate AI-enabled user experiences. We also incorporated capabilities into our own design process to brainstorm concepts with domain experts and data scientists. Our research revealed that designers should focus on innovations where moderate AI performance creates value. We reflect on our process and discuss research implications for creating and assessing resources to systematically explore AI's problem-solution space.
How Experienced Designers of Enterprise Applications Engage AI as a Design Material
CHI 2022 · Nur Yildirim et al. (Carnegie Mellon University)
Tags: Generative AI (Text, Image, Music, Video); AI-Assisted Decision-Making & Automation

HCI research has explored AI as a design material, suggesting that designers can envision AI's design opportunities to improve UX. Recent research claimed that enterprise applications offer an opportunity for AI innovation at the user experience level. We conducted design workshops to explore the practices of experienced designers who work on cross-functional AI teams in the enterprise. We discussed how designers successfully work with and struggle with AI. Our findings revealed that designers can innovate at the system and service levels. We also discovered that making a case for an AI feature's return on investment is a barrier for designers when they propose AI concepts and ideas. Our discussions produced novel insights on designers' role on AI teams, and the boundary objects they used for collaborating with data scientists. We discuss the implications of these findings as opportunities for future research aiming to empower designers in working with data and AI.
Social Robots in Service Contexts: Exploring the Rewards and Risks of Personalization and Re-embodiment
DIS 2021 · Samantha Reig et al.
Tags: Agent Personality & Anthropomorphism; Social Robot Interaction

Social agents and robots are moving into front-line positions in brick-and-mortar services, taking on roles where they directly interact with customers. These agents could potentially recognize customers to personalize service. Will customers like this, or might they feel monitored and profiled? Robots could also re-embody (move their "personality" between one body and another) in order to take on multiple roles that are typically performed by different people. Will this make customers feel more taken care of, or will it raise concerns about the robot's competence and expertise? Our work investigates when robots should and should not recognize customers and re-embody. Our online study used storyboards to present possible future interactions between robots and customers across several different service contexts. Our findings suggest that people generally accept robots identifying customers and taking on vastly different roles. However, in some contexts, these robot behaviors seem creepy and untrustworthy.
Wikipedia ORES Explorer: Visualizing Trade-offs For Designing Applications With Machine Learning API
DIS 2021 · Zining Ye et al.
Tags: Explainable AI (XAI); Interactive Data Visualization

With the growing industry applications of Artificial Intelligence (AI) systems, pre-trained models and APIs have emerged and greatly lowered the barrier of building AI-powered products. However, novice AI application designers often struggle to recognize the inherent algorithmic trade-offs and evaluate model fairness before making informed design decisions. In this study, we examined the Objective Revision Evaluation System (ORES), a machine learning (ML) API in Wikipedia used by the community to build anti-vandalism tools. We designed an interactive visualization system to communicate model threshold trade-offs and fairness in ORES. We evaluated our system by conducting 10 in-depth interviews with potential ORES application designers. We found that our system helped application designers who have limited ML backgrounds learn about in-context ML knowledge, recognize inherent value trade-offs, and make design decisions that aligned with their goals. By demonstrating our system in a real-world domain, this paper presents a novel visualization approach to facilitate greater accessibility and human agency in AI application design.
Keeping Designers in the Loop: Communicating Inherent Algorithmic Trade-offs Across Multiple Objectives
DIS 2020 · Bowen Yu et al.
Tags: Explainable AI (XAI); AI-Assisted Decision-Making & Automation; Privacy by Design & User Control

Artificial intelligence algorithms have been used to enhance a wide variety of products and services, including assisting human decision making in high-stakes contexts. However, these algorithms are complex and have trade-offs, notably between prediction accuracy and fairness to population subgroups. This makes it hard for designers to understand algorithms and design products or services in a way that respects users' goals, values, and needs. We proposed a method to help designers and users explore algorithms, visualize their trade-offs, and select algorithms with trade-offs consistent with their goals and needs. We evaluated our method on the problem of predicting criminal defendants' likelihood to re-offend through (i) a large-scale Amazon Mechanical Turk experiment, and (ii) in-depth interviews with domain experts. Our evaluations show that our method can help designers and users of these systems better understand and navigate algorithmic trade-offs. This paper contributes a new way of providing designers the ability to understand and control the outcomes of algorithmic systems they are creating.
Replay Enactments: Exploring Possible Futures through Historical Data
DIS 2020 · Kenneth Holstein et al.
Tags: Interactive Data Visualization; Computational Methods in HCI

As we design increasingly complex systems, we run up against fundamental limitations of human imagination. To support practice, it becomes essential to use authentic data and algorithms as design materials to augment designers' intuitions. Recent work has explored some dimensions of using data as a design material, suggesting the contours of a new space of design and prototyping methods. In this paper, we present Replay Enactments (REs), an extension of the User Enactments method that uses data replay as a boundary object, making complex system behavior tangible to designers and stakeholders. We reflect on a set of case studies that have instantiated REs in diverse ways and discuss trade-offs between different ways of using data replays in design. We conclude by highlighting opportunities and challenges for future work.
Robotic Futures: Learning about Personally-Owned Agents through Performance
DIS 2020 · Michal Luria et al.
Tags: Agent Personality & Anthropomorphism; Social Robot Interaction

Agents that support spoken interaction (e.g., Amazon Echo) are designed for social spaces like the home, yet designers know little about how they should respond to social activity around them. We set out to reconsider current one-on-one interactions with agents, and explore the design space of future socially sophisticated agents. To do so, we used an iterative co-design process with designers and theatre experts to devise an immersive performance, "Robotic Futures." Theatre is a form of knowing through doing—by examining the interactions that persisted in the devising process and those that fell through, we conclude with a proposition for design considerations for future agents. Based on emerging research in this space, we focus on the characteristics of personally-owned agents in comparison to shared agents, and consider the roles and functions each introduces when integrated into the home.
Moving for the Movement: Applying Viewpoints and Composition Techniques to the Design of Online Social Justice Campaigns
DIS 2020 · Judeth Oden Choi et al.
Tags: Activism & Political Participation; Design Fiction

By leveraging approaches from other disciplines, designers can expand the boundaries of interaction design to tackle complex socio-technical problems. To address the challenges of networked social justice movements, we developed a workshop for designers and social justice activists based on Viewpoints and Composition, a philosophy and set of techniques for the theatre. Building on other experience prototyping and somatic methods, the workshop leads participants through the design of a hypothetical internet-enabled social justice campaign, encouraging them to imagine the felt experience of networked social justice movement building in a socio-spatial context. We conclude with insights from the workshop and plans to further develop these techniques.
Not Some Random Agent: Multi-person Interactions with a Personalizing Service Robot
HRI 2020 · Samantha Reig et al.
Tags: Social Robot Interaction; Human-Robot Collaboration (HRC)

Service robots often perform their main functions in public settings, interacting with more than one person at a time. How these robots should handle the affairs of individual users while also behaving appropriately when others are present is an open question. One option is to design for flexible agent embodiment: letting agents take control of different robots as people move between contexts. Through structured User Enactments, we explored how agents embodied within a single robot might interact with multiple people. Participants interacted with a robot embodied by a singular service agent, agents that re-embody in different robots and devices, and agents that co-embody within the same robot. Findings reveal key insights about the promise of re-embodiment and co-embodiment as design paradigms as well as what people value during interactions with service robots that use personalization.