Safeguarding Crowdsourcing Surveys from ChatGPT through Prompt InjectionChatGPT and other large language models (LLMs) have proven useful in crowdsourcing tasks, where they can effectively annotate machine learning training data. However, this means that they also have the potential for misuse, specifically to automatically answer surveys. LLMs can potentially circumvent quality assurance measures, thereby threatening the integrity of methodologies that rely on crowdsourcing surveys. In this paper, we propose a mechanism to detect LLM-generated responses to surveys. The mechanism uses "prompt injection," such as directions that can mislead LLMs into giving predictable responses. We evaluate our technique against a range of question scenarios, types, and positions, and find that it can reliably detect LLM-generated responses with more than 98% effectiveness. We also provide an open-source software to help survey designers use our technique to detect LLM responses. Our work is a step in ensuring that survey methodologies remain rigorous vis-a-vis LLMs.2025CWChaofan Wang et al.Working with AICSCW
Less Supervising, More Caring: Design Recommendations for Informal Caregivers' Co-Participation in Cardiac TelerehabilitationInformal caregivers’ engagement with patient data is becoming increasingly central to CSCW and HCI research on health management. Cardiac telerehabilitation (CTR) technologies generate lifestyle and well-being data that support patients and their families in recovery management, yet informal caregivers' roles in CTR remain underexplored. Recreational athletes in rehabilitation are an especially under-researched group, despite their and their support system's unique needs. Focusing on caregivers of recreational athletes, we conducted interviews with ten participants and used six visual scenarios of a dyadic CTR system to explore their perspectives on data and information co-participation. Caregivers reported that co-participation could strengthen dyadic coping and management but emphasized the need to balance important trade-offs. We provide design recommendations for dyadic CTR systems that balance care needs and preferences, promoting caregiver involvement in a supportive, non-supervisory role. We contribute to CSCW research by proposing a conceptual shift in technology-mediated rehabilitation care: positioning caregiver-inclusive CTR systems as negotiation tools that support boundary work and balance competing care values.2025ISIrina Bianca Serban et al.Caregiving & CaregiversCSCW
Mind Over Matter - Investigating the Influence of Driver's Perception in the Misuse of Automated VehiclesAs vehicles with several levels of automation become increasingly common, there is an increase in incidents involving the misuse of Driving Automation Systems (DAS). The manner in which drivers interact with DAS indicates that the problem extends beyond UI design. We investigate how drivers' perceptions and expectations affect the understanding and consequent usage of DAS. The study employed a Wizard-of-Oz approach to simulate a vehicle with a Level 2 and Level 3 DAS on a public highway. Sixteen participants were exposed to the two driving modes and two distinct UIs. Observations, think-aloud protocols, and in-depth interviews documented their interaction with the different DAS. Irrespective of the UI, various errors were detected, including omission, commission, mode confusion. Deeper investigation into the sources led to the conclusion that drivers' preconceptions of the DAS were a major contributor, resulting in misuse. This highlights the need to look beyond UI design as a sole solution to address driver-vehicle interaction.2025FNFjollë Novakazi et al.Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS)AI-Assisted Decision-Making & AutomationAutoUI
Transparent Conversational Agents: The Impact of Capability Communication on User Behavior and Mental Model AlignmentWhen a user interacts with a conversational agent for the first time, they may not be aware of the agent's capabilities, leading to suboptimal use or interaction breakdowns. To avoid a mismatch with the actual capabilities, the agent's capabilities have to be made transparent to the user. To investigate whether communication of an agent's capabilities during interactions enhances transparency and improves the user's mental model, we conducted a user study with 56 participants. Each participant had three speech-based interactions with an agent that communicated its capabilities or an agent that did not. Our results suggest that the communication led to a change in user behavior with significantly longer utterances. However, the users' mental models of the agent's capabilities were not significantly different between the conditions. Participants were able to significantly improve their knowledge of the agent's capabilities by aligning their mental model over time in both conditions.2025MRMerle M. Reimann et al.Agent Personality & AnthropomorphismExplainable AI (XAI)Privacy by Design & User ControlCUI
DesignMinds: Enhancing Video-Based Design Ideation with a Vision-Language Model and a Context-Injected Large Language ModelIdeation is a critical component of video-based design (VBD), where videos serve as the primary medium for design exploration and inspiration. The emergence of generative AI offers considerable potential to enhance this process by streamlining video analysis and facilitating idea generation. In this paper, we present DesignMinds, a prototype that integrates a state-of-the-art Vision-Language Model (VLM) with a context-enhanced Large Language Model (LLM) to support ideation in VBD. To evaluate DesignMinds, we conducted a between-subject study with 35 design practitioners, comparing its performance to a baseline condition. Our results demonstrate that DesignMinds significantly enhances the flexibility and originality of ideation, while also increasing task engagement. Importantly, the introduction of this technology did not negatively impact user experience, technology acceptance, or usability.2025THTianhao He et al.Generative AI (Text, Image, Music, Video)Human-LLM CollaborationGraphic Design & Typography ToolsCUI
Prototyping with Uncertainties: Data, Algorithms, and Research through DesignSeen both as a resource and an obstacle to clarity, uncertainty is a concept that permeates many areas of design. As the concept gains prominence in Human-Computer Interaction (HCI), this special issue specifically explores the interplay between uncertainty and prototyping in Research through Design (RtD). We first outline three histories of uncertainty in design, in relation to its philosophical significance, its role in statistical and algorithmic processes, and its importance in prototyping. The convergence of these aspects is crucial as design evolves toward more agentive and entangled systems, introducing challenges such as Design as a Probabilistic Outcome. We then investigate the design spaces for engaging with “being uncertain” that emerge from the papers: from nuancing the relationship between designers and quantitative data to blurring the line between humans, fungi, and algorithms. Finally, we illuminate some preliminary threads for how RtD can navigate and engage with these shifting technological and design landscapes thoughtfully.2025EGElisa Giaccardi et al.Prototyping & User TestingComputational Methods in HCIDIS
Knowing Me, Knowing AU: How Should We Design Agent-Mediated Mimicry?A lack of self-awareness of communicative behaviours can lead to disadvantages in important interactions. Video recordings as a tool for self-observation have been widely adopted to initiate behaviour change and reflection. Seeing oneself in a recording can lead to negative affect. Forcing an external perspective can lead to cognitive dissonance. Avatars and virtual agents have the advantage that they can copy a human's behaviour while potentially avoiding this dissonance. To explore the design space of mimicking agents, we set up a user study where a video baseline is compared to agent-mediated conditions ranging from idle non-verbal behaviour to complete mimicry of the voice and face. We show that participants gain increased self-awareness from seeing themselves mediated through the virtual agent. We further discuss qualitative observations for the future design of systems that aid in self-reflection, and particularly note that partial mimicry seems to be less appreciated than full mimicry.2025AAAgnes Johanna Axelsson et al.Hand Gesture RecognitionImmersion & Presence ResearchIdentity & Avatars in XRDIS
From Bodily Functions to Bodily Fun: Approaching Pleasure as a Process when Designing with Sexual ExperiencesThis paper presents a conceptual exploration of designing sexual pleasure as an evolving whole-body experience. It addresses the historically narrow focus of research and technology on functional outcomes such as reproduction and orgasm. This limited perspective overlooks diverse desires, emotional connection, and sensory engagement, reinforcing restrictive norms that shape how individuals conceptualise and experience sexuality. To inform our design inquiry, we conducted a qualitative survey (N=143) to generate how individuals understand and experience sexual pleasure. Reflexive thematic analysis of the responses reveals the influence of culture and technology on sexuality, alongside several experiential dimensions: emotional and embodied connection, play and sensory immersion, and vulnerability. These insights, together with a theoretical foundation, guide a design exploration communicated through two provocations. These provocations serve as reflections of an alternative design orientation; one that challenges normative assumptions, views pleasure as an ongoing process, supports bodily exploration, and facilitates richer, more inclusive sexual experiences.2025COCéline Offerman et al.Human-Nature Relationships (More-than-Human Design)Interactive Narrative & Immersive StorytellingDIS
On the Habitabilities of Bacterial Cellulose for Living ArtefactsBacterial cellulose (BC), also known as a Kombucha mat or SCOBY, is a grown material widely adopted in design and HCI communities due to its biodegradability, accessibility and mechanical versatility. Alongside these aspects, BC’s qualities to become a habitat for other living organisms, i.e., its habitabilities, have been researched in biotechnological sciences but not fully explored in design. In response to the call for biobased material alternatives and the expanding design space for multispecies interactions in HCI, in this paper, we unpack this habitability potential of BC in the design of living artefacts. Through visual storytelling we unveil our hands-on biolab journey with Komagataeibacter, the bacteria that produce BC, and show how fungi, microalgae and cyanobacteria can inhabit this material. We outline diverse options for tuning the habitabilities of BC to incite HCI designers in the creation of living artefacts that are fully grown and compatible with regenerative ecologies.2025EGEduard Georges Groutars et al.Shape-Changing Interfaces & Soft Robotic MaterialsHuman-Nature Relationships (More-than-Human Design)DIS
Diffractive Interfaces: Facilitating Agential Cuts in Forest Data Across More-than-human ScalesAs cities worldwide adopt data-driven approaches to optimize urban forests, computational tools like agent-based models (ABMs) are increasingly popular to simulate forest growth and inform planting decisions. However, ABMs often focus on individual metrics, neglecting forests as interdependent ecosystems. Rooted in anthropocentric ideals, these models risk reducing forests to infrastructures for human benefit, undermining their long-term resilience. This pictorial challenges these limitations by exploring how interface design can transcend reductive, agent-centric representations to foster relational understandings of forest ecosystems as more-than-human bodies. Drawing on feminist theorist Karen Barad’s concepts of “diffraction” and “agential cuts,” we craft a repertoire of diffractive interfaces that engage with forest simulation data, revealing how more-than-human bodies can be encountered across diverse temporal, spatial, and agential scales. Through this design exploration, we operationalize more-than- human perspectives in data practices, deepening our understanding of the performative dimensions of interfaces and advancing nuanced, practical approaches to more-than-human design.2025EGElisa Giaccardi et al.Interactive Data VisualizationSustainable HCIHuman-Nature Relationships (More-than-Human Design)DIS
Artificial Intelligence and other Speculative MetaphorsThe paper proposes “speculative metaphors” as constructs for reframing and critically engaging with ideas of artificial intelligence. It identifies a broad range of AI metaphors in the wider culture and technical literature and discusses metaphor design in terms of explanation, persuasion and speculation. To explore different metaphor design strategies, we use a custom GPT to generate a large number of variants on the “artificial intelligence” metaphor. The paper contributes a conceptual framing for such speculative metaphor drawing on ideas of knowledge and understanding, fusion and synthesis, collaboration and collectives. We argue that generating speculative metaphors provides a means of thinking critically about human-AI interaction.2025MBMark Blythe et al.Technology Ethics & Critical HCIDesign FictionDIS
D360: a Tool for Supporting Rapid, Iterative, and Collaborative Analysis of 360° VideoDesigners can immerse themselves into the world of users by using 360° video leading to richer insights and better solutions. However, 360° video is challenging to share and incompatible with existing tools, preventing designers from effectively integrating it into their iterative and collaborative workflows. To address these challenges, we developed D360, a tool that enables designers to view, annotate, and collaboratively analyze 360° video. D360 features a web-based 360° video viewing and annotation tool, a database, and Miro integration to analyze 360° video using a familiar collaborative process. We evaluated D360 using walk-throughs with six professional designers that verified its utility and identified improvements to creating and presenting annotations. By providing both design directions for future 360° video tools for designers and our open source tool, we enable practitioners and researchers to leverage the rich interaction and visual context of 360° video for more impactful insights.2025WMWo Meijer et al.Mixed Reality Workspaces360° Video & Panoramic ContentCreative Collaboration & Feedback SystemsDIS
A Design Space for Animated Textile-forms through Shuttle Weaving: A Case of 3D Woven TrousersAnimated textile-forms hold great potential to seamlessly embed interaction in textile-based artefacts. This paper presents a comprehensive design space for animated woven textile-forms, explored via shuttle weaving. HCI designers have explored the potential of shuttle weaving for local material placement via partial weft insertions and continuous yarn paths to create flexible circuits, sensors in textiles, and, more recently, for animated textile-forms. While these examples indicate early steps towards animated woven textiles, further articulation of the many processes, ingredients, structure, and form variables available to designers is required to realize the full potential of this weaving technique. Addressing this gap, we developed a design space through a combination of literature review and practice-led exploration undertaken for a specific design case - animated 3D shuttle-woven trousers. Our work aims to inspire HCI designers to explore and expand the use of shuttle weaving as an accessible and versatile technique for textile-forms with rich interaction possibilities.2025MVMilou Voorwinden et al.Shape-Changing Interfaces & Soft Robotic MaterialsShape-Changing Materials & 4D PrintingTextile Art & Craft DigitizationDIS
Is Conversational XAI All You Need? Human-AI Decision Making With a Conversational XAI AssistantExplainable artificial intelligence (XAI) methods are being proposed to help interpret and understand how AI systems reach specific predictions. Inspired by prior work on conversational user interfaces, we argue that augmenting existing XAI methods with conversational user interfaces can increase user engagement and boost user understanding of the AI system. In this paper, we explored the impact of a conversational XAI interface on users’ understanding of the AI system, their trust, and reliance on the AI system. In comparison to an XAI dashboard, we found that the conversational XAI interface can bring about a better understanding of the AI system among users and higher user trust. However, users of both the XAI dashboard and conversational XAI interfaces showed clear overreliance on the AI system. Enhanced conversations powered by large language model (LLM) agents amplified over-reliance. Based on our findings, we reason that the potential cause of such overreliance is the illusion of explanatory depth that is concomitant with both XAI interfaces. Our findings have important implications for designing effective conversational XAI interfaces to facilitate appropriate reliance and improve human-AI collaboration.2025GHGaole He et al.Conversational ChatbotsHuman-LLM CollaborationExplainable AI (XAI)IUI
Technologies Supporting Self-Reflection on Social Interactions: A Systematic ReviewAs intelligent technology and applications have become an integral part of nearly all aspects of people's daily lives, many intelligent systems have been designed to help people navigate the complex space of social interactions. One prominent strategy for such intelligent support is providing meaningful Ad Hoc Interventions (ADI), e.g., through timely notifications. An alternative is Technology-Supported Reflection (TSR), e.g., by offering information about activities in one's past for personal insights. In contrast to straight-up interventions, the aim of the latter strategy is not to directly augment human skills but instead support learning and personal growth over time. However, while TSR has seen widespread interest in applications in some areas, such as physical fitness and mental health, its use for improving human social interactions has not yet been systematically explored. Concretely, it is currently unclear 1) what forms of self-reflection systems intend to support, 2) how their different technological components (e.g., data collection, information integration) are involved in providing support, and 3) what common limitations and design challenges they face. In this article, we present the results of a systematic literature review focusing on these questions to provide a structured foundation for targeted research. Concretely, we identified and analysed a collection of 23 relevant papers, each describing a system deploying TSR to support humans with elements of social interactions. We constructed a framework with a set of features to comprehensively describe and analyze the systems that support self-reflection, including their application domains, how they fit into the existing design framework, how they facilitate learning through reflection, how adaptive they are to individual users, and how they were evaluated. Finally, we propose a direction for designing systems that support individual's social interactions through self-reflection in an adaptive manner.2025CHChenxu Hao et al.Mental Health Apps & Online Support CommunitiesUser Research Methods (Interviews, Surveys, Observation)IUI
Plan-Then-Execute: An Empirical Study of User Trust and Team Performance When Using LLM Agents As A Daily AssistantSince the explosion in popularity of ChatGPT, large language models (LLMs) have continued to impact our everyday lives. Equipped with external tools that are designed for a specific purpose (e.g., for flight booking or an alarm clock), LLM agents exercise an increasing capability to assist humans in their daily work. Although LLM agents have shown a promising blueprint as daily assistants, there is a limited understanding of how they can provide daily assistance based on planning and sequential decision making capabilities. We draw inspiration from recent work that has highlighted the value of `\textit{LLM-modulo}' setups in conjunction with humans-in-the-loop for planning tasks. We conducted an empirical study ($N$ = 248) of LLM agents as daily assistants in six commonly occurring tasks with different levels of risk typically associated with them (e.g., flight ticket booking and credit card payments). To ensure user agency and control over the LLM agent, we adopted LLM agents in a plan-then-execute manner, wherein the agents conducted step-wise planning and step-by-step execution in a simulation environment. We analyzed how user involvement at each stage affects their trust and collaborative team performance. Our findings demonstrate that LLM agents can be a double-edged sword --- (1) they can work well when a high-quality plan and necessary user involvement in execution are available, and (2) users can easily mistrust the LLM agents with plans that seem plausible. We synthesized key insights for using LLM agents as daily assistants to calibrate user trust and achieve better overall task outcomes. Our work has important implications for the future design of daily assistants and human-AI collaboration with LLM agents.2025GHGaole He et al.Delft University of TechnologyHuman-LLM CollaborationAI-Assisted Decision-Making & AutomationCHI
“All Sorts of Other Reasons to Do It”: Explaining the Persistence of Sub-optimal IoT Security AdviceThe proliferation of consumer Internet of Things (IoT) devices has raised security concerns. In response, governments have been advising consumers on security measures, but these recommendations are not guaranteed to be implementable owing to the diverse and rapidly evolving IoT landscape, risking wasted efforts and uncertainty caused by unsuccessful attempts to secure devices. Through interviews and a workshop with 14 stakeholders involved in a Dutch national public awareness campaign, we found that while stakeholders recognized the validity of these concerns, they opted to continue the campaign with minor modifications while expecting regulatory changes to resolve the observed problem. Their justifications reveal an institutional incentive structure that overlooks well-documented user realities in security and privacy HCI research. This raises important considerations for the design and delivery of such support strategies. By fostering a collaborative dialogue, we aim to contribute to the development of user-centered security practices.2025VHVeerle van Harten et al.TU DelftIoT Device PrivacyParticipatory DesignCHI
SpineLoft: Interactive Spine-based 2D-to-3D Modeling3D artists (professionals and novices alike) often take inspiration from sketches or photos to guide their designs. Yet, existing modeling systems are not tailored to fully make use of such input. Consequently, significant effort and expertise are needed when creating model prototypes or exploring design options. In this work, we introduce a system to support the exploratory modeling process by enabling the transformation of 2D image elements into geometric 3D objects. Our solution relies on a novel d2 distance function, supporting a region-based lofting process, and delivers easily-editable 3D geometric "spine-rib" representations. The user draws a spine, and the system generates and modifies a generalized cylinder around it, considering image edges. The proposed approach, driven by simple user-defined scribble definitions, can robustly handle various image sources, ranging from photos to hand-drawn content.2025ATAlexandre Thiault et al.Institut Polytechnique de Paris, Telecom Paris3D Modeling & AnimationCustomizable & Personalized ObjectsCHI
Mind the Gap! Choice Independence in Using Multilingual LLMs for Persuasive Co-Writing Tasks in Different LanguagesRecent advances in generative AI have precipitated a proliferation of novel writing assistants. These systems typically rely on multilingual large language models (LLMs), providing globalized workers the ability to revise or create diverse forms of content in different languages. However, there is substantial evidence indicating that the performance of multilingual LLMs varies between languages. Users who employ writing assistance for multiple languages are therefore susceptible to disparate output quality. Importantly, recent research has shown that people tend to generalize algorithmic errors across independent tasks, violating the behavioral axiom of choice independence. In this paper, we analyze whether user utilization of novel writing assistants in a charity advertisement writing task is affected by the AI's performance in a second language. Furthermore, we quantify the extent to which these patterns translate into the persuasiveness of generated charity advertisements, as well as the role of peoples’ beliefs about LLM utilization for their donation choices. Our results provide evidence that writers who engage with an LLM-based writing assistant violate choice independence, as prior exposure to a Spanish LLM reduces subsequent utilization of an English LLM. While these patterns do not affect the aggregate persuasiveness of the generated advertisements, people's beliefs about the source of an advertisement (human versus AI) do. In particular, Spanish-speaking female participants who believed that they read an AI-generated advertisement strongly adjusted their donation behavior downwards. Furthermore, people are generally not able to adequately differentiate between human-generated and LLM-generated ads. Our work has important implications on the design, development, integration, and adoption of multilingual LLMs as assistive agents—particularly in writing tasks.2025SBShreyan Biswas et al.Technical University of DelftMultilingual & Cross-Cultural Voice InteractionGenerative AI (Text, Image, Music, Video)Human-LLM CollaborationCHI
Attracting Fingers with Waves: Potential Fields Using Active Lateral Forces Enhance Touch InteractionsTouchscreens and touchpads offer intuitive interfaces but provide limited tactile feedback, usually just mechanical vibrations. These devices lack continuous feedback to guide users’ fingers toward specific directions. Recent innovations in surface haptic devices, however, leverage ultrasonic traveling waves to create active lateral forces on a bare fingertip. This paper \revised{investigates the effects and design possibilities of active forces feedback in touch interactions by rendering artificial potential fields on a touchpad.Three user studies revealed that: (1) users perceived attractive and repulsive fields as bumps and holes with similar detection thresholds; (2) step-wise force fields improved targeting by 22.9% compared to friction-only methods; and (3) active force fields effectively communicated directional cues to the users. Several applications were tested, with user feedback favoring this approach for its enhanced tactile experience, added enjoyment, realism, and ease of use.2025ZCZhaochong Cai et al.Delft University of TechnologyVibrotactile Feedback & Skin StimulationCHI