Beyond the Illusion: LLMs and the Case for Pragmatic Cues in Conversation (CUI 2025)
Laura Spillner et al.
Conversational agents are becoming increasingly adept at interacting with humans in a natural manner. They incorporate subtle linguistic and paralinguistic cues: changes in tone and style, emotional expressions, or fillers like 'mm-hm'. In human communication, such cues serve pragmatic functions that support mutual understanding and communicative success. This raises the question: do we want conversational agents to blindly mimic these cues, or can we use them more purposefully to serve a communicative function? We argue that the role of pragmatic cues in interaction with conversational user interfaces remains underexplored. A deeper understanding of how to use them strategically in appropriate contexts, and of their impact on human-machine interactions, is crucial to enhancing mutual understanding in conversations with artificial agents. Through this provocation, we propose a research agenda to spark discussion on how future research can address this.
Topics: Agent Personality & Anthropomorphism; Human-LLM Collaboration
Comparing Perceptions of Static and Adaptive Proactive Speech Agents (CUI 2024)
Justin Edwards et al.
A growing literature on speech interruptions describes how people interrupt one another with speech, but these behaviours have not yet been implemented in the design of artificial agents that interrupt. We compare perceptions of a prototype proactive speech agent that adapts its speech to both the urgency of its message and the difficulty of the ongoing task it interrupts against perceptions of a static proactive agent that does not. The study hypothesises that adaptive proactive speech modelled on human speech interruptions will lead to partner models that consider the proactive agent a stronger conversational partner than a static agent, and that interruptions initiated by an adaptive agent will be judged as better timed and more appropriately asked. All of these hypotheses are rejected, however: quantitative analysis reveals that participants view the adaptive agent as a poorer dialogue partner than the static agent and as less appropriate in the style with which it interrupts. Qualitative analysis sheds light on the source of this surprising finding, as participants see the adaptive agent as less socially appropriate and less consistent in its interactions than the static agent.
Topics: Conversational Chatbots; Agent Personality & Anthropomorphism
Cross-Cultural Validation of Partner Models for Voice User Interfaces (CUI 2024)
Katie Seaborn et al.
Recent research has begun to assess people's perceptions of voice user interfaces (VUIs) as dialogue partners, termed partner models. Current self-report measures are only available in English, limiting research to English-speaking users. To improve the diversity of user samples and contexts that inform partner modelling research, we translated, localized, and evaluated the Partner Modelling Questionnaire (PMQ) for non-English-speaking Western (German, n=185) and East Asian (Japanese, n=198) cohorts where VUI use is popular. Through confirmatory factor analysis (CFA), we find that the scale produces equivalent levels of goodness-of-fit for both our German and Japanese translations, confirming its cross-cultural validity. Still, the structure of the communicative flexibility factor did not replicate directly across the Western and East Asian cohorts. We discuss how our translations can open up critical research on cultural similarities and differences in partner model use and design, whilst highlighting the challenges of ensuring accurate translation across cultural contexts.
Topics: Voice User Interface (VUI) Design; Multilingual & Cross-Cultural Voice Interaction
Using Speech Agents for Mood Logging within Blended Mental Healthcare: Mental Healthcare Practitioners' Perspectives (CUI 2024)
Orla Cooney et al.
Mood logging, where people track mood-related data, is commonly used to support mental healthcare. Speech agents could prove beneficial in supporting mood logging for clients. Yet we know little about how Mental Healthcare Practitioners (MHPs) view speech as a tool to support current care practices. Through a thematic analysis of semi-structured interviews with 15 MHPs, we show that MHPs see opportunities in the convenience and the data richness that speech agents could afford. However, MHPs also saw this richness as noisy, with the use of speech potentially diminishing a client's focus on mood logging as an activity. MHPs were wary of overusing AI-based tools, expressing concerns around data ownership, access, and privacy. We discuss the role of speech agents within blended care, outlining key considerations when using speech for mood logging in a blended mental healthcare context.
Topics: Intelligent Voice Assistants (Alexa, Siri, etc.); Mental Health Apps & Online Support Communities
Cooking With Agents: Designing Context-aware Voice Interaction (CHI 2024)
Razan Jaber et al., Stockholm University
Voice Agents (VAs) are touted as being able to help users in complex tasks such as cooking, acting as a conversational partner that provides information and advice while the task is ongoing. Through conversation analysis of 7 cooking sessions with a commercial VA, we identify challenges caused by a lack of contextual awareness, leading to irrelevant responses, misinterpretation of requests, and information overload. Informed by this, we evaluated 16 cooking sessions with a wizard-led context-aware VA. We observed more fluent interaction between humans and agents, including more complex requests, explicit grounding within utterances, and complex social responses. We discuss reasons for this, the potential for personalisation, and the division of labour in VA communication and proactivity. We then discuss recent advances in generative models in relation to the interaction challenges VAs face. We propose limited context awareness in VAs as a step toward explainable, explorable conversational interfaces.
Topics: Voice User Interface (VUI) Design; Context-Aware Computing
Human Speakers Help Machine Listeners to Account for Visual Asymmetries in Dialogue (CUI 2023)
Paola Raquel Peña et al.
Human-machine dialogue (HMD) research debates the degree to which language production in this context is egocentric or allocentric; that is, the degree to which a person might take a machine's perspective into account. Our study aims to identify whether users produce allocentric or egocentric language within speech-based HMD when there is asymmetry in the information available to both partners. Through an adapted referential communication task, we manipulated the presence or absence of visual distractors and occlusions, similarly to previous referential tasks used in psycholinguistic research. Results show that people are sensitive to the presence of distractors and occlusions and tend to produce more informative expressions to help machine partners account for the visual asymmetries. We discuss how allocentric production in HMD can be explained by the way the division of labour manifests in spoken HMD. The findings further our understanding of language production mechanisms in HMD.
Topics: Voice User Interface (VUI) Design; Human-LLM Collaboration
Defending Against the Dark Arts: Recognising Dark Patterns in Social Media (DIS 2023)
Thomas Mildner et al.
Interest in unethical user interfaces has grown in HCI over recent years, with researchers identifying malicious design strategies referred to as "dark patterns". While such strategies have been described in numerous domains, we lack a thorough understanding of how they operate in social networking services (SNSs). Pivoting towards regulations against such practices, we address this gap by offering novel insights into the types of dark patterns deployed in SNSs and people's ability to recognise them across four widely used mobile SNS applications. Following a cognitive walkthrough, experts (N=6) could identify instances of dark patterns in all four SNSs, including co-occurrences. Based on the results, we designed a novel rating procedure for evaluating the malice of interfaces. Our evaluation shows that regular users (N=193) could differentiate between interfaces featuring dark patterns and those without. Such rating procedures could support policymakers' current moves to regulate deceptive and manipulative designs in online interfaces.
Topics: Dark Patterns Recognition; Social Platform Design & User Behavior
About Engaging and Governing Strategies: A Thematic Analysis of Dark Patterns in Social Networking Services (CHI 2023)
Thomas Mildner et al., University of Bremen
Research in HCI has shown a growing interest in unethical design practices across numerous domains, often referred to as "dark patterns". There is, however, a gap in related literature regarding social networking services (SNSs). In this context, studies emphasise a lack of users' self-determination regarding control over personal data and time spent on SNSs. We collected over 16 hours of screen recordings from Facebook's, Instagram's, TikTok's, and Twitter's mobile applications to understand how dark patterns manifest in these SNSs. For this task, we turned towards HCI experts to mitigate the difficulties non-expert participants can have in recognising dark patterns, as prior studies have noted. Supported by the recordings, two authors of this paper conducted a thematic analysis based on previously described taxonomies, manually classifying the recorded material and delivering two key findings: we observed which dark-pattern instances occur in SNSs, and we identified two strategies – engaging and governing – comprising five previously undescribed dark patterns.
Topics: Dark Patterns Recognition; Social Platform Design & User Behavior
The Last Decade of HCI Research on Children and Conversational Agents (CHI 2022)
Radhika Garg et al., Syracuse University
Voice-based Conversational Agents (CAs) are increasingly being used by children. Through a review of 38 research papers, this work maps trends, themes, and methods of empirical research on children and CAs in HCI research over the last decade. A thematic analysis of the research found that work in this domain focuses on seven key topics: ascribing human-like qualities to CAs, CAs' support of children's learning, the use and role of CAs in the home and family context, CAs' support of children's play, children's storytelling with CAs, issues concerning the collection of information revealed by CAs, and CAs designed for children with differing abilities. Based on our findings, we identify the need to account for children's intersectional identities, their linguistic and cultural diversity, and theories from multiple disciplines in the design of CAs; to develop heuristics for child-centric interaction with CAs; to investigate implications of CAs for social cognition and interpersonal relationships; and to examine and design for multi-party interactions with CAs across different domains and contexts.
Topics: Intelligent Voice Assistants (Alexa, Siri, etc.); Conversational Chatbots; Agent Personality & Anthropomorphism
Eliciting and Analysing Users' Envisioned Dialogues with Perfect Voice Assistants (CHI 2021)
Sarah Theres Völkel et al., LMU Munich
We present a dialogue elicitation study to assess how users envision conversations with a perfect voice assistant (VA). In an online survey, N=205 participants were prompted with everyday scenarios and wrote the lines of both user and VA in dialogues that they imagined as perfect. We analysed the dialogues with text analytics and qualitative analysis, including number of words and turns, social aspects of conversation, implied VA capabilities, and the influence of user personality. The majority envisioned dialogues with a VA that is interactive and not purely functional; it is smart, proactive, and has knowledge about the user. Attitudes diverged regarding the assistant's role as well as its expression of humour and opinions. An exploratory analysis suggested a relationship between these aspects and personality, but correlations were low overall. We discuss implications for research and design of future VAs, underlining the vision of enabling conversational UIs rather than single-command "Q&As".
Topics: Voice User Interface (VUI) Design; Intelligent Voice Assistants (Alexa, Siri, etc.); Agent Personality & Anthropomorphism
Heuristic Evaluation of Conversational Agents (CHI 2021)
Raina Langevin et al., University of Washington
Conversational interfaces have risen in popularity as businesses and users adopt a range of conversational agents, including chatbots and voice assistants. Although guidelines have been proposed, there is not yet an established set of usability heuristics to guide and evaluate conversational agent design. In this paper, we propose a set of heuristics for conversational agents adapted from Nielsen's heuristics and based on expert feedback. We then validate the heuristics through two rounds of evaluations conducted by participants on two conversational agents, one chatbot and one voice-based personal assistant. We find that, when using our heuristics to evaluate both interfaces, evaluators were able to identify more usability issues than when using Nielsen's heuristics. We propose that our heuristics successfully identify issues related to dialogue content, interaction design, help and guidance, human-like characteristics, and data privacy.
Topics: Conversational Chatbots; Agent Personality & Anthropomorphism
What Do We See in Them? Identifying Dimensions of Partner Models for Speech Interfaces Using a Psycholexical Approach (CHI 2021)
Philip R Doyle et al., University College Dublin
Perceptions of system competence and communicative ability, termed partner models, play a significant role in speech interface interaction. Yet we do not know what the core dimensions of this concept are. Taking a psycholexical approach, our paper is the first to identify the key dimensions that define partner models in speech agent interaction. Through a repertory grid study (N=21), a review of key subjective questionnaires, an expert review of the resulting word pairs, and an online study of 356 users of speech interfaces, we identify three key dimensions that make up a user's partner model: 1) perceptions of partner competence and dependability; 2) assessment of human-likeness; and 3) a system's perceived cognitive flexibility. We discuss the implications for partner modelling as a concept, emphasising the importance of salience and the dynamic nature of these perceptions.
Topics: Voice User Interface (VUI) Design; Intelligent Voice Assistants (Alexa, Siri, etc.); Agent Personality & Anthropomorphism
What Makes a Good Conversation? Challenges in Designing Truly Conversational Agents (CHI 2019)
Leigh Clark et al., University College Dublin
Conversational agents promise conversational interaction but fail to deliver. Efforts often emulate functional rules from human speech, without considering key characteristics that conversation must encapsulate. Given its potential in supporting long-term human-agent relationships, it is paramount that HCI focuses efforts on delivering this promise. We aim to understand what people value in conversation and how this should manifest in agents. Findings from a series of semi-structured interviews show people make a clear dichotomy between social and functional roles of conversation, emphasising the long-term dynamics of bond and trust along with the importance of context and relationship stage in the types of conversations they have. People fundamentally questioned the need for bond and common ground in agent communication, shifting to more utilitarian definitions of conversational qualities. Drawing on these findings, we discuss key challenges for conversational agent design, most notably the need to redefine the design parameters for conversational agent interaction.
Topics: Conversational Chatbots; Agent Personality & Anthropomorphism