Safeguarding Crowdsourcing Surveys from ChatGPT through Prompt InjectionChatGPT and other large language models (LLMs) have proven useful in crowdsourcing tasks, where they can effectively annotate machine learning training data. However, this means that they also have the potential for misuse, specifically to automatically answer surveys. LLMs can potentially circumvent quality assurance measures, thereby threatening the integrity of methodologies that rely on crowdsourcing surveys. In this paper, we propose a mechanism to detect LLM-generated responses to surveys. The mechanism uses "prompt injection," such as directions that can mislead LLMs into giving predictable responses. We evaluate our technique against a range of question scenarios, types, and positions, and find that it can reliably detect LLM-generated responses with more than 98% effectiveness. We also provide an open-source software to help survey designers use our technique to detect LLM responses. Our work is a step in ensuring that survey methodologies remain rigorous vis-a-vis LLMs.2025CWChaofan Wang et al.Working with AICSCW
"I can feel the risks by looking at the robot face": Communicating Risk through a Physical AgentRisk communication is essential for shaping public understanding and encouraging action in response to hazards. We investigate the potential of physical humanlike agents as a novel visualisation interface for risk communication, given their ability to communicate emotion and visually convey information. We first conducted a design workshop with 9 HCI experts to identify challenges, opportunities, and design strategies for using an agent's face as a visualisation canvas. We then conducted a lab study with 28 participants to assess the effectiveness of this interface to visualise the consequences of health risks. Our findings reveal that it facilitates data comprehension, heightens risk perception, elicits empathy, and motivates behavioural change by making the risk relatable and emotionally resonant. We discuss the potential of using these interfaces for risk communication in public spaces, health campaigns, education, and beyond. We provide design considerations, takeaways and future directions for an important pathway of human-centered risk communication.2025SSSarah Schömbs et al.Social Robot InteractionCommunity Engagement & Civic TechnologyDIS
The Impact of Human-Likeness and Self-Disclosure on Message Acceptance in Virtual AI InfluencersVirtual AI-generated Influencers (VAIIs) are increasingly being used by corporations and public agencies, raising questions about how their visual design and communication strategies impact end-users' propensity to accept the messages they deliver. We examined the impact of human-likeness (how close the VAII visually resembles a human) and self-disclosure (whether the message contains personal information) on message acceptance, alongside dispositional factors like empathy and anthropomorphising tendencies. In a mixed-methods experiment, participants (N=120) watched short-form videos featuring VAIIs of varying human-likeness (High/Moderate-High/Moderate-Low/Low) and self-disclosure (present/absent). We observed the strongest message acceptance from the VAIIs with the lowest human-likeness, and message rejection for VAIIs with moderate to low human-likeness. Additionally, participants' message acceptance was influenced by their empathy tendencies. Our qualitative analysis revealed further insights into participants' perceptions of the human-likeness of VAIIs, their discomfort with self-disclosure, and their tendency to anthropomorphise VAIIs. These findings provide important implications for the design of VAIIs.2025CSCherie Sew et al.Agent Personality & AnthropomorphismGenerative AI (Text, Image, Music, Video)DIS
Assessing Susceptibility Factors of Confirmation Bias in News Feed ReadingIndividuals tend to apply preferences and beliefs as heuristics to effectively sift through the sheer amount of information available online. Such tendencies, however, often result in cognitive biases, which can skew judgment and open doors for manipulation. In this work, we investigate how individual and contextual factors lead to instances of confirmation bias when seeking, evaluating, and recalling polarising information. We conducted a lab study, in which we exposed participants to opinions on controversial issues through a Twitter-like news feed. We found that low-effortful thinking, strong political beliefs, and content conveying a strong issue amplify the occurrences of confirmation bias, leading to skewed information processing and recall. We discuss how the adverse effects of confirmation bias can be mitigated by taking bias-susceptibility into account. Specifically, social media platforms could aim to reduce strong expressions and integrate media literacy-building mechanisms, as low-effortful thinking styles and strong political beliefs render individuals especially susceptible to cognitive biases.2025NBNattapat Boonprakong et al.University of Melbourne, School of Computing and Information SystemsPrivacy Perception & Decision-MakingMisinformation & Fact-CheckingCHI
"It’s Not the AI’s Fault Because It Relies Purely on Data": How Causal Attributions of AI Decisions Shape Trust in AI SystemsHumans naturally seek to identify causes behind outcomes through causal attribution, yet Human-AI research often overlooks how users perceive causality behind AI decisions. We examine how this perceived locus of causality—internal or external to the AI—influences trust, and how decision stakes and outcome favourability moderate this relationship. Participants (N=192) engaged with AI-based decision-making scenarios operationalising varying loci of causality, stakes, and favourability, evaluating their trust in each AI. We find that internal attributions foster lower trust as participants perceive the AI to have high autonomy and decision-making responsibility. Conversely, external attributions portray the AI as merely "a tool" processing data, reducing its perceived agency and distributing responsibility, thereby boosting trust. Moreover, stakes moderate this relationship—external attributions foster even more trust in lower-risk, low-stakes scenarios. Our findings establish causal attribution as a crucial yet underexplored determinant of trust in AI, highlighting the importance of accounting for it when researching trust dynamics.2025SPSaumya Pareek et al.University of Melbourne, School of Computing and Information SystemsExplainable AI (XAI)AI Ethics, Fairness & AccountabilityPrivacy by Design & User ControlCHI
The Influence of Content Modality on Perceptions of Online MisinformationSocial media has become a primary information source, with platforms evolving from text-based to multi-modal environments that include images and videos. While richer media modalities enhance user engagement, they also increase the spread and perceived credibility of misinformation. Most interventions to counter misinformation on social media are text-based, which may lack the persuasive power of richer modalities. This study explores whether the effectiveness of misinformation correction varies by modality, and if certain modalities of misinformation are better countered by a specific correction modality. We conducted a survey-based experiment where participants rated the credibility of misinformation tweets before and after exposure to corrections, across all combinations of text, images and video modalities. Our findings suggest that corrections are most effective when their modality richness matches that of the original misinformation. We discuss factors affecting the perceived credibility of corrections and offer strategies to optimise misinformation correction.2025SGSuwani Gunasekara et al.University of Melbourne, School of Computing and Information SystemsContent Moderation & Platform GovernanceMisinformation & Fact-CheckingCHI
How Do HCI Researchers Study Cognitive Biases? A Scoping ReviewComputing systems are increasingly designed to adapt to users' cognitive states and mental models. Yet, cognitive biases affect how humans form such models and, therefore, they can impact their interactions with computers. To better understand this interplay, we conducted a scoping review to chart how Human-Computer Interaction (HCI) researchers study cognitive biases. Our findings show that computing systems not only have the potential to induce and amplify cognitive biases but also can be designed to steer users' behaviour and decision-making by capitalising on biases. We describe how HCI researchers develop algorithms and sensing methods to detect and quantify the effects of cognitive biases and discuss how we can use their understanding to inform system design. In this paper, we outline a research agenda for more theory-grounded research and highlight ethical issues when researching and designing computing systems with cognitive biases in mind as they affect real-world behaviour.2025NBNattapat Boonprakong et al.University of Melbourne, School of Computing and Information SystemsExplainable AI (XAI)Chronic Disease Self-Management (Diabetes, Hypertension, etc.)Privacy Perception & Decision-MakingCHI
Raising Awareness of Location Information Vulnerabilities in Social Media Photos using LLMsLocation privacy leaks can lead to unauthorised tracking, identity theft, and targeted attacks, compromising personal security and privacy. This study explores LLM-powered location privacy leaks associated with photo sharing on social media, focusing on user awareness, attitudes, and opinions. We developed and introduced an LLM-powered location privacy intervention app to 19 participants, who used it over a two-week period. The app prompted users to reflect on potential privacy leaks that a widely available LLM could easily detect, such as visual landmarks & cues that could reveal their location, and provided ways to conceal this information. Through in-depth interviews, we found that our intervention effectively increased users’ awareness of location privacy and the risks posed by LLMs. It also encouraged users to consider the importance of maintaining control over their privacy data and sparked discussions about the future of location privacy-preserving technologies. Based on these insights, we offer design implications to support the development of future user-centred, location privacy-preserving technologies for social media photos.2025YMYing Ma et al.The University of Melbourne, School of Computing and Information SystemsHuman-LLM CollaborationPrivacy by Design & User ControlPrivacy Perception & Decision-MakingCHI
Can you pass that tool?: Implications of Indirect Speech in Physical Human-Robot CollaborationIndirect speech acts (ISAs) are a natural pragmatic feature of human communication, allowing requests to be conveyed implicitly while maintaining subtlety and flexibility. Although advancements in speech recognition have enabled natural language interactions with robots through direct, explicit commands—providing clarity in communication—the rise of large language models presents the potential for robots to interpret ISAs. However, empirical evidence on the effects of ISAs on human-robot collaboration (HRC) remains limited. To address this, we conducted a Wizard-of-Oz study (N=36), engaging a participant and a robot in collaborative physical tasks. Our findings indicate that robots capable of understanding ISAs significantly improve human's perceived robot anthropomorphism, team performance, and trust. However, the effectiveness of ISAs is task- and context-dependent, thus requiring careful use. These results highlight the importance of appropriately integrating direct and indirect requests in HRC to enhance collaborative experiences and task performance.2025YZZheng Zhang et al.University of Melbourne, School of Computing and Information SystemsAgent Personality & AnthropomorphismHuman-LLM CollaborationHuman-Robot Collaboration (HRC)CHI
Effect of Explanation Conceptualisations on Trust in AI-assisted Credibility AssessmentAs misinformation increasingly proliferates on social media platforms, it has become crucial to explore how to best convey automated news credibility assessments to end-users, and foster trust in fact-checking AIs. In this paper, we investigate how model-agnostic, natural language explanations influence trust and reliance on a fact-checking AI. We construct explanations from four Conceptualisation Validations (CVs) – namely consensual, expert, internal (logical), and empirical – which are foundational units of evidence that humans utilise to validate and accept new information. Our results show that providing explanations significantly enhances trust in AI, even in a fact-checking context where influencing pre-existing beliefs is often challenging, with different CVs causing varying degrees of reliance. We find consensual explanations to be the least influential, with expert, internal, and empirical explanations exerting twice as much influence. However, we also find that users could not discern whether the AI directed them towards the truth, highlighting the dual nature of explanations to both guide and potentially mislead. Further, we uncover the presence of automation bias and aversion during collaborative fact-checking, indicating how users' previously established trust in AI can moderate their reliance on AI judgements. We also observe the manifestation of a 'boomerang'/backfire effect often seen in traditional corrections to misinformation, with individuals who perceive AI as biased or untrustworthy doubling down and reinforcing their existing (in)correct beliefs when challenged by the AI. We conclude by presenting nuanced insights into the dynamics of user behaviour during AI-based fact-checking, offering important lessons for social media platforms.2024SPSaumya Pareek et al.Session 3e: Trust and Understanding in Explainable AICSCW
InfoPrint: Embedding Interactive Information in 3D Prints Using Low-Cost Readily-Available Printers and MaterialsJiang 等人提出 InfoPrint 方法,利用低成本普通3D打印机和常规材料在打印物体中嵌入交互式信息,实现物理对象的数字化增强与可编程功能。2024WJWeiwei Jiang et al.Desktop 3D Printing & Personal FabricationCustomizable & Personalized ObjectsUbiComp
Reflected Reality: Augmented Reality through the MirrorZhou等人提出了Reflected Reality系统,利用镜子作为AR交互界面,在反射环境中实现虚拟内容的手势交互与观察。2024QZQiushi Zhou et al.AR Navigation & Context AwarenessUbiComp
Understanding Users' Perspectives on Location Privacy Management on SmartphonesAs the number of applications installed on smartphones continues to grow, the task of effectively managing location privacy has become increasingly complex. In this paper, we explore the factors that influence users' privacy-preserving intentions and contrast them with their actual behaviours. In addition, we compare location privacy concerns across different apps investigating the impact of app-specific features on the willingness to disclose location information. Our findings highlight significant challenges in privacy management due to privacy fatigue and perceived usability. Furthermore, participants raised the importance of more uniform standards regarding location privacy settings across various applications, calling for more detailed and interactive well-informed consent processes that highlight the risks instead of the benefits of disclosing location information. This research contributes important insights towards the development of more effective privacy settings that can foster increased user engagement in managing location privacy on smartphones.2024YMYing Ma et al.Privacy by Design & User ControlPrivacy Perception & Decision-MakingMobileHCI
Augmented Reality at Zoo Exhibits: A Design Framework for Enhancing the Zoo ExperienceAugmented Reality (AR) offers unique opportunities for contributing to zoos' objectives of public engagement and education about animal and conservation issues. However, the diversity of animal exhibits pose challenges in designing AR applications that are not encountered in more controlled environments, such as museums. To support the design of AR applications that meaningfully engage the public with zoo objectives, we first conducted two scoping reviews to interrogate previous work on AR and broader technology use at zoos. We then conducted a workshop with zoo representatives to understand the challenges and opportunities in using AR to achieve zoo objectives. Additionally, we conducted a field trip to a public zoo to identify exhibit characteristics that impacts AR application design. We synthesise the findings from these studies into a framework that enables the design of diverse AR experiences. We illustrate the utility of the framework by presenting two concepts for feasible AR applications.2024BSBrandon Victor Syiem et al.Queensland University of TechnologyAR Navigation & Context AwarenessMuseum & Cultural Heritage DigitizationCHI
AI-Driven Mediation Strategies for Audience Depolarisation in Online DebatesOnline polarisation can tear the fabric of civility through reinforcing social media's perceptions of division and discord. Social media platforms often rely on content-moderation to combat polarisation, contingent on the reactive removal or flagging of content. However, this approach often remains agnostic of the underlying debate's ideas and stifles open discourse. In this study, we use prompt-tuned language models to mediate social media debates, applying the strategies of the Thomas-Kilmann Conflict Mode Instrument (TKI). We evaluate multiple mediation strategies in providing targeted responses to the debates, as shown to a debate audience. Our findings show that high-cooperativeness TKI strategies offered more persuasive arguments, while an accommodating argument strategy was the most successful at depolarising the audience's opinion. Furthermore, high-cooperativeness strategies also increased the perception that the debaters will reach a consensus. Our work paves the way for scalable and personalised tools that mediate social media debates to encourage depolarisation.2024JGJarod Govers et al.University of MelbourneHuman-LLM CollaborationAI Ethics, Fairness & AccountabilityAlgorithmic Fairness & BiasCHI
Robot-Assisted Decision-Making: Unveiling the Role of Uncertainty Visualisation and EmbodimentRobots are embodied agents that act under several sources of uncertainty. When assisting humans in a collaborative task, robots need to communicate their uncertainty to help inform decisions. In this study, we examine the use of visualising a robot’s uncertainty in a high-stakes assisted decision-making task. In particular, we explore how different modalities of uncertainty visualisations (graphical display vs. the robot’s embodied behaviour) and confidence levels (low, high, 100%) conveyed by a robot affect the human decision-making and perception during a collaborative task. Our results show that these visualisations significantly impact how participants arrive to their decision as well as how they perceive the robot’s transparency across the different confidence levels. We highlight potential trade-offs and offer implications for robot-assisted decision-making. Our work contributes empirical insights on how humans make use of uncertainty visualisations conveyed by a robot in a critical robot-assisted decision-making scenario.2024SSSarah Schömbs et al.The University of MelbourneAI-Assisted Decision-Making & AutomationUncertainty VisualizationHuman-Robot Collaboration (HRC)CHI
Here and Now: Creating Improvisational Dance Movements with a Mixed Reality MirrorThis paper explores using mixed reality (MR) mirrors for supporting improvisational dance making. Motivated by the prevalence of mirrors in dance studios and inspired by Forsythe’s Improvisation Technologies, we conducted workshops with 13 dancers and choreographers to inform the design of future MR visualisation and annotation tools for dance. The workshops involved using a prototype MR mirror as a technology probe that reveals the spatial and temporal relationships between the reflected dancing body and its surroundings during improvisation; speed dating group interviews around future design ideas; follow-up surveys and extended interviews with a digital media dance artist and a dance educator. Our findings highlight how the MR mirror enriches dancers' temporal and spatial perception, creates multi-layered presence, and affords appropriation by dancers. We also discuss the unique place of MR mirrors in the theoretical context of dance and in the history of movement visualisation, and distil lessons for broader HCI research.2023QZQiushi Zhou et al.University of MelbourneMixed Reality WorkspacesDigital Art Installations & Interactive PerformanceDance & Body Movement ComputingCHI
Modeling Temporal Target Selection: A Perspective from Its Spatial CorrespondenceTemporal target selection requires users to wait and trigger the selection input within a bounded time window, with a selection cursor that is expected to be delayed. This task conceptualizes, for example, a variety of game scenarios such as determining the timing of shooting a projectile towards a moving object. In this work, we explore models that predict "when'' users typically perform a selection (i.e., user selection distribution) and their selection error rates in such tasks. We hypothesize that users react to temporal factors including "distance'', "width'', and "delay'' as how they treat the corresponding variables in spatial target selection. The derived models are evaluated in a controlled experiment and an MTurk-based online study. Our research contributes new knowledge on user behavior in temporal target selection tasks and its potential connection with its spatial correspondence. Our models and conclusions can benefit both users and designers of relevant interactive applications.2023DYDifeng Yu et al.University of MelbourneHuman Pose & Activity RecognitionVisualization Perception & CognitionGamification DesignCHI
Understanding How to Administer Voice Surveys through Smart SpeakersSmart speakers have become exceedingly popular and entered many people's homes due to their ability to engage users with natural conversations. Researchers have also looked into using smart speakers as an interface to collect self-reported health data through conversations. Responding to surveys prompted by smart speakers requires users to listen to questions and answer in voice without any visual stimuli. Compared to traditional web-based surveys, where users can see questions and answers visually, voice surveys may be more cognitively challenging. Therefore, to collect reliable survey data, it is important to understand what types of questions are suitable to be administered by smart speakers. We selected five common survey questionnaires and deployed them as voice surveys and web surveys in a within-subject study. Our 24 participants answered questions using voice and web questionnaires in one session. They then repeated the same study session after 1 week to provide a "retest" response. Our results suggest that voice surveys have comparable reliability to web surveys. We find that, when using 5-point or 7-point scales, voice surveys take about twice as long as web surveys. Based on objective measurements, such as response agreement and test-retest reliability, and subjective evaluations of user experience, we recommend that researchers consider adopting the binary scale and 5-point numerical scales for voice surveys on smart speakers.2022JWJing Wei et al.Human-AI collaboration; Human-AI collaborationCSCW
iText: Hands-free Text Entry on an Imaginary Keyboard for Augmented Reality SystemsText entry is an important and frequent task in interactive devices including augmented reality head-mounted displays (AR HMDs). In current AR HMDs, there are still two main open challenges to overcome for efficient and usable text entry: arm fatigue due to mid-air input and visual occlusion because of their small see-through displays. To address these challenges, we present iText, a technique for AR HMDs that is hands-free and is based on an imaginary (invisible) keyboard. We first show that it is feasible and practical to use an imaginary keyboard on AR HMDs. Then, we evaluated its performance and usability with three hands-free selection mechanisms: eye blinks (E-Type), dwell (D-Type), and swipe gestures (G-Type). Our results show that users could achieve an average text entry speed of 11.95, 9.03 and 9.84 words per minutes (WPM) with E-Type, D-Type, and G-Type, respectively. Given that iText with E-Type outperformed the other two selection mechanisms in text entry rate and subjective feedback, we ran a third, 5-day study. Our results show that iText with E-Type can achieve an average text entry rate of 13.76 WPM with a mean word error rate of 1.5\%. In short, iText can enable efficient eyes-free text entry and can be useful for various application scenarios in AR HMDs.2021XLXueshi Lu et al.Voice User Interface (VUI) DesignAR Navigation & Context AwarenessUIST