Users’ Strategies for Ensuring Trust, Privacy, and Safety on Facebook Marketplace: Challenges and Recommendations
We engaged in semi-structured interviews with Facebook Marketplace users to gain insights into their strategies for ensuring Trust, Privacy, and Safety (TPS). Our investigation uncovered a range of approaches participants employed. We discovered that users actively sought to convey their trustworthiness to other users while also assessing the trustworthiness of others. Furthermore, they took steps to safeguard their privacy by selectively sharing information and making thoughtful decisions regarding payments. Moreover, participants implemented various strategies to mitigate the risks of physical harm and financial losses, sometimes resulting in conflicting preferences between buyers and sellers. Drawing from these findings, we developed recommendations to aid users in evaluating others' trustworthiness, effectively communicating their own trustworthiness, achieving a more optimal balance between privacy and trust, and increasing awareness of potential risks associated with different payment methods.
2025 · Azadeh Mokhberi et al. · CSCW
Topics: Trust, Safety, and Privacy in Online Communities

Seeing the Sound: Supporting Musical Collaboration with Augmented Reality
In musical collaboration, digital musical instruments often hinder effective communication and engagement by restricting visibility and limiting gestural and non-verbal interactions. These challenges reduce musicians’ situational awareness and complicate cohesive performance. To address this, we developed a head-mounted augmented reality (AR) system to enhance collaborative musical experiences by visualising musicians’ hand movements, eye gaze positions, and instrument interactions in real-time. We conducted a user study involving four pairs of musicians performing live music using different AR interface configurations. The results suggest that the AR system can enhance situational awareness and assist collaboration, as reflected in questionnaire responses. Interviews indicated that real-time visualisations of bodily movements and interactions helped participants better understand the collaborative process and anticipate their collaborators’ actions. These findings point to the potential of AR-assisted visualisation to support creative collaboration by tailoring visual information to different needs. Future research could explore its application in broader contexts of real-time creative cooperation.
2025 · Yichen Wang et al. · C&C
Topics: Social & Collaborative VR; AR Navigation & Context Awareness; Immersion & Presence Research

From Diagrams to Experience: Data Visceralisation of Ecosystem State-and-Transition Models in Virtual Reality
Communicating complex scientific concepts to non-experts is a persistent challenge. State-and-transition models (STMs), often shown as box-and-arrow diagrams, exemplify this difficulty well. This paper explores how virtual reality (VR) can make STMs more accessible. Using ecosystem STMs as a case study, we present a proof-of-concept system enabling users to viscerally experience model content. We followed a three-phased participatory design process: first, two ecology experts guided the development of a VR prototype. Next, 17 government environmental management professionals evaluated its utility and features. Finally, after refining the system, 12 VR researchers informed design considerations and improvements. Our findings provide practical insights for visualising STMs in VR and contribute to the emerging concept of "data visceralisation". We found this approach engages users and supports understanding of qualitative aspects of real-world phenomena. However, complex models like ecosystem STMs require creating accurate and extensive simulations. We conclude with a discussion of future directions.
2025 · Adélaïde Genay et al. · DIS
Topics: Medical & Scientific Data Visualization; Context-Aware Computing

Text-to-Image Generation for Vocabulary Learning Using the Keyword Method
The 'keyword method' is an effective technique for learning the vocabulary of a foreign language. It involves creating a memorable visual link between what a word means and what its pronunciation in a foreign language sounds like in the learner's native language. However, these memorable visual links remain implicit in people's minds and are not easy to remember for a large number of words. To enhance the memorisation and recall of the vocabulary, we developed an application that combines the keyword method with text-to-image generators to externalise the memorable visual links into visuals. These visuals represent additional stimuli during the memorisation process. To explore the effectiveness of this approach, we first ran a pilot study to investigate how difficult it is to externalise the descriptions of mental visualisations of memorable links, by asking participants to write them down. We used these descriptions as prompts for a text-to-image generator (DALL-E2) to convert them into images and asked participants to select their favourites. Next, we compared different text-to-image generators (DALL-E2, Midjourney, Stable Diffusion, and Latent Diffusion) to evaluate the perceived quality of the images generated by each. Despite heterogeneous results, participants mostly preferred images generated by DALL-E2, which was also used for the final study. In this study, we investigated whether providing such images enhances the retention of vocabulary being learned, compared to the keyword method alone. Our results indicate that people did not encounter difficulties describing their visualisations of memorable links and that providing corresponding images significantly increases memory retention.
2025 · Nuwan T Attygalle et al. · IUI
Topics: Generative AI (Text, Image, Music, Video); Intelligent Tutoring Systems & Learning Analytics

Systemization of Knowledge (SoK): Goals, Coverage, and Evaluation in Cybersecurity and Privacy Games
This paper systematized existing knowledge on cybersecurity and privacy game-based approaches, exploring their goals, scope, and evaluation methods. Our review of 93 academic papers revealed that these approaches serve multiple purposes and target diverse player types. We identified 11 key aspects of cybersecurity and privacy that these approaches addressed, such as threats, defensive strategies, and data privacy. Additionally, we analyzed the effectiveness evaluation methods of these approaches, emphasizing the connections between evaluation techniques, types of data used, and their alignment with the approaches' goals. We also summarized the aspects of user experience evaluated in the literature and the types of questions used to capture these experiences. Reflecting on these methods, we provide guidance for future research and practice in designing and evaluating game-based approaches. Finally, we identify key gaps and propose opportunities to enhance user understanding, foster adaptability, and address emerging cybersecurity and privacy challenges.
2025 · Yue Huang et al. (CSIRO's Data61) · CHI
Topics: Accessible Gaming; Cybersecurity Training & Awareness; Dark Patterns Recognition

Vision-Based Multimodal Interfaces: A Survey and Taxonomy for Enhanced Context-Aware System Design
The recent surge in artificial intelligence, particularly in multimodal processing technology, has advanced human-computer interaction by altering how intelligent systems perceive, understand, and respond to contextual information (i.e., context awareness). Despite such advancements, there is a significant gap in comprehensive reviews examining these advances, especially from a multimodal data perspective, which is crucial for refining system design. This paper addresses a key aspect of this gap by conducting a systematic survey of data modality-driven Vision-based Multimodal Interfaces (VMIs). VMIs are essential for integrating multimodal data, enabling more precise interpretation of user intentions and complex interactions across physical and digital environments. Unlike previous task- or scenario-driven surveys, this study highlights the critical role of the visual modality in processing contextual information and facilitating multimodal interaction. Adopting a design framework that moves from the whole to the details and back, it classifies VMIs across dimensions, providing insights for developing effective, context-aware systems.
2025 · Yongquan 'Owen' Hu et al. (University of New South Wales) · CHI
Topics: Context-Aware Computing; Ubiquitous Computing

Trust, Privacy, and Safety Factors Associated with Decision Making in P2P Markets Based on Social Networks: A Case Study of Facebook Marketplace in USA and Canada
As peer-to-peer (P2P) marketplaces have grown rapidly, concerns related to trust, privacy, and safety (TPS) have also increased. While previous studies have explored these aspects in various P2P marketplaces, there has been limited research on Facebook Marketplace (FM), which is distinguished by dramatic growth and intricate entanglement with the Facebook social networking site (SNS). To address this knowledge gap, we conducted interviews with 42 FM users in the US and Canada, investigating TPS factors associated with trading decisions. We identified four categories of factors: pre-existing concerns, signals, interactions, and perceived benefits. We uncover the challenges arising from the interplay of these factors, offer design recommendations for SNS-based marketplaces like FM, and suggest directions for future research. Our study advances the understanding of decision-making processes in SNS-based marketplaces, informs future design improvements for such platforms, and ultimately contributes to a better user experience related to trust, privacy, and safety.
2024 · Azadeh Mokhberi et al. (The University of British Columbia) · CHI
Topics: Privacy by Design & User Control; Privacy Perception & Decision-Making; Content Moderation & Platform Governance

Exploring Opportunities for Augmenting Homes to Support Exercising
Although exercising at home has benefits, it is not always engaging or motivating. Augmented Reality (AR) head-mounted displays (HMDs) offer the potential to make in-home exercising and exergaming more inclusive and immersive, but there is limited research investigating how such systems can be designed. We employed a participatory design approach involving semi-structured interviews to investigate how homes can be augmented to facilitate exercising experiences. We developed 10 recommendations for developing home-based exercising experiences using AR HMDs. Our results further contribute to the existing body of research on the use of AR for exercising, home applications, and everyday objects by presenting the first foundational study of the wide range of exercises that AR HMDs can support in home environments and the different ways home elements may support these exercises, laying the groundwork for future work on home-based exergaming through AR HMDs to increase people's physical activity levels.
2024 · Michelle Adiwangsa et al. (Australian National University) · CHI
Topics: AR Navigation & Context Awareness; Fitness Tracking & Physical Activity Monitoring

MicroCam: Leveraging Smartphone Microscope Camera for Context-Aware Contact Surface Sensing
The primary focus of this research is the discreet and subtle everyday contact interactions between mobile phones and their surrounding surfaces. Such interactions are anticipated to facilitate mobile context awareness, encompassing aspects such as dispensing medication updates, intelligently switching modes (e.g., silent mode), or initiating commands (e.g., deactivating an alarm). We introduce MicroCam, a contact-based sensing system that employs smartphone IMU data to detect the routine state of phone placement and utilizes a built-in microscope camera to capture intricate surface details. In particular, a natural dataset is collected to acquire authentic surface textures in situ for training and testing. Moreover, we optimize the deep neural network component of the algorithm, based on continual learning, to accurately discriminate between object categories (e.g., tables) and material constituents (e.g., wood). Experimental results highlight the superior accuracy, robustness and generalization of the proposed method. Lastly, we conducted a comprehensive discussion centered on our prototype, encompassing topics such as system performance and potential applications and scenarios.
2023 · Yongquan Hu et al. · UbiComp · https://doi.org/10.1145/3610921
Topics: Context-Aware Computing; Ubiquitous Computing

Voicify Your UI: Towards Android App Control with Voice Commands
Nowadays, voice assistants help users complete tasks on the smartphone with voice commands, replacing traditional touchscreen interactions when such interactions are inhibited. However, the usability of those tools remains moderate due to the problems in understanding rich language variations in human commands, along with efficiency and comprehensibility issues. Therefore, we introduce Voicify, an Android virtual assistant that allows users to interact with on-screen elements in mobile apps through voice commands. Using a novel deep learning command parser, Voicify interprets human verbal input and performs matching with UI elements. In addition, the tool can directly open a specific feature from installed applications by fetching application code information to explore the set of in-app components. Our command parser achieved 90% accuracy on the human command dataset. Furthermore, the direct feature invocation module achieves better feature coverage in comparison to Google Assistant. The user study demonstrates the usefulness of Voicify in real-world scenarios.
2023 · Minh Duc Vu et al. · UbiComp · https://dl.acm.org/doi/10.1145/3581998
Topics: Voice User Interface (VUI) Design; Intelligent Voice Assistants (Alexa, Siri, etc.)

Video2Action: Reducing Human Interactions in Action Annotation of App Tutorial Videos
Tutorial videos of mobile apps have become a popular and compelling way for users to learn unfamiliar app features. To make the video accessible to the users, video creators always need to annotate the actions in the video, including what actions are performed and where to tap. However, this process can be time-consuming and labor-intensive. In this paper, we introduce a lightweight approach, Video2Action, to automatically generate the action scenes and predict the action locations from the video by using image-processing and deep-learning methods. The automated experiments demonstrate the good performance of Video2Action in acquiring actions from the videos, and a user study shows the usefulness of our generated action cues in assisting video creators with action annotation.
2023 · Sidong Feng et al. · UIST
Topics: Generative AI (Text, Image, Music, Video); Crowdsourcing Task Design & Quality Control

RadarFoot: Fine-grain Ground Surface Context Awareness for Smart Shoes
Every day, billions of people use footwear for walking, running, or exercise. Of emerging interest is "smart footwear", which helps users track gait, count steps or even analyse performance. However, such nascent footwear lacks fine-grain ground surface context awareness, which could allow it to adapt to the conditions and create usable functions and experiences. Hence, this research aims to recognize the walking surface using a radar sensor embedded in a shoe, enabling ground context-awareness. Using data collected from 23 participants in an in-the-wild setting, we developed several classification models. We show that our model can detect five common terrain types with an accuracy of 80.0% and a further ten terrain types with an accuracy of 66.3%, while moving. Importantly, it can detect gait motion types such as 'walking', 'stepping up', 'stepping down', and 'still' with an accuracy of 90%. Finally, we present potential use cases and insights for future work based on such ground-aware smart shoes.
2023 · Don Samitha Elvitigala et al. · UIST
Topics: Biosensors & Physiological Monitoring; Context-Aware Computing

Drawing Connections: Designing Situated Links for Immersive Maps
We explore the design of situated visual links in outdoor augmented reality (AR) for connecting miniature buildings on a virtual map to their real-world counterparts. We first distill design criteria from prior work, then conduct two user studies to better understand users’ preferences among different design choices for the links: one evaluating a set of link geometries in a virtual environment, and one evaluating a refined AR prototype in two different outdoor environments. The studies reveal that links help in identifying buildings in the environments. Participants prefer straight rather than curved links, simple and thin links to avoid information occlusion, and links and maps aligned with their direction of view. We recommend using a consistent color with a strong contrast to the background color for all links in a scene. To improve visibility, the diameter of links should grow with distance to the viewer, and optional animated stripes can be placed on links. The findings of this study have the potential to bolster the development of various situated visualization applications, such as those used in urban planning, tourism, smart agriculture, and other fields.
2023 · Zeinab Ghaemi et al. · MobileHCI
Topics: AR Navigation & Context Awareness; Geospatial & Map Visualization; Context-Aware Computing

Calming Down in Lockdown: Rethinking Technologies for a Slower Pace of Life
This study investigated Australian older adults' response to the conditions of the COVID-19 pandemic and the adjustments they made to their activities, technology use, and social relations, to inform how technology design could be inspired by these adaptations. Online interviews revealed that some participants sorely missed social interactions; however, most enjoyed having greater agency to curate their own activities and slowing down as a result of lockdown. These findings prompted us to rethink the design space of temporal design from the perspective of those craving an ongoing impact of slowness in their lives. We suggest that designing for a slower pace of life can be inspired by people's response to life circumstances in lockdown, complementing the original concept of slow technology, which seeks to intervene in a fast-paced life to encourage people to slow down and reflect. We conclude by proposing three new design pathways based on this new standpoint.
2023 · Yasamin Asadi et al. · DIS
Topics: Sustainable HCI; Ecological Design & Green Computing; Human-Nature Relationships (More-than-Human Design)

"Piece it together": Insights from one year of engagement with electronics and programming for people with intellectual disabilitiesWe present the results of one year spent engaging people living with intellectual disabilities with an electronics and programming package. The program was run in collaboration with a disability support organization and delivered by support workers. We evaluate key qualities of the package at three sites via ongoing communication and reflective interviews with five support workers, along with observation of sessions and contextual inquiry with eleven people with a range of disabilities. Our findings demonstrate the importance of physicality in enabling experiences by creating real-world analogues and supporting diverse group interactions; how groups support members' attention, motivating each other, and allow space for coping mechanisms; and participants' growing confidence and creativity in problem solving, and the emergence of self-directed activities. We discuss the importance of diverse repetition for skill development, how skills develop over the year, and pragmatic lessons for conducting a long-term research program with a disability support organization.2023KEKirsten Ellis et al.Monash UniversitySpecial Education TechnologyCHI
Grand Challenges in Immersive Analytics
Immersive Analytics is a quickly evolving field that unites several areas such as visualisation, immersive environments, and human-computer interaction to support human data analysis with emerging technologies. This research has thrived over the past years with multiple workshops, seminars, and a growing body of publications, spanning several conferences. Given the rapid advancement of interaction technologies and novel application domains, this paper aims toward a broader research agenda to enable widespread adoption. We present 17 key research challenges developed over multiple sessions by a diverse group of 24 international experts, initiated from a virtual scientific workshop at ACM CHI 2020. These challenges aim to coordinate future work by providing a systematic roadmap of current directions and impending hurdles to facilitate productive and effective applications for Immersive Analytics.
2021 · Barrett Ens et al. (Monash University) · CHI
Topics: Immersion & Presence Research; Interactive Data Visualization

OmniGlobeVR: A Collaborative 360-Degree Communication System for VR
In this paper, we present a novel collaboration tool, OmniGlobeVR, an asymmetric system that supports communication and collaboration between a VR user (occupant) and multiple non-VR users (designers) across virtual and physical platforms. OmniGlobeVR allows designer(s) to explore the VR space from any point of view using two view modes: a 360° first-person mode and a third-person mode. In addition, a shared gaze awareness cue is provided to further enhance communication between the occupant and the designer(s). Finally, the system has a face window feature that allows designer(s) to share their facial expressions and upper body view with the occupant for exchanging and expressing information using nonverbal cues. We conducted a user study to evaluate OmniGlobeVR, comparing three conditions: (1) first-person mode with the face window, (2) first-person mode with a solid window, and (3) third-person mode with the face window. We found that the first-person mode with the face window required significantly less mental effort, and provided better spatial presence, usability, and understanding of the partner’s focus. We discuss the design implications of these results and directions for future research.
2020 · Zhengqing Li et al. · DIS
Topics: Social & Collaborative VR; Immersion & Presence Research

Do I Trust My Machine Teammate? An Investigation from Perception to Decision
In the human-machine collaboration context, understanding the reason behind each human decision is critical for interpreting the performance of the human-machine team. Via an experimental study of a system with varied levels of accuracy, we describe how human trust interplays with system performance, human perception and decisions. It is revealed that humans are able to perceive the performance of automatic systems and themselves, and adjust their trust levels according to the accuracy of systems. A system accuracy of 70% appears to be a threshold between increasing and decreasing human trust and system usage. We have also shown that trust can be derived from a series of users’ decisions rather than from a single one, and relates to the perceptions of users. A general framework depicting how trust and perception affect human decision making is proposed, which can be used as future guidelines for human-machine collaboration design.
2019 · Kun Yu et al. · IUI
Topics: Explainable AI (XAI); AI-Assisted Decision-Making & Automation; Human-Robot Collaboration (HRC)

Mixed Reality Remote Collaboration Combining 360 Video and 3D Reconstruction
Remote collaboration using Virtual Reality (VR) and Augmented Reality (AR) has recently become a popular way for people from different places to work together. Local workers can collaborate with remote helpers by sharing 360-degree live video or a 3D virtual reconstruction of their surroundings. However, each of these techniques has benefits and drawbacks. In this paper we explore mixing 360 video and 3D reconstruction for remote collaboration, preserving the benefits of both systems while reducing the drawbacks of each. We developed a hybrid prototype and conducted a user study to compare the benefits and problems of using 360 video or 3D reconstruction alone, to clarify the need for mixing the two, and to evaluate the prototype system. We found participants performed significantly better on collaborative search tasks in 360 and felt higher social presence, yet 3D also showed potential to complement it. Participant feedback collected after trying our hybrid system provided directions for improvement.
2019 · Theophilus Teo et al. (University of South Australia) · CHI
Topics: Social & Collaborative VR; Immersion & Presence Research

Detecting Personality Traits Using Eye-Tracking Data
Personality is an established domain of research in psychology, and individual differences in various traits are linked to a variety of real-life outcomes and behaviours. Personality detection is an intricate task that typically requires humans to fill out lengthy questionnaires assessing specific personality traits. The outcomes of this, however, may be unreliable or biased if the respondents do not fully understand or are not willing to honestly answer the questions. To this end, we propose a framework for objective personality detection that leverages humans' physiological responses to external stimuli. We exemplify and evaluate the framework in a case study, where we expose subjects to affective image and video stimuli, and capture their physiological responses using a commercial-grade eye-tracking sensor. These responses are then processed and fed into a classifier capable of accurately predicting a range of personality traits. Our work yields notably high predictive accuracy, suggesting the applicability of the proposed framework for robust personality detection.
2019 · Shlomo Berkovsky et al. (Data61 - CSIRO & Macquarie University) · CHI
Topics: Eye Tracking & Gaze Interaction; Human Pose & Activity Recognition