“My Happiness Makes You Smile”: Beginning to Understand Telepathic Superpower Design Via Brain-Muscle Interfaces
Designing superpowers in Human-Computer Interaction (HCI), often inspired by science fiction, has garnered increased attention. However, it is important to ask whether such superpower designs might have inherent negative side effects, especially considering that technological advances allow going beyond short demos to integrate these superpowers into everyday life. To understand the positive and negative side effects of superpower design, we created "EmoPals" and studied it in everyday life. EmoPals is a novel system inspired by telepathy, in which one user's emotions are detected through a brain-computer interface and replicated on the other user's face through electrical muscle stimulation, so that one user's happiness makes the other smile, and vice versa. A 5-day field study with 12 participants suggests that EmoPals can strengthen emotional connections and facilitate empathy; however, it also highlights the negative side effects of amplifying negative emotions and social discomfort. We propose five design recommendations for designing superpowers that account for negative side effects. Ultimately, we aim to deepen our understanding of superpower design for everyday life.
2025 | Siyi Liu et al. | Electrical Muscle Stimulation (EMS); Brain-Computer Interface (BCI) & Neurofeedback | DIS
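To make the described pipeline concrete, below is a minimal Python sketch of an EmoPals-style mapping from a BCI valence estimate to an EMS command for the partner's smile muscles. The names, threshold, intensity scaling, and the EmsCommand structure are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an EmoPals-style pipeline: a valence estimate from one
# user's BCI is thresholded and, if high enough, mapped to an EMS pulse pattern
# on the partner's zygomaticus (smile) muscles. All names, thresholds, and the
# EMS interface are assumptions, not the authors' code. (Python 3.10+)

from dataclasses import dataclass

@dataclass
class EmsCommand:
    muscle: str          # target muscle group, e.g. "zygomaticus_major"
    intensity: float     # normalized stimulation intensity, 0.0-1.0
    duration_ms: int     # pulse train duration

def valence_to_ems(valence: float, threshold: float = 0.6) -> EmsCommand | None:
    """Map a BCI valence estimate in [-1, 1] to a smile-inducing EMS command."""
    if valence < threshold:
        return None  # below threshold: do not actuate the partner's face
    # Scale intensity with how far the valence exceeds the threshold.
    intensity = min(1.0, (valence - threshold) / (1.0 - threshold))
    return EmsCommand(muscle="zygomaticus_major", intensity=intensity, duration_ms=800)

# Example: a strong positive valence reading triggers a gentle smile pulse.
print(valence_to_ems(0.85))   # EmsCommand(muscle='zygomaticus_major', intensity~0.62, ...)
print(valence_to_ems(0.30))   # None
```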
Exploring the Privacy and Security Challenges Faced by Migrant Domestic Workers in Chinese Smart Homes
The growing use of smart home devices poses considerable privacy and security challenges, especially for individuals like migrant domestic workers (MDWs) who may be surveilled by their employers. This paper explores the privacy and security challenges experienced by MDWs in multi-user smart homes through in-depth semi-structured interviews with 26 MDWs and 5 staff members of agencies that recruit and/or train domestic workers in China. Our findings reveal power imbalances in the relationships between MDWs and their employers and agencies, influenced by Chinese cultural and social factors (such as Confucianism and collectivism) as well as legal ones. Furthermore, the widespread and normalized use of surveillance technologies in China, particularly in public spaces, exacerbates these power imbalances, reinforcing a sense of constant monitoring and control. Drawing on our findings, we provide recommendations for domestic worker agencies and policymakers to address the privacy and security challenges faced by MDWs in Chinese smart homes.
2025 | Shijing He et al. | King's College London | Privacy by Design & User Control; Privacy Perception & Decision-Making; Smart Home Privacy & Security | CHI
The Brain Knows What You Prefer: Using EEG to Decode AR Input Preferences
Understanding user input preferences is crucial in immersive environments, where input methods such as gestures and controllers are common. Traditional evaluation methods rely on post-experience questionnaires, which do not capture real-time preferences. This study used brain signals to classify input preferences during Augmented Reality (AR) interactions. Thirty participants performed three interaction tasks (pointing, manipulation, and rotation) using hands or controllers. Their electroencephalogram (EEG) data were collected at varying task difficulties (low, medium, high) and phases (preparation, task, and completion). Machine learning was used to classify preferred and non-preferred input methods. Results showed that EEG signals effectively classify preferences with accuracies up to 86%, with the completion phase being the best indicator of preference. In addition, different input methods exhibited distinct EEG patterns. These findings highlight the potential of EEG signals for decoding real-time input preferences in AR, offering insights for enhancing user experiences.
2025 | Kaining Zhang et al. | University of South Australia, Empathic Computing Lab | Brain-Computer Interface (BCI) & Neurofeedback; AR Navigation & Context Awareness | CHI
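The classification setup described above can be illustrated with a small sketch: per-trial EEG features are classified as preferred vs. non-preferred input method, evaluated separately for each task phase. The band-power features, SVM classifier, and array shapes below are placeholders, not the study's actual pipeline.

```python
# Minimal sketch, not the authors' pipeline: classifying preferred vs.
# non-preferred input method from per-trial EEG features, evaluated separately
# for each task phase (preparation, task, completion).

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_features = 120, 32          # e.g. band power per channel/band (placeholder)
phases = ["preparation", "task", "completion"]

for phase in phases:
    X = rng.normal(size=(n_trials, n_features))   # placeholder EEG features
    y = rng.integers(0, 2, size=n_trials)         # 1 = preferred input method
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{phase}: cross-validated accuracy = {acc:.2f}")
```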
Strollytelling: Coupling Animation with Physical Locomotion to Explore Immersive Data Stories
With a growing interest in immersive data storytelling, there is an opportunity to explore story presentation and navigation techniques in virtual reality (VR) that can engage audiences as much as data story techniques have on conventional displays. We propose and explore "strollytelling", a novel data storytelling technique that maps story progression to the user/audience's physical locomotion. Inspired by the conventional web-based technique for scrolling-based stories (i.e., scrollytelling), our technique tightly couples the user's position in physical space to the animation frame of the data story. This technique leverages the natural tendency of humans to "walk and talk" while telling a story and requires users to engage with the content actively. This work defines strollytelling, its design considerations, and a preliminary process for designing a strollytelling experience. A user study comparing strollytelling with virtual locomotion found that strollytelling was preferred by most participants and had higher self-reported immersion. We conclude with opportunities for strollytelling within the immersive data storytelling landscape.
2025 | Radhika Pankaj Jain et al. | University of South Australia, IVE | Data Storytelling; Interactive Narrative & Immersive Storytelling | CHI
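The core coupling described above, physical position mapped to an animation frame, can be sketched in a few lines. The straight-line path, frame count, and function names below are hypothetical; the paper's implementation details are not reproduced here.

```python
# Minimal sketch of a strollytelling-style mapping: the user's physical
# position is projected onto a story path and converted to a frame index.

import numpy as np

def position_to_frame(user_xy, path_start, path_end, n_frames):
    """Project the user's 2D position onto the story path and return a frame index."""
    path = np.asarray(path_end, float) - np.asarray(path_start, float)
    rel = np.asarray(user_xy, float) - np.asarray(path_start, float)
    # Fraction of the path walked so far, clamped to [0, 1].
    t = np.clip(np.dot(rel, path) / np.dot(path, path), 0.0, 1.0)
    return int(round(t * (n_frames - 1)))

# Example: halfway along a 10 m path through a 300-frame animated data story.
print(position_to_frame(user_xy=(5.0, 0.2), path_start=(0, 0), path_end=(10, 0), n_frames=300))  # ~150
```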
PLAID: Supporting Computing Instructors to Identify Domain-Specific Programming Plans at Scale
Pedagogical approaches focusing on stereotypical code solutions, known as programming plans, can increase problem-solving ability and motivate diverse learners. However, plan-focused pedagogies are rarely used beyond introductory programming. Our formative study (N=10 educators) showed that identifying plans is a tedious process. To advance plan-focused pedagogies in application-focused domains, we created an LLM-powered pipeline that automates the effortful parts of educators' plan identification process by providing use-case-driven program examples and candidate plans. In design workshops (N=7 educators), we identified design goals to maximize instructors' efficiency in plan identification by optimizing interaction with this LLM-generated content. Our resulting tool, PLAID, enables instructors to access a corpus of relevant programs to inspire plan identification, compare code snippets to assist plan refinement, and structure code snippets into plans. We evaluated PLAID in a within-subjects user study (N=12 educators) and found that it led to lower cognitive demand and increased productivity compared to the state of the art. Educators found PLAID beneficial for generating instructional material. Thus, our findings suggest that human-in-the-loop approaches hold promise for supporting plan-focused pedagogies at scale.
2025 | Yoshee Jain et al. | University of Illinois Urbana-Champaign | Human-LLM Collaboration; Programming Education & Computational Thinking | CHI
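As a rough illustration of the kind of LLM step such a pipeline automates, the sketch below asks a model for candidate programming plans given example programs. The `llm` callable, prompt wording, and output handling are assumptions rather than PLAID's actual design; a real tool would also parse and validate the response.

```python
# Illustrative sketch (not PLAID's pipeline) of prompting an LLM for candidate
# programming plans from use-case-driven example programs.

from typing import Callable

PROMPT = """You are helping a computing instructor identify programming plans
(stereotypical, reusable code solutions) in the domain of {domain}.
Given the example programs below, list candidate plans. For each plan give a
name, its goal, and a short canonical code snippet.

Example programs:
{programs}
"""

def candidate_plans(domain: str, programs: list[str], llm: Callable[[str], str]) -> str:
    """Return the raw LLM response listing candidate plans for instructor review."""
    prompt = PROMPT.format(domain=domain, programs="\n\n".join(programs))
    return llm(prompt)  # the instructor would then refine these candidates in the tool

# Usage with any chat-completion function wrapped as `llm(prompt) -> str`:
# print(candidate_plans("data visualization", [open(p).read() for p in example_paths], llm))
```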
InfoPrint: Embedding Interactive Information in 3D Prints Using Low-Cost Readily-Available Printers and Materials
Jiang et al. present InfoPrint, a method for embedding interactive information inside 3D-printed objects using low-cost, readily available printers and conventional materials, enabling digital augmentation and programmable functionality of physical objects.
2024 | Weiwei Jiang et al. | Desktop 3D Printing & Personal Fabrication; Customizable & Personalized Objects | UbiComp
RadarHand: A Wrist-Worn Radar for On-Skin Touch-Based Proprioceptive Gestures
We introduce RadarHand, a wrist-worn wearable with millimetre-wave radar that detects on-skin touch-based proprioceptive hand gestures. Radars are robust, private, and small; they can penetrate materials and have low computational cost. We first evaluated the proprioceptive and tactile perception nature of the back of the hand and found that tapping on the thumb produces the lowest proprioceptive error of all the finger joints, followed by the index, middle, ring, and pinky fingers, under eyes-free and high-cognitive-load conditions. Next, we trained deep-learning models for gesture classification. We introduce two types of gestures based on locations on the back of the hand: generic gestures and discrete gestures. Discrete gestures start and end at specific locations on the back of the hand, in contrast to generic gestures, which can start and end anywhere on the back of the hand. Out of 27 gesture group possibilities, we achieved 92% accuracy for a set of seven gestures and 93% accuracy for the set of eight discrete gestures. Finally, we evaluated RadarHand's performance in real time under two interaction modes: active interaction, where the user initiates input to achieve the desired output, and reactive interaction, where the device initiates interaction and requires the user to react. We obtained accuracies of 87% and 74% for active generic and discrete gestures, respectively, as well as 91% and 81.7% for reactive generic and discrete gestures, respectively. We discuss the implications of RadarHand for gesture recognition and directions for future work.
2024 | Ryo Hajika et al. | Vibrotactile Feedback & Skin Stimulation; Foot & Wrist Interaction | UIST
Modulating Heart Activity and Task Performance using Haptic Heartbeat Feedback: A Study Across Four Body Placements
This paper explores the impact of vibrotactile haptic feedback on heart activity when the feedback is provided at four different body locations (chest, wrist, neck, and ankle) and at two feedback rates (50 bpm and 110 bpm). A user study found that the neck placement resulted in higher heart rates and lower heart rate variability, and that the higher feedback rate correlated with increased heart rate and decreased heart rate variability. The chest was preferred in self-reported metrics, while the neck placement was perceived as less satisfying, harmonious, and immersive. This research contributes to understanding the interplay between psychological experiences and physiological responses when using haptic biofeedback resembling real body signals.
2024 | Andreia Valente et al. | Vibrotactile Feedback & Skin Stimulation | UIST
The RayHand Navigation: A Virtual Navigation Method with Relative Position between Hand and Gaze-Ray
In this paper, we introduce a novel Virtual Reality (VR) navigation method using a gaze ray and the hand, named RayHand navigation. It lets the user quickly indicate the initial direction with gaze and then control navigation speed and direction through dexterous hand movement, based on the relative position between the gaze ray and the user's hand. We conducted a user study comparing our approach to head-hand and torso-leaning-based navigation methods, and also evaluated their learning effects. The results showed that the RayHand and head-hand navigation methods were less physically demanding than torso-leaning navigation, and that RayHand supported a rich navigation experience with high hedonic quality while avoiding the problem of users unintentionally stepping out of the designated interaction area. In addition, our approach showed a significant improvement over time, indicating a learning effect.
2024 | Sei Kang et al. | Chonnam National University | Full-Body Interaction & Embodied Input; Eye Tracking & Gaze Interaction; Immersion & Presence Research | CHI
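The relative-position mapping described above can be sketched geometrically: the hand's offset from the gaze ray is decomposed into a component along the ray (mapped to speed) and a signed lateral component (mapped to steering). The gains, deadzone, and sign conventions below are illustrative assumptions only, not the paper's parameters.

```python
# Hypothetical geometric sketch of a RayHand-style velocity mapping.

import numpy as np

def rayhand_velocity(gaze_origin, gaze_dir, hand_pos, up=(0, 1, 0),
                     speed_gain=1.5, turn_gain=0.8, deadzone=0.05):
    """Return (forward_speed, signed_steer) from hand position relative to the gaze ray."""
    d = np.asarray(gaze_dir, float)
    d = d / np.linalg.norm(d)
    rel = np.asarray(hand_pos, float) - np.asarray(gaze_origin, float)
    along = float(np.dot(rel, d))                 # hand distance along the ray -> speed
    right = np.cross(d, np.asarray(up, float))    # lateral axis (sign convention is an assumption)
    lateral = float(np.dot(rel, right))           # signed lateral offset -> turn direction/rate
    speed = speed_gain * max(0.0, along - deadzone)
    steer = turn_gain * lateral if abs(lateral) > deadzone else 0.0
    return speed, steer

# Example: hand held 0.4 m ahead of the eyes and 0.1 m off the gaze ray.
print(rayhand_velocity((0, 1.6, 0), (0, 0, 1), (0.1, 1.6, 0.4)))
```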
That's Rough! Encoding Data into Roughness for Physicalization
While visual channels (e.g., color, shape, size) have been explored for visualizing data in data physicalizations, there is a lack of understanding regarding how to encode data into physical material properties (e.g., roughness, hardness). This understanding is critical for ensuring data is correctly communicated and for potentially extending the channels and bandwidth available for encoding that data. We present a method to encode ordinal data into roughness, validated through user studies. In the first study, we identified just noticeable differences in perceived roughness produced by this method. In the second study, we 3D-printed proofs of concept for five different multivariate physicalizations using the model. These physicalizations were qualitatively explored (N=10) to understand people's comprehension and impressions of the roughness channel. Our findings suggest roughness may be used for certain types of data encoding, and that the context of the data can impact how people interpret the direction of a roughness mapping.
2024 | Xiaojiao Du et al. | University of South Australia | Data Physicalization; Visualization Perception & Cognition | CHI
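A minimal sketch of the general idea follows, assuming roughness levels spaced at least one just noticeable difference (JND) apart so adjacent levels remain distinguishable by touch. The JND step, base value, and roughness units are placeholders, not the paper's measured parameters.

```python
# Hypothetical mapping from an ordinal data value to a roughness parameter,
# with levels spaced one assumed JND apart.

def ordinal_to_roughness(value, n_levels, base_roughness=0.1, jnd_step=0.15):
    """Map an ordinal value in [0, n_levels-1] to a surface roughness parameter."""
    if not 0 <= value < n_levels:
        raise ValueError("value outside the ordinal scale")
    return base_roughness + value * jnd_step   # each step is >= one assumed JND apart

# Example: a 5-level ordinal variable (e.g. 'very low' .. 'very high').
levels = [ordinal_to_roughness(v, 5) for v in range(5)]
print(levels)   # roughly [0.1, 0.25, 0.4, 0.55, 0.7]
```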
Volumetric Hybrid Workspaces: Interactions with Objects in Remote and Co-located Telepresence
Volumetric telepresence aims to create a shared space, allowing people in local and remote settings to collaborate seamlessly. Prior telepresence examples typically have asymmetrical designs, with volumetric capture in one location and objects in one format. In this paper, we present a volumetric telepresence mixed reality system that supports real-time, symmetrical, multi-user, partially distributed interactions, using objects in multiple formats, across multiple locations. We align two volumetric environments around a common spatial feature to create a shared workspace for remote and co-located people using objects in three formats: physical, virtual, and volumetric. We conducted a study with 18 participants over 6 sessions, evaluating how telepresence workspaces support spatial coordination and hybrid communication for co-located and remote users undertaking collaborative tasks. Our findings demonstrate the successful integration of remote spaces, effective use of proxemics and deixis to support negotiation, and strategies to manage interactivity in hybrid workspaces.
2024 | Andrew Irlitti et al. | University of Melbourne | Mixed Reality Workspaces; Teleoperation & Telepresence | CHI
Towards Applied Remapped Physical-Virtual Interfaces: Synchronization Methods for Resolving Control State Conflicts
User interfaces in virtual reality enable diverse interactions within the virtual world, though they typically lack the haptic cues provided by physical interface controls. Haptic retargeting enables flexible mapping between dynamic virtual interfaces and physical controls to provide real haptic feedback. This investigation aims to extend these remapped interfaces to support more diverse control types. Many interfaces incorporate sliders, switches, and knobs. These controls hold fixed states between interactions, creating potential conflicts in which a virtual control has a different state from its physical counterpart. This paper presents two methods, "manual" and "automatic", for synchronizing physical and virtual control states and explores the effects of these methods on the usability of remapped interfaces. Results showed that interfaces without retargeting were the ideal configuration, but they lack the flexibility that remapped interfaces provide. Automatic synchronization was faster and more usable; however, manual synchronization is suitable for a broader range of physical interfaces.
2023 | Brandon J Matthews et al. | University of South Australia | Force Feedback & Pseudo-Haptic Weight; Mixed Reality Workspaces; Immersion & Presence Research | CHI
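One plausible reading of the two synchronization methods named above is sketched below for a simple toggle control: automatic synchronization resolves the conflict without user action, while manual synchronization asks the user to operate the physical control until the states match. Both the direction of synchronization and the interaction flow are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of manual vs. automatic control-state synchronization
# for a remapped physical-virtual toggle switch.

from dataclasses import dataclass

@dataclass
class ToggleControl:
    physical_state: bool
    virtual_state: bool

def automatic_sync(c: ToggleControl) -> None:
    """Automatic: the virtual control silently adopts the physical control's state."""
    c.virtual_state = c.physical_state

def manual_sync(c: ToggleControl, user_flipped_physical: bool) -> bool:
    """Manual: the user is prompted to flip the physical control until states match.

    Returns True once the conflict is resolved."""
    if c.physical_state == c.virtual_state:
        return True
    if user_flipped_physical:
        c.physical_state = not c.physical_state
    return c.physical_state == c.virtual_state

# Example conflict: virtual switch is ON, physical switch is OFF.
c = ToggleControl(physical_state=False, virtual_state=True)
automatic_sync(c)          # resolves instantly by changing the virtual state
print(c)
```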
ProxSituated Visualization: An Extended Model of Situated Visualization using Proxies for Physical Referents
Existing situated visualization models assume the user is able to directly interact with the objects and spaces to which the data refers (known as physical referents). We review a growing body of work exploring scenarios where the user interacts with a proxy representation of the physical referent rather than immediately with the object itself. This introduces a complex mixture of immediate situatedness and proxies of situatedness that goes beyond the expressiveness of current models. We propose an extended model of situated visualization that encompasses Immediate Situated Visualization and ProxSituated (Proxy of Situated) Visualization. Our model describes a set of key entities involved in proxSituated scenarios and important relationships between them. From this model, we derive design dimensions and apply them to existing situated visualization work. The resulting design space allows us to describe and evaluate existing scenarios, as well as to creatively generate new conceptual scenarios.
2023 | Kadek Ananta Satriadi et al. | University of South Australia, Monash University | Interactive Data Visualization; Context-Aware Computing | CHI
The Impact of Sharing Gaze Behaviours in Collaborative Mixed Reality
In remote collaboration involving a physical task, visualising gaze behaviours may compensate for other unavailable communication channels. In this paper, we report on a 360° panoramic Mixed Reality (MR) remote collaboration system that shares gaze behaviour visualisations between a local user in Augmented Reality and a remote collaborator in Virtual Reality. We conducted two user studies to evaluate the design of MR gaze interfaces and the effect of gaze behaviour (on/off) and gaze style (bi-/uni-directional). The results indicate that gaze visualisations amplify meaningful joint attention and improve co-presence compared to a no-gaze condition. Gaze behaviour visualisations make communication less verbally complex, thereby lowering collaborators' cognitive load while improving mutual understanding. Users felt that bi-directional behaviour visualisation, which shows both collaborators' gaze states, was the preferred condition, since it enabled easy identification of shared interests and task progress.
2022 | Allison Jing et al. | XR Collaboration | CSCW
VRhook: A Data Collection Tool for VR Motion Sickness Research
Despite the increasing popularity of VR games, one factor hindering the industry's rapid growth is the motion sickness experienced by users. Symptoms such as fatigue and nausea severely hamper the user experience. Machine learning methods could be used to automatically detect motion sickness in VR experiences, but generating the extensive labeled dataset needed is a challenging task: it requires either very time-consuming manual labeling by human experts or modifying proprietary VR application source code to capture labels. To overcome these challenges, we developed a novel data collection tool, VRhook, which can collect data from any VR game without needing access to its source code. This is achieved through dynamic hooking, whereby custom code is injected into a game's run-time memory to record each video frame and its associated transformation matrices. From these, VRhook can automatically extract various useful labels such as rotation, speed, and acceleration. In addition, it can blend a customized screen overlay on top of game content to collect self-reported comfort scores. In this paper, we describe the technical development of VRhook, demonstrate its utility with an example, and describe directions for future research.
2022 | Elliott Wen et al. | Motion Sickness & Passenger Experience; Immersion & Presence Research | UIST
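As an illustration of how such labels can be derived from recorded transformation matrices, the sketch below reads the camera translation from each 4x4 matrix and differentiates it across frames. The matrix convention (translation in the last column) and frame timing are assumptions; this is not VRhook's code.

```python
# Illustrative derivation of speed and acceleration labels from per-frame
# camera transformation matrices.

import numpy as np

def motion_labels(transforms, dt):
    """Given a list of 4x4 camera matrices sampled every `dt` seconds,
    return per-frame speed and acceleration magnitudes."""
    positions = np.array([np.asarray(m)[:3, 3] for m in transforms])  # translation column
    velocity = np.diff(positions, axis=0) / dt
    speed = np.linalg.norm(velocity, axis=1)
    acceleration = np.diff(speed) / dt
    return speed, acceleration

# Example: a camera moving 0.1 m per frame along x, sampled at 90 fps.
frames = []
for i in range(4):
    m = np.eye(4)
    m[0, 3] = 0.1 * i
    frames.append(m)
print(motion_labels(frames, dt=1 / 90))   # constant speed, zero acceleration
```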
Emotion Recognition in Conversations using Brain and Physiological Signals
Emotions are complex psycho-physiological processes related to numerous external and internal changes in the body. They play an essential role in human-human interaction and can be important for human-machine interfaces. Automatically recognizing emotions in conversation could be applied in many domains, such as healthcare, education, social interaction, and entertainment. Facial expressions, speech, and body gestures are the primary cues that have been widely used for recognizing emotions in conversation. However, these cues can be ineffective because they cannot reveal underlying emotions when a person involuntarily or deliberately conceals them. Researchers have shown that analyzing brain activity and physiological signals can lead to more reliable emotion recognition, since these signals generally cannot be controlled. However, such body responses in emotional situations have rarely been explored in interactive tasks like conversations. This paper explores and discusses the performance and challenges of using brain activity and other physiological signals to recognize emotions in face-to-face conversation. We present an experimental setup for stimulating spontaneous emotions during a face-to-face conversation while recording brain and physiological activity. We then describe our analysis strategies for recognizing emotions from Electroencephalography (EEG), Photoplethysmography (PPG), and Galvanic Skin Response (GSR) signals using subject-dependent and subject-independent approaches. Finally, we describe the limitations and challenges, and new directions for future research in conversational emotion recognition.
2022 | Nastaran Saffaryazdi et al. | Brain-Computer Interface (BCI) & Neurofeedback; Biosensors & Physiological Monitoring | IUI
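The analysis strategy described above can be illustrated with a small sketch: EEG, PPG, and GSR features are fused per conversation segment, and a classifier is evaluated subject-independently with leave-one-subject-out cross-validation. Feature dimensions, the classifier, and the emotion labels below are placeholders, not the authors' analysis.

```python
# Minimal sketch of multimodal feature fusion with subject-independent
# (leave-one-subject-out) evaluation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(1)
n_segments, n_subjects = 200, 10
eeg = rng.normal(size=(n_segments, 40))      # e.g. band powers per channel (placeholder)
ppg = rng.normal(size=(n_segments, 6))       # e.g. heart-rate / HRV features (placeholder)
gsr = rng.normal(size=(n_segments, 4))       # e.g. skin-conductance features (placeholder)

X = np.hstack([eeg, ppg, gsr])               # simple feature-level fusion
y = rng.integers(0, 4, size=n_segments)      # placeholder emotion classes
groups = rng.integers(0, n_subjects, size=n_segments)  # which subject each segment came from

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"subject-independent accuracy: {scores.mean():.2f}")
```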
Bringing the Jury to the Scene of the Crime: Memory and Decision-Making in a Simulated Crime Scene
This paper investigates the use of immersive virtual reconstructions as an aid for jurors during a courtroom trial. The findings of a between-participants user study on memory and decision-making are presented in the context of viewing a simulated hit-and-run death scenario. Participants listened to the opening statements of a prosecutor and a defence attorney before viewing the crime scene in Virtual Reality (VR) or as still images. We compare the effects on cognition and usability of using VR rather than images presented on a screen. We found several significant improvements, including that VR led to more consistent decision-making among participants. This shows that VR could provide a promising solution for courts to present crime scenes when site visits are not possible.
2021 | Carolin Reichherzer et al. | University of South Australia | Immersion & Presence Research; Museum & Cultural Heritage Digitization | CHI
Haptic and Visual Comprehension of a 2D Graph Layout Through Physicalisation
Data physicalisations afford people the ability to directly interact with data using their hands, potentially achieving a more comprehensive understanding of a dataset. Due to their complex nature, the representation of graphs and networks could benefit from physicalisation, bringing the dataset from the digital world into the physical one. However, no empirical work has investigated the effects physicalisations have on the comprehension of graph representations. In this work, we present initial design considerations for graph physicalisations, as well as an empirical study investigating differences in comprehension between virtual and physical representations. We found that participants perceived themselves as more accurate with touch and sight (visual-haptic) than with the graphical-only modality, and perceived a triangle-counting task as less difficult in the visual-haptic than in the graphical-only modality. Additionally, participants significantly preferred interacting with the visual-haptic condition over the other conditions, despite no significant effect on task time or error.
2021 | Adam Drogemuller et al. | University of South Australia | Foot & Wrist Interaction; Data Physicalization | CHI
OmniGlobeVR: A Collaborative 360-Degree Communication System for VR
In this paper, we present a novel collaboration tool, OmniGlobeVR, an asymmetric system that supports communication and collaboration between a VR user (occupant) and multiple non-VR users (designers) across virtual and physical platforms. OmniGlobeVR allows designers to explore the VR space from any point of view using two view modes: a 360° first-person mode and a third-person mode. In addition, a shared gaze awareness cue is provided to further enhance communication between the occupant and the designers. Finally, the system has a face window feature that allows designers to share their facial expressions and upper-body view with the occupant for exchanging information through nonverbal cues. We conducted a user study to evaluate OmniGlobeVR, comparing three conditions: (1) first-person mode with the face window, (2) first-person mode with a solid window, and (3) third-person mode with the face window. We found that the first-person mode with the face window required significantly less mental effort and provided better spatial presence, usability, and understanding of the partner's focus. We discuss the design implications of these results and directions for future research.
2020 | Zhengqing Li et al. | Social & Collaborative VR; Immersion & Presence Research | DIS
A User Study on Mixed Reality Remote Collaboration with Eye Gaze and Hand Gesture Sharing
Supporting natural communication cues is critical for people working together remotely and face-to-face. In this paper, we present a Mixed Reality (MR) remote collaboration system that enables a local worker to share a live 3D panorama of his or her surroundings with a remote expert. The remote expert can also share task instructions back to the local worker using visual cues in addition to verbal communication. We conducted a user study to investigate how sharing augmented gaze and gesture cues from the remote expert to the local worker affects overall collaboration performance and user experience. We found that by combining gaze and gesture cues, our remote collaboration system provided a significantly stronger sense of co-presence for both the local and remote users than using the gaze cue alone. The combined cues were also rated significantly higher than gaze alone in terms of ease of conveying spatial actions.
2020 | Huidong Bai et al. | University of Auckland | Full-Body Interaction & Embodied Input; Eye Tracking & Gaze Interaction; Mixed Reality Workspaces | CHI