“My Happiness Makes You Smile”: Beginning to Understand Telepathic Superpower Design Via Brain-Muscle Interfaces
Designing superpowers in Human-Computer Interaction (HCI), often inspired by science fiction, has garnered increasing attention. However, it is important to ask whether such superpower designs have inherent negative side effects, especially as technological advances allow going beyond short demos to integrating these superpowers into everyday life. To understand the positive and negative side effects of superpower design, we created "EmoPals" and studied it in everyday life. EmoPals is a novel system inspired by telepathy: one user's emotions are detected through a brain-computer interface and replicated on the other user's face through electrical muscle stimulation, so that one user's happiness makes the other smile, and vice versa. A 5-day field study with 12 participants suggests that EmoPals can strengthen emotional connections and facilitate empathy; however, it also highlights the negative side effects of amplifying negative emotions and of social discomfort. We propose five design recommendations for designing superpowers that account for negative side effects. Ultimately, we aim to deepen our understanding of superpower design for everyday life.
2025 · Siyi Liu et al. · DIS · Topics: Electrical Muscle Stimulation (EMS); Brain-Computer Interface (BCI) & Neurofeedback

AlphaPIG: The Nicest Way to Prolong Interactive Gestures in Extended Reality
Mid-air gestures serve as a common interaction modality across Extended Reality (XR) applications, enhancing engagement and ownership through intuitive body movements. However, prolonged arm movements induce shoulder fatigue—known as "Gorilla Arm Syndrome"—degrading user experience and reducing interaction duration. Although existing ergonomic techniques derived from Fitts' law (such as reducing target distance, increasing target width, and modifying control-display gain) provide some fatigue mitigation, their implementation in XR applications remains challenging due to the complex balance between user engagement and physical exertion. We present AlphaPIG, a meta-technique designed to Prolong Interactive Gestures by leveraging real-time fatigue predictions. AlphaPIG assists designers in extending and improving XR interactions by enabling automated fatigue-based interventions. By adjusting the intervention timing and intensity decay rate, designers can explore and control the trade-off between fatigue reduction and potential side effects such as decreased body ownership. We validated AlphaPIG's effectiveness through a study (N=22) implementing the widely used Go-Go technique. Results demonstrate that AlphaPIG significantly reduces shoulder fatigue compared to non-adaptive Go-Go while maintaining comparable perceived body ownership and agency. Based on these findings, we discuss positive and negative perceptions of the intervention. By integrating real-time fatigue prediction with adaptive intervention mechanisms, AlphaPIG constitutes a critical first step towards fatigue-aware XR applications.
2025 · Zhuying Li et al. · Monash University · CHI · Topics: Full-Body Interaction & Embodied Input; Immersion & Presence Research

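For context, the Go-Go technique referenced above maps real arm extension nonlinearly to virtual reach. A minimal sketch of a fatigue-triggered intervention layered on such a mapping, assuming illustrative parameter names (`k`, `trigger`, `boost`, `decay`) that are not taken from the paper:

```python
# Illustrative sketch (not the paper's implementation): once predicted
# fatigue crosses a trigger threshold, the nonlinear Go-Go gain is
# boosted and then decays over time at a designer-chosen rate, so less
# physical reach covers the same virtual distance.
import math

def go_go_reach(real_dist, threshold=0.3, k=10.0):
    """Classic Go-Go mapping: linear inside `threshold`, quadratic beyond."""
    if real_dist <= threshold:
        return real_dist
    return real_dist + k * (real_dist - threshold) ** 2

def intervened_reach(real_dist, fatigue, trigger=0.5, boost=2.0, decay=0.1, t=0.0):
    """Scale Go-Go's nonlinear term once fatigue exceeds `trigger`;
    the boost decays exponentially over time `t` to limit body-ownership loss."""
    gain = 1.0
    if fatigue > trigger:
        gain = 1.0 + (boost - 1.0) * math.exp(-decay * t)
    base = go_go_reach(real_dist)
    return real_dist + gain * (base - real_dist)
```

The decay term reflects the trade-off the abstract describes: a stronger or longer-lasting boost reduces physical exertion further but risks a larger mismatch between real and virtual motion.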
Drawing Connections: Designing Situated Links for Immersive Maps
We explore the design of situated visual links in outdoor augmented reality (AR) for connecting miniature buildings on a virtual map to their real-world counterparts. We first distill design criteria from prior work, then conduct two user studies to better understand users’ preferences among different design choices for the links: the first evaluates a set of link geometries in a virtual environment, and the second a refined AR prototype in two different outdoor environments. The studies reveal that links help in identifying buildings in the environment. Participants prefer straight rather than curved links; simple, thin links that avoid information occlusion; and links and maps aligned with their direction of view. We recommend using a consistent color with strong contrast against the background for all links in a scene. To improve visibility, the diameter of links should grow with their distance to the viewer, and optional animated stripes can be placed on links. The findings of this study have the potential to bolster the development of various situated visualization applications, such as those used in urban planning, tourism, smart agriculture, and other fields.
2023 · Zeinab Ghaemi et al. · MobileHCI · Topics: AR Navigation & Context Awareness; Geospatial & Map Visualization; Context-Aware Computing

User-Driven Constraints for Layout Optimisation in Augmented Reality
Automatic layout optimisation allows users to arrange augmented reality content in the real-world environment without tedious manual interactions. This optimisation is often based on modelling the intended content placement as constraints, defined as cost functions; applying a cost-minimisation algorithm then leads to a desirable placement. However, such an approach is limited by the lack of user control over the optimisation results. In this paper, we explore the concept of user-driven constraints for augmented reality layout optimisation. With our approach, users can define and set up their own constraints directly within the real-world environment. We first present a design space composed of three dimensions: the constraints, the regions of interest, and the constraint parameters. We then explore, through a user elicitation study, which input gestures can be employed to define the user-driven constraints of our design space. Using the results of the study, we propose a holistic system design and implementation demonstrating our user-driven constraints, which we evaluate in a final user study where participants created several constraints at the same time to arrange a set of virtual content.
2023 · Aziz Niyazov et al. · IRIT - University of Toulouse · CHI · Topics: AR Navigation & Context Awareness; Mixed Reality Workspaces; Prototyping & User Testing

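The constraint-as-cost-function approach described above can be sketched minimally. The specific cost terms, weights, and grid-search minimiser here are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch of constraint-based layout optimisation: each
# user-driven constraint is a cost function over a candidate 2D position,
# and the chosen layout is the position minimising the weighted sum of
# costs. Names (anchor, region, weights) are illustrative only.
import math
import itertools

def distance_cost(pos, anchor):
    """Penalise placements far from a user-chosen anchor point."""
    return math.dist(pos, anchor)

def region_cost(pos, region):
    """Penalise placements outside a user-defined region of interest.
    `region` is an axis-aligned box (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = region
    dx = max(xmin - pos[0], 0.0, pos[0] - xmax)
    dy = max(ymin - pos[1], 0.0, pos[1] - ymax)
    return math.hypot(dx, dy)

def optimise(anchor, region, weights=(1.0, 5.0), step=0.5):
    """Grid-search minimiser over candidate positions, standing in for
    whatever cost-minimisation algorithm a real system would use."""
    xs = [i * step for i in range(21)]  # candidates in [0, 10]
    return min(itertools.product(xs, xs),
               key=lambda p: weights[0] * distance_cost(p, anchor)
                           + weights[1] * region_cost(p, region))
```

With a heavily weighted region constraint, the minimiser picks the in-region point nearest the anchor, which is the intuition behind letting users steer the optimiser by adding constraints rather than dragging content manually.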
GestureExplorer: Immersive Visualisation and Exploration of Gesture Data
This paper presents the design and evaluation of GestureExplorer, an Immersive Analytics tool that supports interactive exploration, classification, and sensemaking with large sets of 3D temporal gesture data. GestureExplorer features 3D skeletal and trajectory visualisations of gestures combined with abstract visualisations of clustered sets of gestures. By leveraging the large immersive space afforded by a Virtual Reality interface, our tool allows free navigation and control of the viewing perspective, letting users gain a better understanding of gestures. We explored a selection of classification methods to provide an overview of the dataset, linked to a detailed view of the data showing different visualisation modalities. We evaluated GestureExplorer in two user studies and collected feedback from participants with diverse visualisation and analytics backgrounds. Our results demonstrate the promising capability of GestureExplorer for providing a useful and engaging experience in exploring and analysing gesture data.
2023 · Ang Li et al. · Monash University · CHI · Topics: Hand Gesture Recognition; Interactive Data Visualization

DataDancing: An Exploration of the Design Space For Visualisation View Management for 3D Surfaces and Spaces
Recent studies have explored how users of immersive visualisation systems arrange data representations in the space around them. Generally, these have focused on placement centred at eye level in absolute room coordinates. However, work in HCI exploring full-body interaction has identified zones relative to the user's body with different roles. We encapsulate the possibilities for visualisation view management into a design space, called “DataDancing”. From this design space we extrapolate a variety of view management prototypes, each demonstrating a different combination of interaction techniques and space use. The prototypes are enabled by a full-body tracking system including novel devices for torso and foot interaction. We explore four of these prototypes, encompassing standard wall- and table-style interaction as well as novel foot interaction, in depth through a qualitative user study. Learning from the results, we improve the interaction techniques and propose two hybrid interfaces that demonstrate the interaction possibilities of the design space.
2023 · Jiazhou Liu et al. · Monash University · CHI · Topics: Full-Body Interaction & Embodied Input; Interactive Data Visualization

Tangible Globes for Data Visualisation in Augmented Reality
Head-mounted augmented reality (AR) displays allow for the seamless integration of virtual visualisation with contextual tangible references, such as physical (tangible) globes. We explore the design of immersive geospatial data visualisation with AR and tangible globes. We investigate the “tangible-virtual interplay” of tangible globes with virtual data visualisation, and propose a conceptual approach for designing immersive geospatial globes. We demonstrate a set of use cases, such as augmenting a tangible globe with virtual overlays, using a physical globe as a tangible input device for interacting with virtual globes and maps, and linking an augmented globe to an abstract data visualisation. We gathered qualitative feedback from experts about our use case visualisations, and compiled a summary of key takeaways as well as ideas for envisioned future improvements. The proposed design space, example visualisations, and lessons learned aim to guide the design of tangible globes for data visualisation in AR.
2022 · Kadek Ananta Satriadi et al. · Monash University, University of South Australia · CHI · Topics: Geospatial & Map Visualization; Smart Cities & Urban Sensing

A Design Space Exploration of Worlds in Miniature
Worlds-in-Miniature (WiMs) are interactive worlds within a world that combine the advantages of an input space, a cartographic map, and an overview+detail interface. They have been used across the extended reality spectrum for a variety of applications. Building on an analysis of examples of WiMs from the research literature, we contribute a design space for WiMs based on seven design dimensions. Further, we expand upon existing definitions of WiMs to provide a definition that applies across the extended reality spectrum. We identify the design dimensions of size-scope-scale, abstraction, geometry, reference frame, links, multiples, and virtuality. Using our framework, we describe existing Worlds-in-Miniature from the research literature and reveal unexplored research areas. Finally, we generate new examples of WiMs using our framework to fill some of these gaps. With our findings, we identify opportunities that can guide future research into WiMs.
2021 · Kurtis Danyluk et al. · University of Calgary · CHI · Topics: Mixed Reality Workspaces; 360° Video & Panoramic Content; Sustainable HCI

Quantitative Data Visualisation on Virtual Globes
Geographic data visualisation on virtual globes is intuitive and widespread, but has not been thoroughly investigated. We explore two main design factors for quantitative data visualisation on virtual globes: (i) commonly used primitives (2D bar, 3D bar, circle) and (ii) the orientation of these primitives (tangential, normal, billboarded). We evaluate five distinctive visualisation idioms in a user study with 50 participants. The results show that aligning primitives tangentially on the globe’s surface decreases the accuracy of area-proportional circle visualisations, while the orientation does not have a significant effect on the accuracy of length-proportional bar visualisations. We also find that tangential primitives induce higher perceived mental load than other orientations. Guided by these results, we design a novel globe visualisation idiom, Geoburst, that combines a virtual globe and a radial bar chart. A preliminary evaluation reports potential benefits and drawbacks of the Geoburst visualisation.
2021 · Kadek Ananta Satriadi et al. · Monash University · CHI · Topics: Geospatial & Map Visualization; Visualization Perception & Cognition

Grand Challenges in Immersive Analytics
Immersive Analytics is a quickly evolving field that unites several areas, such as visualisation, immersive environments, and human-computer interaction, to support human data analysis with emerging technologies. This research has thrived over the past years with multiple workshops, seminars, and a growing body of publications spanning several conferences. Given the rapid advancement of interaction technologies and novel application domains, this paper works toward a broader research agenda to enable widespread adoption. We present 17 key research challenges developed over multiple sessions by a diverse group of 24 international experts, initiated at a virtual scientific workshop at ACM CHI 2020. These challenges aim to coordinate future work by providing a systematic roadmap of current directions and impending hurdles to facilitate productive and effective applications for Immersive Analytics.
2021 · Barrett Ens et al. · Monash University · CHI · Topics: Immersion & Presence Research; Interactive Data Visualization

Zippro: The Design and Implementation of An Interactive Zipper
Zippers are common in a wide variety of objects that we use daily. This work investigates how we can take advantage of such common daily activities to support seamless interaction with technology. We look beyond the simple zipper-sliding interactions explored previously to determine how to weave foreground and background interactions into a vocabulary of natural usage patterns. We begin by conducting two user studies to understand how people typically interact with zippers. The findings identify several opportunities for zipper input and sensing, which inform the design of Zippro, a self-contained prototype zipper slider that we evaluate with a standard jacket zipper. We conclude by demonstrating several applications that make use of the identified foreground and background input methods.
2020 · Pin-sung Ku et al. · Dartmouth College & National Taiwan University · CHI · Topics: Shape-Changing Interfaces & Soft Robotic Materials

On the Shoulder of the Giant: A Multi-Scale Mixed Reality Collaboration with 360 Video Sharing and Tangible Interaction
We propose a multi-scale Mixed Reality (MR) collaboration between the Giant, a local Augmented Reality user, and the Miniature, a remote Virtual Reality user, in Giant-Miniature Collaboration (GMC). The Miniature is immersed in a 360° video shared by the Giant, who can physically manipulate the Miniature through a tangible interface: a 360° camera combined with a 6-DOF tracker. We implemented a prototype system as a proof of concept and conducted a user study (n=24) comprising four parts, comparing: A) two types of virtual representations, B) three levels of Miniature control, C) three levels of 360° video view dependency, and D) four 360° camera placement positions on the Giant. The results show users prefer a shoulder-mounted camera view, while a view frustum with a complementary avatar is a good visualisation for the Miniature's virtual representation. From the results, we give design recommendations and demonstrate an example Giant-Miniature Interaction.
2019 · Thammathip Piumsomboon et al. · University of Canterbury & University of South Australia · CHI · Topics: Social & Collaborative VR; Mixed Reality Workspaces; 360° Video & Panoramic Content

Scaptics and Highlight-Planes: Immersive Interaction Techniques for Finding Occluded Features in 3D Scatterplots
Three-dimensional scatterplots suffer from well-known perception and usability problems. In particular, overplotting and occlusion, mainly due to density and noise, prevent users from properly perceiving the data. Thanks to accurate head and hand tracking, immersive Virtual Reality (VR) setups provide new ways to interact with and navigate 3D scatterplots. VR also supports additional sensory modalities such as haptic feedback. Inspired by methods commonly used in Scientific Visualisation to visually explore volumes, we propose two techniques that leverage the immersive aspects of VR: first, a density-based haptic vibration technique (Scaptics) that provides feedback through the controller; and second, an adaptation of a cutting plane for 3D scatterplots (Highlight-Plane). We evaluated both techniques in a controlled study with two tasks involving density (finding high- and low-density areas). Overall, Scaptics was the most time-efficient and accurate technique; however, in some conditions it was outperformed by Highlight-Plane.
2019 · Arnaud Prouzeau et al. · Monash University · CHI · Topics: Mixed Reality Workspaces; Visualization Perception & Cognition

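A density-to-haptics mapping in the spirit of Scaptics can be sketched as follows; the neighbourhood radius and the linear amplitude mapping are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch: the vibration amplitude sent to the VR controller
# grows with the local density of scatterplot points around the
# controller's position, so dense (possibly occluded) regions are felt
# rather than seen. Radius and amplitude range are illustrative choices.
import math

def local_density(probe, points, radius=0.1):
    """Fraction of all points lying within `radius` of the probe position."""
    if not points:
        return 0.0
    hits = sum(1 for p in points if math.dist(probe, p) <= radius)
    return hits / len(points)

def vibration_amplitude(probe, points, radius=0.1, max_amp=1.0):
    """Map local density linearly to a haptic amplitude in [0, max_amp]."""
    return max_amp * local_density(probe, points, radius)
```

In a real system this would run per frame on the tracked controller pose, likely with a spatial index rather than a linear scan over all points.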