MagnePins: A Modular, Affordable, and DIY Refreshable Braille and Tactile Display
Refreshable tactile, braille, and shape-changing displays have been studied in HCI for many years and have recently become commercially available. These devices offer blind and low-vision users the ability to read text directly from a computer application, and also the exciting possibility of increased access to dynamic tactile graphics. Commercial devices and research prototypes, however, share similar challenges and tradeoffs, including cost, scalability, and miniaturisation. Research prototypes typically have either a low pin count (some only a single cell or line of braille) or a pin pitch and pin dimension far larger than the braille specification. Commercial devices that achieve both a high pin count and the 2.5mm pin-pitch requirement suffer from high cost, due to the inherent complexity of thousands of individual, precision, electro-mechanical or piezo-actuated pins. We present "MagnePins", an innovative, robust, and open-source design that achieves a large pin array (24x89 in our prototype) with braille-compliant pin size and spacing of 2.5mm. It utilises a simple electromagnetic actuation mechanism driven by reliable driver circuitry and can be fabricated economically using cheap mass-produced elements in a well-equipped makerspace. Our tests of the device indicate high accuracy (of up to 99.97%), and in testing with an expert touch reader, it provided high tactile resolution and easy readability.
2025 · Jim Smiley et al. · Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille) · UIST
AlphaPIG: The Nicest Way to Prolong Interactive Gestures in Extended Reality
Mid-air gestures serve as a common interaction modality across Extended Reality (XR) applications, enhancing engagement and ownership through intuitive body movements. However, prolonged arm movements induce shoulder fatigue, known as "Gorilla Arm Syndrome", degrading user experience and reducing interaction duration. Although existing ergonomic techniques derived from Fitts' law (such as reducing target distance, increasing target width, and modifying control-display gain) provide some fatigue mitigation, their implementation in XR applications remains challenging due to the complex balance between user engagement and physical exertion. We present AlphaPIG, a meta-technique designed to Prolong Interactive Gestures by leveraging real-time fatigue predictions. AlphaPIG assists designers in extending and improving XR interactions by enabling automated fatigue-based interventions. Through adjustment of intervention timing and intensity decay rate, designers can explore and control the trade-off between fatigue reduction and potential effects such as decreased body ownership. We validated AlphaPIG's effectiveness through a study (N=22) implementing the widely used Go-Go technique. Results demonstrated that AlphaPIG significantly reduces shoulder fatigue compared to non-adaptive Go-Go, while maintaining comparable perceived body ownership and agency. Based on these findings, we discuss positive and negative perceptions of the intervention. By integrating real-time fatigue prediction with adaptive intervention mechanisms, AlphaPIG constitutes a critical first step towards creating fatigue-aware applications in XR.
2025 · Zhuying Li et al. · Monash University · Full-Body Interaction & Embodied Input; Immersion & Presence Research · CHI
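The Go-Go technique used in the study maps the real hand to a virtual hand that extends nonlinearly once the arm reaches past a threshold distance. A minimal sketch of the classic Go-Go mapping (Poupyrev et al.); the threshold and gain values here are illustrative assumptions, not the parameters used by AlphaPIG:

```python
def gogo_virtual_distance(real_dist, threshold=0.4, k=6.0):
    """Map real hand distance (metres) to virtual hand distance.

    Within `threshold`, the mapping is one-to-one; beyond it, the virtual
    hand extends quadratically, letting the user reach distant targets
    with small arm movements. `threshold` and `k` are illustrative values.
    """
    if real_dist < threshold:
        return real_dist
    return real_dist + k * (real_dist - threshold) ** 2
```

An adaptive variant in the spirit of AlphaPIG could increase `k` as predicted fatigue rises, so a tired arm needs less extension to cover the same virtual range.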
Exploring Human Values in Mixed Reality Futures
The rapid development of immersive technologies is heralding a shift from purely physical environments to ones that seamlessly mix the physical and the digital. As these Mixed Reality (MR) worlds develop quickly, we need to reflect on how human values are incorporated into the design and deployment of the technology. Human values map our perception of the world, reflect attitudes, guide behaviours, and provide us with social and moral grounding. However, there is limited research on incorporating values in the design of MR technologies. This research makes three contributions: (1) a playful values-driven workshop design; (2) insights into the values of different groups of people in diverse MR scenarios; and (3) recommendations for incorporating human values into future MR design and application. This work will contribute to improving the ethical and responsible development of current and future MR applications.
2024 · Mengxing Li et al. · Mixed Reality Workspaces; Technology Ethics & Critical HCI; Human-Nature Relationships (More-than-Human Design) · DIS
Working with Forensic Practitioners to Understand the Opportunities and Challenges for Mixed-Reality Digital Autopsy
Forensic practitioners analyse intrinsically 3D data daily on 2D screens. We explore novel immersive visualisation techniques that enable digital autopsy through analysis of 3D imagery. We employ a user-centred design process involving four rounds of user feedback: (1) formative interviews eliciting opportunities and requirements for mixed-reality digital autopsies; (2) a larger workshop identifying our prototype's limitations and further use cases and interaction ideas; and (3+4) two rounds of qualitative user validation of successive prototypes of novel interaction techniques for pathologist sensemaking. Overall, we find that MR holds great potential to enable digital autopsy, initially to supplement physical autopsy, but ultimately to replace it. Experts were able to use our tool to perform basic virtual autopsy tasks; the MR setup promotes exploration and sensemaking of cause of death; and, subject to the limitations of current MR technology, the proposed system is a valid option for digital autopsies, according to the experts' feedback.
2023 · Vahid Pooryousef et al. · Monash University · Mixed Reality Workspaces; VR Medical Training & Rehabilitation; Medical & Scientific Data Visualization · CHI
User-Driven Constraints for Layout Optimisation in Augmented Reality
Automatic layout optimisation allows users to arrange augmented reality content in the real-world environment without the need for tedious manual interactions. This optimisation is often based on modelling the intended content placement as constraints, defined as cost functions; applying a cost-minimisation algorithm then leads to a desirable placement. However, such an approach is limited by the lack of user control over the optimisation results. In this paper we explore the concept of user-driven constraints for augmented reality layout optimisation. With our approach, users can define and set up their own constraints directly within the real-world environment. We first present a design space composed of three dimensions: the constraints, the regions of interest, and the constraint parameters. We then explore which input gestures can be employed to define the user-driven constraints of our design space through a user elicitation study. Using the results of the study, we propose a holistic system design and implementation demonstrating our user-driven constraints, which we evaluate in a final user study where participants had to create several constraints at the same time to arrange a set of virtual contents.
2023 · Aziz Niyazov et al. · IRIT - University of Toulouse · AR Navigation & Context Awareness; Mixed Reality Workspaces; Prototyping & User Testing · CHI
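The constraints-as-cost-functions idea above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the two cost terms (stay near an anchor, avoid a user-marked region) and the random-restart hill climbing stand in for whatever constraints and minimiser a real system would use:

```python
import random

def layout_cost(pos, anchor, forbidden, w_dist=1.0, w_avoid=2.0):
    """Illustrative 2D cost: prefer closeness to `anchor`, penalise
    positions within unit distance of a user-marked `forbidden` point."""
    dist = ((pos[0] - anchor[0]) ** 2 + (pos[1] - anchor[1]) ** 2) ** 0.5
    near = ((pos[0] - forbidden[0]) ** 2 + (pos[1] - forbidden[1]) ** 2) ** 0.5
    return w_dist * dist + w_avoid * max(0.0, 1.0 - near)

def minimise(anchor, forbidden, iters=2000, seed=0):
    """Simple hill climbing as a stand-in for a real cost minimiser:
    propose small random moves, keep any that lower the cost."""
    rng = random.Random(seed)
    best = anchor
    best_c = layout_cost(best, anchor, forbidden)
    for _ in range(iters):
        cand = (best[0] + rng.uniform(-0.1, 0.1),
                best[1] + rng.uniform(-0.1, 0.1))
        c = layout_cost(cand, anchor, forbidden)
        if c < best_c:
            best, best_c = cand, c
    return best, best_c
```

A user-driven constraint in the paper's sense would correspond to the user adding, placing, and parameterising terms like the `forbidden` penalty directly in the environment.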
DataDancing: An Exploration of the Design Space For Visualisation View Management for 3D Surfaces and Spaces
Recent studies have explored how users of immersive visualisation systems arrange data representations in the space around them. Generally, these have focused on placement centred at eye level in absolute room coordinates. However, work in HCI exploring full-body interaction has identified zones relative to the user's body with different roles. We encapsulate the possibilities for visualisation view management into a design space (called "DataDancing"). From this design space we extrapolate a variety of view management prototypes, each demonstrating a different combination of interaction techniques and space use. The prototypes are enabled by a full-body tracking system including novel devices for torso and foot interaction. We explore four of these prototypes, encompassing standard wall- and table-style interaction as well as novel foot interaction, in depth through a qualitative user study. Learning from the results, we improve the interaction techniques and propose two hybrid interfaces that demonstrate interaction possibilities of the design space.
2023 · Jiazhou Liu et al. · Monash University · Full-Body Interaction & Embodied Input; Interactive Data Visualization · CHI
ProxSituated Visualization: An Extended Model of Situated Visualization using Proxies for Physical Referents
Existing situated visualization models assume the user is able to directly interact with the objects and spaces to which the data refers (known as physical referents). We review a growing body of work exploring scenarios where the user interacts with a proxy representation of the physical referent rather than immediately with the object itself. This introduces a complex mixture of immediate situatedness and proxies of situatedness that goes beyond the expressiveness of current models. We propose an extended model of situated visualization that encompasses Immediate Situated Visualization and ProxSituated (Proxy of Situated) Visualization. Our model describes a set of key entities involved in proxSituated scenarios and the important relationships between them. From this model, we derive design dimensions and apply them to existing situated visualization work. The resulting design space allows us to describe and evaluate existing scenarios, as well as to creatively generate new conceptual scenarios.
2023 · Kadek Ananta Satriadi et al. · University of South Australia, Monash University · Interactive Data Visualization; Context-Aware Computing · CHI
Deimos: A Grammar of Dynamic Embodied Immersive Visualisation Morphs and Transitions
We present Deimos, a grammar for specifying dynamic embodied immersive visualisation morphs and transitions. A morph is a collection of animated transitions that are dynamically applied to immersive visualisations at runtime, and is conceptually modelled as a state machine. It comprises state, transition, and signal specifications. States in a morph are used to generate animation keyframes, with transitions connecting two states together. A transition is controlled by signals, which are composable data streams that can be used to enable embodied interaction techniques. Morphs allow immersive representations of data to transform and change shape through user interaction, facilitating the embodied cognition process. We demonstrate the expressivity of Deimos in an example gallery and evaluate its usability in an expert user study with six immersive analytics researchers. Participants found the grammar to be powerful and expressive, and showed interest in drawing upon Deimos' concepts and ideas in their own research.
2023 · Benjamin Lee et al. · Monash University · Mixed Reality Workspaces; Interactive Data Visualization; Medical & Scientific Data Visualization · CHI
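The state-machine model behind a morph can be sketched abstractly. This is a hypothetical illustration of the concept only, not the Deimos grammar itself: state names, the property dictionaries, and the signal-threshold rule are all invented here for clarity:

```python
# Hypothetical sketch of a morph as a state machine: states hold
# visualisation properties used as animation keyframes, and a transition
# fires when a named signal reaches its threshold. Not Deimos syntax.
class Morph:
    def __init__(self, states):
        self.states = states            # name -> dict of visual properties
        self.transitions = []           # (src, dst, signal name, threshold)
        self.current = next(iter(states))

    def add_transition(self, src, dst, signal, threshold):
        self.transitions.append((src, dst, signal, threshold))

    def on_signal(self, signal, value):
        """Advance the machine when a matching signal crosses its threshold;
        return the target state's properties as the keyframe to animate to."""
        for src, dst, sig, threshold in self.transitions:
            if src == self.current and sig == signal and value >= threshold:
                self.current = dst
                return self.states[dst]
        return None

# Example: a grab gesture morphs a flat bar chart into an extruded one.
morph = Morph({"flat": {"depth": 0.0}, "extruded": {"depth": 1.0}})
morph.add_transition("flat", "extruded", "grab_strength", 0.8)
keyframe = morph.on_signal("grab_strength", 0.9)  # -> {"depth": 1.0}
```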
Tangible Globes for Data Visualisation in Augmented Reality
Head-mounted augmented reality (AR) displays allow for the seamless integration of virtual visualisation with contextual tangible references, such as physical (tangible) globes. We explore the design of immersive geospatial data visualisation with AR and tangible globes. We investigate the "tangible-virtual interplay" of tangible globes with virtual data visualisation, and propose a conceptual approach for designing immersive geospatial globes. We demonstrate a set of use cases, such as augmenting a tangible globe with virtual overlays, using a physical globe as a tangible input device for interacting with virtual globes and maps, and linking an augmented globe to an abstract data visualisation. We gathered qualitative feedback from experts about our use-case visualisations, and compiled a summary of key takeaways as well as ideas for envisioned future improvements. The proposed design space, example visualisations, and lessons learned aim to guide the design of tangible globes for data visualisation in AR.
2022 · Kadek Ananta Satriadi et al. · Monash University, University of South Australia · Geospatial & Map Visualization; Smart Cities & Urban Sensing · CHI
GAN'SDA Wrap: Geographic And Network Structured DAta on surfaces that Wrap around
There are many methods for projecting spherical maps onto the plane. Interactive versions of these projections allow the user to centre the region of interest. However, the effects of such interaction have not previously been evaluated. In a study with 120 participants, we find that interaction provides significantly more accurate area, direction, and distance estimation in such projections. The surfaces of 3D sphere and torus topologies provide continuous space for uninterrupted network layout, but how best to project spherical network layouts to 2D screens has not been studied, nor have such spherical network projections been compared to torus projections. Using the most successful interactive sphere projections from our first study, we compare spherical, standard, and toroidal layouts of networks for cluster and path-following tasks with 96 participants, finding benefits for both spherical and toroidal layouts over standard network layouts in terms of accuracy for cluster understanding tasks.
2022 · Kun-Ting Chen et al. · Monash University · Interactive Data Visualization; Geospatial & Map Visualization; Time-Series & Network Graph Visualization · CHI
A Design Space For Data Visualisation Transformations Between 2D And 3D In Mixed-Reality Environments
As mixed-reality (MR) technologies become more mainstream, the delineation between data visualisations displayed on screens or other surfaces and those floating in space becomes increasingly blurred. Rather than treating the choice between a 2D surface and the 3D space for visualising data as a dichotomy, we argue that users should have the freedom to transform visualisations seamlessly between the two as needed. However, the design space for such transformations is large and practically uncharted. To explore this, we first establish an overview of the different states that a data visualisation can take in MR, followed by how transformations between these states can facilitate common visualisation tasks. We then describe a design space of how these transformations function, in terms of the different stages throughout the transformation, and the user interactions and input parameters that affect it. This design space is then demonstrated with multiple exemplary techniques based in MR.
2022 · Benjamin Lee et al. · Monash University · Mixed Reality Workspaces; Interactive Data Visualization · CHI
It's a Wrap: Toroidal Wrapping of Network Visualisations Supports Cluster Understanding Tasks
We explore network visualisation on a two-dimensional torus topology that continuously wraps when the viewport is panned. That is, links may be "wrapped" across the boundary, allowing additional spreading of node positions to reduce visual clutter. Recent work has investigated such pannable wrapped visualisations, finding them no worse than unwrapped drawings of small networks for path-following tasks. However, that work did not evaluate larger networks, nor did it consider whether torus-based layout might also better display high-level network structure such as clusters. We offer two improvements to toroidal layout: a fully automatic layout algorithm, and automatic panning of the viewport to minimise wrapped links. The resulting layouts afford fewer crossings, less stress, and greater cluster separation. In a study with 32 participants comparing performance in cluster understanding tasks, we find that toroidal visualisation offers significant benefits over standard unwrapped visualisation, improving error by 62.7% and time by 32.3%.
2021 · Kun-Ting Chen et al. · Monash University · Time-Series & Network Graph Visualization; Visualization Perception & Cognition · CHI
Data as Delight: Eating data
The HCI community has a rich history of finding new ways to engage people with data beyond the screen. With our work, we aim to expand the scope of how interaction design can engage people, arguing that "eating data" has the potential to allow people to experience "data as delight". With reference to prior work and our design research findings, we discuss the advantages and the challenges of this approach to integrating data and food. We then identify four themes to guide the design of engagements with data through food: food form, food commensality, food ephemerality, and emotional response to food. Within these design themes, we articulate twelve insights for interaction designers to use when working on serving data as delight.
2021 · Florian Floyd Mueller et al. · Monash University · Food Culture & Food Interaction · CHI
Grand Challenges in Immersive Analytics
Immersive Analytics is a quickly evolving field that unites several areas, such as visualisation, immersive environments, and human-computer interaction, to support human data analysis with emerging technologies. This research has thrived over the past years, with multiple workshops, seminars, and a growing body of publications spanning several conferences. Given the rapid advancement of interaction technologies and novel application domains, this paper aims toward a broader research agenda to enable widespread adoption. We present 17 key research challenges developed over multiple sessions by a diverse group of 24 international experts, initiated from a virtual scientific workshop at ACM CHI 2020. These challenges aim to coordinate future work by providing a systematic roadmap of current directions and impending hurdles to facilitate productive and effective applications for Immersive Analytics.
2021 · Barrett Ens et al. · Monash University · Immersion & Presence Research; Interactive Data Visualization · CHI
DoughNets: Visualising Networks Using Torus Wrapping
We investigate visualisations of networks on a two-dimensional torus topology, like an opened-up and flattened doughnut. That is, the network is drawn on a rectangular area while "wrapping" specific links around the border. Previous work on torus drawings of networks has been mostly theoretical, limited to certain classes of networks, and not evaluated by human readability studies. We offer a simple interactive layout approach applicable to general graphs. We use this to find layouts affording better aesthetics in terms of conventional measures, such as more equal edge lengths and fewer crossings. In two controlled user studies we find that torus layout with either additional context or interactive panning provided significant performance improvements (in terms of error and time) over torus layout without either of these improvements, to the point that it is comparable to standard non-torus layout.
2020 · Kun-Ting Chen et al. · Monash University · Time-Series & Network Graph Visualization; Visualization Perception & Cognition · CHI
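The geometry underlying torus wrapping (here and in the related toroidal-layout papers above) is simple modular arithmetic: on each axis, a link may take either the direct span or the span that wraps across the border, whichever is shorter. A minimal sketch of that distance computation, written independently of any of the papers' implementations:

```python
def torus_delta(a, b, size):
    """Signed shortest offset from coordinate a to b on a wrapping axis
    of length `size`: the direct span or the wrapped span, whichever is
    shorter in magnitude."""
    d = (b - a) % size
    return d - size if d > size / 2 else d

def torus_distance(p, q, width, height):
    """Euclidean length of the shortest link between two nodes drawn on a
    2D torus of the given dimensions (links may wrap across the border)."""
    dx = torus_delta(p[0], q[0], width)
    dy = torus_delta(p[1], q[1], height)
    return (dx * dx + dy * dy) ** 0.5
```

For example, on a unit torus the nodes (0.1, 0.5) and (0.9, 0.5) are only 0.2 apart via a wrapped link, which is why wrapping lets layouts spread nodes further without stretching their edges.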
Embodied Axes: Tangible, Actuated Interaction for 3D Augmented Reality Data Spaces
We present Embodied Axes, a controller which supports selection operations for 3D imagery and data visualisations in Augmented Reality. The device is an embodied representation of a 3D data space: each of its three orthogonal arms corresponds to a data axis or domain-specific frame of reference. Each axis is composed of a pair of tangible, actuated range sliders for precise data selection, and rotary encoding knobs for additional parameter tuning or menu navigation. The motor-actuated sliders support alignment to positions of significant values within the data, or coordination with other input: e.g., mid-air gestures in the data space, touch gestures on the surface below the data, or another Embodied Axes device supporting multi-user scenarios. We conducted expert enquiries in medical imaging, which provided formative feedback on domain tasks and refinements to the design. Additionally, a controlled user study found that Embodied Axes was overall more accurate than conventional tracked controllers for selection tasks.
2020 · Maxime Cordeil et al. · Monash University · Mixed Reality Workspaces; Medical & Scientific Data Visualization · CHI
Scaptics and Highlight-Planes: Immersive Interaction Techniques for Finding Occluded Features in 3D Scatterplots
Three-dimensional scatterplots suffer from well-known perception and usability problems. In particular, overplotting and occlusion, mainly due to density and noise, prevent users from properly perceiving the data. Thanks to accurate head and hand tracking, immersive Virtual Reality (VR) setups provide new ways to interact and navigate with 3D scatterplots. VR also supports additional sensory modalities such as haptic feedback. Inspired by methods commonly used in Scientific Visualisation to visually explore volumes, we propose two techniques that leverage the immersive aspects of VR: first, a density-based haptic vibration technique (Scaptics) which provides feedback through the controller; and second, an adaptation of a cutting plane for 3D scatterplots (Highlight-Plane). We evaluated both techniques in a controlled study with two tasks involving density (finding high- and low-density areas). Overall, Scaptics was the most time-efficient and accurate technique; however, in some conditions it was outperformed by Highlight-Plane.
2019 · Arnaud Prouzeau et al. · Monash University · Mixed Reality Workspaces; Visualization Perception & Cognition · CHI
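A density-to-vibration mapping in the spirit of Scaptics can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the radius, the count cap, and the linear amplitude mapping are all assumptions made here for clarity:

```python
def local_density(points, probe, radius):
    """Count scatterplot points within `radius` of the controller position
    `probe` (both given as coordinate tuples of equal dimension)."""
    r2 = radius * radius
    return sum(
        1 for p in points
        if sum((pi - qi) ** 2 for pi, qi in zip(p, probe)) <= r2
    )

def vibration_amplitude(points, probe, radius=0.1, max_count=50):
    """Map the local point density around the controller to a haptic
    amplitude in [0, 1], saturating once `max_count` points are nearby."""
    return min(local_density(points, probe, radius), max_count) / max_count
```

Each frame, a VR application would feed the controller's tracked position into `vibration_amplitude` and drive the controller's haptics with the result, so denser regions of the scatterplot feel like stronger vibration.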