Exploring Using Personalised Comics for Healthcare Communication for Patients Living With Hemodialysis
Through co-design with patients undergoing hemodialysis and their healthcare professionals, we explored how to create a personalised, welcoming, yet quick and accurate method for communicating medical instructions. Working to reconcile the widely differing goals of patients and their healthcare professionals led us to design a personalisable method for creating comics. Through ongoing discussions during the comic creation process, we explored variations in comic styles and personalisation factors, such as choosing and modifying the appearance of the comic characters, the settings, the central topics, and the word usage. Interest in the approach was high among both patients and healthcare professionals, who gave rich feedback on the information to be included and on future directions for medical comic creation support. We reflect on lessons learned during co-design with healthcare professionals and patients.
2024 · Komal Waseem et al. · Special Education Technology · User Research Methods (Interviews, Surveys, Observation) · Prototyping & User Testing · DIS
MicroCam: Leveraging Smartphone Microscope Camera for Context-Aware Contact Surface Sensing
The primary focus of this research is the discreet and subtle everyday contact interactions between mobile phones and their surrounding surfaces. Such interactions are anticipated to facilitate mobile context awareness, encompassing aspects such as dispensing medication updates, intelligently switching modes (e.g., silent mode), or initiating commands (e.g., deactivating an alarm). We introduce MicroCam, a contact-based sensing system that employs smartphone IMU data to detect the routine state of phone placement and utilizes a built-in microscope camera to capture intricate surface details. In particular, a natural dataset is collected to acquire authentic surface textures in situ for training and testing. Moreover, we optimize the deep neural network component of the algorithm, based on continual learning, to accurately discriminate between object categories (e.g., tables) and material constituents (e.g., wood). Experimental results highlight the superior accuracy, robustness and generalization of the proposed method. Lastly, we conduct a comprehensive discussion centered on our prototype, encompassing topics such as system performance and potential applications and scenarios.
2023 · Yongquan Hu et al. · Context-Aware Computing · Ubiquitous Computing · UbiComp · https://doi.org/10.1145/3610921
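As a concrete illustration of the two-stage pipeline this abstract describes, here is a minimal sketch: an IMU variance check gates a small CNN texture classifier. All thresholds, layer sizes and class labels are assumptions for illustration, not MicroCam's actual implementation.

```python
# Minimal sketch of a MicroCam-style two-stage pipeline (illustrative only):
# stage 1 detects a stable "placed" state from IMU data, stage 2 classifies
# the surface texture from a microscope frame. Thresholds, model size and
# class labels are assumptions, not the paper's implementation.
import numpy as np
import torch
import torch.nn as nn

def is_placed(accel_window: np.ndarray, var_threshold: float = 0.02) -> bool:
    """Treat the phone as resting on a surface when accelerometer
    variance over a short window falls below a threshold."""
    return accel_window.var(axis=0).sum() < var_threshold

class TextureNet(nn.Module):
    """Tiny CNN standing in for the paper's continual-learning classifier."""
    def __init__(self, n_classes: int = 4):  # e.g. wood, glass, fabric, metal
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# Only run the (relatively expensive) camera stage when the IMU stage
# says the phone is stationary on a surface.
accel = np.random.randn(50, 3) * 0.01   # stand-in for a real IMU window
if is_placed(accel):
    frame = torch.rand(1, 3, 64, 64)    # stand-in for a microscope frame
    logits = TextureNet()(frame)
    print("predicted surface class:", logits.argmax(dim=1).item())
```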
Designing Blended Experiences: Laugh Traders
Digital transformation is increasingly blurring the line between what is software and what is the world, requiring designers to harmoniously blend digital and physical products, services and spaces if they want to orchestrate meaningful experiences aimed specifically at the interweaving relationships between people, places and things. Traditional approaches to product design, interaction design, and user experience design do not often take this new context into account. The pictorial details the results of a twelve-day workshop focusing on real-world audience and performer problems during the Edinburgh Festival Fringe, and illustrates how two distinct tools can be used to address this gap: the Blended Experiences Tool and the Evaluation Tool, which provide a structured way to approach the generative and reflective stages, respectively, of designing a blended experience. The pictorial lays out the theoretical framing supporting the Blended Experiences Tool; describes how the workshop produced Laugh Traders, a speculative experience centering on attending and reviewing comedy shows; provides a page-by-page pictorial storyboard of the Laugh Traders experience; and introduces the Evaluation Tool, applying it to Laugh Traders to measure the relevance, complexity, and attractiveness of the resulting blended experience. Preliminary reflections conclude the pictorial.
2023 · Brian J Okeefe et al. · Design Fiction · Digital Art Installations & Interactive Performance · Interactive Narrative & Immersive Storytelling · C&C
Prototyping Things: Reflecting on Unreported Objects of Design Research for IoT
Prototypes and other ‘things’ have had many uses in HCI research—as a way to understand a problem, a stepping stone towards a solution, or a final outcome of a research process. However, within the messy context of a research through design project, many of these roles do not form part of the final research narratives, restricting the ability of other researchers to learn from this practice. In this paper we revisit prototypes used in three different design research projects, conducted over a period when the Internet of Things emerged into everyday life, exploring complex hidden relationships between the internet, people and physical objects. We aim to explore the unreported roles that prototypes played in these projects, including brokering relationships with participants and deconstructing opaque technologies. We reflect on how these roles align with existing understandings of prototypes in HCI, with particular attention to how these roles can contribute to design around IoT.
2021 · Nick Taylor et al. · Context-Aware Computing · Ubiquitous Computing · Prototyping & User Testing · DIS
Fighting Fires and Powering Steam Locomotives: Distribution of Control and Its Role in Social Interaction at Tangible Interactive Museum Exhibits
We present a video-analysis study of museum visitors' interactions at two tangible interactive exhibits in a transport museum. Our focus is on groups’ social and shared interactions, in particular how exhibit setup and structure influence collaboration patterns. Behaviors at the exhibits included individuals focusing beyond their personal activity towards companions’ interaction, adults participating via physical interaction, and visitors taking opportunities to interact when companions moved between sections of the exhibit or stepped back from interaction. We demonstrate how exhibits’ physical configuration and interactive control engendered behavioral patterns. Systematic analysis reveals how different configurations (concerning physical-spatial hardware and interactive software) distribute control differently amongst visitors. We present four mechanisms for how control can be distributed at an interactive installation: functional, temporal, physical and indirect verbal. In summary, our work explores how mechanisms that distribute control influence patterns of shared interaction with the exhibits and social interaction between museum visitor companions.
2021 · Loraine Clarke et al. · University of St Andrews · Digital Art Installations & Interactive Performance · Museum & Cultural Heritage Digitization · CHI
Back-Hand-Pose: 3D Hand Pose Estimation for a Wrist-worn Camera via Dorsum Deformation Network
The automatic recognition of how people use their hands and fingers in natural settings – without instrumenting the fingers – can be useful for many mobile computing applications. To achieve such an interface, we propose a vision-based 3D hand pose estimation framework using a wrist-worn camera. The main challenge is the oblique angle of the wrist-worn camera, which makes the fingers scarcely visible. To address this, a special network that observes deformations on the back of the hand is required. We introduce DorsalNet, a two-stream convolutional neural network to regress finger joint angles from spatio-temporal features of the dorsal hand region (the movement of bones, muscle, and tendons). This work is the first vision-based real-time 3D hand pose estimator using visual features from the dorsal hand region. Our system achieves a mean joint-angle error of 8.81° for user-specific models and 9.77° for a general model. Further evaluation shows that our system outperforms previous work with an average of 20% higher accuracy in recognizing dynamic gestures, and achieves a 75% accuracy of detecting 11 different grasp types. We also demonstrate 3 applications which employ our system as a control device, an input device, and a grasped object recognizer.
2020 · Erwin Wu et al. · Hand Gesture Recognition · Human Pose & Activity Recognition · UIST
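A hedged sketch of the two-stream idea the abstract describes: one CNN stream sees the dorsal-hand frame, a second sees a motion cue (frame differences stand in for whatever spatio-temporal features DorsalNet actually uses), and the fused features regress finger joint angles. Layer sizes and the 20-angle output are illustrative assumptions, not the paper's architecture.

```python
# Illustrative two-stream regressor in the spirit of DorsalNet: a spatial
# stream on the dorsal-hand image plus a temporal stream on frame
# differences, fused to regress finger joint angles. All sizes are made up.
import torch
import torch.nn as nn

def conv_stream(in_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class TwoStreamAngleRegressor(nn.Module):
    def __init__(self, n_angles: int = 20):  # assumed number of joint angles
        super().__init__()
        self.spatial = conv_stream(1)    # grayscale dorsal-hand frame
        self.temporal = conv_stream(1)   # frame difference as a motion cue
        self.head = nn.Linear(64, n_angles)

    def forward(self, frame_t, frame_prev):
        motion = frame_t - frame_prev
        fused = torch.cat([self.spatial(frame_t), self.temporal(motion)], dim=1)
        return self.head(fused)          # predicted joint angles in degrees

angles = TwoStreamAngleRegressor()(torch.rand(1, 1, 96, 96),
                                   torch.rand(1, 1, 96, 96))
print(angles.shape)  # torch.Size([1, 20])
```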
Lexichrome: Text Construction and Lexical Discovery with Word-Color Associations Using Interactive Visualization
Based on word-color associations from a comprehensive, crowdsourced lexicon, we present Lexichrome: a web application that explores the popular perception of relationships between English words and eleven basic color terms using interactive visualization. Lexichrome provides three complementary visualizations: "Palette" presents the diversity of word-color associations across the color palette; "Words" reveals the color associations of individual words using a dictionary-like interface; "Roget's Thesaurus" uncovers color association patterns in different semantic categories found in the thesaurus. Finally, our text editor allows users to compose their own texts and examine the resultant chromatic fingerprints throughout the process. We studied the utility of Lexichrome in a two-part qualitative user study with nine participants from various writing-intensive professions. We find that the presence of word-color associations promotes awareness surrounding word choice, editorial decision, and audience reception, and introduce a variety of use cases, features, and opportunities applicable to creative writing, corporate communication, and journalism.
2020 · Chris Kim et al. · Interactive Data Visualization · Data Storytelling · DIS
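To make the "chromatic fingerprint" idea concrete, here is a sketch under stated assumptions: each word maps to weights over the eleven basic color terms, and a text's fingerprint is the normalized sum of those weights. The three-word lexicon below is invented; the paper uses a comprehensive crowdsourced word-color lexicon.

```python
# Sketch of a chromatic fingerprint: sum per-word color-association weights
# over a text and normalize. The tiny lexicon is a made-up illustration.
from collections import Counter

COLOR_TERMS = ["black", "white", "red", "green", "yellow", "blue",
               "brown", "orange", "pink", "purple", "grey"]

LEXICON = {  # hypothetical association weights: word -> {color: strength}
    "grass": {"green": 0.9, "yellow": 0.1},
    "blood": {"red": 1.0},
    "night": {"black": 0.8, "blue": 0.2},
}

def chromatic_fingerprint(text: str) -> dict:
    totals = Counter()
    for word in text.lower().split():
        for color, weight in LEXICON.get(word, {}).items():
            totals[color] += weight
    norm = sum(totals.values()) or 1.0
    return {c: totals[c] / norm for c in COLOR_TERMS if totals[c] > 0}

print(chromatic_fingerprint("blood on the grass at night"))
# fractions over black, red, green, yellow and blue that sum to 1.0
```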
Dynamic Network Plaid: A Tool for the Analysis of Dynamic Networks
Network data that changes over time can be very useful for studying a wide range of important phenomena, from how social network connections change to epidemiology. However, it is challenging to analyze, especially if it has many actors or connections, or if the covered timespan is large with rapidly changing links (e.g., months of data with changes at second resolution). In these analyses one would often like to compare many periods of time to others without having to look at the full timeline. To support this kind of analysis we designed and implemented a technique and system to visualize this dynamic data. The Dynamic Network Plaid (DNP) is designed for large displays and based on user-generated interactive timeslicing of the dynamic graph attributes and on linked provenance-preserving representations. We present the technique and interface, along with its design and evaluation with a group of public health researchers investigating non-suicidal self-harm picture sharing on Instagram.
2019 · Alexandra Lee et al. · Swansea University · Time-Series & Network Graph Visualization · Visualization Perception & Cognition · CHI
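The core data operation behind interactive timeslicing is aggregating a timestamped edge stream into user-chosen slices so arbitrary periods can be compared without scanning the full timeline. A minimal sketch of that operation, with illustrative data and slice boundaries:

```python
# Aggregate a timestamped edge list into time slices so that two arbitrary
# periods can be compared side by side, as the Plaid's linked views allow.
# Events and slice boundaries are illustrative.
import networkx as nx

# (source, target, timestamp-in-seconds) events
events = [("a", "b", 10), ("b", "c", 70), ("a", "c", 75), ("a", "b", 200)]

def slice_graph(events, start, end):
    """Build the aggregate weighted graph for one time slice [start, end)."""
    g = nx.Graph()
    for u, v, t in events:
        if start <= t < end:
            w = g[u][v]["weight"] + 1 if g.has_edge(u, v) else 1
            g.add_edge(u, v, weight=w)
    return g

early, late = slice_graph(events, 0, 100), slice_graph(events, 100, 300)
print(sorted(early.edges(data="weight")), sorted(late.edges(data="weight")))
```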
Opisthenar: Hand Poses and Finger Tapping Recognition by Observing Back of Hand Using Embedded Wrist Camera
We introduce a vision-based technique to recognize hand poses and gestures by simply observing changes on the back of the hand. Our approach employs a camera on the wrist, which we envisage can be included in a wrist-worn device such as a smartwatch, fitness tracker or wristband. However, in this configuration the fingers are occluded from the view of the camera. The oblique angle and placement of the camera make typical vision-based techniques difficult to adopt. Our alternative approach observes small changes and movements in the shape, tendons, skin and bone on the back of the hand. We use a deep neural network to train and recognize both static hand poses and dynamic gestures. While this is a challenging configuration for sensing, we tested the recognition with a real-time user test and achieved high recognition rates of 89.4% (static) and 67.5% (dynamic). Our results further demonstrate that our approach can generalize across sessions and to new users. Namely, users can remove and replace the wrist-worn device, while new users can employ a previously trained system, to a certain extent. This form of sensing affords a range of new interaction capabilities, from one-handed or subtle inputs to eyes-free and orientation-invariant interactions.
2019 · Hui-Shyong Yeo et al. · Foot & Wrist Interaction · Eye Tracking & Gaze Interaction · UIST
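The cross-session and cross-user claims here are typically tested with a leave-one-user-out protocol: train on all users but one, test on the held-out user, and average. A sketch below, assuming back-of-hand feature vectors have already been extracted; the features, labels, user groups and classifier are synthetic stand-ins, not the paper's data or model.

```python
# Leave-one-user-out evaluation: a standard way to test whether a wearable
# recognizer generalizes to new users, as the abstract claims.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 16))        # stand-in per-frame back-of-hand features
y = rng.integers(0, 5, size=120)      # 5 static hand pose classes
users = np.repeat(np.arange(6), 20)   # 6 users, 20 samples each

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=users):
    clf = SVC().fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))
print(f"mean leave-one-user-out accuracy: {np.mean(scores):.2f}")
```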
Quantitative Measurement of Tool Embodiment for Virtual Reality Input Alternatives
Virtual reality (VR) strives to replicate the sensation of the physical environment by mimicking people's perceptions and experience of being elsewhere. These experiences are often mediated by the objects and tools we interact with in the virtual world (e.g., a controller). Evidence from psychology posits that when using the tool proficiently, it becomes embodied (i.e., an extension of one's body). There is little work, however, on how to measure this phenomenon in VR, and on how different types of tools and controllers can affect the experience of interaction. In this work, we leverage cognitive psychology and philosophy literature to construct the Locus-of-Attention Index (LAI), a measure of tool embodiment. We designed and conducted a study that measures readiness-to-hand and unreadiness-to-hand for three VR interaction techniques: hands, a physical tool, and a VR controller. The study shows that LAI can measure differences in embodiment with working and broken tools and that using the hand directly results in more embodiment than using controllers.
2019 · Ayman Alzayat et al. · University of Waterloo · Full-Body Interaction & Embodied Input · Immersion & Presence Research · CHI
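The abstract does not spell out how LAI is computed, so the snippet below is purely a hypothetical illustration of the underlying intuition, not the paper's definition: embodiment is high when attention sits on the task (readiness-to-hand) rather than on the tool (unreadiness-to-hand).

```python
# Hypothetical attention-balance index, illustrating the readiness-to-hand
# intuition only; this is NOT the paper's actual LAI formula.
def attention_index(task_attention: float, tool_attention: float) -> float:
    """Normalized index in [-1, 1]: +1 = fully task-focused (tool embodied),
    -1 = fully tool-focused (tool conspicuous)."""
    total = task_attention + tool_attention
    return (task_attention - tool_attention) / total if total else 0.0

print(attention_index(0.9, 0.1))   # working tool: 0.8, strongly embodied
print(attention_index(0.3, 0.7))   # broken tool: -0.4, tool draws attention
```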
A Comparison of Notification Techniques for Out-of-View Objects in Full-Coverage Displays
Full-coverage displays can place visual content anywhere on the interior surfaces of a room (e.g., a weather display near the coat stand). In these settings, digital artefacts can be located behind the user and out of their field of view - meaning that it can be difficult to notify the user when these artefacts need attention. Although much research has been carried out on notification, little is known about how best to direct people to the necessary location in room environments. We designed five diverse attention-guiding techniques for full-coverage display rooms, and evaluated them in a study where participants completed search tasks guided by the different techniques. Our study provides new results about notification in full-coverage displays: we showed benefits of persistent visualisations that could be followed all the way to the target and that indicate distance-to-target. Our findings provide useful information for improving the usability of interactive full-coverage environments.
2019 · Julian Petford et al. · University of St Andrews · Notification & Interruption Management · CHI
RotoSwype: Word-Gesture Typing using a Ring
We propose RotoSwype, a technique for word-gesture typing using the orientation of a ring worn on the index finger. RotoSwype enables one-handed text-input without encumbering the hand with a device, a desirable quality in many scenarios, including virtual or augmented reality. The method is evaluated using two arm positions: with the hand raised up with the palm parallel to the ground; and with the hand resting at the side with the palm facing the body. A five-day study finds both hand positions achieved speeds of at least 14 words-per-minute (WPM) with uncorrected error rates near 1%, outperforming previous comparable techniques.
2019 · Aakar Gupta et al. · University of Waterloo · Foot & Wrist Interaction · Voice User Interface (VUI) Design · CHI
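The core mapping a technique like this needs is from ring orientation to a 2D cursor over the keyboard; the cursor trace then becomes the word-gesture path for a decoder. A sketch under stated assumptions: the yaw/pitch ranges, keyboard dimensions and clamping below are illustrative, not the paper's calibration.

```python
# Map ring yaw/pitch (degrees, from an IMU) onto keyboard coordinates.
# Ranges and gains are illustrative assumptions.
def orientation_to_cursor(yaw_deg, pitch_deg, width=10.0, height=3.0,
                          yaw_range=40.0, pitch_range=30.0):
    """Map a wrist-comfortable yaw/pitch range onto a width x height keyboard."""
    x = (yaw_deg / yaw_range + 0.5) * width       # left-right rotation
    y = (pitch_deg / pitch_range + 0.5) * height  # up-down tilt
    return max(0.0, min(width, x)), max(0.0, min(height, y))

# A gesture trace is the cursor sampled over time; a word-gesture decoder
# would then match this trace against per-word path templates.
trace = [orientation_to_cursor(yaw, pitch)
         for yaw, pitch in [(-15, 10), (-5, 2), (8, -6), (18, -12)]]
print(trace)
```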
AdaM: Adapting Multi-User Interfaces for Collaborative Environments in Real-Time
Developing cross-device multi-user interfaces (UIs) is a challenging problem. There are numerous ways in which content and interactivity can be distributed. However, good solutions must consider multiple users, their roles, their preferences and access rights, as well as device capabilities. Manual and rule-based solutions are tedious to create and do not scale to larger problems, nor do they adapt to dynamic changes, such as users leaving or joining an activity. In this paper, we cast the problem of UI distribution as an assignment problem and propose to solve it using combinatorial optimization. We present a mixed integer programming formulation which allows real-time applications in dynamically changing collaborative settings. It optimizes the allocation of UI elements based on device capabilities, user roles, preferences, and access rights. We present a proof-of-concept designer-in-the-loop tool, allowing for quick solution exploration. Finally, we compare our approach to traditional paper prototyping in a lab study.
2018 · Seonwook Park et al. · ETH Zurich · Mixed Reality Workspaces · Creative Collaboration & Feedback Systems · CHI
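The abstract explicitly casts UI distribution as an assignment problem solved with mixed integer programming. Below is a toy version of that casting using the open-source PuLP solver: the element names, quality scores and single access-right constraint are made up, and the paper's real formulation (roles, preferences, device capabilities, real-time updates) is much richer.

```python
# Toy MIP: assign each UI element to exactly one device, maximizing a
# quality score subject to access rights. Illustrative data throughout.
import pulp

elements = ["map", "chat", "controls"]
devices = ["phone_alice", "tablet_bob", "wall_display"]
quality = {  # hypothetical fit of element e on device d
    ("map", "wall_display"): 5, ("map", "tablet_bob"): 3, ("map", "phone_alice"): 1,
    ("chat", "phone_alice"): 4, ("chat", "tablet_bob"): 2, ("chat", "wall_display"): 1,
    ("controls", "tablet_bob"): 5, ("controls", "phone_alice"): 3, ("controls", "wall_display"): 2,
}
allowed = {(e, d): True for e in elements for d in devices}
allowed[("controls", "wall_display")] = False  # access right: no public controls

prob = pulp.LpProblem("ui_distribution", pulp.LpMaximize)
x = {(e, d): pulp.LpVariable(f"x_{e}_{d}", cat="Binary")
     for e in elements for d in devices}
prob += pulp.lpSum(quality[e, d] * x[e, d] for e in elements for d in devices)
for e in elements:                        # every element shown exactly once
    prob += pulp.lpSum(x[e, d] for d in devices) == 1
for e in elements:
    for d in devices:
        if not allowed[e, d]:             # respect access rights
            prob += x[e, d] == 0
prob.solve(pulp.PULP_CBC_CMD(msg=0))
print({e: next(d for d in devices if x[e, d].value() == 1) for e in elements})
```

Recomputing this assignment whenever a user joins or leaves is what makes the optimization framing attractive for dynamic collaborative settings.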
Considering Agency and Data Granularity in the Design of Visualization Tools
Previous research has identified trade-offs when it comes to designing visualization tools. While constructive "bottom-up" tools promote a hands-on, user-driven design process that enables a deep understanding and control of the visual mapping, automated tools are more efficient and allow people to rapidly explore complex alternative designs, often at the cost of transparency. We investigate how to design visualization tools that support a user-driven, transparent design process while enabling efficiency and automation, through a series of design workshops that looked at how both visualization experts and novices approach this problem. Participants produced a variety of solutions that range from example-based approaches expanding constructive visualization to solutions in which the visualization tool infers solutions on behalf of the designer, e.g., based on data attributes. On a higher level, these findings highlight agency and granularity as dimensions that can guide the design of visualization tools in this space.
2018 · Gonzalo Gabriel Méndez et al. · University of St Andrews, Escuela Superior Politécnica del Litoral · Explainable AI (XAI) · AI-Assisted Decision-Making & Automation · Interactive Data Visualization · CHI
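One end of the agency spectrum described here is a tool that infers parts of the visual mapping from data attributes on the designer's behalf. A minimal sketch of that idea under stated assumptions: the inference rules below are illustrative defaults, not taken from the paper.

```python
# Illustrative "system-agency" inference: pick a default visual encoding
# for a data column from its type and cardinality. Rules are made up.
def suggest_encoding(column_name, values):
    """Return a plausible default visual encoding for one data column."""
    if all(isinstance(v, (int, float)) for v in values):
        return {"column": column_name, "encoding": "position", "scale": "linear"}
    if len(set(values)) <= 10:
        return {"column": column_name, "encoding": "color", "scale": "categorical"}
    return {"column": column_name, "encoding": "text label"}

print(suggest_encoding("price", [3.5, 7.2, 1.0]))
print(suggest_encoding("region", ["north", "south", "north"]))
```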
Pointing All Around You: Selection Performance of Mouse and Ray-Cast Pointing in Full-Coverage Displays
As display environments become larger and more diverse - now often encompassing multiple walls and room surfaces - it is becoming more common that users must find and manipulate digital artifacts not directly in front of them. There is little understanding, however, about what techniques and devices are best for carrying out basic operations above, behind, or to the side of the user. We conducted an empirical study comparing two main techniques that are suitable for full-coverage display environments: mouse-based pointing, and ray-cast 'laser' pointing. Participants completed search and pointing tasks on the walls and ceiling, and we measured completion time, path lengths and perceived effort. Our study showed a strong interaction between performance and target location: when the target position was not known a priori the mouse was fastest for targets on the front wall, but ray-casting was faster for targets behind the user. Our findings provide new empirical evidence that can help designers choose pointing techniques for full-coverage spaces.
2018 · Julian Petford et al. · University of St Andrews · Knowledge Worker Tools & Workflows · Notification & Interruption Management · CHI