Pixelated Interactions: Exploring Pixel Art for Graphical Primitives on a Tactile Display
Two-dimensional pin array displays enable access to tactile graphics that are important for the education of students with visual impairments. Because these displays are prohibitively expensive and difficult to access, there is limited research on them within HCI, and the rules for designing graphics on such low-resolution tactile displays are unclear. In this paper, eight tactile readers with visual impairments qualitatively evaluate the use of Pixel Art to create tactile graphical primitives on a pin array display, where every pin of the array is treated as a pixel on a pixel grid. Our findings suggest that Pixel Art tactile graphics on a pin array are clear and comprehensible to tactile readers, confirming the approach's suitability for designing basic tactile shapes and line segments. Pixel Art guidelines provide a consistent framework for creating tactile media, which suggests they can also be used to downsize basic shapes for refreshable pin-array displays.
2023 · Tigmanshu Bhatnagar et al. · Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Data Physicalization · DIS

Escapement: A Tool for Interactive Prototyping with Video via Sensor-Mediated Abstraction of Time
We present Escapement, a video prototyping tool that introduces a powerful new concept for prototyping screen-based interfaces by flexibly mapping sensor values to dynamic playback control of videos. This recasts the time dimension of video mock-ups as sensor-mediated interaction. This abstraction of time as interaction, which we dub video-escapement prototyping, empowers designers to rapidly explore and viscerally experience direct touch or sensor-mediated interactions across one or more device displays. Our system affords cross-device and bidirectional remote (tele-present) experiences via cloud-based state sharing across multiple devices. This makes Escapement especially potent for exploring multi-device, dual-screen, or remote-work interactions for screen-based applications. We introduce the core concept of sensor-mediated abstraction of time for quickly generating video-based interactive prototypes of screen-based applications, share the results of observations of long-term usage of video-escapement techniques with experienced interaction designers, and articulate design choices for supporting a reflective, iterative, and open-ended creative design process.
2023 · Molly Jane Nicholas et al. · UC Berkeley · Teleoperation & Telepresence; Prototyping & User Testing · CHI

AdHocProx: Sensing Mobile, Ad-Hoc Collaborative Device Formations using Dual Ultra-Wideband Radios
We present AdHocProx, a system that uses device-relative, inside-out sensing to augment co-located collaboration across multiple devices, without recourse to externally-anchored beacons – or even reliance on WiFi connectivity. AdHocProx achieves this via sensors including dual ultra-wideband (UWB) radios for sensing distance and angle to other devices in dynamic, ad-hoc arrangements, plus capacitive grip sensing to determine where the user's hands hold the device and to partially correct for the resulting UWB signal attenuation. All spatial sensing and communication takes place via the side-channel capability of the UWB radios, suitable for small-group collaboration across up to four devices (eight UWB radios). Together, these sensors detect proximity and natural, socially meaningful device movements to enable contextual interaction techniques. We find that AdHocProx can obtain 95% accuracy recognizing various ad-hoc device arrangements in an offline evaluation, with participants particularly appreciative of interaction techniques that automatically leverage proximity-awareness and relative orientation amongst multiple devices.
2023 · Richard Li et al. · University of Washington · Context-Aware Computing; Ubiquitous Computing · CHI

Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces
This paper contributes a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, research often remains focused on individual explorations, and key design strategies and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field along the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.
2022 · Ryo Suzuki et al. · University of Calgary · AR Navigation & Context Awareness; Social Robot Interaction; Human-Robot Collaboration (HRC) · CHI

Understanding Multi-Device Usage Patterns: Physical Device Configurations and Fragmented Workflows
To better ground technical (systems) investigation and interaction design of cross-device experiences, we contribute an in-depth survey of existing multi-device practices, including fragmented workflows across devices and the way people physically organize and configure their workspaces to support such activity. Further, this survey documents a historically significant moment of transition to a new future of remote work, an existing trend dramatically accelerated by the abrupt switch to work-from-home (and having to contend with the demands of home-at-work) during the COVID-19 pandemic. We surveyed 97 participants, and collected photographs of home setups and open-ended answers to 50 questions categorized in 5 themes. We characterize the wide range of multi-device physical configurations and identify five usage patterns, including: partitioning tasks, integrating multi-device usage, cloning tasks to other devices, expanding tasks and inputs to multiple devices, and migrating between devices. Our analysis also sheds light on the benefits and challenges people face when their workflow is fragmented across multiple devices. These insights have implications for the design of multi-device experiences that support people's fragmented workflows.
2022 · Ye Yuan et al. · Microsoft Research, University of Minnesota · Remote Work Tools & Experience; Distributed Team Collaboration; Notification & Interruption Management · CHI

Machine Body Language: Expressing a Smart Speaker's Activity with Intelligible Physical Motion
People's physical movement and body language implicitly convey what they think and feel, what they are doing, and what they are about to do. In contrast, current smart speakers miss out on this richness of body language, relying primarily on voice commands alone. We present QUBI, a dynamic smart speaker that leverages expressive physical motion – stretching, nodding, turning, shrugging, wiggling, pointing and leaning forwards/backwards – to convey cues about its underlying behaviour and activities. We conducted a qualitative Wizard of Oz lab study in which 12 participants interacted with QUBI in four scripted scenarios. From our study, we distilled six themes: (1) mirroring and mimicking motions; (2) body language to supplement voice instructions; (3) anthropomorphism and personality; (4) audio can trump motion; (5) reaffirming uncertain interpretations to support mutual understanding; and (6) emotional reactions to QUBI's behaviour. From this, we discuss design implications for future smart speakers.
2021 · Mirzel Avdic et al. · Agent Personality & Anthropomorphism · DIS

DataMoves: Entangling Data and Movement to Support Computer Science Education
In the domain of computing education for children, much work has been done to devise creative and engaging methods of teaching about programming. However, many other fundamental aspects of computing have so far received relatively little attention. This work explores how the topics of number systems and data representation can be taught in a way that piques curiosity and captures learners' imaginations. Specifically, we present the design of two interactive physical computing artefacts, which we collectively call DataMoves, that enable 12–14 year old students to explore number systems and data through embodied movement and dance. Our evaluation of DataMoves, used in tandem with other pedagogical methods, demonstrates that the form of embodied, exploration-based learning adopted has much potential for deepening students' understanding of computing topics, as well as for shaping positive perceptions of topics that are traditionally considered boring and dull.
2021 · Justas Brazauskas et al. · Programming Education & Computational Thinking; STEM Education & Science Communication · DIS

EvalMe: Exploring the Value of New Technologies for In Situ Evaluation of Learning Experiences
Tangible interfaces have much potential for engendering shared interaction and reflection, as well as for promoting playful experiences. How can their properties be capitalised on to enable students to reflect on their learning, both individually and together, throughout learning sessions? This Research through Design paper describes our development of EvalMe, a flexible, tangible tool aimed at being playful, enjoyable to use and enabling children to reflect on their learning, both in the moment and after a learning session has ended. We discuss the insights gained through the process of designing EvalMe, co-defining its functionality with two groups of collaborators and deploying it in two workshop settings. Through this process, we map key contextual considerations for the design of technologies for in situ evaluation of learning experiences. Finally, we discuss how tangible evaluation technologies deployed throughout a learning session can positively contribute to students' reflection about their learning.
2021 · Susan Lechelt et al. · University of Edinburgh · K-12 Digital Education Tools; Collaborative Learning & Peer Teaching · CHI

Sketchnote Components, Design Space Dimensions, and Strategies for Effective Visual Note Taking
Sketchnoting is a form of visual note taking where people listen to, synthesize, and visualize ideas from a talk or other event using a combination of pictures, diagrams, and text. Little is known about the design space of this kind of visual note taking. With an eye towards informing the implementation of digital equivalents of sketchnoting, inking, and note taking, we introduce a classification of sketchnote styles and techniques, based on a qualitative analysis of 103 sketchnotes and situated in context through six semi-structured follow-up interviews. Our findings distill core sketchnote components (content, layout, structuring elements, and visual styling) and dimensions of the sketchnote design space, classifying levels of conciseness, illustration, structure, personification, cohesion, and craftsmanship. We unpack strategies to address particular note-taking challenges, for example dealing with the constraints of live drawing, and discuss relevance for future digital inking tools, such as recomposition, styling, and design suggestions.
2021 · Rebecca Zheng et al. · University College London, Mumbli · Interactive Data Visualization; Data Storytelling; User Research Methods (Interviews, Surveys, Observation) · CHI

Interaction Illustration Taxonomy: Classification of Styles and Techniques for Visually Representing Interaction Scenarios
Static illustrations are a ubiquitous means of representing interaction scenarios. Across papers and reports, these visuals demonstrate people's use of devices, explain systems, or show design spaces. Creating such figures is challenging, and very little is known about the overarching strategies for visually representing interaction scenarios. To ease this task, we contribute a unified taxonomy of the design elements that compose such figures. In particular, we provide a detailed classification of Structural and Interaction strategies, such as composition, visual techniques, dynamics, representation of users, and many others – all in the context of the type of scenario. This taxonomy can inform researchers' choices when creating new figures by providing a concise synthesis of visual strategies and revealing approaches they were not aware of before. Furthermore, to support the community in creating further taxonomies, we also provide three open-source software tools that facilitate the coding process and visual exploration of the coding scheme.
2021 · Axel Antoine et al. · Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL · Interactive Data Visualization; Computational Methods in HCI · CHI

ToonNote: Improving Communication in Computational Notebooks Using Interactive Data Comics
Computational notebooks help data analysts analyze and visualize datasets, and share analysis procedures and outputs. However, notebooks typically combine code (e.g., Python scripts), notes, and outputs (e.g., tables, graphs). The combination of disparate materials is known to hinder the comprehension of notebooks, making it difficult for analysts to collaborate with other analysts unfamiliar with the dataset. To mitigate this problem, we introduce ToonNote, a JupyterLab extension that enables the conversion of notebooks into "data comics." ToonNote provides a simplified view of a Jupyter notebook, highlighting the most important results while supporting interactive and free exploration of the dataset. This paper presents the results of a formative study that motivated the system, its implementation, and an evaluation with 12 users, demonstrating the effectiveness of the produced comics. We discuss how our findings inform the future design of interfaces for computational notebooks and features to support diverse collaborators.
2021 · DaYe Kang et al. · KAIST · Interactive Data Visualization; Data Storytelling · CHI

Live Sketchnoting Across Platforms: Exploring the Potential and Limitations of Analogue and Digital Tools
Sketchnoting is the process of creating a visual record with combined text and imagery of an event or presentation. Although analogue tools are still the most common method for sketchnoting, the use of digital tools is increasing. We conducted a study to better understand the current practices, techniques, compromises and opportunities of creating both pen-and-paper and digital sketchnotes. Our research combines insights from semi-structured interviews with the findings from a within-subjects observational study where ten participants created real-time sketchnotes of two video presentations on both paper and digital tablet. We report our key findings, categorised into six themes: insights into sense of space; trade-offs with flexibility; choice paradox and cognitive load; matters of perception, accuracy and texture; issues around confidence; and practicalities. We discuss those findings, the potential and limitations of different methods, and implications for the design of future digital sketchnoting tools.
2020 · Marina Fernández Camporro et al. · University College London · Interactive Data Visualization; User Research Methods (Interviews, Surveys, Observation); Prototyping & User Testing · CHI

GazeConduits: Calibration-Free Cross-Device Collaboration through Gaze and Touch
We present GazeConduits, a calibration-free ad-hoc mobile interaction concept that enables users to collaboratively interact with tablets, other users, and content in a cross-device setting using gaze and touch input. GazeConduits leverages recently introduced smartphone capabilities to detect facial features and estimate users' gaze directions. To join a collaborative setting, users place one or more tablets onto a shared table and position their phone in the center, which then tracks users present as well as their gaze direction to determine the tablets they look at. We present a series of techniques using GazeConduits for collaborative interaction across mobile devices for content selection and manipulation. Our evaluation with 20 simultaneous tablets on a table shows that GazeConduits can reliably identify which tablet or collaborator a user is looking at.
2020 · Simon Voelker et al. · RWTH Aachen University · Eye Tracking & Gaze Interaction; Knowledge Worker Tools & Workflows · CHI

Cross-Device Taxonomy: Survey, Opportunities and Challenges of Interactions Spanning Across Multiple Devices
Designing interfaces or applications that move beyond the bounds of a single device screen enables new ways to engage with digital content. Research addressing the opportunities and challenges of interactions with multiple devices in concert is of continued focus in HCI research. To inform the future research agenda of this field, we contribute an analysis and taxonomy of a corpus of 510 papers in the cross-device computing domain. For both new and experienced researchers in the field we provide: an overview, historic trends and unified terminology of cross-device research; discussion of major and under-explored application areas; mapping of enabling technologies; synthesis of key interaction techniques spanning across multiple devices; and review of common evaluation strategies. We close with a discussion of open issues. Our taxonomy aims to create a unified terminology and common understanding for researchers in order to facilitate and stimulate future cross-device research.
2019 · Frederik Brudy et al. · University College London · Knowledge Management & Team Awareness; Context-Aware Computing; Ubiquitous Computing · CHI

Applied Sketching in HCI: Hands-on Course of Sketching Techniques
Hand-drawn sketches are an easy way for researchers to communicate and express ideas, as well as document, explore and describe concepts between researcher, user, or client. Sketches are fast, easy to create, and – by varying their fidelity – they can be used in all areas of HCI. The Applied Sketching in HCI course will explore and demonstrate themes around sketching in HCI with the aim of producing tangible outputs. Those attending will leave the course with the confidence to engage actively with sketching on a day-to-day basis. Participants will be encouraged to apply what they have learnt to their own research.
2018 · Makayla Lewis et al. · Lancaster University · Prototyping & User Testing · CHI

Inclusive Computing in Special Needs Classrooms: Designing for All
With a growing call for an increased emphasis on computing in school curricula, there is a need to make computing accessible to a diversity of learners. One potential approach is to extend the use of physical toolkits, which have been found to encourage collaboration, sustained engagement and effective learning in classrooms in general. However, little is known as to whether and how these benefits can be leveraged in special needs schools, where learners have a spectrum of distinct cognitive and social needs. Here, we investigate how introducing a physical toolkit can support learning about computing concepts for special education needs (SEN) students in their classroom. By tracing how the students' interactions—both with the physical toolkit and with each other—unfolded over time, we demonstrate how the design of both the form factor and the learning tasks embedded in a physical toolkit contribute to collaboration, comprehension and engagement when learning in mixed SEN classrooms.
2018 · Zuzanna Lechelt et al. · UCL Interaction Centre · Collaborative Learning & Peer Teaching; Special Education Technology · CHI

SurfaceConstellations: A Modular Hardware Platform for Ad-Hoc Reconfigurable Cross-Device Workspaces
We contribute SurfaceConstellations, a modular hardware platform for linking multiple mobile devices to easily create novel cross-device workspace environments. Our platform combines the advantages of multi-monitor workspaces and multi-surface environments with the flexibility and extensibility of more recent cross-device setups. The SurfaceConstellations platform includes a comprehensive library of 3D-printed link modules to connect and arrange tablets into new workspaces, several strategies for designing setups, and a visual configuration tool for automatically generating link modules. We contribute a detailed design space of cross-device workspaces, a technique for capacitive links between tablets for automatic recognition of connected devices, designs of flexible joint connections, detailed explanations of the physical design of 3D-printed brackets and support structures, and the design of a web-based tool for creating new SurfaceConstellation setups.
2018 · Nicolai Marquardt et al. · University College London · Mixed Reality Workspaces; Makerspace Culture · CHI

Deep Thermal Imaging: Proximate Material Type Recognition in the Wild through Deep Learning of Spatial Surface Temperature Patterns
We introduce Deep Thermal Imaging, a new approach for close-range automatic recognition of materials that enhances how people and ubiquitous technologies understand their proximal environment. Our approach uses a low-cost mobile thermal camera integrated into a smartphone to capture thermal textures. A deep neural network classifies these textures into material types. This approach works effectively without the need for ambient light sources or direct contact with materials. Furthermore, the use of a deep learning network removes the need to handcraft the set of features for different materials. We evaluated the performance of the system by training it to recognize 32 material types in both indoor and outdoor environments. Our approach produced recognition accuracies above 98% on 14,860 images of 15 indoor materials and above 89% on 26,584 images of 17 outdoor materials. We conclude by discussing its potential for real-time use in HCI applications and future directions.
2018 · Youngjun Cho et al. · University College London · Biosensors & Physiological Monitoring; Context-Aware Computing · CHI

SketCHI: Hands-On Special Interest Group on Sketching in HCI
Sketching is of great value as a process, input, output and tool in HCI, but can be confined to individual ideation or note-taking, as few researchers have the confidence to document events, studies and workshops under the public gaze. The recent surge in interest in this sometimes-overlooked skill has manifested itself in courses, workshops and live-scribing of high-profile events – and a renewed enthusiasm for freehand sketching as a formal part of the research process at all levels. SketCHI aims to address both research interests and sketching practice in a combined approach to define, discuss and deliver theory and methods to a broad audience. As well as structuring high-level discussions and collating information and resources, this SIG will allow attendees to practice and explore observational sketching on location around the conference, with feedback and encouragement from industry professionals. Finally, attendees will be encouraged to collaborate and form a research community around sketching in HCI.
2018 · Makayla Lewis et al. · Brunel University · Aging-Friendly Technology Design; Knowledge Worker Tools & Workflows; Prototyping & User Testing · CHI