ChartDetective: Easy and Accurate Interactive Data Extraction from Complex Vector Charts
Extracting underlying data from rasterized charts is tedious and inaccurate; values might be partially occluded or hard to distinguish, and the quality of the image limits the precision of the data being recovered. To address these issues, we introduce a semi-automatic system leveraging vector charts to extract the underlying data easily and accurately. The system is designed to make the most of vector information by relying on a drag-and-drop interface combined with selection, filtering, and previsualization features. A user study showed that participants spent less than 4 minutes to accurately recover data from charts published at CHI with diverse styles, thousands of data points, a combination of different encodings, and elements partially or completely occluded. Compared to other approaches relying on raster images, our tool successfully recovered all data, even when hidden, with a 78% lower relative error.
2023 · Damien Masson et al. · University of Waterloo · Interactive Data Visualization, Data Storytelling · CHI
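The core advantage the abstract describes (a vector chart stores exact coordinates, so data can be recovered by a linear mapping from pixel space to data space once the axes are calibrated) can be illustrated with a short sketch. This is not ChartDetective's actual pipeline; the SVG structure, the polyline encoding, and the two-tick axis calibration are illustrative assumptions:

```python
# Minimal sketch: recover data from a vector (SVG) line chart by mapping
# pixel coordinates to data coordinates via two labeled axis ticks.
import xml.etree.ElementTree as ET

def axis_mapping(px0, val0, px1, val1):
    """Return a function mapping a pixel coordinate to a data value,
    given two calibration points (e.g., two labeled axis ticks)."""
    scale = (val1 - val0) / (px1 - px0)
    return lambda px: val0 + (px - px0) * scale

def extract_polyline_points(svg_text, map_x, map_y):
    """Read every <polyline> in the SVG and convert its vertices to data."""
    root = ET.fromstring(svg_text)
    series = []
    for poly in root.iter("{http://www.w3.org/2000/svg}polyline"):
        coords = poly.get("points", "").replace(",", " ").split()
        pixels = zip(map(float, coords[0::2]), map(float, coords[1::2]))
        series.append([(map_x(x), map_y(y)) for x, y in pixels])
    return series

# Example: x-axis ticks 0 and 10 sit at pixels 40 and 240; y-axis ticks
# 0 and 100 sit at pixels 200 and 20 (screen y grows downward).
svg = """<svg xmlns="http://www.w3.org/2000/svg">
  <polyline points="40,200 140,110 240,20"/>
</svg>"""
map_x = axis_mapping(40, 0.0, 240, 10.0)
map_y = axis_mapping(200, 0.0, 20, 100.0)
print(extract_polyline_points(svg, map_x, map_y))
# roughly [[(0.0, 0.0), (5.0, 50.0), (10.0, 100.0)]]
```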
In-vehicle Performance and Distraction for Midair and Touch Directional Gestures
We compare the performance and level of distraction of expressive directional gesture input in the context of in-vehicle system commands. Center console touchscreen swipes and midair swipe-like movements are tested in 8 directions, with 8-button touchscreen tapping as a baseline. Participants use these input methods for intermittent target selections while performing the Lane Change Task in a virtual driving simulator. Input performance is measured with time and accuracy, cognitive load with deviation of lane position and speed, and distraction from frequency of off-screen glances. Results show midair gestures were less distracting and faster, but with lower accuracy. Touchscreen swipes and touchscreen tapping are comparable across measures. Our work provides empirical evidence for vehicle interface designers and manufacturers considering midair or touch directional gestures for center console input.
2023 · Arman Hafizi et al. · Computer Science · In-Vehicle Haptic, Audio & Multimodal Feedback, Hand Gesture Recognition · CHI
Characterizing Stage-aware Writing Assistance for Collaborative Document Authoring
Writing is a complex non-linear process that begins with a mental model of intent, and progresses through an outline of ideas, to words on paper (and their subsequent refinement). Despite past research in understanding writing, Web-scale consumer and enterprise collaborative digital writing environments are yet to greatly benefit from intelligent systems that understand the stages of document evolution, providing opportune assistance based on authors' situated actions and context. In this paper, we present three studies that explore temporal stages of document authoring. We first survey information workers at a large technology company about their writing habits and preferences, concluding that writers do in fact conceptually progress through several distinct phases while authoring documents. We also explore, qualitatively, how writing stages are linked to document lifespan. We supplement these qualitative findings with an analysis of the longitudinal user interaction logs of a popular digital writing platform over several million documents. Finally, as a first step towards facilitating an intelligent digital writing assistant, we conduct a preliminary investigation into the utility of user interaction log data for predicting the temporal stage of a document. Our results support the benefit of tools tailored to writing stages, identify primary tasks associated with these stages, and show that it is possible to predict stages from anonymous interaction logs. Together, these results argue for the benefit and feasibility of more tailored digital writing assistance.
2020 · Bahareh Sarrafzadeh et al. · Collaboration: Creating and Writing Together · CSCW
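As a rough illustration of the final study's idea, predicting a document's authoring stage from interaction logs, here is a minimal sketch using a generic classifier. The event types, the count-based features, and the three-stage labels are assumptions for illustration, not the paper's actual feature set or model:

```python
# Sketch: classify a document's stage from normalized event-type counts.
from collections import Counter
from sklearn.linear_model import LogisticRegression

EVENTS = ["insert", "delete", "format", "comment"]  # assumed log vocabulary

def featurize(log):
    """Turn a list of logged event types into normalized count features."""
    counts = Counter(log)
    total = max(1, len(log))
    return [counts[e] / total for e in EVENTS]

# Tiny synthetic training set: heavy insertion ~ drafting, heavy deletion
# and commenting ~ revising, mostly formatting ~ outlining.
train_logs = [
    ["insert"] * 8 + ["delete"],
    ["delete"] * 4 + ["comment"] * 4 + ["insert"],
    ["format"] * 6 + ["insert"] * 2,
]
labels = ["drafting", "revising", "outlining"]

clf = LogisticRegression(max_iter=1000)
clf.fit([featurize(log) for log in train_logs], labels)
print(clf.predict([featurize(["insert"] * 9 + ["format"])]))  # likely 'drafting'
```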
Chameleon: Bringing Interactivity to Static Digital Documents
Documents such as presentations, instruction manuals, and research papers are disseminated using various file formats, many of which barely support the incorporation of interactive content. To address this lack of interactivity, we present Chameleon, a system-wide tool that combines computer vision algorithms used for image identification with an open database format to allow for the layering of dynamic content. Using Chameleon, static documents can be easily upgraded by layering user-generated interactive content on top of static images, all while preserving the original static document format and without modifying existing applications. We describe the development of Chameleon, including the design and evaluation of vision-based image replacement algorithms, the new document-creation pipeline as well as a user study evaluating Chameleon.
2020 · Damien Masson et al. · University of Waterloo & Inria · Interactive Data Visualization, Data Storytelling · CHI
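A Chameleon-like system needs to identify a static image on screen before layering interactive content over it. The sketch below uses a simple average hash (aHash) for that matching step; the paper's actual vision-based image replacement algorithms and database format are more involved, so treat this as a minimal stand-in:

```python
# Sketch: match a captured screen region against a database of registered
# images via a 64-bit perceptual hash, then return the associated overlay.
from PIL import Image

def average_hash(img, size=8):
    """64-bit perceptual hash: 1 where a pixel is brighter than the mean."""
    gray = img.convert("L").resize((size, size))
    pixels = list(gray.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    return bin(a ^ b).count("1")

def lookup(screenshot_region, database, threshold=10):
    """Return the overlay registered for the closest known image, if any.
    `database` maps precomputed hashes to overlay descriptions."""
    if not database:
        return None
    h = average_hash(screenshot_region)
    best = min(database, key=lambda known: hamming(h, known))
    return database[best] if hamming(h, best) <= threshold else None
```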
Personal Space in Play: Physical and Digital Boundaries in Large-Display Cooperative and Competitive Games
As multi-touch displays grow in size and shrink in price, they are more commonly used as gaming devices. When co-located users play games on a single, large display, establishing and maintaining their physical and digital territories poses a social challenge to their interaction. To gain insight into the mechanisms of establishing and maintaining users' physical and digital territories, we analyze territorial interactions in cooperative and competitive multiplayer gameplay. Participants reported weighing each game interaction based on perceived intent to determine how socially acceptable they deemed each behaviour. In light of our observations, we contribute and discuss implications for the design of multi-user, large display, co-located, touchscreen games that consider display properties, digital and physical space, permeability of boundaries, and asymmetry of play to create interactions between players.
2020 · Rina R Wehbe et al. · University of Waterloo · Game UX & Player Behavior, Multiplayer & Social Games · CHI
Genie in the Bottle: Anthropomorphized Perceptions of Conversational Agents
This paper presents a qualitative multi-phase study seeking to identify patterns in users' anthropomorphized perceptions of conversational agents. Through a comparative analysis of behavioral perceptions and visual conceptions of three agents (Alexa, Google Assistant, and Siri), we first show that the perceptions of an agent's character are structured according to five categories: approachability, sentiment toward a user, professionalism, intelligence, and individuality. We then explore visualizations of the agents' appearance and discuss the specifics assigned to each agent. Finally, we analyze associative explanations for these perceptions. We demonstrate that the anthropomorphized behavioral and visual perceptions of agents yield structural consistency and discuss how these perceptions are linked with each other and system features.
2020 · Anastasia Kuzminykh et al. · University of Waterloo · Agent Personality & Anthropomorphism · CHI
Investigating the Necessity of Delay in Marking Menu Invocation
Delayed display of menu items is a core design component of marking menus, arguably to prevent visual distraction and foster the use of mark mode. We investigate these assumptions by contrasting the original marking menu design with immediately-displayed marking menus. In three controlled experiments, we fail to reveal obvious and systematic performance or usability advantages to using delay and mark mode. Only in very constrained settings, after significant training and with only two items to learn, did traditional marking menus show a time improvement of about 260 ms. Otherwise, we found an overall decrease in performance with delay, whether participants exhibited practiced or unpracticed behaviour. Our final study failed to demonstrate that an immediately-displayed menu interface is more visually disrupting than a delayed menu. These findings inform the costs and benefits of incorporating delay in marking menus, and motivate guidelines for situations in which its use is desirable.
2020 · Jay Henderson et al. · University of Waterloo · User Research Methods (Interviews, Surveys, Observation), Prototyping & User Testing · CHI
Understanding Viewport- and World-based Pointing with Everyday Smart Devices in Immersive Augmented Reality
Personal smart devices have demonstrated a variety of efficient techniques for pointing and selecting on physical displays. However, when migrating these input techniques to augmented reality, it is unclear both what the relative performance of different techniques will be, given the immersive nature of the environment, and how viewport-based versus world-based pointing methods will impact performance. To better understand the impact of device and viewing perspectives on pointing in augmented reality, we present the results of two controlled experiments comparing pointing conditions that leverage various smartphone- and smartwatch-based external display pointing techniques and examine viewport-based versus world-based target acquisition paradigms. Our results demonstrate that viewport-based techniques offer faster selection and that both smartwatch- and smartphone-based pointing techniques represent high-performance options for performing distant target acquisition tasks in augmented reality.
2020 · Yuan Chen et al. · University of Waterloo · AR Navigation & Context Awareness, Smartwatches & Fitness Bands · CHI
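The two paradigms the paper compares can be caricatured in a few lines: viewport-based pointing tests a 2D cursor against targets projected into the headset's viewport, while world-based pointing casts a 3D ray from the device into the scene. The geometry below is an assumption for illustration, not the study's implementation:

```python
# Sketch: world-based ray-sphere selection vs. viewport-based 2D hit test.
import math

def ray_hits_sphere(origin, direction, center, radius):
    """World-based: does a (unit) ray from the device hit a spherical target?"""
    oc = [c - o for o, c in zip(origin, center)]
    along = sum(a * b for a, b in zip(oc, direction))   # projection onto ray
    closest_sq = sum(c * c for c in oc) - along * along  # perpendicular dist^2
    return along > 0 and closest_sq <= radius * radius

def cursor_hits_circle(cursor, target_px, radius_px):
    """Viewport-based: does a 2D cursor fall inside a projected target?"""
    return math.dist(cursor, target_px) <= radius_px

# World-based: ray along +Z toward a 0.25 m target 5 m away.
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0.1, 0, 5), 0.25))  # True
# Viewport-based: cursor ~12 px from a 20 px-radius target.
print(cursor_hits_circle((512, 300), (520, 309), 20))            # True
```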
PinchList: Leveraging Pinch Gestures for Hierarchical List Navigation on Smartphones
Intensive exploration and navigation of hierarchical lists on smartphones can be tedious and time-consuming as it often requires users to frequently switch between multiple views. To overcome this limitation, we present PinchList, a novel interaction design that leverages pinch gestures to support seamless exploration of multi-level list items in hierarchical views. With PinchList, sub-lists are accessed with a pinch-out gesture, whereas a pinch-in gesture navigates back to the previous level. Additionally, pinch and flick gestures are used to navigate lists consisting of more than two levels. We conduct a user study to refine the design parameters of PinchList, such as a suitable item size, and quantitatively evaluate the target acquisition performance using pinch-in/out gestures in both scrolling and non-scrolling conditions. In a second study, we compare the performance of PinchList in a hierarchical navigation task with two commonly used touch interfaces for list browsing: pagination and expand-and-collapse interfaces. The results reveal that PinchList is significantly faster than the other two interfaces in accessing items located in hierarchical list views. Finally, we demonstrate that PinchList enables a host of novel applications in list-based interaction.
2019 · Teng Han et al. · University of Manitoba · Hand Gesture Recognition · CHI
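A minimal sketch of the navigation model as the abstract describes it: pinch-out descends into the sub-list under the fingers, pinch-in returns to the previous level. The data structure and method names are illustrative assumptions, not PinchList's implementation:

```python
# Sketch: a level stack driven by pinch-out (descend) and pinch-in (ascend).
class PinchListNav:
    def __init__(self, tree):
        self.stack = [tree]          # path of visited levels

    @property
    def current(self):
        return self.stack[-1]

    def pinch_out(self, item):
        """Descend into `item`'s sub-list, if it has one."""
        children = self.current.get(item)
        if children:
            self.stack.append(children)

    def pinch_in(self):
        """Return to the previous level (no-op at the root)."""
        if len(self.stack) > 1:
            self.stack.pop()

# Example hierarchy: Inbox > Work > Reports
tree = {"Inbox": {"Work": {"Reports": {}}, "Personal": {}}}
nav = PinchListNav(tree)
nav.pinch_out("Inbox")
nav.pinch_out("Work")
print(list(nav.current))   # ['Reports']
nav.pinch_in()
print(list(nav.current))   # ['Work', 'Personal']
```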
Leveraging Distal Vibrotactile Feedback for Target Acquisition
Many touch-based interactions provide limited opportunities for direct tactile feedback; examples include multi-user touch displays, augmented reality based projections on passive surfaces, and mid-air input. In this paper, we consider distal feedback, through vibrotactile stimulation on a smartwatch placed on the user's non-dominant wrist, as an alternative feedback mechanism to interaction location vibrotactile feedback, under the user's finger. We compare the effectiveness of interaction location feedback vs. distal feedback through a Fitts's Law task completed on a smartphone. Results show that distal and interaction location feedback both reduce errors in target acquisition and exhibit statistically comparable performance, suggesting that distal vibrotactile feedback is a suitable alternative when interaction location feedback is not readily available.
2019 · Jay Henderson et al. · University of Waterloo · Vibrotactile Feedback & Skin Stimulation, Foot & Wrist Interaction · CHI
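For background, a Fitts's Law task like the one the study uses is typically analyzed with the Shannon formulation of the index of difficulty. The sketch below shows the standard computation with made-up example numbers, not values from the paper:

```python
# Sketch: Shannon formulation of Fitts's law metrics.
import math

def index_of_difficulty(distance, width):
    """ID = log2(D/W + 1), in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Throughput in bits/s for one (D, W, MT) condition."""
    return index_of_difficulty(distance, width) / movement_time

# A 6 mm target 60 mm away, acquired in 0.55 s:
print(index_of_difficulty(60, 6))   # ~3.46 bits
print(throughput(60, 6, 0.55))      # ~6.29 bits/s
```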
PledgeWork: Online Volunteering through Crowdwork
In this paper, we explore an alternative form of volunteer work, PledgeWork, where individuals, rather than working directly for a charity, make indirect donations by completing tasks provided by a third-party task provider. PledgeWork poses novel research questions on issues of user acceptance of online volunteerism, on quality and quantity of work performed as a volunteer, and on the benefits low-barrier volunteerism might provide to charities. To evaluate these questions, we conduct a mixed-methods study that compares the quality and quantity of work between volunteer workers and paid workers and user attitudes toward PledgeWork, including perceived benefits and drawbacks. We find that PledgeWork can improve the quality of simple tasks and that the vast majority of our participants expressed interest in using our PledgeWork platform to contribute to a charity. Our interviews also reveal current problems with volunteering and online donations, thus highlighting additional strengths of PledgeWork.
2019 · Keiko Katsuragawa et al. · University of Waterloo · Crowdsourcing Task Design & Quality Control · CHI
The Perpetual Work Life of Crowdworkers: How Tooling Practices Increase Fragmentation in Crowdwork
Crowdworkers regularly support their work with scripts, extensions, and software to enhance their productivity. Despite their evident significance, little is understood regarding how these tools affect crowdworkers' quality of life and work. In this study, we report findings from an interview study (N=21) aimed at exploring the tooling practices used by full-time crowdworkers on Amazon Mechanical Turk. Our interview data suggests that the tooling utilized by crowdworkers (1) strongly contributes to the fragmentation of microwork by enabling task switching and multitasking behavior; (2) promotes the fragmentation of crowdworkers' work-life boundaries by relying on tooling that encourages a 'work-anywhere' attitude; and (3) aids the fragmentation of social ties within worker communities through limited tooling access. Our findings have implications for building systems that unify crowdworkers' work practice in support of their productivity and well-being.
2019 · Alex C Williams et al. · Crowds and Collaboration · CSCW
Pointing at a Distance with Everyday Smart Devices
Large displays are becoming commonplace at work, at home, or in public areas. However, interaction at a distance (anything greater than arm's length) remains cumbersome, restricts simultaneous use, and requires specific hardware augmentations of the display: touch layers, cameras, or dedicated input devices. Yet a rapidly increasing number of people carry smartphones and smartwatches, devices with rich input capabilities that can easily be used as input devices to control interactive systems. We contribute (1) the results of a survey on possession and use of smart devices, and (2) the results of a controlled experiment comparing seven distal pointing techniques on phone or watch, one- and two-handed, and using different input channels and mappings. Our results favor using a smartphone as a trackpad, but also explore performance tradeoffs that can inform the choice and design of distal pointing techniques for different contexts of use.
2018 · Shaishav Siddhpuria et al. · University of Waterloo · Full-Body Interaction & Embodied Input, Knowledge Worker Tools & Workflows · CHI
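The favored condition, using the smartphone as a trackpad, amounts to relative pointing: touch deltas on the phone move a cursor on the large display, scaled by a control-display (CD) gain. A minimal sketch, where the gain value and clamping are illustrative assumptions rather than the study's parameters:

```python
# Sketch: relative 'phone as trackpad' pointing with a fixed CD gain.
def make_trackpad(display_w, display_h, cd_gain=3.0):
    cursor = [display_w / 2, display_h / 2]   # start at display center

    def on_touch_move(dx, dy):
        """Apply a finger delta (in phone pixels) to the display cursor,
        clamped to the display bounds."""
        cursor[0] = min(max(cursor[0] + dx * cd_gain, 0), display_w)
        cursor[1] = min(max(cursor[1] + dy * cd_gain, 0), display_h)
        return tuple(cursor)

    return on_touch_move

move = make_trackpad(3840, 2160)
print(move(40, -10))   # finger moves right and up -> (2040.0, 1050.0)
```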
Ether-Toolbars: Evaluating Off-Screen Toolbars for Mobile Interaction
In mobile interaction, touchscreen input, while beneficial from the perspective of portability, has limited spatial accuracy due to the "fat finger problem". As a result, an important challenge in mobile interaction is to balance the size of individual widgets against the number of widgets needed during interaction. In this work, to address display space limitations, we explore the design of off-screen toolbars (ether-toolbars) that leverage computer vision to expand application features by placing widgets adjacent to the display screen. We show how simple computer vision algorithms can be combined with the natural human ability to estimate physical placement to support highly accurate targeting. Our ether-toolbar design promises targeting accuracy approximating on-screen widget accuracy while significantly expanding the interaction space of mobile devices. Through two experiments, we examine off-screen content placement metaphors and the off-screen precision of participants accessing these toolbars. From the data of the second experiment, we derive and validate a basic model of how users perceive the mobile device's surroundings for ether-widgets. We also demonstrate a prototype system consisting of an inexpensive 3D-printed mirror mount that supports ether-toolbar implementations. Finally, we discuss the implications of our work and potential design extensions that can increase the usability and utility of ether-toolbars.
2018 · Hanae Rateau et al. · Hand Gesture Recognition, Motor Impairment Assistive Input Technologies · IUI
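One way to picture the targeting step: the camera reports finger positions in the display's coordinate space, possibly outside the screen bounds, and positions falling in an off-screen band snap to toolbar slots. The band geometry and slot count below are assumptions for illustration, not the paper's implementation:

```python
# Sketch: hit-test a vision-tracked touch against an off-screen toolbar band
# placed just above the top edge of the display.
def ether_hit_test(x, y, screen_w, band_h=60, n_slots=6):
    """Return the toolbar slot index for a touch above the screen top
    (y in [-band_h, 0)), or None if the touch lands elsewhere."""
    if -band_h <= y < 0 and 0 <= x < screen_w:
        slot_w = screen_w / n_slots
        return int(x // slot_w)
    return None

print(ether_hit_test(500, -30, 1200))   # 2: third slot of six
print(ether_hit_test(500, 30, 1200))    # None: touch is on-screen
```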
Introducing Transient Gestures to Improve Pan and Zoom on Touch Surfaces
Despite the ubiquity of touch-based input and the availability of increasingly computationally powerful touchscreen devices, there has been comparatively little work on enhancing basic canonical gestures such as swipe-to-pan and pinch-to-zoom. In this paper, we introduce transient pan and zoom, i.e., pan and zoom manipulation gestures that temporarily alter the view and can be rapidly undone. Leveraging typical touchscreen support for additional contact points, we design our transient gestures such that they co-exist with traditional pan and zoom interaction. We show that our transient pan-and-zoom reduces repetition in multi-level navigation and facilitates rapid movement between document states. We conclude with a discussion of user feedback, and directions for future research.
2018 · Jeff Avery et al. · University of Waterloo · Hand Gesture Recognition, Prototyping & User Testing · CHI
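The transient mechanism as described can be sketched as a view snapshot taken when an extra contact point marks a gesture as transient, then restored when the gesture ends. The viewport representation below is an illustrative assumption, not the paper's implementation:

```python
# Sketch: transient pan/zoom as save-on-begin, restore-on-end view state.
class Viewport:
    def __init__(self):
        self.x, self.y, self.zoom = 0.0, 0.0, 1.0
        self._saved = None

    def begin_gesture(self, transient):
        """Snapshot the view if the extra-contact (transient) posture is held."""
        self._saved = (self.x, self.y, self.zoom) if transient else None

    def pan_zoom(self, dx, dy, dzoom):
        self.x += dx
        self.y += dy
        self.zoom *= dzoom

    def end_gesture(self):
        """Transient gestures snap back; ordinary ones persist."""
        if self._saved is not None:
            self.x, self.y, self.zoom = self._saved
            self._saved = None

v = Viewport()
v.begin_gesture(transient=True)
v.pan_zoom(120, 80, 2.0)      # peek at a zoomed-in region
v.end_gesture()
print(v.x, v.y, v.zoom)       # 0.0 0.0 1.0 -- original view restored
```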