Understanding and Improving the Performance of Action Pointing
Action pointing involves choosing and executing an action at a specific place in the workspace (e.g., choosing a tool and clicking to start drawing, or selecting an object and copying it with a shortcut). The elements of action pointing (choosing an action, specifying a position, and triggering the action) can be carried out in many ways, and our analysis of current techniques identified limitations on performance, particularly for repeated sequences of interactions. To empirically analyse interaction alternatives for action pointing, we developed and evaluated two techniques: ModeKeys removes modifier keys from the keyboard shortcuts used to choose actions; AimKeys goes further by using the shortcut (not the mouse) to trigger the action. Three studies over three tasks showed that these reconfigurations were highly effective: in every study, either AimKeys or ModeKeys was faster, easier, and preferred overall. Our studies show that small variations in the configuration of action pointing can have a large impact, offering opportunities to improve performance with direct-manipulation systems.
2025 · Cameron Beattie et al. · University of Saskatchewan · CHI
Tags: Full-Body Interaction & Embodied Input; Knowledge Worker Tools & Workflows

Effects of Device Environment and Information Layout on Spatial Memory and Performance in VR Selection Tasks
Virtual Reality systems are increasingly proposed as a platform for everyday interactive software. Many applications are dependent on actions such as navigation and selection, but it is not clear how well immersive environments support these basic activities. Previous studies have suggested advantages for spatial learning in VR, so we carried out a study that investigated two aspects of immersion on spatial memory and selection: the degree to which the user is immersed in the data, and whether the system uses immersive input and output. The study showed that more-immersive conditions had substantially worse selection performance, and did not improve spatial learning. However, most participants believed that the immersive conditions were better for learning object locations, and most people preferred the immersive layout and the HMD. Our study suggests that designers should be cautious about assuming that everyday software applications will benefit from being deployed in an immersive VR environment.
2024 · Kim Kargut et al. · University of Saskatchewan · CHI
Tags: Eye Tracking & Gaze Interaction; Immersion & Presence Research

Automation Confusion: A Grounded Theory of Non-Gamers' Confusion in Partially Automated Action Games
Partial automation makes digital games simpler by performing game actions for players. It may simplify gameplay for non-gamers who have difficulty controlling and understanding games. However, the automation may make players confused about what they control and what the automation controls. To describe and explain non-gamers' experiences of automation confusion, we analyzed gameplay, think-aloud, and interview data from ten non-gamer participants who played two partially automated games. Our results demonstrate how incorrect mental models, behaviours resulting from those models, and players' attitudes towards the games led to different levels and types of confusion.
2023 · Gabriele Cimolino et al. · Queen's University · CHI
Tags: Game UX & Player Behavior; Serious & Functional Games

Showing Flow: Comparing Usability of Chord and Sankey Diagrams
Chord and Sankey diagrams are two common techniques for visualizing flows. Chord diagrams use a radial layout with a single circular axis, and Sankey diagrams use a left-to-right layout with two vertical axes. Previous work suggests both strengths and weaknesses of the radial approach, but little is known about the usability and interpretability of these two layout styles for showing flow. We carried out a study where participants answered questions using equivalent Chord and Sankey diagrams. We measured completion time, errors, perceived effort, and preference. Our results show that participants took substantially longer to answer questions with Chord diagrams and made more errors; participants also rated Chord as requiring more effort, and strongly preferred Sankey diagrams. Our study identifies and explains limitations of the popular Chord layout, provides new understanding about radial vs. linear layouts that can help guide visualization designers, and identifies possible design improvements for both visualization types.
2023 · Carl Gutwin et al. · University of Saskatchewan · CHI
Tags: Interactive Data Visualization; Visualization Perception & Cognition

'Specially For You': Examining the Barnum Effect's Influence on the Perceived Quality of System Recommendations
The 'Barnum effect' is a psychological phenomenon under which people assign higher quality ratings to personality descriptions developed 'specially for you' than to the same descriptions described as 'generally true of people.' This effect suggests that recommender interfaces could elevate the perceived quality of recommendations simply by indicating that they are explicitly personalised. We therefore conducted a crowd-sourced experiment (n=492) that examined the perceived quality of personalised versus non-personalised movie recommendations for good and bad movies; importantly, the actual recommendations were identical, and were merely presented as being either personalised or not. Contrary to the Barnum effect, results showed numerically lower mean quality scores for personalised recommendations, but with no significant difference. Our findings suggest that Barnum-like effects of personalisation have at most a small influence on perceived quality, and that designers should not rely on this effect to improve user experience (despite online design guidance suggesting the opposite).
2023 · Pang Suwanaposee et al. · University of Canterbury · CHI
Tags: Recommender System UX; Visualization Perception & Cognition

Probability Weighting in Interactive Decisions: Evidence for Overuse of Bad Assistance, Underuse of Good Assistance
The effective use of assistive interfaces (i.e., those that offer suggestions or reform the user's input to match inferred intentions) depends on users making good decisions about whether and when to engage or ignore assistive features. However, prior work from economics and psychology shows systematic decision-making biases in which people overreact to low probability events and underreact to high probability events, modelled using a probability weighting function. We examine the theoretical implications of this probability weighting for interaction, including its suggestion that users will overuse inaccurate interface assistance and underuse accurate assistance. We then conduct a new analysis of data from a previously published study, quantifying the degree of bias users exhibited, and demonstrating conformance with these predictions. We discuss implications for design, including strategies that could be used to mitigate the deleterious effects of the observed biases.
2022 · Andy Cockburn et al. · University of Canterbury · CHI
Tags: Explainable AI (XAI); AI-Assisted Decision-Making & Automation; AI Ethics, Fairness & Accountability

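The probability weighting function mentioned in this abstract has several standard parameterisations in the behavioural-economics literature; the abstract does not say which one the paper uses, so the sketch below uses the common one-parameter Prelec form with an illustrative gamma of 0.65 (both choices are assumptions, not the paper's). With gamma below 1 it reproduces the pattern described: small probabilities are overweighted and large probabilities underweighted.

```python
import math

def prelec_w(p, gamma=0.65):
    """One-parameter Prelec probability weighting: w(p) = exp(-(-ln p)^gamma).

    With gamma < 1, small probabilities are overweighted and large ones
    underweighted; the curve crosses the identity line at p = 1/e.
    """
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return math.exp(-((-math.log(p)) ** gamma))

# A 1% chance is treated as if it were far more likely (overuse of bad
# assistance), while a 90% chance is discounted (underuse of good assistance):
rare = prelec_w(0.01)    # noticeably greater than 0.01
likely = prelec_w(0.9)   # noticeably less than 0.9
```

For instance, `prelec_w(0.01)` is roughly 0.067, nearly seven times the true probability, while `prelec_w(0.9)` is roughly 0.79.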
More Errors vs. Longer Commands: The Effects of Repetition and Reduced Expressiveness on Input Interpretation Error, Learning, and User Preference
Many interactive systems are susceptible to misinterpreting the user's input actions or gestures. Interpretation errors are common when systems gather a series of signals from the user and then attempt to interpret the user's intention based on those signals (e.g., gesture identification from a touchscreen, camera, or body-worn electrodes), and previous work has shown that interpretation error can cause significant problems for learning new input commands. Error-reduction strategies from telecommunications, such as repeating a command or increasing the length of the input while reducing its expressiveness, could improve these input mechanisms, but little is known about whether longer command sequences will cause problems for users (e.g., increased effort or reduced learning). We tested performance, learning, and perceived effort in a crowd-sourced study where participants learned and used input mechanisms with different error-reduction techniques. We found that error-reduction techniques are feasible, can outperform error-prone ordinary input, and do not negatively affect learning or perceived effort.
2022 · Kevin C. Lam et al. · University of Saskatchewan · CHI
Tags: Hand Gesture Recognition; Human Pose & Activity Recognition

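To see why the repetition strategy described above helps (this is a generic illustration of the telecommunications idea, not the paper's own implementation): if a recognizer misreads each repetition independently with probability e, a best-of-k majority vote lowers the effective interpretation error.

```python
from math import comb

def majority_error(e, k=3):
    """Probability that a k-repetition majority vote misreads a command,
    given independent per-repetition interpretation error e (k must be odd).

    The vote fails only when a majority of the k repetitions are misread.
    """
    need = k // 2 + 1  # number of misreads required to swing the majority
    return sum(comb(k, i) * (e ** i) * ((1 - e) ** (k - i))
               for i in range(need, k + 1))

# With a 10% per-attempt error rate, best-of-three cuts the
# effective error rate from 0.1 to 0.028:
effective = majority_error(0.1)
```

Each extra pair of repetitions lowers the error further (best-of-five with e = 0.1 is under 1%), which is the trade-off the study probes: longer, less expressive input against fewer interpretation errors.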
The Image of the Interface: How People Use Landmarks to Develop Spatial Memory of Commands in Graphical Interfaces
Graphical User Interfaces present commands at particular locations, arranged in menus, toolbars, and ribbons. One hallmark of expertise with a GUI is that experts know the locations of commonly-used commands, such that they can find them quickly and without searching. Although GUIs have been studied for many years, little is known about how this spatial location memory develops, or how designers can make interfaces more memorable. One of the main ways that people remember locations in the real world is by landmarks, so we carried out a study to investigate how users remember commands and navigate in four common applications (Word, Facebook, Reader, and Photoshop). Our study revealed that people strongly rely on landmarks that are readily available in the interface (e.g., layout, corners, and edges) to orient themselves and remember commands. We provide new evidence that landmarks can aid spatial memory and expertise development with an interface, and guidelines for designers to improve the memorability of future GUIs.
2021 · Md. Sami Uddin et al. · University of Saskatchewan · CHI
Tags: Visualization Perception & Cognition; Prototyping & User Testing

The Effects of System Interpretation Errors on Learning New Input Mechanisms
Input mechanisms can produce noisy signals that computers must interpret, and this interpretation can misconstrue the user's intention. Researchers have studied how interpretation errors can affect users' task performance, but little is known about how these errors affect learning, and whether they help or hinder the transition to expertise. Previous findings suggest that increasing the user's attention can facilitate learning, so frequent interpretation errors may increase attention and learning; alternatively, however, interpretation errors may negatively interfere with skill development. To explore these potentially important effects, we conducted studies where participants learned commands with various rates of artificially injected interpretation errors. Our results showed that higher rates of interpretation error led to worse memory retention, higher completion times, higher occurrences of user error (beyond those injected by the system), and greater perceived effort. These findings indicate that when input mechanisms must interpret the user's input, interpretation errors cause problems for user learning.
2021 · Kevin C. Lam et al. · University of Saskatchewan · CHI
Tags: Hand Gesture Recognition; Eye Tracking & Gaze Interaction

Interaction Pace and User Preferences
The overall pace of interaction combines the user's pace and the system's pace, and a pace mismatch could impair user preferences (e.g., animations or timeouts that are too fast or slow for the user). Motivated by studies of speech rate convergence, we conducted an experiment to examine whether user preferences for system pace are correlated with user pace. Subjects first completed a series of trials to determine their user pace. They then completed a series of hierarchical drag-and-drop trials in which folders automatically expanded when the cursor hovered for longer than a controlled timeout. Results showed that preferences for timeout values correlated with user pace: slow-paced users preferred long timeouts, and fast-paced users preferred short timeouts. Results indicate potential benefits in moving away from fixed or customisable settings for system pace. Instead, systems could improve preferences by automatically adapting their pace to converge towards that of the user.
2021 · Alix Goguey et al. · Université Grenoble Alpes · CHI
Tags: Visualization Perception & Cognition

Framing Effects Influence Interface Feature Decisions
Studies in psychology have shown that framing effects, where the positive or negative attributes of logically equivalent choices are emphasised, influence people's decisions. When outcomes are uncertain, framing effects also induce patterns of choice reversal, where decisions tend to be risk averse when gains are emphasised and risk seeking when losses are emphasised. Studies of these effects typically use potent framing stimuli, such as the mortality of people suffering from diseases or personal financial standing. We examine whether these effects arise in users' decisions about interface features, which typically have less visceral consequences, using a crowd-sourced study based on snap-to-grid drag-and-drop tasks (n = 842). The study examined several framing conditions: those similar to prior psychological research, and those similar to typical interaction choices (enabling/disabling features). Results indicate that attribute framing strongly influences users' decisions, that these decisions conform to patterns of risk seeking for losses, and that patterns of choice reversal occur.
2020 · Andy Cockburn et al. · University of Canterbury · CHI
Tags: Explainable AI (XAI); Visualization Perception & Cognition; User Research Methods (Interviews, Surveys, Observation)

Anchoring Effects and Troublesome Asymmetric Transfer in Subjective Ratings
Within-subjects experiments are prone to asymmetric transfer, which confounds the interpretation of results. While HCI researchers routinely test for asymmetric transfer in objective data, doing so for subjective data is rare. Yet the literature suggests that anchoring effects should make subjective measures particularly susceptible to asymmetric transfer. We report on four analyses of NASA-TLX data from four previously published HCI papers, with four main findings. First, asymmetric transfer is common, occurring in 42% of tests analysed. Second, the data conform to predictions of anchoring effects. Third, the magnitude of the anchor's effect correlates with the magnitude of the difference between the interface ratings; that is, the anchor's 'pull' correlates with the anchoring stimulus. Fourth, several of the previously published findings change when the data are reanalysed using a between-subjects treatment. We urge caution when analysing within-subjects subjective measures and recommend that researchers test for and report the occurrence of asymmetric transfer.
2019 · Andy Cockburn et al. · University of Canterbury · CHI
Tags: Chronic Disease Self-Management (Diabetes, Hypertension, etc.); Computational Methods in HCI

Improving Early Navigation in Time-Lapse Video with Spread-Frame Loading
Time-lapse videos are often navigated by scrubbing with a slider. When networks are slow or images are large, however, even thumbnail versions load so slowly that scrubbing is limited to the start of the video. We developed a frame-loading technique called spread-loading that enables scrubbing regardless of delivery rate. Spread-loading orders frame delivery to maximize coverage of the entire sequence; this provides a temporal overview of the entire video that can be fully navigated at any time during delivery. The overview initially has a coarse temporal resolution, becoming finer-grained with each new frame. We compared spread-loading with traditional linear loading in a study where participants were asked to find specific episodes in a long time-lapse sequence, using three views with increasing levels of detail. Results show that participants found target episodes significantly and substantially faster with spread-loading, regardless of whether they could click to change the load point. Users rated spread-loading as requiring less effort, and strongly preferred the new technique.
2019 · Carl Gutwin et al. · University of Saskatchewan · CHI
Tags: Interactive Data Visualization; Data Storytelling

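The abstract does not give the exact frame-delivery order, but one coverage-maximizing scheme consistent with its description is a breadth-first bisection of the frame range: deliver the endpoints first, then successive interval midpoints, so the temporal overview roughly doubles in resolution as frames arrive. A minimal sketch under that assumption (function name and details are illustrative, not the paper's):

```python
from collections import deque

def spread_order(n):
    """Order frame indices 0..n-1 so that coverage of the whole range
    stays as even as possible at every point during delivery."""
    order, seen = [], set()

    def emit(i):
        if i not in seen:
            seen.add(i)
            order.append(i)

    if n <= 0:
        return order
    emit(0)          # endpoints first: the coarsest possible overview
    emit(n - 1)
    queue = deque([(0, n - 1)])
    while queue:     # breadth-first: always halve the largest remaining gaps
        lo, hi = queue.popleft()
        if hi - lo < 2:
            continue
        mid = (lo + hi) // 2
        emit(mid)
        queue.append((lo, mid))
        queue.append((mid, hi))
    return order
```

For a nine-frame sequence this yields `[0, 8, 4, 2, 6, 1, 3, 5, 7]`: after three frames the whole timeline is scrubbable at quarter resolution, whereas linear loading would still be stuck in the first third of the video.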
Peripheral Notifications in Large Displays: Effects of Feature Combination and Task Interference
Visual notifications are integral to interactive computing systems. With large displays, however, much of the content is in the user's visual periphery, where human capacity to notice visual effects is diminished. One design strategy for enhancing noticeability is to combine visual features, such as motion and colour. Yet little is known about how feature combinations affect noticeability across the visual field, or about how peripheral noticeability changes when a user's primary task involves the same visual features as the notification. We addressed these questions by conducting two studies. Results of the first study showed that the noticeability of feature combinations was approximately equal to the better of the individual features. Results of the second study suggest that there can be interference between the features of primary tasks and the visual features in the notifications. Our findings contribute to a better understanding of how visual features operate when used as peripheral notifications.
2019 · Aristides Mairena et al. · University of Saskatchewan · CHI
Tags: Visualization Perception & Cognition; Notification & Interruption Management

Effects of Local Latency on Game Pointing Devices and Game Pointing Tasks
Studies have shown that certain game tasks, such as targeting, are significantly and negatively affected by latencies as low as 41 ms. It is therefore important to understand the relationship between local latency (delays between an input action and the resulting change on the display) and common gaming tasks such as targeting and tracking. In addition, games now use a variety of input devices, including touchscreens, mice, tablets, and controllers. These devices provide very different combinations of direct/indirect input, absolute/relative movement, and position/rate control, and are likely to be affected by latency in different ways. We performed a study evaluating and comparing the effects of latency across four devices (touchscreen, mouse, controller, and drawing tablet) on targeting and interception tasks. We analyze both throughput and path characteristics, identify differences between devices, and provide design considerations for game designers.
2019 · Michael Long et al. · University of Saskatchewan · CHI
Tags: Game UX & Player Behavior; Gamification Design

A Comparison of Notification Techniques for Out-of-View Objects in Full-Coverage Displays
Full-coverage displays can place visual content anywhere on the interior surfaces of a room (e.g., a weather display near the coat stand). In these settings, digital artefacts can be located behind the user and out of their field of view, meaning that it can be difficult to notify the user when these artefacts need attention. Although much research has been carried out on notification, little is known about how best to direct people to the necessary location in room environments. We designed five diverse attention-guiding techniques for full-coverage display rooms, and evaluated them in a study where participants completed search tasks guided by the different techniques. Our study provides new results about notification in full-coverage displays: we showed benefits of persistent visualisations that could be followed all the way to the target and that indicate distance-to-target. Our findings provide useful information for improving the usability of interactive full-coverage environments.
2019 · Julian Petford et al. · University of St Andrews · CHI
Tags: Notification & Interruption Management

Investigating the Post-Training Persistence of Expert Interaction Techniques
Expert interaction techniques enable users to greatly improve their performance; however, to realize these advantages, the user must first acquire the skill necessary to use a technique, then choose to use it over competing novice techniques. This article investigates several factors that may influence whether use of an expert technique persists when the context of use changes. Two studies examine the effect of changing performance requirements, and find that a high performance requirement imposed in a training context can effectively push users to adopt an expert technique, and that use of the technique is maintained when the requirement is subsequently reduced or removed. In a final study, performance requirement, high-level task, and environment of use are changed: participants played a training game to learn the menu for a drawing application, which they then used to complete a series of drawings over the following week. Participants exhibited a somewhat surprising "all-or-nothing" effect, using the expert technique nearly exclusively or not at all, and maintaining this behavior over a range of qualitatively different tasks. This suggests that switching to an expert technique involves a global change by the user, rather than an incremental change as suggested by previous work.
2018 · Benjamin Lafreniere et al. · Autodesk Research · CHI
Tags: Prototyping & User Testing

Characterizing Finger Pitch and Roll Orientation During Atomic Touch Actions
Atomic interactions in touch interfaces, like tap, drag, and flick, are well understood in terms of interaction design, but less is known about their physical performance characteristics. We carried out a study to gather baseline data about finger pitch and roll orientation during atomic touch input actions. Our results show differences in orientation and range for different fingers, hands, and actions, and we analyse the effect of tablet angle. Our data provides designers and researchers with a new resource to better understand what interactions are possible in different settings (e.g., when using the left or right hand), to design novel interaction techniques that use orientation as input (e.g., using finger tilt as an implicit mode), and to determine whether new sensing techniques are feasible (e.g., using fingerprints for identifying specific finger touches).
2018 · Alix Goguey et al. · University of Saskatchewan, Inria · CHI
Tags: Hand Gesture Recognition; Eye Tracking & Gaze Interaction

Improving Discoverability and Expert Performance in Force-Sensitive Text Selection for Touch Devices with Mode Gauges
Text selection on touch devices can be a difficult task for users. Letters and words are often too small to select directly, and the enhanced interaction techniques provided by the OS (magnifiers, selection handles, and methods for selecting at the character, word, or sentence level) often lead to as many usability problems as they solve. The introduction of force-sensitive touchscreens has added another enhancement to text selection (using force for different selection modes); however, these modes are difficult to discover, and many users continue to struggle with accurate selection. In this paper we report on an investigation of the design of touch-based and force-based text selection mechanisms, and describe two novel text-selection techniques that provide improved discoverability, enhanced visual feedback, and a higher performance ceiling for experienced users. Two evaluations show that one design successfully combined support for novices and experts, was never worse than the standard iOS technique, and was preferred by participants.
2018 · Alix Goguey et al. · University of Saskatchewan · CHI
Tags: Force Feedback & Pseudo-Haptic Weight; Computational Methods in HCI

Storyboard-Based Empirical Modeling of Touch Interface Performance
Touch interactions are now ubiquitous, but few tools are available to help designers quickly prototype touch interfaces and predict their performance. For rapid prototyping, most applications only support visual design. For predictive modelling, tools such as CogTool generate performance predictions but do not represent touch actions natively and do not allow exploration of different usage contexts. To combine the benefits of rapid visual design tools with underlying predictive models, we developed the Storyboard Empirical Modelling tool (StEM) for exploring and predicting user performance with touch interfaces. StEM provides performance models for mainstream touch actions, based on a large corpus of realistic data. We evaluated StEM in an experiment and compared its predictions to empirical times for several scenarios. The study showed that our predictions are accurate (within 7% of empirical values on average), and that StEM correctly predicted differences between alternative designs. Our tool provides new capabilities for exploring and predicting touch performance, even in the early stages of design.
2018 · Alix Goguey et al. · University of Saskatchewan · CHI
Tags: Prototyping & User Testing

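StEM's models are fit to its own corpus of touch data, which is not reproduced here; as a generic illustration of the kind of per-action prediction such a tool composes, the Shannon formulation of Fitts' law estimates pointing time from target distance and width. The constants a and b below are illustrative placeholders, not StEM's fitted values.

```python
import math

def fitts_time(distance, width, a=0.2, b=0.15):
    """Predicted time (s) for one pointing action, Shannon form of
    Fitts' law: T = a + b * log2(D/W + 1).

    a and b are device- and population-specific constants fitted from
    empirical data (illustrative values here); log2(D/W + 1) is the
    index of difficulty in bits.
    """
    return a + b * math.log2(distance / width + 1)

# Comparing two alternative layouts: a farther, same-sized target
# is predicted to take longer to acquire.
near = fitts_time(64, 32)    # close target
far = fitts_time(256, 32)    # same width, 4x the distance
```

A storyboard-level prediction then sums such per-action estimates (taps, drags, etc.) across the steps of a scenario, which is how differences between alternative designs can be compared before any implementation exists.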