There Is More to Dwell Than Meets the Eye: Toward Better Gaze-Based Text Entry Systems With Multi-Threshold Dwell (CHI 2025)
Aunnoy K Mutasim et al., Simon Fraser University, School of Interactive Arts and Technology. Topics: Eye Tracking & Gaze Interaction; Motor Impairment Assistive Input Technologies.
Dwell-based text entry seems to peak at 20 words per minute (WPM). Yet, little is known about the factors contributing to this limit, except that reaching it requires extensive training. Thus, we conducted a longitudinal study, broke the overall dwell-based selection time into six components, and identified several design challenges and opportunities. Subsequently, we designed two novel dwell keyboards that use multiple yet much shorter dwell thresholds: Dual-Threshold Dwell (DTD) and Multi-Threshold Dwell (MTD). The performance analysis showed that MTD (18.3 WPM) outperformed both DTD (15.3 WPM) and the conventional Constant-Threshold Dwell (12.9 WPM). Notably, absolute novices achieved these speeds within just 30 phrases. Moreover, MTD's 18.3 WPM is also the fastest average text entry speed reported to date for gaze-based keyboards. Finally, we discuss how our chosen parameters can be further optimized to pave the way toward more efficient dwell-based text entry.

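To illustrate the core idea behind multi-threshold dwell — assigning each key its own, often much shorter, dwell threshold instead of one constant threshold — here is a minimal sketch. The class, the threshold values, and the likelihood-based threshold rule are hypothetical placeholders, not the authors' implementation.

```python
class MultiThresholdDwell:
    """Illustrative dwell-time selector with per-key thresholds.

    Keys judged more likely to be selected next (e.g., by a language
    model) get shorter dwell thresholds; unlikely keys keep a longer,
    safer threshold. All parameter values here are hypothetical.
    """

    def __init__(self, base_threshold_ms=600, min_threshold_ms=150):
        self.base = base_threshold_ms   # threshold for an unlikely key
        self.min = min_threshold_ms     # floor for very likely keys
        self.fixated_key = None
        self.dwell_ms = 0.0

    def threshold_for(self, key, likelihood):
        # Linearly shrink the threshold with the key's likelihood,
        # but never below the minimum threshold.
        return max(self.min, self.base * (1.0 - likelihood))

    def update(self, key, dt_ms, likelihood=0.0):
        """Feed one gaze sample; return the key if it got selected."""
        if key != self.fixated_key:      # gaze moved to a new key
            self.fixated_key, self.dwell_ms = key, 0.0
        self.dwell_ms += dt_ms
        if key is not None and self.dwell_ms >= self.threshold_for(key, likelihood):
            self.dwell_ms = 0.0          # reset after a selection
            return key
        return None
```

With these placeholder numbers, an unlikely key needs 600 ms of accumulated dwell, while a key with likelihood 0.75 is selected after only 150 ms — the mechanism by which shorter thresholds can raise entry speed.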
RedirectedStepper: Exploring Walking-In-Place Locomotion in VR Using a Mini Stepper for Ascents (CHI 2025)
Quang-Tri Le et al., University of Science, VNU-HCM. Topics: Full-Body Interaction & Embodied Input; Immersion & Presence Research; Serious & Functional Games.
Walking on inclined surfaces is common in some Virtual Reality (VR) scenarios, for instance, when moving between floors of a building, climbing a tower, or ascending a virtual mountain. Existing approaches that enable realistic walking experiences in such settings typically require bulky walking-in-place hardware or a physical area to walk in. Addressing this challenge, we present RedirectedStepper, a locomotion technique leveraging a novel device based on a mini exercise stepper to provide realistic VR staircase walking experiences by alternating the tilt of the two stepper pedals. RedirectedStepper employs a new exponential mapping function to visually morph the user's real foot motion to a corresponding curved path in the virtual environment (VE). Combining this stepper and the visual mapping function provides an in-place locomotion technique allowing users to virtually ascend an infinite staircase or slope while walking-in-place (WIP). We conducted three within-subject user studies (n=36) comparing RedirectedStepper with a WIP locomotion technique using the Kinect. Our studies indicate that RedirectedStepper improves the users' sense of realism in walking on staircases in VR. Based on a set of design implications derived from the user studies, we developed SnowRun, a VR exergame application, demonstrating the use of the RedirectedStepper concept.

Exploring the Impacts of HEXACO Personality Traits on Text Composition and Transcription (CHI 2025)
Jannatul Ferdous Srabonee et al., University of California, Merced, Inclusive Interaction Lab. Topics: Agent Personality & Anthropomorphism; AI-Assisted Creative Writing.
This study investigates the relationship between the HEXACO personality traits and text entry behaviors in composition and transcription tasks. By analyzing metrics such as entry speed, accuracy, editing efforts, and readability, we identified correlations between specific traits and text entry performance. In composition, honesty-humility and agreeableness were the strongest predictors, correlating significantly with composition time, text length, and editing efforts. In transcription, openness, honesty-humility, and agreeableness influenced performance, though no single trait consistently predicted all metrics. Interestingly, extraversion did not show strong correlations in either task, despite its established link to composition performance in academic contexts. These findings suggest that personality traits affect text entry behavior differently depending on the task, with creative tasks like composition being shaped by distinct traits compared to repetitive tasks like transcription. This research provides valuable insights into the relationship between personality and text entry, opening avenues for personalizing interaction systems based on individual traits.

A Systematic Review of Fitts’ Law in 3D Extended Reality (CHI 2025)
Mohammadreza Amini et al., Concordia University, Department of Computer Science & Software Engineering. Topics: Immersion & Presence Research; Computational Methods in HCI.
Fitts' law is widely used as an evaluation tool for pointing or selection tasks, evolving into diverse applications, including 3D extended reality (XR) environments like virtual, augmented, and mixed reality. Despite standards like ISO 9241-411, the application of Fitts' law varies significantly across studies, complicating comparisons and undermining the reliability of findings in 3D XR research. To address this, we conducted a systematic review of 119 publications, focusing on 122 studies that used Fitts' law in 3D XR user experiments. Our analysis shows that over half of these studies referenced Fitts' law without thoroughly investigating throughput, movement time, or error rate. We performed an in-depth meta-analysis to examine how Fitts' law is incorporated into research. By highlighting trends and inconsistencies and making recommendations, this review aims to guide researchers in designing and performing more effective and consistent Fitts-based studies in 3D XR, enhancing the quality and impact of future research.

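For context on the quantities such reviews audit — movement time, throughput, and error rate — the standard ISO 9241-411-style formulation of Fitts' law is shown below. These are the textbook forms, not results of the review itself.

```latex
% Shannon formulation of Fitts' law: movement time MT as a function of
% target distance D and target width W (a, b fitted empirically)
MT = a + b \cdot ID, \qquad ID = \log_2\!\left(\frac{D}{W} + 1\right)

% Effective parameters computed from observed selection endpoints,
% where \sigma is the standard deviation of endpoint positions:
W_e = 4.133\,\sigma, \qquad ID_e = \log_2\!\left(\frac{D_e}{W_e} + 1\right)

% Throughput (bits per second), typically averaged per condition:
TP = \frac{ID_e}{MT}
```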
Multimedia-Enabled 911: Exploring 911 Callers’ Experience of Call Taker Controlled Video Calling in Simulated Emergencies (CHI 2024)
Punyashlok Dash et al., Simon Fraser University. Topics: Uncertainty Visualization; Cybersecurity Training & Awareness.
Emergency response to large-scale disasters is often supported with multimedia from social media. However, while multimedia features are common in everyday video calls, the complex needs of 911 and other emergency systems make it difficult to incorporate these features directly. We assess an ME911 (Multimedia-Enabled 911) app to understand how the design will need to deviate from common norms and how callers will respond to those non-standard choices. We expand the role of 911 call taker control over emergency situations to the calling interface while incorporating key features like map-based location finding. Participants’ experiences in mock emergencies show that the non-standard design helps callers in the unfamiliar setting of emergency calling, yet it also causes confusion and delays. Participant feedback supports the need for emergency-specific deviations from design norms. We discuss how broader system changes will support callers in using these non-standard designs during emergencies.

EyeGuide & EyeConGuide: Gaze-based Visual Guides to Improve 3D Sketching Systems (CHI 2024)
Rumeysa Turkmen et al., Kadir Has University. Topics: Eye Tracking & Gaze Interaction; Mixed Reality Workspaces; 3D Modeling & Animation.
Visual guides help to align strokes and raise accuracy in Virtual Reality (VR) sketching tools. Automatic guides that appear at relevant sketching areas enable seamless guided sketching. We explore guides that exploit eye-tracking to make them adaptive to the user's visual attention. EyeGuide and EyeConGuide cause visual grid fragments to appear spatially close to the user's intended sketches, based on the user's eye-gaze direction and the 3D position of the hand. We evaluated the techniques in two user studies across simple and complex sketching objectives in VR. The results show that gaze-based guides have a positive effect on sketching accuracy, perceived usability, and preference over manual activation in the tested tasks. Our research contributes to integrating gaze-contingent techniques for assistive guides and presents important insights into multimodal design applications in VR.

The Effect of Latency on Movement Time in Path-steering (CHI 2024)
Shota Yamanaka et al., Yahoo Japan Corporation. Topics: User Research Methods (Interviews, Surveys, Observation); Computational Methods in HCI.
In current graphical user interfaces, there exists a (typically unavoidable) end-to-end latency from each pointing-device movement to its corresponding cursor response on the screen, which is known to affect user performance in target selection, e.g., in terms of movement time (MT). Previous work also reported that a long latency increases MTs in path-steering tasks, but the quantitative relationship between latency and MT had not been previously investigated for path-steering. In this work, we derive models to predict MTs for path-steering and evaluate them with five tasks: goal crossing as a preliminary task for model derivation, linear-path steering, circular-path steering, narrowing-path steering, and steering with target pointing. The results show that the proposed models yielded an adjusted R^2 > 0.94, with lower AICs and smaller cross-validation RMSEs than the baseline models, enabling more accurate prediction of MTs.

Better Definition and Calculation of Throughput and Effective Parameters for Steering to Account for Subjective Speed-accuracy Tradeoffs (CHI 2024)
Nobuhito Kasahara et al., Meiji University. Topics: User Research Methods (Interviews, Surveys, Observation).
In Fitts' law studies of pointing, throughput is used to characterize the performance of input devices and users, a metric claimed to be independent of task difficulty and the user's subjective speed-accuracy bias. While throughput has been recognized as a useful metric for target-pointing tasks, the corresponding formulation for path-steering tasks and its evaluation have not been thoroughly examined in the past. In this paper, we conducted three experiments using linear, circular, and sine-wave path shapes to propose and investigate a novel formulation for the effective parameters and the throughput of steering tasks. Our results show that the effective width substantially improves the fit to data with mixed speed-accuracy biases for all task shapes. Effective width also smoothed out the throughput across all biases, while the usefulness of the effective amplitude depended on the task shape. Our study thus advances the understanding of user performance in trajectory-based tasks.

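For context, the classic steering law and one plausible pointing-style effective-width correction for steering are sketched below; the paper's exact effective-parameter formulation may differ from this illustration.

```latex
% Steering law (Accot & Zhai): time to steer along a path C whose
% permitted width at arc length s is W(s)
MT = a + b \int_{C} \frac{ds}{W(s)}

% For a straight, constant-width path of length A and width W:
MT = a + b\,\frac{A}{W}, \qquad ID_s = \frac{A}{W}

% One plausible effective width, by analogy to pointing, from the
% standard deviation \sigma_x of observed lateral deviations around
% the path centerline (illustrative, not necessarily the paper's form):
W_e = 4.133\,\sigma_x, \qquad TP_s = \frac{A / W_e}{MT}
```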
Dr.’s Eye: The Design and Evaluation of a Video Conferencing System to Support Doctor Appointments in Home Settings (CHI 2023)
Dongqi Han et al., Simon Fraser University. Topics: VR Medical Training & Rehabilitation; Telemedicine & Remote Patient Monitoring; Smart Home Interaction Design.
The spread of COVID-19 has encouraged the practice of using video conferencing for family doctor appointments. Existing applications and off-the-shelf devices face challenges in capturing the correct view of patients' bodies and in supporting ease of use. We created Dr.’s Eye, a video conferencing prototype to support varying types of body exams in home settings. With our prototype, we conducted a study in which participants performed mock appointments, to understand the simultaneous use of the camera and display and to gain insights into issues that might arise in real doctor appointments. Results show the benefits of providing more flexibility with a decoupled camera and display, and of protecting privacy by limiting the camera view. Yet, challenges remain in maneuvering two devices, presenting feedback for the camera view, coordinating camera work between the participant and the examiner, and overcoming reluctance towards showing private body regions. This inspires future research on how to design video systems for doctor appointments.

The Effect of the Vergence-Accommodation Conflict on Virtual Hand Pointing in Immersive Displays (CHI 2022)
Anil Ufuk Batmaz et al., Kadir Has University. Topics: AR Navigation & Context Awareness; Immersion & Presence Research.
Previous work hypothesized that, for Virtual Reality (VR) and Augmented Reality (AR) displays, a mismatch between disparities and optical focus cues, known as the vergence-accommodation conflict (VAC), affects depth perception and thus limits user performance in 3D selection tasks within arm's reach (peri-personal space). To investigate this question, we built a multifocal stereo display that can eliminate the influence of the VAC for pointing within the investigated distances. In a user study, participants performed a virtual hand 3D selection task with targets arranged laterally or along the line of sight, with and without a change in visual depth, in display conditions with and without the VAC. Our results show that the VAC influences 3D selection performance in common VR and AR stereo displays and that multifocal displays have a positive effect on 3D selection performance with a virtual hand.

ProcessAR: An Augmented Reality-Based Tool to Create In-Situ Procedural 2D/3D AR Instructions (DIS 2021)
Subramanian Chidambaram et al. Topics: AR Navigation & Context Awareness; Context-Aware Computing; Prototyping & User Testing.
Augmented reality (AR) is an efficient form of delivering spatial information and has great potential for training workers. However, AR is still not widely used for such scenarios due to the technical skills and expertise required to create interactive AR content. We developed ProcessAR, an AR-based system to develop 2D/3D content that captures subject matter experts' (SMEs) environment-object interactions in situ. The design space for ProcessAR was identified from formative interviews with AR programming experts and SMEs, alongside a comparative design study with SMEs and novice users. To enable smooth workflows, ProcessAR locates and identifies different tools/objects through computer vision within the workspace when the author looks at them. We explored additional features such as embedding 2D videos with detected objects and user-adaptive triggers. A final user evaluation comparing ProcessAR and a baseline AR authoring environment showed that, according to our qualitative questionnaire, users preferred ProcessAR.

ReverseORC: Reverse Engineering of Resizable User Interface Layouts with OR-Constraints (CHI 2021)
Yue Jiang et al., Max Planck Institute for Informatics. Topics: 360° Video & Panoramic Content; Algorithmic Transparency & Auditability.
Reverse engineering (RE) of user interfaces (UIs) plays an important role in software evolution. However, the large diversity of UI technologies and the need for UIs to be resizable make this challenging. We propose ReverseORC, a novel RE approach able to discover diverse layout types and their dynamic resizing behaviours independently of their implementation, and to specify them by using OR constraints. Unlike previous RE approaches, ReverseORC infers flexible layout constraint specifications by sampling UIs at different sizes and analyzing the differences between them. It can create specifications that replicate even some non-standard layout managers with complex dynamic layout behaviours. We demonstrate that ReverseORC works across different platforms with very different layout approaches, e.g., for GUIs as well as for the Web. Furthermore, it can be used to detect and fix problems in legacy UIs, extend UIs with enhanced layout behaviours, and support the creation of flexible UI layouts.

ORCSolver: An Efficient Solver for Adaptive GUI Layout with OR-Constraints (CHI 2020)
Yue Jiang et al., University of Maryland. Topics: Prototyping & User Testing; Computational Methods in HCI.
OR-constrained (ORC) graphical user interface layouts unify conventional constraint-based layouts with flow layouts, which enables the definition of flexible layouts that adapt to screens with different sizes, orientations, or aspect ratios with only a single layout specification. Unfortunately, solving ORC layouts with current solvers is time-consuming, and the needed time increases exponentially with the number of widgets and constraints. To address this challenge, we propose ORCSolver, a novel solving technique for adaptive ORC layouts, based on a branch-and-bound approach with heuristic preprocessing. We demonstrate that ORCSolver simplifies ORC specifications at runtime and that our approach can solve ORC layout specifications efficiently at near-interactive rates.

Platform for Studying Self-Repairing Auto-Corrections in Mobile Text Entry based on Brain Activity, Gaze, and Context (CHI 2020)
Felix Putze et al., University of Bremen. Topics: Eye Tracking & Gaze Interaction; Brain-Computer Interface (BCI) & Neurofeedback; Human-LLM Collaboration.
Auto-correction is a standard feature of mobile text entry. While the performance of state-of-the-art auto-correct methods is usually relatively high, any errors that occur are cumbersome to repair, interrupt the flow of text entry, and challenge the user's agency over the process. In this paper, we describe a system that aims to automatically identify and repair auto-correction errors. This system comprises a multi-modal classifier for detecting auto-correction errors from brain activity, eye gaze, and context information, as well as a strategy to repair such errors by replacing the erroneous correction or suggesting alternatives. We integrated both parts in a generic Android component and thus present a research platform for studying self-repairing end-to-end systems. To demonstrate its feasibility, we performed a user study to evaluate the classification performance and usability of our approach.

Modeling Fully and Partially Constrained Lasso Movements in a Grid of Icons (CHI 2019)
Shota Yamanaka et al., Yahoo Japan Corporation. Topics: Prototyping & User Testing; Computational Methods in HCI.
Lassoing objects is a basic function in illustration software and presentation tools. Yet, for many common object arrangements, lassoing is time-consuming to perform and requires precise pen operation. In this work, we studied lassoing movements in a grid of objects similar to icons. We propose a quantitative model to predict the time to lasso such objects depending on the margins between icons, their sizes, and the layout, all of which affect the number of stopping and crossing movements. Results of two experiments showed that our models predict fully and partially constrained movements with high accuracy. We also analyzed speed profiles and pen stroke trajectories and identified deeper insights into user behaviors, such as that an unconstrained area can induce higher movement speeds even in preceding path segments.

The Effect of Stereo Display Deficiencies on Virtual Hand Pointing (CHI 2019)
Mayra Donaji Barrera Machuca et al., Simon Fraser University. Topics: Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS); Full-Body Interaction & Embodied Input; Immersion & Presence Research.
The limitations of stereo display systems affect depth perception, e.g., due to the vergence-accommodation conflict or diplopia. We performed three studies to understand how stereo display deficiencies impact 3D pointing for targets in front of a screen and close to the user, i.e., in peripersonal space. Our first two experiments compare movements with and without a change in visual depth for virtual and physical targets, respectively. Results indicate that selecting targets along the depth axis is slower and has less throughput for virtual targets, while physical pointing demonstrates the opposite result. We then propose a new 3D extension for Fitts' law that models the effect of stereo display deficiencies. Next, our third experiment verifies the model, measures more broadly how the change in visual depth between targets affects pointing performance in peripersonal space, and confirms significant effects on time and throughput. Finally, we discuss implications for 3D user interface design.

Multilayer Haptic Feedback for Pen-Based Tablet Interaction (CHI 2019)
Ernst Kruijff et al., Bonn-Rhein-Sieg University of Applied Sciences. Topics: In-Vehicle Haptic, Audio & Multimodal Feedback; Vibrotactile Feedback & Skin Stimulation; Force Feedback & Pseudo-Haptic Weight.
We present a novel, multilayer interaction approach that enables state transitions between spatially above-screen and 2D on-screen feedback layers. This approach supports the exploration of haptic features that are hard to simulate using rigid 2D screens. We accomplish this by adding a haptic layer above the screen that can be actuated and interacted with (pressed on) while the user interacts with on-screen content using pen input. The haptic layer provides variable firmness and contour feedback, while its membrane functionality affords additional tactile cues like texture feedback. Through two user studies, we look at how users can use the layer in haptic exploration tasks, showing that users can discriminate well between different firmness levels and can perceive object contour characteristics. The results, also demonstrated through an art application, show the potential of multilayer feedback to extend on-screen feedback with additional widget, tool, and surface properties, and for user guidance.

ORC Layout: Adaptive GUI Layout with OR-Constraints (CHI 2019)
Yue Jiang et al., University of Maryland, College Park. Topics: Knowledge Worker Tools & Workflows; Prototyping & User Testing.
We propose a novel approach for constraint-based graphical user interface (GUI) layout based on OR-constraints (ORC) in standard soft/hard linear constraint systems. ORC layout unifies grid layout and flow layout, supporting both their features as well as cases where grid and flow layouts individually fail. We describe ORC design patterns that enable designers to safely create flexible layouts that work across different screen sizes and orientations. We also present the ORC Editor, a GUI editor that enables designers to apply ORC in a safe and effective manner, mixing grid, flow and new ORC layout features as appropriate. We demonstrate that our prototype can adapt layouts to screens with different aspect ratios with only a single layout specification, easing the burden of GUI maintenance. Finally, we show that ORC specifications can be modified interactively and solved efficiently at runtime.

Plane, Ray, and Point: Enabling Precise Spatial Manipulations with Shape Constraints (UIST 2019)
Devamardeep Hayatpur et al. Topics: Shape-Changing Interfaces & Soft Robotic Materials; Hand Gesture Recognition; Full-Body Interaction & Embodied Input.
Modern virtual reality controllers offer direct, quick, and easy manipulation of virtual objects. However, precisely placing objects using direct manipulation is still challenging in 3D environments. This paper presents Plane, Ray, & Point, a set of novel interaction techniques that enable quick object alignment and manipulation in virtual reality. The interaction techniques use hand gestures to create constraints that separate manipulation degrees of freedom. The user can create a Ray by outstretching the index finger or the thumb and use it to limit the rotation or translation of an object to a single axis. By opening both the index finger and the thumb, the user can create a Plane and use it to limit the object’s movement along the 2D plane. Such gesture-invoked constraints help users quickly align and place virtual objects. We evaluate the applicability and use cases of our technique in an expert user study, and evaluate its learnability in an informal user study with novice users.

Steering through Successive Objects (CHI 2018)
Shota Yamanaka et al., Yahoo Japan Corporation. Topics: Prototyping & User Testing.
We investigate stroking motions through successive objects with styli. There are several promising models for stroking motions, such as crossing tasks, which require endpoint accuracy of a stroke, or steering tasks, which require continuous accuracy throughout the trajectory. However, a task requiring users to repeatedly steer through constrained path segments has never been studied, although such operations are needed in GUIs, e.g., when selecting icons or objects in illustration software through lassoing. We empirically confirmed that the interval, trajectory width, and obstacle size significantly affect movement speed. Existing models cannot accurately predict user performance in such tasks. We found several unexpected results, such as that steering through denser objects sometimes required less time than expected. Speed profile analysis revealed the reasons behind such behaviors, such as participants' anticipation strategies. We also discuss the applicability of existing performance models and possible revisions to them.