Investigating Composite Relation with a Data-Physicalized Thing through the Deployment of the WavData Lamp
This paper reports on a field study of the WavData Lamp: an interactive lamp that physically visualizes people's music-listening data by changing light colors and outstretching its form enclosure. We deployed five WavData Lamps to five participants' homes for two months to investigate their composite relation with a data-physicalized thing. Findings reveal that in the early days, the participants' music-listening norms were determined by the instantiated materiality of the Lamp. With a tilted form enclosure, the WavData Lamp successfully engendered rich actions and meanings among the cohabiting participants and their family members. In the end, the participants described their experiences of entangling with and living with the Lamp as a form of collaboration. Reflecting on these empirical insights extends the intrinsic meaning of the composite relation and offers rich implications for further HCI explorations and practices.
2025 · Ce Zhong et al. · University of Waterloo, School of Computer Science · Shape-Changing Interfaces & Soft Robotic Materials; Data Physicalization · CHI

Exploring Uni-manual Around Ear Off-device Gestures for Earables
Shaikh Shawon Arefin Shimon et al. study one-handed, around-the-ear, off-device gesture interaction for smart earables, offering new directions for wearable interaction design.
2024 · Shaikh Shawon Arefin Shimon et al. · Foot & Wrist Interaction; Ubiquitous Computing · UbiComp

Beyond Functionality: Unveiling Dimensions of User Experience in Embodied Conversational Agents for Customer Service
Embodied Conversational Agents (ECAs) are increasingly being deployed in the customer service field. However, their post-adoption user experience remains under-researched. Through interviews with customer service practitioners and a review of existing ECA research, we identified eighteen key items of ECA experience. Based on these items, we conducted a survey among users who had interacted with ECAs. Using exploratory factor analysis and confirmatory factor analysis, we developed a five-dimension model: trustworthy, approachable, humanized, engaging, and supportive experiences. We found that the perceived importance of these experiences varied with users' educational backgrounds and annual incomes: users with higher education and income levels exhibited higher expectations of a trustworthy experience. Additionally, our research suggests that the design features of ECAs contribute distinctively to these five user experience dimensions. These insights provide novel directions for adopting a user-centered design approach to improve ECA interactions.
2024 · Li Lin et al. · Conversational Chatbots; Agent Personality & Anthropomorphism; Social Robot Interaction · CUI

EmoWear: Exploring Emotional Teasers for Voice Message Interaction on Smartwatches
Voice messages, by nature, prevent users from gauging the emotional tone without fully diving into the audio content. This hinders the shared emotional experience at the pre-retrieval stage. Research has scarcely explored "Emotional Teasers"—pre-retrieval cues offering a glimpse into an awaiting message's emotional tone without disclosing its content. We introduce EmoWear, a smartwatch voice messaging system enabling users to apply 30 animation teasers on message bubbles to reflect emotions. EmoWear eases senders' choice by prioritizing emotions based on semantic and acoustic processing. EmoWear was evaluated in comparison with a mirroring system using color-coded message bubbles as emotional cues (N=24). Results showed EmoWear significantly enhanced the emotional communication experience in both receiving and sending messages. The animated teasers were considered intuitive and valued for diverse expressions. Desirable interaction qualities and practical implications are distilled for future design. We thereby contribute both a novel system and empirical knowledge concerning emotional teasers for voice messaging.
2024 · Pengcheng An et al. · Southern University of Science and Technology · Haptic Wearables; Voice User Interface (VUI) Design; Intelligent Voice Assistants (Alexa, Siri, etc.) · CHI

Evaluating Across-Hinge Dragging with Pen and Touch on Curved and Foldable Displays
Foldable touch screens are increasingly popular, but little research has explored how the hinge impacts usability and performance. We evaluate across- and along-hinge drag gestures on a series of prototypes emulating foldable all-screen laptops with a curved hinge radius ranging from 1mm to 24mm. Results show that using a large 24mm hinge radius instead of a small 1mm hinge radius can decrease drag time by 13% and movement variability by 7% for touch input. However, hinge radius had no effect on performance for pen input. Further, we found that dragging along the hinge was up to 30% faster than dragging across the hinge, especially when dragging across at an acute angle to the hinge. Using these results, we demonstrate use cases for across- and along-hinge gestures. Our findings provide guidance for hardware and interaction designers seeking to create foldable touchscreen devices and their accompanying software.
2023 · Graeme Zinck et al. · University of Waterloo · Shape-Changing Interfaces & Soft Robotic Materials; Prototyping & User Testing · CHI

T-Force: Exploring the Use of Typing Force for Three State Virtual Keyboards
Three-state virtual keyboards, which differentiate contact events between released, touched, and pressed states, have the potential to improve the overall typing experience and reduce the gap between virtual and physical keyboards. By incorporating force sensitivity, three-state virtual keyboards can use a force threshold to better classify a contact event. However, our limited knowledge of how force plays a role during typing on virtual keyboards limits further progress. Through a series of studies we observe that a uniform threshold is not an optimal approach: the force applied while typing varies significantly across keys and among participants. We therefore propose three approaches that improve on the uniform threshold, and show that a carefully selected non-uniform threshold function can be sufficient for delineating typing events on a three-state keyboard. Finally, we conclude with lessons learned, suggestions for future improvements, and comparisons with current methods.
2023 · Shariff AM Faleel et al. · University of British Columbia · Force Feedback & Pseudo-Haptic Weight · CHI

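The non-uniform-threshold idea described in the T-Force abstract could be sketched roughly as follows. This is a hypothetical illustration, not the authors' actual method: the function name `classify_contact`, the per-key table, and every threshold value are invented for the sake of the example.

```python
# Hypothetical sketch of a three-state contact classifier that uses a
# non-uniform (per-key) press threshold instead of one uniform threshold
# for the whole keyboard. All force values below are invented.

TOUCH_FORCE = 0.05  # newtons; any contact below this counts as "released"

# Per-key press thresholds: keys struck by stronger fingers (e.g. the
# index fingers on "f"/"j") tend to receive more force than keys struck
# by weaker fingers (e.g. the little fingers on "a"/";").
KEY_THRESHOLDS = {
    "f": 0.60, "j": 0.60,   # index fingers: higher press threshold
    "a": 0.35, ";": 0.35,   # little fingers: lower press threshold
}
DEFAULT_THRESHOLD = 0.50    # fallback for keys without a tuned value

def classify_contact(key: str, force: float) -> str:
    """Map a (key, force) sensor reading to released / touched / pressed."""
    if force < TOUCH_FORCE:
        return "released"
    if force >= KEY_THRESHOLDS.get(key, DEFAULT_THRESHOLD):
        return "pressed"
    return "touched"
```

With this sketch, the same 0.40 N contact is a "pressed" event on "a" but only a "touched" event on "f", which is the kind of per-key, per-user variation the abstract reports.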
In-vehicle Performance and Distraction for Midair and Touch Directional Gestures
We compare the performance and level of distraction of expressive directional gesture input in the context of in-vehicle system commands. Center console touchscreen swipes and midair swipe-like movements are tested in 8 directions, with 8-button touchscreen tapping as a baseline. Participants use these input methods for intermittent target selections while performing the Lane Change Task in a virtual driving simulator. Input performance is measured with time and accuracy, cognitive load with deviation of lane position and speed, and distraction from the frequency of off-screen glances. Results show midair gestures were less distracting and faster, but with lower accuracy. Touchscreen swipes and touchscreen tapping are comparable across measures. Our work provides empirical evidence for vehicle interface designers and manufacturers considering midair or touch directional gestures for center console input.
2023 · Arman Hafizi et al. · Computer Science · In-Vehicle Haptic, Audio & Multimodal Feedback; Hand Gesture Recognition · CHI

ColorCook: Augmenting Color Design for Dashboarding with Domain-Associated Palettes
Visualization dashboards present key metrics in a tiled layout of charts to support collaborative decision-making. Existing work has developed tools and techniques for computational color design, but most of these efforts have focused on selecting effective color palettes for independent charts; few attempts have been made to support expressive color design across the multiple coordinated charts of a dashboard. In this work, we describe ColorCook, an interactive system that helps design expressive and effective dashboard colorings using domain-associated palettes. ColorCook employs an integrated color workflow for dashboarding, consisting of palette selection, color assignment, and color adjustment. We evaluated ColorCook through a crowdsourcing experiment and a user study; the results indicate that ColorCook is useful for effective and expressive color design.
2022 · Yang Shi et al. · Feedback-giving & Decision-making · CSCW

Robust and Deployable Gesture Recognition for Smartwatches
Gesture recognition on smartwatches is challenging not only due to resource constraints but also the dynamically changing conditions of users. How to engineer gesture recognizers that are robust and yet deployable on smartwatches remains an open problem. Recent research has found that common everyday events, such as a user removing and reattaching the smartwatch strap, can deteriorate recognition accuracy significantly. In this paper we suggest that prior understanding of the causes of everyday variability and false positives should be exploited in the development of recognizers. To this end, we first present a data collection method that diversifies gesture data in a representative way: users are taken to experimental conditions that resemble known causes of variability (e.g., walking while gesturing) and asked to produce deliberately varied, but realistic, gestures. Second, we review known machine learning approaches for recognizer design on constrained hardware. We propose convolution-based network variations for classifying raw sensor data, reliably achieving greater than 98% accuracy under both individual and situational variations where previous approaches have reported significant performance deterioration. This performance is achieved with a model that is two orders of magnitude less complex than previous state-of-the-art models. Our work suggests that deployable and robust recognition is feasible but requires systematic efforts in data collection and network design to address known causes of gesture variability.
2022 · Utkarsh Kunwar et al. · Hand Gesture Recognition; Smartwatches & Fitness Bands · IUI

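The kind of compact convolution-based classifier over raw sensor data that the abstract above alludes to can be sketched in a few lines of NumPy. This is a hypothetical, untrained toy (one 1D convolution, global average pooling, a linear head); the layer sizes, window length, and gesture count are invented and do not reproduce the paper's actual architecture.

```python
import numpy as np

# Toy sketch: raw 3-axis accelerometer window -> gesture class index.
# Weights are random (untrained); shapes and sizes are invented.
rng = np.random.default_rng(0)

N_CHANNELS, WINDOW = 3, 128      # 3 sensor axes, 128 samples per window
N_FILTERS, KERNEL = 8, 9         # one small convolutional layer
N_GESTURES = 4                   # hypothetical gesture vocabulary size

conv_w = rng.normal(0.0, 0.1, (N_FILTERS, N_CHANNELS, KERNEL))
fc_w = rng.normal(0.0, 0.1, (N_GESTURES, N_FILTERS))

def conv1d_valid(x, w):
    """'Valid' 1D convolution: (C, T) input, (F, C, K) weights -> (F, T-K+1)."""
    C, T = x.shape
    F, _, K = w.shape
    out = np.empty((F, T - K + 1))
    for f in range(F):
        for t in range(T - K + 1):
            out[f, t] = np.sum(x[:, t:t + K] * w[f])
    return out

def predict(window):
    """Classify one raw sensor window of shape (N_CHANNELS, WINDOW)."""
    h = np.maximum(conv1d_valid(window, conv_w), 0.0)  # ReLU activation
    pooled = h.mean(axis=1)                            # global average pooling
    logits = fc_w @ pooled                             # linear classifier head
    return int(np.argmax(logits))

fake_window = rng.normal(size=(N_CHANNELS, WINDOW))    # stand-in sensor data
gesture = predict(fake_window)                         # an index in 0..3
```

The point of the sketch is the footprint: a model of this shape has on the order of a few hundred parameters, illustrating why convolutional classifiers over raw windows can fit the memory and compute budget of a watch.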
Switching Between Standard Pointing Methods with Current and Emerging Computer Form Factors
We investigate performance characteristics when switching between four pointing methods: absolute touch, absolute pen, relative mouse, and relative trackpad. The established "subtraction method" protocol used in mode-switching studies is extended to test pairs of methods and accommodate switch direction, multiple baselines, and controlling relative cursor position. A first experiment examines method switching on and around the horizontal surface of a tablet. Results find switching between pen and touch is fastest, and switching between relative and absolute methods incurs an additional time penalty. A second experiment expands the investigation to an emerging foldable all-screen laptop form factor where switching also occurs on an angled surface and along a smoothly curved hinge. Results find switching between trackpad and touch is fastest, with all switching times generally higher. Our work contributes missing empirical evidence for switching performance using modern input methods, and our results can inform interaction design for current and emerging device form factors.
2022 · Margaret Jean Foley et al. · University of Waterloo · Hand Gesture Recognition; Ubiquitous Computing · CHI

The Effect of the Vergence-Accommodation Conflict on Virtual Hand Pointing in Immersive Displays
Previous work hypothesized that for Virtual Reality (VR) and Augmented Reality (AR) displays, a mismatch between disparities and optical focus cues, known as the vergence-accommodation conflict (VAC), affects depth perception and thus limits user performance in 3D selection tasks within arm's reach (peri-personal space). To investigate this question, we built a multifocal stereo display, which can eliminate the influence of the VAC for pointing within the investigated distances. In a user study, participants performed a virtual hand 3D selection task with targets arranged laterally or along the line of sight, with and without a change in visual depth, in display conditions with and without the VAC. Our results show that the VAC influences 3D selection performance in common VR and AR stereo displays and that multifocal displays have a positive effect on 3D selection performance with a virtual hand.
2022 · Anil Ufuk Batmaz et al. · Kadir Has University · AR Navigation & Context Awareness; Immersion & Presence Research · CHI

VibEmoji: Exploring User-authoring Multi-modal Emoticons in Social Communication
Emoticons are indispensable in online communication. With users' growing need for more customized and expressive emoticons, recent messaging applications have begun to support (limited) multi-modal emoticons: e.g., enhancing emoticons with animations or vibrotactile feedback. However, little empirical knowledge has been accumulated concerning how people create, share, and experience multi-modal emoticons in everyday communication, and how to better support them through design. To tackle this, we developed VibEmoji, a user-authoring multi-modal emoticon interface for mobile messaging. Extending existing designs, VibEmoji grants users greater flexibility to combine various emoticons, vibrations, and animations on the fly, and offers non-aggressive recommendations based on these components' emotional relevance. Using VibEmoji as a probe, we conducted a four-week field study with 20 participants to gain new understandings from in-the-wild usage and experience, and to extract implications for design. We thereby contribute both a novel system and various insights for supporting users' creation and communication of multi-modal emoticons.
2022 · Pengcheng An et al. · Southern University of Science and Technology, University of Waterloo · Vibrotactile Feedback & Skin Stimulation; Intelligent Voice Assistants (Alexa, Siri, etc.); Agent Personality & Anthropomorphism · CHI

Elbow-Anchored Interaction: Designing Restful Mid-Air Input
We designed a mid-air input space for restful interactions on the couch. We observed people gesturing in various postures on a couch and found that posture affects the choice of arm motions when no constraints are imposed by a system. Study participants who sat with the arm rested were more likely to use the forearm and wrist, as opposed to the whole arm. We investigate how a spherical input space, where forearm angles are mapped to screen coordinates, can facilitate restful mid-air input in multiple postures. We present two controlled studies. In the first, we examine how a spherical space compares with a planar space in an elbow-anchored setup, with a shoulder-level input space as baseline. In the second, we examine the performance of a spherical input space in four common couch postures that set unique constraints on the arm. We observe that a spherical model that captures forearm movement facilitates comfortable input across different seated postures.
2021 · Rafael Veras et al. · Huawei · Mid-Air Haptics (Ultrasonic); Full-Body Interaction & Embodied Input · CHI

ThermalRing: Gesture and Tag Inputs Enabled by a Thermal Imaging Smart Ring
The heterogeneous and ubiquitous input demands in smart spaces call for an input device that can enable rich and spontaneous interactions. We propose ThermalRing, a smart ring using a low-resolution thermal camera for identity-anonymous, illumination-invariant, and power-efficient sensing of both dynamic and static gestures. We also design ThermalTag, thin and passive thermally imageable tags that reflect the heat of the human hand. ThermalTag can be easily made and applied onto everyday objects by users. We develop sensing techniques for three typical input demands: drawing gestures for device pairing, click and slide gestures for device control, and tag scan gestures for quick access. The study results show that ThermalRing can recognize nine drawing gestures with an overall accuracy of 90.9%, detect click gestures with an accuracy of 94.9%, and identify among six ThermalTags with an overall accuracy of 95.0%. Finally, we show the versatility and potential of ThermalRing through various applications.
2020 · Tengxiang Zhang et al. · Chinese Academy of Sciences & UCAS · Hand Gesture Recognition; On-Skin Display & On-Skin Input; Context-Aware Computing · CHI

DFSeer: A Visual Analytics Approach to Facilitate Model Selection for Demand Forecasting
Selecting an appropriate model to forecast product demand is critical to the manufacturing industry. However, due to data complexity, market uncertainty, and users' demanding requirements for the model, it is challenging for demand analysts to select a proper model. Although existing model selection methods can reduce the manual burden to some extent, they often fail to present model performance details on individual products and to reveal the potential risk of the selected model. This paper presents DFSeer, an interactive visualization system for reliable model selection in demand forecasting based on products with similar historical demand. It supports model comparison and selection at different levels of detail. In addition, it shows differences in model performance on similar products to reveal the risk of model selection and increase users' confidence in choosing a forecasting model. Two case studies and interviews with domain experts demonstrate the effectiveness and usability of DFSeer.
2020 · Dong Sun et al. · Hong Kong University of Science and Technology · Interactive Data Visualization; Video Production & Editing · CHI