Model Touch Pointing and Detect Parkinson's Disease via a Mobile Game
Ling et al. developed a mobile-game-based approach to modeling touch pointing: by analyzing players' touch interaction behavior during gameplay, it enables assistive early detection of Parkinson's disease, offering a new avenue for disease screening.
2024 · Kaiyan Ling et al. · Motor Impairment Assistive Input Technologies; Serious & Functional Games · UbiComp
CASS: Towards Building a Social-Support Chatbot for Online Health Community
Chatbot systems, despite their popularity in today's HCI and CSCW research, fall short for one of two reasons: 1) many systems use a rule-based dialog flow, so they can only respond to a limited number of pre-defined user inputs with scripted responses; or 2) they are designed with a focus on a single user scenario, so little is known about these systems' influence on other users in a community. In this paper, we present a research project that aims to develop a generalizable chatbot architecture to provide social support for community members in an online health community. The architecture is based on advanced neural network algorithms, so it can handle new inputs from users and generate a variety of responses to them. The system is also generalizable in that it can be easily migrated to other online communities. Through a follow-up field experiment with the chatbot deployed back into the community, we illustrate the system's usefulness in providing emotional support to individual members. In addition, our study provides empirical understanding to fill the research gap on how a social-support chatbot can positively impact community engagement.
2021 · Liuping Wang et al. · Online Health Communities · CSCW
"Brilliant AI Doctor" in Rural Clinics: Challenges in AI-Powered Clinical Decision Support System Deployment
Artificial intelligence (AI) technology has been increasingly used in the implementation of advanced Clinical Decision Support Systems (CDSS). Research has demonstrated the potential usefulness of AI-powered CDSS (AI-CDSS) in clinical decision-making scenarios. However, post-adoption user perception and experience remain understudied, especially in developing countries. Through observations and interviews with 22 clinicians from 6 rural clinics in China, this paper reports the various tensions between the design of an AI-CDSS system ("Brilliant Doctor") and the rural clinical context, such as the misalignment with local context and workflow, the technical limitations and usability barriers, and issues related to the transparency and trustworthiness of AI-CDSS. Despite these tensions, all participants expressed positive attitudes toward the future of AI-CDSS, especially its acting as "a doctor's AI assistant" to realize a human-AI collaboration future in clinical settings. Finally, we draw on our findings to discuss implications for designing AI-CDSS interventions for rural clinical contexts in developing countries.
2021 · Dakuo Wang et al. · IBM Research · AI-Assisted Decision-Making & Automation; Developing Countries & HCI for Development (HCI4D) · CHI
Mouillé: Exploring Wetness Illusion on Fingertips to Enhance Immersive Experience in VR
Providing users with rich sensations is beneficial to enhancing their immersion in Virtual Reality (VR) environments. Wetness is one such imperative sensation: it affects users' sense of comfort and helps users adjust grip force when interacting with objects. Researchers have recently begun to explore ways to create wetness illusions, primarily on a user's face or body skin. In this work, we extended this line of research by creating a wetness illusion on users' fingertips. We first conducted a user study to understand the effect of thermal and tactile feedback on users' perceived wetness sensation. Informed by the findings, we designed and evaluated a prototype, Mouillé, that provides various levels of wetness illusion on fingertips for both hard and soft items when users squeeze, lift, or scratch them. Study results indicated that users were able to feel wetness with different levels of temperature change, and they were able to distinguish three levels of wetness for simulated VR objects. We further presented applications simulating an ice cube, an iced cola bottle, and a wet sponge, among others, to demonstrate its use in VR.
2020 · Teng Han et al. · Institute of Software, Chinese Academy of Sciences & University of Chinese Academy of Sciences · Shape-Changing Interfaces & Soft Robotic Materials; Immersion & Presence Research · CHI
Modeling the Endpoint Uncertainty in Crossing-based Moving Target Selection
Modeling the endpoint uncertainty of moving target selection with crossing is essential to understanding factors such as the speed-accuracy trade-off and interaction efficiency in crossing-based user interfaces with dynamic contents. However, few studies have looked into this research topic in the HCI field. This paper presents a Quaternary-Gaussian model to quantitatively measure the endpoint uncertainty in crossing-based moving target selection. To validate this model, we conducted an experiment with discrete crossing tasks on five factors: initial distance, size, speed, orientation, and moving direction. Results showed that our model fit the data of μ and σ accurately, with adjusted R² values of 0.883 and 0.920. We also demonstrated the validity of our model in predicting error rates in crossing-based moving target selection. We concluded with a set of implications for future designs.
2020 · Jin Huang et al. · Institute of Software, Chinese Academy of Sciences & University of Chinese Academy of Sciences · Eye Tracking & Gaze Interaction; Context-Aware Computing · CHI
Using Bayes' Theorem for Command Input: Principle, Models, and Applications
Entering commands on touchscreens can be noisy, but existing interfaces commonly adopt deterministic principles for deciding targets, which often results in errors. Building on prior research using Bayes' theorem to handle uncertainty in input, this paper formalized Bayes' theorem as a generic guiding principle for deciding targets in command input (referred to as "BayesianCommand"), developed three models for estimating prior and likelihood probabilities, and carried out experiments to demonstrate the effectiveness of this formalization. More specifically, we applied BayesianCommand to improve the input accuracy of (1) point-and-click and (2) word-gesture command input. Our evaluation showed that applying BayesianCommand reduced errors compared to using deterministic principles (by over 26.9% for point-and-click and by 39.9% for word-gesture command input) or applying the principle partially (by over 28.0% and 24.5%, respectively).
2020 · Suwen Zhu et al. · Stony Brook University · Voice User Interface (VUI) Design; Computational Methods in HCI · CHI
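The decision rule the BayesianCommand abstract describes, choosing the command that maximizes posterior probability given a noisy touch, can be sketched minimally as follows. The isotropic Gaussian likelihood, the fixed sigma, and the frequency-style priors below are illustrative assumptions, not the paper's actual models:

```python
import math

def bayesian_command(touch_xy, commands):
    """Pick the command maximizing P(cmd | touch) ∝ P(touch | cmd) * P(cmd).

    commands: list of dicts with 'name', 'center' (x, y), and 'prior'.
    The likelihood here is an isotropic Gaussian around each target center,
    an illustrative stand-in for the paper's fitted likelihood models.
    """
    sigma = 20.0  # assumed touch-noise spread in pixels

    def likelihood(touch, center):
        dx, dy = touch[0] - center[0], touch[1] - center[1]
        return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))

    return max(commands, key=lambda c: likelihood(touch_xy, c['center']) * c['prior'])

cmds = [
    {'name': 'copy',  'center': (100, 50), 'prior': 0.6},
    {'name': 'paste', 'center': (140, 50), 'prior': 0.4},
]
# A touch equidistant from both targets resolves to the higher-prior command.
print(bayesian_command((120, 50), cmds)['name'])  # → copy
```

This captures the contrast the abstract draws: a deterministic interface would treat the ambiguous touch as an error or pick arbitrarily, while the Bayesian rule folds in prior command frequency.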
PinchList: Leveraging Pinch Gestures for Hierarchical List Navigation on Smartphones
Intensive exploration and navigation of hierarchical lists on smartphones can be tedious and time-consuming, as it often requires users to frequently switch between multiple views. To overcome this limitation, we present PinchList, a novel interaction design that leverages pinch gestures to support seamless exploration of multi-level list items in hierarchical views. With PinchList, sub-lists are accessed with a pinch-out gesture, whereas a pinch-in gesture navigates back to the previous level. Additionally, pinch and flick gestures are used to navigate lists consisting of more than two levels. We conduct a user study to refine the design parameters of PinchList, such as a suitable item size, and quantitatively evaluate the target acquisition performance using pinch-in/out gestures in both scrolling and non-scrolling conditions. In a second study, we compare the performance of PinchList in a hierarchical navigation task against two commonly used touch interfaces for list browsing: pagination and expand-and-collapse interfaces. The results reveal that PinchList is significantly faster than the other two interfaces in accessing items located in hierarchical list views. Finally, we demonstrate that PinchList enables a host of novel applications in list-based interaction.
2019 · Teng Han et al. · University of Manitoba · Hand Gesture Recognition · CHI
Modeling the Uncertainty in 2D Moving Target Selection
Understanding the selection uncertainty of moving targets is a fundamental research problem in HCI. However, the few existing works in this domain mainly focus on selecting 1D moving targets with certain input devices, and the models' generalizability has not been extensively investigated. In this paper, we propose a 2D Ternary-Gaussian model to describe the selection uncertainty manifested in the endpoint distribution for moving target selection. We explore and compare two candidate methods to generalize the problem space from 1D to 2D tasks, and evaluate their performance with three input modalities: mouse, stylus, and finger touch. By applying the proposed model to assist target selection, we achieved a 56.7% improvement in selection speed and a 78.8% improvement in pointing accuracy. In addition, we found that when predicting pointing errors, our model fit the error-rate data with an R² of 0.94.
2019 · Jin Huang et al. · Visualization Perception & Cognition; Computational Methods in HCI · UIST
How Presenters Perceive and React to Audience Flow Prediction In-situ: An Explorative Study of Live Online Lectures
The degree and quality of instructor-student interactions are crucial for students' engagement, retention, and learning outcomes. However, such interactions are limited in live online lectures, where instructors no longer have access to important cues such as raised hands or facial expressions at the time of teaching. As a result, instructors cannot fully understand students' learning progress. This paper presents an explorative study investigating how presenters perceive and react to audience flow prediction when giving live-stream lectures, which has not been examined before. The study was conducted with an experimental system that can predict the audience's psychological states (e.g., anxiety, flow, boredom) through real-time facial expression analysis, and can provide aggregated views illustrating the flow experience of the whole group. Through an evaluation with 8 online lectures (N_instructors = 8, N_learners = 21), we found that such real-time flow prediction and visualization can provide value to presenters. This paper contributes a set of useful findings regarding presenters' perception of and reaction to such flow prediction, as well as lessons learned in the study, which can inspire future AI-powered systems that assist people in delivering live online presentations.
2019 · Wei Sun et al. · Connecting and Reaching Out · CSCW
What Can Gestures Tell? Detecting Motor Impairment in Early Parkinson's from Common Touch Gestural Interactions
Parkinson's disease (PD) is a chronic neurological disorder causing progressive disability that severely affects patients' quality of life. Although early interventions can provide significant benefits, PD diagnosis is often delayed due to both the mildness of early signs and the high requirements imposed by traditional screening and diagnosis methods. In this paper, we explore the feasibility and accuracy of detecting motor impairment in early PD by sensing and analyzing users' common touch gestural interactions on smartphones. We investigate four types of common gestures, including flick, drag, pinch, and handwriting gestures, and propose a set of features to capture PD motor signs. Through a 102-subject study (35 early PD subjects and 67 age-matched controls), our approach achieved an AUC of 0.95 and 0.89/0.88 sensitivity/specificity in discriminating early PD subjects from healthy controls. Our work constitutes an important step towards unobtrusive, implicit, and convenient early PD detection from routine smartphone interactions.
2019 · Feng Tian et al. · Chinese Academy of Sciences & University of Chinese Academy of Sciences · Human Pose & Activity Recognition; Motor Impairment Assistive Input Technologies · CHI
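The pipeline sketched in the abstract, sensing touch traces and deriving features that capture motor signs, can be illustrated with a toy feature extractor. The specific features below (mean speed, speed variability, direction reversals as a crude tremor proxy) are illustrative assumptions, not the paper's actual feature set:

```python
import math

def gesture_features(trace):
    """Compute simple motor features from one touch-gesture trace.

    trace: list of (x, y, t) samples.
    Returns mean speed, speed variance, and direction-reversal count;
    these are illustrative stand-ins for published PD-motor-sign features.
    """
    speeds, headings = [], []
    for (x0, y0, t0), (x1, y1, t1) in zip(trace, trace[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue  # skip duplicate or out-of-order samples
        dx, dy = x1 - x0, y1 - y0
        speeds.append(math.hypot(dx, dy) / dt)
        headings.append(math.atan2(dy, dx))
    mean_speed = sum(speeds) / len(speeds)
    var_speed = sum((s - mean_speed) ** 2 for s in speeds) / len(speeds)
    # Count large heading changes between successive segments.
    reversals = sum(
        1 for h0, h1 in zip(headings, headings[1:])
        if abs(h1 - h0) > math.pi / 2
    )
    return {'mean_speed': mean_speed, 'speed_var': var_speed, 'reversals': reversals}

# A smooth rightward flick at constant velocity: no variability, no reversals.
smooth = [(i * 10.0, 0.0, i * 0.01) for i in range(10)]
print(gesture_features(smooth))
```

In a full system, per-gesture feature vectors like this would feed a classifier trained on labeled PD and control data; the AUC reported in the abstract is a property of that trained model, not of any single feature.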
SmartEye: Assisting Instant Photo Taking via Integrating User Preference with Deep View Proposal Network
Instant photo taking and sharing has become one of the most popular forms of social networking. However, taking high-quality photos is difficult, as it requires knowledge and skill in photography that most non-expert users lack. In this paper, we present SmartEye, a novel mobile system that helps users take photos with good compositions in situ. The back-end of SmartEye integrates the View Proposal Network (VPN), a deep-learning-based model that outputs composition suggestions in real time, with a novel, interactively updated module (P-Module) that adjusts the VPN outputs to account for personalized composition preferences. We also design a novel front-end interface that enables real-time, informative interactions for photo taking. We conduct two user studies to investigate SmartEye qualitatively and quantitatively. Results show that SmartEye effectively models and predicts personalized composition preferences, provides instant high-quality compositions in situ, and significantly outperforms non-personalized systems.
2019 · Shuai Ma et al. · Institute of Software, Chinese Academy of Sciences & University of Chinese Academy of Sciences · Generative AI (Text, Image, Music, Video); Graphic Design & Typography Tools · CHI
Understanding the Uncertainty in 1D Unidirectional Moving Target Selection
In contrast to the extensive studies on static target pointing, much less formal understanding of moving target acquisition can be found in the HCI literature. We designed a set of experiments to identify regularities in 1D unidirectional moving target selection, and found a Ternary-Gaussian model to be descriptive of the endpoint distribution in such tasks. The shape of the distribution, as characterized by μ and σ in the Gaussian model, was primarily determined by the speed and size of the moving target. The model fits the empirical data well, with R² values of 0.95 and 0.94 for μ and σ, respectively. We also demonstrated two extensions of the model: 1) predicting error rates in moving target selection; and 2) a novel interaction technique to implicitly aid moving target selection. By applying them in a game interface design, we observed good performance both in predicting error rates (e.g., 2.7% mean absolute error) and in assisting moving target selection (e.g., a 33% or greater increase in pointing accuracy).
2018 · Jin Huang et al. · Institute of Software, Chinese Academy of Sciences & University of Chinese Academy of Sciences · Hand Gesture Recognition; Voice User Interface (VUI) Design; Game UX & Player Behavior · CHI
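The error-rate extension mentioned in this abstract and its siblings above follows directly from the fitted endpoint distribution: once μ and σ are known, the predicted error rate is the Gaussian probability mass that falls outside the target. A minimal 1D sketch using the normal CDF (the parameter values are illustrative, and a real Ternary-Gaussian model has additional components beyond this single Gaussian):

```python
import math

def normal_cdf(x, mu, sigma):
    """Normal CDF evaluated via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def predicted_error_rate(mu, sigma, target_left, target_right):
    """P(endpoint lands outside [target_left, target_right]) when the
    selection endpoint is distributed as N(mu, sigma^2)."""
    hit = normal_cdf(target_right, mu, sigma) - normal_cdf(target_left, mu, sigma)
    return 1.0 - hit

# Endpoints centered on a 40-px-wide target with a 10-px spread (illustrative):
print(round(predicted_error_rate(mu=0.0, sigma=10.0, target_left=-20, target_right=20), 3))
```

Since the abstract reports that μ and σ depend on target speed and size, plugging those fitted dependencies into a rule like this is what turns the descriptive model into an error-rate predictor.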