Understanding and Improving User Adoption and Security Awareness in Password Checkup Services

Password checkup services (PCS) identify compromised, reused, or weak passwords, helping users secure at-risk accounts. However, adoption rates are low. We investigated factors influencing PCS use and password change challenges via an online survey (n=238). Key adoption factors were "perceived usefulness," "ease of use," and "self-efficacy." We also identified barriers to changing compromised passwords, including alert fatigue, low perceived urgency, and reliance on other security measures. We then designed interfaces mitigating these issues through clearer messaging and automation (e.g., simultaneous password changes and direct links to change pages). A user study (N=50) showed our designs significantly improved password change success rates, reaching 40% and 74% in runtime alert and PCS checkup reporting scenarios, respectively (compared to 16% and 60% with a baseline).

CHI 2025 | Sanghak Oh et al., Sungkyunkwan University | Topics: Passwords & Authentication; Privacy Perception & Decision-Making

BallistoBud: Heart Rate Variability Monitoring using Earbud Accelerometry for Stress Assessment

This paper examines the potential of commercial earbuds for detecting physiological biomarkers like heart rate (HR) and heart rate variability (HRV) for stress assessment. Using accelerometer (IMU) and photoplethysmography (PPG) data from earbuds, we compared these estimates with reference electrocardiogram (ECG) data from 81 healthy participants. We explored using low-power accelerometer sensors for capturing ballistocardiography (BCG) signals. However, BCG signal quality can vary due to individual differences and body motion. Therefore, BCG data quality assessment is critical before extracting any meaningful biomarkers. To address this, we introduced the ECG-gated BCG heatmap, a new method for assessing BCG signal quality. We trained a Random Forest model to identify usable signals, achieving 82% test accuracy. Filtering out unusable signals improved HR/HRV estimation accuracy to levels comparable to PPG-based estimates. Our findings demonstrate the feasibility of accurate physiological monitoring with earbuds, advancing the development of user-friendly wearable health technologies for stress management.

CHI 2025 | Md Saiful Islam et al., University of Rochester (Department of Computer Science) and Samsung Research America (Digital Health) | Topics: Sleep & Stress Monitoring; Biosensors & Physiological Monitoring

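As an illustrative aside: the quality-gating step described in the BallistoBud abstract above (train a Random Forest to flag usable BCG segments, then estimate HR/HRV only from those) can be sketched as follows. The synthetic data and per-segment feature names (SNR, beat regularity) are assumptions for illustration, not the paper's actual ECG-gated heatmap features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-segment quality features: half the segments are "clean"
# (high SNR, regular beats), half are motion-corrupted.
n = 400
snr = np.concatenate([rng.normal(10, 1, n // 2), rng.normal(2, 1, n // 2)])
regularity = np.concatenate([rng.normal(0.9, 0.05, n // 2),
                             rng.normal(0.4, 0.05, n // 2)])
X = np.column_stack([snr, regularity])
y = np.array([1] * (n // 2) + [0] * (n // 2))  # 1 = usable BCG segment

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)

# Gate: keep only segments the model deems usable before HR/HRV estimation.
usable_mask = clf.predict(X) == 1
```

The design point is that gating happens upstream of biomarker extraction, so downstream HR/HRV estimates only ever see segments the classifier trusts.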
ID.EARS: One-Ear EEG Device with Biosignal Noise for Real-Time Gesture Recognition and Various Interactions

In-ear EEG research has traditionally treated biological signals other than brainwaves, such as electromyography (EMG) and electrooculography (EOG), as unwanted noise to be removed. However, instead of discarding these signals, we developed ID.EARS, a single-ear, dry electrode-based device that utilizes these signals for real-time gesture input. We first identified the optimal position for EEG measurement around the ear using the Alpha Attenuation Response (AAR) test and collected biological signals that occur alongside brainwaves at this location. Using these signals, we created a real-time artifact detection model capable of recognizing five specific gestures: blinking, left and right winking, teeth clenching, and chewing. This model achieved over 90% accuracy in cross-validation experiments. Leveraging this model and device, we propose several application scenarios, including music control, accessibility features, MR/XR control, and healthcare services. This innovative approach extends the use of ear-EEG devices beyond healthcare, opening up possibilities for natural user interfaces.

CHI 2025 | Hyunjin An et al., Digital Health Team, Samsung Electronics | Topics: Electrical Muscle Stimulation (EMS); Hand Gesture Recognition; Brain-Computer Interface (BCI) & Neurofeedback

Characterizing and Quantifying Expert Input Behavior in League of Legends

To achieve high performance in esports, players must be able to effectively and efficiently control input devices such as a computer mouse and keyboard (i.e., input skills). Characterizing and quantifying a player’s input skills can provide useful insights, but collecting and analyzing sufficient amounts of data in ecologically valid settings remains a challenge. Targeting the popular esports game, League of Legends, we go beyond the limitations of previous studies and demonstrate a holistic pipeline of input behavior analysis: from quantifying the quality of players’ input behavior (i.e., input skill) to training players based on the analysis. Based on interviews with five top-tier professionals and analysis of input behavior logs from 4,835 matches played freely at home collected from 193 players (including 18 professionals), we confirmed that players with higher ranks in the game implement eight different input skills with higher quality. In a three-week follow-up study using a training aid that visualizes a player’s input skill levels, we found that the analysis provided players with actionable lessons, potentially leading to meaningful changes in their input behavior.

CHI 2024 | Hanbyeol Lee et al., Yonsei University | Topics: Game UX & Player Behavior; Serious & Functional Games; Role-Playing & Narrative Games

SkullID: Through-Skull Sound Conduction based Authentication for Smartglasses

This paper investigates the use of through-skull sound conduction to authenticate smartglass users. We mount a surface transducer on the right mastoid process to play cue signals and capture skull-transformed audio responses through contact microphones on various skull locations. We use the resultant bio-acoustic information as classification features. In an initial single-session study (N=25), we achieved mean Equal Error Rates (EERs) of 5.68% and 7.95% with microphones on the brow and left mastoid process. Combining the two signals substantially improves performance (to 2.35% EER). A subsequent multi-session study (N=30) demonstrates EERs are maintained over three recalls and, additionally, shows robustness to donning variations and background noise (achieving 2.72% EER). In a follow-up usability study over one week, participants report high levels of usability (as expressed by SUS scores) and that only modest workload is required to authenticate. Finally, a security analysis demonstrates the system's robustness to spoofing and imitation attacks.

CHI 2024 | Hyejin Shin et al., Samsung Research | Topics: Passwords & Authentication; Biosensors & Physiological Monitoring

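Several authentication papers in this list report Equal Error Rates (EERs), the operating point where the false-accept rate (FAR) and false-reject rate (FRR) coincide. As a hedged illustration of the metric (not any paper's evaluation code), the sketch below sweeps thresholds over synthetic genuine/impostor similarity scores; the score distributions are invented for the example.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep all observed score thresholds and return the error rate at the
    point where FAR (impostors accepted) and FRR (genuine users rejected)
    are closest to equal."""
    thresholds = np.sort(np.unique(np.concatenate([genuine, impostor])))
    best_gap, eer = 1.0, 1.0
    for t in thresholds:
        far = np.mean(impostor >= t)  # impostor scores above threshold
        frr = np.mean(genuine < t)    # genuine scores below threshold
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2
    return eer

rng = np.random.default_rng(1)
genuine = rng.normal(0.8, 0.1, 1000)   # higher score = more similar (assumed)
impostor = rng.normal(0.3, 0.1, 1000)
eer = equal_error_rate(genuine, impostor)
```

Because EER balances the two error types at a single threshold, it lets schemes with very different score scales be compared on one number.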
On the Long-Term Effects of Continuous Keystroke Authentication: Keeping User Frustration Low through Behavior Adaptation

One of the main challenges in deploying a keystroke dynamics-based continuous authentication scheme on smartphones is ensuring low error rates over time. Unstable false rejection rates (FRRs) would lead to frequent phone locks during long-term use, and deteriorating attack detection rates would jeopardize its security benefits. Because it is undesirable to train complex deep learning models directly on smartphones or to send private sensor data to servers for training, deployment poses unique constraints: solutions must be lightweight enough to be trained fully on-device. To improve authentication accuracy while satisfying such real-world deployment constraints, we propose two novel feature engineering techniques: (1) computation of pair-wise correlations between accelerometer and gyroscope sensor values, and (2) an on-device feature extraction technique that computes dynamic time warping (DTW) distances between autoencoder inputs and outputs via transfer learning. Using these two feature sets in an ensemble blender, we achieved a 6.4% equal error rate (EER) on a public dataset. In comparison, blending two state-of-the-art solutions achieved a 14.1% EER in the same test settings. Our real-world dataset evaluation showed increasing FRRs (user frustration) over two months; however, through periodic model retraining, we were able to maintain average FRRs around 2.5% while keeping attack detection rates around 89%. The proposed solution has been deployed in the latest Samsung Galaxy smartphone series to protect the secure workspace through continuous authentication.

UbiComp 2023 | Jun Ho Huh et al. | Topics: Passwords & Authentication; Privacy Perception & Decision-Making | https://dl.acm.org/doi/10.1145/3596236

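The first feature-engineering technique named in the abstract above — pair-wise correlations between accelerometer and gyroscope sensor values — can be sketched roughly as below. The window length and axis layout are assumptions for illustration; the paper's exact feature computation may differ.

```python
import numpy as np

def pairwise_sensor_correlations(accel, gyro):
    """Correlate each accelerometer axis with each gyroscope axis over one
    typing window (arrays of shape (samples, 3)), yielding a 9-dim
    (3x3) correlation feature vector."""
    feats = []
    for a in accel.T:          # iterate accel x, y, z columns
        for g in gyro.T:       # iterate gyro x, y, z columns
            feats.append(np.corrcoef(a, g)[0, 1])
    return np.array(feats)

rng = np.random.default_rng(2)
accel = rng.normal(size=(200, 3))                          # one window of x/y/z samples
gyro = 0.5 * accel + rng.normal(scale=0.5, size=(200, 3))  # correlated hand motion (synthetic)
features = pairwise_sensor_correlations(accel, gyro)
```

The intuition is that how a user's rotation (gyro) co-varies with translation (accel) while typing is a stable, cheap-to-compute behavioral signature, which suits the fully on-device training constraint described above.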
WristAcoustic: Through-Wrist Acoustic Response Based Authentication for Smartwatches

PIN and pattern locks are difficult to enter accurately on small watch screens, and are vulnerable to guessing attacks. To address these problems, this paper proposes a novel implicit biometric scheme based on through-wrist acoustic responses. A cue signal is played on a surface transducer mounted on the dorsal wrist, and the acoustic response is recorded by a contact microphone on the volar wrist. We build classifiers from these recordings for each of three simple hand poses (relax, fist, and open), and use an ensemble approach to make final authentication decisions. In an initial single-session study (N=25), we achieve an Equal Error Rate (EER) of 0.01%, substantially outperforming prior on-wrist biometric solutions. A subsequent study over five recall sessions (N=20) shows reduced performance, with 5.06% EER. We attribute this to increased variability in how participants perform hand poses over time. However, after retraining the classifiers, performance improved substantially, ultimately achieving 0.79% EER. We observed the most variability with the relax pose. Consequently, we achieve the most reliable multi-session performance by combining the fist and open poses: 0.51% EER. Further studies elaborate on these basic results. A usability evaluation reveals that users experience low workload, report high SUS scores, and perceive fluctuating levels of exertion: moderate during initial enrollment, dropping to slight during authentication. A final study examining performance in various poses and in the presence of noise demonstrates the system is robust to such disturbances and likely to work well in a wide range of real-world contexts.

UbiComp 2023 | Jun Ho Huh et al. | Topics: Foot & Wrist Interaction; Motor Impairment Assistive Input Technologies | https://dl.acm.org/doi/10.1145/3569473

Exploring Digital Communication Needs of Local Communities and Self-organized Collectives

Recent work in HCI has explored the use of ICTs for the mobilisation and organisation of values-led communities and social movements. This paper extends this line of work by exploring the design of a communication system for informal, place-based citizen collectives—also referred to as Social Solidarity Movements. The distinctive characteristics of such collectives, namely their decentralised, bottom-up, self-organised structure and their lack of monetary resources, pose interesting challenges for communication technology design. The work reported in this paper sought to explore how the values and practices of such collectives can be embodied in mobile communication tools. A system was designed to mirror on-the-ground informal organisational structures, its primary goal being to serve as a probe for research and discussion. Our findings highlight the diversity of channels and organisational structures prevailing in these contexts, their participatory nature, and issues of temporality, anonymity, privacy, and trust, all of which must be considered when designing technologies to support cooperative work. We contribute methodological insights and design implications for mobile technologies underpinning the work of social collectives and their practices.

MobileHCI 2023 | Katerina El Raheb et al. | Topics: Context-Aware Computing; Community Engagement & Civic Technology; Participatory Design

Identifying Multimodal Context Awareness Requirements for Supporting User Interaction with Procedural Videos

Following along with how-to videos requires alternating focus between understanding procedural video instructions and performing them. How to support these continuous context switches for the user remains largely unexplored. In this paper, we describe a user study with thirty participants who performed an hour-long cooking task while interacting with a Wizard-of-Oz hands-free interactive system that is aware of both their cooking progress and environment contexts. Through analysis of the session scripts, we identify a dichotomy between participant query differences and workflow alignment similarities, under-studied interactions that require AI functionality beyond video navigation alone, and queries that call for multimodal sensing of a user’s environment. By understanding the assistant experience through the participants’ interactions, we identify design implications for a smart assistant that can discern a user’s task completion flow and personal characteristics, accommodate requests within and external to the task domain, and support non-voice-based queries.

CHI 2023 | Georgianna Lin et al., University of Toronto | Topics: Voice User Interface (VUI) Design; Context-Aware Computing

GestureMeter: Design and Evaluation of a Gesture Password Strength Meter

Gestures drawn on touchscreens have been proposed as an authentication method to secure access to smartphones. They provide good usability and a theoretically large password space. However, recent work has demonstrated that users tend to select simple or similar gestures as their passwords, rendering them susceptible to dictionary-based guessing attacks. To improve their security, this paper describes a novel gesture password strength meter that interactively provides security assessments and improvement suggestions based on a scoring algorithm that combines a probabilistic model, a gesture dictionary, and a set of novel stroke heuristics. We evaluate this system in both online and offline settings and show it supports the creation of gestures that are significantly more resistant to guessing attacks (by up to 67%) while maintaining performance on usability metrics such as recall success rate and time. We conclude that gesture password strength meters can help users select more secure gesture passwords.

CHI 2023 | Eunyong Cheon et al., UNIST | Topics: Passwords & Authentication

Remote Breathing Rate Tracking in Stationary Position Using the Motion and Acoustic Sensors of Earables

Breathing rate is critical to the user's respiratory health, yet it is hard to track outside the clinical context without specialized devices. Earables could provide a convenient solution for tracking breathing rate anywhere by leveraging the user's breathing-related motion and sound, captured through the earables' motion sensors and microphones. However, small non-breathing head movements or background noises during the assessment affect estimation accuracy. While noise filtering improves accuracy, it can discard valid measurements. This paper presents a multimodal approach to tracking the user's breathing rate that balances accuracy and data retention: a signal-processing-based algorithm on the earables' motion sensors and a lightweight machine-learning algorithm on their acoustic sensors. A user study with 30 participants shows that the system can accurately calculate breathing rate (Mean Absolute Error < 2 breaths per minute) while retaining most breathing sessions (75%) performed in real-world settings. This work provides an essential direction for remote breathing rate monitoring.

CHI 2023 | Tousif Ahmed et al., Samsung Research America, Inc. | Topics: Biosensors & Physiological Monitoring

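One plausible signal-processing route for the motion-sensor half of the pipeline described above is spectral peak picking in the typical respiration band. This is a generic sketch under assumed parameters (50 Hz IMU sampling, a 0.1-0.7 Hz band, synthetic motion data), not the paper's algorithm.

```python
import numpy as np

def breathing_rate_bpm(signal, fs):
    """Estimate breathing rate as the dominant FFT peak inside the
    respiration band (~0.1-0.7 Hz, i.e., 6-42 breaths per minute)."""
    signal = signal - np.mean(signal)            # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.7)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0

fs = 50.0                       # Hz; plausible earable IMU rate (assumption)
t = np.arange(0, 60, 1 / fs)    # one-minute stationary session
# Synthetic breathing-related motion: 15 breaths/min (0.25 Hz) plus noise
sig = (np.sin(2 * np.pi * 0.25 * t)
       + 0.3 * np.random.default_rng(3).normal(size=t.size))
bpm = breathing_rate_bpm(sig, fs)
```

Restricting the peak search to the respiration band is what keeps small non-breathing head movements (which mostly land at other frequencies) from dominating the estimate.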
EmoBalloon - Conveying Emotional Arousal in Text Chats with Speech Balloons

Text chat applications are an integral part of daily social and professional communication. However, messages sent over text chat applications do not convey vocal or nonverbal information from the sender, and detecting the emotional tone in text-only messages is challenging. In this paper, we explore the effects of speech balloon shapes on sender-receiver agreement regarding the emotionality of a text message. We first investigated the relationship between the shape of a speech balloon and the emotionality of speech text in Japanese manga. Based on these results, we created a system that automatically generates speech balloons matching the intensity of emotional arousal using an Auxiliary Classifier Generative Adversarial Network (ACGAN). Our evaluation results from a controlled experiment suggest that emotional speech balloons outperform emoticons in decreasing the differences between message senders' and receivers' perceptions of the level of emotional arousal in text messages.

CHI 2022 | Toshiki Aoki et al., The University of Tokyo | Topics: Conversational Chatbots; Agent Personality & Anthropomorphism; Generative AI (Text, Image, Music, Video)

Towards Understanding People’s Experiences of AI Computer Vision Fitness Instructor Apps

This paper explores people's experiences of using existing AI computer vision fitness instructor mobile applications and presents a series of design guidelines for this space. The recent rise in on-device AI computer vision and dialogue systems has facilitated a growing number of fitness-related instructional apps. However, these technologies have yet to be explored within the HCI community. To investigate this domain, we recruited 12 participants and asked them to engage with five recently launched AI fitness instructor apps. We interviewed participants and thematically analysed transcripts to understand their experience and expectations of these technologies. We contribute five main themes from our findings: Limitations of Computer Vision, Visual Feedback, Dialogue with the AI, Adapting to the User, and Workout with the Instructor. Based upon our findings, we present five design considerations relating to three key areas: feedback and motivation, personalising the experience, and building a relationship with the AI. Our design considerations extend beyond existing research by focusing specifically on what participants expect and desire from an AI instructor experience, in order to inform designers creating AI experiences in this domain.

DIS 2021 | Andrew Garbett et al. | Topics: Generative AI (Text, Image, Music, Video); AI-Assisted Decision-Making & Automation; Fitness Tracking & Physical Activity Monitoring

On Smartphone Users' Difficulty with Understanding Implicit Authentication

Implicit authentication (IA) has recently become a popular approach for providing physical security on smartphones. It relies on behavioral traits (e.g., gait patterns) for user identification, instead of biometric data or knowledge of a PIN. However, it is not yet known whether users can understand the semantics of this technology well enough to use it properly. We bridge this knowledge gap by evaluating how Android's Smart Lock (SL), which is the first widely deployed IA solution on smartphones, is understood by its users. We conducted a qualitative user study (N=26) and an online survey (N=331). The results suggest that users often have difficulty understanding SL semantics, leaving them unable to judge when their phone would be (un)locked. We found that various aspects of SL, such as its capabilities and its authentication factors, are confusing for the users. We also found that depth of smartphone adoption is a significant antecedent of SL comprehension.

CHI 2021 | Masoud Mehrabi Koushki et al., University of British Columbia | Topics: Privacy by Design & User Control; Passwords & Authentication; Privacy Perception & Decision-Making

Technology Adoption and Learning Preferences for Older Adults: Evolving Perceptions, Ongoing Challenges, and Emerging Design Opportunities

Technology adoption among older adults has increased significantly in recent years. Yet, as new technologies proliferate and the demographics of aging shift, continued attention to older adults’ adoption priorities and learning preferences is required. Through semi-structured interviews, we examine the factors adults 65+ prioritize in choosing new technologies, the challenges they encounter in learning to use them, and the human and material resources they employ to support these efforts. Using a video prototype as a design probe, we present scenarios to explore older adults’ perceptions of adoption and learning new technologies within the lens of health management support, a relevant and beneficial context for older adults. Our results reveal that participants appreciated self-paced learning, remote support, and flexible learning methods, and were less reliant on instruction manuals than in the past. This work provides insight into older adults’ evolving challenges, learning needs, and design opportunities for next generation learning support.

CHI 2021 | Carolyn Pang et al., McGill University | Topics: Aging-Friendly Technology Design; Prototyping & User Testing

TalkingBoogie: Collaborative Mobile AAC System for Non-verbal Children with Developmental Disabilities and Their Caregivers

Augmentative and alternative communication (AAC) technologies are widely used to enable communication for non-verbal children. For AAC-aided communication to be successful, caregivers should support children with consistent intervention strategies in various settings. As such, caregivers need to continuously observe and discuss children's AAC usage to create a shared understanding of these strategies. However, caregivers often find it challenging to collaborate effectively with one another due to a lack of family involvement and the unstructured process of collaboration. To address these issues, we present TalkingBoogie, which consists of two mobile apps: TalkingBoogie-AAC for caregiver-child communication, and TalkingBoogie-coach for supporting caregiver collaboration. Working together, these applications provide contextualized layouts for symbol arrangement, scaffold the process of sharing and discussing observations, and induce caregivers' balanced participation. A two-week deployment study with four groups (N=11) found that TalkingBoogie helped increase mutual understanding of strategies and encourage balanced participation between caregivers with reduced cognitive loads.

CHI 2020 | Donghoon Shin et al., Seoul National University | Topics: Augmentative & Alternative Communication (AAC); Special Education Technology

Assessing Severity of Pulmonary Obstruction from Respiration Phase-Based Wheeze-Sensing Using Mobile Sensors

Obstructive pulmonary diseases limit airflow from the lungs and severely affect patients' quality of life. Wheezing is one of their most prominent symptoms. The high requirements imposed by traditional diagnosis methods make regular monitoring of pulmonary obstruction challenging, which hinders the opportunity for early intervention and prevention of significant exacerbation. In this work, we explore the feasibility of developing a mobile sensor-based system as a convenient means of assessing the severity of pulmonary obstruction via respiration phase-based symptomatic wheeze sensing. We conduct a study with 131 subjects (91 patients and 40 healthy) for the detection (F1: 87.96%) and characterization (F1: 79.47%) of wheeze. Subsequently, we develop novel wheeze metrics, which show a significant correlation (Pearson's correlation: -0.22, p-value: 0.024) with a standard spirometry measure of pulmonary obstruction severity. This work takes a principal step towards the unobtrusive assessment of pulmonary condition from mobile sensor interactions.

CHI 2020 | Soujanya Chatterjee et al., University of Memphis | Topics: Telemedicine & Remote Patient Monitoring; Biosensors & Physiological Monitoring

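The validation statistic quoted in the abstract above (Pearson's r with a p-value) can be illustrated on synthetic stand-in data as follows. The variables are invented for the example, not the study's wheeze metrics or spirometry values, and the p-value here uses a large-sample normal approximation rather than the exact t distribution.

```python
import math
import numpy as np

def pearson_r_with_p(x, y):
    """Pearson's r plus an approximate two-sided p-value (Gaussian
    approximation to the t statistic; adequate for large n)."""
    r = np.corrcoef(x, y)[0, 1]
    n = len(x)
    t = r * math.sqrt((n - 2) / (1 - r * r))
    p = math.erfc(abs(t) / math.sqrt(2))  # two-sided tail probability
    return r, p

rng = np.random.default_rng(4)
# Stand-in data: a "wheeze metric" negatively related by construction to a
# "spirometry severity" measure, mimicking the reported negative correlation.
wheeze_metric = rng.normal(size=100)
spirometry = -0.6 * wheeze_metric + rng.normal(size=100)
r, p = pearson_r_with_p(wheeze_metric, spirometry)
```

A negative r with a small p-value is the shape of evidence the abstract reports: higher wheeze-metric values co-occurring with worse spirometry readings more often than chance would predict.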
GAZED: Gaze-guided Cinematic Editing of Wide-Angle Monocular Video Recordings

CHI 2020 | K. L. Bhanu Moorthy et al., International Institute of Information Technology, Hyderabad | Topics: 3D Modeling & Animation

Voice Presentation Attack Detection through Text-Converted Voice Command Analysis

Voice assistants are quickly being upgraded to support advanced, security-critical commands such as unlocking devices, checking emails, and making payments. In this paper, we explore the feasibility of using users' text-converted voice command utterances as classification features to help identify users' genuine commands and detect suspicious commands. To maintain high detection accuracy, our approach starts with a globally trained attack detection model (immediately available for new users) and gradually switches to a user-specific model tailored to the utterance patterns of a target user. To evaluate accuracy, we used a real-world voice assistant dataset consisting of about 34.6 million voice commands collected from 2.6 million users. Our evaluation results show that this approach is capable of achieving about 3.4% equal error rate (EER), detecting 95.7% of attacks when an optimal threshold value is used. For users who frequently issue security-critical (attack-like) commands, we still achieve an EER below 5%.

CHI 2019 | Il-Youp Kwak et al., Samsung Research | Topics: Intelligent Voice Assistants (Alexa, Siri, etc.); Voice Accessibility; Deepfake & Synthetic Media Detection
