SkullID: Through-Skull Sound Conduction based Authentication for Smartglasses
This paper investigates the use of through-skull sound conduction to authenticate smartglass users. We mount a surface transducer on the right mastoid process to play cue signals and capture skull-transformed audio responses through contact microphones on various skull locations. We use the resultant bio-acoustic information as classification features. In an initial single-session study (N=25), we achieved mean Equal Error Rates (EERs) of 5.68% and 7.95% with microphones on the brow and left mastoid process. Combining the two signals substantially improves performance (to 2.35% EER). A subsequent multi-session study (N=30) demonstrates that EERs are maintained over three recalls and, additionally, shows robustness to donning variations and background noise (achieving 2.72% EER). In a follow-up usability study over one week, participants report high levels of usability (as expressed by SUS scores) and that only modest workload is required to authenticate. Finally, a security analysis demonstrates the system's robustness to spoofing and imitation attacks.
CHI 2024 · Hyejin Shin et al. · Samsung Research · Tags: Passwords & Authentication; Biosensors & Physiological Monitoring
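The abstract's central measurement, per-channel EERs improving under two-microphone fusion, can be illustrated with a short sketch. The following is a minimal, hypothetical example of computing an EER from genuine and impostor score distributions and fusing two channels by equal-weight score averaging; the paper's actual features, classifiers, and fusion scheme are not reproduced here, and all score distributions below are synthetic.

```python
import numpy as np

def compute_eer(genuine_scores, impostor_scores):
    """Equal Error Rate: the operating point where the false accept
    rate (FAR) equals the false reject rate (FRR)."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, best_eer = 1.0, None
    for t in thresholds:
        far = np.mean(impostor_scores >= t)   # impostors wrongly accepted
        frr = np.mean(genuine_scores < t)     # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

# Hypothetical per-microphone similarity scores (higher = more genuine-like).
rng = np.random.default_rng(0)
brow_gen, brow_imp = rng.normal(0.8, 0.1, 200), rng.normal(0.5, 0.1, 200)
mast_gen, mast_imp = rng.normal(0.75, 0.12, 200), rng.normal(0.5, 0.12, 200)

# Equal-weight score-level fusion (an assumption; the paper's exact
# combination scheme may differ).
fused_gen, fused_imp = (brow_gen + mast_gen) / 2, (brow_imp + mast_imp) / 2
print(f"brow EER:  {compute_eer(brow_gen, brow_imp):.3f}")
print(f"fused EER: {compute_eer(fused_gen, fused_imp):.3f}")
```

Averaging reduces per-channel noise, which is one plausible mechanism behind the fused 2.35% EER outperforming either microphone alone.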
ThumbAir: In-Air Typing for Head Mounted Displays
Typing while wearing a standalone Head Mounted Display (HMD)---a system without external input devices or sensors to support text entry---is hard. To address this issue, prior work has used external trackers to monitor finger movements to support in-air typing on virtual keyboards. While performance has been promising, current systems are practically infeasible: finger movements may be visually occluded from inside-out HMD-based tracking systems or are otherwise awkward and uncomfortable to perform. To address these issues, this paper explores an alternative approach. Taking inspiration from the prevalence of thumb-typing on mobile phones, we describe four studies exploring, defining and validating the performance of ThumbAir, an in-air thumb-typing system implemented on a commercial HMD. The first study explores viable target locations, ultimately recommending eight target sites. The second study collects performance data for taps on pairs of these targets, both to inform the design of a target selection procedure and to support a computational design process for selecting a keyboard layout. The final two studies validate the selected keyboard layout in word repetition and phrase entry tasks, ultimately achieving final entry rates of 27.1 and 13.73 WPM. Qualitative data captured in the final study indicate that the discreet movements required to operate ThumbAir, in comparison to the larger scale finger and hand motions used in a baseline design from prior work, lead to reduced levels of perceived exertion and physical demand and are rated as acceptable for use in a wider range of social situations.
https://dl.acm.org/doi/10.1145/3569474
UbiComp 2023 · Hyunjae Gil et al. · Tags: Eye Tracking & Gaze Interaction; Voice User Interface (VUI) Design; Immersion & Presence Research
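The "computational design process" step lends itself to a compact sketch. Below is a toy, hypothetical version: given pairwise movement costs between the eight recommended target sites (standing in for the tap-time data from the second study) and a transition-frequency matrix, exhaustively search assignments for the one minimizing expected movement cost. The real process optimized richer objectives over real study data.

```python
import itertools, random

# Hypothetical pairwise movement costs (seconds) between the eight targets.
random.seed(1)
N_TARGETS = 8
cost = [[0 if i == j else random.uniform(0.2, 0.6) for j in range(N_TARGETS)]
        for i in range(N_TARGETS)]

# Hypothetical transition frequencies between the eight letter groups to be
# placed (a simplified stand-in for bigram statistics of English text).
groups = list(range(N_TARGETS))
freq = [[random.random() for _ in range(N_TARGETS)] for _ in range(N_TARGETS)]

def expected_cost(assignment):
    # Expected movement time: frequent transitions should land on cheap pairs.
    return sum(freq[a][b] * cost[assignment[a]][assignment[b]]
               for a in groups for b in groups)

# Brute force is feasible at this scale (8! = 40,320 candidate assignments).
best = min(itertools.permutations(range(N_TARGETS)), key=expected_cost)
print("best assignment:", best, "cost:", round(expected_cost(best), 3))
```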
WristAcoustic: Through-Wrist Acoustic Response Based Authentication for Smartwatches
PIN and pattern lock are difficult to enter accurately on small watch screens and are vulnerable to guessing attacks. To address these problems, this paper proposes a novel implicit biometric scheme based on through-wrist acoustic responses. A cue signal is played on a surface transducer mounted on the dorsal wrist and the acoustic response is recorded by a contact microphone on the volar wrist. We build classifiers using these recordings for each of three simple hand poses (relax, fist and open), and use an ensemble approach to make final authentication decisions. In an initial single-session study (N=25), we achieve an Equal Error Rate (EER) of 0.01%, substantially outperforming prior on-wrist biometric solutions. A subsequent five-session recall study (N=20) shows reduced performance, with 5.06% EER. We attribute this to increased variability in how participants perform hand poses over time. However, after retraining the classifiers, performance improved substantially, ultimately achieving 0.79% EER. We observed the most variability with the relax pose. Consequently, we achieve the most reliable multi-session performance by combining the fist and open poses: 0.51% EER. Further studies elaborate on these basic results. A usability evaluation reveals users experience low workload as well as reporting high SUS scores and fluctuating levels of perceived exertion: moderate during initial enrollment, dropping to slight during authentication. A final study examining performance in various poses and in the presence of noise demonstrates the system is robust to such disturbances and likely to work well in a wide range of real-world contexts.
https://dl.acm.org/doi/10.1145/3569473
UbiComp 2023 · Jun Ho Huh et al. · Tags: Foot & Wrist Interaction; Motor Impairment Assistive Input Technologies
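The ensemble decision over hand poses can be sketched as follows. The per-pose scoring and the score-averaging combination here are illustrative assumptions (the paper trains real classifiers per pose); the sketch only shows the shape of combining the fist and open poses, the pairing the abstract reports as most reliable across sessions, into a single accept/reject decision.

```python
import numpy as np

def pose_score(features, template):
    """Hypothetical per-pose similarity: negative distance to the enrolled
    template. A real system would use a trained classifier's score."""
    return -np.linalg.norm(features - template)

def authenticate(sample_by_pose, templates, threshold):
    """Ensemble over the fist and open poses (relax excluded, mirroring the
    paper's finding that it varied most across sessions). Averaging the two
    pose scores is an assumption about the combination rule."""
    scores = [pose_score(sample_by_pose[p], templates[p]) for p in ("fist", "open")]
    return np.mean(scores) >= threshold

rng = np.random.default_rng(2)
templates = {p: rng.normal(size=64) for p in ("fist", "open")}
# A genuine probe: the enrolled response plus small acoustic variation.
probe = {p: templates[p] + rng.normal(scale=0.1, size=64) for p in ("fist", "open")}
print(authenticate(probe, templates, threshold=-2.0))  # -> True (accept)
```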
GestureMeter: Design and Evaluation of a Gesture Password Strength Meter
Gestures drawn on touchscreens have been proposed as an authentication method to secure access to smartphones. They provide good usability and a theoretically large password space. However, recent work has demonstrated that users tend to select simple or similar gestures as their passwords, rendering them susceptible to dictionary-based guessing attacks. To improve their security, this paper describes a novel gesture password strength meter that interactively provides security assessments and improvement suggestions based on a scoring algorithm that combines a probabilistic model, a gesture dictionary, and a set of novel stroke heuristics. We evaluate this system in both online and offline settings and show it supports creation of gestures that are significantly more resistant to guessing attacks (by up to 67%) while also maintaining performance on usability metrics such as recall success rate and time. We conclude that gesture password strength meters can help users select more secure gesture passwords.
CHI 2023 · Eunyong Cheon et al. · UNIST · Tags: Passwords & Authentication
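A strength meter of this shape, a probabilistic model plus a dictionary plus stroke heuristics, can be sketched as a single scoring function. Everything below is hypothetical: the direction-string abstraction of a gesture, the weights, and the two heuristics are illustrative stand-ins showing only how the three signals might combine into one score.

```python
import math

def gesture_strength(gesture, log_prob, dictionary, stroke_heuristics):
    """Hypothetical combined score: rarer under the probabilistic model,
    absent from the dictionary, and passing more stroke heuristics all
    raise the estimated guessing resistance. Weights are illustrative."""
    surprisal = -log_prob(gesture)                  # bits under the model
    dict_penalty = 10.0 if gesture in dictionary else 0.0
    heuristic_bonus = sum(h(gesture) for h in stroke_heuristics)
    return max(0.0, surprisal - dict_penalty + heuristic_bonus)

# Toy instantiation: gestures abstracted to strings of stroke directions.
def log_prob(g):
    return len(g) * math.log2(1 / 8)                # 8 equally likely directions

dictionary = {"RRDD", "LLUU"}                       # stand-in common gestures
heuristics = [lambda g: 2.0 if len(set(g)) >= 3 else 0.0,  # direction variety
              lambda g: 1.0 if len(g) >= 6 else 0.0]       # gesture length
print(gesture_strength("RDLURD", log_prob, dictionary, heuristics))  # 21.0
```

An interactive meter would recompute this score as the user draws and surface suggestions (e.g., "add a third stroke direction") whenever a heuristic fails.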
Sad or just jealous? Using Experience Sampling to Understand and Detect Negative Affective Experiences on Instagram
Social Network Services (SNSs) evoke diverse affective experiences. While most are positive, many authors have documented both the negative emotions that can result from browsing SNSs and their impact: Facebook depression is a common term for the more severe results. However, while the importance of the emotions experienced on SNSs is clear, methods to catalog them, and systems to detect them, are less well developed. Accordingly, this paper reports on two studies using a novel contextually triggered Experience Sampling Method to log surveys immediately after using Instagram, a popular image-based SNS, thus minimizing recall biases. The first study improves our understanding of the emotions experienced while using SNSs. It suggests that common negative experiences relate to appearance comparison and envy. The second study captures smartphone sensor data during Instagram sessions to detect these two emotions, ultimately achieving peak accuracies of 95.78% (binary appearance comparison) and 93.95% (binary envy).
CHI 2022 · Mintra Ruensuk et al. · UNIST · Tags: Social Platform Design & User Behavior; Cyberbullying & Online Harassment; Online Identity & Self-Presentation
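The contextually triggered sampling mechanic reduces to watching foreground-app events and firing a survey the moment a meaningful session ends. A platform-agnostic sketch follows; the Instagram package name is real, but the event stream, threshold, and trigger representation are assumptions (on Android the events would come from the system's usage-stats facilities, not shown here).

```python
TARGET_APP = "com.instagram.android"

def watch_sessions(foreground_events, min_session_s=10):
    """Consume (timestamp_s, package) foreground-app events and yield a
    survey trigger as soon as a sufficiently long Instagram session ends,
    so the questionnaire arrives with minimal recall delay."""
    session_start = None
    for ts, package in foreground_events:
        if package == TARGET_APP:
            if session_start is None:
                session_start = ts          # session begins
        elif session_start is not None:
            if ts - session_start >= min_session_s:
                yield ("trigger_survey", ts)  # launch the ESM survey now
            session_start = None            # session ends either way

events = [(0, "launcher"), (5, TARGET_APP), (40, "launcher"),
          (50, TARGET_APP), (52, "launcher")]
print(list(watch_sessions(events)))  # survey after the 35 s session only
```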
SonarID: Using Sonar to Identify Fingers on a Smartwatch
The diminutive size of wrist wearables has prompted the design of many novel input techniques to increase expressivity. Finger identification, or assigning different functionality to different fingers, has been frequently proposed. However, while the value of the technique seems clear, its implementation remains challenging, often relying on external devices (e.g., worn magnets) or explicit instructions. Addressing these limitations, this paper explores a novel approach to natural and unencumbered finger identification on an unmodified smartwatch: sonar. To do this, we adapt an existing finger tracking smartphone sonar implementation---rather than extract finger motion, we process raw sonar fingerprints representing the complete sonar scene recorded during a touch. We capture data from 16 participants operating a smartwatch and use their sonar fingerprints to train a deep learning recognizer that identifies taps by the thumb, index, and middle fingers with an accuracy of up to 93.7%, sufficient to support meaningful application development.
CHI 2022 · Jiwan Kim et al. · UNIST · Tags: Foot & Wrist Interaction; Biosensors & Physiological Monitoring
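Treating the raw sonar scene as an image to be classified by a deep network can be sketched as below. The three-finger class count follows the abstract, but the architecture, layer sizes, and input resolution are illustrative assumptions rather than the paper's actual network (a PyTorch sketch).

```python
import torch
import torch.nn as nn

class FingerIDNet(nn.Module):
    """Minimal CNN sketch for classifying raw sonar 'fingerprints'
    (echo-profile images) into thumb / index / middle taps."""
    def __init__(self, n_fingers=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(64), nn.ReLU(), nn.Linear(64, n_fingers))

    def forward(self, x):
        # x: (batch, 1, H, W) sonar fingerprint; returns per-finger logits.
        return self.classifier(self.features(x))

model = FingerIDNet()
fake_fingerprints = torch.randn(4, 1, 64, 64)   # four hypothetical touches
print(model(fake_fingerprints).shape)           # -> torch.Size([4, 3])
```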
FingerText: Exploring and Optimizing Performance for Wearable, Mobile and One-Handed Typing
Typing on wearables while situationally impaired, such as while walking, is challenging. However, while HCI research on wearable typing is diverse, existing work focuses on stationary scenarios and fine-grained input that will likely perform poorly when users are on the go. To address this issue, we explore single-handed wearable typing using inter-hand touches between the thumb and fingers, a modality we argue will be robust to the physical disturbances inherent to input while mobile. We first examine the impact of walking on the performance of these touches, noting no significant differences in accuracy or speed, then feed our study data into a multi-objective optimization process to design keyboard layouts (for both five and ten keys) capable of supporting rapid, accurate, comfortable, and unambiguous typing. A final study tests these layouts against QWERTY baselines and reports improvements of up to 10.45% in entry rate (WPM) and 39.44% in error rate (WER) when users type while walking.
CHI 2021 · DoYoung Lee et al. · UNIST · Tags: Haptic Wearables; Foot & Wrist Interaction; Prototyping & User Testing
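The multi-objective step can be illustrated with a Pareto-front filter over candidate layouts. Everything below is a toy stand-in: the candidate encoding and the three cost functions merely gesture at the study-derived speed, error, and ambiguity measures the paper actually optimized.

```python
import random

random.seed(3)

def pareto_front(candidates, objectives):
    """Keep candidates not dominated on every objective (all objectives
    are costs to minimize): the core of a multi-objective layout search."""
    def dominates(a, b):
        fa, fb = objectives(a), objectives(b)
        return (all(x <= y for x, y in zip(fa, fb))
                and any(x < y for x, y in zip(fa, fb)))
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Hypothetical layouts: assignments of five letter groups to five
# thumb-to-finger touch locations (duplicates allowed, causing ambiguity).
layouts = [tuple(random.choices(range(5), k=5)) for _ in range(50)]

def objectives(layout):
    speed = sum(abs(a - b) for a, b in zip(layout, layout[1:]))  # travel cost
    error = sum(1 for a, b in zip(layout, layout[1:]) if abs(a - b) == 1)
    ambiguity = len(layout) - len(set(layout))  # groups sharing one location
    return (speed, error, ambiguity)

front = pareto_front(layouts, objectives)
print(f"{len(front)} non-dominated layouts, e.g. {front[0]}")
```

A designer would then pick from the non-dominated set according to which trade-off (speed versus ambiguity, say) matters most for the deployment.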
Dynamic Field of View Restriction in 360° Video: Aligning Optical Flow and Visual SLAM to Mitigate VIMS
Head-Mounted Display based Virtual Reality is proliferating. However, Visually Induced Motion Sickness (VIMS), which prevents many from using VR without discomfort, bars widespread adoption. Prior work has shown that limiting the Field of View (FoV) can reduce VIMS at the cost of also reducing presence. Systems that dynamically adjust a user's FoV may be able to balance these concerns. To explore this idea, we present a technique for standard 360° video that shrinks the FoV only during VIMS-inducing scenes. It uses Visual Simultaneous Localization and Mapping and peripheral optical flow to compute camera movements and reduces the FoV during rapid motion or optical flow. A user study (N=23) comparing 360° video with unrestricted FoVs (90°), reduced fixed FoVs (40°) and dynamic FoVs (40°–90°) revealed that dynamic FoVs mitigate VIMS while maintaining presence. We close by discussing the user experience of dynamic FoVs and recommendations for how they can help make VR comfortable and immersive for all.
CHI 2021 · Paulo Bala et al. · Universidade Nova de Lisboa; Instituto Superior Técnico, U. de Lisboa · Tags: Motion Sickness & Passenger Experience; Immersion & Presence Research
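The core restriction policy, widening the FoV when the scene is calm and shrinking it toward 40° during rapid camera motion or high peripheral flow, can be sketched in a few lines. The normalization constants and the max-of-two-signals intensity measure below are assumptions, not the paper's calibration.

```python
def dynamic_fov(camera_speed, peripheral_flow,
                fov_min=40.0, fov_max=90.0,
                speed_norm=2.0, flow_norm=30.0):
    """Estimate how provocative the current frame is from SLAM camera
    velocity and peripheral optical flow, then interpolate the FoV
    between the paper's 40 and 90 degree bounds."""
    intensity = min(1.0, max(camera_speed / speed_norm,
                             peripheral_flow / flow_norm))
    return fov_max - intensity * (fov_max - fov_min)

print(dynamic_fov(0.1, 2.0))   # calm scene: FoV stays near 90 (unrestricted)
print(dynamic_fov(3.0, 50.0))  # rapid motion: FoV clamps to 40
```

In practice the output would also be smoothed over time so the vignette does not flicker between frames.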
SchemaBoard: Supporting Correct Assembly of Schematic Circuits using Dynamic In-Situ Visualization
Assembling circuits on breadboards using reference designs is a common activity among makers. While tools like Fritzing offer a simplified visualization of how components and wires are connected, such pictorial depictions of circuits are rare in formal educational materials and the vast bulk of online technical documentation. Electronic schematics are more common but are perceived as challenging and confusing by novice makers. To improve access to schematics, we propose SchemaBoard, a system for assisting makers in assembling and inspecting circuits on breadboards from schematic source materials. SchemaBoard uses an LED matrix integrated underneath a working breadboard to visualize via light patterns where and how components should be placed, or to highlight elements of circuit topology such as electrical nets and connected pins. This paper presents a formative study with 16 makers, the SchemaBoard system, and a summative evaluation with an additional 16 users. Results indicate that SchemaBoard is effective in reducing both the time and the number of errors associated with building a circuit from a reference schematic, and in inspecting the circuit for correctness after assembly.
UIST 2020 · Yoonji Kim et al. · Tags: Circuit Making & Hardware Prototyping; User Research Methods (Interviews, Surveys, Observation)
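The in-situ net highlighting reduces to mapping electrically connected breadboard holes onto LED-matrix indices. A minimal sketch under an assumed row-major, one-LED-per-hole layout follows; the grid dimensions and coordinate scheme are assumptions, not the paper's actual hardware mapping.

```python
# Hypothetical breadboard grid: 10 rows of tie points, 30 columns.
ROWS, COLS = 10, 30

def hole_to_led(row, col):
    """One LED under each breadboard hole, row-major indexing."""
    return row * COLS + col

# A net is the set of holes that are electrically connected, e.g. the
# column tie points joining an LED anode, a resistor leg, and a jumper.
net_vcc = [(0, 4), (1, 4), (2, 4), (0, 17)]

def frame_for_net(net):
    """LED indices to illuminate when the user selects this net."""
    return {hole_to_led(r, c) for r, c in net}

print(sorted(frame_for_net(net_vcc)))  # indices the matrix driver would light
```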
Nailz: Sensing Hand Input with Touch Sensitive Nails
Touches between the fingers of an unencumbered hand represent a ready-to-use, eyes-free and expressive input space suitable for interacting with wearable devices such as smart glasses or watches. While prior work has focused on touches to the inner surface of the hand, touches to the nails, a practical site for mounting sensing hardware, have been comparatively overlooked. We extend prior implementations of a single touch-sensing nail to a full set of five and explore their potential for wearable input. We present design ideas and an input space of 144 touches (taps, flicks and swipes) derived from an ideation workshop. We complement this with data from two studies characterizing the subjective comfort and objective characteristics (task time, accuracy) of each touch. We conclude by synthesizing this material into a set of 29 viable nail touches, assessing their performance in a final study and illustrating how they could be used by presenting, and qualitatively evaluating, two example applications.
CHI 2020 · DoYoung Lee et al. · Ulsan National Institute of Science and Technology · Tags: Haptic Wearables; Foot & Wrist Interaction
Whiskers: Exploring the Use of Ultrasonic Haptic Cues on the Face
Haptic cues are a valuable feedback mechanism for smart glasses. Prior work has shown how they can support navigation, deliver notifications and cue targets. However, a focus on actuation technologies such as mechanical tactors or fans has restricted the scope of research to a small number of cues presented at fixed locations. To move beyond this limitation, we explore perception of in-air ultrasonic haptic cues on the face. We present two studies examining the fundamental properties of localization, duration and movement perception on three facial sites suitable for use with glasses: the cheek, the center of the forehead, and above the eyebrow. The center of the forehead led to optimal performance with a localization error of 3.77 mm and accurate duration (80%) and movement perception (87%). We apply these findings in a study delivering eight different ultrasonic notifications and report mean recognition rates of up to 92.4% (peak: 98.6%). We close with design recommendations for ultrasonic haptic cues on the face.
CHI 2018 · Hyunjae Gil et al. · Ulsan National Institute of Science and Technology · Tags: In-Vehicle Haptic, Audio & Multimodal Feedback; Mid-Air Haptics (Ultrasonic); Vibrotactile Feedback & Skin Stimulation
Designing Socially Acceptable Hand-to-Face Input
Wearable head-mounted displays combine rich graphical output with an impoverished input space. Hand-to-face gestures have been proposed as a way to add input expressivity while keeping control movements unobtrusive. To better understand how to design such techniques, we describe an elicitation study conducted in a busy public space in which pairs of users were asked to generate unobtrusive, socially acceptable hand-to-face input actions. Based on the results, we describe five design strategies: miniaturizing, obfuscating, screening, camouflaging and re-purposing. We instantiate these strategies in two hand-to-face input prototypes, one based on touches to the ear and the other based on touches of the thumbnail to the chin or cheek. Performance assessments characterize time and error rates with these devices. The paper closes with a validation study in which pairs of users experience the prototypes in a public setting; we gather data on the social acceptability of the designs and reflect on the effectiveness of the different strategies.
UIST 2018 · DoYoung Lee et al. · Tags: Haptic Wearables; Hand Gesture Recognition