“I feel lonely when they stop chatting”: Exploring Auditory Comment Display for Eyes-Free Social-Viewing Experience in Online Music Videos
Online music videos on video-sharing platforms offer comments that viewers can read to enrich their social-viewing experience. However, because these comments are presented as visual text, they are inaccessible to eyes-free listeners, such as those who listen to music videos while jogging, commuting, or showering. To address this gap, we explore Auditory Comment Display (ACD), which delivers text comments via text-to-speech (TTS) synthesis, enabling eyes-free listeners to enjoy a social-viewing experience while listening to music videos. Using music concert videos as example content, we prototyped varying comment-to-speech styles and conducted a formative study (N = 8), prototyping (N = 10), and a user study (N = 12). The results indicate that ACD enhances eyes-free listeners' social-viewing experience, although it may not be appropriate for certain situations and users. We discuss design implications and future directions for the eyes-free social-viewing experience via comment-to-speech synthesis.
2025 · Yuki Abe et al. · Voice Technology · CSCW

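A minimal sketch of the comment-to-speech idea described in this entry: timestamped comments are read aloud at their original playback times while the music plays in the background. It assumes the pyttsx3 offline TTS library; the comment data, timing fields, and speaking rate are hypothetical placeholders, not the ACD prototype itself.

```python
# Read timestamped video comments aloud with TTS, in sync with playback time.
# Assumes the pyttsx3 offline TTS library; comment data is illustrative only.
import time
import pyttsx3

comments = [  # (seconds from video start, comment text) -- hypothetical examples
    (12.0, "This intro gives me chills every time!"),
    (34.5, "The crowd is so loud here."),
    (61.0, "Best chorus of the whole concert."),
]

def speak_comments(comments, rate=180):
    engine = pyttsx3.init()
    engine.setProperty("rate", rate)   # words per minute; faster keeps up with busy chats
    start = time.monotonic()
    for t, text in sorted(comments):
        # Wait until the comment's original timestamp, then read it aloud.
        delay = t - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        engine.say(text)
        engine.runAndWait()

if __name__ == "__main__":
    speak_comments(comments)
```
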
“I can run at night!”: Using Augmented Reality to Support Nighttime Guided Running for Low-vision Runners
Dark environments make it difficult for low-vision (LV) individuals to run by following a sighted guide (Caller-style guided running), because insufficient illumination prevents them from using their residual vision to follow the guide and stay aware of their surroundings. We design, develop, and evaluate RunSight, an augmented reality (AR)-based assistive tool that supports LV individuals in running at night. RunSight combines a see-through HMD and image processing to enhance the wearer's visual awareness of the surrounding environment (e.g., potential hazards) and visualizes the guide's position with an AR-based overlay. To demonstrate RunSight's efficacy, we conducted a user study with 8 LV runners. The results showed that all participants could run at least 1 km (mean = 3.44 km) using RunSight, while none could engage in Caller-style guided running without it. Participants ran safely because they effectively synthesized RunSight-provided cues with information gained from runner-guide communication.
2025 · Yuki Abe et al. · Hokkaido University, Human-Computer Interaction Lab · AR Navigation & Context Awareness · Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille) · CHI

Understanding Usability of VR Pointing Methods with a Handheld-style HMD for Onsite Exhibitions
Handheld-style head-mounted displays (HMDs) are becoming increasingly popular as a convenient option for onsite exhibitions. However, they lack established practices for basic interactions, particularly pointing methods. Through a formative study involving practitioners, we found that controllers and hand gestures are the primary pointing methods in use. Building on these findings, we conducted a usability study of seven pointing methods, incorporating insights from the formative study and current virtual reality (VR) practices. The results showed that while controllers remain a viable option, hand gestures are not recommended. Notably, dwell-time-based methods, despite being slower and less familiar to practitioners, demonstrated high usability and user confidence, particularly for inexperienced VR users. We recommend dwell-based methods for onsite exhibition contexts. This research provides insights for the adoption of handheld-style HMDs, laying the groundwork for improving user interaction in exhibition environments and thereby potentially enhancing visitor experiences.
2025 · Yuki Abe et al. · Hokkaido University, Human-Computer Interaction Lab · Eye Tracking & Gaze Interaction · Social & Collaborative VR · Immersion & Presence Research · CHI

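A generic dwell-time selection loop of the kind evaluated in this study: a target is confirmed once the pointing ray rests on it continuously for a fixed dwell period. This is an illustrative sketch, not the authors' implementation; the get_hovered_target() callback and the 800 ms threshold are assumptions.

```python
# Dwell-based selection: confirm a target after it has been hovered continuously
# for dwell_s seconds. get_hovered_target() is an assumed callback that returns
# the currently pointed-at target (e.g., via a raycast from the handheld HMD).
import time
from typing import Callable, Optional

def dwell_select(get_hovered_target: Callable[[], Optional[str]],
                 dwell_s: float = 0.8,
                 poll_s: float = 0.02) -> str:
    """Block until some target has been hovered continuously for dwell_s seconds."""
    current, since = None, None
    while True:
        target = get_hovered_target()
        if target != current:                   # hover changed: restart the dwell timer
            current, since = target, time.monotonic()
        elif current is not None and time.monotonic() - since >= dwell_s:
            return current                      # dwell completed: confirm selection
        time.sleep(poll_s)
```
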
EarHover: Mid-Air Gesture Recognition for Hearables Using Sound Leakage Signals
We introduce EarHover, a system that enables mid-air gesture input for hearables. Mid-air gesture input, which eliminates the need to touch the device and thus helps keep both hands and the device clean, has been shown by previous surveys to be in high demand. However, existing mid-air gesture input methods for hearables have required adding cameras or infrared sensors. By focusing on the sound leakage phenomenon unique to hearables, we realize mid-air gesture recognition using a speaker and an external microphone, both of which are highly compatible with hearables. The signal that leaks outside the device can be measured by the external microphone, which captures differences in reflection characteristics caused by the hand's speed and shape during mid-air gestures. Among 27 candidate gestures, we identified the seven most suitable for EarHover in terms of signal discriminability and user acceptability. We then evaluated the gesture detection and classification performance of two prototype devices (in-ear and open-ear) for real-world application scenarios.
2024 · Shunta Suzuki et al. · In-Vehicle Haptic, Audio & Multimodal Feedback · Hand Gesture Recognition · UIST

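A generic sketch of how leaked-sound recordings could be turned into gesture predictions: compute a compact log-spectrogram feature per recording and train a small SVM. This is not the EarHover pipeline itself; the window sizes, feature choice, and classifier are illustrative assumptions.

```python
# Classify mid-air gestures from sound-leakage recordings captured by an
# external microphone: average log-spectrum feature + RBF-kernel SVM.
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def leakage_features(signal: np.ndarray, fs: int = 48000) -> np.ndarray:
    f, t, S = spectrogram(signal, fs=fs, nperseg=1024, noverlap=512)
    return np.log1p(S).mean(axis=1)        # average log spectrum as a compact feature

def train_gesture_classifier(recordings, labels, fs=48000):
    X = np.stack([leakage_features(r, fs) for r in recordings])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)
    return clf
```
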
Exploring User-Defined Gestures as Input for Hearables and Recognizing Ear-Touch Gestures by IMUs
Hearables are highly functional earphone-type wearables; however, existing input methods using stand-alone hearables support only a limited number of commands, and there is a need to extend device operation through hand gestures. Previous research on hand input for hearables has addressed both user understanding and gesture recognition systems. However, user understanding of hand input with hearables remains under-explored, and existing recognition systems have not been able to recognize user-defined gestures. In this study, we conducted a gesture elicitation study (GES) assuming hand input with hearables under six conditions (three interaction areas × two device shapes). We then extracted, from the user-defined gestures, the ear-touch gestures that a device's built-in IMU sensor could recognize, and investigated recognition performance. In an experiment with seated participants, the gesture recognition rate was 91.0% for in-ear devices and 74.7% for ear-hook devices.
2024 · Yukina Sato et al. · Vibrotactile Feedback & Skin Stimulation · Hand Gesture Recognition · MobileHCI

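A sketch of recognizing ear-touch gestures from a hearable's built-in IMU: segment a fixed window around each touch and classify simple statistical features of the 6-axis signal. The window format, feature set, and random-forest classifier are assumptions for illustration, not the paper's method.

```python
# Ear-touch gesture recognition from 6-axis IMU windows (accel x/y/z, gyro x/y/z).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def imu_features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, 6) array of accelerometer and gyroscope readings."""
    return np.concatenate([window.mean(axis=0),                    # static offset
                           window.std(axis=0),                     # motion intensity
                           np.abs(np.diff(window, axis=0)).max(axis=0)])  # sharpest change

def train_ear_touch_classifier(windows, labels):
    X = np.stack([imu_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf
```
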
User Authentication Method for Hearables Using Sound Leakage Signals
We propose a novel biometric authentication method that leverages sound leakage signals from hearables captured by an external microphone. A sweep signal is played from the hearable, and the sound leakage is recorded with an external microphone. This leakage signal reflects the acoustic characteristics of the ear canal, auricle, or hand. Our system then analyzes the echoes and authenticates the user. The proposed method is highly adaptable to hearables because it leverages widely available sensors, such as speakers and external microphones, and it can potentially be combined with existing methods. In this study, we investigate the characteristics of sound leakage signals using an experimental model and measure the authentication performance of our method using acoustic data from 16 people. The results show balanced accuracy (BAC) scores in the range of 87.0%-96.7% across several scenarios.
2023 · Takashi Amesaka et al. · Passwords & Authentication · UbiComp

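A minimal template-matching sketch of authentication from a sweep-response recording: enroll an averaged, normalized leakage spectrum per user and accept a probe if its cosine similarity to the template exceeds a threshold. The spectrum feature and the 0.95 threshold are illustrative assumptions, not the paper's trained pipeline.

```python
# Template-based authentication from sound-leakage responses to a sweep signal.
import numpy as np

def leakage_spectrum(recording: np.ndarray, n_fft: int = 4096) -> np.ndarray:
    spec = np.abs(np.fft.rfft(recording, n=n_fft))
    return spec / (np.linalg.norm(spec) + 1e-12)      # unit-normalized magnitude spectrum

def enroll(recordings) -> np.ndarray:
    template = np.mean([leakage_spectrum(r) for r in recordings], axis=0)
    return template / (np.linalg.norm(template) + 1e-12)

def authenticate(template: np.ndarray, probe: np.ndarray, threshold: float = 0.95) -> bool:
    # Cosine similarity between enrolled template and probe spectrum.
    return float(np.dot(template, leakage_spectrum(probe))) >= threshold
```
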
UltrasonicWhisper: Ultrasound Can Generate Audible Sound in Your Hearable
Recent studies have shown that ultrasound can be used to inject voice input into microphones, such as those in smart speakers, by exploiting the microphones' nonlinearity. A similar attack is possible on the hearing of a user wearing a hearable with an external microphone. Specifically, information modulated onto ultrasound by an attacker is demodulated into audible sound inside the hearable, and the audio can be presented to the wearer via its inner loudspeaker. This could allow false information disguised as instructions from the hearable to be presented and could interfere with the user's hearing. In light of these issues, this study experimentally evaluated the feasibility of ultrasonic attacks on hearables. The evaluation confirmed that the mean Mel-cepstral distortion (MCD) and mean opinion score (MOS) of the demodulated sound were 7.90 and 2.53, respectively. We also confirmed that participants followed 14.9% of the false instructions presented by ultrasound even when they were alerted to the ultrasonic attack.
2023 · Hiroki Watanabe et al. · Mid-Air Haptics (Ultrasonic) · Voice Accessibility · UbiComp

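A sketch of the signal construction underlying this class of attack: audible speech is amplitude-modulated onto an ultrasonic carrier, and a nonlinear microphone front end demodulates it back into the audible band. The carrier frequency, modulation depth, and sample rate below are illustrative assumptions, not the paper's exact parameters.

```python
# Amplitude-modulate an audio signal onto an ultrasonic carrier; a squaring
# nonlinearity in the receiving microphone recovers the audible baseband.
import numpy as np

def am_ultrasound(audio: np.ndarray, fs: int = 192000,
                  carrier_hz: float = 40000.0, depth: float = 0.8) -> np.ndarray:
    """Return an AM ultrasonic signal carrying `audio` (values in [-1, 1])."""
    t = np.arange(len(audio)) / fs
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    return (1.0 + depth * audio) * carrier   # standard AM around the carrier frequency
```
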
Input Interface with Touch and Non-touch Interactions using Atmospheric Pressure for Hearable Devices
A hearable device is a wearable computer worn on the ear. In addition to offering conventional music-listening functions when used as earphones, a hearable device can be linked to smartphones and use various onboard sensors for activity recognition and voice assistants. Although some devices recognize commands when the earpiece is touched directly with a hand, this approach is limited by the shape of the earpiece and by false recognition when the earpiece is touched unintentionally. Hands-free input methods using voice assistants or acceleration sensors that measure head movement are available, but they suffer from low command recognition accuracy due to noise in public spaces and from low social acceptability. In this study, we implement a device that measures the atmospheric pressure in the ear canal and around the ear by installing an atmospheric pressure sensor inside canal-type earphones. We propose a method that recognizes 12 gestures based on the pattern of pressure change caused by pressing and releasing the earphone with a finger (touch interaction) and by covering the auricle with a hand and applying pressure (non-touch interaction); six gestures are performed with each of the two interaction methods. We evaluated recognition accuracy by having five participants perform each gesture 50 times. The "quick press and quick release" gestures were recognized with an accuracy of 0.99 for touch and 0.82 for non-touch.
2023 · Koki Iguma et al. · Haptic Wearables · Foot & Wrist Interaction · UbiComp

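A sketch of detecting a "quick press and quick release" from an in-ear pressure signal: look for a short-lived rise above the resting baseline. The sampling rate, pressure threshold, and duration bound are assumptions chosen for illustration, not the calibrated values from the paper.

```python
# Detect a short pressure pulse (quick press and release) in an in-ear
# barometric signal sampled at fs Hz.
import numpy as np

def detect_quick_press(pressure: np.ndarray, fs: float = 100.0,
                       rise_pa: float = 50.0, max_dur_s: float = 0.4) -> bool:
    baseline = np.median(pressure[: int(0.2 * fs)])          # resting pressure before the gesture
    pressed = np.flatnonzero(pressure - baseline > rise_pa)  # samples where the earpiece is pressed
    if pressed.size == 0:
        return False
    duration = (pressed[-1] - pressed[0] + 1) / fs           # span of the pressure pulse
    return duration <= max_dur_s                             # short pulse => quick press/release
```
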
Kuiper Belt: Utilizing the "Out-of-natural Angle" Region in the Eye-gaze Interaction for Virtual Reality
The maximum physical range of horizontal human eye movement is approximately 45°. However, in a natural gaze shift, the difference between the gaze direction and the frontal direction of the head rarely exceeds 25°. We name this 25°-45° region the "Kuiper Belt" of eye-gaze interaction, and we utilize it to address the Midas touch problem, enabling a search task while reducing false input in virtual reality. In this work, we conduct two studies to establish design principles for placing menu items in the Kuiper Belt as an "out-of-natural-angle" region of eye-gaze movement, and to determine the effectiveness and workload of the Kuiper Belt-based method. The results indicate that the Kuiper Belt-based method facilitated the visual search task while reducing false input. Finally, we present example applications utilizing the findings of these studies.
2022 · Myungguen Choi et al. · Hokkaido University · Eye Tracking & Gaze Interaction · Immersion & Presence Research · CHI

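A sketch of the geometric test behind the Kuiper Belt idea: compute the horizontal angle between the eye-gaze direction and the head's forward direction, and check whether it falls in the 25°-45° band. The vector conventions (unit 3D direction vectors, y up) are assumptions for illustration.

```python
# Test whether the current gaze offset from the head's forward direction lies
# in the 25-45 degree "Kuiper Belt" band.
import numpy as np

def horizontal_gaze_offset_deg(gaze_dir: np.ndarray, head_forward: np.ndarray) -> float:
    g = np.array([gaze_dir[0], 0.0, gaze_dir[2]])            # project onto the horizontal plane
    h = np.array([head_forward[0], 0.0, head_forward[2]])
    g, h = g / np.linalg.norm(g), h / np.linalg.norm(h)
    return float(np.degrees(np.arccos(np.clip(np.dot(g, h), -1.0, 1.0))))

def in_kuiper_belt(gaze_dir, head_forward, low=25.0, high=45.0) -> bool:
    return low <= horizontal_gaze_offset_deg(gaze_dir, head_forward) <= high
```
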
Asian CHI Symposium: Emerging HCI Research Collection
This symposium showcases the latest work from Asia on interactive systems and user interfaces that address under-explored problems and demonstrate unique approaches. In addition to circulating ideas and sharing a vision of future research in human-computer interaction, the symposium aims to foster social networks among academics (researchers and students) and practitioners and to create a fresh research community in the Asian region.
2018 · Saki Sakaguchi et al. · The University of Tokyo · Developing Countries & HCI for Development (HCI4D) · User Research Methods (Interviews, Surveys, Observation) · CHI