Exploring the Impact of Emotional Voice Integration in Sign-to-Speech Translators for Deaf-to-Hearing Communication
Emotional voice communication plays a crucial role in effective daily interactions. Deaf and Hard of Hearing (DHH) individuals, who often have limited use of voice, rely on facial expressions to supplement sign language and convey emotions. However, in American Sign Language (ASL), facial expressions serve not only emotional purposes but also function as linguistic markers that can alter the meaning of signs. This dual role can often confuse non-signers when interpreting a signer's emotional state. In this paper, we present studies that: (1) confirm the challenges non-signers face when interpreting emotions from facial expressions in ASL communication, and (2) demonstrate how integrating emotional voice into translation systems can enhance hearing individuals' understanding of a signer's emotional intent. An online survey with 45 hearing participants (non-ASL signers) revealed frequent misinterpretations of signers' emotions when emotional and linguistic facial expressions were used simultaneously. The findings show that incorporating emotional voice into translation systems significantly improves emotion recognition by 32%. Additionally, a follow-up survey with 48 DHH participants highlights design considerations for implementing emotional voice features, emphasizing the importance of emotional voice integration to bridge communication gaps between DHH and hearing communities.
2025 · Hyunchul Lim et al. · Deaf and Hard-of-Hearing Research · CSCW

AROMA: Mixed-Initiative AI Assistance for Non-Visual Cooking by Grounding Multimodal Information Between Reality and Videos
Videos offer rich audiovisual information that can support people in performing activities of daily living (ADLs), but they remain largely inaccessible to blind or low-vision (BLV) individuals. In cooking, BLV people often rely on non-visual cues---such as touch, taste, and smell---to navigate their environment, making it difficult to follow the predominantly audiovisual instructions found in video recipes. To address this problem, we introduce AROMA, an AI system that provides real-time, context-aware assistance by integrating non-visual cues perceived by the user, a wearable camera feed, and video recipe content. AROMA uses a mixed-initiative approach: it responds to user requests while also proactively monitoring the video stream to offer timely alerts and guidance. This collaborative design leverages the complementary strengths of the user and AI system to align the physical environment with the video recipe, helping the user interpret their current cooking state and make sense of the steps. We evaluated AROMA through a study with eight BLV participants and offered insights for designing interactive AI systems to support BLV individuals in performing ADLs.
2025 · Zheng Ning et al. · Conversational Chatbots · Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille) · Context-Aware Computing · UIST

PANDA: Parkinson's Assistance and Notification Driving Aid
Parkinson's Disease (PD) significantly impacts driving abilities, often leading to early driving cessation or accidents due to reduced motor control and increased reaction times. To diminish the impact of these symptoms, we developed PANDA (Parkinson's Assistance and Notification Driving Aid), a multi-modality real-time alert system designed to monitor driving patterns continuously and provide immediate alerts for irregular driving behaviors, enhancing the driving safety of individuals with PD. The system was developed through a participatory design process with 9 people with PD and 13 non-PD individuals using a driving simulator, which allowed us to identify critical design characteristics and collect detailed data on driving behavior. A user study involving individuals with PD evaluated the effectiveness of PANDA, exploring optimal strategies for delivering alerts and ensuring they are timely and helpful. Our findings demonstrate that PANDA has the potential to enhance the driving safety of individuals with PD, offering a valuable tool for maintaining independence and confidence behind the wheel.
2025 · Tianyang Wen et al. · Institute of Software, Chinese Academy of Sciences · In-Vehicle Haptic, Audio & Multimodal Feedback · Motor Impairment Assistive Input Technologies · Prototyping & User Testing · CHI

How Users Who are Blind or Low Vision Play Mobile Games: Perceptions, Challenges, and Strategies
As blind and low-vision (BLV) players engage more deeply with games, accessibility features have become essential. While some research has explored tools and strategies to enhance game accessibility, the specific experiences of these players with mobile games remain underexamined. This study addresses this gap by investigating how BLV users experience mobile games with varying accessibility levels. Through interviews with 32 experienced BLV mobile players, we explore their perceptions, challenges, and strategies for engaging with mobile games. Our findings reveal that BLV players turn to mobile games to alleviate boredom, achieve a sense of accomplishment, and build social connections, but face barriers depending on the game's accessibility level. We also compare mobile games to other forms of gaming, highlighting the relative advantages of mobile games, such as the inherent accessibility of smartphones. This study contributes to understanding BLV mobile gaming experiences and provides insights for enhancing accessible mobile game design.
2025 · Zihe Ran et al. · Communication University of China · Accessible Gaming · Game Accessibility · CHI

SpellRing: Recognizing Continuous Fingerspelling in American Sign Language using a Ring
Fingerspelling is a critical part of American Sign Language (ASL) recognition and has become an accessible, optional text entry method for Deaf and Hard of Hearing (DHH) individuals. In this paper, we introduce SpellRing, a single smart ring worn on the thumb that recognizes words continuously fingerspelled in ASL. SpellRing uses active acoustic sensing (via a microphone and speaker) and an inertial measurement unit (IMU) to track handshape and movement, which are processed through a deep learning algorithm using Connectionist Temporal Classification (CTC) loss. We evaluated the system with 20 ASL signers (13 fluent and 7 learners), using the MacKenzie-Soukoreff Phrase Set of 1,164 words and 100 phrases. Offline evaluation yielded top-1 and top-5 word recognition accuracies of 82.45% (±9.67%) and 92.42% (±5.70%), respectively. In real-time, the system achieved a word error rate (WER) of 0.099 (±0.039) on the phrases. Based on these results, we discuss key lessons and design implications for future minimally obtrusive ASL recognition wearables.
2025 · Hyunchul Lim et al. · Cornell, Computing and Information Science · Foot & Wrist Interaction · Voice Accessibility · Motor Impairment Assistive Input Technologies · CHI

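The SpellRing abstract mentions training with Connectionist Temporal Classification (CTC) loss; at inference time, the simplest way to turn a CTC model's per-frame label predictions into a word is best-path (greedy) decoding: collapse runs of repeated labels, then drop the blank symbol. A minimal, illustrative sketch in Python — this is the generic CTC decoding rule, not the authors' implementation, and the label indices below are hypothetical:

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Best-path CTC decoding: collapse repeated labels, then remove blanks.

    frame_labels: per-frame argmax label indices from a CTC-trained model.
    blank: index reserved for the CTC blank symbol.
    """
    decoded = []
    prev = None
    for label in frame_labels:
        # Keep a label only when it differs from the previous frame
        # (collapse repeats) and is not the blank symbol.
        if label != prev and label != blank:
            decoded.append(label)
        prev = label
    return decoded

# e.g. frames [blank, A, A, blank, A, B, B, blank] decode to [A, A, B]:
# the blank between the two A's keeps them as separate letters.
print(ctc_greedy_decode([0, 1, 1, 0, 1, 2, 2, 0]))  # [1, 1, 2]
```

Note how the blank symbol lets the model emit the same letter twice in a row (as in double letters when fingerspelling), which plain repeat-collapsing could not represent.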
Selenite: Scaffolding Online Sensemaking with Comprehensive Overviews Elicited from Large Language Models
Sensemaking in unfamiliar domains can be challenging, demanding considerable user effort to compare different options with respect to various criteria. Prior research and our formative study found that people would benefit from reading an overview of an information space upfront, including the criteria others previously found useful. However, existing sensemaking tools struggle with the "cold-start" problem: not only do they require significant input from previous users to generate and share these overviews, but the resulting overviews may also be biased and incomplete. In this work, we introduce a novel system, Selenite, which leverages Large Language Models (LLMs) as reasoning machines and knowledge retrievers to automatically produce a comprehensive overview of options and criteria to jumpstart users' sensemaking processes. Subsequently, Selenite also adapts as people use it, helping users find, read, and navigate unfamiliar information in a systematic yet personalized manner. Through three studies, we found that Selenite produced accurate and high-quality overviews reliably, significantly accelerated users' information processing, and effectively improved their overall comprehension and sensemaking experience.
2024 · Michael Xieyang Liu et al. · Carnegie Mellon University · Human-LLM Collaboration · Explainable AI (XAI) · AI-Assisted Decision-Making & Automation · CHI

Designing Upper-Body Gesture Interaction with and for People with Spinal Muscular Atrophy in VR
Recent research has proposed gaze-assisted gestures to enhance interaction within virtual reality (VR), providing opportunities for people with motor impairments to experience VR. Compared to people with other motor impairments, those with Spinal Muscular Atrophy (SMA) exhibit enhanced distal limb mobility, providing them with more design space. However, it remains unknown what gaze-assisted upper-body gestures people with SMA would want and be able to perform. We conducted an elicitation study in which 12 VR-experienced people with SMA designed upper-body gestures for 26 VR commands, and collected 312 user-defined gestures. Participants predominantly favored creating gestures with their hands. The type of task and participants' abilities influenced their choice of body parts for gesture design. Participants tended to increase their body involvement, and preferred gestures that required minimal physical effort and were aesthetically pleasing. Our research will contribute to creating better gesture-based input methods for people with motor impairments to interact with VR.
2024 · Jingze Tian et al. · Southeast University, The Hong Kong University of Science and Technology (Guangzhou) · Full-Body Interaction & Embodied Input · Motor Impairment Assistive Input Technologies · CHI

A Contextual Inquiry of People with Vision Impairments in Cooking
Individuals with vision impairments employ a variety of strategies for identifying objects, such as pans or soy sauce, in the culinary process. In addition, they often rely on contextual details about objects, such as location, orientation, and current status, to autonomously execute cooking activities. To understand how people with vision impairments collect and use the contextual information of objects while cooking, we conducted a contextual inquiry study with 12 participants in their own kitchens. This research aims to analyze object interaction dynamics in culinary practices to enhance assistive vision technologies for visually impaired cooks. We outline eight different types of contextual information and the strategies that blind cooks currently use to access the information while preparing meals. Further, we discuss preferences for communicating contextual information about kitchen objects as well as considerations for the deployment of AI-powered assistive technologies.
2024 · Franklin Mingzhe Li et al. · Carnegie Mellon University · Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille) · Context-Aware Computing · CHI

Co-design Accessible Public Robots: Insights from People with Mobility Disability, Robotic Practitioners and Their Collaborations
Sidewalk robots are increasingly common across the globe. Yet, their operation on public paths poses challenges for people with mobility disabilities (PwMD) who face barriers to accessibility, such as insufficient curb cuts. We interviewed 15 PwMD to understand how they perceive sidewalk robots. Findings indicated that PwMD feel they have to compete for space on the sidewalk when robots are introduced. We next interviewed eight robotics practitioners to learn about their attitudes towards accessibility. Practitioners described how issues often stem from robotics companies addressing accessibility only after problems arise. Both interview groups underscored the importance of integrating accessibility from the outset. Building on this finding, we held four co-design workshops with PwMD and practitioners in pairs. These workshops surfaced accessibility needs around robots operating in public spaces and in the public interest. Our study aims to set the stage for a more inclusive future around public service robots.
2024 · Joel Chan et al. · Carnegie Mellon University · Inclusive Design · Empowerment of Marginalized Groups · CHI

Breaking the "Inescapable" Cycle of Pain: Supporting Wheelchair Users' Upper Extremity Health Awareness and Management with Tracking Technologies
Upper extremity (UE) health issues are a common concern among wheelchair users and have a large impact on their independence, social participation, and quality of life. However, despite the well-documented prevalence and negative impacts, these issues remain unresolved. Existing solutions (e.g., surgical repair, conservative treatments) often fail to promote sustained UE health improvement in wheelchair users' day-to-day lives. Recent HCI research has shown the effectiveness of health tracking technologies in supporting patients' self-care for different health conditions (e.g., chronic diseases, mental health). In this work, we explore how health tracking technologies could support wheelchair users' UE health self-care. We conducted semi-structured interviews with 12 wheelchair users and 5 therapists to understand their practices and challenges in UE health management, as well as the potential benefits of integrating health tracking technologies into self-care routines. We discuss design implications for UE health tracking technologies and outline opportunities for future investigation.
2023 · Yunzhi Li et al. · Carnegie Mellon University · Motor Impairment Assistive Input Technologies · Mental Health Apps & Online Support Communities · Fitness Tracking & Physical Activity Monitoring · CHI

Understanding Visual Arts Experiences of Blind People
Visual arts play an important role in cultural life and provide access to social heritage and self-enrichment, but most visual arts are inaccessible to blind people. Researchers have explored different ways to enhance blind people's access to visual arts (e.g., audio descriptions, tactile graphics). However, how blind people adopt these methods remains unknown. We conducted semi-structured interviews with 15 blind visual arts patrons to understand how they engage with visual artwork and the factors that influence their adoption of visual arts access methods. We further examined interview insights in a follow-up survey (N=220). We present: 1) current practices and challenges of accessing visual artwork in-person and online (e.g., Zoom tour), 2) motivation and cognition of perceiving visual arts (e.g., imagination), and 3) implications for designing visual arts access methods. Overall, our findings provide a roadmap for technology-based support for blind people's visual arts experiences.
2023 · Franklin Mingzhe Li et al. · Carnegie Mellon University · Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille) · Museum & Cultural Heritage Digitization · CHI

An Exploration of Captioning Practices and Challenges of Individual Content Creators on YouTube for People with Hearing Impairments
Deaf and Hard-of-Hearing (DHH) audiences have long complained about caption quality for many online videos created by individual content creators on video-sharing platforms (e.g., YouTube). However, the practices, challenges, and perceptions around online video captions remain underexplored from the perspectives of both individual content creators and DHH audiences. In this work, we first explore DHH audiences' feedback on and reactions to YouTube video captions through interviews with 13 DHH individuals, and uncover DHH audiences' experiences, challenges, and perceptions on watching videos created by individual content creators (e.g., manually added caption tags could create additional confidence and trust in caption quality for DHH audiences). We then discover individual content creators' practices, challenges, and perceptions on captioning their videos (e.g., back-captioning problems) by conducting a YouTube video analysis with 189 captioning-related YouTube videos, followed by a survey with 62 individual content creators. Overall, our findings provide an in-depth understanding of captions generated by individual content creators and bridge the knowledge gap mutually between content creators and DHH audiences on captions.
2022 · Franklin Mingzhe Li et al. · Accessibility · CSCW

"It Feels Like Taking a Gamble": Exploring Perceptions, Practices, and Challenges of Using Makeup and Cosmetics for People with Visual ImpairmentsMakeup and cosmetics offer the potential for self-expression and the reshaping of social roles for visually impaired people. However, there exist barriers to conducting a beauty regime because of the reliance on visual information and color variances in makeup. We present a content analysis of 145 YouTube videos to demonstrate visually impaired individuals' unique practices before, during, and after doing makeup. Based on the makeup practices, we then conducted semi-structured interviews with 12 visually impaired people to discuss their perceptions of and challenges with the makeup process in more depth. Overall, through our findings and discussion, we present novel perceptions of makeup from visually impaired individuals (e.g., broader representations of blindness and beauty). The existing challenges provide opportunities for future research to address learning barriers, insufficient feedback, and physical and environmental barriers, making the experience of doing makeup more accessible to people with visual impairments.2022FLFranklin Mingzhe Li et al.Carnegie Mellon UniversityVisual Impairment Technologies (Screen Readers, Tactile Graphics, Braille)Universal & Inclusive DesignCHI
TeethTap: Recognizing Discrete Teeth Gestures using Motion and Acoustic Sensing on an Earpiece
Teeth gestures are an alternative input modality for different situations and accessibility purposes. In this paper, we present TeethTap, a novel eyes-free and hands-free input technique, which can recognize up to 13 discrete teeth tapping gestures. TeethTap adopts a wearable 3D printed earpiece with an IMU sensor and a contact microphone behind both ears, which work in tandem to detect jaw movement and sound data, respectively. TeethTap uses a support vector machine to distinguish gestures from noise by fusing acoustic and motion data, and implements K-Nearest-Neighbor (KNN) with a Dynamic Time Warping (DTW) distance measurement using motion data for gesture classification. A user study with 11 participants demonstrated that TeethTap could recognize 13 gestures with a real-time classification accuracy of 90.9% in a laboratory environment. We further examined accuracy differences across teeth gestures with sensors on a single side versus both sides. Moreover, we explored the activation gesture in real-world environments, including eating, speaking, walking, and jumping. Based on our findings, we further discuss potential applications and practical challenges of integrating TeethTap into future devices.
2021 · Wei Sun et al. · Haptic Wearables · Hand Gesture Recognition · Full-Body Interaction & Embodied Input · IUI

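The TeethTap abstract describes classifying gestures with K-Nearest-Neighbor over a Dynamic Time Warping (DTW) distance, a standard recipe for matching variable-length sensor traces against labeled templates. A minimal, illustrative sketch in Python — this shows the generic KNN-with-DTW technique on 1-D sequences, not the authors' multi-axis IMU pipeline, and the gesture labels and template values are hypothetical:

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW distance between two 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = minimal warped cost of aligning a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def knn_dtw_classify(query, templates, k=3):
    """Label a query trace by majority vote among its k DTW-nearest templates.

    templates: iterable of (label, sequence) pairs.
    """
    dists = sorted((dtw_distance(query, seq), label) for label, seq in templates)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)
```

Because DTW warps the time axis, the same gesture performed faster or slower (e.g. `[1, 2, 3]` vs. `[1, 2, 2, 3]`) still matches at near-zero distance, which is why it suits motion traces whose duration varies between repetitions.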
"I Choose Assistive Devices That Save My Face" A Study on Perceptions of Accessibility and Assistive Technology Use Conducted in ChinaDespite the potential benefits of assistive technologies (ATs) for people with various disabilities, only around 7% of Chinese with disabilities have had an opportunity to use ATs. Even for those who have used ATs, the abandonment rate was high. Although China has the world's largest population with disabilities, prior research exploring how ATs are used and perceived, and why ATs are abandoned have been conducted primarily in North America and Europe. In this paper, we present an interview study conducted in China with 26 people with various disabilities to understand their practices, challenges, perceptions, and misperceptions of using ATs. From the study, we learned about factors that influence AT adoption practices (e.g., misuse of accessible infrastructure, issues with replicating existing commercial ATs), challenges using ATs in social interactions (e.g., Chinese stigma), and misperceptions about ATs (e.g., ATs should overcome inaccessible social infrastructures). Informed by the findings, we derive a set of design considerations to bridge the existing gaps in AT design (e.g., manual vs. electronic ATs) and to improve ATs' social acceptability in China.2021FLFranklin Mingzhe Li et al.Carnegie Mellon University, University of TorontoCognitive Impairment & Neurodiversity (Autism, ADHD, Dyslexia)Universal & Inclusive DesignSpecial Education TechnologyCHI