Shifting the Focus: Exploring Video Accessibility Strategies and Challenges for People with ADHD
Lucy Jiang et al. University of Washington, Human Centered Design and Engineering. CHI 2025.
Tags: Cognitive Impairment & Neurodiversity (Autism, ADHD, Dyslexia); Universal & Inclusive Design; Special Education Technology
Abstract: Despite the growth of video as a medium, videos remain inaccessible to many people. Prior video accessibility research has focused primarily on blind and low vision or d/Deaf and hard of hearing audiences. However, the video watching experiences of people with ADHD are largely unexplored. Through semi-structured interviews with 20 participants self-identifying with ADHD, we uncovered video watching frustrations, current strategies for access, and desired accessibility features. Participants faced both overstimulation and understimulation from visuals and audio (e.g., flashing lights, slower speech), which impacted their attention, engagement, and information retention. Common strategies included altering video speed, using captions, and leveraging timestamps for skipping through videos. Participants desired adjustable sound channels for aiding focus, video summaries for retaining information, and warnings for preempting sensory discomfort. We close by discussing (1) design recommendations for platforms and creators to support users in achieving their viewing goals and (2) ADHD-inclusive design principles.

“Ignorance is not Bliss”: Designing Personalized Moderation to Address Ableist Hate on Social Media
Sharon Heung et al. Cornell Tech, Information Science. CHI 2025.
Tags: Privacy by Design & User Control; Online Harassment & Counter-Tools; Empowerment of Marginalized Groups
Abstract: Disabled people on social media often experience ableist hate and microaggressions. Prior work has shown that platform moderation often fails to remove ableist hate, leaving disabled users exposed to harmful content. This paper examines how personalized moderation can safeguard users from viewing ableist comments. During interviews and focus groups with 23 disabled social media users, we presented design probes to elicit perceptions on configuring their filters of ableist speech (e.g., intensity of ableism and types of ableism) and customizing the presentation of the ableist speech to mitigate the harm (e.g., AI rephrasing the comment and content warnings). We found that participants preferred configuring their filters through types of ableist speech and favored content warnings. We surface participants’ distrust in AI-based moderation, skepticism in AI’s accuracy, and varied tolerances in viewing ableist hate. Finally, we share design recommendations to support users’ agency, mitigate harm from hate, and promote safety.

Investigating Use Cases of AI-Powered Scene Description Applications for Blind and Low Vision People
Ricardo Gonzalez Penuela et al. Cornell Tech, Cornell University. CHI 2024.
Tags: Generative AI (Text, Image, Music, Video); Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Cognitive Impairment & Neurodiversity (Autism, ADHD, Dyslexia)
Abstract: “Scene description” applications that describe visual content in a photo are useful daily tools for blind and low vision (BLV) people. Researchers have studied their use, but they have only explored those that leverage remote sighted assistants; little is known about applications that use AI to generate their descriptions. Thus, to investigate their use cases, we conducted a two-week diary study where 16 BLV participants used an AI-powered scene description application we designed. Through their diary entries and follow-up interviews, users shared their information goals and assessments of the visual descriptions they received. We analyzed the entries and found frequent use cases, such as identifying visual features of known objects, and surprising ones, such as avoiding contact with dangerous objects. We also found users scored the descriptions relatively low on average, 2.7 out of 5 (SD=1.5) for satisfaction and 2.4 out of 4 (SD=1.2) for trust, showing that descriptions still need significant improvements to deliver satisfying and trustworthy experiences. We discuss future opportunities for AI as it becomes a more powerful accessibility tool for BLV users.

“Vulnerable, Victimized, and Objectified”: Understanding Ableist Hate and Harassment Experienced by Disabled Content Creators on Social Media
Sharon Heung et al. Cornell Tech. CHI 2024.
Tags: Online Harassment & Counter-Tools; Empowerment of Marginalized Groups
Abstract: Content creators (e.g., gamers, activists, vloggers) with marginalized identities are at-risk of experiencing hate and harassment. This paper examines the ableist hate and harassment that disabled content creators experience on social media. Through surveys (N=50) and interviews (N=20) with disabled creators, we developed a taxonomy of 11 types of ableist hate and harassment (e.g., eugenics-related speech, denial and stigmatization of accessibility) and outlined how ableism harms creators’ well-being and content creation practices. Using statistical modeling, we investigated differences in ableist experiences given creators’ intersecting identities such as race and sexuality. We found that LGBTQ disabled creators face significantly more ableist hate compared to non-LGBTQ disabled creators. Lastly, we discuss our findings through an infrastructure lens to highlight how disabled creators experience platform-enabled ableism, undergo labor to cope with hate, and develop strategies to safeguard against future hate.

“It’s Kind of Context Dependent”: Understanding Blind and Low Vision People’s Video Accessibility Preferences Across Viewing Scenarios
Lucy Jiang et al. Cornell University. CHI 2024.
Tags: Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Accessible Gaming
Abstract: While audio description (AD) is the standard approach for making videos accessible to blind and low vision (BLV) people, existing AD guidelines do not consider BLV users’ varied preferences across viewing scenarios. These scenarios range from how-to videos on YouTube, where users seek to learn new skills, to historical dramas on Netflix, where a user’s goal is entertainment. Additionally, the increase in video watching on mobile devices provides an opportunity to integrate nonverbal output modalities (e.g., audio cues, tactile elements, and visual enhancements). Through a formative survey and 15 semi-structured interviews, we identified BLV people’s video accessibility preferences across diverse scenarios. For example, participants valued action and equipment details for how-to videos, tactile graphics for learning scenarios, and 3D models for fantastical content. We define a six-dimensional video accessibility design space to guide future innovation and discuss how to move from “one-size-fits-all” paradigms to scenario-specific approaches.

A Drone Teacher: Designing Physical Human-Drone Interactions for Movement Instruction
Nialah Jenae Wilson-Small et al. HRI 2023.
Tags: Force Feedback & Pseudo-Haptic Weight; Drone Interaction & Control
Abstract: Drones (micro unmanned aerial vehicles) are becoming more prevalent in applications that bring them into close human spaces. This is made possible in part by clear drone-to-human communication strategies. However, current auditory and visual communication methods only work in strict environmental settings. To continue expanding the possibilities for drones to be useful in human spaces, we explore ways to overcome these limitations through physical touch. We present a new application for drones: physical instructive feedback. To do this, we designed three different physical interaction modes for a drone. We then conducted a user study (N=12) to answer fundamental questions of where and how people want to physically interact with drones, and what people naturally infer the physical touch is communicating. We then used these insights to conduct a second user study (N=14) to understand the best way for a drone to communicate instructions to a human in a movement task. We found that continuous physical feedback is both the preferred mode and more effective at providing instruction than incremental feedback.

Studying Exploration and Long-term Use of Voice Assistants by Older Adults
Pooja Upadhyay et al. University of Maryland. CHI 2023.
Tags: Aging-Friendly Technology Design; Home Voice Assistant Experience
Abstract: While past research has examined older adults’ voice assistant (VA) use, it is unclear whether VAs provide enough value to sustain use when compared to technologies such as smartphones. Research also suggests that barriers around structured command input may limit use. In order to investigate these gaps in adoption, we conducted interviews with ten older adults in a long-term care community who have used Alexa devices for at least one year. Participants learned to use Alexa through a training program that encouraged exploration. They used Alexa to complement their daily routines, improve their mood, engage in cognitively stimulating activities, and support socialization with others. We discuss our findings in the context of prior work, describe strategies to promote VA learning and adoption, and present design recommendations to support aging.

Molder: An Accessible Design Tool for Tactile Maps
Lei Shi et al. Cornell Tech & Cornell University. CHI 2020.
Tags: Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Data Physicalization
Abstract: Tactile materials are powerful teaching aids for students with visual impairments (VIs). To design these materials, designers must use modeling applications, which have high learning curves and rely on visual feedback. Today, Orientation and Mobility (O&M) specialists and teachers are often responsible for designing these materials. However, most of them do not have professional modeling skills, and many are visually impaired themselves. To address this issue, we designed Molder, an accessible design tool for interactive tactile maps, an important type of tactile material that can help students learn O&M skills. A designer uses Molder to design a map using tangible input techniques, and Molder provides auditory feedback and high-contrast visual feedback. We evaluated Molder with 12 participants (8 with VIs, 4 sighted). After a 30-minute training session, the participants were all able to use Molder to design maps with customized tactile and interactive information.

The Effectiveness of Visual and Audio Wayfinding Guidance on Smartglasses for People with Low Vision
Yuhang Zhao et al. Cornell University. CHI 2020.
Tags: Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Deaf & Hard-of-Hearing Support (Captions, Sign Language, Vibration)
Abstract: Wayfinding is a critical but challenging task for people who have low vision, a visual impairment that falls short of blindness. Prior wayfinding systems for people with visual impairments focused on blind people, providing only audio and tactile feedback. Since people with low vision use their remaining vision, we sought to determine how audio feedback compares to visual feedback in a wayfinding task. We developed visual and audio wayfinding guidance on smartglasses based on de facto standard approaches for blind and sighted people and conducted a study with 16 low vision participants. We found that participants made fewer mistakes and experienced lower cognitive load with visual feedback. Moreover, participants with a full field of view completed the wayfinding tasks faster when using visual feedback. However, many participants preferred audio feedback because of its shorter learning curve. We propose design guidelines for wayfinding systems for low vision.

Designing Interactive 3D Printed Models with Teachers of the Visually Impaired
Lei Shi et al. Cornell University. CHI 2019.
Tags: Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Desktop 3D Printing & Personal Fabrication
Abstract: Students with visual impairments struggle to learn various concepts in the academic curriculum because diagrams, images, and other visuals are not accessible to them. To address this, researchers have designed interactive 3D printed models (I3Ms) that provide audio descriptions when a user touches components of a model. In prior work, I3Ms were designed on an ad hoc basis, and it is currently unknown what general guidelines produce effective I3M designs. To address this gap, we conducted two studies with Teachers of the Visually Impaired (TVIs). First, we led two design workshops with 35 TVIs, who modified sample models and added interactive elements to them. Second, we worked with three TVIs to design three I3Ms in an iterative instructional design process. At the end of this process, the TVIs used the I3Ms we designed to teach their students. We conclude that I3Ms should (1) have effective tactile features (e.g., distinctive patterns between components), (2) contain both auditory and visual content (e.g., explanatory animations), and (3) consider pedagogical methods (e.g., overview before details).

Designing AR Visualizations to Facilitate Stair Navigation for People with Low Vision
Yuhang Zhao et al. UIST 2019.
Tags: AR Navigation & Context Awareness; Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille)
Abstract: Navigating stairs is one of the most dangerous mobility challenges for people with low vision (PLV), who have visual impairments that fall short of blindness. Prior research contributed systems for stair navigation that provide audio or tactile feedback, but PLV have usable vision and don’t typically use nonvisual aids. We conducted the first exploration of augmented reality (AR) visualizations to facilitate stair navigation for PLV. We designed visualizations for a projection-based AR platform and smartglasses, considering the different characteristics of these platforms. For projection, we designed visual highlights that are projected directly on the stairs. In contrast, for smartglasses that have a limited vertical field of view, we designed visualizations that indicate the user’s position on the stairs without directly augmenting the stairs themselves. We evaluated our visualizations on each platform with 12 PLV. We found that the visualizations for projection AR increased participants’ walking speed. Moreover, our designs on both platforms largely increased participants’ self-reported psychological security.

A Face Recognition Application for People with Visual Impairments: Understanding Use Beyond the Lab
Yuhang Zhao et al. Cornell Tech, Cornell University, Facebook Inc. CHI 2018.
Tags: Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille)
Abstract: Recognizing others is a major challenge for people with visual impairments (VIPs) and can hinder engagement in social activities. We present Accessibility Bot, a research prototype bot on Facebook Messenger, that leverages state-of-the-art computer vision and a user’s friends’ tagged photos on Facebook to help people with visual impairments recognize their friends. Accessibility Bot provides users information about the identity, facial expressions, and attributes of friends captured by their phone’s camera. To guide our design, we interviewed eight VIPs to understand their challenges and needs in social activities. After designing and implementing the bot, we conducted a diary study with six VIPs to study its use in everyday life. While most participants found the Bot helpful, their experience was undermined by perceived low recognition accuracy, difficulty aiming a camera, and lack of knowledge about the phone’s status. We discuss these real-world challenges, identify suitable use cases for Accessibility Bot, and distill design implications for future face recognition applications.

The Effect of Computer-Generated Descriptions on Photo-Sharing Experiences of People with Visual Impairments
Yuhang Zhao et al. CSCW 2018.
Tags: Sharing and Collaboration
Abstract: Like sighted people, visually impaired people want to share photographs on social networking services, but find it difficult to identify and select photos from their albums. We aimed to address this problem by incorporating state-of-the-art computer-generated descriptions into Facebook’s photo-sharing feature. We interviewed 12 visually impaired participants to understand their photo-sharing experiences and designed a photo description feature for the Facebook mobile application. We evaluated this feature with six participants in a seven-day diary study. We found that participants used the descriptions to recall and organize their photos, but they hesitated to upload photos without a sighted person’s input. In addition to basic information about photo content, participants wanted to know more details about salient objects and people, and whether the photos reflected their personal aesthetic. We discuss these findings from the lens of self-disclosure and self-presentation theories and propose new computer vision research directions that will better support visual content sharing by visually impaired people.