Light My Way: Developing and Exploring a Multimodal Interface to Assist People With Visual Impairments to Exit Highly Automated Vehicles
The introduction of Highly Automated Vehicles (HAVs) has the potential to increase the independence of blind and visually impaired people (BVIPs). However, ensuring safety and situation awareness when exiting these vehicles in unfamiliar environments remains challenging. To address this, we conducted an interactive workshop with N=5 BVIPs to identify their information needs when exiting an HAV and evaluated three previously developed low-fidelity prototypes. The insights from this workshop guided the development of PathFinder, a multimodal interface combining visual, auditory, and tactile modalities tailored to BVIPs' unique needs. In a three-factorial mixed within- and between-subjects study with N=16 BVIPs, we evaluated PathFinder against an auditory-only baseline in urban and rural scenarios. PathFinder significantly reduced mental demand and maintained high perceived safety in both scenarios, while the auditory baseline led to lower perceived safety in the urban scenario than in the rural one. Qualitative feedback further supported PathFinder's effectiveness in providing spatial orientation while exiting the vehicle.
Luca-Maxim Meinhardt et al., Institute of Media Informatics, Ulm University. CHI 2025.
Tags: In-Vehicle Haptic, Audio & Multimodal Feedback; Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille)
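As a rough illustration of how an interface like PathFinder might coordinate redundant cues across modalities, the sketch below dispatches one exit instruction over speech, light, and vibration channels. All names here (ExitCue, the emit_* functions, the cue parameters) are hypothetical and are not from the paper.

```python
from dataclasses import dataclass

# Hypothetical sketch of redundant multimodal cueing, loosely inspired by
# PathFinder's visual + auditory + tactile design; not the authors' code.

@dataclass
class ExitCue:
    message: str        # spoken instruction, e.g. "Exit on the right"
    led_side: str       # which door light strip to illuminate ("left"/"right")
    vibration_ms: int   # duration of the tactile pulse on the door handle

def emit_speech(text: str) -> None:
    print(f"[speech]   {text}")

def emit_light(side: str) -> None:
    print(f"[light]    illuminate {side} door strip")

def emit_vibration(duration_ms: int) -> None:
    print(f"[tactile]  pulse door handle for {duration_ms} ms")

def dispatch(cue: ExitCue) -> None:
    """Present the same instruction on every modality so each user
    receives at least one channel that works for them."""
    emit_speech(cue.message)
    emit_light(cue.led_side)
    emit_vibration(cue.vibration_ms)

dispatch(ExitCue("Exit on the right; cyclist approaching from behind", "right", 400))
```

Presenting every instruction on all channels at once is only one plausible design choice; the study's comparison against an auditory-only baseline is precisely about whether that redundancy pays off.
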
Accessible Maps for the Future of Inclusive Ridesharing
For people who are blind and low vision (BLV), ridesharing provides an important means of independence and mobility. However, a common challenge relates to finding the vehicle when it arrives at an unanticipated location. Although coordinating with the driver for assistance is serviceable in the near term, new solutions are necessary when a human is no longer available in future automated vehicles. Therefore, this paper presents and evaluates a multisensory smartphone-based map system designed to enable nonvisual tracking of summoned vehicles. Results from a user study with N=12 BLV users suggest that vibro-audio maps (VAMs) promote superior spatial confidence and reasoning compared to current nonvisual audio interfaces in ridesharing apps, while also being desirable and easy to use. A subsequent expert evaluation based on improvements suggested during the user study indicates the practical utility of VAMs for addressing both current and future wayfinding challenges for BLV travelers.
Paul D. S. Fink et al. AutoUI 2024.
Tags: External HMI (eHMI) — Communication with Pedestrians & Cyclists; Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Ridesharing Platforms
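A minimal sketch of the vibro-audio map idea, assuming a touchscreen that reports normalized finger coordinates: vibrate while the finger traces the vehicle's route, and speak when it reaches the vehicle marker. The geometry helper, thresholds, and feedback strings are illustrative assumptions, not the paper's implementation.

```python
import math

# Illustrative vibro-audio map (VAM) logic: vibration while tracing the
# summoned vehicle's route, speech when the finger reaches the vehicle.
# Route coordinates, thresholds, and feedback calls are hypothetical.

ROUTE = [(0.1, 0.9), (0.4, 0.6), (0.7, 0.4)]   # normalized screen points
VEHICLE = (0.7, 0.4)                            # current vehicle position

def dist_point_segment(p, a, b):
    """Shortest distance from point p to segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def on_touch(x, y):
    if math.hypot(x - VEHICLE[0], y - VEHICLE[1]) < 0.05:
        return "speak: vehicle is here"
    near_route = any(dist_point_segment((x, y), a, b) < 0.03
                     for a, b in zip(ROUTE, ROUTE[1:]))
    return "vibrate" if near_route else "silence"

print(on_touch(0.25, 0.75))   # finger on the route -> "vibrate"
print(on_touch(0.7, 0.4))     # finger at the vehicle -> "speak: ..."
```
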
Towards Robotic Companions: Understanding Handler-Guide Dog Interactions for Informed Guide Dog Robot Design
Dog guides are favored by blind and low-vision (BLV) individuals for their ability to enhance independence and confidence by reducing safety concerns and increasing navigation efficiency compared to traditional mobility aids. However, only a relatively small proportion of BLV people work with dog guides, due to their limited availability and associated maintenance responsibilities. There is considerable recent interest in addressing this challenge by developing legged guide dog robots. This study was designed to determine critical aspects of the handler-guide dog interaction and better understand handler needs to inform guide dog robot development. We conducted semi-structured interviews and observation sessions with 23 dog guide handlers and 5 trainers. Thematic analysis revealed critical limitations in guide dog work, desired personalization in handler-guide dog interaction, and important perspectives on future guide dog robots. Grounded in these findings, we discuss pivotal design insights for guide dog robots aimed at adoption within the BLV community.
Hochul Hwang et al., University of Massachusetts Amherst. CHI 2024.
Tags: Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Social Robot Interaction
“X-Ray Vision” as a Compensatory Augmentation for Slowing Cognitive Map Decay in Older Adults
Safe and efficient navigation often relies on the development and retention of accurate cognitive maps that include inter-landmark relations. For many older adults, cognitive maps are difficult to form and remember over time, which introduces serious challenges for independence and mobility. To address this problem, we explore an innovative compensatory augmentation solution enabling enhanced inter-landmark learning via an “X-Ray Vision” simulation. Results from a user study with n=45 participants suggest that a single learning session with the augmentation yields superior cognitive map retention over time in older adults compared to a control condition without the augmentation. Furthermore, results characterize differences in cognitive map decay between older adults and a control group of younger adults. These findings suggest important implications for future augmented reality devices and the ways in which they can be used to promote memory and independence among older adults.
Christopher Bennett et al., The University of Maine. CHI 2024.
Tags: AR Navigation & Context Awareness; Aging-Friendly Technology Design
Spatial Audio-Enhanced Multimodal Graph Rendering for Efficient Data Trend Learning on Touchscreen Devices
Touchscreen-based rendering of graphics using vibrations, sonification, and text-to-speech is a promising approach for nonvisual access to graphical information, but extracting trends from complex data representations nonvisually is challenging. This work presents the design of a multimodal feedback scheme with integrated spatial audio for the exploration of histograms and scatter plots on touchscreens. We detail the hardware employed and the algorithms used to control vibrations and sonification adjustments through the change of pitch and directional stereo output. We conducted formative testing with 5 blind or visually impaired participants, and results illustrate that spatial audio has the potential to increase the identification of trends in the data, at the expense of a skewed mental representation of the graph. This design work and pilot study are critical to the iterative, human-centered approach of rendering multimodal graphics on touchscreens and contribute a new scheme for efficiently capturing data trends in complex data representations.
Wilfredo Joshua Robinson Moore et al., Saint Louis University. CHI 2024.
Tags: Deaf & Hard-of-Hearing Support (Captions, Sign Language, Vibration); Interactive Data Visualization
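The pitch-plus-directional-stereo mapping this abstract describes can be sketched in a few lines: a higher data value raises the tone's frequency, while the point's horizontal position pans it between the left and right channels. The frequency range and constant-power pan law below are assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np

# Hypothetical sonification of one data point: value -> pitch,
# x-position -> stereo pan. Ranges and pan law are illustrative.

SAMPLE_RATE = 44100

def sonify_point(x_norm: float, value_norm: float, dur: float = 0.2) -> np.ndarray:
    """Return a (samples, 2) stereo buffer for one data point.
    Both x_norm and value_norm are in [0, 1]."""
    freq = 220.0 + value_norm * (880.0 - 220.0)   # map value to 220-880 Hz
    t = np.linspace(0, dur, int(SAMPLE_RATE * dur), endpoint=False)
    tone = 0.5 * np.sin(2 * np.pi * freq * t)
    pan = x_norm * (np.pi / 2)                    # constant-power pan law
    left, right = tone * np.cos(pan), tone * np.sin(pan)
    return np.column_stack([left, right])

buf = sonify_point(x_norm=0.8, value_norm=0.3)    # right-leaning, low pitch
print(buf.shape)                                  # (8820, 2)
```

Sweeping such tones left to right across a scatter plot is one way a rising or falling trend becomes audible as a pitch trajectory moving through the stereo field.
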
Expanded Situational Awareness Without Vision: A Novel Haptic Interface for Use in Fully Autonomous Vehicles
This work presents a novel ultrasonic haptic interface to improve nonvisual perception and situational awareness in applications such as fully autonomous vehicles. User study results (n=14) suggest comparable performance with the dynamic ultrasonic stimuli versus a control using static embossed stimuli. The utility of the ultrasonic interface is demonstrated with a prototype autonomous small-scale robot vehicle using intersection abstractions. These efforts support the application of ultrasonic haptics for improving nonvisual information access in autonomous transportation, with strong implications for people who are blind and visually impaired, accessibility, and human-in-the-loop decision making.
Paul D. S. Fink et al. HRI 2023.
Tags: Automated Driving Interface & Takeover Design; Mid-Air Haptics (Ultrasonic)
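One way to read "intersection abstractions" is as a small set of 2D strokes traced by the ultrasonic focal point above the array. The sketch below generates such a stroke list for a four-way intersection; the geometry and the absence of any device API are assumptions on my part, since the paper's rendering pipeline is not described here.

```python
# Hypothetical rendering of a four-way intersection as focal-point strokes
# for an ultrasonic haptic array. Coordinates are centimeters on the plane
# above the array; the actual device interface is not modeled.

def four_way_intersection(arm_cm: float = 4.0, step_cm: float = 0.5):
    """Yield (x, y) focal-point positions tracing a '+' shape:
    one stroke per road arm, radiating from the center."""
    n = int(arm_cm / step_cm)
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:   # E, W, N, S arms
        for i in range(n + 1):
            yield (dx * i * step_cm, dy * i * step_cm)

strokes = list(four_way_intersection())
print(len(strokes), strokes[:3])   # 36 points; each arm starts at the center
```
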
Autonomous is Not Enough: Designing Multisensory Mid-Air Gestures for Vehicle Interactions Among People with Visual Impairments
Should fully autonomous vehicles (FAVs) be designed inclusively and accessibly, independence will be transformed for millions of people experiencing transportation-limiting disabilities worldwide. Although FAVs hold promise for efficient transportation without human intervention, a truly accessible experience must enable user input, for all people, in many driving scenarios (e.g., to alter a route or pull over during an emergency). Therefore, this paper explores desires for control in FAVs among n=23 people who are blind and visually impaired. Results indicate strong support for control across a battery of driving tasks, as well as the need for multimodal information. These findings inspired the design and evaluation of a novel multisensory interface leveraging mid-air gestures, audio, and haptics. All participants successfully navigated driving scenarios using our gestural-audio interface, reporting high ease of use. Contributions include the first inclusively designed gesture set for FAV control and insight regarding supplemental haptic and audio cues.
Paul D. S. Fink et al., The University of Maine. CHI 2023.
Tags: Automated Driving Interface & Takeover Design; In-Vehicle Haptic, Audio & Multimodal Feedback; Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille)
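At runtime, a gesture set like the one evaluated here reduces to a mapping from recognized gestures to vehicle commands, with spoken confirmation closing the loop for nonvisual users. The gesture names and commands below are placeholders, not the paper's published set.

```python
# Placeholder gesture-to-command dispatch with spoken confirmation,
# illustrating the interaction loop; not the paper's gesture set.

GESTURE_COMMANDS = {
    "swipe_right": "change lane right",
    "palm_push":   "pull over",
    "circle":      "repeat last announcement",
}

def handle_gesture(gesture: str) -> str:
    command = GESTURE_COMMANDS.get(gesture)
    if command is None:
        return "speak: gesture not recognized, please try again"
    # Spoken confirmation before acting gives nonvisual users a chance to cancel.
    return f"speak: confirming '{command}'"

print(handle_gesture("palm_push"))   # speak: confirming 'pull over'
print(handle_gesture("wave"))        # speak: gesture not recognized, ...
```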