Shifting the Focus: Exploring Video Accessibility Strategies and Challenges for People with ADHD

Despite the growth of video as a medium, videos remain inaccessible to many people. Prior video accessibility research has focused primarily on blind and low vision or d/Deaf and hard of hearing audiences. However, the video watching experiences of people with ADHD are largely unexplored. Through semi-structured interviews with 20 participants self-identifying with ADHD, we uncovered video watching frustrations, current strategies for access, and desired accessibility features. Participants faced both overstimulation and understimulation from visuals and audio (e.g., flashing lights, slower speech), which impacted their attention, engagement, and information retention. Common strategies included altering video speed, using captions, and leveraging timestamps for skipping through videos. Participants desired adjustable sound channels for aiding focus, video summaries for retaining information, and warnings for preempting sensory discomfort. We close by discussing (1) design recommendations for platforms and creators to support users in achieving their viewing goals and (2) ADHD-inclusive design principles.

2025 · Lucy Jiang et al. · University of Washington, Human Centered Design and Engineering · CHI
Topics: Cognitive Impairment & Neurodiversity (Autism, ADHD, Dyslexia); Universal & Inclusive Design; Special Education Technology

How Beginning Programmers and Code LLMs (Mis)read Each Other

Generative AI models, specifically large language models (LLMs), have made strides towards the long-standing goal of text-to-code generation. This progress has invited numerous studies of user interaction. However, less is known about the struggles and strategies of non-experts, for whom each step of the text-to-code problem presents challenges: describing their intent in natural language, evaluating the correctness of generated code, and editing prompts when the generated code is incorrect. This paper presents a large-scale controlled study of how 120 beginning coders across three academic institutions approach writing and editing prompts. A novel experimental design allows us to target specific steps in the text-to-code process and reveals that beginners struggle with writing and editing prompts, even for problems at their skill level and when correctness is automatically determined. Our mixed-methods evaluation provides insight into student processes and perceptions with key implications for non-expert Code LLM use within and outside of education.

2024 · Sydney Nguyen et al. · Wellesley College · CHI
Topics: Human-LLM Collaboration; Programming Education & Computational Thinking

TOCHI: Privacy Norms and Preferences for Photos Posted Online

We are surrounded by digital images of personal lives posted online. Changes in information and communication technologies (ICTs) have enabled widespread sharing of personal photos, increasing access to aspects of private life previously less observable. Most studies of privacy online explore differences in individual privacy preferences. Here we examine privacy perceptions of online photos considering both social norms, collectively shared expectations of privacy, and individual preferences. We conducted an online factorial vignette study on Amazon Mechanical Turk (n=279). Our findings show that people share common expectations about the privacy of online images, and these privacy norms are socially contingent and multi-dimensional. Use of digital technologies to share personal photos is influenced by social context as well as individual preferences, while such sharing can affect the social meaning of privacy.

2020 · Roberto Hoyle et al. · CSCW
Topics: Privacy and Security

Privacy and Activism in the Transgender Community

Transgender people are marginalized, facing specific privacy concerns and high risk of online and offline harassment, discrimination, and violence. They also benefit tremendously from technology. We conducted semi-structured interviews with 18 transgender people from 3 U.S. cities about their computer security and privacy experiences broadly construed. Participants frequently returned to themes of activism and prosocial behavior, such as protest organization, political speech, and role-modeling transgender identities, so we focus our analysis on these themes. We identify several prominent risk models related to visibility, luck, and identity that participants used to analyze their own risk profiles, often as distinct or extreme. These risk perceptions may heavily influence transgender people's defensive behaviors and self-efficacy, jeopardizing their ability to defend themselves or gain technology's benefits. We articulate design lessons emerging from these ideas, contrasting and relating them to lessons about other marginalized groups whenever possible.

2020 · Ada Lerner et al. · Wellesley College · CHI
Topics: Privacy by Design & User Control; Cyberbullying & Online Harassment; LGBTQ+ Community Technology Design

Unakite: Scaffolding Developers’ Decision-Making Using the Web

Developers spend a significant portion of their time searching for solutions and methods online. While numerous tools have been developed to support this exploratory process, in many cases the answers to developers’ questions involve trade-offs among multiple valid options and not just a single solution. Through interviews, we discovered that developers express a desire for help with decision-making and understanding trade-offs. Through an analysis of Stack Overflow posts, we observed that many answers describe such trade-offs. These findings suggest that tools designed to help a developer capture information and make decisions about trade-offs can provide crucial benefits for both the developers and others who want to understand their design rationale. In this work, we probe this hypothesis with a prototype system named Unakite that captures, organizes, and keeps track of information about trade-offs and builds a comparison table, which can be saved for later as the design rationale. Our evaluation results show that Unakite reduces the cost of collecting trade-off-related information by 45%, and that the resulting comparison table speeds up a subsequent developer’s ability to understand the trade-offs by about a factor of 3.

2019 · Michael Xieyang Liu et al. · UIST
Topics: Explainable AI (XAI); AI-Assisted Decision-Making & Automation; Computational Methods in HCI

Can Privacy Be Satisfying? On Improving Viewer Satisfaction for Privacy-Enhanced Photos Using Aesthetic Transforms

Pervasive photo sharing in online social media platforms can cause unintended privacy violations when elements of an image reveal sensitive information. Prior studies have identified image obfuscation methods (e.g., blurring) to enhance privacy, but many of these methods adversely affect viewers' satisfaction with the photo, which may cause people to avoid using them. In this paper, we study the novel hypothesis that it may be possible to restore viewers' satisfaction by 'boosting' or enhancing the aesthetics of an obscured image, thereby compensating for the negative effects of a privacy transform. Using a between-subjects online experiment, we studied the effects of three artistic transformations on images that had objects obscured using three popular obfuscation methods validated by prior research. Our findings suggest that using artistic transformations can mitigate some negative effects of obfuscation methods, but more exploration is needed to retain viewer satisfaction.

2019 · Rakibul Hasan et al. · Indiana University · CHI
Topics: AI Ethics, Fairness & Accountability; Privacy by Design & User Control

Viewer Experience of Obscuring Scene Elements in Photos to Enhance Privacy

With the rise of digital photography and social networking, people are sharing personal photos online at an unprecedented rate. In addition to their main subject matter, photographs often capture various incidental information that could harm people's privacy. While blurring and other image filters may help obscure private content, they also often affect the utility and aesthetics of the photos, which is important since images shared in social media are mainly for human consumption. Existing studies of privacy-enhancing image filters either primarily focus on obscuring faces, or do not systematically study how filters affect image utility. To understand the trade-offs when obscuring various sensitive aspects of images, we study eleven filters applied to obfuscate twenty different objects and attributes, and evaluate how effectively they protect privacy and preserve image quality for human viewers.

2018 · Rakibul Hasan et al. · Indiana University · CHI
Topics: Privacy by Design & User Control; Privacy Perception & Decision-Making