The Role of Partisan Culture in Mental Health Language Online

The impact of culture on how people express distress in online support communities is increasingly a topic of interest within Computer Supported Cooperative Work (CSCW) and Human-Computer Interaction (HCI). In the United States, distinct cultures have emerged from each of the two dominant political parties, forming a primary lens through which people navigate online and offline worlds. We examine whether partisan culture may play a role in how U.S. Republican and Democrat users of online mental health support communities express distress. We present a large-scale observational study of 2,184,356 posts from 8,916 statistically matched Republican, Democrat, and unaffiliated online support community members. We use methods from causal inference to statistically match partisan users along covariates corresponding to demographic attributes and platform use, creating comparable cohorts for analysis. We then apply natural language processing methods to understand how expressions of distress compare between closely matched opposing partisans, and between closely matched partisans and typical support community members. Our data spans January 2013 to December 2022, a period of both rising political polarization and rising mental health concerns. We find that partisan culture does shape expressions of distress, underscoring the importance of considering partisan cultural differences in the design of online support community platforms.

CSCW 2025 · Sachin R Pendse et al. · Partisan Discourse Online
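
A minimal Python sketch of the cohort-matching step described above: fit a propensity score over user covariates, then greedily pair each partisan user with the closest-scoring user from the comparison group. The covariate columns, the logistic-regression score model, and greedy 1:1 matching without replacement are illustrative assumptions, not the authors' actual pipeline.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    def match_cohorts(users: pd.DataFrame, covariates: list[str], group_col: str) -> list[tuple[int, int]]:
        """Greedy 1:1 propensity-score matching without replacement.

        `users` holds one row per account, a binary `group_col` (1 = cohort of
        interest), and numeric covariate columns (e.g., posting activity).
        """
        df = users.reset_index(drop=True)
        X, y = df[covariates].to_numpy(), df[group_col].to_numpy()
        score = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
        treated = np.flatnonzero(y == 1)
        control = list(np.flatnonzero(y == 0))
        pairs = []
        for t in treated:
            if not control:
                break
            c = min(control, key=lambda i: abs(score[i] - score[t]))  # closest propensity score
            pairs.append((int(t), int(c)))
            control.remove(c)  # match without replacement
        return pairs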

Interaction Techniques for Providing Sensitive Location Data of Interpersonal Violence with User-Defined Privacy Preservation

Violence is a significant public health issue. Interventions to reduce violence rely on data about where incidents occur. Cities have historically used incomplete law enforcement crime data, but many are shifting toward data collected from hospital patients via the Cardiff Model to form a more complete understanding of violence. Still, location data is fraught with issues of completeness, quality, and privacy. For example, if a patient feels that sharing a detailed location may expose them to additional risks, such as undesired police involvement or retaliatory violence, they may be unwilling or unable to share it. Consequently, the most vulnerable survivors of violence may remain the most at risk. We designed a user interface and mapping algorithm to confront these challenges and conducted an experiment with emergency department patients. The results indicate a significant improvement in the location data obtained using the interface compared to the existing screening interview.

CHI 2025 · Alex Godwin et al. · American University, Computer Science · Privacy by Design & User Control; Content Moderation & Platform Governance; Community Engagement & Civic Technology
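
One way to picture the user-defined privacy preservation described above is the following Python sketch, where the patient picks a privacy level and the exact coordinate is snapped to the center of a coarser grid cell before being shared. The level names, cell sizes, and snapping scheme are assumptions made for illustration, not the paper's mapping algorithm.

    import math

    # privacy level -> approximate cell size in degrees (one degree of latitude is roughly 110 km)
    CELL_SIZE_DEG = {"exact": 0.0, "block": 0.001, "neighborhood": 0.01, "district": 0.1}

    def generalize_location(lat: float, lon: float, level: str) -> tuple[float, float]:
        """Snap a coordinate to the center of the grid cell implied by the chosen level."""
        cell = CELL_SIZE_DEG[level]
        if cell == 0.0:
            return lat, lon  # the patient consented to sharing the exact point
        snap = lambda v: (math.floor(v / cell) + 0.5) * cell
        return snap(lat), snap(lon)

    # e.g., generalize_location(38.9072, -77.0369, "neighborhood") -> (38.905, -77.035)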

Can AI Model the Complexities of Human Moral Decision-making? A Qualitative Study of Kidney Allocation Decisions

A growing body of work in Ethical AI attempts to capture human moral judgments through simple computational models. The key question we address in this work is whether such simple AI models capture the critical nuances of moral decision-making, focusing on the use case of kidney allocation. We conducted twenty interviews in which participants explained the rationale for their judgments about who should receive a kidney. We observe that participants: (a) value patients' morally relevant attributes to different degrees; (b) use diverse decision-making processes, citing heuristics to reduce decision complexity; (c) can change their opinions; (d) sometimes lack confidence in their decisions (e.g., due to incomplete information); and (e) express both enthusiasm and concern regarding AI assisting humans in kidney allocation decisions. Based on these findings, we discuss the challenges of computationally modeling moral judgments as a stand-in for human input, highlight drawbacks of current approaches, and suggest future directions to address these issues.

CHI 2025 · Vijay Keswani et al. · Duke University · Explainable AI (XAI); AI Ethics, Fairness & Accountability; Privacy Perception & Decision-Making

LLM Whisperer: An Inconspicuous Attack to Bias LLM Responses

Writing effective prompts for large language models (LLMs) can be unintuitive and burdensome. In response, services that optimize or suggest prompts have emerged. While such services can reduce user effort, they also introduce a risk: the prompt provider can subtly manipulate prompts to produce heavily biased LLM responses. In this work, we show that subtle synonym replacements in prompts can increase the likelihood (by a difference of up to 78%) that LLMs mention a target concept (e.g., a brand, political party, or nation). We substantiate our observations through a user study, showing that our adversarially perturbed prompts 1) are indistinguishable from unaltered prompts by humans, 2) push LLMs to recommend target concepts more often, and 3) make users more likely to notice target concepts, all without arousing suspicion. The practicality of this attack has the potential to undermine user autonomy. Among other measures, we recommend implementing warnings against using prompts from untrusted parties.

CHI 2025 · Weiran Lin et al. · Carnegie Mellon University · Human-LLM Collaboration; AI Ethics, Fairness & Accountability; Algorithmic Fairness & Bias
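
The core perturbation idea, swapping in synonyms that raise how often sampled responses mention a target concept, can be sketched as a greedy search. The synonym table, the caller-supplied `llm` function, and the single-swap search below are hypothetical stand-ins, not the authors' implementation.

    from typing import Callable

    SYNONYMS = {"fast": ["quick", "speedy"], "car": ["vehicle", "automobile"]}  # assumed table

    def mention_rate(prompt: str, target: str, llm: Callable[[str], str], samples: int = 20) -> float:
        """Fraction of sampled LLM responses that mention the target concept."""
        return sum(target.lower() in llm(prompt).lower() for _ in range(samples)) / samples

    def perturb_prompt(prompt: str, target: str, llm: Callable[[str], str]) -> str:
        """Greedily keep the single synonym swap that most raises the mention rate."""
        words = prompt.split()
        best, best_rate = prompt, mention_rate(prompt, target, llm)
        for i, word in enumerate(words):
            for syn in SYNONYMS.get(word.lower(), []):
                candidate = " ".join(words[:i] + [syn] + words[i + 1:])
                rate = mention_rate(candidate, target, llm)
                if rate > best_rate:
                    best, best_rate = candidate, rate
        return best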

Exploring Security and Privacy Discourse on Twitter During the "Justice Pour Nahel" Movement in France

The shooting of Nahel Merzouk in June 2023 ignited widespread protests across France, known as the "Justice Pour Nahel" movement, drawing attention to the privacy and security risks faced by protesters. This study explores the discourse on Twitter during the protests, focusing on digital surveillance and censorship concerns. We analyzed 341 tweets using qualitative methods to understand the security and privacy attitudes and advice shared by French-speaking users. Our findings reveal a strong apprehension toward increased long-term government surveillance and censorship, with limited and often low-tech advice on how to counteract these threats. We highlight the discrepancy between the concerns raised and the available guidance and compare our findings with those of prior work. Grounded in our analysis and informed by prior research, we offer targeted recommendations for activists, policymakers, and researchers to mitigate security and privacy concerns arising from social unrest, both in France and globally.

CHI 2025 · Hiba Laabadli et al. · Duke University, Computer Science Department · Privacy by Design & User Control; Privacy Perception & Decision-Making; Social Platform Design & User Behavior
"I Deleted It After the Overturn of Roe v. Wade": Understanding Women's Privacy Concerns Toward Period-Tracking Apps in the Post Roe v. Wade EraThe overturn of Roe v. Wade has taken away the constitutional right to abortion. Prior work shows that period-tracking apps' data practices can be used to detect pregnancy and abortion, hence putting women at risk of being prosecuted. It is unclear how much women know about the privacy practices of such apps and how concerned they are after the overturn. Such knowledge is critical to designing effective strategies for stakeholders to enhance women's reproductive privacy. We conducted an online 183-participant vignette survey with US women from states with diverse policies on abortion. Participants were significantly concerned about the privacy practices of the period-tracking apps, such as data access by law enforcement and third parties. However, participants felt uninformed and powerless about risk mitigation practices. We provide several recommendations to enhance women's privacy awareness toward their period-tracking practices.2024JCJiaxun Cao et al.Duke Kunshan University, Duke UniversityAI Ethics, Fairness & AccountabilityPrivacy by Design & User ControlPrivacy Perception & Decision-MakingCHI

Help Supporters: Exploring the Design Space of Assistive Technologies to Support Face-to-Face Help Between Blind and Sighted Strangers

Blind and low-vision (BLV) people face many challenges when venturing into public environments, often wishing it were easier to get help from people nearby. Ironically, while many sighted individuals are willing to help, such interactions are infrequent. Asking for help is socially awkward for BLV people, and sighted people lack experience in helping BLV people. Through a mixed-ability research-through-design process, we explore four diverse approaches toward how assistive technology can serve as help supporters that collaborate with both BLV and sighted parties throughout the help process. These approaches span two phases: the connection phase (finding someone to help) and the collaboration phase (facilitating help after finding someone). Our findings from a 20-participant mixed-ability study reveal how help supporters can best facilitate connection, which types of information they should present during both phases, and more. We discuss design implications for future approaches to support face-to-face help.

CHI 2024 · Yuanyang Teng et al. · Columbia University · Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Universal & Inclusive Design

Assessing User Trust in Active Learning Systems: Insights from Query Policy and Uncertainty Visualization

Active learning systems have become increasingly popular for various applications in machine learning (ML), including medical imaging, environmental monitoring, and geospatial analysis. These systems rely on inputs dynamically queried from people to enhance classification. Ensuring appropriate analyst trust in these systems remains a significant obstacle, as analysts may over-rely or under-rely on the system. Common active learning (AL) strategies enhance classification models by asking an analyst to provide labels for data points with the highest degree of uncertainty. However, model-centric policies do not consider potential priming effects on the analyst and how they will affect people's trust in the system post-training. In this paper, we present an empirical study assessing how AL query policies and visualizations that enhance transparency in a classifier's certainty influence trust in automated image classifiers. We found that query policy may significantly influence an analyst's perception of the system's capabilities, while the level of visual transparency into classifier certainty may influence an analyst's ability to perform the classification task. Our study informs the design of interactive labeling systems to help mitigate the effects of over-reliance.

IUI 2024 · Ian Thomas et al. · Explainable AI (XAI); Interactive Data Visualization; Uncertainty Visualization
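
The query policy mentioned above (asking the analyst to label the items the classifier is least sure about) can be sketched as least-confidence uncertainty sampling in Python. The logistic-regression model and batch size are illustrative assumptions rather than the study's exact setup.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def least_confident_queries(X_labeled, y_labeled, X_pool, batch_size=5):
        """Return indices into X_pool that the analyst should label next."""
        model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
        confidence = model.predict_proba(X_pool).max(axis=1)  # top-class probability per item
        return np.argsort(confidence)[:batch_size]            # lowest confidence = most uncertain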

Understanding People's Concerns and Attitudes Toward Smart Cities

Designing privacy-respecting and human-centric smart cities requires a careful investigation of people's attitudes and concerns toward city-wide data collection scenarios. To capture a holistic view, we carried out this investigation in two phases. We first surfaced people's understanding, concerns, and expectations toward smart city scenarios by conducting 21 semi-structured interviews with people in underserved communities. We complemented this in-depth qualitative study with a 348-participant online survey of the general population to quantify the significance of smart city factors (e.g., type of collected data) on attitudes and concerns. Depending on demographics, privacy and ethics were the two most common types of concerns among participants. We found the type of collected data to have the most and the retention time to have the least impact on participants' perceptions and concerns about smart cities. We highlight key takeaways and recommendations for city stakeholders to consider when designing inclusive and protective smart cities.

CHI 2023 · Pardis Emami-Naeini et al. · Duke University · Privacy by Design & User Control; Smart Cities & Urban Sensing; Sustainable HCI

Interface Design for Crowdsourcing Hierarchical Multi-Label Text Annotations

Human data labeling is an important and expensive task at the heart of supervised learning systems. Hierarchies help humans understand and organize concepts. We ask whether and how concept hierarchies can inform the design of annotation interfaces to improve labeling quality and efficiency. We study this question through the annotation of vaccine misinformation, where the labeling task is difficult and highly subjective. We investigate 6 user interface designs for crowdsourcing hierarchical labels by collecting over 18,000 individual annotations. Under a fixed budget, integrating hierarchies into the design improves crowd workers' F1 scores. We attribute this to (1) grouping similar concepts, which improves F1 scores by +0.16 over random groupings, (2) strong relative performance on high-difficulty examples (relative F1 score difference of +0.40), and (3) filtering out obvious negatives, which increases precision by +0.07. Ultimately, labeling schemes that integrate the hierarchy outperform those that do not, achieving a mean F1 of 0.70.

CHI 2023 · Rickard Stureborg et al. · Duke University · Crowdsourcing Task Design & Quality Control; Field Studies
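
A minimal sketch of the "grouping similar concepts" design contrasted with random groupings: labels are shown to annotators in panes of siblings from the hierarchy. The child-to-parent table is a hypothetical toy example, not the paper's vaccine-misinformation taxonomy.

    import random
    from collections import defaultdict

    HIERARCHY = {  # child label -> parent concept (toy example)
        "alters DNA": "safety", "causes infertility": "safety",
        "contains microchips": "ingredients", "contains fetal cells": "ingredients",
    }

    def sibling_groups(hierarchy: dict[str, str]) -> list[list[str]]:
        """Group labels by their parent concept in the hierarchy."""
        groups = defaultdict(list)
        for child, parent in hierarchy.items():
            groups[parent].append(child)
        return list(groups.values())

    def random_groups(labels: list[str], size: int) -> list[list[str]]:
        """Baseline: shuffle labels and cut them into fixed-size groups."""
        shuffled = random.sample(labels, k=len(labels))
        return [shuffled[i:i + size] for i in range(0, len(shuffled), size)]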

Breaking out of the Lab: Mitigating Mind Wandering with Gaze-Based Attention-Aware Technology in Classrooms

We designed and tested an attention-aware learning technology (AALT) that detects and responds to mind wandering (MW), a shift in attention from task-related to task-unrelated thoughts that is negatively associated with learning. We leveraged an existing gaze-based mind wandering detector that uses commercial off-the-shelf eye tracking to inform real-time interventions during learning with an Intelligent Tutoring System in real-world classrooms. The intervention strategies, co-designed with students and teachers, consisted of using student names, reiterating content, and asking questions, with the aim of reengaging wandering minds and improving learning. After several rounds of iterative refinement, we tested our AALT in two classroom studies with 287 high-school students. We found that the interventions successfully reoriented attention and, compared to two control conditions, reduced mind wandering and improved retention (measured via a delayed assessment) for students with low prior knowledge who occasionally (but not excessively) mind wandered. We discuss implications for developing gaze-based AALTs for real-world contexts.

CHI 2021 · Stephen Hutt et al. · University of Colorado Boulder · Eye Tracking & Gaze Interaction; Intelligent Tutoring Systems & Learning Analytics
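
As a rough illustration of how a gaze-based detector might trigger the co-designed interventions (using the student's name, reiterating content, asking a question), the Python sketch below flags mind wandering when too many recent gaze samples fall outside the task window. The off-screen heuristic, threshold, and cooldown are assumptions; the deployed system used a trained classifier over eye-tracking features.

    import random
    import time

    INTERVENTIONS = ["use_name", "reiterate_content", "ask_question"]

    def off_screen_ratio(gaze_samples: list[tuple[float, float]], w: int = 1920, h: int = 1080) -> float:
        """Fraction of recent gaze samples that fall outside the task window."""
        off = sum(not (0 <= x < w and 0 <= y < h) for x, y in gaze_samples)
        return off / max(len(gaze_samples), 1)

    def maybe_intervene(gaze_samples, last_intervention_time: float, threshold: float = 0.4, cooldown: float = 60.0):
        """Return an intervention name if mind wandering is suspected, else None."""
        if off_screen_ratio(gaze_samples) > threshold and time.time() - last_intervention_time > cooldown:
            return random.choice(INTERVENTIONS)
        return None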