"We're utterly ill-prepared to deal with something like this": Teachers' Perspectives on Student Generation of Synthetic Nonconsensual Explicit Imagery

Synthetic nonconsensual explicit imagery, also referred to as "deepfake nudes", is becoming faster and easier to generate. In the last year, synthetic nonconsensual explicit imagery was reported in at least ten US middle and high schools, generated by students of other students. Teachers are at the front lines of this new form of image abuse and have a valuable perspective on threat models in this context. We interviewed 17 US teachers to understand their opinions and concerns about synthetic nonconsensual explicit imagery in schools. No teachers knew of it happening at their schools, but most expected it to be a growing issue. Teachers proposed many interventions, such as improving reporting mechanisms, focusing on consent in sex education, and updating technology policies. However, teachers disagreed about appropriate consequences for students who create such images. We unpack our findings relative to differing models of justice, sexual violence, and sociopolitical challenges within schools.

2025 · Miranda Wei et al. · University of Washington, Paul G. Allen School of Computer Science & Engineering · Deepfake & Synthetic Media Detection; Cyberbullying & Online Harassment · CHI

"It doesn't tell me anything about how my data is used": User Perceptions of Data Collection Purposes

Data collection purposes and their descriptions are presented on almost all privacy notices under the GDPR, yet there is a lack of research focusing on how effective they are at informing users about data practices. We fill this gap by investigating users' perceptions of data collection purposes and their descriptions, a crucial aspect of informed consent. We conducted 23 semi-structured interviews with European users to investigate user perceptions of six common purposes (Strictly Necessary, Statistics and Analytics, Performance and Functionality, Marketing and Advertising, Personalized Advertising, and Personalized Content) and identified elements of an effective purpose name and description. We found that most purpose descriptions do not contain the information users wish to know, and that participants preferred some purpose names over others due to their perceived transparency or ease of understanding. Based on these findings, we suggest how the framing of purposes can be improved toward meaningful informed consent.

2024 · Lin Kyi et al. · Max Planck Institute for Security and Privacy · Privacy by Design & User Control; Privacy Perception & Decision-Making · CHI

Sharenting on TikTok: Exploring Parental Sharing Behaviors and the Discourse Around Children's Online Privacy

Since the inception of social media, parents have been sharing information about their children online. Unfortunately, this "sharenting" can expose children to several online and offline risks. Although researchers have studied sharenting on multiple platforms, sharenting on short-form video platforms like TikTok (where posts can contain detailed information, spread quickly, and spark considerable engagement) is understudied. Thus, we provide a targeted exploration of sharenting on TikTok. We analyzed 328 TikTok videos that demonstrate sharenting and 438 videos where TikTok creators discuss sharenting norms. Our results indicate that sharenting on TikTok indeed creates several risks for children, not only within individual posts but also in broader patterns of sharenting that arise when parents repeatedly use children to generate viral content. At the same time, creators voiced sharenting concerns and boundaries that reflect what has been observed on other platforms, indicating the presence of cross-platform norms. Promisingly, we observed that TikTok users are engaging in thoughtful conversations around sharenting and beginning to shift norms toward safer sharenting. We offer concrete suggestions for designers and platforms based on our findings.

2024 · Sophie Stephenson et al. · University of Wisconsin-Madison · Privacy by Design & User Control; Online Identity & Self-Presentation · CHI

Analyzing User Engagement with TikTok's Short Format Video Recommendations using Data Donations

Short-format videos have exploded on platforms like TikTok, Instagram, and YouTube. Despite this, the research community lacks large-scale empirical studies into how people engage with short-format videos and the role of recommendation systems that offer endless streams of such content. In this work, we analyze user engagement on TikTok using data we collect via a data donation system that allows TikTok users to donate their data. We recruited 347 TikTok users and collected 9.2M TikTok video recommendations they received. By analyzing user engagement, we find that the average daily usage time increases over the users' lifetime while the user attention remains stable at around 45%. We also find that users like more videos uploaded by people they follow than those recommended by people they do not follow. Our study offers valuable insights into how users engage with short-format videos on TikTok and lessons learned from designing a data donation system.

2024 · Savvas Zannettou et al. · TU Delft · Recommender System UX; Content Moderation & Platform Governance; Misinformation & Fact-Checking · CHI

Reframe: An Augmented Reality Storyboarding Tool for Character-Driven Analysis of Security & Privacy Concerns

While current augmented reality (AR) authoring tools lower the technical barrier for novice AR designers, they lack explicit guidance to consider potentially harmful aspects of AR with respect to security & privacy (S&P). To address potential threats in the earliest stages of AR design, we developed Reframe, a digital storyboarding tool for designers with no formal training to analyze S&P threats. We accomplish this through a frame-based authoring approach, which captures and enhances storyboard elements that are relevant for threat modeling, and character-driven analysis tools, which personify S&P threats from an underlying threat model to provide simple abstractions for novice designers. Based on evaluations with novice AR designers and S&P experts, we find that Reframe enables designers to analyze threats and propose mitigation techniques that experts consider good quality. We discuss how Reframe can facilitate collaboration between designers and S&P professionals and propose extensions to Reframe to incorporate additional threat models.

2023 · Shwetha Rajaram et al. · AR Navigation & Context Awareness; Privacy by Design & User Control; IoT Device Privacy · UIST

"There's so much responsibility on users right now:" Expert Advice for Staying Safer From Hate and Harassment

Online hate and harassment poses a threat to the digital safety of people globally. In light of this risk, there is a need to equip as many people as possible with advice to stay safer online. We interviewed 24 experts to understand what threats and advice internet users should prioritize to prevent or mitigate harm. As part of this, we asked experts to evaluate 45 pieces of existing hate-and-harassment-specific digital-safety advice to understand why they felt advice was viable or not. We find that experts frequently had competing perspectives for which threats and advice they would prioritize. We synthesize sources of disagreement, while also highlighting the primary threats and advice where experts concurred. Our results inform immediate efforts to protect users from online hate and harassment, as well as more expansive socio-technical efforts to establish enduring safety.

2023 · Miranda Wei et al. · University of Washington, Google · Online Harassment & Counter-Tools; Cyberbullying & Online Harassment · CHI

Eliciting Security & Privacy-Informed Sharing Techniques for Multi-User Augmented Reality

The HCI community has explored new interaction designs for collaborative AR interfaces in terms of usability and feasibility; however, security & privacy (S&P) are often not considered in the design process and left to S&P professionals. To produce interaction proposals with S&P in mind, we extend the user-driven elicitation method with a scenario-based approach that incorporates a threat model involving access control in multi-user AR. We conducted an elicitation study in two conditions, pairing AR/AR experts in one condition and AR/S&P experts in the other, to investigate the impact of each pairing. We contribute a set of expert-elicited interactions for sharing AR content enhanced with access control provisions, analyze the benefits and tradeoffs of pairing AR and S&P experts, and present recommendations for designing future multi-user AR interactions that better balance competing design goals of usability, feasibility, and S&P in collaborative AR.

2023 · Shwetha Rajaram et al. · University of Michigan · Mixed Reality Workspaces; Privacy by Design & User Control · CHI

Investigating Deceptive Design in GDPR's Legitimate Interest

Legitimate interest is one of the six grounds for processing data under the European Union's General Data Protection Regulation (GDPR). The flexibility and ambiguity of the term "legitimate interests" can be problematic; coupled with the lack of enforcement from legal authorities and differing interpretations across data protection authorities, legitimate interests can be exploited as a loophole to collect more user data. Drawing insights from multiple disciplines, we ran two studies to empirically investigate the deceptive designs used when legitimate interests are applied in privacy notices, and how user perceptions line up with these practices. We identified six deceptive designs, and found that the ways legitimate interest is applied in practice do not match user expectations.

2023 · Lin Kyi et al. · Max Planck Institute for Security and Privacy · Algorithmic Transparency & Auditability; Privacy by Design & User Control; Dark Patterns Recognition · CHI

How Language Formality in Security and Privacy Interfaces Impacts Intended Compliance

Strong end-user security practices benefit both the user and hosting platform, but it is not well understood how companies communicate with their users to encourage these practices. This paper explores whether web companies and their platforms use different levels of language formality in these communications and tests the hypothesis that higher language formality leads to users' increased intention to comply. We contribute a dataset and systematic analysis of 1,817 English language strings in web security and privacy interfaces across 13 web platforms, showing strong variations in language. An online study with 512 participants further demonstrated that people perceive differences in the language formality across platforms and that higher language formality is associated with higher self-reported intention to comply. Our findings suggest that formality can be an important factor in designing effective security and privacy prompts. We discuss implications of these results, including how to balance formality with platform language style. As the first work to analyze language formality in user security, our study also provides valuable insights into how platforms can best communicate with users about account security.

2023 · Jackson Stokes et al. · University of Washington · Privacy by Design & User Control; Privacy Perception & Decision-Making · CHI

Understanding People's Concerns and Attitudes Toward Smart Cities

Designing privacy-respecting and human-centric smart cities requires a careful investigation of people's attitudes and concerns toward city-wide data collection scenarios. To capture a holistic view, we carried out this investigation in two phases. We first surfaced people's understanding, concerns, and expectations toward smart city scenarios by conducting 21 semi-structured interviews with people in underserved communities. We complemented this in-depth qualitative study with a 348-participant online survey of the general population to quantify the significance of smart city factors (e.g., type of collected data) on attitudes and concerns. Depending on demographics, privacy and ethics were the two most common types of concerns among participants. We found the type of collected data to have the most and the retention time to have the least impact on participants' perceptions and concerns about smart cities. We highlight key takeaways and recommendations for city stakeholders to consider when designing inclusive and protective smart cities.

2023 · Pardis Emami-Naeini et al. · Duke University · Privacy by Design & User Control; Smart Cities & Urban Sensing; Sustainable HCI · CHI

What Makes a "Bad" Ad? User Perceptions of Problematic Online Advertising

Online display advertising on websites is widely disliked by users, with many turning to ad blockers to avoid "bad" ads. Recent evidence suggests that today's ads contain potentially problematic content, in addition to well-studied concerns about the privacy and intrusiveness of ads. However, we lack knowledge of which types of ad content users consider problematic and detrimental to their browsing experience. Our work bridges this gap: first, we create a taxonomy of 15 positive and negative user reactions to online advertising from a survey of 60 participants. Second, we characterize classes of online ad content that users dislike or find problematic, using a dataset of 500 ads crawled from popular websites, labeled by 1,000 participants using our taxonomy. Among our findings, we report that users consider a substantial amount of ads on the web today to be clickbait, untrustworthy, or distasteful, including ads for software downloads, listicles, and health & supplements.

2021 · Eric Zeng et al. · University of Washington · Dark Patterns Recognition; Social Platform Design & User Behavior; Content Moderation & Platform Governance · CHI

Fake News on Facebook and Twitter: Investigating How People (Don't) Investigate

With misinformation proliferating online and more people getting news from social media, it is crucial to understand how people assess and interact with low-credibility posts. This study explores how users react to fake news posts on their Facebook or Twitter feeds, as if posted by someone they follow. We conducted semi-structured interviews with 25 participants who use social media regularly for news, used a browser extension to temporarily cause fake news to appear in their feeds without their knowledge, and observed as they walked us through their feeds. We found various reasons why people do not investigate low-credibility posts, including taking trusted posters' content at face value, as well as not wanting to spend the extra time. We also document people's investigative methods for determining credibility using both platform affordances and their own ad-hoc strategies. Based on our findings, we present design recommendations for supporting users when investigating low-credibility posts.

2020 · Christine Geeng et al. · University of Washington · Content Moderation & Platform Governance; Misinformation & Fact-Checking · CHI

Who's In Control? Interactions In Multi-User Smart Homes

Adoption of commercial smart home devices is rapidly increasing, allowing in-situ research in people's homes. As these technologies are deployed in shared spaces, we seek to understand interactions among multiple people and devices in a smart home. We conducted a mixed-methods study with 18 participants (primarily people who drive smart device adoption in their homes) living in multi-user smart homes, combining semi-structured interviews and experience sampling. Our findings surface tensions and cooperation among users in several phases of smart device use: device selection and installation, ordinary use, when the smart home does not work as expected, and over longer term use. We observe an outsized role of the person who installs devices in terms of selecting, controlling, and fixing them; negotiations between parents and children; and minimally voiced privacy concerns among co-occupants, possibly due to participant sampling. We make design recommendations for supporting long-term smart homes and non-expert household members.

2019 · Christine Geeng et al. · University of Washington · Smart Home Interaction Design; Smart Home Privacy & Security; Participatory Design · CHI