When Recommender Systems Snoop into Social Media, Users Trust Them Less for Health Advice
Recommender systems (RS) have become increasingly vital for guiding health actions. While traditional systems filter content based on demographics, personal activity history, or the preferences of other users, newer systems use social media information to personalize recommendations, drawing on the users’ own activities and/or those of their friends on social media platforms. However, we do not know if these approaches differ in their persuasiveness. To find out, we conducted a user study of a fitness plan recommender system (N = 341), wherein participants were randomly assigned to one of six personalization approaches, with half of them given a choice to switch to a different approach. Data revealed that social media-based personalization threatens users’ identity and increases privacy concerns. Users prefer personalized health recommendations based on their own preferences. Choice enhances trust by providing users with a greater sense of agency and lowering their privacy concerns. These findings provide design implications for RS, especially in the preventive health domain.
2023 · Yuan Sun et al. · The Pennsylvania State University · Recommender System UX; Privacy by Design & User Control; Privacy Perception & Decision-Making · CHI
Is This AI Trained on Credible Data? The Effects of Labeling Quality and Performance Bias on User Trust
To promote data transparency, frameworks such as CrowdWorkSheets encourage documentation of annotation practices on the interfaces of AI systems, but we do not know how they affect user experience. Will the quality of labeling affect perceived credibility of training data? Does the source of annotation matter? Will a credible dataset persuade users to trust a system even if it shows racial biases in its predictions? To find out, we conducted a user study (N = 430) with a prototype of a classification system, using a 2 (labeling quality: high vs. low) × 4 (source: others-as-source vs. self-as-source cue vs. self-as-source voluntary action vs. self-as-source forced action) × 3 (AI performance: none vs. biased vs. unbiased) experiment. We found that high-quality labeling leads to higher perceived training data credibility, which in turn enhances users’ trust in AI, but not when the system shows bias. Practical implications for explainable and ethical AI interfaces are discussed.
2023 · Cheng Chen et al. · Elon University · Explainable AI (XAI); AI Ethics, Fairness & Accountability; Privacy by Design & User Control · CHI
User Trust in Recommendation Systems: A Comparison of Content-Based, Collaborative and Demographic Filtering
Three of the most common approaches used in recommender systems are content-based filtering (matching users’ preferences with products’ characteristics), collaborative filtering (matching users with similar preferences), and demographic filtering (catering to users based on demographic characteristics). Do users’ intuitions lead them to trust one of these approaches over others, independent of the actual operations of these different systems? Does their faith in one type or another depend on the quality of the recommendation, rather than how the recommendation appears to have been derived? We conducted an empirical study with a prototype of a movie recommender system to find out. A 3 (Ostensible Recommender Type: Content vs. Collaborative vs. Demographic Filtering) × 2 (Recommendation Quality: Good vs. Bad) experiment (N = 226) investigated how users evaluate systems and attribute responsibility for the recommendations they receive. We found that users trust systems that use collaborative filtering more, regardless of the system’s performance. They think that they themselves are responsible for good recommendations but that the system is responsible for bad recommendations (reflecting a self-serving bias). Theoretical insights, design implications and practical solutions for the cold start problem are discussed.
2022 · Mengqi Liao et al. · The Pennsylvania State University · Recommender System UX · CHI
News Informatics: Engaging Individuals with Data-Rich News Content through Interactivity in Source, Medium, and Message
This paper introduces the concept of “news informatics” to refer to journalistic presentation of big data in online sites. For users to be engaged with such data-driven public information, it is important to incorporate interactive tools so that each person can extract personally relevant information. Drawing upon a communication model of interactivity, we designed a data-rich site with three different types of interactive features—namely, modality interactivity, message interactivity, and source interactivity—and empirically tested their relative and combined effects on user engagement and user experience with a 2 (modality) × 3 (source) × 2 (message) field experiment (N = 166). Findings shed light on how interface designers, online news editors and journalists can maximize user engagement with data-rich news content. Certain interactivity combinations are found to be better than others, with a structural equation model (SEM) revealing the underlying theoretical mechanisms and providing implications for the design of news informatics.
2022 · S. Shyam Sundar et al. · The Pennsylvania State University · Automated Driving Interface & Takeover Design; Data Storytelling · CHI
Does Clickbait Actually Attract More Clicks? Three Clickbait Studies You Must Read
Studies show that users do not reliably click more often on headlines classified as clickbait by automated classifiers. Is this because the linguistic criteria (e.g., use of lists or questions) emphasized by the classifiers are not psychologically relevant in attracting interest, or because their classifications are confounded by other unknown factors associated with assumptions of the classifiers? We address these possibilities with three studies—a quasi-experiment using headlines classified as clickbait by three machine-learning models (Study 1), a controlled experiment varying the headline of an identical news story to contain only one clickbait characteristic (Study 2), and a computational analysis of four classifiers using real-world sharing data (Study 3). Studies 1 and 2 revealed that clickbait did not generate more curiosity than non-clickbait. Study 3 revealed that while some headlines generate more engagement, the detectors agreed on a classification only 47% of the time, raising fundamental questions about their validity.
2021 · Maria D. Molina et al. · Michigan State University · Content Moderation & Platform Governance; Misinformation & Fact-Checking · CHI
How Should AI Systems Talk to Users When Collecting Their Personal Information? Effects of Role Framing and Self-referencing on Human-AI Interaction
AI systems collect our personal information in order to provide personalized services, raising privacy concerns and making users leery. As a result, systems have begun emphasizing overt over covert collection of information by directly asking users. This poses an important question for ethical interaction design, which is dedicated to improving user experience while promoting informed decision-making: Should the interface tout the benefits of information disclosure and frame itself as a help-provider? Or, should it appear as a help-seeker? We decided to find out by creating a mockup of a news recommendation system called Mindz and conducting an online user study (N = 293) with the following four variations: AI system as help seeker vs. help provider vs. both vs. neither. Data showed that even though all participants received the same recommendations, power users tended to trust a help-seeking Mindz more, whereas non-power users favored one that is both help-seeker and help-provider.
2021 · Mengqi Liao et al. · The Pennsylvania State University · AI-Assisted Decision-Making & Automation; Privacy by Design & User Control; Privacy Perception & Decision-Making · CHI
Alexa as Coach: Leveraging Smart Speakers to Build Social Agents that Reduce Public Speaking Anxiety
Public speaking anxiety is one of the most common social phobias. We explore the feasibility of using a conversational agent to reduce this anxiety. We developed a public-speaking tutor on the Amazon Alexa platform that enables users to engage in cognitive reconstruction exercises. We also investigated how the sociability of the agent might affect its performance as a tutor. A user study of 53 college students with fear of public speaking showed that the interaction with the agent served to assuage pre-speech state anxiety. Agent sociability improved the sense of interpersonal closeness, which was associated with lower pre-speech anxiety. Moreover, sociability of the agent increased participants' satisfaction and their willingness to continue engagement. Our findings, thus, have implications not only for addressing public speaking anxiety in a scalable way but also for the design of future conversational agents using smart speaker platforms.
2020 · Jinping Wang et al. · Pennsylvania State University · Intelligent Voice Assistants (Alexa, Siri, etc.); Agent Personality & Anthropomorphism; Mental Health Apps & Online Support Communities · CHI
Will Deleting History Make Alexa More Trustworthy? Effects of Privacy and Content Customization on User Experience of Smart Speakers
"Always-on" smart speakers have raised privacy and security concerns, which vendors have sought to address by introducing customizable privacy settings. But does the act of customizing one's privacy preferences have any effect on user experience and trust? To address this question, we developed an app for Amazon Alexa and conducted a user study (N = 90). Our data show that the affordance to customize privacy settings enhances trust and usability for regular users, while it has adverse effects on power users. In addition, only enabling privacy-setting customization without allowing content customization negatively affects trust among users with higher privacy concerns. When they can customize both content and privacy settings, user trust is highest. That is, while privacy customization may cause reactance among power users, allowing privacy-concerned individuals to simultaneously customize content can help to alleviate the resultant negative effect on trust. These findings have implications for designing more privacy-sensitive and trustworthy smart speakers.
2020 · Eugene Cho et al. · Pennsylvania State University · Home Voice Assistant Experience; Smart Home Privacy & Security · CHI
Online Privacy Heuristics that Predict Information Disclosure
Online users' attitudes toward privacy are context-dependent. Studies show that contextual cues are quite influential in motivating users to disclose personal information. Increasingly, these cues are embedded in the interface, but the mechanisms of their effects (e.g., unprofessional design contributing to more disclosure) are not fully understood. We posit that each cue triggers a specific "cognitive heuristic" that provides a rationale for decision-making. Using a national survey (N = 786) that elicited participants' disclosure intentions in common online scenarios, we identify 12 distinct heuristics relevant to privacy, and demonstrate that they are systematically associated with information disclosure. Data show that those with a higher accessibility to a given heuristic are more likely to disclose information. Design implications for protection of online privacy and security are discussed.
2020 · S. Shyam Sundar et al. · Pennsylvania State University · Privacy by Design & User Control; Privacy Perception & Decision-Making · CHI
Machine Heuristic: When We Trust Computers More than Humans with Our Personal Information
In this day and age of identity theft, are we likely to trust machines more than humans for handling our personal information? We answer this question by invoking the concept of "machine heuristic," which is a rule of thumb that machines are more secure and trustworthy than humans. In an experiment (N = 160) that involved making airline reservations, users were more likely to reveal their credit card information to a machine agent than a human agent. We demonstrate that cues on the interface trigger the machine heuristic by showing that those with higher cognitive accessibility of the heuristic (i.e., stronger prior belief in the rule of thumb) were more likely than those with lower accessibility to disclose to a machine, but they did not differ in their disclosure to a human. These findings have implications for design of interface cues conveying machine vs. human sources of our online interactions.
2019 · S. Shyam Sundar et al. · Pennsylvania State University · Privacy by Design & User Control; Privacy Perception & Decision-Making · CHI
“This App Would Like to Use Your Current Location to Better Serve You”: Importance of User Assent and System Transparency in Personalized Mobile Services
Modern mobile apps aim to provide personalized services without appearing intrusive. A common strategy is to let the user initiate the service request (e.g., “click here to receive coupons for your favorite products”), a practice known as “overt personalization.” Another strategy is to assuage users’ privacy concerns by being transparent about how their data would be collected, utilized and stored. To test these two strategies, we conducted a 2 (Personalization: Overt vs. Covert) × 2 (Transparency: High vs. Low) factorial experiment, with a fifth control condition. Participants (N = 302) interacted with GreenByMe, a prototype of an eco-friendly mobile application. Data show that overt personalization affects perceived control. Significant three-way interactions between power usage, perceived overt personalization and perceived information transparency were seen on perceived ease of use, trust in the app, user engagement and behavioral intention to use the app in the future. In addition, results reveal that perceived information transparency also promotes trust, which is negatively linked with privacy concerns and positively correlated with user engagement and product involvement.
2018 · Tsai-Wei Chen et al. · United Health · Privacy by Design & User Control; Privacy Perception & Decision-Making · CHI
Interface Cues to Promote Disclosure and Build Community: An Experimental Test of Crowd and Connectivity Cues in an Online Sexual Health Forum
Health forums and support groups depend on participant self-disclosure for their success, but the sensitive nature of personal health concerns raises privacy concerns that may constrain what users are willing to reveal. To address this issue, we explore the impact of visual cues designed to (1) convey two facets of social influence—crowd size and social network connectivity—and (2) provide a frame that enhances the forum’s sense of community. A 3 (Cue type: Crowd, Connectivity, None) × 2 (Framing) factorial experiment (N = 218) showed that cues implying greater crowd size and connectivity lead to more self-disclosure of sensitive information, and higher intentions to revisit the community. Further, user belief in the community-building heuristic positively predicts self-disclosure and intentions, while also moderating the effect of the connectivity cue in a direction which implies that the cue encourages disclosure by triggering the community-building heuristic. Implications for the design of online groups are discussed.
2018 · Jinyoung Kim et al. · Disclosure and Anonymity · CSCW
Panel: Without a Trace: How Studying Invisible Interactions Can Help Us Understand Social Media
Scholars studying social media have embraced the opportunities afforded by behavioral data captured by online tools to explore the implications of platform use for outcomes such as well-being, relationship maintenance, and perceptions of social capital. However, the prevalence of these methods demands that we consider their potential limitations and the question of how to best combine them with more traditional methods, such as self-report surveys. For this panel, scholars will share brief presentations and then engage with the audience, and each other, to identify concerns, opportunities, and best practices. Guiding questions include: What is lost when we rely exclusively on click-based data? How can researchers better measure and account for “invisible” interactions such as exchanges that are triggered by social media, but occur in other channels? What principles are important to bear in mind as we attempt to capture, document, and understand contemporary social media practices?
2018 · Nicole B. Ellison et al. · CSCW