Meeting Patients Where They're At: Toward the Expansion of Professional Chaplaincy Care into Online Spiritual Care Communities
Despite a growing need for spiritual care in the US, it is often under-served, inaccessible, or misunderstood, while almost no prior work in CSCW/HCI research has engaged with professional chaplains and spiritual care providers. This interdisciplinary study aims to develop a foundational understanding of how spiritual care may (or may not) be expanded into online spaces, especially focusing on anonymous, asynchronous, and text-based online communities. We conducted an exploratory mixed-methods study with chaplains (N=22) involving interviews and user testing sessions centered around Reddit support communities to understand participants' perspectives on technology and their ideations about the role of chaplaincy in prospective Online Spiritual Care Communities (OSCCs). Our Grounded Theory Method analysis highlighted benefits of OSCCs including: meeting patients where they are at; accessibility and scalability; and facilitating patient-initiated care. Chaplains highlighted how their presence in OSCCs could help with shaping peer interactions, moderation, synchronous chats for group care, and redirecting to external resources, while also raising important feasibility concerns, risks, and needs for future design and research. We used an existing taxonomy of chaplaincy techniques to show that some spiritual care strategies may be amenable to online spaces, yet we also exposed the limitations of technology to fully mediate spiritual care and the need to develop new online chaplaincy interventions. Based on these findings, we contribute the model of a "Care Loop" between institutionally-based formal care and platform-based community care to expand access and drive greater awareness and utilization of spiritual care. We also contribute design implications to guide future work in online spiritual care.
2025 | Alemitu Bezabih et al. | Palliative Care | CSCW
Governance of the Black Experience on Reddit: r/BlackPeopleTwitter as a Case Study in Supporting Sense of Virtual Community for Black Users
Despite frequent efforts to combat racism, almost no research has explored how to cultivate positive experiences of thriving Black culture on Reddit. In this case study, we surveyed users of r/BlackPeopleTwitter (BPT), a large, popular subreddit that showcases screenshots of hilarious or insightful social media posts made by Black people (mainly from Black Twitter). Our research questions seek to understand users' motivations for visiting BPT, how they experience a sense of virtual community (SOVC) and membership in BPT, and how BPT's governance influences these experiences. We find that users come to BPT primarily for excellent humor and entertainment, sociopolitical context on issues relevant to Black people, and/or partaking in the shared Black experience. Black users are more likely to report higher SOVC and to identify as members, whereas non-Black users are more likely to identify as guests or visitors to the community. To protect Black expression, the BPT moderation team implemented a governance strategy for verifying racial identity and limiting participation to only verified users in certain threads. Our data suggest that this policy is a contentious but influential aspect of SOVC that simultaneously constructs and challenges the sense of the subreddit existing as a safe space for Black people. We synthesize these results by discussing how differing platform affordances across Twitter and Reddit combine to cultivate a thriving Black community on Reddit; how the need for Black authenticity on an otherwise anonymous platform can guide future research in identity verification; and how the limitations of this study motivate future work to support all marginalized communities online.
2024 | C. Estelle Smith et al. | Session 2b: Exploring Race, Gender, and Identity in Digital Platforms | CSCW
Understanding Roboticists' Power through Matrix Guided Power Analysis
Roboticists wield substantial power through the ways we choose to design and deploy robots. But understanding the nature of this power requires us to consider the different types of power wielded through different types of robot design choices, and the social and historical factors that shape the power landscape into which robots are embedded. To facilitate this type of analysis, I present Matrix-Guided Power Analysis (MGPA), a framework for analyzing the different types of power that technologists wield across different domains of power, with sensitivity to the social and historical forces that determine the default and alternative trajectories of those technologies. Further, I show how MGPA can be used to better understand the specific types of power that roboticists wield.
2024 | Tom Williams | Mental Health Apps & Online Support Communities | Human-Robot Collaboration (HRC) | Technology Ethics & Critical HCI | HRI
More Than Binary: Transgender and Nonbinary Perspectives on Human Robot Interaction
Previous research has shown that gendered robot designs prompt users to carry biases from human-human interaction into human-robot interaction. Yet avoiding gendered designs in human-robot interaction may be infeasible, as humans readily gender robots based on factors like name, voice, and pronouns. One solution to this challenge could be to use an intentionally agender robot design that is explicitly presented as agender in the way that some humans identify. Yet it is unclear whether trans, nonbinary, or otherwise gender nonconforming people would view this approach as a positive and inclusive step in robot design, or whether they would view it as appropriative or otherwise problematic. In fact, trans and nonbinary perspectives on human-robot interaction have not been previously studied. In this work, we thus present the first study of trans and nonbinary perspectives on robot design, with a particular focus on perceptions of robot gender and agender robot design. Our results suggest that trans and nonbinary users readily accept robots depicted as agender, and view this as a largely positive design strategy that could help normalize non-cisgender identities. Yet our results also highlight key risks posed by this design strategy, including risks of backlash, caricature, and dehumanization, and the ways these risks are moderated by a number of political and economic factors.
2024 | Michael Stolp-Smith et al. | Gender & Race Issues in HCI | LGBTQ+ Community Technology Design | HRI
(Gestures Vaguely): The Effects of Robots' Use of Abstract Pointing Gestures in Large-Scale Environments
As robots are deployed into large-scale human environments, they will need to engage in task-oriented dialogues about objects and locations beyond those that can currently be seen. In these contexts, speakers use a wide range of referring gestures beyond those used in the small-scale interaction contexts that HRI research typically investigates. In this work, we thus seek to understand how robots can better generate gestures to accompany their referring language in large-scale interaction contexts. In service of this goal, we present the results of two human-subject studies: (1) a human-human study exploring how human gestures change in large-scale interaction contexts, and identifying human-like gestures that are suitable to such contexts yet readily implemented on robot hardware; and (2) a human-robot study conducted in a tightly controlled Virtual Reality environment, to evaluate robots' use of those identified gestures. Our results show that robot use of Precise Deictic and Abstract Pointing gestures affords different types of benefits when used to refer to visible vs. non-visible referents, leading us to formulate three concrete design guidelines. These results highlight both the opportunities for robot use of more humanlike gestures in large-scale interaction contexts and the need for future work exploring their use as part of multi-modal communication.
2024 | Annie Huang et al. | Hand Gesture Recognition | Domestic Robots | Social Robot Interaction | HRI
Robots for Social Justice (R4SJ): Toward a More Equitable Practice of Human-Robot Interaction
In this work, we present Robots for Social Justice (R4SJ): a framework for an equitable engineering practice of Human-Robot Interaction, grounded in the Engineering for Social Justice (E4SJ) framework for Engineering Education. To understand the new insights this framework could provide to the field of HRI, we analyze the past decade of papers published at the ACM/IEEE International Conference on Human-Robot Interaction, and examine how well current HRI research aligns with the principles espoused in the E4SJ framework. Based on the gaps identified through this analysis, we make five concrete recommendations, and highlight key questions that can guide introspection for engineers, designers, and researchers. We believe these considerations are a necessary step not only to ensure that our engineering education efforts encourage students to engage in equitable and societally beneficial engineering practices (the purpose of E4SJ), but also to ensure that the technical advances we present at conferences like HRI promise true advances to society, and not just to fellow researchers and engineers.
2024 | Yifei Zhu et al. | Social Robot Interaction | Technology Ethics & Critical HCI | HRI
The Power of Advice: Differential Blame for Human and Robot Advisors and Deciders in a Moral Advising Context
Due to their unique persuasive power, language-capable robots must be able to both act in line with human moral norms and clearly and appropriately communicate those norms. These requirements are complicated by the possibility that people may blame human and robot agents differently for violations of those norms. These complications raise particular challenges for robots giving moral advice to primary decision makers, as the robots and the deciders may be blamed differently for endorsing the same moral action. In this work, we thus explore how people morally evaluate both human and robot advisors for human and robot deciders. In Experiment 1 (n = 555), we examine human blame judgments of robot and human moral advisors and find clear evidence for an advice as decision hypothesis: advisors are blamed similarly to how they would be blamed for making the decisions they advised. In Experiment 2 (n = 1326), we examine people's blame judgments of a robot or human decider following the advice of a robot or human moral advisor. We replicate the results from Experiment 1 and also find clear evidence for a differential dismissal hypothesis, in which moral deciders are penalized for ignoring moral advice, especially when a robot decider ignores a human advisor's recommendation. Our results raise questions about people's perception of moral advising situations, especially when they involve robots, and they present challenges for the design of morally competent language-capable robots more generally.
2024 | Alyssa Hanson et al. | Agent Personality & Anthropomorphism | Social Robot Interaction | HRI
What a Thing to Say! Which Linguistic Politeness Strategies Should Robots Use in Noncompliance Interactions?
For social robots to succeed in human environments, they must comprehend and follow human norms. In particular, robots must respond in effective, yet appropriate ways when humans violate these norms, e.g., when humans give robots unethical commands. Previous work has shown that humans expect robots to be proportional in their norm-violation responses; but there are a wide range of approaches robots could use to tune the politeness of their utterances to achieve proportionality, and it is not obvious whether all such strategies are appropriate for robots to use. In this work, we present the results of a human-subjects study assessing robots' use of human-like Face Theoretic politeness strategies to achieve this proportionality. Our results show that while people expect robots to modulate the politeness of their responses, they do not expect them to strictly mimic human linguistic behaviors. Specifically, linguistic politeness strategies that use direct, formal language are perceived as more effective and more appropriate than strategies that use indirect, informal language.
2024 | Terran Mott et al. | Agent Personality & Anthropomorphism | Social Robot Interaction | HRI
A Tale of Two Communities: Privacy of Third Party App Users in Crowdsourcing - The Case of Receipt Transcription
Mobile and web apps increasingly rely on data generated or provided by users, such as their uploaded documents and images. Unfortunately, those apps may raise significant user privacy concerns. Specifically, to train or adapt their models for accurately processing huge amounts of data continuously collected from millions of app users, app or service providers have widely adopted the approach of crowdsourcing, recruiting crowd workers to manually annotate or transcribe the sampled, ever-changing user data. However, when users' data are uploaded through apps and then become widely accessible to hundreds of thousands of anonymous crowd workers, many human-in-the-loop privacy questions arise concerning both the app user community and the crowd worker community. In this paper, we investigate the privacy risks brought by this significant trend of large-scale crowd-powered processing of app users' data generated in their daily activities. We consider the representative case of receipt scanning apps that have millions of users, and focus on the corresponding receipt transcription tasks that frequently appear on crowdsourcing platforms. We design and conduct an app user survey study (n=108) to explore how app users perceive privacy in the context of using receipt scanning apps. We also design and conduct a crowd worker survey study (n=102) to explore crowd workers' experiences with receipt and other types of transcription tasks, as well as their attitudes towards such tasks. Overall, we found that most app users and crowd workers expressed strong concerns about the potential privacy risks to receipt owners, and they also had a very high level of agreement with the need to protect receipt owners' privacy. Our work provides insights into app users' potential privacy risks in crowdsourcing, and highlights the need and challenges for protecting third party users' privacy on crowdsourcing platforms. We have responsibly disclosed our findings to the related crowdsourcing platform and app providers.
2023 | Weiping Pei et al. | Crowds | CSCW
"Thoughts & Prayers" or ":Heart Reaction: & :Prayer Reaction:": How the Release of New Reactions on CaringBridge Reshapes Supportive Communication in Health Crises
Following Facebook's introduction of the "Like" in 2009, CaringBridge (a nonprofit health journaling platform) implemented a "Heart" symbol as a single-click reaction affordance in 2012. In 2016, Facebook expanded its Like into a set of emotion-based reactions. In 2021, CaringBridge likewise added three new reactions: "Prayer", "Happy", and "Sad." Through user surveys (N=808) and interviews (N=13), we evaluated this product launch. Unlike Likes on mainstream social media, CaringBridge's single-click Heart was consistently interpreted as a simple, meaningful expression of acknowledgement and support. Although most users accepted the new reactions, the product launch transformed user perceptions of the feature and ignited major disagreement regarding the meanings and functions of reactions in the high-stakes context of health crises. Some users found the new reactions useful and convenient, and felt they reduced caregiver burden; others felt they caused emotional harm by stripping communication of meaningful expression and authentic care. Overall, these results surface tensions for small social media platforms that need to survive amidst giants, and highlight crucial trade-offs between the cognitive effort, meaningfulness, and efficiency of different forms of Computer-Mediated Communication (CMC). Our work provides three contributions to support researchers and designers in navigating these tensions: (1) empirical knowledge of how users perceived the reactions launch on CaringBridge; (2) design implications for improving health-focused CMC; and (3) concrete questions to guide future research into reactions and health-focused CMC.
2023 | C. Estelle Smith et al. | Health Support | CSCW
Beyond the Session: Centering Teleoperators in Socially Assistive Robot-Child Interactions Reveals the Bigger Picture
Socially assistive robots play an effective role in children's therapy and education. Robots engage children and provide interaction that is free of the potential judgment of human peers and adults. Research in socially assistive robots for children generally focuses on therapeutic and educational outcomes for those children, informed by a vision of autonomous robots. This perspective ignores the therapists and educators who operate these robots in practice. Through nine interviews with individuals who have used robots to deliver socially assistive services to neurodivergent children, we (1) define a dual-cycle model of therapy that helps capture the domain expert view of therapy, (2) identify six core themes of teleoperator needs and patterns across these themes, (3) provide high-level guidelines and detailed recommendations for designing teleoperated socially assistive robot systems, and (4) outline a vision of robot-assisted therapy, informed by these guidelines and recommendations, that centers teleoperators of socially assistive robots in practice.
2023 | Saad Elbeleidy et al. | Human Robot Interaction | CSCW
I Need Your Help... or Do I? Maintaining Situation Awareness through Performative Autonomy
Interactive intelligent systems are increasingly being deployed in safety-critical contexts like Space Exploration. For humans to safely and successfully complete collaborative tasks with robots in these contexts, they must maintain Situational Awareness of their task context without being cognitively overloaded, regardless of whether they are co-located with robots or interacting with them from a distance of thousands or millions of miles. In this paper, we present a novel autonomy design strategy we term Performative Autonomy, in which robots behave as if they have a lower level of autonomy than they are truly capable of (i.e., asking for advice they do not believe they truly need), for the sole purpose of maintaining interactants' Situational Awareness. In our first experiment (n=264), we begin by demonstrating that Performative Autonomy can increase Situational Awareness (SA) without overly increasing workload, and that this is true across tasks with different baseline levels of Mental Workload. In our second experiment (n=318), we consider cases where robots do not believe they need advice, but in fact have faulty perception or decision-making capabilities. In this experiment, we only observed benefits of Performative Autonomy for specific types of questions, and only when there was significant cognitive load imposed by a secondary task; yet we observed uniform benefit on task performance for asking these types of questions regardless of task-imposed Mental Workload. Our results from these two studies (total n=582) thus provide strong support for using this autonomy design strategy in future safety-critical missions as humanity explores the Moon, Mars, and beyond.
2023 | Sayanti Roy et al. | Human-Robot Collaboration (HRC) | Impact of Automation on Work | HRI
Crossing Reality: Comparing Physical and Virtual Robot Deixis
Augmented Reality (AR) technologies present an exciting new medium for human-robot interactions, enabling new opportunities for both implicit and explicit human-robot communication. For example, these technologies enable physically-limited robots to execute non-verbal interaction patterns such as deictic gestures, despite lacking the physical morphology necessary to do so. However, a wealth of HRI research has demonstrated real benefits to physical embodiment (compared to, e.g., virtual robots on screens), suggesting AR augmentation of virtual robot parts could face challenges. In this work, we present empirical evidence comparing the use of virtual (AR) and physical arms to perform deictic gestures that identify virtual or physical referents. Our subjective and objective results demonstrate the success of mixed reality deictic gestures in overcoming these potential limitations, and their successful use regardless of differences in physicality between gesture and referent. These results help to motivate the further deployment of mixed reality robotic systems and provide nuanced insight into the role of mixed-reality technologies in HRI contexts.
2023 | Zhao Han et al. | Mixed Reality Workspaces | Social Robot Interaction | HRI
Fresh Start: Encouraging Politeness in Wakeword-Driven Human-Robot Interaction
Deployed social robots are increasingly relying on wakeword-based interaction, where interactions are human-initiated by a wakeword like "Hey Jibo". While wakewords help to increase speech recognition accuracy and ensure privacy, there is concern that wakeword-driven interaction could encourage impolite behavior, because wakeword-driven speech is typically phrased as commands. To address these concerns, companies have sought to use wakeword design to encourage interactant politeness, through wakewords like "<Name>, please". But while this solution is intended to encourage people to use more "polite words", researchers have found that these wakeword designs actually decrease interactant politeness in text-based communication, and that other wakeword designs could better encourage politeness by priming users to use Indirect Speech Acts. Yet no previous research has directly compared these wakeword designs in in-person, voice-based human-robot interaction experiments, and previous in-person HRI studies could not effectively study carryover of wakeword-driven politeness and impoliteness into human-human interactions. In this work, we conceptually reproduced these previous studies (n=69) to assess how the wakewords "Hey <Name>", "Excuse me <Name>", and "<Name>, please" impact robot-directed and human-directed politeness. Our results demonstrate the ways that different types of linguistic priming interact in nuanced ways to induce different types of robot-directed and human-directed politeness.
2023 | Ruchen Wen et al. | Voice User Interface (VUI) Design | Home Voice Assistant Experience | HRI
Negotiation Behaviors of Owners and Bystanders over Data Practices of Smart Home Devices
Bystanders (i.e., visiting friends, visiting family members, or domestic workers) are often not aware of the data practices in other people's (i.e., owners') smart homes, exposing them to privacy risks. One solution to avoid violating bystanders' privacy is to increase the transparency of data practices and facilitate negotiation. In this paper, we designed a negotiation interaction study to explore the behaviors of owners (n1=238 participants assigned the owner role) and bystanders (n2=222 participants assigned the bystander role) when negotiating about smart home data practices with the corresponding bystander and owner digital agents. We also asked questions to explore factors that may potentially correlate with or affect the observed negotiation behaviors and outcomes. We found that owner and bystander participants differ in behaviors regarding the number of rounds of negotiation, final reached preferences, and total number of agreements. We analyzed the correlating factors and the predictability of reaching agreements.
2023 | Ahmed Alshehri et al. | Colorado School of Mines | Privacy by Design & User Control | Smart Home Privacy & Security | Social Robot Interaction | CHI
Quality Control in Crowdsourcing based on Fine-Grained Behavioral Features
Crowdsourcing is popular for large-scale data collection and labeling, but a major challenge is detecting low-quality submissions. Recent studies have demonstrated that behavioral features of workers are highly correlated with data quality and can be useful in quality control. However, these studies primarily leveraged coarsely extracted behavioral features, and did not further explore quality control at the fine-grained level, i.e., the annotation unit level. In this paper, we investigate the feasibility and benefits of using fine-grained behavioral features, which are the behavioral features finely extracted from a worker's individual interactions with each single unit in a subtask, for quality control in crowdsourcing. We design and implement a framework named Fine-grained Behavior-based Quality Control (FBQC) that specifically extracts fine-grained behavioral features to provide three quality control mechanisms: (1) quality prediction for objective tasks, (2) suspicious behavior detection for subjective tasks, and (3) unsupervised worker categorization. Using the FBQC framework, we conduct two real-world crowdsourcing experiments and demonstrate that using fine-grained behavioral features is feasible and beneficial in all three quality control mechanisms. Our work provides clues and implications for helping job requesters and crowdsourcing platforms further achieve better quality control.
2021 | Weiping Pei et al. | Crowds and Data Work | CSCW
Exploring the Role of Gender in Perceptions of Robotic Noncompliance
A key capability of morally competent robots is to reject or question potentially immoral human commands. However, robot rejections of inappropriate commands must be phrased with great care and tact. Previous research has shown that failure to calibrate the "face threat" in a robot's command rejection to the severity of the norm violation in the command can lead humans to perceive the robot as inappropriately harsh and can needlessly decrease robot likeability. However, it is well-established that gender plays a significant role in determining linguistic politeness norms and that people have a powerful natural tendency to gender robots. Yet the effect of robotic gender presentation on these noncompliance interactions is not well understood. We present an experiment that explores the effects of robot and human gender on perceptions of robots in noncompliance interactions, and find evidence of a complicated interplay between these gendered factors. Our results suggest that (1) it may be more favorable for a male robot to reject commands than for a female robot to do so, (2) it may be more favorable to reject commands given by a male human than by a female human, and (3) robots may be perceived more favorably when their gender matches that of human interactants and observers.
2020 | Ryan Blake Jackson et al. | Social Robot Interaction | Gender & Race Issues in HCI | Technology Ethics & Critical HCI | HRI