Students’ Privacy and Ethical Concerns of Using Social Virtual Worlds for Online Learning
Social virtual worlds provide students in remote online courses a unique approach to collaborative work and social interactions. However, the use of social virtual worlds in online learning raises concerns about what privacy and ethical issues students might encounter. To shed light on this topic, we examined college students’ (N = 68) ethical and privacy concerns about using social virtual worlds across multiple class sessions, courses, universities, virtual environments, and technologies. Students revealed (a) struggling to manage their identity between classmates and strangers, (b) discomfort over violations of their avatars’ personal space, (c) issues of vulnerable populations experiencing harassment, nudity, and loneliness, and (d) concern over companies tracking and storing user data. In addition, students described the technological affordances that mitigated their privacy and ethical concerns. We discuss the implications of our findings for the future of collaborative learning and the design of social virtual worlds.
2025 · Jakki O. Bailey et al. · Perspectives on VR · CSCW
Attorneys and AI: How Lawyers Use Artificial Intelligence and Analyze Its Impacts
AI systems are testing lawyers' professional ethics obligations of competence, confidentiality, and candor. In the legal profession, the widespread availability of AI systems presents opportunities, like improving the review of documents during the discovery stage of a lawsuit, and challenges, illustrated by the handful of high-profile incidents where lawyers submitted legal briefs in court citing and describing fictitious cases based on AI-generated output. We conducted interviews with 44 legal professionals in the U.S. to understand how attorneys are making sense of AI technology and the impacts these technologies are having on their profession, legal ethics, and legal institutions. We describe participants' experiences with AI in legal work; opportunities and barriers for AI adoption; as well as beliefs, hopes, and concerns lawyers have about potential AI-induced social change. This work extends our understanding of AI's impact on knowledge work.
2025 · Eddie A Gomez Schieber et al. · Fighting Misinformation, Building Believability · CSCW
Lessons from Real-World Settings: What Makes It Uniquely Difficult to Design Cognitive Training Programs for Children with Autism Spectrum Disorder and Other Developmental Disabilities
Despite the prevalence of autism spectrum disorder (ASD) and other developmental disabilities (DD) worldwide, children with ASD and DD face tremendous difficulties receiving support due to physical, financial, and psychological barriers to onsite health and education clinics. As a result, researchers and practitioners have designed software solutions aimed at providing accessible support to meet users’ needs. However, we have limited knowledge of whether these solutions indeed work in real-world settings. To address this gap, we conducted a case study on a cognitive training program called Dubupang, designed by Dubu Inc. From in-depth interviews with multiple stakeholders and field observations of children with ASD and DD, we identify Dubu Inc.’s internal development processes, the critical design issues that emerged through a series of field trials (e.g., instructional design and feedback), and the key implications (e.g., importance of caregivers’ strategic human interventions) for design that better supports both children with ASD and DD and their caregivers.
2025 · Hyanghee Park et al. · University of Illinois Urbana-Champaign, School of Information Sciences · Cognitive Impairment & Neurodiversity (Autism, ADHD, Dyslexia) · Special Education Technology · CHI
How the Role of Generative AI Shapes Perceptions of Value in Human-AI Collaborative Work
As artificial intelligence (AI) continues to transform the modern workplace, generative AI (GenAI) has emerged as a prominent tool capable of augmenting work processes. Defined by its ability to create or modify content, GenAI differs significantly from traditional machine learning models that classify, recognize, or predict patterns from existing data. This study explores the role of GenAI in shaping perceptions of AI’s contribution and how these perceptions influence both creators’ internal assessments of their work and their anticipation of external evaluators’ assessments. Our research develops and empirically tests a structural model through a between-subjects experiment, revealing that the role GenAI plays in the work process significantly impacts perceived enhancements in work quality and effort relative to human input. Additionally, we identify a critical trade-off between fostering worker assessments of creativity and managing perceived external assessments of the work’s value.
2025 · Aaron Schecter et al. · University of Georgia, Terry College of Business · Generative AI (Text, Image, Music, Video) · Human-LLM Collaboration · CHI
Lessons From Working in the Metaverse: Challenges, Choices, and Implications from a Case Study
Although the metaverse workspace has the potential to solve some of the drawbacks of remote work while maintaining its benefits, there are few real-world cases of adopting the metaverse as a legitimate workspace and fewer subsequent studies on how to design and operate the metaverse workspace. Thus, questions exist about the organizational or sociotechnical challenges that may emerge and how decisions are made when adopting and operating the metaverse workspace in a real-world setting. To answer such questions, we scrutinized the startup company Zigbang, which has completely replaced their physical office with Soma, a metaverse platform they developed where thousands of people work and other cooperative companies have moved in as tenants. By conducting field observations and semi-structured interviews with various workers and Zigbang’s stakeholders, we identify essential design challenges and decisions when adopting a metaverse workspace and highlight the key takeaways learned from the company’s trials and errors.
2024 · Hyanghee Park et al. · Seoul National University · Mixed Reality Workspaces · Remote Work Tools & Experience · CHI
The Promise and Peril of ChatGPT in Higher Education: Opportunities, Challenges, and Design Implications
A growing number of students in higher education are using ChatGPT for various educational purposes, ranging from seeking information to writing essays. Although many universities have officially banned the use of ChatGPT because of its potential harm and unintended consequences, it is still important to uncover how students leverage ChatGPT for learning, what challenges emerge, and how we can make better use of ChatGPT in higher education. Thus, we conducted focus group workshops and a series of participatory design sessions with thirty students who had actively interacted with ChatGPT for one semester in university and with five other stakeholders (e.g., professors, AI experts). Based on these sessions, this paper identifies real opportunities and challenges of utilizing and designing ChatGPT for higher education.
2024 · SoHyun Park et al. · Seoul National University · Human-LLM Collaboration · STEM Education & Science Communication · Special Education Technology · CHI
Impact of Model Interpretability and Outcome Feedback on Trust in AI
This paper bridges the gap in Human-Computer Interaction (HCI) research by comparatively assessing the effects of interpretability and outcome feedback on user trust and collaborative performance with AI. Through novel pre-registered experiments (N=1,511 total participants) using an interactive prediction task, we analyzed how interpretability and outcome feedback influence users’ task performance and trust in AI. The results counter the widespread belief that interpretability drives trust, showing that interpretability led to no robust improvements in trust and that outcome feedback had a significantly greater and more reliable effect. However, both factors had modest effects on participants’ task performance. These findings suggest that (1) interpretability may be less effective at increasing trust than factors like outcome feedback, and (2) augmenting human performance via AI systems may not be a simple matter of increasing trust in AI, as increased trust is not always associated with equally sizable performance improvements. Our exploratory analyses further delve into the mechanisms underlying this trust-performance paradox. These findings present an opportunity for research to focus not only on methods for generating interpretations but also on techniques that ensure interpretations impact trust and performance in practice.
2024 · Daehwan Ahn et al. · University of Georgia · Explainable AI (XAI) · AI-Assisted Decision-Making & Automation · CHI
Working with AI to persuade: Examining a large language model’s ability to generate pro-vaccination messages
Artificial Intelligence (AI) is a transformative force in communication and messaging strategy, with potential to disrupt traditional approaches. Large language models (LLMs), a form of AI, are capable of generating high-quality, humanlike text. We investigate the persuasive quality of AI-generated messages to understand how AI could impact public health messaging. Specifically, through a series of studies designed to characterize and evaluate generative AI in developing public health messages, we analyze COVID-19 pro-vaccination messages generated by GPT-3, a state-of-the-art instantiation of a large language model. Study 1 is a systematic evaluation of GPT-3’s ability to generate pro-vaccination messages. Study 2 then examined people’s perceptions of curated GPT-3-generated messages compared to human-authored messages released by the CDC, finding that GPT-3 messages were perceived as more effective, stronger arguments, and evoked more positive attitudes than CDC messages. Finally, Study 3 assessed the role of source labels on perceived quality, finding that while participants preferred AI-generated messages, they expressed a dispreference for messages that were labeled as AI-generated. The results suggest that with human supervision AI can be used to create effective public health messages, but that individuals prefer their public health messages to come from human institutions rather than AI sources. We propose best practices for assessing generative outputs of large language models in future social science research and the ways health professionals can use AI systems to augment public health messaging.
2023 · Elise Karinshak et al. · Health and AI · CSCW
Towards a Metaverse Workspace: Opportunities, Challenges, and Design Implications
Both enterprises and their employees have globally experienced remote work at an unprecedented scale since the outbreak of COVID-19. As the pandemic becomes less of a threat, some companies have called their employees back to a physical office, citing issues related to working remotely, but many employees have refused to return. Thus, working in the metaverse has gained much attention as an alternative that could compensate for the weaknesses of completely remote work or even offline work. However, we do not know yet what benefits and drawbacks the metaverse has as a legitimate workspace, because there are few real cases of 1) working in the metaverse and 2) working remotely at such an unprecedented scale. Thus, this paper aims to identify real challenges and opportunities the metaverse workspace presents when compared to remote work by conducting semi-structured interviews and participatory workshops with various employees and company stakeholders (e.g., HR managers and CEOs) who have experienced at least two of three work types: working in a physical office, remotely, or in the metaverse. Consequently, we identified 1) advantages and disadvantages of remote work and 2) opportunities and challenges of the metaverse. We further discuss design implications that may overcome the identified challenges of working in the metaverse.
2023 · Hyanghee Park et al. · Seoul National University · Mixed Reality Workspaces · CHI