Understanding Community ‘Likes’ and Clinical Perspective in Mental Health Discourse: Insights from YouTube Comments on College Students’ Mental Health
Anonymized, informal environments such as social media give individuals opportunities to exchange information naturally and to give or receive social support for stigmatized conditions, including mental health concerns. The sensitive nature of the content shared on these platforms requires automated moderation, which is often based on keyword detection. However, what counts as support versus risk in these contexts can be controversial and is highly situated. To investigate how we might better define supportive versus harmful content, we examined 49,006 YouTube comments on videos about college students’ mental health using statistical tests and qualitative content analysis with a clinical psychologist. We studied (1) the association between community ‘Likes’ and both self-disclosure and related linguistic features and (2) when comments exhibiting features associated with community ‘Likes’ reflect perceived support versus harm in the context of mental health. We discuss the situatedness of how community-generated comments containing self-disclosure can be either supportive or potentially harmful from a clinical perspective. This work highlights the need for a paradigmatic change in the rules and assumptions behind automated moderation of social media environments that support mental health.
Heejun Kim et al. CSCW 2025. Topics: Caring at a Distance.
Uncovering How Scatterplot Features Skew Visual Class Separation
Multi-class scatterplots are essential for visually comparing data, such as examining class distributions in dimensionality reduction and evaluating classification models. Visual class separation (VCS) measures quantify human perception but are largely derived from and evaluated with datasets reflecting limited types of scatterplot features (e.g., data distribution, similar class densities). Quantitatively identifying which scatterplot features are influential to VCS tasks can enable more robust guidance for future measures. We analyze the alignment between VCS measures and people's perceptions of class separation through a crowdsourced study using 70 scatterplot features relevant to class separation. To cover a wide range of scatterplot features, we generated a set of multi-class scatterplots from 6,947 real-world datasets. Our results highlight that multiple combinations of features are needed to best explain VCS. From our analysis, we develop a composite feature model that identifies key scatterplot features for measuring VCS task performance.
S. Sandra Bae et al., University of Colorado Boulder, ATLAS Institute. CHI 2025. Topics: Interactive Data Visualization; Visualization Perception & Cognition.
PD-Insighter: A Visual Analytics System to Monitor Daily Actions for Parkinson's Disease Treatment
People with Parkinson's Disease (PD) can slow the progression of their symptoms with physical therapy. However, clinicians lack insight into patients’ motor function during daily life, preventing them from tailoring treatment protocols to patient needs. This paper introduces PD-Insighter, a system for comprehensive analysis of a person's daily movements for clinical review and decision-making. PD-Insighter provides an overview dashboard for discovering motor patterns and identifying critical deficits during activities of daily living and an immersive replay for closely studying the patient's body movements with environmental context. Developed using an iterative design study methodology in consultation with clinicians, we found that PD-Insighter's ability to aggregate and display data with respect to time, actions, and local environment enabled clinicians to assess a person's overall functioning during daily life outside the clinic. PD-Insighter's design offers future guidance for generalized multiperspective body motion analytics, which may significantly improve clinical decision-making and slow the functional decline of PD and other medical conditions.
Jade Kandel et al., University of North Carolina at Chapel Hill. CHI 2024. Topics: Human Pose & Activity Recognition; Medical & Scientific Data Visualization; Telemedicine & Remote Patient Monitoring.
Cieran: Designing Sequential Colormaps via In-Situ Active Preference Learning
Quality colormaps can help communicate important data patterns. However, finding an aesthetically pleasing colormap that looks "just right" for a given scenario requires significant design and technical expertise. We introduce Cieran, a tool that allows any data analyst to rapidly find quality colormaps while designing charts within Jupyter Notebooks. Our system employs an active preference learning paradigm to rank expert-designed colormaps and create new ones from pairwise comparisons, allowing analysts who are novices in color design to tailor colormaps to their data context. We accomplish this by treating colormap design as a path planning problem through the CIELAB colorspace with a context-specific reward model. In an evaluation with twelve scientists, we found that Cieran effectively modeled user preferences to rank colormaps and leveraged this model to create new quality designs. Our work shows the potential of active preference learning for supporting efficient visualization design optimization.
Matt-Heun Hong et al., University of North Carolina at Chapel Hill. CHI 2024. Topics: Interactive Data Visualization; Visualization Perception & Cognition.
Do You See What I See? A Qualitative Study Eliciting High-Level Visualization Comprehension
Designers often create visualizations to achieve specific high-level analytical or communication goals. These goals require people to naturally extract complex, contextualized, and interconnected patterns in data. While limited prior work has studied general high-level interpretation, prevailing perceptual studies of visualization effectiveness primarily focus on isolated, predefined, low-level tasks, such as estimating statistical quantities. This study more holistically explores visualization interpretation to examine the alignment between designers' communicative goals and what their audience sees in a visualization, which we refer to as their comprehension. We conducted a qualitative study on three types of visualizations---line graphs, bar graphs, and scatterplots---to investigate the high-level patterns people naturally draw from a visualization. Participants described a series of graphs using natural language and think-aloud protocols. We found that the statistics people effectively estimate from visualizations in classical graphical perception studies may differ from the patterns people intuitively comprehend in a visualization, and that comprehension varies with a range of factors, including graph complexity and data distribution. Specifically, 1) a visualization's stated objective often does not align with people's comprehension, 2) results from traditional experiments may not predict the knowledge people build with a graph, and 3) chart type alone is insufficient to predict the information people extract from a graph. Our study confirms the importance of defining visualization effectiveness from multiple perspectives to assess and inform visualization practices.
Ghulam Jilani Quadri et al., University of North Carolina. CHI 2024. Topics: Interactive Data Visualization; Visualization Perception & Cognition.
Assessing User Trust in Active Learning Systems: Insights from Query Policy and Uncertainty Visualization
Active learning systems have become increasingly popular for various applications in machine learning (ML), including medical imaging, environmental monitoring, and geospatial analysis. These systems rely on inputs dynamically queried from people to enhance classification. Ensuring appropriate analyst trust in these systems remains a significant obstacle, as analysts may over-rely or under-rely on the system. Common active learning (AL) strategies enhance classification models by asking an analyst to provide labels for data points with the highest degree of uncertainty. However, model-centric policies do not consider potential priming effects on the analyst and how they will affect people's trust in the system post-training. In this paper, we present an empirical study assessing how AL query policies and visualizations that enhance transparency in a classifier’s certainty influence trust in automated image classifiers. We found that query policy may significantly influence an analyst’s perception of the system’s capabilities, while the level of visual transparency into classifier certainty may influence an analyst’s ability to perform the classification task. Our study informs the design of interactive labeling systems to help mitigate the effects of over-reliance.
Ian Thomas et al. IUI 2024. Topics: Explainable AI (XAI); Interactive Data Visualization; Uncertainty Visualization.
User Perspectives on Ethical Challenges in Human-AI Co-Creativity: A Design Fiction Study
In human-AI co-creation, AI not only categorizes, evaluates, and interprets data but also generates new content and interacts with humans. As co-creative AI is a form of intelligent technology that directly involves humans, it is critical to anticipate and address ethical issues during all design stages. The open-ended nature of human-AI interactions in co-creation poses many challenges for designing ethical co-creative AI systems. Researchers have been exploring ethical issues associated with autonomous AI in recent years, but ethics in human-AI co-creativity is a relatively new research area. In order to design human-centered ethical AI, it is important to understand the perspectives, expectations, and ethical concerns of potential users. In this paper, we present a study with 18 participants that explores ethical dilemmas and challenges in human-AI co-creation from the perspective of potential users using design fiction (DF). DF is a speculative research method that depicts a new concept or technology through stories as an intangible prototype. We present the findings from the study as potential users' concerns, stances, and expectations around ethical challenges in human-AI co-creativity to devise guidelines for designing human-centered ethical AI partners for human-AI co-creation.
Jeba Rezwana et al. C&C 2023. Topics: Generative AI (Text, Image, Music, Video); AI Ethics, Fairness & Accountability; Design Fiction.
Measuring Categorical Perception in Color-Coded Scatterplots
Scatterplots commonly use color to encode categorical data. However, as datasets increase in size and complexity, the efficacy of these channels may vary. Designers lack insight into how robust different design choices are to variations in category numbers. This paper presents a crowdsourced experiment measuring how the number of categories and choice of color encodings used in multiclass scatterplots influences the viewers’ abilities to analyze data across classes. Participants estimated relative means in a series of scatterplots with 2 to 10 categories encoded using ten color palettes drawn from popular design tools. Our results show that the number of categories and color discriminability within a color palette notably impact people's perception of categorical data in scatterplots and that the judgments become harder as the number of categories grows. We examine existing palette design heuristics in light of our results to help designers make robust color choices informed by the parameters of their data.
Chin Tseng et al., University of North Carolina at Chapel Hill. CHI 2023. Topics: Interactive Data Visualization; Geospatial & Map Visualization; Visualization Perception & Cognition.
The Effects of System Initiative during Conversational Collaborative Search
Our research in this paper lies at the intersection of collaborative and conversational search. We report on a Wizard of Oz lab study in which 27 pairs of participants collaborated on search tasks over the Slack messaging platform. To complete tasks, pairs of collaborators interacted with a so-called searchbot with conversational capabilities. The role of the searchbot was played by a reference librarian. It is widely accepted that conversational search systems should be able to engage in mixed-initiative interaction: take and relinquish control of a multi-agent conversation as appropriate. Research in discourse analysis differentiates between dialog- and task-level initiative. Taking dialog-level initiative involves leading a conversation for the sole purpose of establishing mutual belief between agents. Conversely, taking task-level initiative involves leading a conversation with the intent to influence the goals of the other agent(s). Participants in our study experienced three searchbot conditions, which varied based on the level of initiative the human searchbot was able to take: (1) no initiative, (2) only dialog-level initiative, and (3) both dialog- and task-level initiative. We investigate the effects of the searchbot condition on six different types of outcomes: (RQ1) perceptions of the searchbot's utility, (RQ2) perceptions of workload, (RQ3) perceptions of the collaboration, (RQ4) patterns of communication and collaboration, and perceived (RQ5) benefits and (RQ6) challenges from engaging with the searchbot.
sandeep avula et al. CSCW 2022. Topics: Human-AI collaboration.
Scholastic: Graphical Human-AI Collaboration for Inductive and Interpretive Text Analysis
Interpretive scholars generate knowledge from text corpora by manually sampling documents, applying codes, and refining and collating codes into categories until meaningful themes emerge. Given a large corpus, machine learning could help scale this data sampling and analysis, but prior research shows that experts are generally concerned about algorithms potentially disrupting or driving interpretive scholarship. We take a human-centered design approach to addressing concerns around machine-assisted interpretive research to build Scholastic, which incorporates a machine-in-the-loop clustering algorithm to scaffold interpretive text analysis. As a scholar applies codes to documents and refines them, the resulting coding schema serves as structured metadata which constrains hierarchical document and word clusters inferred from the corpus. Interactive visualizations of these clusters can help scholars strategically sample documents further toward insights. Scholastic demonstrates how human-centered algorithm design and visualizations employing familiar metaphors can support inductive and interpretive research methodologies through interactive topic modeling and document clustering.
Matt-Heun Hong et al. UIST 2022. Topics: Explainable AI (XAI); Interactive Data Visualization; Data Storytelling.
Understanding User Perceptions, Collaborative Experience and User Engagement in Different Human-AI Interaction Designs for Co-Creative Systems
Human-AI co-creativity involves humans and AI collaborating on a shared creative product as partners. In a creative collaboration, communication is an essential component among collaborators. In many existing co-creative systems, users can communicate with the AI, usually using buttons or sliders. Typically, the AI in co-creative systems cannot communicate back to humans, limiting their potential to be perceived as partners rather than just a tool. This paper presents a study with 38 participants to explore the impact of two interaction designs, with and without AI-to-human communication, on user engagement, collaborative experience and user perception of a co-creative AI. The study involves user interaction with two prototypes of a co-creative system that contributes sketches as design inspirations during a design task. The results show improved collaborative experience and user engagement with the system incorporating AI-to-human communication. Users perceive co-creative AI as more reliable, personal, and intelligent when the AI communicates to users. The findings can be used to design effective co-creative systems, and the insights can be transferred to other fields involving human-AI interaction and collaboration.
Jeba Rezwana et al. C&C 2022. Topics: Generative AI (Text, Image, Music, Video); AI-Assisted Creative Writing; Creative Collaboration & Feedback Systems.
A Feminist Utopian Perspective on the Practice and Promise of Making
While makerspaces are often discussed in terms of a utopian vision of democratization and empowerment, many have shown how these narratives are problematic. There remains optimism for the future of makerspaces, but there is a gap in knowledge of how to articulate their promise and how to pursue it. We present a reflexive and critical reflection of our efforts as leaders of a university makerspace to articulate a vision, as well as our experience running a maker fashion show that aimed to address some specific critiques. We analyze interviews of participants from the fashion show using feminist utopianism as a lens to help us understand an alternate utopian narrative for making. Our contributions include insights about how a particular making context embodies feminist utopianism, insights about the applicability of feminist utopianism to makerspace research and visioning efforts, and a discussion about how our results can guide makerspace leaders and HCI researchers.
Johanna Okerlund et al., UNC Charlotte. CHI 2021. Topics: Makerspace Culture; Gender & Race Issues in HCI.
Smart Home Beyond the Home: A Case for Community-Based Access Control
As smart devices are becoming commonplace in homes, we need to explore the needs of not just the residents of the home, but also of secondary stakeholders who may be granted access to these devices from outside of the home. We conducted a mixed methods study, which included a survey of 163 smart home device owners and a follow-up interview with 13 individuals who currently share their smart home devices with others outside of their home. Nearly half (47.8%) of our survey participants shared at least one smart home device with someone that did not live with them. Individuals sought greater safety and security by providing remote access to trusted family members or friends. By understanding users' perspectives about privacy and trust in relation to sharing smart home devices beyond the home, we build a case for community-based access control of smart home devices in the Internet of Things.
Madiha Tabassum et al., University of North Carolina at Charlotte. CHI 2020. Topics: Smart Home Interaction Design; Home Energy Management; Smart Home Privacy & Security.
NVGaze: An Anatomically-Informed Dataset for Low-Latency, Near-Eye Gaze Estimation
Quality, diversity, and size of training data are critical factors for learning-based gaze estimators. We create two datasets satisfying these criteria for near-eye gaze estimation under infrared illumination: a synthetic dataset using anatomically-informed eye and face models with variations in face shape, gaze direction, pupil and iris, skin tone, and external conditions (2M images at 1280×960), and a real-world dataset collected with 35 subjects (2.5M images at 640×480). Using these datasets we train neural networks performing with sub-millisecond latency. Our gaze estimation network achieves 2.06(±0.44)° of accuracy across a wide 30°×40° field of view on real subjects excluded from training and 0.5° best-case accuracy (across the same FOV) when explicitly trained for one real subject. We also train a pupil localization network which achieves higher robustness than previous methods.
Joohwan Kim et al., NVIDIA. CHI 2019. Topics: Eye Tracking & Gaze Interaction; Human Pose & Activity Recognition.
Gig Platforms, Tensions, Alliances and Ecosystems: An Actor-Network Perspective
The algorithm-based management exercised by digital gig platforms contributes to information and power asymmetries that are pervasive in the gig economy. Although the design of these platforms may foster unbalanced relationships, in this paper, we outline how freelancers and clients on the gig platform Upwork can leverage a network of alliances with external digital platforms to repossess their displaced agency within the gig economy. Building on 39 interviews with Upwork freelancers and clients, we found a dynamic ecosystem of digital platforms that facilitate gig work through and around the Upwork platform. We use actor-network theory to: 1) delineate Upwork’s strategy to establish a comprehensive and isolated platform within the gig economy, 2) track human and nonhuman alliances that run counter to Upwork’s system design and control mechanisms, and 3) capture the existence of a larger ecosystem of external digital platforms that undergird online freelancing. This work explicates the tensions that Upwork users face, and also illustrates the multiplicity of actors that create alliances to work with, through, around, and against the platform’s algorithmic management.
Eliscia Kinder et al. CSCW 2019. Topics: On-demand economy.
Computer-Human Interaction Mentoring (CHIMe) 2018
HCI is a field where diversity should be considered in the systems we build and study. As such, it is important to cultivate a growing group of diverse researchers with a range of experiences to contribute to difficult design, research, and computational problems. Therefore, the CHIMe organizers invite graduate and undergraduate students to attend. CHIMe intends to provide a welcoming environment for mentoring and collaboration amongst peers, faculty, and industry experts in HCI.
Robin Brewer et al., University of Michigan. CHI 2018. Topics: Participatory Design; Mental Health Apps & Online Support Communities; User Research Methods (Interviews, Surveys, Observation).
Surprise Me If You Can: Serendipity in Health Information
Our natural tendency to be curious is increasingly important now that we are exposed to vast amounts of information. We often cope with this overload by focusing on the familiar: information that matches our expectations. In this paper we present a framework for interactive serendipitous information discovery based on a computational model of surprise. This framework delivers information that users were not actively looking for, but which will be valuable to their unexpressed needs. We hypothesize that users will be surprised when presented with information that violates the expectations predicted by our model of them. This surprise model is balanced by a value component which ensures that the information is relevant to the user. Within this framework we have implemented two surprise models, one based on association mining and the other on topic modeling approaches. We evaluate these two models with thirty users in the context of online health news recommendation. Positive user feedback was obtained for both of the computational models of surprise compared to a baseline random method. This research contributes to the understanding of serendipity and how to “engineer” serendipity that is favored by users.
Xi Niu et al., University of North Carolina at Charlotte. CHI 2018. Topics: Human-LLM Collaboration; Explainable AI (XAI); Recommender System UX.