FairPlay: A Collaborative Approach to Mitigate Bias in Datasets for Improved AI FairnessThe issue of fairness in decision-making is a critical one, especially given the variety of stakeholder demands for differing and mutually incompatible versions of fairness. Treating fairness as the outcome of strategic interaction among these perspectives provides an alternative to enforcing a singular standard of fairness. We present a web-based software application, FairPlay, that enables multiple stakeholders to debias datasets collaboratively. With FairPlay, users can negotiate and arrive at a mutually acceptable outcome without a universally agreed-upon theory of fairness. In the absence of such a tool, reaching a consensus would be highly challenging due to the lack of a systematic negotiation process and the inability to modify and observe changes. We have conducted user studies that demonstrate the success of FairPlay, with users reaching consensus within about five rounds of gameplay, illustrating the application's potential for enhancing fairness in AI systems.2025TBTina Behzad et al.Facilitating Equity and Fairness in TechCSCW
Enabling Auto-Correction on Soft Braille KeyboardA soft Braille keyboard is a graphical representation of the Braille writing system on smartphones. It provides an essential text input method for visually impaired individuals, but accuracy and efficiency remain significant challenges. We present an intelligent Braille keyboard with auto-correction ability, which uses optimal transportation theory to estimate the distances between touch input and Braille patterns, and combines it with a language model to estimate the probability of entering words. The proposed system was evaluated through both simulations and user studies. In a touch interaction simulation on an Android phone and an iPhone, our intelligent Braille keyboard demonstrated superior error correction performance compared to the Android Braille keyboard with proofreading suggestions and the iPhone Braille keyboard with spelling suggestions. It reduced the error rate from 55.81% on Android and 57.13% on iPhone to 19.80% under high typing noise. Furthermore, in a user study of 12 participants who are legally blind, the intelligent Braille keyboard reduced word error rate (WER) by 59.5% (42.53% to 17.28%) with a slight drop of 0.74 words per minute (WPM), compared to a conventional Braille keyboard without auto-correction. These findings suggest that our approach has the potential to greatly improve the typing experience for Braille users on touchscreen devices.2025DZDan Zhang et al.Voice AccessibilityVisual Impairment Technologies (Screen Readers, Tactile Graphics, Braille)Motor Impairment Assistive Input TechnologiesUIST
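The entry above combines an optimal-transport distance over touch input with a language model. A minimal sketch of that scoring idea, using hypothetical 1-D Braille dot positions and a toy unigram language model (not the paper's actual system):

```python
import math

def ot_distance_1d(points_a, points_b):
    # 1-D optimal transport (earth mover's) distance between two
    # equal-sized point sets: the optimal matching pairs sorted samples.
    a, b = sorted(points_a), sorted(points_b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def best_word(touches, templates, unigram_logp, weight=2.0):
    # Score each same-length candidate word by combining the negated
    # transport cost of its per-letter touches with a unigram LM.
    def score(word):
        cost = sum(ot_distance_1d(t, templates[ch])
                   for t, ch in zip(touches, word))
        return unigram_logp[word] - weight * cost
    candidates = [w for w in unigram_logp if len(w) == len(touches)]
    return max(candidates, key=score)

# Hypothetical 1-D dot positions for three Braille cells
templates = {"a": [0.2, 0.2], "c": [0.2, 0.4], "e": [0.2, 0.6]}
unigram_logp = {"ace": math.log(0.6), "cee": math.log(0.1), "eec": math.log(0.3)}
touches = [[0.20, 0.21], [0.22, 0.41], [0.20, 0.59]]  # noisy "a", "c", "e"
print(best_word(touches, templates, unigram_logp))  # "ace"
```

The weight balancing transport cost against language-model probability would be tuned on typing data in practice.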
MACEDON: Supporting Programmers with Real-Time Multi-Dimensional Code Evaluation and OptimizationRecent advancements in Large Language Models (LLMs) have led programmers to increasingly turn to them for code optimization and evaluation. However, programmers need to frequently switch between code evaluation and prompt authoring because they lack an understanding of the underlying code. Yet, current LLM-driven code assistants do not provide sufficient transparency to help programmers track their code based on the intended evaluation metrics, a crucial step before aligning these evaluations with their optimization goals. To address this gap, we adopted an iterative, user-centered design process by first conducting a formative study and a large-scale code analysis. Based on the findings, we then developed MACEDON, a system that supports multi-dimensional code evaluation in real time, direct code segment optimization, as well as shareable report displays. We evaluated MACEDON through a controlled lab study with 24 novice programmers and two real-world case studies. The results show that MACEDON significantly improved users’ ability to identify code issues, apply effective optimizations, and understand their code’s evolving state. Our findings suggest that multi-dimensional evaluation, combined with interactive, segment-specific guidance, empowers users to perform more structured and confident code optimization. The code for this paper can be found at <link-TBD>2025XLXuye Liu et al.360° Video & Panoramic ContentGenerative AI (Text, Image, Music, Video)Human-LLM CollaborationUIST
Tap&Say: Touch Location-Informed Large Language Model for Multimodal Text Correction on SmartphonesWhile voice input offers a convenient alternative to traditional text editing on mobile devices, practical implementations face two key challenges: 1) reliably distinguishing between editing commands and content dictation, and 2) effortlessly pinpointing the intended edit location. We propose Tap&Say, a novel multimodal system that combines touch interactions with Large Language Models (LLMs) for accurate text correction. By tapping near an error, users signal their edit intent and location, addressing both challenges. Then, the user speaks the correction text. Tap&Say utilizes the touch location, voice input, and existing text to generate contextually relevant correction suggestions. We propose a novel touch location-informed attention layer that integrates the tap location into the LLM's attention mechanism, enabling it to utilize the tap location for text correction. We fine-tuned the touch location-informed LLM on synthetic touch locations and correction commands, achieving significantly higher correction accuracy than the state-of-the-art method VT. A 16-person user study demonstrated that Tap&Say outperforms VT with 16.4% shorter task completion time and 47.5% fewer keyboard clicks and is preferred by users.2025MZMaozheng Zhao et al.Stony Brook University, Department of Computer ScienceHuman-LLM CollaborationCHI
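The touch location-informed attention layer described above can be caricatured in a few lines: bias attention scores by distance to the tap before normalizing. This is a generic sketch of the idea, not the paper's exact layer; all names and values are hypothetical.

```python
import math

def attend_with_tap_bias(scores, token_positions, tap_pos, strength=1.0):
    # Subtract a distance penalty from each raw attention score so
    # tokens near the tap location receive more attention mass,
    # then apply a numerically stable softmax.
    biased = [s - strength * abs(p - tap_pos)
              for s, p in zip(scores, token_positions)]
    m = max(biased)
    exps = [math.exp(b - m) for b in biased]
    z = sum(exps)
    return [e / z for e in exps]

# Equal raw scores: attention now concentrates on the tapped token.
weights = attend_with_tap_bias([0.0, 0.0, 0.0], [0.0, 1.0, 2.0], tap_pos=2.0)
print(weights.index(max(weights)))  # 2
```

In the actual model, the bias would be learned and injected inside the transformer's attention computation rather than applied post hoc.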
Influencer: Empowering Everyday Users in Creating Promotional Posts via AI-infused Exploration and CustomizationCreating promotional posts on social platforms enables everyday users to disseminate their creative outcomes, engage in community exchanges, or generate additional income from micro-businesses. However, crafting eye-catching posts with appealing images and effective captions can be challenging and time-consuming for everyday users since they are mostly design novices. We propose Influencer, an interactive tool that helps novice creators quickly generate ideas and create high-quality promotional post designs through AI. Influencer offers a multi-dimensional recommendation system for ideation through example-based image and caption suggestions. Further, Influencer implements a holistic promotional post design system supporting context-aware exploration considering brand messages and user-specified design constraints, flexible fusion of content, and a mind-map-like layout for idea tracking. Our user study, comparing the system with industry-standard tools, along with two real-life case studies, indicates that Influencer is effective in assisting design novices to generate ideas as well as creative and diverse promotional posts with user-friendly interaction.2025XLXuye Liu et al.University of WaterlooGenerative AI (Text, Image, Music, Video)Recommender System UXCHI
SeQR: A User-Friendly and Secure-by-Design Configurator for Enterprise Wi-FiA classic problem in enterprise Wi-Fi is client-side misconfiguration, which enables credential theft via “Evil Twin” (ET) attacks. To mitigate this, we design, develop, and evaluate a new configurator, SeQR, which allows users to effortlessly and securely set up an enterprise Wi-Fi connection. Utilizing existing authenticated channels, SeQR fully automates the client-side enterprise Wi-Fi configuration process with a simple scan, leaving no room for misconfigurations. Specifically, SeQR thwarts ET by making it impossible for users to opt out of the security-critical certificate validation. We evaluate the efficacy of SeQR on two fronts. First, we implement a prototype of SeQR in Android, and test its functionality and runtime performance. Next, we compare the usability of SeQR against two existing Wi-Fi configuration interfaces of Android in an in-person user study (n=41) with real devices. Our evaluation shows that SeQR achieves noticeable usability improvements over existing designs, and prevents users from misconfiguring their connections.2025SHS Mahmudul Hasan et al.Syracuse UniversityPasswords & AuthenticationPrivacy Perception & Decision-MakingIoT Device PrivacyCHI
SpellRing: Recognizing Continuous Fingerspelling in American Sign Language using a RingFingerspelling is a critical part of American Sign Language (ASL) recognition and has become an accessible optional text entry method for Deaf and Hard of Hearing (DHH) individuals. In this paper, we introduce SpellRing, a single smart ring worn on the thumb that recognizes words continuously fingerspelled in ASL. SpellRing uses active acoustic sensing (via a microphone and speaker) and an inertial measurement unit (IMU) to track handshape and movement, which are processed through a deep learning algorithm using Connectionist Temporal Classification (CTC) loss. We evaluated the system with 20 ASL signers (13 fluent and 7 learners), using the MacKenzie-Soukoreff Phrase Set of 1,164 words and 100 phrases. Offline evaluation yielded top-1 and top-5 word recognition accuracies of 82.45% (±9.67%) and 92.42% (±5.70%), respectively. In real-time, the system achieved a word error rate (WER) of 0.099 (±0.039) on the phrases. Based on these results, we discuss key lessons and design implications for future minimally obtrusive ASL recognition wearables.2025HLHyunchul Lim et al.Cornell, Computing and Information ScienceFoot & Wrist InteractionVoice AccessibilityMotor Impairment Assistive Input TechnologiesCHI
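CTC-trained recognizers like the one above emit a label per input frame and then collapse the sequence into a word. The collapse rule itself is simple; here is a sketch with hypothetical frame labels, using "_" as the CTC blank:

```python
def ctc_collapse(frame_labels, blank="_"):
    # Greedy CTC decoding rule: merge consecutive repeated labels,
    # then drop blanks. Blanks separate genuine double letters.
    out, prev = [], None
    for label in frame_labels:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return "".join(out)

print(ctc_collapse("hh_eell_lloo"))  # "hello"
```

The full system would pick frame labels via beam search over the network's per-frame probabilities rather than greedily.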
LLM Powered Text Entry Decoding and Flexible Typing on SmartphonesLarge language models (LLMs) have shown exceptional performance in various language-related tasks. However, their application in keyboard decoding, which involves converting input signals (e.g. taps and gestures) into text, remains underexplored. This paper presents a fine-tuned FLAN-T5 model for decoding. It achieves 93.1% top-1 accuracy on user-drawn gestures, outperforming the widely adopted SHARK2 decoder, and 95.4% on real-world tap typing data. In particular, our decoder supports Flexible Typing, allowing users to enter a word with taps, gestures, multi-stroke gestures, and tap-gesture combinations. User study results show that Flexible Typing is beneficial and well-received by participants, where 35.9% of words were entered using word gestures, 29.0% with taps, 6.1% with multi-stroke gestures, and the remaining 29.0% using tap-gestures. Our investigation suggests that the LLM-based decoder improves decoding accuracy over existing word gesture decoders while enabling the Flexible Typing method, which enhances the overall typing experience and accommodates diverse user preferences.2025YMYan Ma et al.Stony Brook University, Computer Science DepartmentEV Charging & Eco-Driving InterfacesHuman-LLM CollaborationCHI
BIT: Battery-free, IC-less and Wireless Smart Textile Interface and Sensing SystemThe development of smart textile interfaces is hindered by the inclusion of rigid hardware components and batteries within the fabric, which pose challenges in terms of manufacturability, usability, and environmental concerns related to electronic waste. To mitigate these issues, we propose a smart textile interface and its wireless sensing system to eliminate the need for ICs, batteries, and connectors embedded into textiles. Our technique builds on integrating multi-resonant circuits into smart textile interfaces and utilizing near-field electromagnetic coupling between two coils to facilitate wireless power transfer and data acquisition from the smart textile interface. A key aspect of our system is the development of a mathematical model that accurately represents the equivalent circuit of the sensing system. Using this model, we developed a novel algorithm to accurately estimate sensor signals based on changes in system impedance. Through simulation-based experiments and a user study, we demonstrate that our technique effectively supports multiple textile sensors of various types.2025WXWeiye Xu et al.Tsinghua UniversityElectronic Textiles (E-textiles)Shape-Changing Materials & 4D PrintingCHI
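The impedance-based sensing above can be illustrated with the textbook model of an inductively coupled resonant tag (a standard near-field formula, not the paper's exact equivalent circuit; all component values are hypothetical):

```python
import math

def input_impedance(freq_hz, L1, R1, M, L2, C2, r_sensor):
    # Reader coil sees Z_in = R1 + jwL1 + (wM)^2 / Z_tag, where the
    # battery-free tag is a series L2-C2-R circuit and the resistance
    # r_sensor encodes the textile sensor's state.
    w = 2 * math.pi * freq_hz
    z_tag = complex(r_sensor, w * L2 - 1.0 / (w * C2))
    return complex(R1, w * L1) + (w * M) ** 2 / z_tag

# At the tag's resonant frequency the reflected term is purely
# resistive, so a change in the sensor shows up directly in Z_in.
f0 = 1.0 / (2 * math.pi * math.sqrt(1e-6 * 1e-9))
z_lo = input_impedance(f0, 2e-6, 1.0, 0.1e-6, 1e-6, 1e-9, r_sensor=10.0)
z_hi = input_impedance(f0, 2e-6, 1.0, 0.1e-6, 1e-6, 1e-9, r_sensor=100.0)
print(z_lo.real > z_hi.real)  # True: higher sensor resistance reflects less
```

Multi-resonant interfaces extend this by giving each sensor its own resonant frequency, so one impedance sweep reads several sensors.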
Belief Miner: A Methodology for Discovering Causal Beliefs and Causal Illusions from General PopulationsCausal belief is a cognitive practice that humans apply every day to reason about cause-and-effect relations between factors, phenomena, or events. Like optical illusions, humans are prone to drawing causal relations between events that are only coincidental (i.e., causal illusions). Researchers in domains such as cognitive psychology and healthcare often use logistically expensive experiments to understand causal beliefs and illusions. In this paper, we propose Belief Miner, a crowdsourcing method for evaluating people’s causal beliefs and illusions. Our method uses the (dis)similarities between the causal relations collected from the crowds and experts to surface the causal beliefs and illusions. Through an iterative design process, we developed a web-based interface for collecting causal relations from a target population. We then conducted a crowdsourced experiment with 101 workers on Amazon Mechanical Turk and Prolific using this interface and analyzed the collected data with Belief Miner. We discovered a variety of causal beliefs and potential illusions, and we report the design implications for future research.2024SSShahreen Salim et al.Session 3a: Unpacking User Interpretation and System DesignCSCW
Model Touch Pointing and Detect Parkinson's Disease via a Mobile GameLing et al. develop a mobile game-based approach to modeling touch pointing: by analyzing the characteristics of players' touch behavior during gameplay, the method enables early assistive detection of Parkinson's disease, offering a new avenue for disease screening.2024KLKaiyan Ling et al.Motor Impairment Assistive Input TechnologiesSerious & Functional GamesUbiComp
Accessible Gesture Typing on Smartphones for People with Low VisionWhile gesture typing is widely adopted on touchscreen keyboards, its support for low vision users is limited. We have designed and implemented two keyboard prototypes, layout-magnified and key-magnified keyboards, to enable gesture typing for people with low vision. Both keyboards facilitate uninterrupted access to all keys while the screen magnifier is active, allowing people with low vision to input text with one continuous stroke. Furthermore, we have created a kinematics-based decoding algorithm to accommodate the typing behavior of people with low vision. This algorithm can decode the gesture input even if the gesture trace deviates from a pre-defined word template, and the starting position of the gesture is far from the starting letter of the target word. Our user study showed that the key-magnified keyboard achieved 5.28 words per minute, 27.5% faster than a conventional gesture typing keyboard with voice feedback.2024DZDan Zhang et al.Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille)Motor Impairment Assistive Input TechnologiesUIST
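A decoder that tolerates traces deviating from a pre-defined word template can be sketched with a generic elastic matcher such as dynamic time warping (this is a stand-in illustration, not the paper's kinematics-based algorithm; key paths and the trace are hypothetical):

```python
import math

def dtw_cost(trace, template):
    # Dynamic time warping between a drawn gesture trace and a word's
    # key-centre template path: local deviations raise the cost only a
    # little, so the nearest word still wins.
    INF = float("inf")
    n, m = len(trace), len(template)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = math.dist(trace[i - 1], template[j - 1])
            D[i][j] = step + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Hypothetical 2-D key-centre paths for two candidate words
path_cat = [(1.0, 1.0), (3.0, 2.0), (5.0, 1.0)]
path_car = [(1.0, 1.0), (3.0, 2.0), (4.0, 0.0)]
noisy = [(1.1, 0.9), (3.2, 2.1), (4.9, 1.1)]  # a deviating trace of "cat"
print(dtw_cost(noisy, path_cat) < dtw_cost(noisy, path_car))  # True
```

The paper's kinematics-based decoder additionally handles starting positions far from the first letter, which plain DTW would penalize.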
Hand Gesture Recognition for Blind Users by Tracking 3D Gesture TrajectoryHand gestures provide an alternate interaction modality for blind users and can be supported using commodity smartwatches without requiring specialized sensors. The enabling technology is an accurate gesture recognition algorithm, but almost all algorithms are designed for sighted users. Our study shows that blind user gestures are considerably different from sighted users, rendering current recognition algorithms unsuitable. Blind user gestures have high inter-user variance, making learning gesture patterns difficult without large-scale training data. Instead, we design a gesture recognition algorithm that works on a 3D representation of the gesture trajectory, capturing motion in free space. Our insight is to extract a micro-movement in the gesture that is user-invariant and use this micro-movement for gesture classification. To this end, we develop an ensemble classifier that combines image classification with geometric properties of the gesture. Our evaluation demonstrates a 92% classification accuracy, surpassing the next best state-of-the-art algorithm, which has an accuracy of 82%.2024PKPrerna Khanna et al.Stony Brook UniversityHand Gesture RecognitionVisual Impairment Technologies (Screen Readers, Tactile Graphics, Braille)CHI
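Classification from geometric properties of a trajectory, as used in the ensemble above, can be sketched with two simple user-invariant features (this toy nearest-centroid version stands in for the paper's richer feature set and classifier; gesture names and centroids are hypothetical):

```python
import math

def _dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def geometry(traj):
    # Two rough, user-invariant properties of a 3-D gesture trajectory:
    # straightness (chord / path length) and total turning angle.
    path = sum(_dist(traj[i], traj[i + 1]) for i in range(len(traj) - 1))
    chord = _dist(traj[0], traj[-1])
    turn = 0.0
    for i in range(1, len(traj) - 1):
        v1 = [b - a for a, b in zip(traj[i - 1], traj[i])]
        v2 = [b - a for a, b in zip(traj[i], traj[i + 1])]
        n1 = math.sqrt(sum(a * a for a in v1))
        n2 = math.sqrt(sum(a * a for a in v2))
        if n1 and n2:
            cos = sum(a * b for a, b in zip(v1, v2)) / (n1 * n2)
            turn += math.acos(max(-1.0, min(1.0, cos)))
    return (chord / path if path else 1.0, turn)

def classify(traj, centroids):
    # Nearest-centroid classification over the geometric features.
    f = geometry(traj)
    return min(centroids, key=lambda k: _dist(f, centroids[k]))

swipe = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (3.0, 0.0, 0.0)]
centroids = {"swipe": (1.0, 0.0), "circle": (0.1, 2 * math.pi)}
print(classify(swipe, centroids))  # "swipe"
```

The paper's ensemble pairs such geometric cues with an image classifier over the rendered trajectory, which this sketch omits.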
Teaching artificial intelligence in extracurricular contexts through narrative-based learnersourcingCollaborative technology provides powerful opportunities to engage young people in active learning experiences that are inclusive, immersive, and personally meaningful. In particular, interactive narratives have proven to be effective scaffolds for learning, and learnersourcing has emerged as a promising student-driven approach to enable personalized education and quality control at scale. We introduce the first synthesis of these ideas in the context of teaching artificial intelligence (AI), which is now seen as a critical component of 21st-century education. Specifically, we explore the design of a narrative-based learnersourcing platform where engagement is centered around a learner-made choose-your-own-adventure story. In grounding our approach, we draw from pedagogical literature, digital storytelling, and recent work on learnersourcing. We report on our iterative, learner-centered design process as well as our study findings that demonstrate the platform’s positive effects on knowledge gains, interest in AI concepts, and the overall user experience of narrative-based learnersourcing technology.2024DMDylan Edward Moore et al.Dartmouth CollegeSTEM Education & Science CommunicationInteractive Narrative & Immersive StorytellingCHI
Generative AI in the Wild: Prospects, Challenges, and StrategiesPropelled by their remarkable capabilities to generate novel and engaging content, Generative Artificial Intelligence (GenAI) technologies are disrupting traditional workflows in many industries. While prior research has examined GenAI from a techno-centric perspective, there is still a lack of understanding about how users perceive and utilize GenAI in real-world scenarios. To bridge this gap, we conducted semi-structured interviews with GenAI users (N = 18) in creative industries, investigating the human-GenAI co-creation process within a holistic LUA (Learning, Using and Assessing) framework. Our study uncovered an intriguingly complex landscape: Prospects -- GenAI greatly fosters the co-creation between human expertise and GenAI capabilities, profoundly transforming creative workflows; Challenges -- Meanwhile, users face substantial uncertainties and complexities arising from resource availability, tool usability, and regulatory compliance; Strategies -- In response, users actively devise various strategies to overcome many of these challenges. Our study reveals key implications for the design of future GenAI tools.2024YSYuan Sun et al.University of FloridaGenerative AI (Text, Image, Music, Video)Human-LLM CollaborationCHI
TouchType-GAN: Modeling Touch Typing with Generative Adversarial NetworkModels that can generate touch typing tasks are important to the development of touch typing keyboards. We propose TouchType-GAN, a Conditional Generative Adversarial Network that can simulate locations and time stamps of touch points in touch typing. TouchType-GAN takes arbitrary text as input to generate realistic touch typing both spatially (i.e., (x, y) coordinates of touch points) and temporally (i.e., timestamps of touch points). TouchType-GAN introduces a variational generator that estimates Gaussian distributions for every target letter to prevent mode collapse. Our experiments on a dataset with 3k typed sentences show that TouchType-GAN outperforms existing touch typing models, including the Rotational Dual Gaussian model for simulating the distribution of touch points, and the Finger-Fitts Euclidean Model for simulating typing time. Overall, our research demonstrates that the proposed GAN structure can learn the distribution of user typed touch points, and the resulting TouchType-GAN can also estimate typing movements. TouchType-GAN can serve as a valuable tool for designing and evaluating touch typing input systems.2023JCJeremy Chu et al.Force Feedback & Pseudo-Haptic WeightHuman-LLM CollaborationUIST
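The per-letter Gaussian idea behind the variational generator can be caricatured by sampling each letter's touch point from a 2-D Gaussian centred on its key, plus a noisy inter-key interval. This mimics the distributions the generator estimates, not the trained GAN itself; the key centres and noise parameters below are hypothetical.

```python
import random

# Hypothetical key centres (keyboard-grid coordinates) for two letters
KEYS = {"h": (5.5, 1.5), "i": (7.5, 0.5)}

def sample_touches(word, sigma=0.35, seed=0):
    # Draw one (x, y, t) touch point per letter: position from a 2-D
    # Gaussian around the key centre, time advanced by a noisy,
    # clamped inter-key interval so timestamps stay increasing.
    rng = random.Random(seed)
    points, t = [], 0.0
    for ch in word:
        cx, cy = KEYS[ch]
        points.append((rng.gauss(cx, sigma), rng.gauss(cy, sigma), t))
        t += max(0.05, rng.gauss(0.25, 0.05))
    return points

touches = sample_touches("hi")
print(len(touches))  # 2
```

A GAN earns its keep over this hand-set model by learning the spatial and temporal distributions, and their correlations, from real typing data.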
Portrayal: Leveraging NLP and Visualization for Analyzing Fictional CharactersMany creative writing tasks (e.g., fiction writing) require authors to write complex narrative components (e.g., characterization, events, dialogue) over the course of a long story. Similarly, literary scholars need to manually annotate and interpret texts to understand such abstract components. In this paper, we explore how Natural Language Processing (NLP) and interactive visualization can help writers and scholars in such scenarios. To this end, we present Portrayal, an interactive visualization system for analyzing characters in a story. Portrayal extracts natural language indicators from a text to capture the characterization process and then visualizes the indicators in an interactive interface. We evaluated the system with 12 creative writers and scholars in a one-week-long qualitative study. Our findings suggest Portrayal helped writers revise their drafts and create dynamic characters and scenes. It helped scholars analyze characters without the need for any manual annotation, and design literary arguments with concrete evidence.2023MHMd Naimul Hoque et al.Interactive Data VisualizationAI-Assisted Creative WritingDIS
Cultural Differences in Friendship Network Behaviors: A Snapchat Case StudyCulture shapes people’s behavior, both online and offline. Surprisingly, there is sparse research on how cultural context affects network formation and content consumption on social media. We analyzed the friendship networks and dyadic relations between content producers and consumers across 73 countries through a cultural lens in a closed-network setting. Closed networks allow for intimate bonds and self-expression, providing a natural setting to study cultural differences in behavior. We studied three theoretical frameworks of culture: individualism, relational mobility, and tightness. We found that friendship networks formed across different cultures differ in egocentricity, meaning the connectedness between a user’s friends. Individualism, mobility, and looseness also significantly negatively impact how tie strength affects content consumption. Our findings show how culture affects social media behavior, and we outline how researchers can incorporate this in their work. Our work has implications for content recommendations and can improve content engagement.2023ASAgrima Seth et al.University of MichiganMultilingual & Cross-Cultural Voice InteractionSocial Platform Design & User BehaviorMisinformation & Fact-CheckingCHI
Modeling Touch-based Menu Selection Performance of Blind Users via Reinforcement LearningAlthough menu selection has been extensively studied in HCI, most existing studies have focused on sighted users, leaving blind users' menu selection under-studied. In this paper, we propose a computational model that can simulate blind users’ menu selection performance and strategies, including the way they use techniques like swiping, gliding, and direct touch. We assume that selection behavior emerges as an adaptation to the user's memory of item positions based on experience and feedback from the screen reader. A key aspect of our approach is a long-term memory model predicting how a user recalls and forgets item positions based on previous menu selections. We compare simulation results predicted by our model against data obtained in an empirical study with ten blind users. The model correctly simulated the effect of the menu length and menu arrangement on selection time, the action composition, and the menu selection strategy of the users.2023ZLZhi Li et al.Stony Brook UniversityVisual Impairment Technologies (Screen Readers, Tactile Graphics, Braille)CHI
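A long-term memory of item positions that strengthens with practice and decays with time, as described above, is commonly modeled with ACT-R style base-level activation. The sketch below illustrates that general idea with hypothetical parameters; it is not the paper's exact memory model.

```python
import math

def activation(selection_times, now, decay=0.5):
    # Base-level activation: every past selection of a menu item leaves
    # a memory trace whose strength decays as a power law of its age.
    return math.log(sum((now - t) ** (-decay) for t in selection_times))

def recall_prob(act, threshold=0.0, noise=0.4):
    # Logistic retrieval probability: higher activation means the item's
    # position is more likely recalled, so less screen-reader gliding.
    return 1.0 / (1.0 + math.exp(-(act - threshold) / noise))

practised = activation([1.0, 5.0, 9.0], now=10.0)  # three past selections
rare = activation([9.0], now=10.0)                 # a single selection
print(recall_prob(practised) > recall_prob(rare))  # True
```

In the full model, a reinforcement-learning policy would choose between direct touch (when recall succeeds) and swiping or gliding (when it fails).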
De-Stijl: Facilitating Graphics Design with Interactive 2D Color Palette RecommendationSelecting a proper color palette is critical in crafting a high-quality graphic design to gain visibility and communicate ideas effectively. To facilitate this process, we propose De-Stijl, an intelligent and interactive color authoring tool to assist novice designers in crafting harmonic color palettes, achieving quick design iterations, and fulfilling design constraints. Through De-Stijl, we contribute a novel 2D color palette concept that allows users to intuitively perceive color designs in context with their proportions and proximities. Further, De-Stijl implements a holistic color authoring system that supports 2D palette extraction, theme-aware and spatial-sensitive color recommendation, and automatic graphical elements (re)colorization. We evaluated De-Stijl through an in-lab user study by comparing the system with existing industry standard tools, followed by in-depth user interviews. Quantitative and qualitative results demonstrate that De-Stijl is effective in assisting novice design practitioners to quickly colorize graphic designs and easily deliver several alternatives.2023XSXinyu Shi et al.University of Waterloo360° Video & Panoramic ContentGraphic Design & Typography ToolsPrototyping & User TestingCHI